Anandtech


MSI GS60 Ghost Pro 3K Review

Thu, 2014-08-21 03:00

MSI has several lines of gaming notebooks catering to different types of users. At the high-end is the GT series that supports the fastest mobile CPUs and GPUs while the GE series caters more towards the cost-conscious buyers. Somewhere in the middle is the GS line, which offers similar (or slightly higher) specifications to the GE series but delivers everything in a refined and more attractive chassis. Read on to find out how the GS60 with a 3K display compares to the other gaming laptops.


Unity Adds Native x86 Support for Android

Wed, 2014-08-20 13:23

Intel is facing an uphill battle in the mobile space from a marketshare perspective, but there's an additional challenge: the bulk of mobile apps are compiled targeting ARM based CPU cores, not x86. With the launch of Medfield on Android, Intel introduced a binary translation software layer to enable running existing ARM based Android apps on x86. Binary translation is a useful fix for enabling compatibility, but it comes with a performance and power penalty. Enabling native x86 applications is ultimately the goal here; binary translation is just a transitional tool.

As far as I can tell, none of the big game engines (Unity, Unreal Engine) had been ported to x86 on Android. As a result, any game that leveraged these engines would be ARM code translated to run on x86. This morning Intel and Unity Technologies announced a native x86 version of the Unity game engine for Android. Selected developers have access to the x86 version today, and it will be made available to everyone else by the end of the year. There's no charge for the update. Note that this only applies to the Android port of Unity; the engine and tools under Windows are obviously already compiled for x86.
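For developers wondering whether a given title already ships native x86 code, an APK is just a ZIP archive with native libraries stored under lib/<abi>/. Here's a minimal Python sketch (our own illustration, not Intel or Unity tooling) that lists the ABIs an APK ships; a lib/x86 entry means a native build rather than one relying on binary translation:

```python
# List the native ABIs bundled in an APK. APKs are plain zip archives,
# and native libraries live under lib/<abi>/ (e.g. lib/armeabi-v7a/,
# lib/x86/). No x86 entry means the app would run on Intel hardware
# only through binary translation.
import sys
import zipfile

def native_abis(apk_path):
    abis = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            parts = name.split("/")
            if len(parts) >= 3 and parts[0] == "lib":
                abis.add(parts[1])
    return sorted(abis)

if __name__ == "__main__":
    print(native_abis(sys.argv[1]))  # e.g. ['armeabi-v7a', 'x86']
```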

Intel's press release mentions support for both Core and Atom families. I clarified with Intel that the Core reference mainly applies to any Core M (Broadwell Y or Skylake Y) Android tablets, and not a push into Core based smartphones. 

Intel is also working on enabling other game engines, but we'll have to wait to see those announcements. 


FMS 2014: Silicon Motion Showcases SM2256 SSD Controller with TLC NAND Support

Wed, 2014-08-20 10:00

A couple of weeks ago at Flash Memory Summit, Silicon Motion launched their next generation SATA 6Gbps SSD controller. Dubbed simply SM2256, the new controller is the first merchant controller solution (hardware + firmware) to support TLC NAND out of the box, and it succeeds the SM2246 controller we tested a while ago in ADATA's Premier SP610. The SM2246 was not the fastest solution on the market, but it provided decent performance at an alluring price; the SM2256 is set to lower the total cost even more thanks to support for lower cost TLC NAND.

The SM2256 continues to be a 4-channel design, and I am guessing it is also based on the same single-core ARC design with most of the changes being in the ECC engine. NAND support covers everything currently available, including Toshiba's 15nm TLC NAND, and the controller is designed to support 3D NAND as well. DDR3 and DDR3L are supported for cache, and the controller is also TCG Opal 1.0 compliant.

To make TLC durable enough, the SM2256 features Low-Density Parity Check (LDPC) error correction, a new ECC scheme that is set to replace BCH ECC. Intel did a very detailed presentation on LDPC at FMS a few years ago, although I must warn you that it is also very technical with lots of math involved. Silicon Motion calls its implementation NANDXtend, and it has three steps: LDPC hard decode, soft decode, and RAID data recovery. Hard decode is much faster than soft decode because there is less computation involved; if the ECC engine fails to correct a bit, the RAID data recovery kicks in and the data is recovered from parity. Silicon Motion claims that its NANDXtend technology can triple the endurance of TLC NAND, making it good for ~1,500-3,000 P/E cycles depending on the quality of the NAND. Marvell's upcoming 88SS1074 controller supports LDPC as well, and I will be taking a deeper look at the technology once we have a sample in our hands.
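For the curious, the three-step escalation boils down to "try the cheap decode first, fall back only on failure." Below is a toy Python model of that control flow; the decode functions are made-up stand-ins with arbitrary success rates, not a real LDPC implementation:

```python
import random

# Toy model of a tiered recovery flow like NANDXtend: attempt the fast
# LDPC hard decode, escalate to the slower soft decode, and only rebuild
# from RAID-style parity across NAND dies as a last resort.

def hard_decode(page):
    return random.random() < 0.999   # single-threshold read, fast

def soft_decode(page):
    return random.random() < 0.999   # several re-reads at shifted voltage
                                     # thresholds; slower but stronger

def read_page(page):
    if hard_decode(page):
        return "recovered via hard decode"
    if soft_decode(page):
        return "recovered via soft decode"
    return "rebuilt from parity"     # RAID data recovery step

print(read_page(0))
```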

TLC is expected to become the dominant NAND type in four years, so focusing on it makes perfect sense. Once the industry moves to 3D NAND, I truly expect TLC NAND to be the NAND for mainstream SSDs because the endurance should be close to 2D MLC NAND, which eliminates the biggest problem that TLC technology currently has.

The SM2256 is currently in customer evaluation and is expected to enter mass production in Q4'14 with shipping devices coming in late 2014 or early 2015. 


MSI Z97 Guard-Pro Review: Entry Level Z97 at $110

Wed, 2014-08-20 03:00

Next in our recent run of lower cost motherboards is the MSI Z97 Guard-Pro, a motherboard that MSI billed to me as one suited for the overclockable Pentium G3258 on a budget. At $110, we see if it differs much from the more expensive options on the market.


Zalman Reserator 3 Max Dual CPU Cooler Review

Tue, 2014-08-19 15:00

Zalman sent us their Reserator 3 Max Dual CPU cooler, which is a rather interesting device. It's an all-in-one liquid cooling solution that Zalman advertises as the "Ultimate Liquid CPU Cooler". Zalman's engineers are certainly no amateurs when it comes to liquid cooling and the Reserator 3 Max Dual does appear unique, but is it really an "Ultimate Cooler"? We are going to find out in today's review.


HTC Announces the HTC One (M8) for Windows

Tue, 2014-08-19 07:45

It's been a while since we've seen a high-end device running Windows Phone 8 launch from a company other than Nokia. Despite Nokia's dominance, HTC has certainly not given up on the platform and today they're demonstrating that with the launch of a new flagship Windows Phone 8 device that you may already know very well. This new device is named the HTC One (M8) for Windows, and both its design and its hardware are essentially the same as the Android powered HTC One M8 that HTC launched earlier this year. We've laid out the specifications of the One (M8) for Windows below.

HTC One (M8) for Windows Specifications
SoC: Qualcomm Snapdragon 801 (MSM8974ABv3), 4 x Krait 400 at 2.26GHz, Adreno 330 at 578MHz
Memory and Storage: 2GB LPDDR3, 16/32GB NAND + microSDXC
Display: 5” 1920x1080 Super LCD3 at 441 ppi
Cellular Connectivity: 2G / 3G / 4G LTE (Qualcomm MDM9x25, UE Category 4 LTE)
Dimensions: 146.36 x 70.6 x 9.35mm max, 160 grams
Camera: 4.0MP (2688 × 1520) rear facing with 2.0 µm pixels, 1/3" CMOS, F/2.0, 28mm (35mm effective); 2.0MP rear DOF camera; 5MP F/2.0 FFC
Battery: 2600 mAh (9.88 Whr)
Other Connectivity: 802.11a/b/g/n/ac + BT 4.0, USB 2.0, GPS/GNSS, MHL, DLNA, NFC
SIM Size: Nano-SIM
Operating System: Windows Phone 8.1

With regards to the hardware there's not a whole lot to be said. This really is the HTC One (M8) running Windows Phone 8 instead of Android. For an in-depth look at the experience on Windows Phone 8.1 you can take a look at Anand's review of it from earlier this year. HTC has also worked to bring over some of the features included with HTC Sense on the One (M8): BlinkFeed, Duo Cam, and Sense TV.

BlinkFeed makes its way over to Windows Phone 8 with the One (M8) for Windows. For those who aren't familiar with it, BlinkFeed is a feature that comes on some of HTC's Android devices which aggregates Facebook and Twitter posts, news, sports information, and more into a vertically scrolling list on HTC's launcher. On Windows Phone 8 HTC doesn't have the luxury of being able to drastically alter the launcher, and so BlinkFeed is included as an application which functions in the same manner as the launcher widget on Android.

Because the One (M8) for Windows shares the same hardware as the M8, HTC has brought over their post processing effects enabled by the secondary sensor in their Duo Cam camera system. In addition, we see that Video Highlights is present in the stock OS. Unfortunately, the camera app doesn't inherit the manual controls from the M8, so users wanting more control over the exposure of their photos will have to look to Nokia's Windows Phone devices or buy an application like ProShot which has such controls.

The One (M8) for Windows also brings along HTC Sense TV which acts as a TV guide and a universal remote that displays when your favorite shows are playing as well as recommendations for shows you may like based on what you already watch. HTC emphasized the difficulty of bringing this functionality to Windows Phone, as it required close cooperation with Microsoft to properly implement IR remote functionality.

For some users the most exciting prospect of the HTC One (M8) for Windows may come from the fact that it shares the same hardware as the One (M8). It's possible that the developer community will be able to load the firmware from the HTC One (M8) onto the device in a dual boot configuration with Windows Phone 8 so users can switch between the operating systems as they please.

Overall, this seems to be a smart move for HTC. Instead of assuming additional risk in the form of new hardware, the only resources needed are for software development. There's no need for a new production line, hardware certification is easier because the hardware should be unchanged from other variants, and cost across the board is driven down due to increased economies of scale.

The HTC One (M8) for Windows will go on sale on August 19th at 12:00PM Eastern Time through Verizon's online store, and will be available for $99 on a two year contract.



SanDisk Releases Ultra II SSD: Bringing More TLC NAND to the Market

Tue, 2014-08-19 05:00

It is a busy day in the client SSD space: earlier today AMD announced the company's first SSD, the R7, and now SanDisk is releasing the Ultra II to the mainstream market. The Ultra II is based on SanDisk's second generation 19nm TLC NAND, which makes the Ultra II the first non-Samsung SSD to ship with TLC NAND. We have covered TLC NAND several times already, but in short, TLC NAND lowers cost at the expense of performance and endurance, making it a feasible option for value drives.

Similar to SanDisk's other client drives, the Ultra II is based on the Marvell 88SS9187 platform. SanDisk's expertise lies in the firmware development and NAND know-how, which has generally given them an advantage over other Marvell based solutions. 

SanDisk Ultra II Specifications
Capacity: 120GB / 240GB / 480GB / 960GB
Controller: Marvell 88SS9187
NAND: SanDisk 2nd Gen 19nm TLC
Sequential Read: 550MB/s (all capacities)
Sequential Write: 500MB/s (all capacities)
4KB Random Read: 81K / 91K / 98K / 99K IOPS
4KB Random Write: 80K / 83K / 83K / 83K IOPS
Warranty: Three years
Price: $80 / $115 / $220 / $430

Unfortunately I do not have the full spec sheet yet, so I have to go by the limited details listed in the press release, but I will be updating the table with more specs as soon as I get them. Update: Full specs added.

The Ultra II utilizes SanDisk's nCache 2.0 technology, which operates a portion of the NAND in SLC mode to increase performance and improve reliability. As a result, the Ultra II is able to achieve write speeds of up to 500MB/s even at the lowest capacity, although it should be kept in mind that this is peak performance -- as soon as the SLC buffer is full write speeds will drop quite dramatically. 
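To see why the distinction between peak and sustained speed matters, here is a toy Python model of an SLC write cache; the buffer size and post-buffer TLC speed are assumptions for illustration, not SanDisk's figures:

```python
# Toy model of an SLC-mode write cache like nCache 2.0: writes land in a
# small SLC buffer at full speed, then fall to native TLC speed once the
# buffer is exhausted. All figures below except the 500MB/s rating are
# illustrative assumptions.

SLC_BUFFER_GB = 5     # assumed buffer size
SLC_SPEED = 500       # MB/s, the rated peak
TLC_SPEED = 150       # MB/s, assumed sustained speed after the buffer fills

def write_time_seconds(total_gb):
    fast_gb = min(total_gb, SLC_BUFFER_GB)
    slow_gb = total_gb - fast_gb
    return fast_gb * 1024 / SLC_SPEED + slow_gb * 1024 / TLC_SPEED

for size_gb in (2, 5, 20):
    mbps = size_gb * 1024 / write_time_seconds(size_gb)
    print(f"{size_gb}GB transfer -> effective {mbps:.0f}MB/s")
```

Small transfers see the full 500MB/s, while anything much larger than the buffer trends toward the native TLC speed.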

SanDisk is also bringing a new version of its SSD Dashboard along with the Ultra II. The new version features support for 17 different languages and includes "Live Chat" in case the user has any questions about the Dashboard or SSD. Additionally, SanDisk is including cloning and antivirus features via third party software (Apricorn's EZ GIG IV for cloning, Trend Micro Titanium Antivirus+ for malware) with the goal of helping users transition from a hard drive to an SSD. Combining antivirus with the SSD Dashboard might seem a bit odd, but it actually makes sense: when you are about to clone your Windows install to a new SSD, the first thing you should do is run an antivirus scan to make sure that no malware will be transferred through cloning, as malware can ruin the faster user experience that an SSD provides.

Samsung certainly set the bar high with the SSD 840 and 840 EVO, so it will be interesting to see how SanDisk matches that. Pricing is very competitive with the 840 EVO and Crucial MX100, so as long as SanDisk has been able to master the firmware for TLC, the Ultra II should be a viable option for value oriented consumers. The Ultra II will be available next month and we are scheduled to get review samples within the next couple of weeks, so stay tuned for the full review!


State of the Part: SoC Manufacturers

Tue, 2014-08-19 03:00


Introduction

A few years ago, it seemed a new System-on-Chip (SoC) design using an ARM-based architecture would pop up every other week. While competition can be great, with so many of the designs offering similar core features and all of the products needing ongoing support, over time the major players in the ARM SoC market have shifted and consolidated (e.g. Texas Instruments has exited the Application Processor space). There are still plenty of companies that ship SoCs, but some of these are lower end offerings that can't really compete except on price, leaving us with a few major players.

Depending on whom you wish to include/exclude, there are anywhere from four to eight SoC manufacturers still in the game, with three of them mostly catering to the budget sector. We could also include ARM as a company, but since they don't really manufacture any parts we'll leave that for a separate article.

Before we get to the manufacturers, let's quickly go over the pertinent items to know for any SoC. The main functionality of any SoC is going to come from the CPU cores, where modern designs typically use either two or four cores, with some offerings including up to eight cores -- four lower power cores that offer better battery life with four higher performance cores for when there's work to be done. Next to the CPU cores in importance is the GPU, the graphics processor, and again you can find SoCs with multiple GPU "cores" combined to improve graphics performance (e.g. MP1 for one core, MP2 for two, and so on). Finally, it's useful to know the manufacturing process for an SoC, as lower numbers generally mean smaller, more power efficient chips. We've had variations of 28nm manufacturing for a few years now, with the state-of-the-art next generation SoCs now moving to 20nm for most companies.

Ultimately, what it comes down to is the end user experience, which is a very broad topic and out of the scope of this article. The pace of technology in smartphones and tablets has been pretty brisk, however, so anything more than a year old can end up feeling quite sluggish, and after two years most users are ready for an upgrade. Most of the top SoCs right now are 28nm designs with at least two CPU cores, but not all cores are created equal, so this is another topic we'll address in a future piece.

The general idea is that if you're like most people and live on the two-year upgrade cycle for smartphones, the best time to upgrade is usually right after a major technology advance. Roughly two years ago we started seeing the first ARM Cortex-A15 designs along with Apple's competing A6 "Swift" architecture, and while there are faster devices today the smartphones and tablets using those chips are still able to deliver a decent user experience. We're now on the brink of the next generation of SoCs, thanks to the shift to the 20nm manufacturing process along with ARM's new 64-bit Cortex-A53 and Cortex-A57 designs, so if you've been holding off upgrading the next few months should prove very enticing. Of course we'll still have to wait for early 2015 before many of the devices using the new SoCs are available for purchase.

Without going into too much detail, here's the short list of significant SoC manufacturers, with a brief discussion of their current and upcoming parts. In the near future we'll have separate articles going into more detail on the various models and technology for each company, as well as some recommendations in terms of performance and the overall user experience. This is intended as a short high-level overview of the companies to get us started.

Qualcomm

Qualcomm is one of the biggest players in the SoC market, and their designs are in many of the leading Android devices. All of the current Qualcomm SoCs are sold under the Snapdragon brand, and there are multiple performance segments (i.e. the Snapdragon 200, 400, 600, and 800 series). Qualcomm has had several custom-designed ARM architectures, with the latest being Krait (with multiple variations of Krait as well). Most of the lower tier parts are dual-core Krait designs with the higher-end 600 and 800 series being quad-core Krait offerings. Their current halo parts are in the 800 family, with the 800, 801, and 805 all presently shipping, but even within the same model number there are multiple configurations available (e.g. with or without cellular baseband options).

The existing parts use either a 28nm LP or 28nm HPm process, but more recently Qualcomm has announced the Snapdragon 808 and 810, which move to a 20nm HPm process and use ARM's Cortex-A57/A53 64-bit cores in a 2+4/4+4 configuration, with Adreno 418/430 graphics. The shrink in process technology is something we should see from all of the major players in the coming months, with the benefits being potentially lower power and/or higher performance coupled with smaller chip sizes.

In the near term, Qualcomm also has their new 410 and 610 Snapdragon parts that will be their first 64-bit SoCs. The 410 appears to be shipping in certain markets, and the 610 should show up in Q4'2014 devices. Meanwhile, the 810 should start shipping in devices in the first half of 2015 and will be the halo product from Qualcomm, with 808 following shortly after. Qualcomm has not yet announced a custom designed 64-bit architecture, but that will likely come later in 2015.

Apple

Apple is next on the list, though really the top two positions are hotly contended and many would place them first. Either way, Apple needs little in the way of introduction. Largely responsible for the new paradigm in touchscreen smartphones thanks to the iPhone, Apple has continued to increase the performance of its devices at a rapid pace. Apple also runs their own software with iOS, which potentially gives them an advantage over other companies that utilize Android. While earlier iPhones used designs largely built by other companies, Apple began designing their own ARMv7 architecture with the A6 in 2012.

The latest generation A7 SoC was introduced about a year ago with the iPhone 5S in September 2013 and has since found its way into the iPad Mini Retina and iPad Air. The A7 features a dual-core 1.3-1.4GHz Apple-designed processor codenamed Cyclone, running the ARMv8 instruction set, making it the first shipping 64-bit SoC. Short-term that may not be a huge deal, but long-term it paves the way for future devices. We believe the GPU in the A7 is a PowerVR G6430 at 200MHz (a four cluster MP4 configuration). The A7 is manufactured on Samsung's 28nm process and is paired with 1GB of memory.

Given Apple's history, we can expect some sort of update at the next iPhone event, rumored to be on September 9. Most think the next Apple chip will be called the A8 and will likely move to a 20nm process technology, allowing for a generational leap in performance. We might finally see an iOS device with 2GB RAM.

Samsung

In terms of volume if not performance, Samsung comes next in our hierarchy, and they've long been a player in the SoC market, going back as far as the early 2000s with some of their chips; they were also the SoC provider for the original iPhone in 2007. Samsung has numerous smartphones and tablets available, and while many use Samsung SoCs there are also Qualcomm SoCs in some models. Their current SoC designs belong to the Exynos family, which has been around since 2011.

The top performance Exynos SoC right now is the 5800, also called the Exynos 5 Octa, a 4+4 big.LITTLE implementation with four faster Cortex-A15 cores and four slower but more power efficient Cortex-A7 cores, paired with a Mali-T628 GPU. The Exynos 5 Octa (and Hexa if we include the 5260) use Samsung's 28nm HKMG process, but there's a new Exynos 5430 (used in certain models of the upcoming Samsung Galaxy Alpha) that has a similar configuration and uses a new 20nm HKMG process. We'll likely see more 20nm Samsung SoCs in the coming months, and with most companies shifting to the new 64-bit Cortex-A57/A53 we can expect Samsung to do the same.

NVIDIA

NVIDIA is a familiar name for PC enthusiasts, and they're sort of the reverse of Samsung right now: higher performance but lower volumes shipped. Given the growing popularity of tablets and smartphones, there's little surprise that NVIDIA is also working to gain (and maintain) a foothold in the mobile sector. Their latest SoC is the Tegra K1, found in the SHIELD Tablet and Acer's new Chromebook.

Tegra K1 pairs a 192-core Kepler-derived GPU with one of two CPU options. The first is a quad-core Cortex-A15 R3 design that's similar to the processor used in the Tegra 4, while an upcoming variant of the K1 will use a dual-core 64-bit Denver CPU designed by NVIDIA. Considering the most successful ARM SoCs are going the custom-logic route (e.g. Apple's Cyclone and Qualcomm's Krait are custom designs that use the ARMv8 and ARMv7 instruction sets, respectively, rather than simply using Cortex-A15), NVIDIA hopes to improve performance while reducing power, among other things.

While the shipping K1 uses four A15 cores and is manufactured on a 28nm process, the Denver version could move to 20nm, but NVIDIA hasn't officially announced the process for the Denver K1. The Tegra K1 is currently one of the fastest (if not the fastest) shipping SoCs, but many of the next generation SoCs have not yet launched so this could very well change in the next month or two. Looking forward, the successor to the Tegra K1 is currently codenamed Erista and is expected to pair a Maxwell-based GPU with Denver CPU cores, but that part won't begin production until some time in 2015 so it's a ways off.

Intel

The final major player in terms of higher performance SoCs is Intel, another company that needs little introduction. Unlike everyone else on this list of SoC manufacturers, Intel is using their own custom architecture running the x86 instruction set instead of ARMv7 or ARMv8. While x86 has proven to be a juggernaut in the PC space, in the mobile world – where every device gets a customized software build – it has not been nearly as useful, and some might even call it a handicap, though these days the difficulty of decoding x86 is relatively small. Intel has been trying to stake a claim in the mobile sector, and after a few initial forays that didn't accomplish much, their latest Atom SoCs have seen a moderate amount of use.

The currently shipping Atom SoCs are 22nm devices, codenamed Bay Trail. You can find them in NAS units and other products, but we're mostly concerned with the tablet and smartphone spaces for this overview. The Merrifield and Moorefield platforms (Atom Z34xx/Z35xx) remain at 22nm and use the same Silvermont CPU cores as Bay Trail, except with PowerVR Series 6 G6400/G6430 graphics.

Looking to the future, the platform that may finally give Intel a real leg up on the competition likely won't come out until early to mid 2015. That's the Cherry Trail platform, which will upgrade the CPU cores to Airmont and move to Intel's 14nm process, delivering better performance in a lower power package. We've been saying "wait for the next Atom update" for a while, but on paper at least Cherry Trail looks very promising – it's the first Atom to ship on Intel's latest process technology without waiting a year or more, and it's the second Atom design after Intel's commitment to begin updating the Atom platform on a yearly basis.

MediaTek

Moving over to the budget players, there are at least two that warrant mentioning. MediaTek has been around for a while now, though like most SoC companies they didn't really get into the ARM and Android space until 2009/2010. Starting in 2013, however, MediaTek managed a ton of design wins, though almost exclusively in lower performance, second tier parts. In terms of strict volume, MediaTek likely ships as many (or more) SoCs as the biggest companies, and they are a major provider of SoCs in the Chinese market. Not surprisingly, MediaTek devices tend to be budget friendly products, though that often means compromises in other areas as well.

Their current top-tier SoCs use quad-core, hexa-core, and even octa-core Cortex-A7 designs with Mali-400/450 graphics (up to MP4 configurations). As in the PC world, however, throwing more cores on a chip can only get you so far, and the big.LITTLE configurations seem like a better solution than eight Cortex-A7 cores. MediaTek does have some higher performance SoCs with big.LITTLE A17/A7 starting to ship to device manufacturers (e.g. MT6595), though these are not yet in shipping devices. The MT6732 part has a quad-core Cortex-A53 CPU with Mali-T760 graphics, while MT6752/MT6795 move to octa-core A53 with Mali-T760/PowerVR G6200 graphics. The A53 is sort of the 64-bit equivalent of the A7, where the A57 is the higher performance 64-bit part, but the A53 could end up delivering a nice blend of performance and efficiency.

Allwinner

Allwinner Technology has many similarities to MediaTek. They've been around since 2007, but their earlier products were largely forgettable. In 2011 they became an official ARM licensee, and since then they've gained some popularity as a budget SoC provider. Their latest products include the quad-core Cortex-A7 based A33, which runs at up to 1.5GHz and includes a Mali 400 MP2 GPU. The A33 is marketed as "the first $4 quad-core tablet processor", and you will likely find this SoC in sub-$100 tablets; performance, as you might expect, largely follows pricing, but like MediaTek they can still do a huge volume thanks to their cost advantage.

Their fastest announced product is the new A80 SoC, which features an octa-core big.LITTLE configuration (four Cortex-A7 and four Cortex-A15 cores) with a PowerVR G6230 GPU. It's currently shipping in China in the Onda V989, a $200 tablet with a 9.7" 2048x1536 display. As far as manufacturing processes go, the A33 uses 40nm while the A80 uses 28nm HPM.

Closing Comments

There are other SoC vendors we could mention as well. RockChip is right there with Allwinner and MediaTek vying for market share in budget products, for example. They even have the RK3288, a quad-core Cortex-A17 part with a Mali-T764 GPU that should provide decent performance and is starting to ship in some markets. Let's also not forget their strategic agreement with Intel, which is interesting to say the least. We could also include AMD with their Mullins APU from the Puma family, which like Intel's offerings uses an x86 CPU core designed in-house. AMD is also an ARM licensee, and they might look to bring alternative SoCs to market using ARM instruction sets instead of x86.

The remaining players are small enough that it's difficult for them to compete with the bigger names. After all, if you're just licensing the same core architectures for the CPU and GPU as everyone else, you can't really offer better performance; all you can do is compete on price. Sometimes that's enough, but the real news in the SoC space is likely going to come from the companies doing their own custom logic.

If we group all the companies shipping vanilla ARM designs under ARM (e.g. MediaTek, Allwinner, RockChip, and even Samsung), that still leaves us with Qualcomm, Apple, NVIDIA, and Intel doing custom logic. It's difficult to imagine any of those companies bowing out right now, and Samsung is a big player as well, which means we'll likely continue to see the "Big Five" SoC companies duking it out in the smartphone and tablet sectors. And while these companies continue nipping at market share in the PC laptop/desktop space, they still have to worry about giving up ground to even more budget friendly devices from the likes of MediaTek and Allwinner.


ECS Announces Z97-PK: A Motherboard with ‘One Key OC’ 4.7 GHz for Pentium G3258

Tue, 2014-08-19 00:51

With the launch of the overclockable Pentium G3258 processor, some motherboard manufacturers have been scrambling to get a cheaper product to market to be the centre point of a Pentium based build. We covered ASRock’s Z97 Anniversary launch and have an upcoming review of a $110 MSI motherboard aimed at the same market, but today ECS is announcing its Z97-PK (PK = Pentium K?, even though Intel calls it the Pentium AE). The main standout feature of the motherboard is ‘One Key OC’, which claims to boost the G3258 from 3.2 GHz to 4.7 GHz.

This is an impressive claim, given that the motherboard relies on a 2-3 phase power implementation with no power delivery heatsink – I would assume that ECS has performed enough testing on enough CPUs to make sure the 4.7 GHz value covers the majority of G3258 dies. Even though ECS lists ‘One Key OC’ for this functionality, there is no actual physical button on board (like MSI’s OC Genie), which makes me believe that this is either a software or BIOS implementation. Ultimately I would have preferred a physical button, given the low number of home users who actually enter the BIOS or install the bundled software.

The motherboard is in the mATX form factor, offering a single PCIe 3.0 x16 slot for graphics with a PCIe 2.0 x4 slot from the chipset. This means there is no SLI support, but Crossfire is available at reduced bandwidth for the second card. Like other lower cost motherboards there are six SATA 6 Gbps ports, with this layout affording two at right angles to the motherboard and four coming out of the PCB. There are only four USB 3.0 ports, with two of these coming from an internal header. Also of note are the DRAM slots, which are not equidistant from each other; this could reduce signalling margins for overclocked memory unless ECS have engineered the board to compensate. That being said, the webpage only lists DDR3-1600 memory support.

Audio and networking are provided by Realtek, and integrated video output comes via the VGA, DVI-D and HDMI connectors. This is interesting when compared to the Z97 Guard-Pro, which uses VGA, DVI-D and DisplayPort instead to allow tri-monitor setups. Also on the motherboard are legacy connectors: a COM header, an LPT header and a TPM header.

The ECS Z97-PK might not win any aesthetic awards, but ECS usually aims at the super low pricing bracket, which might pique some interest for mass production builds. We are awaiting information about release dates and pricing, although ECS claims that this motherboard and the G3258 in a bundle would cost less than an i5 processor by itself. The only downside with that comparison is that, as our review and analysis showed, the i5 performs significantly better than an overclocked G3258 in multi-threaded tasks and multi-GPU gaming.

Source: ECS

Update: ECS has let me know that this motherboard should be available at the end of September, for a combo price with a G3258 of $100. The CPU alone is $70, making this motherboard $30 in the deal. To be honest, for budget builds, that is quite a good price.


AMD Jumps Into the SSD Market: Radeon R7 Series SSD Launched

Mon, 2014-08-18 21:01

Back in 2011, AMD made a rather unexpected move and expanded its Radeon brand to include memory in addition to graphics cards. With today's announcement AMD is adding another member to its Radeon family by releasing the Radeon R7 Series SSDs. As with AMD memory, AMD is not actually designing or manufacturing the SSDs; product design and manufacturing are handled by OCZ. In fact, all of the customer support is also handled by OCZ, so aside from the AMD logo, AMD is not really involved in the product.

Partnering with OCZ makes sense because OCZ's focus with the Barefoot 3 platform has always been gamers and enthusiasts, and that is the target group for AMD's R7 SSDs as well. OCZ is now owned by Toshiba, so OCZ has direct access to NAND with guaranteed supply, whereas fabless OEMs (e.g. Kingston) could face supply issues that might harm AMD. Here's the quick overview, which of course will be essentially the same as certain existing Barefoot 3 products.

AMD Radeon R7 Series SSD Specifications
Capacity: 120GB / 240GB / 480GB
Controller: OCZ Barefoot 3 M00
NAND: Toshiba A19nm MLC
Sequential Read: 550MB/s (all capacities)
Sequential Write: 470MB/s / 530MB/s / 530MB/s
4KB Random Read: 85K / 95K / 100K IOPS
4KB Random Write: 90K IOPS (all capacities)
Steady-State 4KB Random Write: 12K / 20K / 23K IOPS
Idle Power: 0.6W
Max Power: 2.7W
Encryption: AES-256
Endurance: 30GB/day for 4 years
Warranty: Four years
MSRP: $100 / $164 / $299

The R7 is based on OCZ's own Barefoot 3 controller, and it is the same higher clocked M00 version (397MHz) as in the Vector 150. The new ARC 100 and Vertex 460 use the M10 version, which runs at 352MHz instead of 397MHz but is otherwise the same silicon. Performance-wise the R7 SSD is very close to the Vector 150, with slightly lower random write performance, although random read performance is marginally better in turn.

The biggest difference between the two is endurance: the Vector 150 is rated at 50GB of writes per day for five years (91TB total), while the R7 is rated at 30GB per day for four years (43TB total). The firmware in the R7 is tailored for AMD, although I was told that the customizations are mainly wear-leveling tweaks to increase endurance over the Vertex 460, so there should not be any surprises in performance. The NAND is also different, as the R7 utilizes Toshiba's A19nm MLC, but OCZ should be making the switch to A19nm across its whole client SSD lineup soon to cut costs.
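The quoted endurance totals follow directly from the daily write ratings; a quick sanity check:

```python
# Rated endurance = GB per day * days per year * warranty years.
def total_writes_tb(gb_per_day, years):
    return gb_per_day * 365 * years / 1000

print(total_writes_tb(50, 5))  # Vector 150: 91.25 -> the quoted 91TB
print(total_writes_tb(30, 4))  # Radeon R7: 43.8 -> roughly the quoted 43TB
```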

The motivation behind AMD's move is identical to the reason AMD entered the memory market. AMD wants to provide users an "AMD-only" experience by offering as many of the components as possible. Another argument AMD had is that providing more AMD branded products makes it easier for novice PC builders to pick the parts because the buyer does not have to go through the trouble of deciding between dozens of products and making sure the parts are compatible with each other. In addition to providing an easier purchase experience, AMD can also use the R7 SSDs in bundles and promotions, which is definitely more lucrative than using third party components. 

Of course, the ultimate reason behind the move is that SSDs are becoming a mainstream product, and they provide another revenue source for AMD. AMD has not been doing that great financially lately and having an extra low risk revenue source is definitely welcome, even though client SSDs are not exactly a high profit market anymore. Then again, AMD is not investing much into SSDs since development and manufacturing is done by OCZ, so even if Radeon R7 SSD sales end up being low, AMD has no long-term investment to protect. The announced pricing is generally in line with what we're seeing for the existing OCZ Vector 150 products, though mail-in rebates can actually drop the Vector 150 below Radeon R7 SSD levels.

All in all, the R7 will not provide much from a technological standpoint since it uses the same platform we have tested several times, but it will be interesting to see how AMD bundles the R7 with other AMD products. AMD now has an opportunity to provide even more extensive bundles (CPU/APU, GPU, RAM and SSD); all that's left is for AMD to begin offering Radeon branded motherboards, power supplies, and cases to provide the ultimate AMD-only experience. Whether that happens remains to be seen, but AMD is trying an aggressive bundling strategy to increase their desktop CPU sales.

We have samples of the Radeon R7 SSD on the way, so stay tuned for the full review!


ARM's Cortex M: Even Smaller and Lower Power CPU Cores

Mon, 2014-08-18 18:14

ARM and its partners were arguably among the major causes of the present day smartphone revolution. While AMD and Intel focused on using Moore’s Law to drive higher and higher performing CPUs, ARM and its partners used the same physics to drive integration and lower power. The result was ultimately the ARM11 and Cortex A-series CPU cores that began the revolution and continue to power many smartphones today. With hopes of history repeating itself, ARM is just as focused on building an even smaller, even lower power family of CPU cores under the Cortex M brand.

We’ve talked about ARM’s three major families of CPU cores before: Cortex A (applications processors), Cortex R (real-time processors) and Cortex M (embedded/microcontrollers). Although Cortex A is what we mostly talk about, Cortex M is becoming increasingly important as compute is added to more types of devices.

Wearables are an obvious fit for Cortex M, yet the initial launch of Android Wear devices bucked the trend and implemented Cortex A based SoCs. A big part of that is likely due to the fact that the initial market for an Android Wear device is limited, and thus a custom designed SoC is tough to justify from a financial standpoint (not to mention the hardware requirements of running Android outpace what a Cortex M can offer). Look a bit earlier in wearable history, however, and you’ll find a good number of Cortex M based designs, including the FitBit Force and the Pebble Steel. I figured it’s time to put the Cortex M’s architecture, performance and die area in perspective.

We’re very much in the early days of the evolution of Cortex M. The family itself has five very small members: M0, M0+, M1, M3 and M4. For the purposes of this article we’ll be focusing on everything but Cortex M1. The M1 is quite similar to the M0 but focuses more on FPGA designs.

Before we get too far down the architecture rabbit hole it’s important to provide some perspective. At a tech day earlier this year, ARM presented data showing Cortex M die area.

By comparison, a 40nm Cortex A9 core would be roughly in the 2.5mm^2 range for a single core. ARM originally claimed the Cortex A7 would be around 1/3 - 1/2 of the area of a Cortex A8, and the Cortex A9 is roughly equivalent to the Cortex A8 in terms of die area, putting a Cortex A7 at 0.83mm^2 - 1.25mm^2. In any case, with Cortex M we’re talking about an order of magnitude smaller CPU core sizes.
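For reference, the arithmetic behind that Cortex A7 estimate is simple enough to reproduce:

```python
# Back-of-the-envelope die area math from the paragraph above.
a9_area = 2.5                  # mm^2, single 40nm Cortex A9 core (approx.)
a8_area = a9_area              # A9 is roughly A8-sized per ARM's claims
a7_low, a7_high = a8_area / 3, a8_area / 2
print(f"Cortex A7 estimate: {a7_low:.2f} - {a7_high:.2f} mm^2")
# -> 0.83 - 1.25 mm^2, still an order of magnitude above Cortex M territory
```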

The Cortex M0 in particular is small enough that SoC designers may end up sprinkling in multiple M0 cores in case they need the functionality later on. With the Cortex M0+ we’re talking about less than a hundredth of a square millimeter in die area; even the tightest budgets can afford a few of these cores.

In fact, entire SoCs based on Cortex M CPU cores can be the size of a single Cortex A core. ARM provided a shot of a Freescale Cortex M0+ design sitting in the dimple of a golf ball.

ARM wouldn’t provide me with comparative power metrics for Cortex M vs. Cortex A series parts, but we do have a general idea about performance:

Estimated Core Performance (DMIPS/MHz)
ARM Cortex M0/M0+: 0.84/0.94
ARM Cortex M3/M4: 1.25
ARM11: 1.25
ARM Cortex A7: 1.9
ARM Cortex A9: 2.5
Qualcomm Krait 200: 3.3

In terms of DMIPS/MHz, Cortex M parts can actually approach some pretty decent numbers. A Cortex M4 can offer similar DMIPS/MHz to an ARM11 (an admittedly poor indicator of overall performance). The real performance differences come into play when you look at shipping frequencies, as well as the type of memory interface built around the CPU. Cortex M designs tend to be largely SRAM and NAND based, with no actual DRAM. You'll note that the M3/M4 per-clock performance is identical; that's because the bulk of what the M4 adds comes in the form of other hardware instructions not measured by Dhrystone.
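To put the per-clock figures in context, multiply by plausible shipping frequencies; the clocks below are illustrative picks on our part, not rated maximums:

```python
# DMIPS/MHz from the table above times an illustrative clock speed.
cores = {
    "Cortex M0+ @ 50MHz":  (0.94, 50),
    "Cortex M4 @ 150MHz":  (1.25, 150),
    "ARM11 @ 500MHz":      (1.25, 500),
    "Cortex A9 @ 1500MHz": (2.50, 1500),
}
for name, (dmips_per_mhz, mhz) in cores.items():
    print(f"{name}: {dmips_per_mhz * mhz:,.0f} DMIPS")
# The M4 matches ARM11 per clock, but at typical microcontroller
# frequencies it delivers only a fraction of the total throughput.
```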

Instruction set compatibility varies depending on the Cortex M model we’re talking about. The M0 and M0+ both implement ARM’s v6-M instruction profile, while the M3 and M4 support ARM’s v7-M. As you go up the family in terms of performance you get access to more instructions (the M3 adds hardware divide, while the M4 adds DSP and FP instructions).

Each Cortex M chip offers a superset of the previous model’s instructions. So a Cortex M3 should theoretically be able to execute code for a Cortex M0+ (but not necessarily vice versa).

You also get support for more interrupts the higher up you go on the Cortex M ladder. The Cortex M0/M0+ designs support up to 32 interrupts, but if you move up to the M3/M4 you get up to 240.

All Cortex M processors have 32-bit memory addressability and the exact same memory map across all designs. ARM’s goal with these chips is to make moving up between designs as painless as possible.

While we’ve spent the past few years moving to out-of-order designs in smartphone CPUs, the entire Cortex M family is made up of very simple, in-order architectures. The pipelines themselves are similarly simple.

Cortex M0, M3 and M4 all feature 3-stage in-order pipelines, while the M0+ shaves off a stage of the design. In the 3-stage designs there’s an instruction fetch, instruction decode and a single instruction execute stage. In the event the decoder encounters a branch instruction, there’s a speculative instruction fetch that grabs the instruction at the branch target. This way, regardless of whether or not the branch is taken, the next instruction is waiting with at most a 1 cycle delay.
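As a toy way of expressing that branch behavior (our simplification, not ARM documentation), consider the cycle cost of a taken branch in a 3-stage front end:

```python
# In a fetch/decode/execute pipeline, a taken branch without any target
# prefetch discards the fall-through fetch and restarts the front end
# (fetch + decode = 2 lost cycles). With the speculative target fetch
# described above, the target instruction is already waiting, so the
# worst case shrinks to a single cycle.

def taken_branch_delay(speculative_target_fetch):
    return 1 if speculative_target_fetch else 2

print(taken_branch_delay(True), taken_branch_delay(False))  # 1 2
```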

These aren’t superscalar designs; there’s only a 1-wide path for instruction flow down the pipeline and not many execution units to exploit. The Cortex M3 and M4 add some more sophisticated units (hardware integer divide in the M3, MAC and limited SIMD in the M4), but by and large these are simple cores for simple needs.

The range of operating frequencies for these cores is relatively low. ARM typically expects to see Cortex M designs in the 20 - 150MHz range, but the cores are capable of scaling as high as 800MHz (or more) depending on process node. There’s a corresponding increase in power consumption as well, which is why we normally see lower clocked Cortex M designs.

Similar to the Cortex A and R lines, the Cortex M family has a roadmap ahead of it. ARM recently announced a new CPU design center in Taiwan, where Cortex M based cores will be designed. I view the Cortex M line today quite similarly to the early days of the Cortex A family. There’s likely room for a higher performing option in between Cortex M4 and Cortex A7. If/when we get such a thing I feel like we may see the CPU building block necessary for higher performance wearable computing.


NAS Units as VM Hosts: QNAP's Virtualization Station Explored

Mon, 2014-08-18 07:15

Virtualization has been around since the 1960s, but it has emerged as a hot topic over the last decade or so. Despite the rising popularity, its applications have been mostly restricted to enterprise use. Hardware-assisted virtualization features (AMD-V, VT-x and VT-d, for example) have been slowly making their way into the lower end x86 parts, thereby enabling low-cost virtualization platforms. QNAP is, to our knowledge, the only NAS vendor to offer a virtualization platform (using the Virtualization Station package for QTS) with some of their units. Read on to find out how it works and the impact it has on regular performance.


EVGA Torq X10 Gaming Mouse Review

Mon, 2014-08-18 07:14

EVGA recently sent us their new Torq X10, a gaming mouse that also marks EVGA's first foray into the gaming peripheral market. On paper, it boasts excellent features and specifications. In this review we are going to find out if it can live up to the high expectations of both the company and consumers.


FMS 2014: HGST Announces FlashMAX III PCIe SSDs

Mon, 2014-08-18 06:00

Continuing with our Flash Memory Summit coverage, HGST announced their FlashMAX III enterprise SSD, which is the first fruit of HGST's Virident acquisition and continues Virident's FlashMAX brand. The FlashMAX III comes in a half-height, half-length form factor and will be available in capacities of 1100GB, 1650GB and 2200GB. The controller is an FPGA-based 32-channel design with a PCIe 3.0 x8 interface, but there is no NVMe support since the FlashMAX III builds on the same architecture as the previous generation FlashMAX II.

HGST FlashMAX III Specifications
Capacity: 1100GB / 1650GB / 2200GB
Form Factor: Half-Height, Half-Length (HH-HL)
Interface: PCIe 3.0 x8
Controller: 32-channel FPGA based
NAND: Micron 20nm 64Gbit MLC
Sequential Read: 2.7GB/s / 2.0GB/s / 2.7GB/s
Sequential Write: 1.4GB/s / 1.0GB/s / 1.4GB/s
4KB Random Read: 549K / 409K / 531K IOPS
4KB Random Write: 53K / 30K / 59K IOPS
4KB 70/30 Random Read/Write: 195K / 145K / 200K IOPS
Write Latency: < 30 µsec
Max Power: 25 watts
Endurance: 2 DWPD
Warranty: Five years

The maximum throughput seems a bit low for a design that uses up eight PCIe 3.0 lanes, since 2.7GB/s should be achievable with just four. Obviously performance scaling is not that simple, but for example Samsung's XS1715 (which we will be reviewing soon!) is rated at up to 3.0GB/s while only consuming four PCIe 3.0 lanes. Using fewer PCIe lanes allows more drives to be deployed, as the number of PCIe lanes is always rather limited.

The 1650GB model is even slower because, as a middle capacity, it does not utilize all of the NAND channels. Basically, the 1100GB and 2200GB models have the same number of NAND packages, with the 2200GB model having twice as much NAND per package; the 1650GB model uses the higher capacity packages but doesn't fully populate the board. HGST told us that they are just testing the water to see if there is demand for something in between 1100GB and 2200GB.

The FlashMAX III also supports Virident Flash-management with Adaptive Scheduling (vFAS), which is a fancy name for Virident's storage driver. vFAS presents the FlashMAX as a single-volume block device to the OS, meaning that no additional storage protocols or controllers are needed, whereas some drives use a RAID controller or need software RAID solutions to be configured into an array. Additionally, vFAS handles NAND management by doing wear-leveling, garbage collection, data path protection, NAND-level parity, ECC, and more.

The FlashMAX III is currently being qualified by select OEMs and will ship later this quarter.


FMS 2014: SanDisk ULLtraDIMM to Ship in Supermicro's Servers

Mon, 2014-08-18 04:00

We are running a bit late with our Flash Memory Summit coverage as I did not get back from the US until last Friday, but I still wanted to cover the most interesting tidbits of the show. ULLtraDIMM (Ultra Low Latency DIMM) was initially launched by SMART Storage a year ago but SanDisk acquired the company shortly after, which made ULLtraDIMM a part of SanDisk's product portfolio.

The ULLtraDIMM was developed in partnership with Diablo Technologies and it is an enterprise SSD that connects to the DDR3 interface instead of the traditional SATA/SAS and PCIe interfaces. IBM was the first to partner with the two to ship the ULLtraDIMM in servers, but at this year's show SanDisk announced that Supermicro will be joining as the second partner to use ULLtraDIMM SSDs. More specifically Supermicro will be shipping ULLtraDIMM in its Green SuperServer and SuperStorage platforms and availability is scheduled for Q4 this year. 

SanDisk ULLtraDIMM Specifications
Capacities: 200GB & 400GB
Controller: 2x Marvell 88SS9187
NAND: SanDisk 19nm MLC
Sequential Read: 1,000MB/s
Sequential Write: 760MB/s
4KB Random Read: 150K IOPS
4KB Random Write: 65K IOPS
Read Latency: 150 µsec
Write Latency: < 5 µsec
Endurance: 10/25 DWPD (random/sequential)
Warranty: Five years

We have not covered the ULLtraDIMM before, so I figured I would provide a quick overview of the product as well. Hardware wise, the ULLtraDIMM consists of two Marvell 88SS9187 SATA 6Gbps controllers, configured in an array using a custom chip bearing a Diablo Technologies label, which I presume is also the secret behind the DDR3 compatibility. ULLtraDIMM supports F.R.A.M.E. (Flexible Redundant Array of Memory Elements), which utilizes parity to protect against page/block/die level failures; this is SanDisk's answer to SandForce's RAISE and Micron's RAIN. Power loss protection is supported as well and is provided by an array of capacitors.

The benefit of using a DDR3 interface instead of SATA/SAS or PCIe is lower latency because the SSDs sit closer to the CPU. The memory interface has also been designed with parallelism in mind and can thus take greater advantage of multiple drives without sacrificing performance or latency. SanDisk claims write latency of less than five microseconds, which is lower than what even PCIe SSDs offer (e.g. the Intel SSD DC P3700 is rated at 20µs).

Unfortunately there are no third party benchmarks for the ULLtraDIMM (update: there actually are benchmarks) so it is hard to say how it really stacks up against PCIe SSDs, but the concept is definitely intriguing. In the end, NAND flash is memory and putting it on the DDR3 interface is logical, even though NAND is not as fast as DRAM. NVMe is designed to make PCIe more flash friendly but there are still some intensive workloads that should benefit from the lower latency of the DDR3 interface. Hopefully we will be able to get a review sample soon, so we can put ULLtraDIMM through our own tests and see how it really compares with the competition.


Browser Face-Off: Chrome 37 Beta Battery Life Revisited

Mon, 2014-08-18 03:00

Last week we posted our Browser Face-Off: Battery Life Explored 2014, where the battery run down times of Firefox 31, IE11 Desktop, IE11 Modern, Chrome 36, and Chrome 37 beta were tested on Windows. We used GUI automation to open browsers, tabs, and visit websites to simulate a real user in a light reading pattern. The article answered a lot of questions about popular browser battery life on Windows, but it raised additional questions as well.
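Our harness itself is not public, but a minimal sketch of that kind of scripted "light reading" loop might look like the following Python, with the site list and one-minute cadence as assumptions rather than our exact test parameters:

```python
# Minimal sketch of a scripted "light reading" battery rundown loop:
# cycle through a list of pages in the system default browser, logging a
# timestamp per page. When the battery dies, the log stops advancing,
# and the last timestamp gives the run time.
import time
import webbrowser

SITES = [
    "http://www.anandtech.com",
    "http://en.wikipedia.org/wiki/Special:Random",
]

with open("battery_log.txt", "a") as log:
    while True:
        for url in SITES:
            webbrowser.open(url)          # reuses the already-open browser
            log.write(f"{time.time()} {url}\n")
            log.flush()
            time.sleep(60)                # simulate a reader dwelling on a page
```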

Chrome 36 tested with the best battery life, but it was the only browser that did not render correctly at 3200x1800 due to its lack of HiDPI support. In the Chrome 37 beta, HiDPI support improved rendering, but battery life took a 25% dive, tying it for last place. However, the Chrome 37 beta includes more changes than just HiDPI support (along with some debugging code), so was the battery life penalty from the now-native 3200x1800 rendering, or was it something else? After a few more days of testing at 1600x900 with 100% DPI scaling, we can narrow in on an answer.

When both Chrome 36 and Chrome 37 beta natively render at 1600x900, there is less than a 3% difference in battery life. Two tests of each browser were performed and the results averaged; the variation between runs was only 1%. Looking at our previous numbers for Chrome 36 and 37 beta at the HiDPI setting of 3200x1800 and 200% scaling, the situation is entirely different.

I've added an asterisk here (and clarified the same text on the original article) to indicate Chrome 36 isn't actually rendering at 3200x1800, but rather at 1600x900 and relying on Windows DPI Virtualization to scale up to 3200x1800.

Looking at the numbers, there's some good news and some bad news. The good news is that Chrome 37's new features likely won't hurt the battery life of current users. If you are using Chrome now, you are probably not using a HiDPI display due to the existing blurry rendering. For these users, the pending Chrome 37 upgrade has no significant impact on battery life. The bad news is that if you have been avoiding Chrome due to its HiDPI rendering issues, Chrome 37 resolves those issues but also appears to provide worse battery efficiency compared to Internet Explorer. On our XPS 15 that equated to about an hour less of mobility.

Given that this is the first version of Chrome to properly support HiDPI, it's entirely possible – even likely – that there are many opportunities to further optimize the algorithms and hopefully return battery life at least close to Chrome 36 levels. A slight dip in battery life is expected, as it takes more work to render a 3200x1800 image compared to a 1600x900 image, but a drop this large seems rather extreme. We'll have to see what future updates bring, but hopefully by noting the discrepancy we will encourage developers to better tune performance.


GIGABYTE AM1M-S2H Review: What Can $35 Get You?

Fri, 2014-08-15 12:00

While most of the time enthusiasts are playing around with the latest and greatest, the cheaper low performance platforms are usually the high volume movers. As we explained in our Kabini review, AMD has taken the unusual step of producing an upgradable platform for as little as $74. The motherboards for the AM1 Kabini platform range from $31 to $47, and today we are reviewing the GIGABYTE AM1M-S2H which retails at $35.


ASUS Motherboard Division Director: An Interview with Dr Albert Chang

Fri, 2014-08-15 07:00

Following our interviews previously with Rod O’Shea at Intel UK, Kris Huang at ASUS and Jackson Hsu at GIGABYTE, I was offered the opportunity to spend some time with Dr Albert Chang, the Senior Division Director in R&D for the entire Motherboard Business Unit at ASUS. The motherboard design and testing facilities span several floors of their headquarters, which we toured during Computex. I would like to thank ASUS and Dr Chang for their time and this opportunity.

Ian Cutress: Everyone sees a corporation, but there are always interesting people to talk to. Everyone has a back story and it is always interesting to hear how people have risen to where they are. Your business card says ‘Division Director’ – what exactly does a Division Director do?

Albert Chang: I am the ASUS motherboard R&D head, so I have three major teams. One of these is in Taiwan, with two others in China. The team in Taiwan focuses on ROG, with the other two in China for channel motherboards and SI customer designs.

IC: What is your typical day?

AC: Usually I have to review all the projects. We usually have 20-30 projects running at the same time. Being one person it can be hard to review all details, but I have staff that report to me and then I can discuss any problems in case any department needs assistance or approval.

IC: How long have you been in this position at ASUS?

AC: I have been Division Director for two years and at ASUS since 2002 straight after finishing my PhD from the National Taiwan University. I started as an engineer, checking datasheets and layouts. At ASUS, as a motherboard engineer, you are the project owner and you have to discuss everything with the Product Manager (layout, engineering) and others like the power and layout engineers. We had to design based on the specification sheets and confirm with layout engineers. I managed a couple of people at that time, and a lot more now!

IC: With regards to your education, what were your courses?

AC: I majored in Electrical Engineering, with a focus on Power Electronics. I finished my PhD at 28 and joined ASUS at that time.

IC: In your position, do you work a ‘9-to-5’, or do you have to come in on weekends?

AC: Sometimes at weekends, especially to have meetings with either North America or Europe, or to fix major issues that rise up. I have a family, but they are not too keen on me coming in on weekends! I sometimes have to buy cake or a gift when I get home!

IC: As Division Director, do you get final say on what happens with the motherboards?

AC: On the engineering side, yes, but there are also the firmware and software teams.

IC: Does the sales department ask you to do certain designs?

AC: All requests of that nature go through the product managers, who relay information through to R&D. So for ROG, Kris Huang (we interviewed him in 2012) is the product manager.

IC: How does user demand get fed back into designs?

AC: Typically I will speak to our product managers (both sales and marketing), or our technical marketing teams directly who monitor the forums and produce reports about user experience. Sometimes I like to hear direct from the teams gathering this information and interacting directly with the users, especially with our major regions such as North America. We have to look at the global market, and decide on ideas or features that benefit everyone.

IC: In terms of ideas for future platforms, who generates them, and where do they come from?

AC: We initially look at our competitors’ products, to see which direction they are going, and we also examine media reviews to see which options reviewers like or want to see improved. Features like the OC Panel come from the engineers in the ROG team. Because I am only one person, we encourage every engineer to share ideas in meetings so we can discuss them. There are multiple streams: some from in-house engineers, some from feedback, and some from product managers.

IC: What percentage of users need to request a feature before it is implemented?

AC: If a request comes up repeatedly, we evaluate the idea based on its relevance and the added cost to the motherboard. For example, adding both DC and PWM fan control on the motherboard came from a core group of users who wanted that control. It also helps if the media notice the new feature as well and can relay it to other users.

IC: How is market research for new ideas performed?

AC: We have the forums, but also social media plays a role. We sometimes give users a choice between two features (for example, audio codec A or B), and even if we only get 40 or so responses, we weigh up the percentages. The product managers for each region that understand their customers can also have input on new ideas.

IC: At what point in the product cycle do you start looking at the next generation of motherboards? If you released a motherboard today, how far back would you have had to start planning for it?

AC: At least nine months, in terms of the start point in thinking about what we want to do.

In the first three months, we will start analyzing the new major features for the CPU and chipset generation from the CPU manufacturer guidelines, paying attention to the differences to the old platform. We also look over bugs from the old generation, or ideas that we could not implement in the last generation. We also check the competitors’ products for the last generation, including the feedback from their users. At that point we talk to the major IC vendors (Renesas, ASMedia, Qualcomm Atheros) for their plans and roadmaps for the next 6-9 months so we have the latest for launch.

In month four, we finalize the segmentation of the product line, including form factors, and start the circuit design. We also work with Intel on early samples, which can have a lot of bugs, so we report back to Intel on processor and chipset evaluation for their PVT/first-stepping samples; the microcode gets revised several times. We take 4-6 weeks for the circuit design before the first motherboards are ready for testing, and by then we have those Intel CPU samples to test with.

We build 60-100 boards for a sample run when the design is coming together, for validation, reliability, and checking the power and everything else. This includes aging tests, such as high temperature stress testing. Typically our rule is a 12-hour test at this point, and if there are any errors in those 12 hours on the pre-production boards, we have to investigate. The cycle of testing, changing and retesting can take up to three months to catch any bugs, and at every change or iteration due to a hardware bug, we need to retest and revalidate.

At 7.5 months, we are at PVT stage before mass production. We ensure all the third party IC orders are in and will work with the motherboards. We work with factories in China for mass production and place our orders with them to build our motherboards. We have to check the production quality of the factory output. We typically send project managers or leaders to manage production and work with the factories in terms of managing the schedules as well as quantity.

Mass production starts about a month before launch, and in that time we also distribute hardware around the world. This also involves the sales teams talking to their local regional SIs, as well as inviting media to preview events. Typically the media receive samples from the first mass production batch.

IC: So by the timeline, users and media need to start asking for certain features around 5-8 months before a launch!  We normally do not know that there is a launch until it almost happens.

AC: Yes, changes late in the day are sometimes difficult to do, but we keep the ideas generated throughout the generation and see what we can apply next time around. For example, with the memory design, we do not always follow Intel guidelines. We have our own memory team and do a lot of simulations based on layout and tracing to find the best way to get the most out of the memory. We want to be better than the reference design, and the ROG team is the best at pushing new designs. If we want the best memory records, we need to have the best design.

IC: How long is the lead time, from placing an order to receiving stock, for the controllers?

AC: For the testing motherboards, we usually can get stock within a week or two. For the mass production, if it works in our design, it is more like 4-6 weeks. This includes other things like the PCB, which can sometimes be over 6 weeks.

IC: When do you start designing the additional materials (box, foam inserts, manuals)?

AC: We go through a lot of internal discussion, and there are a lot of revisions when it comes down to design. The design teams talk to sales and look at what the competition is doing, but the early design talks can be 6-9 months ahead of a launch, while the tracing teams are still designing the motherboards.

IC: When you mention 60-100 motherboards for a sample run, is that 100 motherboards for every SKU? So for the seven Z97 channel motherboards, you would have almost 700 samples?

AC: Yes, every SKU, of course! 

IC: In terms of product production goals, what would be your main goals in the next twelve months?

AC: In the first two months of a launch, we check to see if our features meet the customers’ needs. After that, we start to study the next generation. For me, I hope that each generation we can make the boards that everyone likes, because this is my product line at ASUS.

IC: How about the next five years?

AC: I will still be at ASUS, and I want to help expand PC applications in the home. Our chairman Jonney Shih has mentioned at Computex that this is a primary focus for ASUS.

IC: What do you think are the most important innovations that ASUS has created in the motherboard segment recently?

AC: Too many, cannot pick! Our ROG features span so many projects, for example. We have made our overclocking features easier to use than before, especially with automatic overclocking in BIOS and software, but also with the ROG OC Panel. Not many users know how to overclock, so we want to make it easier with our Auto Tuning, especially with voltages and stress testing. But we also cater for the extreme tweakers that use ROG.

IC: What element or feature from the ASUS Motherboard Business Unit do you think users need to know more about?

AC: We use separate components on the motherboard to help manage features like overclocking, but not many users know that we do the same for other features like power saving, separate from the CPU and chipset. Our Dual Intelligent Processors design, for example, is our own custom-designed chip for our motherboards, not something off the shelf, which users may not realize.

IC: Do you see a gap in the market that ASUS or the Motherboard Business Unit should move in to?

AC: The gaming and small form factor markets are growing, and other ultra-small form factors like the NUC and Chromebox are interesting. We announced the GR8 at Computex, which combines these in around 1.5 liters of volume. The sub-1-liter market should be a focus in the future.

IC: A question I like to pose in our interviews – what advice would you give to a high school student wanting to work for ASUS or to be in the position where you are today?

AC: The best thing is to be interested in electronics and computers. An engineer has to be familiar with this industry, especially the DIY market. Part of being an engineer is building PCs every day, up to 20-30. At the start of my career I had to build every machine by myself. At university, studying electronics or electronic engineering is vital. Out of the ~100 engineers on the fifth floor of HQ, the motherboard engineering floor, three or four have PhDs, most (70%+) have a Master’s and the rest have a Bachelor’s degree.

IC: If you were not working at ASUS, what would you be doing now? Would you still be in engineering?

AC: I would enjoy trying my hand at marketing! I like to promote the products.

IC: To what extent do you look at your competitors’ products?

AC: Our competitors are very aggressive and focused. We use our testing and validation processes on their products to see if they qualify.

IC: What has been your best day working at ASUS? Is there one specific moment that stands out compared to any other?

AC: When I started at ASUS, there was (and still is) a philosophy of doing it right the first time. Any engineer who produced a product that did not need a second revision (a revision 1.01) got a small bonus, something like NT$10,000 (~$300). In the R&D team, I was the first person to get this award, and it was on only my second project at ASUS, just after I had started. Normally there might be some layout bug or signaling bug, but I was very pleased to get it right the first time so early in my career.

IC: Do you remember the model name?

AC: It was an AMD motherboard, the SK8V. (We actually reviewed this board back in 2003.)

Many thanks to Dr Chang for his time!

Categories: Tech

ASRock Shows X99 Micro-ATX: The X99M Killer

Fri, 2014-08-15 04:18

One of the problems with Intel’s high-end desktop platforms is size: the sockets are large, and all the DRAM slots take up a fair amount of space. Couple this with the PCIe lane potential of the CPU, and restricting the motherboard to sizes smaller than ATX limits the number of features and multi-card PCIe capabilities afforded by the platform. Nonetheless we saw a couple of motherboards for X79 move down to the micro-ATX size, as well as a few system-integrator builds in other sizes. In that vein, ASRock is moving on from its X79 Extreme4-M (our review) and has sent us pictures of the upcoming X99M Killer.

One thing a micro-ATX layout does is free up some of the PCIe lanes for extra controllers. The X99M Killer will have ASRock’s Ultra M.2 slot, giving PCIe 3.0 x4 bandwidth (roughly 4GB/s) to drives up to 22110 in size. Being part of ASRock’s Killer range, the board gets an E2200 series network interface, which incorporates an EM shield similar to the one on the Purity Sound 2 upgraded audio. The Killer NIC is paired with an Intel NIC, and the Fatal1ty Mouse Port also makes an appearance.

Due to the size, I would assume that any other mATX X99 motherboards released will, like the X99M Killer, have only four DDR4 memory slots; here ASRock have used thinner slots in order to fit the power delivery and other features on board. I count five fan headers on the board, along with ASRock’s HDD Saver connector and ten SATA 6 Gbps ports. I can just about make out that some of these are labelled SATA3_0_1 and some are labelled 5_SATA3_0_1, perhaps indicating the presence of a controller or a hub. There is also a USB 3.0 header on board, along with power/reset buttons, a two-digit debug display, two BIOS chips, two USB 2.0 headers, a COM header, and additional power to the PCIe slots via a 4-pin molex. We also get an eSATA port on the rear panel, with a ClearCMOS button.

We can make out the final PCIe slot as having only four lanes of pins, suggesting an x16/x16/x4 layout. Whether those four lanes come from the CPU or the chipset is unclear, especially with the presence of the PCIe 3.0 x4 M.2 slot in the middle; on a 40-lane Haswell-E CPU, x16/x16/x4 plus an x4 M.2 would account for all 40 lanes.

The box lists XSplit, indicating a bundling deal with the software, and also advertises ECC and RDIMM support. I believe the X99M Killer will be due out at launch, or relatively soon after, although ASRock has not released pricing details yet.

Categories: Tech

Intel Demonstrates Direct3D 12 Performance and Power Improvements

Fri, 2014-08-15 03:00

Since the introduction of Direct3D 12 and other low-level graphics APIs, the bulk of our focus has been on the high end. One of the most immediate benefits of these new APIs is their ability to scale better across multiple threads and alleviate CPU bottlenecking, which has been a growing problem over the years as GPU performance gains outpace CPU performance gains.

At the opposite end of the spectrum from the performance benefits, however, are the efficiency benefits, and those gains have not been covered nearly as well. Intel is addressing that this week at SIGGRAPH 2014, where the company is showcasing both the performance and the efficiency gains Direct3D 12 brings to its hardware.

When it comes to power efficiency, Intel stands to be among the biggest beneficiaries of Direct3D 12, because the company exclusively ships its GPUs as part of integrated CPU/GPU products. Because the GPU and CPU portions of these chips share a thermal and power budget, reducing the software/CPU overhead of Direct3D lets Intel offer both improved performance and lower power usage with the exact same silicon in the same thermal environment. With Intel's recent focus on power consumption, mobile form factors, and chips like Core M, Direct3D 12 is an obvious boon.

Intel wisely demonstrated this improvement on a modern low-power mobile device: the Microsoft Surface Pro 3. For this demo Intel is using the Core i5-4300U version, Microsoft’s middle-of-the-road model, which turbos up to 2.9GHz on the CPU and features Intel’s HD 4400 GPU with a maximum clockspeed of 1.1GHz. In our testing, we found the Surface Pro 3 to be thermally constrained, throttling when met with a medium-to-long duration GPU task. Broadwell should go a long way towards improving the situation, and so should Direct3D 12 for current and future Intel devices.

To demonstrate the benefits of Direct3D 12, Intel put together a tech demo that renders 50,000 unique asteroid objects floating in space. The demo can operate in a maximum performance mode with the frame rate unrestricted, as well as in a fixed frame rate mode that limits CPU and GPU utilization in order to reduce power consumption. The demo can also dynamically switch between making Direct3D 11 and Direct3D 12 API calls, and an overlay shows the power consumption of both the CPU and GPU portions of the Intel processor.
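For illustration, a fixed frame rate mode typically amounts to a capped render loop: finish the frame, then sleep away whatever remains of the frame budget so neither the CPU nor the GPU runs flat out. Below is a minimal C++ sketch of the idea; the render_frame() stub and the 19fps target are our own assumptions for illustration, not code from Intel's demo.

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-in for the demo's per-frame work (recording and
// submitting draw calls, presenting, etc.); not Intel's actual code.
void render_frame() { /* per-frame work goes here */ }

int main() {
    using clock = std::chrono::steady_clock;
    // Fixed frame rate mode: cap the loop at ~19fps. In maximum
    // performance mode the sleep below would simply be skipped and
    // the loop would run as fast as the hardware allows.
    const auto frame_budget = std::chrono::microseconds(1000000 / 19);

    while (true) {
        const auto start = clock::now();
        render_frame();
        const auto elapsed = clock::now() - start;
        // Idle for the rest of the frame budget; this idle time is
        // where the fixed frame rate mode's power savings come from.
        if (elapsed < frame_budget)
            std::this_thread::sleep_for(frame_budget - elapsed);
    }
}
```

With the cap in place, any reduction in CPU work per frame shows up directly as longer idle time rather than higher frame rates, which is why this mode isolates the efficiency gains.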

Intel states this demo data was taken after steady-state thermals were reached.

In the performance mode, Direct3D 11 reaches 19 frames per second, with power consumption split roughly evenly between the CPU and GPU. This confirms that while this is a graphical demo, there is significant CPU activity and overhead from handling so many draw calls.

After dynamically switching to Direct3D 12 while in performance mode, the frame rate jumps nearly 75% to 33fps, and the power consumption split goes from 50/50 (CPU/GPU) to 25/75. The lower CPU overhead of making Direct3D 12 API calls versus Direct3D 11 API calls allows Intel's processor to stay within its thermal profile while shifting more of its power budget to the GPU, improving performance.
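Conceptually, the reduced overhead comes from how the work is submitted. Direct3D 11's immediate context funnels every draw call through a single driver thread with per-call validation, whereas Direct3D 12 lets the application record command lists for all 50,000 asteroids across several threads and submit them in one batch. Here is a rough sketch of that recording pattern, using a toy buffer in place of real API objects; this is our illustration of the general technique, not Intel's demo code.

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Toy stand-in for a command list: each worker appends encoded draw
// commands to its own buffer, so recording needs no locks and no
// per-call round-trips into the driver.
struct CommandBuffer {
    std::vector<uint32_t> commands;
    void draw(uint32_t object_id) { commands.push_back(object_id); }
};

int main() {
    const uint32_t kObjects = 50000;  // the demo's asteroid count
    const unsigned kThreads =
        std::max(1u, std::thread::hardware_concurrency());

    std::vector<CommandBuffer> buffers(kThreads);
    std::vector<std::thread> workers;

    // Direct3D 12 style: partition the scene and record in parallel.
    // The Direct3D 11 equivalent funnels all 50,000 draws through one
    // immediate context on one thread, which is where the extra CPU
    // time (and therefore power) goes.
    for (unsigned t = 0; t < kThreads; ++t) {
        workers.emplace_back([&, t] {
            for (uint32_t i = t; i < kObjects; i += kThreads)
                buffers[t].draw(i);
        });
    }
    for (auto& w : workers) w.join();

    // A real renderer would now submit every recorded list to the GPU
    // queue in a single batch (ExecuteCommandLists in Direct3D 12).
    return 0;
}
```

The lock-free recording is possible because Direct3D 12 moves most of the validation that Direct3D 11 performs at draw time up front, to pipeline state creation.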

Finally, in the efficiency-focused fixed frame rate mode, switching from Direct3D 11 to 12 slightly reduces GPU power consumption but dramatically reduces CPU power consumption, all while maintaining the same 19fps frame rate. Intel's data shows a 50% total power reduction, virtually all of it coming from CPU power savings. As Intel notes, not only does the processor save power by doing less work overall, it also saves power because the workload is better distributed over more CPU cores, allowing each core in turn to run at a lower clockspeed and voltage for greater power efficiency.
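The "lower clockspeed and voltage" point is the usual dynamic power relation, P ≈ n·C·V²·f, at work. A back-of-the-envelope illustration in C++ follows; the voltage and clock operating points are invented for the arithmetic, not Intel's measured values.

```cpp
#include <cstdio>

int main() {
    // Dynamic power scales roughly with cores * V^2 * f (capacitance
    // normalized to 1). Compare the same total throughput delivered by
    // one core sprinting vs. four cores each at a quarter of the clock,
    // which permits a lower supply voltage. All operating points below
    // are invented for illustration.
    const double p_one  = 1 * 1.00 * 1.00 * 2.900; // 1 core,  1.00V, 2.9GHz
    const double p_four = 4 * 0.80 * 0.80 * 0.725; // 4 cores, 0.80V, 725MHz

    std::printf("one core  : %.2f units\n", p_one);  // 2.90
    std::printf("four cores: %.2f units\n", p_four); // 1.86, ~36% lower
    return 0;
}
```

The precise numbers are made up, but the direction matches Intel's measurements: the same work spread wider along the voltage/frequency curve costs markedly less power.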

To put these numbers in perspective, a 50% reduction in power consumption is about what we would expect from a new silicon process (e.g. moving from 22nm to 14nm), so achieving such a reduction with software alone is a very significant result and a feather in Microsoft’s cap for Direct3D 12. If this carries over to shipping DirectX 12 games and applications when they launch in Q4 2015, it could help usher in a new era of mobile gaming and high-end graphics. It is not often that we see such a substantial power and performance improvement from a software update.

Source: Intel, Microsoft

Categories: Tech