Anandtech

This channel features the latest computer hardware related articles.

Corsair Carbide Air 240 Case Review

Fri, 2014-08-15 03:00

With compact cases and SFFs being all the rage nowadays, today Corsair is launching the Carbide Air 240, a cubic Micro-ATX case designed to fit powerful PC hardware. As the name suggests, it is based on the design of the full-ATX Carbide Air 540 that was released last year. Can the smaller version make the same impact as its larger, older brother? We will find out in this review.

Categories: Tech

Isolated Internet Outages Caused By BGP Spike

Thu, 2014-08-14 16:00

The day was Tuesday, August 12th 2014. I arrived home, only to find an almost unusable internet connection. Some sites such as AnandTech and Google worked fine, but large swaths of the internet such as Microsoft, Netflix, and many other sites were unreachable. As I run my own DNS servers, I assumed it was a DNS issue, but a couple of ICMP commands later it was clear that this was a much larger issue than just something affecting my household.

Two days later, there is a pretty clear understanding of what happened. Older Cisco core internet routers in their default configuration only allow for a maximum of 512K routes in their Border Gateway Protocol (BGP) tables. With the internet always growing, the number of routes briefly surpassed that number on Tuesday, which caused many core routers to be unable to route traffic.

BGP is not something that is discussed very much, due to the average person never needing to worry about it, but it is one of the most used and most important protocols on the internet. The worst part of the outage was that it was known well in advance that this would be an issue, yet it still happened.

Let us dig into the root cause. Most of us have a home network of some sort, with a router and maybe a dozen or so devices on it. We connect to an internet service provider through (generally) a modem. When devices on your local network want to talk to other devices on your network, they do so by sending packets upstream to the switch (which in most cases is part of the router), and the switch then forwards the packet to the correct port where the other device is connected. If the second device is not on the local network, the packets get sent to the default gateway, which then forwards them upstream to the ISP.

At the ISP level, in simple terms, it works very similarly to your LAN. The packet comes in to the ISP network, and if the IP address is something that is in the ISP’s network, it gets routed there, but if it is something out on the internet, the packet is forwarded. The big difference though is that an ISP does not have a single default gateway, but instead connects to several internet backbones. The method by which internet packets are routed is based on the Border Gateway Protocol. A BGP table contains a list of IP subnets (prefixes) and specifies which port to forward traffic to, based on rules and paths laid out by the network administrator. For instance, if you want to connect to Google to check your Gmail, your computer will open a TCP connection to 173.194.33.111 (or another address as determined by your DNS settings and location). Your ISP will receive this packet and send it out the port that leads toward the part of the internet closer to the subnet that address is in. If you then want to connect to Anandtech.com, the packet will be sent to 192.65.241.100, and the BGP table of the ISP router may then send it out a different port. This continues upstream from core router to core router until the packet reaches the destination subnet, where it is finally delivered to the web server.
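
To make the table lookup concrete, here is a minimal Python sketch of longest-prefix-match forwarding, the lookup that BGP-learned tables ultimately feed. The prefixes and port names are made up for illustration (only the two destination addresses come from the examples above); a real core router holds hundreds of thousands of prefixes, which is where the 512K limit comes in.

    import ipaddress

    # Hypothetical forwarding table: prefix -> outbound port. Real tables are
    # built from BGP announcements and hold on the order of 500,000 entries.
    routing_table = {
        ipaddress.ip_network("173.194.0.0/16"): "port-1",   # toward Google's subnet
        ipaddress.ip_network("192.65.241.0/24"): "port-2",  # toward AnandTech's subnet
        ipaddress.ip_network("0.0.0.0/0"): "port-0",        # default route
    }

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        # Forward using the most specific (longest) prefix that contains the address.
        matches = [net for net in routing_table if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return routing_table[best]

    print(next_hop("173.194.33.111"))   # -> port-1
    print(next_hop("192.65.241.100"))   # -> port-2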

With the BGP tables being overfilled on certain routers in the chain, packets sent through those routers would be dropped at some point along the path, meaning you would not have any service.

The actual specifics of what happened seem to be that Verizon unintentionally added approximately 15,000 /24 routes to the global routing table. These prefixes were supposed to be aggregated, but that didn’t happen, and as such the total number of subnet prefixes in the table spiked. Verizon fixed the mistake quickly, but it still caused many routers to fail.
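
As a rough illustration of what aggregation means here (using made-up 10.1.x.x prefixes rather than Verizon's actual announcements), 256 contiguous /24 networks can be collapsed into a single /16 table entry; when that aggregation step is skipped, every /24 consumes its own slot in the global table:

    import ipaddress

    # 256 contiguous /24s -- announced individually they take 256 table slots.
    deaggregated = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(256)]

    # Aggregated, they collapse into a single covering prefix.
    aggregated = list(ipaddress.collapse_addresses(deaggregated))

    print(len(deaggregated))  # 256 entries without aggregation
    print(aggregated)         # [IPv4Network('10.1.0.0/16')] -- one entry with aggregation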

Although it would be easy to jump in and blame Verizon for the outage, it has to be noted that Cisco issued a warning to customers several months ago, explaining that the memory allocated for the BGP table was very close to being full and giving specific instructions on how to correct it. Unfortunately not all of Cisco's customers heeded or received the warning, which allowed the brief spike to cripple parts of the internet.

Newer Cisco routers were not affected, because their default configuration for the TCAM memory designated for the BGP table allows for more than 512,000 entries. Older routers from Cisco have enough physical memory for up to 1,000,000 entries, assuming the configuration is changed as outlined by Cisco.

The effects of outages like this on the internet economy can be quite potent, with several online services being unavailable for large parts of the day. However this outage doesn’t need to happen again, even though the steady-state number of entries in the BGP table will likely exceed the magic 512,000 number again. Hopefully with this brief outage, lessons can be learned, and equipment can be re-configured or upgraded to prevent this particular issue from rearing its head again in the future.

 

Sources

DSLReports

Renesys

BGPMon

Categories: Tech

HTC Launches Zoe Beta: Hands On and First Impressions

Thu, 2014-08-14 09:00

Today, HTC is finally launching their Zoe application. Normally such an announcement would be rolled into a smartphone launch, but this is one of the first applications to launch on its own. In fact, this is the first application launched by HTC Creative Labs. As a result, the vision and strategy behind this application is dramatically different from that of HTC's existing hardware and software divisions.

But before we dive into what this means for HTC, it's important to see what this application is. In short, Zoe is the culmination of multiple pieces of HTC Sense, packaged into a single application. It integrates the video highlights feature first introduced in Sense 5, and effectively brings back the HTC Share application which disappeared with the launch of the One (M8). For those that haven't kept up with what these features are, video highlights was an extension of the gallery application in HTC Sense, which allowed for automatic creation of ~30 second highlight reels. These highlight reels were automatically set to music with specific cuts and animations based upon the theme selected. HTC Share was simply a method to present highlight reels and all of the photos/Zoes used in a highlight reel over social media. Both of these were well-received when they first launched, and even now they're good features to have.

There's definitely more to Zoe though, as HTC has integrated a social network aspect with the ability to collaborate and remix other people's videos. While viewing highlight reels from friends using the Zoe network is expected, HTC has also integrated a discovery feature to see highlight reels from other people. In summary, the Zoe application brings HTC's video highlights feature and HTC Share to smartphones running Android 4.4 and adds a social network aspect on top.

In practice, it works much as described. HTC has done a surprisingly good job of bringing the Sense 6 UX to devices like the Galaxy S5, although some aspects like the status bar don't carry over perfectly. The application provides a small taste of the well-designed HTC Sense 6 UI without removing major selling points from HTC devices. HTC emphasized that the major goal for this application is wide adoption; while these features were good selling points for HTC hardware, there was no real way to build a social network on top of them without making them available to non-HTC devices.

In discussions with HTC, it seems clear that they are prioritizing user experience first over a monetization strategy. This is made clearer by the fact that there aren't any in-app purchases at this point in time. Of course, there are plenty of possibilities in this area once Zoe becomes widely adopted. Premium themes and music, along with in-app advertising were all mentioned as ways that HTC could bring in revenue. While HTC wasn't clear on how this would drive hardware sales, it wouldn't be surprising to see features exclusive to HTC devices in the future. It's surprising how far HTC has come in their software design, and I wouldn't be surprised to see this take off. Even if people aren't interested in the social networking aspect, the ability to create highlight videos and share them on pre-existing social networks is quite compelling. The social features also have great potential in situations where multiple people attend an event and take photos and videos on multiple devices. However, it's not quite clear whether this will gain the popularity of applications like Instagram and Twitter, and even if that happens it's currently hard to see the benefits to HTC's hardware division.

As of publication, the HTC Zoe Beta is available on the Play Store. HTC has stated that Android 4.4 devices should be compatible with this application although there may be additional restrictions.

Categories: Tech

be quiet! Power Zone 850W CM Power Supply Review

Thu, 2014-08-14 03:00

be quiet! is a German company that specializes in low-noise computer PSUs and coolers, and they are slowly making their way into the North American market. Today we have their Power Zone 850W CM in our labs for review, an apparently popular but expensive power supply. Read on to see if it warrants the high price.

Categories: Tech

ASUS ROG Z97 Maximus VII Impact Officially Launched

Thu, 2014-08-14 01:25

One of the highlights of ASUS’ ROG Computex Press Event was the announcement of the Maximus VII Impact, the successor to the popular mini-ITX ROG Impact range of motherboards.  Today the motherboard is officially released from ASUS HQ, with stock coming to regions shortly.  We reviewed the Maximus VI Impact last year and were appropriately impressed by the effort to include so many features on the mini-ITX platform – ASUS is hoping that this Z97 upgrade kicks it up a notch.

Similar to the Maximus VI, the order of the day is extra PCBs in order to add features.  The power delivery is upgraded to the ROG 9-series design, and the audio add-on SupremeFX card moves up with fewer filter caps in a more optimized output.  The mPCIe Combo moves up to revision four which includes a PCIe 3.0 x4 M.2 slot and a mini-PCIe slot with a bundled 802.11ac dual band WiFi module.  Note that when the M.2 slot is occupied, the PCIe slot will reduce to PCIe 3.0 x8, although our previous testing of a similar feature shows no frame rate difference at 1080p.

The Impact Control on the rear panel uses the same two-digit debug display as the previous version, along with ROG Connect and a Clear CMOS button, but the two buttons included are for the KeyBot and SoundStage functionality introduced with the Maximus VII range.  KeyBot is a feature to enable macros on any keyboard via the ASUS software, and SoundStage acts as a configurable op-amp that modifies various aspects of the audio output to be better suited to various styles of gaming.

Interestingly enough the rear of the Impact Control is another PCB, holding two 4-pin fan headers to complement the 4-pin CPU fan header on the top left and the chassis header on the right of the motherboard.  Also of note is the rear panel, where a single block of USB ports combines two USB 3.0 ports and two USB 2.0 ports – a configuration I have not seen on a motherboard before, and one that could pave the way for similar arrangements on other motherboards in the future.

The DRAM slots use single-sided latches on the power delivery side, and the power connectors are thankfully on the outside edge of the motherboard, alongside the start button, a fan header and a USB 3.0 header.  The front panel header is also on the right side of the DRAM slots.  The SATA 6 Gbps ports are unfortunately just inside the DRAM slots and all face the same direction, perhaps causing issues when trying to remove locking cables.

The socket area is right up against Intel’s minimum specifications, suggesting that CPU coolers might run up against the power delivery or tall memory modules.  The audio is bolstered by Sonic Radar II, an onscreen representation of directional audio, and networking comes via the 802.11ac WiFi and an Intel I218-V NIC with GameFirst III.

We are awaiting information from ASUS’ US office for pricing and availability.

Categories: Tech

Samsung Announces Exynos 5430: First 20nm Samsung SoC

Wed, 2014-08-13 17:44

While we mentioned this in our Galaxy Alpha launch article, Samsung is finally announcing the launch of their new Exynos 5430 SoC.

The critical upgrade that the new chip revolves around is the manufacturing process: this is Samsung’s first 20nm SoC product, and Samsung is also the first manufacturer to ship one.

On the CPU side of the 5430, things don’t change much at all from the 5420 or 5422, with only a slight frequency change to 1.8GHz for the A15 cores and 1.3GHz for the A7 cores. We expect these frequencies to actually be used in consumer devices, unlike the 5422’s announced frequencies, which were never reached in the end and were limited to 1.9GHz/1.3GHz in the G900H version of the Galaxy S5. As with the 5422, the 5430 comes fully HMP enabled.

A bigger change is that the CPU IP has been updated from the r2p4 revision found in previous 542X incarnations to an r3p3 core revision. This change, as discussed by Nvidia earlier in the year, should provide better clock gating and power characteristics for the CPU side of the SoC.

On the GPU side, the 5430 offers little difference from the 5422 or 5420 beyond a small frequency boost to 600MHz for the Mali T628MP6.

While this is still a planar transistor process, a few critical changes have been made that make 20nm HKMG a significant leap forward from 28nm HKMG. First, instead of a gate-first approach for the high-K metal gate formation, the gate is now the last part of the transistor to be formed. This improves performance because the characteristics of the gate are no longer affected by the high temperatures used during manufacturing. In addition, a lower-k dielectric in the interconnect layers reduces capacitance between the metal layers, which increases maximum clock speed/performance and reduces power consumption. Finally, improved strained silicon techniques should also improve drive current in the transistors, which can drive higher performance and lower power consumption. The end effect is that we should expect an average drop in voltage of about 125mV and, quoting Samsung, a 25% reduction in power.

In terms of auxiliary IP blocks and accelerators, the Exynos 5430 offers a new HEVC (H.265) hardware decoder block, bringing its decoding capabilities on par with Qualcomm’s Snapdragon 805.

Also added is a new Cortex A5 co-processor dedicated to audio decoding called “Seiren”. Previously Samsung used a custom FPGA block called the Samsung Reprogrammable Processor (SRP) for audio tasks, which now seems to have been retired. The new subsystem allows for processing of all audio-related tasks, ranging from decoding of simple MP3 streams to DTS or Dolby DS1 audio codecs, sample rate conversion and band equalization. It also provides the chip with voice capabilities such as voice recognition and voice-triggered device wakeup without external DSPs. Samsung actually published a whitepaper on this feature back in January, but until now we didn’t know which SoC it was addressing.

The ISP is similar to the one offered in the 5422, which included a clocking redesign and a new dedicated voltage plane.

The memory subsystem remains the same, maintaining the 2x32-bit LPDDR3 interface, able to sustain frequencies up to 2133MHz or 17GB/s. We don’t expect any changes in the L2 cache sizes, and as such, they remain the same 2MB for the A15 cluster and 512KB for the A7 cluster.
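
For reference, the quoted 17GB/s figure follows directly from the interface width and transfer rate; this is just a back-of-the-envelope check, not a Samsung-provided formula:

    # Peak theoretical bandwidth of a 2x32-bit LPDDR3 interface at 2133MT/s.
    channels = 2
    bus_width_bits = 32
    transfers_per_sec = 2133e6

    peak_bytes_per_sec = channels * (bus_width_bits / 8) * transfers_per_sec
    print(f"{peak_bytes_per_sec / 1e9:.1f} GB/s")  # -> 17.1 GB/s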

The Galaxy Alpha will be the first device to ship with this new SoC, in early September of this year.

Categories: Tech

Razer Announces Chroma Keyboard, Mouse, and Headset

Wed, 2014-08-13 12:46

Today in Cologne, Germany at Gamescom 2014, Razer revealed their latest updates for their line of peripherals. Launching with a new feature dubbed Chroma, Razer announced three updated devices: the BlackWidow Ultimate keyboard, DeathAdder gaming mouse, and Kraken 7.1 headset. Presumably these devices will be similar to the existing line of Razer peripherals, with the key difference being Chroma, which provides customizable multi-colored backlighting.

The BlackWidow Ultimate keyboard is perhaps the most eye-catching of the three, and it appears similar to Corsair's RGB-backlit K70 and K95 keyboards with per-key lighting. The difference is in the details of course, and Razer uses their own custom Green/Orange switches, so the feel will be slightly different from the Corsair models. The DeathAdder and Kraken aren't quite as advanced, in that there are fewer backlights available – the scroll wheel and Razer logo on the DeathAdder are linked to the same color, while the ear cups on the Kraken are likewise linked. One interesting feature however is that all three devices can be synchronized via Razer's cloud-based Synapse software.

Like other RGB backlit devices, Chroma in theory allows up to 16.8 million colors, though as we've noted before overlap among the colors means the "useful" palette is going to be more like 20-40 colors. Besides selecting individual colors, Razer offers several effects for colors as well. Spectrum cycling is for those that want to show the full rainbow of colors, while breathing causes the backlight to pulse one or two colors on and off every seven seconds. The BlackWidow keyboard offers several additional options, including the ability to customize each individual key and save/load templates optimized for various games. Reactive mode causes the individual keys to light up when pressed and then fade out with three time delays for fading to black (slow, medium, and fast), and finally there's a wave effect that cycles the colors on the keyboard in a wave.
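
The 16.8 million figure is simply the size of the standard 24-bit RGB color space (8 bits per channel); a quick check:

    # 24-bit RGB backlighting: 8 bits (256 levels) each for red, green, and blue.
    levels_per_channel = 2 ** 8
    total_colors = levels_per_channel ** 3
    print(f"{total_colors:,}")  # 16,777,216 -- the "16.8 million" colors quoted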

Razer has a web demonstration showing what the various effects look like, or you can watch the promo video on YouTube. Razer will also be providing an open Chroma SDK to allow game developers and users full access to the devices, providing the potential for an even deeper level of customization (e.g. reactive mode with multiple colors should be possible). Pricing for the devices has not been announced, but the Chroma-enabled devices will be available starting in September 2014. The Chroma SDK meanwhile is slated for release in "late 2014".


Categories: Tech

OCZ Launches ARC 100 Value SSD

Wed, 2014-08-13 12:05

The release of the Vector marked a change in OCZ's strategy. With a new CEO, OCZ's goal was to change the company's brand image from a low-cost value brand to a higher-end, high performance and quality SSD manufacturer. For the first time, OCZ decided not to release a value version (an Agility-level drive) of its Barefoot 3 platform and focused only on the higher-end market with the Vector and Vertex 4xx lineups. Almost two years after the introduction of the Vector, OCZ is now finally comfortable with bringing the Barefoot 3 platform to the mainstream market, and the ARC 100 acts as the comeback vehicle.

OCZ ARC 100 Specifications
  Capacity: 120GB | 240GB | 480GB
  Controller: OCZ Barefoot 3
  NAND: Toshiba A19nm MLC
  Sequential Read: 475MB/s | 480MB/s | 490MB/s
  Sequential Write: 395MB/s | 430MB/s | 450MB/s
  4KB Random Read: 75K IOPS (all capacities)
  4KB Random Write: 80K IOPS (all capacities)
  Steady-State 4KB Random Write: 12K IOPS | 18K IOPS | 20K IOPS
  Idle Power: 0.6W
  Max Power: 3.45W
  Encryption: AES-256
  Endurance: 20GB/day for 3 years
  Warranty: Three years
  MSRP: $75 | $120 | $240

Similar to the Vector 150 and Vertex 460, one of the main focuses of the ARC 100 is performance consistency, and OCZ remains one of the only manufacturers that reports steady-state performance for client drives. The biggest difference from the Vector 150 and Vertex 460 is in the NAND department, as the ARC 100 utilizes Toshiba's second generation 19nm NAND, i.e. A19nm as Toshiba calls it. Despite the smaller process node NAND, OCZ is rating the ARC 100 at the same 20GB of writes per day for three years as the Vertex 460, although the ARC 100 is slightly slower in performance and also drops the bundled cloning software and 3.5" adapter.

Given the smaller cell size of the A19nm NAND, OCZ is able to price the ARC 100 more aggressively. At higher capacities OCZ is able to hit the $0.50/GB mark, which makes the ARC 100 very price competitive with Crucial's MX100, our favorite mainstream SSD for the past couple of months. I am getting back from the US tomorrow and my review samples are already waiting for me at home, so you should expect to see the full review next week!
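
Working from the MSRPs in the specification table above, the price-per-gigabyte math is straightforward; this is just a quick sanity check on the $0.50/GB claim, not official OCZ figures:

    # MSRP and capacity pairs taken from the ARC 100 specification table above.
    msrp_by_capacity_gb = {120: 75, 240: 120, 480: 240}

    for capacity_gb, price_usd in msrp_by_capacity_gb.items():
        print(f"{capacity_gb}GB: ${price_usd / capacity_gb:.3f}/GB")
    # 120GB: $0.625/GB, 240GB: $0.500/GB, 480GB: $0.500/GB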

Categories: Tech

ioSafe 1513+ Review: A Disaster-Resistant Synology DS1513+

Wed, 2014-08-13 04:30

The 3-2-1 data backup strategy involves keeping three copies of all essential data, spread over at least two different devices with at least one of them being off-site or disaster-resistant in some way. It is almost impossible to keep copies of large frequently updated data sets current in an off-site data backup strategy. This is where ioSafe's disaster-resistant units come into the picture. Products such as the SoloPRO and the ioSafe N2 show how ioSafe has continued to innovate in this space. The ioSafe 1513+ is their most ambitious product to date, attempting to place Synology's most powerful 5-bay NAS unit inside a fire- and waterproof package. Read on for a closer look at the hardware and performance of the unit.

Categories: Tech

Samsung launches the Galaxy Alpha

Wed, 2014-08-13 01:51

Samsung today announces the new Galaxy Alpha, a mid-range, "premium"-built device that creates a new range in Samsung's lineup. The Alpha sports a 4.7" 1280x720 AMOLED screen and comes with either the yet-unannounced Exynos 5430 SoC – with 4 A15 cores running at 1.8GHz, 4 A7 cores at 1.3GHz, and a Mali T628MP6 GPU – for the international market, or a Snapdragon 801 SoC for select markets such as the US. Both versions come with 2GB of memory on board.

It includes a new 12MP rear camera sensor and a 2.1MP front camera.

The device comes in a new aluminium frame, marking a change in build material from Samsung's usual plastic. The phone is extremely thin at only 6.7mm and weighs a light 115g. The footprint of 132.4 x 65.5mm matches the 4.7" screen format of the phone. The back cover is removable and sports a 1860mAh replaceable battery. Strangely, Samsung omitted a microSD card slot on this device, which comes with a standard 32GB of internal storage space. We find the same fingerprint and heart rate sensors as on the S5, however the Alpha lacks the S5's waterproofing. It ships with Android 4.4.4 KitKat and the same TouchWiz iteration as the S5.

More interestingly, the international version of the device should sport LTE-A Category 6 connectivity with the help of an Intel XMM7260 modem. This would be the first device announced with Intel's new LTE modem, marking a break from Qualcomm's dominance in the sector.

The Alpha is an intriguing device that apparently wants to fill a gap in Samsung's lineup, which has seen device size go up with each iteration of the S-series. The 720p screen, slimness, and design seem to target the iPhone directly rather than other high-end Android handsets, and pricing should also end up at the higher end.

Source: SamsungTomorrow

Categories: Tech

USB Type-C Connector Specifications Finalized

Tue, 2014-08-12 20:50

Today it was announced by the USB-IF (USB Implementers Forum) that the latest USB connector which we first caught a glimpse of in April has been finalized, and with this specification many of the issues with USB as a connector should be corrected. USB, or Universal Serial Bus, has been with us for a long time now, with the standard first being adopted in 1996. At the time, it seemed very fast at up to 12 Mbps, and the connector form factor was not an issue on the large desktop PCs of the day, but over the years, the specifications for USB have been updated several times, and the connectors have also been updated to fit new form factor devices.

In the early ‘90s, when USB was first being developed, the designers had no idea just how universal it would become. The first connectors, USB-A and USB-B, were not only massive in size, but the connection itself was only ever intended to provide power at a low draw of 100 mA. As USB evolved, those limitations were some of the first to go.

First, the mini connectors were introduced, which, at approximately 3 mm x 7 mm, were significantly smaller than the original connector, but other than the smaller size they didn’t correct every issue with the initial connectors. For instance, they still had a connector which had to be oriented a certain way in order to be plugged in. As some people know, it can take several tries to get a USB cable to connect, and has resulted in more than a few jokes being made about it. The smaller size did allow USB to be used on a much different class of device than the original connector, with widespread adoption of the mini connectors on everything from digital cameras to Harmony remotes to PDAs of the day.

USB Cables and Connectors - Image Source Viljo Viitanen

In January 2007, the Micro-USB connector was announced by the USB-IF, and with this change, USB now had the opportunity to become ubiquitous on smartphones and other such devices. Not only was the connector smaller and thinner, but the maximum charging rate was increased to up to 1.8 A for pins 1 and 5. The connection is also rated for at least 10,000 connect-disconnect cycles, which is much higher than the original USB specification of 1,500 cycles, and 5,000 for the Mini specification. However once again, the Micro-USB connector did not solve every issue with USB as a connector. Again, the cable was not reversible, so the cable must be oriented in the proper direction prior to insertion, and with USB 3.0 being standardized in 2008, the Micro connector could not support USB 3.0 speeds, and therefore a USB 3.0 Micro-B connector was created. While just as thin as the standard connector, it adds an additional five pins beside the standard pins making it a very wide connection.

With that history behind us, we can take a look at the changes which were finalized for the latest connector type. There are a lot of changes coming, with some excellent enhancements:

  • Completely new design but with backwards compatibility
  • Similar to the size of USB 2.0 Micro-B (standard Smartphone charging cable)
  • Slim enough for mobile devices, but robust enough for laptops and tablets
  • Reversible plug orientation for ease of connection
  • Scalable power charging with connectors able to supply up to 5 A and cables supporting 3 A, for up to 100 watts of power (see the quick math after this list)
  • Designed for future USB performance requirements
  • Certified for USB 3.1 data rates (10 Gbps)
  • Receptacle opening: ~8.4 mm x ~2.6 mm
  • Durability of 10,000 connect-disconnect cycles
  • Improved EMI and RFI mitigation features
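
The 100 W figure in the list above follows from the connector's 5 A rating combined with the higher voltage levels defined by USB Power Delivery; the 20 V level used here is a USB-PD assumption, not something stated in the specification bullets:

    # USB Type-C power: connectors rated for 5 A, cables for 3 A.
    # At the USB Power Delivery 20 V level this yields:
    voltage_v = 20
    print(f"Connector (5 A): {voltage_v * 5} W")  # 100 W -- the quoted maximum
    print(f"Cable (3 A):     {voltage_v * 3} W")  # 60 W over a standard 3 A cable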

With this new design, existing devices won’t be able to mate using the new cables, so for that reason the USB-IF has defined passive cables which will allow older devices to connect to the new connector, or newer devices to connect to the older connectors for backwards compatibility. With the ubiquity of USB, this is clearly important.

There will be a lot of use cases for the new connector, which should only help cement USB as an ongoing standard. 10 Gbps transfer rates should help ensure that transfers are not bottlenecked by USB, and with the higher current draw now specified for the connectors, USB may replace the charging ports on many laptops as well as the tablets that already use it. The feature that will be most helpful to all users though is the reversible plug, which will finally do away with the somewhat annoying connection process we have to deal with today.

As this is a standard that has only just been finalized, it will be some time before we see it in production devices, but with the universal nature of USB, you can expect it to become very prevalent in upcoming technology.

 

Categories: Tech

Intel Disables TSX Instructions: Erratum Found in Haswell, Haswell-E/EP, Broadwell-Y

Tue, 2014-08-12 17:20

One of the main features Intel was promoting at the launch of Haswell was TSX – Transactional Synchronization eXtensions. In our analysis, Johan explains that TSX enables the CPU to speculatively process a series of traditionally locked instructions on a dataset in a multithreaded environment without taking the locks, with the hardware detecting when one thread's writes conflict with another's shared data. If the series of instructions completes without such a violation, the code passes through at a quicker rate – if an invalid overwrite happens, the transaction is aborted and the code takes the locked route instead. All a developer has to do is link in a TSX library and mark the start and end of the relevant code.
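
As a conceptual sketch only (TSX itself is exercised through hardware instructions via C intrinsics or compiler support, not Python), the developer-facing pattern described above looks roughly like this: try the speculative path first, and fall back to the traditional lock if the transaction aborts. The function names here are invented for illustration.

    import threading

    fallback_lock = threading.Lock()

    def run_critical_section(work, try_transactionally):
        # try_transactionally(work) stands in for the hardware path: it returns True
        # if the speculative execution committed, or False if it aborted (e.g. because
        # another core wrote to the same shared data mid-transaction).
        if try_transactionally(work):
            return  # fast path: the critical section ran without ever taking the lock
        with fallback_lock:
            work()  # slow path: re-run the critical section under the traditional lock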

News coming from Intel’s briefings in Portland last week boils down to an erratum found with the TSX instructions. Tech Report and David Kanter of Real World Technologies are stating that a software developer outside of Intel discovered the erratum through testing, and Intel has subsequently confirmed its existence. While errata are not new (Intel’s E3-1200 v3 Xeon CPUs already have 140 of them), what is interesting is Intel’s response: to push through new microcode that disables TSX entirely. Normally a microcode update would suggest a workaround, but it would seem that this is a fundamental silicon issue that cannot be designed around or intercepted at the OS or firmware/BIOS level.

Intel has had numerous issues similar to this in the past, such as the FDIV bug, the f00f bug and more recently, the P67 B2 SATA issues. In each case, the bug was resolved by a new silicon stepping, with certain issues (like FDIV) requiring a recall, similar to recent issues in the car industry. This time there are no recalls, the feature just gets disabled via a microcode update.

The main focus of TSX is server applications rather than consumer systems. It was introduced primarily to aid database management and other tools more akin to a server environment, which is reflected in the fact that enthusiast-level consumer CPUs have it disabled (except Devil’s Canyon). Now it will be disabled for everyone, including the workstation and server platforms. Intel indicates that programmers working on TSX-enabled code can still develop in the environment, as the company is committed to the technology in the long run.

Overall, this issue affects all of the Haswell processors currently on the market, the upcoming Haswell-E processors, and the early Broadwell-Y processors under the Core M branding, which are currently in production. The issue was found too late in the day for a silicon fix to be introduced on these platforms, although we might imagine that the next stepping all around will have a suitable fix. Intel states that its internal designs have already addressed the issue.

Intel is recommending that Xeon users that require TSX enabled code to improve performance should wait until the release of Haswell-EX. This tells us two things about the state of Haswell: for most of the upcoming LGA2011-3 Haswell CPUs, the launch stepping might be the last, and the Haswell-EX CPUs are still being worked on. That being said, if the Haswell-E/EP stepping at launch is not the last one, Intel might not promote the fact – having the fix for TSX could be a selling point for Broadwell-E/EP down the line.

For those that absolutely need TSX, it is being said that TSX can be re-enabled through the BIOS/firmware menu should the motherboard manufacturer decide to expose it to the user. Reading through Intel’s official errata document, we can confirm this.

We are currently asking Intel what the required set of circumstances are to recreate the issue, but the erratum states ‘a complex set of internal timing conditions and system events … may result in unpredictable system behaviour’. There is no word if this means an unrecoverable system state or memory issue, but any issue would not be in the interests of the buyers of Intel’s CPUs who might need it: banks, server farms, governments and scientific institutions.

At the current time there is no road map for when the fix will be in place, and no public date for the Haswell-EX CPU launch.  It might not make sense for Intel to re-release the desktop Haswell-E/EP CPUs, and in order to distinguish them it might be better to give them all new CPU names.  However the issue should certainly be fixed with Haswell-EX and desktop Broadwell onwards, given that Intel confirms they have addressed the issue internally.

Source: Twitter, Tech Report

 

Categories: Tech

NVIDIA Launches Next GeForce Game Bundle - Borderlands: The Pre-Sequel

Tue, 2014-08-12 09:00

After letting their previous Watch Dogs bundle run its course over the past couple of months, NVIDIA sends word this afternoon that they will be launching a new game bundle for the late-summer/early-fall period.

Launching today, NVIDIA and their partners will be bundling Gearbox and 2K Australia’s forthcoming FPS Borderlands: The Pre-Sequel with select video cards. This latest bundle is for the GTX 770 and higher, so buyers accustomed to seeing NVIDIA’s bundles will want to take note that this bundle is a bit narrower than usual since it doesn’t cover the GTX 760.

As for the bundled game itself, Borderlands: The Pre-Sequel is the not-quite-a-sequel to Gearbox’s well received 2012 title Borderlands 2. As was the case with Borderlands 2 before it, this latest Borderlands game will be receiving PhysX enhancements courtesy of NVIDIA, leveraging the PhysX particle, cloth, and fluid simulation libraries for improved effects.

NVIDIA Current Game Bundles
  GeForce GTX 770/780/780Ti/Titan Black: Borderlands: The Pre-Sequel
  GeForce GTX 750/750Ti/760: None

Meanwhile on a lighter note, it brings a chuckle to see that NVIDIA is bundling what will most likely be a Direct3D 9 game with their most advanced video cards. This, if nothing else, is a testament to the longevity of the API, which has long outlasted the hardware it originally ran on.

Finally, as always, these bundles are being distributed in voucher form, with retailers and etailers providing vouchers with qualifying purchases. Buyers will want to double check whether their purchase includes a voucher for the above deal. Checking NVIDIA’s terms and conditions, the codes from this bundle are good through October 31st, so it looks like this bundle will run for around two months.

Categories: Tech

NVIDIA Refreshes Quadro Lineup, Launches 5 New Quadro Cards

Tue, 2014-08-12 07:45

Continuing today’s spate of professional graphics announcements, alongside AMD’s refresh of their FirePro lineup, NVIDIA is announcing that they are undertaking their own refresh of their Quadro lineup. Being announced today and shipping in September are 5 new Quadro cards that come just short of a top-to-bottom refresh of the Quadro lineup.

With the exception of NVIDIA’s much more recently introduced Quadro K6000 – which will continue its reign as NVIDIA’s most powerful professional GPU – NVIDIA’s Quadro refresh comes as the bulk of the current Quadro K5000 family approaches two years of age. At this point NVIDIA is looking to offer an across-the-board boost to their Quadro lineup, increasing performance and memory capacity at every tier. As a result this refresh will involve replacing NVIDIA’s Quadro cards with newer models based on larger and more powerful Kepler and Maxwell GPUs, released as the Quadro Kx200 series.  All told, NVIDIA is shooting for an average performance improvement of 40%, on top of any benefits from the larger memory amounts.

NVIDIA Quadro Refresh Specification Comparison (K5200 | K4200 | K2200 | K620 | K420)
  CUDA Cores: 2304 | 1344 | 640 | 384 | 192
  Core Clock: 650MHz | 780MHz | 1GHz | 1GHz | 780MHz
  Memory Clock: 6GHz GDDR5 | 5.4GHz GDDR5 | 5GHz GDDR5 | 1.8GHz DDR3 | 1.8GHz DDR3
  Memory Bus Width: 256-bit | 256-bit | 128-bit | 128-bit | 128-bit
  VRAM: 8GB | 4GB | 4GB | 2GB | 1GB
  Double Precision: ? | 1/24 | 1/32 | 1/32 | 1/24
  TDP: 150W | 105W | 68W | 45W | 41W
  GPU: GK110 | GK104 | GM107 | GM107? | GK107?
  Architecture: Kepler | Kepler | Maxwell | Maxwell | Kepler
  Displays Supported (Outputs): 4 (4) | 4 (3) | 4 (3) | 4 (2) | 4 (2)

We’ll start things off with the Quadro K5200, NVIDIA’s new second-tier Quadro card. Based on a cut down version of NVIDIA’s GK110 GPU, the K5200 is a very significant upgrade to the K5000 thanks to the high performance and unique features found in GK110. The combination of which elevates the K5200 much closer to the K6000 than the K5000 it replaces.

The K5200 ships with 12 SMXes (2304 CUDA cores) enabled and utilizes a 256-bit memory bus, making this the first NVIDIA GK110 product we’ve seen ship without the full 384-bit memory bus. NVIDIA has put the GPU clockspeed at 650MHz while the memory clock stands at 6GHz. Meanwhile the card has the second largest memory capacity of the Quadro family, doubling K5000’s 4GB of VRAM for a total of 8GB.

Compared to the K5000, K5200 offers an increase in shader/compute throughput of 36%, and a smaller 11% increase in memory bandwidth. More significant however are GK110’s general enhancements, which elevate K5200 beyond K5000. Whereas K5000 and its GK104 GPU made for a strong graphics card, it was a relatively weak compute card, a weakness that GK110 resolved. As a result K5200 should be similar to K6000 in that it’s a well-balanced fit for mixed graphics/compute workloads, and the ECC memory support means that it offers an additional degree of reliability not found on the K5000.

As is usually the case in rolling out a refresh wave of cards based on existing GPUs, because performance has gone up power consumption has as well. NVIDIA has clamped K5200 at 150W (important for workstation compatibility), which is much lower than the full-fledged K6000 but is 28W more than the K5000. None the less the performance gains should easily outstrip the power consumption increase.

Meanwhile display connectivity remains unchanged from the K5000 and K6000. NVIDIA’s standard Quadro configuration is a DL-DVI-I port, a DL-DVI-D port, and a pair of full size DisplayPorts, with the card able to drive up to 4 displays in total through a combination of those ports and MST over DisplayPort.

NVIDIA’s second new Quadro card is the K4200. Replacing the GK106 based K4000, the K4200 sees NVIDIA’s venerable GK104 GPU find a new home as NVIDIA’s third-tier Quadro card. Unlike K5200, K4200’s GPU shift doesn’t come with any kind of dramatic change in functionality, so while it will be an all-around more powerful card than the previous K4000, it’s still going to be primarily geared towards graphics like the K4000 and K5000 before it.

For the K4200 NVIDIA is using a cut down version of GK104 to reach their performance and power targets. Comprised of 7 active SMXes (1344 CUDA cores), the K4200 is paired with 4GB of VRAM. Clockspeeds stand at 780MHz for the GPU and 5.4GHz for the VRAM.

On a relative basis the K4200 will see some of the greatest performance gains of this wave of refreshes. Its 2.1 TFLOPS of compute/shader performance blasts past K4000 by 75%, and memory bandwidth has been increased by 29%. However the 4GB of VRAM makes for a smaller increase in VRAM than the doubling most other Quadro cards are seeing. Otherwise power consumption is once again up slightly, rising from 80W to 105W in exchange for the more powerful GK104 GPU.
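
The TFLOPS figures quoted here line up with the usual single-precision estimate of 2 FLOPs per CUDA core per clock; as a rough cross-check against the specification table above (matching the 2.1 TFLOPS quoted for the K4200 and the 1.3 TFLOPS quoted later for the K2200):

    # Single-precision throughput estimate: 2 FLOPs per CUDA core per clock.
    def sp_tflops(cuda_cores, core_clock_ghz):
        return 2 * cuda_cores * core_clock_ghz / 1000

    print(f"K4200: {sp_tflops(1344, 0.78):.1f} TFLOPS")  # ~2.1 TFLOPS
    print(f"K2200: {sp_tflops(640, 1.00):.1f} TFLOPS")   # ~1.3 TFLOPS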

Finally, as was the case with the K5200, display connectivity remains unchanged. Since the K4200 is a single slot card like the K4000 before it, this means NVIDIA uses a single DL-DVI-I port along with a pair of full size DisplayPorts. Like other Kepler products the card can drive up to 4 displays, though doing this will require a DisplayPort MST hub to get enough outputs. On that note, users looking to pair this card with multiple monitors will be pleased to find that Quadro Sync is supported on the K4200 for the first time; previously it was limited to the K5000 and higher.

In NVIDIA’s refreshed Quadro lineup, the K4200 will primarily serve as the company’s highest-end single-slot offering. As with other GK10x based GPUs compute performance is not its strongest suit, while for graphics workloads such as CAD and modeling it should offer a nice balance of performance and price.

Moving on, NVIDIA’s third Quadro refresh card is the K2200. This replaces the GK107 based K2000 and marks the first Quadro product to utilize one of NVIDIA’s newest generation Maxwell GPUs, tapping NVIDIA’s GM107 GPU. The use of Maxwell on a Quadro K part makes for an amusing juxtaposition, though the architectural similarities between Maxwell and Kepler mean that there isn’t a meaningful feature difference despite the generation gap.

As was the case with NVIDIA’s consumer desktop GM107 cards, NVIDIA is aiming to produce an especially potent sub-75W card for K2200. Here NVIDIA uses a fully enabled GM107 GPU – all 5 SMMs (640 CUDA cores) are enabled – and it’s paired with 4GB of VRAM on a 128-bit bus. Meanwhile based on NVIDIA’s performance figures the GPU clockspeed should be just north of 1GHz while the memory clock stands at 5GHz.

Since the K2200 is replacing a GK107 based card, the performance gains compared to the outgoing K2000 should be significant. On the consumer desktop side we’ve seen GM107 products come close to doubling GK107 parts, and we’re expecting much the same here. K2200’s 1.3 TFLOPS of single precision compute/shader performance is 78% higher than K2000’s, which means that K2200 should handily outperform its predecessor. Otherwise the 4GB of VRAM is a full doubling over the K2000’s smaller VRAM pool, greatly increasing the size of the workloads K2200 can handle.

Meanwhile display connectivity is identical to the new K4200 and the outgoing K2000. The K2200 can drive up to 4 displays by utilizing a mix of its DL-DVI port, two DisplayPorts, and a DisplayPort MST hub.

In NVIDIA’s new Quadro lineup the K2200 will serve as their most powerful sub-75W card. As we’ve seen in other NVIDIA Maxwell products, this is an area the underlying GM107 excels at.

NVIDIA’s fourth Quadro card is the K620. This is another Maxwell card, and while NVIDIA doesn’t specify the GPU we believe it to be based on GM107 (and not GM108) due to the presence of a 128-bit memory bus. K620 replaces the GK108 based K600, and should offer substantial performance gains similar to what is happening with the K2200.

K620’s GM107 GPU features 3 SMMs (384 CUDA cores) enabled, and it is paired with 2GB of DDR3 operating on a 128-bit memory bus. Like the K2200, the GPU clockspeed appears to be a bit over 1GHz, while the memory clockspeed stands at 1.8GHz.

Compared to the K600 overall performance should be significantly improved. Though it’s worth pointing out that since memory bandwidth is identical to NVIDIA’s previous generation card, in memory bandwidth bound scenarios the K620 may not pull ahead by too much. None the less the memory pool has been doubled from 1GB to 2GB, so in memory capacity constrained situations the K620 should fare much better. Power consumption is just slightly higher this time, at 45W versus K600’s 41W.

As this is a 3 digit Quadro product, NVIDIA considers this an entry level card and it is configured accordingly. A single DL-DVI port and a single full size DisplayPort are the K620’s output options, with an MST hub being required to attach additional monitors to make full use of its ability to drive 4 displays. By going with this configuration however NVIDIA is able to offer the K620 in a low profile configuration, making it suitable for smaller workstations that can’t accept full profile cards.

Finally, NVIDIA’s last new Quadro card is the K420. Dropping back to a Kepler GPU (likely GK107), it replaces the Quadro 410. From a performance perspective this card won’t see much of a change – the number of CUDA cores is constant at 192 – but memory bandwidth has been doubled alongside the total VRAM pool, which is now 1GB.

Like K620, K420 can drive a total of 4 displays, while the physical display connectors are composed of a single DL-DVI port and a single full size DisplayPort. This low profile card draws 41W, the same as the outgoing 410.

With all but one of these cards receiving a doubled VRAM pool and significantly improved performance, NVIDIA expects that these cards should be well suited to accommodating the larger datasets that newer applications use, especially in the increasingly important area of 4K video. Coupled with NVIDIA’s existing investment in software – both ISVs and their own cloud technology ecosystem – NVIDIA expects to remain ahead of the curve on functionality and reliability.

Wrapping things up, NVIDIA tells us that the Quadro refresh cards will be shipping in September. In the meantime we’ll be reviewing some of these cards later this month, so stay tuned.

Categories: Tech

AMD Completes FirePro Refresh, Adds 4 New FirePro Cards

Tue, 2014-08-12 05:00

Kicking off a busy day for professional graphics, AMD is first up, announcing that they will be launching a quartet of new FirePro cards. As part of the company’s gradual FirePro refresh that began with the W9100 in April and was followed by the W8100 in June, today AMD is gearing up to refresh the rest of their FirePro lineup with new products for the mid-range and low-end segments of the pro graphics market.

Being announced today are the FirePro W7100, W5100, W4100, and W2100. These parts are based on a range of AMD GPUs – including Tonga, a new GPU that has yet to show up in any other AMD products – and are designed for the sub-$2500 market segment, below where the current W8100 tops out. With a handful of exceptions, the bulk of these upgrades are straightforward, focused on making AMD’s entire FirePro lineup 4K capable, improving performance across the board, and doubling the amount of VRAM compared to the previous generation to allow for larger data sets to be used.

AMD FirePro W Series Specification Comparison (W7100 | W5100 | W4100 | W2100)
  Stream Processors: 1792 | 768 | 512 | 320
  ROPs: ? | 16 | 16 | 8
  Core Clock: ? | 930MHz | 630MHz | 630MHz
  Memory Clock: 5GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 | 1.8GHz DDR3
  Memory Bus Width: 256-bit | 128-bit | 128-bit | 128-bit
  VRAM: 8GB | 4GB | 4GB | 2GB
  Double Precision: ? | 1/16 | 1/16 | 1/16
  TDP: 150W | 75W | 50W | 26W
  GPU: Tonga | Bonaire | Cape Verde | Oland
  Architecture: GCN 1.1? | GCN 1.1 | GCN 1.0 | GCN 1.0
  Display Outputs: 4 | 4 | 4 | 2

Starting at the top, from a technical perspective the W7100 is the most interesting of the new FirePro cards. Whereas the previous-generation W7000 was based on a second-tier version of AMD’s venerable Tahiti GPU, the W7100 gets a brand new GPU entirely, one that we haven’t seen before. Named Tonga, this new GPU is a smaller, lower performance part that slots in under the Hawaii GPU used in the W9100 and W8100. However while AMD is announcing the W7100 today they are not disclosing any additional information on Tonga, so while we can draw some basic conclusions from W7100’s specifications a complete breakdown of this new GPU will have to wait for another day.

From a specification point of view AMD is not disclosing the GPU clockspeed or offering any floating point throughput performance numbers, but we do know that W7100 will feature 1792 stream processors. Coupled with that is 8GB of GDDR5 clocked at 5GHz sitting on a 256-bit memory bus.

The W7100 is designed to be a significant step up compared to the outgoing W7000. Along with doubling the W7000’s memory from 4GB to 8GB, the Tonga GPU in the W7100 inherits Hawaii’s wider geometry front-end, allowing the W7100 to process 4 triangles/clock versus the W7000’s 2 tris/clock. Overall compute/rendering performance should also be greatly increased due to the much larger number of stream processors (1792 vs. 1280), but without clockspeeds we can’t say for sure.

Like the W7000 before it, the W7100 is equipped with 4 full size DisplayPorts, allowing for a relatively large number of monitors to be used with the card. And because it gets AMD's newest GCN display controller, W7100 is particularly well suited for 4K displays, being able to drive 3 4K@60Hz displays or 4 4K displays if some operate at 30Hz.

In AMD’s product stack the W7100 is designed to be a budget alternative to the W9100 and W8100, offering reduced performance at a much lower cost. AMD is primarily targeting the engineering and media markets with the W7100, as its compute performance and 8GB of VRAM should be enough for most engineering workloads, while its VRAM capacity and ability to drive 4 4K displays also make it a good fit for 4K video manipulation.

The second card being introduced today is the W5100. This part is based on AMD’s Bonaire GPU, a GCN 1.1 GPU that has been in AMD’s portfolio for over a year now but has not made it into a FirePro part until now. W5100 replaces the outgoing W5000, which was a heavily cut-down Pitcairn part.

In terms of specifications, the W5100 utilizes a slightly cut-down version of Bonaire with 768 SPs active. It is clocked at approximately 910MHz, which puts its compute performance at 1.4 TFLOPS for single precision. Feeding W5100 is 4GB of VRAM attached to a 128-bit memory bus and clocked at 6GHz.

Compared to the outgoing W5000 the W5100 gains the usual VRAM capacity upgrade that the rest of the Wx100 cards have seen, while the other specifications are a mixed bag on paper. Compute performance is only slightly improved – from 1.28 TFLOPS to 1.4 TFLOPS – and memory bandwidth has actually regressed slightly from the W5000’s 102GB/sec. Consequently the biggest upgrade will be found in memory capacity bound scenarios; otherwise the W5100’s greatest improvements come from its GCN 1.1 lineage.

Speaking of which, with 4 full size DisplayPorts the W5100 has the same 4K display driving capabilities as the W7100. However with lower performance and half the VRAM, it’s decidedly a mid-range card and AMD treats it as such. This means it’s targeted towards lower power usage scenarios where the high compute performance and 8GB+ VRAM capacities of the W7100 and higher aren’t needed.

The third of today’s new FirePro cards is the W4100. Based on AMD’s older Cape Verde GPU, this is not the first time that Cape Verde has appeared in a FirePro product. But it is the first time that it has appeared in a workstation part, its previous appearance being the display wall niche W600. At the same time the W4100 doesn’t have a true analogue in AMD’s previous generation FirePro stack, which stopped at the W5000, so the W4100 marks a newer, lower priced and lower performance tier for FirePro.

With just 512 SPs active the W4100 tops out at only 50W power consumption, reflecting the fact that it is targeted towards lower power use cases. AMD has paired the card with 2GB of VRAM, and based on Cape Verde’s capabilities we expect that this is on a 128-bit bus. AMD has not provided any more technical details on the card, but it goes without saying that this is not a card meant to be a performance powerhouse.

AMD’s target market for this card is lightweight 2D and 3D workloads such as finance and entry level CAD. The 4 mini-DisplayPorts allow the card to directly drive up to 4 displays, though because this is a GCN 1.0 GPU it doesn’t have the same display flexibility as the W5100.

The final FirePro card being introduced today is the FirePro W2100, AMD’s new entry-level FirePro card. Like the W4100 it has no true analogue in AMD’s older product stack, but functionally it replaces the old Turks based V4900, a card which AMD kept around even after the launch of GCN to serve as their entry level FirePro product.

W2100 is based on AMD’s Oland GPU, which marks the first time that this existing AMD GPU has appeared in a FirePro product. W2100 uses a cut down version of Oland with 320 SPs active and attached to 2GB of memory on a 128-bit bus. Oland is a very limited functionality GPU, and while it’s more than suitable for basic imaging it should be noted that it doesn’t have a video decoder.

At a TDP of just 26W, the W2100 is AMD’s lowest power, lowest performance card. Functionally it’s a cheaper alternative to the W4100 for users who don’t need to drive 4 displays, with W2100 featuring just 2 DisplayPorts. The targeted market is otherwise similar, with a focus on lightweight 2D and 3D workloads over 1-2 monitors.

Meanwhile along with today’s product announcements AMD is also announcing that they will be bringing their low-level Mantle API over to the FirePro family. The nature of the pro graphics market means that it will likely be some time before we see Mantle put in meaningful use here since the API is still under development, but once AMD gets the API locked down they believe that Mantle can offer many of the same benefits for professional graphics workloads as it can gaming. The greatly reduced draw call overhead should be a boon here for many 3D workloads, and Mantle’s ability to more easily transition between compute and graphics workloads would map well towards engineering tasks that want to do both at the same time.

Wrapping things up, AMD has not revealed final pricing for these cards at this time, though we expect pricing to follow the previous generation W series cards. Meanwhile the W2100, W4100, and W5100 will be available next month. Otherwise no doubt owing to its use of the new Tonga GPU, W7100 will be farther out, with availability expected in Q4 of this year.

Categories: Tech

Short Bytes: Intel's Core M and Broadwell-Y SoC

Tue, 2014-08-12 03:00

Intel has slowly been feeding us information about their upcoming Broadwell processors for a couple years now, with the first real details kicking off almost a year ago at IDF 2013. Since then, the only other noteworthy piece of information came back in March when it was revealed that socketed Broadwell CPUs with unlocked multipliers will be available with Iris Pro Graphics. Today, Intel is ready to begin providing additional information, and it starts with the Broadwell-Y processor, which Intel is now referring to as an SoC (System On Chip). We have an in-depth article on the subject, but for Short Bytes we want to focus on the bottom line: what does this mean for end users?

The big news for Broadwell is that it will be the first 14nm processor available to the public, following on the success of Intel's 22nm process technology. Shrinking the process technology from 22nm to 14nm can mean a lot of things, but the primary benefit this time appears to be smaller chip sizes and lower power requirements. The first parts will belong to the Core M family of products, a new line catering specifically to low power, high mobility form factors (typically tablets and hybrid devices). With Core M, Intel has their sights set on the fanless computing market with sub-9mm thick designs, and they have focused on reducing power requirements in order to meet the needs of this market. This brings us to Broadwell-Y, the lowest power version of Broadwell and the successor to Haswell-Y and the codename behind the new Core M.

The reality of Intel's Y-series processors is that they haven't been used all that much to date. Only a handful of devices used Haswell-Y (and even fewer used Ivy Bridge-Y), mostly consisting of 2-in-1 devices that can function as both a laptop and a tablet. For example, the base model Surface Pro 3 uses a Core i3-4020Y, and Dell's XPS 11 and certain Venue Pro 11 tablets also use Y-series parts; Acer, HP, Sony, and Toshiba also have some detachable hybrid devices with the extreme low power processors. Unfortunately, pricing on the Y-series is generally much higher than that of competing solutions (i.e. ARM-based SoCs), and there have been criticisms of Intel's higher power requirements and lower overall battery life as well.

Core M thus serves marketing needs as well as technical requirements: it replaces the Core i3/i5/i7 Y-series parts and gives Intel a brand they can market directly at premium tablets/hybrids. And in another move likely driven by marketing, Core M will be the launch part for Intel's new 14nm process technology. Transitions between process technology usually come every 2-3 years, so the 14nm change is a big deal and launching with their extreme low power part makes a statement. The key message of Broadwell is clear: getting into lower power devices and improving battery life is a critical target. To that end, Broadwell-Y probably won't be going into any smartphones, but getting into more premium tablets and delivering better performance with at least competitive battery life relative to other SoCs is a primary goal.

Compared to the Haswell-Y parts, Intel has made some significant advances in performance as well as power use, which we've covered elsewhere. The cumulative effect of the improvements Intel is bringing is that Broadwell-Y has a greater than 2X reduction in TDP (Thermal Design Power) compared to Haswell-Y. It also has a 50% smaller and 30% thinner package and uses 60% lower idle power. Intel points out that Broadwell-Y is set to deliver more than a 2X improvement in performance per Watt over Haswell-Y, though that's a bit more of a nebulous statement (see below). Many of the improvements come thanks to Intel's increased focus on driving down power requirements. Where previous Intel processors targeted laptops and desktops as the primary use case and then refined and adjusted the designs to get into lower power envelopes, with Broadwell Intel is putting the Y-series requirements center stage. The term for this is "co-optimization" of the design process, and these co-optimizations for Broadwell-Y are what allows Intel to talk about "2x improvements". But you need to remember what is being compared: Haswell-Y and Broadwell-Y.

Broadwell parts in general will certainly be faster/better than the current Haswell parts – Intel doesn't typically "go backwards" on processor updates – but you shouldn't expect twice the performance at the same power. Instead, Broadwell-Y should offer better performance than Haswell-Y using much less power, but if you reduce total power use by 2X you could increase performance by just 5% and still claim a doubling of performance per Watt. And that's basically what Intel is doing here. Intel estimates the core Broadwell architecture to be around 5% faster than Haswell at the same clocks; specifically, IPC (Instructions Per Cycle) is up ~5% on average. Similarly, changes and improvements to the graphics portion of the processor should deliver more performance at a lower power draw. Add in slightly higher clock speeds and you get a faster part than last generation that uses less power. These are all good improvements, but ultimately it comes down to the final user experience and the cost.
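To make the arithmetic behind that performance-per-Watt claim concrete, here is a minimal sketch using normalized, illustrative numbers; the specific figures are assumptions for illustration, not Intel's published data:

```python
# Illustrative, normalized figures (not Intel's published data) showing how a
# ">2x performance per Watt" claim can hold with only a modest speed increase.
haswell_y_perf = 1.00     # assumed baseline performance (normalized)
haswell_y_power = 1.00    # assumed baseline power (normalized)

broadwell_y_perf = 1.05   # ~5% higher performance at similar clocks
broadwell_y_power = 0.50  # ~2x reduction in power

gain = (broadwell_y_perf / broadwell_y_power) / (haswell_y_perf / haswell_y_power)
print(f"Performance per Watt improvement: {gain:.2f}x")  # prints 2.10x
```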

Everywhere you go, people are increasingly using tablets and smartphones for many of their daily computing needs, and being left out of that market is the road to irrelevance. Core M (Broadwell-Y) is Intel's latest push to make inroads into these extremely low power markets, and on paper it looks like Intel has a competitive part. It's now up to the device vendors to deliver compelling products, as fundamentally the choice of processor is only one element of an electronics device. Being the first company to deliver 14nm parts certainly gives Intel an edge over the competition, but high quality Android and iOS tablets sell for $300-$500, so there's not a lot of room for a $100+ processor – which is why Intel has their Atom processors (due for the 14nm treatment with Braswell, if you're wondering).

Core M is going after the premium tablet/hybrid market, with benefits including full Windows 8 support, but will it be enough? If you're interested in such a device and you don't already own the Haswell-Y version, Core M products should deliver slimmer and lighter devices with improved battery life and better performance. Don't expect a 10" Core M tablet to deliver the same battery life as a 7" Android/iOS device (at least, not without a larger battery), since the display and other components contribute a lot to power use and Windows 8 has traditionally been far less battery friendly than Android; still, Core M tablets may finally match or perhaps even exceed the battery life of similarly sized iOS/Android tablets. The first retail products with Core M should be shipping before the end of the year, so we'll find out later this year and early next how well Broadwell-Y is able to meet its lofty goals. And we'll also find out how much the Core M products cost.

Categories: Tech

Browser Face-Off: Battery Life Explored 2014

Tue, 2014-08-12 03:00

It has been five years since we did a benchmark of the various web browsers and their effect on battery life, and a lot has changed. Our testing then included Opera 9 & 10, Chrome 2, Firefox 3.5.2, Safari 4, and IE8. Just looking at those version numbers is nostalgic. Not only have the browsers gone through many revisions since then, but computer hardware and the Windows operating system are very different. While there has been a lot of talk, there hasn't been a lot of data comparing browser battery usage. Today we're going to put the latest browsers to the test and deliver some concrete numbers.

Categories: Tech

Lenovo Announces New ThinkStation P Series Desktop Workstations

Mon, 2014-08-11 21:01

As much as I would like to be at SIGGRAPH, one of the reasons to visit would be to see Lenovo’s latest launch of their Haswell-E desktop workstation series. One of the keys to the workstation market in recent quarters has been developing professional-grade systems that can encompass all the critical industries that require horsepower under the desk: engineering, media, energy, medical, finance and others. These systems have to be verified against industry standards to even be considered by these markets, and the shift to Haswell-E and DDR4 will be an all-important factor for those that rely on speed and performance. One of the issues these system manufacturers face is defining themselves in the market – Lenovo is already a big player in many other PC-related industries, so listening to customers is all-important when trying to develop market share.

The new ThinkStation P series will be based around the LGA2011-3 socket, using Xeon processors and high capacity DDR4 memory. Given the nature of the platform, we can assume that the DDR4 will be ECC by default. For GPU compute, Quadro cards are being used, with the top-line P900 model supporting dual Xeons alongside up to three Quadro K6000 graphics cards and up to 14 storage devices. All the P series models will be certified to work with key ISV applications, and they are quoted via Intel as supporting Thunderbolt 2, which should make for interesting reading regarding the PCIe lane or PLX chip distribution, depending on whether it is onboard or via an optional add-in card.

In terms of that all-important product differentiation, the P series will use ‘tri-channel cooling’ and air baffles to direct cool air immediately to the component in question and then out of the chassis without touching other components. This is essentially a more integrated take on the compartmentalized chassis we see in the consumer market, except that when a single company builds the whole system, it can control the experience to a much tighter level.

The P series also features a trio of ‘FLEX’ themed additions. The FLEX Bay is designed to support an optical drive or the FLEX module, which can hold an ultraslim ODD, a media reader or a FireWire hub. The FLEX Tray on the P900 allows each of the seven HDD trays to support either one 3.5” drive or two 2.5” drives, hence the fourteen-drive support mentioned earlier. The FLEX Connector is a mezzanine card that lets users add storage-related cards without sacrificing rear PCIe slots, keeping the extra card away from the other devices, presumably at right angles. Lenovo also wants to promote the tool-less power supply removal on the P700 and P500, which does not require adjusting any cables and suggests that the PSU connects to a daughter PCB with all the connectors pre-attached, allowing the PSU to be replaced easily.

Lenovo is also adorning their components with QR codes; if a user has an issue, scanning the code directs them to the specific webpage dealing with that component. The chassis will have integrated handles for easier movement or rack mounting. Lenovo is also promoting its diagnostic port, which allows the user to plug in an Android smartphone or tablet via USB for system analysis using the ThinkStation app.

Until Haswell-E and its motherboard chipsets are officially announced, Lenovo unfortunately cannot say more about the series’ specifications beyond memory capacities, DRAM support and power supply ratings; however, they do seem confident in their ability to provide support and a quality experience to their ThinkStation users. We have been offered a review sample later in the year, when we can test some of these features.

Source: Lenovo

Gallery: Lenovo Announces New ThinkStation P Series Desktop Workstations

Addition: Ryan met with Lenovo last week, where we were allowed to take some images of the chassis and design:

Gallery: Lenovo P-Series Event Photos

Categories: Tech

HTC Brings the Desire 816 to the US

Mon, 2014-08-11 13:55

HTC is expanding their lineup of devices in the United States with the official launch of the Desire 816 on Virgin Mobile USA. We talked about the Desire 816 when it launched earlier this year at MWC, and much like the recently launched Desire 610 on AT&T, it has taken quite some time for the Desire 816 to make its way to the US. In many ways the Desire 816 can be viewed as a big brother to the Desire 610, with improved specs across the board. It's also a device that helps HTC combat inexpensive phablets like the Huawei Ascend Mate2. The full specifications of the Desire 816 are laid out below.

HTC Desire 816
SoC: Qualcomm Snapdragon 400 (MSM8928), 4 x Cortex A7 at 1.6GHz, Adreno 305
Memory and Storage: 8GB NAND + MicroSDXC, 1.5GB LPDDR2
Display: 5.5” 1280x720 Super LCD2 at 267ppi
Cellular Connectivity: 2G / 3G (EVDO) / 4G LTE (Qualcomm MDM9x25, UE Category 4 LTE)
Dimensions: 156.6 x 78.7 x 7.9 mm, 165g
Camera: 13MP f/2.2 rear facing, 5MP f/2.8 front facing
Battery: 2600 mAh (9.88 Whr)
Other Connectivity: 802.11 b/g/n + BT 4.0, USB 2.0, GPS/GNSS, NFC
SIM Size: Nano-SIM
Operating System: Android 4.4.2 KitKat with HTC Sense 5.5

Looking at the specs there's not a whole lot to talk about. The Snapdragon 400 platform has become ubiquitous among devices in this price bracket, and the 720p display is also fairly standard. It's interesting to compare the Desire 816 to the Huawei Ascend Mate2. While they don't compete on the same carrier in the US, they share similar specifications right down to the camera resolutions. The big difference comes with the larger display on the Mate2 and the correspondingly larger battery due to the increased physical size of the device. Overall, HTC looks to have put together a very decent device for its price bracket. Features like multiple color choices and HTC's front-facing BoomSound speakers will also help to differentiate the Desire 816 from other competing devices that share a similar hardware platform.

The Desire 816 will launch on Virgin Mobile USA on August 12 for $299 off contract. This variant will support the Sprint EVDO network that Virgin Mobile utilizes. HTC has also stated that they intend to bring other smartphones in the Desire lineup to the United States later this year. Whether that also means expanded carrier availability for the Desire 610 and 816 is something only time will tell.

Categories: Tech

Intel’s 14nm Technology in Detail

Mon, 2014-08-11 09:45

Much has been made about Intel’s 14nm process over the past year, and admittedly that is as much Intel’s doing as it is the public’s. As one of the last Integrated Device Manufacturers and the leading semiconductor manufacturer in the world, Intel has set and continues to set the pace for the semiconductor industry. This means that Intel’s efforts to push back the laws of physics roughly every 2 years mark an important milestone in the continuing development of semiconductor technology and offer a roadmap of sorts to what other semiconductor manufacturers might expect.

To that end, at a time when ramping up new process nodes is more complex and more expensive than ever, Intel’s 14nm process is especially important. Although concerns over the immediate end of Moore’s Law remain overblown and sensationalistic, there is no denying that continuing the pace of Moore’s Law has only gotten more difficult. And as the company on the forefront of semiconductor fabrication, if anyone is going to see diminishing returns on Moore’s Law first it’s going to be Intel.

Today Intel is looking to put those concerns to rest. Coinciding with the lifting of today’s embargo on Intel’s 14nm technology and a preview of Intel’s upcoming Broadwell architecture based Core M processor, Intel will be holding a presentation dubbed Advancing Moore’s Law in 2014. Intel, for their part, is nothing short of proud of the advancements they have made over the last several years to bring their 14nm process to reality, and with that process now in volume production at their Oregon fab and being replicated to others around the world, Intel is finally ready to share more information about it.

We’ll start off our look at Intel’s 14nm process with its yields. Yields are important for any number of reasons, and in the case of Intel’s 14nm process the yields tell a story of their own.

Intel’s 14nm process has been their most difficult process to develop yet, a fact that Intel is being very straightforward about. Throughout the life of the 14nm process so far, its yields have trailed those of the 22nm process at equivalent points in time, and while yields are now healthy enough for volume production, Intel still has further work to do to catch up with 22nm. In fact, at present Intel’s 22nm process is the company’s highest yielding (lowest defect density) process ever, which goes to show just how big a set of shoes the up-and-coming 14nm process needs to fill to completely match its predecessor.

Concerns over these yields have no doubt played a part in Intel’s decision to go ahead with today’s presentation, for if nothing else they need to showcase their progress to their investors and justify the company’s heavy investment in 14nm and other R&D projects. While 14nm has made it into production in 2014 and the first 14nm products will hit retail by the end of the year, these yield issues have caused 14nm to be late for Intel. Intel’s original plans, which would have seen the bulk of their Broadwell lineup launch in 2014, have been reduced to the single Broadwell-Y SKU this year, with the rest of the Broadwell lineup launching in 2015.

Ultimately, while 14nm is still catching up to 22nm, Intel is increasingly confident that they will be able to finish catching up, forecasting that 14nm will reach parity with 22nm on a time-adjusted basis in the first quarter of 2015, or roughly 6 months from now. Intel is already in the process of replicating their 14nm process to their other fabs, with fabs in Arizona and Ireland expected to come online later this year and in 2015 respectively. These fab ramp-ups will in turn allow Intel to further increase their manufacturing capacity, with Intel projecting that they will have sufficient volume to handle multiple 14nm product ramps in H1’2015.

Moving on to the specifications and capabilities of their 14nm process, Intel has provided minimum feature size data for three critical measurements: transistor fin pitch, transistor gate pitch, and interconnect pitch. From 22nm to 14nm these features have been reduced in size by between 22% and 35%, which is consistent with the (very roughly) 30%-35% reduction in feature size that one would expect from a full node shrink.
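As a back-of-the-envelope check on why a 22%-35% reduction is in line with expectations, here is a short sketch; the scaling rules of thumb are generic industry conventions rather than Intel-specific figures:

```python
import math

# A "full node" traditionally halves the area per feature, which implies the
# linear pitch shrinks by a factor of 1/sqrt(2), i.e. roughly a 30% reduction.
ideal_linear_shrink = 1 - 1 / math.sqrt(2)
print(f"Ideal full-node linear shrink: {ideal_linear_shrink:.0%}")  # ~29%

# Taken at face value, the node names alone would suggest a larger shrink,
# though node names and actual minimum pitches have long since decoupled.
nominal_shrink = 1 - 14 / 22
print(f"Nominal 22nm -> 14nm shrink: {nominal_shrink:.0%}")  # ~36%
```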

Intel is especially proud of their interconnect scaling on the 14nm node, as the 35% reduction in the minimum interconnect pitch is better than normal for a new process node.

Along with the immediate feature size improvements that come with a smaller manufacturing node, Intel has also been iterating on their FinFET technology, which is now in its second generation for the 14nm process. Compared to the 22nm process, the 14nm process’s fins are more tightly packed, thinner, taller, and fewer in number (per transistor).

Each one of these changes in turn improves the performance of the FinFETs in some way. The tighter density goes hand-in-hand with 14nm’s feature size reductions, while the taller, thinner fins allow for increased drive current and increased performance. Meanwhile by reducing the number of fins per transistor, Intel is able to improve on density once again while also reducing the transistor capacitance that results from those fins.

Intel is also reporting that they have been able to maintain their desired pace at improving transistor switching speeds and reducing power leakage. Across the entire performance curve the 14nm process offers a continuum of better switching speeds and/or lower leakage compared to Intel’s 22nm process, which is especially important for Intel’s low power ambitions with the forthcoming Core M processor.

Plotted differently, here we can see how the last several generations of Intel’s process nodes compare across mobile, laptop, and server performance profiles. All 3 profiles are seeing a roughly linear increase in performance and decrease in active power consumption, which indicates that Intel’s 14nm process is behaving as expected and is offering similar gains as past processes. In this case the 14nm process should deliver a roughly 1.6x increase in performance per watt, just as past processes have too.

Furthermore, these base benefits when coupled with Intel’s customized 14nm process for Core M (Broadwell-Y) and Broadwell’s power optimizations have allowed Intel to more than double their performance per watt as compared to Haswell-Y.

Moving on to costs, Intel offers a breakdown on a cost per mm² basis and pairs that with a plot of transistor sizes. By using more advanced double patterning on their 14nm node, Intel was able to achieve better than normal area scaling, as we can see here. The tradeoff is that wafer costs continue to rise from generation to generation, as double patterning requires additional time and ever-finer tools that drive up the cost of production. The end result is that while Intel’s cost per transistor is not decreasing as quickly as the area per transistor, the cost is still decreasing, and significantly so. Even with the additional wafer costs of the 14nm process, on a cost per transistor basis the 14nm process is still slightly ahead of normal for Intel.
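The relationship being described is a simple product of wafer cost and transistor area; a minimal sketch with hypothetical, normalized figures (the specific values are assumptions for illustration, not Intel's actual data) looks like this:

```python
# Hypothetical, normalized figures (not Intel's actual data) illustrating why
# cost per transistor can keep falling even as cost per wafer area rises.
cost_per_mm2 = {"22nm": 1.00, "14nm": 1.25}          # each mm^2 gets pricier
area_per_transistor = {"22nm": 1.00, "14nm": 0.55}   # but transistors shrink faster

cost_per_transistor = {
    node: cost_per_mm2[node] * area_per_transistor[node]
    for node in ("22nm", "14nm")
}
ratio = cost_per_transistor["14nm"] / cost_per_transistor["22nm"]
print(f"14nm cost per transistor vs. 22nm: {ratio:.2f}x")  # ~0.69x, still cheaper
```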

At the same time, the fact that costs per transistor continue to come down at a steady rate may be par for the course, but that Intel has been able to maintain that pace at all is actually a very significant accomplishment. As the cost of wafers and fabbing has risen over the years, there has been concern that transistor costs would plateau, which would leave chip designers able to increase performance only by increasing prices, as opposed to the past 40 years of cheaper transistors allowing prices to hold steady while performance increased. So for Intel this is a major point of pride, especially in light of complaints from NVIDIA and others in recent years that their costs on new nodes aren’t scaling nearly as well as they would like.

Which brings us to the final subject of Intel’s 14nm presentation, the competitive landscape. Between the ill-defined naming of new process nodes across the entire industry and Intel’s continuing lead in semiconductor manufacturing, Intel likes to point out how their manufacturing nodes compare to foundry competitors such as TSMC and the IBM alliance. Citing 3rd party journal articles for comparison, Intel claims that along with their typical lead in rolling out new nodes, as of the 14nm node they are going to have a multiple generation technical advantage. They expect that their 14nm node will offer significantly smaller feature sizes than competing 14nm nodes, allowing them to maintain consistent logic area scaling at a time when their competitors (i.e. TSMC) cannot.

From a technical perspective it's quite obvious why Intel is able to maintain density scaling above the level that TSMC and Common Platform members can deliver. In short, this goes back to the improved interconnect density that was discussed earlier in this article. While Intel is shrinking both transistors and interconnects at 14nm, TSMC and Common Platform members are using the same interconnect technology that they did at 20nm. This means that only areas where transistor density was the gating factor at 20nm will decrease in size at 14/16nm, while areas already gated by 20nm interconnect technology won't be able to get any smaller.

Thus, for what it’s worth, the basic facts do appear to check out, but we would be the first to point out that there is more to semiconductor manufacturing than just logic area scaling. At least until Intel’s competitors start shipping their FinFET products this comparison will remain speculative, and it doesn’t quantify how well those competing process nodes will perform. But then again, the fact that Intel is already on their second FinFET node while their competitors are still ramping up their first is no small feat.

Wrapping things up, while Intel’s bring-up of their 14nm process has not been without problems and delays, at this point Intel appears to be back on track. 14nm is in volume production in time for Broadwell-Y to reach retail before the end of the year, and Intel is far enough along that they can begin replicating the process to additional fabs for production in 2014 and 2015. Meanwhile it will still be a few months before we can test the first 14nm chips, but based on Intel’s data it looks like they have good reason to be optimistic about their process. The feature size and leakage improvements are in line with previous generation process nodes, which should be a great help for Intel in their quest to crack the high performance mobile market in the coming year.

Categories: Tech