Anandtech


Corsair Gaming K70 RGB Mechanical Keyboard Review

Mon, 2014-09-22 06:00

Today is the dawn of a new era for Corsair, as the company has multiple announcements. Corsair is establishing their own gaming brand, announcing the availability of the new RGB keyboards and mice, and they're also releasing a new software engine for their input devices. We're focusing mostly on the new RGB keyboards, and Corsair is dropping the "Vengeance" series name; the new keyboards simply use the brand name and model. So how does the newly christened Corsair Gaming K70 RGB keyboard fare? This keyboard has probably had more hype than any other keyboard in history, so let's find out if it can live up to expectations in our full review.

Categories: Tech

MediaTek Labs and LinkIt Platform Launch Targeting IoT and Wearables

Mon, 2014-09-22 06:00

Companies such as Motorola, Apple, Nest, and Fitbit have been targeting the Internet of Things (IoT) and wearables market with devices for the past several years. However, if the smartphone revolution was any indication, we are merely at the tip of the iceberg for these devices. Even Apple acknowledged as much by naming the processor inside the Apple Watch the “S1”, clearly planning for future revisions.

Today, hoping to capitalize on this next wave of technology proliferation, MediaTek is formally launching their Labs program for IoT and wearables. This is one of many announcements we will see over the next year as companies look to enter this market.

MediaTek Labs' goal is to be a central hub for developers to collaborate on everything from side-projects to big business device production. With Labs, MediaTek provides software and hardware development kits (SDKs and HDKs), technical documentation, example code, and discussion forums. MediaTek was a late entry into the smart phone market in 2009/2010 but has since exploded in popularity largely due to very complete reference designs and aggressive pricing. MediaTek aims to reproduce this success, only earlier, for the IoT and wearables space.

When discussing hardware, it's important to keep in mind that there are actually several sub-markets. I've reproduced a slide and table from MediaTek that does a decent job laying out the differences.

MediaTek's IoT and Wearables Market Segments

One Application Use (OAU)
  • Examples: Fitness tracker, health tracker, simple Bluetooth devices
  • Hardware: MCU (<100 MHz), Bluetooth, sensor
  • Optional hardware: LED display
  • OS: None
  • Price point: Lowest
  • Battery life: Long (>7 days)
  • Characteristics: Limited computing power, focused on one task (such as sports, health, or find-my-device); mostly no display, or a very simple LED display

Simple Application Use (SAU)
  • Examples: Smart wristband, smart watch, child/elderly safety
  • Hardware: MCU (100-300 MHz), Bluetooth, sensors
  • Optional hardware: LED or TFT display, GSM/GPRS, GPS, Wi-Fi
  • OS: Mostly RTOS
  • Price point: Middle
  • Battery life: Medium (5-7 days)
  • Characteristics: May have multiple functions and can update apps; also needs outdoor/indoor positioning; this is the focus for MediaTek LinkIt and the Aster (MT2502) chipset

Rich Application Use (RAU)
  • Examples: High-end smart watch, smart glasses
  • Hardware: AP (>1 GHz, multi-core), Bluetooth, sensors, TFT display
  • Optional hardware: See-through display, GSM/GPRS, GPS, Wi-Fi
  • OS: Mostly Linux
  • Price point: Highest
  • Battery life: Short (2-3 days)
  • Characteristics: Multiple apps and functions; sophisticated UI with more powerful graphics and multimedia features

One thing I do not like about this table is that it insinuates these markets are mutually exclusive. While I agree there are indeed hardware and software differences between sub-markets, with low enough sleep power and smart enough software, a single device could contain both a high performance applications processor (AP) and a low power microcontroller (MCU). In fact, that's exactly what Intel's Edison platform and many smartphones, such as the Moto X, do. Nevertheless, hybrid devices are certainly more complicated, and there is a lot of success to be had focusing on a single task.

For example, the popular Pebble smart watch and Nest thermostat each contain a simple MCU with no high performance AP. This is exactly what MediaTek is targeting with their first platform release on Labs: LinkIt. LinkIt actually refers to MediaTek's new MCU operating system, which is launching alongside a new SoC named Aster, or MT2502. Additionally, a hardware development kit from partner Seeed Studio is available through Labs, as well as a software development kit to aid in firmware development and to help port existing Arduino code.

The core of this kit is of course the new Aster MT2502 SoC. MediaTek feels it is uniquely positioned with an SoC that contains an MCU, Power Management Unit (PMU), Memory, Bluetooth 4.0, and a GSM and GPRS Dual SIM modem (850/900/1800/1900MHz). The total size of the SoC is 5.4x6.2mm. If GPS/GLONASS/BEIDOU or WiFi b/g/n are desired, MediaTek provides compatible external ICs for each.

MediaTek Aster MT2502 SoC

  • Size: 5.4mm x 6.2mm
  • Package: 143-ball, 0.4mm pitch, TFBGA
  • CPU: ARM7 EJ-S at 260MHz
  • Memory: 4MB integrated RAM
  • Storage: 4MB integrated flash
  • PAN: Dual Bluetooth 4.0
  • WAN: GSM and GPRS dual SIM modem
  • Power: PMU and charger functions, low power mode, sensor hub function
  • Multimedia: AMR speech codec, HE-AAC music codec, integrated audio amplifier, JPEG decoder/encoder, MJPEG decoder/encoder, MPEG4 decoder/encoder
  • Interfaces: LCD, VGA camera, I2C, SPI, UART, GPIO, SDIO, USB 1.1, keypad, serial flash, JTAG, ADC, DAC, PWM, FM radio

Developers eager to get their hands dirty can do so as of today for $79. The LinkIt One development board is available and shipping from Seeed Studio. This board combines the Aster MT2502A SoC, the MT5931 for WiFi, the MT3332 for GPS, an audio codec, SD card support, many I/O interfaces similar to Arduino's, and Arduino shield compatibility.
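
Since the SDK is explicitly meant to accept ported Arduino code, getting started on the LinkIt One should look much like any other Arduino-compatible board. The sketch below is a minimal illustration using only standard Arduino calls; the LED pin number is a placeholder rather than a documented LinkIt One pin, so treat this as a sketch of the workflow rather than board-specific sample code.

```cpp
// Minimal Arduino-style sketch using only the standard Arduino API.
// LED_PIN is a placeholder - check the LinkIt One pinout for a real pin.
const int LED_PIN = 13;

void setup() {
    pinMode(LED_PIN, OUTPUT);     // configure the pin as a digital output
    Serial.begin(9600);           // open the serial port for debug output
}

void loop() {
    digitalWrite(LED_PIN, HIGH);  // LED on
    delay(500);
    digitalWrite(LED_PIN, LOW);   // LED off
    delay(500);
    Serial.println("blink");      // heartbeat over the serial console
}
```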

It will be a while before we see non-prototype designs featuring LinkIt and Aster hit the market, but if MediaTek has its way that will only be the start. MediaTek plans on releasing more SDKs, HDKs, and chips through their Labs website and partners over the next few years. As of this writing MediaTek has already posted a beta SDK and emulator for Android targeting the higher performance IoT and wearable devices. While I am not personally sure just what additional smart devices I need in my life right now, I actually think that gets me more excited about the future than otherwise.

Categories: Tech

iPhone 6 and iPhone 6 Plus: Preliminary Results

Mon, 2014-09-22 04:07

While we’re still working on the full review, I want to get out some preliminary results for the iPhone 6. For now, this means some basic performance data and battery life, which include browser benchmarks, game-type benchmarks, and our standard web browsing battery life test. There’s definitely a lot more to talk about for this phone, but this should give an idea of what to expect in the full review. To start, we'll look at the browser benchmarks, which can serve as a relatively useful proxy for CPU performance.

There are a few interesting observations here, as a great deal of the scaling is above what one would expect from the minor frequency bump when comparing the A7 and A8. In SunSpider, we see about a 13% increase in performance that can't be explained by frequency increases alone. For Kraken, this change is around 7.5%, and we see a similar trend across the board for the rest of these tests. This points towards a relatively similar underlying architecture, although it's still too early to tell how much has changed between the A7 and A8 CPU architectures. Next, we'll look at GPU performance in 3DMark and GFXBench, although we're still working on figuring out the exact GPU in A8.
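
Before moving on to the GPU, a quick note on how that CPU decomposition works: if the total measured speedup is S and the clock ratio between the two chips is f, then the per-clock (architectural) gain is S/f - 1. The figures below are a sketch assuming the widely reported, but not officially confirmed, clocks of roughly 1.3 GHz for the A7 and 1.4 GHz for the A8.

```latex
\text{per-clock gain} \;=\; \frac{S_{\text{total}}}{f_{A8}/f_{A7}} - 1,
\qquad
\frac{f_{A8}}{f_{A7}} \approx \frac{1.4\ \text{GHz}}{1.3\ \text{GHz}} \approx 1.08
```

With a frequency ratio of only about 1.08, any speedup meaningfully above ~8% has to come from per-clock improvements, which is the point being made above.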

In the GPU benchmarks, we generally see a pretty solid lead over the competition for the iPhone 6/A8. It seems quite clear that there is a significant impact to GPU performance in the iPhone 6 Plus due to the 2208x1242 resolution that all content is rendered at. It seems that this is necessary though, as the rendering system for iOS cannot easily adapt to arbitrary resolutions and display sizes. Before we wrap up this article, I definitely need to address battery life. As with all of our battery life tests, we standardize on 200 nits and ensure that our workload in the web browsing test has a reasonable amount of time in all power states of an SoC.

As one can see, it seems that Apple has managed to do something quite incredible with battery life. Normally an 1810 mAh battery with a 3.82V nominal voltage would be quite a poor performer, but the iPhone 6 is a step above just about every Android smartphone on the market. The iPhone 6 Plus also has a strong showing, although it doesn't quite deliver the outrageous levels of battery life that the Ascend Mate 2 does. That's it for now, but the full review should be coming in the near future.

Categories: Tech

NVIDIA 344.11 & 344.16 Drivers Available

Fri, 2014-09-19 12:40

In the crazy rush to wrap up the GeForce GTX 980 review in time for the NDA lift yesterday, news of the first R343 driver release may have been lost in the shuffle. This is a full WHQL driver release from NVIDIA, and it's available for Windows 8.1, 7, Vista, and even XP (though I don't know what you'd be doing with a modern GPU on XP at this point). Notebooks also get the new drivers, though only for Windows 7 and 8 it seems. You can find the updates at the usual place, or they're also available through GeForce Experience (which has also been updated to version 2.1.2.0 if you're wondering).

In terms of what the driver update provides, this is the Game Ready driver for Borderlands: The Pre-Sequel, The Evil Within, F1 2014, and Alien: Isolation – all games that are due to launch in early to mid-October. Of course this is also the publicly available driver for the GeForce GTX 980 and GTX 970, which are apparently selling like hotcakes based on the number of "out of stock" notifications we're seeing (not to mention some hefty price gouging on the GTX 970 and GTX 980).

The drivers also enable NVIDIA's new DSR (Dynamic Super Resolution), with hooks for individual games available in the Control Panel->Manage 3D Settings section. It's not clear whether DSR will be available for other GPUs, but it's definitely not enabled on my GTX 780 right now and I suspect it will be limited to the new Maxwell GM204 GPUs for at least a little while.

There are a host of other updates, too numerous to go into, but you can check the release notes for additional information. These drivers also drop support for legacy GPUs (anything from the 300 series and older), so if you're running an older GPU you'll need to stay with the previous driver release.

Update: 344.16 is now available for the GTX 900 series. These drivers include the fixes to resolve the compatibility issues we were seeing with the GTX 970.

Categories: Tech

Samsung Acknowledges the SSD 840 EVO Read Performance Bug - Fix Is on the Way

Fri, 2014-09-19 12:23

During the last couple of weeks, numerous reports of the Samsung SSD 840 and 840 EVO having low read performance have surfaced around the Internet. The most extensive one is probably a forum thread over at Overclock.net, which was started about a month ago and currently has over 600 replies. For those who are not aware of the issue, there is a bug in the 840 EVO that causes the read performance of old blocks of data to drop dramatically, as the HD Tach graph below illustrates. The odd part is that the bug only seems to affect LBAs that have old data (>1 month) associated with them, because freshly written data will read at full speed, which also explains why the issue was not discovered until now.

Source: @p_combe

I just got off the phone with Samsung and the good news is that they are aware of the problem and have presumably found the source of it. The engineers are now working on an updated firmware to fix the bug and as soon as the fix has been validated, the new firmware will be distributed to end-users. Unfortunately there is no ETA for the fix, but obviously it is in Samsung's best interest to provide it as soon as possible.

Update 9/27: Samsung just shed some light on the timeline and the fixed firmware is scheduled to be released to the public on October 15th.

I do not have any further details about the nature of the bug at this point, but we will be getting more details early next week, so stay tuned. It is a good sign that Samsung acknowledges the bug and that a fix is in the works, but for now I would advise against buying the 840 EVO until there is a resolution for the issue.
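
In the meantime, readers who want to check whether their own drive is affected can do a crude version of what the forum testers did: time large sequential reads of files written months ago against a freshly written copy of the same data. The snippet below is a minimal, unscientific illustration (single pass, no cache control), not the HD Tach methodology used in the forum thread, so expect it to be skewed by the OS file cache unless the files are uncached.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Time a sequential read of each file given on the command line and print the
// average throughput. Run it against an old file and a fresh copy of the same
// data; a large gap is consistent with the 840 EVO behavior described above.
double readThroughputMBps(const char* path) {
    std::vector<char> buf(4 * 1024 * 1024);              // 4 MB read chunks
    FILE* f = std::fopen(path, "rb");
    if (!f) { std::perror(path); return -1.0; }

    auto start = std::chrono::steady_clock::now();
    size_t total = 0, n = 0;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0)
        total += n;
    auto end = std::chrono::steady_clock::now();
    std::fclose(f);

    double secs = std::chrono::duration<double>(end - start).count();
    return (total / (1024.0 * 1024.0)) / secs;
}

int main(int argc, char** argv) {
    for (int i = 1; i < argc; ++i)
        std::printf("%s: %.1f MB/s\n", argv[i], readThroughputMBps(argv[i]));
    return 0;
}
```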

Categories: Tech

Short Bytes: NVIDIA GeForce GTX 980 in 1000 Words

Fri, 2014-09-19 11:20

To call the launch of NVIDIA's Maxwell GM204 part impressive is something of an understatement. You can read our full coverage of the GTX 980 for the complete story, but here's the short summary. Without the help of a manufacturing process shrink, NVIDIA and AMD are both looking at new ways to improve performance. The Maxwell architecture initially launched earlier this year with GM107 and the GTX 750 Ti and GTX 750, and with it we had our first viable mainstream GPU of the modern era that could deliver playable frame rates at 1080p while using less than 75W of power. The second generation Maxwell ups the ante by essentially tripling the CUDA core count of GM107, all while adding new features and still maintaining the impressive level of efficiency.

It's worth pointing out that "Big Maxwell" (or at least "Bigger Maxwell") is enough of a change that NVIDIA has bumped the model numbers from the GM100 series to GM200 series this round. NVIDIA has also skipped the desktop 800 line completely and is now in the 900 series. Architecturally, however, there's enough change going into GM204 that calling this "Maxwell 2" is certainly warranted.

NVIDIA is touting a 2X performance per Watt increase over GTX 680, and they've delivered exactly that. Through a combination of architectural and design improvements, NVIDIA has moved from 192 CUDA cores per SMX in Kepler to 128 CUDA cores per SMM in Maxwell, and a single SMM is still able to deliver around 90% of the performance of an SMX at equivalent clocks. Put another way, NVIDIA says the new Maxwell 2 architecture is around 40% faster per CUDA core than Kepler. What that means in terms of specifications is that GM204 only needs 2048 CUDA cores to compete with – and generally surpass! – the performance of GK110 with its 2880 CUDA cores, which is used in the GeForce GTX 780 Ti and GTX Titan cards.
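
In fact, the core-count arithmetic lines up almost exactly with that claim (setting aside the clock speed differences between the parts):

```latex
2048 \times 1.40 \approx 2867 \approx 2880 \ \text{(GK110 CUDA cores)}
```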

In terms of new features, some of the changes with GM204 come on the software/drivers side of things while other features have been implemented in hardware. Starting with the hardware side, GM204 now implements the full set of D3D 11.3/D3D 12 features, where previous designs  (Kepler and Maxwell 1) stopped at full Feature Level 11_0 with partial FL 11_1. The new features include Rasterizer Ordered Views, Typed UAV Load, Volume Tiled Resources, and Conservative Rasterization. Along with these, NVIDIA is also adding hardware features to accelerate what they're calling VXGI – Voxel accelerated Global Illumination – a forward-looking technology that brings GPUs one step closer to doing real-time path tracing. (NVIDIA has more details available if you're interested in learning more).

NVIDIA also has a couple new techniques to improve anti-aliasing, Dynamic Super Resolution (DSR) and Multi-Frame Anti-Aliasing (MFAA). DSR essentially renders a game at a higher resolution and then down-sizes the result to your native resolution using a high-quality 13-tap Gaussian filter. It's similar to super sampling, but the great benefit of DSR over SSAA is that the game doesn't have any knowledge of DSR; as long as the game can support higher resolutions, NVIDIA's drivers take care of all of the work behind the scenes. MFAA (please, no jokes about "mofo AA") is supposed to offer essentially the same quality as 4x MSAA with the performance hit of 2x MSAA through a combination of custom filters and looking at previously rendered frames. MFAA can also function with a 4xAA mode to provide an alternative to 8x MSAA.
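
To make the DSR idea concrete, here is a CPU-side sketch of the render-high-then-filter-down approach: the scene is rendered at a multiple of the native resolution, and each output pixel becomes a 13-tap Gaussian-weighted average of the corresponding high-resolution neighborhood. NVIDIA has not published its actual filter weights, so the sigma value below is a placeholder, and this is a conceptual illustration rather than what the driver does internally.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Normalized 1D Gaussian weights for a filter with `taps` samples.
std::vector<float> gaussianWeights(int taps, float sigma) {
    std::vector<float> w(taps);
    float sum = 0.0f;
    for (int i = 0; i < taps; ++i) {
        float x = float(i - taps / 2);
        w[i] = std::exp(-x * x / (2.0f * sigma * sigma));
        sum += w[i];
    }
    for (float& v : w) v /= sum;                 // weights sum to 1
    return w;
}

// Downsample a grayscale image rendered at `factor` times the target size,
// filtering each output sample with a 13x13 separable Gaussian kernel.
std::vector<float> dsrDownsample(const std::vector<float>& hiRes,
                                 int hiW, int hiH, int factor) {
    const int taps = 13;
    const auto w = gaussianWeights(taps, 2.0f);  // sigma = 2.0 is a guess
    const int loW = hiW / factor, loH = hiH / factor;
    std::vector<float> loRes(loW * loH, 0.0f);

    for (int y = 0; y < loH; ++y) {
        for (int x = 0; x < loW; ++x) {
            // Centre of this output pixel in high-resolution coordinates.
            const int cx = x * factor + factor / 2;
            const int cy = y * factor + factor / 2;
            float acc = 0.0f;
            for (int j = 0; j < taps; ++j) {
                for (int i = 0; i < taps; ++i) {
                    int sx = std::min(std::max(cx + i - taps / 2, 0), hiW - 1);
                    int sy = std::min(std::max(cy + j - taps / 2, 0), hiH - 1);
                    acc += w[i] * w[j] * hiRes[sy * hiW + sx];
                }
            }
            loRes[y * loW + x] = acc;
        }
    }
    return loRes;
}
```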

The above is all well and good, but what really matters at the end of the day is the actual performance that GM204 can offer. We've averaged results from our gaming benchmarks at our 2560x1440 and 1920x1080 settings, as well as our compute benchmarks, with all scores normalized to the GTX 680. Here's how the new GeForce GTX 980 compares with other GPUs. (Note that we've omitted the overclocking results for the GTX 980, as it wasn't tested across all of the games, but on average it's around 18% faster than the stock GTX 980 while consuming around 20% more power.)

Wow. Obviously there's not quite as much to be gained by running such a fast GPU at 1920x1080, but at 2560x1440 we're looking at a GPU that's a healthy 74% faster on average compared to the GTX 680. Perhaps more importantly, the GTX 980 is also on average 8% faster than the GTX 780 Ti and 13.5% faster than AMD's Radeon R9 290X (in Uber mode, as that's what most shipping cards use). Compute performance sees some even larger gains over previous NVIDIA GPUs, with the 980 besting the 680 by 132%; it's also 16% faster than the 780 Ti but "only" 1.5% faster than the 290X – though the 290X still beats the GTX 980 in Sony Vegas Pro 12 and SystemCompute.

For GTX 780 Ti owners, performance hasn't improved so much that we'd recommend upgrading, though you do get some new features that might prove useful over time. For those that didn't find the price/performance offered by the GTX 780 Ti a compelling reason to upgrade, the GTX 980 sweetens the pot by dropping the MSRP down to $549, and what's more it also uses quite a bit less power:

This is what we call the trifecta of graphics hardware: better performance, lower power, and lower prices. When NVIDIA unveiled the GTX 750 Ti back in February, it was ultimately held back by performance while its efficiency was a huge step forward; it seemed almost too much to hope for that sort of product in the high performance GPU arena. NVIDIA doesn't disappoint, however, dropping power consumption by 18% relative to the GTX 780 Ti while improving performance by roughly 10% and dropping the launch price by just over 20%. If you've been waiting for a reason to upgrade, GeForce GTX 980 is about as good as it gets, though the much less expensive GTX 970 might just spoil the party. We'll have a look at the 970 next week.

Categories: Tech

Acer Releases XBO Series: 28-inch UHD/4K with G-Sync for $800

Fri, 2014-09-19 06:52

Monitors are getting exciting. Not only are higher resolution panels becoming more of the norm, but the combination of different panel dimensions and feature sets means that buying the monitor you need for the next 10 years is getting more difficult. Today Acer adds some spice to the mix by announcing pre-orders for the XB280HK – a 28-inch TN monitor with 3840x2160 resolution that also supports NVIDIA’s G-Sync to reduce tearing and stuttering.

Adaptive frame rate technologies are still in the early phases of adoption by the majority of users. AMD's FreeSync is still a few quarters away from the market, and NVIDIA's G-Sync requires an add-in card, which started off as an interesting, if expensive, monitor upgrade. Fast forward a couple of months and, as you might expect, the best place for G-Sync to go is into some of the more impressive monitor configurations. 4K is becoming a go-to resolution for anyone with deep enough wallets, although some might argue that 21:9 monitors might be better for gaming immersion at least.

The XB280HK will support 3840x2160 at 60 Hz via DisplayPort 1.2, along with a 1 ms gray-to-gray response time and a fixed frequency up to 144 Hz. The stand will adjust up to 155mm in height with 40° of tilt. There is also 120° of swivel and a full quarter turn of pivot allowing for portrait style implementations. The brightness of the panel is rated at 300 cd/m2, with an 8-bit+HiFRC TN display that has a typical contrast ratio of 1000:1 and 72% NTSC coverage. 100x100mm VESA mounting is supported, and the monitor includes a USB 3.0 hub, although there are no speakers.

The XB280HK is currently available for pre-order in the UK at £500, but will have a US MSRP of $800. Also part of the Acer XBO range is the XB270H, a 27-inch 1920x1080 panel with G-Sync with an MSRP of $600. Expected release date, according to the pre-orders, should be the 3rd of October.

Source: Acer

Gallery: Acer Releases XBO Series: 28-inch UHD/4K with G-Sync for $800

Categories: Tech

Microsoft Details Direct3D 11.3 & 12 New Rendering Features

Thu, 2014-09-18 20:30

Back at GDC 2014 in March, Microsoft and its hardware partners first announced the next full iteration of the Direct3D API. Now on to version 12, this latest version of Direct3D would be focused on low level graphics programming, unlocking the greater performance and greater efficiency that game consoles have traditionally enjoyed by giving seasoned programmers more direct access to the underlying hardware. In particular, low level access would improve performance both by reducing the overhead high level APIs incur, and by allowing developers to better utilize multi-threading by making it far easier to have multiple threads submitting work.

At the time Microsoft offered brief hints that there would be more to Direct3D 12 than just the low level API, but the low level API was certainly the focus for the day. Now as part of NVIDIA’s launch of the second generation Maxwell based GeForce GTX 980, Microsoft has opened up to the press and public a bit more on what their plans are for Direct3D. Direct3D 12 will indeed introduce new features, but there will be more in development than just Direct3D 12.

Direct3D 11.3

First and foremost then, Microsoft has announced that there will be a new version of Direct3D 11 coinciding with Direct3D 12. Dubbed Direct3D 11.3, this new version of Direct3D is a continuation of the development and evolution of the Direct3D 11 API and like the previous point updates will be adding API support for features found in upcoming hardware.

At first glance the announcement of Direct3D 11.3 would appear to be at odds with Microsoft’s development work on Direct3D 12, but in reality there is a lot of sense in this announcement. Direct3D 12 is a low level API – powerful, but difficult to master and very dangerous in the hands of inexperienced programmers. The development model envisioned for Direct3D 12 is that a limited number of code gurus will be the ones writing the engines and renderers that target the new API, while everyone else will build on top of these engines. This works well for the many organizations that are licensing engines such as UE4, or for the smaller number of organizations that can justify having such experienced programmers on staff.

However, for those same reasons a low level API is not suitable for everyone. High level APIs such as Direct3D 11 do exist for a good reason after all; their abstraction not only hides the quirks of the underlying hardware, but it also makes development easier and more accessible. For these reasons there is a need to offer both high level and low level APIs: Direct3D 12 will be the low level API, and Direct3D 11 will continue to be developed to offer the same features through a high level API.

Direct3D 12

Today's announcement of Direct3D 11.3 and the new feature set that Direct3D 11.3 and 12 will be sharing will have an impact on Direct3D 12 as well. We'll get to the new features in a moment, but at a high level it should be noted that this means Direct3D 12 is going to end up being a multi-generational (multi-feature level) API similar to Direct3D 11.

In Direct3D 11 Microsoft introduced feature levels, which allowed programmers to target different generations of hardware using the same API, instead of having to write their code multiple times for each associated API generation. In practice this meant that programmers could target D3D 9, 10, and 11 hardware through the D3D 11 API, restricting their feature use accordingly to match the hardware capabilities. This functionality was exposed through feature levels (ex: FL9_3 for D3D9.0c capable hardware) which offered programmers a neat segmentation of feature sets and requirements.
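
For readers who haven't worked with this mechanism, this is what targeting feature levels looks like in today's Direct3D 11 API. The sketch below uses the standard, documented D3D11CreateDevice pattern and nothing specific to 11.3/12: the runtime walks the list and returns the highest feature level the installed GPU and driver support.

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create a D3D11 device against a descending list of feature levels and find
// out which one the hardware actually supports. FL9_3, FL10_x and FL11_x
// hardware can all be driven through this single API.
bool CreateDeviceWithBestFeatureLevel()
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_1,   // note: a pre-11.1 runtime (e.g. stock Windows 7)
        D3D_FEATURE_LEVEL_11_0,   // returns E_INVALIDARG if 11_1 is in the list
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,
    };

    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL obtained = {};

    HRESULT hr = D3D11CreateDevice(
        nullptr,                          // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr, 0,
        requested,
        static_cast<UINT>(sizeof(requested) / sizeof(requested[0])),
        D3D11_SDK_VERSION,
        &device, &obtained, &context);

    if (FAILED(hr))
        return false;

    // 'obtained' now holds the highest supported feature level, e.g.
    // D3D_FEATURE_LEVEL_11_0 on Fermi/Kepler class hardware.
    context->Release();
    device->Release();
    return true;
}
```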

Direct3D 12 in turn will also be making use of feature levels, allowing developers to exploit the benefits of the low level nature of the API while being able to target multiple generations of hardware. It’s through this mechanism that Direct3D 12 will be usable on GPUs as old as NVIDIA’s Fermi family or as new as their Maxwell family, all the while still being able to utilize the features added in newer generations.

Ultimately for users this means they will need to be mindful of feature levels, just as they are today with Direct3D 11. Hardware that is Direct3D 12 compatible does not mean it supports all of the latest feature sets, and keeping track of feature set compatibility for each generation of hardware will still be important going forward.

11.3 & 12: New Features

Getting to the heart of today's announcement from Microsoft, we have the newly announced features that will be coming to Direct3D 11.3 and 12. It should be noted that this is not an exhaustive list of all of the new features we will see, and Microsoft is still working to define a new feature level to go with them (in the interim they will be accessed through cap bits), but nonetheless this is our first detailed look at what are expected to be the major new features of 11.3/12.
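
For context on how cap bits surface to the programmer: in Direct3D 11, optional hardware features are queried through ID3D11Device::CheckFeatureSupport. The sketch below uses the structure and enum names from the later D3D11.3-era (Windows 10 SDK) headers; those names had not been published at the time of this announcement, so treat them as an assumption about how the new caps are exposed rather than as final documentation.

```cpp
#include <d3d11.h>
// Assumes the D3D11.3-era headers (Windows 10 SDK). The OPTIONS2 names below
// come from that later SDK and are an assumption relative to what was public
// when this article was written.

void ReportNewFeatureCaps(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts = {};
    if (FAILED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D11_OPTIONS2, &opts, sizeof(opts))))
        return;  // runtime or driver too old to know about these caps

    // Rasterizer Ordered Views: a simple yes/no cap.
    const bool rovs = opts.ROVsSupported != 0;

    // Typed UAV loads beyond the guaranteed baseline formats.
    const bool typedUavLoad = opts.TypedUAVLoadAdditionalFormats != 0;

    // Conservative rasterization is reported as a tier rather than a bool.
    const bool conservative =
        opts.ConservativeRasterizationTier != D3D11_CONSERVATIVE_RASTERIZATION_NOT_SUPPORTED;

    // Volume (3D) tiled resources correspond to tiled resources tier 3.
    const bool volumeTiled = opts.TiledResourcesTier >= D3D11_TILED_RESOURCES_TIER_3;

    (void)rovs; (void)typedUavLoad; (void)conservative; (void)volumeTiled;
}
```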

Rasterizer Ordered Views

First and foremost among the new features is Rasterizer Ordered Views (ROVs). As hinted at by the name, ROVs are focused on giving the developer control over the order that elements are rasterized in a scene, so that elements are drawn in the correct order. This feature specifically applies to Unordered Access Views (UAVs) being generated by pixel shaders, which by their very definition are initially unordered. ROVs offer an alternative to UAVs' unordered nature, which would otherwise result in elements being rasterized simply in the order they were finished. For most rendering tasks unordered rasterization is fine (deeper elements would be occluded anyhow), but for a certain category of tasks the ability to efficiently control the access order to a UAV is important to correctly render a scene quickly.

The textbook use case for ROVs is Order Independent Transparency, which allows for elements to be rendered in any order and still blended together correctly in the final result. OIT is not new – Direct3D 11 gave the API enough flexibility to accomplish this task – however these earlier OIT implementations would be very slow due to sorting, restricting their usefulness outside of CAD/CAM. The ROV implementation however could accomplish the same task much more quickly by getting the order correct from the start, as opposed to having to sort results after the fact.

Along these lines, since OIT is just a specialized case of a pixel blending operation, ROVs will also be usable for other tasks that require controlled pixel blending, including certain cases of anti-aliasing.
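
To see why ordering matters at all, nothing GPU-specific is needed: standard "over" alpha blending is simply not commutative, so blending the same transparent fragments in two different orders produces two different colors. That is the problem a sort, or now a ROV, has to solve. A quick illustrative sketch:

```cpp
#include <cstdio>

struct Rgba { float r, g, b, a; };

// Classic "over" operator: src blended over dst.
Rgba over(const Rgba& src, const Rgba& dst) {
    const float outA = src.a + dst.a * (1.0f - src.a);
    auto blend = [&](float s, float d) {
        return (s * src.a + d * dst.a * (1.0f - src.a)) / (outA > 0.0f ? outA : 1.0f);
    };
    return { blend(src.r, dst.r), blend(src.g, dst.g), blend(src.b, dst.b), outA };
}

int main() {
    const Rgba red   = {1.0f, 0.0f, 0.0f, 0.5f};  // 50% transparent red
    const Rgba green = {0.0f, 1.0f, 0.0f, 0.5f};  // 50% transparent green
    const Rgba white = {1.0f, 1.0f, 1.0f, 1.0f};  // opaque background

    // Near-to-far order: red over (green over white)...
    const Rgba a = over(red, over(green, white));
    // ...versus the reverse order, which gives a visibly different result.
    const Rgba b = over(green, over(red, white));

    std::printf("red first:   %.2f %.2f %.2f\n", a.r, a.g, a.b);  // 0.75 0.50 0.25
    std::printf("green first: %.2f %.2f %.2f\n", b.r, b.g, b.b);  // 0.50 0.75 0.25
    return 0;
}
```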

Typed UAV Load

 

The second feature coming to Direct3D is Typed UAV Load. Unordered Access Views (UAVs) are a special type of buffer that allows multiple GPU threads to access the same buffer simultaneously without generating memory conflicts. Because of this disorganized nature of UAVs, certain restrictions are in place that Typed UAV Load will address. As implied by the name, Typed UAV Load deals with cases where UAVs are data typed, and how to better handle their use.

Volume Tiled Resources

 

The third feature coming to Direct3D is Volume Tiled Resources. VTR builds off of the work Microsoft and partners have already done for tiled resources (AKA sparse allocation, AKA hardware megatexture) by extending it into the 3rd dimension.

VTRs are primarily meant to be used with volumetric pixels (voxels), with the idea being that with sparse allocation, volume tiles that do not contain any useful information can avoid being allocated, avoiding tying up memory in tiles that will never be used or accessed. This kind of sparse allocation is necessary to make certain kinds of voxel techniques viable.

Conservative Rasterization

 

Last but certainly not least among Direct3D's new features is conservative rasterization. Conservative rasterization is essentially a more accurate but performance intensive solution to figuring out whether a polygon covers part of a pixel. Instead of doing a quick and simple test to see if the center of the pixel is bounded by the lines of the polygon, conservative rasterization checks whether the pixel covers the polygon by testing it against the corners of the pixel. This means that conservative rasterization will catch cases where a polygon was too small to cover the center of a pixel, which results in a more accurate outcome, be it better identifying the pixels a polygon resides in, or finding polygons too small to cover the center of any pixel at all. This is where the "conservative" aspect of the name comes from, as the rasterizer is conservative by including every pixel touched by a triangle, as opposed to just the pixels where the triangle covers the center point.

Conservative rasterization is being added to Direct3D in order to allow new algorithms to be used which would otherwise fail under the imprecise nature of point sampling. Like VTR, voxels play a big part here, as conservative rasterization can be used to build voxels. However, it also has use cases in more accurate tiling and even collision detection.
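
As a purely geometric illustration of the difference (a CPU-side sketch, not how the GPU hardware implements it): the standard rule tests only the pixel's center point against the triangle's edges, while a conservative rule asks whether the pixel's square overlaps the triangle at all, here via a simple separating-axis style test.

```cpp
#include <algorithm>
#include <array>
#include <utility>

struct Vec2 { double x, y; };

// Signed area of edge a->b relative to point p; positive when p is to the
// left of the edge for counter-clockwise winding.
static double edgeFunction(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Standard rasterization rule: covered only if the pixel centre is inside.
bool centerCovered(std::array<Vec2, 3> t, double px, double py) {
    if (edgeFunction(t[0], t[1], t[2]) < 0.0) std::swap(t[1], t[2]);  // force CCW
    const Vec2 c = { px + 0.5, py + 0.5 };
    return edgeFunction(t[0], t[1], c) >= 0.0 &&
           edgeFunction(t[1], t[2], c) >= 0.0 &&
           edgeFunction(t[2], t[0], c) >= 0.0;
}

// Conservative rule: covered if the 1x1 pixel square overlaps the triangle at
// all. Two convex shapes are disjoint only if one lies entirely outside an
// edge of the other, so check the pixel against every triangle edge and the
// triangle's bounding box against the pixel.
bool conservativelyCovered(std::array<Vec2, 3> t, double px, double py) {
    if (edgeFunction(t[0], t[1], t[2]) < 0.0) std::swap(t[1], t[2]);  // force CCW

    // Separation along the pixel's own axes (bounding-box rejection).
    const double minX = std::min({t[0].x, t[1].x, t[2].x});
    const double maxX = std::max({t[0].x, t[1].x, t[2].x});
    const double minY = std::min({t[0].y, t[1].y, t[2].y});
    const double maxY = std::max({t[0].y, t[1].y, t[2].y});
    if (maxX <= px || minX >= px + 1.0 || maxY <= py || minY >= py + 1.0)
        return false;

    // Separation along each triangle edge: at least one pixel corner must be
    // on the inner side of every edge, otherwise the pixel is fully outside.
    const Vec2 corners[4] = { {px, py}, {px + 1.0, py}, {px, py + 1.0}, {px + 1.0, py + 1.0} };
    for (int e = 0; e < 3; ++e) {
        bool anyInside = false;
        for (const Vec2& c : corners)
            anyInside = anyInside || edgeFunction(t[e], t[(e + 1) % 3], c) >= 0.0;
        if (!anyInside)
            return false;
    }
    return true;  // centerCovered() may still be false for the same pixel
}
```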

Final Words

Wrapping things up, today’s announcement of Direct3D 11.3 and its new features offers a solid roadmap for both the evolution of Direct3D and the hardware that will support it. By confirming that they are continuing to work on Direct3D 11 Microsoft has answered one of the lingering questions surrounding Direct3D 12 – what happens to Direct3D 11 – and at the same time this highlights the hardware features that the next generation of hardware will need to support in order to be compliant with the latest D3D feature level. And with Direct3D 12 set to be released sometime next year, these new features won’t be too far off either.

Gallery: Direct3D 11.3/12 New Rendering Features

Categories: Tech

NVIDIA GameWorks: More Effects with Less Effort

Thu, 2014-09-18 19:32

While NVIDIA's hardware is the big star of the day, the software that we run on the hardware is becoming increasingly important. It's one thing to create the world's fastest GPU, but what good is the GPU if you don't have anything that can leverage all that performance? As part of their ongoing drive to improve the state of computer graphics, NVIDIA has a dedicated team of over 300 engineers whose primary focus is the creation of tools and technologies to make the lives of game developers better.

Gallery: NVIDIA GameWorks Overview

GameWorks consists of several items. There's the core SDK (Software Development Kit), along with IDE (Integrated Development Environment) tools for debugging, profiling, and other items a developer might need. Beyond the core SDK, NVIDIA has a Visual FX SDK, a PhysX SDK, and an OptiX SDK. The Visual FX SDK offers solutions for complex, realistic effects (e.g. smoke and fire, faces, waves/water, hair, shadows, and turbulence). PhysX is for physics calculations (either CPU or GPU based, depending on the system). OptiX is a ray tracing engine and framework, often used to pre-calculate ("bake") lighting in game levels. NVIDIA also provides sample code for graphics and compute, organized by effect and with tutorials.

Many of the technologies that are part of GameWorks have been around for a few years, but NVIDIA is constantly working on improving their GameWorks library and they had several new technologies on display at their GM204 briefing. One of the big ones has already been covered in our GM204 review, VXGI (Voxel Global Illumination), so I won't rehash that here; basically, it allows for more accurate and realistic indirect lighting. Another new technology that NVIDIA showed is called Turf Effects, which properly simulates individual blades of grass (or at least clumps of grass). Finally, PhysX FleX also has a couple new additions, Adhesion and Gases; FleX uses PhysX to provide GPU simulations of particles, fluids, cloth, etc.

Still images don't do justice to most of these effects, and NVIDIA will most likely have videos available in the future to show what they look like. PhysX FleX for example has a page with a currently unavailable video, so hopefully they'll update that with a live video in the coming weeks. You can find additional content related to GameWorks on the official website.

The holiday 2014 season will see the usual avalanche of new games, and many of the AAA titles will sport at least one or two technologies that come from GameWorks. Here's a short list of some of the games, and then we'll have some screen shots to help illustrate what some of the specific technologies do.

Upcoming Titles with GameWorks Technologies

  • Assassin’s Creed: Unity – HBAO+, TXAA, PCSS, Tessellation
  • Batman: Arkham Knight – Turbulence, Environmental PhysX, Volumetric Lights, FaceWorks, Rain Effects
  • Borderlands: The Pre-Sequel – PhysX Particles
  • Far Cry 4 – HBAO+, PCSS, TXAA, God Rays, Fur, Enhanced 4K Support
  • Project CARS – DX11, Turbulence, PhysX Particles, Enhanced 4K Support
  • Strife – PhysX Particles, HairWorks
  • The Crew – HBAO+, TXAA
  • The Witcher 3: Wild Hunt – HairWorks, HBAO+, PhysX, Destruction, Clothing
  • Warface – PhysX Particles, Turbulence, Enhanced 4K Support
  • War Thunder – WaveWorks, Destruction

In terms of upcoming games, the two most prominent titles are probably Assassin's Creed Unity and Far Cry 4, and we've created a gallery for each. Both games use multiple GameWorks elements, and NVIDIA was able to provide before/after comparisons for FC4 and AC Unity. Batman: Arkham Knight and The Witcher 3: The Wild Hunt also incorporate many effects from GameWorks, but we didn't get any with/without comparisons.

Gallery: GameWorks - Assassin's Creed Unity

 

Gallery: GameWorks - Far Cry 4

Starting with HBAO+ (Horizon Based Ambient Occlusion), this is a newer way of performing ambient occlusion calculations (SSAO, or Screen Space Ambient Occlusion, being the previous solution that many games have used). Games vary in how they perform AO, but if we look at the AC Unity comparison between HBAO+ and the default AO (presumably SSAO), HBAO+ clearly offers better shadows. HBAO+ is also supposed to be faster and more efficient than other AO techniques.

TXAA (Temporal Anti-Aliasing) basically combines a variety of filters and post processing techniques to help eliminate jaggies, something which we can all hopefully appreciate. There's one problem I've noticed with TXAA however, which you can see in the above screenshot: it tends to make the entire image look rather blurry in my opinion. It's almost as though someone took Photoshop's "blur" filter and applied it to the image.

PCSS (Percentage Closer Soft Shadows) was introduced a couple years back, which means we should start seeing it in more shipping games. You can see the video from 2012, and AC Unity and Far Cry 4 are among the first games that will offer PCSS.

Tessellation has been around for a few years now in games, and the concepts behind tessellation go back much further. The net result is that tessellation allows developers to extrude geometry from an otherwise flat surface, creating a much more realistic appearance to games when used appropriately. The cobble stone streets and roof shingles in AC Unity are great examples of the difference tessellation makes.

God rays are a lighting feature that we've seen before, but now NVIDIA has implemented a new way of calculating the shafts of light. They now use tessellation to extrude the shadow mapping and actually create transparent beams of light that they can render.

HairWorks is a way to simulate large strands of hair instead of using standard textures – Far Cry 4 and The Witcher 3 will both use HairWorks, though I have to admit that the hair in motion still doesn't look quite right to me. I think we still need an order of magnitude more "hair", and similar to the TressFX in Tomb Raider this is a step forward but we're not there yet.

Gallery: GameWorks - Upcoming Games Fall 2014

There are some additional effects being used in other games – Turbulence, Destruction, FaceWorks, WaveWorks, PhysX, etc. – but the above items give us a good idea of what GameWorks can provide. What's truly interesting about GameWorks is that these libraries are free for any developers that want to use them. The reason for creating GameWorks and basically giving it away is quite simple: NVIDIA needs to entice developers (and perhaps more importantly, publishers) into including these new technologies, as it helps to drive sales of their GPUs among other things. Consider the following (probably not so hypothetical) exchange between a developer and their publisher, paraphrased from NVIDIA's presentation on GameWorks.

A publisher wants to know when game XYZ is ready to ship, and the developer says it's basically done, but they're excited about some really cool features that will just blow people away, and it will take a few more months to get those finished up. "How many people actually have the hardware required to run these new features?" asks the publisher. When the developers guess that only 5% or so of the potential customers have the hardware necessary, you can guess what happens: the new features get cut, and game XYZ ships sooner rather than later.

We've seen this sort of thing happen many times – as an example, Crysis 2 shipped without DX11 support (since the consoles couldn't support that level of detail), adding it in a patch a couple months later. Other games never even see such a patch and we're left with somewhat less impressive visuals. While it's true that great graphics do not an awesome game make, they can certainly enhance the experience when used properly.

It's worth pointing out that GameWorks is not necessarily exclusive to NVIDIA hardware. While PhysX as an example was originally ported to CUDA, developers have used PhysX on CPUs for many games, and as you can see in the above slide there are many PhysX items that are supported on other platforms. Several of the libraries (Turbulence, WaveWorks, HairWorks, ShadowWorks, FlameWorks, and FaceWorks) are also listed as "planned" for porting to the latest generation of gaming consoles. Android is also a growing part of NVIDIA's plans, with the Tegra K1 effectively bringing to the mobile world the same feature set that we've had on PCs and notebooks for the past couple of years.

NVIDIA for their part wants to drive the state of the art forward, so that the customers (gamers) demand these high-end technologies and the publishers feel compelled to support them. After all, no publisher would expect great sales from a modern first-person shooter that looks like it was created 10 years ago [insert obligatory Daikatana reference here], but it's a bit of a chicken vs. egg problem. NVIDIA is trying to push things along and maybe hatch the egg a bit earlier, and there have definitely been improvements thanks to their efforts. We applaud their efforts, and more importantly we look forward to seeing better looking games as a result.

Categories: Tech

The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2

Thu, 2014-09-18 19:30

At the start of this year we saw the first half of the Maxwell architecture in the form of the GeForce GTX 750 and GTX 750 Ti. Based on the first generation Maxwell based GM107 GPU, NVIDIA did something we still can hardly believe and managed to pull off a trifecta of improvements over Kepler. GTX 750 Ti was significantly faster than its predecessor, it was denser than its predecessor (though larger overall), and perhaps most importantly consumed less power than its predecessor. In GM107 NVIDIA was able to significantly improve their performance and reduce their power consumption at the same time, all on the same 28nm manufacturing node we’ve come to know since 2012. For NVIDIA this was a major accomplishment, and to this day competitor AMD doesn’t have a real answer to GM107’s energy efficiency.

However GM107 was only the start of the story. In a deviation from their typical strategy of launching a high-end GPU first – either a 100/110 or 104 GPU – NVIDIA told us up front that they were launching at the low end first because that made the most sense for them, and that they would be following up on GM107 later this year with what at the time was being called "second generation Maxwell". Now, 7 months later and true to their word, NVIDIA is back in the spotlight with the first of the second generation Maxwell GPUs, GM204.

Categories: Tech

Hands On With ODG's R-7: Augmented Reality Glasses

Thu, 2014-09-18 14:38

While it's still unclear to me what the future of wearables will be, I must admit that all things considered I feel that glasses are a better idea than watches as a form factor. If the goal is glanceable information, a heads-up display is probably as good as it gets. This brings us to the ODG R-7, which is part of Qualcomm's Vuforia for Digital Eyewear (VDE) platform. This VDE platform brings new capabilities for augmented reality. What this really means is that developers no longer need to worry about coming up with their own system of aligning content from a VR headset to the real world, as this platform makes it a relatively simple process. Judging by the ODG R-7, there's no need for a 3D camera to pull this off.

So let's talk about the ODG R-7, one of the most fascinating wearables I've ever seen. While its primary purpose is for government and industrial use, it isn't a far leap to see the possibilities for consumers. For reference, the ODG R-7 that I saw at this show is an early rev, and effectively still a prototype. However, the initial specs have been established. This wearable has a Qualcomm Snapdragon 805 SoC running at 2.7 GHz, with anywhere from one to four gigabytes of RAM and 16 to 128 gigabytes of storage. There are two see-through 720p LCoS displays that run at a 100 Hz refresh rate. There's one 5MP camera on the front to enable the augmented vision aspects. There's also a battery on each side of the frame, for a total of 1400 mAh, likely at a 3.8V nominal voltage.

While the specs are one thing, the actual device itself is another. In person, this is clearly still a prototype as on the face it feels noticeably front heavy, which is where all of the electronics are contained. It's quite obvious that this is running up against thermal limits, as there is a noticeable heat sink running along the top of the glasses. This area gets noticeably hot during operation, and easily feels to be around 50-60C although the final product is likely to be much cooler in operation.

However, these specs aren't really what matter so much as the use cases demonstrated. While it's effectively impossible to really show what it looks like, one demo shown was a terrain map. When this was detected by the glasses, it automatically turned the map into a 3D model that could be viewed from any angle. In addition, a live UAV feed was just above the map, with the position of the UAV indicated by a 3D model orbiting around the map.

It's definitely not a long shot to guess the next logical steps for such a system. Overlaying directions for turn by turn navigation is one obvious use case, as is simple notification management, similar to Android Wear watches. If anything, the potential for glasses is greater than for watches, as glasses are much harder to notice in day to day use since they rely on gravity rather than the tension of a watch band. Then again, I could be biased, as I've worn glasses all my life.

Categories: Tech

Intel’s Haswell-EP Xeons with DDR3 and DDR4 on the Horizon?

Thu, 2014-09-18 00:30

Johan's awesome overview of the Haswell-EP ecosystem showed that the server processor line from Intel is firmly on track for DDR4 memory, along with the associated benefits of lower power consumption, higher absolute frequencies and higher capacity modules. At that point, we all assumed that all Haswell-EP Xeons using the LGA2011-3 socket were DDR4 only, requiring each new CPU to be used with the newer generation modules. However, thanks to ASRock's server team, ASRock Rack, it would seem that there will be some Xeons for sale from Intel with both DDR3 and DDR4 support.

Caught by Patrick at ServeTheHome, ASRock Rack had released their motherboard line without much of a fuss. There is nothing strange about that in itself; however the following four models were the subject of interest:

A quick email to our contacts at ASRock provided the answer: Intel is going to launch several SKUs with a dual DDR3/DDR4 memory controller. These processors are available in eight, ten and twelve core flavors, ranging from 85W to 120W:

QVL CPUs for ASRock Rack EPC612D8T

  • E5-2629 v3 – 8 cores / 16 threads, 2.4 GHz base, 20 MB L3 cache, 85W TDP
  • E5-2649 v3 – 10 cores / 20 threads, 2.3 GHz base, 25 MB L3 cache, 105W TDP
  • E5-2669 v3 – 12 cores / 24 threads, 2.3 GHz base, 30 MB L3 cache, 120W TDP

At the current time there is no release date or pricing for these DDR3 Haswell-EP processors, however it would seem that ASRock Rack is shipping these motherboards to distributors already, meaning that Intel cannot be far behind. It does offer a server team the ability to reuse the expensive DDR3 memory they already have, especially given the DDR4 premium, although the processor counts are limited.

CPU-World suggested that these processors have dual memory controllers, and we received confirmation that this is true. This could suggest that all Haswell-EP Xeons have dual memory controllers but with DDR3 disabled. Note that these motherboards would reject a DDR4-only CPU as a result of their layout. It does potentially pave the way for combination DDR3/DDR4 LGA2011-3 motherboards in the future. We have also been told that the minimum order quantity for these CPUs might be higher than average, and thus server admins will have to contact their Intel distribution network for exact numbers. This might put a halt on smaller configurations keeping their DDR3.

Source: ServeTheHome, ASRock Rack

Additional (9/25): We have been asked to make clear that these CPUs will not be on general sale for end-users. Only those companies with large minimum-order-quantities will be able to obtain the CPUs, and as a result these motherboards might find their way into complete servers only, rather than be up for sale individually. These are off-roadmap processors, not intended for general release.

Categories: Tech

The iOS 8 Review

Wed, 2014-09-17 10:00

Another year has passed and like clockwork Apple has released a new iPhone and a new version of iOS to accompany it. Our reviews of both new iPhones will be coming soon, with a look at new iOS features specific to those devices like Apple Pay, but with iOS 8 rolling out today to millions of existing iOS users across the iPad, iPhone, and iPod Touch, it's worth taking a look at what Apple is bringing to the users that are already in the iOS ecosystem. The eighth iteration of Apple's mobile operating system brings some new features, and while on the surface it may appear quite similar to iOS 7, under the hood the changes are quite significant. If iOS 7 was the biggest update for users in the seven years since the iPhone and iOS first appeared, then iOS 8 is the biggest update for developers since the launch of iOS 2.0 and the App Store. Read on for our full review.

Categories: Tech

Logitech Targets Home Automation Play with Harmony Living Home Lineup

Wed, 2014-09-17 06:00


Home Automation and Control - Setting the Stage

The increasing popularity of home automation (HA) equipment has fueled the Internet of Things (IoT) revolution. However, the low barrier to entry (there are innumerable crowdfunded projects in this space) has resulted in a very fragmented ecosystem. Interoperability is a major concern, and different devices use different protocols. In order to get a seamless experience across all home automation equipment, consumers have been forced to go the custom installation or integrated package route. These avenues tend to keep the joys of home automation and control out of reach of the average consumer.

The current market situation is ripe for someone to come in with a home automation gateway. Vendors such as Lowe's (with the Iris product line) and Staples (with the Staples Connect initiative) have made interesting forays. However, the primary aim has been to sell more connected peripherals under the same brand. Interoperability with other HA devices is not given any importance.

On the other side, we have vendors such as Securifi trying to integrate a home automation gateway into a standard wireless router with their Almond+ product. All things considered, it would be best if the wireless router at home were to act as a home automation gateway. Consumers don't need to buy yet another device to act as a gateway purely for their IoT clients. The problems would then be making sure that various HA devices can talk to the gateway and consumers have the ability to interact with all of them using one interface. Unfortunately, these aspects have contributed to Securifi delaying the retail launch of the Almond+. Under these circumstances, the slot is still open for a unified home automation controller. Logitech is hoping to fill that void with today's Harmony Living Home launch.

Logitech Harmony - A Brief Background

Logitech's Harmony lineup is very well respected in the universal remote control market. The ability of a single remote / hub device to control multiple home entertainment devices (AVR / TV / media players), coupled with one-touch control and simple setup, has been well received by consumers. In fact, Harmony's database of over 200K devices (which is also frequently updated) is unparalleled in the industry. The only downside of the units is their pricing.

Prior to today's launch, the scope of the Harmony lineup didn't go beyond control of entertainment devices in the living room. However, the current popularity of home automation devices and the IoT ecosystem (coupled with the rapid rise of mobile devices that enable easy control via apps) make the next stop for the Harmony lineup quite obvious. Logitech is launching four new product SKUs centered around a home automation gateway hub under the Harmony Living Home category:

  • Logitech Harmony Home Hub
  • Logitech Harmony Home Control
  • Logitech Harmony Ultimate Home
  • Logitech Harmony Hub Extender
Logitech Harmony Living Home Lineup - Delving Deeper

The Logitech Harmony Home Hub connects to the home network and uses RF, IR, Bluetooth and Wi-Fi to relay commands from the Harmony mobile app or the Harmony remote to all supported entertainment and automation devices. The Harmony mobile apps can work over the Internet. True remote control of the various devices in one's home from anywhere on the Internet is now possible.

Logitech Harmony Home Hub and Mobile App

Consumers can purchase the hub alone for $100 and use the full functionality with just the mobile app. As with any home automation setup, scenes can be programmed involving multiple devices from different vendors. Logitech terms these scenes as experiences.

The next 'upgrade' in the Living Home lineup is the Logitech Harmony Home Control that costs $150. This kit bundles a button-only remote with the hub described above.

Logitech Harmony Home Control and Mobile App

The remote communicates via RF, enabling the hub to be placed in a closed cabinet (if necessary). The mobile apps are obviously compatible with the hub even when the physical remote is being used. This configuration can control any number of home automation devices, but only up to eight entertainment devices.

The highest end configuration is the Logitech Harmony Ultimate Home. It is quite similar to the Harmony Home Control, except for a few updates to the remote control itself: a 2.4" color touchscreen, gesture control and additional programmability.

Logitech Harmony Ultimate Home and Mobile App

The kit including the hub and the touchscreen remote will retail for $350. This configuration can control up to fifteen entertainment devices and a virtually unlimited number of home automation devices.

In addition to the above three configurations (which will be available for purchase this month), Logitech will also be introducing the Logitech Harmony Hub Extender in December for $130. This extender will expand compatibility by allowing the hub to talk to devices that communicate using ZigBee or Z-Wave. Logitech also stressed the fact that the extender will be Thread-compatible.

Concluding Remarks

The Living Home lineup is a welcome addition to the home automation market. However, Logitech faces a few challenges. There are also a few questionable decisions that have been made with respect to the operating details.

1. Entertainment device manufacturers have typically adopted a hands-off approach after selling their wares to the consumers. As such, they don't have any issues sharing methods to control their equipment with Logitech. On the other hand, many of the IoT / home automation device makers treat their customers as recurring revenue sources by adopting subscription models. Some of them also want to tightly control the customer experience within a walled ecosystem. Under these circumstances, it is not clear how willing they would be to share their APIs with Logitech or work to make their products compatible with the Harmony platform. That said, Logitech says more than 6000 home automation devices are currently compatible with the hub, and the number is expected to keep growing.

2. Logitech is not adopting a subscription fee model for the Living Home lineup. While this is excellent news for consumers, it would be interesting to see what keeps the cloud servers for the external control aspect running in the future. It might not be a big deal for a company of Logitech's size, but it leads to another aspect - decentralized control.

3. Based on the initial information provided to us, it looks like the Logitech Living Home lineup requires the hub to be always connected to the Internet for it to control the connected devices. This makes sense for devices that currently offer cloud-based control only. But, we are at a loss to understand why devices that can be controlled via the local network itself (such as, say, the UFO Power Center from Visible Energy and the Ubiquiti mFi mPower strips) need an Internet connection when accessed through the hub while being part of the local network. In our opinion, the control logic (i.e, processing the APIs that talk to the various devices) should be resident on the hub rather than on the cloud.

4. It is not clear whether it is possible for third-party apps to talk to the hubs. Logitech does have a developer program for device makers to make their products compatible with the Harmony home hub. While Logitech indicated that the products being launched today can talk to the recently launched SmartThings and PEQ hubs, the availability of APIs for the Logitech hub itself remains an open question.

In conclusion, the launch of the Harmony Living Home lineup looks to be just what the home automation market needs. If Logitech can replicate their success with home entertainment control in this space, it solves a very important problem and will allow consumers to invest in home automation without the risk of a fragmented experience. A reputable and reliable company had to get serious about this space, and we believe Logitech has the right play here.

Categories: Tech

The New Motorola Moto X (2nd Gen) Review

Wed, 2014-09-17 06:00

While I talked about Motorola’s issues in the launch article for the new Moto X, it’s well worth repeating. Motorola has been through a lot these past few years. Once the iconic symbol of Android with their Droid smartphones, Motorola had lost its way. It wasn’t unusual to see one phone launch after the other, with no real regard for strategy, and no real cohesive message to tie all of their devices together. If anything, there was a point where Motorola had become an ODM for network operators in the US, with no real international presence. After Google acquired it in 2012, we saw the launch of the Moto X in 2013. The amount of hype that I saw online before the announcement of the Moto X was unlike anything I’ve ever seen.

Unfortunately, the device that launched didn’t quite live up to the hype. The Snapdragon S4 Pro chipset was decidedly mid-range by the time it launched. The display was good for the time, but AMOLED wasn’t yet the credible LCD replacement that it is today. The camera was also rather disappointing at launch. For better or worse, the Moto X was a phone with the right size and shape but a lot of hardware choices that aged poorly. This leads us to the new Moto X. On the surface, this phone corrects a lot of the issues that were present in the original Moto X. The new Moto X brings an SoC that is up to par with the competition, a new camera with a Sony sensor, and an improved AMOLED panel. To find out how it performs, read on for the full review.

Categories: Tech

USB Power Delivery v2.0 Specification Finalized - USB Gains Alternate Modes

Wed, 2014-09-17 05:00

The past several weeks have been a busy time for the USB 3.0 Promoter Group, with the new USB 3.1 Type-C connector detailed last month. Joshua was able to get hands-on time with the new connector at IDF last week. With support for up to 10 Gbps, a new reversible Type-C connector, and up to 100 watts of power delivery, the USB group is trying to expand the already universal connector to do much more than is possible with the current specification. To fulfill this mandate, they have now finalized the USB Power Delivery v2.0 spec and the Billboard Device Class v1.0 spec.

When USB was first introduced, the thought was that it would be primarily a data interface, with a limited amount of power delivery that was generally used to power the electronics of certain devices. The initial specification for USB only had provisions for 0.75 watts of power – 150 mA at 5 V. USB 2.0 bumped that to 500 mA, or 2.5 watts, and USB 3.0 specified 900 mA at 5 V, or 4.5 watts. All of these specifications allow for power and data transmission at the same time. In addition, there was also a Battery Charging specification which allows up to 1.5 A at 5 V for a maximum of 7.5 watts, but with no data transmission available. The jump from 7.5 watts to the new specification's 100 watts is a huge increase, and one that cannot be achieved with just a larger current limit as was done in previous versions of USB. Version 3.1 now supports 5 V, 12 V, and 20 V on the pins to allow higher power output without excessive current, and even the current limit has been raised to a maximum of 5 A, which is much higher than before.
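
As a quick sanity check, the wattages above are simply volts multiplied by amps. The short snippet below just restates the numbers quoted in this article, with 20 V at 5 A as the combination that reaches the 100 W maximum:

# Simple watts = volts x amps check of the figures quoted above.
# The voltage/current pairings are the article's headline numbers, not an
# exhaustive list of the Power Delivery spec's profiles and rules.

levels = [
    ("USB 1.x (as quoted)",          5.0, 0.150),
    ("USB 2.0",                      5.0, 0.500),
    ("USB 3.0",                      5.0, 0.900),
    ("Battery Charging (no data)",   5.0, 1.500),
    ("USB PD maximum",              20.0, 5.000),
]

for name, volts, amps in levels:
    print(f"{name:28s} {volts:>5.1f} V x {amps:.3f} A = {volts * amps:6.2f} W")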

The inelegant USB 3.0 Micro-B connector

USB Power Delivery is designed to increase the flexibility of USB by providing enough power for many more devices while still allowing data transfer at the same time. A couple of further changes make it more flexible still. First, the direction of power delivery is no longer fixed. Imagine a tablet with a keyboard attached. The keyboard can have a battery, and that battery can be charged through the data connection, but when the tablet is unplugged from its charger, the power flow can reverse and the tablet can be powered by the keyboard. Another example is a laptop with six USB ports: they can be used for peripherals, or a USB charger can be connected to any of them to charge the laptop. Dedicated charging connectors will no longer be required.

The reversible USB 3.1 Type-C connector

Another change is that all devices must now negotiate the amount of power required, and that allocation can be renegotiated if another device requires additional power. A good example: you have a laptop and are charging your phone on one of its USB ports. The phone pulls the maximum amount of power it can in order to charge quickly. If you then plug in a USB RAID array, it will need additional power at the start to get all of the disks spinning, but its draw can then drop to a steady state. The system can lower the power delivered to the phone, provide it to the RAID array, and then move it back to the phone when the power is available again.
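
To illustrate the renegotiation idea, here is a minimal, hypothetical sketch of a host shuffling a fixed power budget between two sinks. The class, device names, and wattages are invented for illustration and are not part of the Power Delivery protocol itself:

# Minimal sketch of budget-based renegotiation (illustrative only).

class PowerBudget:
    def __init__(self, total_watts: float):
        self.total = total_watts
        self.grants = {}  # device name -> granted watts

    def available(self) -> float:
        return self.total - sum(self.grants.values())

    def request(self, device: str, watts: float) -> float:
        """Grant as much of the request as the remaining budget allows."""
        self.grants.pop(device, None)
        granted = min(watts, self.available())
        self.grants[device] = granted
        return granted

hub = PowerBudget(total_watts=60.0)
print(hub.request("phone", 15.0))        # phone charges at full rate: 15.0
print(hub.request("raid_array", 50.0))   # spin-up needs more than is free: 45.0
print(hub.request("phone", 5.0))         # host trims the phone's grant: 5.0
print(hub.request("raid_array", 50.0))   # now the array gets its spin-up power: 50.0
print(hub.request("raid_array", 20.0))   # steady state frees power again: 20.0
print(hub.request("phone", 15.0))        # phone goes back to fast charging: 15.0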

The final key is that the Power Delivery specification is not just for power, nor is it just for USB. The specification allows Alternate Modes to be defined, and the system can negotiate to enable these modes, enter them, and exit them. These modes will be defined outside the scope of USB-IF specifications using Structured Vendor Defined Messages. This makes it possible to reconfigure some of the pins a USB Type-C connector exposes, letting the cable be used for many different purposes rather than just for USB.

This leads us to the second specification – the Billboard Device Class. This specification outlines the methods used to communicate the Alternate Modes supported by the Device Container to a host system. It includes descriptors which can be used to provide support details in a human-readable format. What it does not contain is the methodology for switching to the Alternate Mode – that is handled by the Power Delivery specification itself. The Billboard Device Class allows a device which supports an Alternate Mode to connect to a host which does not support that mode and then inform the user why it does not work, rather than failing silently; for this reason all Billboard devices must support USB 2.0 as a minimum.
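
As a purely hypothetical illustration of the intent (not the actual descriptor layout, which is defined in the Billboard Device Class spec), a host that lacks support for an advertised Alternate Mode could use the human-readable strings to tell the user what is missing instead of failing silently:

# Hypothetical illustration only -- not the real Billboard descriptor format.
# The idea: a device advertises human-readable descriptions of its Alternate
# Modes so an unsupporting host can explain the situation to the user.

from dataclasses import dataclass
from typing import List

@dataclass
class AdvertisedAltMode:
    vendor_id: int      # identifier of the mode's owner (assumed field)
    description: str    # human-readable string the host can display

@dataclass
class BillboardInfo:
    product: str
    alt_modes: List[AdvertisedAltMode]
    more_info_url: str  # where the user can learn what support is required

def explain_unsupported(host_supported_ids, info: BillboardInfo) -> None:
    """Print a user-facing message for modes the host cannot enter."""
    for mode in info.alt_modes:
        if mode.vendor_id not in host_supported_ids:
            print(f"{info.product}: this host does not support "
                  f"'{mode.description}'. See {info.more_info_url}")

# Example: a made-up dock advertising a display Alternate Mode.
dock = BillboardInfo(
    product="USB-C Dock (example)",
    alt_modes=[AdvertisedAltMode(0x1234, "Example display Alternate Mode")],
    more_info_url="http://example.com/support",
)
explain_unsupported(host_supported_ids={0xFF00}, info=dock)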

This new framework could open the ubiquitous USB cable up to an entirely new array of devices and functions. One possibility that the USB-IF mentions in the specification is a theoretical means for PCI-E over USB. I’ve already given the example of a tablet with a battery in the keyboard, but a laptop could also be connected to a monitor that charges the laptop at the same time. Much more power-hungry devices can be connected to a USB port as well, including printers and high-speed storage. All of this was defined with graceful failure in mind, so the user is not left wondering why a device has stopped functioning.

The new Alternate Modes have a lot of potential, and with the increased power capabilities of USB 3.1 and the Power Delivery specification, it will be very interesting to see how future devices take advantage of these capabilities.

Categories: Tech

Testing SwiftKey for iOS 8

Tue, 2014-09-16 18:00

Earlier this week SwiftKey announced that they hope to have their keyboard available on the App Store when iOS 8 finally rolls out to users worldwide. They've been kind enough to provide us with a beta version of the SwiftKey keyboard for testing, along with some insight into developing extensions on iOS 8 and their hopes for SwiftKey's future on both iOS and Android.

Assuming that nothing delays Apple's approval for the app, the first step for users who want to get SwiftKey after upgrading to iOS 8 will be to download it from the App Store. Due to the nature of Apple's implementation of extensions, even applications on iOS 8 that are essentially just extensions meant to run in other apps must have a container application that gets placed on the home screen. In the case of SwiftKey this is fine, as the application is home to settings for SwiftKey Cloud, languages, and themes.

Once you've installed SwiftKey from the App Store, you'll need to go into the Settings app to add it. With the app installed, the SwiftKey keyboard will show up under the third-party keyboards header in the section for adding a new keyboard. Once you've done this you'll be ready to start using SwiftKey. However, you may notice that there's one very important thing missing.

SwiftKey immediately after installation on an iPad

As you can see above, the bar for predictions is covered by a banner asking the user to enable "full access." This is because of the sandboxing that Apple applies to third-party keyboards on iOS. Third-party keyboards are, by default, placed in an extremely restrictive sandbox. They are unable to get information about what words are being entered or what content is in an app, and they are unable to access the network to do server-side prediction. These measures are in place to protect the user's privacy and security. To enable extended functionality, Apple allows users to enable full access for third-party keyboards in the Settings app. By doing this, SwiftKey is able to see what characters are being typed and feed them into their prediction and correction technology, which learns how a user types and what mistakes they make.

As far as the design of the keyboard goes, its layout is essentially the same as the stock iOS keyboard, so there's no real learning curve. Users who are familiar with the iOS keyboard can start using SwiftKey and feel at home right off the bat. iPad users will notice that the keys are larger than on the stock keyboard, which in my experience made them easier to hit, and the reduced spacing between keys didn't cause me to hit the wrong ones. In addition to its accuracy, it's also extremely responsive. iOS 8 on the third-generation iPad I was using for testing has an enormous delay between when you touch a key and when it actually registers; SwiftKey has no such issue, and for that reason alone it has become my daily keyboard on the iPad. As for its appearance, the keyboard currently only comes with the nickel dark and nickel light themes that you see above.

SwiftKey's application, much like the keyboard itself, fits in very well with the visual style and design conventions of iOS. It's obvious that some care has been put into making it more than a port of the Android version of SwiftKey. The application is home to all the settings for the keyboard, including themes, languages, and SwiftKey Cloud. Not all the settings from the Android version have been brought over yet; layout options and additional themes are two notable omissions. However, this is a first release, and I've been told that there will definitely be updates as time goes on.

With iOS 8 including Apple's QuickType keyboard, which has similar correction and suggestion functionality, some users may be wondering why they even need a third-party keyboard like SwiftKey. For me the most obvious reason is SwiftKey Cloud. My current setup is an HTC One (M7) and an iPad. Using SwiftKey on my phone but the stock iOS keyboard on the iPad would mean that the information each keyboard learns about how I type would not be available to both keyboards. With SwiftKey on iOS and SwiftKey Cloud, all the information SwiftKey has collected about how I type and what mistakes I often make is available on both of my devices. This is a huge advantage for people who have mobile devices running multiple operating systems.

Another arguable advantage is that SwiftKey has had its keyboard available for four years and specializes only in keyboards. It's not unreasonable to think that SwiftKey's technology for learning how a user types may be more mature than Apple's, which is just now being introduced with iOS 8.

The last advantage simply comes down to features. SwiftKey, and other third-party keyboards, can always offer more features than Apple does due to their focus on keyboards alone and the ability to ship updates whenever necessary via the App Store. One notable feature that SwiftKey offers is Flow, a method of typing by sliding your finger from key to key. Users of Swype by Nuance will be familiar with how SwiftKey Flow works. Unfortunately, I was unable to test Flow because the iPad version of the keyboard does not include it. This is due to the memory limit imposed on extensions: the functionality and visual effects of SwiftKey Flow on an iPad exceed that limit, which is why only the version for iPhone and iPod Touch includes it. Flow is most useful for one-handed use on a phone, so I'm not heartbroken by its omission from the iPad version. That being said, I am still hopeful that SwiftKey will be able to do further optimization to eventually bring Flow to the iPad.

Overall, I'm very happy with how SwiftKey has turned out on iOS 8. The SwiftKey Note application made it clear that the company has wanted to bring its keyboard to iOS for some time and had simply been blocked by the lack of third-party keyboard support. The fact that it fixes the keyboard lag on my iPad and synchronizes what it has learned across iOS and Android is enough to make me very happy to use SwiftKey on iOS. I'm very excited to see what future updates bring.

SwiftKey should be available for free on the App Store not long after the release of iOS 8. The application is subject to Apple's approval, and so there could be some delay in its availability to users.

Update: SwiftKey is approved and available on the App Store now. Grab it here.

Categories: Tech

SanDisk Ultra II (240GB) SSD Review

Tue, 2014-09-16 11:00

For nearly two years Samsung was the only manufacturer with an SSD that utilized TLC NAND. Most of the other manufacturers had talked about TLC SSDs in one way or another, but nobody had come up with anything retail-worthy until now. A month ago SanDisk took the stage and unveiled the Ultra II, the company's first TLC SSD and the first TLC SSD not made by Samsung. Read on for our full review.

Categories: Tech

Samsung's Exynos 5433 is an A57/A53 ARM SoC

Tue, 2014-09-16 08:40

There has been a lot of confusion over the last few weeks about what exactly Samsung's Exynos 5433 is. Joshua and I were pretty much convinced that it was a standard big.LITTLE A15/A7 configuration due to naming consistencies and evidence in firmware. Even though the Note 4 was already announced with region-specific models employing this chip, Samsung S.LSI has yet to divulge any kind of official information on the matter or even publicly announce the chip.

With the release of new source code, we can now confirm that the Exynos 5433 is indeed the first Cortex A57/A53 SoC to come to market. The part employs a 4x Cortex A57 + 4x Cortex A53 big.LITTLE CPU configuration; here's a brief overview of what we currently know:

Samsung Exynos 5 Octa 2014 Lineup

SoC                 Samsung Exynos 5422            Samsung Exynos 5430            Samsung Exynos 5433
CPU                 4x Cortex A7 r0p5 @ 1.3GHz     4x Cortex A7 r0p5 @ 1.3GHz     4x Cortex A53 @ 1.3GHz
                    4x Cortex A15 r2p4 @ 1.9GHz    4x Cortex A15 r3p3 @ 1.8GHz    4x Cortex A57 r1p0 @ 1.9GHz
Memory Controller   2x 32-bit @ 933MHz             2x 32-bit @ 1066MHz            2x 32-bit @ 825MHz
                    (14.9GB/s b/w)                 (17.0GB/s b/w)                 (13.2GB/s b/w)
GPU                 Mali T628MP6 @ 533MHz          Mali T628MP6 @ 600MHz          Mali T760MP? @ 700MHz
Mfc. Process        Samsung 28nm HKMG              Samsung 20nm HKMG              Samsung 20nm HKMG
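
As a sanity check on the bandwidth figures in the table, the numbers work out to two 32-bit channels transferring on both clock edges at the listed memory clock:

# Bandwidth check for the table above: 2 channels x 32-bit (4 bytes)
# x 2 transfers per clock (DDR) x the listed memory clock.

def dual_channel_bw_gbps(clock_mhz: float, channels: int = 2, bus_bytes: int = 4) -> float:
    return channels * bus_bytes * 2 * clock_mhz * 1e6 / 1e9

for soc, mhz in [("Exynos 5422", 933), ("Exynos 5430", 1066), ("Exynos 5433", 825)]:
    print(f"{soc}: {dual_channel_bw_gbps(mhz):.1f} GB/s")
# -> 14.9, 17.1 and 13.2 GB/s, in line with the 14.9 / 17.0 / 13.2 figures quoted.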

The big question is why Samsung chose to name this chip Exynos 5433 rather than market it as a 64-bit chip in a new product lineup. The answer could simply be that we won't ever see the 5433 running in AArch64 mode. The chip's firmware and drivers run on a "CAL" (Chip Abstraction Layer) at the lowest level of the driver stacks. In fact, beyond the CPU cores (and GPU), the Exynos 5433 looks very similar to the Exynos 5430, which employs A15/A7 cores.

While we won't be seeing the Exynos 5433 running 64-bit code any time soon, it still takes advantage of the architectural improvements of ARM's Cortex A57 and A53 cores and their ARMv8 instruction set (running in AArch32 mode). Power consumption should also improve thanks to the new A50-series cores' power management and retention features. The silicon, like the 5430's, is manufactured on Samsung's new 20nm process.


Atlas (A57) and Apollo (A53) cores in the power management drivers

Also employed for the first time is ARM's new Mali T760 GPU, running at 700MHz. We have already published an architectural dive into the T760 detailing what's new. I wasn't able to determine the number of shader cores in this GPU because ARM's driver architecture is transparent and scalable with respect to core count; that is something we'll have to wait for until the eventual official announcement or a hands-on investigation.

While the Exynos 5433 seems to be little more than a "brain transplant" in terms of SoC design, the newer Exynos 7 chip is a genuinely new part. Over the last three weeks Samsung has been busy submitting patches to the Linux kernel mailing lists adding support for the new SoC lineup. The Exynos 7420 seems to be on track for Samsung's next flagship lineup, this time in full 64-bit AArch64 mode with Android L. Details of the chip are still sparse, but we'll be seeing the same A57/A53 CPU combination together with a Mali T760, fed by an LPDDR4-capable memory controller.

The Exynos 5433 is definitely a surprise that many didn't expect. Qualcomm's Snapdragon 810 isn't officially due until Q1 2015, and we don't yet know when we'll see it in consumer devices, so Samsung has quite a lead here as the Note 4 variants with the 5433 ship in the coming weeks. While I'm still a bit perplexed at Samsung's silence and lack of announcements, the fact that many regions get a Snapdragon 805 in the Note 4 may have something to do with it, as Samsung wouldn't want to cause buyer's remorse.

Categories: Tech

Micron Launches M600 Client SSD for OEMs/SIs

Tue, 2014-09-16 07:00

Micron/Crucial has been one of the go-to manufacturers for value client SSDs over the past couple of years, but the one thing the company has lacked is a higher-performing solution. The M600 being released today is Micron's answer to the demand for a more high-end SSD.

The M600 is positioned in the client segment above the M550, making it the highest-end drive that Micron offers for client workloads. Note that the M600 is a Micron-only product and is only available to OEMs and SIs, meaning there is not going to be a Crucial-branded retail counterpart and you will not be seeing the M600 on store shelves. Micron and Crucial have separated some parts of their product development because the needs of the OEM and retail markets are a bit different, so from now on the difference between Micron and Crucial SSDs will be more than just the label. The engineering core should still be the same, though, and some of the features introduced in the M600 will find their way to Crucial-branded SSDs too.

Micron M600 Specifications

Capacity                         128GB         256GB         512GB         1TB
Controller                       Marvell 88SS9189
NAND                             Micron 128Gbit 16nm MLC
Form Factors                     2.5" 7mm, mSATA & M.2 2260/2280 (1TB: 2.5" 7mm only)
Sequential Read                  560MB/s       560MB/s       560MB/s       560MB/s
Sequential Write                 400MB/s       510MB/s       510MB/s       510MB/s
4KB Random Read                  90K IOPS      100K IOPS     100K IOPS     100K IOPS
4KB Random Write                 88K IOPS      88K IOPS      88K IOPS      88K IOPS
Idle Power (DevSleep / Slumber)  2mW / 95mW    2mW / 100mW   2mW / 100mW   3mW / 100mW
Max Power                        3.6W          4.4W          4.7W          5.2W
Encryption                       TCG Opal 2.0 & eDrive
Endurance                        100TB         200TB         300TB         400TB
Warranty                         Three years

The M600 is available in four form factors: 2.5" 7mm, mSATA and both 2260 and 2280 flavors of M.2. The 2260 is double-sided, whereas the 2280 is single-sided, which explains why both max out at 512GB and the 1TB model is only available as a 2.5" 7mm drive.

The controller remains unchanged from the M550, but the M600 switches to Micron's latest 128Gbit 16nm NAND, like the MX100 did a few months ago. Despite having the same hardware as the MX100, the M600 is a different product. The most important new feature in the M600 is what Micron calls Dynamic Write Acceleration.

Dynamic Write Acceleration (DWA) is Micron's implementation of a pseudo-SLC cache. Instead of being static with a predetermined amount of NAND set in SLC mode, DWA is dynamic and can switch NAND between SLC and MLC on the fly. In other words, an empty SSD will run nearly all of its NAND in SLC mode to increase performance, and the size of the SLC cache decreases as the drive is filled. At 95% full, most of the NAND will be running in MLC mode to meet the advertised user capacity, but the size of the SLC cache is still comparable to competitors' static SLC caches. DWA is transparent to the user, so the reported capacity of the drive does not change -- the drive itself manages the switch between SLC and MLC in the background.
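
Micron has not published the firmware policy behind DWA, but the basic accounting can be sketched as follows. The assumption that all free capacity (minus a small reserve) can be run as SLC, and the 4GB reserve itself, are illustrative guesses rather than Micron's numbers:

# A minimal model of the dynamic pseudo-SLC idea described above. Units are
# "MLC gigabytes": NAND run in SLC mode holds half as much data per cell.
# The policy (all free capacity minus a reserve) is an illustrative assumption.

def max_slc_cache_gb(advertised_gb: float, stored_gb: float, reserve_gb: float = 4.0) -> float:
    """Largest SLC cache (in data GB) that still lets the drive hold its advertised capacity.

    Caching S data-GB in SLC mode consumes 2*S GB of MLC-mode capacity, so the
    drive can still reach advertised_gb of user data only while
    stored_gb + S <= advertised_gb (ignoring over-provisioning).
    """
    return max(0.0, advertised_gb - stored_gb - reserve_gb)

for fill in (0.0, 0.25, 0.50, 0.75, 0.95):
    drive, used = 256.0, 256.0 * fill
    print(f"{fill:4.0%} full -> up to {max_slc_cache_gb(drive, used):6.1f} GB of SLC cache")

Even at 95% full, this simplified model leaves a few gigabytes of SLC cache, which is in line with the claim that the cache remains comparable to static caches on nearly full drives.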

For small drives the increased size and dynamic nature of the SLC cache can be especially beneficial. With only 128GB or 256GB of NAND and each die being 16GB, the lower program and read latencies of pseudo-SLC make a big difference, particularly to write performance. As a result even the 128GB model achieves peak speeds of 410MB/s sequential write and 88K random write IOPS. The 512GB and 1TB 2.5" models do not use DWA at all because they have enough NAND to provide the same performance without an SLC cache, but all mSATA and M.2 models utilize DWA (including the 512GB ones).

Aside from the increased performance, the other benefit of the pseudo-SLC cache is increased endurance. The 128GB model is rated at 100TB, which is a fair increase over the 72TB rating of Micron's previous drives. The rating also scales with capacity now, so the 1TB model is rated for up to 400TB. Keep in mind that the M600 is only validated for typical client usage, which allows for higher ratings because write amplification will be lower under a lighter workload.
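
For perspective, converting those ratings into drive writes per day over the three-year warranty is simple arithmetic on the quoted figures, not a number Micron publishes:

# Rough drive-writes-per-day math over the three-year warranty,
# based only on the TBW figures quoted in the table above.

warranty_days = 3 * 365

for capacity_gb, tbw in [(128, 100), (256, 200), (512, 300), (1024, 400)]:
    gb_per_day = tbw * 1000 / warranty_days          # decimal GB for simplicity
    dwpd = gb_per_day / capacity_gb
    print(f"{capacity_gb:>5} GB: {tbw} TB total -> ~{gb_per_day:.0f} GB/day "
          f"(~{dwpd:.2f} drive writes per day)")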

In addition, the M600 includes the usual Micron/Crucial feature set: DevSleep, TCG Opal and eDrive support, as well as power-loss protection. The warranty is three years, the same as Micron's other client SSDs.

The M600 is available now (though only to OEMs and SIs) and we already have samples, but there is a separate embargo for reviews, so look for our full review in the next couple of weeks.

Categories: Tech