Anandtech


Intel Broadwell Architecture Preview: A Glimpse into Core M

Mon, 2014-08-11 09:01

Typically we would see Intel unveil the bulk of the technical details of their forthcoming products at their annual Intel Developer Forum, and with the next IDF scheduled for the week of September 9th we’ll see just that. However, today Intel will be breaking from their established practice a bit by not waiting until IDF to deliver everything at once. In a presentation coinciding with today’s embargo, dubbed Advancing Moore’s Law in 2014, Intel will be offering a preview of sorts for Broadwell and the 14nm process.

Today’s preview and Intel’s associated presentation are going to be based around the forthcoming Intel Core M microprocessor, the Broadwell configuration otherwise known as Broadwell-Y. The reason for this is a culmination of several factors, and in all honesty it’s probably driven as much by investor relations as it is consumer/enthusiast relations, as Intel would like to convince consumers and investors alike that they are on the right path to take control of the mobile/tablet market through superior products, superior technology, and superior manufacturing. Hence today’s preview will be focused on the part and the market Intel feels is the most competitive and most at risk for the next cycle: the mobile market that Core M will be competing in.


AMD’s Big Bet on ARM Powered Servers: Opteron A1100 Revealed

Mon, 2014-08-11 09:00

It has been a full seven months since AMD released detailed information about its Opteron A1100 server CPU, and twenty-two months since its announcement. Today, at the Hot Chips conference in Cupertino, AMD revealed the final missing pieces of its ARM powered server strategy, headlined by the A1100. One thing is certainly clear: AMD is betting heavily on ARM powered servers by delivering one of the most disruptive server CPUs yet, and it is getting closer to launch.


Khronos Announces Next Generation OpenGL Initiative

Mon, 2014-08-11 06:02

As our regular readers have already seen, 2013 and 2014 have been one of the most significant periods for graphics APIs in years. While OpenGL and Direct3D have not necessarily been stagnant over the last half-decade or so, both APIs have been in a mature phase where both are stable products that receive relatively minor feature updates as opposed to more sweeping overhauls. Since reaching that stability there has been quite a bit of speculation over what would come next – or indeed whether anything would come next – and in the last year we have seen the answer to that in a series of new graphics APIs from hardware and software vendors alike.

In all of these announcements thus far, we have seen vendors focus on similar issues and plan to enact similar solutions. AMD’s Mantle, Microsoft’s Direct3D 12, and Apple’s Metal all reflect the fact that there is a general consensus among the graphics industry over where the current bottlenecks lie, where graphics hardware will be going in the future, and where graphics APIs need to go in response to these issues. The end result has been the emergence of several new APIs, all meaningfully different from each other but nonetheless all going in the same direction and all implementing the same style of solutions.

That common solution is a desire by all parties to scrape away the abstraction that has defined high level graphics APIs like Direct3D and OpenGL for so much of their lives. As graphics hardware becomes more advanced it has become more similar and more flexible; the need to abstract and hide the differences between GPU architectures has become less important, and the abstraction itself has become the issue. By removing the abstraction and giving developers more direct control over the underlying hardware, these next generation APIs aim to improve performance, ease API implementation, and give developers more flexibility than ever before.

It’s this subject which brings us to today’s final announcement from Khronos. At 22 years old OpenGL is the oldest of the 3D graphics APIs in common use today, and in 2014 it is facing many of the same issues as the other abstraction-heavy APIs. OpenGL continues to serve its intended purposes well, but the need for a lower level (or at least more controllable) API exists in the OpenGL ecosystem as much as it does in any other ecosystem. For that reason today Khronos is announcing the Next Generation OpenGL Initiative to develop the next generation of OpenGL.

The Next Generation OpenGL Initiative

For the next generation of OpenGL – which for the purposes of this article we’re going to shorten to OpenGL NG – Khronos is seeking nothing less than a complete ground up redesign of the API. As we’ve seen with Mantle and Direct3D 12, outside of shading languages you cannot transition from a high level abstraction based API to a low level direct control based API within the old API; these APIs must be built anew to support this new programming paradigm, and at 22 years old OpenGL is certainly no exception. The end result is that this is going to be the most significant OpenGL development effort since the creation of OpenGL all those years ago.

By doing a ground up redesign Khronos and its members get to throw out everything and build the API that they will need for the future. We’ve already covered the subject of low level APIs in great detail over the past year, so we’ll defer to our past articles on Mantle and Direct3D 12. But in short the purpose of OpenGL NG is to build a lower level API that gives developers explicit control over the GPU. By doing so developers will be able to achieve greater performance by directly telling the GPU what they want to do, bypassing both the CPU overhead of abstraction and the GPU inefficiencies that come from indirectly accessing a GPU through an API. This is especially beneficial in the case of multithreading – something that has never worked well with high-level APIs – as it’s clear that single-threaded CPU performance gains have slowed and will continue to be limited over the coming years, so multithreading is becoming functionally mandatory in order to avoid CPU bottlenecking.
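To make the multithreading point concrete, here is a minimal sketch of the submission pattern these low level APIs enable. OpenGL NG's actual interface is undefined at this point, so the Device and CommandList types and every function name below are hypothetical, invented purely to illustrate per-thread command recording with a single cheap submission step.

```cpp
#include <thread>
#include <vector>

// Hypothetical low-level API; the names are invented, the pattern is what
// matters: each thread records its own command list, and submission is a
// single lightweight call rather than a driver-serialized stream of state.
struct CommandList { std::vector<int> encodedDraws; };

struct Device {
    // Recording is thread-safe because it touches no global bind state.
    CommandList record(int firstObject, int lastObject) {
        return CommandList{{firstObject, lastObject}};
    }
    void submit(const std::vector<CommandList>& lists) { /* hand off to GPU */ }
};

void renderFrame(Device& dev, int objectCount, int threadCount) {
    std::vector<CommandList> lists(threadCount);
    std::vector<std::thread> workers;
    const int perThread = objectCount / threadCount;
    for (int i = 0; i < threadCount; ++i) {
        workers.emplace_back([&dev, &lists, i, perThread] {
            // Draw validation and encoding happen here, on worker threads,
            // instead of being funneled through one API/driver thread.
            lists[i] = dev.record(i * perThread, (i + 1) * perThread);
        });
    }
    for (auto& w : workers) w.join();
    dev.submit(lists); // the only serialized step in the frame
}
```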

With that said however, along with the common performance case OpenGL NG also gives Khronos and its members the chance to fix certain aspects of OpenGL and avoid others, typically legacy cruft from earlier generations of the API and times where the hardware was far more limited. For example OpenGL NG will be a single API for desktop and mobile devices alike – there will not be an ES version of OpenGL NG, the desktop and mobile will be unified. As mobile GPUs have nearly caught up in functionality with desktop GPUs and OpenGL NG is a clean break, there is no need to offer separate APIs on the basis of legacy support or hardware feature gaps. There will be just one modern OpenGL: OpenGL NG.

Khronos will also be using this opportunity to establish a common intermediate representation shading language for OpenGL. The desire to offer shader IRs is a recurring theme for Khronos as both OpenGL and OpenCL were originally designed to have all shader programs distributed in either source form or architecture-specific binary form. However for technical reasons and IP protection reasons (avoiding trivial shader source code theft), developers want to be able to distribute their shaders in a compiled IR. For OpenCL this was solved with SPIR, and for OpenGL this will be solved in OpenGL NG.
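To illustrate the distribution difference an IR makes, below is a small hypothetical sketch: the shader-to-IR compile happens offline on the developer's machine, and the application ships and loads only the opaque blob. The createShaderFromIR call is a made-up placeholder, as OpenGL NG's actual entry points do not yet exist.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

// Loads a shader blob that was compiled to IR offline, so the readable
// shader source never ships to end users.
std::vector<uint8_t> loadBlob(const char* path) {
    std::ifstream f(path, std::ios::binary);
    return std::vector<uint8_t>(std::istreambuf_iterator<char>(f),
                                std::istreambuf_iterator<char>());
}

int main() {
    std::vector<uint8_t> ir = loadBlob("shader.ir"); // produced at build time
    // Hypothetical runtime entry point; OpenGL NG has not defined one yet:
    // GLuint shader = createShaderFromIR(ir.data(), ir.size());
    return ir.empty() ? 1 : 0;
}
```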

Finally, the importance of the clean break afforded by OpenGL NG can’t be overstated. At 22 years old OpenGL is huge; between its Core and Compatibility profiles it covers the history of GPUs since the beginning. As a result a complete OpenGL implementation is incredibly complex, making it difficult to write and maintain, and making it even more difficult to do conformance testing against such an implementation.

OpenGL NG’s clean break means all of that legacy cruft goes away, which is something that will be a significant boon over the long run for hardware vendors. Like Mantle and Direct3D 12, OpenGL NG is expected to be a very thin API – after all, it doesn’t need much code if it’s just acting as a shim between the hardware and software developers – which means OpenGL NG will be much easier to implement and test. The sheer size of OpenGL has been a problem that has been brewing for many years and previous efforts to deal with it have faltered (Longs Peak), so finally being able to do a clean break is a very big deal for the consortium.

Building a Consortium and a Consensus

Of course the fact that this is a consortium effort is going to be the final piece of the puzzle, as this means the development of OpenGL NG will have to be approached in a different manner than any of the other forthcoming APIs. Microsoft for their part works with their hardware partners, but at the end of the day they still have the final say in the development of Direct3D. Meanwhile AMD specifically developed Mantle on their own so that they could develop it to their needs and abilities without the compromises and politicking that comes from a consortium effort. Khronos on the other hand is the consortium – the organization’s goals to offer open, cross-vendor APIs means that they need to take into consideration the technical and political considerations of all of their members on both the software and hardware sides.

Because OpenGL NG is still in early development, from a technical perspective it’s impossible to say just what the final API will look like. However one thing that Khronos is making clear early on is that because they’re going to be cross-vendor and cross-platform, they expect that OpenGL NG won’t be quite as low level as some of the other APIs we’ve seen. The goal for OpenGL NG is to offer the explicit control benefits of a low level API while maintaining the broad reach of an open standard, and that means that whatever form OpenGL NG takes will require it to be a bit higher level than the other APIs (e.g. Mantle). Khronos for their part is confident that they can still deliver on their desired goals even without being quite so low level, so it will be interesting to see just how OpenGL NG, Mantle, and Direct3D 12 compare and contrast once all of those APIs are released.

This focus on portability means that OpenGL NG will also be a more tightly defined specification than any OpenGL before it. Poorly defined or undefined aspects of OpenGL have led to slightly inconsistent implementations in the past, and even though this has improved in recent versions of the API, even the stripped down OpenGL ES still has areas where there are compatibility issues due to differences in how the standard is interpreted and implemented. With a clean break and a much smaller API overall, Khronos has made it a goal for OpenGL NG to be fully portable and the specification completely unambiguous, so that all implementations implement all functions identically. This is something Khronos has been able to do with WebGL, and now they tell us that they believe that they have to do the same for OpenGL NG in order for it to succeed.


Recent Timeline of OpenGL Releases

But perhaps more daunting than the consortium’s technical considerations are the consortium’s political considerations. Khronos has attempted to overhaul OpenGL once before in 2007’s failed Longs Peak initiative, with the consortium ultimately unable to come to a consensus and Longs Peak being put to rest in favor of the more iterative OpenGL 3.0. There are a number of reasons for this failure including technical disagreements and concerns over backwards compatibility with existing software, but at the end of the day Khronos can only move forward when there’s a consensus among its members, something they didn’t have for Longs Peak.

Learning from Longs Peak and desiring to avoid another failure this time around, Khronos is being far more inclusive in the development of OpenGL NG, working to include as many software and hardware developers as they can. This is why OpenGL NG is still an initiative and is in all likelihood some time off – design by committee projects will always take longer than solo efforts – so today’s announcement is as much an invitation to additional developers as it is Khronos describing a new API. Khronos has made it clear that they want to get it right this time, and that means getting all of the major players invested in the initiative.

At this point the single hardest sell for Khronos and the members backing the initiative will be the clean break. This is a large part of what doomed Longs Peak, and Khronos admits that even now this isn’t going to be easy; even a year ago they may not have been able to get consensus. However as Mantle, Metal, and Direct3D 12 have made their own cases for new APIs and/or clean breaks, Khronos tells us that they believe the time is finally right for a clean break for OpenGL. They believe there will be consensus on the clean break, they know that consensus must be genuine, and they have been passionately making their case to the consortium members.

To that end the OpenGL NG participant list is quickly becoming a who’s who of the graphics industry, both hardware and software. NVIDIA, AMD, Intel, ARM, and more of the major hardware players are all participating in the initiative, and on the software side members include everyone from Google to Apple to Valve. Khronos tells us that they have been especially impressed with the participation from software vendors, who haven’t always been as well represented in past efforts. As a result Khronos tells us that they feel there is more energy and excitement than in any past working group, even the burgeoning OpenGL ES working group.

Ultimately OpenGL NG will be a long and no doubt heated development process, but Khronos seems confident that they will get the consensus they need. Once they can sell the clean break, developing the API itself should be relatively straightforward. Due to its direct-access nature and the relatively few functions such an API would need to provide, the biggest hurdle is at the beginning and not the end.


OpenGL SIGGRAPH 2014 Update: OpenGL 4.5, OpenGL ES 3.1, & More

Mon, 2014-08-11 06:01

Taking place this week is SIGGRAPH 2014, the graphics industry’s yearly professional event. As the biggest graphics event of the year this show has become the Khronos Group’s favorite venue for delivering news about the state and development of OpenGL, and this year’s show is no exception. This week will see Khronos delivering news about all of their major OpenGL initiatives: OpenGL, OpenGL ES, and WebGL, taking to the show to announce a new version of their core graphics API while also delivering updates on recent advancements in its offshoots.

OpenGL 4.5 Announced

Kicking things off, we’ll start with the announcement of the next iteration of OpenGL, OpenGL 4.5. As has become customary for Khronos, they are issuing their yearly update for OpenGL 4 at SIGGRAPH, further iterating on the API by integrating some additional features into the OpenGL core standard. By continually updating OpenGL in such a fashion Khronos has been able to respond to developer requests relatively quickly and integrate features into the OpenGL core as policy/standard issues are settled. On the broader picture, however, it does mean that as OpenGL 4 approaches maturity/completeness, these features become a bit more niche, as the major issues have since been solved.

To that end OpenGL 4.5 will see a small but important set of feature additions to the standard. The bulk of these changes have to deal with API alignment, with Khronos making changes to better align OpenGL with OpenGL ES, WebGL, and Direct3D 11. In the case of OpenGL ES, OpenGL 4.5 brings the two APIs back in alignment by updating the API to match the changes from this year’s release of OpenGL ES 3.1. Khronos intends for OpenGL to remain a superset of OpenGL ES, allowing OpenGL devices to run applications targeting OpenGL ES, and allowing OpenGL ES developers to do their initial development and testing on desktops as opposed to having to stick to OpenGL ES-only devices.

Elsewhere OpenGL 4.5 is also adding some further Direct3D 11 emulation features to improve the ability to port between the two APIs. The APIs continue to have their corner cases where similar features are implemented differently, with the addition of Direct3D emulation features simplifying porting by offering versions of these features that adhere to Direct3D’s implementation requirements and quirks. Finally OpenGL 4.5 is also implementing further robustness requirements, these being primarily targeted at improving WebGL execution by enhancing security and isolation (e.g. preventing a GPU reset from affecting any other running applications).

Meanwhile from a development standpoint OpenGL 4.5 will bring with it support for Direct State Access and Flush Control. Direct State Access allows objects to have their state queried and modified without the overhead of first binding those objects; in other words, bindless objects. Flush Control on the other hand sees limited command flushing being handed over to applications, allowing them to delay/avoid flushing in certain cases to improve performance with multi-threaded applications. This primarily involves situations where the context is being switched amongst multiple threads from the same application.
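To make the Direct State Access change concrete, here is a minimal sketch contrasting the classic bind-to-edit pattern with the OpenGL 4.5 DSA equivalents. It assumes a 4.5 context and function loader are already in place (the glad header is our assumption); Flush Control, by contrast, is exposed at context creation rather than through new calls like these.

```cpp
#include <glad/glad.h> // assumes a GL 4.5 context and loader are set up

// Classic OpenGL: an object must be bound to a target before it can be
// edited, disturbing whatever was bound to that target beforehand.
GLuint makeBufferClassic(const float* data, GLsizeiptr bytes) {
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);                        // bind to edit
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);                          // restore state
    return buf;
}

// OpenGL 4.5 Direct State Access: the object is created fully initialized
// and is modified by name, with no bind and no global state disturbed.
GLuint makeBufferDSA(const float* data, GLsizeiptr bytes) {
    GLuint buf;
    glCreateBuffers(1, &buf);
    glNamedBufferData(buf, bytes, data, GL_STATIC_DRAW);
    return buf;
}
```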

OpenGL 4.5 is being released today as a final specification, and based on prior experience we expect to start seeing desktop GPU implementations of it later this year.

WebGL Support Nears Ubiquity

Meanwhile on the WebGL front, Khronos is happy to report that WebGL support is nearing ubiquity. The web-friendly/web-safe version of OpenGL has been complete for a while now, but it has taken some time for browser developers to implement support for it in to all of the major browsers. This past year has seen WebGL support on the desktop finally become ubiquitous with the launch of Internet Explorer 11, and now the mobile world is nearing the same with the impending releases of Apple’s iOS 8 and Microsoft’s Windows Phone 8.1.

Commonly a laggard when it comes to OpenGL support, Apple has supported WebGL for the past couple of versions of desktop Safari; however, they are among the last of the major browser developers to not support WebGL in their mobile browser. This is finally changing with Safari for iOS 8, which will see WebGL support enabled on what’s historically a very conservative platform for Apple.

Meanwhile Microsoft’s cascading browser development plan for Windows Phone means that Internet Explorer 11 is only now being ported over to Windows Phone through the release of Windows Phone 8.1. With the upgrade to IE 11’s core, Windows Phone 8.1 will similarly be gaining WebGL compatibility this year as it is released. Altogether, ignoring the increasingly dated Android stock web browser (which itself is rarely used these days in favor of Chrome), this means that WebGL support should be nearly pervasive on desktops and mobile devices alike going into 2015.

OpenGL ES 3.1: Validation & Android Extension Pack

Finally, for OpenGL ES 3.1 Khronos is announcing that the first GPUs and drivers have finished their conformance testing and are being validated. Khronos keeps a running list over on their website, where we can see that ARM Mali Midgard, Imagination PowerVR Rogue, NVIDIA Tegra K1, and Intel HD Graphics for Atom products have all been validated. At this point there are a handful of products from the various families that haven’t finished validation, but ultimately all the major mobile GPU architectures expected to support OpenGL ES 3.1 are present in one form or another. The only vendor not present at this time is Qualcomm – the Adreno 300 series will not support OpenGL ES 3.1, and the Adreno 400 series is not yet through testing.

With the speed of validation and the limited amount of changes between OpenGL ES 3.0 and 3.1, Khronos tells us that they expect OpenGL ES 3.1 adoption will be very quick compared to the longer adoption periods required for major changes like OpenGL ES 2.0 and ES 3.0. With that said however, in the high-end mobile device market Qualcomm has been by far the biggest winner of the ES 3.x generation thus far, so as a percentage of devices shipped we expect that there will still be a number of ES 3.0 devices in use that cannot be upgraded to ES 3.1. Ultimately, as OpenGL ES 3.1 is designed to be fully backwards compatible with OpenGL ES 3.0, developers will be able to tap into ES 3.1 features while still supporting these ES 3.0 devices.

Of course even ES 3.1 only goes so far, which is why Khronos is also telling developers that they’re rather pleased with the development of the Android Extension Pack, even if it’s not a Khronos standard. The AEP is implemented as a set of OpenGL ES 3.1 extensions, so it will be further building off of what OpenGL ES 3.1 will be accomplishing. Through the AEP Google will be enabling tessellation, geometry shaders, compute shaders, and ASTC texture compression on the forthcoming Android L, all major features that most of the latest generation mobile GPUs can support but are not yet part of the OpenGL ES standard. With these latest mobile GPUs approaching feature parity with their desktop counterparts, the AEP in turn brings the OpenGL ES API closer to parity with the OpenGL API, and indeed this may be a good hint of what features to expect in a future version of OpenGL ES.
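Since the AEP is exposed as a set of extensions on top of ES 3.1, an application can probe for it at runtime with the standard extension query. A minimal sketch, assuming a current OpenGL ES 3.1 context and the umbrella extension string Google has published for the pack:

```cpp
#include <GLES3/gl31.h>
#include <cstring>

// Returns true if the Android Extension Pack is available. The AEP is
// advertised through a single umbrella extension on supporting devices;
// assumes an OpenGL ES 3.1 context is current on this thread.
bool hasAndroidExtensionPack() {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (ext && std::strcmp(ext, "GL_ANDROID_extension_pack_es31a") == 0)
            return true; // tessellation, geometry shaders, ASTC, and more
    }
    return false; // fall back to core ES 3.1 features only
}
```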

Gallery: Khronos OpenGL Presentation


Khronos Announces OpenCL SPIR 2.0

Mon, 2014-08-11 06:00

The last time we talked to Khronos about the OpenCL Standard Portable Intermediate Representation (SPIR) was back at SIGGRAPH 2013. At the time Khronos was gearing up for the release of the first iteration of SPIR, then based on the OpenCL 1.2 specification. By building an intermediate representation for OpenCL, Khronos was aiming to expand the capabilities of OpenCL and its associated runtime, both by offering a method of distributing programs in a more developer-friendly “non-source” form, and by allowing languages other than OpenCL’s dialect of C to build upon the OpenCL runtime.

At the time of its announcement, Khronos released OpenCL SPIR 1.2 as a provisional specification, keeping it there over a protracted period to solicit feedback over the first version of the standard. Since that provisional release, Khronos finalized OpenCL 1.2 SPIR in early 2014 and has been working on building up their developer and user bases for SPIR.

Which brings us to SIGGRAPH 2014 and Khronos’s latest round of specification updates. With OpenCL 2.0 already finalized and device makers scheduled to deliver the first runtimes a bit later this year, Khronos has now turned their attention towards updating SPIR to take advantage of OpenCL 2.0’s greater functionality. To that end, today Khronos is announcing the provisional specification for the next version of SPIR, OpenCL SPIR 2.0.

With much of the groundwork for SPIR already laid out on SPIR 1.2, SPIR 2.0 is a (generally) straightforward update to the specification to integrate OpenCL 2.0 functionality. OpenCL 2.0 in turn is the biggest upgrade to OpenCL since its introduction, adding several major features to the API to keep pace with functionality offered by the latest generations of GPUs.

As a quick recap, OpenCL 2.0’s headline features are dynamic parallelism (device side kernel enqueue), shared virtual memory, and support for a generic address space. Dynamic parallelism allows kernels running on a device (e.g. GPU) to issue additional kernels without going through the host, reducing host bottlenecks. Meanwhile shared virtual memory allows for the host and device to share complex data, including memory pointers, executing and using data without the need to explicitly transfer it from host to device and vice versa. This feature is especially important for the HSA Foundation, as this is one of the critical features for enabling HSA execution on OpenCL. Finally generic address space support alleviates the need to write a version of a function for each named address space. Instead a single generic function can handle working with all of the named address spaces, simplifying development and cutting down on the amount of code that needs to be cached for execution.
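For a sense of what two of these features look like in practice, here is a minimal sketch using the OpenCL 2.0 host API for shared virtual memory, with a kernel whose helper function takes an unqualified (generic address space) pointer. Context, queue, and kernel creation plus all error checking are omitted for brevity, and the kernel and function names are our own.

```cpp
#include <CL/cl.h>

// OpenCL C 2.0 kernel source: scale() takes an unqualified pointer, which
// lands in the generic address space, so one function body serves global,
// local, and private data alike.
static const char* kDoublerSrc = R"CLC(
void scale(float* p, float s) { *p *= s; }
__kernel void doubler(__global float* data) {
    scale(&data[get_global_id(0)], 2.0f);
}
)CLC";

// Shared virtual memory: host and device use the same pointer, so there are
// no explicit clEnqueueWriteBuffer/clEnqueueReadBuffer copies. Assumes ctx,
// queue, and kernel (built from kDoublerSrc) target an OpenCL 2.0 platform.
void runDoubler(cl_context ctx, cl_command_queue queue, cl_kernel kernel, size_t n) {
    float* data = static_cast<float*>(
        clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0));
    // Coarse-grained SVM still requires map/unmap around host access.
    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float),
                    0, nullptr, nullptr);
    for (size_t i = 0; i < n; ++i) data[i] = static_cast<float>(i);
    clEnqueueSVMUnmap(queue, data, 0, nullptr, nullptr);

    clSetKernelArgSVMPointer(kernel, 0, data); // pass the pointer itself
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr,
                           0, nullptr, nullptr);
    clFinish(queue);
    clSVMFree(ctx, data);
}
```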

With these functions finally exposed through SPIR, they can be tapped into by all SPIR developers – both those developers looking to distribute their programs in intermediate form, and developers using OpenCL as a backend for their own languages. In the case of the latter SPIR 2.0 should be especially interesting, as these feature additions make SPIR a more versatile backend that’s better capable of efficiently executing more complex languages.

In keeping with their goals of providing common, open standard APIs, in the long run it is Khronos’s hope that OpenCL and SPIR will become the runtime of choice for GPU applications. By building up a robust runtime and set of tools through SPIR, language developers can simply target SPIR rather than needing to develop against multiple different hardware devices; and meanwhile device manufacturers can focus on tightly optimizing their OpenCL runtime rather than juggling with supporting several disparate runtimes. To that end Khronos is claiming that they’re up to nearly 200 languages and frameworks that will be capable of using SPIR, including a few high-profile languages such as C++ AMP, Python, and OpenACC.

However from a PC standpoint Khronos still faces an uphill battle, and it will be interesting to see whether SPIR 2.0 runs into the same difficulties as SPIR 1.2 did. NVIDIA for their part never did fully support OpenCL 1.2, and as a result SPIR 1.2 couldn’t be used on NVIDIA’s products, preventing SPIR’s use on what’s a significant majority of high-performance discrete GPUs. So far we have not seen NVIDIA comment much on OpenCL 2.0 (though it’s interesting to note that Khronos president Neil Trevett is an NVIDIA employee); so SPIR’s success may hinge on whether NVIDIA chooses to fully support OpenCL 2.0 and SPIR.

NVIDIA for their part has their competing CUDA ecosystem, and like Khronos they are leveraging LLVM to allow for 3rd party languages and frameworks to compile down to PTX (NVIDIA’s intermediate language). For languages and frameworks utilizing LLVM this opens the door to compiling code down to both SPIR and PTX, but it’s a door that swings both ways since it diminishes the need for NVIDIA to support SPIR (never mind OpenCL 2.0). For their parts, AMD and Intel will be fully backing OpenCL 2.0 and SPIR 2.0, so it remains to be seen whether NVIDIA finally comes on board with SPIR 2.0 after essentially skipping out on 1.2.

Gallery: OpenCL SPIR 2.0 Presentation


Tegra K1 Lands in Acer's Newest Chromebook

Mon, 2014-08-11 05:00

Today Acer announced four models of a new 13.3" Chromebook design featuring Tegra K1. This is a significant launch for NVIDIA, proving there's industry interest in Tegra K1 after the disappointing interest in Tegra 4, and notching NVIDIA their first Chromebook design win.

NVIDIA has two versions of the Tegra K1: one implementing a 4+1 configuration of ARM Cortex A15s, and another implementing two custom designed NVIDIA Denver CPUs. Acer's new Chromebooks feature the former, so we have yet to see Denver CPUs in the wild. Samsung previously shipped a Chromebook featuring Cortex A15s via its Exynos processor, and HP used the same SoC in their Chromebook 11. Samsung has since refreshed their ARM Chromebooks a few times, with new models using the "Chromebook 2" branding.

The most significant portion of the Tegra K1 SoC is its 192 CUDA cores. Chrome OS relies heavily on web-based applications, but with the rise of WebGL there have been some experiments with browser-based 3D games. There haven't been any AAA title WebGL games yet, but when they arrive, this Chromebook should be well equipped to handle them; NVIDIA specifically mentions the upcoming Miss Take and Oort Online, as well as WebGL ports of Unreal Engine 4 and Unity 5.

NVIDIA claims up to 3X the WebGL performance of competing Chromebooks, with processor performance superior to the Exynos 5800 and Bay Trail Celeron N2830. Unfortunately, no performance comparisons between K1 and the Haswell Celeron 2955U were provided. Since both Haswell and Tegra K1 are available for the Chromebook platform, we'll also have the opportunity to perform CPU and GPU benchmarking to directly compare the processors. We have requested a review sample when Acer makes them available.

Beyond the marquee feature of the Tegra K1 processor, the Acer Chromebook also includes 2x2 MIMO wireless AC, an anti-glare coating, and two models feature a 1080p display. Specifications provided by Acer are listed below; Acer provided the model numbers for the three available for presale, and there is a fourth configuration available through resellers where we do not yet have the model number. Acer states they will begin shipping the first week of September.

Acer Chromebook 13 models (all: NVIDIA Tegra K1 at 2.1GHz, 802.11ac 2x2 MIMO, 2x USB 3.0, HDMI, 3.5mm audio, 720p webcam, stereo speakers, microphone, 4-cell 3220mAh 48Wh battery, 0.71 in thick, 3.31 lbs):

  • CB5-311-T7NN: 2GB memory, 16GB SSD, 1366x768 anti-glare display, est. 13 hours battery life, $279.99
  • CB5-311-T9B0: 2GB memory, 16GB SSD, 1920x1080 anti-glare display, est. 11 hours battery life, $299.99
  • (model number TBD): 4GB memory, 16GB SSD, 1366x768 anti-glare display, est. 13 hours battery life, $329.99
  • CB5-311-T1UU: 4GB memory, 32GB SSD, 1920x1080 anti-glare display, est. 11 hours battery life, $379.99

Source: Acer

The higher resolution displays drop battery life by a couple of hours, which isn't too surprising, but overall battery life of 11-13 hours is still great for a Chromebook. The industrial design of the new Acer Chromebooks is also much better than on the previous models, with clean lines and a white body. The Acer Chromebook is also fanless, thanks to the reduced power requirements of NVIDIA's Tegra K1 SoC.

Overall pricing looks good, with the base model matching the price of HP's current Chromebook 11 and the 1080p upgrade taking on the HP Chromebook 14. But the real competition is still going to be Acer's existing Chromebook C720, which can be found with 32GB storage, 2GB RAM, and a Celeron 2955U for just $229. There's also the question of size; the C720 was an 11.6" Chromebook, and while some might prefer a smaller device, the 13.3" will likely be preferred by others. Samsung's Chromebook 2 13.3, which has a 1080p display, 16GB of storage, and 4GB of RAM, likely needs a price drop to compete as it is listed for $399. Either way, with Chrome OS continuing to improve over time, Windows laptops face increasing competition from these alternatives.

Gallery: Acer Chromebook 13


MSI A88X-G45 Gaming Review

Mon, 2014-08-11 05:00

One of the main selling points AMD likes to promote is its appeal to gamers, especially those on a tighter budget. This in turn encourages the motherboard manufacturers to build models oriented for gaming. MSI’s Gaming range has become a solid part of MSI’s plethora of motherboards, and now it extends to the FM2+ platform. Today we review the MSI A88X-G45 Gaming.


NVIDIA FY 2015 Q2 Financial Results

Sun, 2014-08-10 20:10

On Thursday August 7th, NVIDIA released their results for the second quarter of their fiscal year 2015. Year-over-year, they had an excellent quarter based on strong growth in the PC GPU market, Datacenter and Cloud (GRID), and mobile with the Tegra line.

GAAP revenue for the quarter came in at $1.103 billion, which is flat versus Q1 2015 but up 13% from $977 million at the same time last year. Gross margin for Q2 was up both sequentially and year-over-year at 56.1%. Net income for the quarter came in at $128 million, down 6% from Q1 and up 33% from Q2 2014. These numbers resulted in diluted earnings per share of $0.22, down 8% from Q1 and up 38% from Q2 last year, beating analysts' expectations.

NVIDIA Q2 2015 Financial Results (GAAP), in millions except EPS:

  • Revenue: $1,103 (Q1'15: $1,103, Q2'14: $977), flat Q/Q, +13% Y/Y
  • Gross Margin: 56.1% (Q1'15: 54.8%, Q2'14: 55.8%), +1.3 points Q/Q, +0.3 points Y/Y
  • Operating Expenses: $456 (Q1'15: $453, Q2'14: $440), +1% Q/Q, +4% Y/Y
  • Net Income: $128 (Q1'15: $137, Q2'14: $96), -6% Q/Q, +33% Y/Y
  • EPS: $0.22 (Q1'15: $0.24, Q2'14: $0.16), -8% Q/Q, +38% Y/Y

NVIDIA Q2 2015 Financial Results (Non-GAAP), in millions except EPS:

  • Revenue: $1,103 (Q1'15: $1,103, Q2'14: $977), flat Q/Q, +13% Y/Y
  • Gross Margin: 56.4% (Q1'15: 55.1%, Q2'14: 56.3%), +1.3 points Q/Q, +0.1 points Y/Y
  • Operating Expenses: $411 (Q1'15: $411, Q2'14: $401), flat Q/Q, +2% Y/Y
  • Net Income: $173 (Q1'15: $166, Q2'14: $133), +4% Q/Q, +30% Y/Y
  • EPS: $0.30 (Q1'15: $0.29, Q2'14: $0.23), +3% Q/Q, +30% Y/Y

The GPU business is the primary source of revenue for NVIDIA, and includes GeForce for desktop and notebook PCs, Quadro for workstations, Tesla for high performance computing, and GRID for cloud-enabled graphics solutions. This quarter GPU revenue came in at $878 million, up 2% over Q2 2014 but down 2% from the previous quarter due to the seasonal decline of consumer PCs. Revenue from the PC GPU line rose 10% over last year, helped by the introduction of the Maxwell based GeForce GTX 750 and 750 Ti boards. NVIDIA is also seeing growth in the Tesla datacenter business, and Quadro revenue increased as well, citing strong growth in mobile workstations.

The mobile side of NVIDIA hasn’t seen as many product wins compared to Qualcomm, but the Tegra business is still growing strongly for NVIDIA. Tegra revenue was up 14% from Q1 2015, and 200% from Q2 2014 with a total revenue of $159 million for the quarter. Tegra continues to have strong demand in the automotive infotainment sector, with a 74% growth in that market year-over-year. This could be a lucrative market, with automotive systems generally locking in for at least several years compared to the mobile sector which might see a product replaced in less than a single year. The Tegra K1 has just come to market though, and it has shown itself to be a capable performer and may win some more designs soon.

The last avenue of income for NVIDIA is $66 million per quarter in a licensing deal with Intel.

NVIDIA Quarterly Revenue Comparison (GAAP), in millions:

  • GPU: $878 (Q1'15: $898, Q2'14: $858), -2% Q/Q, +2% Y/Y
  • Tegra Processor: $159 (Q1'15: $139, Q2'14: $53), +14% Q/Q, +200% Y/Y
  • Other: $66 (Q1'15: $66, Q2'14: $66), flat

The company projected this quarter to be flat on revenue as compared to Q1, and they were exactly right. Projections for Q3 2015 are for revenue of $1.2 billion plus or minus 2%.

During the quarter, $47 million was paid in dividends and NVIDIA purchased 6.8 million shares back from investors. This brings them to $594 million of the $1 billion promised to shareholders for FY 2015. The next dividend of $0.085 per share will be paid on September 12th to all stockholders of record as of August 21st.

It was an excellent quarter for NVIDIA, and their stock price jumped after the numbers were released. All segments of the company are growing at the moment, and with the recent release of the Tegra K1 they can only be hoping for another strong quarter of mobile growth in Q3 after a great 200% year-over-year jump in Tegra revenue. The stronger than expected PC sales have helped their biggest business as well, with the GPU division up 2%. CEO Jen-Hsun Huang has worked to bring the company a more diversified portfolio, and with the recent gains in mobile and datacenter computing, the company has certainly had some recent success.


QNAP Launches x53 Pro Bay Trail NAS for SMBs

Sun, 2014-08-10 02:00

We covered the launch of the QNAP TS-x51 series in great detail. QNAP took the lead over other NAS manufacturers in opting for the 22 nm Atom SoCs. While we were expecting the first Silvermont-based NAS units to use either Avoton or Rangeley, QNAP surprised us by opting for Bay Trail-D with the Celeron J1800. The advantage was that the Celeron J1800 included the Quick Sync engine, which enabled some nifty multimedia features targeting home consumers.

Fast forward a few weeks, and we have QNAP's play targeting business users using the same platform. The difference in the hardware relates to the choice of the Bay Trail-D part. Instead of the Celeron J1800 (which was a 2C/2T part), we have the Celeron J1900 (a 4C/4T part). While the 2-bay version comes with 2x GbE ports, the others come with 4x GbE ports. All the innovative features from the TS-x51 series (hardware transcoding and Virtualization Station, mainly) are present in the x53-Pro series too. Due to the availability of more resources, it is possible to run two VMs concurrently in the x53 Pro (compared to one in the x51). Since we have already touched upon the two main features in our x51 launch piece, we will not discuss them in detail here.

QNAP has bundled all the business-oriented features of the x53 Pro under the QvPC umbrella. QNAP pitches the x53 Pro NAS as a business PC in addition to fulfilling the tasks of an SMB networked storage device. The display is driven through the HDMI port and the interface is through the HD Station package (Hybrid Desk). Three 'views' are made available:

  • QVM Desk: Users have a window into any of the VMs running on the unit.
  • Defense Desk: Users can access the Surveillance Station UI for monitoring the IP cameras being recorded on the NAS.
  • Local Display Desk: Users have access to the X-Windows session on the NAS, enabling access to apps such as XBMC, Chrome, YouTube, Spotify, etc., all of which are specific apps for the NAS. Obviously, the NAS can be administered from within this UI as well.

The HD Station package also supports touchscreen monitors. Since Virtualization Station is supposed to soon support Android VMs, this is going to be a nifty feature.

Another SMB-targeted feature in QTS 4.1 is IT Management Station, based on Mandriva Pulse. It enables management of IT resources and applications for the whole business in a simplified manner. Tasks include inventory management, remote control, cloning, deployments and backup / restore.

The x53 Pro series (like the x51 series) also supports the UX-500P / UX-800P expansion towers. Using these, consumers can add 5 or 8 bays to their existing NAS by connecting via a USB 3.0 port. The various models in the x53 Pro series, along with their specifications, are provided below.

Interestingly, QNAP has two SS- models which support only 2.5" drives. It looks like the market for NAS units which support only 2.5" drives is slowly taking off. Earlier this year, we saw the introduction of Synology's DS414slim sporting a Marvell ARMADA 370 SoC. The QNAP units, however, are based on Bay Trail Celerons and are definitely much more powerful. With SSDs becoming cheaper by the day, all-flash arrays will soon be within the reach of even SMBs. Units such as the SS-453 Pro and SS-853 Pro are well-suited to tap into that market.


AMD’s 5 GHz Turbo CPU in Retail: The FX-9590 and ASRock 990FX Extreme9 Review

Sat, 2014-08-09 05:00

While AMD’s FX-9590 CPU has been available in pre-built systems for over a year, it is only now coming to market as a retail package for end-users to buy, with a bundled liquid cooling system. This 220W CPU with a turbo speed of 5.0 GHz still sits at the top of AMD’s performance stack, despite subsequent improvements in the architecture. We have decided to grab ASRock’s 990FX Extreme9 and an FX-9590 for a review to see if it is still AMD’s performance CPU champion.


Best Desktops for Under a Grand

Fri, 2014-08-08 15:59

Following up on last week's Best Budget PC Guide, today we have midrange systems at roughly twice the cost. Of all the system types to configure, the midrange market can be the most difficult. With budget systems you're often limited in what you can do by price constraints, while at the high end the best components are usually pretty clear cut choices; for midrange builds there are many factors to consider. One of the core questions you always need to answer is: what do you want to do with the system? Office PCs will often have a different goal than something for a student, and there are many ways to adapt a particular system to fit the needs of the user. We have two configurations again, one AMD and one Intel, with optional graphics cards for those who want a system capable of handling the latest games. Let's start with AMD:

Midrange AMD System:

  • CPU: AMD A10-7850K (4x3.7-4.0GHz, 4MB, 95W, 28nm), $170
  • Motherboard: MSI A88X-G43, $78
  • RAM: Team Vulcan 2x4GB DDR3-2133 CL10 1.65V, $78
  • Storage: Seagate Barracuda ST2000DM001 2TB, $84
  • SSD: Crucial MX100 256GB, $109
  • Case: Fractal Design Core 3300, $63
  • Power Supply: Rosewill Capstone 450W 80 Plus Gold, $60
  • Subtotal: $642
  • GPU (Optional): Sapphire Radeon R9 270X 2GB, $190
  • GPU (Alternative): Zotac GeForce GTX 750 Ti 2GB, $138
  • Total with R9 270X: $832

Right from the first component choice – the APU – we have plenty of things to consider. I've tailored the above build more towards performance than price or power, so the A10-7850K is really the only APU that makes sense. (You can make an argument for an AM3+ CPU like the FX-6300 or FX-8320, but considering that platform has been around a while and is basically fading away I'm hesitant to recommend that route.) Besides the quad-core (dual-module) CPU portion of the APU, the 7850K has the full 512 core (eight Compute Unit) GPU. The A10-7800 is an option to consider at its $155 MSRP, but the only place I can find with the part in stock charges $166; for $4 more you might as well just go whole hog and get the 7850K. Dropping down to an A10-7700K will lose two of the GPU CUs and 200MHz off the CPU to save $15, so it's also worth a thought, but if you don't need faster GPU performance you might as well go for the A8-7600 for $110 at that point.

For the rest of the system, the MSI motherboard has AMD's latest A88X chipset, we've selected DDR3-2133 RAM to provide increased bandwidth for the APU graphics, and the case is Fractal Design's latest Core 3300 (though you can swap in the case from the Intel build as an alternative). For storage, we've again included both an SSD for the OS and apps and a rather large 2TB HDD for mass storage; you could easily drop the HDD if you don't need that much storage, but for any modern system I simply refuse to leave out an SSD. The Crucial MX100 isn't the fastest SSD on the planet, but the price makes it incredibly attractive. Finally, the power supply may be overkill for the base build, but having some power to spare means adding a graphics card is always an option.

Speaking of graphics cards, while the APU graphics will do fine for most tasks and even light gaming, if you want to be able to play most games at 1080p with medium or higher detail settings, a dedicated graphics card is required. Here we've listed two options: NVIDIA's GTX 750 Ti (Maxwell) card and AMD's R9 270X card. The AMD card is faster and costs more, and it also uses a lot more power; if you want 1080p with high quality settings in most games, that's the card to get (and it's reflected in the price of the system with the GPU). NVIDIA's GTX 750 Ti on the other hand uses less than 75W and doesn't even require a PCI-E power adapter, and it can still run most games at medium to high settings and 1080p. Either GPU is certainly worth considering, at least if you want to play games – and if you don't, just get the core system and you can always add a GPU at some future date.

Midrange Intel System:

  • CPU: Intel Core i5-4590 (4x3.3-3.7GHz, 6MB, 84W, 22nm), $200
  • Motherboard: ASRock Z97 Anniversary, $90
  • RAM: ADATA 2x4GB DDR3-1866 CL10 1.5V, $77
  • Storage: Seagate Barracuda ST2000DM001 2TB, $84
  • SSD: Crucial MX100 256GB, $109
  • Case: Antec Three Hundred Two, $64
  • Power Supply: Rosewill Capstone 450W 80 Plus Gold, $60
  • Subtotal: $684
  • GPU (Optional): Sapphire Radeon R9 270X 2GB, $190
  • GPU (Alternative): Zotac GeForce GTX 750 Ti 2GB, $138
  • Total with R9 270X: $874

The Intel system this round ends up costing about $50 more than the AMD setup, thanks to a more expensive CPU and motherboard. There are ways to keep the prices closer, but overall the i5-4590 strikes a good balance of price and performance. It's about $25 less than the slightly faster i5-4690 but only around 3-5% slower, and unless you plan on overclocking it should offer everything you need. As we discussed in our recent CPU State of the Part, looking at overall system performance Intel's processors make a lot of sense for those that want a faster system.

The motherboard this time comes from ASRock and features Intel's latest Z97 chipset, and for the RAM we elected to go with a 1.5V kit of DDR3-1866 memory. While faster memory can help with the processor graphics on AMD's APUs, for Intel's CPUs the HD 4600 is usually limited by factors other than bandwidth. The same caveats about the storage components apply here as well, but if you're looking for alternatives the Samsung 840 EVO 250GB is generally slightly faster than the Crucial MX100 while costing about $20 more.

The case for our Intel setup is an Antec Three Hundred Two, which is another popular option. Optional graphics choices can add a boost to gaming performance if you need it, but again a faster GPU could easily be added later on. If you're sure you won't want to add a dedicated GPU later, you can also save money on the PSU by going with the 300W Seasonic we used in our budget PC guide.

On either system, it's of course possible to go for a smaller micro-ATX case and motherboard. The prices are typically comparable and these days the only thing you're really sacrificing is expansion options, but considering many people don't run anything more than a hard drive and SSD along with a GPU, you really don't miss much. For mATX cases, you might like the Rosewill Line-M or Silverstone SST-PS07B. As far as mATX motherboards, the ASRock Z97M Pro4 would work well for the Intel platform, or for AMD the Gigabyte GA-F2A88XM-D3H will even save you a few bucks compared to the MSI board we listed above.

As before, we've elected to leave out the OS, keyboard, mouse, and display; these are all commodity items and most people have existing accessories they can carry over from an old PC. You can always use a free OS like Ubuntu or some other flavor of Linux, whereas Windows will generally add $100 to the total. As far as displays go, I'm a sucker for larger displays and I've been using 30" LCDs for most of the past decade – one of the best investments I've ever made in terms of computer hardware! For a good midrange display, I'd give serious consideration to the 27" 2560x1440 panels that start at around $300; if you don't want something that large (or expensive), there are also plenty of 23-24" IPS/VA displays for around $150.

Finally, let's quickly talk about pre-built systems and why I don't generally recommend them. Really, it comes down to one thing: the refusal of the big OEMs and system builders to deliver a competitively priced desktop that includes at least a good quality 250/256GB SSD (or even a 128GB SSD). $500 will get you a Core i5 or AMD A10 processor, 4-8GB RAM, 1TB HDD, and whatever case and power supply the OEM uses. Generally speaking, you get fewer features, lower quality parts, and a less attractive design – but you do get a valid Windows license along with a low-end keyboard and mouse.

We could easily take the above systems and remove the SSD and drop down to a 1TB HDD to save $140. Using lower quality motherboards can shave off another $30-$50. Wrap things up by using a cheaper case and power supply (another $50 saved) and guess what you have: a less desirable system with a base price of $450 or so. Buy a Windows license and you basically have the equivalent of a pre-built system.

It's not that OEM systems are necessarily terrible, but it's the age old story: you get what you pay for. I for one would much rather have a decent SSD, motherboard, case, and power supply. You can pay a system integrator to put something together as well, but even then your choice of parts is often limited and the prices are typically higher than if you DIY.


WD Red Pro Review: 4 TB Drives for NAS Systems Benchmarked

Fri, 2014-08-08 06:00

A couple of weeks back, Western Digital updated their NAS-specific drive lineup with 5 and 6 TB Red drives. In addition, 7200 RPM Red Pro models with 2 - 4 TB capacities were also introduced. We have already looked at the performance of the WD Red, and it is now time for us to take the WD Red Pro for a spin. In our 4 TB NAS drive roundup from last year, we also indicated that efforts would be taken to add more drives to the mix along with an updated benchmarking scheme involving RAID-5 volumes. The Red Pro gives us an opportunity to keep our word. Read on for our comparison of 10 different 4 TB hard drives (both consumer NAS-specific, as well as nearline units) targeting the NAS market.


Revisiting SHIELD Tablet: Gaming Battery Life and Temperatures

Fri, 2014-08-08 05:00

While the original SHIELD Tablet review hit most of the critical points, there wasn't enough time to investigate everything. One of the areas where there wasn't enough data was gaming battery life. While the two hour figure gave a good idea of the lower bound for battery life, it didn't give a realistic estimate of runtime in everyday gaming.

One of the first issues that I attempted to tackle after the review was battery life in our T-Rex rundown when capping the frame rate to ~30 FPS, which is still enough to exceed the competition in performance while avoiding any chance of throttling. This also gives a much better idea of real world battery life, as most games shouldn't come close to stressing the Kepler GPU in Tegra K1.

By capping T-Rex to 30 FPS, the SHIELD Tablet actually comes quite close to the battery life delivered by SHIELD Portable with significantly more performance. The SHIELD Portable also needed a larger 28.8 WHr battery and a smaller, lower power 5" display in order to achieve its extra runtime. It's clear that the new Kepler GPU architecture, improved CPU, and 28HPm process are enabling much better experiences compared to what we see on SHIELD Portable with Tegra 4.

The other aspect that I wanted to revisit was temperatures. In the review I mentioned that I noticed skin temperatures were high, but I didn't know what they really were. In order to get a better idea of temperatures in the device, Andrei managed to make a tool to log such data from on-device temperature sensors. Of course, the most interesting data is always generated at the extremes, so we'll look at an uncapped T-Rex rundown first.
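As an aside on how such logging works: on Android, as on Linux generally, on-device temperature sensors are exposed through sysfs thermal zones, so a logging tool can simply poll those files. The sketch below is our own illustration of the approach rather than Andrei's actual tool, and which zone corresponds to the SoC, RAM, or battery varies from device to device.

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Reads one sysfs thermal zone; the kernel reports values in millidegrees C.
// A real tool would first read /sys/class/thermal/thermal_zone*/type to
// label each zone, since zone numbering differs between devices.
double readZoneCelsius(int zone) {
    std::ifstream f("/sys/class/thermal/thermal_zone" +
                    std::to_string(zone) + "/temp");
    long milli = 0;
    f >> milli;
    return milli / 1000.0;
}

int main() {
    for (int zone = 0; zone < 8; ++zone)  // first 8 zones, as an example
        std::cout << "zone " << zone << ": " << readZoneCelsius(zone) << " C\n";
}
```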

In order to understand the results I'm about to show, this graph is critical. As ambient temperatures were lower (15-18C vs 20-24C) when I ran this variant of the test, we don't see much throttling until the end of the test where there's a dramatic drop to 46 FPS.

As we can see, the GPU clock graph almost perfectly mirrors the downward trend that is presented in the FPS graph. It's also notable that relatively little time is spent at the full 852 MHz that the graphics processor is capable of. The vast majority of the time is spent at around 750 MHz, which suggests that this test isn't pushing the GPU to the limit, although looking at the FPS graph would also confirm this as it's sitting quite close to 60 FPS throughout the run.

While I was unable to quantify skin temperature measurements in the initial review, battery temperature is often quite close to skin temperature. Here, we can see that battery temperatures (which is usually the charger IC temperature) hit a maximum of around 45C as I predicted. While this is perfectly acceptable to the touch, I was definitely concerned about how hot the SoC would get under such conditions.

Internally, it seems that the temperatures are much higher than the 45C battery temperature might suggest. We see max temperatures of around 85C, which is edging quite close to the maximum safe temperature for most CMOS logic. The RAM is also quite close to maximum safe temperatures. It definitely seems that NVIDIA is pushing their SoC to the limit here, and such temperatures would be seriously concerning in a desktop PC, although not out of line for a laptop.

On the other hand, it's a bit unrealistic for games not developed for Tegra K1 to push the GPU to the limit like this. Keeping this in mind, I did another run with the maximum frame rate capped to 30 FPS. As the device holds the 30 FPS cap even at the end of the run, showing the FPS vs time graph would be rather boring: a completely flat line pegged at 30 FPS. Therefore, it'll be much more interesting to start with the other data I've gathered.

As one can see, despite performance near that of the Adreno 330, the GPU in Tegra K1 sits at around 450 MHz for the majority of this test. There is a bit of a bump towards the end, but that may be due to the low battery overlay as this test was unattended until the end.

In addition to the low GPU clocks, we see that the skin temperatures never exceed 34C, which is completely acceptable. This bodes especially well for the internals, which should be much cooler in comparison to previous runs.

Here, we see surprisingly low temperatures. Peak temperatures are around 50C and no higher, with no real chance of throttling. Overall, this seems to bode quite well for Tegra K1, even if the uncapped peak temperatures are a bit concerning. After all, Tegra K1 delivers immense amounts of performance when necessary, but manages to sustain low temperatures and long battery life when that performance isn't needed. It's also important to keep in mind that the Kepler GPU in Tegra K1 was designed for desktop and laptop use first. The Maxwell GPU in NVIDIA's Erista SoC is the first to be designed to target mobile devices first. That's when things get really interesting.


More X99 Teasers: GIGABYTE’s X99 Gaming G1 WiFi

Fri, 2014-08-08 02:18

The summer months are usually some of the quietest in the tech world; however, motherboard manufacturers seem keen to release preview images of the upcoming X99 platform. Next in line is GIGABYTE with its X99 Gaming G1 WIFI. As GIGABYTE's new gaming line is still gaining a foothold, synchronizing the GPU and motherboard gaming ranges, the color scheme is a combination of red, black, white, and grey, with some green for audio.

The X99 Gaming G1 WIFI looks like it comes in at the top end of the range, featuring a full 4-way PCIe layout with 8 DIMM slots. We see 10 SATA 6 Gbps ports, two of which form a SATA Express connector, complemented by the M.2 slot in the middle of the PCIe slots. The M.2 area also houses a mini-PCIe slot which contains the WiFi module, with the antenna connected via the rear panel next to the rear audio. It looks like the M.2 and WiFi modules can be used at the same time along with GPUs, though we will test this if we get the motherboard in. The heatsinks are all connected via heat-pipes that sit low on the motherboard to avoid conflict with other devices.

While we cannot see the rear IO, the bottom of the motherboard contains two USB 3.0 headers, two USB 2.0 headers and a Thunderbolt header. The audio looks like a Sound Core 3D combined with filter caps, PCB separation, an EMI shield on the codec, an audio gain switch and a switchable op-amp. The top right of the motherboard houses several buttons and switches for dual BIOS/selectable BIOS functionality, along with voltage read points, a power switch and a two-digit debug display. The extra power for the PCIe slots is provided by a SATA power connector next to the SATA ports.

I would imagine the Gaming G1 WIFI to be nearer the top of GIGABYTE’s X99 launch range, but we will have to wait until launch day to see some full specifications and pricing.

Source: GIGABYTE Tech Daily


NEC EA244UHD Review

Thu, 2014-08-07 11:30

The NEC EA244UHD is the first UltraHD (UHD) monitor from NEC. While it's not from their professional line, it has many of the features we've come to expect in their monitors: uniformity compensation, a wider color gamut but also sRGB and AdobeRGB support, and many user configurable settings. It also has a few things NEC has never done before including SpectraView calibration support on an EA-series model and full USB 3.0. Read on for our full review.

Categories: Tech

ADATA Officially Launches XPG Z1 DDR4 Memory

Thu, 2014-08-07 08:00

Given that the supposed release date of DDR4 is almost three weeks away, according to a pre-order listing, DRAM module manufacturers are slowly issuing press releases detailing the products they will be releasing. This is good news for the rest of us, as we get to see what timings and pricing to expect when the full release happens. Today it is ADATA launching some of its higher-performance kits under the XPG Z1 branding. If you followed our Computex coverage, you will notice a striking similarity to the modules we saw on display at ADATA's booth.

Aside from the regular quotes about reducing the voltage from DDR3's 1.5 volts to 1.2 volts, ADATA states that its XPG Z1 range will offer speeds up to 2800 MHz with timings of 17-17-17, all within the 1.2 volt standard. The press release would also seem to suggest that ADATA is equipping these modules with a plug-and-play system, stating that 'the SPD of XPG Z1 allows direct application without changing settings in the BIOS'. I am going to follow up with ADATA to find out what they mean by this: whether it is true plug and play at the rated speed, or whether they are simply referring to the JEDEC defaults.

The XPG Z1 design uses an angular heatsink tapering to a point, underneath which sits a 10-layer PCB with 2 oz copper layers. The heatsink is in direct contact with the ICs, and if past designs are any indication this is most likely via an epoxy that is hard to remove.

The full list of kit capabilities is listed at ADATA’s website. Kits will be available in dual (2x4/2x8) and quad (4x4/4x8) channel variants, all in red to begin with, at the following speeds (the figure in parentheses is the performance index, MHz divided by CAS):

  • DDR4-2133 15-15-15 (MHz/CAS = 142)
  • DDR4-2133 13-13-13 (MHz/CAS = 164)
  • DDR4-2400 16-16-16 (MHz/CAS = 150)
  • DDR4-2800 17-17-17 (MHz/CAS = 165)
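
To put these numbers in context: the figure in parentheses is simply the data rate divided by the CAS latency, a rough performance index, while absolute first-word latency is the CAS latency divided by the memory clock (half the DDR data rate). Below is a minimal Python sketch of both calculations; the DDR3-1600 CL9 entry is our own assumed comparison point, not part of ADATA's list.

    # Rough DRAM timing math; a sketch, not from ADATA's materials.
    # First-word latency (ns) = CAS cycles / memory clock (MHz) * 1000,
    # where the memory clock is half the DDR transfer rate.
    kits = [
        ("DDR4-2133", 2133, 15),
        ("DDR4-2133", 2133, 13),
        ("DDR4-2400", 2400, 16),
        ("DDR4-2800", 2800, 17),
        ("DDR3-1600", 1600, 9),  # assumed DDR3 kit for comparison
    ]

    for name, rate, cas in kits:
        clock_mhz = rate / 2.0                # DDR transfers twice per clock
        latency_ns = cas / clock_mhz * 1000   # cycles / (cycles per us) -> ns
        index = rate / float(cas)             # the MHz/CAS performance index
        print("%s CL%d: %.1f ns first word, index %.0f"
              % (name, cas, latency_ns, index))

By this measure, DDR4-2133 CL15 works out to roughly 14 ns against about 11 ns for DDR3-1600 CL9, which is why early DDR4 kits can look slower clock-for-clock even as bandwidth and power improve.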

There is no pricing information as of yet, but given ADATA’s previous press releases, we usually get it around two weeks after a kit is announced.

Source: ADATA

Categories: Tech

MSI’s Next Haswell-E Teaser: X99S Gaming 9 AC

Thu, 2014-08-07 07:00

The increase in leaks and teasers regarding X99 makes for some compelling reading. Shortly after MSI showed off its X99S SLI PLUS on Facebook, a couple of Gaming 9 AC renders were posted as well. The X99S Gaming 9 AC, as the name suggests, represents the top member of MSI’s gaming motherboard range, if previous range identifiers are to be continued. Along with 802.11ac support, the board looks to have eight DDR4 slots, five PCIe slots with SLI and Crossfire support, an M.2 slot for drives up to 2280, SATA Express, ten SATA 6 Gbps ports, eight USB 3.0 ports, upgraded audio and a Killer E2200 series network interface.

Right in the middle of the motherboard is a feature called ‘Streaming Engine’, which is plugged into what looks like a mini-PCIe slot. Current internet chatter wonders whether this is some new proprietary feature from MSI, or something akin to onboard WiDi allowing video streaming without wires. MSI is remaining tight-lipped until the full release.

It is interesting to see SATA Express and M.2 on X99, and we are still in the dark as to whether these features share bandwidth via the PCH due to Intel RST limitations or can be used concurrently.

Pricing is unknown, but will most likely sit in the higher echelons of the X99 price bracket. If MSI is going to release an X99 XPower type of motherboard, either the XPower or the Gaming 9 AC would be the most expensive.

Source: MSI US Facebook

Categories: Tech

ROCCAT Integrates Keyboard and Smartphone: The Skeltr

Thu, 2014-08-07 04:00

Alongside the Nyth, ROCCAT is also announcing another hybrid technology at Gamescom in the form of a keyboard called the Skeltr. The purpose of the Skeltr is to pair a smartphone with the keyboard as an add-on device, allowing apps to be developed that integrate with either the game being played or the on-screen action.

I remember importing one of the original Logitech G15 models from the US almost a decade ago. I had the original black-and-white model, and used the display mostly for Battlefield 2 / 2142 at the time. I must admit I did not use it that much: there was not much time while playing to glance down at the display to see what was going on, although I did look at it between rounds to see the extra statistics it had collected. The concept of the Skeltr is perhaps a step beyond this, allowing users of any smartphone to have an interactive (key word there) integration with their game.

The keyboard will speak to the smartphone via Bluetooth, and uses a sliding rail with a rotating holster to fit any size of smartphone or tablet. This second screen will also allow the user to take calls, send and receive texts, and perform other normal smartphone tasks through the keyboard.

The initial issue I found with the Logitech G15 might rear its ugly head here: a lack of app availability. It took a while before third-party developers made interfaces for my favorite games on the G15, but I believe ROCCAT might have more luck. Depending on whether users program with ROCCAT's proprietary application language or in Android/iOS itself, the most popular games should be covered quickly by ROCCAT or third parties. With a full color display and direct interactivity, it might be a step forward as well.

The keyboard itself will feature RGB lighting (on a keyboard-wide basis, not per key), a small selection of macro buttons and audio outputs. There is no indication yet as to whether this is a mechanical keyboard.

ROCCAT will only be showing prototypes at Gamescom later this month, and release dates and pricing will be announced later this year.

Source: ROCCAT

Categories: Tech

ROCCAT Announces the Nyth Semi-Modular Mouse

Thu, 2014-08-07 01:00

The world of gaming peripherals is a tricky one. There are plenty of standard off-the-shelf peripherals that will do the basic job, so in order to build a brand away from the cheap or ultra-cheap, each peripheral company has to add value to its products and introduce a feel of premium quality. This might mean using exotic materials, special lighting or custom designs/aesthetics, or offering something that no one else can. ROCCAT believes it is doing something along those lines with its new Nyth MMO mouse.

The mouse is designed around the concept of a semi-modular system: if a user does not like the side-button arrangement, or it does not work with their particular game, it can be changed. With a wealth of MMO mice on the market with fixed button arrangements, ROCCAT is attempting to offer a mouse that can be configured in terms of buttons and applications on a per-game basis, allowing the device to extend beyond its MMO design origins towards FPS or RTS titles.

One would assume that the device uses laser optics, although there is no indication whether the weight is adjustable as well. It seems that the device will only come in a wired version, with significant per-title customization performed via the included software.

The Nyth MMO mouse will be on display at Gamescom later this month, with a full release later in the year. The price for the mouse or any add-ons has yet to be announced.

Source: ROCCAT

Categories: Tech

More Fanless Bay-Trail: ASRock Releases Two Pentium J2900 Motherboards

Wed, 2014-08-06 19:00

When we looked at AMD’s Kabini platform, AMD in its press materials pitted their high-end APU against the Pentium J2900 in terms of price and performance. The only issue from the reviewer’s standpoint was the availability of the Pentium J2900 in a retail product: at the time, the J2900 was found only in OEM devices, with a single system turning up through Google Shopping. Fast forward a few months and we are now seeing a small wave of J2900 motherboards coming to market for custom home builds. ASRock looks poised to release the Q2900-ITX and Q2900M to meet that demand.

As both motherboards use the quad-core J2900 at 2.40 GHz (2.66 GHz turbo) and 10W, both are supplied with large fanless heatsinks to provide the cooling. The CPU is soldered onto the motherboard (this is an Intel limitation), meaning upgrading is not possible, but the CPU does offer dual-channel DDR3, 2 MB of L2 cache and Intel HD Graphics.

The Q2900-ITX is an ITX motherboard that relies on SO-DIMM DDR3 memory. The standard Atom chipset ports are here – two SATA 6 Gbps, two SATA 3 Gbps, four USB 3.0 ports, a PCIe 2.0 x1 slot, a mini-PCIe slot (for WiFi) and three standard video outputs (VGA, DVI-D, HDMI).

The Q2900M goes up to the micro-ATX size, which affords the use of full-sized DDR3. Note how the DDR3 DIMM slots sit at right angles to each other, which comes across as really odd. The PCIe lane layout is a little different, giving a full-sized PCIe slot capable of PCIe 2.0 x4. There are also two other PCIe 2.0 x1 slots; however, judging by other motherboards of this ilk, using the PCIe 2.0 x4 slot will disable the other PCIe slots or vice versa.

Pricing and availability have not yet been announced.

Source: XtremeHardware

Categories: Tech