Tech

VIDEO: 'Supermoon' lights up world skies

BBC Tech - Mon, 2014-08-11 14:09
Stargazers have been taking images of the spectacular "supermoon" overnight, when the Moon appears bigger and brighter than usual because of its proximity to Earth.

Reckless drone operators annoy animals, people across US national parks

ARS Technica - Mon, 2014-08-11 14:03

About six weeks ago, two days after the National Park Service (NPS) banned drones in all 401 national parks, a 68-year-old California man reported that his drone was stuck in a tree at Grand Teton National Park in Wyoming.

“He called for help, and one of our rangers responded,” Jackie Skaggs, a Grand Teton spokeswoman, told Ars. “He was flying it in the Gros Ventre campground and got it stuck in a cottonwood tree. There was no damage as far as I know to the cottonwood tree. He wanted it out, the rangers came to help, and he was in violation of a public use closure.”

The man's drone was never rescued. When rangers returned to help the man retrieve it, it was gone. The rangers presumed it had been stolen.

HTC Brings the Desire 816 to the US

Anandtech - Mon, 2014-08-11 13:55

HTC is expanding their lineup of devices in the United States with the official launch of the Desire 816 on Virgin Mobile USA. We talked about the Desire 816 when it launched earlier this year at MWC, and much like the recently launched Desire 610 on AT&T, it has taken quite some time for the Desire 816 to make its way to the US. In many ways the Desire 816 can be viewed as a big brother to the Desire 610, with improved specs across the board. It's also a device that helps HTC combat inexpensive phablets like the Huawei Ascend Mate2. The full specifications of the Desire 816 are laid out below.

HTC Desire 816
SoC: Qualcomm Snapdragon 400 (MSM8928), 4x Cortex A7 at 1.6GHz, Adreno 305
Memory and Storage: 8GB NAND + MicroSDXC, 1.5GB LPDDR2
Display: 5.5" 1280x720 Super LCD2 at 267ppi
Cellular Connectivity: 2G / 3G (EVDO) / 4G LTE (Qualcomm MDM9x25, UE Category 4 LTE)
Dimensions: 156.6 x 78.7 x 7.9 mm, 165g
Camera: 13MP f/2.2 rear facing, 5MP f/2.8 front facing
Battery: 2600 mAh (9.88 Whr)
Other Connectivity: 802.11 b/g/n + BT 4.0, USB 2.0, GPS/GNSS, NFC
SIM Size: Nano-SIM
Operating System: Android 4.4.2 KitKat with HTC Sense 5.5

Looking at the specs there's not a whole lot to talk about. The Snapdragon 400 platform has become ubiquitous among devices in this price bracket, and the 720p display is also fairly standard. It's interesting to compare the Desire 816 to the Huawei Ascend Mate2. While they don't compete on the same carrier in the US, they share similar specifications right down to the camera resolutions. The big differences are the Mate2's larger display and the larger battery that comes with its increased physical size. Overall, HTC looks to have put together a very decent device for its price bracket. Features like multiple color choices and HTC's front-facing BoomSound speakers will also help differentiate the Desire 816 from competing devices built on a similar hardware platform.

The Desire 816 launches on Virgin Mobile USA on August 12 for $299 off contract. This variant supports the Sprint EVDO network that Virgin Mobile utilizes. HTC has also stated that it intends to bring other smartphones in the Desire lineup to the United States later this year; whether that also means expanded carrier availability for the Desire 610 and 816, only time will tell.

US financial protection agency warns against Bitcoin, Dogecoin use

ARS Technica - Mon, 2014-08-11 11:30

The Consumer Financial Protection Bureau (CFPB) has issued a new warning to the general public, alerting everyone to the potential dangers of Bitcoin, Litecoin, Dogecoin, and other cryptocurrencies.

In the document, issued Monday, the CFPB warns that such currencies carry four notable risks:

Hackers. Virtual currencies are targets for highly sophisticated hackers, who have been able to breach advanced security systems.

Fewer protections. If you trust someone else to hold your virtual currencies and something goes wrong, that company may not offer you the kind of help you expect from a bank or debit or credit card provider.

Cost. Virtual currencies can cost consumers much more to use than credit cards or even regular cash.

Scams. Fraudsters are taking advantage of the hype surrounding virtual currencies to cheat people with fake opportunities.

The warning comes just two days after Dell CEO Michael Dell announced that his firm recently accepted a large order (around $50,000 worth) paid for in bitcoins. Dell became one of the largest corporations ever to begin accepting the virtual currency.

Acer’s Chromebook 13 adds a Tegra K1 chip and optional 1080p display

ARS Technica - Mon, 2014-08-11 11:17

Acer's contributions to the Chromebook ecosystem have so far been mostly inexpensive Intel-based systems, things like the Acer C720 from late last year. These systems are the archetypal Chromebooks—11.6-inch screens with 1366×768 resolutions, 2GB or 4GB of RAM, cheap Intel Celeron CPUs (though a Core i3 model surfaced more recently), and relatively light price tags.

Today Acer announced a different kind of entry to the field: its fanless Chromebook 13 is a larger model with a 13.3-inch screen, comes in 1366×768 and 1080p flavors, starts at $279.99, and uses Nvidia's Tegra K1 SoC instead of an Intel chip. Acer is using the 32-bit version of the Tegra K1, which pairs four 2.3GHz ARM Cortex A15 CPU cores with an Nvidia GPU built on the same Kepler architecture as many GeForce GT 600- and 700-series GPUs. The new Chromebook's closest competition is probably Samsung's Chromebook 2, which uses an Exynos 5 Octa processor in the same general performance bracket.

The base configuration includes 2GB of RAM, a 16GB SSD, two USB 3.0 ports, an HDMI port, and two-stream 802.11ac Wi-Fi (which should provide maximum theoretical transfer speeds of 867Mbps). Acer promises 13 hours of battery life from the 1366×768 version and 11 hours from the 1080p version. The laptop weighs 3.31 pounds, average for a 13-inch notebook, and Acer says it's 0.71 inches thick. The 1080p version of the Chromebook 13 is $299.99, and an upgraded version with 4GB of RAM and a 32GB SSD will run you $379.99. The $300 1080p model is currently available for pre-order at Best Buy, while the other two models are available for pre-order from Amazon.

Earth’s gravitational pull may partly melt a bit of the Moon

ARS Technica - Mon, 2014-08-11 10:50

Scientists have studied stars many light-years away, exoplanets, black holes, neutron stars, and even the invisible dark matter that permeates every galaxy. Given that, it hardly seems (at first) that the Moon could still surprise us. After all, the study of the Moon is as old as astronomy itself, and it's the only astronomical object a human being has ever set foot on. But a new study suggests that the Moon has a previously undiscovered low-viscosity region, residing just above the core. The region is partially molten, which fits with earlier models that suggest some melting may exist on the core-mantle boundary.

The region, referred to in the study as the "low-viscosity zone," could better explain measurements of tidal dissipation on the Moon. While scientists have previously calculated the effects of Earth's tidal forces on the Moon, none of those calculations have been able to account for certain observations. Specifically, there is a relationship between the Moon's tidal period and its ability to absorb seismic waves, which are converted to heat deep in the Moon's interior. That relationship was unexplained until now.

The authors of the study, however, were able to closely match those observations with their simulation when a low-viscosity zone was included in their models.

Video of racer’s death goes viral as cops crowdsource criminal inquiry

ARS Technica - Mon, 2014-08-11 10:27

Police are crowdsourcing an investigation to determine whether criminal charges should be brought in connection with a deadly sprint car race on Saturday—and are urging spectators to turn over any still or video footage that might help with the probe.

Spectator videos of the deadly crash Saturday at the Canandaigua Motorsports Park in Canandaigua, New York, already have gone viral, with millions of viewers watching the incident on YouTube. Famed NASCAR driver Tony Stewart was competing in the non-NASCAR event when he collided with Kevin Ward Jr.'s car and knocked him out of the race. Ward exited his vehicle and walked down the dirt track, apparently angry at Stewart and gesturing at him. When Stewart's car came around after another lap, it struck Ward, who later died.

"We're trying to collect the facts so we can definitively identify the cause of the crash," Ontario County Sheriff Philip Povero told a news conference late Sunday. "The world is watching. We're working on it diligently."

Intel’s 14nm Technology in Detail

Anandtech - Mon, 2014-08-11 09:45

Much has been made about Intel's 14nm process over the past year, and admittedly that is as much Intel's doing as it is the public's. As one of the last integrated device manufacturers and the leading semiconductor manufacturer in the world, Intel has set and continues to set the pace for the semiconductor industry. This means that Intel's efforts to break the laws of physics roughly every 2 years mark an important milestone in the continuing development of semiconductor technology, and offer a roadmap of sorts to what other semiconductor manufacturers might expect.

To that end, at a time when ramping up new process nodes is more complex and more expensive than ever, Intel’s 14nm process is especially important. Although concerns over the immediate end of Moore’s Law remain overblown and sensationalistic, there is no denying that continuing the pace of Moore’s Law has only gotten more difficult. And as the company on the forefront of semiconductor fabrication, if anyone is going to see diminishing returns on Moore’s Law first it’s going to be Intel.

Today Intel is looking to put those concerns to rest. Coinciding with today's embargo on Intel's 14nm technology and a preview of Intel's upcoming Broadwell-based Core M processor, Intel will be holding a presentation dubbed Advancing Moore's Law in 2014. Intel is nothing short of extremely proud of the advancements they have made over the last several years to bring the 14nm process to reality, and with that process now in volume production at their Oregon fab and being replicated to other fabs around the world, Intel is finally ready to share more information about it.

We’ll start off our look at Intel’s 14nm process with a look at Intel’s yields. Yields are important for any number of reasons, and in the case of Intel’s 14nm process the yields tell a story of their own.

Intel's 14nm process has been their most difficult process to develop yet, a fact that Intel is being very straightforward about. Throughout the life of the 14nm process so far, its yields have trailed those of the 22nm process at equivalent points in time, and while yields are now healthy enough for volume production, Intel still has further work to do to catch up with 22nm. In fact, at present Intel's 22nm process is the company's highest yielding (lowest defect density) process ever, which goes to show just how big a set of shoes the up-and-coming 14nm process needs to fill to completely match its predecessor.

Concerns over these yields have no doubt played a part in Intel's decision to go ahead with today's presentation, for if nothing else they need to showcase their progress to their investors and justify the company's heavy investment in 14nm and other R&D projects. While 14nm has made it into production in 2014 and the first 14nm products will hit retail by the end of the year, these yield issues have made 14nm late for Intel. Intel's original plans, which would have seen the bulk of their Broadwell lineup launch in 2014, have been reduced to the single Broadwell-Y SKU this year, with the rest of the Broadwell lineup launching in 2015.

Ultimately, while 14nm is still catching up to 22nm, Intel is increasingly confident that they will be able to finish catching up, forecasting that 14nm will reach parity with 22nm on a time-adjusted basis in the first quarter of 2015, roughly 6 months from now. Intel is already in the process of replicating their 14nm process to their other fabs, with fabs in Arizona and Ireland expected to come online later this year and in 2015, respectively. These fab ramp-ups will in turn allow Intel to further increase their manufacturing capacity, with Intel projecting that they will have sufficient volume to handle multiple 14nm product ramps in H1'2015.

Moving on to the specifications and capabilities of their 14nm process, Intel has provided minimum feature size data for 3 critical measurements: transistor fin pitch, transistor gate pitch, and interconnect pitch. From 22nm to 14nm these features have been reduced in size by between 22% and 35%, which is consistent with the (very roughly) 30%-35% reduction in feature size that one would expect from a full node shrink.
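
To put rough numbers on that expectation (our arithmetic, not Intel's): a full node shrink traditionally doubles transistor density, and since density scales with the square of linear dimensions, pitches should shrink by a factor of 1/√2 ≈ 0.71, a reduction of roughly 29%. Taking the node names at face value instead gives 14/22 ≈ 0.64, a reduction of about 36%, hence the 30%-35% rule of thumb.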

Intel is especially proud of their interconnect scaling on the 14nm node, as the 35% reduction in the minimum interconnect pitch is better than normal for a new process node.

Along with the immediate feature size improvements that come with a smaller manufacturing node, Intel has also been iterating on their FinFET technology, which is now in its second generation for the 14nm process. Compared to the 22nm process, the 14nm process’s fins are more tightly packed, thinner, taller, and fewer in number (per transistor).

Each one of these changes in turn improves the performance of the FinFETs in some way. The tighter density goes hand-in-hand with 14nm’s feature size reductions, while the taller, thinner fins allow for increased drive current and increased performance. Meanwhile by reducing the number of fins per transistor, Intel is able to improve on density once again while also reducing the transistor capacitance that results from those fins.

Intel is also reporting that they have been able to maintain their desired pace at improving transistor switching speeds and reducing power leakage. Across the entire performance curve the 14nm process offers a continuum of better switching speeds and/or lower leakage compared to Intel’s 22nm process, which is especially important for Intel’s low power ambitions with the forthcoming Core M processor.

Plotted differently, we can also see how the last several generations of Intel's process nodes compare across mobile, laptop, and server performance profiles. All 3 profiles are seeing a roughly linear increase in performance and decrease in active power consumption, which indicates that Intel's 14nm process is behaving as expected and offering similar gains to past processes. In this case the 14nm process should deliver a roughly 1.6x increase in performance per watt, just as past processes have.

Furthermore, these base benefits, when coupled with Intel's customized 14nm process for Core M (Broadwell-Y) and Broadwell's power optimizations, have allowed Intel to more than double performance per watt compared to Haswell-Y.
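
Decomposing that claim (again our arithmetic, not Intel's breakdown): if the process alone contributes roughly 1.6x, then more than doubling performance per watt implies that the remaining factor of at least 2.0/1.6 ≈ 1.25x comes from the combination of Broadwell's architectural power optimizations and the Y-series-specific process tuning.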

Moving on to costs, Intel offers a breakdown on a cost per mm² basis, paired with a plot of transistor sizes. By using more advanced double patterning on their 14nm node, Intel was able to achieve better than normal area scaling. The tradeoff is that wafer costs continue to rise from generation to generation, as double patterning requires additional time and ever-finer tools that drive up the cost of production. The end result is that while Intel's cost per transistor is not decreasing as quickly as the area per transistor, the cost is still decreasing, and significantly so. Even with the additional wafer costs of the 14nm process, on a cost per transistor basis the 14nm process is still slightly ahead of normal for Intel.

The fact that cost per transistor continues to come down at a steady rate may be par for the course, but that Intel has been able to maintain par for the course at all is a very significant accomplishment. As the costs of wafers and fabbing have risen over the years, there has been concern that transistor costs would plateau, which would leave chip designers able to increase performance only by increasing prices, as opposed to the past 40 years of cheaper transistors allowing prices to hold steady while performance increased. So for Intel this is a major point of pride, especially in light of complaints from NVIDIA and others in recent years that their costs on new nodes aren't scaling nearly as well as they would like.

Which brings us to the final subject of Intel's 14nm presentation: the competitive landscape. Given the ill-defined naming of new process nodes across the entire industry and Intel's continuing lead in semiconductor manufacturing, Intel likes to point out how their manufacturing nodes compare to foundry competitors such as TSMC and the IBM alliance. Citing 3rd party journal articles for comparison, Intel claims that along with their typical lead in rolling out new nodes, as of the 14nm node they are going to have a multi-generation technical advantage. They expect that their 14nm node will offer significantly smaller feature sizes than competing 14nm nodes, allowing them to maintain consistent logic area scaling at a time when their competitors (i.e. TSMC) cannot.

From a technical perspective it's quite obvious why Intel is able to maintain density scaling above the level that TSMC and Common Platform members can deliver. In short, it comes back to the improved interconnect density discussed earlier in this article. While Intel is shrinking both transistors and interconnects at 14nm, TSMC and Common Platform members are using the same interconnect technology that they did at 20nm. This means that only areas where transistor density was the gating factor at 20nm will decrease in size at 14/16nm, while areas already gated by 20nm interconnect technology won't be able to get any smaller.

Thus, for what it's worth, the basic facts do appear to check out, but we would be the first to point out that there is more to semiconductor manufacturing than just logic area scaling. At least until Intel's competitors start shipping their FinFET products this comparison remains speculative, and it doesn't quantify how well those competing process nodes will perform. But then again, the fact that Intel is already on their second FinFET node while their competitors are still ramping up their first is no small feat.

Wrapping things up, while the bring-up of Intel's 14nm process has not been without problems and delays, at this point Intel appears to be back on track. 14nm is in volume production in time for Broadwell-Y to reach retail before the end of the year, and Intel is far enough along that they can begin replicating the process to additional fabs for production in 2014 and 2015. Meanwhile it will still be a few months before we can test the first 14nm chips, but based on Intel's data it looks like they have good reason to be optimistic about their process. The feature size and leakage improvements are in line with previous generation process nodes, which should be a great help for Intel in their quest to crack the high performance mobile market in the coming year.

Comcast conveniently forgets “no fees” promise until confronted by recording [Updated]

ARS Technica - Mon, 2014-08-11 09:44

Two weeks ago, in the wake of Ryan Block’s nightmare of a cancellation call, Comcast Chief Operating Officer Dave Watson issued an internal memo saying that the recording was "painful to listen to." He exhorted his employees to "do better." Unfortunately for Watson, another call surfaced on Sunday that will likely be just as painful: a fellow named Tim Davis called Comcast to contest some bogus charges on his bill and only managed to get them refunded because he had recordings of previous Comcast calls.

According to the write-up by The Consumerist, Davis had moved to a new apartment and transferred his Comcast service to his new residence, opting to perform a self-install rather than have Comcast send out a technician. After a few weeks without problems, his Internet connection started dropping out, and a technician was dispatched. Comcast determined that the problems had to do with outside wiring rather than anything under Davis’ control, and thus the company told him that the truck roll and service were gratis.

Tim Davis and the Comcastic Customer Service Call.

(Warning: the above video has some strong language, and you should probably put on headphones if you're going to listen to it at work.)

Intel Broadwell Architecture Preview: A Glimpse into Core M

Anandtech - Mon, 2014-08-11 09:01

Typically we would see Intel unveil the bulk of the technical details of their forthcoming products at their annual Intel Developer Forum, and with the next IDF scheduled for the week of September 9th we’ll see just that. However today Intel will be breaking from their established standards a bit by not waiting until IDF to deliver everything at once. In a presentation coinciding with today’s embargo, dubbed Advancing Moore’s Law in 2014, Intel will be offering a preview of sorts for Broadwell and the 14nm process.

Today's preview and Intel's associated presentation are based around the forthcoming Intel Core M microprocessor, the Broadwell configuration otherwise known as Broadwell-Y. The reason for this is a culmination of several factors, and in all honesty it's probably driven as much by investor relations as by consumer/enthusiast relations, as Intel would like to convince consumers and investors alike that it is on the right path to take control of the mobile/tablet market through superior products, superior technology, and superior manufacturing. Hence today's preview focuses on the part and the market Intel feels are the most competitive and most at risk for the next cycle: the mobile market that Core M will be competing in.

AMD shows off the guts of its first ARM server chip

ARS Technica - Mon, 2014-08-11 09:00

AMD today gave a detailed look at its first ARM-based server processor, the Opteron A1100 "Seattle," which it first unveiled in January.

Seattle has eight 64-bit ARM Cortex-A57 cores arranged into four pairs, with each pair sharing 1MB of level 2 cache. All eight cores share an 8MB level 3 cache. There are two memory controllers, supporting both DDR3 and DDR4 and enabling up to 128GB of ECC memory.

These cores all share a set of I/O options. The system-on-a-chip has 8 lanes of PCIe 3.0 for expansion cards and 8 SATA revision 3.0 ports for storage. Network connectivity comes from two 10GBASE-KR controllers. (10GBASE-KR is a short-range specification designed for copper connections to backplanes in blade servers and modular routers.)

Broadwell is coming: A look at Intel’s low-power Core M and its 14nm process

ARS Technica - Mon, 2014-08-11 09:00

Last week I flew from New Jersey to Portland, Oregon, to get briefed by Intel PR reps and engineers about the company's next-generation CPUs and the new manufacturing process behind them. It was my first-ever visit to Intel's campus.

One of its campuses, anyway. I saw several peppered throughout suburban Portland, and that's not even counting the gargantuan Intel-branded factory construction site I jogged by the next morning, or Intel's other facilities worldwide. Usually our face-to-face interactions with tech company employees take place on neutral ground—an anonymous hotel room, convention hall, or Manhattan PR office—but two-and-change days on Intel's home turf really drove home the size of its operation. Its glory may be just a little faded these days, primarily because of a drooping PC market, tablet chips that are actually losing the company money, and a continuing smartphone boom that Intel's still scrambling to get a piece of, but something like 315 million PCs were sold worldwide in 2013, and the lion's share still has Intel inside.

That's what makes Intel's progress important, and that's why we’re champing at the bit to get the Broadwell architecture and see Intel’s new 14nm manufacturing process in action. The major industry players—everyone from Microsoft to Dell to Apple—depend on Intel’s progress to refine their own products. The jump between 2012’s Ivy Bridge architecture and 2013’s Haswell architecture wasn’t huge, but for many Ultrabooks it made the difference between a mediocre product and a good one.

AMD’s Big Bet on ARM Powered Servers: Opteron A1100 Revealed

Anandtech - Mon, 2014-08-11 09:00

It has been a full seven months since AMD released detailed information about its Opteron A1100 server CPU, and twenty-two months since its announcement. Today, at the Hot Chips conference in Cupertino, AMD revealed the final missing pieces of its ARM-powered server strategy, headlined by the A1100. One thing is certainly clear: AMD is betting heavily on ARM-powered servers by delivering one of the most disruptive server CPUs yet, and it is getting closer to launch.

Khronos Announces Next Generation OpenGL Initiative

Anandtech - Mon, 2014-08-11 06:02

As our regular readers have already seen, 2013 and 2014 have been one of the most significant periods for graphics APIs in years. While OpenGL and Direct3D have not necessarily been stagnant over the last half-decade or so, both APIs have been in a mature phase where both are stable products that receive relatively minor feature updates as opposed to more sweeping overhauls. Since reaching that stability there has been quite a bit of speculation over what would come next – or indeed whether anything would come next – and in the last year we have seen the answer in a series of new graphics APIs from hardware and software vendors alike.

In all of these announcements thus far, we have seen vendors focus on similar issues and plan similar solutions. AMD's Mantle, Microsoft's Direct3D 12, and Apple's Metal all reflect the fact that there is a general consensus among the graphics industry over where the current bottlenecks lie, where graphics hardware will be going in the future, and where graphics APIs need to go in response. The end result has been the emergence of several new APIs, all meaningfully different from each other but nonetheless all going in the same direction and all implementing the same style of solutions.

That common solution is a desire by all parties to scrape away the abstraction that has defined high level graphics APIs like Direct3D and OpenGL for so much of their lives. As graphics hardware becomes more advanced it has become more similar and more flexible; the need to abstract and hide the differences between GPU architectures has become less important, and the abstraction itself has become the issue. By removing the abstraction and giving developers more direct control over the underlying hardware, these next generation APIs aim to improve performance, ease API implementation, and give developers more flexibility than ever before.

It's this subject which brings us to today's final announcement from Khronos. At 22 years old, OpenGL is the oldest of the 3D graphics APIs in common use today, and in 2014 it is facing many of the same issues as the other abstraction-heavy APIs. OpenGL continues to serve its intended purposes well, but the need for a lower-level (or at least more directly controlled) API exists in the OpenGL ecosystem as much as it does in any other. For that reason, today Khronos is announcing the Next Generation OpenGL Initiative to develop the next generation of OpenGL.

The Next Generation OpenGL Initiative

For the next generation of OpenGL – which for the purposes of this article we're going to shorten to OpenGL NG – Khronos is seeking nothing less than a complete ground-up redesign of the API. As we've seen with Mantle and Direct3D, outside of shading languages you cannot transition from a high-level abstraction-based API to a low-level direct-control API within the old API; these APIs must be built anew to support the new programming paradigm, and at 22 years old OpenGL is certainly no exception. The end result is that this is going to be the most significant OpenGL development effort since the creation of OpenGL all those years ago.

By doing a ground up redesign Khronos and its members get to throw out everything and build the API that they will need for the future. We’ve already covered the subject of low level APIs in great detail over the past year, so we’ll defer to our past articles on Mantle and Direct3D 12. But in short the purpose of OpenGL NG is to build a lower level API that gives developers explicit control over the GPU. By doing so developers will be able to achieve greater performance by directly telling the GPU what they want to do, bypassing both the CPU overhead of abstraction and the GPU inefficiencies that come from indirectly accessing a GPU through an API. This is especially beneficial in the case of multithreading – something that has never worked well with high-level APIs – as it’s clear that single-threaded CPU performance gains have slowed and will continue to be limited over the coming years, so multithreading is becoming functionally mandatory in order to avoid CPU bottlenecking.
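
To make the multithreading benefit concrete, below is a minimal sketch of the command-buffer model these low-level APIs share. To be clear, OpenGL NG has no published API yet, so every "ng"-prefixed type and function here is hypothetical, loosely patterned on Mantle/Direct3D 12-style command lists: each thread records into its own buffer with no shared context state, and a single submission hands everything to the GPU.

    /* HYPOTHETICAL API: OpenGL NG is unreleased; NgCommandBuffer,
     * ngCreateCommandBuffer, ngCmdDraw, and ngSubmit are invented
     * for illustration only. */
    #include <pthread.h>

    #define NUM_THREADS 4

    typedef struct NgCommandBuffer NgCommandBuffer;      /* opaque handle */
    extern NgCommandBuffer *ngCreateCommandBuffer(void);
    extern void ngCmdDraw(NgCommandBuffer *cb, int first, int count);
    extern void ngSubmit(NgCommandBuffer *const *cbs, int n);

    static NgCommandBuffer *bufs[NUM_THREADS];

    /* Each worker records draws into its own buffer; with no implicit
     * context state there is nothing to lock between threads. */
    static void *record(void *arg)
    {
        int i = (int)(long)arg;
        bufs[i] = ngCreateCommandBuffer();
        ngCmdDraw(bufs[i], i * 1000, 1000);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NUM_THREADS];
        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&t[i], NULL, record, (void *)i);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(t[i], NULL);
        ngSubmit(bufs, NUM_THREADS);   /* one submission for all threads' work */
        return 0;
    }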

With that said, along with the common performance case, OpenGL NG also gives Khronos and its members the chance to fix certain aspects of OpenGL and avoid others, typically legacy cruft from earlier generations of the API and from times when hardware was far more limited. For example, OpenGL NG will be a single API for desktop and mobile devices alike – there will not be an ES version of OpenGL NG; desktop and mobile will be unified. As mobile GPUs have nearly caught up in functionality with desktop GPUs and OpenGL NG is a clean break, there is no need to offer separate APIs on the basis of legacy support or hardware feature gaps. There will be just one modern OpenGL: OpenGL NG.

Khronos will also be using this opportunity to establish a common intermediate representation shading language for OpenGL. The desire to offer shader IRs is a recurring theme for Khronos as both OpenGL and OpenCL were originally designed to have all shader programs distributed in either source form or architecture-specific binary form. However for technical reasons and IP protection reasons (avoiding trivial shader source code theft), developers want to be able to distribute their shaders in a compiled IR. For OpenCL this was solved with SPIR, and for OpenGL this will be solved in OpenGL NG.

Finally, the importance of the clean break afforded by OpenGL NG can't be overstated. At 22 years old, OpenGL is huge; between its Core and Compatibility profiles it covers the history of GPUs since the beginning. As a result a complete OpenGL implementation is incredibly complex, making it difficult to write and maintain, and making it even more difficult to do conformance testing against such an implementation.

OpenGL NG’s clean break means all of that legacy cruft goes away, which is something that will be a significant boon over the long run for hardware vendors. Like Mantle and Direct3D 12, OpenGL NG is expected to be a very thin API – after all, it doesn’t need much code if it’s just acting as a shim between the hardware and software developers – which means OpenGL NG will be much easier to implement and test. The sheer size of OpenGL has been a problem that has been brewing for many years and previous efforts to deal with it have faltered (Longs Peak), so finally being able to do a clean break is a very big deal for the consortium.

Building a Consortium and a Consensus

Of course the fact that this is a consortium effort is going to be the final piece of the puzzle, as this means the development of OpenGL NG will have to be approached in a different manner than any of the other forthcoming APIs. Microsoft for their part works with their hardware partners, but at the end of the day they still have the final say in the development of Direct3D. Meanwhile AMD specifically developed Mantle on their own so that they could develop it to their needs and abilities without the compromises and politicking that comes from a consortium effort. Khronos on the other hand is the consortium – the organization’s goals to offer open, cross-vendor APIs means that they need to take into consideration the technical and political considerations of all of their members on both the software and hardware sides.

Because OpenGL NG is still in early development, from a technical perspective it's impossible to say just what the final API will look like. However, one thing that Khronos is making clear early on is that because they're going to be cross-vendor and cross-platform, they expect that OpenGL NG won't be quite as low level as some of the other APIs we've seen. The goal for OpenGL NG is to offer the explicit-control benefits of a low-level API while maintaining the broad reach of an open standard, and that means that whatever form OpenGL NG takes, it will sit a bit higher than the other APIs (e.g. Mantle). Khronos for their part is confident that they can still deliver on their desired goals even without going quite so low level, so it will be interesting to see just how OpenGL NG, Mantle, and Direct3D 12 compare and contrast once all of those APIs are released.

This focus on portability means that OpenGL NG will also be a more tightly defined specification than any OpenGL before it. Poorly defined or undefined aspects of OpenGL have led to slightly inconsistent implementations in the past, and even though this has improved in recent versions of the API, even the stripped-down OpenGL ES still has areas where there are compatibility issues due to differences in how the standard is interpreted and implemented. With a clean break and a much smaller API overall, Khronos has made it a goal for OpenGL NG to be fully portable and the specification completely unambiguous, so that all implementations implement all functions identically. This is something Khronos has been able to do with WebGL, and they tell us they believe they have to do the same for OpenGL NG in order for it to succeed.


Recent Timeline of OpenGL Releases

But perhaps more daunting than the consortium’s technical considerations are the consortium’s political considerations. Khronos has attempted to overhaul OpenGL once before in 2007’s failed Longs Peak initiative, with the consortium ultimately unable to come to a consensus and Longs Peak being put to rest in favor of the more iterative OpenGL 3.0. There are a number of reasons for this failure including technical disagreements and concerns over backwards compatibility with existing software, but at the end of the day Khronos can only move forward when there’s a consensus among its members, something they didn’t have for Longs Peak.

Learning from Longs Peak and desiring to avoid another failure this time around, Khronos is being far more inclusive in the development of OpenGL NG, working to include as many software and hardware developers as they can. This is why OpenGL NG is still an initiative and in all likelihood some time off – design-by-committee projects will always take longer than solo efforts – so today's announcement is as much an invitation to additional developers as it is Khronos describing a new API. Khronos has made it clear that they want to get it right this time, and that means getting all of the major players invested in the initiative.

At this point the single hardest sell for Khronos and the members backing the initiative will be the clean break. This is a large part of what doomed Longs Peak, and Khronos admits that even now this isn't going to be easy; even a year ago they may not have been able to get consensus. However, as Mantle, Metal, and Direct3D 12 have made their own cases for new APIs and/or clean breaks, Khronos tells us they believe the time is finally right for a clean break for OpenGL. They believe there will be, and must be, genuine consensus on the clean break, and they have been passionately making their case to the consortium members.

To that end the OpenGL NG participant list is quickly becoming a who’s who of the graphics industry, both hardware and software. NVIDIA, AMD, Intel, ARM, and more of the major hardware players are all participating in the initiative, and on the software side members include everyone from Google to Apple to Valve. Khronos tells us that they have been especially impressed with the participation from software vendors, who haven’t always been as well represented in past efforts. As a result Khronos tells us that they feel there is more energy and excitement than in any past working group, even the burgeoning OpenGL ES working group.

Ultimately OpenGL NG will be a long and no doubt heated development process, but Khronos seems confident that they will get the consensus they need. Once they can sell the clean break, developing the API itself should be relatively straightforward. Due to its direct-access nature and the relatively few functions such an API would need to provide, the biggest hurdle is at the beginning and not the end.

OpenGL SIGGRAPH 2014 Update: OpenGL 4.5, OpenGL ES 3.1, & More

Anandtech - Mon, 2014-08-11 06:01

Taking place this week is SIGGRAPH 2014, the graphics industry’s yearly professional event. As the biggest graphics event of the year this show has become the Khronos Group’s favorite venue for delivering news about the state and development of OpenGL, and this year’s show is no exception. This week will see Khronos delivering news about all of their major OpenGL initiatives: OpenGL, OpenGL ES, and WebGL, taking to the show to announce a new version of their core graphics API while also delivering updates on recent advancements in its offshoots.

OpenGL 4.5 Announced

Kicking things off, we'll start with the announcement of the next iteration of OpenGL, OpenGL 4.5. As has become customary for Khronos, they are issuing their yearly update for OpenGL 4 at SIGGRAPH, further iterating on the API by integrating additional features into the OpenGL core standard. By continually updating OpenGL in this fashion Khronos has been able to respond to developer requests relatively quickly and integrate features into the core as policy and standards issues are settled; however, in the broader picture it does mean that as OpenGL 4 approaches maturity, these feature additions become a bit more niche, as the major issues have since been solved.

To that end, OpenGL 4.5 will see a small but important set of feature additions to the standard. The bulk of these changes have to do with API alignment, with Khronos making changes to better align OpenGL with OpenGL ES, WebGL, and Direct3D 11. In the case of OpenGL ES, OpenGL 4.5 brings the two APIs back into alignment by matching the changes from this year's release of OpenGL ES 3.1. Khronos intends for OpenGL to remain a superset of OpenGL ES, thereby allowing OpenGL devices to run applications targeting OpenGL ES, and letting OpenGL ES developers do their initial development and testing on desktops rather than on OpenGL ES-only devices.

Elsewhere OpenGL 4.5 is also adding some further Direct3D 11 emulation features to improve the ability to port between the two APIs. The APIs continue to have their corner cases where similar features are implemented differently, with the addition of Direct3D emulation features simplifying porting by offering versions of these features that adhere to Direct3D’s implementation requirements and quirks. Finally OpenGL 4.5 is also implementing further robustness requirements, these being primarily targeted at improving WebGL execution by enhancing security and isolation (e.g. preventing a GPU reset affecting any other running applications).

Meanwhile from a development standpoint OpenGL 4.5 will bring with it support for Direct State Access and Flush Control. Direct State Access allows objects to have their state queried and modified without the overhead of first binding those objects; in other words, bindless objects. Flush Control on the other hand sees limited command flushing being handed over to applications, allowing them to delay/avoid flushing in certain cases to improve performance with multi-threaded applications. This primarily involves situations where the context is being switched amongst multiple threads from the same application.
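
As a brief illustration of the difference, here is a minimal Direct State Access sketch; it assumes a GL 4.5 context and a function loader such as glad or GLEW supplying the entry points (the texture size and filter choices are arbitrary):

    #include <glad/glad.h>   /* assumption: glad, though any GL 4.5 loader works */

    /* With DSA the texture is created and configured by name; it is
     * never bound to the context just to be edited. */
    GLuint make_texture(void)
    {
        GLuint tex;
        glCreateTextures(GL_TEXTURE_2D, 1, &tex);            /* create, no bind */
        glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTextureStorage2D(tex, 1, GL_RGBA8, 256, 256);      /* immutable 256x256 RGBA8 */
        return tex;
    }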

OpenGL 4.5 is being released today as a final specification, and based on prior experience we expect to start seeing desktop GPU implementations of it later this year.

WebGL Support Nears Ubiquity

Meanwhile on the WebGL front, Khronos is happy to report that WebGL support is nearing ubiquity. The web-friendly/web-safe version of OpenGL has been complete for a while now, but it has taken some time for browser developers to implement support for it in all of the major browsers. This past year WebGL support on the desktop finally became ubiquitous with the launch of Internet Explorer 11, and now the mobile world is nearing the same with the impending releases of Apple's iOS 8 and Microsoft's Windows Phone 8.1.

Commonly a laggard when it comes to OpenGL support, Apple has supported WebGL in the past couple of versions of desktop Safari; however, they are among the last major browser developers not to support WebGL on their mobile browser. This is finally changing with Safari for iOS 8, which will see WebGL support enabled on what has historically been a very conservative platform for Apple.

Meanwhile Microsoft’s cascading browser development plan for Windows Phone means that Internet Explorer 11 is only now being ported over to Windows Phone through the release of Windows Phone 8.1. With the upgrade to IE 11’s core, Windows Phone 8.1 will similarly be gaining WebGL compatibility this year as it is released. Altogether, ignoring the increasingly dated Android stock web browser (which itself is rarely used these days in favor of Chrome), this means that WebGL support should be nearly pervasive on desktops and mobile devices alike going into 2015.

OpenGL ES 3.1: Validation & Android Extension Pack

Finally, for OpenGL ES 3.1 Khronos is announcing that the first GPUs and drivers have finished their conformance testing and are being validated. Khronos keeps a running list over on their website, where we can see that ARM Mali Midgard, Imagination PowerVR Rogue, NVIDIA Tegra K1, and Intel HD Graphics for Atom products have all been validated. At this point there are a handful of products from the various families that haven’t finished validation, but ultimately all the major mobile GPU architectures expected to support OpenGL ES 3.1 are present in one form or another. The only vendor not present at this time is Qualcomm – the Adreno 300 series will not support OpenGL ES 3.1, and the Adreno 400 series is not yet through testing.

With the speed of validation and the limited amount of changes between OpenGL ES 3.0 and 3.1, Khronos tells us that they expect OpenGL ES 3.1 adoption to be very quick compared to the longer adoption periods required for major changes like OpenGL ES 2.0 and ES 3.0. With that said, in the high-end mobile device market Qualcomm has been by far the biggest winner of the ES 3.x generation thus far, so as a percentage of devices shipped we expect that there will still be a number of ES 3.0 devices in use that cannot be upgraded to ES 3.1. Ultimately, as OpenGL ES 3.1 is designed to be fully backwards compatible with OpenGL ES 3.0, developers will be able to tap into ES 3.1 features while still supporting these ES 3.0 devices.

Of course even ES 3.1 only goes so far, which is why Khronos is also telling developers that they’re rather pleased with the development of the Android Extension Pack, even if it’s not a Khronos standard. The AEP is implemented as a set of OpenGL ES 3.1 extensions, so it will be further building off of what OpenGL ES 3.1 will be accomplishing. Through the AEP Google will be enabling tessellation, geometry shaders, compute shaders, and ASTC texture compression on the forthcoming Android L, all major features that most of the latest generation mobile GPUs can support but are not yet part of the OpenGL ES standard. With these latest mobile GPUs approaching feature parity with their desktop counterparts, the AEP in turn brings the OpenGL ES API closer to parity with the OpenGL API, and indeed this may be a good hint of what features to expect in a future version of OpenGL ES.
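
Detecting the AEP from native code is an ordinary extension query; a minimal sketch, assuming an OpenGL ES 3.1 context created through the Android NDK:

    #include <GLES3/gl31.h>
    #include <stdbool.h>
    #include <string.h>

    /* Returns true if the Android Extension Pack is exposed by the driver. */
    bool has_android_extension_pack(void)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; i++) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
            if (ext && strcmp(ext, "GL_ANDROID_extension_pack_es31a") == 0)
                return true;   /* tessellation, geometry shaders, ASTC, etc. */
        }
        return false;
    }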

Gallery: Khronos OpenGL Presentation

OpenGL 4.5 released—with one of Direct3D’s best features

ARS Technica - Mon, 2014-08-11 06:00

The Khronos Group today released OpenGL 4.5, the newest version of the industry standard 3D programming API. The new version contains a mix of features designed to make developers' lives easier and to improve performance and reliability of OpenGL applications.

The group also issued a call for participation in its next generation OpenGL initiative. Amid growing interest in "low-level" APIs, such as AMD's Mantle and Microsoft's forthcoming Direct3D 12 specification, Khronos is working on its own vendor-neutral, cross-platform API to give developers greater low-level control and to extract more performance from 3D hardware.

The big feature in OpenGL 4.5 is Direct State Access (DSA). OpenGL is a complex API that relies extensively on an implicit state that is maintained between function calls. For example, to specify properties of a texture, first a texture unit must be set as active. Then, the texture must be bound to the currently active texture unit. Then, the properties of the currently bound texture are specified. In each case, the link between the calls is implicit; the binding of the texture implicitly uses the active texture unit, and the property setting implicitly uses the bound texture.
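
In code, the bind-to-edit sequence the article describes looks like this; a minimal sketch (the function names are the real classic-OpenGL API, while the filter setting is an arbitrary example):

    #include <GL/gl.h>   /* glActiveTexture may need a loader on some platforms */

    void set_texture_filter(GLuint tex)
    {
        glActiveTexture(GL_TEXTURE0);           /* 1. select the active texture unit */
        glBindTexture(GL_TEXTURE_2D, tex);      /* 2. bind 'tex' to that unit        */
        glTexParameteri(GL_TEXTURE_2D,          /* 3. edit whatever is bound: the    */
                        GL_TEXTURE_MIN_FILTER,  /*    target, not 'tex', is named,   */
                        GL_LINEAR);             /*    so the link is purely implicit */
    }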

Khronos Announces OpenCL SPIR 2.0

Anandtech - Mon, 2014-08-11 06:00

The last time we talked to Khronos about the OpenCL Standard Portable Intermediate Representation (SPIR) was back at SIGGRAPH 2013. At the time Khronos was gearing up for the release of the first iteration of SPIR, then based on the OpenCL 1.2 specification. By building an intermediate representation for OpenCL, Khronos was aiming to expand the capabilities of OpenCL and its associated runtime, both by offering a method of distributing programs in a more developer-friendly “non-source” form, and by allowing languages other than OpenCL’s dialect of C to build upon the OpenCL runtime.

At the time of its announcement, Khronos released OpenCL SPIR 1.2 as a provisional specification, keeping it in provisional status over a protracted period to solicit feedback on the first version of the standard. Since that provisional release, Khronos finalized OpenCL SPIR 1.2 in early 2014 and has been working on building up the developer and user bases for SPIR.

Which brings us to SIGGRAPH 2014 and Khronos’s latest round of specification updates. With OpenCL 2.0 already finalized and device makers scheduled to deliver the first runtimes a bit later this year, Khronos has now turned their attention towards updating SPIR to take advantage of OpenCL 2.0’s greater functionality. To that end, today Khronos is announcing the provisional specification for the next version of SPIR, OpenCL SPIR 2.0.

With much of the groundwork for SPIR already laid out on SPIR 1.2, SPIR 2.0 is a (generally) straightforward update to the specification to integrate OpenCL 2.0 functionality. OpenCL 2.0 in turn is the biggest upgrade to OpenCL since its introduction, adding several major features to the API to keep pace with functionality offered by the latest generations of GPUs.

As a quick recap, OpenCL 2.0's headline features are dynamic parallelism (device-side kernel enqueue), shared virtual memory, and support for a generic address space. Dynamic parallelism allows kernels running on a device (e.g. a GPU) to issue additional kernels without going through the host, reducing host bottlenecks. Meanwhile, shared virtual memory allows the host and device to share complex data, including memory pointers, executing and using data without the need to explicitly transfer it from host to device and vice versa. This feature is especially important for the HSA Foundation, as it is one of the critical features for enabling HSA execution on OpenCL. Finally, generic address space support alleviates the need to write a version of a function for each named address space. Instead a single generic function can handle all of the named address spaces, simplifying development and cutting down on the amount of code that needs to be cached for execution.
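
Of those features, dynamic parallelism is the easiest to show in a few lines. Below is a minimal sketch in OpenCL C 2.0 (the kernel names and the doubling operation are illustrative, not from the specification): the parent kernel enqueues a child kernel directly on the device's default queue, with no round trip through the host.

    /* OpenCL C 2.0 device-side enqueue: compile with -cl-std=CL2.0 */
    kernel void scale(global float *data)
    {
        data[get_global_id(0)] *= 2.0f;
    }

    kernel void parent(global float *data, int n)
    {
        if (get_global_id(0) == 0) {
            queue_t q = get_default_queue();
            /* Launch 'scale' over n work-items from the device itself. */
            enqueue_kernel(q, CLK_ENQUEUE_FLAGS_NO_WAIT,
                           ndrange_1D(n),
                           ^{ scale(data); });
        }
    }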

With these functions finally exposed through SPIR, they can be tapped into by all SPIR developers – both those looking to distribute their programs in intermediate form, and those using OpenCL as a backend for their own languages. For the latter group, SPIR 2.0 should be especially interesting, as these feature additions make SPIR a more versatile backend that's better capable of efficiently executing more complex languages.

In keeping with their goals of providing common, open standard APIs, in the long run it is Khronos’s hope that OpenCL and SPIR will become the runtime of choice for GPU applications. By building up a robust runtime and set of tools through SPIR, language developers can simply target SPIR rather than needing to develop against multiple different hardware devices; and meanwhile device manufacturers can focus on tightly optimizing their OpenCL runtime rather than juggling with supporting several disparate runtimes. To that end Khronos is claiming that they’re up to nearly 200 languages and frameworks that will be capable of using SPIR, including a few high-profile languages such as C++ AMP, Python, and OpenACC.

However from a PC standpoint Khronos still faces an uphill battle, and it will be interesting to see whether SPIR 2.0 runs into the same difficulties as SPIR 1.2 did. NVIDIA for their part never did fully support OpenCL 1.2, and as a result SPIR 1.2 couldn’t be used on NVIDIA’s products, preventing SPIR’s use on what’s a significant majority of high-performance discrete GPUs. So far we have not seen NVIDIA comment much on OpenCL 2.0 (though it’s interesting to note that Khronos president Neil Trevett is an NVIDIA employee); so SPIR’s success may hinge on whether NVIDIA chooses to fully support OpenCL 2.0 and SPIR.

NVIDIA for their part has their competing CUDA ecosystem, and like Khronos they are leveraging LLVM to allow for 3rd party languages and frameworks to compile down to PTX (NVIDIA’s intermediate language). For languages and frameworks utilizing LLVM this opens the door to compiling code down to both SPIR and PTX, but it’s a door that swings both ways since it diminishes the need for NVIDIA to support SPIR (never mind OpenCL 2.0). For their parts, AMD and Intel will be fully backing OpenCL 2.0 and SPIR 2.0, so it remains to be seen whether NVIDIA finally comes on board with SPIR 2.0 after essentially skipping out on 1.2.

Gallery: OpenCL SPIR 2.0 Presentation

Tegra K1 Lands in Acer's Newest Chromebook

Anandtech - Mon, 2014-08-11 05:00

Today Acer announced four new models of a new 13.3" Chromebook design featuring Tegra K1. This is a significant launch for NVIDIA, proving there's industry interest in Tegra K1 after the disappointing reception of Tegra 4, and notching NVIDIA its first Chromebook design win.

NVIDIA has two versions of the Tegra K1: one implementing a 4+1 configuration of ARM Cortex A15s, and another implementing two custom-designed NVIDIA Denver CPUs. Acer's new Chromebooks feature the former, so we have yet to see Denver CPUs in the wild. Samsung previously shipped a Chromebook featuring Cortex A15s via its Exynos processor, and HP used the same SoC in its Chromebook 11. Samsung has since refreshed its ARM Chromebooks a few times, with new models using the "Chromebook 2" branding.

The most significant portion of the Tegra K1 SoC is its 192 CUDA cores. Chromebooks rely heavily on web-based applications, but with the rise of WebGL there have been some experiments with browser-based 3D games. There haven't been any AAA WebGL titles yet, but when they arrive, this Chromebook should be well equipped to handle them; NVIDIA specifically mentions the upcoming Miss Take and Oort Online, as well as WebGL ports of Unreal Engine 4 and Unity 5.

NVIDIA claims up to 3X the WebGL performance of competing Chromebooks, with processor performance superior to the Exynos 5800 and Bay Trail Celeron N2830. Unfortunately, no performance comparisons between the K1 and the Haswell Celeron 2955U were provided. Since both Haswell and Tegra K1 are available for the Chromebook platform, we'll have the opportunity to perform CPU and GPU benchmarking to directly compare the processors, and we have requested a review sample for when Acer makes them available.

Beyond the marquee feature of the Tegra K1 processor, the Acer Chromebook also includes 2x2 MIMO wireless AC and an anti-glare coating, and two models feature a 1080p display. Specifications provided by Acer are listed below; Acer provided the model numbers for the three models available for presale, and there is a fourth configuration, available through resellers, for which we do not yet have the model number. Acer states they will begin shipping the first week of September.

Acer Chromebook 13 Models

Common to all four models: NVIDIA Tegra K1 SoC (2.1GHz); anti-glare display; 4-cell 3220mAh (48Wh) battery; 802.11ac with 2x2 MIMO; 2x USB 3.0, HDMI, and 3.5mm audio ports; 720p webcam, stereo speakers, and microphone; 0.71 in thick; 3.31 lbs.

CB5-311-T7NN: 2GB memory, 16GB SSD, 1366x768 display, 13 hours estimated battery life, $279.99
CB5-311-T9B0: 2GB memory, 16GB SSD, 1920x1080 display, 11 hours estimated battery life, $299.99
? (model number TBA): 4GB memory, 16GB SSD, 1366x768 display, 13 hours estimated battery life, $329.99
CB5-311-T1UU: 4GB memory, 32GB SSD, 1920x1080 display, 11 hours estimated battery life, $379.99

Source: Acer

The higher resolution displays drop battery life by a couple of hours, which isn't too surprising, but overall battery life of 11-13 hours is still great for a Chromebook. The industrial design of the new Acer Chromebooks is also much improved over the previous models, with clean lines and a white body. The Acer Chromebook is also fanless, thanks to the reduced power requirements of NVIDIA's Tegra K1 SoC.

Overall pricing looks good, with the base model matching the price of HP's current Chromebook 11 and the 1080p upgrade taking on the HP Chromebook 14. But the real competition is still going to be Acer's existing Chromebook C720, which can be found with 32GB of storage, 2GB of RAM, and a Celeron 2955U for just $229. There's also the question of size: the C720 is an 11.6" Chromebook, and while some might prefer the smaller device, others will likely prefer the 13.3" model. Samsung's Chromebook 2 13.3, which has a 1080p display, 16GB of storage, and 4GB of RAM, likely needs a price drop to compete, as it is listed at $399. Either way, with Chrome OS continuing to improve over time, Windows laptops face increasing competition from alternative platforms.

Gallery: Acer Chromebook 13

MSI A88X-G45 Gaming Review

Anandtech - Mon, 2014-08-11 05:00

One of the main markets AMD likes to promote is gaming, especially gamers on a tighter budget. This in turn encourages motherboard manufacturers to build models oriented toward gaming. MSI's Gaming range has become a solid part of MSI's plethora of motherboards, and it now extends to the FM2+ platform. Today we review the MSI A88X-G45 Gaming.

NVIDIA FY 2015 Q2 Financial Results

Anandtech - Sun, 2014-08-10 20:10

On Thursday August 7th, NVIDIA released their results for the second quarter of their fiscal year 2015. Year-over-year, they had an excellent quarter based on strong growth in the PC GPU market, Datacenter and Cloud (GRID), and mobile with the Tegra line.

GAAP revenue for the quarter came in at $1.103 billion, which is flat from Q1 2015 but up 13% from $977 million at the same time last year. Gross margin for Q2 was up both sequentially and year-over-year at 56.1%. Net income for the quarter came in at $128 million, down 6% from Q1 and up 33% from Q2 2014. These numbers resulted in diluted earnings per share of $0.22, down 8% from Q1 and up 38% from Q2 last year, beating analysts' expectations.

NVIDIA Q2 2015 Financial Results (GAAP), in millions except EPS

                     Q2'2015   Q1'2015   Q2'2014   Q/Q     Y/Y
Revenue              $1103     $1103     $977      0%      +13%
Gross Margin         56.1%     54.8%     55.8%     +1.3%   +0.3%
Operating Expenses   $456      $453      $440      +1%     +4%
Net Income           $128      $137      $96       -6%     +33%
EPS                  $0.22     $0.24     $0.16     -8%     +38%

NVIDIA Q2 2015 Financial Results (Non-GAAP), in millions except EPS

                     Q2'2015   Q1'2015   Q2'2014   Q/Q     Y/Y
Revenue              $1103     $1103     $977      0%      +13%
Gross Margin         56.4%     55.1%     56.3%     +1.3%   +0.1%
Operating Expenses   $411      $411      $401      0%      +2%
Net Income           $173      $166      $133      +4%     +30%
EPS                  $0.30     $0.29     $0.23     +3%     +30%

The GPU business is the primary source of revenue for NVIDIA, and includes GeForce for desktop and notebook PCs, Quadro for workstations, Tesla for high performance computing, and GRID for cloud-enabled graphics solutions. This quarter, GPU revenue rose 2% over Q2 2014 to $878 million, though it is down 2% from the previous quarter due to the seasonal decline in consumer PCs. Revenue from the PC GPU line rose 10% over last year, helped by the introduction of the Maxwell-based GeForce GTX 750 and 750 Ti boards. NVIDIA is also seeing growth in the Tesla datacenter business, and Quadro revenue increased as well on strong growth in mobile workstations.

The mobile side of NVIDIA hasn't seen as many product wins as Qualcomm, but the Tegra business is still growing strongly. Tegra revenue was up 14% from Q1 2015 and 200% from Q2 2014, with total revenue of $159 million for the quarter. Tegra continues to see strong demand in the automotive infotainment sector, with 74% growth in that market year-over-year. This could be a lucrative market, as automotive systems generally lock in for at least several years, compared to the mobile sector, where a product might be replaced in less than a year. The Tegra K1 has only just come to market, but it has shown itself to be a capable performer and may win more designs soon.

The last avenue of income for NVIDIA is $66 million per quarter from a licensing deal with Intel.

NVIDIA Quarterly Revenue Comparison (GAAP), in millions

                  Q2'2015   Q1'2015   Q2'2014   Q/Q     Y/Y
GPU               $878      $898      $858      -2%     +2%
Tegra Processor   $159      $139      $53       +14%    +200%
Other             $66       $66       $66       flat    flat

The company projected this quarter to be flat on revenue as compared to Q1, and they were exactly right. Projections for Q3 2015 are for revenue of $1.2 billion plus or minus 2%.

During the quarter, $47 million was paid in dividends and NVIDIA purchased 6.8 million shares back from investors. This brings them to $594 million of the $1 billion promised to shareholders for FY 2015. The next dividend of $0.085 per share will be paid on September 12th to all stockholders of record as of August 21st.

It was an excellent quarter for NVIDIA, and its stock price jumped after the numbers were released. All segments of the company are growing at the moment, and with the recent release of the Tegra K1 they can only hope for another strong quarter of mobile growth in Q3, after a great 200% year-over-year jump in Tegra revenue. The stronger than expected PC sales have helped their biggest business as well, with the GPU division up 2%. CEO Jen-Hsun Huang has worked to bring the company a more diversified portfolio, and with the recent gains in mobile and datacenter computing, the company has certainly had some recent success.
