Texas man must pay $40.4M for running Bitcoin-based scam
A federal judge in Texas has found a local man liable for running a massive Bitcoin-based Ponzi scheme and ordered him to pay $40.4 million. The court ruled on Friday that Trendon Shavers had created a virtual Bitcoin-based hedge fund that many suspected of being a scam—and it turned out they were right.
The Bitcoin Savings and Trust (BTCST) shut down in August 2012, and by June 2013 the Securities and Exchange Commission (SEC) filed charges against its founder. In a statement at the time, the SEC said Shavers "raised at least 700,000 Bitcoin in BTCST investments, which amounted to more than $4.5 million based on the average price of Bitcoin in 2011 and 2012 when the investments were offered and sold."
Alibaba raises over $21 billion, making it the biggest IPO ever in the US
When trading in Alibaba shares closed on Friday, the Chinese e-commerce company had officially logged the biggest Initial Public Offering (IPO) in US history, raising $21.8 billion in its first day on the New York Stock Exchange. The company's share price gives it a market capitalization of over $200 billion, "putting it among the 20 biggest companies by market cap in the US," the Wall Street Journal notes.
Alibaba's IPO beat out record offerings like Visa's $17.9 billion IPO in 2008 and General Motors' $15.8 billion sale in 2010. And Alibaba beat out its peers in the tech sector too, like Facebook (whose 2012 IPO raised $16 billion) and Google (whose 2004 IPO raised only $1.67 billion—paltry in today's terms).
Earlier this month, the company announced that it would price its shares at $66. Around noon ET today, the NYSE gave the go-ahead for the company, whose ticker symbol is BABA, to start trading. Shares opened at $92.70, about 40 percent above the offering price, and ended the day at $93.89 after reaching a high of $99.70. In after-hours trading, Alibaba was down slightly at $93.60 per share as of this writing.
US courts agree to restore 10 years of deleted online public records
The US court system agreed Friday to restore a decade's worth of electronic federal court documents that were removed from online viewing last month because of an upgrade to a computer database known as PACER.
The move by the Administrative Office of the Courts, first reported by The Washington Post, comes amid a fierce backlash from lawmakers who urged it to restore the data, which is among the few means of delivering court documents to the public. PACER is a paid service, costing 10 cents a page, and has long been criticized as a deeply dated system that does too little and charges too much for online access to material like judicial orders and court briefs.
To be restored are, combined, about a decade's worth of court dockets and all manner of documents at the US Courts of Appeals for the 2nd, 7th, 11th, and Federal Circuits, as well as the Bankruptcy Court for the Central District of California.
FAA bars drone from delivering game ball to college football matchup
The Federal Aviation Administration has blocked plans for a small drone to deliver the game football for the University of Michigan kickoff Saturday against the University of Utah before a crowd of about 110,000 fans.
The FAA's move is the latest example of flight regulators blocking the use of small drones for commercial purposes, despite the questionable legal authority for them to do so. The drone, built by Ann Arbor-based SkySpecs, was supposed to take part in the pre-game program celebrating the 100th anniversary of the University of Michigan's aerospace-engineering program.
Bloomberg News said that after the FAA explained its rules, "the school backed down."
NVIDIA 344.11 & 344.16 Drivers Available
In the crazy rush to wrap up the GeForce GTX 980 review in time for the NDA lift yesterday, news of the first R343 driver release may have been lost in the shuffle. This is a full WHQL driver release from NVIDIA, and it's available for Windows 8.1, 7, Vista, and even XP (though I don't know what you'd be doing with a modern GPU on XP at this point). Notebooks also get the new drivers, though only for Windows 7 and 8 it seems. You can find the updates at the usual place, or they're also available through GeForce Experience (which has also been updated to version 2.1.2.0 if you're wondering).
In terms of what the driver update provides, this is the Game Ready driver for Borderlands: The Pre-Sequel, The Evil Within, F1 2014, and Alien: Isolation – all games that are due to launch in early to mid-October. Of course this is also the publicly available driver for the GeForce GTX 980 and GTX 970, which are apparently selling like hotcakes based on the number of "out of stock" notifications we're seeing (not to mention some hefty price gouging on the GTX 970 and GTX 980).
The drivers also enable NVIDIA's new DSR (Dynamic Super Resolution), with hooks for individual games available in the Control Panel->Manage 3D Settings section. It's not clear whether DSR will be available for other GPUs, but it's definitely not enabled on my GTX 780 right now and I suspect it will be limited to the new Maxwell GM204 GPUs for at least a little while.
There are a host of other updates, too numerous to go into, but you can check the release notes for additional information. These drivers also drop support for legacy GPUs (anything from the 300 series and older), so if you're running an older GPU you'll need to stay with the previous driver release.
Update: 344.16 is now available for the GTX 900 series. These drivers include the fixes to resolve the compatibility issues we were seeing with the GTX 970.
Samsung Acknowledges the SSD 840 EVO Read Performance Bug - Fix Is on the Way
During the last couple of weeks, numerous reports of the Samsung SSD 840 and 840 EVO having low read performance have surfaced around the Internet. The most extensive one is probably a forum thread over at Overclock.net, which was started about a month ago and currently has over 600 replies. For those who are not aware of the issue, there is a bug in the 840 EVO that causes the read performance of old blocks of data to drop dramatically, as the HD Tach graph below illustrates. The odd part is that the bug only seems to affect LBAs with old data (>1 month) associated with them, because freshly written data will read at full speed, which also explains why the issue was not discovered until now.
Source: @p_combe
I just got off the phone with Samsung and the good news is that they are aware of the problem and have presumably found the source of it. The engineers are now working on an updated firmware to fix the bug and as soon as the fix has been validated, the new firmware will be distributed to end-users. Unfortunately there is no ETA for the fix, but obviously it is in Samsung's best interest to provide it as soon as possible.
Update 9/27: Samsung just shed some light on the timeline and the fixed firmware is scheduled to be released to the public on October 15th.
I do not have any further details about the nature of the bug at this point, but we will be getting more details early next week, so stay tuned. It is a good sign that Samsung acknowledges the bug and that a fix is in the works, but for now I would advise against buying the 840 EVO until there is a resolution for the issue.
A not-so-friendly reminder from the gov’t: Yelp is not for kids
In some ways, the modern Internet is a Wild West in terms of privacy. Internet companies collect and share heaps of data from adults, but getting the same data from kids—even a few of them, even by mistake—can land them in hot water.
This week, Yelp agreed to pay a $450,000 fine to settle charges that it violated the Children's Online Privacy Protection Act, or COPPA. The FTC's complaint outlines how Yelp's mobile app allowed kids under 13 to register for the site between 2009 and April 2013.
COPPA requires app-makers and website owners to get explicit parental permission before collecting any personal information about children under 13. That personal information can include things as simple as a name and email address. COPPA is the reason why Facebook and many other popular sites don't allow users under 13.
Hack runs Android apps on Windows, Mac, and Linux computers
If you remember, about a week ago, Google gave Chrome OS the ability to run Android apps through the "App Runtime for Chrome." The release came with a lot of limitations—it only worked with certain apps and only worked on Chrome OS. But a developer by the name of "Vladikoff" has slowly been stripping away these limits. First he figured out how to load any app on Chrome OS, instead of just the four that are officially supported. Now he's made an even bigger breakthrough and gotten Android apps to work on any desktop OS that Chrome runs on. You can now run Android apps on Windows, Mac, and Linux.
The hack depends on App Runtime for Chrome (ARC), which is built using Native Client, a Google project that allows Chrome to run native code safely within a web browser. While ARC was only officially released as an extension on Chrome OS, Native Client extensions are meant to be cross-platform. The main barrier to entry is obtaining ARC from the Chrome Web Store, which flags desktop versions of Chrome as "incompatible."
Vladikoff made a custom version of ARC, called ARChon, that can be sideloaded simply by dragging the file onto Chrome. It should get Android apps up and running on any platform running the desktop version of Chrome 37 and up. The hard part is getting Android apps that are compatible with it. ARC doesn't run raw Android app packages (APKs)—they need to be converted into a Chrome extension—but Vladikoff has a tool called "chromeos-apk" that will take care of that, too.
2014 Ig Nobel awards honor nasal tampons made of bacon
The 24th Ig Nobel prizes were awarded last night, recognizing scientific research that “first makes people laugh and then makes them think.”
The traditionally elaborate ceremony's entertainment included the Win-a-Date-With-a-Nobel-Laureate Contest, two Paper Airplane Deluges, and an opera set to the music of Mozart and called What's Eating You, "about people who stop eating food and instead nourish themselves exclusively with pills."
The awards for individual categories were presented at Harvard University by "a group of genuine, genuinely bemused Nobel Laureates." All but one of the teams managed to get representatives to Boston to receive the awards in person (and the group that couldn't make it appeared by video).
Short Bytes: NVIDIA GeForce GTX 980 in 1000 Words
To call the launch of NVIDIA's Maxwell GM204 part impressive is something of an understatement. You can read our full coverage of the GTX 980 for the complete story, but here's the short summary. Without the help of a manufacturing process shrink, NVIDIA and AMD are both looking at new ways to improve performance. The Maxwell architecture initially launched earlier this year with GM107 and the GTX 750 Ti and GTX 750, and with it we had our first viable mainstream GPU of the modern era that could deliver playable frame rates at 1080p while using less than 75W of power. The second generation Maxwell ups the ante by essentially tripling the CUDA core count of GM107, all while adding new features and still maintaining the impressive level of efficiency.
It's worth pointing out that "Big Maxwell" (or at least "Bigger Maxwell") is enough of a change that NVIDIA has bumped the model numbers from the GM100 series to the GM200 series this round. NVIDIA has also skipped the desktop 800 line completely and is now in the 900 series. Architecturally, however, there's enough change going into GM204 that calling this "Maxwell 2" is certainly warranted.
NVIDIA is touting a 2X performance-per-Watt increase over GTX 680, and they've delivered exactly that. Through a combination of architectural and design improvements, NVIDIA has moved from 192 CUDA cores per SMX in Kepler to 128 CUDA cores per SMM in Maxwell, and a single SMM is still able to deliver around 90% of the performance of an SMX at equivalent clocks. Put another way, NVIDIA says the new Maxwell 2 architecture is around 40% faster per CUDA core than Kepler. What that means in terms of specifications is that GM204 needs only 2048 CUDA cores to compete with – and generally surpass! – the performance of GK110 with its 2880 CUDA cores, as used in the GeForce GTX 780 Ti and GTX Titan cards.
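That per-core claim checks out on the back of an envelope, using only the figures quoted above:

```python
# Sanity check of NVIDIA's per-core claim (numbers from the text):
# GM204 has 2048 CUDA cores, and Maxwell 2 is said to be ~40% faster
# per core than Kepler. GK110 (GTX 780 Ti) has 2880 CUDA cores.
gm204_cores = 2048
kepler_equivalent = round(gm204_cores * 1.4)  # effective Kepler-core count
print(kepler_equivalent)  # 2867 -- right around GK110's 2880 cores
```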
In terms of new features, some of the changes with GM204 come on the software/drivers side of things while other features have been implemented in hardware. Starting with the hardware side, GM204 now implements the full set of D3D 11.3/D3D 12 features, where previous designs (Kepler and Maxwell 1) stopped at full Feature Level 11_0 with partial FL 11_1. The new features include Rasterizer Ordered Views, Typed UAV Load, Volume Tiled Resources, and Conservative Rasterization. Along with these, NVIDIA is also adding hardware features to accelerate what they're calling VXGI – Voxel accelerated Global Illumination – a forward-looking technology that brings GPUs one step closer to doing real-time path tracing. (NVIDIA has more details available if you're interested.)
NVIDIA also has a couple new techniques to improve anti-aliasing, Dynamic Super Resolution (DSR) and Multi-Frame Anti-Aliasing (MFAA). DSR essentially renders a game at a higher resolution and then down-sizes the result to your native resolution using a high-quality 13-tap Gaussian filter. It's similar to super sampling, but the great benefit of DSR over SSAA is that the game doesn't have any knowledge of DSR; as long as the game can support higher resolutions, NVIDIA's drivers take care of all of the work behind the scenes. MFAA (please, no jokes about "mofo AA") is supposed to offer essentially the same quality as 4x MSAA with the performance hit of 2x MSAA through a combination of custom filters and looking at previously rendered frames. MFAA can also function with a 4xAA mode to provide an alternative to 8x MSAA.
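The downsampling idea behind DSR can be sketched in a few lines. This toy example simply box-averages each 2x2 block of a supersampled image (i.e., a 2x-per-axis render); real DSR uses the 13-tap Gaussian filter mentioned above, so treat this as an illustration of the concept rather than the actual filter:

```python
# Toy DSR-style downsample: render high, filter down to native resolution.
# Here we box-average 2x2 blocks of grayscale values; NVIDIA's actual
# implementation uses a 13-tap Gaussian filter instead.
def downsample_2x(image):
    """image: list of rows of grayscale values; dimensions must be even."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4)  # average the 2x2 block
        out.append(row)
    return out

hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [255, 255, 0, 0],
          [255, 255, 0, 0]]
print(downsample_2x(hi_res))  # [[0.0, 255.0], [255.0, 0.0]]
```

Because the filtering happens entirely in the driver, the game only ever sees the higher "virtual" resolution, which is why DSR works with any title that supports higher resolutions.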
The above is all well and good, but what really matters at the end of the day is the actual performance that GM204 can offer. We've averaged results from our gaming benchmarks at our 2560x1440 and 1920x1080 settings, as well as our compute benchmarks, with all scores normalized to the GTX 680. Here's how the new GeForce GTX 980 compares with other GPUs. (Note that we've omitted the overclocking results for the GTX 980, as it wasn't tested across all of the games, but on average it's around 18% faster than the stock GTX 980 while consuming around 20% more power.)
Wow. Obviously there's not quite as much to be gained by running such a fast GPU at 1920x1080, but at 2560x1440 we're looking at a GPU that's a healthy 74% faster on average compared to the GTX 680. Perhaps more importantly, the GTX 980 is also on average 8% faster than the GTX 780 Ti and 13.5% faster than AMD's Radeon R9 290X (in Uber mode, as that's what most shipping cards use). Compute performance sees some even larger gains over previous NVIDIA GPUs, with the 980 besting the 680 by 132%; it's also 16% faster than the 780 Ti but "only" 1.5% faster than the 290X – though the 290X still beats the GTX 980 in Sony Vegas Pro 12 and SystemCompute.
Compared to the GTX 780 Ti, performance hasn't improved so much that we'd recommend upgrading, though you do get some new features that might prove useful over time. For those who didn't find the price/performance offered by the GTX 780 Ti a compelling reason to upgrade, the GTX 980 sweetens the pot by dropping the MSRP down to $549, and what's more it also uses quite a bit less power:
This is what we call the trifecta of graphics hardware: better performance, lower power, and lower prices. When NVIDIA unveiled the GTX 750 Ti back in February, it was ultimately held back by performance while its efficiency was a huge step forward; it seemed almost too much to hope for that sort of product in the high performance GPU arena. NVIDIA doesn't disappoint, however, dropping power consumption by 18% relative to the GTX 780 Ti while improving performance by roughly 10% and dropping the launch price by just over 20%. If you've been waiting for a reason to upgrade, GeForce GTX 980 is about as good as it gets, though the much less expensive GTX 970 might just spoil the party. We'll have a look at the 970 next week.
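The trifecta is easy to put into numbers. The following quick sketch uses the figures above, plus the assumption of the GTX 780 Ti's $699 launch MSRP (not stated in the text):

```python
# The "trifecta" in numbers, relative to the GTX 780 Ti (figures from
# the text; the 780 Ti's $699 launch MSRP is an assumption).
perf  = 1.10        # ~10% faster
power = 1 - 0.18    # 18% lower power draw
price = 549 / 699   # GTX 980 launch MSRP vs. assumed 780 Ti launch MSRP

print(round(perf / power, 2))  # performance per watt: ~1.34x
print(round(perf / price, 2))  # performance per dollar: ~1.4x
```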
Bill would limit reach of US search warrants for data stored abroad
Proposed legislation unveiled Thursday seeks to undermine the Obama administration's position that any company with operations in the United States must comply with valid warrants for data, even when that data is stored on overseas servers.
The bipartisan Law Enforcement Access to Data Stored Abroad Act (LEADS Act) [PDF] comes in response to a federal judge's July decision ordering Microsoft to turn over e-mails stored on its Irish servers as part of a Department of Justice drug investigation. The Department of Justice argued that global jurisdiction is necessary in an age when "electronic communications are used extensively by criminals of all types in the United States and abroad, from fraudsters to hackers to drug dealers, in furtherance of violations of US law." New York US District Judge Loretta Preska agreed, ruling that "it is a question of control, not a question of the location of that information." The decision is stayed pending appeal.
Microsoft, along with a slew of other companies, maintains that the Obama administration's position in the case puts US tech companies into conflict with foreign data protection laws. And it fears that if the court decision stands, foreigners could lose more confidence in US companies' cloud and tech offerings, especially in the wake of the Edward Snowden revelations.
AT&T’s friends: Meet the companies and politicians “enthusiastic” about DirecTV buy
Customers, consumer advocacy groups, and small cable companies are speaking out against AT&T’s proposed purchase of DirecTV.
But just as Comcast was able to claim broad support for its Time Warner Cable merger, AT&T has plenty of moneyed interests and politicians on its side. Microsoft wrote to the Federal Communications Commission this week in support of the merger, pointing to AT&T’s promises to expand broadband deployments if it’s allowed to buy the satellite TV company.
Microsoft, which partners with AT&T to offer technology services to businesses, echoed AT&T’s talking points in its letter:
iFixit tears new iPhones apart, finds they’re pretty easy to fix
When you want to know more about the stuff inside your phone without actually taking it apart, you can count on iFixit to make the sacrifice for you. Late last night, the site began (and eventually completed) its teardowns of the new iPhone 6 and iPhone 6 Plus, revealing slightly more detailed information about the insides of both devices, as well as how difficult to repair they'll be if you happen to break them.
As with the iPhone 5 and 5S, the first step toward dismantling an iPhone 6 is to remove a pair of Pentalobe screws flanking the Lightning port and then lift up the screen with a suction cup. The TouchID button on the 5S relied on a cable routed between the display and the bottom of the phone, and would-be repairers had to be careful not to sever this while taking the phone apart. Both iPhone 6 models integrate this cable into the display assembly, removing one more potential point of failure.
Once opened, both phones prove to be substantially similar—the 6 Plus has a larger 2915mAh battery, while the 6 has an 1810mAh version, but this is the chief difference. Both phones include a Qualcomm MDM9625M LTE modem and WTR1625L transceiver, which collectively provide faster 150Mbps LTE speeds and wider support for different LTE bands (an additional WFR1620 chip provides carrier aggregation). The Apple A8 SoC is (still) paired with 1GB of RAM. The expected NFC, Wi-Fi, and M8 motion coprocessor, along with other power management and touch controller chips, are all present and accounted for.
Royal Observatory announces the winners of its 2013 photography contest
Each year, the UK's Royal Observatory in Greenwich runs an Astronomy Photographer of the Year contest. Yesterday, the Observatory announced the winners of the 2013 contest; the winning images will be on display there, making it worth a visit if you're anywhere near London. We've brought you some of the winners of microscopy contests in the past; this gives us the chance to feature things at the opposite end of the scale, from planets to galaxies.
Just like the microscopy images, all of them can tell us something about the natural world. Details of images can reveal information about topics that run from orbital mechanics to the behavior of supernovae. But they're a great reminder that something can be both informative and stunningly beautiful. For many people, it was the beauty of the natural world that first inspired them to ask questions about it and set them off on the road that led to a career in science.
Entries are being accepted for the 2014 contest up until late February this year, so if you've got a scope and something compelling, get to work!
Acer Releases XBO Series: 28-inch UHD/4K with G-Sync for $800
Monitors are getting exciting. Not only are higher resolution panels becoming more of the norm, but the combination of different panel dimensions and feature sets means that buying the monitor you need for the next 10 years is getting more difficult. Today Acer adds some spice to the mix by announcing pre-orders for the XB280HK – a 28-inch TN monitor with 3840x2160 resolution that also supports NVIDIA’s G-Sync to reduce tearing and stuttering.
Adaptive refresh rate technologies are still in the early phases of adoption by the majority of users. AMD’s FreeSync is still a few quarters away from the market, and NVIDIA’s G-Sync requires additional hardware in the monitor, which started off as an interesting, if expensive, upgrade. Fast forward a couple of months and, as you might expect, the best place for G-Sync to go is into some of the more impressive monitor configurations. 4K is becoming a go-to resolution for anyone with deep enough wallets, although some might argue that 21:9 monitors might be better for gaming immersion at least.
The XB280HK will support 3840x2160 at 60 Hz via DisplayPort 1.2, along with a 1 ms gray-to-gray response time. The stand will adjust up to 155mm in height with 40º of tilt. There is also 120º of swivel and a full quarter turn of pivot, allowing for portrait-style use. The brightness of the panel is rated at 300 cd/m2, with an 8-bit (via Hi-FRC) TN display that has a typical contrast ratio of 1000:1 and 72% NTSC coverage. A 100x100mm VESA mount is supported, and the monitor includes a USB 3.0 hub, although there are no speakers.
The XB280HK is currently available for pre-order in the UK at £500, but will have a US MSRP of $800. Also part of the Acer XBO range is the XB270H, a 27-inch 1920x1080 panel with G-Sync with an MSRP of $600. Expected release date, according to the pre-orders, should be the 3rd of October.
Source: Acer
Patent troll gives up, can’t defend “matchmaking” patent under new law
A patent troll called Lumen View Technology got stopped in its tracks last year after it sued Santa Barbara-based startup FindTheBest, then asked the company for a quick $50,000 settlement. It lost its case, and has now said it won't even bother appealing.
Instead of settling to avoid a costly lawsuit, as several other small companies had, FindTheBest responded with a pledge to fight the patent all the way and also slapped Lumen View with a civil RICO lawsuit.
The counter-attack caused Lumen View's patent to be dismantled in short order, when the judge in the case ruled that it was nothing more than a computerized twist on an ancient idea. The patent delineated a process of having parties input preference data, and then an automated process of determining a good match. "Matchmakers have been doing this for millennia," wrote US District Judge Denise Cote in her order invalidating the patent.
Microsoft Details Direct3D 11.3 & 12 New Rendering Features
Back at GDC 2014 in March, Microsoft and its hardware partners first announced the next full iteration of the Direct3D API. Now on to version 12, this latest version of Direct3D would be focused on low level graphics programming, unlocking the greater performance and greater efficiency that game consoles have traditionally enjoyed by giving seasoned programmers more direct access to the underlying hardware. In particular, low level access would improve performance both by reducing the overhead high level APIs incur, and by allowing developers to better utilize multi-threading by making it far easier to have multiple threads submitting work.
At the time Microsoft offered brief hints that there would be more to Direct3D 12 than just the low level API, but the low level API was certainly the focus for the day. Now as part of NVIDIA’s launch of the second generation Maxwell based GeForce GTX 980, Microsoft has opened up to the press and public a bit more on what their plans are for Direct3D. Direct3D 12 will indeed introduce new features, but there will be more in development than just Direct3D 12.
Direct3D 11.3

First and foremost, Microsoft has announced that there will be a new version of Direct3D 11 coinciding with Direct3D 12. Dubbed Direct3D 11.3, this new version is a continuation of the development and evolution of the Direct3D 11 API, and like the previous point updates it will add API support for features found in upcoming hardware.
At first glance the announcement of Direct3D 11.3 would appear to be at odds with Microsoft’s development work on Direct3D 12, but in reality there is a lot of sense in this announcement. Direct3D 12 is a low level API – powerful, but difficult to master and very dangerous in the hands of inexperienced programmers. The development model envisioned for Direct3D 12 is that a limited number of code gurus will be the ones writing the engines and renderers that target the new API, while everyone else will build on top of these engines. This works well for the many organizations that are licensing engines such as UE4, or for the smaller number of organizations that can justify having such experienced programmers on staff.
However for these reasons a low level API is not suitable for everyone. High level APIs such as Direct3D 11 do exist for a good reason after all; their abstraction not only hides the quirks of the underlying hardware, but it makes development easier and more accessible as well. For these reasons there is a need to offer both high level and low level APIs. Direct3D 12 will be the low level API, and Direct3D 11 will continue to be developed to offer the same features through a high level API.
Direct3D 12

Today’s announcement of Direct3D 11.3 and the new feature set that Direct3D 11.3 and 12 will share will have an impact on Direct3D 12 as well. We’ll get to the new features in a moment, but at a high level it should be noted that this means Direct3D 12 is going to end up being a multi-generational (multi-feature level) API similar to Direct3D 11.
In Direct3D 11 Microsoft introduced feature levels, which allowed programmers to target different generations of hardware using the same API, instead of having to write their code multiple times for each associated API generation. In practice this meant that programmers could target D3D 9, 10, and 11 hardware through the D3D 11 API, restricting their feature use accordingly to match the hardware capabilities. This functionality was exposed through feature levels (ex: FL9_3 for D3D9.0c capable hardware) which offered programmers a neat segmentation of feature sets and requirements.
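The negotiation works conceptually like the following sketch. The names and selection logic here are a simplified illustration of what `D3D11CreateDevice` does with the feature-level array an application passes in, not the actual API:

```python
# Conceptual sketch of feature-level negotiation (not real D3D code):
# the application lists the feature levels it can work with, highest
# first, and the runtime picks the highest one the hardware supports.
# Lower index = newer/higher feature level.
FEATURE_LEVELS = ["FL12_0", "FL11_1", "FL11_0", "FL10_1", "FL10_0", "FL9_3"]

def create_device(requested, hardware_max):
    """Return the highest requested level the hardware can satisfy."""
    for level in requested:
        # hardware supports its max level and everything below it
        if FEATURE_LEVELS.index(level) >= FEATURE_LEVELS.index(hardware_max):
            return level
    return None  # no common feature level; device creation fails

# A D3D9.0c-class GPU capped at FL9_3 still works through the same API:
print(create_device(["FL11_0", "FL10_0", "FL9_3"], "FL9_3"))  # FL9_3
print(create_device(["FL11_0", "FL10_0"], "FL12_0"))          # FL11_0
```

The application then restricts its feature use to whatever level came back, which is exactly the "write once, target many generations" model described above.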
Direct3D 12 in turn will also be making use of feature levels, allowing developers to exploit the benefits of the low level nature of the API while being able to target multiple generations of hardware. It’s through this mechanism that Direct3D 12 will be usable on GPUs as old as NVIDIA’s Fermi family or as new as their Maxwell family, all the while still being able to utilize the features added in newer generations.
Ultimately for users this means they will need to be mindful of feature levels, just as they are today with Direct3D 11. Hardware that is Direct3D 12 compatible does not mean it supports all of the latest feature sets, and keeping track of feature set compatibility for each generation of hardware will still be important going forward.
11.3 & 12: New Features

Getting to the heart of today’s announcement from Microsoft, we have the newly announced features that will be coming to Direct3D 11.3 and 12. It should be noted that this is not an exhaustive list of all of the new features we will see, and Microsoft is still working to define a new feature level to go with them (in the interim they will be accessed through cap bits), but nonetheless this is our first detailed view of what are expected to be the major new features of 11.3/12.
Rasterizer Ordered Views

First and foremost of the new features is Rasterizer Ordered Views (ROVs). As hinted at by the name, ROVs are focused on giving the developer control over the order that elements are rasterized in a scene, so that elements are drawn in the correct order. This feature specifically applies to Unordered Access Views (UAVs) being generated by pixel shaders, which by their very definition are initially unordered. ROVs offer an alternative to UAVs' unordered nature, which would otherwise result in elements being rasterized simply in the order they were finished. For most rendering tasks unordered rasterization is fine (deeper elements would be occluded anyhow), but for a certain category of tasks the ability to efficiently control the access order to a UAV is important to correctly render a scene quickly.
The textbook use case for ROVs is Order Independent Transparency, which allows for elements to be rendered in any order and still blended together correctly in the final result. OIT is not new – Direct3D 11 gave the API enough flexibility to accomplish this task – however these earlier OIT implementations would be very slow due to sorting, restricting their usefulness outside of CAD/CAM. The ROV implementation however could accomplish the same task much more quickly by getting the order correct from the start, as opposed to having to sort results after the fact.
Along these lines, since OIT is just a specialized case of a pixel blending operation, ROVs will also be usable for other tasks that require controlled pixel blending, including certain cases of anti-aliasing.
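A minimal sketch shows why transparency is order-dependent and what a sort-based OIT resolve has to do. The blending math is standard "over" compositing; the fragment values are made up for illustration. ROVs let hardware enforce this ordering per pixel instead of requiring the expensive after-the-fact sort:

```python
# Why transparency needs ordering: "over" compositing is not commutative,
# so fragments must be blended back-to-front. Sort-based OIT does this
# sort explicitly; ROVs let the pixel shader control access order instead.
def blend_over(dst, src, alpha):
    return src * alpha + dst * (1 - alpha)

# fragments as (depth, color, alpha); larger depth = farther away
fragments = [(0.2, 1.0, 0.5), (0.8, 0.0, 0.5), (0.5, 0.5, 0.5)]

def resolve(frags, background=1.0):
    color = background
    for _, c, a in sorted(frags, key=lambda f: -f[0]):  # back to front
        color = blend_over(color, c, a)
    return color

# Same final color no matter how the fragments arrive:
print(resolve(fragments))                   # 0.75
print(resolve(list(reversed(fragments))))   # 0.75
```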
Typed UAV Load
The second feature coming to Direct3D is Typed UAV Load. Unordered Access Views (UAVs) are a special type of buffer that allows multiple GPU threads to access the same buffer simultaneously without generating memory conflicts. Because of this unordered nature, certain restrictions are in place that Typed UAV Load will address. As implied by the name, Typed UAV Load deals with cases where UAVs are data-typed, and how to better handle their use.
Volume Tiled Resources
The third feature coming to Direct3D is Volume Tiled Resources. VTR builds off of the work Microsoft and partners have already done for tiled resources (AKA sparse allocation, AKA hardware megatexture) by extending it into the 3rd dimension.
VTRs are primarily meant to be used with volumetric pixels (voxels), with the idea being that with sparse allocation, volume tiles that do not contain any useful information can avoid being allocated, avoiding tying up memory in tiles that will never be used or accessed. This kind of sparse allocation is necessary to make certain kinds of voxel techniques viable.
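The sparse-allocation idea behind Volume Tiled Resources can be sketched in a few lines. This is a conceptual illustration in plain Python, not the Direct3D API – the tile dimensions and class names here are our own – but it shows the key property: backing memory is committed only for tiles that actually contain data, while unmapped regions still read back as empty space.

```python
# Conceptual sketch of a sparse voxel volume: only tiles that receive
# a write are allocated; the rest of the (potentially huge) volume
# costs nothing. Tile size below is illustrative, not D3D's.

TILE = (32, 32, 16)  # tile dimensions in voxels (illustrative)

class SparseVolume:
    def __init__(self):
        self.tiles = {}  # (tx, ty, tz) -> dict of voxel values

    def _key(self, x, y, z):
        return (x // TILE[0], y // TILE[1], z // TILE[2])

    def write(self, x, y, z, value):
        # A tile is allocated only on the first write into it.
        tile = self.tiles.setdefault(self._key(x, y, z), {})
        tile[(x % TILE[0], y % TILE[1], z % TILE[2])] = value

    def read(self, x, y, z):
        tile = self.tiles.get(self._key(x, y, z))
        if tile is None:
            return 0.0  # unmapped tiles read as empty space
        return tile.get((x % TILE[0], y % TILE[1], z % TILE[2]), 0.0)

vol = SparseVolume()
vol.write(100, 40, 7, 1.0)  # touches exactly one tile
print(len(vol.tiles))       # 1 - only one tile allocated, not the whole volume
print(vol.read(100, 40, 7), vol.read(0, 0, 0))  # 1.0 0.0
```

On the GPU the same principle is handled in hardware page tables rather than a dictionary, but the payoff is identical: a mostly-empty voxel volume only pays for the tiles it actually uses.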
Conservative Rasterization
Last but certainly not least among Direct3D’s new features will be conservative rasterization. Conservative rasterization is essentially a more accurate but more performance-intensive way of determining whether a polygon covers part of a pixel. Instead of a quick and simple test of whether the center of the pixel is bounded by the lines of the polygon, conservative rasterization tests the polygon against the corners of the pixel. This means conservative rasterization will catch cases where a polygon is too small to cover the center of a pixel, producing a more accurate outcome, whether that means better identifying the pixels a polygon resides in or finding polygons too small to cover the center of any pixel at all. This is where the “conservative” part of the name comes from: the rasterizer is conservative in that it includes every pixel touched by a triangle, rather than just the pixels where the triangle covers the center point.
Conservative rasterization is being added to Direct3D in order to allow new algorithms to be used which would fail under the imprecise nature of point sampling. Like VTR, voxels play a big part here, as conservative rasterization can be used to voxelize a scene. However it also has use cases in more accurate tiling and even collision detection.
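The difference between the two coverage tests can be sketched with simple 2D edge functions. This is an illustrative Python toy, not Direct3D's actual rasterizer, and all function names are ours: a center-sample test misses a triangle that is smaller than a pixel, while a conservative test still flags the pixel as touched.

```python
# Sketch: center-sample vs. conservative coverage for one pixel
# against a counter-clockwise 2D triangle, using edge functions.

def edge(a, b, p):
    """Signed-area test: >= 0 when p lies on the inside of edge a->b
    for a counter-clockwise triangle."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    a, b, c = tri
    return edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0

def center_sample(tri, px, py):
    # Standard rasterization: test only the pixel's center point.
    return inside(tri, (px + 0.5, py + 0.5))

def conservative(tri, px, py):
    # Conservative test: is any pixel corner inside the triangle?
    corners = [(px, py), (px + 1, py), (px, py + 1), (px + 1, py + 1)]
    if any(inside(tri, c) for c in corners):
        return True
    # Also catch triangles smaller than the pixel: any triangle vertex
    # inside the pixel square. (A complete implementation would also
    # test triangle edges crossing the pixel's edges.)
    return any(px <= vx <= px + 1 and py <= vy <= py + 1 for vx, vy in tri)

# A tiny CCW triangle inside pixel (0, 0) that misses the center (0.5, 0.5):
tiny = [(0.1, 0.1), (0.3, 0.1), (0.1, 0.3)]
print(center_sample(tiny, 0, 0))  # False - point sampling misses it entirely
print(conservative(tiny, 0, 0))   # True - the pixel is still touched
```

For voxelization this distinction is exactly what matters: a sliver of geometry that misses every sample center would leave a hole in the voxel grid under point sampling, but is correctly recorded under conservative rasterization.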
Final Words
Wrapping things up, today’s announcement of Direct3D 11.3 and its new features offers a solid roadmap for both the evolution of Direct3D and the hardware that will support it. By confirming that they are continuing to work on Direct3D 11, Microsoft has answered one of the lingering questions surrounding Direct3D 12 – what happens to Direct3D 11 – and at the same time this highlights the hardware features that the next generation of hardware will need to support in order to be compliant with the latest D3D feature level. And with Direct3D 12 set to be released sometime next year, these new features won’t be too far off either.
NVIDIA GameWorks: More Effects with Less Effort
While NVIDIA's hardware is the big star of the day, the software that we run on the hardware is becoming increasingly important. It's one thing to create the world's fastest GPU, but what good is the GPU if you don't have anything that can leverage all that performance? As part of their ongoing drive to improve the state of computer graphics, NVIDIA has a dedicated team of over 300 engineers whose primary focus is the creation of tools and technologies to make the lives of game developers better.
Gallery: NVIDIA GameWorks Overview
GameWorks consists of several items. There's the core SDK (Software Development Kit), along with IDE (Integrated Development Environment) tools for debugging, profiling, and other items a developer might need. Beyond the core SDK, NVIDIA has a Visual FX SDK, a PhysX SDK, and an Optix SDK. The Visual FX SDK offers solutions for complex, realistic effects (e.g. smoke and fire, faces, waves/water, hair, shadows, and turbulence). PhysX is for physics calculations (either CPU or GPU based, depending on the system). Optix is a ray tracing engine and framework, often used to pre-calculate ("bake") lighting in game levels. NVIDIA also provides sample code for graphics and compute, organized by effect and with tutorials.
Many of the technologies that are part of GameWorks have been around for a few years, but NVIDIA is constantly working on improving their GameWorks library and they had several new technologies on display at their GM204 briefing. One of the big ones has already been covered in our GM204 review, VXGI (Voxel Global Illumination), so I won't rehash that here; basically, it allows for more accurate and realistic indirect lighting. Another new technology that NVIDIA showed is called Turf Effects, which properly simulates individual blades of grass (or at least clumps of grass). Finally, PhysX FleX also has a couple new additions, Adhesion and Gases; FleX uses PhysX to provide GPU simulations of particles, fluids, cloth, etc.
Still images don't do justice to most of these effects, and NVIDIA will most likely have videos available in the future to show what they look like. PhysX FleX for example has a page with a currently unavailable video, so hopefully they'll update that with a live video in the coming weeks. You can find additional content related to GameWorks on the official website.
The holiday 2014 season will see the usual avalanche of new games, and many of the AAA titles will sport at least one or two technologies that come from GameWorks. Here's a short list of some of the games, and then we'll have some screen shots to help illustrate what some of the specific technologies do.
Upcoming Titles with GameWorks Technologies:
- Assassin’s Creed: Unity – HBAO+, TXAA, PCSS, Tessellation
- Batman: Arkham Knight – Turbulence, Environmental PhysX, Volumetric Lights, FaceWorks, Rain Effects
- Borderlands: The Pre-Sequel – PhysX Particles
- Far Cry 4 – HBAO+, PCSS, TXAA, God Rays, Fur, Enhanced 4K Support
- Project CARS – DX11, Turbulence, PhysX Particles, Enhanced 4K Support
- Strife – PhysX Particles, HairWorks
- The Crew – HBAO+, TXAA
- The Witcher 3: Wild Hunt – HairWorks, HBAO+, PhysX, Destruction, Clothing
- Warface – PhysX Particles, Turbulence, Enhanced 4K Support
- War Thunder – WaveWorks, Destruction

In terms of upcoming games, the two most prominent titles are probably Assassin's Creed Unity and Far Cry 4, and we've created a gallery for each. Both games use multiple GameWorks elements, and NVIDIA was able to provide before/after comparisons for FC4 and AC Unity. Batman: Arkham Knight and The Witcher 3: Wild Hunt also incorporate many effects from GameWorks, but we didn't get any with/without comparisons.
Gallery: GameWorks - Assassin's Creed Unity
Gallery: GameWorks - Far Cry 4
Starting with HBAO+ (Horizon Based Ambient Occlusion), this is a newer way of performing Ambient Occlusion calculations (SSAO, Screen Space AO, being the previous solution that many games have used). Games vary in how they perform AO, but if we look at the AC Unity comparison between HBAO+ and the default AO (presumably SSAO), HBAO+ clearly offers better shadows. HBAO+ is also supposed to be faster and more efficient than other AO techniques.
TXAA (Temporal Anti-Aliasing) basically combines a variety of filters and post processing techniques to help eliminate jaggies, something which we can all hopefully appreciate. There's one problem I've noticed with TXAA however, which you can see in the above screenshot: it tends to make the entire image look rather blurry in my opinion. It's almost as though someone took Photoshop's "blur" filter and applied it to the image.
PCSS (Percentage Closer Soft Shadows) was introduced a couple years back, which means we should start seeing it in more shipping games. You can see the video from 2012, and AC Unity and Far Cry 4 are among the first games that will offer PCSS.
Tessellation has been around for a few years now in games, and the concepts behind tessellation go back much further. The net result is that tessellation allows developers to extrude geometry from an otherwise flat surface, creating a much more realistic appearance to games when used appropriately. The cobble stone streets and roof shingles in AC Unity are great examples of the difference tessellation makes.
God rays are a lighting feature that we've seen before, but now NVIDIA has implemented a new way of calculating the shafts of light. They now use tessellation to extrude the shadow mapping and actually create transparent beams of light that they can render.
HairWorks is a way to simulate large strands of hair instead of using standard textures – Far Cry 4 and The Witcher 3 will both use HairWorks, though I have to admit that the hair in motion still doesn't look quite right to me. I think we still need an order of magnitude more "hair", and similar to the TressFX in Tomb Raider this is a step forward but we're not there yet.
Gallery: GameWorks - Upcoming Games Fall 2014
There are some additional effects being used in other games – Turbulence, Destruction, FaceWorks, WaveWorks, PhysX, etc. – but the above items give us a good idea of what GameWorks can provide. What's truly interesting about GameWorks is that these libraries are free for any developers that want to use them. The reason for creating GameWorks and basically giving it away is quite simple: NVIDIA needs to entice developers (and perhaps more importantly, publishers) into including these new technologies, as it helps to drive sales of their GPUs among other things. Consider the following (probably not so hypothetical) exchange between a developer and their publisher, paraphrased from NVIDIA's presentation on GameWorks.
A publisher wants to know when game XYZ is ready to ship, and the developer says it's basically done, but they're excited about some really cool features that will just blow people away, and it will take a few more months to get those finished up. "How many people actually have the hardware required to run these new features?" asks the publisher. When the developers guess that only 5% or so of the potential customers have the hardware necessary, you can guess what happens: the new features get cut, and game XYZ ships sooner rather than later.
We've seen this sort of thing happen many times – as an example, Crysis 2 shipped without DX11 support (since the consoles couldn't support that level of detail), adding it in a patch a couple months later. Other games never even see such a patch and we're left with somewhat less impressive visuals. While it's true that great graphics do not an awesome game make, they can certainly enhance the experience when used properly.
It's worth pointing out that GameWorks is not necessarily exclusive to NVIDIA hardware. While PhysX, as an example, was originally ported to CUDA, developers have used PhysX on CPUs for many games, and as you can see in the above slide there are many PhysX items that are supported on other platforms. Several of the libraries (Turbulence, WaveWorks, HairWorks, ShadowWorks, FlameWorks, and FaceWorks) are also listed as "planned" for being ported to the latest generation of gaming consoles. Android is also a growing part of NVIDIA's plans, with the Tegra K1 effectively bringing the same feature set over to the mobile world that we've had on PCs and notebooks for the past couple of years.
NVIDIA for their part wants to drive the state of the art forward, so that the customers (gamers) demand these high-end technologies and the publishers feel compelled to support them. After all, no publisher would expect great sales from a modern first-person shooter that looks like it was created 10 years ago [insert obligatory Daikatana reference here], but it's a bit of a chicken vs. egg problem. NVIDIA is trying to push things along and maybe hatch the egg a bit earlier, and there have definitely been improvements thanks to their efforts. We applaud the work, and more importantly we look forward to seeing better looking games as a result.
The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2
At the start of this year we saw the first half of the Maxwell architecture in the form of the GeForce GTX 750 and GTX 750 Ti. Based on the first generation Maxwell GM107 GPU, NVIDIA did something we can still hardly believe, pulling off a trifecta of improvements over Kepler. GTX 750 Ti was significantly faster than its predecessor, it was denser than its predecessor (though larger overall), and perhaps most importantly it consumed less power than its predecessor. In GM107 NVIDIA was able to significantly improve their performance and reduce their power consumption at the same time, all on the same 28nm manufacturing node we’ve come to know since 2012. For NVIDIA this was a major accomplishment, and to this day competitor AMD doesn’t have a real answer to GM107’s energy efficiency.
However GM107 was only the start of the story. In deviating from their typical strategy of launching their high-end GPU first – either a 100/110 or 104 GPU – NVIDIA told us up front that while they were launching at the low end first because that made the most sense for them, they would be following up on GM107 later this year with what at the time was being called “second generation Maxwell”. Now 7 months later and true to their word, NVIDIA is back in the spotlight with the first of the second generation Maxwell GPUs, GM204.
Giant MQ-4C Triton surveillance drone flies across the United States
This morning, a giant Navy surveillance drone landed at Patuxent River base in Maryland after flying over the Gulf of Mexico and the American Southwest from an airfield owned by Northrop Grumman in Palmdale, California. The test flight represented the first cross-country flight for the MQ-4C Triton drone after 15 previous test flights.
The drone flew 3,290 nautical miles over 11 hours, a Navy press release said. “Operators navigated the aircraft up the Atlantic Coast and Chesapeake Bay at altitudes in excess of 50,000 feet to ensure there were no conflicts with civilian air traffic,” the release noted.
The drone is just the first piece in what the Navy calls Broad Area Maritime Surveillance, or BAMS. The MQ-4C Triton will be used to keep tabs on a wide area using “radar, infrared sensors and advanced cameras to provide full-motion video and photographs to the military,” according to The Washington Post. Eventually, a network of these drones could be deployed to fly around the world and provide 24-hour, 7-day-a-week coverage of a given area.