Tech
Sony sells 10 million PS4s in less than 9 months
At its pre-Gamescom press conference today, Sony Computer Entertainment Europe President and CEO Jim Ryan announced that the PlayStation 4 has sold over 10 million units worldwide since its launch late last November. "Just to be clear, that's 10 million PS4s sold through to consumers," Ryan clarified.
Sony last shared official PS4 sales numbers in April, when it confirmed seven million PS4s sold through that point. Last month, Sony noted in its earnings report that strong PlayStation sales were responsible for a rise in the company's overall profits.
Microsoft last announced that it shipped five million Xbox One units to retailers worldwide back in April, though it stated that sales had more than doubled in the US following the unbundling of the included Kinect camera. Last week, Ryse developer Crytek said it was "not 100 percent happy" with sales of Microsoft's system so far when considering where to place a possible sequel to the Xbox One launch exclusive.
Tiny, reversible USB Type-C connector finalized
The USB Promoter Group announced today that it has finalized the design of the USB Type-C plug, a new type of USB plug designed to replace all sizes of current USB connectors. Like Apple's Lightning cables, the new connector is reversible so that it can be used in either orientation.
According to the USB-IF's press release (PDF), the new connector is "similar in size" to current micro USB 2.0 Type-B connectors (the ones you use for most non-Apple phones and tablets). It is designed to be "robust enough for laptops and tablets" and "slim enough for mobile phones." The openings for the connector measure roughly 8.4mm by 2.6mm.
As we've reported previously, cables and adapters for connecting Type-C devices to older Type-A and Type-B ports will be readily available—the prevalence of these older ports will make any industry-wide shift to USB Type-C an arduous, years-long process.
NVIDIA Launches Next GeForce Game Bundle - Borderlands: The Pre-Sequel
After letting their previous Watch Dogs bundle run its course over the past couple of months, NVIDIA sends word this afternoon that they will be launching a new game bundle for the late-summer/early-fall period.
Launching today, NVIDIA and their partners will be bundling Gearbox and 2K Australia’s forthcoming FPS Borderlands: The Pre-Sequel with select video cards. This latest bundle is for the GTX 770 and higher, so buyers accustomed to seeing NVIDIA’s bundles will want to take note that this bundle is a bit narrower than usual since it doesn’t cover the GTX 760.
As for the bundled game itself, Borderlands: The Pre-Sequel is the not-quite-a-sequel to Gearbox’s well received 2012 title Borderlands 2. As was the case with Borderlands 2 before it, this latest Borderlands game will be receiving PhysX enhancements courtesy of NVIDIA, leveraging the PhysX particle, cloth, and fluid simulation libraries for improved effects.
NVIDIA Current Game Bundles

Video Card                                   Bundle
GeForce GTX 770/780/780 Ti/Titan Black       Borderlands: The Pre-Sequel
GeForce GTX 750/750 Ti/760                   None
Meanwhile on a lighter note, it brings a chuckle to see that NVIDIA is bundling what will most likely be a Direct3D 9 game with their most advanced video cards. This, if nothing else, is a testament to the longevity of the API, which has long outlasted the hardware it originally ran on.
Finally, as always, these bundles are being distributed in voucher form, with retailers and etailers providing vouchers with qualifying purchases. So buyers will want to double check whether their purchase includes a voucher for the bundle. Checking NVIDIA’s terms and conditions, the codes from this bundle are good through October 31st, so it looks like this bundle will run for around two months.
Sierra Games returns with new King’s Quest and Geometry Wars titles
If you're a PC gamer of a certain age, the name Sierra On-Line (or Sierra Entertainment) revives memories of some of the most classic point-and-click adventures of the late 20th century. New corporate owner Activision is set to reactivate those memories today, reviving the brand as "Sierra Games" and promising new games in the King's Quest and Geometry Wars franchises.
The new Sierra name will apparently serve as an umbrella for a number of independent studios to reinterpret some classic gaming franchises. The newest King's Quest entry is being developed for 2015 by The Odd Gentlemen, best known for esoteric puzzle platform game The Misadventures of PB Winterbottom. Geometry Wars 3: Dimensions, meanwhile, is being worked on by mobile/portable developer Lucid Games for this holiday season. No platforms have been announced for either title.
“Sierra’s goal is to find and work with gifted up-and-coming indie developers working on their own amazing projects or who are passionate about working on great Sierra IP,” a Sierra representative told GamesBeat. “We’re in talks with a large number of other indie devs, and we can’t wait to share more details with fans in the near future.”
NVIDIA Refreshes Quadro Lineup, Launches 5 New Quadro Cards
Continuing today’s spate of professional graphics announcements, NVIDIA is announcing that, alongside AMD’s refresh of their FirePro lineup, they are undertaking their own refresh of the Quadro lineup. Being announced today and shipping in September are 5 new Quadro cards that come just short of a top-to-bottom refresh of the Quadro lineup.
With the exception of NVIDIA’s much more recently introduced Quadro K6000 – which will continue its reign as NVIDIA’s most powerful professional GPU – NVIDIA’s Quadro refresh comes as the bulk of the current Quadro K5000 family approaches two years old. At this point NVIDIA is looking to offer an across-the-board boost to their Quadro lineup, increasing performance and memory capacity at every tier. As a result this refresh will involve replacing NVIDIA’s Quadro cards with newer models based on larger and more powerful Kepler and Maxwell GPUs, released as the Quadro Kx200 series. All told, NVIDIA is shooting for an average performance improvement of 40%, on top of any benefits from the larger memory capacities.
NVIDIA Quadro Refresh Specification Comparison

                              Quadro K5200   Quadro K4200   Quadro K2200   Quadro K620   Quadro K420
CUDA Cores                    2304           1344           640            384           192
Core Clock                    650MHz         780MHz         1GHz           1GHz          780MHz
Memory Clock                  6GHz GDDR5     5.4GHz GDDR5   5GHz GDDR5     1.8GHz DDR3   1.8GHz DDR3
Memory Bus Width              256-bit        256-bit        128-bit        128-bit       128-bit
VRAM                          8GB            4GB            4GB            2GB           1GB
Double Precision              ?              1/24           1/32           1/32          1/24
TDP                           150W           105W           68W            45W           41W
GPU                           GK110          GK104          GM107          GM107?        GK107?
Architecture                  Kepler         Kepler         Maxwell        Maxwell       Kepler
Displays Supported (Outputs)  4 (4)          4 (3)          4 (3)          4 (2)         4 (2)

We’ll start things off with the Quadro K5200, NVIDIA’s new second-tier Quadro card. Based on a cut down version of NVIDIA’s GK110 GPU, the K5200 is a very significant upgrade over the K5000 thanks to the high performance and unique features found in GK110, the combination of which elevates the K5200 much closer to the K6000 than to the K5000 it replaces.
The K5200 ships with 12 SMXes (2304 CUDA cores) enabled and utilizes a 256-bit memory bus, making this the first NVIDIA GK110 product we’ve seen ship without the full 384-bit memory bus. NVIDIA has put the GPU clockspeed at 650MHz while the memory clock stands at 6GHz. Meanwhile the card has the second largest memory capacity of the Quadro family, doubling K5000’s 4GB of VRAM for a total of 8GB.
Compared to the K5000, K5200 offers an increase in shader/compute throughput of 36%, and a smaller 11% increase in memory bandwidth. More significant however are GK110’s general enhancements, which elevate K5200 beyond K5000. Whereas K5000 and its GK104 GPU made for a strong graphics card, it was a relatively weak compute card, a weakness that GK110 resolved. As a result K5200 should be similar to K6000 in that it’s a well-balanced fit for mixed graphics/compute workloads, and the ECC memory support means that it offers an additional degree of reliability not found on the K5000.
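As a sanity check, the throughput and bandwidth comparisons can be roughly reproduced from the published specifications. Note that the K5000 figures used here (1536 CUDA cores at 700MHz with 5.4GHz memory) are our assumption, which is why the computed compute gain lands slightly above NVIDIA’s quoted 36%:

```python
# Peak single-precision throughput and memory bandwidth from card specs.
def sp_tflops(cuda_cores, core_clock_ghz):
    # Kepler: 2 FLOPs (one FMA) per CUDA core per clock
    return cuda_cores * 2 * core_clock_ghz / 1000.0

def mem_bandwidth_gbps(effective_clock_ghz, bus_width_bits):
    # effective data rate x bus width in bytes
    return effective_clock_ghz * bus_width_bits / 8

k5200_flops = sp_tflops(2304, 0.65)       # ~3.0 TFLOPS
k5000_flops = sp_tflops(1536, 0.70)       # ~2.15 TFLOPS (assumed K5000 clocks)
k5200_bw = mem_bandwidth_gbps(6.0, 256)   # 192 GB/s
k5000_bw = mem_bandwidth_gbps(5.4, 256)   # 172.8 GB/s

print(f"compute gain:   {k5200_flops / k5000_flops - 1:.0%}")   # ~39% with these clocks
print(f"bandwidth gain: {k5200_bw / k5000_bw - 1:.0%}")         # 11%
```

The exact compute figure depends on the reference clocks used; the bandwidth gain matches the quoted 11% exactly.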
As is usually the case when rolling out a refresh wave of cards based on existing GPUs, the performance increase brings higher power consumption with it. NVIDIA has clamped the K5200 at 150W (important for workstation compatibility), which is much lower than the full-fledged K6000 but 28W more than the K5000. Nonetheless the performance gains should easily outstrip the power consumption increase.
Meanwhile display connectivity remains unchanged from the K5000 and K6000. NVIDIA’s standard Quadro configuration is a DL-DVI-I port, a DL-DVI-D port, and a pair of full size DisplayPorts, with the card able to drive up to 4 displays in total through a combination of those ports and MST over DisplayPort.
NVIDIA’s second new Quadro card is the K4200. Replacing the GK106 based K4000, the K4200 sees NVIDIA’s venerable GK104 GPU find a new home as NVIDIA’s third-tier Quadro card. Unlike K5200, K4200’s GPU shift doesn’t come with any kind of dramatic change in functionality, so while it will be an all-around more powerful card than the previous K4000, it’s still going to be primarily geared towards graphics like the K4000 and K5000 before it.
For the K4200 NVIDIA is using a cut down version of GK104 to reach their performance and power targets. Comprised of 7 active SMXes (1344 CUDA cores), the K4200 is paired with 4GB of VRAM. Clockspeeds stand at 780MHz for the GPU and 5.4GHz for the VRAM.
On a relative basis the K4200 will see some of the greatest performance gains of this wave of refreshes. Its 2.1 TFLOPS of compute/shader performance blasts past K4000 by 75%, and memory bandwidth has been increased by 29%. However the 4GB of VRAM makes for a smaller increase in VRAM than the doubling most other Quadro cards are seeing. Otherwise power consumption is once again up slightly, rising from 80W to 105W in exchange for the more powerful GK104 GPU.
Finally, as was the case with the K5200, display connectivity remains unchanged. Since the K4200 is a single slot card like the K4000 before it, NVIDIA uses a single DL-DVI-I port along with a pair of full size DisplayPorts. Like other Kepler products the card can drive up to 4 displays, though doing so will require a DisplayPort MST hub to get enough outputs. On that note, users looking to pair this card with multiple monitors will be pleased to find that the K4200 supports Quadro Sync for the first time; previously Quadro Sync was limited to the K5000 and higher.
In NVIDIA’s refreshed Quadro lineup, the K4200 will primarily serve as the company’s highest-end single-slot offering. As with other GK10x based GPUs compute performance is not its strongest suit, while for graphics workloads such as CAD and modeling it should offer a nice balance of performance and price.
Moving on, NVIDIA’s third Quadro refresh card is the K2200. This replaces the GK107 based K2000 and marks the first Quadro product to utilize one of NVIDIA’s newest generation Maxwell GPUs, tapping NVIDIA’s GM107 GPU. The use of Maxwell on a Quadro K part makes for an amusing juxtaposition, though the architectural similarities between Maxwell and Kepler mean that there isn’t a meaningful feature difference despite the generation gap.
As was the case with NVIDIA’s consumer desktop GM107 cards, NVIDIA is aiming to produce an especially potent sub-75W card for K2200. Here NVIDIA uses a fully enabled GM107 GPU – all 5 SMMs (640 CUDA cores) are enabled – and it’s paired with 4GB of VRAM on a 128-bit bus. Meanwhile based on NVIDIA’s performance figures the GPU clockspeed should be just north of 1GHz while the memory clock stands at 5GHz.
Since the K2200 is replacing a GK107 based card, the performance gains compared to the outgoing K2000 should be significant. On the consumer desktop side we’ve seen GM107 products come close to doubling GK107 parts, and we’re expecting much the same here. K2200’s 1.3 TFLOPS of single precision compute/shader performance is 78% higher than K2000’s, which means that K2200 should handily outperform its predecessor. Otherwise the 4GB of VRAM is a full doubling over the K2000’s smaller VRAM pool, greatly increasing the size of the workloads K2200 can handle.
Meanwhile display connectivity is identical to the new K4200 and the outgoing K2000. The K2200 can drive up to 4 displays by utilizing a mix of its DL-DVI port, two DisplayPorts, and a DisplayPort MST hub.
In NVIDIA’s new Quadro lineup the K2200 will serve as their most powerful sub-75W card. As we’ve seen in other NVIDIA Maxwell products, this is an area the underlying GM107 excels at.
NVIDIA’s fourth Quadro card is the K620. This is another Maxwell card, and while NVIDIA doesn’t specify the GPU we believe it to be based on GM107 (and not GM108) due to the presence of a 128-bit memory bus. K620 replaces the GK108 based K600, and should offer substantial performance gains similar to what is happening with the K2200.
K620’s GM107 GPU features 3 SMMs (384 CUDA cores) enabled, and it is paired with 2GB of DDR3 operating on a 128-bit memory bus. Like the K2200 the GPU clockspeed appears to be a bit over 1GHz, while the memory clockspeed stands at 1.8GHz.
Compared to the K600, overall performance should be significantly improved. It’s worth pointing out, though, that since memory bandwidth is identical to NVIDIA’s previous generation card, in memory bandwidth bound scenarios the K620 may not pull ahead by much. Nonetheless the memory pool has been doubled from 1GB to 2GB, so in memory capacity constrained situations the K620 should fare much better. Power consumption is just slightly higher this time, at 45W versus the K600’s 41W.
As this is a 3 digit Quadro product, NVIDIA considers this an entry level card and it is configured accordingly. A single DL-DVI port and a single full size DisplayPort are the K620’s output options, with an MST hub being required to attach additional monitors to make full use of its ability to drive 4 displays. By going with this configuration however NVIDIA is able to offer the K620 in a low profile configuration, making it suitable for smaller workstations that can’t accept full profile cards.
Finally, NVIDIA’s last new Quadro card is the K420. Dropping back to a Kepler GPU (likely GK107), it replaces the Quadro 410. From a performance perspective this card won’t see much of a change – the number of CUDA cores is constant at 192 – but memory bandwidth has been doubled alongside the total VRAM pool, which is now 1GB.
Like K620, K420 can drive a total of 4 displays, while the physical display connectors are composed of a single DL-DVI port and a single full size DisplayPort. This low profile card draws 41W, the same as the outgoing 410.
With all but one of these cards receiving a doubled VRAM pool and significantly improved performance, NVIDIA expects these cards to be well suited to the larger datasets that newer applications use, especially in the increasingly important area of 4K video. Coupled with NVIDIA’s existing investment in software – both ISVs and their own cloud technology ecosystem – NVIDIA expects to remain ahead of the curve on functionality and reliability.
Wrapping things up, NVIDIA tells us that the Quadro refresh cards will be shipping in September. In the meantime we’ll be reviewing some of these cards later this month, so stay tuned.
Lyft: Uber scheduled, canceled 5,000 rides to hassle us [Updated]
CNN reports that people associated with car-on-demand service Uber have been attempting to sabotage an Uber competitor, Lyft, by ordering and canceling as many as 5,000 rides since October 2013. Lyft drivers have also complained that Uber employees will call them to take "short, low-profit rides largely devoted to luring them to work for Uber."
Uber reportedly used the ride request-and-cancellation tactic earlier this year on another competitor, Gett, to the tune of around 100 rides. Those ride calls were placed by employees as high in the company as Uber's New York general manager, Josh Mohrer. The calls serve a number of purposes: frustrating drivers, wasting their time and gas approaching a fare that won't come through, and occupying them to artificially limit driver availability, if only temporarily.
Lyft claims to have sussed out the fake requests by matching phone numbers tied to "known Uber recruiters." Lyft claims that one Uber recruiter requested and canceled 300 rides from May 26 to June 10, and it said that recruiter's phone number was associated with 21 more accounts with 1,524 canceled rides between them. However, in this instance, there's no evidence that the cancellations were directed by Uber corporate, according to CNN.
Track who’s buying politicians with “Greenhouse” browser add-on
Nicholas Rubin, a 16-year-old programmer from Seattle, has created a browser add-on that makes it incredibly easy to see the influence of money in US politics. Rubin calls the add-on Greenhouse, and it does something so brilliantly simple that once you use it you'll wonder why news sites didn't think of this themselves.
Greenhouse pulls in campaign contribution data for every Senator and Representative, including the total amount of money received and a breakdown by industry and size of donation. It then combines this with a parser that finds the names of Senators and Representatives in the current page and highlights them. Hover your mouse over the highlighted names and it displays their top campaign contributors.
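A minimal sketch of that find-and-annotate flow might look like the following; the names and donor summaries here are placeholders for illustration, not Greenhouse's actual data or code:

```python
import re

# Placeholder contribution data keyed by legislator name. The real add-on
# bundles campaign-finance figures for every Senator and Representative.
CONTRIBUTIONS = {
    "Jane Doe": "Top contributor: Oil & Gas ($1.2M)",
    "John Roe": "Top contributor: Securities & Investment ($900K)",
}

def highlight_legislators(html_text):
    """Wrap each known name in a span carrying its contribution summary."""
    for name, summary in CONTRIBUTIONS.items():
        span = f'<span class="greenhouse" title="{summary}">{name}</span>'
        html_text = re.sub(re.escape(name), span, html_text)
    return html_text

print(highlight_legislators("Senator Jane Doe introduced the bill."))
```

A real extension would run a pass like this over the page's text nodes and render the summary as a hover tooltip rather than a bare `title` attribute.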
In this sense, Greenhouse adds another layer to the news, showing you the story behind the story. In politics, as in many other things, if you want to know the why behind the what, you need to follow the money. And somewhat depressingly, in politics it seems that it's money all the way down.
AMD Completes FirePro Refresh, Adds 4 New FirePro Cards
Kicking off a busy day for professional graphics, AMD is first up, announcing a quartet of new FirePro cards. As part of the company’s gradual FirePro refresh that began with the W9100 in April and was followed by the W8100 in June, today AMD is gearing up to refresh the rest of their FirePro lineup with new products for the mid-range and low-end segments of the pro graphics market.
Being announced today are the FirePro W7100, W5100, W4100, and W2100. These parts are based on a range of AMD GPUs – including Tonga, a new GPU that has yet to show up in any other AMD products – and are aimed at the sub-$2500 market segment sitting below where the current W8100 tops out. With a handful of exceptions, the bulk of these upgrades are straightforward, focused on making AMD’s entire FirePro lineup 4K capable, improving performance across the board, and doubling the amount of VRAM compared to the past generation to allow for larger data sets.
AMD FirePro W Series Specification Comparison

                    FirePro W7100   FirePro W5100   FirePro W4100   FirePro W2100
Stream Processors   1792            768             512             320
ROPs                ?               16              16              8
Core Clock          ?               930MHz          630MHz          630MHz
Memory Clock        5GHz GDDR5      6GHz GDDR5      5.5GHz GDDR5    1.8GHz DDR3
Memory Bus Width    256-bit         128-bit         128-bit         128-bit
VRAM                8GB             4GB             4GB             2GB
Double Precision    ?               1/16            1/16            1/16
TDP                 150W            75W             50W             26W
GPU                 Tonga           Bonaire         Cape Verde      Oland
Architecture        GCN 1.1?        GCN 1.1         GCN 1.0         GCN 1.0
Display Outputs     4               4               4               2

Starting at the top, from a technical perspective the W7100 is the most interesting of the new FirePro cards. Whereas the previous-generation W7000 was based on a second-tier version of AMD’s venerable Tahiti GPU, the W7100 gets a brand new GPU entirely, one that we haven’t seen before. Named Tonga, this new GPU is a smaller, lower performance part that slots in under the Hawaii GPU used in the W9100 and W8100. However, while AMD is announcing the W7100 today they are not disclosing any additional information on Tonga, so while we can draw some basic conclusions from the W7100’s specifications, a complete breakdown of this new GPU will have to wait for another day.
From a specification point of view AMD is not disclosing the GPU clockspeed or offering any floating point throughput performance numbers, but we do know that W7100 will feature 1792 stream processors. Coupled with that is 8GB of GDDR5 clocked at 5GHz sitting on a 256-bit memory bus.
The W7100 is designed to be a significant step up from the outgoing W7000. Along with doubling the W7000’s memory from 4GB to 8GB, the Tonga GPU in the W7100 inherits Hawaii’s wider geometry front-end, allowing the W7100 to process 4 triangles/clock versus the W7000’s 2. Overall compute/rendering performance should also be greatly increased thanks to the much larger number of stream processors (1792 vs. 1280), but without clockspeeds we can’t say for sure.
Like the W7000 before it, the W7100 is equipped with 4 full size DisplayPorts, allowing for a relatively large number of monitors to be used with the card. And because it gets AMD's newest GCN display controller, W7100 is particularly well suited for 4K displays, being able to drive 3 4K@60Hz displays or 4 4K displays if some operate at 30Hz.
In AMD’s product stack the W7100 is designed to be a budget alternative to the W9100 and W8100, offering reduced performance at a much lower cost. AMD is primarily targeting the engineering and media markets with the W7100, as its compute performance and 8GB of VRAM should be enough for most engineering workloads, while its VRAM capacity and ability to drive 4 4K displays make it a good fit for 4K video manipulation.
The second card being introduced today is the W5100. This part is based on AMD’s Bonaire GPU, a GCN 1.1 GPU that has been in AMD’s portfolio for over a year now but has not made it into a FirePro part until now. W5100 replaces the outgoing W5000, which was a heavily cut-down Pitcairn part.
In terms of specifications, the W5100 utilizes a slightly cut-down version of Bonaire with 768 SPs active. It is clocked at approximately 910MHz, which puts its compute performance at 1.4 TFLOPS for single precision. Feeding W5100 is 4GB of VRAM attached to a 128-bit memory bus and clocked at 6GHz.
Compared to the outgoing W5000, the W5100 gains the usual VRAM capacity upgrade that the rest of the Wx100 cards have seen, while the other specifications are a mixed bag on paper. Compute performance is only slightly improved – from 1.28 TFLOPS to 1.4 TFLOPS – and memory bandwidth has actually regressed slightly from 102GB/sec. Consequently the biggest upgrade will be found in memory capacity bound scenarios; otherwise the W5100’s greatest improvements come from its GCN 1.1 lineage.
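The bandwidth comparison is easy to check from the memory specifications; the W5000's 3.2GHz effective memory clock on a 256-bit bus is our assumption here:

```python
# Quick check of the W5100 vs. W5000 memory bandwidth discussed above.
def gbps(effective_clock_ghz, bus_bits):
    # effective data rate x bus width in bytes
    return effective_clock_ghz * bus_bits / 8

print(gbps(6.0, 128))   # W5100: 96.0 GB/s (6GHz GDDR5 on a 128-bit bus)
print(gbps(3.2, 256))   # W5000: 102.4 GB/s (assumed 3.2GHz on a 256-bit bus)
```

The narrower 128-bit bus is what costs the W5100 here; even at 6GHz GDDR5 it cannot quite match the old card's wider bus.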
Speaking of which, with 4 full size DisplayPorts the W5100 has the same 4K display driving capabilities as the W7100. However with lower performance and half the VRAM, it’s decidedly a mid-range card and AMD treats it as such. This means it’s targeted towards lower power usage scenarios where the high compute performance and 8GB+ VRAM capacities of the W7100 and higher aren’t needed.
The third of today’s new FirePro cards is the W4100. Based on AMD’s older Cape Verde GPU, this is not the first time that Cape Verde has appeared in a FirePro product, but it is the first time it has appeared in a workstation part, its previous appearance being the display-wall niche W600. At the same time the W4100 doesn’t have a true analogue in AMD’s previous generation FirePro stack, which bottomed out at the W5000, so the W4100 marks a new, lower priced and lower performance tier for FirePro.
With just 512 SPs active the W4100 tops out at only 50W power consumption, reflecting the fact that it is targeted towards lower power use cases. AMD has paired the card with 2GB of VRAM, and based on Cape Verde’s capabilities we expect that this is on a 128-bit bus. AMD has not provided any more technical details on the card, but it goes without saying that this is not a card meant to be a performance powerhouse.
AMD’s target market for this is lightweight 2D and 3D workloads such as finance and entry level CAD. The 4 mini-DisplayPorts allow the card to directly drive up to 4 displays, though because this is a GCN 1.0 GPU it doesn’t have the same flexibility as the W5100.
The final FirePro card being introduced today is the FirePro W2100, which is AMD’s new entry-level FirePro card. Like the W4100 it has no true analogue in AMD’s older product stack, but functionally it replaces the old Turks based V4900, a card which AMD kept around even after the launch of GCN to serve as their entry level FirePro product.
W2100 is based on AMD’s Oland GPU, which marks the first time that this existing AMD GPU has appeared in a FirePro product. W2100 uses a cut down version of Oland with 320 SPs active and attached to 2GB of memory on a 128-bit bus. Oland is a very limited functionality GPU, and while it’s more than suitable for basic imaging it should be noted that it doesn’t have a video decoder.
At a TDP of just 26W, the W2100 is AMD’s lowest power, lowest performance card. Functionally it’s a cheaper alternative to the W4100 for users who don’t need to drive 4 displays, with W2100 featuring just 2 DisplayPorts. The targeted market is otherwise similar, with a focus on lightweight 2D and 3D workloads over 1-2 monitors.
Meanwhile, along with today’s product announcements AMD is also announcing that they will be bringing their low-level Mantle API over to the FirePro family. The nature of the pro graphics market means that it will likely be some time before we see Mantle put to meaningful use here since the API is still under development, but once AMD gets the API locked down they believe that Mantle can offer many of the same benefits for professional graphics workloads as it does for gaming. The greatly reduced draw call overhead should be a boon for many 3D workloads, and Mantle’s ability to more easily transition between compute and graphics workloads would map well to engineering tasks that want to do both at the same time.
Wrapping things up, AMD has not revealed final pricing for these cards at this time, though we expect pricing to follow the previous generation W series cards. The W2100, W4100, and W5100 will be available next month. Meanwhile, no doubt owing to its use of the new Tonga GPU, the W7100 will be farther out, with availability expected in Q4 of this year.
Short Bytes: Intel's Core M and Broadwell-Y SoC
Intel has slowly been feeding us information about their upcoming Broadwell processors for a couple years now, with the first real details kicking off almost a year ago at IDF 2013. Since then, the only other noteworthy piece of information came back in March when it was revealed that socketed Broadwell CPUs with unlocked multipliers will be available with Iris Pro Graphics. Today, Intel is ready to begin providing additional information, and it starts with the Broadwell-Y processor, which Intel is now referring to as an SoC (System On Chip). We have an in-depth article on the subject, but for Short Bytes we want to focus on the bottom line: what does this mean for end users?
The big news for Broadwell is that it will be the first 14nm processor available to the public, following on the success of Intel's 22nm process technology. Shrinking the process technology from 22nm to 14nm can mean a lot of things, but the primary benefit this time appears to be smaller chip sizes and lower power requirements. The first parts will belong to the Core M family of products, a new line catering specifically to low power, high mobility form factors (typically tablets and hybrid devices). With Core M, Intel has their sights set on the fanless computing market with sub-9mm thick designs, and they have focused on reducing power requirements in order to meet the needs of this market. This brings us to Broadwell-Y, the lowest power version of Broadwell and the successor to Haswell-Y and the codename behind the new Core M.
The reality of Intel's Y-series of processors is that they haven't been used all that much to date. Only a handful of devices used Haswell-Y (and even fewer used Ivy Bridge-Y), mostly consisting of 2-in-1 devices that can function as both a laptop and a tablet. For example, the base model Surface Pro 3 uses a Core i3-4020Y, and Dell's XPS 11 and certain Venue Pro 11 tablets also use Y-series parts; Acer, HP, Sony, and Toshiba also have some detachable hybrid devices with the extreme low power processors. Unfortunately, pricing on the Y-series is generally much higher than competing solutions (i.e. ARM-based SoCs), and there have been criticisms of Intel's higher power requirements and lower overall battery life as well.
Core M thus serves marketing needs as well as technical requirements: it replaces the Core i3/i5/i7 Y-series parts and gives Intel a brand they can market directly at premium tablets/hybrids. And in another move likely driven by marketing, Core M will be the launch part for Intel's new 14nm process technology. Transitions between process technology usually come every 2-3 years, so the 14nm change is a big deal and launching with their extreme low power part makes a statement. The key message of Broadwell is clear: getting into lower power devices and improving battery life is a critical target. To that end, Broadwell-Y probably won't be going into any smartphones, but getting into more premium tablets and delivering better performance with at least competitive battery life relative to other SoCs is a primary goal.
Compared to the Haswell-Y parts, Intel has made some significant advances in performance as well as power use, which we've covered elsewhere. The cumulative effect of the improvements Intel is bringing is that Broadwell-Y has a greater than 2X reduction in TDP (Thermal Design Power) compared to Haswell-Y. It also has a 50% smaller and 30% thinner package and uses 60% lower idle power. Intel points out that Broadwell-Y is set to deliver more than a 2X improvement in performance per Watt over Haswell-Y, though that's a bit more of a nebulous statement (see below). Many of the improvements come thanks to Intel's increased focus on driving down power requirements. Where previous Intel processors targeted laptops and desktops as the primary use case and then refined and adjusted the designs to get into lower power envelopes, with Broadwell Intel is putting the Y-series requirements center stage. The term for this is "co-optimization" of the design process, and these co-optimizations for Broadwell-Y are what allows Intel to talk about "2x improvements". But you need to remember what is being compared: Haswell-Y and Broadwell-Y.
Broadwell parts in general will certainly be faster/better than the current Haswell parts – Intel doesn't typically "go backwards" on processor updates – but you shouldn't expect twice the performance at the same power. Instead, Broadwell-Y should offer better performance than Haswell-Y using much less power, but if you reduce total power use by 2X you could increase performance by 5% and still claim a doubling of performance per Watt. And that's basically what Intel is doing here. Intel estimates the core Broadwell architecture to be around 5% faster than Haswell at the same clocks; specifically, IPC (Instructions Per Cycle) are up ~5% on average. Similarly, changes and improvements to the graphics portion of the processor should deliver more performance at a lower power draw. Add in slightly higher clock speeds and you get a faster part than last generation that uses less power. These are all good improvements, but ultimately it comes down to the final user experience and the cost.
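To make that arithmetic concrete, here is the performance-per-Watt claim worked through with normalized, illustrative numbers (not Intel's actual figures):

```python
# Illustrative performance-per-Watt math, normalized to a Haswell-Y baseline.
haswell_perf, haswell_power = 1.00, 1.00
broadwell_perf = 1.05    # ~5% higher IPC at the same clocks
broadwell_power = 0.50   # the claimed >2x TDP reduction

gain = (broadwell_perf / broadwell_power) / (haswell_perf / haswell_power)
print(f"performance per Watt: {gain:.2f}x")   # 2.10x
```

A modest 5% performance bump combined with halved power is enough to clear the "2x performance per Watt" bar, which is why the headline figure shouldn't be read as 2x raw performance.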
Everywhere you go, people are increasingly using tablets and smartphones for many of their daily computing needs, and being left out of that market is the road to irrelevance. Core M (Broadwell-Y) is Intel's latest push to make inroads into these extremely low power markets, and on paper it looks like Intel has a competitive part. It's now up to the device vendors to deliver compelling products, as fundamentally the choice of processor is only one element of an electronics device. Being the first company to deliver 14nm parts certainly gives Intel an edge over the competition, but high quality Android and iOS tablets sell for $300-$500, so there's not a lot of room for a $100+ processor – which is why Intel has their Atom processors (due for the 14nm treatment with Braswell, if you're wondering).
Core M is going after the premium tablet/hybrid market, with benefits including full Windows 8 support, but will it be enough? If you're interested in such a device and you don't already own the Haswell-Y version, Core M products should deliver slimmer and lighter devices with improved battery life and better performance. Don't expect a 10" Core M tablet to deliver the same battery life as a 7" Android/iOS device (at least, not without a larger battery), since the display and other components contribute a lot to power use and Windows 8 has traditionally been far less battery friendly than Android; still, Core M tablets may finally match or perhaps even exceed the battery life of similarly sized iOS/Android tablets. The first retail products with Core M should be shipping before the end of the year, so we'll find out later this year and early next how well Broadwell-Y is able to meet its lofty goals. And we'll also find out how much the Core M products cost.
Browser Face-Off: Battery Life Explored 2014
It has been five years since we did a benchmark of the various web browsers and their effect on battery life, and a lot has changed. Our testing then included Opera 9 & 10, Chrome 2, Firefox 3.5.2, Safari 4, and IE8. Just looking at those version numbers is nostalgic. Not only have the browsers gone through many revisions since then, but computer hardware and the Windows operating system are very different. While there has been a lot of talk, there hasn't been a lot of data comparing browser battery usage. Today we're going to put the latest browsers to the test and deliver some concrete numbers.
Lenovo Announces New ThinkStation P Series Desktop Workstations
As much as I would like to be at SIGGRAPH, one of the reasons to visit would be to see Lenovo's latest launch of its Haswell-E desktop workstation series. One of the key trends in the workstation market in recent quarters has been developing professional-grade systems that can encompass all the critical industries requiring horsepower under the desk: engineering, media, energy, medical, finance, and others. These systems have to be verified against industry standards to even be considered by these markets, and the shift to Haswell-E and DDR4 will be an all-important factor for those who rely on speed and performance. One of the challenges for these system manufacturers is defining themselves in the market – Lenovo is already a big player in many other PC-related industries, so listening to customers is all-important when trying to develop market share.
The new ThinkStation P series will be based around the LGA2011-3 socket, using Xeon processors and high-capacity DDR4 memory. Given the nature of the platform, we can assume the DDR4 will be ECC by default. For GPU compute, Quadro cards are being used, with the top-line P900 model supporting dual Xeons alongside up to three Quadro K6000 graphics cards and up to 14 storage devices. All the P series will be certified for the key ISV applications, and Intel quotes them as supporting Thunderbolt 2, which should make for interesting reading regarding the PCIe lane distribution or PLX chip arrangement, depending on whether it is onboard or via an optional add-in card.
In terms of that all-important product differentiation, the P series will use 'tri-channel cooling' and air baffles to direct cool air immediately to the component in question and then out of the chassis without touching other components. This is essentially a more integrated take on the compartmentalized chassis we see in the consumer market, except that when one company makes the whole system, it can control the experience to a much tighter level.
The P series also features a trio of 'FLEX' themed additions. The FLEX Bay is designed to support an optical drive or the FLEX module, which can hold an ultraslim ODD, media reader, or FireWire hub. The FLEX Tray on the P900 allows each of the seven HDD trays to support either one 3.5" drive or two 2.5" drives, hence the fourteen-drive support mentioned earlier. The FLEX Connector is a mezzanine card allowing users to add storage-related cards without sacrificing rear PCIe slots, moving the extra card away from the other devices, presumably at right angles. Lenovo also wants to promote its tool-less power supply removal, without having to adjust the cables, on the P700 and P500, which suggests that the PSU connects into a daughter PCB with all the connectors pre-attached, allowing the PSU to be replaced easily.
Lenovo is also adorning its components with QR codes, so that if a user has an issue, scanning the code directs them to the specific webpage dealing with that component. The chassis will have integrated handles for easier movement or rack mounting. Lenovo is also promoting its diagnostic port, allowing the user to plug in an Android smartphone or tablet via USB for system analysis using the ThinkStation app.
Unfortunately, until Haswell-E and the motherboard chipsets are officially announced, Lenovo cannot say more about the series' specifications beyond memory capacities, DRAM support, and power supply numbers. However, they do seem confident in their ability to provide support and a good experience to their ThinkStation users. We have been offered a review sample later in the year, when we can test some of these features.
Source: Lenovo
Gallery: Lenovo Announces New ThinkStation P Series Desktop Workstations
Addition: Ryan met with Lenovo last week, where we were allowed to take some images of the chassis and design:
Gallery: Lenovo P-Series Event Photos
Snowden critic resigns Naval War College after online penis photo flap [UPDATED]
John Schindler, the former National Security Agency analyst and an outspoken critic of Edward Snowden, resigned Monday from his position as a professor at the US Naval War College months after a picture of his alleged penis surfaced online. The professor of national security affairs announced via Twitter his resignation from the Rhode Island institution, effective August 29.
"Sorry to say I'm severing my affiliation with Naval War College. I had a great time there but it's time to move on. Thanks for your support," Schindler tweeted.
First US smartphone kill-switch legislation awaits California governor signature
A bill to require mandatory kill switches on smartphones, so that they can be disabled in the event of theft, passed the California state senate today and could become law if Governor Jerry Brown signs it in the coming weeks. The bill would mandate (PDF) that all smartphones manufactured after July 1, 2015 and sold in California come equipped with the means to “render the essential features of the smartphone inoperable when not in the possession of the authorized user.”
The bill was introduced by California Senator Mark Leno (D), and it passed on a vote of 53-20 (PDF). If Governor Brown approves it, all smartphones sold in California would prompt the user to enable the wiping feature upon initial setup, although opting out would be possible as well. If the smartphone were stolen, the kill switch would have to prevent the phone from being re-activated on a network without the owner's approval. The California bill also stipulates that smartphone designers would have to make it possible for the phone to be re-activated if it found its way back into the rightful owner's hands.
Finally, the bill asks that a civil penalty of between $500 and $2,500 be levied per smartphone if a person is found to be selling stolen phones.
Announced 1,161 days ago, Super Smash Bros. for Wii U may hit 2015, data hints
As we enter into the middle of August, the release calendar for this holiday season's AAA blockbuster games is pretty much set, running from The Sims 4 on September 2 through to Far Cry 4 on November 18 (over a week before the all-important Black Friday hits on November 28). Practically every big-name game coming out this holiday season now has a specific, publicized North American release date within that timeframe, even if some of those dates may end up slipping into the future.
And then there's the exhaustingly named Super Smash Bros. for Wii U, which seems to be 2014's only major holiday title yet to lock in a specific release date (aside from Nintendo's official, amorphous "fourth quarter" release window). That's despite the fact that the Nintendo 3DS version of the same game has been set for an October 3 release. (We should note that a leak from HMV suggests the Wii U version is coming November 28, but that is not yet confirmed.)
Even though Nintendo has stressed how important Super Smash Bros. is to the Wii U's holiday comeback plans, the lack of a specific release date and Nintendo's long history of game delays have some worried that the game may end up being pushed into 2015 after all. Every day that goes by without a release date announcement makes it more likely that Nintendo has decided to give the developers additional time to finish polishing the game. And that would mean it's less likely the company will have time to roll out the production, supply chain, marketing, and retail channel preparations it needs to get Super Smash Bros. for Wii U into consumers' hands by the end of the year.
Amtrak employee sold customer data to DEA for two decades
A former Amtrak employee had been giving passenger information to the Drug Enforcement Administration in exchange for money for nearly two decades, according to reports from the Whittier Daily News. Over $854,460 changed hands over the last 20 years, despite the fact that information relevant to the DEA's work could have been obtained from Amtrak for free.
The employee, described as a "secretary to a train and engine crew" in a summary obtained by the AP, was selling the customer data without Amtrak's approval. Amtrak and other transportation companies collect information from their customers including credit card numbers, travel itineraries, emergency contact info, passport numbers, and dates of birth. When booking tickets online in recent years, Amtrak has also collected phone numbers and e-mail addresses.
The Whittier Daily News points out that Amtrak's corporate privacy policy allows the company to share this information with "certain trustworthy business partners"; however, the secretary's actions didn't fall under this sanction.
Feeding everyone with a minimum of carbon emissions
Agriculture has an enormous footprint—by some estimates, it accounts for more than 90 percent of humanity's water use. One of the other areas where its footprint is felt is in carbon emissions. Converting land to agriculture disrupts the existing soil ecosystem, releasing carbon stored there into the atmosphere; a large fraction of humanity's collective carbon emissions fall under the category of "land use change."
In the developed world, the intensification of agriculture has actually allowed some formerly farmed areas to revert to something akin to their original state. But it's unclear whether there are limits to that intensification that will eventually force us to bring more agricultural land into use. Even if we don't run into limits, population growth means that it will have to scale quickly, as global food demand is expected to increase by at least 70 percent by the middle of the century.
A new paper in this week's PNAS examines whether there are ways to add significant new agricultural land without causing a huge boost in carbon emissions. It finds that it's possible to greatly expand farmed land while avoiding billions of metric tons of carbon emissions, but doing so would require an unprecedented level of international cooperation.
Amazon to publishers: You think too small about cheap e-books
Amazon and Hachette escalated their e-books PR battle over the weekend with an impassioned message from a product team on one side, a response from a CEO on the other, and both pleading for their loyalists to help. As the fight carries on, it's clear Amazon knows it has a retail reputation to lose, but the company is unsure how to reconcile that with the business dealings coming to light over the last few months.
The dispute surfaced publicly in late spring, when it became clear Amazon was suppressing Hachette book sales and shipments in response to Hachette's refusal to agree to lower e-book prices. Since then, Amazon has tried twice to justify its actions, and Hachette has responded by saying it doesn't intend to give in, and Amazon's market influence should not be underestimated. At the end of last week, 900 authors got involved and created a $104,000 full-page ad in The New York Times condemning Amazon for putting writers in the middle of the contract dispute, handing out Amazon CEO Jeff Bezos' e-mail address to encourage readers to give him what for.
On Saturday, the Amazon Books team wrote a long letter to its Kindle authors invoking George Orwell and asking the publishing industry to get on board with its product and pricing "innovations." At the end of the letter, the Books team asked readers to e-mail Hachette's CEO with a form letter that included demands like "stop working so hard to overcharge for e-books. They can and should be less expensive" and "stop using your authors as leverage and accept one of Amazon's offers to take them out of the middle."
Espionage programs linked to spying on former Soviet targets
A one-two combination of malware programs has infiltrated the embassies and government systems of a number of former Eastern Bloc nations as well as European targets, according to a technical analysis by security researchers.
Using exploits and malicious downloads delivered through phishing attacks or on compromised websites, attackers first infect a system with a program, known as Wipbot, according to an analysis posted by security firm Symantec on Friday. The program conducts initial reconnaissance, collecting system information and only compromising systems that correspond with a specific Internet address. After the target is verified, a second program—alternatively known as Turla, Uroburos, and Snake—is downloaded to further compromise the system, steal data, and exfiltrate information camouflaged as browser requests.
The one-two combination has all the hallmarks of a nation-state intelligence gathering operation targeting the embassies of former Eastern Bloc countries in Europe, China, and Jordan, according to Symantec.
Judge affirms probationer has a right to tape police officer in her home
In a statement of findings and recommendations filed last week, a US Magistrate Judge for the Eastern District of California affirmed that a woman on searchable probation had the right to videotape three officers who came to her home to search it.
In February 2011, plaintiff Mary Crago was visited by three police officers, including defendant Officer Kenneth Leonard. Leonard was working on the Sacramento Police Department's Metal Theft Task Force, and he was tipped off that Crago may have been involved in a theft involving a vehicle battery. Since Crago was on searchable probation, the officers entered her home—the door was open—and they found Crago “sitting on a mattress, digging furiously through a purse.”
According to court documents, “Inside the purse, defendant found a four-inch glass pipe and a small baggie with white residue. The white residue subsequently tested positive for methamphetamine.” Crago did not resist the officers' search, but she allegedly told Leonard that she was recording the search on her laptop. Leonard then took her laptop and deleted her recording, telling her that recording was forbidden.
In possible FAA violation, San Jose police already tested drone 4 times
On Monday, the San Jose Police Department responded to Ars’ request for comment concerning its drone use, saying that it will “follow all regulations the [Federal Aviation Administration] requires regarding its drone use."
Last week, newly published documents showed that the San Jose Police Department (SJPD) did not believe it needed federal authorization in order to fly the drone it acquired in January 2014. The FAA said otherwise, and now the SJPD will oblige.
“The SJPD will seek a [Certificate of Waiver or Authorization (COA)] if that is required,” SJPD spokesman Albert Morales told Ars. “The SJPD obtained FAA literature regarding requirements for an [unmanned aerial system, or UAS], prior to procuring the UAS.”