Tech
Western Digital Updates Red NAS Drive Lineup with 6 TB and Pro Versions
Back in July 2012, Western Digital began the trend of hard drive manufacturers bringing out dedicated units for the burgeoning NAS market with its 3.5" Red hard drive lineup. The drives specifically catered to units with 1-5 bays, with firmware tuned for 24x7 operation in SOHO and consumer NAS units. 1 TB, 2 TB and 3 TB versions were made available at launch. Seagate later jumped into the fray with a hard drive series carrying similar firmware features; its differentiating aspect was the availability of a 4 TB version. Western Digital responded in September 2013 with its own 4 TB version (as well as a 2.5" lineup in capacities up to 1 TB).
Today, Western Digital is updating its Red lineup for the third straight year. The Red lineup gets the following updates:
- New capacities (5 TB and 6 TB versions)
- New firmware (NASware 3.0)
- Official sanction for use in 1-8 bay tower form factor NAS units
In addition, a new category is also being introduced, the Red Pro. Available in 2 - 4 TB capacities, it is targeted at rackmount units with 8 - 16 bays (though nothing precludes its use in tower form factor units with fewer bays).
WD Red Updates
Even though 6 TB drives have been around (HGST introduced the Helium drives last November, while Seagate has been shipping Enterprise Capacity and Desktop HDD 6 TB versions for a few months now), Western Digital is the first to claim a NAS-specific 6 TB drive. The updated firmware (NASware 3.0) adds features related to vibration compensation, which allows the Red drives to now be used in 1 - 8 bay desktop NAS systems (earlier versions were officially sanctioned only for 1 - 5 bay units). NASware 3.0 also has new features to help with data integrity protection in case of power loss. The unfortunate aspect here is that units with NASware 2.0 can't be upgraded to NASware 3.0, since NASware 3.0 requires some recalibration of internal components that can only be done at the factory.
The 6 TB version of the WD Red has five platters, making it the first drive we have seen with a per-platter capacity of more than 1 TB (1.2 TB/platter in this case). This increase is achieved using plain old Perpendicular Magnetic Recording (PMR) technology; Western Digital has not yet found reason to move the WD Red lineup to any of the newer technologies such as SMR (Shingled Magnetic Recording), HAMR (Heat-Assisted Magnetic Recording) or helium filling. The 5 TB and 6 TB versions also feature WD's StableTrac technology (securing the motor shaft at both ends in order to minimize vibration). As usual, the drives come with a 3-year warranty. Other aspects such as rotation speed, buffer capacity and the qualification process remain the same as in the previous generation units.
WD Red Pro
The Red Pro targets medium and large business NAS systems that require more performance, moving to a rotation speed of 7200 rpm. Like the enterprise drives, the Red Pro comes with hardware-assisted vibration compensation, undergoes extended thermal burn-in testing and carries a 5-year warranty. 2, 3 and 4 TB versions are available, with the 4 TB version being a five-platter design (800 GB/platter).
The WD Green drives are also getting a capacity boost to 5 TB and 6 TB. WD specifically mentioned that its in-house NAS and DAS units (My Cloud EX2 / EX4, My Book Duo etc.) are also getting new models with these higher-capacity drives pre-installed. The MSRPs for the newly introduced drives are provided below.
WD Red Lineup 2014 Updates - Manufacturer Suggested Retail Prices
- WD Red - 5 TB (WD50EFRX): $249
- WD Red - 6 TB (WD60EFRX): $299
- WD Red Pro - 2 TB (WD2001FFSX): $159
- WD Red Pro - 3 TB (WD3001FFSX): $199
- WD Red Pro - 4 TB (WD4001FFSX): $259
We do have review units of both the 6 TB WD Red and the 4 TB WD Red Pro. Look out for the hands-on coverage in the reviews section over the next couple of weeks.
Explaining Continuity: The tech tying iOS 8 and OS X Yosemite together
Apple wants you to buy Apple devices. It insists, mostly successfully, that computers, tablets, and phones are fully separate product categories with separate use cases and that “you should be able to use the right device for the moment.” The company brags on its earnings calls that first-time iPhone buyers are more likely to pick up additional Apple devices in the future. It’s selling a vision in which everything works best if you own an iPhone and an iPad and a Mac rather than mixing and matching.
If you subscribe to Apple’s philosophy, iOS 8 and OS X Yosemite will reward your faith. While iOS and OS X have shared certain services and features since 2011 or so, this year’s releases will take that interoperability to the next level under the “Continuity” banner.
Yes, for those of us who prefer to live in between ecosystems, Continuity takes today’s vendor lock-in problems and makes them even worse. For the large number of people who own and use multiple Apple products, though, it promises to make your devices work together in ways beyond simple data synchronization.
Apollo 11 turns 45: A lunar landing anniversary retrospective
On July 20, 1969, at about four minutes before 10:00pm Central Daylight Time, former naval aviator and test pilot Neil Armstrong became the first human being to stand on the surface of the Moon. About 20 minutes later, he was followed by Buzz Aldrin, an Air Force colonel with a PhD in astronautics from MIT (Aldrin had, quite literally, written the book on orbital rendezvous techniques). Armstrong and Aldrin’s landing was the culmination of almost a decade of scientific and engineering work by hundreds of thousands of people across the United States. Even though the lunar program’s goals were ultimately political, the Apollo project ranks as one of the greatest engineering achievements in human history.
The six successful Apollo landings between 1969 and 1972 still inspire awe today, almost half a century later. A big part of that awe comes from the fact that those voyages from the Earth to the Moon were accomplished with only the most basic of computing assistance. There were no supercomputers as we’d understand them today; although the computers that eventually powered the Apollo spacecraft were almost unbelievably advanced at the time, they are alarmingly primitive when viewed through the lens of 21st century computing.
Fortunately for amateur and professional historians wondering how the effort succeeded despite its comparatively primitive computing, NASA has extensive historical resources about project Apollo available in the public domain to study, including the outstanding Apollo Lunar Surface Journal (along with its companion site, the Apollo Flight Journal). We’ve combed through gigabytes of documents and images to bring you this brief retrospective of some lesser-known interesting historical tidbits around Apollo 11 and that one small step nearly a half-century ago.
Growth factor restores insulin response in diabetic mice
In Type 1 diabetes, the body's immune system destroys the cells that produce insulin, leaving the body unable to make it. In Type 2 diabetes, the body continues to produce insulin, but organs don't respond to it efficiently. As a result, insulin injections, which effectively treat Type 1, don't do as much to help people with Type 2 diabetes.
There is a class of drugs called thiazolidinediones that help restore the body's ability to respond to insulin. Unfortunately, these drugs also cause a variety of side effects, including weight gain, bone density loss, and heart problems, so the search for a less problematic treatment has continued.
Now, working with mice, researchers have found that a well-known growth factor also restores the body's sensitivity to insulin and does so without any of the side effects associated with existing drugs. And they show that a modified form of the growth factor can still work effectively while reducing the risk of unforeseen consequences. This doesn't mean that using this method as a treatment will be free of side effects, but it does provide a promising avenue for further experiments.
DOE, commercial partners start world’s largest carbon capture project
Earlier this week, the US Department of Energy announced that work has started on what will, when finished, be the world's largest carbon capture facility. Located in Thompsons, Texas, the project will capture a portion of the emissions from the coal-fired W.A. Parish Generating Station. The CO2 will then be compressed and piped to the West Ranch oil field, where it will be injected underground. This will help liberate oil that's otherwise difficult to extract, with the added benefit that the injected carbon dioxide typically stays underground, sequestered.
The project was originally planned as a small pilot that would extract CO2 from the equivalent of only 60 megawatts of the plant's 3,500MW of generating capacity. When it was realized that the CO2 captured from 60MW would be too little to supply the oil field's needs, the project scope was expanded to 240MW. At that scale, the facility would become the largest of its type in the world.
The exhaust gas will have its sulfates removed before being bubbled through a solution of amines, which will bind the CO2. Once separated from the rest of the gasses, the carbon dioxide will be released by heating the amine solution, which can be recycled. The CO2 is then sent under pressure via a pipeline.
Robotic glove gives you extra fingers for grabbing
Four fingers and a thumb on each hand is pretty useful. Humans have gone from caves to sprawling cities in part because of our dexterous digits.
But researchers at MIT think we could do even better if we had an upgrade. They have developed a glove with two extra robotic fingers that respond intelligently to your movements, allowing you to perform two-handed tasks with just one robot-enhanced hand.
"You do not need to command the robot, but simply move your fingers naturally. Then the robotic fingers react and assist your fingers," said the glove's creator Harry Asada, of MIT's Department of Mechanical Engineering.
Manual Camera Controls and RAW in Android L
For those who have followed the state of camera software in AOSP and Google Camera in general, it’s been quite clear that this portion of the experience has been a major stumbling block for Android. Third party camera applications have almost always offered fewer options and a worse camera experience than first party ones, and manual controls effectively didn’t exist because the underlying camera API simply didn’t support them. Until recently, the official Android camera API has supported only three distinct modes: preview, still image capture, and video recording. Within these modes, capabilities were similarly limited. It wasn’t possible to do burst image capture in photo mode or take photos while in video mode. Even something as simple as tap to focus wasn’t supported through Android’s camera API until ICS (4.0). In response to these issues, Android OEMs and silicon vendors filled the gap in capabilities with custom, undocumented camera APIs. While this opened up the ability to deliver much better camera experiences, these APIs were only usable in the OEM’s own camera applications. If the OEM’s application lacked manual controls, there was no way for users to get a camera application that had them.
With Android L, this will change. Fundamentally, the key to understanding this new API is understanding that there are no longer distinct modes to work with. Photos, videos, and previews are all processed in the same exact way. This opens up a great deal of possibility, but also means more work on the part of the developer to do things correctly. Now, instead of sending capture requests in a given mode with global settings, individual requests for image capture are sent to a request queue and are processed with specific settings for each request.
This sounds simple enough, but the implications are enormous. First, image capture is much faster. Before, if the settings for an image changed, the entire imaging pipeline had to clear out before another image could be taken; otherwise, any image already in the pipeline would have its settings changed mid-processing, leaving them inconsistent and incorrect. This wait period after every settings change slowed things down greatly. With the new API, each capture request carries its own (device dependent) settings, so there’s no need to wait on the pipeline when settings change. This dramatically increases the maximum capture rate regardless of the format used: the old API applied changes globally and had to discard everything in the pipeline whenever settings changed, while the new API applies settings on a per-image basis, so nothing has to be discarded and image capture stays fast.
The second implication is that the end user will have much more control over capture settings. These have been discussed before in the context of iOS 8’s manual camera controls, but in effect it’s now possible to manually control shutter speed, ISO, focus, flash, and white balance, along with options to control exposure bias, exposure metering algorithms, and the capture format. This means that images can be output as JPEG, YUV, RAW/DNG, or any other format that is supported.
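To make this concrete, here is a minimal sketch of what per-request manual control could look like with the new camera2-style API. It assumes the application has already opened a CameraDevice, configured a CameraCaptureSession, and created an output Surface (for example from an ImageReader set up for JPEG or RAW output); the class and key names follow the Android L preview documentation, but exact values and feature availability are device dependent.

```java
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

public class ManualCaptureSketch {
    // Submits one still-capture request with fully manual settings.
    // The camera, session, target Surface, and handler are assumed to be
    // set up elsewhere by the application.
    public static void captureManual(CameraDevice camera,
                                     CameraCaptureSession session,
                                     Surface target,
                                     Handler handler) throws CameraAccessException {
        CaptureRequest.Builder builder =
                camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(target);

        // Disable the auto-exposure, auto-focus and auto-white-balance routines
        // so the manual values set below are actually used.
        builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
        builder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF);
        builder.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF);

        // Manual shutter speed (~1/60 s, expressed in nanoseconds), ISO,
        // and focus distance (0.0f means focus at infinity).
        builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 16666667L);
        builder.set(CaptureRequest.SENSOR_SENSITIVITY, 400);
        builder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);

        // These settings travel with this request only; other queued requests
        // keep their own settings, so nothing global changes.
        session.capture(builder.build(), null, handler);
    }
}
```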
Equally crucial is the elimination of the distinction between photo and video. Because these distinctions are removed, it’s now possible to do burst shots, capture full resolution photos while recording lower resolution video, and shoot HDR video. In addition, because the pipeline reports the full camera state for each image, Lytro-style image refocusing is doable, as are depth maps for post-processing effects. Google specifically cited HDR+ on the Nexus 5 as an example of what’s possible with the new Android camera APIs.
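As a rough illustration of how burst capture falls out of the same request model, the sketch below queues three bracketed exposures in a single captureBurst call; each request carries its own exposure time, which is the kind of per-frame control an HDR-style pipeline builds on. As in the previous example, the open camera, session, output Surface, and handler are assumed to exist.

```java
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

import java.util.ArrayList;
import java.util.List;

public class BracketedBurstSketch {
    // Queues three still captures in one burst, each with a different
    // exposure time (roughly 1/250 s, 1/60 s and 1/15 s).
    public static void captureBracket(CameraDevice camera,
                                      CameraCaptureSession session,
                                      Surface target,
                                      Handler handler) throws CameraAccessException {
        long[] exposuresNs = {4000000L, 16666667L, 66666667L};
        List<CaptureRequest> burst = new ArrayList<>();

        for (long exposure : exposuresNs) {
            CaptureRequest.Builder b =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            b.addTarget(target);
            b.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
            b.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposure);
            b.set(CaptureRequest.SENSOR_SENSITIVITY, 100);
            burst.add(b.build());
        }

        // All three requests enter the queue together and are processed with
        // their individual settings; no pipeline flush is needed between frames.
        session.captureBurst(burst, null, handler);
    }
}
```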
This new camera API will be officially released in Android L, and it’s already usable on the Android L preview for the Nexus 5. While there are currently no third party applications that take advantage of this API, there is a great deal of potential to make camera applications that greatly improve upon OEM camera applications. However, the most critical point to take away is that the new camera API will open up the possibility for applications that no one has thought of yet. While there are still issues with the Android camera ecosystem, with the release of Android L software won’t be one of them.
Search-and-rescue drone mission readies for takeoff after defeating FAA
A Texas volunteer search-and-rescue outfit that uses five-pound drones to find missing persons is resuming operations following its Friday courthouse victory against US flight regulators.
Federal Aviation Administration officials in February grounded Texas EquuSearch Mounted Search and Recovery Team, which deployed the unmanned aircraft to search for the missing for free.
EquuSearch, which does not charge for its services, says it has found more than 300 persons alive in some 42 states and eight countries. It challenged the FAA's order and, indirectly, prevailed. The US Court of Appeals for the District of Columbia Circuit found [PDF] that the e-mail from the FAA to EquuSearch was not the official method for a cease-and-desist order.
The Internet’s Own Boy review: Remembering—and honoring—Aaron Swartz
Every element of Aaron Swartz’s brief, remarkable life exemplifies the stuff we cover all the time on Ars. His tech-filled upbringing, his teenage rise to geek royalty, his hand in reddit’s genesis, and his online political activism made him a worthy subject of Ars conversation well before he became a household name.
Sadly, Swartz’s story didn’t reach critical mass until he took his own life nearly two years after being indicted on twelve federal felony charges. The case hinged on allegations that he had downloaded 4.8 million documents from JSTOR, an online academic research archive, which he accessed from within MIT’s campus without permission.
In the weeks after his suicide, the Internet saw both a massive outpouring of grief and a comprehensive examination of what made his case so outrageous. The latter makes the new feature-length documentary about his life, The Internet’s Own Boy, less than indispensable in telling Swartz’s story, but considering the fact that he spent his final years trying to make information free and open, that’s fitting.
Pirate Bay traffic has doubled post-ISP blocks
Despite court-ordered blocks and its founders being jailed, The Pirate Bay's traffic has doubled since 2011.
The world's most infamous peer-to-peer file-sharing site shared these stats with TorrentFreak, adding that nine percent of all visitors use a proxy to access the site and that the US continues to be its biggest source of traffic (last year it was revealed that the US was responsible for a third of traffic to the site). That's despite the majority of copyright complaints about content shared on The Pirate Bay coming from US record labels and Hollywood studios.
An increasing number of countries around the globe have forced Internet service providers to block access to the site. In 2011, Advocate General Cruz Villalón of the European Court of Justice said that forcing an ISP to filter Web traffic would infringe upon its fundamental rights. The installation of such a filter would be "a restriction on the right to respect for the privacy of communications and the right to protection of personal data." In addition, "such a system would restrict freedom of information, which is also protected by the Charter of Fundamental Rights." Essentially, forcing ISPs to block Web content at their own expense and indefinitely breaches the rights of citizens and companies. This view was upheld by the court.
A guide to winning the customer service cancellation phone battle
AOL VP Ryan Block’s cancellation nightmare phone call with Comcast’s customer service went insanely viral this week, drawing a contrite canned response from Comcast’s public relations group and likely resulting in the firing of the overly zealous customer service employee who badgered Block for 10 solid minutes about his request to terminate service. Unfortunately, Block’s experience is far from unique. Putting aside the Comcast representative’s hilariously insensitive tenacity ("This phone call is a really, actually amazing example of why I don't want to stay with Comcast," Block said at one point), terrible phone-based customer service is standard operating procedure for most companies.
There is some delicious irony in the fact that Block is an AOL employee, since AOL’s ludicrous and borderline-abusive customer retention tactics are the stuff of legends. However, in this instance, Block's affiliation with AOL was immaterial: he was just another customer being forced to fight a war to cancel his Internet service.
Why do companies like Comcast and AOL make it so hard to pull the plug? Do customer service representatives get some kind of incentive for keeping customers from canceling? Is there anything you can do to power through their garbage and get what you want without having to verbally fight it out, Block-style?
Ars editor learns feds have his old IP addresses, full credit card numbers
In May 2014, I reported on my efforts to learn what the feds know about me whenever I enter and exit the country. In particular, I wanted my Passenger Name Records (PNR), data created by airlines, hotels, and cruise ships whenever travel is booked.
But instead of providing what I had requested, the United States Customs and Border Protection (CBP) turned over only basic information about my travel going back to 1994. So I appealed—and without explanation, the government recently turned over the actual PNRs I had requested the first time.
The 76 new pages of data, covering 2005 through 2013, show that CBP retains massive amounts of data on us when we travel internationally. My own PNRs include not just every mailing address, e-mail, and phone number I've ever used; some of them also contain old IP addresses and full credit card numbers.
Citing lack of interest, Lenovo pulls 8-inch Windows tablets from the US [Updated]
Lenovo announced today that it will no longer sell its 8-inch Windows tablets in the US, less than a year after introducing both the lower-end Miix 2 8 and the high-end ThinkPad Tablet 8. IT World reports that Lenovo is stopping sales because of a general lack of interest but that the ThinkPad 8 in particular will continue to be sold in international markets where it has managed to gain more traction. The company will also continue to sell 10-inch Windows tablets, which it claims are performing better, as well as its 7- and 8-inch Android tablets.
This isn't great news for Microsoft, which came to the small-screen tablet market late but has devoted considerable energy to making Windows work on those screens. When first released, Windows 8 required 1366×768 screens and didn't work well in portrait mode, making it poorly suited for smaller tablets with lower-resolution screens that were easy to hold upright. Microsoft later loosened those resolution restrictions, and Windows 8.1 tweaked the OS to work better in portrait mode. After a rocky start, high-quality small-screened Windows tablets began to hit the market in late 2013.
Lenovo is only the third largest PC manufacturer in the US (though it's number one worldwide), so its exit from the 8-inch Windows tablet market isn't as worrisome as it would be if Dell were to pack up and leave. Still, it's probably a bad sign for the other OEMs, who are all selling similar Windows tablets running similar hardware in a market dominated by the iPad Mini, the Kindle Fire, and any number of Android tablets from the likes of Samsung and Asus.
Google tests new Chrome OS UI that’s more Android than Windows
Google-watchers may have already heard about "Project Athena," a Chrome OS-related experiment of Google's that has appeared in the Chromium source code a few times in the past. Today we got our first official look at the new interface via Francois Beaufort, a Chrome enthusiast who was hired by Google last year after leaking several high-profile Chrome features.
The new UI, pictured above, displays a cascading stack of cards, each of which appears to represent an individual browser tab. At the bottom of the screen, an app drawer full of dummy icons and a Search field will allow the user to jump quickly into other applications. The battery indicator and network status are in the upper-right corner of the screen. Putting aside the rough, obviously-a-work-in-progress aesthetic of the interface, it bears a strong resemblance to the new multitasking UI in the Android L release, which shows apps and individual browser tabs as a similar stack of cards.
The Android L developer preview's multitasking UI on a 2013 Nexus 7 (Andrew Cunningham).
The current Chrome user interface, codenamed "Aura," hews much closer to Windows 7 than to Android, and it works better with a traditional keyboard and mouse combo than with fingers. The Athena UI looks like a more touch-friendly take on Chrome OS—touchscreens are gradually beginning to show up on Chromebooks like the Pixel and one of Acer's C720 models, but as we pointed out in our Chromebook Pixel review, the operating system isn't particularly touch-friendly. It's possible that Google is looking to give touchscreen Chromebooks a boost by developing an interface for them that's easier to use.
Russia caught editing Wikipedia entry about downed Malaysian airliner
The world is still reeling from the shock of the deaths of 298 people on Malaysian flight MH17, which was shot down in Ukraine yesterday, but the battle to write and rewrite history has already begun online.
Thanks to a Twitter bot that monitors Wikipedia edits made from Russian government IP addresses, someone from the All-Russia State Television and Radio Broadcasting Company (VGTRK) has been caught editing a Russian-language Wikipedia reference to MH17 in an article on aviation disasters.
The Wikipedia article "List of civil aviation accidents" was edited by VGTRK http://t.co/peZ60q07Fj
— Госправки (@RuGovEdits) July 18, 2014
“SOHOpelessly BROKEN” hacking contest aims to test home router security
Over the past few years, consumer-grade routers have emerged as a key security threat. Whether manufactured by Asus, Linksys, D-Link, Micronet, Tenda, TP-Link, or others, small office/home office (SOHO) routers have suffered a variety of real-world attacks that in some cases have allowed hackers to remotely commandeer hundreds of thousands of devices.
Now, security advocates are sponsoring "SOHOpelessly BROKEN," a no-holds-barred router hacking competition at next month's Defcon hacker conference in Las Vegas. The contest will challenge attendees to unleash novel exploits on 10 off-the-shelf SOHO routers running recent firmware versions.
"The objective in this contest is to demonstrate previously unidentified vulnerabilities in off-the-shelf consumer wireless routers," organizers said. "Contestants must identify weaknesses and exploit the routers to gain control. Pop as many as you can over the weekend to win. Contest will take place at Defcon 22, August 7-12, 2014 in the Wireless Village contest area."
Photoshopping of adult porn nets man 10-year child-porn conviction
A federal appeals court upheld Thursday the child pornography conviction and accompanying 10-year prison term handed to a Nebraska man who superimposed the image of an underaged girl's face onto a picture of two adults having sex.
The 8th US Circuit Court of Appeals rejected (PDF) claims from 28-year-old Jeffrey Anderson that his actions were protected by the First Amendment. Anderson sent the doctored image to his 11-year-old half-sister via Facebook, resulting in the charge of distributing child pornography. Anderson had superimposed the half-sister's face onto the photo, the court said.
Among other defenses, Anderson argued that because no minor engaged in sex, he should not have been charged.
How US satellites pinpointed source of missile that shot down airliner
President Barack Obama today said without hesitation that the missile that shot down Malaysia Airlines Flight 17 was launched from within territory controlled by pro-Russian separatists in Eastern Ukraine. While he didn’t go into the sources the US used to pinpoint the launch, early reports say that US intelligence had identified the infrared signature of a missile launch just before contact with the airliner was lost.
That information likely came from one of a network of satellites operated by the Air Force and the National Reconnaissance Office (NRO), the US intelligence community’s spy satellite operations agency. Using highly sensitive infrared sensors, these satellites can detect a variety of ground-based missile systems, as well as some aircraft, by their infrared signature. They also carry sensitive electronic intelligence sensors that can detect radar signals associated with anti-aircraft missile systems like the Buk launcher that has been widely pointed to as the culprit in the MH17 downing.
The latest of these satellite systems is the Space Based Infrared System (SBIRS), the successor to the long-running and euphemistically named “Defense Support Program” (DSP) satellite system. The DSP started in the late 1960s and continued in various forms through the last decade. A portion of the DSP constellation of satellites continues to function today and has been considered for use in tracking forest fires and potentially forecasting impending volcanic eruptions.