Anandtech

This channel features the latest computer hardware related articles.

Intel Announces XMM6255: The World's Smallest Standalone 3G Modem

Tue, 2014-08-26 09:44

Today Intel announced their XMM6255 modem which is aimed at providing 3G network connectivity to the many future connected devices that will make up the Internet of Things (IoT). At approximately 300mm^2 in size, Intel is claiming that XMM6255 is the world's smallest standalone 3G modem. Their hope is that its small size will allow it to be integrated into small internet connected devices such as wearables, small appliances, and security devices. 

The XMM6255 pairs Intel's X-GOLD 625 baseband with its SMARTI UE2p transceiver, the first transceiver to integrate the transmit and receive functionality and the 3G power amplifier on a single die with its own power management. Intel claims this protects the modem from damage caused by excessive heat, voltage spikes, or overcurrent, which makes it a good choice for IoT applications like safety monitors and sensors where a hardware failure could present a safety risk. Integrating the power amplifier and transceiver on a single chip also reduces the bill of materials and power consumption, which allows the XMM6255 to be used in low-cost and low-power devices.

XMM6255 typically comes in a dual-band HSPA configuration with 7.2Mbps downstream and 5.76Mbps upstream speeds. Up to quad-band 2G support can be optionally added, but requires an external power amplifier that Intel is billing as low-cost. A-GPS is also supported but is again optional.

The XMM6255 represents another move by Intel toward becoming a big part of the IoT market. Intel expects that the IoT market will be made up of billions of devices in the coming years, and getting a head start is a good way to make sure that many of them have Intel inside.


Google Updates Chrome To Version 37 With DirectWrite Support

Tue, 2014-08-26 08:00

Today Google updated the stable version of their Chrome browser to version 37.0.2062.94 on Windows, OS X, and Linux. This is a highly anticipated release for Windows users specifically, as it marks the move from Microsoft's Graphics Device Interface (GDI) rendering method to Microsoft's DirectWrite text rendering API. Using GDI resulted in significantly worse text rendering in Chrome compared to other browsers like Internet Explorer and Mozilla Firefox. The issue was also non-existent in Google's versions of Chrome for OS X and Linux, which use each operating system's native font renderer. Switching to DirectWrite has been requested for years by users on Windows, and Google has stated that it required a significant rewrite of their font rendering engine, which is why it has taken so long.

The issue was more pronounced in some areas than others. Below is a screenshot of a section of my own website, which has always had significant issues with font rendering in Chrome. Amusingly, the fonts used are sourced from Google Fonts.

Chrome 36 on the left, Chrome 37 on the right

The aliasing in the font rendered on Chrome 36 is quite apparent. Some letters even have entire areas that appear to be chopped off, like the top and bottom of the letter 'O'. On Chrome 37 the rendering is significantly improved. There's far less aliasing on fonts, no missing chunks from letters, and even the double down arrow glyph inside the circle looks much sharper.

While Google didn't detail it in their changelog, Chrome 37 also includes the new password manager interface that debuted in the Chrome 37 beta. When on the login page for a website, a key icon will appear in the search bar with a list of all saved passwords for that website. This also replaces the bar that would appear at the top asking to save a username and password after entering one for the first time.

Google's changelog also states that Chrome 37 has a number of new APIs for apps and extensions, as well as many under the hood changes for improved performance and stability. There are also 50 security fixes, with the most interesting or significant fixes detailed in the source below.


AMD Set To Announce New FX Processors

Tue, 2014-08-26 05:00

During the 30 Years of Graphics & Gaming Innovation celebration over the weekend, AMD took the opportunity to announce several new models of FX processors that will be coming to market soon. The new models announced are the FX-8320E, the FX-8370, and the FX-8370E. The E suffix denotes a lower TDP than the standard model.

As this was not a true product launch, details were light, but based on previous FX processor releases we can make some assumptions. The turbo clock speed was announced as 4.0 GHz for the FX-8320E, the same as the older FX-8320; since the FX-8320 has a 3.5 GHz base clock, we can assume the FX-8320E's base clock will also be 3.5 GHz. The FX-8370 and FX-8370E are new to the product lineup, with an announced boost speed of 4.3 GHz for both. No base clock was revealed for either processor, but the previously announced FX-8350 comes in at a base of 4.0 GHz, so the higher model numbers should sit slightly above that.

AMD FX CPU Comparison

                        FX-8320        FX-8320E       FX-8350        FX-8370        FX-8370E       FX-9590
Release Date            October 2012   August 2014    October 2012   August 2014    August 2014    June 2013
TDP                     125 W          95 W           125 W          125 W          95 W           220 W
Base Frequency (MHz)    3500           3500 (est)     4000           4000+ (est)    4000+ (est)    4700
Turbo Frequency (MHz)   4000           4000           4200           4300           4300           5000

All models: 4 modules, 256 KB L1 cache (code), 128 KB L1 cache (data), 8 MB L2 cache, 8 MB L3 cache, Vishera core (Piledriver microarchitecture), socket AM3+, DDR3-1866 memory support.

The E designation is slightly interesting. As a tradeoff for the lower 95 watt TDP versus the 125 watts of the standard CPUs, only the amount of time spent at boost clocks is affected. Base and boost clocks are exactly the same as the non-E chips.

The final announcements on the FX side of the presentation concerned pricing. The FX-9590 will see a “significant” price cut this month, and AMD will now offer CPUs in a six-pack bundle that lowers the per-chip price when buying in relatively small volume. Whether the FX-9590 price cut affects the rest of the lineup is unclear, but we should know more soon.

Source:
AMD 30 Live


OCZ ARC 100 (240GB) SSD Review

Tue, 2014-08-26 04:00

OCZ launched the Barefoot 3 platform and the Vector SSDs in late 2012, and with their new direction OCZ has been trying to change their image to become a premium manufacturer of high performance SSDs rather than a budget brand. The Vector remained as the only Barefoot 3 based product for months until OCZ introduced the Vertex 450, a not exactly cheap but more mainstream version of the Vector with a shorter three-year warranty. Now, almost two years after the introduction of the Barefoot 3, OCZ is back in the mainstream SSD game with the ARC 100. Read on to find out what has changed and how the ARC 100 performs.


LG Reveals Details About the G3 Stylus Debuting at IFA

Mon, 2014-08-25 19:45

The IFA trade show is scheduled to take place a couple of weeks from now, but LG is already giving details about their upcoming LG G3 Stylus ahead of the event. The G3 Stylus joins the original G3, the G3 Beat, and the G Vista in LG's lineup of similarly named devices for 2014. LG is only releasing limited information, but we've put together all the specifications they have detailed in a chart below. LG will be displaying the device at IFA, where fuller specifications and pricing should be revealed.

LG G3 Stylus

SoC                  Unknown 1.3GHz Quad Core
Memory and Storage   8GB NAND + MicroSD, 1GB RAM
Display              5.5” 960x540 IPS LCD
Dimensions           149.3 x 75.9 x 10.2mm, 163g
Camera               13 MP Rear Facing, 1.2MP Front Facing
Battery              Removable 3000 mAh (11.4Whr)
Network              3G
Operating System     Android 4.4.2 KitKat

As you can see, it's not the most comprehensive specification sheet. Based on the display resolution and the amount of RAM and NAND, this looks to be a fairly mid-range device. The display resolution may be problematic at that screen size, as 960x540 at 5.5" amounts to a pixel density of only 200ppi. A smaller display may have been preferable for sharpness, but LG is likely trying to strike a balance that maintains stylus usability.
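That 200ppi figure falls straight out of the resolution and panel size; here is a minimal Python sketch of the arithmetic (the numbers are simply the ones quoted above):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal length in pixels divided by diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# LG G3 Stylus panel: 960x540 at 5.5 inches
print(round(pixels_per_inch(960, 540, 5.5)))  # -> 200 ppi
```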

The size and mass of the device are both slightly greater than those of the LG G3. Despite having the same display size and battery capacity, the price point and the addition of the stylus necessitate changes to the device's chassis.

The G3 Stylus will be launched in Brazil in September, with other countries in Africa, Asia, and the Middle East to follow afterward. Pricing is currently unknown, and LG gave no details about a possible US launch, but there will definitely be more information about the G3 Stylus at IFA.


Zotac ZBOX EI750 Plus: A Feature-Rich Iris Pro mini-PC

Mon, 2014-08-25 04:00

Intel's Crystal Well parts (-R series) with integrated eDRAM have arguably been the most interesting products in the Haswell line-up. In the early stages, only Apple had access to these parts. However, since the beginning of 2014, we have seen other vendors roll out products based on the -R series processors. We have already covered the BRIX Pro (BXi7-4770R) in great detail. Today, we will take a look at what Zotac has conjured up with Crystal Well in the ZBOX EI750 Plus.


Exclusive: ASRock’s X99 OC Formula Motherboard in Pictures

Mon, 2014-08-25 03:20

To add another element to the current whirlwind of X99 motherboard shots being released, ASRock has now lifted the lid on its high end overclocking motherboard, the X99 OC Formula. Like other OC Formula motherboards, this model is designed by ASRock’s in-house overclocker and former world #1, Nick Shih. With its yellow and black livery, ASRock is keen to promote the board's 12 power phases capable of supporting up to 1300W. While regular users will come nowhere near 1300W, extreme overclockers have (with previous platforms) hit 500W-700W while using liquid nitrogen to push the processors. Given ASRock’s recent push for overclocking records, it makes sense to design a product that can compete. Alongside this feature, ASRock still wants to have a motherboard that regular end-users can run 24/7 with high-end overclocks.

Alongside the new socket, the motherboard will offer 8 DIMM slots and 10 SATA 6 Gbps ports. Instead of SATA Express it looks like there are two M.2 slots, one PCIe 2.0 x4 (sharing lanes with the black slot) and one PCIe 3.0 x4, both supporting drives up to 110mm long. Next to the Purity Sound 2 is a half-height mini-PCIe slot, suggesting that there may be a WiFi edition or that users can add their own WiFi module. The PCIe slots should allow 4-way CrossFire and SLI, with a central PCIe slot (PCIe 2.0 x4?) that will allow an additional PCIe card in two-way setups.

For overclockers, the X99 OC Formula will have the superhydrophobic Conformal Coating similar to previous models that protects the components on the motherboard from moisture. On the top right of the motherboard are quick frequency change buttons along with voltage check points, a PCIe disable switch, an LN2 mode switch and a slow mode switch. On the rear panel is a ClearCMOS switch, and additional PCIe power is provided by a molex connector.

For regular home users, there are six fan headers, a Thunderbolt header (requires a Thunderbolt PCIe card), a COM header, two USB 2.0 headers, a TPM header and two USB 3.0 headers. There are 10 USB 3.0 ports total including the two headers, and the rear IO shows dual network ports. The connector on the right hand side under the SATA ports is for ASRock's relatively new SATA power feature that makes use of hot-plug functionality to hide drives not in use. I would imagine that the ASRock BIOS and Software are also receiving iterative updates, and the in-the-box contents for OC Formula models in the past have always been interesting. With X99 being an expensive platform by comparison to Z97, I hope that there can be something in there to tantalize everyone.

With all the X99 press shots floating around the media, along with DDR4 pricing going live for pre-orders, things are getting more and more exciting. There is no word yet on whether the ASRock X99 OC Formula will be available at launch, or on its release pricing, but I am sure we will know in due course.


A Month with the iPhone 5s: Impressions from an Android User

Sun, 2014-08-24 04:00

I must confess that the last time I used an iPhone was three or four years ago. While I’ve followed the hardware changes from generation to generation, I’ve never really been able to write about the iPhone or iOS in detail. While objective data is great to work with, a great deal of evaluation relies on subjective experience. To fix this gap in knowledge, I received an iPhone 5s. After a month, I’ve really come to have a much more nuanced view of how Android and iOS compare, along with how Apple’s iPhone compares to the rest of the smartphone market. To find out how it compares, read on for the full article.


AMD Announces Radeon R9 285, Shipping September 2nd

Sat, 2014-08-23 16:00

During their 30 years of graphics celebration, today AMD announced a forthcoming addition to the Radeon R9 200 graphics card lineup. Launching on September 2nd will be the company’s new midrange enthusiast card, the Radeon R9 285.

The R9 285 will take up an interesting position in AMD’s lineup, being something of a refresh of a refresh that spans all the way back to Tahiti (Radeon 7970). Spec wise it ends up being extremely close on paper to the R9 280 (née 7950B) and it’s telling that the R9 280 is no longer being advertised by AMD as a current member of their R9 lineup. However with a newer GPU under the hood the R9 285 stands to eclipse the 280 in features, and with sufficient efficiency gains we hope to see it eclipse 280 in performance too.

AMD GPU Specification Comparison

                        Radeon R9 290   Radeon R9 280X   Radeon R9 285   Radeon R9 280
Stream Processors       2560            2048             1792            1792
Texture Units           160             128              112             112
ROPs                    64              32               32              32
Core Clock              662MHz          850MHz           ?               827MHz
Boost Clock             947MHz          1000MHz          918MHz          933MHz
Memory Clock            5GHz GDDR5      6GHz GDDR5       5.5GHz GDDR5    5GHz GDDR5
Memory Bus Width        512-bit         384-bit          256-bit         384-bit
VRAM                    4GB             3GB              2GB             3GB
FP64                    1/8             1/4              ?               1/4
TrueAudio               Y               N                Y               N
Typical Board Power     250W            250W             190W            250W
Manufacturing Process   TSMC 28nm       TSMC 28nm        TSMC 28nm?      TSMC 28nm
Architecture            GCN 1.1         GCN 1.0          GCN 1.1?        GCN 1.0
GPU                     Hawaii          Tahiti           Tonga?          Tahiti
Launch Date             11/05/13        10/11/13         09/02/14        03/04/14
Launch Price            $399            $299             $249            $279

Looking at the raw specifications, the R9 285 is a 1792 stream processor Graphics Core Next product. Paired with these SPs are 112 texture units (in the standard 16:1 ratio), and on the backend of the rendering pipeline are 32 ROPs. As is unfortunately consistent for AMD, they are not disclosing the product’s base clockspeed, but they have published the boost clockspeed of 918MHz.

Meanwhile feeding R9 285’s GPU falls to the card’s 2GB of GDDR5. This is on a 256-bit bus, and is clocked at 5.5GHz for a total memory bandwidth of 176GB/sec.
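The 176GB/sec figure follows directly from the bus width and the per-pin data rate; a quick Python sketch of the arithmetic, using only the numbers from the table above:

```python
def peak_memory_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Peak bandwidth (GB/s) = per-pin data rate (Gb/s) * bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_memory_bandwidth_gb_s(5.5, 256))  # R9 285: 176.0 GB/s
print(peak_memory_bandwidth_gb_s(5.0, 384))  # R9 280: 240.0 GB/s
```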

The R9 285 will have a rated typical board power (AMD’s analogue for TDP) of 190W. Notably this is only 10W higher than the Pitcairn based R9 270X despite the 40% larger SP count, or alternatively is 60W lower than the Tahiti based R9 280. While we don’t have a ton of details on the GPU at this time, taking into consideration the R9 270X comparison in particular, it’s clear that AMD has done some work on efficiency to squeeze out more compared to the GCN 1.0 based Pitcairn and Tahiti parts that R9 285 is going to be placed between.

The GPU itself is based on a newer version of AMD’s architecture, at least GCN 1.1 based on the presence of TrueAudio support. AMD has not formally announced the underlying GPU at this time, but given the timing and the specifications we believe it’s based on the new Tonga GPU, which was first announced for the FirePro W7100 earlier this month. In any case we don’t have much in the way of details on Tonga at this time, though we expect AMD to flesh out those details ahead of R9 285’s September 2nd launch. The biggest question right now – besides whether this is a “full” Tonga configuration – is whether Tonga is based on GCN 1.1 or something newer.

Based on some prior AMD statements and information gleaned from AMD’s CodeXL tool, there is reason to suspect (but not confirm) that this is a newer generation design. AMD for their part has done something very similar in the past, launching GCN 1.1 back on the Radeon HD 7790, but essentially hiding access to and details of GCN 1.1’s feature set until the launch of the Hawaii based R9 290X later in the year. Whether AMD is doing this again remains to be seen, but it is something we have seen them do before and don’t doubt they could do again. Though whether they will confirm it is another matter, as the company does not like to publicly differentiate between GCN revisions, which is why even the GCN 1.1 name is unofficial.


Sapphire's Radeon R9 285 Dual-X

Working for the moment off of the assumption that R9 285 is Tonga based and that it’s a GCN 1.1 part, we expect that performance should be a wash with the R9 280 while the R9 285 has an advantage on features. GCN 1.1 does have some mild performance optimizations to it that will give the R9 285 an edge, though it remains to be seen what the impact will be of the narrower memory bus. The fact that the Tahiti based R9 280X remains in AMD’s lineup indicates that if nothing else, it won’t match the performance of a full Tahiti configuration. Otherwise when it comes to features, being GCN 1.1 based means that the R9 285 will bring with it support for True Audio, support for bridgeless CrossFire thanks to the XDMA engine, GCN 1.1’s superior boost mechanism, and full support for AMD’s upcoming FreeSync implementation of DisplayPort Adaptive Sync (GCN 1.0 GPUs are not fully adaptive).

As for AMD, this offers the chance to refresh some of their oldest GCN 1.0 products with a more capable GPU while also cutting costs. While we don’t have die size numbers for Tonga, it is reasonable to expect that it is smaller due to the narrower memory bus along with the die size optimizations that we saw go into Hawaii last year, which means it should be cheaper to manufacture than Tahiti. Board costs also come down, again thanks to the narrower memory bus, while the lower TDP allows for simpler power delivery circuitry.

AMD will be positioning the R9 285 to compete with NVIDIA’s GeForce GTX 760, the company’s second-tier GK104 part. The GTX 760 performs roughly the same as the R9 280, so AMD need only not regress to maintain their competitiveness, though any performance lead they can squeeze out will be all for the better. The GTX 760 is frequently found at $239 – a hair under the R9 285’s launch price – so NVIDIA will hold a very slight edge on price assuming they don’t adjust prices further (the GTX 760 launched at $249 almost 14 months ago).

The R9 285 for its part will be launching at $249 on September 2nd. This will be a hard launch, and with AMD’s partners already posting product pages for their designs we suspect this will be a pure virtual (no reference card) launch. AMD also tells us that there will be both 2GB and 4GB cards; we’re going to have to see what the price premium is, as the suitability of 2GB enthusiast cards has been challenged by the presence of so much RAM on the current-generation consoles, which will have a knock-on effect on console-to-PC ports.

Though with the launch of the R9 285 and the impending discontinuation of the R9 280, buyers looking at picking up an R9 285 in the near term should also be on the lookout for the R9 280 on clearance sale. It’s already regularly found for $220 and lower, making it $30 cheaper than the R9 285 while offering 3GB of VRAM to the R9 285’s 2GB. This will make the R9 280 a strong contender, at least until supplies run out.

Fall 2014 GPU Pricing Comparison

AMD               Price    NVIDIA
Radeon R9 290     $400
                  $310     GeForce GTX 770
Radeon R9 280X    $280
Radeon R9 285     $250
                  $240     GeForce GTX 760
Radeon R9 280     $220
Radeon R9 270X    $180
                  $160     GeForce GTX 660

Finally, coinciding with the launch of the R9 285 will be a refresh of AMD’s Never Settle bundles. The details are still murky at this time, but AMD is launching what they call the Never Settle Space Edition bundle, which will see Alien Isolation and Star Citizen offered as part of a bundle for all R9 series cards. What remains unclear is whether this replaces the existing Never Settle Forever bundle, or whether these games are being added to the Never Settle Forever lineup in some fashion. AMD has said that current Silver and Gold voucher holders will be able to get the Space Edition bundle with their vouchers, which lends credence to the idea that these are new games in the NSF program rather than a different program entirely.

Both Alien Isolation and Star Citizen are still-in-development games. Alien Isolation is a first person shooter and is expected in October of this year. Meanwhile the space sim Star Citizen does not yet have a release date, and as best as we can tell won’t actually be finished until late 2015 at the earliest. In which case the inclusion here is more about access to the ongoing beta, which is the first time we’ve seen beta access used as part of a bundle in this fashion.


Habey Releases MITX-6771, J1900 Thin Mini-ITX with Long Life Cycle Support

Sat, 2014-08-23 02:41

Habey’s main focus in the PC market is industrial computers, with Ganesh having reviewed the BIS-6590 and BIS-6922 fanless systems last year. Industrial-oriented components by their nature need to be designed for 24/7 operation, sometimes in niche environments. To that end Habey has released the MITX-6771, a Bay Trail based thin mini-ITX motherboard equipped with the Celeron J1900 SoC, with the main purpose of providing a drop-in upgrade path for Intel DN2800MT users in the embedded sector. This means that the connectivity/IO of the new motherboard is designed to match the DN2800MT, with Habey adding a couple of extra features.

Habey MITX-6771 left, Intel DN2800MT right

Using the 10W J1900 (quad core, 2.42 GHz) and a sufficient heatsink allows Habey to continue its fanless range, but due to the thin mini-ITX standard, motherboard design starts to get creative. The motherboard supports two DDR3L SODIMM modules for up to 8 GB of DRAM, and storage comes via an mSATA slot and a SATA 3 Gbps port. The mini-PCIe slot supports SIM card adaptors (such as this one suggested by Habey) to be used with the onboard SIM reader, with a further PCIe 2.0 x1 slot for non-thin mini-ITX environments. Network connectivity is either via the SIM card or the Realtek NIC.

The board has VGA and HDMI video outputs on the rear IO, with an LVDS header next to the CPU heatsink. The four USB 2.0 ports on the rear are complemented by a USB 2.0 header and a USB 3.0 header. There are also an LPT header and two COM headers, and audio is provided by a Realtek ALC662. OEM options include ALC892 audio, an additional SATA port or a second gigabit Ethernet port (in exchange for two of the rear USB ports).

The main aims for this sort of product are digital signage, industrial automation, medical, connected appliance and point-of-sale type systems, although Habey is keen to point out that media streaming is also a focus. Despite the limited one-year warranty, Habey is offering a Long Life Cycle Support package, although we are currently enquiring as to what this entails.

Users might note the lack of an ATX power connector on board, namely because power is either derived from the DC-IN jack on the rear IO or a 2-pin ATX connector just behind the DC-In. Inside the box is a SATA to two-pin ATX cable.

Habey retails some of their products on Newegg rather than selling direct, and the MITX-6771 comes in at $150. This is nearly double what comparable consumer motherboards cost, although there are no consumer motherboards that offer SIM card (and thus 3G/4G) functionality or that act as a direct drop-in for the DN2800MT, rear IO and all.

Source: Habey



Apple Begins iPhone 5 Battery Replacement Program for Certain Defective Devices

Fri, 2014-08-22 17:27

Today Apple has started a replacement program for certain iPhone 5 devices experiencing significantly reduced battery life. The company is stating that the affected devices were sold between the months of September 2012 and January 2013. Users with devices purchased within that timeframe who are experiencing issues are advised to check their serial number with Apple's new support page to see if they are eligible for a free battery replacement. Apple is also offering refunds to users with affected devices who paid for a battery replacement prior to the service program being launched.

The replacement process for affected users will begin on August 22 in the United States and China, and on August 29 in the rest of the world. Apple recommends that users backup their iPhone to iTunes or iCloud and then wipe all user data prior to having their battery serviced. More information, as well as the service to check your device's serial number, can be found in the source link below.


AMD Celebrates 30 Years of Gaming and Graphics Innovation

Fri, 2014-08-22 14:15

AMD sent us word that tomorrow they will be hosting a Livecast celebrating 30 years of graphics and gaming innovation. Thirty years is a long time, and certainly we have a lot of readers that weren't even around when AMD had its beginnings. Except we're not really talking about the foundation of AMD; they had their start in 1969. It appears this is more a celebration of their graphics division, formerly ATI, which was founded in… August, 1985.

AMD is apparently looking at a year-long celebration of the company formerly known as ATI, Radeon graphics, and gaming. While they're being a bit coy about the exact contents of the Livecast, we do know that there will be three game developers participating along with a live overclocking event. If we're lucky, maybe AMD even has a secret product announcement, but if so they haven't provided any details. And while we can now look forward to a year of celebrating AMD graphics and most likely a final end-of-the-year party come next August, why not start out with a brief look at where AMD/ATI started and where they are now?


Source: Wikimedia Evan-Amos

I'm old enough that I may have owned one of ATI's first products, as I began my addiction (er, career) as a technology enthusiast way back in the hoary days of the Commodore 64. While the C64 initially started shipping a few years earlier, Commodore was one of ATI's first customers and was largely responsible for an infusion of money that kept ATI going in the early days.

By 1987, ATI had begun moving into the world of PC graphics with their "Wonder" brand of chips and cards, starting with an 8-bit PC/XT-based board supporting monochrome or 4-color CGA. Over the next several years ATI would move to EGA (640x350 with an astounding 16 colors) and VGA (16-bit ISA and 256 colors). If you wanted a state-of-the-art video card like the ATI VGA Wonder in 1988, you were looking at $500 for the 256K model or $700 for the 512K edition. But all of this is really old stuff; where things start to become interesting is in the early 90s with the launch and growing popularity of Windows 3.0.


Source: Wikimedia Misterzeropage

The Mach 8 was ATI's first true graphics processor. It was able to offload 2D graphics functions from the CPU and render them independently, and at the time it was one of the few video cards that could do this. Sporting 512K-1MB of memory, it was still an ISA card (or was available in MCA if you happened to own an IBM PS/2).

Two years later the Mach 32 came out, the first 32-bit capable chip with support for ISA, EISA, MCA, VLB, and PCI slots. Mach 32 shipped with either 1MB or 2MB DRAM/VRAM and added high-color (15-bit/16-bit) and later True Color (the 24-bit color that we're still mostly using today) to the mix, along with a 64-bit memory interface. And two years after came the Mach 64, which brought support for up to 8MB of DRAM, VRAM, or the new SGRAM. Later variants of the Mach 64 also started including 3D capabilities (and were rebranded as Rage, see below), and we're still not even in the "modern" era of graphics chips yet!


Rage Fury MAXX

Next in line was the Rage series, ATI's first line of graphics chips built with 3D acceleration as one of the key features. We could talk about competing products from 3dfx, NVIDIA, S3, and others here, but let's just stick with ATI. The Rage line appropriately began with the 3D Rage I in 1996, which was mostly an enhancement of the Mach64 design with 3D support added on. The 3D Rage II was another Mach64 derived design, with up to twice the performance of the 3D Rage. The Rage II also found its way into some Macintosh systems, and while it was initially a PCI part, the Rage IIc later added AGP support.

That part was followed by the Rage Pro, which is when graphics chips first started handling geometry processing (circa 1998 with DirectX 6.0 if you're keeping track), and you could get the Pro cards with up to 16MB of memory. There were also low-cost variations of the Rage Pro in the Rage LT, LT Pro, and XL models, and the Rage XL may hold the distinction of being one of the longest-used graphics chips in history, as I know even in 2005 or thereabouts there were many servers still shipping with that chip on the motherboard providing graphics output. In 1998 ATI released the Rage 128 with AGP 2X support (the enhanced Rage 128 Pro added AGP 4X support among other things a year later), and up to 32MB RAM. The Rage 128 Ultra even supported 64MB in its top configuration, but that wasn't the crowning achievement of the Rage series. No, the biggest achievement for Rage was with the Rage Fury MAXX, ATI's first GPU to support alternate frame rendering to provide up to twice the performance.


Radeon 9700 Pro

And last but not least we finally enter the modern era of ATI/AMD video cards with the Radeon line. Things start to get pretty dense in terms of releases at this point, so we'll mostly gloss over things and just hit the highlights. The first iteration Radeon brought support for DirectX 7 features, the biggest being hardware support for transform and lighting calculations – basically a way of offloading additional geometry calculations. The second generation Radeon chips (sold under the Radeon 8000 and lower number 9000 models) added DirectX 8 support, the first appearance of programmable pixel and vertex shaders in GPUs.

Perhaps the best of the Radeon breed was the R300 line, with the Radeon 9600/9700/9800 series cards delivering DirectX 9.0 support and, more importantly, holding onto a clear performance lead over their chief competitor NVIDIA for nearly two solid years! It's a bit crazy to realize that we're now into our tenth (or eleventh, depending on how you want to count) generation of Radeon GPUs, and while the overall performance crown is often hotly debated, one thing is clear: games and graphics hardware wouldn't be where they are today without the input of AMD's graphics division!

That's a great way to finish things off, and tomorrow I suspect AMD will have much more to say on the subject of the changing landscape of computer graphics over the past 30 years. It's been a wild ride, and when I think back to the early days of computer games and then look at modern titles, it's pretty amazing. It's also interesting to note that people often complain about spending $200 or $300 on a reasonably high performance GPU, when the reality is that the top performing video cards have often cost several hundred dollars – I remember buying an early 1MB True Color card for $200 back in the day, and that was nowhere near the top of the line offering. The amount of compute performance we can now buy for under $500 is awesome, and I can only imagine what the advances of another 30 years will bring us. So, congratulations to AMD on 30 years of graphics innovation, and here's to 30 more years!


Lenovo Announces Trio Of Business PCs

Fri, 2014-08-22 12:25

Lenovo has added three ThinkCentre desktop PCs to its stable of business devices this week. The three devices span the range of desktops, with the ThinkCentre E63z being an All-In-One, the ThinkCentre M53 being classified as a “tiny” desktop, and the ThinkCentre M79 offering the more traditional Small Form Factor (SFF) and Mini Tower models.

 

ThinkCentre M79 Mini Tower

The typical office PC is likely a Mini Tower or SFF desktop, and the ThinkCentre M79 is an AMD A-Series APU equipped desktop offering optional Solid State Drive (SSD) or Solid State Hybrid Drive (SSHD) storage in the SFF or Mini Tower configurations. Many businesses have moved to dual displays for their desktop workers, and the M79 supports that out of the box, but it also offers an optional second DisplayPort connector for those that want to move up to three displays. As a business PC, it also employs the Trusted Platform Module (TPM) version 1.2 for enhanced security features such as BitLocker. It also includes version 3.0 of the Lenovo Intelligent Cooling Engine, which controls the desktop's acoustics and temperatures. Also of benefit to the business crowd, the M79 has a 15-month guaranteed hardware cycle to make managing system images easier. The ThinkCentre M79 is available now starting at $449.

ThinkCentre M73 photo which shares the form factor with the M53

The micro desktops from Lenovo have been around for a while, and the latest model to join the group is the ThinkCentre M53. Though larger than the NUC, the M53 is still extremely compact at 7.2” x 7.16” x 2.5” and can be oriented vertically or horizontally, or mounted on the back of a monitor via the VESA mounting holes on the underside of the device. The M53 shares accessories and power connectors with the other “tiny” computers from Lenovo, which is always appreciated. The ThinkCentre M53 will be available soon with a starting price of $439.

ThinkCentre E63z All-In-One

The final business aimed desktop is an all-in-one device called the ThinkCentre E63z. This unit features an integrated 19.5” display with optional touch, and an integrated camera and stereo speakers to allow for voice over IP and other collaboration software usage. Models equipped with the optional Core i3 CPU include an additional HDMI port, a card reader, and a Rapid Charge USB port for charging mobile devices. The E63z is available now starting at $479, with the Core i3 models available later this year.

We do not have a full list of specifications for these devices at this time, but those should be available on the Lenovo site when the devices go on sale.

Source:

Lenovo


G.Skill Announces Ripjaws DDR4, up to DDR4-3200

Fri, 2014-08-22 11:19

Much like the recent swathe of X99 motherboard previews, memory manufacturers are now showcasing the DDR4 memory modules that will be used with the Haswell-E platform. Unlike the CPUs from Intel, there is no formal NDA as such, allowing the media to report designs and specifications, although because real-world performance requires the CPU, no one is able to post benchmark numbers.

G.Skill is the next DRAM module manufacturer to come out with an official DDR4 press release, and following its previous high performance Ripjaws DDR3 range, G.Skill will introduce the new memory under the Ripjaws 4 moniker with a new heatspreader design.

G.Skill’s press release confirms the voltage ranges for DDR4, with 1.2 volts being standard on the 2133 MHz to 2800 MHz kits and the higher performance modules at 3000 MHz and above requiring 1.35 volts. The product line that G.Skill is aiming to release at launch is quite impressive, with all the 1.2 volt modules available in 16GB, 32GB and 64GB kits. Due to the extra binning and tighter tolerances of the more performance oriented kits, the DDR4-3000 C15 will come in 16GB or 32GB kits, the DDR4-3000 C16 in a 32GB kit, and the top-line 3200 MHz C16 in a 16GB kit only.

G.Skill is reporting full XMP 2.0 support, and that this new module design matches the 40mm height of previous Ripjaws designs, allowing previous CPU coolers to be matched with this generation. As the modules are launched, the three colors G.Skill is pursuing are blue, red and black. I know G.Skill monitors our news, so if you really want another color in there, make a note in the comments.

Preorder pricing puts these modules at:

DDR4-2133 C15 4x4GB: $260
DDR4-2400 C15 4x4GB: $280 / £240
DDR4-2666 C15 4x4GB: $300 / £290
DDR4-3000 C15 4x4GB: $400 / £380

DDR4-2133 C15 4x8GB: $480
DDR4-2400 C15 4x8GB: $530 / £440
DDR4-2666 C15 4x8GB: $550 / £500
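For a sense of the premium each speed bin carries, the US preorder prices above work out to the following cost per gigabyte; a small Python sketch using only the kits and prices listed:

```python
# (kit, capacity in GB, USD preorder price) taken from the list above
kits = [
    ("DDR4-2133 C15 4x4GB", 16, 260),
    ("DDR4-2400 C15 4x4GB", 16, 280),
    ("DDR4-2666 C15 4x4GB", 16, 300),
    ("DDR4-3000 C15 4x4GB", 16, 400),
    ("DDR4-2133 C15 4x8GB", 32, 480),
    ("DDR4-2400 C15 4x8GB", 32, 530),
    ("DDR4-2666 C15 4x8GB", 32, 550),
]

for name, capacity_gb, usd in kits:
    print(f"{name}: ${usd / capacity_gb:.2f}/GB")
```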

Source: G.Skill



Measuring Toshiba's 15nm 128Gbit MLC NAND Die Size

Fri, 2014-08-22 09:42

Courtesy of Custom PC Review

At Flash Memory Summit, Toshiba was showcasing their latest 15nm 128Gbit MLC NAND wafer that has been developed in partnership with SanDisk. I simply could not resist calculating the die size, as Toshiba/SanDisk has not published it and die size is always the basis of any semiconductor cost analysis. Unfortunately I was so busy running between meetings that I did not take a photo of the wafer, so I am borrowing the picture from Custom PC Review.

To estimate the die size, I used the same method as with Samsung's second generation V-NAND. Basically I just counted the number of dies along both the X and Y axes of the wafer, which gives us an approximation of the die size since we know that the diameter of the wafer is 300mm.
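The arithmetic behind the method is straightforward; here is a minimal Python sketch. The die counts in the example are made up purely to illustrate the calculation, since I am not publishing the exact counts here, and the result is only a rough approximation because edge dies and scribe lines are ignored.

```python
WAFER_DIAMETER_MM = 300  # standard wafer diameter

def die_size_mm2(dies_across_x, dies_across_y):
    """Approximate die area from the die counts along the wafer's widest row and column."""
    die_width = WAFER_DIAMETER_MM / dies_across_x
    die_height = WAFER_DIAMETER_MM / dies_across_y
    return die_width * die_height

def bit_density_gbit_per_mm2(die_capacity_gbit, die_area_mm2):
    """Bit density, the usual basis for comparing NAND nodes."""
    return die_capacity_gbit / die_area_mm2

# Hypothetical counts, for illustration only
area = die_size_mm2(40, 35)
print(round(area, 1), "mm^2")                              # ~64.3 mm^2 with these counts
print(round(bit_density_gbit_per_mm2(128, area), 2), "Gbit/mm^2")
```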

The 15nm node from Toshiba/SanDisk is extremely competitive. Its bit density is essentially equivalent to Samsung's V-NAND, so it is no wonder that Toshiba and SanDisk are betting heavily on their 15nm node before moving to 3D in early 2016. Compared to other 2D NAND dies, the 15nm node is a clear winner from bit density standpoint as Micron's 16nm MLC does not even come close.

Toshiba's and SanDisk's secret lies in a two-sided sense amp and an all bit line (ABL) architecture, which reduce the size of the peripheral circuits and sense amplifier, resulting in higher memory array efficiency. Based on my estimation, the array efficiency (i.e. how big a portion of the die is dedicated to memory cells) is about 80%, which is typical for a 128Gbit capacity. Higher capacities tend to yield better array efficiency since the peripheral circuitry does not scale as well as the memory cells do, so increasing the die capacity is one of the key ways of lowering the cost per gigabyte.

Since nobody has yet taken a cross-section of the 15nm die, it is hard to say for sure what Toshiba and SanDisk are doing to shrink the dimensions. There is no mention of high-K dielectrics, so that seems unlikely, and if history is any guide, Toshiba/SanDisk is simply increasing the aspect ratio by making the floating gate taller to compensate for the smaller feature size and keep the overall floating gate volume similar. That also helps to maintain the gate coupling ratio: the control gate is still wrapped around the floating gate, and with a taller floating gate the capacitance between the gates should remain sufficient despite the increasing proximity of the floating gates.

The production of Toshiba/SanDisk 15nm NAND node is currently ramping up and SSDs based on the new node are expected in Q4'14.


Interview with ADATA's President Shalley Chen

Fri, 2014-08-22 07:00

At this year’s Computex, I had the opportunity to sit down with Mrs. Shalley Chen, ADATA’s President, to discuss the current trends in the memory and SSD business, as well as get an overview of ADATA’s future plans. Mrs. Chen has been with ADATA since the company was founded in 2001 and is also the wife of the founder, Simon Chen. Before stepping in as President in April this year, Mrs. Chen served as an Executive Vice President. Mrs. Chen also holds a degree in business management from the Ming Chuan University in Taiwan.

Before we get into the actual interview, I want to provide a brief overview of ADATA. The company generates over $1 billion in yearly revenue, which makes ADATA one of the largest memory companies in the world. Over half of the revenue comes from the APAC (Asia-Pacific) region, which is logical given ADATA’s Taiwanese roots and the size of the Asian market. The North and Latin America region ranks as the second largest revenue source with about a 15% share of total revenue, followed by Europe and other smaller regions. In the interview Mrs. Chen hinted that Asia, Europe and especially Russia are potential future growth areas for ADATA since their memory and SSD markets are still in a developing stage, whereas in the US the markets are more mature.

ADATA has had an office in the US since 2002 and employs 41 people across two offices in Los Angeles and Miami. These are both sales and customer support offices, with the LA office in charge of North America while the Miami office is responsible for Latin America. All R&D is done in Taiwan at ADATA HQ, whereas production is split between ADATA’s own factories in China and Taiwan. While in Taiwan I took advantage of the offer to visit ADATA’s headquarters and the Taiwanese factory, as well as take some images for another article. Ever since the company was founded, ADATA has been a memory centric company. Like many companies of a similar nature, the mission from day one has been to become the leading global brand of memory products. Although the product portfolio has grown over the years to include newer products such as USB flash drives, external hard drives, SSDs, memory cards and, more recently, mobile power banks, fundamentally ADATA is still a memory company. Over half of ADATA’s revenue is generated by DRAM sales, and market research firms rank ADATA as the number two DRAM module supplier in the world.

Given the high competition in the memory and SSD business, the question I always put to manufacturers is this: what differentiates you from all the other brands? There are a dozen consumer focused DRAM companies, and there is little room for innovation or differentiation in the industry. Mrs. Chen told me that ADATA’s best weapons against the competition range from the diversity of its product portfolio to its close relations with both chip suppliers and distributors. Mrs. Chen was keen to point out that ADATA makes products for all three major markets (client, enterprise and industrial), giving ADATA several different revenue sources, and the percentage of revenue coming from enterprise and industrial is getting bigger and bigger, which implies that those segments are already substantial for ADATA.

Big enterprise OEMs like Intel and Samsung are typically interested only in large enterprises that buy upwards of tens of thousands of units, which leaves the small to medium size enterprise market for vendors like ADATA to fight over. For example, some of Samsung’s enterprise products are only available to large OEMs (like EMC, Dell, etc.), which leaves a niche for ADATA and other smaller vendors to offer better support for small to medium size enterprises. Working directly with the vendor also gives these customers the benefit of customization.

Like other fabless DRAM and SSD manufacturers, ADATA does not manufacture the chips they use – ADATA has to buy them from the likes of Micron and Samsung. I asked if ADATA has ever thought about moving to chip fabrication, but the answer was negative. The main reason is the cost of a fab, and investing billions of dollars is a large risk. If we look at the major semiconductor fabricators, most of them have been in the industry for decades, developing new technologies as the research progresses. As a result, it would be extremely difficult for a new player to gain any significant market share without innovation or a wide product portfolio and mountains of investment (it is worth noting that innovation can come from start-ups that have new technology but get acquired). Another point ADATA raised is that it has close relations with DRAM and NAND suppliers, and thus has no need for a chip fab. In the end, the DRAM module industry is all about managing inventory against cost and potential sales, so the competitive advantage lies in forecasting demand and managing inventory efficiently.

The same applies to SSD controller development. Even though controllers can be fabricated by a third party, the capital required for development and manufacturing is still a large sum. ADATA raised STEC as an example: it took the path of designing its own controller platform but got into serious financial trouble due to the cost of development, and STEC ended up being acquired by Western Digital. ADATA does, however, have its own SSD firmware development team that has been in action since 2007. ADATA believes that the firmware team will play a key role in ensuring competitiveness in the future. At this point in time, the team is mainly focusing on industrial SSD firmware development, but there will be a shift towards more unique firmware on the consumer side as well.

One of the big topics at Computex was the state of DDR4, and ADATA was heavily presenting its DDR4 portfolio at the show. Given ADATA’s position, the company wants to be the leader in DDR4 and will aim to push the new technology quite aggressively to both consumers and enterprises. ADATA is one of Intel’s six Haswell-E/X99 launch partners (the others are Micron, Samsung, Hynix, Kingston and Crucial), so there should be plenty of ADATA DDR4 available when the X99 platform launches later this year.

I asked ADATA whether the market for DDR4 will be any different from the current DDR3 market from an OEM perspective. Mrs. Chen replied that DDR4 is different in the sense that right now it is mostly an enterprise product and will be sold through B2B channels. The enterprise segment, due to the larger number of modules per sale, also gets a greater benefit from DDR4's lower voltage and higher frequency. In the stereotypical scenario of hundreds of racks with each server equipped with eight to sixty-four DIMMs or more, the lower power consumption of each module adds up and is thus always welcome. The higher speed should also help enterprise workloads, which tend to be bound by memory performance more often than client workloads.

For the end-users, ADATA showed us there will be branded products at retail as well, but until the mainstream platform adopts DDR4, the enterprise segment will be the main market. In terms of production, ADATA believes that DDR4 will overtake DDR3 in H1’15 for the enterprise market, but the same will not happen in the consumer side until sometime in 2016.

All in all, there is a lot going on in both the DRAM and SSD industries at the moment, so it will be interesting to see how the market reacts. We would like to thank Mrs. Chen and ADATA for their time and for the opportunity to discuss the DRAM and SSD markets. As part of my visit to ADATA, I also met with ADATA’s DRAM and SSD directors to discuss their technology at a lower level. Keep your eyes peeled for that article in due course.


Recovering Data from a Failed Synology NAS

Fri, 2014-08-22 03:00

It was bound to happen. After 4+ years of running multiple NAS units 24x7, I finally ended up with a bricked NAS. Did I lose data? Were my recovery attempts successful? If so, what sort of hardware and software setup did I use? How can you prevent something like that from happening in your situation? Read on to find out.


SanDisk X300s (512GB) Review

Thu, 2014-08-21 11:15

Back in May SanDisk announced the X300s, which is the company's first SED (Self-Encrypting Drive). The X300s is based on the same Marvell platform as SanDisk's client drives but with the differentiation that the X300s is the only drive that supports encryption via TCG Opal and IEEE-1667 (eDrive) standards. Due to the encryption support the X300s is positioned as a business product since the main markets for encrypted drives are corporations and governments that handle sensitive and confidential data on a daily basis. SanDisk includes Wave's EMBASSY Security Center with every purchase of X300s, which allows Opal encryption on systems that are not eDrive compatible. Dive in to read more about the X300s, Wave's encryption software, and SEDs in general!


FMS 2014: Marvell Announces NVMe-Enabled PCIe 3.0 x4 88SS1093 SSD Controller

Thu, 2014-08-21 09:10

Two weeks ago Marvell announced their first PCIe SSD controller with NVMe support, the 88SS1093. It supports a PCIe 3.0 x4 interface with up to 4GB/s of bandwidth between the controller and the host, although Marvell has yet to announce any actual performance specs. While PCIe 3.0 x4 is in theory capable of delivering 4GB/s, in our experience the efficiency of PCIe has been about 80%, so in reality I would expect peak sequential performance of around 3GB/s. There is no word on the channel count of the controller, but if history provides any guidance the 88SS1093 should feature eight NAND channels like its SATA siblings. Silicon wise, the controller is built on a 28nm CMOS process and features three CPU cores.
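The "around 3GB/s" estimate comes from applying that roughly 80% efficiency figure to the raw link rate; a quick sketch of the arithmetic in Python (PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding):

```python
def pcie3_bandwidth_gb_s(lanes, efficiency=1.0):
    """Per-direction PCIe 3.0 bandwidth: 8 GT/s per lane, 128b/130b encoding (~0.985 GB/s per lane)."""
    per_lane_gb_s = 8 * (128 / 130) / 8  # GT/s * encoding efficiency / 8 bits per byte
    return lanes * per_lane_gb_s * efficiency

print(round(pcie3_bandwidth_gb_s(4), 2))        # ~3.94 GB/s raw for an x4 link
print(round(pcie3_bandwidth_gb_s(4, 0.80), 2))  # ~3.15 GB/s at the ~80% efficiency seen in practice
```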

The 88SS1093 has support for 15nm MLC and TLC as well as 3D NAND, although I fully expect it to be compatible with Micron's and SK Hynix's 16nm NAND as well (i.e. 15nm TLC is just the smallest it can go). TLC support is enabled by the use of LDPC error correction, which is part of Marvell's third generation NANDEdge technology. Capacities of up to 2TB are supported, and the controller fits in both 2.5" and M.2 designs thanks to its small package size and thermal optimization (or should I say throttling).

The 88SS1093 is currently sampling to Marvell's key customers, with product availability expected in 2015. Given how well Intel's SSD DC P3700 fared in our tests, I am excited to see more NVMe designs popping up. Marvell is known to be the go-to controller source for many of the major SSD manufacturers (SanDisk and Micron/Crucial to name a couple), so the 88SS1093 will play an important part in bringing NVMe to the client market.


Examining Huawei's Benchmark Optimizations in the Ascend P7

Thu, 2014-08-21 04:00

While benchmark optimization has been a hot topic, recently it has faded into the background as the industry adjusted. Previously, we saw changes such as an automatic 10% GPU overclock that was almost never achieved in normal applications, and behavior that would automatically plug in all cores and set the CPU frequency to maximum. Now, most OEMs have stopped this behavior, and even those that haven't usually provide options that make it possible to use the altered CPU/GPU governor in all applications.

Unfortunately, I have to talk about a case where this isn't true. While I've been working on reviewing the Ascend P7 and have found a lot to like, I am sure that the Ascend P7 alters CPU governor behavior in certain benchmarks. For those unfamiliar with it, the Huawei Ascend P7 is Huawei's flagship smartphone, equipped with a Kirin 910T SoC that has four Cortex A9r4 CPUs running at a maximum of 1.8 GHz, two gigabytes of RAM, and a five inch 1080p display.

To test for differences in governor behavior, we'll start by looking at how the P7 normally behaves when faced with a benchmark workload, using a renamed copy of the benchmark that the phone cannot recognize. I haven't seen any differences in GPU behavior, as the GPU governor seems to stay clocked at an appropriate level regardless of the benchmark. The CPU, however, is noticeably reluctant to reach 1.8 GHz. For the most part this only happens in short bursts, and there is a great deal of variation in clock speeds, with an average of about 1.3 GHz throughout the test.

With the Play Store version of the same benchmark, we can see a significant difference in the CPU frequency curve. There's far more time spent at 1.8 GHz, and the frequency profile is incredibly tight outside of the beginning and end. The average frequency is around 1.7 GHz, which is significantly higher than what we see with the renamed version of the benchmark.

While the core-count graph for the renamed benchmark is somewhat boring, it's important as it shows that only three cores are plugged in for the full duration of the test. Any noticeable deviation from this pattern would definitely be concerning.

When running the same workload on the Play Store version of GFXBench, we see that four cores are plugged for almost the entirety of the test. While I'm not surprised to see this kind of behavior when combined with altered frequency scaling, it's a bit disappointing. Strangely, this policy doesn't seem to be universal either as I haven't seen evidence of altered behavior in Huawei's Snapdragon devices. This sort of optimization seems to be exclusive to the HiSilicon devices. Such behavior is visible in 3DMark as well, although it doesn't seem to happen in Basemark OS II or Basemark X 1.1.
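For anyone who wants to check a device for this kind of behavior themselves, the frequency and core-count traces described above boil down to polling a couple of standard sysfs nodes while a benchmark runs. Below is a minimal sketch, assuming a device reachable over adb and the usual Linux cpufreq paths (the four-core loop matches the Kirin 910T; adjust for other SoCs). Running the same sampler against the Play Store APK and a renamed copy of the same benchmark is what produces the divergence summarized in the table below.

```python
import subprocess
import time

def adb_read(path):
    """Read a sysfs file on the device over adb; returns an empty string if the node is absent."""
    out = subprocess.run(["adb", "shell", "cat", path],
                         capture_output=True, text=True)
    return out.stdout.strip()

def sample(duration_s=60, interval_s=0.5):
    """Log which cores are online and each core's frequency (kHz) while a benchmark runs."""
    end = time.time() + duration_s
    while time.time() < end:
        online = adb_read("/sys/devices/system/cpu/online")  # e.g. "0-3" when all cores are plugged
        freqs = [adb_read(f"/sys/devices/system/cpu/cpu{i}/cpufreq/scaling_cur_freq")
                 for i in range(4)]                            # assumes a quad-core SoC
        print(time.strftime("%H:%M:%S"), online, freqs)
        time.sleep(interval_s)

if __name__ == "__main__":
    sample()
```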

Huawei Ascend P7 Performance

                        Play Store   Renamed   Perf Increase
GFXBench T-Rex          12.3         10.6      +16%
3DMark Ice Storm U/L    7462         5816      +28.3%

While normally such optimizations have a small effect, in the case of the affected benchmarks the difference is noticeable and quite significant. Needless to say, it's not really acceptable that Huawei is doing this, and I'm disappointed that they have chosen this path.

In response to this issue, Huawei stated the following:

"CPU configuration is adjusted dynamically according to the workload in different scenarios. Benchmark running is a typical scenario which requires heavy workload, therefore main frequency of CPU will rise to its highest level and will remain so for a while. For P7, the highest frequency is 1.8GHz. It seldom requires CPU to work at the highest frequency for long in others scenarios. Even if the highest level appears, it will only last for a very short time (for example 400 ms). Situation is the same for most devices in the market."

Unfortunately, I'm not sure how this statement explains the situation, as two identical workloads performed differently. While I was hoping to see an end to rather silly games like this, it seems the road to OEMs stopping this kind of behavior will be longer than I first expected. Ultimately, such games don't affect anyone who actually knows how to benchmark SoCs and evaluate performance, and one only needs to look at the PC industry to see that such efforts will ultimately be discovered and defeated.

 
