Category: Hardware



Intel’s upcoming Haswell architecture, designed to follow on from Ivy Bridge, is rumoured to include L4 cache which can be shared between central processing and graphics processing units for vastly improved 3D performance.

While as-yet unannounced by Intel itself, the claim comes from VR-Zone, which says it has spotted evidence that the Haswell launch line-up will include desktop chips featuring integrated graphics some two or three times faster than the best Ivy Bridge has to offer.

The biggest change, however, comes in the form of a new layer of cache memory. In addition to the usual L1, L2 and L3 cache layers, VR-Zone claims Haswell is to ship with a chunk of L4 cache. Traditionally, such a feature is limited to expensive chips aimed at the high-performance computing (HPC) market.
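Why an extra cache layer can matter is easiest to see with a simple average-memory-access-time (AMAT) model. The latencies and hit rates below are illustrative assumptions for the sketch, not Haswell figures:

```python
# Back-of-the-envelope sketch (not Intel data): average memory access
# time with and without an extra cache level. All hit rates and
# latencies below are assumed for illustration only.

def amat(levels, memory_latency):
    """levels: list of (hit_rate, latency_cycles), checked in order."""
    time = 0.0
    miss_prob = 1.0
    for hit_rate, latency in levels:
        time += miss_prob * latency   # every lookup reaching this level pays its latency
        miss_prob *= (1.0 - hit_rate)
    return time + miss_prob * memory_latency  # remaining misses go to DRAM

three_level = [(0.90, 4), (0.80, 12), (0.70, 40)]
four_level = three_level + [(0.60, 80)]  # hypothetical L4 layer

print(amat(three_level, 200))  # ~7.2 cycles average
print(amat(four_level, 200))   # slightly lower: L4 absorbs some DRAM trips
```

The gain per CPU access is modest in this toy model; the bigger win the rumour points at is a GPU sharing that L4 instead of competing for DRAM bandwidth.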

While Intel certainly has intentions in this area, Haswell’s L4 cache is there for a different reason: graphics. Like AMD, its biggest competitor, Intel believes that increased coherency between the graphics and central processing infrastructure on chips is the way forward. While not quite as clear a roadmap as AMD’s heterogeneous systems architecture (HSA), Intel’s apparent move to add an L4 cache layer to Haswell indicates a similar goal: improved cache coherency between GPU and CPU tasks.

For consumer applications, that means vastly improved graphical capabilities which could mean the death of low- and potentially even mid-range dedicated graphics acceleration hardware. For the server room it means the ability to execute instructions on either the CPU or the GPU independent of where the data is stored, eliminating one of the biggest bottlenecks in general purpose GPU (GPGPU) programming – moving data.

Details of the Haswell chip design are sketchy, and Intel isn’t talking. As usual, the company merely stated that it ‘refuses to comment on rumour or speculation regarding unannounced products.’ Should VR-Zone’s guess prove accurate, however, Intel’s integrated graphics could be getting a serious speed boost come Haswell’s launch in 2013.


The copper-selenium compound analysed by the team acts as both a liquid and a solid, making it an incredibly efficient material for thermoelectric systems.

Researchers from the Chinese Academy of Science’s Shanghai Institute of Ceramics, Brookhaven National Laboratory, the University of Michigan and the California Institute of Technology (Caltech) have released details of a liquid-like compound they claim could lead to more efficient thermoelectric devices.

Thermoelectric materials turn heat into electricity, and are seeing increasing use in waste energy reclamation projects. Using a thermoelectric material, it’s possible to turn wasted heat from industrial facilities or vehicle engines into useful electricity.

The technology works by exploiting the temperature differential between its two ends. Electrons in the hot end diffuse to the cold end, producing a small but useful electric current. While current thermoelectric systems are typically bulky, the technology shows real promise for extending battery life in laptops and even smartphones by harnessing wasted heat – or even the heat of the user’s hand – to generate a top-up current for the battery.

A key milestone in achieving this is increasing the efficiency of the thermoelectric material, which will result in the creation of smaller thermoelectric devices. It’s this goal that the team at Caltech had in mind while experimenting with new compounds – including their most promising candidate yet, a mixture of copper and selenium which exhibits liquid-like behaviours despite being a solid.

‘It’s like a wet sponge,’ explained Jeff Snyder, a faculty associate in applied physics and materials science at Caltech and a research team member, at the announcement. ‘If you have a sponge with very fine pores in it, it looks and acts like a solid. But inside, the water molecules are diffusing just as fast as they would if they were a regular liquid. That’s how I imagine this material works. It has a solid framework of selenium atoms, but the copper atoms are diffusing around as fast as they would in a liquid.’

The new compound features a mixture of crystalline and amorphous properties, allowing electrons to flow easily while inhibiting the transmission of the vibrations which carry heat. Using the crystal structure of selenium with free-flowing copper atoms which act like a liquid, the team created a compound with a thermoelectric figure of merit of 1.5 at 1,000 kelvin – one of the highest values of any mass-producible material, the team claims.
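The figure of merit quoted here is the dimensionless zT = S²σT/κ, which combines the Seebeck coefficient, electrical conductivity, absolute temperature and thermal conductivity. A minimal sketch, using illustrative material parameters rather than the team’s measured values:

```python
# Dimensionless thermoelectric figure of merit: zT = S^2 * sigma * T / kappa.
# The inputs below are illustrative assumptions, not the published data.

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    return seebeck_v_per_k**2 * sigma_s_per_m * temp_k / kappa_w_per_mk

# Assumed: Seebeck 300 uV/K, conductivity 1e4 S/m, thermal conductivity 0.6 W/m.K
zt = figure_of_merit(3e-4, 1e4, 0.6, 1000)
print(round(zt, 2))  # -> 1.5 with these assumed inputs
```

The formula makes the design goal obvious: conduct electricity well (high σ) while conducting heat badly (low κ) – exactly the combination the liquid-like copper sublattice provides.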

A compound of copper and selenium isn’t new: back in 1970 NASA was using a similar material in the construction of spacecraft power systems. Its liquid-like nature, however, was poorly understood and made it difficult to work with. The team’s new research sheds light on the reasons for its high figure of merit, while paving the way for exploitation of other liquid-like thermoelectric materials in the future.


Storage giant Seagate has become the first hard drive manufacturer to reach the dizzy heights of one terabit per square inch areal density, using a technology known as heat-assisted magnetic recording (HAMR).

Designed as a next-generation replacement for perpendicular magnetic recording as used in today’s hard drives, HAMR holds the potential for 3.5in hard drives holding as much as 60TB. That, Seagate is quick to point out, would mean more bits in a square inch of hard drive platter than stars in the Milky Way.

As the name suggests, HAMR supplements traditional magnetic storage technologies by first heating the area to be magnetically flipped with a small laser. The result is a bypassing of the superparamagnetic effect, a phenomenon in which sufficiently small magnetic particles decide to randomly change state – turning your carefully stored data into so much gibberish.

The technology has been in the pipeline for quite some time: the first patent describing the use of heat to improve the stability of magnetically recorded data dates back to 1954, while the 1980s saw the launch of the HAMR-based magneto-optical drive before it was superseded by writable CDs. More recently, Fujitsu announced a HAMR-based head element in 2006 which it claimed could potentially lead to terabit-per-square-inch areal densities.

Seagate is now claiming to deliver on Fujitsu’s six-year-old promise. A prototype HAMR implementation, created by the company as a proof-of-concept platform for the technology, shows what it can do: at a linear density of two million bits per inch for an overall density of 1Tb per square inch, the drive boasts an areal density around 60 per cent higher than PMR’s theoretical 620Gb per square inch limit.

As a result, the first-generation commercial HAMR products can be expected to double hard drive capacities. That means desktop drives holding up to 6TB, and laptop drives holding up to 2TB. Seagate promises that HAMR will scale rapidly, however, with its upper limit sitting somewhere around five to 10Tb per square inch. As a result, by the time HAMR reaches end-of-life, we should be enjoying 3.5in hard drives with between 30TB and 60TB of storage space.
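The capacity claims follow from simple geometry: areal density multiplied by usable platter area and the number of recording surfaces. A rough sketch, using assumed platter dimensions – real drives reserve area for servo data, spare sectors and the spindle clamp, so this naive model overshoots Seagate’s quoted figures:

```python
# Rough capacity model: areal density x usable platter area x surfaces.
# Platter geometry and count are assumptions for illustration.

import math

def drive_capacity_tb(density_tbit_per_sq_in, outer_d_in, inner_d_in, platters):
    # Annular area of one platter surface, in square inches
    area = math.pi / 4 * (outer_d_in**2 - inner_d_in**2)
    bits = density_tbit_per_sq_in * 1e12 * area * 2 * platters  # two surfaces per platter
    return bits / 8 / 1e12  # terabytes

# 1 Tb/in^2 (the demo) vs 10 Tb/in^2 (HAMR's projected ceiling), 4 platters
print(round(drive_capacity_tb(1, 3.5, 1.0, 4), 1))
print(round(drive_capacity_tb(10, 3.5, 1.0, 4), 1))
```

Capacity scales linearly with areal density, which is why a tenfold density jump maps directly onto the tenfold capacity jump Seagate is projecting.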

‘The growth of social media, search engines, cloud computing, rich media and other data-hungry applications continues to stoke demand for ever greater storage capacity,’ claimed Mark Re, senior vice president of heads and media R&D at Seagate. ‘Hard disk drive innovations like HAMR will be a key enabler of the development of even more data-intense applications in the future, extending the ways businesses and consumers worldwide use, manage and store digital content.’

Sadly, Seagate is somewhat quiet on a launch schedule for its HAMR-based drives, saying only that it is planning the first commercial HAMR drives for launch ‘later this decade.’


On Tuesday SanDisk revealed two separate lines of solid state drives: the high-performance Extreme series for the retail channel and the X100 series for desktop and notebook manufacturers. The former series is available in 120 GB and 240 GB capacities, with a meatier 480 GB version hitting the market at a later date.

SanDisk claims that its Extreme line delivers up to 83,000 maximum random write IOPS and up to 44,000 maximum random read IOPS. It also provides sequential read speeds up to 550 MB/s and sequential write speeds up to 520 MB/s. SanDisk didn’t specify a SATA interface, but it’s presumably SATA 6 Gb/s. The SSD also reportedly consumes 30 percent less power and generates less noise and heat than an HDD, making it ideal for laptops.
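IOPS and sequential throughput measure different things, but multiplying the operation rate by the transfer size gives an equivalent bandwidth. A quick conversion, assuming the 4 KiB random transfer size commonly used in SSD benchmarks:

```python
# IOPS counts operations, not bytes: throughput depends on the I/O size.
# 4 KiB is the usual random-access benchmark size (an assumption here,
# since SanDisk's figures don't state it explicitly).

def iops_to_mb_per_s(iops, block_bytes=4096):
    return iops * block_bytes / 1e6  # decimal MB/s, as drive vendors quote

print(iops_to_mb_per_s(83_000))  # random writes: ~340 MB/s
print(iops_to_mb_per_s(44_000))  # random reads: ~180 MB/s
```

The gap between these numbers and the 550/520 MB/s sequential figures shows why random I/O, not sequential transfer, is where SSDs earn their keep.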

The 120 GB and 240 GB versions are available now, costing $189.99 and $399.99 respectively. The 480 GB ($749.99) will launch at a later date.

As for the X100 series, SanDisk is now sampling the drive to PC manufacturers. It will connect via a SATA 6 Gb/s interface, and provide sequential read speeds up to 500 MB/s and sequential write speeds up to 420 MB/s. Available capacities will be 32 GB, 64 GB, 128 GB, 256 GB and 512 GB.

“In addition to the standard 2.5 inch form factor (7-mm or 9.5-mm), the X100 SSD also comes in mSATA and customized thin form factors to address the emerging ultra-thin laptop market,” the company said on Tuesday. “SanDisk offers a wide variety of configuration options, including standalone and dual drive caching solutions.”

Tech specs reveal that the X100 series will have a typical active power consumption of 150mW (@ 3.3V) and a typical standby power draw of 75mW (@ 3.3V). The drive’s MTBF is up to 2 million hours, and the target platforms are ultrabooks, notebooks and desktops.


EVGA has released a teaser image of its upcoming SR-X dual-socket LGA2011 motherboard, and while it’s keeping launch details a secret the picture tells a story of its own. Promising ‘more details soon,’ the company posted a teaser image of a production-status SR-X board, complete with its final heatsink design, to microblogging service Twitter to assure fans that development continues apace.

The image shows that EVGA has, unsurprisingly, chosen to cover all voltage regulator modules (VRMs) in aggressive-looking heatsinks, while the chipset itself gets a surprisingly compact yet wide-area heatsink of its own to keep things cool during overclocking.

The picture reveals 12 memory slots, eight situated in two banks of four by the first processor socket and an additional two banks of two by the second, which suggests support for a total of 96GB of DDR3 memory (using 8GB modules) for those that can afford it.

Seven PCI-Express 3.0 slots are included, all of which appear to be full 16x slots and which are known to include support for both Nvidia’s SLI and AMD’s CrossfireX multi-GPU capabilities. As is becoming increasingly common on high-end enthusiast boards, switches for disabling individual PCIe slots are provided for improved stability when overclocking. EVGA has also added voltage read points.

Designed for use with Intel’s Sandy Bridge-EP Xeon processors, the board requires plenty of power. Both CPU sockets have an eight-pin and a six-pin power connector each, although two of these can be left disconnected if only a single CPU is installed.

Additional features rounding out the board include six SATA and four SAS ports, two eSATA ports, six USB 3.0 ports, and dual gigabit Ethernet ports. EVGA has also previously confirmed that its EVBot tweaking tool will be fully supported by the board.

What EVGA isn’t sharing, however, is potentially the most crucial point of all: the price. With buyers having to invest in the high-end server-oriented Sandy Bridge-EP Xeon series of chips in order to make use of the dual socket design, a system with the SR-X at its heart is unlikely to come cheap.


Researchers have published a paper detailing a way of using laser heating to significantly speed up the data transfer capabilities of the humble hard drive.

In a paper published in the journal Nature Communications, a team of researchers led by Thomas Ostler from the University of York detail a method of dramatically increasing the data storage speeds of magnetic hard drives using a laser-based heating system.

The paper details a system whereby a sub-picosecond laser pulse briefly heats the magnetic medium to a temperature of around 800 degrees Celsius. This heating, the team found, significantly speeds up the process of reversing the magnetic polarity of a particular bit compared with the traditional methods of using another magnet or an electric current.

While much of the paper is indecipherable to the layman, its conclusions are clear: it’s now proven possible to alter the magnetic polarity of a medium by blasting it with a laser.

The team’s work is far from a commercial reality, of course. There are questions to be answered about how the technology can be implemented in a way that won’t bankrupt anyone hoping to buy a laser-based hard drive, while the experimental results don’t address the issue of medium longevity.

It’s a promising start to a potentially disruptive new technology for the storage space, however. According to the team’s findings, a laser-based hard drive would have two main advantages over a typical magnetic drive. The first is energy efficiency: while high power, the laser pulses for such a tiny amount of time the overall power draw is much less than a magnetic drive would require for the same operation.
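The energy argument is simple arithmetic: energy per switch is peak power multiplied by pulse duration, and average draw is that energy multiplied by the write rate. A sketch with illustrative figures, not values from the paper:

```python
# Why a "high power" laser can still be frugal: a sub-picosecond pulse
# delivers very little energy. All figures below are assumptions for
# illustration, not the team's measurements.

def pulse_energy_joules(peak_power_w, duration_s):
    return peak_power_w * duration_s

# A 1 kW peak pulse lasting 0.5 ps delivers half a nanojoule
e = pulse_energy_joules(1e3, 0.5e-12)
print(e)  # 5e-10 J per bit flip

# Even at a hypothetical 10^9 bit flips per second, the average draw is modest
avg_power = e * 1e9
print(avg_power)  # 0.5 W average, despite the 1 kW peak
```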

It’s the performance that is of the most interest, however. A laser-based storage system could write at speeds in the terabytes per second region, making the fastest solid-state drive available today seem positively pedestrian in comparison.

Sadly, Ostler and his team are silent on when – or, indeed, if – the technology will be making it to the desktop.


Intel has released further details of its next-next generation processor architecture, Haswell, and its transactional memory functionality.

Due to take over from the yet-to-be-released Ivy Bridge, Haswell will be constructed on the same process size but with a new microarchitecture. The company has previously detailed some of the improvements that will bring, including the Haswell New Instructions, but this week’s announcement is something a bit special.

Dubbed Transactional Synchronisation Extensions, or TSX, the technology provides extensions to the processor’s instruction set that allow programmers to mark particular regions of code as suitable for transactional synchronisation.

As Intel’s James Reinders puts it in his blog post on the matter: ‘With transactional synchronisation, the hardware can determine dynamically whether threads need to serialise through lock-protected critical sections, and perform serialisation only when required. This lets the processor expose and exploit concurrency that would otherwise be hidden due to dynamically unnecessary synchronisation.’

Code engineered to take advantage of TSX could prove pretty nippy. Reinders explains that if the programmer has done his or her job correctly and the transactional execution completes successfully, all memory operations performed in that code segment will appear to have occurred instantaneously from the perspective of other processors in the system.

The technology works in a similar way to a transactional database: operations can be performed in isolation, making it possible for a multicore processor to run code more efficiently. Like a database, it’s also possible to roll back a transaction if necessary, while TSX promises to do away with the requirement to lock resources during computation in order to prevent modification by other process threads.
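The database analogy can be made concrete with a toy software sketch: buffer writes locally, validate at commit time that nothing read has changed, and discard the buffer on conflict. This illustrates the transactional idea only; it is not Intel’s hardware mechanism, and the `Transaction` class here is entirely hypothetical:

```python
# Toy software transaction: optimistic reads with version validation,
# buffered writes, atomic commit or rollback. Conceptual sketch only.

class Transaction:
    def __init__(self, store):
        self.store = store   # shared dict: name -> (version, value)
        self.reads = {}      # versions observed during the transaction
        self.writes = {}     # buffered updates, invisible until commit

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        version, value = self.store[key]
        self.reads[key] = version
        return value

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Validate: abort if anything we read changed under our feet
        for key, version in self.reads.items():
            if self.store[key][0] != version:
                return False  # rollback: buffered writes are simply discarded
        for key, value in self.writes.items():
            old_version = self.store[key][0] if key in self.store else 0
            self.store[key] = (old_version + 1, value)
        return True

store = {"balance": (0, 100)}
tx = Transaction(store)
tx.write("balance", tx.read("balance") - 30)
print(tx.commit(), store["balance"][1])  # True 70

# A conflicting writer bumps the version mid-transaction -> abort
tx2 = Transaction(store)
val = tx2.read("balance")
store["balance"] = (store["balance"][0] + 1, 999)  # simulated other thread
tx2.write("balance", val - 30)
print(tx2.commit(), store["balance"][1])  # False 999
```

The point TSX makes in hardware is the same one this sketch makes in software: no lock is taken in the common, conflict-free case, and serialisation costs are paid only when threads actually collide.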

Put simply: with TSX, Intel promises, coders will be able to push their workloads onto multiple processing cores without worrying about complex and performance-sapping fine-grain locking techniques.

Transactional memory isn’t new. IBM’s BlueGene/Q supercomputer uses a very similar method, while other HPC platforms have their own implementations. TSX will mark the first time that transactional memory technology has been available at a consumer level, however, and it holds the potential to be a significant disruptive influence in the multithreading arena.

Sadly, as with any new instruction set, it could be a while before consumers start to see the benefit. With Haswell due to launch in 2013, expect a teething period of a year or two for developers to figure out the best ways to implement TSX in their own applications.


Patriot Memory, a global pioneer in high-performance memory, NAND flash, storage and enthusiast computer products, today introduces its new lineup of Quad-Channel Division 4 DDR3 memory solutions, specifically engineered for future quad-channel memory based platforms.

For hardware enthusiasts, content developers, and high-end gamers looking for ultimate performance, the Patriot Memory Viper Xtreme Division 4 kits are designed for extreme overclocking and feature a custom copper and all aluminum heat shield to provide the best thermal protection available.

Performance users and budget gamers will get superior price-per-performance in the Gamer 2 Division 4 kits. With unrivaled build quality and low-profile extruded aluminum heat-spreaders for enhanced thermal protection, the Patriot Memory Gamer 2 Division 4 kits deliver superior performance and stability.

“Building on Patriot Memory’s award winning Viper Xtreme and Gamer 2 brands, these quad- channel kits will feature the same aggressive cooling solutions, build quality, and extreme performance that have made them a popular choice for enthusiast and performance users,” says Les Henry, Patriot Memory’s VP of Engineering.

General details:

  • Speeds of up to 1866 MHz (Viper Xtreme Division 4) and 1600 MHz (Gamer 2 Division 4)
  • Heat shields designed to deliver maximum thermal protection
  • Lifetime Warranty

Intel’s next-generation processor architecture, Haswell, has been pictured for the first time in a spy shot of an engineering sample.

According to Slovakian technology site OBR-Hardware, the leaked image is of a quality control sample produced at an Intel fabrication facility to prove the next-generation architecture’s efficacy ahead of its launch next year.

Built on a 22nm process, Haswell represents the ‘tock’ portion of Intel’s ‘tick-tock’ development cycle: where the upcoming Ivy Bridge is merely a process size shrink from Sandy Bridge’s 32nm to 22nm, Haswell introduces a new generation of microarchitecture on the same process.

Based on the existing Core architecture, Haswell promises numerous improvements over its predecessors. The most interesting of these, and sadly invisible in the leaked picture, is HNI: Haswell New Instructions.

Designed to extend the existing instruction set available in Ivy Bridge, HNI includes Advanced Vector Extensions (AVX) 2, bringing:

  • SIMD (Single Instruction, Multiple Data) operations on 256-bit integer data types
  • Bit manipulation instructions for improved handling of variable bitstreams, cryptography, compression and large-number arithmetic
  • Gather instructions for vectorising code with non-adjacent data elements
  • Any-to-any permutes, with support for DWORD and QWORD granularity permutes across an entire 256-bit register
  • Vector-vector shifts
  • Floating-point multiply accumulate functionality for boosted floating-point performance

In simpler terms: for software designed to take advantage of the new instructions, Haswell promises some significant performance benefits. In particular, better vector handling means vastly improved parallel processing capabilities that could boost overall compute performance significantly for multi-threaded applications.
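Two of the new instruction classes can be illustrated with scalar Python stand-ins that show the per-element semantics the hardware vectorises. These are conceptual sketches, not real intrinsics:

```python
# Scalar sketches of two HNI-class operations. Pure Python stand-ins
# for illustration only; the real instructions operate on 256-bit
# registers in a single operation.

def gather(data, indices):
    """AVX2-style gather: load non-adjacent elements in one operation."""
    return [data[i] for i in indices]

def fma(a, b, c):
    """Fused multiply-add: a*b + c per lane, with a single rounding step."""
    return [x * y + z for x, y, z in zip(a, b, c)]

table = [10, 20, 30, 40, 50, 60, 70, 80]
print(gather(table, [7, 0, 3, 5]))  # [80, 10, 40, 60]
print(fma([1, 2], [3, 4], [5, 6]))  # [8, 14]
```

Gather is what lets loops over scattered data vectorise at all, while FMA halves the instruction count (and rounding error) of the multiply-then-add pattern at the heart of most numerical code.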

The image does, however, offer some insight into other aspects of the first Haswell-based processors likely to hit the market: according to OBR-Hardware’s analysis of the spy shot, the GPU portion of the die is around twice the size of that included in Sandy Bridge. As a result, we can expect significantly improved graphics performance from the chips.

That observation fits with existing claims – including that from VR-Zone – that Haswell will include multi-standard encode and decode functionality for video resolutions up to QuadHD 4K.

Sadly, with Ivy Bridge not even out of the door yet, Intel is keeping quiet on precise facts and figures for the first Haswell silicon; but it certainly looks like it’s going to be a tempting upgrade for performance enthusiasts when it finally hits the market in 2013.


Memory specialist Micron has won an award for its Hybrid Memory Cube technology, a prototype system which promises to massively improve DRAM performance while drastically cutting power draw compared with current-generation hardware.

Based on through-silicon via technology – vertical conduits that allow components to be connected in a three-dimensional mesh, in a way not dissimilar to Intel’s tri-gate transistor technology – Micron’s Hybrid Memory Cubes promise significantly improved performance over existing two-dimensional memory structures: current prototypes push 128GB/s, compared with the peak throughput of around 12.8GB/s obtainable from commercial DDR3 DRAM implementations.
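The DDR3 figure is easy to reproduce: peak bandwidth is the transfer rate multiplied by the bus width. A quick sketch for a single 64-bit DDR3-1600 channel (the assumption behind the ~12.8GB/s number):

```python
# Peak memory bandwidth = transfers per second x bytes per transfer.
# DDR3-1600 on a standard 64-bit channel is assumed here.

def peak_bandwidth_gb_s(transfers_per_s, bus_bits):
    return transfers_per_s * (bus_bits / 8) / 1e9

ddr3_1600 = peak_bandwidth_gb_s(1600e6, 64)
print(ddr3_1600)                  # 12.8 GB/s per channel
print(round(128 / ddr3_1600, 1))  # HMC prototype: 10x that figure
```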

Micron’s HMC isn’t just about improving performance, however: prototype implementations of the technology boast a 70 per cent reduction in power draw during data transfer while reducing the footprint to one-tenth that of traditional two-dimensional memory; as a result, the technology is turning heads in the world of ultra-mobile computing and embedded systems.

Sadly, the technology isn’t destined for desktops and smartphones just yet: due to the heightened cost of HMC chips compared to traditional DRAM, the company is concentrating its efforts in large-scale networking and high-performance computing (HPC) markets. Once production costs have lowered, industrial automation will be next before the consumer markets finally get their hands on the technology.

That somewhat lengthy roadmap to affordable implementations hasn’t stopped The Linley Group, a microprocessor and semiconductor industry analysis organisation, awarding the company Best New Technology in its 2011 awards. ‘Hybrid Memory Cubes promise greater density, lower latency, higher bandwidth, and better power efficiency per bit compared with conventional memories,’ explained Tom R. Halfhill, a senior analyst at The Linley Group and one of those responsible for the award. ‘Early benchmarks show a memory cube blasting data 12 times faster than DDR3-1333 SDRAM while using only about 10 percent of the power.’

‘With so many other great innovations out there, we’re especially honored to receive this prestigious award from The Linley Group. This world-class recognition is a reflection on the team that is working to match processors with memory and break down the memory wall by bringing the Hybrid Memory Cube to the market,’ added Brian Shirley, vice president for Micron’s DRAM Solutions Group, at the award ceremony. ‘Special thanks to the Micron engineers who’ve designed this breakthrough architecture, and to our esteemed colleagues in the industry who are helping to expedite the specifications and manufacturing for this revolutionary technology.’

Wondering what chips will be turned three-dimensional next, or just impatient to get your hands on consumer-grade implementations?


OCZ has announced a new firmware for its Octane range of solid-state drives that offers a little something extra: up to double the IOPS performance on selected models.

IOPS – Input/Output Operations Per Second – is a common method of measuring raw storage performance, and while it doesn’t necessarily relate to real-world data throughput it’s a given that more is better. Thankfully for Octane owners, that’s exactly what the latest firmware offers.

According to OCZ’s own figures, the improvement varies from model to model but is worth having nevertheless: the 128GB Octane goes from 7,700 random 4K write IOPS under the previous firmware to a healthy 18,000; the 256GB Octane jumps from 12,000 IOPS to 25,000; and the 512GB manages a respectable leap from 16,000 IOPS to 26,000.
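In fact, OCZ’s ‘up to double’ headline undersells the smallest model. Computing the gains directly from the company’s own before-and-after figures:

```python
# Per-model IOPS improvement factors, from OCZ's quoted figures
models = {
    "128GB": (7_700, 18_000),
    "256GB": (12_000, 25_000),
    "512GB": (16_000, 26_000),
}
for name, (old, new) in models.items():
    print(f"{name}: {new / old:.2f}x")  # 2.34x, 2.08x, 1.62x
```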

Before you run to download the update, which takes the drives from version 1.12 to 1.13, a word of warning: the firmware update is destructive, entirely wiping the contents of the drive. As a result, make sure you have an up-to-date backup before grabbing your free speed boost.

Additionally, you’ll need the MSAHCI drivers installed to complete the upgrade process, the SSD must be set as a secondary drive, and – sadly – if you’re running an Intel X79 chipset you’ll have to find another system to flash the SSD on, as it’s not yet supported by OCZ’s Toolbox software.

With all that in mind, and if you’re still hankering for your IOPS boost, head over to the OCZ Octane page to download the upgrade.

Are you pleased that OCZ is improving its products for existing customers rather than just developing new – and expensive – models, or wondering exactly why it’s necessary to wipe the entire drive just to upgrade the firmware?
