It’s sobering to think that the hard disk drive is now 50 years old, yet as a data storage technology it remains about 100 times more cost-effective than its competitors. Its drawbacks, though, are pretty apparent – hard drives are inherently slow and unreliable, and because of the mechanical nature of the device this is not likely to change fundamentally. Meanwhile various solid state memories, like SRAM, DRAM and flash memory, are very fast-growing market segments. Yet even with the continual shrinking of circuit dimensions, these technologies will not be able to match the raw storage capacity of hard drives. The search is on, then, for memory devices that combine the capacity of hard drives with the robustness and speed of access of solid state devices. One candidate for such a device is the magnetic racetrack, invented by IBM’s Stuart Parkin. Parkin’s views on magnetic data storage devices need to be taken very seriously; as the inventor of the giant magnetoresistance based read head, it’s his work that has permitted the miniaturisation of hard disks we see today and that makes possible devices like the video iPod.
I heard Parkin speak about his invention at last week’s Condensed Matter and Materials Physics meeting of the Institute of Physics, where he was one of the plenary lecturers. A version of this lecture, which Parkin recently gave at UCSB, can be downloaded from here – it includes some very helpful videos. Parkin’s invention is disclosed in this US patent; the basic idea is that data is stored as a pattern of magnetic domain walls in a nanowire of a magnetic material, which needs to be about 10 microns long and less than 100 nanometres wide. Rather than reading the pattern of magnetic domains by moving a read head along the wire, in the magnetic racetrack the wire is held stationary and the magnetic domains are swept past a stationary read head by applying a current. There’s a lot of fascinating physics in the way a current can move the domain walls, but the attraction of this arrangement from a practical point of view is that there are no moving parts, and the data density can be very high. Parkin envisages an array of these nanowires, each bent into a U-shape, with the bottom of the U held against a read head and a write head laid down by conventional planar silicon technology. It’s this compatibility with existing manufacturing technology that Parkin sees as the compelling advantage of his idea, compared with other proposals for high-density data storage that depend on more exotic schemes using, for example, carbon nanotubes.
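Conceptually, the racetrack behaves like a shift register: the bits keep their order, the head stays put, and each current pulse marches the whole domain pattern one site along the wire. Here is a toy Python sketch of that behaviour – the class, its dimensions and the write/read-back protocol are my own illustration, not anything taken from Parkin’s patent:

```python
from collections import deque

# Toy model: a racetrack nanowire treated as a circular shift register.
# A "current pulse" shifts the whole domain pattern one site past a
# fixed read/write head. All names and sizes here are illustrative.

class RacetrackWire:
    def __init__(self, length=16):
        self.domains = deque([0] * length)  # magnetisation of each domain
        self.head = 0                       # the head never moves

    def pulse(self):
        """One current pulse nudges every domain one site along the wire."""
        self.domains.rotate(1)

    def read(self):
        return self.domains[self.head]

    def write(self, bit):
        self.domains[self.head] = bit

wire = RacetrackWire()
data = [1, 0, 1, 1, 0, 0, 1, 0]

# Write: put a bit down at the head, then pulse it along the wire.
for bit in data:
    wire.write(bit)
    wire.pulse()

# Read back: pulse until the first bit comes round to the head again,
# then read one bit per pulse.
for _ in range(len(wire.domains) - len(data)):
    wire.pulse()
readback = []
for _ in range(len(data)):
    readback.append(wire.read())
    wire.pulse()

print(readback)  # the original bit pattern, in order
```

A real wire isn’t a closed loop, of course – the U-shaped geometry would let the pattern be shuttled back and forth past the head instead – but the shift-register logic, with data moving while the head stands still, is the essence of the scheme.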
This reminds me of a memory scheme proposed by Guy Wilson at Queen Mary College (London) back in the mid-eighties. His version involved moving sheets of electrical charge between layers of conducting (conjugated) molecules separated by insulating layers. His group showed that the charge could be made to hop between layers in the presence of an electric field, so the bits (sheets of charge) would be cycled through a stack of layers using electrical pulses. The scheme predates nanowires and the like by quite some time – I guess the world wasn’t ready for nanotechnology back then.