The 200 million amp 'low-power' memory


The way IBM describes its racetrack memory – yet another candidate for "memory of the future" – it's easy to be left with the impression that Big Blue is out on its own with this one. Stacey Higginbotham breathlessly opines: "IBM sure has some seriously crazy semiconductor researchers locked in its basement. These guys question everything when it comes to advancing chip technology."

Maybe IBM does. But it's not alone. What IBM claimed in the press release was that a memory 100 times denser than today's flash devices is on its way:

"The devices would not only store vastly more information in the same space, but also require much less power and generate much less heat, and be practically unbreakable; the result: massive amounts of personal storage that could run on a single battery for weeks at a time and last for decades."

Sounds great. When can I buy one? Not any time soon, if you look more closely at what IBM's release is based on. The journal Science has published a paper from Stuart Parkin's group at IBM's Almaden lab in San Jose that describes a tweak to a type of magnetic memory. It's a bit like a solid-state disk drive: you store bits magnetically, and the state depends on which way the stored field points, either forwards or backwards along a metal wire.

With a disk drive, you read the bits by passing them at high speed under a magnetic head. With domain-wall memory, you push the bits along until they pass by a circuit that can read the state of the field. This is where IBM came up with its moniker of 'racetrack' memory. In reality, it's a bit more like a sushi counter: you have to wait until the bits you want slide into view. In that respect, it works a bit like a NAND flash memory, albeit with much longer chains of bits. And so, the main application, if and when this type of memory appears, is going to be in media players where you simply want to suck bits from it in a more or less continuous stream.
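
The sushi-counter behaviour is easy to picture in code. The sketch below is purely illustrative – the RacetrackWire class and its methods are my own invention, not anything from IBM's paper – but it captures the essential point: there is one fixed read element, every bit has to be shifted past it, and streaming the whole wire costs exactly one shift per bit, which is why the access pattern suits media players.

```python
# Minimal, purely illustrative model of shift-register-style access
# (not IBM's design): domains sit on a wire and must be shifted past
# a single fixed read element, so sequential streaming is cheap per
# bit while random access pays for every shift in between.

class RacetrackWire:
    def __init__(self, bits):
        self.bits = list(bits)   # magnetic domains along the wire
        self.shifts = 0          # count of current pulses applied

    def shift(self):
        """One current pulse: move every domain one position along the wire."""
        self.bits = self.bits[1:] + self.bits[:1]
        self.shifts += 1

    def read_at_head(self):
        """The read element only ever sees the domain sitting in front of it."""
        return self.bits[0]

    def stream(self):
        """Sequential readout: one shift per bit, like draining a NAND page."""
        for _ in range(len(self.bits)):
            yield self.read_at_head()
            self.shift()

wire = RacetrackWire([1, 0, 1, 1, 0, 0, 1, 0])
print(list(wire.stream()), wire.shifts)  # all eight bits for eight shifts
```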

In principle, domain-wall devices sound great, and there are other groups pushing ahead on the research in addition to Parkin's, although he has a key patent on the technology. When Professor Russell Cowburn's group at Imperial College London reported making logic switches using this approach, they explained why the world might want to switch from silicon semiconductors to this new way of working. Because you make the circuits out of conductors instead of semiconductors, you get a lot more electrons into a small space. That makes it, in principle, easier to scale down in size. On top of that, you don't have to mess with all the complicated steps that silicon technology needs: you simply lay down tracks of a simple magnetic alloy.

Now the bad news. There are still some big problems with this type of memory. The one that is likely to trip it up is power. In the Science paper, Parkin's team describes a way of pushing the bits along a short wire – just three bits in a row – using current. Earlier devices used magnetic fields. By using pulses of electric current to push the domains along the wire instead, it should be easier to make the devices – chipmakers have been dealing with that kind of circuit for years, whereas controlling magnetic fields on a chip is far less well developed. But it's not just IBM doing it.

Professor Teruo Ono's team in Kyoto published a description of its work on current-driven domain walls in Applied Physics Express in January, and it name-checks quite a few groups working in the same area. What Ono's team points out clearly, and you find people with similar concerns in the Science news item as well, is that using current to push the bits along demands a lot of juice: 100 million amps per square centimetre. The wires are tiny, so you don't have to put the entire generating capacity of a small nation into the chip, but this is two orders of magnitude more than the magnetic memories that are going into production today. And even those consume more power than electronics engineers want, which is why Freescale Semiconductor's magnetic memories are going into satellites and not handsets.
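
A quick back-of-the-envelope sum shows why the tiny wires matter. The 200nm width and roughly 10nm thickness are taken from the dimensions mentioned later in this piece and treated here as assumptions, not figures from either paper:

```python
# Back-of-the-envelope check of the headline figure (assumed wire
# dimensions): 100 million amps per square centimetre sounds enormous,
# but through a nanowire cross-section it comes out in milliamps.

current_density = 1e8          # A/cm^2, the figure quoted for current-driven domain walls
width_cm = 200e-7              # 200 nm wire width (1 nm = 1e-7 cm), assumed
thickness_cm = 10e-7           # 10 nm wire thickness, assumed

cross_section = width_cm * thickness_cm       # cm^2
current = current_density * cross_section     # amps
print(f"{current * 1e3:.1f} mA per wire")     # roughly 2 mA
```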

Where the IBM team has gone further is in building a small memory; Ono's team was just demonstrating the ability to use current to push the magnetic domains around. Parkin's memory is even more power-hungry: 200 million amps per square centimetre. Which kind of makes a nonsense of IBM PR's claim that this type of memory is going to consume less power than today's devices, or even disk drives. And the bits are big. The wires are only a couple of hundred nanometres across, but the bits need 2000nm between them to stop them running into each other and disappearing – another knotty problem with this kind of memory. That makes each bit considerably larger than a cell in today's flash memories.
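
To put that bit size in context, here is a rough comparison with a flash cell. The 50nm process node and the classic 4F² NAND cell area are my assumptions for "today's flash", not numbers from either paper:

```python
# Rough footprint comparison (illustrative numbers only): a domain on a
# 200 nm-wide wire with 2000 nm between bits versus a NAND flash cell
# at an assumed 50 nm process node with a ~4F^2 cell.

racetrack_bit_area = 200 * 2000          # nm^2 per stored domain, from the figures above
flash_feature = 50                       # nm, assumed NAND process node
flash_cell_area = 4 * flash_feature**2   # nm^2, the classic 4F^2 NAND cell

print(racetrack_bit_area / flash_cell_area)  # ~40x larger per bit, before any stacking
```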

There is a way round the density problem: you can stack layers of wires on top of each other – the wires are less than 10nm thick. That is the design Cowburn favours. Not only does stacking improve density: if you read the layers in parallel, you could potentially get very high data rates out of these devices even if you read the bits out quite slowly – a technique that should reduce power consumption. IBM seems to favour a vertical configuration, so the read elements are all on the bottom of the chip. Strings like this have yet to be made, but Toshiba has pursued a similar idea with a flash memory it unveiled at last year's VLSI Symposium.
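
A quick sum with entirely hypothetical numbers – the layer count and per-wire read rate below are made up to illustrate the point, not taken from Cowburn or IBM – shows why parallel readout lets each wire run slowly and still feed a media player:

```python
# Hypothetical numbers showing how parallel readout across stacked
# layers turns very modest per-wire speeds into a usable stream.

layers = 64                   # assumed number of stacked wire layers read in parallel
per_layer_rate = 1e6          # assumed bits per second per layer (deliberately slow)

aggregate = layers * per_layer_rate
print(f"{aggregate / 1e6:.0f} Mbit/s")   # 64 Mbit/s despite slow individual wires
```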

Ono's team has worked on the energy problem, identifying an alternative to the regular nickel-iron wires normally used in this kind of research that can work at room temperature – another Japanese team slashed the energy needed by three orders of magnitude a few years back, but its device has to be cooled artificially. The cobalt-platinum alloy Ono's group used has a built-in magnetic field that should improve the way these nanowires behave. However, according to the paper, the current needed is still in that 100 million amps per square centimetre range.