Professor Christophe Muller of Aix-Marseille University gave an excellent overview of non-volatile semiconductor memory as the third ISQED keynote this week. It’s a very good survey of today’s landscape and well worth discussing in a wider forum like this blog.
First, Professor Muller displayed this image, which gives a simple semiconductor memory taxonomy:
The simplest semiconductor memory split is between volatile memory (DRAM and SRAM) and non-volatile memory (everything else). Non-volatile memory then splits into memory based on charge storage (EEPROM, Flash, FRAM, and silicon dots) and resistive memory. Now, I’m not sure about including FRAM in this first group, because FRAM stores data in the polarization of ferroelectric crystal dipoles rather than in stored charge, but then I’m not a professor either. However, FRAM has been around for more than two decades and TI has been marketing relatively new versions of the very successful MSP430 microcontroller family with on-chip FRAM, including the recently announced and deliciously named Wolverine line of “ultra-low power” MCUs. So there’s clearly life in FRAM.
Professor Muller has labeled the second type of non-volatile memory as “resistance switching” memory. To me, this is currently the most interesting category in the taxonomy because all of the would-be contenders for taking the crowns from Flash and DRAM are in it: MRAM, PCM, and RRAM (or ReRAM, or memristor memory). The category spans memory cells based on wildly varying physical phenomena.
Which brings us to a refinement in the taxonomy. Here’s the ITRS classification tree for non-volatile memory, as classified by technology:
On the left are the “baseline” technologies already in wide use: NAND and NOR Flash. Although FRAM appears in the ITRS “prototypical” category, it’s been shipping in low-capacity memory products for decades (I first wrote about FRAMs in the 1980s for EDN magazine, and Ramtron announced an FRAM prototype at ISSCC in 1988). However, PCM (phase-change memory) and STT-MRAM (spin-torque-transfer magnetic RAM) are properly classified as prototypical. (See “Can the Magneticians finally succeed in getting MRAM launched as a viable, low-power ASIC NV memory?”) Not shown here are earlier MRAM generations such as the products currently sold by Everspin. (See “The return of magnetic memory? A review of the MRAM panel at the Flash Memory Summit.”) These existing MRAM products are not prototypical; they’re shipping in the millions of units per year.
Finally, there are the “emerging” non-volatile memory technologies including Redox (reduction-oxidation, a form of resistive memory), nano-mechanical (nanoscale relays), molecular (using a variety of physical phenomena to store data), and FeFETs (FETs where the gate oxide is replaced with a non-volatile ferroelectric layer). As the name implies, the emerging non-volatile memory technologies are somewhat further out.
Although Professor Muller’s taxonomy classifies semiconductor memory as it exists today, none of these technologies can yet meet his definition of the “universal memory,” which would in theory have the following characteristics (there’s a rough back-of-envelope check against this list just after it):
- Nanosecond read/write times
- Gbytes of capacity
- Infinite endurance (> 10^15 write/erase cycles)
- CMOS logic interface compatibility
- Low power consumption (< 1 pJ/bit) for read/write operations
- Greater than 10-year data retention without power
- Scalable with semiconductor process node advances
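To make that checklist a bit more concrete, here’s a minimal sketch that scores a candidate technology against it. The NAND Flash and DRAM figures below are my own order-of-magnitude assumptions for illustration, not vendor specifications, and the non-numeric criteria (CMOS compatibility and process scalability) are left out.

```python
# Back-of-envelope check of a memory technology against the "universal
# memory" checklist above. All candidate numbers are rough, order-of-
# magnitude assumptions for illustration only.

UNIVERSAL = {
    "write_time_ns": 10,        # nanosecond-class read/write
    "capacity_gbits": 8,        # gigabyte-class capacity -> multi-gigabit die
    "endurance_cycles": 1e15,   # "infinite" endurance
    "energy_pj_per_bit": 1.0,   # < 1 pJ/bit per access
    "retention_years": 10,      # > 10-year unpowered retention
}

def failed_criteria(candidate):
    """Return the list of universal-memory criteria a candidate fails."""
    failures = []
    if candidate["write_time_ns"] > UNIVERSAL["write_time_ns"]:
        failures.append("write speed")
    if candidate["capacity_gbits"] < UNIVERSAL["capacity_gbits"]:
        failures.append("capacity")
    if candidate["endurance_cycles"] < UNIVERSAL["endurance_cycles"]:
        failures.append("endurance")
    if candidate["energy_pj_per_bit"] > UNIVERSAL["energy_pj_per_bit"]:
        failures.append("energy/bit")
    if candidate["retention_years"] < UNIVERSAL["retention_years"]:
        failures.append("retention")
    return failures

# Assumed figures: NAND Flash is dense and retentive but slow and
# endurance-limited; DRAM is fast and durable but volatile.
nand_flash = {"write_time_ns": 1e5, "capacity_gbits": 64,
              "endurance_cycles": 1e4, "energy_pj_per_bit": 10,
              "retention_years": 10}
dram = {"write_time_ns": 10, "capacity_gbits": 8,
        "endurance_cycles": 1e16, "energy_pj_per_bit": 1,
        "retention_years": 0}

print("NAND Flash fails on:", failed_criteria(nand_flash))
print("DRAM fails on:", failed_criteria(dram))
```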
There’s currently no such animal as this universal memory, so system designers compromise on one or more of these characteristics depending on the application. NAND Flash memory is great for its low cost per bit and its process scalability (it’s currently the semiconductor manufacturing industry’s lithographic process driver), but its write times are slow compared to DRAM and SRAM, and its data-retention time is falling with each process-node advance because the number of electrons stored in each Flash memory cell drops with dimensional scaling (although this might change with 3D Flash structural advances). DRAM is great for speed and it’s reasonably scalable with respect to process technology, but it’s volatile. Consequently, more than a few researchers and entrepreneurs sense that there’s an available niche: a chance to displace the current king and queen of semiconductor memory, NAND Flash and DRAM.
At the moment, there’s a crowded race underway in the resistance-RAM arena. Here’s a logo slide supplied by Professor Muller of just some of the entrants in the resistance-RAM derby:
The winner(s) of this derby may get the rights to displace DRAM and Flash memory, depending on how close the resulting devices come to achieving all of the characteristics of Professor Muller’s universal memory. The prize for this derby is worth millions of dollars per year and will soon be worth billions of dollars per year, as indicated in this slide:
The remainder of Professor Muller’s keynote speech focused on the three leading challengers in the non-volatile memory derby: MRAM, PCM, and RRAM. MRAM offers SRAM-like speed, Flash-like data retention, and DRAM-like endurance. If and when STT-MRAM comes to market, it could displace both the RAM/Flash combos and the battery-backed RAM currently in use. Here’s a logo slide of some of the twenty or so companies in the MRAM race:
Note that Everspin on the left is currently far ahead of the others in commercializing MRAM, with millions of parts shipped. However, no company has yet announced STT-MRAM parts, and STT-MRAM is the type of MRAM that has a hope of reaching the bit densities of the current non-volatile memory king: NAND Flash.
PCM was off to an early start in this race. Numonyx was an early favorite to be the first to offer commercial PCM parts, and Samsung announced a PCM device in May 2010. (See “Samsung announces imminent release of a multichip module integrating DRAM and PCM for Smartphone applications.”) However, there are technical hurdles with the chalcogenide alloys currently employed as the working material in PCM, and not much has been heard from the PCM entrants lately. Certainly no new PCM product announcements have appeared in the past two years or so. Micron, a leading DRAM and Flash vendor, now owns Numonyx.
The big issue with PCM, said Professor Muller, is that the thermal-spike profiles needed to make the PCM chalcogenide alloys switch between the amorphous and crystalline states are somewhat difficult to control in practice. A fast, hot spike throws the material into the amorphous state by providing enough heat to liquefy some of the material and then quickly cooling the material, creating an amorphous solid. A slower, lower spike anneals the material into a solid crystalline state without entering the liquid state. The amorphous and crystalline states represent the two states of a binary bit. As you might expect, it takes some amount of energy to liquefy the alloy, making low-power operation somewhat challenging.
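To make the two programming profiles a bit more concrete, here’s a minimal sketch of the RESET and SET pulses a PCM controller would apply. The amplitudes, widths, and the 0/1 state convention are assumptions for illustration, not characterized values for any real chalcogenide device:

```python
# Illustrative PCM programming pulses. All amplitudes and durations below are
# assumed round numbers, not measured values for any real device.

def reset_pulse():
    """Short, high-amplitude pulse: melt the chalcogenide, then quench it
    quickly into the high-resistance amorphous state."""
    return {"amplitude_mA": 0.6, "width_ns": 50, "trailing_edge_ns": 5}

def set_pulse():
    """Longer, lower-amplitude pulse with a slow trailing edge: hold the
    material below its melting point long enough to crystallize (anneal)
    into the low-resistance state."""
    return {"amplitude_mA": 0.3, "width_ns": 300, "trailing_edge_ns": 100}

def program_bit(value):
    """Map a binary value to the pulse that produces the matching state
    (convention assumed here: 0 = amorphous/RESET, 1 = crystalline/SET)."""
    return reset_pulse() if value == 0 else set_pulse()

for bit in (0, 1):
    print(bit, program_bit(bit))
```

The asymmetry between the two pulses is the point: RESET needs a brief burst of enough power to liquefy the material, while SET needs a longer, cooler soak, which is why both write energy and write timing are harder to optimize than in a charge-storage cell.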
Professor Muller also mentioned the problem PCM devices have with thermal annealing of the chalcogenide alloy by ambient heat. As device geometries shrink at more advanced process nodes, it gets easier to anneal the bits out of a PCM device from ambient temperature alone, so retention time becomes more of an issue as the technology scales. Perhaps new structures or new materials might yet revamp PCM’s chances in the non-volatile memory derby.
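The retention argument is usually framed with an Arrhenius-style model, in which the time needed to thermally anneal (and therefore lose) a stored bit falls exponentially as temperature rises. Here’s a minimal sketch of that relationship; the activation energy and prefactor are assumptions chosen only to show the trend, not measured PCM parameters:

```python
import math

# Arrhenius-style retention estimate: t_retention ~ t0 * exp(Ea / (k * T)).
# Ea and t0 below are assumptions picked to illustrate the exponential
# temperature sensitivity, not characterized PCM values.

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K
EA_EV = 1.5                 # assumed activation energy, eV
T0_SECONDS = 1e-14          # assumed attempt-time prefactor, s

def retention_years(temp_celsius):
    temp_kelvin = temp_celsius + 273.15
    seconds = T0_SECONDS * math.exp(EA_EV / (K_BOLTZMANN_EV * temp_kelvin))
    return seconds / (3600 * 24 * 365)

for t in (25, 55, 85, 125):
    print(f"{t} degC: ~{retention_years(t):.2g} years")
```

With these assumed numbers, retention drops from thousands of years at room temperature to roughly a day at 125 °C, which is the shape of the problem scaling makes worse: smaller volumes of chalcogenide effectively lower the barrier to spontaneous crystallization.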
Then there’s RRAM, ReRAM, and memristors. HP put memristors on the memory map with the HP Labs announcement in 2008. Since then, HP has announced a commercial memory partner for the technology: Hynix, a leading DRAM and Flash semiconductor memory vendor. Memristor memory is based on the creation and destruction of conductive filaments in a thin-film insulating layer, usually made of some sort of metallic oxide. For example, HP’s memristor uses titanium oxide. One voltage causes the filaments to form by driving oxygen vacancies through the oxide, creating a conductive state. A higher or reverse voltage disrupts the filaments and reduces the conductivity. The binary bit is stored in the conductivity difference.
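For readers who want to see how this vacancy-drift picture turns into a circuit model, here’s a minimal sketch of the linear ion-drift memristor model from the 2008 HP Labs work: the device resistance is a weighted mix of a low-resistance (doped) region and a high-resistance (undoped) region, and the applied current drifts the boundary between them. The device parameters below are illustrative assumptions, not HP’s published figures:

```python
# Minimal linear ion-drift memristor model (after the 2008 HP Labs paper).
# Device parameters are illustrative assumptions.

R_ON = 100.0        # ohms, fully conductive (doped) state
R_OFF = 16_000.0    # ohms, fully insulating (undoped) state
D = 10e-9           # m, oxide film thickness
MU_V = 1e-14        # m^2/(V*s), assumed oxygen-vacancy mobility
DT = 1e-3           # s, simulation time step

def simulate(voltage_waveform, w0=0.1 * D):
    """Integrate dw/dt = mu_v * (R_on / D) * i(t); return the resistance trace."""
    w = w0
    resistances = []
    for v in voltage_waveform:
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance M(w)
        i = v / m
        w += MU_V * (R_ON / D) * i * DT            # boundary drifts with current
        w = min(max(w, 0.0), D)                    # keep the boundary inside the film
        resistances.append(m)
    return resistances

# A positive pulse grows the conductive region (SET); a negative pulse shrinks it (RESET).
trace = simulate([1.0] * 500 + [-1.0] * 500)
print(f"start {trace[0]:.0f} ohms, after SET {trace[499]:.0f} ohms, after RESET {trace[-1]:.0f} ohms")
```

Real devices are far less linear than this (threshold effects and nonlinear drift matter), but the sketch captures why a single two-terminal element can serve as both switch and storage cell.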
To date, there are no memristor-based parts on the market. The race is still being run.
It’s worth a bit of time to consider how Professor Muller sees these new memory technologies affecting system design. Here’s a very interesting slide that illustrates his thinking:
On the left is a highly simplified diagram showing how we partition systems today. There are large, identifiable blocks of cache SRAM and separate blocks of NOR Flash for code and NAND Flash for data. These days, the SRAM cache is usually on chip with the CPU and logic. For reasons of cost per bit, the NAND and NOR Flash memory is usually included in the system on separate, high-volume chips. With the right sort of memory (that is, memory with characteristics closer to those of the ideal universal memory), storage could move to be more intimately connected to the related blocks in the system. For example, multicore CPUs could have large blocks of non-volatile memory on the same chip, and on-chip caches could become non-volatile. Heterogeneous memory hierarchies consisting of DRAM, NOR Flash, and NAND Flash could disappear. All of these changes would have a large influence on future processor-based system design.
However, all of these changes await the creation of a commercially viable non-volatile memory technology that can compete with DRAM and Flash. There are many entrants to this derby and it’s both exciting and interesting to watch as the race is run.
I can see that NVRAM needs ultra-fast read to be valuable for integration on (for example) an MPU.
However, it seems to me that, for level-2 cache, the speed required for write could be somewhat slower; presumably the limits would be set by the size of any buffer RAM, the energy per bit, and the total energy required to complete the storage process at an (unscheduled) power-down? Have I missed something important?
George, I don’t think you missed anything. Read and write times for L2 cache could be slower. However, the biggest reason for slower L2 cache with SRAM has to do with the size of the cache. Larger-capacity caches are slower as much for address decoding and data multiplexing as for the speed of the SRAM cell. Changing out the memory cell technology isn’t the only speed consideration.
Thanks – I think we agree on this.
I was also trying to highlight the added impact on NVRAM write speed – that there is no point in NV unless we can control the stored state at power-down.
Many situations will benefit from a known state even when power-down is unplanned*, so the ability to support this condition would be a real “plus”. Stored energy will presumably place constraints on the number of bits that can be programmed once a power-down event is recognised.
*Just think how many UPSs (and their lead-acid batteries) could be saved.
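George’s stored-energy point is easy to put rough numbers on. Here’s a minimal back-of-envelope sketch; the hold-up capacitance, rail voltages, and per-bit write energy are assumptions chosen only to show the shape of the calculation:

```python
# Back-of-envelope: how many bits can be flushed to non-volatile storage on
# the charge left in a hold-up capacitor at power-down? All numbers are
# assumptions for illustration; controller overhead is ignored.

C_HOLDUP_F = 100e-6      # 100 uF hold-up capacitance (assumed)
V_START = 3.3            # rail voltage when power-fail is detected (assumed)
V_MIN = 1.8              # minimum voltage at which writes still complete (assumed)
WRITE_ENERGY_PJ = 10.0   # assumed energy per bit written, pJ

usable_energy_j = 0.5 * C_HOLDUP_F * (V_START**2 - V_MIN**2)
bits = usable_energy_j / (WRITE_ENERGY_PJ * 1e-12)

print(f"usable energy: {usable_energy_j * 1e3:.2f} mJ")
print(f"~{bits:.2e} bits (about {bits / 8 / 1e6:.1f} MB) could be committed before the rail collapses")
```

Even with these generous assumptions, the budget is finite, which is exactly why the per-bit write energy of a candidate NVRAM matters as much as its write speed for a graceful unscheduled power-down.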