Future Memory: The MemCon Panel. What comes after NAND Flash and DRAM?

Just announced: there’s a pre-lunch panel at MemCon covering future memories. Several new memory technologies aim to usurp the thrones of DRAM and NAND Flash memory. Will any of them succeed? Come hear the panel and find out.

Jim Handy, “The Memory Guy” and Chief Analyst at Objective Analysis, will moderate. The panelists include:

  • Christophe Chevallier, Vice President, NVM/Storage Division, Rambus
  • Barry Hoberman, Chief Marketing Officer, Crocus
  • Michael Miller, Vice President, Technology, Innovation and Systems Applications, MoSys

MemCon is a free event, taking place on September 18 at the Santa Clara Convention Center. Breakfast, lunch, and an early-evening Oktoberfest celebration are also free, so sign up here.

Posted in DRAM, Flash, MRAM, NAND

Using SSD controller technology as a differentiator: Kingston adds another data point with SSDNow Enterprise-class drives

Kingston E100 SSD

Memory and SSD vendor Kingston Technology has just announced enterprise-class SSDs called the SSDNow E100 in capacities of 100, 200, and 400 Gbytes. What I find interesting about this announcement is the emphasis on endurance and reliability (“10x improvements … over client SSDs”) and the use of special names for the endurance enhancement (DuraWrite) and the reliability enhancement (RAISE). What’s notable about these two terms is that they come from LSI SandForce, which supplies the SSD controller chip for these drives.

According to SandForce, DuraWrite “optimizes the number of program cycles to the flash effectively extending flash rated endurance by 20x or more when compared to standard controllers” and RAISE (Redundant Array of Independent Silicon Elements) “delivers an orders-of-magnitude improvement in drive reliability versus today’s best enterprise HDDs and SSDs. The result is single-drive RAID-like protection and recovery from a potentially catastrophic flash block failure – all while avoiding the inefficiencies of traditional RAID.”
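SandForce doesn’t publish how RAISE works internally, but the “single-drive RAID-like protection” description suggests parity spread across independent flash elements. Here’s a minimal, purely conceptual sketch (not SandForce’s actual algorithm) of how a single XOR parity element lets any one failed data element be rebuilt:

```python
# Conceptual sketch of RAID-like parity across flash elements, in the spirit
# of SandForce's RAISE description. The real mechanism is proprietary; this
# only shows how one XOR parity element recovers one failed element.

def make_parity(elements: list[bytes]) -> bytes:
    """XOR all data elements together to form the parity element."""
    parity = bytearray(len(elements[0]))
    for elem in elements:
        for i, b in enumerate(elem):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the one missing element from the survivors plus parity."""
    return make_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)
lost = data.pop(1)                 # simulate one flash element failing
assert rebuild(data, parity) == lost
```

As with conventional RAID 5, the cost is one parity element’s worth of capacity and the parity-update traffic on every write, which is presumably part of what the controller firmware manages.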

What I think is notable here is that Kingston, which has an excellent reputation in this industry already, is relying on SandForce controller technology and terminology to carry the water for the SSDNow drives’ endurance and reliability.

For more discussion of this topic, see:

Need yet another argument for designing your own SSD controller?

Add Hitachi Data Systems to the growing list of companies developing their own SSD controllers

How Skyera developed the 44Tbyte, enterprise-class Skyhawk SSD from the ground up. A System Realization story.

More on developing your own SSD controller chip. Is rolling your own right for you?

STEC’s MACH16 Slim 2.5-in SATA SSD requires small footprint, fits in small embedded spaces

Micron introduces Enterprise-class, 2.5-inch SSD with PCIe interface

Examining The SSD Industry – Researching The Controller or Processor

Posted in SSD, Storage

It’s what you do with the memory that counts. Case in point: the TI Stellaris M4F microcontrollers

NAND Flash wear leveling is an established error- and fault-management technique in SSDs, but Texas Instruments is touting on-chip Flash and EEPROM durability in a low-cost microcontroller: the TI Stellaris M4F series based on the ARM Cortex-M4F microprocessor core. There’s a 256Kbyte Flash memory on the TI Stellaris M4F microcontroller. Here are the relevant words TI uses to describe the Flash memory on the device:

“It can be hard to get excited about memory. It is often simply taken for granted. But changing to a TI 65nm process for the Stellaris LM4F family raises the products to a new level of reliability and integration. Borrowing the Flash technology that TI developed for use in automotive products, the Stellaris LM4F MCUs have extended memory durability by an order of magnitude beyond competition. The minimum number of times the flash memory on these MCUs can be erased and reprogrammed is as high as 100,000 cycles.

For most applications, this breakthrough eliminates any concern of wearing out the memory from re-flashing for data collection, configuration parameters or program modifications. More of the high-reliability Flash is also available for customer-written code because StellarisWare drivers are embedded in a small mask ROM on-chip.

All Stellaris LM4F MCUs have the StellarisWare binaries committed in on-chip ROM, including the peripheral drivers, the in-system programming routines, utilities such as CRC (cyclic redundancy check) algorithms, and AES (advanced encryption standard) tables. These APIs (application programming interfaces) let the programmer take full advantage of these well-proven services, routines and tables, while leaving all of the flash for customer and application-specific code.”

There’s also a 2Kbyte EEPROM on the TI Stellaris microcontroller, described like this:

“There are many other memory features on the MCUs, but one new memory type deserves special attention. The new Stellaris LM4F MCUs have 2K bytes of secure, on-chip EEPROM. EEPROM is normally used to store long-term variables that may even need to survive power outages and dead batteries. Since the implementation is interrupt-enabled, the integrated memory allows for the execution of code while writing values to nonvolatile memory (execute-while-write). The EEPROM use is architected using a built-in wear-leveling technique that ensures each location can be modified 500,000 times. If the data was re-written 100 times a day, the EEPROM would last nearly 15 years!”
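TI’s endurance claim is easy to sanity-check. A back-of-the-envelope sketch (assuming, as the quote implies, that the 500,000-cycle guarantee applies per wear-leveled location and using TI’s hypothetical 100-writes-per-day rate):

```python
# Back-of-the-envelope lifetime math for the quoted EEPROM figures.
ENDURANCE_CYCLES = 500_000   # guaranteed write cycles per location
WRITES_PER_DAY = 100         # TI's hypothetical update rate

lifetime_days = ENDURANCE_CYCLES / WRITES_PER_DAY
lifetime_years = lifetime_days / 365.25
print(f"{lifetime_years:.1f} years")  # → 13.7 years
```

The arithmetic comes out closer to 14 years than 15, but it’s the same ballpark: at any plausible update rate, wear-out stops being a design concern.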

Make no mistake. Memory is a very competitive part of any SoC or system design, no more and no less important than other components. Memory is not as simple as it might seem at first, and the right approach to providing memory in a design can make a big difference in its perceived value.

For more information on the TI Stellaris M4F microcontroller and its new eval board, see “TI Stellaris LaunchPad eval board features ARM Cortex-M4F. Intro price: $4.99. Get yours now.”

Posted in ARM

How ya gonna’ control that DDR4 SDRAM next year? The 28nm answer.

Cadence has just completed testing of its DDR4 SDRAM controller and PHY in two of the TSMC 28nm process technologies: 28HPM and 28HP. The DDR4 PHY exceeds the data rates needed to operate DDR4-2400 SDRAMs and is interoperable with DDR3 and DDR3L SDRAM devices as well. The same test chip included an all-digital mobile SDRAM PHY capable of DDR3-1600 and DDR3-1866 data rates as well as full-speed LPDDR2 SDRAM data rates. In addition, the test chip included a copy of the Cadence DDR4 SDRAM controller, so that too is now silicon proven.

Although the JEDEC DDR4 SDRAM specification is still in draft form, the final version is expected later this year. Production SDRAM devices based on the standard will follow shortly after the spec is finalized, as evidenced by early prototype announcements from Micron and Samsung. Expect the first products based on DDR4 memory to appear next year.

For more information on DDR4, see:

The DDR4 SDRAM spec and SoC design. What do we know now?

and

Memory to processors: ‘Without me, you’re nothing.’ DDR4 is on the way.

For more information on the Cadence announcement, see “Cadence Announces Industry’s First DDR4 Design IP Solutions Are Now Proven in 28nm Silicon.”

Posted in DDR, DDR3, DDR4, DRAM, LPDDR2, SDRAM

The top 21 things you probably didn’t know about Flash memory, from the Flash Memory Summit

Last week’s Flash Memory Summit ended with a session titled “The top 10 things you need to know about Flash memory today.” Richard Goering summarized the panel in his blog post titled “Flash Memory Panelists Challenge Conventional Thinking About NAND and SSDs,” but I thought it would be fun to compile that information into a longer list. So for your amusement, here are the top 21 things you probably didn’t know about Flash memory, based on presentations from Andy Tomlin, VP of Solid State Development at Western Digital; Jered Floyd, CTO of Permabit; and Jim Handy, Chief Analyst at Objective Analysis:

  1. It takes a minimum of two years to develop firmware for a new SSD and if you think it takes less, you’ll make poor decisions along the way. –Tomlin
  2. Flash is already cheaper than disk. –Floyd
  3. Data optimization is a requirement. –Floyd
  4. You shouldn’t build it yourself. –Floyd
  5. The end of the road is not in sight. –Floyd
  6. Enterprise is a quality grade, not a technology. –Floyd
  7. Flash device vendors will vertically integrate—or die. –Floyd
  8. Hybrid drives are nothing new. –Floyd
  9. Consumers will be all flash. –Floyd
  10. Data centers will all adopt Flash. –Floyd, Handy
  11. Flash will save the world. –Floyd
  12. NAND prices will not rebound until mid 2013. –Handy
  13. New controllers will enable enterprise-class SSDs based on TLC Flash. –Handy
  14. NAND-aware software is the next high-growth market. –Handy
  15. Ultrabooks will drive NAND Flash cache use in PCs and notebooks. –Handy
  16. The SSD market will split into multiple segments. –Handy
  17. Alternative (new) memories will displace very little NOR Flash. –Handy
  18. PC SSD revenues will decline with the adoption of Flash cache. –Handy
  19. Few will realize it when Flash reaches its scaling limit. –Handy
  20. The current SSD form factor and interface will eventually disappear. –Handy
  21. Flash will eventually scale to 10nm and then be replaced in 6 to 8 years. –Handy
Posted in Flash, SSD, Storage

Need yet another argument for designing your own SSD controller?

A Web site called legitreviews.com recently reviewed the ADATA XPG SX900 128Gbyte SSD and this review contains additional justification for seriously considering developing your own SSD controller for new storage products. The review starts off this way:

“ADATA is long known for their memory and storage products and as such, are well known even outside of tech circles. They’ve stepped a bit on the ledge with the marketing of their SX900 series of drives with ‘the most powerful SSD on Earth’ prominently displayed on the product page of their website. Being that this is yet another SandForce (LSI) SF-2281 drive with what appears to be relatively generic firmware, this appears to be more hype than substance. Still, we took it upon ourselves to give the 128GB version they sent us a good working over to see what all the fuss was about.”

The review quickly gets into a discussion of the drive’s internals and this is what the authors have to say:

“Once again we find ourselves looking upon the ever popular SandForce (LSI) SF-2281 SSD controller which nearly everyone that has been even remotely following SSDs should be familiar with. Employing real time compression technology, they are essentially able to turbocharge writes and post some impressive numbers. We also know that it does a nice job at wear-leveling, encryption and supports TRIM as well as idle garbage collection. There’s a good reason why this controller shows up in drives from nearly every manufacturer – it’s a solid performer.”
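The reviewers’ point about real-time compression “turbocharging” writes is easy to illustrate: when host data compresses, fewer bytes actually reach the flash, which raises apparent write speed and stretches endurance. A rough sketch, with an illustrative repetitive workload and zlib standing in for SandForce’s proprietary scheme:

```python
# Why on-the-fly compression helps both write speed and endurance:
# fewer bytes hit the flash. zlib and this workload are stand-ins for
# SandForce's proprietary, hardware-based compression.
import zlib

host_data = b"log entry: status=OK temp=41C\n" * 1000  # compressible workload
flash_data = zlib.compress(host_data)

ratio = len(flash_data) / len(host_data)
print(f"bytes written to flash: {ratio:.0%} of host writes")
# Every byte not written is a program/erase cost the flash never pays,
# which is one way a controller can stretch rated endurance.
```

The flip side, which reviewers regularly note about SF-2281 drives, is that incompressible data (already-compressed media, encrypted files) sees none of this benefit.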

And so the review’s conclusion should not come as a surprise:

“In the opening of the article, we made reference to ADATA’s marketing of the SX900 as the “most powerful SSD on Earth” and after spending some time banging on the drive do we feel that description is warranted? Nope. It’s really more or less equal to a fair number of drives on the market already.”

My conclusion: the SSD controller and the controller firmware are key differentiators for many—certainly for these reviewers. The Denali Memory Report has already written about several companies that are developing, or planning to develop, their own SSD controllers, and they are doing this work specifically to differentiate themselves in the marketplace.

For more discussion of this topic, see:

Add Hitachi Data Systems to the growing list of companies developing their own SSD controllers

How Skyera developed the 44Tbyte, enterprise-class Skyhawk SSD from the ground up. A System Realization story.

More on developing your own SSD controller chip. Is rolling your own right for you?

STEC’s MACH16 Slim 2.5-in SATA SSD requires small footprint, fits in small embedded spaces

Micron introduces Enterprise-class, 2.5-inch SSD with PCIe interface

Examining The SSD Industry – Researching The Controller or Processor

Posted in SSD, Storage

What does Intel’s choice of GDDR5 graphics DRAM for main memory with its Manycore Xeon Phi coprocessor say about SoC design?

George Chrysos discussed the Intel MIC (Many Integrated Core) architecture of the Knights Corner chip (officially called the Intel Xeon Phi coprocessor) at today’s Hot Chips 24 conference and disclosed that it uses GDDR5 graphics memory as the main memory for the manycore part. In fact, the Intel Xeon Phi coprocessor has several (an undisclosed number of) on-chip GDDR5 memory controllers. Now GDDR5 SDRAM is high-bandwidth memory generally found on graphics cards, not computing engines. GDDR5 memory supports extremely high per-pin data rates of several Gbits/sec using multi-GHz transfer clocks. These SDRAMs also cost more per Gbit than bulk SDRAM, but you’re paying for performance.

And memory performance is exactly what the Intel Xeon Phi coprocessor requires because it contains more than 50 x86 processor cores with an immense thirst for data. Slaking that thirst is why Intel selected GDDR5 graphics SDRAM.
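To see why GDDR5 is attractive here, consider the aggregate-bandwidth arithmetic. The numbers below are illustrative assumptions, not Intel disclosures (Intel did not reveal the controller count):

```python
# Illustrative aggregate-bandwidth estimate for a GDDR5-based design.
# Controller count and per-pin rate are assumptions, not Intel figures.
PIN_RATE_GBPS = 5.0      # GDDR5 per-pin data rate, Gbits/sec (typical circa 2012)
BUS_WIDTH_BITS = 32      # one GDDR5 channel
NUM_CONTROLLERS = 8      # hypothetical number of on-chip controllers

gbytes_per_sec = PIN_RATE_GBPS * BUS_WIDTH_BITS * NUM_CONTROLLERS / 8
print(f"{gbytes_per_sec:.0f} Gbytes/sec aggregate")  # → 160 Gbytes/sec
```

Even with these modest assumptions, the aggregate bandwidth lands far beyond what a couple of commodity DDR3 channels could deliver, which is the point of the GDDR5 choice.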

I think this choice has implications for many future manycore SoC designs. The Intel Xeon Phi coprocessor gives us a taste of things to come with other manycore SoC designs. Although the Intel Xeon Phi coprocessor is a homogeneous computing device, the same memory bandwidth issues will surround heterogeneous multicore SoC designs as well. However, I doubt that the solution for these designs will be the use of GDDR5 SDRAM, because that’s not a low-cost approach. Intel can afford to use expensive, high-performance SDRAM because the application, server-centric HPC (high-performance computing), warrants the expense. The Intel Xeon Phi coprocessor replaces even more expensive computing clusters. However, most SoCs will need a different sort of approach that doesn’t cost as much.

Wide I/O SDRAM is one possibility, but it requires a more mature 3D IC assembly infrastructure. The Hybrid Memory Cube Consortium represents another such approach, but its target application is HPC, the same application targeted by the Intel Xeon Phi coprocessor.

It’s a problem that will need solving.

For more information on the Intel Xeon Phi coprocessor, see “Zowie! More than 50 x86 cores on the Intel Knights Corner Manycore Coprocessor.”

For more information on Wide I/O SDRAM, see “Wide I/O. Don’t leave your SoC without it.”

For more information on the Hybrid Memory Cube, see:

Posted in DDR, HMC, Hybrid Memory Cube, SDRAM