Feb 18th, 2011
Micron reveals “Hyper Memory Cube” 3DIC Technology
A few weeks ago at the IEEE ISS meeting in Half Moon Bay, Mark Durcan, COO of Micron, commented that Micron is “sampling products based on TSVs” and that “mass production for TSV-based 3-D chips is slated for the next year or 18 months.”

In a recent interview with CNET News, Brian M. Shirley, vice president of DRAM Solutions at Micron, unveiled a new “hyper memory cube” technology that Micron claims offers a “20-fold performance increase while reducing the size of the chip and consuming about one-tenth of the power.”

Micron says its development addresses the so-called "memory wall" problem. Essentially, DRAM performance today is constrained by the capacity of the data channel that sits between the memory and the processor. No matter how much faster the DRAM chip itself gets, the channel typically chokes on the traffic. Systems are unable to take full advantage of new memory technologies because of this bottleneck – they need more bandwidth.

Micron reports that it is using TSV technology to stack memory on top of a controller chip (a “logic layer”). The on-chip controller is the key to delivering the performance boost: it allows a higher-speed bus from the controller chip to the CPU and means memory can be packed more densely in a given volume.


Micron uses through-silicon via (TSV) technology to stack memory on top of a controller chip (“logic layer”). The on-chip controller is the key to delivering the performance boost. (Source: Micron)

Shirley stated that Micron is currently working with networking and high-performance computing companies. “Performance needs are most direct in networking and cloud computing. One-hundred-gigabit Ethernet routers and switches and cloud computing servers require everything they can get….this is our way of giving them a fire hydrant.” Micron hopes to see the memory cube technology in server and networking markets as early as 2012, with significant volumes in 2013, and could then start to work its way toward the consumer space in 2015. Customers will reportedly also include major processor suppliers, although none have yet been named.

Yole has spoken to a couple of 3D memory experts to get their opinion on this announcement:

Paul Franzon of NC State offered that “Many have predicted that integrating a DRAM stack with a ‘controller’ logic chip is a significant potential ‘early’ market for 3DIC. In particular, it has the advantage that you don’t need someone else to design a processor to match your part, while still delivering significant advantages, for example when used in compute servers. While such designs do not realize the full potential of 3DIC/TSV technology, they do exercise the technology. Such steps are important as they start using the entire ecosystem for 3DIC design and fabrication, which to date has largely been a research exercise. Such an announcement by a major commercial vendor helps show that 3D chip stacking is likely to be real.”

Bob Patti, CTO of Tezzaron, who has been working on 3D memory for a decade, comments: “I think this is a step in the direction of 3D, but I think it’s more of a 2.5D-like solution. The use of a ‘controller’ to interface and aggregate simple memories is similar to other industry work at Elpida and Samsung. The announcement doesn’t really have enough information to be sure about the actual implementation. We can assume that the interface is of a wide-bus nature and this will increase the bandwidth. 512-bit wide-bus devices have been floating around, and I assume something like these is used. Using the basic memory device, a single die can produce 12.8 GBytes/s in page mode. This is the same amount as a x64 DDR3 DIMM.

“The limiting factor is definitely the physical I/O and the power/heat dissipation that can be tolerated. As Micron has shown, if the memory were directly attached to the host device, the loading is such that four devices could be interfaced to produce 4 × 12.8 GBytes/s, or 51.2 GBytes/s, of data transfer bandwidth. Of course, any device capable of using that much bandwidth would require significant power, and DRAM on the back of the host would not be workable. The host would have to be attached directly to a heatsink. A more likely scenario is that a memory stack would sit adjacent to the host.

“The use of a controller allows some customization of the DRAM stack without touching the DRAM itself. This is the only viable alternative. It may also be the only way to get interchangeability among vendors. That is one of the key reasons that Tezzaron has a controller layer. Of course, we go much further in exploiting the controller to improve latency, reliability and power. Our latest version, now in development, provides open-page data at 288 GBytes/s and has a random access time of <10 ns.”

“I still question the application space for the Micron part. I can see this as an LPDRAM step beyond LPDDR2. It makes good sense there. The temperature envelope is compatible with backside mounting, and the low-capacitance interface would drive down the power. For high-performance apps, a high-latency wide memory requiring an expensive interposer doesn’t seem to be a good choice, although it may be much better than doing nothing to address the memory bottleneck.”
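Patti's bandwidth figures can be sanity-checked with simple bus arithmetic. The sketch below is our own back-of-the-envelope check, not part of the announcement: it assumes the 12.8 GBytes/s per-die figure corresponds to the same peak rate as a 64-bit DDR3-1600 interface (1600 MT/s), since the article does not give the wide-bus die's actual clocking.

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# Assumption (ours, not Micron's): 12.8 GBytes/s matches the peak rate
# of a 64-bit DDR3-1600 DIMM interface at 1600 MT/s.

def bus_bandwidth_gbytes(width_bits, transfers_per_sec):
    """Peak bandwidth in GBytes/s for a simple parallel bus."""
    return width_bits / 8 * transfers_per_sec / 1e9

# One x64 DDR3-1600 DIMM: 64 bits/transfer at 1600 MT/s
ddr3_dimm = bus_bandwidth_gbytes(64, 1600e6)
print(ddr3_dimm)          # 12.8 GBytes/s, matching one wide-bus die

# Four dies directly interfaced to the host, as Patti describes
stack_of_four = 4 * ddr3_dimm
print(stack_of_four)      # 51.2 GBytes/s aggregate
```

This reproduces the article's 12.8 GBytes/s per die and the 4 × 12.8 = 51.2 GBytes/s aggregate; the helper function name is illustrative only.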



©2007 Yole Développement. All rights reserved.
Yole Développement: Le Quartz, 75 cours Emile Zola, 69100 Villeurbanne, France. TEL: (33) 472 83 01 80 FAX: (33) 472 83 01 83 E-Mail: info@yole.fr