A new type of 3D computer chip that combines two cutting-edge nanotechnologies could dramatically increase the speed and energy efficiency of processors, according to a new study.
Today's chips separate memory (which stores data) and logic circuits (which process data), and data is shuttled back and forth between these two components to carry out operations. But due to the limited number of connections between memory and logic circuits, this is becoming a major bottleneck, particularly because computers are expected to deal with ever-increasing amounts of data.
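To make the bottleneck concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is invented for illustration and does not come from the study; the point is only that when the connections between memory and logic are slow relative to the logic itself, moving data dominates the total time.

```python
# Rough, illustrative estimate of the memory-logic bottleneck described above.
# All numbers below are hypothetical; they are not measurements from the study.

DATA_BYTES = 1e9            # 1 GB of data to process
BUS_BANDWIDTH = 20e9        # bytes/second the memory-logic connections can carry (assumed)
COMPUTE_RATE = 200e9        # bytes/second the logic could process if fed instantly (assumed)

transfer_time = DATA_BYTES / BUS_BANDWIDTH   # time spent shuttling data back and forth
compute_time = DATA_BYTES / COMPUTE_RATE     # time spent actually processing it

print(f"transfer: {transfer_time:.3f} s, compute: {compute_time:.3f} s")
# With these made-up numbers, the chip spends about 10 times longer moving data
# than computing on it, which is the sense in which the connections, not the
# logic circuits, set the speed limit.
```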
Previously, this limitation was masked by the effects of Moore's law, which says that the number of transistors that can fit on a chip doubles every two years, with an accompanying increase in performance. But as chip makers hit fundamental physical limits on how small transistors can get, this trend has slowed.
The new prototype chip, designed by engineers from Stanford University and the Massachusetts Institute of Technology, tackles both problems simultaneously by layering memory and logic circuits on top of each other, rather than side by side.
Not only does this make efficient use of space, but it also dramatically increases the surface area for connections between the components, the researchers said. A conventional logic circuit would have a limited number of pins on each edge through which to transfer data; by contrast, the researchers were not restricted to using edges and were able to densely pack vertical wires running from the logic layer to the memory layer.
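The gain comes from simple geometry: edge pins scale with a chip's perimeter, while vertical wires scale with its area. The toy comparison below uses hypothetical, round-number pin and wire spacings (not figures from the prototype) to show the difference.

```python
# Toy comparison of edge pins (2D, side-by-side layout) versus vertical wires
# (3D, stacked layout). The dimensions and pitches are hypothetical round
# numbers chosen for illustration, not measurements from the prototype chip.

CHIP_EDGE_MM = 10.0        # assume a 10 mm x 10 mm logic block
PIN_PITCH_MM = 0.1         # one edge pin every 0.1 mm (assumed)
VIA_PITCH_MM = 0.1         # one vertical wire every 0.1 mm in each direction (assumed)

edge_pins = 4 * int(CHIP_EDGE_MM / PIN_PITCH_MM)        # pins along all four edges
vertical_wires = int(CHIP_EDGE_MM / VIA_PITCH_MM) ** 2   # wires over the whole face

print(edge_pins, vertical_wires)   # 400 vs 10,000 with these assumptions
# Edge connections grow with the perimeter, vertical wires with the area,
# so stacking the layers multiplies the number of "bridges" between them.
```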
"With separate memory and computing, a chip is almost like two very populous cities, but there are very few bridges between them," study leader Subhasish Mitra, a professor of electrical engineering and computer science at Stanford, told Live Science. "Now, we've not just brought these two cities together — we've built many more bridges so traffic can go much more efficiently between them."
On top of this, the researchers used logic circuits constructed from carbon nanotube transistors, along with an emerging technology called resistive random-access memory (RRAM), both of which are much more energy-efficient than conventional silicon technologies. This is important because the enormous amount of energy needed to run data centers is another major challenge facing technology companies.
"To get the next 1,000-times improvement in computing performance in terms of energy efficiency, which is making things run at very low energy and at the same time making things run really fast, this is the architecture you need," Mitra said.
While both of these new nanotechnologies have inherent advantages over conventional, silicon-based technology, they are also integral to the new chip's 3D architecture, the researchers said.
The reason today's chips are 2D is that fabricating silicon transistors onto a chip requires temperatures of more than 1,800 degrees Fahrenheit (1,000 degrees Celsius), which makes it impossible to layer silicon circuits on top of each other without damaging the bottom layer, the researchers said.
But both carbon nanotube transistors and RRAM are fabricated at temperatures below 392 degrees Fahrenheit (200 degrees Celsius), so they can easily be layered on top of silicon without damaging the underlying circuitry. This also makes the researchers' approach compatible with current chip-making technology, they said.
Stacking many layers on top of each other could potentially lead to overheating, Mitra said, because the top layers would sit far from the heat sinks at the base of the chip. But, he added, that problem should be relatively simple to engineer around, and the increased energy efficiency of the new technology means less heat is generated in the first place.
To demonstrate the benefits of its design, the team built a prototype gas detector by adding another layer of carbon nanotube-based sensors on top of the chip. The vertical integration meant that each of these sensors was directly connected to an RRAM cell, dramatically increasing the rate at which data could be processed.
This data was then transferred to the logic layer, which ran a machine-learning algorithm that enabled it to distinguish among the vapors of lemon juice, vodka and beer.
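As a rough idea of what that classification step involves, here is a minimal sketch in Python. It assumes nothing about the actual algorithm used in the study: it simply matches a hypothetical set of sensor readings to the closest known vapor "fingerprint" (a nearest-centroid approach), with all numbers invented for illustration.

```python
# Minimal sketch of the kind of vapor classification the logic layer performed.
# The fingerprints and readings below are hypothetical values for illustration;
# the study's actual machine-learning algorithm is not described here.
import numpy as np

# Hypothetical average responses of three gas sensors to each vapor.
fingerprints = {
    "lemon juice": np.array([0.9, 0.2, 0.1]),
    "vodka":       np.array([0.3, 0.8, 0.2]),
    "beer":        np.array([0.4, 0.5, 0.7]),
}

def classify(reading):
    """Return the vapor whose fingerprint is closest to the sensor reading."""
    return min(fingerprints, key=lambda vapor: np.linalg.norm(reading - fingerprints[vapor]))

print(classify(np.array([0.35, 0.75, 0.25])))   # -> "vodka" with these made-up numbers
```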
The gas detector was just a demonstration, though, Mitra said; the chip is highly versatile and particularly well-suited to the data-heavy, deep neural network approaches that underpin current artificial intelligence technology.
Jan Rabaey, a professor of electrical engineering and computer science at the University of California, Berkeley, who was not involved in the research, said he agrees.
"These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction," he told MIT News.
The new study was published online July 5 in the journal Nature.