In this case, the authors used a combination of aluminum oxide and titanium dioxide (Al2O3 and TiO2) to form a memristor. They started with “an exhaustive experimental search over a range of titanium dioxide compositions and layer thicknesses (from 5 nm to 100 nm)” and then paired that with similar adjustments to the thickness of aluminum oxide. The titanium dioxide layer influenced how readily a memristor could be generated at the desired locations, while the aluminum oxide layer influenced the consistency and strength of its operation.
The neural network was formed by linking traditional circuitry through a grid of wiring (technically a crossbar). Memristors were formed at each place the perpendicular wires crossed—first by placing the metal oxide layers at these locations and then by flowing current through to a ground.
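Electrically, a crossbar like this computes a vector-matrix product in analog: voltages applied to the rows, multiplied by the conductance of the memristor at each crossing, sum into currents on the columns. A minimal numerical sketch of that idea (the conductance and voltage values here are invented for illustration, not taken from the paper):

```python
import numpy as np

# Illustrative sketch of what a memristor crossbar computes (values are
# invented, not from the paper): each crossing holds a conductance G[i, j],
# and by Ohm's and Kirchhoff's laws the current collected on column j is
# the sum over rows of G[i, j] * v[i] -- a vector-matrix multiply in analog.
rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(12, 12))   # conductances of a 12x12 crossbar
v = rng.uniform(-0.2, 0.2, size=12)        # voltages applied to the rows

i_cols = G.T @ v                           # currents read out on the columns
```

Those summed currents play the role of a neural network layer's weighted sums, which is why the crossbar can take on the most computationally intense part of the job.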
The neural network was trained to identify three letters (V, N, and Z), including the possibility of single-pixel errors. After a single time through the training set, the network was able to successfully identify all three letters, although performance continued to improve with further experience. Several aspects of the underlying calculations were performed by the traditional hardware, but the memristors handled the most computationally intense work.
The system produced by the authors here involved only a 12-by-12 grid of memristors, so it’s pretty limited in capacity. But Robert Legenstein, from Austria’s Graz University of Technology, writes in an accompanying perspective that “If this design can be scaled up to large network sizes, it will affect the future of computing.”
That’s because there are still many challenges where a neural network can easily outperform traditional computing hardware—and do so at a fraction of the energy cost. Even on a 30 nm process, it would be possible to place 25 million cells in a square centimeter, with 10,000 synapses on each cell. And all of that would dissipate only about a watt.
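Working those numbers through (a back-of-the-envelope sketch using only the figures quoted above):

```python
# Back-of-the-envelope arithmetic using the figures quoted in the text.
cells_per_cm2 = 25_000_000        # 25 million cells per square centimeter
synapses_per_cell = 10_000        # 10,000 synapses on each cell
power_watts = 1.0                 # total dissipation of about a watt

synapses_per_cm2 = cells_per_cm2 * synapses_per_cell  # 2.5e11 synapses
watts_per_synapse = power_watts / synapses_per_cm2    # about 4e-12 W each
```

That comes to roughly 250 billion synapses per square centimeter at around 4 picowatts apiece, which is the sense in which the energy cost is a small fraction of what conventional hardware would need.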
Nature, 2015. DOI: 10.1038/nature14441 (About DOIs).