Memory and processing being two separate pieces of hardware was never a big deal, until now. With the advent of artificial intelligence and its branches, machine learning and deep neural networks, information must jump back and forth between ‘processing’ and ‘memory’ ever more often, and the time and energy this shuttling consumes is, in principle, avoidable.

Scientists are investigating ways of unifying processing and memory, giving rise to a new branch of electronics called ‘in-memory computing’. A ‘deep neural network’ can have millions of nodes organised into layers that perform a computation on input data, and every pass through those layers means fetching parameters from memory. Any unification of logic and memory would therefore be a big help.
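
A rough Python sketch, with made-up numbers, shows why. Each layer of a neural network is essentially a grid of weights applied to its inputs, and on conventional hardware every one of those weights must travel from memory to the processor each time the layer runs.

    # Toy illustration (invented numbers): one neural-network layer.
    # On conventional hardware, every weight below is fetched from
    # memory each time the layer runs; in-memory computing aims to
    # cut exactly this traffic.

    def relu(x):
        return max(0.0, x)

    def layer(weights, biases, inputs):
        # One output per row: multiply, sum, add the bias, clip at zero.
        return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    weights = [[0.5, -1.0], [0.3, 0.8], [-0.2, 0.1]]  # 3 nodes, 2 inputs
    biases = [0.1, 0.0, -0.1]
    print(layer(weights, biases, [1.0, 2.0]))  # six weight fetches for one pass

A real network repeats this millions of times per input, which is why the traffic between memory and processor adds up.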

Bhaswar Chakrabarti, an assistant professor in the department of electrical engineering, Indian Institute of Technology, Madras, is among the scientists investigating how to unify processing and memory. “I have always been intrigued by ‘memory’,” Chakrabarti told Quantum, “both in humans and machines.” So, he embarked on designing a memory chip that can offer an “alternative computational paradigm” with higher performance and energy efficiency.

Among the various in-memory computing hardware, the one that Chakrabarti found particularly interesting was the ‘content-addressable memory’ (CAM). In an ordinary memory, you supply an address and get back the data stored there; a CAM works the other way round, taking a piece of content as the query and returning the address of every entry that matches it, a comparison it performs across all entries in parallel. CAM, therefore, “is a promising candidate for wide application in data-intensive, high-performance search operations”, he says.
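
The idea can be mimicked in software. Below is a minimal Python sketch, an illustration of the behaviour only, not of how the chip works internally: an ordinary memory maps an address to the data stored there, while a CAM takes the data as the query and returns every address holding a match.

    # Behavioural sketch of a CAM (illustration only).
    class ToyCAM:
        def __init__(self, words):
            self.words = words  # stored bit-patterns, indexed by address

        def search(self, pattern):
            # Return the addresses of all stored words equal to `pattern`.
            # The hardware does this comparison in parallel, in one step.
            return [addr for addr, word in enumerate(self.words)
                    if word == pattern]

    cam = ToyCAM(["1010", "0111", "1010"])
    print(cam.search("1010"))  # -> [0, 2]: content in, addresses out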

Chakrabarti began designing a CAM that would be useful in applications such as network routing, CPU caching and deep learning. His idea was to use a special type of transistor that is the in-thing in the electronics industry today: the ‘ferroelectric field-effect transistor’, or FeFET. (Transistors are components of electronic circuits that amplify or switch the flow of electricity.) These FeFETs are made using a compound called indium gallium zinc oxide (IGZO), and they “are being vigorously investigated for deployment in in-memory computing”. Chakrabarti sourced the FeFETs from the Fraunhofer Institute of Germany, which collaborated in the research effort.

Chakrabarti has designed a new CAM cell using FeFET transistors which, he says, “significantly improves density and energy efficiency compared with conventional ‘complementary metal-oxide-semiconductor’-based cells”. To illustrate, the design uses eight times fewer transistors than the conventional ones: a conventional CMOS-based CAM cell typically takes 16 transistors, whereas the new cell needs just two devices, one FeFET and one ordinary transistor.

Chakrabarti and his fellow scientists have published a paper on their research in Applied Electronic Materials. “Simulation shows that the proposed CAM has sufficient decision range to perform the search operations. We have also demonstrated the impact of retention degradation on the feasibility of the multi-bit operation in IGZO-based CAM cells. Our proposed CAM is highly promising for energy-efficient in-memory computing platforms, compared with other solutions, because of its simple one FeFET−one transistor architecture and multi-bit operation,” the paper says.
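
What the paper’s ‘multi-bit operation’ means can also be sketched in software, with the caveat that this is a behavioural illustration, not the authors’ circuit: each cell stores one of several levels rather than a plain 0 or 1, so fewer cells encode a word, and a stored word matches only when every cell agrees with the query.

    # Behavioural sketch of multi-bit CAM matching (illustration only).
    # Each cell holds a 2-bit level (0-3), so four cells encode an
    # 8-bit word with half the cells a binary CAM would need.

    def row_matches(stored, query):
        # The hardware analogue: a row's match line stays asserted
        # only if no cell in the row mismatches its query level.
        return all(s == q for s, q in zip(stored, query))

    cam_rows = [(0, 3, 1, 2), (3, 3, 0, 0)]  # two stored words
    query = (0, 3, 1, 2)
    print([addr for addr, row in enumerate(cam_rows)
           if row_matches(row, query)])  # -> [0]: only the first word matches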

Chakrabarti says work remains before a chip based on this design can be deployed in industry: memory arrays would need to be developed, and peripheral circuits tweaked to work with the new type of chip. Nevertheless, the new CAM is a breakthrough for electronics in the era of artificial intelligence.
