Computing and Storage at the Same Time! Macronix Creates Innovative Memory for AI Applications
Artificial intelligence (AI) applications have become ubiquitous in our daily lives. With the continuous evolution of sensors, 5G communications, edge computing and other technologies, systems ranging from large-scale data centers to automotive systems, factory automation equipment, medical healthcare devices, consumer electronics and small battery-powered Internet of Things (IoT) nodes have gradually evolved from merely adding digitization and networking functions to possessing different levels of “intelligence.”
These intelligent systems can turn the vast amounts of collected data into useful information, make quick decisions and respond appropriately in real time, or transmit the data to the cloud for in-depth analysis and higher-value insights. Whether they can realize their full potential depends not only on the computing power of the central processing unit (CPU) but also on the memory, which is no longer responsible only for simple data storage but has begun playing an increasingly important role in sharing computing tasks with the processor.
Donald Huang is director of the Product Marketing Division of Macronix International Co., Ltd. (Macronix), a global leader in integrated non-volatile memory solutions. Citing smart connected vehicles equipped with advanced driver assistance systems (ADAS) and autonomous driving functions as an example, he said that such systems, fitted with sensors such as cameras, lidar and radar, can generate up to several terabytes of data each day. Their memory therefore not only requires large storage capacity and high transmission bandwidth but must also meet stringent automotive specifications that demand very high standards of reliability and quality. Moreover, the memory needs to be integrated more closely with the various sensing devices and system computing units in order to optimize system operation and enable the smooth execution of various smart functions.
In addition, he emphasized that as AI applications move from the cloud to the edge, for edge devices such as vehicles that demand security and quick response, the support memory provides in real-time data processing and high-speed transmission is indispensable, while demand is also rising for solutions that reduce system power consumption and cost.
Mr. Huang pointed out that to meet the requirements of AI applications for high storage capacity, high-speed transmission and low latency, the role of memory components in the system is undergoing a paradigm shift. In the past, flash memory played a purely back-end storage role in the system, supporting the front-end DRAM and the processor’s embedded SRAM. However, as data volumes increase significantly and the requirements for transmission bandwidth and speed rise, he said, a brand-new memory architecture is needed to cope with these emerging applications.
He went on to explain that in response to the demand for big data, the current mainstream NAND and NOR flash memories have shifted from 2D to 3D structures to achieve higher storage density and lower cost. AI systems also expect flash memory to sit closer to the computing unit so as to support high-speed access and reduce the power consumed by data transfers. Today’s new generation of flash memory adds computing functions directly inside the device and is ready to move from behind the scenes to the front stage, working side by side with central processing units (CPUs) and graphics processing units (GPUs). “Macronix’s latest FortiX series of 3D NAND/NOR flash memory is such a ‘memory-centric’ innovative solution,” said Donald.
In addition to the high storage capacity, stable quality and reliability of 3D flash memory, the FortiX series products deliver the further advantages of real-time data processing, high transmission bandwidth and low power consumption. Donald said that the in-memory search (IMS) and computing-in-memory (CIM) functions of FortiX are computing functions built on digital and analog architectures. Where the traditional von Neumann architecture, which separates storage and computing, encounters latency and power-consumption bottlenecks, this new architecture greatly reduces data transfers between the memory and the CPU/GPU. This not only improves speed and reduces power consumption; it can also eliminate the need for components such as analog-to-digital converters, microcontrollers and GPUs, thus reducing overall system cost. FortiX is the brainchild of the Macronix team, which has spent years on its research and development. Related technical papers have been well received in recent years at global academic conferences such as the International Electron Devices Meeting (IEDM) and the International Solid-State Circuits Conference (ISSCC). Patent applications have also been filed for FortiX.
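Macronix has not published programming details for FortiX, but the data-movement saving behind computing-in-memory can be illustrated conceptually: the weights stay resident in the memory array and the multiply-accumulate result is produced where they are stored, so only the small inputs and results cross the bus. The sketch below is a minimal software model of that idea; all class and function names are hypothetical, not an actual FortiX API.

```python
# Conceptual model of the data-movement saving behind computing-in-memory
# (CIM). Hypothetical illustration only -- not a FortiX API.

class CimArray:
    """Weights stay resident in the memory array; only the input vector
    goes in and the small MAC results come out."""

    def __init__(self, weights):
        self.weights = weights  # rows of weights stored "in memory"

    def mac(self, inputs):
        # Multiply-accumulate performed inside the array: one output per
        # stored row, computed without moving the weights to the CPU.
        return [sum(w * x for w, x in zip(row, inputs))
                for row in self.weights]


weights = [[1, 2, 3], [4, 5, 6]]
inputs = [1, 0, 1]
array = CimArray(weights)
result = array.mac(inputs)  # -> [4, 10]

# Values crossing the memory bus:
#   von Neumann: every weight plus the inputs must reach the CPU (6 + 3 = 9)
#   CIM:         only inputs in and results out (3 + 2 = 5)
von_neumann_traffic = sum(len(row) for row in weights) + len(inputs)
cim_traffic = len(inputs) + len(result)
print(result, von_neumann_traffic, cim_traffic)
```

The toy numbers only make the qualitative point; in a real deep-neural-network layer the weight matrix dwarfs the inputs and outputs, which is why keeping it stationary saves so much bandwidth and power.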
Macronix’s FortiX series 3D flash memory supports computing-in-memory (CIM) / in-memory search (IMS) functions, which can share CPU computing tasks, improve system speed and reduce power consumption. (Image credit: Macronix)
The IMS function of FortiX can directly search and compare data (exact or proximity matching) against the data already in memory and supports parallel input. Donald explained that 3D NAND is suitable for applications with large data volumes (>64 Gb), while 3D NOR supports high-speed applications built on TCAM and Hamming-distance sorter architectures. These innovative architectures provide flexible options for object detection and image recognition, including applications such as lane recognition for smart vehicles. The CIM function supports bitwise logic operations and can perform the multiply-accumulate (MAC) operations required in deep-neural-network inference tasks. Donald added that in terms of performance, compared with a traditional von Neumann architecture system, FortiX IMS 3D NAND offers an internal search speed of up to 300 Gb/s, its queries per second (QPS) can be improved by more than 10 times, and its active power consumption is only about 300 mW, far below the roughly 1 W consumed by DRAM. In addition, after processing by the FortiX IMS 3D NAND accelerator, the data volume can be reduced to only 5% of the original, which greatly reduces the data movement in the subsequent operations of a von Neumann architecture system, thereby not only lowering power consumption and total cost but also improving performance significantly.
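The exact and proximity search that IMS performs can be modeled in plain software. The sketch below (hypothetical names; FortiX does this inside the flash array rather than on a host CPU) finds every stored word within a given Hamming distance of a query, in the spirit of a Hamming-distance sorter, and shows how returning only the matches shrinks the data handed back to the host.

```python
# Software model of in-memory proximity search via Hamming distance.
# Hypothetical illustration only; the real hardware compares all stored
# words in parallel inside the memory array.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two equal-width words."""
    return bin(a ^ b).count("1")

def proximity_search(stored, query, max_distance):
    """Return (index, distance) for every stored word within
    max_distance bits of the query, sorted nearest-first.
    max_distance = 0 degenerates to an exact (TCAM-style) match."""
    hits = [(i, hamming(word, query)) for i, word in enumerate(stored)]
    return sorted(((i, d) for i, d in hits if d <= max_distance),
                  key=lambda t: t[1])

stored = [0b1010, 0b1111, 0b0000, 0b1011]
matches = proximity_search(stored, 0b1010, max_distance=1)
print(matches)  # [(0, 0), (3, 1)] -- exact match first, then the 1-bit neighbor

# Only the matches leave the "array": 2 of 4 stored words here, and in a
# realistic corpus typically a small fraction of the data, which is the
# data-reduction effect the article describes.
```

Because only indices and distances of qualifying entries are returned, the host sees a small result set instead of the whole stored corpus, mirroring the reduction in downstream data movement described above.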
Macronix’s FortiX IMS 3D NAND accelerator greatly reduces the data movement of the subsequent operations of a von Neumann architecture system. (Image credit: Macronix)
However, Donald also emphasized that the new FortiX architecture differs from that of existing standard flash memory products. To bring the full advantages of FortiX into play, Macronix is cooperating very closely with customers in the early stage of product development, creating something akin to application-specific standard products for different applications and then aiming to evolve them into general-purpose products by following industry standards, including memory interfaces. Macronix is able to provide customers with excellent technical support because it not only has its own fab production line to ensure the quality and reliability of its memory products but also has a strong team of software and hardware engineers supporting the design stage. Although the FortiX series products have not yet been officially announced, Donald revealed that Macronix has been actively pursuing design cooperation with target application customers and is likely to bring end products to market in the next two to three years, and he is optimistic about the development prospects of this innovative technology in the AI era. Macronix welcomes manufacturers interested in FortiX to work together to explore and develop more potential applications of “memory-centric” solutions.
The post Computing and Storage at the Same Time! Macronix Creates Innovative Memory for AI Applications appeared first on EETimes.