Revolutionizing AI with Brain-Inspired Dynamic Neural Networks: A Joint Effort by Fudan, HKU, and CAS Researchers

A groundbreaking collaboration between researchers from Fudan University, the University of Hong Kong, and the Chinese Academy of Sciences has led to the development of a dynamic neural network that draws inspiration from the brain's efficient, associative style of computing. This innovative design, which reduces energy consumption by up to 93.3%, promises to revolutionize 2D and 3D visual processing capabilities in artificial intelligence.

The team has introduced a hardware-software co-designed solution: a semantic memory-based dynamic neural network. The network integrates memory and processing by associating incoming data with past experiences stored as semantic vectors. Unlike traditional AI models, whose structure is fixed and which cannot connect new inputs with previous knowledge, this dynamic neural network emulates the brain's ability to reconfigure itself on the fly.
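How such an association step might look in software is sketched below. This is a minimal illustration under assumed details, not the authors' implementation: the SemanticMemory class, the unit-normalized embeddings, and the cosine-similarity threshold are all inventions of the example.

```python
import numpy as np

class SemanticMemory:
    """Toy semantic memory: stores past experiences as vectors and
    retrieves the most similar one for a new input (hypothetical sketch)."""

    def __init__(self):
        self.keys = []    # stored semantic vectors ("past experiences")
        self.labels = []  # metadata attached to each experience

    def store(self, vec, label):
        self.keys.append(vec / np.linalg.norm(vec))
        self.labels.append(label)

    def associate(self, query, threshold=0.8):
        """Return the best-matching stored experience, or None if nothing
        is similar enough -- association by similarity, not by address."""
        if not self.keys:
            return None
        q = query / np.linalg.norm(query)
        sims = np.stack(self.keys) @ q   # cosine similarities to all memories
        best = int(np.argmax(sims))
        if sims[best] < threshold:
            return None
        return self.labels[best], float(sims[best])
```

A network built around such a memory can skip or shrink downstream computation whenever a confident match is found, which is the intuition behind the energy savings reported below.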

The design is physically realized with memristive ternary computing-in-memory (CIM) circuits and content-addressable memory (CAM) for semantic storage. The memristor, a device whose behavior resembles that of a synapse in the human brain, is central to the implementation. The researchers validated the design using 40-nanometer memristor macros on ResNet and PointNet++ architectures, demonstrating comparable accuracy when classifying images from the MNIST dataset and 3D point clouds from the ModelNet dataset, while reducing computational budgets by 48.1% and 15.9% and cutting energy consumption by 77.6% and 93.3%, respectively.
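For readers unfamiliar with CAM: a content-addressable memory is queried by content rather than by address, and in a ternary CAM each stored bit can be 0, 1, or a "don't care" wildcard. The snippet below is a generic software model of that lookup, not the paper's memristive circuit; the 'X' wildcard encoding is an assumption of the example.

```python
def tcam_search(table, query):
    """Software model of a ternary CAM: return the indices of all stored
    words that match the query, where an 'X' bit matches anything."""
    return [idx for idx, word in enumerate(table)
            if all(w in ("X", q) for w, q in zip(word, query))]

# Stored words use '0', '1', and 'X' (don't care).
table = ["10X1", "0XX0", "1101"]
print(tcam_search(table, "1011"))  # -> [0]: only '10X1' matches '1011'
```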

Published on August 14, 2024, in Science Advances, the study, titled "Semantic memory–based dynamic neural network using memristive ternary CIM and CAM for 2D and 3D vision," underscores the potential of this brain-inspired approach to overcome the limitations of conventional AI models.

The human brain’s remarkable computational efficiency lies in its dynamic restructuring, associative memory, and the synergistic combination of memory and information processing. It adaptively modifies its neural connections in response to various stimuli and tasks, a feature that static neural networks lack. This adaptability allows the brain to allocate resources more effectively, tackling diverse and evolving information with minimal energy consumption.

In contrast, most artificial neural networks have a fixed topology, which limits their adaptability and leads to inefficient resource allocation. Unlike the brain, which can associate unfamiliar information with past experiences, conventional computers rely on address-based storage and search, and cannot relate observations by similarity. Moreover, the brain processes information where it is stored, yielding low power consumption and high parallelism, in contrast to the energy-intensive, latency-prone von Neumann architecture of digital computers.
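The distinction is easy to make concrete. In the hypothetical comparison below, an address-based store misses on any key it has not seen exactly, while a similarity-based store still returns the closest known item; the two-dimensional feature vectors and labels are invented for illustration.

```python
import numpy as np

# Address-based storage: an unseen key simply misses.
store = {(1.0, 0.0): "horizontal edge", (0.0, 1.0): "vertical edge"}
print(store.get((0.9, 0.1)))   # -> None: no exact address match

# Similarity-based association: return the nearest stored pattern instead.
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = ["horizontal edge", "vertical edge"]
query = np.array([0.9, 0.1])
nearest = int(np.argmin(np.linalg.norm(keys - query, axis=1)))
print(labels[nearest])         # -> 'horizontal edge'
```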

The novel hardware-software co-designed approach adopted by the Fudan, HKU, and CAS researchers mimics three key aspects of brain computing. The software component, the semantic memory-based dynamic neural network (DNN), equips artificial networks with the brain's dynamic reconfigurability. By linking new information with past experiences and allocating computation on demand, the network outperforms static networks in many applications. Its adaptability lets it trade off accuracy and efficiency by adjusting computational budgets in real time, making it well suited to scenarios where resources fluctuate or are constrained.
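One familiar software analogue of such on-demand allocation, shown purely for intuition rather than as the paper's method, is an early-exit network: easy inputs leave at a shallow branch, hard inputs continue through deeper stages. The layer sizes, confidence measure, and threshold below are all invented for the sketch.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_forward(x, stages, classifiers, conf_threshold=0.9):
    """Run successive stages, exiting as soon as an intermediate
    classifier is confident -- so easy inputs cost less compute."""
    h = x
    for stage, clf in zip(stages, classifiers):
        h = np.tanh(stage @ h)        # one stage of processing
        probs = softmax(clf @ h)      # cheap intermediate prediction
        if probs.max() >= conf_threshold:
            return probs, "early exit"
    return probs, "full network"

rng = np.random.default_rng(0)
stages = [rng.standard_normal((16, 16)) for _ in range(3)]
classifiers = [rng.standard_normal((10, 16)) for _ in range(3)]
print(dynamic_forward(rng.standard_normal(16), stages, classifiers)[1])
```

Raising or lowering conf_threshold trades accuracy against computation, mirroring the real-time budget adjustment described above.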

On the hardware side, memristor-based CIM and CAM circuits physically realize the noise-resistant DNN and its associative semantic memory. Memristive CIM circuits perform matrix-vector multiplication through simple physical laws: Ohm's law multiplies each input voltage by a stored conductance, and Kirchhoff's current law sums the resulting currents, with the matrix weights physically stored in the memristor arrays. By executing multiplication where the weights are stored, these circuits merge computation and storage, sidestepping the von Neumann bottleneck and enabling high parallelism.
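To see why this works, note that each weight can be stored as a device conductance G; applying input voltages V makes Ohm's law compute the per-device products and Kirchhoff's current law sum them along each column, so the column currents equal GᵀV. The simulation below is an idealized model with arbitrarily chosen conductances, ignoring device noise and wire resistance.

```python
import numpy as np

# Idealized memristor crossbar: weights stored as conductances (siemens).
G = np.array([[1e-6, 2e-6],    # rows = input lines, columns = output lines
              [3e-6, 4e-6],
              [5e-6, 6e-6]])

V = np.array([0.2, 0.1, 0.3])  # input vector applied as row voltages

# Each device passes current G[i, j] * V[i] (Ohm's law); each column wire
# sums its devices' currents (Kirchhoff's law) -- an MVM in one step.
I = G.T @ V
print(I)  # column currents: [2.0e-06, 2.6e-06], proportional to the result
```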

This groundbreaking work not only pushes the boundaries of AI efficiency but also paves the way for more advanced, brain-inspired computing architectures. As AI continues to permeate various sectors, from autonomous vehicles to healthcare, the development of such dynamic and energy-efficient neural networks will be crucial for the future of artificial intelligence and its integration into our daily lives.

Source: https://www.jiqizhixin.com/articles/2024-08-26-16
