Chockalingam Muthian

Neuromorphic Computing - Third Generation AI



The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame.


A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called “brittleness” of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and common-sense understanding. Next-generation AI must be able to handle novel situations and abstraction in order to automate ordinary human activities.


Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world.


Neuromorphic Computing Research Focus


The key challenge in neuromorphic research is to match a human's flexibility and ability to learn from unstructured stimuli with the energy efficiency of the human brain. The computational building blocks within neuromorphic computing systems are logically analogous to neurons. Spiking neural networks (SNNs) are a novel model for arranging those elements to emulate the natural neural networks that exist in biological brains.


Each “neuron” in the SNN can fire independently of the others, and in doing so it sends pulsed signals to other neurons in the network that directly change the electrical states of those neurons. By encoding information within the signals themselves and their timing, SNNs simulate natural learning processes by dynamically remapping the synapses between artificial neurons in response to stimuli.
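To make the mechanics concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the simplest spiking-neuron model used in SNN research. The constants and the input signal are invented for illustration; real SNNs use richer neuron dynamics and connect many such units together.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking neuron, with
# illustrative constants; real SNN models use richer neuron dynamics.
import numpy as np

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train produced by a single LIF neuron.

    Each step, the membrane potential leaks toward zero, integrates the
    incoming current, and emits a spike (1) when it crosses the threshold,
    after which it is reset.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # leaky integration
        if potential >= threshold:               # fire when threshold is crossed
            spikes.append(1)
            potential = reset                    # reset after a spike
        else:
            spikes.append(0)
    return spikes

# A noisy input drive: the neuron's output encodes information in the timing
# of its spikes rather than in a single continuous activation value.
rng = np.random.default_rng(0)
current = 0.3 + 0.2 * rng.random(50)
print(simulate_lif(current))
```

The important contrast with a conventional artificial neuron is that the output is a train of discrete spikes whose timing carries information, rather than a single continuous activation.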


Producing a Silicon Foundation for Brain-Inspired Computation


To provide functional systems for researchers to implement SNNs, Intel Labs designed Loihi, its fifth-generation self-learning neuromorphic research test chip, which was introduced in November 2017. This 128-core design is based on a specialized architecture that is optimized for SNN algorithms and fabricated on 14nm process technology. Loihi supports the operation of SNNs that do not need to be trained in the conventional manner of a convolutional neural network, for example.


The Loihi chip includes a total of some 130,000 neurons, each of which can communicate with thousands of others. Developers can access and manipulate on-chip resources programmatically by means of a learning engine that is embedded in each of the 128 cores. Because the hardware is optimized specifically for SNNs, it supports dramatically accelerated learning in unstructured environments for systems that require autonomous operation and continuous learning, with extremely low power consumption, plus high performance and capacity.
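The on-chip learning engine works with local learning rules rather than the offline backpropagation used to train conventional deep networks. The snippet below is not Loihi's programming interface; it is a plain-Python sketch of spike-timing-dependent plasticity (STDP), one representative local rule in which a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic neuron and weakened when the order is reversed. All constants are invented for illustration.

```python
# Illustrative spike-timing-dependent plasticity (STDP) update in plain Python.
# This is not the Loihi programming interface; it only sketches the kind of
# local, per-synapse learning rule that an on-chip learning engine can apply.
import math

def stdp_update(weight, t_pre, t_post,
                a_plus=0.05, a_minus=0.055, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Adjust one synaptic weight from the relative timing of two spikes.

    If the presynaptic spike precedes the postsynaptic spike, the synapse is
    strengthened; if it follows, the synapse is weakened. The change decays
    exponentially with the time difference.
    """
    dt = t_post - t_pre
    if dt > 0:       # pre before post -> potentiation
        weight += a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post before pre -> depression
        weight -= a_minus * math.exp(dt / tau)
    return min(max(weight, w_min), w_max)

# Example: a presynaptic spike 5 ms before the postsynaptic spike strengthens
# the synapse; the reverse ordering weakens it.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # > 0.5
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # < 0.5
```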


Advancing Neuromorphic Computing as a Cross-Disciplinary Challenge


Neuromorphic computing develops at the intersection of diverse research disciplines, including computational neuroscience, machine learning, microelectronics, and computer architecture, among others. Intel Labs has established the Intel Neuromorphic Research Community, a collaborative research effort that brings together academic, government, and industry entities to work on complementary architectures, tools, and approaches that enable neuromorphic computing as a whole.


The community works to abstract principles of neuroscience and adapt them to practical computational technology. For example, producing more advanced SNN algorithms is a key area of focus, including the development of programming models and tools. In particular, it drives experimentation and development with the Loihi research chip, including applications to solve real-world problems and mechanisms for interfacing systems based on SNNs with external data and computing systems.


Probabilistic Computing Research Focus


The fundamental uncertainty and noise inherent in natural data are a key challenge for the advancement of AI. Algorithms must become adept at tasks based on natural data, which humans handle intuitively but computer systems struggle with.


Having the capability to understand and compute with uncertainties will enable intelligent applications in diverse AI domains. For example, in medical imaging, uncertainty measures can be used to prioritize which images a radiologist needs to review and to highlight regions of an image according to how uncertain the model is about them. In the case of a smart home assistant, an agent can ask the user clarifying questions to better understand a request when there is high uncertainty in intent recognition.
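As a rough illustration of how a single uncertainty measure might drive both behaviours, the sketch below uses predictive entropy over class probabilities; the probabilities, class labels, and thresholds are invented for the example.

```python
# Sketch of using predictive entropy as the uncertainty measure that drives
# the behaviours above; all probabilities and thresholds are made up.
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a class-probability vector (higher = more uncertain)."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

# Medical imaging: rank scans so the radiologist sees the uncertain ones first.
scans = {"scan_a": [0.97, 0.02, 0.01],   # confident prediction
         "scan_b": [0.40, 0.35, 0.25]}   # ambiguous prediction
review_order = sorted(scans, key=lambda s: predictive_entropy(scans[s]), reverse=True)
print(review_order)  # scan_b comes first

# Smart assistant: ask a clarifying question when intent recognition is unsure.
intent_probs = [0.48, 0.45, 0.07]
if predictive_entropy(intent_probs) > 0.8:   # threshold chosen for illustration
    print("Did you mean to set an alarm or a timer?")
```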


In the autonomous vehicle domain, the systems piloting a car have many tasks that are well suited to conventional computing, such as navigating along a GPS route and controlling speed. The current state of AI enables these systems to recognize and respond to their surroundings, such as avoiding a collision with an unexpected pedestrian.


To advance those capabilities into the realm of fully autonomous driving, however, the algorithms must incorporate the type of expertise that humans develop as experienced drivers. Sensors such as GPS and cameras exhibit uncertainty in their position estimates. The ball that children are playing with in a nearby yard could roll into the street, and one of the kids might decide to chase it. It is wise to be wary of an aggressive driver in the next lane. In these cycles of perception and response, both the inputs and the outputs carry a degree of uncertainty. Decision making in such scenarios depends on perceiving and understanding the environment well enough to predict future events and choose the correct course of action, and those perception and understanding tasks must account for the uncertainty inherent in them.


Managing and Modelling Uncertainty


Probabilistic computing generally addresses problems of dealing with uncertainty, which is inherently built into natural data. There are two main ways uncertainty plays a role in AI systems:


  • Uncertainty in perception and recognition of natural data. The contributing sources include input uncertainty arising from hardware sensors and the environment, as well as uncertainty in the recognition model itself, caused by the disparity between the training data and the data being recognized.

  • Uncertainty in understanding and predicting dynamic events. Human movement and intent prediction is one example where such uncertainty is exhibited. Any agent trying to predict such dynamic events needs to model human intent and understand the uncertainties in the model. Observations can then be used to continuously reduce those uncertainties for efficient intent and goal prediction, as the sketch after this list illustrates.
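Below is a minimal sketch of that second point, assuming a one-dimensional Gaussian belief (for example, a tracked pedestrian's position) and Kalman-style updates: each noisy observation pulls the estimate toward the measurement and shrinks its variance, which is the sense in which observations continuously reduce uncertainty. All numbers are illustrative.

```python
# Minimal 1-D Bayesian (Kalman-style) update, assuming Gaussian noise, to show
# how successive observations reduce the uncertainty in a predicted quantity
# such as a tracked pedestrian's position. All numbers are illustrative.
def gaussian_update(mean, var, measurement, meas_var):
    """Fuse a Gaussian belief (mean, var) with a noisy measurement."""
    gain = var / (var + meas_var)          # how much to trust the measurement
    new_mean = mean + gain * (measurement - mean)
    new_var = (1.0 - gain) * var           # variance always shrinks
    return new_mean, new_var

belief = (0.0, 4.0)                        # initial guess: position 0 m, variance 4 m^2
for z in [1.2, 0.9, 1.1]:                  # noisy position measurements (meas_var = 1 m^2)
    belief = gaussian_update(*belief, measurement=z, meas_var=1.0)
    print(f"mean={belief[0]:.2f}  var={belief[1]:.2f}")
```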

Key problems in this area revolve around efficiently characterizing and quantifying uncertainty, incorporating that uncertainty into computations and outcomes, and storing a model of those interacting uncertainties with the corresponding data.


One implication of the fact that outputs are expressed as probabilities, rather than deterministic values, is that all conclusions are tentative and associated with specific degrees of confidence. To extend the autonomous driving example above, the children’s ball disappearing from view or increasingly erratic behavior by the aggressive driver might increase confidence that such a potential hazard will require a response.


In addition to enabling intuition and prediction in AI, probabilistic methods can also be used to impart a degree of transparency to existing AI recognition systems, which tend to operate as black boxes. For example, today's deep-learning engines output a result without a measure of uncertainty. Probabilistic methods can augment such engines to output a principled uncertainty estimate along with the result, making it possible for an application to judge the reliability of the prediction. Making uncertainty visible helps to establish trust in the AI system's decision making.
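One common, though not the only, way to retrofit such an estimate onto an existing classifier is to query an ensemble of models (or a dropout-enabled network several times) and treat the disagreement between the predictions as the uncertainty. The sketch below uses stand-in "models" and an invented threshold purely for illustration.

```python
# Retrofit an uncertainty estimate onto a classifier by querying an ensemble
# and reporting the spread of its predictions. The "models" below are
# stand-ins; in practice they would be trained networks.
import numpy as np

def predict_with_uncertainty(models, x):
    """Average the ensemble's class probabilities and report their spread."""
    probs = np.stack([m(x) for m in models])       # shape: (n_models, n_classes)
    mean_probs = probs.mean(axis=0)                # the usual prediction
    uncertainty = probs.std(axis=0).max()          # disagreement across members
    return mean_probs, uncertainty

# Hypothetical ensemble members; one member disagrees for this input.
models = [lambda x: np.array([0.90, 0.10]),
          lambda x: np.array([0.85, 0.15]),
          lambda x: np.array([0.30, 0.70])]
mean_probs, uncertainty = predict_with_uncertainty(models, x=None)
print(mean_probs, uncertainty)
if uncertainty > 0.2:                              # illustrative threshold
    print("Low-confidence prediction: route to a human or request more data.")
```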


Whereas deterministic processes have predictable, repeatable outcomes, probabilistic ones do not, because of random influences that cannot be known or measured. This process of incorporating the noise, uncertainties, and contradictions of natural data is a vital aspect of building computers capable of human (or super-human) levels of understanding, prediction, and decision-making. This work builds on prior applications of randomness in data analysis, such as the well-established use of Monte Carlo algorithms to model probability.
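For a concrete, if simplified, example of the Monte Carlo style of reasoning mentioned above: sampling from assumed noise distributions on a car's speed and range estimates and propagating the samples through a simple braking model yields a probability that the vehicle cannot stop in time, rather than a single deterministic answer. The distributions and constants below are invented for illustration.

```python
# Tiny Monte Carlo sketch: propagate sensor noise through a simple model to
# estimate the probability that stopping distance exceeds the measured gap.
# The distributions and constants are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

speed = rng.normal(15.0, 1.0, n)        # estimated speed, m/s (noisy)
gap = rng.normal(25.0, 2.0, n)          # estimated distance to obstacle, m (noisy)
deceleration = 6.0                      # assumed braking deceleration, m/s^2

stopping_distance = speed**2 / (2 * deceleration)
p_insufficient = np.mean(stopping_distance > gap)
print(f"P(cannot stop in time) ~ {p_insufficient:.3f}")
```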


Enabling a Probabilistic Computing Ecosystem


In addition to its main thrust—dealing with incomplete, uncertain data—probabilistic computing depends for its success on being integrated collaboratively and holistically into the broader universe of computing technology. Intel Labs is helping to build the necessary bridges across entities in academia and industry through the Intel Strategic Research Alliance for Probabilistic Computing.


This research initiative is dedicated to advancing probabilistic computing from the lab to reality, by integrating probability and randomness into fundamental hardware and software building blocks. Drawing together and enabling research in these areas, the Alliance works toward engineering the capacities for perception and judgment to enable next-generation AI.

