Neuromorphic computing, a better solution for Artificial Intelligence? – An interview with BrainChip

Deep Learning is a great technology, some would say revolutionary. Companies like Intel and NVIDIA are currently thriving on the computing needs of deep learning, and many startups are offering new ways to improve that computing. Most of these companies use the von Neumann architecture, but a new breed of computing players proposes to break away from current AI limitations. Neuromorphic companies are among the pioneers advocating for a new AI paradigm.

In its latest report, “Neuromorphic Sensing and Computing 2019”, Yole Développement (Yole) estimated that the neuromorphic computing market could rise from $69 million in 2024 to $5 billion in 2029 – and $21.3 billion in 2034. Applications such as smartphones, robotics, and smart homes will be impacted first.

BrainChip, a pioneering and leading player in that domain, has responded to Yole Développement’s questions on neuromorphic computing. Yohann Tschudi, PhD, Technology & Market Analyst, and Pierre Cambou, Principal Analyst, Imaging, at Yole Développement interviewed Roger Levinson, Chief Operating Officer at BrainChip.

Yohann Tschudi (YT): Could you please introduce BrainChip and its activities?

Roger Levinson (RL): BrainChip is a global technology company that has developed a revolutionary neural network processor that brings artificial intelligence to the edge in a way that existing AI technologies cannot. The solution is high-performance, small, ultra-low-power, and enables a wide array of edge capabilities that include local training, learning, and inference.

Revenue-wise, our company markets an event-based neural network processor that is inspired by the spiking nature of the human brain and implements the network processor in an industry-standard digital process. By mimicking brain processing, BrainChip has pioneered an event-domain neural network, called Akida, which is both scalable and flexible enough to address the requirements of edge devices. The event-domain processor supports both standard CNN networks and SNN networks. The specific demand for those devices is for sensor inputs to be analyzed directly at the point of acquisition rather than transmitted to the cloud or a datacenter for that purpose. Akida is designed to provide a complete ultra-low-power edge AI network for vision, audio, and smart transducer applications. The reduction in system latency such an approach provides yields a much faster response and a more power-efficient system that can reduce the large carbon footprint of using cloud-based datacenters.

YT: Can you define in a few words what neuromorphic computing is, and in what ways it differs from current “von Neumann” computing?

RL: Neuromorphic computing is based upon how the brain actually processes information, by utilizing neurons, synapses, and a data format called “spikes”. Spiking neurons mimic the way the brain operates better than ‘perceptron’-style neurons, which use floating-point values and are currently the reference neuron model in the AI industry. All neural networks consist of some form of simulation or emulation of ‘neural cells’ and the weighted connections between those cells. In the neuromorphic approach we have spiking neurons, and each connection between neural cells carries a memory, which we call a synapse. Neurons perform a spatial and temporal integration of synaptic inputs and produce a spike, or series of spikes, when the integrated sum of input values exceeds a threshold value.

In the brain, most information is transmitted this way, in the form of spikes or series of spikes; there are also direct electrical connections between neurons. Spikes are short bursts of energy that indicate the occurrence of an event. The value stored in a synapse is released when a spike is received. A digital spike is a value at a given time, triggered by an event. This event can be any occurrence in the physical world, such as the light-to-dark transition at the edge of an object in an image. Spikes carry information in their spatial distribution, their intensity, and the time when they occur. In biological neurons the synaptic weight is an analog potential, and the integration of these potentials causes the membrane potential of the neural cell to increase or decrease. Both biological and digital neuron spikes are always binary: there is either a spike or no spike. In Akida’s digital neuron emulation, all these potentials are simulated as integer values, but the aspect of time is not lost. Event-based processing is an essential part of neural efficiency, and spike timing is central to its learning mechanism.
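The integrate-and-fire behavior described above can be illustrated with a minimal sketch. This is a generic, hypothetical model using integer-valued potentials and binary spikes, not BrainChip's actual Akida implementation; the function and parameter names are illustrative only.

```python
# Minimal integrate-and-fire neuron sketch (illustrative, not Akida's design):
# integer synaptic weights accumulate into a membrane potential; when the
# potential crosses a threshold, the neuron emits a binary spike and resets.

def run_neuron(spike_trains, weights, threshold):
    """spike_trains: per-timestep tuples of binary input spikes, one per synapse."""
    potential = 0          # integer membrane potential
    output_spikes = []
    for inputs in spike_trains:
        # each incoming spike releases its synapse's stored weight
        potential += sum(w for w, s in zip(weights, inputs) if s)
        if potential >= threshold:
            output_spikes.append(1)   # fire: emit a binary spike
            potential = 0             # reset after firing
        else:
            output_spikes.append(0)   # no spike this timestep
    return output_spikes

# Two synapses with weights 3 and 2, threshold 5: the neuron fires only
# at the timestep where both inputs spike together.
print(run_neuron([(1, 0), (1, 1), (0, 1)], [3, 2], 5))  # [0, 1, 0]
```

Note how the timing of inputs matters: the same total number of input spikes produces different outputs depending on when they arrive, which is the temporal aspect the interview emphasizes.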

The major difference between von Neumann and neuromorphic computing is that the former is basically a mathematical model which follows analytical rules and in which time has no impact. The introduction of time, inherent in the concept of spiking, together with the use of memories in the connections, i.e. the synapses, makes the neuromorphic approach a new paradigm that breaks away from current von Neumann limitations.

BrainChip is a leading supplier of neuromorphic computing solutions. The Akida Neuromorphic System-on-Chip (NSoC) represents a new breed of device that accelerates spiking neural networks for edge and server/cloud applications. These spiking neural networks are created and trained using the Akida Development Environment.



YT: Please tell us about your products (software and hardware) and what applications you are targeting?

RL: We offer both the Akida SoC, a fully integrated silicon device, and Akida IP for integration into embedded solutions such as ASICs. The Akida platform contains everything necessary to construct a single-chip or embedded edge AI solution.

The Akida NSoC silicon ASIC includes the neural fabric, which runs the entire neural network without the need for a microprocessor or external memory, and a general-purpose microprocessor for chip management and system support. It can also function as a co-processor to a host computer through the on-chip PCIe interface or the USB3 interface. I2S and I2C interfaces are provided for sensor data input. The on-chip processor can be used for pre-processing of sensor data in stand-alone mode, or to create additional learning methods. The chip can be expanded with external DRAM through an LPDDR4 interface. The SPI flash interface can be used to load the OS and programs for the ARM processor; similarly, weights and reconfiguration details can be stored in flash to initialize the neural fabric. The Akida IP, for its part, is a configurable neural-fabric and data-to-event conversion offering that interfaces directly with other blocks on an ASIC through a standard AXI bus.



YT: What sets you apart from competition? Are your products already available?

RL: We are targeting a diverse set of edge applications using CNNs and SNNs, including smart camera solutions, smart home devices and appliances, assisted-driving and autonomous vehicle applications (such as automobiles, planes, ships, and drones), robotics, industrial monitoring, industrial IoT, and more.

Because we offer an ultra-low-power solution able to perform complex AI operations in the range of µW to mW, as well as autonomous operation and personalization through on-chip, edge learning capabilities, we are quite differentiated from the nearest competition.

Our IP is available today and our SoC is targeted to be available in the first half of 2020. 



YT: At Yole, we see a race toward higher transistor counts in smaller footprints, bringing more compute power to enable larger neural networks. Is it justified to say that this race will hit a wall, particularly in edge devices?

RL: The main challenges for edge devices are severely constrained power budgets, for a variety of reasons, as well as the need for autonomous capability. The race toward higher transistor counts addresses neither of these issues. The answer lies in a much more efficient architecture, combined with the capability to learn without requiring large data sets and deep learning cycles. Our Akida device is the first in a series of BrainChip devices that addresses both issues head-on.

YT: When do you expect on your side to see a neuromorphic chip in mass production? Could this technology be applied to phones?

RL: Based upon current customer design activity and market demand, 2021 looks like the year in which the next generation of AI processors based upon neuromorphic computing principles will reach mass adoption.

Smartphones are a unique product case where space constraints are severe and the number of devices that can be included is strictly limited. Based upon this, neuromorphic computing is most likely to be embedded in an application processor, either as integrated IP or, through advances in packaging, as part of a multi-die package solution.

YT: Do you think neuromorphic computing will be the next generation of computing in cars where power consumption is not really a central problem for now? If so, why?

RL: Neuromorphic computing will definitely play a role in automotive solutions, where latency of response is measured in milliseconds for smart sensors, where autonomous behavior must adapt to varied environmental conditions, and where next-generation solutions require predictive analysis. Although power is abundant in the automobile, there are power budget constraints due to thermal requirements, which drive the need for more efficient solutions.

YT: Datacenter players are not yet interested in looking at hardware types or scenarios other than brute force. Can you explain why? Will it, however, become a preoccupation in a few years?

RL: Brute force is working quite well at this point and power is available for this purpose.

The trend will continue for some time, until the advantages seen at the edge percolate back to the datacenters. Additionally, as learning methods other than brute-force deep learning mature, these approaches will significantly improve training cycles and capabilities and drive a shift to neuromorphic compute. The brute-force method, although effective so far, does not bring real intelligence. Neuromorphic compute is needed to bring intelligence to Artificial Intelligence.

YT: Do you want to add something for our readers?

RL: These are exciting times for the field of AI. We are just beginning to see the true benefits AI can bring, and at BrainChip we are excited to be leading the next generation of AI solutions.

Interviewee

Roger Levinson most recently served as Vice President of Data Management at Rstor, where he previously served as Vice President of ASIC Engineering. Mr. Levinson has also served as Vice President of Engineering at Rambus and Vice President/General Manager of Strategy and Innovation at Semtech. Prior to joining Semtech he held a variety of senior engineering positions, including at Intersil, Xicor, Analog Integration Partners, Exar, and Micropower. Mr. Levinson earned both his Bachelor’s and Master’s Degrees in Electrical and Electronic Engineering from the University of California, Davis.

Interviewers

As a Technology & Market Analyst, Yohann Tschudi, PhD, is a member of the Semiconductor & Software division at Yole Développement (Yole). Yohann works daily with Yole’s analysts to identify, understand, and analyze the role of software within semiconductor products, from machine code to the highest-level algorithms. Market segments Yohann especially analyzes include big-data analysis algorithms, deep/machine learning, and genetic algorithms, all stemming from Artificial Intelligence (AI) technologies.
After his thesis in particle physics at CERN (Geneva, Switzerland), Yohann developed dedicated software for fluid mechanics and thermodynamics applications. He then spent two years at the University of Miami (FL, United States) as a research scientist in the radiation oncology department, where he was involved in cancer auto-detection and characterization projects using AI methods based on Magnetic Resonance Imaging (MRI) images. During his research career, Yohann has authored and co-authored more than 10 relevant papers.
Yohann holds a PhD in High Energy Physics and a master’s degree in Physical Sciences from Claude Bernard University (Lyon, France).

Pierre Cambou - Yole Développement

Pierre Cambou, MSc, MBA, is a Principal Analyst in the Photonic and Display Division at Yole Développement (Yole).
Pierre’s mission is dedicated to imaging-related activities, providing market & technology analyses along with strategy consulting services to semiconductor companies.
He has been deeply involved in the design of early mobile camera modules and the introduction of 3D semiconductor approaches to CMOS Image Sensors (CIS). Pierre has a broad understanding of the various markets and technologies associated with CIS, having obtained 6 patents in this field and founded one startup company in 2012.
At Yole, Pierre is responsible for the CIS Quarterly Market Monitor and has authored more than 15 Yole Market & Technology reports. Known as an expert in the imaging industry, he is regularly interviewed and quoted by leading international media.
Previously, Pierre held several positions at Thomson TCS, which became Atmel Grenoble (France) in 2001 and e2v Semiconductors (France) in 2006. In 2012, he founded Vence Innovation, later renamed Irlynx (France), to bring to market an infrared sensor technology for smart environments.
Pierre has an Engineering degree from Université de Technologie de Compiègne (France) and a Master of Science from Virginia Tech. (VA, USA). Pierre also graduated with an MBA from Grenoble Ecole de Management (France).

Related reports

Neuromorphic Sensing and Computing 2019

Facing huge hurdles in data bandwidth and computational efficiencies, computing and sensing must reinvent themselves by mimicking neurobiological architectures.

Artificial Intelligence Computing for Consumer 2019

While AI is a feature expected in smartphones, this fantastic technology has spread like wildfire to the smart home ecosystem and is profoundly impacting the semiconductor industry.



Source: http://www.brainchip.com, http://www.yole.fr
