The newest addition to the GAP IoT application processor family delivers five times lower power consumption than its predecessor, GAP8.
GreenWaves Technologies, a fabless semiconductor startup designing disruptive ultra-low-power embedded solutions for AI processing in sensing devices at the very edge, announced a new member of the GAP IoT application processor family, GAP9. GAP9 combines architectural enhancements with the GlobalFoundries 22nm FDX semiconductor process to deliver a peak cluster memory bandwidth of 41.6 GB/s and up to 50 GOPS of combined compute power at an overall power consumption of 50 mW.
GAP9 enables customers to embed machine learning and signal processing capabilities into battery-operated or energy-harvesting devices such as IoT sensors in the smart building, consumer and industrial markets, as well as consumer and medical wearable devices. Compared to GreenWaves Technologies’ currently shipping product, GAP8, GAP9 reduces energy consumption by a factor of five while enabling inference on neural networks ten times larger.
“GAP9 enables a new level of capabilities for embedding combinations of sophisticated machine learning and signal processing capabilities into consumer, medical and industrial product applications,” said Loic Lietar, CEO of GreenWaves Technologies. “The GAP family provides product designers with a powerful, flexible solution for bringing the next generation of intelligent devices to market.”
GAP9 builds on the same GAP architectural attributes as GAP8. It adds support for floating-point arithmetic across all cores, based on an innovative transprecision floating-point unit capable of handling floating-point numbers in 8-, 16- and 32-bit precision with support for vectorization. GAP9 also extends GAP8’s fixed-point support with vectorized 4-bit and 2-bit operations. This positions GAP9 as an ideal platform for applications exploiting deep levels of quantization for greater energy efficiency, while simplifying the porting of existing floating-point signal processing libraries.
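To make the quantization point concrete, the sketch below shows generic symmetric linear quantization of a weight tensor to a given bit width, the kind of deep (4-bit and 2-bit) quantization referred to above. This is a minimal illustration of the arithmetic only; it is not the GAP9 hardware format or any GreenWaves API, and the function names are ours.

```python
# Illustrative sketch only: symmetric linear quantization of weights to a
# given bit width (e.g. the 8-, 4- and 2-bit fixed-point formats mentioned
# above). Generic quantization math, not GAP9 hardware behaviour.

def quantize(weights, bits):
    """Map floats to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = (1 << (bits - 1)) - 1                 # 7 for 4-bit, 1 for 2-bit
    scale = max(abs(w) for w in weights) / qmax  # one scale per tensor
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float values from the integer codes."""
    return [q * scale for q in quantized]

weights = [0.8, -0.3, 0.05, -0.9]
q4, s4 = quantize(weights, 4)   # 4-bit: integers in [-7, 7]
q2, s2 = quantize(weights, 2)   # 2-bit: integers in [-1, 1]
```

Halving the bit width halves the memory footprint and the data moved per operation, which is where the energy saving of deep quantization comes from.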
GAP9 incorporates bi-directional, multichannel, synchronized digital audio interfaces, making it a perfect fit for sophisticated wearable audio products. It also incorporates both CSI2 and parallel camera interfaces, allowing the use of a low-resolution, low-power camera for scene analysis followed by extraction of a region of interest from a high-resolution, higher-power camera for analysis of scene details.
GAP9 handles sophisticated neural networks such as MobileNet V1 with ease, processing a 160 x 160 image with a channel scaling of 0.25 in just 12 ms at a power consumption of 806 μW/frame/second. GAP9 provides a 20-fold increase in effective memory bandwidth over GAP8, enabling significant improvements in detection accuracy by simultaneously analysing streams of data from multiple sensors such as images, sounds and radar.
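As a back-of-the-envelope check on the quoted figures: if "806 μW/frame/second" is read as 806 μW of average power per frame-per-second of throughput (our interpretation, not a GreenWaves definition), the implied energy per inference and the maximum frame rate follow directly.

```python
# Arithmetic on the quoted MobileNet V1 figures. Assumption (ours):
# "806 uW/frame/second" means 806 uW of sustained power per frame-per-second
# of throughput, i.e. roughly 806 uJ of energy per inference.

inference_time_s = 0.012              # 12 ms per 160x160 frame (quoted)
power_per_fps_uW = 806                # quoted figure

energy_per_inference_uJ = power_per_fps_uW   # 806 uW at 1 fps -> ~806 uJ/frame
max_fps = 1 / inference_time_s               # back-to-back throughput, ~83 fps
```

At one frame per second, the duty cycle is about 1.2%, which is how sub-milliwatt average power is reached despite the chip being far more capable at full throughput.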
GAP9 incorporates additional security features that protect device makers’ firmware and models and guard devices against tampering, including hardware support for AES-128/256 cryptography and a Physically Unclonable Function (PUF) unit that allows each device to be uniquely and securely identified.
All of the enhancements delivered in the GAP SDK since the introduction of GAP8 benefit GAP9 immediately. These include the GAP AutoTiler, which automatically generates code for neural network graphs and often reduces memory movement to below 1.2 times the theoretical minimum, and GAPFlow, a suite of tools that automates the conversion of neural networks from training packages such as Google TensorFlow. Combined with out-of-the-box, open-source network implementations, such as a full face identification implementation, the GAPFlow toolset reduces implementation time from months to days.
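The idea the AutoTiler automates can be sketched generically: operate on a large buffer in chunks small enough to fit fast local (L1) memory, so that, ideally, each element is transferred from external memory only once, i.e. movement approaches the theoretical minimum. The sketch below is a conceptual illustration of tiling with an invented L1 budget, not GAP SDK code.

```python
# Conceptual illustration of tiling (the idea the GAP AutoTiler automates):
# process a large buffer in tiles that fit a small, fast local memory, so
# each element is moved from external memory only once. Not GAP SDK code;
# L1_BUDGET and tiled_sum are hypothetical names for this sketch.

L1_BUDGET = 8                     # pretend local L1 memory holds 8 elements

def tiled_sum(data, tile_size=L1_BUDGET):
    total, transfers = 0, 0
    for start in range(0, len(data), tile_size):
        tile = data[start:start + tile_size]   # "DMA" one tile into L1
        transfers += len(tile)                 # count elements moved
        total += sum(tile)                     # compute on the local copy
    return total, transfers

total, moved = tiled_sum(list(range(20)))
# moved equals len(data): movement hits the theoretical minimum of one
# transfer per element, the baseline the 1.2x figure above is measured against.
```

Real operators such as convolutions need overlapping tiles and intermediate buffers, which is why automatically generated schedules land slightly above, rather than at, the theoretical minimum.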