


Before running powerful artificial intelligence (AI) algorithms, devices must transform raw pixels into an image understandable to human eyes. Engineers think of the dedicated image signal processor (ISP) hardware that does this as a pipeline, processing pixels one by one or in groups. This pipeline then feeds into more powerful vision processor hardware, with more memory to store entire frames, that runs the later AI algorithms. According to Yole Développement’s ‘Embedded Image and Vision Processing’ report, released in 2017, the automotive market will be one of the first important areas for this technology.
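To make the pipeline idea concrete, here is a minimal sketch in Python/NumPy of stage-by-stage ISP-style processing. The stages, parameter values, and function names are illustrative assumptions, not a real ISP design, and steps such as demosaicing are omitted.

```python
import numpy as np

def black_level_correction(raw, offset=64):
    """Subtract the sensor's black-level offset from raw pixel values."""
    return np.clip(raw.astype(np.int32) - offset, 0, None).astype(np.uint16)

def white_balance(frame, gains=(1.9, 1.0, 1.6)):
    """Apply per-channel gains (here to an already-demosaiced RGB frame)."""
    return np.clip(frame * np.array(gains), 0, 65535)

def gamma_encode(rgb, gamma=2.2):
    """Compress linear intensities into a display-friendly tone curve."""
    norm = rgb / 65535.0
    return (np.power(norm, 1.0 / gamma) * 255).astype(np.uint8)

def isp_pipeline(raw_rgb):
    """Chain the stages in order, as a hardware ISP would, stage by stage."""
    frame = black_level_correction(raw_rgb)
    frame = white_balance(frame)
    return gamma_encode(frame)

# A synthetic 16-bit 'raw' frame stands in for real sensor output.
raw = np.random.randint(0, 4096, size=(480, 640, 3), dtype=np.uint16)
display_frame = isp_pipeline(raw)  # this image would feed the vision processor
```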

As vehicles race towards autonomy and sensor fusion, AI technology and the related hardware are interacting to reach new frontiers. In this area, Algolux is a young and fascinating startup that understands the roles of image processing and AI and combines them in original and interesting ways. Yole Développement therefore interviewed Allan Benchetrit, Algolux’s President and Chief Executive Officer, about his company and its technology.


Yole Développement: Please introduce Algolux and its products.

Allan Benchetrit: Algolux enables autonomous vision through machine learning – empowering cameras to see more clearly and perceive what cannot be sensed with today’s vision systems. We deliver this through our CRISP-ML tool, a machine learning workflow that automates the typical 3-6 month image quality tuning process down to hours or days. Taking this one step further, we recognize that vision systems demand higher robustness in difficult imaging scenarios than current approaches can provide. Algolux addresses this with CANA, our DNN-based full perception stack, which replaces existing ISP-based architectures.
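Algolux has not published CRISP-ML’s internals, but the general idea of automating image quality tuning can be sketched as black-box optimization: treat the ISP as a function of its tuning parameters, score its output with a quality metric, and search the parameter space automatically. Everything below – the parameter names, the loss, and the simple random search – is a hypothetical illustration, not Algolux’s method.

```python
import random

# Hypothetical ISP parameters an imaging expert would otherwise hand-tune.
PARAM_RANGES = {
    "denoise_strength": (0.0, 1.0),
    "sharpen_amount": (0.0, 2.0),
    "gamma": (1.8, 2.6),
}

def image_quality_loss(params):
    """Stand-in for a real metric (e.g., error against reference charts).

    Here we pretend the best settings are known so the sketch runs.
    """
    target = {"denoise_strength": 0.4, "sharpen_amount": 0.8, "gamma": 2.2}
    return sum((params[k] - target[k]) ** 2 for k in params)

def random_search(n_trials=1000):
    """Simplest possible black-box search over the parameter space."""
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        candidate = {k: random.uniform(lo, hi)
                     for k, (lo, hi) in PARAM_RANGES.items()}
        loss = image_quality_loss(candidate)
        if loss < best_loss:
            best_params, best_loss = candidate, loss
    return best_params, best_loss

params, loss = random_search()
print(f"tuned parameters: {params}, loss: {loss:.4f}")
```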

The company is based in Montreal with offices in Palo Alto and Munich and is primarily focused on the automotive market, though our technology applies to any camera-based application, like video surveillance and smartphones. In addition to our customer traction, we’ve also been recognized by the industry for the innovation we bring to these challenging problems, including recently winning the AutoSens ‘Most Exciting Startup’ Award.

YD: Why is Algolux initially focusing on the automotive market?

AB: Algolux is primarily focused on the imaging and perception use cases in automotive. These include challenging scenarios for advanced driver assistance systems (ADAS) and autonomy such as low and high light, noise, high dynamic range, weather, and lens issues that aren’t effectively addressed today. Another driver is the automotive industry’s collaboration in IEEE P2020 to establish image quality standards for viewing and computer vision, which we’re contributing to. The working group was formed when the industry leaders recognized that current approaches would not consistently achieve the mission-critical safety goals for these systems, and this was a natural area where Algolux could help optimize results.

YD: Is it more complex to create a neural network for object detection than an ISP pipeline, as is usually done for a standard camera?

AB: Both are actually quite sophisticated but have a number of key differences. An ISP comprises many sequential stages that process an image captured by a sensor for color, sharpness, noise reduction, and the like. The algorithms for each stage are carefully crafted for their task, even though the ground truth is only referenced at the outset. The ISP is then ‘tuned’ by imaging experts through thousands of parameters to get the best possible image from a particular lens/sensor combination. The end result is a visually pleasing image that may take many months of tuning to achieve.

An AI-based perception task such as object recognition is typically implemented in a deep neural network (DNN) made up of many layers that each have many nodes doing computation, similar to the neurons in the brain. The DNN is trained with large datasets of annotated images for the specific task, allowing the DNN to learn how to identify pedestrians, cars, and trucks.
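As a rough illustration of this training process, the PyTorch sketch below fits a tiny convolutional network to synthetic ‘annotated’ images; a production perception DNN would be far deeper and trained on large real datasets.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a perception DNN: conv layers plus a classifier head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 3),  # e.g., pedestrian / car / truck
)

# Synthetic 'annotated images': random tensors with random class labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 3, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to annotations
    loss.backward()                        # learn from the labeled examples
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```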

In today’s architectures, the images from an ISP feed into the DNN. A key limitation is that ISPs are designed and tuned by expert eyes to produce images that are pleasing to a person, but such images are not well suited for a computer algorithm (the DNN) performing a task like object recognition.

YD: You intend your CANA perception stack to replace the signal processing pipeline for ADAS and autonomous driving. Do you think this replacement can be generalized for applications beyond autonomous automotive?

AB: Since ISPs are meant to target human vision rather than computer vision, Felix Heide, our Chief Technology Officer and inventor of the technology, saw an opportunity to take out this limiting factor and instead have a complete DNN stack “learn” the best way to process the image from the sensor together with the perception task. This has resulted in significantly improved accuracy versus today’s approaches using state-of-the-art ISPs with DNN perception for automotive tasks. We have also seen the same improvements for typical classification tasks, so the approach looks like it will generalize quite well.
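Conceptually, the shift is that the front-end image processing itself becomes trainable and is optimized jointly with the perception objective, rather than being hand-tuned for human viewing. The sketch below illustrates that joint-training idea with invented modules; it is not Algolux’s CANA architecture, whose details are proprietary.

```python
import torch
import torch.nn as nn

class LearnableFrontEnd(nn.Module):
    """Replaces hand-tuned ISP stages with parameters learned from data."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # learned processing
        self.gamma = nn.Parameter(torch.tensor(1.0))           # learned tone curve

    def forward(self, raw):
        return torch.sigmoid(self.conv(raw)) ** self.gamma

front_end = LearnableFrontEnd()
perception = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3),
)

raw_frames = torch.randn(32, 3, 32, 32)  # stand-in for raw sensor data
labels = torch.randint(0, 3, (32,))

# One optimizer over BOTH modules: the front end is tuned for the task,
# not for human viewing.
optimizer = torch.optim.Adam(
    list(front_end.parameters()) + list(perception.parameters()), lr=1e-3
)
loss = nn.CrossEntropyLoss()(perception(front_end(raw_frames)), labels)
loss.backward()
optimizer.step()
```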

YD: In your view, what is at stake for hardware and software companies developing vision systems as AI technologies emerge?

AB: AI, and deep learning specifically, has improved machines’ ability to perceive their surroundings at an incredible rate. There is a clear movement in the semiconductor industry to deploy graphics processing units (GPUs) and neural processing units (NPUs) that accelerate how quickly neural networks run at lower power, enabling more sophisticated tasks. Similarly, there is tremendous investment in AI software innovation and infrastructure. We are still at the early stages, but it is a high-stakes race to see who will emerge as leaders.

YD: Yole is expecting a market evolution in imaging and sensing for automotive from simple ISPs to fusion processor boards integrating AI processors. What could be the key hardware and software parameters to support this evolution?

AB: As these new approaches require significant embedded processing, either in the computer vision electronic control unit or in smart edge cameras, the need to increase processing performance while reducing power consumption is critical. Intelligent sensor fusion only increases that processing load as you ingest and process separate data-intensive information streams. From our customers, we hear a lot of discussion about exploring different fusion approaches and models based on the tasks being addressed. However, we strongly maintain that having complementary sensors and fusion is clearly the path for autonomous vehicles.
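One simple way to picture feature-level sensor fusion is per-sensor encoders feeding a shared decision head. The sketch below uses invented dimensions and stand-in data for a camera stream and a lidar sweep; real fusion architectures vary widely with the task.

```python
import torch
import torch.nn as nn

# Per-sensor encoders produce fixed-size feature vectors.
camera_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
lidar_encoder = nn.Sequential(nn.Flatten(), nn.Linear(1024, 64), nn.ReLU())

# A shared head consumes the concatenated features: a simple 'late fusion'.
fusion_head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 3))

camera_frames = torch.randn(8, 3, 32, 32)  # stand-in for camera data
lidar_points = torch.randn(8, 1024)        # stand-in for a lidar sweep

fused = torch.cat(
    [camera_encoder(camera_frames), lidar_encoder(lidar_points)], dim=1
)
predictions = fusion_head(fused)  # one decision from complementary sensors
```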


Sources: Yole Développement, Algolux


INTERVIEWEE


Allan Benchetrit is Algolux’s President and Chief Executive Officer.
Allan is an accomplished business leader with over 25 years of experience with blue chips and startups in the ICT industry, including pioneering companies in computer vision, mobile video and the mobile Internet. Prior to Algolux, he served from 2008 to 2012 as President and CEO of Vantrix, a global mobile video infrastructure company he co-founded in 2004. He previously held sales, marketing and business administration positions of increasing responsibility at Wysdom, Oracle and HP. Allan holds an MBA from the John Molson School of Business and is an active mentor with local incubators and business schools.

RELATED REPORT

Embedded Image and Vision Processing

From algorithms included in the image processing pipeline to neural networks running in vision processors, this report focuses on the evolution of hardware in vision systems and how software disrupts this domain.
