Algolux, the leading provider of software for autonomous vision, has announced the Ion™ Platform for end-to-end vision system design and implementation. Based on patented research combining machine learning, computer vision, and computational imaging technologies, Ion enables teams to bring market-leading scalability and robustness to their perception and imaging systems for applications like autonomous vehicles, ADAS, and video surveillance.
New sensing technologies, advanced processing, and artificial intelligence have led to impressive improvements in perception and imaging capabilities. But these vision systems are increasingly difficult to develop and bring to market due to growing complexity and isolated design approaches. Safety also continues to be compromised in harsh operating environments such as low light, bad weather, dust, or clutter. In addition, mandates by industry and government groups, such as Euro NCAP and NHTSA, continue to drive more stringent requirements on these systems.
The Ion Platform addresses these challenges by enabling teams to design their vision systems end-to-end, breaking down the silos and development limitations seen in today’s approaches. Through simplified workflows and optimized implementations that cross traditional sub-system boundaries, Ion provides greater flexibility for teams to make design and architectural tradeoffs for optimal system performance.
The Ion modules can be deployed across the different blocks of the system, including the sensors, processors, perception algorithms, and higher stack components such as autonomous planning and control. This approach not only improves traditional system architectures but also empowers radical new designs that are optimized for cost and performance, whether for human viewing or perception applications.
Ion Platform Benefits
- Increase vehicle safety: massively improve vision system accuracy and robustness across all operating scenarios
- Improve system effectiveness: holistically optimize against key system performance metrics
- Reduce program risks: minimize system costs, accelerate time-to-revenue, and scale resources
The Ion Platform features two main product lines based on the Eos™ deep neural network (DNN) perception technology (formerly known as CANA). Eos can be deployed as either an embedded software perception stack in the vision system or through Atlas™, a suite of tools that design teams can use to optimize their camera-based systems.
(See Figure 1)
“Artificial intelligence is becoming a foundation for perception, planning, and even control for automotive robotics and traditional automotive ADAS. Its impact on the automotive sector will be huge. At Yole Développement (Yole), we estimate that computing hardware alone for AI will generate more than US$14 billion in revenue in 2028, with AI software stacks generating even more revenue (1). AI is being produced by a growing ecosystem of providers and can yield improved results vs. traditional techniques; however, we still see significant challenges,” said Yohann Tschudi, PhD, technology and market analyst, Yole. “Algolux stands out with their end-to-end learning approach to providing robust and accurate perception. As they extend further up the autonomous stack and take advantage of emerging technologies, it also provides a path to full systems that will learn on their own.”
(1) Source: Artificial Intelligence Computing for Automotive report, Yole Développement, 2019
Eos Perception Software – Delivering the Industry’s Most Robust and Accurate Perception
Eos perception software is based on groundbreaking research that addresses the fundamental requirement to improve perception accuracy across all operating conditions. Through a new deep learning approach, Eos delivers improvements in accuracy of more than 30% as compared to today’s most accurate computer vision algorithms, especially in the harshest scenarios. This enables perception teams to develop more optimal system architectures, thereby simplifying the design process and even reducing bill of material (BoM) costs.
Eos benefits include:
- Delivering industry-leading robustness and accuracy across all conditions, with improvements of over 30% vs. public and commercial algorithms in many cases
- Support of any sensor configuration and processing environment through an end-to-end deep learning approach
- Significantly reducing BoM costs and power by enabling designers to use lower-cost lenses and sensors or even remove components such as the image signal processor (ISP)
- Flexible implementation of single-camera through multi-sensor fusion perception architectures
The Eos neural network architecture allows ADAS, autonomous vehicle, and surveillance product teams to significantly improve perception across many applications while providing a path to fully end-to-end learned systems. Example applications are:
- Single-camera, such as intelligent front-facing, rear-view, and mirror-replacement use cases
- Multi-camera for 360-degree use cases, such as self-parking and autopilot
- Multi-sensor early fusion for autonomous vehicles and robots
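The architectural difference behind these claims is the pipeline shape: a traditional system renders raw sensor data through a hand-tuned ISP before a separately trained perception stage, while an end-to-end learned system consumes raw data directly, which is how a component like the ISP can drop out of the bill of materials. A minimal sketch of the two shapes (all functions and values here are toy stand-ins, not Algolux's actual architecture):

```python
# Two pipeline shapes over a "raw frame" (a list of numbers standing in
# for raw Bayer sensor data). Purely illustrative.

def traditional_pipeline(raw, isp, perception):
    """Raw data is first rendered by a hand-tuned ISP for human viewing,
    then passed to a separately developed perception stage."""
    return perception(isp(raw))

def end_to_end_pipeline(raw, learned_perception):
    """A single learned network consumes raw sensor data directly."""
    return learned_perception(raw)

# Toy stand-ins for the real components
isp = lambda raw: [min(255, 2 * x) for x in raw]      # crude tone mapping
detector = lambda img: sum(img) / len(img) > 100      # "object present?"
learned = lambda raw: sum(raw) / len(raw) > 50        # trained on raw data

raw_frame = [40, 60, 90, 30]
a = traditional_pipeline(raw_frame, isp, detector)
b = end_to_end_pipeline(raw_frame, learned)
```

Because the end-to-end network is trained against the raw sensor output of a specific lens/sensor combination, the same structure applies to any sensor configuration.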
Atlas Camera Tuning Suite – Optimize Camera Architectures for Image Quality and Computer Vision
The Atlas suite automates today’s painful process of manual camera tuning. Atlas provides product modules and workflows that enable end-to-end camera tuning for visual image quality (IQ) and computer vision.
Atlas benefits include:
- Accelerating time to revenue by shrinking tuning time from many months to weeks or even days
- Scalable and predictable methodology through automatic metric-driven camera tuning
- Optimized tuning for any camera configuration specific to the target application
- Automating the currently impossible task of camera tuning for optimal computer vision accuracy
Atlas Camera Tuning Suite includes the following modules:
- Atlas Objective IQ (formerly known as CRISP-ML) applies patented solvers and optimization technology to automatically tune cameras to objective image quality metrics, such as sharpness, noise, and color accuracy. Objective IQ requires a very simple lab setup, and the workflow allows the image quality team to quickly optimize for their specific lens, sensor, and ISP combination.
- Atlas HDR / AWB adds workflows to Objective IQ for tuning High-Dynamic Range (HDR) and Automatic White Balance (AWB) capabilities. These apply to specific camera configurations across attributes such as brightness, contrast, and color temperature.
- Atlas Natural IQ automates and shrinks the many months-long process of subjective camera tuning by applying a deep learning approach to achieve the customer’s image quality preference. Customers create a small dataset of natural images that represents the desired “look and feel” and the camera is automatically tuned to match it as closely as possible.
- Atlas Computer Vision maximizes the accuracy of a computer vision system by leveraging Eos DNN technology and specialized solvers. By enabling teams to automatically optimize the camera image quality specifically for computer vision metrics rather than only being able to manually tune for visual image quality, Atlas finally closes this gap in the vision system development process.
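The common idea across these modules is metric-driven tuning: treat the camera's tunable parameters as inputs to a solver and the measured quality metric as the objective. The sketch below illustrates the loop with a random-search solver over two hypothetical ISP parameters and a toy quality surface; commercial tools like Atlas use far more sophisticated solvers and real measured metrics, and every name and value here is an illustrative assumption:

```python
import random

def isp_output_quality(denoise, sharpen):
    """Toy objective with a known optimum near (0.3, 0.7), standing in
    for a measured IQ metric (e.g., sharpness or SNR from a lab chart)."""
    return 1.0 - (denoise - 0.3) ** 2 - (sharpen - 0.7) ** 2

def tune_camera(metric, iterations=500, seed=0):
    """Random-search solver: sample parameter sets, keep the best.
    Deterministic here via a fixed seed so runs are repeatable."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(iterations):
        params = (rng.uniform(0, 1), rng.uniform(0, 1))
        score = metric(*params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = tune_camera(isp_output_quality)
```

Swapping the objective is what distinguishes the modules: an objective chart metric for Objective IQ, similarity to a reference image set for Natural IQ, or a perception accuracy score for Computer Vision tuning.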
“Addressing the challenging requirements that today’s vision systems need to meet is mission-critical for many markets, such as automotive, autonomous vehicles, and video surveillance. But we consistently see developers struggling to maximize the robustness and accuracy of these systems while being able to scale and mitigate program risks,” said Allan Benchetrit, Algolux President and CEO. “Ion is the first platform that enables end-to-end design and implementation of vision systems to achieve optimal performance while reducing the time, cost, and effort of these development programs.”