From the algorithms included in the image processing pipeline to the neural networks running in vision processors, this report focuses on the evolution of hardware in vision systems and on how software disrupts this domain.
SOFTWARE IN VISION SYSTEMS
Vision systems are becoming increasingly important. Therefore, this report shows and explains the close links between embedded software and hardware in vision systems at the technology and market levels. What are the software technologies? How do they impact the hardware? Which hardware is impacted? What kinds of markets are affected? And how will they evolve?
We can consider software in vision systems as having two different levels. The first is very close to the hardware, embedded inside standalone field programmable gate array (FPGA) or application specific integrated circuit (ASIC) chips, or integrated into more complex architectures. This layer, often overlooked, is the most important step in any image treatment after image acquisition by the pixels. Image processing, performed in the image signal processor (ISP), has a fairly simple function: it must transform the signal coming from the sensor into an image understandable by the human eye. It is structured as a pipeline of multiple blocks, where each block takes as input the output of the previous block. Many different algorithms are implemented to accomplish tasks such as artefact removal, color correction and color reproduction. This work is done at the single-pixel or pixel-group level and does not require much memory or power.
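The pipeline structure described above can be sketched in a few lines of code. This is a minimal illustration, not a real ISP: the stage names, the single global white-balance gain and the toy 2x2 frame are assumptions chosen only to show how each block consumes the previous block's output.

```python
# Minimal sketch of an ISP-style pipeline: each stage consumes the
# output of the previous one. Stage names and operations are
# illustrative, not a real ISP implementation.

def defective_pixel_correction(frame):
    # Replace obviously broken readings (here: clamp negatives to 0).
    return [[max(p, 0) for p in row] for row in frame]

def white_balance(frame, gain=1.2):
    # Apply one global gain as a stand-in for per-channel gains.
    return [[min(int(p * gain), 255) for p in row] for row in frame]

def gamma_correction(frame, gamma=2.2):
    # Map linear sensor values onto a display-friendly curve.
    return [[int(255 * (p / 255) ** (1 / gamma)) for p in row] for row in frame]

def run_isp_pipeline(raw_frame):
    # Chain the blocks: the output of one block is the input of the next.
    stages = [defective_pixel_correction, white_balance, gamma_correction]
    frame = raw_frame
    for stage in stages:
        frame = stage(frame)
    return frame

processed = run_isp_pipeline([[-5, 64], [128, 255]])
```

Because every stage works pixel by pixel (or on small pixel groups), only the current values are needed in memory, which is why a real ISP can run this pipeline with very modest resources.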
The second software layer is completely different, with much more diverse and complicated functions. In this report, we focused on embedded software and, more precisely, on inference software derived from the latest artificial intelligence (AI) methods. This kind of technology requires a lot of memory and computing power. It uses complete image frames as input, and its goal is not to correct the image but to analyze and understand the world in the picture. In vision systems, AI technology focuses on the detection of eyes, faces, traffic signs, pedestrians, lanes, objects in front of cars and free space, and on the recognition of faces, irises, behaviors and gestures, based on a mathematical technique called a neural network. This report especially investigates one of the most famous of these technologies, which has given spectacular results in recent years: deep learning.
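The core computation inside such a neural network can be sketched very compactly. The weights, layer sizes and inputs below are made up purely for illustration; real deep-learning inference stacks many convolutional and fully connected layers over entire image frames, which is what drives the memory and compute requirements mentioned above.

```python
# Minimal sketch of the core operation in a neural network:
# a weighted sum of inputs followed by a nonlinearity. All weights
# and biases here are made up for illustration only.

def dense_layer(inputs, weights, biases):
    # One fully connected layer: out_j = relu(sum_i inputs[i] * weights[j][i] + biases[j])
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(z, 0.0))  # ReLU activation
    return outputs

def tiny_network(pixels):
    # Two stacked layers: a small hidden layer, then one output "score".
    hidden = dense_layer(pixels, [[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1])
    score = dense_layer(hidden, [[-1.0, 1.0]], [0.0])
    return score[0]
```

In a trained detector, the weights are learned from labeled examples and the final scores indicate, for instance, how likely a region of the frame is to contain a pedestrian or a traffic sign.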
WHEN SOFTWARE DISRUPTS HARDWARE
AI has completely disrupted hardware in vision systems and has had an impact on entire segments, as Mobileye has shown in automotive, for example. Image analysis adds a lot of value, and image sensor makers are therefore increasingly interested in integrating a software layer into their systems in order to capture it. Today, image sensors must go beyond taking images – they must be able to analyze them.
However, running these types of software requires high computing power and large memory, which has led to the creation and development of vision processors. The image signal processor (ISP) market shows a steady compound annual growth rate (CAGR) of 6.3%, with a total market worth $4,400M in 2017. Meanwhile, the vision processor market is exploding, with a 30.7% CAGR and a market already worth $653M in 2017!
Today, optimization requires software and hardware to be developed in parallel. Depending on the issues and specifications, companies can invest more in hardware than in software, or vice versa. However, software is easier to specify, tune and update, so its growth is stronger than that of hardware. The AI market is therefore expected to reach $35B by 2025, with an estimated CAGR of 50% per year over 2017-2025.
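The growth figures above all follow the standard compounding relation, future value = present value x (1 + CAGR)^years. A quick sketch (the base years and rounding conventions are assumptions on our part):

```python
# Standard compound-annual-growth-rate arithmetic behind the market
# figures above. Base years and rounding are assumptions.

def project(value, cagr, years):
    # Future value after compounding `value` at rate `cagr` for `years` years.
    return value * (1.0 + cagr) ** years

# ISP market: $4,400M in 2017 growing at 6.3% per year through 2025.
isp_2025 = project(4400, 0.063, 8)

# A market compounding at 50% per year grows by a factor of ~25.6x
# over 2017-2025, which gives the order of magnitude behind a
# multi-billion-dollar 2025 AI forecast from a small 2017 base.
ai_growth_factor = project(1.0, 0.50, 8)
```

This is also why the vision processor segment, despite its smaller 2017 base, matters more for the future than its absolute size suggests: a 30.7% CAGR compounds to roughly an 8.5x multiple over the same eight years.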
HARDWARE MARKET AND BUSINESS MODELS
This report carefully evaluates ISP and vision processor market shares and their evolution in order to correctly understand how AI technology impacts the hardware. This market has been divided into two different business models: Intellectual Property (IP) companies, which do not have physical products, and hardware companies, which sell physical processors. The leaders are fairly easy to identify in each category: ARM and Synopsys lead the IP segment, while OmniVision, Mobileye and ON Semiconductor lead the hardware segment.
The AI market, particularly in vision systems, is new and still moving, with hundreds of startups created each year. It has no clear leaders but a lot of highly specialized companies. This report therefore gives a high-level view of driving forces, technology hype, and the most important mergers and acquisitions.
The main goal of this report is to understand what is happening with the emergence of AI. Although it is not a new technology, enabling technological factors have allowed AI to make a spectacular entry into vision systems. It opens new perspectives in various segments such as automotive, surveillance, biometrics and medical. However, it also poses ethical questions, which we have tried to address.
AI technologies promise a bright future in many areas, with rapid software and hardware progress. In autonomous vehicles, AI allows cars to understand the world around them, predict trajectories, communicate and drive. This has led to the development of sensor fusion boards: NVIDIA's Drive PX boards, for example, provide very high computing performance and memory, giving the ability to combine information from many completely different sensors. In surveillance and security, face and iris recognition have never been as powerful, entering the consumer world through the iPhone X this year, and behavior recognition is on its way.
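One classic idea behind combining information from different sensors can be sketched with inverse-variance weighting: each sensor's estimate of the same quantity is weighted by how precise that sensor is. The sensor names and noise figures below are illustrative assumptions, not a description of any real fusion board.

```python
# Minimal sketch of inverse-variance sensor fusion: combine noisy
# estimates of one quantity, weighting each sensor inversely to its
# variance. Sensor names and numbers are illustrative only.

def fuse(estimates):
    # estimates: list of (measured_value, variance) pairs.
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * val for w, (val, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # fused result is more certain than any single input
    return fused_value, fused_variance

# Distance to the car ahead, in meters, as seen by three sensors.
readings = [
    (25.4, 4.0),   # camera estimate: (value, variance) - least precise
    (24.8, 1.0),   # radar estimate
    (25.1, 0.25),  # lidar estimate - most precise, so weighted highest
]
distance, variance = fuse(readings)
```

The fused distance lands closest to the most precise sensor, and its variance is lower than any individual sensor's, which is the basic payoff that makes fusing many dissimilar sensors worthwhile.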
AI is very exciting for the entire area of vision systems. This report tries to show why it is important to understand these technologies and their impacts, and how to react. AI is everywhere in vision systems, from technology to market.
Objectives of the Report
Understand the technologies:
- Image processing pipeline
- AI technologies based on neural networks
Understand the evolution of hardware:
- Business models
- ISP to vision processors to sensor fusion
- Market share and evolution
Understand how software is disrupting hardware and future trends