It was not so long ago that a camera's main objective was to produce images as beautiful as possible to the human eye. The means were rather simple: either increasing resolution, i.e. the number of pixels, or optimizing the “ISP pipeline”. ISP stands for ‘image signal processing’ and refers to a series of algorithmic blocks that transform the raw image, providing colors better suited to our vision. However, the arrival of artificial intelligence (AI) has produced stunning results and significantly changed this segment, which used to evolve in line with increasing pixel counts. The shock was so violent that it has opened a gaping hole in the image-processing hardware world for new players offering solutions that meet this immediate need. In Yole Développement’s latest report, Image Signal Processor and Vision Processor Market and Technology Trends 2019, we describe this impact on the imaging industry.
Processing and computing
Let’s first take a few minutes to describe the technologies involved.
What the industry has been doing for a good twenty years is image processing: an input image, raw, straight out of the sensor, becomes an output image, processed and understandable to the human eye. Today, both the algorithms and the related hardware are optimized to deliver fast, low-power processing with sufficient performance. Yesterday as today, this processing has been carried out by an Image Signal Processor, which is either a discrete chip or embedded in a system-on-a-chip (SoC) or in the sensor itself. This market and technology have existed for a long time, and the players are well established. We estimate a compound annual growth rate (CAGR) of 3% for the period 2018-2024. Nothing really disruptive here.
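To make the idea concrete, here is a toy sketch of two classic ISP-pipeline stages, white balance and gamma correction. The gain and gamma values are illustrative assumptions, and a real pipeline contains many more blocks (demosaicing, denoising, tone mapping, and so on); this is only a minimal sketch of the image-in, image-out structure described above.

```python
import numpy as np

def white_balance(img, gains=(1.8, 1.0, 1.5)):
    """Scale the R, G, B channels by per-channel gains (illustrative values)."""
    return np.clip(img * np.array(gains), 0.0, 1.0)

def gamma_correct(img, gamma=2.2):
    """Encode linear sensor values for display (sRGB-like gamma)."""
    return img ** (1.0 / gamma)

def isp_pipeline(raw_rgb):
    """Toy ISP: linear raw RGB in [0, 1] in, display-ready RGB out."""
    img = white_balance(raw_rgb)
    img = gamma_correct(img)
    return img

raw = np.random.rand(4, 4, 3)  # stand-in for a (demosaiced) raw frame
out = isp_pipeline(raw)
print(out.shape)  # (4, 4, 3): the output is still an image, just nicer to look at
```

The key point is that both input and output are pixel arrays: the ISP improves how the image looks, not what the machine knows about it.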
So what happened? Well, the troublemaker is called deep learning: a family of algorithms even older than those used in image processing, but impossible to put into practice until recently for lack of computing power. We will not go into details here, but what must be understood is that this technique does not make the image more beautiful. Instead, it understands what the image represents. The output is not an image, but information. Boom! The paradigm has changed: now we can give machines the ability to see.
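The image-in, information-out contrast can be sketched in a few lines. The labels, random weights and single linear layer below are purely hypothetical stand-ins for a trained deep learning network; the point is only the shape of the result.

```python
import numpy as np

LABELS = ["cat", "dog", "bicycle"]  # hypothetical classes

def softmax(z):
    """Turn raw scores into probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, weights, bias):
    """Toy recognition step: pixels in, a label and a confidence out."""
    scores = weights @ image.flatten() + bias
    probs = softmax(scores)
    return LABELS[int(np.argmax(probs))], float(probs.max())

rng = np.random.default_rng(0)
img = rng.random((8, 8))           # stand-in for a camera frame
W = rng.standard_normal((3, 64))   # untrained weights, for illustration only
b = np.zeros(3)

label, confidence = classify(img, W, b)
print(label)  # a piece of information, not an image
```

Unlike the ISP sketch, nothing resembling a picture comes out the other end: the result is a symbol the rest of the system can act on.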
At first, the hardware used was the Graphics Processing Unit (GPU). This chip, dedicated to graphics rendering and capable of massive parallelization of calculations, was the missing piece that made deep learning real, and it remains by far the most widely used for these computations. Then, as computing shifted to the network edge, onto user devices, it became necessary to optimize, accelerate and dedicate the hardware. Thus the vision processor was born. This type of chip can be either discrete or embedded in an application processor, as found in all smartphones. Such chips are dedicated to analyzing the image and what the machine sees. Today, and even more so tomorrow, they are multiplying. We estimate a CAGR of 16% for the period 2018-2024.
A shaken ecosystem
What about this dull image-processing industry? Well, the reaction took a long time. First, GPU companies jumped at the opportunity. The most prominent, NVIDIA, a GPU supplier to the gaming industry, has captured most of the AI computation in the cloud and for robotic cars. Then startups took advantage of the lack of such chips, and still benefit from it: the ecosystem includes more than 40 new entrants in the AI chip segment over the last five years. The best example is of course Mobileye, bought by Intel in 2017 for more than $15 billion, which has penetrated the automotive market in the race for autonomy. For smartphones, still the largest market in the history of the semiconductor industry, handset manufacturers themselves, including Huawei, Apple and Samsung, have started introducing accelerators for deep learning algorithms into their chips. Meanwhile, historical names like Texas Instruments, Ambarella, On Semi and Intel are far from out of the race.
Low power, low consumption, always on: Let’s go there
One solution has been to rely on existing designs offered by the “Intellectual Property” (IP) companies.
Meanwhile, there is today a strong trend towards edge computing, but the concern is that vision processors are power-hungry. It is important to reduce their consumption through architectural and algorithmic optimization. Would you want a drone that can recognize you on your bike in the rain at 40 km/h, but stops flying after six minutes? I am not sure. The changes needed are familiar: optimize, reduce power consumption and silicon area. These players have already done that with image processing. They know how to do it.
What are the issues? What are the stakes? What are the market sizes, volumes, applications and trends? This article is an introduction to the report, which focuses on and answers all these questions precisely.
About the author
As a Technology & Market Analyst, Yohann Tschudi, PhD, is a member of the Semiconductor & Software division at Yole Développement (Yole). Yohann works daily with Yole’s analysts to identify, understand and analyze the role of software within semiconductor products, from machine code to the highest level of algorithms. Market segments especially analyzed by Yohann include big-data analysis algorithms, deep/machine learning and genetic algorithms, all stemming from artificial intelligence (AI) technologies.
After his thesis at CERN (Geneva, Switzerland) in particle physics, Yohann developed dedicated software for fluid mechanics and thermodynamics applications. Afterwards, he spent two years at the University of Miami (FL, United States) as a research scientist in the radiation oncology department, where he was involved in automatic cancer detection and characterization projects using AI methods on Magnetic Resonance Imaging (MRI) images. During his research career, Yohann authored and co-authored more than 10 papers.
Yohann has a PhD in High Energy Physics and a master’s degree in Physical Sciences from Claude Bernard University (Lyon, France).