From data capture to advertisement on your favorite social network, understand how AI generates value

Measuring the impact of Artificial Intelligence (AI) at the consumer level is a difficult exercise. Even if it is sometimes hard to realize, AI is already everywhere around us: in our cars, phones, virtual personal assistants, and homes. In addition, AI is a software technology. When we talk about deep learning, we mean neural networks and computer programming, which makes any estimate even vaguer because software is difficult to quantify. So the first question is: what around AI can we quantify precisely enough to measure an impact? At Yole Développement (Yole), we specialize in the semiconductor industry. In the Artificial Intelligence Computing for Consumer 2019 report, we have therefore chosen to measure the impact of AI through the dedicated computing hardware required to run it, in mm² of silicon and the revenues it generates. The report focuses on AI for imaging and audio at the edge, embedded in user devices themselves. Cloud computing, meanwhile, requires its own dedicated report.

Following the data makes it possible to estimate the value

To understand the stakes, it is instructive to follow the path the data takes and the incremental manipulations performed along the way. Let's take the four steps from the sensor to the advertisement that appears on our favorite social network. The first step is data capture, by an image sensor or a microphone. The raw data is then processed to output the same type of data: an image in, an image out, or a sound in, a sound out. The computing step adds information on top of the data, and this is where AI begins to operate. The final step is the thorough analysis of both the information and the data; it is done in the cloud, given the power required to perform AI operations at that scale.
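The four steps above can be sketched as a minimal pipeline. This is an illustrative toy, not an implementation from the report: the data, thresholds, and function names are all hypothetical stand-ins.

```python
# Toy sketch of the four-step data path: capture -> process -> compute -> analyze.

def capture():
    """Step 1: raw data from a sensor (here, a stand-in 2x2 pixel array)."""
    return [[0.1, 0.9], [0.4, 0.7]]

def process(raw):
    """Step 2: processing outputs the same type of data (image in, image out),
    e.g. a simple gain correction an ISP might apply."""
    return [[min(1.0, p * 1.2) for p in row] for row in raw]

def compute(image):
    """Step 3: edge AI adds information on top of the data (a label here)."""
    brightness = sum(sum(row) for row in image) / 4
    return {"image": image, "label": "bright" if brightness > 0.5 else "dark"}

def analyze(info):
    """Step 4: cloud-side analysis turns the information into an action,
    such as selecting an advertisement."""
    return f"ad_campaign_for_{info['label']}_scenes"

result = analyze(compute(process(capture())))
```

Note how each step adds value: steps 1 and 2 move the same kind of data along, while steps 3 and 4 turn it into information and then into revenue.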

Let’s go further in understanding stakes and value for each of the four steps:

  • Data capture is the first step and it is essential. A sensor adapted and efficient for the processing and computing operations that will be performed downstream is essential, or at least one that facilitates good performance across the entire chain. The value to be recovered here corresponds to the average value of the sensor, which for the consumer segment ranges from $0.10 for a microphone to $1 for a good imager.
  • Processing introduces the first software blocks, transforming raw data for the steps that follow. For an image, this operation is performed in an image processing pipeline by a hardware unit/chip called an image signal processor (ISP). The value here lies in understanding the ultimate goal of the data. If it's an image, will it be displayed on a screen? Shared with computer vision algorithms, AI, or both? If it's a sound, will it be broadcast through a headset or speakers? Should it be prepared for speech recognition? The value of the processing hardware is estimated at between $1 and $3.

Let’s stop here for a moment, because there are already significant technological repercussions. The first battle is over who captures the processing value. Naturally, the sensor ecosystem wants to add value to its products by including processing, either with a dedicated chip, as proposed by Knowles, or by stacking processing with the sensor, as Sony has done. The argument is that, as sensor manufacturers, they are best placed to provide an efficient processing solution. Intermediate actors like STMicroelectronics, ON Semiconductor, and OmniVision instead put forward processing blocks optimized for the subsequent operations, including computing. This is also the argument of Ambarella and Texas Instruments, which specialize in computing chips rather than sensor chips. The opportunist will understand here that it is important to take into account the whole chain, not just the sensor or the processing.

  • Computing is the main focus of Yole’s Artificial Intelligence Computing for Consumer 2019 and Artificial Intelligence Computing for Automotive 2019 reports. The idea is to add value not by modifying the image but by providing information on top of it. AI is central here, especially deep learning, which has shown spectacular results in recognizing objects, people, faces, and speech. The associated hardware is specialized: energy consumption and performance constraints make dedicated architectures necessary. Think, for example, of the neural engines in the latest mobile phone processors, which are responsible for the calculations specific to neural networks. The value is high: between $10 and $100 for this type of hardware, a range that reflects the integrated features and the type, quality, and accuracy of the information delivered.
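To get a rough sense of why dedicated architectures are needed, consider the multiply-accumulate (MAC) count of a single convolutional layer. A minimal sketch, with layer dimensions chosen for illustration rather than taken from the report:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate operations for one stride-1 convolutional layer:
    each of the h*w*c_out output values needs a k*k*c_in dot product."""
    return h * w * c_out * (k * k * c_in)

# Illustrative layer: 112x112 feature map, 64 -> 128 channels, 3x3 kernel.
macs = conv_macs(112, 112, 64, 128, 3)  # ~0.9 billion MACs for ONE layer
```

A deep network chains dozens of such layers per frame, at video frame rates, which quickly pushes the workload into tera-operations per second and out of reach of a general-purpose CPU at mobile power budgets.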

Whatever the segment, it is obvious that companies want to move up the value chain, and processing companies consequently now offer computing solutions. However, as described for the imaging segment in the Image Signal Processor and Vision Processor Market and Technology Trends 2019 report, historical players in the processing ecosystem have left the door open to players from other backgrounds. NVIDIA, for example, produces graphics processing units (GPUs), which are very well suited to deep learning algorithms as well as gaming platforms. There is also a growing number of startups. The value of information is important enough to encourage even companies higher up the value chain to implement their own solutions: Apple and Huawei have launched their own edge computing chips for mobile phones, with rather extraordinary technology. Some sensor companies are attracted by this step too, but it is far from what they are used to doing and requires significant investment. To be clear, it is difficult today to see a sensor player providing computing for AI in imaging. But why could the same not happen in audio AI, where nothing is yet established? What can the opportunist take away here? For edge computing, the trend is to reduce energy consumption while demanding a high level of performance. Yesterday, the objective was a higher number of tera-operations per second (TOPS); today, it is a higher number of TOPS per watt! This has a huge impact on how to be competitive.
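The shift from raw TOPS to TOPS per watt changes which chip "wins". A small sketch makes the point; the chip figures below are hypothetical, not real products:

```python
# Hypothetical edge accelerators (illustrative numbers only).
chips = {
    "chip_a": {"tops": 8.0, "watts": 4.0},  # raw-performance design
    "chip_b": {"tops": 4.0, "watts": 1.0},  # efficiency-oriented design
}

def tops_per_watt(spec):
    """Efficiency metric that now drives edge-computing competitiveness."""
    return spec["tops"] / spec["watts"]

# chip_a wins on raw TOPS (8 vs 4), but chip_b delivers 4 TOPS/W vs 2 TOPS/W.
most_efficient = max(chips, key=lambda name: tops_per_watt(chips[name]))
```

On yesterday's metric the first chip looks twice as fast; on today's metric the second is twice as competitive, which is exactly the inversion described above.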

  • Analysis in the cloud is the final step, and another conclusion of the Artificial Intelligence Computing for Consumer 2019 report comes from an analysis of the investments and acquisitions of the tech giants. The value here is to transform data and information directly into revenues; data therefore becomes a currency. For illustration: if a skier appears in an image, they are doing a winter sport, so let’s offer an advertisement for a ski hat, especially as they are not wearing one in the analyzed picture. This goes much further than the computing step and requires advanced AI and deep learning, and therefore enormous computing power. Such functions are consequently realized in the cloud.

How can companies recover this value? Penetrating this level is extremely complex, because companies must possess all the technologies, software and hardware, and especially a consumer pool significant enough both to train on and to sell the data. The opportunist will now understand that it is first necessary to be a member of the computing ecosystem. The biggest problem is creating the consumer pool. Could cloud gaming be an opportunity?

What are the prospects? What can move this value chain? What is the way to recover value in the future?

Deep learning has become the cornerstone of AI through its spectacular results in recent years. However, it is beginning to raise questions. At the technological level, the constraints grow stronger as larger neural networks must run. At the architecture level, boosting computing power demands smaller, more expensive process nodes; the latest application processors are today at the 7nm lithographic node. Convolutional neural networks (CNNs) are the stars of deep learning, but they have drawn criticism. The sore point, without going into technical detail, is training. Recognizing a cat requires millions of annotated cat images. This is tedious, and it means that for everything we want a device to recognize, a sufficient sample of images is needed, or recognition accuracy will suffer. This is the case in medical applications, for example, where data are scarce and protected. We also realize that we introduce bias into recognition: if the image sample helps recognize cats of a certain color, the network may not recognize another color. Extend this to humans, and in sensitive areas there is no need to spell out the problems that follow.
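The bias problem above often starts with an unbalanced training set, and can be made visible before any training happens. A minimal sketch, with made-up label counts for the cat example:

```python
from collections import Counter

def class_balance(labels):
    """Fraction of training examples per class."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Illustrative training set: black cats dominate 9-to-1, so a network
# trained on it will likely recognize white cats poorly.
labels = ["black_cat"] * 900 + ["white_cat"] * 100
balance = class_balance(labels)
under_represented = [c for c, f in balance.items() if f < 0.2]
```

Checking class fractions like this is a cheap first diagnostic; it does not fix the bias, but it flags which classes need more annotated data before accuracy can be trusted.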

Finally, it is probably time to explore other ways of doing things, closer to what we see around us in nature. One such way is what we call ‘neuromorphic’ computing, described in our report Neuromorphic Sensing and Computing 2019. Of course, deep learning will not disappear, especially at the cloud level, where it still produces the best results. However, the opportunist will have understood that it is already necessary to look into other, smarter ways to bring intelligence to the edge.

About the author:

As a Technology & Market Analyst, Yohann Tschudi, PhD is a member of the Semiconductor & Software division at Yole Développement (Yole). Yohann works daily with Yole’s analysts to identify, understand, and analyze the role of software within semiconductor products, from machine code to the highest level of algorithms. Market segments analyzed by Yohann include big data analysis algorithms, deep/machine learning, and genetic algorithms, all coming from Artificial Intelligence (AI) technologies.
After his thesis at CERN (Geneva, Switzerland) in particle physics, Yohann developed dedicated software for fluid mechanics and thermodynamics applications. He then spent two years at the University of Miami (FL, United States) as a research scientist in the radiation oncology department, where he was involved in automated cancer detection and characterization projects using AI methods based on Magnetic Resonance Imaging (MRI) images. During his research career, Yohann has authored and co-authored more than 10 relevant papers.
Yohann has a PhD in High Energy Physics and a master’s degree in Physical Sciences from Claude Bernard University (Lyon, France).

Related reports:

Artificial Intelligence Computing for Consumer 2019
While AI is a feature expected in smartphones, this fantastic technology has spread like wildfire to the smart home ecosystem and is profoundly impacting the semiconductor industry.
