


People perceive artificial intelligence (AI) in a curious way: as soon as a technology is adopted, it is no longer labeled AI and simply becomes everyday technology. This has happened with Siri, Alexa, the Facebook social network, parking assist, automatic braking, drones and Google Translate. Yet since Deep Blue beat Garry Kasparov at chess in 1997, AI has made enormous progress, particularly in vision systems.

While AI was not taken seriously in the 2000s, the AI market reached $1 billion in 2017. We at Yole Développement expect it to be worth $34 billion in 2025, a compound annual growth rate (CAGR) of just over 50% over the period. Vision systems are part of this trend, thanks to spectacular results in face detection and recognition, eye tracking, iris recognition and gesture recognition, to cite a few. Yole's new report, "Embedded Image and Vision Processing", describes the algorithms involved, from those performed very close to the sensor in the image signal processor (ISP) pipeline, to deep convolutional neural networks. The report also analyzes the impact of these new technologies on the vision computing ecosystem, and the parallels between them.
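
As a quick back-of-the-envelope check of the growth rate implied by these figures ($1 billion in 2017 to $34 billion in 2025), the standard CAGR formula can be applied as below; the forecast values are Yole's, the calculation is only an illustration.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# AI market forecast cited above: $1B in 2017 to $34B in 2025 (8 years).
print(f"Implied CAGR: {cagr(1.0, 34.0, 2025 - 2017):.1%}")  # ~55%, i.e. just over 50%
```
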
AI’s long-awaited emergence is principally due to three factors: advancements in the mathematics of neural networks and deep learning; increasing availability of data allowing the neural networks to be trained; and the decreasing price of computing power. In vision systems, it is now possible to run an embedded set of AI technologies at the network’s edge – that is, directly in a product, rather than in the cloud.
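
To make "running at the network's edge" concrete, the sketch below implements a toy convolution-plus-pooling forward pass in plain NumPy. The layer sizes, filters and weights are arbitrary assumptions for illustration, not anything from the report; a real embedded deployment would run an optimized network on a dedicated vision processor rather than NumPy.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution of a single-channel image with a square kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def edge_inference(frame, kernels, classifier_weights):
    """Tiny CNN-style forward pass: conv -> ReLU -> global average pool -> linear scores."""
    features = []
    for k in kernels:
        fmap = np.maximum(conv2d(frame, k), 0.0)   # convolution + ReLU
        features.append(fmap.mean())               # global average pooling
    return np.array(features) @ classifier_weights  # class scores

# Hypothetical 32x32 grayscale frame, 4 random filters, 3 output classes.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
weights = rng.standard_normal((4, 3))
print(edge_inference(frame, kernels, weights))
```
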

(Source: Embedded Image and Vision Processing, Yole Développement, Nov. 2017)


Before powerful AI algorithms can run, the fundamental step of transforming raw pixels into an image understandable to human eyes must be performed, and performed well. These initial algorithms implement functions like demosaicing, dead pixel concealment, color correction and filtering, in a pipeline embedded in a dedicated ISP architecture. While ISPs operate on individual pixels or small groups of pixels, the AI algorithms that follow need more powerful hardware and enough memory to store entire frames. Vision processors fill this role. The automotive market, with its race towards autonomy, is one of the first places where AI technology and the related hardware are pulling each other to the summit in this way.
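
As a rough illustration of what such a pipeline does, the sketch below chains heavily simplified versions of these stages (dead pixel concealment, crude demosaicing of an RGGB Bayer frame, a color correction matrix and a smoothing filter). The stage names come from the text above; the implementations are deliberately naive assumptions and are not how a production ISP works.

```python
import numpy as np

def conceal_dead_pixels(raw, dead_mask):
    """Replace pixels flagged as dead with the average of their 8 neighbors."""
    padded = np.pad(raw, 1, mode="edge")
    out = raw.copy()
    for y, x in zip(*np.nonzero(dead_mask)):
        patch = padded[y:y + 3, x:x + 3]
        out[y, x] = (patch.sum() - patch[1, 1]) / 8.0
    return out

def demosaic_naive(raw):
    """Very crude demosaicing of an RGGB Bayer pattern: each 2x2 cell becomes one RGB pixel."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def color_correct(rgb, matrix):
    """Apply a 3x3 color correction matrix to every pixel and clip to [0, 1]."""
    return np.clip(rgb @ matrix.T, 0.0, 1.0)

def box_filter(rgb, size=3):
    """Simple box blur as a stand-in for denoising/filtering."""
    pad = size // 2
    padded = np.pad(rgb, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(rgb)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + rgb.shape[0], dx:dx + rgb.shape[1]]
    return out / (size * size)

# Hypothetical 8x8 raw frame with one dead pixel and an identity color matrix.
rng = np.random.default_rng(0)
raw = rng.random((8, 8))
dead = np.zeros_like(raw, dtype=bool)
dead[3, 4] = True
image = box_filter(color_correct(demosaic_naive(conceal_dead_pixels(raw, dead)), np.eye(3)))
print(image.shape)  # (4, 4, 3): one RGB pixel per 2x2 Bayer cell
```
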

Big players saw this happening a few years ago, and have consequently been acquiring, investing in and signing partnerships with AI and hardware companies. The best example is Intel. In 2013 it acquired Omek Interactive, an Israeli company that developed gesture recognition and motion tracking software for use with 3D depth-sensing cameras, for $40 million. On the hardware side, in 2016 it bought Movidius for $400 million, and in 2017 it bought Mobileye for $15.3 billion. Most of the big players, like Google, Apple and Facebook, are following the same strategy.

  

(Source: Embedded Image and Vision Processing, Yole Développement, Nov. 2017)

In the report, we analyze how intellectual property (IP) vendors and physical chip sellers in the vision computing hardware market are responding to the emergence of AI. We expect the market to evolve from a still slowly growing ISP market to a rapidly growing vision processor market, with the technology implemented everywhere. This will see it grow from $500 million today to more than $4.5 billion in 2021.
 

(Source: Embedded Image and Vision Processing, Yole Développement, Nov. 2017)

Pursuing this growth, the race in AI encompasses intense merger, acquisition and partnership activity, and growing interest in consumer applications, where barriers to market entry for AI software are low. In the hardware domain, however, the technologies are complex, requiring data of different types from different sensors, backed by computing and memory power. We are therefore already seeing camera-based sensor fusion processing in automotive, for example, although it is still in its infancy in consumer applications such as subject-tracking modes for drones and face ID on mobile devices. No doubt this will be the next step – and where the next round of AI hype will come from.
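
As a schematic example of what fusing data from different sensors means in practice, the sketch below combines a distance estimate from a camera with one from a radar using inverse-variance weighting. The sensor names and noise figures are illustrative assumptions, not taken from the report, and real automotive fusion stacks are far more elaborate.

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent estimates of the same quantity.

    `measurements` is a list of (value, variance) pairs, one per sensor.
    """
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Hypothetical distance to a pedestrian: camera says 24.0 m (noisy), radar says 25.2 m (precise).
camera = (24.0, 4.0)   # (distance in m, variance in m^2)
radar = (25.2, 0.25)
print(fuse_estimates([camera, radar]))  # closer to the radar reading, with reduced uncertainty
```
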

RELATED REPORT

Embedded Image and Vision Processing

From algorithms included in the image processing pipeline to neural networks running in vision processors, this report focuses on the evolution of hardware in vision systems and how software is disrupting this domain.
