Challenges and stakes for image processing – An interview with OmniVision

Courtesy of OmniVision, 2019

Behind a camera, there may be several ways to process raw data, depending on the purpose. The alternatives usually break down into either viewing the image or analyzing it to understand the environment outside the module or system containing the camera. Each of these purposes, however, requires a different type of hardware. AI has completely disrupted hardware in vision systems and has had an impact on entire segments. Image analysis adds a lot of value, and image sensor makers are therefore increasingly interested in integrating a software layer into their systems in order to capture it. Today, image sensors must go beyond taking images – they must be able to analyze them. This is the reason for the existence of vision processors, a market that is exploding, with an expected 13% CAGR from 2018 to 2024 that will make it worth US$20 billion in 2024 (Source: Image Signal Processor and Vision Processor Market and Technology Trends report, Yole Développement, 2019)!

However, this does not mean that ISPs will disappear. Indeed, Yole Développement (Yole) estimates that the ISP market will grow at a steady compound annual growth rate (CAGR) of 4% from 2018 to 2024, making the total market worth US$6.87 billion in 2024. The ISP needs to evolve beyond viewing alone to support more complex algorithms, offering images adapted to computer vision in general and deep learning in particular.
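To make the growth figures above concrete, here is a minimal Python sketch of the underlying arithmetic: a compound annual growth rate of r over n years scales a market by (1 + r)^n, so the 2024 forecasts can be traced back to implied 2018 baselines. The derived 2018 values are illustrative estimates under the assumption of a six-year horizon, not figures taken from the Yole report.

```python
# Back-of-the-envelope check of the quoted market figures (illustrative only:
# assumes each CAGR applies over the six years from 2018 to 2024; the derived
# 2018 baselines are estimates, not numbers taken from the Yole report).

def market_value_in_2018(value_2024_busd: float, cagr: float, years: int = 6) -> float:
    """Back-calculate the 2018 market value from the 2024 forecast and the CAGR."""
    return value_2024_busd / (1.0 + cagr) ** years

vision_2018 = market_value_in_2018(20.0, 0.13)   # vision processors: US$20B at 13% CAGR
isp_2018 = market_value_in_2018(6.87, 0.04)      # ISPs: US$6.87B at 4% CAGR

print(f"Implied 2018 vision processor market: ~US${vision_2018:.1f}B")  # ~US$9.6B
print(f"Implied 2018 ISP market: ~US${isp_2018:.2f}B")                  # ~US$5.4B
```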

These trends and data come from the new report, Image Signal Processor & Vision Processor Market and Technology Trends, authored by Yohann Tschudi, PhD, Technology & Market Analyst at Yole.

OmniVision is at the forefront of this field thanks to its portfolio of automotive image sensors and related ISPs. For a recent example, see the teardown of ZF's fourth-generation ADAS S-Cam by Yole's sister company, System Plus Consulting. This camera uses OmniVision's OV10642 image sensor in combination with the latest Mobileye EyeQ4 vision processor.

Andy Hanvey, Director of Automotive Marketing at OmniVision, explains the company's vision for the future of image processing, and its strategy, to Yole's Yohann Tschudi.

Yohann Tschudi (YT): OmniVision is known to develop image sensors. Why did you make the choice to add processing to your product portfolio? What is currently the significance of image processing and image sensors for OmniVision?

Andy Hanvey (AH): As a leader in the automotive market for over 15 years, OmniVision understands that being able to offer both image sensors and image processing is very important to our portfolio. With our expertise, we are able to use our ISPs to achieve excellent image quality from our image sensors, which is critical for automotive given the wide range of lighting conditions and end-user applications. Since we can offer our customers smooth integration and quicker time to market, they see the great value of having the same vendor for both.

Supporting the widest range of architectures is key for automotive applications, and with the OmniVision ASIC portfolio this can be achieved. OmniVision has led with architectures that moved ISP processing to the ECU (for surround view applications).

The OV490, launched in 2013, was our first ASIC to offer dual processing capabilities; the OV491 followed in 2016 with much improved image processing capabilities.

YT: Some years ago, you chose to enter the automotive market. Why did you enter this market then? What are the key challenges for image processing in this segment? For viewing? For ADAS?

AH: OmniVision has been a major player in the automotive market for over 15 years, and over 160 million of our image sensors can be found in vehicles on the road today. We have been a leader in CMOS imaging solutions since 1995, so entering the automotive market was a natural progression for the application of our image sensors. For example, we were the first to bring BSI pixel technology to the automotive segment. This is another key advantage of OmniVision: in automotive, we can leverage our R&D from other segments.

With our technology leadership, we continue to address the challenges that the automotive market presents. Today, the key challenges for automotive ISPs include the need to process more pixels from higher-resolution sensors and from more cameras, and the need to process data from a range of sensors (not just image sensors).

YT: HDR and LED flickering are typical ADAS-related issues. What are the applications for viewing in automotive? Is there any desire to enter the ADAS market?

AH: We are seeing that a lot of viewing applications, including CMS and ADAS, demand LED flicker mitigation (LFM) functionality. Additionally, OmniVision has led with a number of optimal HDR techniques for automotive applications, such as split pixel and DCG, along with our LFM technology, most recently with our HALE (HDR and LFM Engine) algorithm. We have a deep understanding of the challenges automotive imaging presents. For example, covering a dynamic range of 120dB is the norm, yet our customers are requesting a wider dynamic range of 140dB, and we are well prepared to address this need. These requirements are coming from both viewing and ADAS applications. We are currently in the ADAS market, working closely with all ADAS platforms and shipping in volume production.

The OAX4010 automotive ISP features OmniVision’s new HDR and LFM Engine (HALE) combination algorithm – Courtesy of OmniVision, 2019
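As a rough guide to what those dynamic-range numbers mean, image-sensor dynamic range is commonly quoted as 20·log10 of the ratio between the brightest and darkest usable scene content (an assumption here; conventions vary). Under that convention, 120dB corresponds to about a 1,000,000:1 scene contrast and 140dB to about 10,000,000:1, so each extra 20dB is another factor of ten. A minimal sketch:

```python
import math

def db_to_contrast_ratio(db: float) -> float:
    """Convert a dynamic range in dB to a linear contrast ratio (20*log10 convention)."""
    return 10.0 ** (db / 20.0)

def contrast_ratio_to_db(ratio: float) -> float:
    """Convert a linear contrast ratio back to dB."""
    return 20.0 * math.log10(ratio)

print(f"120 dB ~ {db_to_contrast_ratio(120):,.0f}:1")              # ~1,000,000:1
print(f"140 dB ~ {db_to_contrast_ratio(140):,.0f}:1")              # ~10,000,000:1
print(f"1,000,000:1 = {contrast_ratio_to_db(1_000_000):.0f} dB")   # 120 dB
```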

YT: Surround cameras that are part of the ADAS system will mostly bypass the ISP and eventually be integrated into vision processors through LVDS interfaces. Do you think the number of cameras will double, or will each camera serve a double use?

AH: The short answer is both!

There are a number of different surround view architectures, some of which include the ISP in the camera and others that will be placing the ISP into the ECU.

With the rapidly increasing adoption rate, we are expecting the number of surround view cameras to roughly double every 2-3 years. A trend that we are seeing at OmniVision is “sensor fusion,” which refers to one camera being used for multiple use cases. For example, the surround view camera could also be used for machine vision processing, in which case the image processing is likely to use a different CFA (color filter array), such as RCCB, rather than the traditional Bayer (RGGB) pattern.
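For readers unfamiliar with the color filter arrays mentioned above, the short sketch below tiles the 2x2 unit cells of a traditional Bayer (RGGB) mosaic and of an RCCB mosaic, in which clear (C) pixels replace the green filters to pass more light for machine vision. It is purely illustrative of the pixel layout, not of any OmniVision product or processing pipeline.

```python
import numpy as np

# 2x2 unit cells of the two CFAs discussed above. "C" marks a clear
# (unfiltered) pixel; RCCB swaps the two green filters of Bayer for
# clear pixels, trading color fidelity for light sensitivity.
BAYER_RGGB = np.array([["R", "G"],
                       ["G", "B"]])
RCCB = np.array([["R", "C"],
                 ["C", "B"]])

def tile_cfa(unit_cell: np.ndarray, height: int, width: int) -> np.ndarray:
    """Tile a 2x2 CFA unit cell across an even-sized (height, width) pixel grid."""
    return np.tile(unit_cell, (height // 2, width // 2))

print(tile_cfa(BAYER_RGGB, 4, 8))
print(tile_cfa(RCCB, 4, 8))
```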

YT: What other segments than automotive are you targeting? Are the challenges the same?

AH: OmniVision targets a wide range of other segments, including Mobile, Security, Notebook, Medical and Emerging. There are some unique challenges in the automotive segment, such as performance over a wide temperature range. In some cases, there is a common focus among these markets on obtaining optimal pixel and sensor performance for a given size and power target. This also highlights OmniVision's strength in synergy across segments: innovative technologies are shared among different markets to target problems at different stages.

YT: At Yole, we are seeing AI everywhere nowadays, and it is rapidly entering consumers' daily lives. In your opinion, why and how will AI change the game in the image processing market?

AH: I would agree that AI is everywhere and has the potential to impact many aspects of our lives. The new applications we are seeing are amazing.

AI can change the image processing market in a number of ways. First, the ISP is not only concerned with processing an image for viewing; it also needs to produce accurate image data for machines to act on. The processing required for these two applications is not necessarily the same. Where is the best place to perform AI, at the center or at the edge? Could we envision every camera having a small AI processing capability? These and many other possibilities are currently being developed, and it remains to be seen which AI model will become predominant.

YT: Is there anything else that OmniVision would like to tell our readers?

AH: Thank you for the opportunity to share OmniVision's perspective with your readers. We covered many topics, ranging from viewing to ADAS to AI. Another key application is in-cabin sensing, covering both driver and interior monitoring systems (DMS and IMS). This area also presents a wide range of challenges for image processing. IMS uses an RGB-IR CFA to enable both RGB images for viewing and IR images for machine vision processing.

With DMS, there are three main challenges: low power, small size and high NIR QE. OmniVision can address all three of these challenges with our global shutter portfolio. In addition, our exciting Nyxel® near-infrared (NIR) technology can be a game changer for the DMS market.

About the interviewee

Andy Hanvey joined OmniVision Technologies, Inc. in October 2016 and is currently Director of Automotive Marketing. Andy is responsible for product and regional marketing for OmniVision's automotive segment. Prior to OmniVision, he worked at Andor Technology, Aptina Imaging and, most recently, Imagination Technologies as Senior Customer Engineering Manager. With more than 20 years' experience in the semiconductor industry, he has held a number of positions in engineering, applications and marketing. In addition, he has been involved in automotive applications for over 10 years. Mr. Hanvey holds an MSc degree in optoelectronics from Queen's University Belfast.

About the interviewer

As a Technology & Market Analyst, Yohann Tschudi, PhD, is a member of the Semiconductor & Software division at Yole Développement (Yole). Yohann works daily with Yole's analysts to identify, understand and analyze the role of software within semiconductor products, from machine code to the highest level of algorithms. Market segments analyzed by Yohann include big data analysis algorithms, deep/machine learning and genetic algorithms, all stemming from artificial intelligence (AI) technologies. After his thesis at CERN (Geneva, Switzerland) in particle physics, Yohann developed dedicated software for fluid mechanics and thermodynamics applications. He then spent two years at the University of Miami (FL, United States) as a research scientist in the radiation oncology department, where he was involved in cancer auto-detection and characterization projects using AI methods based on Magnetic Resonance Imaging (MRI) images. During his research career, Yohann has authored and co-authored more than 10 relevant papers. Yohann holds a PhD in High Energy Physics and a master's degree in Physical Sciences from Claude Bernard University (Lyon, France).

Related reports:

Image Signal Processor and Vision Processor Market and Technology Trends 2019 – Yole Développement

Artificial intelligence-powered newcomers are reshuffling the pack.

ZF S-Cam 4 – Forward Automotive Mono and Tri Camera for Advanced Driver Assistance Systems (ADAS) – System Plus Consulting

Fourth generation of the S-Cam family from the leading ADAS camera player.

Source: http://www.yole.fr, http://www.omnivision.com
