The Embedded Vision Summit is the only conference focused on practical computer vision and deep learning visual AI. Learn about the latest applications, techniques, technologies, and opportunities in computer vision and deep learning with 90+ talks from 100+ speakers. See 100+ demos from 60+ providers of the latest technologies enabling vision-based capabilities, including processors, algorithms, software, sensors, development tools, services, and more. Gain new insights and know-how about computer vision enabling technologies, applications, and markets.
The technical content is outstanding. Pete Warden from Google will be giving a keynote, “The Future of Computer Vision and Machine Learning is Tiny.” Ramesh Raskar from MIT Media Lab will present “Making the Invisible Visible: Within Our Bodies, the World Around Us, and Beyond.” Other speakers include representatives from Micron, Aquifi, GumGum, Whirlpool, Texas Instruments, Cadence, Intel, MathWorks, Khronos Group, Yole Développement, and many more.
Join more than 1,200 leading technologists and innovators in this fast-growing, quickly changing market. And take advantage of Day 1 hands-on, full-day trainings on TensorFlow and OpenCV plus Day 4 in-depth workshops that will increase your skills and knowledge!
Get a 10% discount on registration: contact us to receive the promotional code.
Yole Développement will participate in the following:
“AI is moving to the edge – imaging today, audio tomorrow – what is the impact on the semiconductor industry?”
By Yohann Tschudi, Software & Computing Technology and Market Analyst at Yole Développement
Meet us onsite!
Abstract: Artificial Intelligence (AI) is a major trend across multiple applications, disrupting industries in each one. But it raises key questions. One concerns how the added value will be partitioned: will AI hardware or software firms benefit most? Another is whether the hardware and semiconductor content of AI systems will sit in the cloud, in the system, or at the device level. Today there is a strong and established trend to reduce the amount of computation done on cloud hardware and to perform it instead directly on user devices, which is referred to as ‘at the edge’. This trend is driven mainly by cost, but also by latency and data confidentiality. However, bringing computation to device hardware must also satisfy important constraints: low power consumption, always-on operation, low latency, and high performance.
Meanwhile, Moore’s law is slowing down. To further improve performance, it has become necessary to design hardware specialized for the software that runs on it. This is even truer for AI algorithms, which require billions of operations. While the weights of the neurons in a huge network could be calculated with a graphics processing unit (GPU), dedicated accelerators, more commonly called neural engines, had to be created to make these calculations possible on portable devices. How will this trend unfold? What kind of hardware will it need? And what new corresponding markets will emerge?
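One reason neural engines can run these workloads within a portable device's power budget is that they trade float32 arithmetic for low-precision integer arithmetic. The following is a minimal NumPy sketch of that idea, not any specific vendor's accelerator: it quantizes the weights and input of a hypothetical layer to int8, performs the multiply-accumulate in integers, and rescales, showing the result stays close to the float32 reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: float32 weights and input, as a GPU would compute them.
W = rng.standard_normal((64, 32)).astype(np.float32)
x = rng.standard_normal(32).astype(np.float32)
y_fp32 = W @ x  # float32 reference result

# Edge-style symmetric int8 quantization: map each tensor's range onto [-127, 127].
w_scale = float(np.abs(W).max()) / 127.0
x_scale = float(np.abs(x).max()) / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# Integer multiply-accumulate (widened to int32 to avoid overflow,
# as integer accelerators typically do), then one float rescale at the end.
y_int8 = (W_q.astype(np.int32) @ x_q.astype(np.int32)).astype(np.float32) * (w_scale * x_scale)

# The quantized result tracks the float32 one at a fraction of the
# memory traffic and arithmetic energy.
err = float(np.abs(y_fp32 - y_int8).max())
print(f"max abs error: {err:.4f}")
```

The design point this illustrates is that the expensive inner loop uses only 8-bit multiplies and 32-bit adds; the floating-point scales appear once per tensor, which is what makes dedicated low-precision hardware worthwhile.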