In this full-day Pre-Conference Symposium, we'll cover the next generation of sensors and technology in autonomous applications, from passenger vehicles to robots to drones, and more. The development of driverless-car technology is on the rise, and automakers are investing billions to be first to market with their lineups of autonomous vehicles. Investment in the autonomous vehicle industry has surpassed $100 billion, with the leading spender accounting for more than half of that total.
9:00AM - 9:25AM PDT
Host: Willard Tu, Senior Director, Automotive Business Unit, Xilinx
Title: Opening Remarks
Description: Robo Trucks or Robo Taxis: Pickup or Drop-Off? As the commercial applications for autonomous driving (AD) are explored, several NEV players continue to disrupt the consumer market, pushing for mainstream availability of the technology. This session will provide a multi-layered point of view covering questions such as: a) Has COVID or the chip supply shortage derailed the momentum toward autonomous driving? b) What is the perfect autonomous vehicle sensor package? c) What factors will drive technology realization and adoption? Cut through the misinformation that has shaped many misguided perceptions; it is not as easy as following the media coverage or the money trail. Gain the insight to help you make the right decisions.
9:25AM - 9:50AM PDT
Speaker: Amit Mehta, Head of Innovation, Koito
Title: Light it up and Look Forward: Integration of Sensors into Headlights and Infrastructure
Description: Koito/NAL is the world's number-one supplier of exterior automotive lighting and holds a 30% share of the infrastructure lighting market in Japan. This presentation will discuss use cases and implementations of sensors to support the transition into mobility.
9:50AM - 10:15AM PDT
Speaker: Jacopo Alaimo, NA Business Development Manager, XenomatiX
Title: Camera CMOS as ToF imagers for LiDAR
Description: Since 2012, XenomatiX's founders have been developing a LiDAR technology based on camera CMOS. The result is a sensor capable of capturing both a point cloud, like a LiDAR, and a 2D image, like a camera, using the same imager, derived from the same type of chip present in any cellphone. With no further need to fuse the two types of information, the perception stack can be heavily simplified, and images can be captured even with little external illumination.
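To make the single-imager idea concrete, here is a minimal sketch (not XenomatiX's implementation; the array size and timing values are assumptions) of how one ToF pixel array can yield both a depth map and a grayscale image with no fusion step:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_frame_to_depth_and_intensity(round_trip_time_s, returned_signal):
    """Turn one ToF imager frame into a depth map plus a 2D intensity image.

    round_trip_time_s : per-pixel time from laser pulse emission to return
    returned_signal   : per-pixel amplitude of the returned light, usable
                        directly as a grayscale image even in low ambient light
    """
    depth_m = C * round_trip_time_s / 2.0  # halve: light travels out and back
    intensity = returned_signal            # same pixels, so no fusion needed
    return depth_m, intensity

# Toy example: a 4x4 pixel array with ~66.7 ns round trips (~10 m of range)
times = np.full((4, 4), 66.7e-9)
signal = np.random.default_rng(0).uniform(0.2, 1.0, size=(4, 4))
depth, image = tof_frame_to_depth_and_intensity(times, signal)
print(depth[0, 0])  # ~10.0 m
```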
10:15AM - 10:40AM PDT
Speaker: Mahmoud Saadat, CEO, Zadar Labs
Title: Advancing Autonomous Perception with 4D Imaging Radar
Description: Over the last decade new applications have emerged for radar technology in the automotive industry such as adaptive cruise control, blind spot detection and automatic emergency braking. It is expected that the continued development of radar will unlock new capabilities for autonomous vehicles and safety systems.
However, challenges persist in meeting the next generation of perception requirements: achieving high angular resolution, resolving detection ambiguities (range, velocity, and angle), mitigating multipath effects and false targets, and providing strong resistance to interference and jamming, all while meeting automotive device requirements and costs.
Zadar Labs has addressed these challenges using innovative techniques in system design, signal processing, and AI/ML-based data processing. In this session, we will use intuitive, simplified examples and real data to show some of these challenges, solutions, and opportunities in using our 4D imaging radar sensors to advance perception systems for real-world applications.
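For context, these are the first-order textbook relationships behind those requirements, with illustrative parameter values (generic FMCW radar approximations, not Zadar's design):

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Illustrative FMCW imaging-radar parameters (assumptions, not a device spec)
bandwidth_hz = 1e9          # chirp sweep bandwidth
carrier_hz = 77e9           # automotive radar band
n_virtual_antennas = 192    # MIMO virtual array size
frame_time_s = 20e-3        # coherent processing interval

wavelength_m = C / carrier_hz
range_res_m = C / (2 * bandwidth_hz)                  # finer with more bandwidth
velocity_res_mps = wavelength_m / (2 * frame_time_s)  # finer with longer frames
# Beamwidth of a uniform half-wavelength-spaced array: ~2/N radians
angular_res_deg = math.degrees(2 / n_virtual_antennas)

print(f"range ~{range_res_m:.2f} m, velocity ~{velocity_res_mps:.3f} m/s, "
      f"angle ~{angular_res_deg:.2f} deg")
```

These trade-offs (more antennas for finer angle, more bandwidth for finer range) are what make high angular resolution at automotive cost a hard problem.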
10:40AM - 10:55AM PDT: Break
10:55AM - 11:20AM PDT
Speaker: Dave Tokic, VP Marketing and Strategic Partnerships, Algolux
Title: Case Study: Optimizing an Automotive Camera ISP to Improve Computer Vision Accuracy
Description: Cameras are the most ubiquitous ADAS / Autonomous Vehicle (AV) sensor for both display and computer vision. Typical applications include surround view and teleoperation display, object detection for collision warning and automatic emergency braking (AEB), or traffic light and sign recognition.
To achieve subjectively “good” image quality (IQ), the camera’s ISP parameters must be manually tuned for each specific lens and sensor configuration by experienced ISP/IQ engineers, a many-month process in the lab and field. While intensive, an experienced imaging team can converge to good visual IQ.
But for computer vision (CV), good visual IQ cannot simply be equated with what a specific CV algorithm needs. Applying “rules of thumb”, such as increasing sharpness or contrast, may improve results in certain cases but is not robust or generalizable to all imaging scenarios or vision requirements. This work has shown that for any CV algorithm, a camera's ISP must be specifically optimized based on the algorithm's structure, task, semantics, training, and bias (a simplified sketch of such an optimization loop follows the list below). This talk will:
• Provide an overview of a typical ISP image quality tuning process
• Introduce a workflow that can automatically optimize ISPs to quickly maximize computer vision accuracy
• Present results from work with an automotive provider that significantly improved their object detection results
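As a rough illustration of the second bullet (not Algolux's actual workflow; the parameter names, scoring function, and search strategy below are assumptions), ISP parameters can be tuned by a black-box loop that maximizes a CV metric such as detection mAP instead of subjective visual IQ:

```python
import random

# Hypothetical ISP parameter space (names and ranges are illustrative)
PARAM_SPACE = {
    "denoise_strength": (0.0, 1.0),
    "sharpen_amount":   (0.0, 2.0),
    "gamma":            (1.0, 3.0),
}

def run_isp(raw_frames, params):
    """Stand-in for a real ISP pipeline: tags each frame with its parameters."""
    return [(frame, params) for frame in raw_frames]

def detection_map(processed_frames):
    """Stand-in for a detector evaluation: a synthetic mAP-like score that
    peaks at one parameter setting (demonstration only, not a real metric)."""
    _, p = processed_frames[0]
    return -((p["denoise_strength"] - 0.5) ** 2
             + (p["sharpen_amount"] - 1.2) ** 2
             + (p["gamma"] - 2.0) ** 2)

def optimize_isp(raw_frames, iterations=500, seed=0):
    """Random search over ISP parameters, scoring each candidate by the
    downstream CV metric rather than by subjective image quality."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(iterations):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_SPACE.items()}
        score = detection_map(run_isp(raw_frames, params))
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(optimize_isp(raw_frames=["frame0"]))
```

In practice a gradient-free optimizer (e.g., CMA-ES) would typically replace the random search, but the loop structure is the same.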
11:20AM - 11:45AM PDT
Speaker: Alberto Stochino, CEO, Perceptive Machines AI
Title: Combo Sensing: LiDAR + “X”, What is “X”?
Description: The majority of autonomous vehicles use some combination of four main kinds of sensors: cameras, radar, ultrasonics, and LiDAR. Engineers often debate which individual sensing technology is best able to see through bad weather such as fog, rain, or snow. New sensor innovation in cameras is helping address issues like LED flicker, low-light conditions, and glare; other technologies aim to solve ranging or relative-speed issues. However, a few companies have decided to combine the best of different sensing modalities. In this session, learn which modalities are being combined and what the advantages of combo sensors are.
11:45AM - 12:10PM PDT
Speaker: Indu Vijayan, Director of Product Management, AEye
Title: High Performance Automotive LiDAR
Description: Long-range LiDAR systems are now deemed essential for obtaining the kind of resolution at range demanded for safe L3+ autonomy. High-performance LiDAR systems are now coming to market that can be software-configured for specific OEM use cases and mounting preferences. In this session, AEye will explain how these next-generation sensors expand upon LiDAR's intrinsic value by bringing agility to the sensor, in the form of ultra-long range, a wide field of view, software configurability, and instantaneous resolution, addressing the industry's biggest pain points and toughest corner cases.
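To put “resolution at range” in concrete terms, a quick back-of-the-envelope calculation (the 0.1° angular spacing is an assumed figure, not AEye's specification):

```python
import math

# The gap between adjacent LiDAR points grows linearly with distance.
angular_res_deg = 0.1  # assumed point-to-point angular spacing
for range_m in (50, 150, 300):
    spacing_m = 2 * range_m * math.tan(math.radians(angular_res_deg) / 2)
    print(f"at {range_m:>3} m: points land ~{spacing_m:.2f} m apart")
# At 300 m, 0.1 deg spacing leaves ~0.52 m between points -- roughly the
# width of a pedestrian, so only one or two returns may hit the target.
```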