Friday, December 20, 2024

Helm.ai Announces Generative Simulation of High-Fidelity Labeled Images for Autonomous Driving

Helm.ai, a provider of advanced AI software for Advanced Driver Assistance Systems (ADAS), autonomous driving, and robotics automation, announced the launch of neural network-based, high-fidelity virtual scenario generation models for perception simulation. The new technology enhances the company’s suite of AI software solutions for developing high-end ADAS (Levels 2 and 3) and Level 4 autonomous driving systems.

The company has developed its generative simulation models by training neural networks on large-scale image datasets. The models can generate highly realistic images of virtual driving environments with variations in parameters including illumination and weather conditions, time of day, geographic location, highway and urban settings, road geometry, and road markings. Additionally, the generated synthetic images contain accurate label information for the surrounding agents, obstacles, and other aspects of the driving environment, such as pedestrians, vehicles, lane markings, and traffic cones. Helm.ai’s generative simulation thus produces highly realistic, labeled synthetic image data that can be used for large-scale training and validation, especially to resolve rare corner cases.
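
The release does not describe Helm.ai’s interfaces, so the sketch below is purely illustrative of the workflow described above: sampling scene parameters and receiving images paired with ground-truth labels. The names SceneParameters, LabeledObject, LabeledFrame, and sample_scene are assumptions introduced here for illustration, not Helm.ai’s API.

```python
# Hypothetical sketch of the kind of data described above: each synthetic frame
# pairs a rendered image with ground-truth labels, and scene conditions are
# controllable parameters. All names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple
import random

@dataclass
class SceneParameters:
    """Controllable conditions for a generated driving scene."""
    weather: str = "clear"           # e.g. "rain", "fog", "snow"
    time_of_day: str = "noon"        # e.g. "dawn", "night"
    environment: str = "highway"     # or "urban"
    region: str = "US-West"          # coarse geographic style
    road_geometry: str = "straight"  # e.g. "curved", "intersection"

@dataclass
class LabeledObject:
    """Ground-truth annotation for one agent or obstacle in the frame."""
    category: str                    # "pedestrian", "vehicle", "traffic_cone", ...
    bbox: Tuple[int, int, int, int]  # pixel box: (x_min, y_min, x_max, y_max)

@dataclass
class LabeledFrame:
    """One synthetic sample: image pixels plus per-object labels."""
    image: bytes                     # encoded image from the generative model
    objects: List[LabeledObject] = field(default_factory=list)

def sample_scene(rng: random.Random) -> SceneParameters:
    """Randomize conditions to cover diverse (including rare) scenarios."""
    return SceneParameters(
        weather=rng.choice(["clear", "rain", "fog", "snow"]),
        time_of_day=rng.choice(["dawn", "noon", "dusk", "night"]),
        environment=rng.choice(["highway", "urban"]),
        road_geometry=rng.choice(["straight", "curved", "intersection"]),
    )

if __name__ == "__main__":
    rng = random.Random(0)
    print(sample_scene(rng))  # one randomized scene specification
```

In a pipeline of this shape, sweeping sample_scene over many seeds would yield a diverse training or validation set whose labels come directly from the generator rather than from manual annotation.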

Users can provide text- or image-based prompts to instantly generate high-fidelity driving scenes that replicate real-world encounters or create entirely synthetic environments. These AI-based simulation capabilities enable scalable training and validation of robust perception software for autonomous systems. Simulation is crucial in the development and validation of ADAS and autonomous driving systems, particularly for rarely occurring corner cases such as difficult lighting conditions, complicated road geometries, encounters with unusual obstacles (such as animals or flipped-over vehicles), and specific object configurations (such as a bicyclist partially occluded by a vehicle).
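
The text-prompt workflow can be sketched in a similarly hedged way. The function generate_scene and the example prompt below are hypothetical placeholders used only to illustrate how a corner case might be requested; they are not a published Helm.ai interface.

```python
# Hypothetical sketch of prompt-driven corner-case generation as described above.
# generate_scene() is an assumed placeholder, not a published Helm.ai API.
from typing import Dict, List

def generate_scene(prompt: str, num_frames: int = 8) -> List[Dict]:
    """Return labeled synthetic frames matching a natural-language scene description."""
    # A real backend would invoke a text-conditioned generative model and return
    # image data paired with ground-truth labels; this stub only echoes the
    # request so the sketch stays runnable.
    return [{"prompt": prompt, "frame_index": i, "labels": []} for i in range(num_frames)]

# A corner-case request of the kind mentioned above: an unusual obstacle,
# difficult lighting, and a partially occluded road user in one scene.
corner_case_prompt = (
    "wet urban intersection at night, headlight glare, "
    "a flipped-over vehicle blocking the right lane, "
    "a bicyclist partially occluded behind a parked van"
)

frames = generate_scene(corner_case_prompt)
print(f"generated {len(frames)} synthetic frames for the corner case")
```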

Neural network-based simulation developed by Helm.ai provides significant advantages over traditional physics-based simulators, particularly in scalability and extensibility. While physics-based simulators are often limited by the complexity of accurately modeling physical interactions and realistic appearances, generative AI-based simulation learns directly from real image data, allowing highly realistic appearance modeling, rapid asset generation from simple prompts, and the scalability required to accommodate diverse driving scenarios and operational design domains (ODDs). Helm.ai’s generative simulation models can be further extended to construct any object class or environmental condition, enabling the creation of a wide variety of driving environments to meet the specific development and validation requirements of automakers.

“Generative simulation provides a highly scalable and unified approach to the development and validation of robust high-end ADAS and L4 autonomous driving systems,” said Helm.ai’s CEO and founder, Vladislav Voroninski.

“Our models, trained on extensive real-world datasets, accurately capture the complexities of driving environments. This milestone in generative simulation is important for developing and validating production-grade autonomous driving software, in particular when it comes to addressing the critical tail end of rare corner cases. We’re excited to pave the way for AI-based simulation for autonomous driving.”

SOURCE: Businesswire
