UK-based software company rFpro has announced its latest simulation technology, which it says greatly reduces the industry’s dependence on real-world testing when developing AVs and ADAS. The company states that its new ray tracing rendering technology is the first to accurately simulate how a vehicle’s sensor system perceives its environment.
“The industry has widely accepted that simulation is the only way to safely and thoroughly subject AVs and autonomy systems to a substantial number of edge cases to train AI and prove they are safe,” said Matt Daley, operations director, rFpro. “However, up until now, the fidelity of simulation hasn’t been high enough to replace real-world data. Our ray tracing technology is a physically modeled simulation solution that has been specifically developed for sensor systems to accurately replicate the way they ‘see’ the world.”
The ray tracing graphics engine is described as a high-fidelity image rendering system that complements rFpro’s existing rasterization-based rendering engine. Rasterization simulates light taking a single bounce through a simulated scene. This is quick enough to enable real-time simulation and powers rFpro’s driver-in-the-loop solution, which is used across the automotive and motorsport sectors.
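To make that distinction concrete, the sketch below shows the kind of single-bounce, direct-lighting shading a rasterizer evaluates per pixel. It is purely illustrative and not rFpro code; the Lambertian surface model and point lights are assumptions chosen for brevity.

```python
# Illustrative sketch (not rFpro code): single-bounce direct lighting,
# the shading a rasterizer can evaluate fast enough for real time.
import numpy as np

def shade_direct(point, normal, albedo, lights):
    """Return radiance at a surface point from direct light only.

    Light takes exactly one bounce: source -> surface -> camera.
    Inter-reflections are ignored, which is what keeps it real-time.
    """
    radiance = np.zeros(3)
    for light_pos, light_color in lights:
        to_light = light_pos - point
        dist2 = to_light @ to_light          # squared distance to light
        to_light /= np.sqrt(dist2)           # normalize direction
        cos_theta = max(normal @ to_light, 0.0)  # Lambertian term
        radiance += albedo * light_color * cos_theta / dist2
    return radiance

# One surface point lit by a single overhead lamp
print(shade_direct(np.array([0.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]),
                   np.array([0.8, 0.8, 0.8]),
                   [(np.array([0.0, 2.0, 0.0]), np.array([1.0, 1.0, 1.0]))]))
```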
Ray tracing powers rFpro’s software-in-the-loop solution, which is aimed at generating synthetic training data. It traces multiple light rays through a scene to capture the nuances of the real world. As a multi-path technique, it can reliably simulate the huge number of reflections that occur around a sensor. This is extremely important for accurately portraying reflections and shadows in low-light scenarios or environments with multiple light sources, such as multi-story parking lots, illuminated tunnels with bright daylight at their exits, or night driving under streetlamps.
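By contrast, a multi-path renderer keeps following each ray through further bounces. The toy Monte Carlo estimator below is only a schematic of that idea, not rFpro’s engine: the enclosure, wall albedo and lamp-hit probability are invented for the example. It shows how radiance gathered over several diffuse bounces adds light that a single-bounce model would miss.

```python
# Toy sketch (assumed scene, not rFpro's engine): Monte Carlo path
# tracing of a single pixel, following each ray through multiple
# diffuse bounces to recover inter-reflected light.
import random

ALBEDO = 0.6        # diffuse wall reflectance (assumed)
EMITTED = 1.0       # lamp radiance (assumed)
MAX_DEPTH = 8       # cap on bounces per path

def trace(depth=0):
    """Estimate radiance along one ray in a closed diffuse enclosure.

    Toy model: every ray hits a wall; with probability 0.2 the patch
    it hits is the lamp, otherwise the ray scatters diffusely and we
    recurse, attenuated by the wall albedo.
    """
    if depth >= MAX_DEPTH:
        return 0.0
    if random.random() < 0.2:          # ray reached the light source
        return EMITTED
    return ALBEDO * trace(depth + 1)   # diffuse bounce, keep tracing

# Average many paths per pixel to reduce Monte Carlo noise
samples = [trace() for _ in range(100_000)]
print("pixel radiance estimate:", sum(samples) / len(samples))
```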
Within the automotive sector, modern HDR (high dynamic range) cameras capture multiple exposures of varying duration per frame, typically short, medium and long. To simulate this accurately, the software specialist has introduced its multi-exposure camera API. The solution ensures the simulated images contain accurate motion blur, caused by fast vehicle motion or road vibration, alongside physically modeled rolling shutter effects.
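The sketch below illustrates the general principle of multi-exposure HDR capture: each exposure clips at sensor saturation, and the merge trusts only unsaturated pixels, so bright regions come from the short exposure and dark regions from the long one. It is a generic reconstruction of the technique, not rFpro’s multi-exposure camera API; the saturation level and exposure times are assumed.

```python
# Generic multi-exposure HDR sketch (assumed sensor model, not
# rFpro's API): capture short/medium/long exposures, then merge.
import numpy as np

FULL_WELL = 1.0   # normalized sensor saturation level (assumed)

def capture(scene_radiance, exposure_s):
    """Integrate radiance over the exposure and clip at saturation."""
    return np.clip(scene_radiance * exposure_s, 0.0, FULL_WELL)

def merge_hdr(scene_radiance, exposures):
    """Recover radiance from multiple exposures of one frame.

    Each exposure is normalized back to radiance, then averaged with
    weights that discard saturated pixels.
    """
    estimates, weights = [], []
    for t in exposures:
        img = capture(scene_radiance, t)
        w = (img < FULL_WELL).astype(float)  # trust unsaturated pixels
        estimates.append(img / t)            # back to radiance units
        weights.append(w)
    w_sum = np.maximum(sum(weights), 1e-9)
    return sum(e * w for e, w in zip(estimates, weights)) / w_sum

# A scene spanning a wide dynamic range: deep shadow to bright sky
radiance = np.array([0.01, 0.5, 20.0, 400.0])
print(merge_hdr(radiance, exposures=[1/4000, 1/250, 1/15]))
```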
“Simulating these phenomena is critical to accurately replicating what the camera ‘sees’; otherwise the data used to train ADAS and autonomous systems can be misleading,” said Daley. “This is why traditionally only real-world data has been used to develop sensor systems. Now, for the first time, ray tracing and our multi-exposure camera API are creating engineering-grade, physically modeled images, enabling manufacturers to fully develop sensor systems in simulation.”
Ray tracing is applied to every element within a simulated scene, each physically modeled with accurate material properties, to create the highest-fidelity images. Because it is computationally demanding, the process can be decoupled from real time. Furthermore, the frame rendering rate can be adjusted to suit the level of detail required, enabling high-fidelity rendering to be conducted overnight and played back in subsequent real-time runs.
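That decoupling can be pictured as a two-phase pipeline: an offline pass that renders frames as slowly as the detail requires, and a playback pass that streams them at a fixed real-time rate. The sketch below is schematic only; the function names, frame handles and timings are hypothetical.

```python
# Schematic sketch (hypothetical names and timings): render frames
# offline at whatever rate the detail demands, replay in real time.
import time

def render_offline(n_frames, seconds_per_frame):
    """Stand-in for an expensive ray-traced render pass.

    Rendering is decoupled from the clock: each frame may take far
    longer than its playback interval (e.g. run overnight).
    """
    frames = []
    for i in range(n_frames):
        time.sleep(seconds_per_frame)      # simulate heavy rendering
        frames.append(f"frame_{i:04d}")    # hypothetical frame handle
    return frames

def play_back(frames, fps=30):
    """Replay pre-rendered frames at a fixed real-time rate."""
    interval = 1.0 / fps
    for frame in frames:
        start = time.perf_counter()
        # ...feed `frame` to the sensor model under test here...
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, interval - elapsed))

frames = render_offline(n_frames=5, seconds_per_frame=0.2)  # "overnight"
play_back(frames, fps=30)                                   # real time
```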
“Ray tracing provides such high-quality simulation data that it enables sensors to be trained and developed before they physically exist,” explained Daley. “As a result, it removes the need to wait for a real sensor before collecting data and starting development. This will significantly accelerate the advancement of AVs and sophisticated ADAS technologies and reduce the requirement to drive so many development vehicles on public roads.”