Written by Márton Helli / Posted on 2/19/25
Smarter edge case discovery with AI-based adaptive testing
How aiFab Leverages Adaptive Design of Experiments for Better Edge Case Discovery
Testing Advanced Driver Assistance Systems (ADAS) and Automated Driving (AD) systems is an enormous challenge. The sheer number of possible driving scenarios is overwhelming, and traditional Design of Experiments (DoE) methods struggle to efficiently identify edge cases—those rare but critical situations where system performance is pushed to its limits.
At aiMotive, our aiFab solution is now equipped with a more advanced adaptive testing approach, significantly improving how we find and analyze these crucial edge cases.
Static DoE Methods: The Traditional Approach
Standard DoE methods aim to systematically explore a scenario’s parameter space, ensuring thorough test coverage. However, they often fall short when it comes to efficiently identifying edge cases: scenarios that potentially expose safety risks and performance limitations. Let’s look at some common approaches:
Grid Search
- How it works: Divides the scenario space into a grid and tests all possible parameter combinations (see the sketch below).
- Pros: Guarantees full coverage.
- Cons: Computationally impractical for large parameter spaces (curse of dimensionality).
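To make this concrete, here is a minimal sketch (not aiFab code) of a full-factorial grid over two hypothetical scenario parameters, ego speed and lead-vehicle deceleration:

```python
import itertools
import numpy as np

# Hypothetical scenario parameters: ego speed [m/s] and lead-vehicle deceleration [m/s^2]
ego_speed = np.linspace(10.0, 40.0, 7)   # 7 grid points per axis
lead_decel = np.linspace(1.0, 9.0, 7)

# Full factorial: every combination becomes a test case (7 x 7 = 49 runs here);
# the run count grows exponentially with every additional parameter.
grid_cases = list(itertools.product(ego_speed, lead_decel))
print(len(grid_cases))  # 49
```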
Random Sampling
- How it works: Selects test cases randomly within the defined parameter space (sketched below).
- Pros: Simple to implement and scalable.
- Cons: Can miss important regions, leading to inefficient testing.
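For comparison, a random-sampling sketch over the same hypothetical two-parameter space might look like this:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_cases = 50

# Each test case is drawn uniformly and independently, which is cheap and scalable
# but can leave clusters in some regions and gaps in others.
random_cases = np.column_stack([
    rng.uniform(10.0, 40.0, n_cases),  # ego speed [m/s]
    rng.uniform(1.0, 9.0, n_cases),    # lead-vehicle deceleration [m/s^2]
])
```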
Latin Hypercube Sampling (LHS)
- How it works: Ensures each parameter is evenly sampled across its range, improving coverage of every dimension (see the sketch below).
- Pros: More efficient than pure random sampling, offering better coverage.
- Cons: Still treats all scenarios equally, not prioritizing critical edge cases.
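A corresponding LHS sketch, using SciPy’s quasi-Monte Carlo module for the same hypothetical two-parameter space:

```python
from scipy.stats import qmc

# One Latin Hypercube sample of 50 cases in 2 dimensions
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=50)  # points in the unit hypercube

# Scale to the hypothetical scenario bounds: ego speed [10, 40] m/s, deceleration [1, 9] m/s^2
lhs_cases = qmc.scale(unit_samples, l_bounds=[10.0, 1.0], u_bounds=[40.0, 9.0])
```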
While these static methods provide some level of coverage, they lack adaptability—they don’t prioritize high-risk situations or learn from past test results.
Why Adaptive DoE is a Game-Changer
Traditional methods assume that all scenarios are equally important, but in ADAS/AD testing, edge cases matter most. In aiMotive's aiFab solution, AI-based adaptive DoE dynamically selects test cases based on previous results, learning from the system’s failures and adjusting its focus accordingly.
Bayesian Optimization: Smarter Testing Through Learning
Bayesian Optimization (BO) transforms scenario selection from a brute-force search into an intelligent, data-driven process. Instead of randomly sampling, BO:
- Predicts which new test cases are most likely to expose failures.
- Uses a surrogate model (e.g., Gaussian Processes) to approximate the unknown function that maps scenario parameters to criticality.
- Incorporates an acquisition function (e.g., Lower Confidence Bound or Expected Improvement) to decide the next best test case, as in the simplified sketch below.
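To illustrate how these pieces fit together, here is a heavily simplified, self-contained sketch rather than aiFab’s actual implementation: a Gaussian Process surrogate (scikit-learn) with a Lower Confidence Bound acquisition function, assuming the criticality KPI to minimize is the scenario’s minimum TTC and that run_scenario is a hypothetical stand-in for a simulation run.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_scenario(params):
    """Hypothetical stand-in for a simulation run returning a criticality KPI
    (minimum TTC in seconds); lower values mean a more critical scenario."""
    ego_speed, lead_decel = params
    return 5.0 - 0.08 * ego_speed - 0.2 * lead_decel  # toy stand-in only

# Candidate pool over a hypothetical 2-D scenario space
rng = np.random.default_rng(0)
candidates = np.column_stack([
    rng.uniform(10.0, 40.0, 500),   # ego speed [m/s]
    rng.uniform(1.0, 9.0, 500),     # lead-vehicle deceleration [m/s^2]
])

# Seed the surrogate model with a handful of initial scenarios
X, candidates = candidates[:5], candidates[5:]
y = np.array([run_scenario(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                                  # adaptive testing budget
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    lcb = mu - 2.0 * sigma                           # Lower Confidence Bound acquisition
    idx = int(np.argmin(lcb))                        # low predicted TTC or high uncertainty
    X = np.vstack([X, candidates[idx]])
    y = np.append(y, run_scenario(candidates[idx]))
    candidates = np.delete(candidates, idx, axis=0)  # avoid re-testing the same case

print("Most critical scenario found:", X[np.argmin(y)], "min TTC:", y.min())
```

Each iteration spends its single simulation on the scenario the surrogate currently considers most promising, either because the predicted TTC is low or because the model is still uncertain there, so the test budget concentrates around emerging failure regions.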
Why Criticality Metrics Matter
BO relies on Key Performance Indicators (KPIs) that define what makes a scenario “interesting.” Common KPIs include the following (a simplified computation sketch follows the list):
- Time to Collision (TTC) – The time remaining before a collision would occur if the current speeds and paths were maintained.
- Post-encroachment Time (PET) – The time gap between conflicting traffic movements.
- Delta-v (Change in Velocity) – A key indicator of crash severity.
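As a rough illustration (simplified straight-line kinematics and a perfectly inelastic collision model, not aiFab’s KPI definitions), TTC and delta-v can be computed from basic quantities like this:

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """TTC for a simple following scenario: time until the gap closes,
    assuming both vehicles keep their current speeds."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return float("inf")              # ego is not closing in on the lead vehicle
    return gap_m / closing_speed

def ego_delta_v(ego_speed_mps, other_speed_mps, ego_mass_kg, other_mass_kg):
    """Delta-v of the ego vehicle under a perfectly inelastic collision model."""
    relative_speed = abs(ego_speed_mps - other_speed_mps)
    return other_mass_kg / (ego_mass_kg + other_mass_kg) * relative_speed

print(time_to_collision(gap_m=30.0, ego_speed_mps=25.0, lead_speed_mps=15.0))  # 3.0 s
print(ego_delta_v(25.0, 0.0, ego_mass_kg=1500.0, other_mass_kg=1500.0))        # 12.5 m/s
```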
By continuously updating the model based on KPI outcomes, aiFab ensures that computational resources are focused where they matter most: on discovering critical failures rather than on non-critical scenarios.