aiDrive reference solutions
Our perception solution – MS²N
State-of-the-art perception serves as the foundation of all automated driving solutions. All aiDrive implementations are built on our Multi-Sensor Model-Space Neural Network (MS²N), ensuring that we deliver the safest and most robust solutions on the market.
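The tables below list "MS²N early fusion" among the fusion features. As a purely illustrative aid, the sketch below shows the general idea behind early fusion in a shared model space: per-sensor measurements are projected into one common bird's-eye-view grid and stacked as channels before any object-level interpretation. The grid size, sensor attributes, and scatter-style projection are assumptions chosen for brevity; they do not describe the actual MS²N architecture.

```python
# Illustrative early-fusion sketch (NOT the actual MS²N design).
# Each sensor contributes features that are warped into one shared
# bird's-eye-view (BEV) "model space" grid; fusion happens by stacking
# the aligned maps before any object-level interpretation.
import numpy as np

GRID = (200, 200)          # assumed BEV grid: 0.5 m cells covering 100 m x 100 m

def to_model_space(points_xy, features, grid=GRID, cell=0.5):
    """Scatter per-detection features into the shared BEV grid."""
    bev = np.zeros(grid + (features.shape[1],), dtype=np.float32)
    ix = (points_xy[:, 0] / cell + grid[0] / 2).astype(int)
    iy = (points_xy[:, 1] / cell + grid[1] / 2).astype(int)
    ok = (ix >= 0) & (ix < grid[0]) & (iy >= 0) & (iy < grid[1])
    bev[ix[ok], iy[ok]] = features[ok]
    return bev

# Hypothetical raw inputs: camera features back-projected to the ground plane,
# radar targets with (range rate, RCS) attributes.
cam_xy = np.random.uniform(-40, 40, (500, 2))
cam_feat = np.random.rand(500, 8)
radar_xy = np.random.uniform(-80, 80, (64, 2))
radar_feat = np.random.rand(64, 2)

fused = np.concatenate(
    [to_model_space(cam_xy, cam_feat), to_model_space(radar_xy, radar_feat)],
    axis=-1,                # early fusion: stack channels in the shared grid
)
print(fused.shape)          # (200, 200, 10) -> input to a shared network head
```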
Specifications
Minimum sensor set | 6× min. Full HD camera (70–190° FoV), 1× long-range radar, 4× corner radar
Supported features | ACC, LKA, FCW, AEB, LDW, LCDAS, ISA, TSR, TLR, Blind Spot Assist, Automated Lane Change, Highway Interchange Management
Perception features | Lane and road marking detection; Object detection and classification: vehicles & VRUs, traffic signs, traffic lights
Fusion features | Road model; MS²N early fusion
Localization features | Vehicle dynamics-based odometry; Lane-level localization
Motion planning | Route planning; Behavior planning; Trajectory planning
Target platform | NVIDIA Xavier with QNX, Qualcomm Snapdragon Ride
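Vehicle dynamics-based odometry, listed among the localization features above, integrates ego-motion from proprioceptive signals. The sketch below is a minimal dead-reckoning loop over wheel speed and yaw rate; the signal names, sample rate, and midpoint-heading Euler update are assumptions for illustration, not the aiDrive implementation.

```python
# Minimal dead-reckoning sketch from vehicle dynamics signals
# (wheel speed + yaw rate). Illustrative only; signal names and the
# simple Euler update are assumptions, not the aiDrive implementation.
import math

def integrate_odometry(samples, dt=0.01):
    """samples: iterable of (speed_mps, yaw_rate_radps) at a fixed rate dt."""
    x = y = yaw = 0.0
    for speed, yaw_rate in samples:
        # Mid-point heading reduces integration error of the Euler step.
        yaw_mid = yaw + 0.5 * yaw_rate * dt
        x += speed * math.cos(yaw_mid) * dt
        y += speed * math.sin(yaw_mid) * dt
        yaw += yaw_rate * dt
    return x, y, yaw

# Example: 5 s at 20 m/s with a gentle 0.05 rad/s left turn.
pose = integrate_odometry([(20.0, 0.05)] * 500, dt=0.01)
print(pose)
```

In practice such odometry drifts and is combined with lane-level or map-based localization, which is exactly how the localization rows above pair the two.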
Specifications
Minimum sensor set | 11× min. Full HD camera (70–190° FoV), 2× long-range radar, 4× corner radar, 4× Lidar
Supported features | ACC, LKA, FCW, AEB, LDW, LCDAS, ISA, TSR, TLR, Blind Spot Assist, Unsupervised Lane Change, Highway Interchange Management
Perception features | Drivable free space detection; Lane and road marking detection; Object detection and classification: vehicles & VRUs, traffic signs, traffic lights
Fusion features | Road model; MS²N early fusion
Localization features | Vehicle dynamics-based odometry; HD map-based localization
Motion planning | Route planning; Behavior planning; Trajectory planning
Target platform | Dual NVIDIA Xavier with QNX, Dual Qualcomm Snapdragon Ride
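Both highway configurations above split motion planning into route, behavior, and trajectory layers. The sketch below shows that layering as plain interfaces with a toy lane-change rule; the class names, thresholds, and straight-line trajectory sampling are assumptions made for brevity, not aiDrive APIs.

```python
# Illustrative three-layer planning split: route -> behavior -> trajectory.
# Class names and the toy lane-change rule are assumptions, not aiDrive APIs.
from dataclasses import dataclass

@dataclass
class RoutePlan:
    lane_ids: list          # ordered lanes from the map that reach the destination

@dataclass
class BehaviorDecision:
    action: str             # "keep_lane", "change_left", or "change_right"
    target_lane: int

def plan_behavior(route: RoutePlan, ego_lane: int, lead_gap_m: float) -> BehaviorDecision:
    """Toy rule: change lane when the gap to the lead vehicle gets too small."""
    if lead_gap_m < 40.0 and ego_lane + 1 in route.lane_ids:
        return BehaviorDecision("change_left", ego_lane + 1)
    return BehaviorDecision("keep_lane", ego_lane)

def plan_trajectory(decision: BehaviorDecision, speed_mps: float, horizon_s=3.0, dt=0.1):
    """Sample a simple path; a real planner would optimize jerk and curvature."""
    lateral_shift = 3.5 if decision.action != "keep_lane" else 0.0   # assumed lane width
    n = int(horizon_s / dt)
    return [(speed_mps * dt * i, lateral_shift * i / n) for i in range(n + 1)]

route = RoutePlan(lane_ids=[1, 2])
decision = plan_behavior(route, ego_lane=1, lead_gap_m=25.0)
print(decision, plan_trajectory(decision, speed_mps=30.0)[:3])
```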
Specifications
Sensor input | 4× min. Full HD camera (>180° FoV), 4× corner radar, 12× USS
Supported features | Navigating a pre-recorded route into a driveway or home garage
Perception features | Drivable free space detection; Object detection and classification: vehicles & VRUs
Fusion features | Occupancy grid fusion
Localization features | Visual odometry; Vehicle signal-based odometry; Visual landmark-based localization
Motion planning | Global route-based navigation
Target platform | TI TDA4, NVIDIA Xavier with QNX, Qualcomm Snapdragon Ride
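Occupancy grid fusion, the fusion feature listed above, is commonly realized as a per-cell log-odds update as free-space and obstacle evidence arrives from the cameras, radars, and ultrasonic sensors. The sketch below shows that update on a toy 1D grid; the inverse sensor model probabilities and grid size are assumptions for illustration.

```python
# Log-odds occupancy grid update, a common way to fuse range sensors
# (camera free space, radar, USS) into one grid. The sensor-model
# probabilities and grid size are assumptions for illustration.
import numpy as np

P_HIT, P_MISS = 0.7, 0.4                 # assumed inverse sensor model
L_HIT = np.log(P_HIT / (1 - P_HIT))
L_MISS = np.log(P_MISS / (1 - P_MISS))

def update(log_odds, hit_cells, free_cells):
    """Add evidence; cells behind a return stay untouched (unknown)."""
    log_odds[hit_cells] += L_HIT
    log_odds[free_cells] += L_MISS
    return log_odds

grid = np.zeros(100)                      # 1D toy grid, log-odds 0 == unknown
# A USS return at cell 30: cells 0..29 observed free, cell 30 occupied.
grid = update(grid, hit_cells=[30], free_cells=list(range(30)))
grid = update(grid, hit_cells=[30], free_cells=list(range(30)))

prob = 1 - 1 / (1 + np.exp(grid))         # back to occupancy probability
print(round(prob[30], 2), round(prob[10], 2))  # ~0.84 occupied, ~0.31 free
```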
Specifications
Sensor input | 4× min. Full HD camera (>180° FoV), 4× corner radar, 12× USS, 1× Lidar
Supported features | Driverless vehicle maneuvering in structured parking facilities; driverless park-in and park-out maneuvers; forward and backward AEB
Perception features | Drivable free space detection; Object detection and classification: vehicles & VRUs, road markings, traffic signs; Parking space detection
Fusion features | Occupancy grid fusion; MS²N early fusion
Localization features | Visual odometry; Vehicle signal-based odometry; Visual landmark-based localization
Motion planning | Parking structure path planning; Parking space maneuvering
Target platform | TI TDA4, NVIDIA Xavier with QNX, Qualcomm Snapdragon Ride
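Visual landmark-based localization appears in the localization rows of both parking solutions. A textbook way to realize it is to match currently observed landmarks against a stored landmark map and solve for the rigid 2D transform that aligns them; the sketch below does this with a Kabsch-style SVD fit. The landmark coordinates and correspondences are made up for the example, and a production system would add outlier rejection and filtering.

```python
# Rigid 2D alignment of observed landmarks to a stored landmark map
# (Kabsch/Procrustes via SVD) -- a textbook building block for visual
# landmark-based localization. Coordinates and correspondences below
# are made up for the example.
import numpy as np

def align_2d(observed, mapped):
    """Return (R, t) so that R @ observed_i + t ~= mapped_i."""
    obs_c = observed - observed.mean(axis=0)
    map_c = mapped - mapped.mean(axis=0)
    U, _, Vt = np.linalg.svd(obs_c.T @ map_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = mapped.mean(axis=0) - R @ observed.mean(axis=0)
    return R, t

# Stored map landmarks vs. the same landmarks seen from a shifted, rotated ego pose.
landmarks_map = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [0.0, 2.0]])
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t_true = np.array([1.0, 0.5])
observed = (landmarks_map - t_true) @ R_true   # landmarks in the ego frame

R, t = align_2d(observed, landmarks_map)
print(np.round(R, 3), np.round(t, 3))          # recovers the simulated ego pose (R_true, t_true)
```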
Specifications
Sensor input | Front min. Full HD camera, front long-range radar (LRR)
Supported features | ACC, AEB, FCW, LDW, LKA, TSR, TLR
Perception features | Lane and road marking detection; Object detection and classification: vehicles & VRUs, traffic signs, traffic lights
Fusion features | MS²N early fusion; Road model
Localization features | Visual odometry; Vehicle signal-based odometry
Motion planning | –
Target platform | TI TDA4, NVIDIA Xavier with QNX, Nextchip Apache5 and Apache6
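Among the front-camera features above, FCW and AEB are typically staged from a time-to-collision (TTC) estimate derived from the range and closing speed to the lead object. The sketch below computes TTC and a tiered warn/brake decision; the thresholds are illustrative assumptions, not calibrated aiDrive values.

```python
# Time-to-collision (TTC) based warn/brake staging, a common basis for
# FCW/AEB features. The thresholds below are illustrative assumptions,
# not calibrated aiDrive values.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """TTC = range / closing speed; infinite when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def fcw_aeb_state(range_m: float, closing_speed_mps: float) -> str:
    ttc = time_to_collision(range_m, closing_speed_mps)
    if ttc < 0.8:        # assumed full-braking threshold
        return "AEB_FULL_BRAKE"
    if ttc < 1.6:        # assumed partial-braking threshold
        return "AEB_PARTIAL_BRAKE"
    if ttc < 2.6:        # assumed driver-warning threshold
        return "FCW_WARN"
    return "NO_ACTION"

# Ego closing at 10 m/s on a lead vehicle 20 m ahead -> TTC = 2.0 s.
print(fcw_aeb_state(range_m=20.0, closing_speed_mps=10.0))   # FCW_WARN
```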