Recognition Engine identifies objects and interprets the environment around the vehicle in real time, utilizing a camera-first approach that fuses low- and high-level information from radar and ultrasonic sensors. The engine solves core recognition tasks using state-of-the-art machine learning techniques and deep neural networks. Our scalable network architectures provide superior performance in AI-based object detection and classification, depth estimation, lane detection, ego-motion calculation and sensor fusion.
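As a minimal sketch of one fusion step described above, the range estimate from a (noisier) camera depth network can be combined with a precise radar range using inverse-variance weighting. All names, noise figures and the two-sensor setup here are hypothetical, not AImotive's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single object detection: a range estimate and its noise variance."""
    range_m: float
    variance: float

def fuse_ranges(camera: Detection, radar: Detection) -> Detection:
    """Inverse-variance weighting: trust the lower-noise sensor more.

    Camera depth estimates are typically noisier at long range, while
    radar gives precise range; fusing both tightens the estimate.
    """
    w_cam = 1.0 / camera.variance
    w_rad = 1.0 / radar.variance
    fused_range = (w_cam * camera.range_m + w_rad * radar.range_m) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)  # fused variance is always below either input
    return Detection(fused_range, fused_var)

# Camera estimates 52 m (high variance), radar measures 50 m (low variance):
fused = fuse_ranges(Detection(52.0, 4.0), Detection(50.0, 0.25))
```

The fused range lands close to the radar value while still incorporating the camera evidence, and the fused variance is smaller than either sensor's alone.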
Location Engine integrates AImotive’s in-house navigation engine to enhance positioning precision, utilizing vision-based localization and mapping algorithms optimized for parallel computing hardware. To ensure maximum safety, the component also relies on fused sensor data to interpret its environment. The engine uses conventional and AI-based methods for feature extraction, ego-motion calculation and localization, ensuring compatibility with all major HD landmark databases.
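The landmark-matching idea behind this kind of localization can be sketched as follows. A production system would run a full filter (e.g. an EKF) over many landmarks per frame; this hypothetical snippet shows only a single nearest-neighbor match against an HD-map landmark list and one pose-correction step:

```python
import math

def nearest_landmark(observed_xy, map_landmarks):
    """Match an observed landmark (world frame) to the closest HD-map entry."""
    return min(map_landmarks, key=lambda lm: math.dist(observed_xy, lm))

def correct_pose(pose_xy, observed_xy, map_landmarks, gain=0.5):
    """Nudge the ego pose by a fraction of the observed-vs-map residual.

    `gain` plays the role of a filter gain; a real localizer would
    derive it from the landmark and odometry uncertainties.
    """
    match = nearest_landmark(observed_xy, map_landmarks)
    dx = match[0] - observed_xy[0]
    dy = match[1] - observed_xy[1]
    return (pose_xy[0] + gain * dx, pose_xy[1] + gain * dy)

# One observed landmark, two map candidates; the match pulls the pose estimate:
pose = correct_pose((0.0, 0.0), (9.6, 0.2), [(10.0, 0.0), (0.0, 10.0)])
```

The observation at (9.6, 0.2) matches the map landmark at (10, 0), so the pose is corrected along the residual (0.4, -0.2) scaled by the gain.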
Motion Engine handles the complete self-driving decision chain, relying on the abstract 3D environment created by the Recognition and Location Engines. The engine explores the surroundings of the vehicle with robust object tracking and state prediction techniques, integrating probability-based behavior elements. Motion Engine uses reinforcement learning to choose the next maneuver, and translates high-level routing, detection and fused sensor information into a local trajectory, taking the physical constraints of the vehicle into account.
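The interplay between a learned policy and physical constraints might look like this toy sketch: a greedy choice over (hypothetical) learned Q-values, restricted to maneuvers whose curvature respects a lateral-acceleration limit at the current speed. The maneuver names, values and limit are invented for illustration:

```python
def feasible(maneuver, speed_mps, max_lat_accel=3.0):
    """Reject maneuvers whose lateral acceleration (v^2 * curvature)
    would exceed the vehicle's physical limit."""
    return speed_mps ** 2 * maneuver["curvature"] <= max_lat_accel

def choose_maneuver(q_values, maneuvers, speed_mps):
    """Greedy policy over learned maneuver values, constrained by physics."""
    candidates = [m for m in maneuvers if feasible(m, speed_mps)]
    return max(candidates, key=lambda m: q_values[m["name"]])

maneuvers = [
    {"name": "keep_lane",   "curvature": 0.0},    # 1/m
    {"name": "change_left", "curvature": 0.002},
    {"name": "sharp_exit",  "curvature": 0.02},
]
q = {"keep_lane": 0.6, "change_left": 0.8, "sharp_exit": 0.9}

# At 25 m/s the sharp exit would need 12.5 m/s^2 laterally, so it is
# filtered out despite its high value; the lane change wins instead.
best = choose_maneuver(q, maneuvers, speed_mps=25.0)
```

This separation (learned preference, hard feasibility filter) mirrors the text's point that maneuver choice must respect the vehicle's physical constraints.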
Control Engine enables vehicle control through low-level actuator commands, integrating AImotive’s in-house developed drive-by-wire solution with the corresponding APIs. The engine translates and communicates the calculated dynamic trajectory to the vehicle. This method provides a universal, safety-compliant platform that satisfies local public road testing requirements.
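The translation from one trajectory sample to actuator commands can be sketched as two clamped proportional terms. This is a hypothetical interface, not the actual drive-by-wire API; a real controller would add rate limiting, feed-forward terms and safety monitors:

```python
def trajectory_to_commands(current_speed, target_speed, heading_error_rad,
                           steer_gain=1.0, accel_gain=0.5):
    """Translate one trajectory sample into normalized actuator commands.

    Steering follows the heading error, acceleration follows the speed
    error; both are clamped to the actuator range [-1, 1].
    """
    steering = max(-1.0, min(1.0, steer_gain * heading_error_rad))
    accel = max(-1.0, min(1.0, accel_gain * (target_speed - current_speed)))
    return {"steering": steering, "accel": accel}

# Slightly off heading, 2 m/s below the trajectory's target speed:
cmd = trajectory_to_commands(current_speed=10.0, target_speed=12.0,
                             heading_error_rad=0.1)
```

The clamping step is what keeps a high-level trajectory request within what the actuators can physically deliver.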