
aiWare Building Blocks

Features

  • Optimized for multiple high-resolution sensor applications
  • Scalable, configurable, low-latency, high-efficiency architecture
  • Performance beyond 50 TMAC/s with clock speeds up to 1 GHz
  • Optimizes use of local DDR and on-chip memory for efficiency
  • Patented data management for automotive inference workloads
  • Comprehensive SDK includes tools to convert FP32 NNs to INT8

Benefits

  • Enables integration within SoCs or dedicated accelerators
  • Supports ASIL-B (v2) and ASIL-D (v3) compliant solutions
  • Ideal for NN processing in camera, LiDAR or radar subsystems
  • Highly autonomous NN processing maximizes host offload
  • Khronos® NNEF allows NN import from several AI/ML frameworks
  • Application agnostic – accelerates any NN

RTL

The aiWare IP core is fully synthesizable RTL requiring no special libraries, enabling neural network acceleration cores from 1 TMAC/s to more than 50 TMAC/s. It can be integrated on-chip alongside a host CPU in an SoC, or deployed as a dedicated NN accelerator. The application-agnostic IP is optimized for low-latency automotive applications, and the architecture maximizes host CPU offload: by combining on-chip SRAM with external DRAM, it keeps execution and dataflow within the aiWare core.

SDK

The aiWare SDK provides tools to maximize efficiency. It enables the acceleration of NNs from a wide range of frameworks such as Caffe, TensorFlow or PyTorch. As an early implementation of the Khronos NNEF™ standard, the SDK offers an NNEF importer to translate networks from these frameworks into binaries executable on aiWare-based systems. The SDK also includes tools to translate CNNs based on FP32, FP16 or INT16 into INT8 with minimal loss of precision.
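To illustrate the kind of FP32-to-INT8 conversion the SDK performs, the sketch below shows symmetric per-tensor weight quantization in plain NumPy. It is a generic, illustrative example only: the function names are hypothetical, and a production toolchain such as the aiWare SDK would additionally use calibration data and per-channel scales to keep precision loss minimal.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of FP32 weights to INT8.

    Illustrative sketch only, not the aiWare SDK's actual API.
    """
    # Scale maps the largest absolute weight onto the INT8 range [-127, 127].
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an FP32 approximation; the residual is the quantization error.
    return q.astype(np.float32) * scale

# Toy weight tensor: the largest magnitude (1.27) defines the scale.
w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, s)))  # bounded by half a quantization step
```

For symmetric quantization the worst-case error per weight is half the step size (`scale / 2`), which is why 8-bit inference typically loses little accuracy on well-scaled CNN weights.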

Benchmarks

AImotive has seen many partners suffer the consequences of relying on inappropriate benchmarks for NN acceleration. We have therefore created an inference environment for benchmarking aiWare. This tightly specified suite is published openly. The benchmarks run on Caffe, with a set of well-defined NN workloads derived from industry-standard benchmarks. The results of tests run with aiWare are also publicly available.
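A well-defined inference benchmark separates warm-up from measurement and reports a metric tied to the workload, such as effective MAC/s. The sketch below is a generic harness under assumed names (`run`, `macs_per_inference`); it is not AImotive's published suite, just an illustration of the methodology.

```python
import time
import numpy as np

def benchmark_workload(run, macs_per_inference, warmup=5, iters=50):
    """Time an inference callable and report latency plus effective MAC/s.

    Generic illustration only; `run` stands in for a real NN workload.
    """
    for _ in range(warmup):          # discard warm-up runs (caches, lazy init)
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    elapsed = time.perf_counter() - start
    latency = elapsed / iters        # mean seconds per inference
    return latency, macs_per_inference / latency

# Stand-in workload: one 256x256 matrix multiply performs 256**3 MACs.
a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)
latency, mac_rate = benchmark_workload(lambda: a @ b,
                                       macs_per_inference=256**3)
```

Reporting MAC/s against a stated workload, rather than a bare peak figure, is what makes such results comparable across accelerators.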

Evaluation Systems

AImotive offers an aiWare v2 FPGA Evaluation System, delivering up to 200 GMAC/s (400-500 GOPS). The system runs sample neural networks created by AImotive as well as customers’ own networks. It includes the aiWare SDK and relies on NNEF for flexibility, so our partners can gain an understanding of how their approaches perform when run on aiWare-based systems. A high-performance custom aiWare chip was created in Q4 2018.