aiWare is a universal AI-optimized hardware IP that scales from embedded solutions to data centers. It allows hardware to be tailored to the exact needs of any given neural network structure. Its key advantage is that it is designed for real-world use cases: it handles high-resolution inputs and scales efficiently to more complex neural networks.

Benchmark Framework

AImotive engineers have applied their many years of experience creating professional industry benchmarks to develop a controlled inference environment for benchmarking aiWare. The benchmark framework runs on top of the Caffe deep learning framework with a well-defined NN workload. Download the NN descriptor files of the framework here.
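Since the framework builds on Caffe, each workload is defined by a Caffe network descriptor. As an illustrative sketch (the layer names and shapes below are hypothetical, not taken from the actual downloadable descriptors), such a prototxt file looks like:

```protobuf
# Hypothetical minimal Caffe deploy-style descriptor; the real benchmark
# descriptors are available via the download link above.
name: "sample_net"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 32 kernel_size: 3 stride: 1 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
```

A descriptor like this fully specifies the inference workload's structure, so the same network can be run unmodified on aiWare and on reference platforms for comparison.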


aiWare Benchmarks

We have published the first public benchmark results of tests run on our FPGA evaluation platform. Using publicly available workloads, the results show how aiWare outperforms high-end desktop GPUs. We will update this document over the next few months with more publicly available workloads, demonstrating the efficiency of aiWare under realistic operating conditions.


aiWare-based Test Chip


To prove the capabilities of aiWare on a custom ASIC, aiWare will be integrated into a low-volume proof-of-concept chip. The aiWare-based test chips will be designed by VeriSilicon Holdings Co., Ltd., a top Silicon Platform as a Service (SiPaaS®) company. Manufacturing will be done by GLOBALFOUNDRIES (GF), a leading global full-service semiconductor foundry, using GF's 22FDX® process technology, an ideal platform for power-sensitive AI applications. Projected test chip debut: Q1 2018.

FPGA Evaluation Kit


To showcase aiWare’s capabilities in accelerating NNs, AImotive offers an FPGA evaluation kit that is custom-made to allow independent, hands-on benchmarking of aiWare. It can run both the sample neural networks created by AImotive and your own neural networks. Included in the kit are sample applications that can run the delivered neural networks, visualize their output, and measure their runtime performance.
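The runtime measurement the kit performs can be sketched in a few lines of Python. The `run_inference` callable below is a hypothetical stand-in for whatever entry point the sample applications expose; the warm-up/measure structure is the standard pattern for latency benchmarking, not the kit's actual code.

```python
import time
import statistics

def benchmark(run_inference, warmup=5, iterations=50):
    """Time an inference call and report latency statistics in milliseconds.

    `run_inference` is a hypothetical stand-in for the evaluation kit's
    inference entry point; any zero-argument callable works.
    """
    for _ in range(warmup):
        run_inference()          # warm-up runs are excluded from the stats
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_inference()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples) * 1e3,
        "p95_ms": samples[int(0.95 * len(samples))] * 1e3,
    }

# Dummy workload standing in for a real network:
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"mean latency: {stats['mean_ms']:.3f} ms")
```

Separating warm-up from measured iterations matters on any accelerator: first runs typically include one-off costs (weight loading, cache population) that would otherwise skew the reported figures.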


Global standardization of neural network formats under the NNEF framework

Under AImotive’s initiative, the Khronos Group, an open consortium of leading hardware and software companies, is working on developing a Neural Network Exchange Format (NNEF) to facilitate the deployment of trained neural networks from deep learning frameworks to hardware accelerated inference engines.

NNEF will describe NN structure and data in a unified way, with standardized semantics, so that networks can be easily exported from deep learning frameworks and easily consumed by inference engines.
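To give a sense of the intended format (the spec was still preliminary at the time of writing, so syntax details may differ from the final standard, and the names and shapes below are purely illustrative), an NNEF-style description of a small convolutional fragment looks roughly like:

```
version 1.0;

graph net( input ) -> ( output )
{
    input  = external(shape = [1, 3, 224, 224]);
    filter = variable(shape = [32, 3, 3, 3], label = 'conv1/filter');
    bias   = variable(shape = [1, 32], label = 'conv1/bias');
    conv   = conv(input, filter, bias);
    output = relu(conv);
}
```

The point of such a format is that the same textual graph can be emitted by any training framework and parsed by any inference engine, decoupling the two sides of the deployment pipeline.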

The preliminary NNEF standard is scheduled for release in late 2017. AImotive’s aiWare-based test chip is the first design to comply with the new NNEF standard.