AImotive engineers have applied their years of experience in professional industry benchmarking to develop a controlled inference environment for benchmarking aiWare. The benchmark framework runs on top of the Caffe deep learning framework with a well-defined NN workload. Download the framework's NN descriptor files here.
We have published the first public benchmark results of tests run on our FPGA evaluation platform. Using publicly available workloads, the results show how aiWare outperforms high-end desktop GPUs. We will update this document over the next few months with additional publicly available workloads, demonstrating the efficiency of aiWare under realistic operating conditions. Read it here!
To prove the capabilities of aiWare on a custom ASIC, aiWare will be integrated into a low-volume proof-of-concept chip. The aiWare-based test chips will be designed by VeriSilicon Holdings Co., Ltd., a top Silicon Platform as a Service (SiPaaS®) company. Manufacturing will be done by GLOBALFOUNDRIES (GF), a leading global full-service semiconductor foundry, using GF's 22FDX® process technology, an ideal platform for power-sensitive AI applications. Projected test chip debut: Q1 2018.
Under AImotive's initiative, the Khronos Group, an open consortium of leading hardware and software companies, is developing a Neural Network Exchange Format (NNEF) to facilitate the deployment of trained neural networks from deep learning frameworks to hardware-accelerated inference engines.
NNEF will describe NN structure and data in a unified way, with standardized semantics, so that networks can be easily exported from deep learning frameworks and easily consumed by inference engines.
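To give a sense of what such a unified description looks like, the sketch below shows an NNEF-style textual graph for a small convolution-plus-activation network. This is illustrative only: the standard was still preliminary at the time of writing, so the exact keywords, operation names, and shapes shown here (external, variable, conv, relu, and the example dimensions) are assumptions based on the draft flat syntax and may differ in the final specification.

```
version 1.0;

graph example( input ) -> ( output )
{
    # network input tensor, NCHW layout (illustrative shape)
    input = external(shape = [1, 3, 224, 224]);

    # trained parameters, referenced by label in the accompanying data files
    filter = variable(shape = [32, 3, 5, 5], label = 'conv1/filter');
    bias   = variable(shape = [1, 32], label = 'conv1/bias');

    # operations with standardized semantics
    conv   = conv(input, filter, bias);
    output = relu(conv);
}
```

Because the structure (this graph) and the data (the labeled parameter tensors) are stored separately with well-defined semantics, an exporter in a training framework and an importer in an inference engine only need to agree on the NNEF format, not on each other's internals.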
The preliminary NNEF standard is scheduled for release in late 2017. AImotive’s aiWare-based test chip is the first design to comply with the new NNEF standard.