TECHNOLOGIES

We saw the fast-growing number of IoT devices and came to believe that the era of Artificial Intelligence is inevitable. To contribute our talents to the world, we are developing the most power-efficient and advanced NPU (Neural Processing Unit) for IoT devices.
PERFORMANCE TEST

The video shows how quickly 50,000 pictures from the ImageNet 2012 dataset are inferred by the DEEPX NPU on an FPGA device. The demo system consists of a Windows PC and a Xilinx Alveo U250 FPGA board. The FPGA-based implementation, running at 320 MHz, demonstrates 600 IPS (1.6 ms per inference) on MobileNet v1 in the MLPerf AI benchmark category.
Implemented in an ASIC, performance will become more than three times higher without any NPU design change, simply because of the higher achievable clock frequency (top-ranked in the MLPerf AI benchmark category).
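The quoted figures can be sanity-checked with a little arithmetic. The sketch below is a minimal illustration, not part of the demo itself; the 1 GHz ASIC clock is a hypothetical value chosen only to match the "over three times" claim relative to the 320 MHz FPGA clock:

```python
# Rough throughput arithmetic for the FPGA demo figures quoted above.
# The IPS, clock, and image-count numbers come from the text; the ASIC
# clock below is an assumption for illustration, not a published spec.

fpga_clock_mhz = 320
ips = 600                         # inferences per second (MobileNet v1)

latency_ms = 1000 / ips           # throughput-derived latency per inference
images = 50_000                   # ImageNet 2012 validation set
run_time_s = images / ips         # time to infer the whole dataset

asic_clock_mhz = 1000             # hypothetical ASIC clock (assumption)
scaling = asic_clock_mhz / fpga_clock_mhz
asic_ips = ips * scaling          # same design, higher clock

print(f"~{latency_ms:.2f} ms per inference at {ips} IPS")
print(f"~{run_time_s:.0f} s to infer {images} images")
print(f"projected ASIC: ~{asic_ips:.0f} IPS ({scaling:.2f}x)")
```

At 600 IPS the 50,000-image run finishes in roughly 83 seconds, and a 1 GHz clock would yield about a 3.1x throughput increase, consistent with the "over three times" projection.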