TECHNOLOGY

Core competencies
Support for SOTA DNN Algorithms
Outstanding NPU Efficiency for Smart Mobility
Maximizing Memory Bandwidth
Scalable Performance: 0.5 ~ 200 TOPS
Support for Advanced DNN Model Compression
PURSUING TECHNOLOGIES
We saw the fast-growing number of IoT devices and believed the era of Artificial Intelligence was inevitable. To contribute our talents to the world, we are pursuing the most power-efficient and advanced NPU (Neural Processing Unit) for IoT devices.
MLPERF BENCHMARK RESULT
INITIAL PERFORMANCE TEST
The video shows how quickly the DEEPX NPU, implemented on an FPGA device, infers 50,000 images from the ImageNet 2012 dataset.
The demo system consists of a Windows PC and a Xilinx Alveo U250 FPGA board.
The FPGA-based implementation, running at 320 MHz, achieves 600 IPS (1.6 ms per inference) on MobileNet V1 in the MLPerf AI benchmark category.
When implemented in an ASIC, performance is expected to be more than three times higher without any change to the NPU design, simply from the higher clock frequency (top-ranked in the MLPerf AI benchmark).
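As a rough sanity check on those figures, the Python sketch below reproduces the latency and clock-scaling arithmetic implied above. The ASIC clock frequency used for the scaling estimate is an illustrative assumption (roughly 1 GHz), chosen only to match the "more than three times" claim; it is not a published specification.

```python
# Illustrative throughput/latency arithmetic for the FPGA demo figures above.
# The ASIC clock value is an assumption used only to illustrate the ~3x claim.

FPGA_CLOCK_HZ = 320e6          # 320 MHz, as stated for the Alveo U250 demo
FPGA_THROUGHPUT_IPS = 600      # MobileNet V1 inferences per second on the FPGA

# Latency per inference at the reported throughput: 1000 / 600 ~= 1.67 ms,
# consistent with the reported ~1.6 ms per inference.
latency_ms = 1000.0 / FPGA_THROUGHPUT_IPS

# Assumed ASIC clock (hypothetical): if throughput scales linearly with clock
# frequency, ~1 GHz gives slightly more than 3x the FPGA throughput.
ASSUMED_ASIC_CLOCK_HZ = 1.0e9
scaling = ASSUMED_ASIC_CLOCK_HZ / FPGA_CLOCK_HZ
asic_throughput_ips = FPGA_THROUGHPUT_IPS * scaling

print(f"FPGA latency:       {latency_ms:.2f} ms per inference")
print(f"Clock scaling:      {scaling:.2f}x")
print(f"Estimated ASIC IPS: {asic_throughput_ips:.0f} (assuming linear scaling)")
```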
Applications
01 Consumer Electronics: Robot Vacuums, Smart TV, Smart Air Conditioner
02 Smart Mobility: Self-driving Car, Drone, AMR (Autonomous Mobile Robot)
03 Automotive: ADAS, Driver Monitoring System, Infotainment
04 VR/AR: Entertainment Devices, Enterprise Device, Personal Assistants