Model | Resolution | Parameters | FLOPs | PCKh@50 (MPII val) | PCKh@10 (MPII val) | PCKh@50 (MPII test) | PCKh@10 (MPII test) |
---|---|---|---|---|---|---|---|
EfficientPose RT Lite* | 224x224 | 0.40M | 0.86G | 80.6 | 23.1 | - | - |
EfficientPose RT | 224x224 | 0.46M | 0.87G | 80.6 | 23.6 | 84.8 | 24.2 |
EfficientPose I Lite* | 256x256 | 0.59M | 1.54G | 83.7 | 27.7 | - | - |
EfficientPose I | 256x256 | 0.72M | 1.67G | 85.2 | 26.5 | - | - |
EfficientPose II Lite* | 368x368 | 1.46M | 7.25G | 87.1 | 30.8 | - | - |
EfficientPose II | 368x368 | 1.73M | 7.70G | 88.2 | 30.2 | - | - |
EfficientPose III | 480x480 | 3.23M | 23.35G | 89.5 | 30.9 | - | - |
EfficientPose IV | 600x600 | 6.56M | 72.89G | 89.8 | 35.6 | 91.2 | 36.0 |
OpenPose (Cao et al.) | 368x368 | 25.94M | 160.36G | 87.6 | 22.8 | 88.8 | 22.5 |
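PCKh@50 and PCKh@10 count a predicted keypoint as correct when it lies within 50% (respectively 10%) of the head-segment length from the ground-truth location. The sketch below is a minimal, illustrative way such a score could be computed; the array shapes and the `head_sizes` input are assumptions, not the benchmark's own evaluation code.

```python
import numpy as np

def pckh(pred, gt, head_sizes, threshold=0.5):
    """Fraction of keypoints whose prediction falls within
    `threshold` * head-segment length of the ground truth.

    pred, gt:    (N, K, 2) arrays of predicted / ground-truth (x, y) keypoints
    head_sizes:  (N,) array of per-person head-segment lengths in pixels
    """
    # Euclidean distance between prediction and ground truth per keypoint
    dists = np.linalg.norm(pred - gt, axis=-1)            # (N, K)
    # Normalize by each person's head size, then apply the threshold
    correct = dists <= threshold * head_sizes[:, None]    # (N, K)
    return correct.mean()

# PCKh@50 uses threshold=0.5, PCKh@10 uses threshold=0.1
```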
Network | Type | Score (AUC) | SRAM, MB | Flash, MB | Inference time, ms |
---|---|---|---|---|---|
MicroNet Large INT8 | INT8 | 0.968 | 0.4 | 0.46 | 4.8 |
MicroNet Medium INT8 | INT8 | 0.963 | 0.27 | 0.47 | 4.83 |
MicroNet Small INT8 | INT8 | 0.955 | 0.12 | 0.25 | 2.51 |
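The AUC scores above are area-under-ROC-curve values computed from per-sample anomaly scores. A minimal sketch of how such a score could be reproduced from labels and model outputs (the arrays below are toy placeholders, not data from the benchmark):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder data: 1 = anomalous clip, 0 = normal clip, plus the model's
# anomaly score (e.g. reconstruction error) for each clip.
labels = np.array([0, 0, 1, 1, 0, 1])
anomaly_scores = np.array([0.10, 0.25, 0.80, 0.55, 0.30, 0.90])

print("AUC:", roc_auc_score(labels, anomaly_scores))  # 1.0 for this toy example
```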
Network | Type | Score (Top 1 Accuracy) | SRAM, MB | Flash, MB | Inference time, ms |
---|---|---|---|---|---|
MobileNet v2 1.0 224 INT8 * | INT8 | 0.697 | 1.47 | 3.57 | 43.13 |
MobileNet v2 1.0 224 UINT8 | UINT8 | 0.708 | 1.47 | 3.27 | 40.19 |
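Fully integer-quantized models such as the INT8 entry above are typically produced with TensorFlow Lite post-training quantization. The following is a minimal sketch under assumed names: the SavedModel path, the output file name, and the `calibration_dataset` used for the representative-data generator are all illustrative.

```python
import tensorflow as tf

def representative_images():
    # Hypothetical calibration generator: yields a few hundred preprocessed
    # batches shaped like the model input (calibration_dataset is assumed).
    for batch in calibration_dataset.take(200):
        yield [tf.cast(batch, tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("mobilenet_v2_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
# Force full-integer quantization so the graph can be mapped entirely to the NPU
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("mobilenet_v2_1.0_224_INT8.tflite", "wb") as f:
    f.write(converter.convert())
```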
Network summary for mobilenet_v2_1.0_224_INT8
Accelerator configuration Ethos_U55_256
System configuration Ethos_U55_Alif_HP
Memory mode Shared_Sram
Accelerator clock 400 MHz
Design peak SRAM bandwidth 1.60 GB/s
Design peak Off-chip Flash bandwidth 0.10 GB/s
Total SRAM used 1474.22 KiB
Total Off-chip Flash used 3576.78 KiB
CPU operators = 0 (0.0%)
NPU operators = 95 (100.0%)
Average SRAM bandwidth 0.60 GB/s
Input SRAM bandwidth 11.75 MB/batch
Weight SRAM bandwidth 6.95 MB/batch
Output SRAM bandwidth 6.99 MB/batch
Total SRAM bandwidth 25.86 MB/batch
Total SRAM bandwidth per input 25.86 MB/inference (batch size 1)
Average Off-chip Flash bandwidth 0.08 GB/s
Input Off-chip Flash bandwidth 0.00 MB/batch
Weight Off-chip Flash bandwidth 3.46 MB/batch
Output Off-chip Flash bandwidth 0.00 MB/batch
Total Off-chip Flash bandwidth 3.47 MB/batch
Total Off-chip Flash bandwidth per input 3.47 MB/inference (batch size 1)
Neural network macs 304452946 MACs/batch
Network Tops/s 0.01 Tops/s
NPU cycles 10635874 cycles/batch
SRAM Access cycles 5024963 cycles/batch
DRAM Access cycles 0 cycles/batch
On-chip Flash Access cycles 0 cycles/batch
Off-chip Flash Access cycles 4959164 cycles/batch
Total cycles 17252122 cycles/batch
Batch Inference time 43.13 ms, 23.19 inferences/s (batch size 1)
Network summary for mobilenet_v2_1.0_224_quantized_1_default_1
Accelerator configuration Ethos_U55_256
System configuration Ethos_U55_Alif_HP
Memory mode Shared_Sram
Accelerator clock 400 MHz
Design peak SRAM bandwidth 1.60 GB/s
Design peak Off-chip Flash bandwidth 0.10 GB/s
Total SRAM used 1474.03 KiB
Total Off-chip Flash used 3279.23 KiB
CPU operators = 0 (0.0%)
NPU operators = 64 (100.0%)
Average SRAM bandwidth 0.62 GB/s
Input SRAM bandwidth 11.73 MB/batch
Weight SRAM bandwidth 6.06 MB/batch
Output SRAM bandwidth 6.97 MB/batch
Total SRAM bandwidth 24.94 MB/batch
Total SRAM bandwidth per input 24.94 MB/inference (batch size 1)
Average Off-chip Flash bandwidth 0.08 GB/s
Input Off-chip Flash bandwidth 0.00 MB/batch
Weight Off-chip Flash bandwidth 3.16 MB/batch
Output Off-chip Flash bandwidth 0.00 MB/batch
Total Off-chip Flash bandwidth 3.17 MB/batch
Total Off-chip Flash bandwidth per input 3.17 MB/inference (batch size 1)
Neural network macs 304450944 MACs/batch
Network Tops/s 0.02 Tops/s
NPU cycles 9635618 cycles/batch
SRAM Access cycles 5013209 cycles/batch
DRAM Access cycles 0 cycles/batch
On-chip Flash Access cycles 0 cycles/batch
Off-chip Flash Access cycles 4773911 cycles/batch
Total cycles 16074423 cycles/batch
Batch Inference time 40.19 ms, 24.88 inferences/s (batch size 1)
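Network summaries like the two above are printed by Arm's Vela compiler when it compiles a quantized .tflite model for the Ethos-U NPU. A minimal sketch of invoking it from Python via its command-line interface; the model file name, output directory, and the `vela.ini` system-configuration file (assumed here to define the Ethos_U55_Alif_HP system) are illustrative assumptions.

```python
import subprocess

# Compile a quantized TFLite model for Ethos-U55 (256 MACs) and print the
# performance estimate ("Network summary") to stdout.
subprocess.run(
    [
        "vela",
        "mobilenet_v2_1.0_224_INT8.tflite",
        "--accelerator-config", "ethos-u55-256",
        "--config", "vela.ini",
        "--system-config", "Ethos_U55_Alif_HP",
        "--memory-mode", "Shared_Sram",
        "--output-dir", "vela_output",
    ],
    check=True,
)
```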
Network | Type | Score (Accuracy) | SRAM, MB | Flash, MB | Inference time, ms |
---|---|---|---|---|---|
CNN Large INT8 * | INT8 | 0.931 | 0.17 | 0.45 | 4.64 |
CNN Medium INT8 * | INT8 | 0.911 | 0.16 | 0.16 | 1.69 |
CNN Small INT8 * | INT8 | 0.912 | 0.05 | 0.07 | 0.76 |
DNN Large INT8 * | INT8 | 0.863 | 0.0009 | 0.46 | 5.34 |
DNN Medium INT8 * | INT8 | 0.844 | 0.0005 | 0.19 | 1.89 |
DNN Small INT8 * | INT8 | 0.825 | 0.0003 | 0.09 | 1.44 |
DS-CNN Clustered INT8 * | INT8 | 0.940 | 0.11 | 0.45 | 4.37 |
DS-CNN Large INT8 * | INT8 | 0.946 | 0.12 | 0.52 | 5.05 |
DS-CNN Medium INT8 * | INT8 | 0.946 | 0.12 | 0.52 | 5.05 |
DS-CNN Small INT8 * | INT8 | 0.935 | 0.02 | 0.04 | 0.36 |
MicroNet Large INT8 | INT8 | 0.965 | 0.21 | 0.55 | 5.94 |
MicroNet Medium INT8 | INT8 | 0.958 | 0.1 | 0.15 | 1.68 |
MicroNet Small INT8 | INT8 | 0.953 | 0.07 | 0.09 | 1.01 |
Network | Type | Score (PESQ) | SRAM, MB | Flash, MB | Inference time, ms |
---|---|---|---|---|---|
RNNoise INT8 * | INT8 | 2.945 | 0.0009 | 0.12 | 1.45 |
Network | Type | Score (LER) | SRAM, MB | Flash, MB | Inference time, ms |
---|---|---|---|---|---|
Wav2letter INT8 | INT8 | 0.0877 | 1.77 | 21.3 | 217.95 |
Wav2letter Pruned INT8 * | INT8 | 0.0783 | 1.25 | 13.56 | 139.7 |
Tiny Wav2letter INT8 * | INT8 | 0.0348 | 0.71 | 3.67 | 37.46 |
Tiny Wav2letter Pruned INT8 * | INT8 | 0.0283 | 0.5 | 2.34 | 23.75 |
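LER (letter error rate) is the Levenshtein edit distance between the predicted and reference transcriptions, normalized by the reference length. The following is a minimal illustrative sketch, not the benchmark's own scoring code:

```python
def letter_error_rate(prediction: str, reference: str) -> float:
    """Levenshtein distance between the two strings, divided by len(reference)."""
    prev = list(range(len(reference) + 1))
    for i, p in enumerate(prediction, start=1):
        curr = [i]
        for j, r in enumerate(reference, start=1):
            cost = 0 if p == r else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(reference)

# e.g. letter_error_rate("helo world", "hello world") ~= 0.09
```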
Network | Type | Score (Accuracy) | SRAM, MB | Flash, MB | Inference time, ms |
---|---|---|---|---|---|
MicroNet VWW-2 INT8 | INT8 | 0.768 | 0.03 | 0.18 | 1.51 |
MicroNet VWW-3 INT8 | INT8 | 0.855 | 0.13 | 0.42 | 4.65 |
MicroNet VWW-4 INT8 | INT8 | 0.822 | 0.12 | 0.37 | 4.15 |
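All of the scores and memory figures above come from fully quantized .tflite models; on a host machine the same models can be exercised with the TensorFlow Lite interpreter. A minimal sketch, where the model file name is assumed and `sample` is a placeholder for a properly preprocessed int8 input:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="micronet_vww2_int8.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Placeholder input: an int8 array matching the model's input shape, which in a
# real evaluation would be scaled with the input tensor's quantization parameters.
sample = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print(scores)
```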
EDGE AI FOUNDATION, formerly known as tinyML Foundation, is a global non-profit community dedicated to innovation, collaboration, advocacy, and education for efficient, affordable, and scalable edge AI technologies.
EDGE AI FOUNDATION aims to bring together researchers, developers, business leaders, and policymakers to tackle the big challenges in AI, from low-power machine learning to advanced edge computing.
Electronics People Since 1965
Brilliant Electro Systems Pvt. Ltd. is a leading distributor of specialized electronic components in India. The company supplies components to EMS, OEM, ODM, and automotive manufacturers, as well as to the industrial, metering, IoT, lighting and power, computing, and telecom industries.
Alif Semiconductor is the industry-leading supplier of the next-generation Ensemble family of microcontrollers and fusion processors. The Ensemble family scales from single core MCUs to a new class of multi-core devices, fusion processors, that blend up to two Cortex-M55 MCU cores, up to two Cortex-A32 microprocessor cores capable of running high-level operating systems, and up to two Ethos-U55 microNPUs for AI/ML acceleration.
JP Electronic Devices (JPED) specializes in the marketing and distribution of electronic components from leading semiconductor manufacturers.
Established in 2003, JPED has over the years earned an enviable position as a distributor of choice for major OEMs, ODMs, and EMS companies.
Integrity and a commitment to quality and service have helped JPED reach this position.
JPED has become an integral part of the supply chain of several eminent customers in segments such as Consumer Electronics, Industrial Electronics, Lighting, Entertainment, Telecom & IT, Medical, and Automotive.
We develop compilers, simulators, toolchains, and IDEs. We utilise LLVM, LLDB, GDB, and Eclipse.
Semiconductors – LLVM Compiler
We simplify and optimize neural network inference for Arm Cortex-M55 and Ethos-U55, as well as Xilinx Kria-based edge ML solutions. We leverage TVM, uTVM, TF Lite, Vela, Vitis AI, and LLVM. We accomplish this via ML compiler tuning and fine-grained manual optimization at the assembly level.
Network-based Transfer Learning
LeWorks is a Swedish company based in China. It provides services for fast-growing companies that need flexible factory rental contracts. LeWorks offers sourcing services, helping companies find and buy quality parts and products from China, and helps companies develop their products, prototypes, and design for manufacturing.
Arm Ltd. is a British semiconductor and software design company based in Cambridge, England. Its primary business is in the design of ARM processors (CPUs).
Arm technology is at the heart of a computing and data revolution. The Arm architecture is the keystone of the world's largest compute ecosystem. Together with its technology partners, Arm is at the forefront of designing, securing, and managing artificial-intelligence-enhanced computing.
Andes Technology (TWSE: 6533) was established in Hsinchu Science Park in 2005. Sixteen years in business and a founding Premier member of RISC-V International, Andes is a leading supplier of high-performance/low-power 32/64-bit embedded processor IP solutions and a main force in taking RISC-V mainstream. Andes' fifth-generation AndeStar™ architecture (V5) adopted RISC-V as its base. Its V5 RISC-V CPU families range from tiny 32-bit cores to advanced 64-bit cores with DSP, FPU, Vector, Linux, superscalar, and/or multicore capabilities. The annual volume of Andes-Embedded™ SoCs has exceeded 2 billion since 2020 and continues to rise; by the end of 2020, the cumulative volume had surpassed 7 billion.
All-Hardware is a web-based service that allows chipmakers to give customers around the world remote access to their development boards, and instructors to conduct hands-on workshops online.
We develop board support packages, drivers, and other system software. We leverage Linux, ERIKA, Zephyr, FreeRTOS, U-Boot, Yocto, OpenEmbedded, and more.
Chilicon Power – Obsolete Wi-Fi module Replacement
Smart Control Case for TWS Earbuds