Tensor Computing Processor BM1880

The BM1880 TPU delivers 1TOPS of computing power at INT8, and up to 2TOPS with Winograd convolution acceleration.
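
The "up to 2TOPS" figure is consistent with the multiplication savings of a Winograd transform; the tile size below is an assumption (the page does not state it), but the arithmetic sketches where a roughly 2x gain can come from: a 2.25x reduction in multiplies leaves about 2x effective throughput once transform overheads are accounted for.

```python
# Back-of-the-envelope check (assumed tile size F(2x2, 3x3); not stated by the source).
direct_muls = (2 * 2) * (3 * 3)   # direct 3x3 convolution: 36 multiplies per 2x2 output tile
winograd_muls = (2 + 3 - 1) ** 2  # Winograd F(m, r) needs (m + r - 1)^2 multiplies per tile: 16
print(f"multiplication reduction: {direct_muls / winograd_muls:.2f}x")  # 2.25x
```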

Peak performance: 1TOPS@INT8

Video decoding: H.264 decoding & MJPEG codec

Expandability: Multi-chip parallel operation

Processor: Dual-core Arm Cortex-A53 @1.5GHz & single-core RISC-V @1GHz

Tensor processor: Optimized deep learning architecture

Flexible software design

Ultra-low power architecture

High efficiency in a compact package

Wide range of applications and scenarios

The BM1880 edge AI chip can be used in artificial intelligence, machine vision, and high-performance computing environments.

Easy to use, with efficient full-stack support

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, inference deployment tools, and related software. It covers model optimization, efficient runtime support, and the other capabilities required for the neural network inference stage, offering an easy-to-use and efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on BITMAIN's AI hardware products and build intelligent applications.
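
To make the compile-then-deploy workflow described above concrete, here is a minimal sketch of a typical edge-inference flow. Every name in it (compile_model, run_on_tpu, the .bmodel suffix) is a hypothetical placeholder for illustration, not the actual BMNNSDK API, which is documented in the SDK itself.

```python
# Hypothetical sketch of an offline-compile / on-device-inference flow.
# NOTE: all names here are illustrative placeholders, NOT the real BMNNSDK API.

import numpy as np


def compile_model(trained_model_path: str, target: str = "bm1880") -> str:
    """Offline step: the SDK's compiler would quantize the network to INT8,
    fuse layers, and emit a binary optimized for the target TPU."""
    compiled_path = trained_model_path + ".bmodel"  # placeholder artifact name
    # ... model optimization would happen here in the real toolchain ...
    return compiled_path


def run_on_tpu(compiled_path: str, image: np.ndarray) -> np.ndarray:
    """Runtime step: the driver/runtime loads the compiled model, copies the
    input to device memory, launches the TPU, and reads back the result."""
    # ... placeholder: return dummy classification scores ...
    return np.zeros(1000, dtype=np.float32)


if __name__ == "__main__":
    bmodel = compile_model("resnet50_int8.prototxt")                 # offline, on the host
    scores = run_on_tpu(bmodel, np.zeros((224, 224, 3), np.uint8))   # online, on the device
    print("predicted class:", int(scores.argmax()))
```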

Supports mainstream programming frameworks
