Tensor Computing Processor BM1680

SOPHON BM1680 is SOPHGO's first tensor processor for deep learning, suitable for both training and inference of neural network models such as CNNs, RNNs, and DNNs.

Peak performance: 2 TFLOPS

Data precision: FP32

On-chip SRAM capacity: 32 MBytes

Average power consumption: 25 W
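A quick back-of-the-envelope reading of the headline figures above (a sketch; only the 2 TFLOPS peak and 25 W average power are from the spec sheet, and peak throughput is rarely sustained in practice):

```python
# Rough efficiency figures derived from the BM1680 spec sheet.
peak_tflops = 2.0     # peak FP32 performance, TFLOPS
avg_power_w = 25.0    # average power consumption, W

# Performance per watt at FP32: 2e12 FLOPS / 25 W = 80 GFLOPS/W
gflops_per_watt = peak_tflops * 1000.0 / avg_power_w
print(f"{gflops_per_watt:.0f} GFLOPS/W at FP32 peak")
```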

Tensor Computing acceleration

The architecture is optimized for deep learning

The product form is flexible and can be customized

Optimized instruction set and software stack

Wide range of applications and scenarios

The BM1680 edge computing AI chip can be used in artificial intelligence, machine vision, and high-performance computing environments.

Easy-to-use, Convenient and Efficient

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, a compiler, and inference deployment tools. It covers model optimization, efficient runtime support, and the other capabilities required for neural network inference, offering an easy-to-use full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on SOPHGO's AI hardware products and build intelligent applications.
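The compile-then-deploy flow described above can be sketched as follows. This is an illustrative mock only: the function and class names below are hypothetical placeholders, not the real BMNNSDK API, and the "inference" result is a dummy computation standing in for actual model execution on the chip.

```python
# Hypothetical sketch of the BMNNSDK workflow: a compiler lowers a
# framework model to a device-executable artifact, and a runtime then
# dispatches inference to the accelerator. All names are placeholders.
from dataclasses import dataclass

@dataclass
class CompiledModel:
    """Stand-in for the device-executable model the compiler would emit."""
    source: str
    input_size: int

def compile_model(framework_model: str, input_size: int) -> CompiledModel:
    # Placeholder for model optimization + compilation to the chip's ISA.
    return CompiledModel(source=framework_model, input_size=input_size)

def run_inference(model: CompiledModel, batch: list[list[float]]) -> list[float]:
    # Placeholder for the runtime call that would execute on the BM1680;
    # here we just check shapes and return a dummy per-sample result.
    assert all(len(sample) == model.input_size for sample in batch)
    return [sum(sample) for sample in batch]

model = compile_model("resnet50_framework_model", input_size=4)
outputs = run_inference(model, [[1, 2, 3, 4], [5, 6, 7, 8]])
print(outputs)  # [10, 26]
```

The point of the split is the one the text makes: optimization happens once at compile time, so deployment on the device reduces to loading the compiled artifact and feeding it batches.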

Supports mainstream programming frameworks
