Tensor Computing Processor BM1684

SOPHON BM1684 is SOPHGO's third-generation tensor processor for deep learning, delivering roughly six times the performance of the previous generation.

Peak Performance

17.6 TOPS INT8
2.2 TFLOPS FP32

Video Decoding

32-channel HD hardware video decoding
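For a sense of scale, the aggregate throughput of 32 decode channels can be estimated with a quick back-of-envelope calculation. Note the per-channel frame rate is an assumption on my part ("HD" taken as 1080p streams at 25 fps, which the source does not specify):

```python
channels = 32          # hardware decode channels (from the spec above)
fps_per_channel = 25   # assumed 1080p25 streams (not stated in the source)

aggregate_fps = channels * fps_per_channel
print(aggregate_fps)   # 800 frames per second across all channels
```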

On-chip SRAM Capacity

32 MB

Arithmetic Units

1024

Supports INT8 and FP32 precision, greatly improving AI inference performance

Integrates a high-performance ARM core, supporting secondary development

Integrates video and image encoding and decoding capabilities

Supports PCIe and Ethernet interfaces

Supports TensorFlow, Caffe, and other mainstream deep-learning frameworks
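The two peak-performance figures listed above imply a fixed throughput ratio between the two supported precisions; a quick check of the spec numbers:

```python
int8_tops = 17.6    # peak INT8 throughput in TOPS (from the spec)
fp32_tflops = 2.2   # peak FP32 throughput in TFLOPS (from the spec)

ratio = int8_tops / fp32_tflops
print(ratio)  # roughly 8x: quantizing to INT8 raises peak throughput eightfold
```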

Wide Application and Scenarios

As an edge-computing AI chip, the BM1684 can be used in artificial intelligence, machine vision, and high-performance computing environments.

Easy-to-use, Convenient and Efficient

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit comprising the underlying driver environment, a compiler, and inference deployment tools. It covers model optimization, efficient runtime support, and the other capabilities required for neural network inference, providing a full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, letting users quickly deploy deep learning algorithms on SOPHGO's AI hardware products and build intelligent applications.
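The workflow described above typically has two steps: compile a trained model into a device model with the SDK's offline compiler, then execute it through the runtime on the chip. A minimal sketch of the compile step for a TensorFlow model follows; the module name `bmnett` matches BMNNSDK's TensorFlow compiler, but the exact flag spellings, paths, tensor names, and shapes below are illustrative assumptions that vary by SDK version, so consult the BMNNSDK documentation for your release:

```shell
# Compile a frozen TensorFlow graph into a bmodel targeting the BM1684.
# Paths, tensor names, and shapes are illustrative placeholders;
# flag spellings differ between BMNNSDK versions.
python3 -m bmnett \
    --model ./frozen_model.pb \
    --input_names "input" \
    --shapes "[1,3,224,224]" \
    --target BM1684 \
    --outdir ./compiled_bmodel
```

The resulting bmodel would then be loaded by the SDK's runtime for inference on the device.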

Supports mainstream programming frameworks
