AI computing module SM5

SOPHON SM5 is an AI computing module with outstanding performance. It is positioned for edge computing scenarios with high performance requirements and delivers AI analysis of more than 16 channels of FHD video.

Independent R&D, powerful AI performance

SOPHGO's SOPHON SM5 is equipped with BM1684, the third-generation TPU chip independently developed by SOPHGO. Its INT8 performance reaches 17.6 TOPS, and it can process more than 16 FHD video streams simultaneously. The module is the size of a credit card and provides rich I/O interfaces, making it easy to integrate into edge and embedded devices. Its tool chain is complete and easy to use, keeping algorithm migration costs low.

Chip BM1684


32-channel HD video hardware decoding

High performance and low power consumption

17.6 TOPS INT8 performance, up to 35.2 TOPS with Winograd convolution acceleration, far superior to similar products in the industry. Typical power consumption for 16-channel video stream analysis is below 16 W.
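A quick sanity check of the headline numbers (a sketch using only figures quoted on this page; the TOPS-per-watt value is a derived estimate, not a vendor specification):

```python
# Arithmetic check of the performance figures quoted above.
# All inputs come from this page; TOPS/W is derived, not official.

base_int8_tops = 17.6                 # INT8 throughput, Winograd off
winograd_tops = base_int8_tops * 2    # Winograd acceleration doubles it
typical_power_w = 16.0                # typical power, 16-channel analysis

print(winograd_tops)                               # 35.2
print(round(base_int8_tops / typical_power_w, 2))  # 1.1 TOPS/W (estimate)
```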

Ultra-high video decoding channels

Supports up to 32 channels of full-HD video decoding in H.264/H.265 format, enabling face detection or video structuring analysis on more than 16 FHD video streams.
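The channel count follows from the aggregate decode throughput listed in the spec table (960 fps @ 1080p); a minimal sketch of the arithmetic:

```python
# Derive full-HD channel capacity from aggregate decode throughput.
# 960 fps @ 1080p is the figure given in the spec table on this page.

aggregate_fps = 960     # total 1080p decode throughput
stream_fps = 30         # a common full-HD stream frame rate

channels = aggregate_fps // stream_fps
print(channels)         # 32 concurrent 30 fps streams
```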

Flexible use

Supports PCIe slave (EP) mode and SoC host mode, with both FP32 high-precision and INT8 low-precision inference.

Complete tool chain

Supports the mainstream AI frameworks Caffe, TensorFlow, PyTorch, PaddlePaddle and MXNet.

Wide application and rich scenes

It is applied to intelligent public security, smart parks, smart retail, smart power, industrial robots, UAVs and other visual AI computing scenarios.

Easy and convenient to use, with an efficient full stack

The one-stop BMNNSDK (BITMAIN Neural Network SDK) toolkit provides the underlying driver environment, compiler, inference deployment tools and other software. It covers model optimization, efficient runtime support and the other capabilities required for the neural network inference stage, providing an easy-to-use, efficient full-stack solution for deep learning application development and deployment. BMNNSDK minimizes algorithm and software development cycles and cost, so users can quickly deploy deep learning algorithms on SOPHGO's AI hardware products to enable intelligent applications.

Supports mainstream programming frameworks



AI chip

1× BM1684

AI performance



INT8 (TOPS), Winograd OFF

17.6

INT8 (TOPS), Winograd ON

35.2


Memory configuration

Standard configuration


CPU capacity

CPU (SoC host mode)

8-core ARM Cortex-A53, 2.3 GHz

High speed data interface

PCIE EP interface

PCIe 3.0 ×4

(via connector interface)

PCIE RC interface

PCIe 3.0 ×4

Ethernet ports

Dual Gigabit Ethernet ports

Video decoding and encoding

Video decoding capability

960 fps @ 1080p

Video decoding format

H.264 and H.265

Maximum decoding resolution

Supports 4K and 8K (8K semi-real-time)

Video coding

2 channels of 1080p @ 25 fps

Picture decoding and encoding performance

480 pictures/sec @ 1080p

Low speed data interface

RS485 / RS232 / GPIO / SDIO / PWM / I2C / SPI etc.


144-pin connector

Power consumption

Typical power consumption: <20 W

Maximum power consumption: 25 W

Heat dissipation mode

SM5-P: passive heatsink

SM5-A: active cooling fan

Dimensions (L × W × H)

87 × 65 × 8 mm (without heatsink)

92 × 70 × 20.1 mm (SM5-P, with passive heatsink)

92 × 70 × 20.1 mm (SM5-A, with active cooling fan)