AI Computing Module SM5

SOPHON SM5 is a high-performance AI computing module positioned for edge computing scenarios, capable of AI analysis on more than 16 channels of HD video.


Independent R&D, Powerful AI Performance

SOPHON SM5 is equipped with BM1684, the third-generation TPU chip independently developed by SOPHGO. It delivers up to 17.6 TOPS of INT8 computing power and can process more than 16 channels of HD video simultaneously. The module is the size of a credit card and offers rich I/O interfaces, so it can be easily integrated into edge or embedded devices. The toolchain is complete and easy to use, and the cost of migrating algorithms is low.

Chip BM1684

17.6 TOPS

32 Channels HD Video Hardware Decoding

High Computing Power and Low Power Consumption

With 17.6 TOPS of INT8 computing power, rising to 35.2 TOPS with Winograd convolution acceleration, SM5 far surpasses comparable products in the industry. Typical power consumption for 16-channel video stream analysis is below 16 W.
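A quick back-of-the-envelope check of the headline figures, using only the numbers quoted on this page (these are spec values, not measurements):

```python
# Sanity check of the SM5 headline figures (all numbers from this page's spec).
base_int8_tops = 17.6                # INT8 peak, Winograd off
winograd_tops = base_int8_tops * 2   # Winograd convolution acceleration doubles peak
typical_power_w = 16                 # typical power for 16-channel analysis (upper bound)
channels = 16

print(winograd_tops)                 # 35.2
print(typical_power_w / channels)    # 1.0 -> under ~1 W per analyzed video channel
```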

Ultra-high Video Decoding Capabilities

SM5 supports up to 32 channels of full-HD video decoding in H.264/H.265 format, enabling face detection or video structuring on more than 16 channels of HD video streams.
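The 32-channel figure follows from the decoder's aggregate throughput (960 fps @ 1080p, per the spec table) divided by the frame rate of each input stream; a 30 fps camera stream is a typical assumption, not stated on this page:

```python
# Per-stream decode budget: aggregate decoder throughput divided by the
# frame rate of each input stream. 960 fps @ 1080p is from the spec table;
# 30 fps per camera stream is a typical assumption, not stated here.
AGGREGATE_FPS_1080P = 960
STREAM_FPS = 30
max_streams = AGGREGATE_FPS_1080P // STREAM_FPS
print(max_streams)  # 32 full-HD channels
```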

Flexible Application

Supports PCIe endpoint (slave) mode and SoC host mode, as well as FP32 high-precision and INT8 low-precision inference

Complete Toolchain

Supports mainstream AI frameworks, including Caffe, TensorFlow, PyTorch, PaddlePaddle and MXNet

Wide Application and Scenarios

It is applied in visual-computing AI scenarios, including intelligent public security, smart parks, retail, electric power, robotics and UAVs.

Easy-to-use, Convenient and Efficient

The BMNNSDK (BITMAIN Neural Network SDK) one-stop toolkit provides a series of software tools, including the underlying driver environment, compiler and inference deployment tools. It covers model optimization, efficient runtime support and the other capabilities required for neural network inference, providing an easy-to-use, full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on SOPHGO's various AI hardware products to build intelligent applications.
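The one-stop flow described above can be sketched in two stages: an offline compile step that converts a trained framework model into a TPU executable, and a deploy step that runs it through the SDK's runtime. This is pseudocode; the tool and function names are illustrative, not BMNNSDK's exact CLI or API:

```text
# Stage 1 (offline, on a development host):
#   compile a trained model from a supported framework into a TPU model
compile(model = "detector.pt", target = "BM1684", precision = "INT8")
        -> detector_int8.bmodel

# Stage 2 (on the SM5 module):
#   load the compiled model with the runtime and run inference per frame
runtime.load("detector_int8.bmodel")
for frame in video_stream:
    results = runtime.infer(frame)   # TPU-accelerated inference
```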

Supports mainstream programming frameworks


Performance Parameters

AI chip: 1× BM1684

AI performance:

FP32: 2.2 TFLOPS

INT8 (Winograd off): 17.6 TOPS

INT8 (Winograd on): 35.2 TOPS

Memory configuration: 12 GB (standard)

CPU (SoC host mode): 8-core ARM Cortex-A53, 2.3 GHz main frequency

High-speed data interfaces:

PCIe EP interface: PCIe 3.0 x4 (connector interface)

PCIe RC interface: PCIe 3.0 x4

Ethernet: dual Gigabit Ethernet ports

Video decoding and encoding:

Decoding capability: 960 fps @ 1080p

Decoding formats: H.264 and H.265

Maximum decoding resolution: 4K; 8K (semi-real-time)

Encoding: 2 channels of 1080p @ 25 fps

Picture decoding and encoding: 480 images/s @ 1080p

Low-speed data interfaces: RS-485 / RS-232 / GPIO / SDIO / PWM / I2C / SPI, etc.

Connector: 144-pin connector

Power consumption: typical < 20 W; maximum 25 W

Heat dissipation: SM5-P with passive heatsink; SM5-A with active cooling fan

Dimensions (L × W × H):

87 × 65 × 8 mm (without heatsink)

92 × 70 × 28.5 mm (SM5-P, with passive heatsink)

92 × 70 × 28.5 mm (SM5-A, with active cooling fan)