AI Accelerator Card SC5+

The SOPHON SC5+ adopts a standard half-height, half-length design and carries three BM1684 high-performance computing chips. It delivers up to 105.6 TOPS of INT8 compute (with Winograd enabled) and 6.6 TFLOPS of FP32 compute, supporting high-precision computing.
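The headline figures follow directly from the per-chip specifications of the BM1684 listed in the table below (2.2 TFLOPS FP32; 35.2 TOPS INT8 with Winograd enabled), scaled by the three chips on the card. A quick sanity check:

```python
# Per-chip BM1684 figures (from the specification table on this page)
FP32_TFLOPS_PER_CHIP = 2.2
INT8_TOPS_PER_CHIP_WINOGRAD = 35.2
CHIPS_PER_CARD = 3  # the SC5+ carries three BM1684 chips

card_fp32 = CHIPS_PER_CARD * FP32_TFLOPS_PER_CHIP         # -> 6.6 TFLOPS
card_int8 = CHIPS_PER_CARD * INT8_TOPS_PER_CHIP_WINOGRAD  # -> 105.6 TOPS

print(f"FP32: {card_fp32:.1f} TFLOPS, INT8 (Winograd): {card_int8:.1f} TOPS")
```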

Product Highlights

Chip utilization exceeds 70%, delivering a higher share of usable AI performance

Third-generation mass-produced product, with higher maturity and stability

2880fps HD video hardware decoding capability (more than 100 channels of 1080P @ 25fps)

Memory capacity up to 36GB (standard) / 48GB (maximum), lifting memory limits on applications

96MB of SRAM cache, greatly accelerating small-model computation (over 50% faster than comparable products)

Video and image decoding at resolutions up to 8K, suitable for all kinds of ultra-high-definition network cameras

Adapts to a wide range of x86 servers and domestic CPU platforms such as Phytium and Shenwei

Adapts to a wide range of operating systems (CentOS / Ubuntu / Debian), including the domestic Kylin and Deepin
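The decode-channel claim above is simple arithmetic: 2880fps of 1080P decode throughput divided across typical 25fps camera streams.

```python
# Hardware decode throughput of the SC5+ at 1080P (from the highlights above)
DECODE_FPS_1080P = 2880
STREAM_FPS = 25  # a typical 1080P network-camera stream

channels = DECODE_FPS_1080P // STREAM_FPS  # whole streams that fit
print(f"{channels} concurrent 1080P@25fps channels")  # 115, i.e. "more than 100"
```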

Wide Range of Applications and Scenarios

The SC5+ can be installed in a standard server and used in a wide range of scenarios, including face recognition, video structuring, video transcoding, security monitoring, machine vision, and high-performance computing, to accelerate CNN, RNN, DNN, and other neural network models.

Easy to Use, Convenient, and Efficient

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, and inference deployment tools. It covers model optimization, efficient runtime support, and the other capabilities required for neural network inference, offering an easy-to-use, full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, allowing users to quickly deploy deep learning algorithms on SOPHGO's AI hardware products and build intelligent applications.

Supports mainstream programming frameworks


Performance Parameters

| | AI computing accelerator card | AI Developer Portfolio | AI computing accelerator card | AI computing accelerator card (SC5+) |
| --- | --- | --- | --- | --- |
| TPU core architecture | SOPHON | SOPHON | SOPHON | SOPHON |
| NPU core count | 64 | - | 64 | 192 |
| **AI performance** | | | | |
| FP32 (FLOPS) | 2.2T | - | 2.2T | 6.6T |
| INT8 (OPS), Winograd off | 17.6T | - | 17.6T | 52.8T |
| INT8 (OPS), Winograd on | 35.2T | - | 35.2T | 105.6T |
| CPU | 8-core ARM A53 @ 2.3GHz | - | 8-core ARM A53 @ 2.3GHz | 3× 8-core ARM A53 @ 2.3GHz |
| **VPU** | | | | |
| Video decoding capability | H.264: 1080P @ 960fps; H.265: 1080P @ 960fps | - | H.264: 1080P @ 960fps; H.265: 1080P @ 960fps | H.264: 1080P @ 2880fps; H.265: 1080P @ 2880fps |
| Video decoding resolution | CIF / D1 / 720P / 1080P / 4K (3840×2160) / 8K (8192×4096) | - | CIF / D1 / 720P / 1080P / 4K (3840×2160) / 8K (8192×4096) | CIF / D1 / 720P / 1080P / 4K (3840×2160) / 8K (8192×4096) |
| Video encoding capability | H.264: 1080P @ 50fps; H.265: 1080P @ 50fps | - | H.264: 1080P @ 50fps; H.265: 1080P @ 50fps | H.264: 1080P @ 150fps; H.265: 1080P @ 150fps |
| Video encoding resolution | CIF / D1 / 720P / 1080P / 4K (3840×2160) | - | CIF / D1 / 720P / 1080P / 4K (3840×2160) | CIF / D1 / 720P / 1080P / 4K (3840×2160) |
| Video transcoding capability (1080P to CIF) | max. 18 channels | - | max. 18 channels | max. 54 channels |
| **JPU** | | | | |
| JPEG image decoding capability | 480 images/s @ 1080P | - | 480 images/s @ 1080P | 1440 images/s @ 1080P |
| Maximum resolution (pixels) | 32768×32768 | - | 32768×32768 | 32768×32768 |
| **System interface** | | | | |
| Data link | PCIe ×8 (EP); PCIe ×8 (RC) | PCIe ×2 | PCIe ×16 | PCIe ×8 |
| Operating mode | EP + RC | SoC extension | EP | EP |
| Physical / power interface | PCIe ×16 | 12V DC jack | PCIe ×16 | PCIe ×16 |
| **RAM** | | | | |
| Standard configuration | 12GB | - | 12GB | 36GB |
| Maximum capacity | 16GB | - | 16GB | 48GB |
| Power consumption | 30W max. | no load: 6W; with load: 30W | 30W max. | 75W max. |
| Heat dissipation mode | active | - | active | passive |
| Working status display | N/A | 3× LED (power / hard disk / status) | 1× LED | 1× LED |
| **External I/O expansion** * | | | | |
| SD card | - | 1 | - | - |
| RESET button | - | 1 | - | - |
| RJ45 | - | 2× 1000Base-T | - | - |
| USB | - | 4 | - | - |
| SATA | - | 1 | - | - |
| 4G/LTE | - | 1 | - | - |
| micro USB | - | 1 | - | - |
| Working temperature | 0℃~55℃ | -10℃~55℃ | 0℃~55℃ | 0℃~55℃ |
| Length × height × width (incl. bracket) | 200×111.2×19.8mm | 206×28.5×59.5mm | 169.1×68.9×19mm | 169.1×68.9×19.5mm |

Common to all models:

Deep learning frameworks: Caffe / TensorFlow / PyTorch / MXNet / Darknet / Paddle

Operating system support: Ubuntu / CentOS / Debian

Compatibility: mainstream x86-architecture and ARM-architecture servers

Localization support: domestic CPU platforms such as Phytium, Shenwei, and Zhaoxin; domestic Linux operating systems such as Kylin and Deepin; the domestic AI framework Paddle Lite

* All external I/O expansion interfaces in the AI Developer Portfolio must be used together with SC5-IO.
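The ratios between columns of the table are internally consistent: enabling Winograd doubles INT8 throughput, and the three-chip SC5+ scales the single-chip decode and transcode figures by three. A quick cross-check using values taken from the table:

```python
# Single-chip column vs. SC5+ column, values copied from the table above
int8_winograd_off, int8_winograd_on = 17.6, 35.2    # TOPS, single chip
single_chip_decode, sc5plus_decode = 960, 2880      # 1080P decode, fps
single_chip_transcode, sc5plus_transcode = 18, 54   # 1080P-to-CIF channels

assert int8_winograd_on / int8_winograd_off == 2.0  # Winograd doubles INT8 OPS
assert sc5plus_decode / single_chip_decode == 3.0   # 3x chips -> 3x decode
assert sc5plus_transcode / single_chip_transcode == 3.0

print("table ratios check out: 2x from Winograd, 3x from three chips")
```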