AI computing accelerator card SC5+

The Sophon SC5+ adopts a standard half-height, half-length design and carries three BM1684 high-performance computing chips, providing up to 105.6T of INT8 computing power (Winograd enabled) and 6.6T of FP32 computing power for high-precision workloads.
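As a quick sanity check, the headline figures above are three times the single-chip (BM1684) values listed in the specification table further down:

```python
# Sanity check: SC5+ headline compute = 3 BM1684 chips x per-chip figures
# (per-chip values taken from the single-chip columns of the specification table).
chips = 3
per_chip_int8_winograd_tops = 35.2   # INT8 OPS, Winograd enabled
per_chip_fp32_tflops = 2.2           # FP32 FLOPS

print(round(chips * per_chip_int8_winograd_tops, 1))  # 105.6 (T INT8 OPS)
print(round(chips * per_chip_fp32_tflops, 1))         # 6.6 (T FP32 FLOPS)
```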

Chip utilization exceeds 70%, so a larger share of the rated computing power is realized in practice

Third generation of mass-produced product, with greater maturity and stability

2880 fps HD video hardware decoding capability (more than 100 channels of 1080P @ 25 fps; see the arithmetic sketch after this list)

Memory capacity of 36GB as standard and up to 48GB, so applications are not limited by memory

96MB of SRAM cache greatly accelerates small-model computation (more than 50% faster than comparable products)

Video and image decoding at resolutions up to 8K and above, suitable for all kinds of ultra-high-definition network cameras

Compatible with a wide range of x86 servers as well as domestic CPU platforms such as Feiteng and Shenwei

Supports a range of operating systems (CentOS / Ubuntu / Debian), including domestic distributions such as Kylin and Deepin
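A minimal arithmetic sketch for the decoding claim above (the per-chip H.264 rate comes from the specification table):

```python
# 2880 fps of 1080P hardware decode spread across 25 fps streams,
# and the same total expressed per BM1684 chip.
total_decode_fps = 2880
stream_fps = 25
chips = 3

print(total_decode_fps / stream_fps)  # 115.2 -> "more than 100 channels" of 1080P@25fps
print(total_decode_fps / chips)       # 960.0 fps per chip (matches the H.264 table entry)
```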

Wide range of applications and scenarios

The SC5+ can be installed in a standard server and used for face recognition, video structuring, video transcoding, security monitoring, machine vision, and high-performance computing, accelerating a wide variety of neural network models such as CNNs, RNNs, and DNNs.

Easy and convenient to use, efficient across the full stack

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, inference deployment tools, and other software components. It covers the model optimization and efficient runtime support required at the neural network inference stage, offering an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, letting users quickly deploy deep learning algorithms on the full range of Sophon AI hardware products and bring intelligent applications online.
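As an illustration only, here is a minimal inference sketch using the SAIL Python wrapper distributed with BMNNSDK. The module path sophon.sail, the Engine API, and the resnet50.bmodel file name are assumptions based on publicly available sample code; check them against the SDK manual for your release.

```python
# Minimal inference sketch (assumed API, modeled on public BMNNSDK/SAIL samples):
# load a compiled .bmodel and run one batch on TPU device 0.
import numpy as np
import sophon.sail as sail  # Python wrapper shipped with BMNNSDK (assumed module path)

engine = sail.Engine("resnet50.bmodel", 0, sail.IOMode.SYSIO)  # bmodel, device id, I/O mode
graph_name = engine.get_graph_names()[0]            # a bmodel may contain several graphs
input_name = engine.get_input_names(graph_name)[0]

# Dummy NCHW batch; in practice this is a preprocessed image batch.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = engine.process(graph_name, {input_name: batch})  # dict: output name -> ndarray
print({name: arr.shape for name, arr in outputs.items()})
```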

Supports mainstream programming frameworks
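On the framework side, BMNNSDK ships offline compilers that convert trained models into .bmodel files for the runtime sketched above. The example below assumes the TensorFlow front end (bmnett); the function name, parameters, and file paths follow public SDK samples and should be treated as assumptions to verify against the documentation for your BMNNSDK version.

```python
# Offline compilation sketch (assumed API): convert a frozen TensorFlow graph
# into a BM1684 .bmodel using the bmnett front end from BMNNSDK.
import bmnett  # TensorFlow-to-bmodel compiler (assumed module name)

bmnett.compile(
    model="./frozen_resnet50.pb",       # hypothetical frozen-graph path
    outdir="./compiled_resnet50",       # output directory for the .bmodel
    target="BM1684",                    # the chip used on the SC5 family
    shapes=[[1, 224, 224, 3]],          # input shape(s)
    net_name="resnet50",
    input_names=["input"],              # hypothetical tensor names
    output_names=["prob"],
    opt=2,                              # optimization level
    dyn=False,                          # static shapes
    cmp=True,                           # compare TPU output against the CPU reference
)
```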


Specifications

Columns, left to right: [1] AI Developer Portfolio (accelerator card), [2] AI Developer Portfolio (SC5-IO expansion board), [3] AI computing accelerator card, [4] AI computing accelerator card SC5+. "-" indicates not applicable.

TPU core architecture: SOPHON | SOPHON | SOPHON | SOPHON
NPU core number: 64 | - | 64 | 192

AI computing power
FP32 (FLOPS): 2.2T | - | 2.2T | 6.6T
INT8 (OPS), Winograd off: 17.6T | - | 17.6T | 52.8T
INT8 (OPS), Winograd on: 35.2T | - | 35.2T | 105.6T

CPU: ARM 8-core A53 @ 2.3GHz | - | ARM 8-core A53 @ 2.3GHz | 3x ARM 8-core A53 @ 2.3GHz

VPU
Video decoding capability: H.264: 1080P@960fps, H.265: 1080P@1000fps | - | H.264: 1080P@960fps, H.265: 1080P@1000fps | H.264: 1080P@2880fps, H.265: 1080P@3000fps
Video decoding resolution: CIF / D1 / 720P / 1080P / 4K (3840×2160) / 8K (8192×4096) | - | CIF / D1 / 720P / 1080P / 4K (3840×2160) / 8K (8192×4096) | CIF / D1 / 720P / 1080P / 4K (3840×2160) / 8K (8192×4096)
Video encoding capability: H.264: 1080P@70fps, H.265: 1080P@60fps | - | H.264: 1080P@70fps, H.265: 1080P@60fps | H.264: 1080P@210fps, H.265: 1080P@180fps
Video encoding resolution: CIF / D1 / 720P / 1080P / 4K (3840×2160) | - | CIF / D1 / 720P / 1080P / 4K (3840×2160) | CIF / D1 / 720P / 1080P / 4K (3840×2160)
Video transcoding capability (1080P to CIF): max. 18 channels | - | max. 18 channels | max. 54 channels

JPU
JPEG image decoding capability: 800 images/s @ 1080P | - | 800 images/s @ 1080P | 2400 images/s @ 1080P
Maximum resolution (pixels): 32768×32768 | - | 32768×32768 | 32768×32768

System interface
Data link: EP PCIe x8 + RC PCIe x8 | PCIe x2 | PCIe x16 | PCIe x8
Operating mode: EP + RC | SoC extension | EP | EP
Physical / power interface: PCIe x16 | 12V DC jack | PCIe x16 | PCIe x16

RAM
Standard configuration: 12GB | - | 12GB | 36GB
Maximum capacity: 16GB | - | 16GB | 48GB

Power consumption: 30W max. | no load: 6W, with load: 30W | 30W max. | 75W max.
Heat dissipation mode: active | - | active | passive
Working status display: N/A | 3x LED (power / hard disk / status) | 1x LED | 1x LED

External I/O expansion * (AI Developer Portfolio only; not available on the accelerator cards)
SD-Card: 1
RESET button: 1
RJ45: 2x 1000Base-T
USB: 4
SATA: 1
4G/LTE: 1
micro USB: 1

Working temperature: 0℃~55℃ | -10℃~55℃ | 0℃~55℃ | 0℃~55℃

Deep learning frameworks: Caffe / TensorFlow / PyTorch / MXNet / Darknet / Paddle
Operating system support: Ubuntu / CentOS / Debian
Compatibility: compatible with mainstream x86 architecture and ARM architecture servers
Localization support: supports domestic CPU platforms such as Feiteng, Shenwei, and Zhaoxin; domestic Linux operating systems such as Kylin and Deepin; and the domestic AI framework Paddle Lite
Length x height x width (including bracket): 200 x 111.2 x 19.8mm | 206 x 28.5 x 59.5mm | 169.1 x 68.9 x 19mm | 169.1 x 68.9 x 19.5mm

* All external I/O expansion interfaces in the AI Developer Portfolio must be used together with SC5-IO.