AI Accelerator Card SC3

The SC3 accelerator card is built around the BM1682 chip, which delivers 3 TFLOPS of single-precision floating-point compute. Its real-world compute utilization is significantly higher than that of competing products. The SC3 is offered in two variants, with active or passive cooling, and can be deployed on cloud servers or edge industrial computers as needed. It is well suited to business scenarios with strict requirements on computing accuracy, such as industrial inspection, medical imaging, and dangerous-goods management.

Outstanding Performance Through Superior Design

The SOPHON SC3 is equipped with the BM1682, SOPHGO's second-generation tensor processor.

Each BM1682 chip contains 64 NPU processing units, and each NPU contains 32 EU arithmetic units. A single BM1682 chip delivers up to 3 TFLOPS of single-precision peak compute. The chip also integrates up to 16 MB of on-chip SRAM, which greatly reduces data movement during model computation, improving performance and lowering power consumption.
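As a back-of-the-envelope check on these figures (the clock frequency is not published here, and the assumption that each EU retires one fused multiply-add, i.e. 2 FLOPs, per cycle is ours, not the datasheet's), the 3 TFLOPS peak implies a clock of roughly 730 MHz:

```python
# Back-calculate the implied clock from the published peak figure.
# Assumption: each EU performs one FMA (2 FLOPs) per cycle; the actual
# BM1682 microarchitecture details are not published in this datasheet.
NPUS = 64                # NPU processing units per BM1682
EUS_PER_NPU = 32         # EU arithmetic units per NPU
FLOPS_PER_EU_CYCLE = 2   # one fused multiply-add per cycle (assumed)
PEAK_FLOPS = 3e12        # 3 TFLOPS single-precision peak

total_eus = NPUS * EUS_PER_NPU   # 2048 EUs in total
implied_clock_hz = PEAK_FLOPS / (total_eus * FLOPS_PER_EU_CYCLE)
print(f"{total_eus} EUs -> implied clock ~{implied_clock_hz / 1e6:.0f} MHz")
```

The point of the exercise is only to show how the per-chip peak relates to the unit counts; the real clock may differ if the EUs issue more or fewer operations per cycle.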

A high-performance deep-learning accelerator card with fully independent intellectual property rights

On-chip hardware decoding engine supports HD video stream decoding at up to 1080p @ 240 fps or 4K @ 60 fps

Rich toolchain supporting Caffe, TensorFlow, PyTorch, MXNet, and other deep-learning frameworks

Certified to CE, FCC, and other international standards

Supports a PCIe 3.0 interface, compatible with mainstream x86 servers, making the card easy to deploy and scale

Passive-cooling variant with a fanless design
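The PCIe 3.0 x8 host interface bounds how fast data can move between host and card. A quick sketch of the theoretical ceiling, assuming the standard PCIe 3.0 line rate of 8 GT/s per lane with 128b/130b encoding (real-world throughput is lower due to protocol overhead):

```python
# Theoretical peak bandwidth of a PCIe 3.0 x8 link.
GT_PER_S = 8.0          # PCIe 3.0 line rate per lane (gigatransfers/s)
ENCODING = 128 / 130    # 128b/130b encoding efficiency
LANES = 8               # x8 link width

per_lane_gbytes = GT_PER_S * ENCODING / 8   # 8 bits per byte
link_gbytes = per_lane_gbytes * LANES
print(f"~{link_gbytes:.2f} GB/s theoretical peak")  # ~7.88 GB/s
```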

Wide Application and Scenarios

The Sophon SC3 deep-learning accelerator card can be used in a wide range of artificial-intelligence, machine-vision, and high-performance-computing environments. It supports facial feature detection, extraction, tracking, recognition, and comparison, as well as video structured analysis applications such as image search and trajectory tracking.
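For video-structuring deployments, the decoder budget of 1080p @ 240 fps translates into a number of concurrent camera streams. This is illustrative arithmetic only: the 25 fps camera rate is an assumption, and actual channel counts depend on codec, resolution, and stream settings.

```python
# How many live 1080p camera streams fit in the hardware decoder's budget?
DECODER_BUDGET_FPS = 240   # total 1080p frames/s the decoder sustains
STREAM_FPS = 25            # typical surveillance camera frame rate (assumed)

channels = DECODER_BUDGET_FPS // STREAM_FPS
print(f"~{channels} concurrent 1080p@{STREAM_FPS}fps streams")  # ~9 streams
```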

Easy-to-use, Convenient and Efficient

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit providing the low-level driver environment, compiler, and inference deployment tools. It covers model optimization, an efficient runtime, and the other capabilities required for neural-network inference, offering an easy-to-use, full-stack solution for developing and deploying deep-learning applications. BMNNSDK minimizes algorithm and software development cycles and cost, so users can quickly deploy deep-learning algorithms on SOPHGO's AI hardware products and build intelligent applications.

Supports mainstream programming frameworks


Performance Parameters

TPU Architecture: SOPHGO BM1682, second-generation tensor processor
NPU Core Number: 64 cores
Core Frequency:
Single-Precision Performance (FP32): 3 TFLOPS
System Interface: PCI Express 3.0 x8
DDR Memory:
On-Chip SRAM: 16 MB
Thermal Solution: Active or passive (passive variant is fanless)
Video Decoder Formats: H.264 / H.265 (HEVC) / MPEG-1 / MPEG-2 / MPEG-4 / DivX / XviD / H.263 / VC-1 / Sorenson / VP8 / AVS
Video Decoder Performance: 1080p @ 240 fps or 4K @ 60 fps
DL Frameworks: Caffe / TensorFlow / PyTorch / MXNet
OS Support: Ubuntu 16.04 / CentOS 7.4 / Debian 9.4
Operating Environment Temperature:
Operating Environment Humidity:
Storage Temperature:
Storage Humidity:
Form Factor (Length x Height x Thickness):