Deep Learning Acceleration Card SC3

The SC3 accelerator card is built around the BM1682 chip, which delivers 3 TFLOPS of single-precision floating-point compute, with real-world compute utilization significantly higher than comparable products. The SC3 is offered in active-cooled and passive-cooled variants, so it can be deployed on demand in servers or industrial PCs, both in the cloud and at the edge. It is well suited to business scenarios with strict accuracy requirements, such as industrial, medical, and hazardous-goods management applications.

Outstanding Performance Through Superior Design

The Sophon SC3 is equipped with the BM1682, BITMAIN's second-generation tensor processor.

Each BM1682 chip contains 64 NPU processing units, and each NPU contains 32 EU arithmetic units, so a single BM1682 chip delivers up to 3 TFLOPS of peak single-precision compute. The chip also integrates 16 MB of on-chip SRAM, which greatly reduces data movement during model computation, improving performance and reducing power consumption. A rough derivation of the peak figure is sketched below.
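As a sanity check, the 3 TFLOPS figure follows from the core counts and the 750 MHz clock listed in the spec table, assuming each EU retires one fused multiply-add (two floating-point operations) per cycle; that per-EU throughput is our assumption, not a figure stated in the datasheet.

```python
# Peak FP32 estimate for one BM1682 chip from the published core counts and clock.
# Assumption (not stated in the datasheet): each EU retires one fused
# multiply-add, i.e. 2 floating-point operations, per clock cycle.
npus_per_chip = 64            # NPU processing units per chip
eus_per_npu = 32              # EU arithmetic units per NPU
clock_hz = 750e6              # 750 MHz core frequency (from the spec table)
flops_per_eu_per_cycle = 2    # assumed: one FMA = 2 FLOPs

peak_flops = npus_per_chip * eus_per_npu * clock_hz * flops_per_eu_per_cycle
print(f"{peak_flops / 1e12:.2f} TFLOPS")  # ~3.07 TFLOPS, consistent with the 3 TFLOPS rating
```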

High-performance deep learning accelerator card with fully independent intellectual property

On-chip hardware decoding engine supporting HD video stream decoding from 1080p @ 240 fps up to 4K @ 60 fps

Rich toolchain supporting Caffe, TensorFlow, PyTorch, MXNet, and other deep learning frameworks

Certified to CE, FCC, and other international standards

PCIe 3.0 interface, compatible with mainstream x86 servers for easy deployment and expansion

Passive cooling, fanless design

Wide Applications and Rich Scenarios

The Sophon SC3 deep learning accelerator card can be used in a variety of artificial intelligence, machine vision, and high-performance computing environments. It supports facial feature detection, extraction, tracking, recognition, and comparison, as well as structured video analysis applications such as image search and trajectory tracking.

Easy to Use, Efficient Across the Full Stack

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, inference deployment tools, and other software components. It covers the model optimization and efficient runtime support required for the neural network inference stage, offering an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software: users can quickly deploy deep learning algorithms on BITMAIN's AI hardware products to enable intelligent applications. A minimal sketch of the compile-and-deploy flow follows.
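The sketch below is only illustrative, not a verified excerpt of the BMNNSDK interface: bmnetc is the SDK's Caffe front-end compiler, but the exact flag names, file names, and output layout shown here are assumptions that may differ between SDK releases, so consult the documentation shipped with your card.

```python
# Illustrative sketch: compiling a Caffe model for the BM1682 with the
# BMNNSDK toolchain. The flag names and file names below are assumptions.
import subprocess

subprocess.run(
    [
        "bmnetc",
        "--model=resnet50.prototxt",      # hypothetical network definition file
        "--weight=resnet50.caffemodel",   # hypothetical trained weights
        "--shapes=[1,3,224,224]",         # input shape (NCHW)
        "--target=BM1682",                # compile for the SC3's chip
        "--outdir=./compiled",            # compiled model artifacts land here
    ],
    check=True,
)

# The compiled model is then loaded on the SC3 through the SDK's runtime
# components (driver + inference deployment tool) for online inference.
```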

Supports Mainstream Programming Frameworks


Specs

TPU Architecture: Sophon
NPU Core Count: 64 cores
Core Frequency: 750 MHz
Single-Precision Performance (FP32): 3 TFLOPS
System Interface: PCI Express 3.0 x8
DDR Memory: 8 GB
On-Chip SRAM: 16 MB
TDP: 65 W
Thermal Solution: Passive (fanless)
Video Decoder Formats: H.264, H.265/HEVC, MPEG-1/2/4, DivX, XviD, H.263, VC-1, Sorenson, VP8, AVS
Video Decoder Performance: 1080p @ 240 fps or 4K @ 60 fps
DL Frameworks: Caffe, TensorFlow, PyTorch, MXNet
OS Support: Ubuntu 16.04, CentOS 7.4, Debian 9.4
Operating Temperature: 0°C to 55°C
Operating Humidity: 5% to 95% RH
Storage Temperature: -45°C to 75°C
Storage Humidity: 5% to 95% RH
Form Factor (Length × Height × Thickness): 217.21 × 125.44 × 21.59 mm