AI computing box SE5

The SOPHON AI computing box SE5 is a high-performance, low-power edge computing product. It is equipped with the BM1684, Bitmain's independently developed third-generation TPU chip, delivering up to 17.6 TOPS of INT8 compute and supporting hardware decoding of up to 38 channels of 1080P HD video plus 2-channel encoding.

Strong computing power

Supports up to 17.6 TOPS of INT8 peak compute or 2.2 TFLOPS of FP32 high-precision compute

Ultra-high performance

Supports full-pipeline processing of up to 16 channels of 1080P HD video, hardware decoding of up to 38 channels of 1080P HD video, and 2-channel encoding

Quick porting

Supports Caffe, TensorFlow, PyTorch, MXNet, Paddle Lite, and other mainstream deep learning frameworks
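
As a rough illustration of the porting workflow (a sketch only: the TPU compile step is deliberately left as a comment, since its exact tool name and options depend on the BMNNSDK release), a trained model is typically exported to a framework-neutral form first and then compiled offline for the BM1684:

# Porting sketch: export a PyTorch model to ONNX using standard PyTorch APIs.
# The subsequent TPU compilation is performed offline with the BMNNSDK
# compiler and is only indicated here as a comment (an assumption, not a
# documented command).
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)                 # example input used for tracing

torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)

# Next step (outside this script): feed resnet18.onnx (or the original
# framework model) to the BMNNSDK offline compiler to produce a TPU
# executable, then deploy it on the SE5.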

Passive cooling

Supports fanless cooling over a wide temperature range from -20℃ to +60℃

Rich algorithms

Supports various algorithms such as person / vehicle / non-motor vehicle / object recognition, video structuring, trajectory and behavior analysis, etc.

Rich scenarios

Supports flexible deployment in smart park, security, industrial control, commercial, and other fields and scenarios

Rich interfaces

Supports USB, HDMI, RS-485, RS-232, SATA, custom I/O, and other interfaces

Flexible deployment

Supports optional LTE wireless connectivity

Application scenarios

Intelligent security

Video stream and image stream recognition, video structuring analysis, trajectory analysis, etc.

Smart Transportation

Checkpoint monitoring, automatic vehicle identification, law enforcement assistance, smart parking

Smart Park

Face recognition security checks, staff attendance management, smart alarm systems

Smart Retail

Goods recognition, smart store applications, face recognition payment

Easy to use, efficient across the full stack

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, inference deployment tools, and other software components. It covers the model optimization and efficient runtime support required in the neural network inference stage, offering an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on SOPHON AI hardware products and enable intelligent applications.
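
For illustration, the minimal sketch below shows how a compiled model (a .bmodel file) could be run from Python on the SE5; it assumes the SDK's Python runtime binding (sophon.sail) exposes an Engine class roughly as shown, and the exact module, class, and method names may differ between BMNNSDK versions.

# Hedged inference sketch: sail.Engine, get_graph_names, get_input_names and
# process are assumptions based on the BMNNSDK SAIL Python binding and may
# not match your SDK version exactly.
import numpy as np
import sophon.sail as sail

engine = sail.Engine("model.bmodel", 0, sail.IOMode.SYSIO)   # TPU device id 0
graph = engine.get_graph_names()[0]                          # first compiled graph
input_name = engine.get_input_names(graph)[0]

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)    # placeholder input tensor
outputs = engine.process(graph, {input_name: frame})         # run inference on the TPU
print({name: arr.shape for name, arr in outputs.items()})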

Supports mainstream programming frameworks


Specifications

Chip

Model: SOPHON BM1684
CPU: 8-core ARM Cortex-A53 @ 2.3GHz

Computing power

INT8: 17.6 TOPS peak
FP32: 2.2 TFLOPS peak

Video / picture codec

Video decoding: 960 fps of 1080P (38 channels of 1080P @ 25 fps)
Video encoding: 50 fps of 1080P (2 channels of 1080P @ 25 fps)
Picture codec: 480 pictures/s at 1080P

Memory and storage

Memory: 12GB
eMMC: 32GB

External interfaces

Ethernet: 10/100/1000Mbps adaptive ×2
USB: USB 3.1 ×2
Storage: MicroSD ×1
Display: HDMI ×1
Phoenix terminal: RS-232 ×1 / RS-485 ×1 / custom I/O

Mechanical

Dimensions (L × W × H): 188mm × 148mm × 44.5mm

Power supply and power consumption

Power supply: DC 12V
Typical power consumption: ≤20W

Temperature and humidity

Operating temperature: -20℃ ~ +60℃ (depending on the configuration)
Humidity: 10% ~ 90%, non-condensing

Other functions

Optional: SATA hard disk support, LTE wireless backhaul support
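
The decode and encode figures above are aggregate throughput numbers, so the usable channel count scales with each stream's frame rate. The short sizing sketch below (plain arithmetic, not part of any SDK) shows how the 960 fps 1080P decode budget maps to stream counts:

# Sizing arithmetic derived from the spec table: the SE5 decodes roughly
# 960 frames/s of 1080P video in aggregate, so the concurrent stream count
# depends on the per-stream frame rate.
DECODE_BUDGET_FPS = 960

def max_streams(per_stream_fps: float) -> int:
    # Number of 1080P streams that fit within the aggregate decode budget.
    return int(DECODE_BUDGET_FPS // per_stream_fps)

print(max_streams(25))   # 38 streams at 25 fps, matching the datasheet figure
print(max_streams(30))   # 32 streams at 30 fps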