AI Micro Server SE5-8

The SOPHON AI Micro Server SE5-8 is a high-performance, low-power edge computing product. It is equipped with the BM1684, the third-generation TPU chip independently developed by SOPHGO, which delivers up to 10.6 TOPS of INT8 performance and supports hardware decoding of up to 8 channels of full-HD video plus 1 channel of encoding.

Strong Computing Power

Supports up to 10.6 TOPS of INT8 peak performance or 1.3 TFLOPS of high-precision FP32 performance
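INT8 mode trades numeric precision for throughput, which is why the INT8 figure is several times the FP32 one. A minimal pure-Python sketch of symmetric post-training quantization, the general technique such INT8 inference relies on (the per-tensor scale choice here is illustrative, not SOPHGO's actual scheme):

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: map floats to int8 codes via one scale."""
    scale = max(abs(v) for v in values) / 127.0  # 127 = max int8 magnitude
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]

# Illustrative weights, not from any real model
weights = [0.82, -1.27, 0.05, 0.33, -0.91]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q)        # int8 codes
print(max_err)  # worst-case reconstruction error, bounded by scale/2
```

Each value is stored in one byte instead of four, and the multiply-accumulate units can process correspondingly more operations per cycle, at the cost of the small rounding error shown above.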

Ultra-high Performance

Supports full-pipeline processing of up to 8 channels of 1080p HD video: 8 channels of full-HD hardware decoding plus 1 channel of encoding
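The decode figure is an aggregate pixel-rate budget, so the same capacity could in principle be split differently across streams. A small illustrative calculation (the 240 fps aggregate comes from the spec sheet below; the alternative splits are plain arithmetic, not claims about officially supported configurations):

```python
AGGREGATE_1080P_FPS = 240  # 8 channels x 30 fps, per the spec

def max_channels(fps_per_channel):
    """How many 1080p streams fit in the aggregate decode budget (arithmetic only)."""
    return AGGREGATE_1080P_FPS // fps_per_channel

print(max_channels(30))  # the spec's 8-channel @ 30 fps configuration
print(max_channels(25))  # illustrative: 9 channels at 25 fps
print(max_channels(60))  # illustrative: 4 channels at 60 fps
```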

Rapid Porting

Supports Caffe, TensorFlow, PyTorch, MXNet, Paddle Lite, and other mainstream deep learning frameworks

Passive Heat Dissipation

Supports fanless cooling over a wide temperature range of -20℃ to +70℃

Rich Algorithms

Supports algorithms such as person/vehicle/non-vehicle/object recognition, video structurization, and trajectory and behavior analysis

Rich Scenarios

Supports flexible deployment across smart parks, security, industrial control, business, and other fields and scenarios

Rich Interfaces

Supports USB, HDMI, RS-485, custom I/O, and other interfaces

Flexible Deployment

Supports LTE wireless connectivity (optional)

Application Scenarios

Intelligent Security

Dynamic and static comparison and recognition, video structurization, attribute analysis, and trajectory analysis for monitoring and control

Intelligent Transportation

Checkpoints, automatic vehicle identification, law enforcement assistance, and smart parking

Intelligent Park

Face recognition, security checks, staff attendance management, and smart alert systems

Intelligent Retail

Goods recognition, smart store applications, and face-recognition payment

Easy-to-use, Convenient and Efficient

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit that provides the underlying driver environment, compiler, and inference deployment tools. It covers model optimization, efficient runtime support, and the other capabilities required for neural network inference, providing an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes algorithm and software development cycles and costs, allowing users to quickly deploy deep learning algorithms on SOPHGO's various AI hardware products and enable intelligent applications.

Supports mainstream programming frameworks


Performance Parameters

Chip

Model: SOPHON BM1684

CPU: 8-core ARM Cortex-A53 @ 2.3 GHz

AI Performance

INT8: 10.6 TOPS peak performance

FP32: 1.3 TFLOPS peak performance

Video/Picture Codec

Video decoding: 1080p @ 240 fps (8 channels of 1080p @ 30 fps)

Video encoding: 1080p @ 25 fps (1 channel of 1080p @ 25 fps)

Picture codec: 240 pictures/s @ 1080p

Memory and Storage

Memory: 6 GB

eMMC: 32 GB

External Interfaces

Ethernet: 10/100/1000 Mbps adaptive × 2

USB: USB 3.1 × 2

Storage: MicroSD × 1

Display: HDMI × 1

Phoenix terminal: RS-485 × 2 / custom I/O

Mechanical

Dimensions (L × W × H): 185 mm × 170 mm × 43.6 mm

Power Supply and Consumption

Power supply: DC 12 V

Typical power consumption: 20 W

Temperature and Humidity

Working temperature: -20℃ to +70℃ (depending on configuration)

Humidity: 10% to 90%, non-condensing

Other Functions

Optional: LTE wireless backhaul