AI Micro Server SE5-16

The SOPHON AI Micro Server SE5-16 is a high-performance, low-power edge computing product. It is built around the BM1684, SOPHGO's independently developed third-generation TPU chip, delivering up to 17.6 TOPS of INT8 performance and supporting hardware decoding of up to 32 channels of full-HD video plus 2 channels of encoding.

Strong Computing Power

Supports up to 17.6 TOPS of INT8 peak performance or 2.2 TFLOPS of FP32 high-precision performance

Ultra-high Performance

Supports full-pipeline processing of up to 16 channels of 1080P HD video
Supports hardware decoding of 32 channels of full-HD video and encoding of 2 channels
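The advertised channel counts follow directly from the chip's aggregate 1080p codec throughput (960 fps of decoding and 50 fps of encoding, per the performance parameters below). As a quick sanity check of that arithmetic:

```python
# Aggregate 1080p codec throughput of the BM1684 (figures from the spec sheet).
DECODE_FPS_TOTAL = 960   # 1080p decode, frames per second
ENCODE_FPS_TOTAL = 50    # 1080p encode, frames per second

def max_channels(total_fps: int, per_channel_fps: int) -> int:
    """Number of simultaneous streams the aggregate throughput can sustain."""
    return total_fps // per_channel_fps

# 30 fps streams: 960 / 30 = 32 decode channels, as advertised.
print(max_channels(DECODE_FPS_TOTAL, 30))   # 32
# 25 fps streams: 50 / 25 = 2 encode channels.
print(max_channels(ENCODE_FPS_TOTAL, 25))   # 2
```

The same budget can be re-divided for other frame rates, e.g. lower-fps streams allow proportionally more decode channels.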

Rapid Transplantation

Supports Caffe, TensorFlow, PyTorch, MXNet, Paddle Lite, and other mainstream deep learning frameworks

Passive Heat Dissipation

Supports fanless cooling across a wide temperature range of -20°C to +60°C

Rich Algorithms

Supports various algorithms such as person/vehicle/non-motor-vehicle/object recognition, video structuring, and trajectory behavior analysis

Rich Scenarios

Supports flexible deployment in scenarios such as intelligent parks, security, industrial control, and commercial applications

Rich Interfaces

Supports USB, HDMI, RS-485, RS-232, SATA, custom I/O, and other interfaces

Flexible Deployment

Supports optional LTE wireless connectivity

Application Scenarios

Intelligent Security

Dynamic and static comparison and recognition, video structuring, attribute analysis, and trajectory analysis for monitoring and control.

Intelligent Transportation

Checkpoint monitoring, automatic vehicle identification, law enforcement assistance, and smart parking

Intelligent Park

Face recognition, security checks, staff attendance management, and smart alert systems

Intelligent Retail

Goods recognition, smart-store applications, and face recognition payment

Easy-to-use, Convenient and Efficient

BMNNSDK (BITMAIN Neural Network SDK) is a one-stop toolkit providing a series of software tools, including the underlying driver environment, a compiler, and inference deployment tools. The toolkit covers model optimization, efficient runtime support, and the other capabilities required for neural-network inference, providing an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. BMNNSDK minimizes algorithm and software development cycles and costs, so users can quickly deploy deep learning algorithms on SOPHGO's various AI hardware products to enable intelligent applications.
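Part of what such a toolchain does during model optimization is quantize FP32 models to INT8 so they can exploit the chip's 17.6 TOPS integer throughput. The sketch below illustrates the general idea with generic symmetric per-tensor quantization; it is a conceptual illustration only, not BMNNSDK's actual API:

```python
def quantize_int8(weights: list) -> tuple:
    """Symmetric per-tensor INT8 quantization: map FP32 values into [-127, 127]
    using a single scale derived from the largest absolute value."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list, scale: float) -> list:
    """Recover approximate FP32 values from the INT8 representation."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
print(q)                  # integers in [-127, 127]
print(dequantize(q, s))   # values close to the original weights
```

In practice, a compiler toolchain calibrates such scales per layer or per channel against representative input data; the accuracy loss from quantization is what separates the chip's INT8 (17.6 TOPS) and FP32 (2.2 TFLOPS) operating points.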

Supports mainstream programming frameworks


Performance Parameters

Chip
Model: SOPHON BM1684
CPU: 8-core ARM Cortex-A53 @ 2.3 GHz

AI Performance
INT8: 17.6 TOPS peak performance
FP32: 2.2 TFLOPS peak performance

Video / Picture Codec
Video decoding: 1080p @ 960 fps (32 channels of 1080P @ 30 FPS)
Video encoding: 1080p @ 50 fps (2 channels of 1080P @ 25 FPS)
Picture codec: 480 pictures/s @ 1080p

Memory and Storage
Memory: 12 GB
eMMC: 32 GB

External Interfaces
Ethernet: 2 × 10/100/1000 Mbps adaptive
USB: 2 × USB 3.1
Storage: 1 × MicroSD
Display: 1 × HDMI
Phoenix terminal: 1 × RS-232, 1 × RS-485, custom I/O

Mechanical
Dimensions (L × W × H): 188 mm × 148 mm × 44.5 mm

Power Supply and Power Consumption
Power supply: DC 12V
Typical power consumption: ≤ 20 W

Temperature and Humidity
Working temperature: -20°C to +60°C (depending on configuration)
Humidity: 10% ~ 90%, non-condensing

Other Functions
Optional: SATA hard disk support, LTE wireless backhaul support