
Tensor Computing Processor BM1684X

SOPHON BM1684X is the fourth-generation tensor processor launched by SOPHGO for deep learning, delivering twice the performance of the previous generation.

Peak Performance

Intelligent analysis of 32 HD video channels
FP32/BF16/FP16/INT8 supported

Video Decoding

32-channel HD hardware decoding

Video Encoding

12-channel HD hardware encoding

Energy Efficiency Ratio

2× the energy efficiency of the previous generation

Supports INT8, FP16/BF16 and FP32 precision, greatly improving deep learning performance

Integrated high-performance ARM core, supporting secondary development

Integrated video and image decoding and encoding capabilities

Supports PCIe and Ethernet interfaces

Supports PyTorch, TensorFlow and other mainstream frameworks
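To illustrate what the INT8 support above means in practice, here is a minimal, self-contained sketch of symmetric per-tensor INT8 quantization, the kind of precision reduction a compiler applies before a model runs in a processor's INT8 mode. This is plain Python for illustration only, not SOPHGO SDK code:

```python
# Sketch: symmetric per-tensor INT8 quantization (illustrative, not SDK code).

def quantize_int8(values):
    """Map float values to int8 codes using a single per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.04, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))

print(q)  # → [50, -127, 4, 100]
```

Storing and multiplying 8-bit codes instead of 32-bit floats is what allows the large throughput and energy-efficiency gains quoted above, at the cost of a bounded rounding error (at most half a quantization step per value).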

Wide Application and Scenarios

The BM1684X edge-computing deep learning processor can be used in artificial intelligence, machine vision and high-performance computing environments.

Easy-to-use, Convenient and Efficient

The SOPHON SDK is a one-stop toolkit covering the underlying driver environment, the compiler and inference deployment tools. It provides the model optimization, efficient runtime support and other capabilities required for neural network inference, forming a full-stack solution for developing and deploying deep learning applications. By minimizing the development cycle and cost of algorithms and software, SOPHON SDK lets users quickly deploy deep learning algorithms on SOPHGO's deep learning hardware products and build intelligent applications.
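A typical compile-and-deploy flow with this toolchain looks roughly like the sketch below. The commands mirror the publicly documented tpu-mlir compiler shipped with SOPHON SDK; the model name, file names and calibration table are placeholders, and exact flags may vary between SDK releases, so check the documentation for the version in use:

```shell
# Step 1: convert a framework model (here a hypothetical ONNX export)
# into the toolchain's MLIR intermediate representation.
model_transform.py \
    --model_name resnet18 \
    --model_def resnet18.onnx \
    --input_shapes [[1,3,224,224]] \
    --mlir resnet18.mlir

# Step 2: compile the IR into a bmodel for the BM1684X, quantized to INT8
# using a previously generated calibration table.
model_deploy.py \
    --mlir resnet18.mlir \
    --quantize INT8 \
    --chip bm1684x \
    --calibration_table resnet18_cali_table \
    --model resnet18_int8.bmodel
```

The resulting `.bmodel` file is the deployable artifact that the SDK's runtime loads for inference on the processor.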