TPU processor with 16-channel HD video intelligent analysis, 16-channel full-HD video decoding, and 10-channel full-HD video encoding
TPU processor with 32-channel HD video intelligent analysis, 32-channel full-HD video decoding, and 12-channel full-HD video encoding
RISC-V + ARM intelligent deep learning processor
Based on RISC-V cores running at 2 GHz, the processor integrates 64 cores and a 64 MB shared L3 cache on a single SoC.
SRC1-10 is a high-performance server cluster based on the RISC-V architecture. It provides both computing and storage capabilities, and its full hardware and software stack is domestically produced.
The RISC-V fusion server supports dual-processor interconnection and enables intelligent computing acceleration.
SRB1-20 is a high-performance storage server based on the RISC-V architecture. It supports CCIX and 128-core concurrency, provides multi-disk large-capacity secure storage, and its full hardware and software stack is domestically produced.
SRA1-20 is a high-performance computing server based on the RISC-V architecture. It supports CCIX and 128-core concurrency, and both its software and hardware are open source and controllable.
SRA3-40 is a RISC-V server for high-performance computing, built around a domestic main processor. It delivers excellent performance, integrates intelligent computing, and supports powerful video encoding and decoding.
SRB3-40 is a high-performance RISC-V storage server with multiple disk slots and large-capacity secure storage.
Intelligent computing server SGM7-40 is adapted to mainstream LLMs; a single card can run a 70B-parameter large language model.
96-channel HD video decoding, 96-channel HD video analysis
SOM1684, BM1684, 16-Channel HD Video Analysis
Core-1684-JD4, BM1684, 16-Channel HD Video Analysis
SBC-6841, BM1684, 16-Channel HD Video Analysis
iCore-1684XQ, BM1684X, 32-Channel HD Video Analysis
Core-1684XJD4, BM1684X, 32-Channel HD Video Analysis
Shaolin PI SLKY01, BM1684, 16-Channel HD Video Analysis
QY-AIM16T-M, BM1684, 16-Channel HD Video Analysis
QY-AIM16T-M-G, BM1684, 16-Channel HD Video Analysis
QY-AIM16T-W, BM1684, 16-Channel HD Video Analysis
AIV02T, BM1684*2, Half-Height Half-Length Accelerator Card
AIO-1684JD4, BM1684, 16-Channel HD Video Analysis
AIO-1684XJD4, BM1684X, 32-Channel HD Video Analysis
AIO-1684XQ, BM1684X, 32-Channel HD Video Analysis
IVP03X, BM1684X, 32-Channel HD Video Analysis
IVP03A, Microserver, Passive Cooling, 12GB RAM
Coeus-3550T, BM1684, 16-Channel HD Video Analysis
EC-1684JD4, BM1684, 16-Channel HD Video Analysis
CSA1-N8S1684, BM1684*8, 1U Cluster Server
DZFT-ZDFX, BM1684X, Electronic Seal Analyzer, ARM+DSP Architecture
ZNFX-32, BM1684, 16-Channel HD Video Analysis
ZNFX-8, BM1684X, ARM+DSP Architecture, Flameproof and Intrinsically Safe Analysis Device
EC-A1684JD4, Microserver with Active Cooling, 16GB RAM, 32GB eMMC
EC-A1684JD4 FD, BM1684, 16-Channel HD Video Analysis, 6GB RAM, 32GB eMMC
EC-A1684XJD4 FD, BM1684X, 32-Channel HD Video Analysis
ECE-S01, BM1684, 16-Channel HD Video Analysis
IOEHM-AIRC01, BM1684, Microserver with Active Cooling, 16-Channel HD Video Analysis
IOEHM-VCAE01, BM1684, 16-Channel HD Video Analysis
CSA1-N8S1684X, BM1684X*8, 1U Cluster Server
QY-S1U-16, BM1684, 1U Server
QY-S1U-192, BM1684*12, 1U Cluster Server
QY-S1X-384, BM1684X*12, 1U Cluster Server
Deep learning intelligent analysis helps make city management more efficient and precise
Using deep learning video technology to analyze sources of dust generation and dust events, contributing to ecological environmental protection
Using deep learning intelligent analysis to monitor scenarios such as safety production, urban firefighting, and unexpected incidents for emergency regulation.
Using deep learning technology to detect and analyze individuals, vehicles, and security incidents in grassroots governance
Addressing traffic congestion, driving safety, vehicle violations, and road pollution control with deep learning analysis
Utilizing domestically developed computational power to support the structured analysis of massive volumes of videos, catering to practical applications in law enforcement
Building a data-centered gait recognition big data analysis system that is "smart, collaborative, efficient, and innovative"
Addressing incidents of objects thrown from height: monitoring such incidents in real time, pinpointing the location of the thrown object, triggering alerts, and effectively safeguarding the public from falling objects
Using edge computing architecture to timely and accurately monitor community emergencies and safety hazards
SOPHGO works with SOPHON.TEAM ecosystem partners to build a deep learning supervision solution for smart hospitals, enhancing safety management efficiency in hospitals
SOPHGO works with SOPHON.TEAM ecosystem partners to build a smart safe-campus solution
Using a combination of cloud-edge deep learning methods to address food safety supervision requirements across multiple restaurant establishments, creating a closed-loop supervision system for government and enterprise-level stakeholders
SOPHON's self-developed computing hardware, such as the SG6/SE5/SE6, equipped with SOPHON.TEAM video analysis algorithms, makes industrial production safety smarter
Combining deep learning, edge computing and other technologies, it has the ability to intelligently identify people, objects, things and their specific behaviors in the refueling area and unloading area. It also automatically detects and captures illegal incidents at gas stations to facilitate effective traceability afterwards and provide data for safety management.
SOPHGO, in collaboration with SOPHON.TEAM and its ecosystem partners, is focusing on three major scene requirements: "Production Safety Supervision," "Comprehensive Park Management," and "Personnel Safety & Behavioral Standard Supervision." Together, they are developing a comprehensive deep learning scenario solution, integrating "algorithm + computing power + platform."
SOPHGO cooperates with SOPHON.TEAM ecosystem partners to build a deep learning monitoring solution for safety risks in chemical industry parks
SOPHGO works with SOPHON.TEAM ecosystem partners to build a Smart Computing Center solution, establishing a cloud-edge collaborative smart computing center with unified management and scheduling
SOPHGO, in collaboration with the SOPHON.TEAM ecosystem, has developed a hardware suite built on domestically produced deep learning computing products. Based on an AutoML zero-code automated deep learning training platform, it enables rapid and efficient delivery of deep learning engineering solutions
The dataset comprises two modalities: medical imaging data with 6670 dimensions and intestinal data with 377 dimensions, for a total of 39 samples. Positive samples are labeled 1, and negative samples are labeled -1.
The dataset covers two distinct types of Chinese information extraction tasks, involving relationship extraction and event extraction, encompassing both sentence and discourse-level natural language texts.
Similar to the preliminary round, this dataset contains two modalities: medical imaging data (6670 dimensions) and intestinal data (377 dimensions), with a total of 39 samples. Positive samples are labeled as 1, and negative samples are labeled as -1.
This dataset includes two types of Chinese information extraction tasks: relationship extraction and event extraction, covering both sentence and discourse-level natural language texts.
Dataset for TPU-MLIR: derived from version 5.0 of the CASIA Face Image Database (CASIA-FaceV5), containing 2500 color face images of 500 subjects. The facial images were captured in a single session with a Logitech USB camera. Volunteers include graduate students, workers, and service staff. All facial images are 16-bit color BMP files at a resolution of 640 * 480. Typical intra-class variations include lighting, pose, expression, eyeglasses, and imaging distance.
Learning Materials for TPU-MLIR: https://tpumlir.org/index.html
Open Source Repository for TPU-MLIR: https://github.com/sophgo/tpu-mlir
TPU-MLIR Learning Videos: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875
TPU-MLIR Getting Started Guide: https://tpumlir.org/docs/quick_start/index.html
Participants are required to conduct model training and testing on the Momodel platform, and finally submit their results on the platform.
Participants need to perform model conversion and deployment on the SOPHON.NET platform, and ultimately submit their results on the platform.
The finals will be organized in an offline format, structured as a Hackathon.
Final Score = Accuracy * 50 + Recall * 30 + F1 Score * 20.
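As a quick illustration, the weighted final-score formula above can be sketched in Python. This assumes accuracy, recall, and F1 are fractions in [0, 1]; the function name is illustrative, not part of any official competition tooling.

```python
def final_score(accuracy: float, recall: float, f1: float) -> float:
    """Weighted final score: Accuracy * 50 + Recall * 30 + F1 * 20."""
    return accuracy * 50 + recall * 30 + f1 * 20

# Example: accuracy 0.90, recall 0.80, F1 0.85
print(final_score(0.90, 0.80, 0.85))  # 86.0
```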
For systems evaluated on the test set, the SPO (Subject, Predicate, Object) outputs are compared exactly against manually annotated SPO results, with the F1 score as the evaluation metric. Note: for complex 'O' value types in an SPO, every slot must match exactly for the SPO to count as correctly extracted. To address entity aliases in some texts, a Baidu Knowledge Graph alias dictionary assists the evaluation. The F1 score is calculated as F1 = (2 * P * R) / (P + R), where P = (number of correctly predicted SPOs in all test sentences) / (number of predicted SPOs in all test sentences), and R = (number of correctly predicted SPOs in all test sentences) / (number of manually annotated SPOs in all test sentences).
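The precision/recall/F1 computation described above can be sketched as follows. This is a minimal illustration of exact SPO matching only; the alias-dictionary handling is omitted, and the function name and data layout are assumptions, not the official scorer.

```python
def spo_f1(predicted, gold):
    """Compute (precision, recall, F1) over per-sentence sets of (S, P, O) triples.

    predicted, gold: lists of sets of (subject, predicate, object) tuples,
    aligned one set per test sentence. Matching is exact, slot for slot.
    """
    correct = sum(len(p & g) for p, g in zip(predicted, gold))  # exact-match count
    num_pred = sum(len(p) for p in predicted)
    num_gold = sum(len(g) for g in gold)
    precision = correct / num_pred if num_pred else 0.0
    recall = correct / num_gold if num_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: one sentence, one of two gold triples recovered
pred = [{("Alice", "works_at", "Acme")}]
gold = [{("Alice", "works_at", "Acme"), ("Alice", "born_in", "Paris")}]
print(spo_f1(pred, gold))  # (1.0, 0.5, 0.6666666666666666)
```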
The evaluation metric comprises two parts: precision, measured by the composite score (consistent with Preliminary Task 1), and speed, measured by the time taken to process all data.
The evaluation metric likewise comprises two parts: precision, measured by the F1 score (consistent with Preliminary Task 2), and speed, measured by the time taken to process all text data.
The evaluation metric consists of two parts: precision, measured by the F1 score, and speed, measured by the time taken to process all images.