TPU processor: 16-channel HD video intelligent analysis, 16-channel full HD video decoding, 10-channel full HD video encoding
TPU processor: 32-channel HD video intelligent analysis, 32-channel full HD video decoding, 12-channel full HD video encoding
RISC-V + ARM intelligent deep learning processor
Based on RISC-V cores running at 2GHz, the processor integrates 64 cores and a 64MB shared L3 cache on a single SoC.
SRC1-10 is a high-performance server cluster based on the RISC-V architecture. It provides both computing and storage capabilities, and its full hardware and software stack is domestically produced.
The RISC-V fusion server supports dual-processor interconnection and intelligent computing acceleration.
SRB1-20 is a high-performance storage server based on the RISC-V architecture. It supports CCIX and 128 concurrent cores, offers multi-disk large-capacity secure storage, and its full hardware and software stack is domestically produced.
SRA1-20 is a high-performance computing server based on the RISC-V architecture. It supports CCIX and 128 concurrent cores, and both its software and hardware are open source and controllable.
SRA3-40 is a RISC-V server for high-performance computing, with a domestically produced main processor, excellent performance, integrated intelligent computing, and powerful codec support.
SRB3-40 is a high-performance RISC-V storage server with multiple disk slots and large-capacity secure storage.
Intelligent computing server SGM7-40, adapted to mainstream LLMs; a single card can run a 70B large language model
SOM1684, BM1684, 16-Channel HD Video Analysis
Core-1684-JD4, BM1684, 16-Channel HD Video Analysis
SBC-6841, BM1684, 16-Channel HD Video Analysis
iCore-1684XQ, BM1684X, 32-Channel HD Video Analysis
Core-1684XJD4, BM1684X, 32-Channel HD Video Analysis
Shaolin PI SLKY01, BM1684, 16-Channel HD Video Analysis
QY-AIM16T-M, BM1684, 16-Channel HD Video Analysis
QY-AIM16T-M-G, BM1684, 16-Channel HD Video Analysis
QY-AIM16T-W, BM1684, 16-Channel HD Video Analysis
AIV02T, BM1684*2, Half-Height Half-Length Accelerator Card
AIO-1684JD4, BM1684, 16-Channel HD Video Analysis
AIO-1684XJD4, BM1684X, 32-Channel HD Video Analysis
AIO-1684XQ, BM1684X, 32-Channel HD Video Analysis
IVP03X, BM1684X, 32-Channel HD Video Analysis
IVP03A, Microserver, passive cooling, 12GB RAM
Coeus-3550T, BM1684, 16-Channel HD Video Analysis
EC-1684JD4, BM1684, 16-Channel HD Video Analysis
CSA1-N8S1684, BM1684*8, 1U Cluster Server
DZFT-ZDFX, BM1684X, Electronic Seal Analyzer, ARM+DSP architecture
ZNFX-32, BM1684, 16-Channel HD Video Analysis
ZNFX-8, BM1684X, ARM+DSP architecture, Flameproof and Intrinsic-Safety Analysis Device
EC-A1684JD4, Microserver with active cooling, 16GB RAM, 32GB eMMC
EC-A1684JD4 FD, BM1684, 16-Channel HD Video Analysis, 6GB RAM, 32GB eMMC
EC-A1684XJD4 FD, BM1684X, 32-Channel HD Video Analysis
ECE-S01, BM1684, 16-Channel HD Video Analysis
IOEHM-AIRC01, BM1684, Microserver with active cooling, 16-Channel HD Video Analysis
IOEHM-VCAE01, BM1684, 16-Channel HD Video Analysis
CSA1-N8S1684X, BM1684*8, 1U Cluster Server
QY-S1U-16, BM1684, 1U Server
QY-S1U-192, BM1684*12, 1U Cluster Server
QY-S1X-384, BM1684*12, 1U Cluster Server
Deep learning intelligent analysis helps make city management more efficient and precise
Using deep learning video technology to analyze sources of dust generation and dust events, contributing to ecological environmental protection
Using deep learning intelligent analysis to monitor scenarios such as safety production, urban firefighting, and unexpected incidents for emergency regulation.
Using deep learning technology to detect and analyze individuals, vehicles, and security incidents in grassroots governance
Addressing traffic congestion, driving safety, vehicle violations, and road pollution control
Utilizing domestically developed computational power to support the structured analysis of massive volumes of videos, catering to practical applications in law enforcement
Build a "smart, collaborative, efficient, innovative" gait recognition big data analysis system centered around data
Effectively addressing incidents of objects thrown from height by monitoring such incidents in real time, pinpointing the location of the thrown object, and triggering alerts, thereby safeguarding the public from falling objects
Using edge computing architecture to timely and accurately monitor community emergencies and safety hazards
SOPHGO works with SOPHON.TEAM ecosystem partners to build a deep learning supervision solution for smart hospitals, enhancing safety management efficiency in hospitals
SOPHGO works with SOPHON.TEAM ecosystem partners to build a smart safe-campus solution
Using a combination of cloud-edge deep learning methods to address food safety supervision requirements across multiple restaurant establishments, creating a closed-loop supervision system for government and enterprise-level stakeholders
SOPHON's self-developed computing hardware, such as the SG6/SE5/SE6, equipped with SOPHON.TEAM video analysis algorithms, makes industrial production safety smarter
Combining deep learning, edge computing and other technologies, it has the ability to intelligently identify people, objects, things and their specific behaviors in the refueling area and unloading area. It also automatically detects and captures illegal incidents at gas stations to facilitate effective traceability afterwards and provide data for safety management.
SOPHGO, in collaboration with SOPHON.TEAM and its ecosystem partners, is focusing on three major scene requirements: "Production Safety Supervision," "Comprehensive Park Management," and "Personnel Safety & Behavioral Standard Supervision." Together, they are developing a comprehensive deep learning scenario solution, integrating "algorithm + computing power + platform."
SOPHGO cooperates with SOPHON.TEAM ecosystem partners to build a deep learning monitoring solution for safety risks in chemical industry parks
SOPHGO works with SOPHON.TEAM ecosystem partners to build a Smart Computing Center solution, establishing a cloud-edge collaborative smart computing center with unified management and scheduling
SOPHGO, in collaboration with the SOPHON.TEAM ecosystem, has jointly developed a hardware suite leveraging domestically produced deep learning computing products. It is built on an AutoML zero-code automated deep learning training platform, enabling rapid and efficient delivery of deep learning engineering solutions
Test set A and test set B provide *.jpg images.
To help participants optimize their models, reference answers are provided for 10 images from the test set:
0001 433
0002 121
0003 244
0004 131
0005 560
0006 1515
0007 507
0008 143
0009 936
0010 153
• Preliminary Stage
Participants submit their results as a single txt file to the platform, which scores them online and ranks teams in real time. The teams advancing to the final are determined by the ranking at the deadline.
Online submission limit: each team may submit result files at most 3 times per day. If a new submission scores better than previous ones, the leaderboard entry is automatically updated.
Participants must submit the results of running inference on the training set on the TPU platform, as a file named "val.txt" in UTF-8 without BOM. The results must include each image's index, the estimated number of people, and the inference time. Details:
[val.txt file]:
0001 280 880
0002 1300 980
0003 45 320
…
Note: each line corresponds to the inference result for one image. The first column is the image index in the dataset, the second column is the estimated number of people in that image, and the third column is the inference time for that image (in seconds).
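Before submitting, the three-column format can be sanity-checked with a few lines of Python. This is an illustrative sketch, not part of the official tooling; the function name validate_val_line is ours:

```python
def validate_val_line(line):
    """Check one val.txt line: image index, predicted count, inference time (s)."""
    parts = line.split()
    if len(parts) != 3:
        return False
    img_id, count, seconds = parts
    try:
        # index must be numeric; count and time must be non-negative numbers
        return img_id.isdigit() and float(count) >= 0 and float(seconds) >= 0
    except ValueError:
        return False
```

Running it over every line of val.txt before upload catches malformed rows early, since each team has only 3 submissions per day.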
• Reproduction Stage
The top 5 teams on the preliminary B leaderboard enter the reproduction stage and must submit reproduction materials as required; the list of teams advancing to the final is announced after reproduction ends.
Participants must submit a bmodel that runs on the BM1684 platform together with the corresponding test code. Details:
1. The test code should be saved as tester.py
2. The model should be saved as out.bmodel
3. Place the test code and the model in the same folder, compress the folder into a .zip file, and submit it
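Step 3 can be done portably with Python's standard zipfile module. This is a sketch; the folder name submit/ and the function name are placeholders, not part of the official requirements:

```python
import os
import zipfile


def pack_submission(folder, zip_path):
    """Compress the folder containing tester.py and out.bmodel into a .zip file."""
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                # store paths relative to the folder's parent so the archive
                # unpacks into a single top-level directory
                zf.write(full, os.path.relpath(full, os.path.dirname(folder)))
```

For example, pack_submission('submit', 'submit.zip') produces an archive whose entries are submit/tester.py and submit/out.bmodel.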
[tester.py file]
Participants must use the unified test interface we provide; the interface template is as follows:
import argparse
import os
import time

import sophon.sail as sail  # SOPHON SAIL Python runtime

parser = argparse.ArgumentParser()
parser.add_argument('--model', default='out.bmodel', help='path to the bmodel')
parser.add_argument('--data', default='data', help='dataset directory containing list.txt')
parser.add_argument('--result', default='val.txt', help='output result file')
args = parser.parse_args()


class Tester:
    """Model inference and testing using SAIL"""

    def __init__(self) -> None:
        """sail engine initialization on TPU device 0"""
        self.engine = sail.Engine(args.model, 0, sail.IOMode.SYSIO)

    def test_one(self, img_pth):
        """predict number of people in the given image

        Args:
            img_pth: image path
        Returns:
            predicted number
        """
        # to be implemented by the participant

    def test_all(self):
        """test all images and save results"""
        with open(args.result, 'w') as out:
            with open(os.path.join(args.data, 'list.txt')) as f:
                for line in f:
                    img_id = line.split()[0]
                    img_pth = os.path.join(args.data, 'img_' + img_id + '.jpg')
                    time_start = time.time()
                    pred = self.test_one(img_pth)
                    time_cost = time.time() - time_start
                    # write: image index, predicted count, per-image inference time (s)
                    print('{} {:.4f} {:.9f}'.format(img_id, pred, time_cost), file=out)
                    print('{} {:.4f}'.format(img_id, pred))
                    print('time cost {}'.format(time_cost))


tester = Tester()
tester.test_all()
1. Model accuracy is evaluated with three metrics: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Normalized Absolute Error (NAE).
2. Model performance is evaluated by the inference time itime, defined as the average inference time per image over the dataset, in seconds.
3. The final score is computed as:
score = (250 - MAE) * 0.2 + (500 - RMSE) * 0.1 + (0.4 - NAE) * 200 + (2 - itime) * 100
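For self-evaluation during development, the scoring formula transcribes directly into Python. This is our own illustrative helper, not official scoring code:

```python
def final_score(mae, rmse, nae, itime):
    """Competition score:
    score = (250 - MAE)*0.2 + (500 - RMSE)*0.1 + (0.4 - NAE)*200 + (2 - itime)*100
    """
    return (250 - mae) * 0.2 + (500 - rmse) * 0.1 + (0.4 - nae) * 200 + (2 - itime) * 100
```

Note how the weights balance the terms: a 1-person improvement in MAE is worth 0.2 points, while shaving 0.01s off the average inference time is worth 1 point, so both accuracy and speed matter.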