TPU processor, 16-channel intelligent HD video analysis, 32-channel full HD video decoding
TPU processor, 32-channel intelligent HD video analysis, 32-channel full HD video decoding, 12-channel full HD video encoding
A RISC-V-based processor running at 2 GHz, featuring 64 cores and 64 MB of shared L3 cache on a single SoC.
RISC-V-based 3M–5M light intelligent deep learning vision processor
RISC-V-based 5M light intelligent deep learning vision processor
RISC-V 5M light intelligent deep learning vision processor
5M light intelligent deep learning vision processor
4K ultra-HD deep learning vision processor
5M high-performance deep learning vision processor
5M light intelligent deep learning vision processor
960-channel HD video decoding, 480-channel HD video analysis
576-channel HD video decoding, 288-channel HD video analysis
BM1684X, 416-channel HD video analysis
x86 host processor, 288-channel HD video analysis
BM1684X, 32-channel HD video analysis
BM1684, 16-channel HD video analysis
BM1684, 192-channel HD video analysis
BM1684, 8-channel HD video analysis
CV186AH, 8-channel HD video analysis
BM1688, 16-channel HD video analysis
72-channel HD video decoding, 72-channel HD video analysis
96-channel HD video decoding, 48-channel HD video analysis
32-channel HD video decoding, 16-channel HD video analysis
32-channel HD video decoding, 32-channel HD video analysis
32-channel HD video decoding, 32-channel HD video analysis
32-channel HD video decoding, 16-channel HD video analysis
32-channel HD video decoding, 16-channel HD video analysis
Deep Learning Developer Product Portfolio
Deep learning intelligent analysis helps make city management more efficient and precise
Using deep learning video technology to analyze sources of dust generation and dust events, contributing to ecological environmental protection
Empower prison management with intelligent monitoring of key controlled areas through smart video analysis
Using deep learning intelligent analysis to monitor scenarios such as safety production, urban firefighting, and unexpected incidents for emergency regulation.
Using specific deep learning algorithms to watermark, blur, or otherwise process streaming video, keeping it confidential and preventing leaks
SOPHGO works with the SOPHON.TEAM ecosystem to build a data intelligence content governance solution.
Using deep learning technology to detect and analyze individuals, vehicles, and security incidents in grassroots governance
Real-time compression and transcoding of video to the cloud and monitoring of abnormal events, enhancing the ability to detect and handle road safety incidents
Addressing traffic congestion, driving safety, vehicle violations, and road pollution control
Utilizing domestically developed computational power to support the structured analysis of massive volumes of videos, catering to practical applications in law enforcement
Building a data-centered "smart, collaborative, efficient, innovative" gait recognition big data analysis system
Rapidly building business capabilities for users that integrate multidimensional data on people, vehicles, and traffic flow
Effectively addressing objects thrown from height: monitoring such incidents in real time, pinpointing the source of the thrown object, triggering alerts, and safeguarding the public from falling objects
Using edge computing architecture to timely and accurately monitor community emergencies and safety hazards
SOPHGO works with SOPHON.TEAM ecosystem partners to build a deep learning supervision solution for smart hospitals, enhancing safety management efficiency in hospitals
SOPHGO works with SOPHON.TEAM ecosystem partners to build a smart safe campus solution
Using a combination of cloud-edge deep learning methods to address food safety supervision requirements across multiple restaurant establishments, creating a closed-loop supervision system for government and enterprise-level stakeholders
Providing deep learning capabilities for the financial, insurance, and various business service industries to enhance operational efficiency and improve service quality
SOPHGO works with SOPHON.TEAM ecosystem partners to offer a "Deep Learning Video Analysis + Restaurant Front-of-House Management" solution
SOPHON's self-developed computing hardware, such as the SG6/SE5/SE6, equipped with SOPHON.TEAM video analysis algorithms, makes industrial safety production smarter
Providing safety monitoring solutions for violations and abnormal events in offices, quality inspection and weighing rooms, storage areas, and other areas of large storage parks such as granaries and cotton warehouses
SOPHON.TEAM is collaborating with ecosystem partners to develop a comprehensive solution for safe production and control in the tobacco industry
In collaboration with SOPHON.TEAM and its ecosystem partners, SOPHGO uses domestically developed computing power products as the hardware foundation to build a production safety management system and raise the production safety management level of liquor enterprises
Combining deep learning, edge computing, and other technologies to intelligently identify people, objects, and their specific behaviors in refueling and unloading areas, while automatically detecting and capturing violations at gas stations, enabling effective traceability afterwards and providing data for safety management.
SOPHGO, in collaboration with SOPHON.TEAM and its ecosystem partners, is focusing on three major scene requirements: "Production Safety Supervision," "Comprehensive Park Management," and "Personnel Safety & Behavioral Standard Supervision." Together, they are developing a comprehensive deep learning scenario solution, integrating "algorithm + computing power + platform."
SOPHGO cooperates with SOPHON.TEAM ecosystem partners to build a deep learning monitoring solution for safety risks in chemical industry parks
SOPHGO works with SOPHON.TEAM ecosystem partners to build a Smart Computing Center solution, establishing a cloud-edge collaborative smart computing center with unified management and scheduling
SOPHGO, in collaboration with the SOPHON.TEAM ecosystem, has developed hardware leveraging domestically produced deep learning computing power products, built on an AutoML zero-code automated deep learning training platform, enabling rapid and efficient deployment of deep learning engineering solutions
Test sets A and B provide *.jpg images.
To help participants optimize their models, reference answers are provided for 10 selected test-set images:
0001 433
0002 121
0003 244
0004 131
0005 560
0006 1515
0007 507
0008 143
0009 936
0010 153
• Preliminary Round
Participants submit their results to the platform as a single .txt file; the platform scores them online and ranks teams in real time. The teams advancing to the final are determined by the rankings as of the deadline.
Online evaluation submission limit: each team may submit result files at most 3 times per day. If a new submission scores better than a previous one, the leaderboard score is automatically updated.
Participants must submit the results of running inference on the training set on the TPU platform, in a file named "val.txt" encoded as UTF-8 without BOM. The results must include each image's index, the estimated number of people, and the inference time. Details:
[val.txt file]:
0001 280 880
0002 1300 980
0003 45 320
…
Note: each line corresponds to the inference result for one image. The first column is the image index in the dataset, the second is the estimated number of people in that image, and the third is the inference time for that image (in seconds).
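As a sanity check before submitting, the three-column format above can be parsed and validated with a few lines of Python. This is an illustrative sketch, not part of the official tooling; the function name `parse_results` and the validation rules are assumptions.

```python
def parse_results(lines):
    """Parse val.txt-style lines into (img_id, count, seconds) tuples.

    Each line must have exactly three whitespace-separated columns:
    image index, estimated people count, inference time in seconds.
    """
    results = []
    for line in lines:
        fields = line.split()
        if len(fields) != 3:
            raise ValueError('expected 3 columns, got: %r' % line)
        results.append((fields[0], float(fields[1]), float(fields[2])))
    return results

# Example using the sample rows from the description above
sample = ['0001 280 880', '0002 1300 980', '0003 45 320']
parsed = parse_results(sample)
```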
• Reproduction Stage
The TOP 5 teams on preliminary leaderboard B enter the reproduction stage and must submit reproduction materials as required; the list of teams advancing to the final is announced after reproduction ends.
Participants must submit a bmodel that can run on the BM1684 platform, together with the corresponding test code. Details:
1. The test code should be saved as tester.py
2. The model should be saved as out.bmodel
3. Place the test code and model in the same folder, compress the folder into a .zip file, and submit it
[tester.py file]
Participants must use the unified test interface we provide; the interface template is as follows:
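The packaging steps above can be automated in a few lines of Python. This is a sketch, not official tooling; only the file names tester.py and out.bmodel come from the rules, while the function name and folder layout are assumptions.

```python
import os
import zipfile

def make_submission(folder, zip_path):
    """Zip a folder that must contain tester.py and out.bmodel."""
    required = {'tester.py', 'out.bmodel'}
    present = set(os.listdir(folder))
    missing = required - present
    if missing:
        raise FileNotFoundError('missing files: %s' % sorted(missing))
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(present):
            # Store entries under the folder name, as the rules require
            # submitting the compressed folder itself
            zf.write(os.path.join(folder, name),
                     arcname=os.path.join(os.path.basename(folder), name))
```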
import argparse
import os
import time

import sophon.sail as sail  # SOPHGO SAIL Python binding

parser = argparse.ArgumentParser()
parser.add_argument('--model', help='path to the bmodel file')
parser.add_argument('--data', help='path to the test data directory')
parser.add_argument('--result', help='path of the output result file')
args = parser.parse_args()

class Tester:
    """Model inference and testing using SAIL"""

    def __init__(self) -> None:
        """SAIL engine initialization"""
        self.engine = sail.Engine(args.model, 0, sail.IOMode.SYSIO)

    def test_one(self, img_pth):
        """Predict the number of people in the given image
        Args:
            img_pth: image path
        Returns:
            predicted number
        """

    def test_all(self):
        """Test all images and save the results"""
        with open(args.result, 'w') as out:
            with open(os.path.join(args.data, 'list.txt')) as f:
                for line in f.readlines():
                    img_id = line.split()[0]
                    img_pth = os.path.join(args.data, 'img_' + img_id + '.jpg')
                    time_start = time.time()
                    pred = self.test_one(img_pth)
                    time_cost = time.time() - time_start
                    print('{} {:.4f} {:.9f}'.format(img_id, pred, time_cost), file=out)
                    print('{} {:.4f}'.format(img_id, pred))
                    print('time cost {}'.format(time_cost))

tester = Tester()
tester.test_all()
1. Model accuracy is evaluated with three metrics: Mean Absolute Error (MAE), Mean Squared Error (MSE), and Normalized Absolute Error (NAE).
2. Model performance is evaluated by the inference time itime, which is the average inference time per image in the dataset, in seconds.
3. The final score is computed as: score = (250 − MAE) × 0.2 + (500 − RMSE) × 0.1 + (0.4 − NAE) × 200 + (2 − itime) × 100
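The scoring formula above can be written directly as a Python function. This is a sketch for participants' own estimation, assuming MAE, RMSE, NAE, and itime have already been computed; it is not the official scoring code.

```python
def final_score(mae, rmse, nae, itime):
    """Final score per the stated formula:
    score = (250-MAE)*0.2 + (500-RMSE)*0.1 + (0.4-NAE)*200 + (2-itime)*100
    """
    return ((250 - mae) * 0.2
            + (500 - rmse) * 0.1
            + (0.4 - nae) * 200
            + (2 - itime) * 100)

# A perfect model (zero error, zero time) would score 380
# → 50 + 50 + 80 + 200 = 380
```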