Artificial Intelligence thread

tphuang


A lot of good articles here about compute. Good things to think about in how they approach the digital economy and how they are building a national network of supercomputing centers that can schedule tasks, move data quickly, and run computation in a distributed manner. This basically allows fuller utilization of your compute resources for AI and other tasks.

Can't do AI without computation power and data.
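
For anyone wondering what "scheduling tasks across a national network of compute centers" looks like in practice, here is a minimal sketch; the center names, capacities and job sizes are made up for illustration, and the real schedulers are obviously far more sophisticated:

# Minimal sketch of cross-center task scheduling, as loosely described above.
# Center names, capacities and job sizes are made up for illustration only.
from dataclasses import dataclass

@dataclass
class Center:
    name: str
    free_pflops: float          # currently unused compute, in PFLOPS

def schedule(jobs, centers):
    """Greedily place each job on the center with the most spare compute."""
    placement = {}
    for job_name, need in sorted(jobs.items(), key=lambda kv: -kv[1]):
        best = max(centers, key=lambda c: c.free_pflops)
        if best.free_pflops < need:
            placement[job_name] = None      # no capacity anywhere: leave it queued
            continue
        best.free_pflops -= need
        placement[job_name] = best.name
    return placement

centers = [Center("west-1", 120.0), Center("east-1", 40.0)]
jobs = {"llm-training": 100.0, "weather-sim": 30.0, "render": 25.0}
print(schedule(jobs, centers))
# {'llm-training': 'west-1', 'weather-sim': 'east-1', 'render': None}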
 

tphuang


The 8 firsts from 2023:
First QC cloud, with China Mobile
New-generation lossless network
4th-generation intelligent general-purpose AI server
First all-RISC-V server (everything from CPU to TPU is RISC-V)
New-generation cloud-edge robot
西部数谷 ("Western Digital Valley" data hub)
新一代信息技术应用创新适配基地 (new-generation IT application innovation adaptation base)
 

tphuang


They unveiled this new GP AI server
This intelligent general-purpose AI server, the Jinzhou Yuanhang (金舟远航) brand G658V3, was developed and produced by Ningxia Yihai Information Technology Co., Ltd. It supports up to 10 full-height, full-length, double-width GPU cards and can accommodate a variety of high-performance GPUs, including Nvidia's, while also being deeply adapted to domestic GPU cards, with fully upgraded CPU, GPU and I/O specifications. The server features high compute power, strong expandability, rich configuration options and high reliability, and is broadly applicable to artificial intelligence, high-performance computing, data analysis, the metaverse and other scenarios.
Each AI server comes with up to 10 GPUs. It can fit Nvidia or any of the domestic GPUs, along with CPUs and various I/O standards. Can be used for AI, high-performance computing, data analysis, the metaverse and such.

In mid-2023, to seize the major strategic opportunity for green, innovative development created by the "Eastern Data, Western Computing" (东数西算) and "digital economy" goals, and following the principle of "complementary strengths, mutual benefit, deepened cooperation and coordinated advancement", Ningxia, Shenzhen Tongtaiyi Information Technology Co., Ltd. and Haier Haichuang Hangxing Information Technology (Qingdao) Co., Ltd. jointly established Ningxia Yihai Information Technology Co., Ltd., with a planned total investment of 1 billion RMB. Taking the construction of an AI server production base in Ningxia as the starting point, the three parties will jointly build Ningxia's digital-economy industrial ecosystem around AI server R&D and manufacturing, intelligent-computing platform construction, adaptation of domestic GPUs, and development of large AI models. Once completed, the project is expected to generate an annual output value of 5 billion RMB.
This is the way China tends to think about things: invest 1B RMB and get 5B RMB of annual output value out of it. We will see if that actually happens, but this is the goal of the digital economy.
 

tphuang

Alibaba T-Head unveiling a new RISC-V-based AI platform


At present, in China's intelligent connected vehicle sector, large enterprise alliances are vigorously developing RISC-V-based solutions, and RISC-V is expected to emerge in in-vehicle inference, training data centers and future terminals. Facing a complex ecosystem, Yang Jing said T-Head believes software will be a key path to RISC-V adoption.
RISC-V is being pushed into intelligent connected vehicles, including in-vehicle inference and the training data centers behind AVs.

Yang Jing announced that T-Head has officially released its first self-developed RISC-V AI platform, the XuanTie multimedia AI hardware-software fusion platform. Through deep hardware-software co-design, the platform improves performance by more than 80% over classic solutions and supports running more than 170 mainstream AI models. The software and hardware stack, from processor IP to chip platform, compiler and toolchain, has been comprehensively upgraded, and deep adaptation has been achieved between RISC-V and mainstream operating systems such as Debian, Ubuntu, Android, OpenKylin, OpenHarmony, Anolis (龙蜥) and Coocaa WebOS, pushing RISC-V toward 2 GHz high-performance applications.
T-Head unveils its self-developed AI platform with hardware/software integration; it supports 170+ mainstream AI models and is deeply adapted to all the main OSs.

T-Head says that by updating HHB, its self-developed one-stop AI deployment toolkit, it achieved an average 88% performance improvement on typical networks over third-party tools, and added support for running more than 170 mainstream-framework AI models, including Transformer, TensorFlow and PyTorch models.
They've added support for Transformer, TensorFlow and PyTorch models, and achieved an average 88% improvement over 3rd-party tools on typical networks.
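
For context, the usual workflow for handing a PyTorch model to a deployment toolkit like HHB starts with an export step. The article doesn't show HHB's own interface, so this sketch only covers the generic PyTorch-to-ONNX half; the model choice and input shape are just examples:

# A common first step when handing a PyTorch model to a deployment toolkit
# such as HHB: export it to ONNX. (The exact HHB invocation is not given in
# the article, so only the generic export side is sketched here.)
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)             # example input shape

torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=13,
)
# The resulting .onnx file can then be compiled/quantized by the vendor
# toolchain for the target RISC-V board.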

The platform abstracts the compute of RISC-V's extensible new Vector and Matrix extensions and of third-party hardware, plugs in elastic compute libraries such as OpenCV and CSI-NN, and fuses multimedia processing flows into a business-oriented pipeline design, making it convenient for users to apply AI optimizations at different stages of the pipeline and speeding up applications such as detection, classification, tracking and super-resolution.
The platform abstracts compute from RISC-V's new Vector and Matrix extensions and from third-party hardware.
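
Roughly, the "pipeline" idea looks like the sketch below: multimedia stages chained together with inference slotted in as one stage. The OpenCV calls are real; the input file and the infer() step are placeholders for whatever CSI-NN or vendor runtime actually runs on the board:

# Rough sketch of the pipeline design described above. OpenCV is real; the
# infer() step is a placeholder for the backend (CSI-NN, vendor runtime, etc.)
# that would dispatch to Vector/Matrix-accelerated kernels on RISC-V hardware.
import cv2
import numpy as np

def preprocess(frame, size=(224, 224)):
    resized = cv2.resize(frame, size)
    return resized.astype(np.float32) / 255.0

def infer(tensor):
    # Placeholder result; a real deployment would call the accelerated runtime.
    return {"label": "person", "score": 0.9}

def postprocess(frame, result):
    cv2.putText(frame, f"{result['label']}:{result['score']:.2f}",
                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    return frame

cap = cv2.VideoCapture("input.mp4")   # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = infer(preprocess(frame))
    annotated = postprocess(frame, result)
    # each stage above is a natural point to add AI-specific optimization
cap.release()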

According to T-Head, based on the XuanTie multimedia AI hardware-software fusion platform, its XuanTie processors are getting a new round of upgrades: the C920 adds Vector 1.0 for the first time, and the C907 will implement the Matrix extension, executing GEMM computations up to 15x faster than the Vector-based approach.
Under the XuanTie multimedia AI software/hardware platform, the XuanTie CPUs are getting upgraded: the C920 adds Vector 1.0 for the first time and the C907 gets the Matrix extension, which can run GEMM computations 15x faster than the Vector approach.
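
A quick back-of-the-envelope on why a matrix extension helps GEMM so much; the lane counts below are illustrative, not actual XuanTie numbers:

# Why a matrix extension can beat a vector one on GEMM. Illustrative only.
import numpy as np

M = N = K = 512
macs = M * N * K                      # multiply-accumulates in C = A @ B

vector_macs_per_insn = 8              # e.g. one FMA over an 8-lane vector
matrix_macs_per_insn = 8 * 8          # e.g. one instruction updates an 8x8 tile

print("total MACs:", macs)
print("vector instructions ~", macs // vector_macs_per_insn)
print("matrix instructions ~", macs // matrix_macs_per_insn)
# A matrix unit retires lanes^2 MACs per instruction instead of lanes, which
# is where order-of-magnitude GEMM speedups come from.

# Reference result to check any optimized kernel against:
A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)
C = A @ B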
 

tokenanalyst



The research team of the Department of Electronics has made progress in key privacy and security technologies for distributed machine learning


Tsinghua News Network, August 23rd. Distributed machine learning coordinates data and resources spread across the nodes of a real system, training models by sharing intermediate variables (such as model parameters) between nodes. Because it is decentralized, it avoids, to some extent, the privacy risks of centralized data storage, and it is currently the mainstream approach to privacy-preserving machine learning. As research has deepened, however, distributed machine learning has run into many challenges. Current frameworks rely on the dispersion of data across nodes to protect privacy, yet the private raw data at each node is highly correlated with the variables shared during learning, and existing work has shown that private data can be successfully decoded from those shared variables. How to build a distributed learning framework that protects privacy throughout the whole process and at every link is therefore a fundamental frontier topic in data security.

The tension between data security and processing efficiency, however, is a perennial one. Strengthening privacy protection in distributed machine learning inevitably affects learning efficiency and quality, and the contradiction is especially acute when training models with very large numbers of parameters. On the one hand, as model scale grows and every link gains stronger privacy protection, the communication and computing overhead of the variables shared between nodes grows explosively, becoming a major bottleneck for large-scale model training. On the other hand, for complex raw data such as strongly correlated graph data scattered across nodes, privacy can be protected by "de-correlating" the decentralized data, but this discards much of the correlation information between the data and greatly reduces learning effectiveness. Existing methods assume each node holds independent, complete data and learn from its internal features, so they struggle to model strongly correlated data that spans nodes. Resolving the contradiction between the "endogenous strong correlation" of graph data and the "de-correlation" required for privacy in distributed learning, and improving learning on strongly correlated data, is a highly challenging problem.
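
To make the trade-off concrete, the standard recipe for protecting shared updates is to clip each node's update and add Gaussian noise before it leaves the node; the added noise is exactly what hurts accuracy and efficiency as privacy is dialed up. This is a generic sketch of that baseline, not the Tsinghua team's method:

# Generic illustration of the privacy/efficiency trade-off discussed above:
# clip a node's update and add Gaussian noise before sharing it (the standard
# differential-privacy recipe). NOT the Tsinghua team's method.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise                                     # what actually leaves the node

# More noise => stronger privacy but noisier aggregation, which is exactly
# the accuracy/efficiency tension described in the article.
local_update = np.random.randn(1000) * 0.01
shared = privatize_update(local_update, clip_norm=0.5, noise_multiplier=1.2)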

Targeting these frontier topics in privacy-preserving distributed machine learning, the research team at the Open Source Data Cognitive Innovation Center of the Department of Electronics, Tsinghua University, has carried out systematic work (the overall architecture is shown in Figure 1) and achieved staged progress. The team built a privacy-enhanced distributed machine learning model (Figure 2) that uses a collaborative learning framework based on differential-privacy knowledge transfer to protect privacy across the whole distributed learning process. Whereas directly applying differential privacy causes learning performance to fall off a cliff, this model provides effective, provable privacy guarantees for distributed learning while improving the performance of existing privacy-preserving methods by up to 84.2%. To address the model-scale bottleneck caused by the conflict between "privacy enhancement" and "learning efficiency", the team developed an efficient training method for privacy-enhanced distributed architectures (Figure 3): on top of the privacy-enhanced model, it adds a two-way knowledge-distillation technique based on the "disciple effect" and an adaptive model-knowledge compression method constrained by mutual learning, breaking through the knowledge-sharing efficiency bottleneck of privacy-enhanced machine learning. Experiments show the method can raise the training efficiency of complex models by 20x in large-scale privacy-enhanced distributed learning. Finally, to resolve the conflict between "strong correlation" and "de-correlation" in distributed learning on graph data, the team proposed a complex-data learning method for privacy-enhanced distributed architectures (Figure 4): an association-model learning approach with enhanced privacy protection "de-correlates" the strongly correlated graph data spread across nodes, while a data-expansion mechanism models the higher-order correlations of cross-node data. Experiments on real-world data show the framework can effectively mine the correlations among distributed graph data, reaching 98.2% of the correlation-modeling performance achievable without privacy constraints.
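
For reference, the textbook knowledge-distillation loss that these two-way/mutual-learning methods build on looks like this; it is a generic sketch, not the paper's actual formulation:

# The building block behind the "knowledge distillation" mentioned above: a
# student is trained to match a teacher's softened outputs. The paper's
# two-way / mutual-learning variants go further; this is only the textbook
# one-way form for orientation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)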

 

tonyget

This video talks about AI SaaS start-ups in China. At the end of the video, he mentions the compliance requirements, which impose high costs and a heavy burden on AI start-ups. This could discourage innovation in the long run. The comments below also echo this issue: the compliance procedure is expensive, lengthy and full of red tape.

I remember the same issue being discussed in the US and EU: putting too much regulation on AI could slow down the advancement of the technology and be harmful to innovation.


 

SanWenYu

China has built 14 national supercomputing centers so far. The 14th of them, in Wuzhen, Jiaxing, Zhejiang, recently passed its acceptance assessment for operation, about a year after phase one went into operation in May 2022. It can currently provide a total of 180P of compute.

These national supercomputing centers are part of the national strategic information infrastructure. Their establishment requires approval from the Ministry of Science and Technology. They are probably funded (in part or wholly) and supervised by the ministry, too.


Recently, the Zhejiang (Yangtze River Delta) new-generation full-function intelligent supercomputing center (the "Light of Wuzhen" supercomputing center) passed the expert review organized by the Ministry of Science and Technology, which approved the acceptance of the National Supercomputing Wuzhen Center. This means the "Light of Wuzhen" center has formally joined the ranks of national supercomputing centers, becoming Zhejiang's only and the country's 14th national supercomputing center, filling Zhejiang's gap in this area and becoming the largest scientific facility in northern Zhejiang.

As Zhejiang's first supercomputing center, the National Supercomputing Wuzhen Center began construction in November 2020; phase one has been completed and formally went into operation in May last year. It has now built a supercomputing center with a total compute capacity of 180P along with supporting projects, placing its compute capability among the global top ten.

Since its establishment, the center has followed the idea of "based in Tongxiang, radiating across the country", connecting the major supercomputing centers nationwide to build a supercomputing service network covering the whole country, serving major national strategic needs and regional economic and social development, and becoming a "super engine" for technological innovation and industrial development in Zhejiang and beyond.

Today the center's average utilization exceeds 40%, with 9,874 users in total, including Chinese Academy of Sciences institutes, provincial key laboratories, research institutes, universities and innovative enterprises, spanning fields such as physics, chemistry and materials, bioinformatics, and artificial intelligence. Last year the Light of Wuzhen industrial park exceeded its 1 billion RMB output-value target, genuinely turning supercomputing power into productivity.

Joining the "national team" is, for Tongxiang, a vivid example of capturing the dividends of the conference and cultivating new momentum for the digital economy, and an important step in advancing the "No. 1 development project" of digital-economy innovation and upgrading. The National Supercomputing Wuzhen Center will help Tongxiang seize the commanding heights of future technological innovation while creating more room for growth through joint participation in urban development.

As this national-level facility keeps "running", it will help Tongxiang undertake major national innovation projects and cultivate strategic emerging industries, actively integrate into the national innovation agenda, and play an important role in promoting provincial scientific and technological innovation, driving industrial transformation and upgrading, and leading and supporting high-quality economic development.

New development, new starting point. Going forward, the National Supercomputing Wuzhen Center will continue to strengthen its research and application capabilities in high-performance computing, promote the application of research results and their conversion into products, and work to improve its ability to serve national and regional innovation-driven development.

It is understood that national supercomputing centers are approved by the Ministry of Science and Technology and constitute national strategic information infrastructure and strategic platforms for scientific and technological innovation. Before the National Supercomputing Wuzhen Center passed acceptance, the ministry had approved 13 national supercomputing centers, located in Tianjin, Guangzhou, Changsha, Shenzhen, Jinan, Wuxi, Zhengzhou, Kunshan, Chengdu, Xi'an and elsewhere.
 

SanWenYu

Prof. Zhou Zhihua of Nanjing University has been elected President of the Board of Trustees of the International Joint Conference on Artificial Intelligence (IJCAI). Established in 1969, IJCAI is one of the most influential academic conferences on AI in the world. Prof. Zhou is the first mainland Chinese, and the second Chinese overall, to be elected IJCAI president of trustees.


On August 21, during the 32nd International Joint Conference on Artificial Intelligence (IJCAI), the IJCAI executive committee elected the new IJCAI Trustees and the chair of the Board of Trustees, with Professor Zhou Zhihua of Nanjing University elected as the next President of the IJCAI Board of Trustees. The news was announced at the IJCAI 2023 closing ceremony on August 25; Prof. Zhou will take office when the term of the current president, Christian Bessiere, ends.

As one of the longest-running, most authoritative and most influential international academic events in artificial intelligence, the IJCAI conference began in 1969 and was held every two years (annually since 2016). It aims to promote research, development and application across all aspects of AI and to foster international exchange and cooperation. Besides organizing the conference, IJCAI also runs the prestigious AI journal Artificial Intelligence (AIJ), building an academic platform for the most influential AI researchers.

Zhou Zhihua has long been devoted to research on machine learning theory and methods at the core of AI. He has published more than 200 papers in top international journals and conferences in the field, with over 80,000 Google Scholar citations, and has repeatedly been listed as an Elsevier Highly Cited Researcher. He has twice won the second prize of the State Natural Science Award, as well as the IEEE Computer Society Edward J. McCluskey Technical Achievement Award and the China Computer Federation "Wang Xuan Award". Several of his inventions have been transferred to companies such as Huawei with notable results, earning a China Patent Award.

In 2018, Zhou was tasked with founding the School of Artificial Intelligence at Nanjing University, where he established and published China's first undergraduate AI education and training curriculum. His book "Machine Learning" has been translated into English, Japanese and Korean, is used as a textbook by more than 500 universities at home and abroad, and is regarded by many AI students as a classic introductory text.
 