Artificial Intelligence thread

Wuhun

New Member
Registered Member
Evaluation on a number of benchmarks shows that InternLM achieves state-of-the-art performance in multiple areas, including knowledge understanding, reading comprehension, mathematics, and coding. With such well-rounded capabilities, InternLM delivers outstanding performance on comprehensive exams without resorting to external tools. On these benchmarks, InternLM not only significantly outperforms open-source models but also obtains superior performance compared to ChatGPT.


View attachment 114125
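For context on how claims like this are measured, below is a minimal sketch of the multiple-choice accuracy scoring used by exam-style benchmarks (MMLU, C-Eval, and the like). The questions and the predict() stub are hypothetical; this is not InternLM's actual evaluation harness:

```python
# Minimal sketch of multiple-choice exam scoring, the style of benchmark
# behind claims like the ones above. The questions and the predict() stub
# are hypothetical; this is not InternLM's eval harness.
def predict(question: str, choices: dict[str, str]) -> str:
    """Stand-in for a model call that returns one of 'A'..'D'."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in choices.items())
    return "A"  # a real harness would pick the choice the model scores highest

exam = [
    {"q": "2 + 2 = ?", "choices": {"A": "4", "B": "5", "C": "6", "D": "22"}, "answer": "A"},
    {"q": "Capital of France?", "choices": {"A": "Paris", "B": "Lyon", "C": "Nice", "D": "Lille"}, "answer": "A"},
]

correct = sum(predict(item["q"], item["choices"]) == item["answer"] for item in exam)
print(f"accuracy: {correct / len(exam):.0%}")
```

Reported exam scores are just this accuracy averaged over thousands of such questions per subject, which is why "without resorting to external tools" matters for the math and coding subsets.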
 


SanWenYu

Captain
Registered Member
The construction of the China Computing Net, or C2NET, led by the Pengcheng Lab, has now linked more than 20 heterogeneous computing clusters totaling 3E (exaFLOPS) of computing power, reaching the milestone set for 2023 months ahead of schedule.


Guangdong Province Continues to Support Independent R&D of AI Computing Power; "China Computing Net" Has Aggregated 3E of Computing Power

Under the guidance of the relevant national ministries, and with the forward-looking planning and sustained support of the Guangdong provincial and Shenzhen municipal Party committees and governments, the "China Computing NET (C²NET)", developed and built under the leadership of the Pengcheng Lab, has made phased progress. At the first China Computing Net Conference, held on May 29 this year, the Pengcheng Lab officially announced that the China Computing Net has connected more than 20 large intelligent-computing centers, supercomputing centers, and data centers, aggregating 3E of heterogeneous computing power and completing its 2023 targets ahead of schedule. It has achieved preliminary coordinated scheduling and efficient computation across the nation's large-scale computing resources, building the strongest computing foundation for the digital economy.

To cope with the explosive growth in the computing power required to train and run AI models, in recent years the provincial Party committee and government have, through major project deployments, actively supported the Pengcheng Lab in tackling key core technologies and building large-scale AI research facilities. Together with partner institutions, the Pengcheng Lab built "Pengcheng Cloud Brain II", China's first exascale intelligent-computing platform, on a fully domestic software and hardware stack; it has taken first place on the global IO500 overall list six times in a row and topped the AIPerf500 ranking for three consecutive editions. The demonstration effect of Pengcheng Cloud Brain II has accelerated the industrial ecosystem around domestically developed chips and sped up the maturation and adoption of home-grown products, making China's mid-range 16-bit-precision AI chips competitive in the market.

As early as 2019, the Pengcheng Lab launched the R&D and construction of the China Computing Net with Pengcheng Cloud Brain as its initial hub node, and formally brought its intelligent-computing section online in June 2022. The China Computing Net has now connected more than 20 heterogeneous computing clusters, aggregating 3E of computing power, of which more than 1.8E is domestically developed AI computing power, preliminarily meeting the rapidly growing ultra-large-scale computing demands of the digital economy and intelligent industries of the Guangdong-Hong Kong-Macao Greater Bay Area.

The China Computing Net, developed and built under the leadership of the Pengcheng Lab, will establish China's independently innovated computing-network technologies and standards, enable interconnection, coordinated scheduling, and efficient computation across heterogeneous computing resources such as the nation's large intelligent-computing centers, supercomputing centers, and data centers, and form a national-level computing infrastructure for the digital economy, providing strong support for transforming the computing-power supply model of the digital-economy era and for major national strategies such as "Digital China" and "Eastern Data, Western Computing".
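For a sense of what "coordinated scheduling across heterogeneous clusters" means in practice, here is a minimal greedy-placement sketch. The cluster names, capacities, and placement policy are all hypothetical; C²NET's actual scheduler is not publicly documented:

```python
# Minimal sketch of coordinated job scheduling across heterogeneous clusters.
# All cluster names, capacities, and the greedy policy are hypothetical;
# the real C2NET scheduler is not publicly documented.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    accel: str          # accelerator type, e.g. "ascend910" or "gpu"
    free_pflops: float  # currently unallocated compute (PFLOPS)

@dataclass
class Job:
    name: str
    accel: str          # required accelerator type
    pflops: float       # requested compute

def schedule(jobs, clusters):
    """Greedy placement: first compatible cluster with enough free capacity."""
    placements = {}
    for job in sorted(jobs, key=lambda j: -j.pflops):  # biggest jobs first
        for c in clusters:
            if c.accel == job.accel and c.free_pflops >= job.pflops:
                c.free_pflops -= job.pflops
                placements[job.name] = c.name
                break
        else:
            placements[job.name] = None  # no capacity anywhere: queue it
    return placements

clusters = [Cluster("cloudbrain-2", "ascend910", 800.0),
            Cluster("regional-a", "ascend910", 120.0),
            Cluster("regional-b", "gpu", 200.0)]
jobs = [Job("llm-pretrain", "ascend910", 500.0),
        Job("cv-finetune", "gpu", 50.0),
        Job("asr-train", "ascend910", 300.0)]
print(schedule(jobs, clusters))
```

The hard parts the press release alludes to (cross-cluster interconnect, data movement, and efficient computation once a job is split across sites) are exactly what a toy single-site policy like this leaves out.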
 

Dark Father

Junior Member
Registered Member

Microsoft to move top AI experts from China to new lab in Canada

Microsoft is moving some of its best artificial intelligence researchers from China to Canada in a move that threatens to gut an essential training ground for the Asian country’s tech talent. The Beijing-based Microsoft Research Asia (MSRA) has begun seeking visas to move top AI experts from China’s capital to its institute in Vancouver, said four people with knowledge of the plans.
These people said the move could affect 20 to 40 staff. A person close to Microsoft said fewer Chinese staff will move to Canada this year, where the US tech giant is creating a new lab staffed by experts from around the world.

Non-paywall source: (link requires forum login)




In the US and Taiwan there was, and still is, great tension over the PRC recruiting top minds to work in China, even dragging them into court and going after them with lawfare. Why should we allow them to poach our best minds to work for their benefit?
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member

[Attachments: HuaweiMindSpore_DistributedTraining-Jun.png, HuaweiMindSpore_LargeModelTraining-Jun.png]
A couple of interesting slides from the Huawei Ascend team on the usage of AI clusters. The first shows how parallel/distributed training can reduce latency and improve efficiency.

The second shows that using super-large clusters can cut training time from months to weeks, making them practical for training 100B-parameter-class large language models.
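To make the months-to-weeks claim concrete, here is a back-of-the-envelope sketch. None of these numbers come from the slides; the token count, per-chip throughput, and utilization are all assumptions of mine:

```python
# Back-of-the-envelope: how cluster size shrinks training wall-clock time.
# All numbers below are illustrative assumptions, not figures from the slides.
params = 100e9          # 100B-parameter model
tokens = 2e12           # training tokens (assumed)
flops_needed = 6 * params * tokens   # common ~6*N*D estimate for dense LLMs
chip_flops = 300e12     # assumed sustained FLOPS per accelerator
util = 0.40             # assumed end-to-end cluster utilization

for n_chips in (512, 2048, 8192):
    seconds = flops_needed / (n_chips * chip_flops * util)
    print(f"{n_chips:5d} chips -> {seconds / 86400:6.1f} days")
```

Under these assumptions, 512 chips need roughly 7.5 months while 8,192 chips finish in about two weeks. Real scaling is sub-linear, since communication overhead grows with cluster size, which is exactly what the parallelization techniques in the first slide are meant to contain.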
 

HighGround

Senior Member
Registered Member
Dylan Patel's write-up on the MI300 is out (link requires forum login).

Let me know if you folks want me to paste the full text here. Of particular interest to me was this bit:

One of the markets we believe is very promising for AMD is China, as they can derate their IO and glide under the rules very easily. The Chinese hyperscalers are very good with their own software stacks, especially Baidu, so that is a great market AMD is trying to penetrate. It helps that leading AI researchers there get paid an order of magnitude less than the AI folks in the Bay Area. We doubt AMD will talk much about the engagements with Alibaba or Tencent publicly, though.

AMD will ship less than $1B worth of MI300 this year. Meanwhile, Nvidia will be ramping to beyond $10B quarterly datacenter GPU revenue in the near term. The aspirational goal for AMD is 100,000 GPUs a quarter next year. That will be tough to hit.

AMD is highly limited by using such a high bin of HBM. Furthermore, Nvidia is going to be competing to order all of the top bin for the H100 refresh with higher-speed memory later this year. We believe AMD's MI300 ramp is tightly capped between supply difficulties with the advanced packaging, top-bin HBM, and Nvidia/Broadcom trying to get as much of the CoWoS supply as they can.

Thoughts, folks?
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Dylan Patel's write-up on the MI300 is out (link requires forum login).

Let me know if you folks want me to paste the full text here. Of particular interest to me was this bit:



Thoughts, folks?
I think Dylan is knowledgeable enough, but there is not going to be much interest in non-Nvidia foreign GPUs in China. There are so many Chinese GPU makers; why go for a second-rate American GPU?
 

HighGround

Senior Member
Registered Member
I think Dylan is knowledgeable enough, but there is not going to be much interest in non-Nvidia foreign GPUs in China. There are so many Chinese GPU makers; why go for a second-rate American GPU?

I wouldn't necessarily call it second-rate. It's actually better than Hopper on some performance metrics and on memory bandwidth, which makes sense, since it only starts sampling in the next few months, significantly later than Hopper. By the time the MI300 is shipping, Hopper will have been out for a year.

One of the main issues with the MI300 is the lack of a competitive software stack.

But Dylan's point is that Chinese firms are actually good at providing their own software solutions, rather than relying on Nvidia's, which might make MI300 an attractive product for them.

Another major issue is "inferior" networking.

Compare this to Nvidia's H100, where there is 900GB/s from every GPU to every other GPU. This is because the H100s don't connect directly, but instead connect through the NVSwitch.

The worrying point is that AMD's GPU-to-GPU IO is limited and non-uniform. The GPUs connect directly to each other, whereas Nvidia has another chip, the NVSwitch, which provides full IO speed from any GPU to any other GPU. Nvidia's solution lets most developers treat an 8-GPU server as one massive GPU for AI. How these topology complexities affect software is still opaque for the MI300.

It is also because Nvidia utilizes Ethernet-style SerDes, which are faster, denser, and longer-reach than the PCIe-style SerDes that AMD uses. We believe this is an engineering tradeoff mistake for AMD, but that could be argued the other way. AMD's solution enables lower latency, but the bandwidth offered is significantly worse. Furthermore, Nvidia can do out-of-node NVLink, while AMD is limited to going over much slower Ethernet or InfiniBand.

What AMD gains is supreme flexibility with its IP. Because each AID has 36 lanes of PCIe 5.0 SerDes, they can be configured flexibly for xGMI (GPU to GPU), PCIe, and other protocols. How AMD deploys these is incredibly important for closing the gap with Nvidia's H100.

On paper, with all of these lanes, AMD can offer a total of 1152GB/s of IO.
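As a quick sanity check on that 1152GB/s figure, here is a back-of-the-envelope sketch. The interpretation (4 AIDs × 36 lanes of PCIe 5.0-class SerDes at 32GT/s, counted bidirectionally, and split evenly across 7 peers in an 8-GPU node) is my own reading of the quoted text, not a confirmed breakdown:

```python
# Rough arithmetic on the quoted IO figures; the topology assumptions
# (8-GPU node, IO split evenly across 7 direct peer links) are simplifications.
lanes = 4 * 36            # 4 AIDs x 36 lanes of PCIe 5.0-class SerDes (assumed)
gt_per_lane = 32          # PCIe 5.0 signaling rate, GT/s per lane
gbytes_per_lane = gt_per_lane / 8  # ~4 GB/s per lane per direction

total_io = lanes * gbytes_per_lane * 2   # both directions
print(f"total IO: {total_io:.0f} GB/s")  # ~1152 GB/s, matching the article

# In an 8-GPU node wired point-to-point, that total is divided across 7 peers:
per_peer = total_io / 7
print(f"per-peer link: {per_peer:.0f} GB/s vs 900 GB/s to any peer via NVSwitch")
```

Even before reserving any lanes for host PCIe, a point-to-point split leaves each peer link at roughly 165GB/s, versus the full 900GB/s any H100 gets to any single peer through the NVSwitch. That is the non-uniformity Dylan is flagging.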

So Idk, I guess you guys would have to tell me.
 