Chinese semiconductor industry


tonyget

Senior Member
Registered Member

Neither Cambricon nor Baidu has formally responded to the details of the cooperation. One possibility is that the Siyuan 590 is being deployed on a small scale for testing in "Wenxin Yiyan" (ERNIE Bot), for example in a small subset of inference scenarios; the likelihood of it being used for large-model training is probably low.

In fact, provided performance meets requirements, domestic chips have both room and a need to grow, whether for the cost advantage they can offer or for the strategic value of guarding against supply cutoffs and "chokepoints". In a conversation with an industry insider, I asked: if the stock of Nvidia A100s runs out, would domestic companies rather buy "cut-down" chips than domestic ones? The reply: "If domestic compute chips could reach 60-70% of Nvidia's overall price-performance, there would be willingness to buy; unfortunately they are not there yet."

As for the compute chips needed for large-model training, the industry view is that "only Nvidia's A100 and A800 can really run it; domestic GPUs can only handle small- and medium-scale training and inference."
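Just to make that 60-70% bar concrete, here is a back-of-envelope sketch of what "overall price-performance relative to Nvidia" would mean; the throughput and price figures below are placeholder assumptions, not real quotes or benchmark results.

```python
# Back-of-envelope sketch of the "60-70% of Nvidia's price-performance" threshold
# quoted above. All figures are illustrative assumptions, not real prices or benchmarks.

def price_performance(throughput_tflops: float, price_usd: float) -> float:
    """Delivered throughput per dollar spent on the accelerator."""
    return throughput_tflops / price_usd

# Hypothetical numbers purely for illustration.
a100 = price_performance(throughput_tflops=312, price_usd=10_000)      # A100 FP16 peak, assumed price
domestic = price_performance(throughput_tflops=150, price_usd=9_000)   # assumed domestic part

ratio = domestic / a100
print(f"Domestic chip delivers {ratio:.0%} of the A100's price-performance")
print("Clears the 60-70% bar" if ratio >= 0.6 else "Below the 60-70% bar")
```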
 

zbb

Junior Member
Registered Member
I agree. The odd thing is that this has long been one of my suspicions: China would never have been able to do any of this if it hadn't been restricted. Even a 1% loss in efficiency would have meant manufacturers never adopting Chinese tools.

So without this insane trade war, Chinese manufacturers would likely have gone on happily never using Chinese equipment. Which raises the question: was this planned by China?
It's so very similar to what happened with telecom equipment 30 years ago. For many years in the 1980s, 100% of Chinese telecom equipment orders went to American firms. It was only after the US restricted telecom equipment exports to China in the early 1990s that firms like Huawei and ZTE were able to take off. Those once dominant North American telecom equipment makers like Lucent, Nortel Networks, and Motorola would all cease to exist by the early 2010s due in no small part to competition from Huawei and ZTE.
 

antiterror13

Brigadier

720,000 pieces! The SiC project is about to go into production.​


Recently, a domestic silicon carbide project has made new progress. It is expected to enter production in the fourth quarter of this year and, once fully ramped, will reach an annual capacity of 720,000 power chips.

On April 25, the Nanhu District People's Government issued a document stating that in the first quarter of this year, investment in manufacturing and in technological transformation in the district got off to a "good start", reaching 2.518 billion yuan and 1.422 billion yuan, up 55% and 72.8% respectively.

According to reports, the project, built by STAR in Nanhu District, Jiaxing City, Zhejiang Province, covers the R&D and industrialization of high-voltage specialty-process power chips and SiC chips. Two of its fabs have begun commissioning equipment and the others are awaiting acceptance, which is expected to be completed in the fourth quarter of this year; the project will then enter production and, once fully ramped, reach an annual capacity of 720,000 power chips.


Is it 720,000 wafers or chips? 720,000 chips is not many.
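For a rough sense of scale on the wafers-vs-chips question, here is a hypothetical gross-die-per-wafer calculation; the 150 mm wafer size and ~25 mm² die area are illustrative assumptions for a SiC power device, not figures from the article.

```python
# Rough sense-check of the 720,000 figure above: if it means chips (dies) rather
# than wafers, how many wafers per year would that be? Die size and wafer diameter
# are assumptions for illustration only.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic gross-die-per-wafer approximation (ignores defects and yield)."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# Assume 150 mm (6-inch) SiC wafers and a ~25 mm^2 power die.
gross_dies = dies_per_wafer(150.0, 25.0)
wafers_per_year = 720_000 / gross_dies

print(f"~{gross_dies} dies per wafer -> ~{wafers_per_year:.0f} wafers/year")
# On the order of a thousand wafers a year would indeed be a small line,
# which is why the wafers-vs-chips distinction matters.
```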
 
D

Deleted member 24525

Guest

As for the compute chips needed for large-model training, the industry view is that "only Nvidia's A100 and A800 can really run it; domestic GPUs can only handle small- and medium-scale training and inference."
This seems inconsistent with the data tphuang posted earlier. The Ascend 910's performance is equal to the A100's, in terms of FLOPS at least, and we have seen them used in many city-level AI data center projects. Not saying it's wrong; I'm just wondering why industry insiders would have that perception.
 

antiterror13

Brigadier
I think he was just asking whether China has caught up in AI. That's a reasonable question.

Recently, Tencent came out with their HCC AI framework, which promises to be much faster in training than the previous incarnation of their AI framework.

It makes clear that GPUs are only part of the equation. The communication bandwidth between different chips and clusters really matters, the DRAM speed and size really matter, the I/O speed to storage really matters, the CPU processing really matters, the chip-to-chip data rate matters, the software-hardware layer really matters, and the AI framework itself matters.
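A minimal sketch of that point, assuming each training step is bounded by either compute time or inter-chip communication time; the peak TFLOPS, utilization, gradient size, and link speeds below are all illustrative assumptions, not specs of any real system.

```python
# Sketch: per-step time in distributed training is bounded by compute AND by
# inter-chip communication, so peak FLOPS alone does not decide the winner.
# All numbers are illustrative assumptions.

def step_time_s(flops_per_step: float, peak_tflops: float, mfu: float,
                grad_bytes: float, link_gbps: float) -> float:
    compute_s = flops_per_step / (peak_tflops * 1e12 * mfu)   # time doing math
    comm_s = grad_bytes * 8 / (link_gbps * 1e9)               # time moving gradients
    return max(compute_s, comm_s)  # assume perfect overlap; the slower side dominates

# Two hypothetical accelerators with equal peak FLOPS but different interconnects.
fast_link = step_time_s(2e15, peak_tflops=320, mfu=0.4, grad_bytes=5e8, link_gbps=400)
slow_link = step_time_s(2e15, peak_tflops=320, mfu=0.4, grad_bytes=5e8, link_gbps=100)

print(f"step time, fast interconnect: {fast_link*1000:.1f} ms")  # compute-bound
print(f"step time, slow interconnect: {slow_link*1000:.1f} ms")  # communication-bound
```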

For a long time, I didn't think much about Huawei for AI because I thought they were sanctioned and would only have enough chips for their own usage. That turned out to be false. China built so many data centers as part of its smart city plans, yet almost all of them picked Huawei/Pangu, with only a couple picking Cambricon. Almost nobody picked the A100. It seems the A100 is more common with other big Chinese cloud providers and Inspur. Why is that?

Well, it would seem to me that Huawei has the most advanced full stack in terms of AI. Their Pangu platform, with the Ascend and Kunpeng chipsets along with their communication technology and software prowess, is what's leading the industry.

Raw performance of the Ascend 910 is about 1/3 the computational power of the BR100 and at about the same level as the A100. But in terms of actual AI applications and training, the availability of Ascend GPUs and their software-hardware integration makes them the best option. The constant theme on Chinese social media is that Ascend is the best Chinese GPU; when people say that, they are talking holistically. Biren chips still need to improve software-hardware integration and integration with cloud service providers to reach their potential. They also need to stack up more GPUs. Having everything in-house allows Huawei to compete against the A100 with CUDA. Clearly, Chinese smart city planners think so.

So as we go forward, it's clear that the market has spoken. Huawei is the current AI leader in China, not Baidu as I previously thought. And it appears that Huawei can get its chips produced by SMIC. It won't be hurt that much if America sanctions its cloud business. It will also be providing its AI smart city solutions to Arab countries. This is a major growing business for Huawei.

So from that perspective, I understand why they say the worst is over. Things should get better for them.

And as a supporter of China's AI industry, I would say it's a great thing they can now expand as much as they want. And as SMIC improves its 7nm process, Huawei will also get better chips produced. At this time, I would imagine SMIC has many limitations in what it is capable of producing. It's good to have a captive customer with money and design expertise to work with you.

Btw, when the Ascend 910 first came out, it had a spec of 256 TFLOPS at FP16. Now, HiSilicon's website shows 320. So it looks like HiSilicon's design managed to overcome SMIC's deficiencies and made the chip even better than the original. I guess that's to be expected after 3 years.

Interesting that it is still called the Ascend 910 even though performance has increased by approximately 25% within the same 310 W power envelope.
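The arithmetic behind that ~25% figure, using only the 256 and 320 TFLOPS FP16 specs and the 310 W envelope mentioned above:

```python
# FP16 spec going from 256 to 320 TFLOPS within the same quoted 310 W envelope.
old_tflops, new_tflops, power_w = 256.0, 320.0, 310.0

uplift = new_tflops / old_tflops - 1
print(f"Throughput uplift: {uplift:.0%}")                                   # 25%
print(f"Efficiency: {old_tflops/power_w:.2f} -> {new_tflops/power_w:.2f} TFLOPS/W")
```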


I am really surprised (and very happy) that SMIC can make the Ascend 910, amazing ... is it really on 7nm?

Wondering too whether SMIC can make enough Ascend 910s just for the Chinese market.

Even if it's only comparable with the A100, I'd bet the Ascend 910 is significantly cheaper than the A100 (~$10K). Do you know the price of the Ascend 910?
:rolleyes:
 

Eventine

Junior Member
Registered Member
This seems inconsistent with the data tphuang posted earlier. The Ascend 910's performance is equal to the A100's, in terms of FLOPS at least, and we have seen them used in many city-level AI data center projects. Not saying it's wrong; I'm just wondering why industry insiders would have that perception.
FLOPS is not the only measure of GPU performance. Benchmarks are needed to do an accurate comparison across different compute domains. NVIDIA’s advantage over, say, AMD is far more than just FLOPS.
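One way to put that concretely: compare achieved throughput on a real workload against the paper peak. Below is a minimal sketch of that utilization calculation (often called MFU); the token throughput and FLOPs-per-token numbers are purely illustrative assumptions, not measured benchmarks.

```python
# Peak FLOPS is a ceiling, not a benchmark. Effective utilization on a real
# workload is what separates accelerators with similar paper specs.

def mfu(tokens_per_s: float, flops_per_token: float, peak_tflops: float) -> float:
    """Model FLOPs utilization: achieved FLOPs divided by the paper peak."""
    return tokens_per_s * flops_per_token / (peak_tflops * 1e12)

# Two hypothetical chips with similar peak FLOPS but different measured throughput.
# Assume ~6*N FLOPs per training token for an N = 10B-parameter model.
chip_a = mfu(tokens_per_s=3200, flops_per_token=6e10, peak_tflops=312)
chip_b = mfu(tokens_per_s=1800, flops_per_token=6e10, peak_tflops=320)

print(f"chip A utilization: {chip_a:.0%}")   # ~62%
print(f"chip B utilization: {chip_b:.0%}")   # ~34%
```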
 