Chinese semiconductor industry

Status
Not open for further replies.

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Btw, SMIC does get 20% of its revenue from American clients. Clearly, SMIC thinks it can get more money from Huawei than from Broadcom, and I agree with them. Huawei was one of TSMC's largest clients. People may dismiss TSMC losing Huawei as a customer; let's see how they feel when HiSilicon makes a comeback.
 

Franklin

Captain
The problem facing domestic Chinese AI chips is the ecosystem: software compatibility and ease of use. The overall cost ends up high compared to Nvidia chips. That's why, even with Nvidia chips pushed to sky-high prices, Chinese companies are still rushing to buy them.



In fact, in addition to the gap in hardware performance, the software ecosystem is also a shortcoming of domestic AI chip manufacturers.

A chip needs to be adapted across multiple layers: the hardware system, the toolchain, the compiler, and so on. Without strong adaptation, the same chip might deliver 90% of its rated compute in one workload but only 80% in another.
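The utilization gap described above can be made concrete with a bit of arithmetic (the figures below are illustrative, not measured benchmarks):

```python
# Illustrative sketch: effective compute depends on how well the software
# stack is adapted to each workload, not just on the chip's peak rating.
PEAK_TFLOPS = 100.0  # hypothetical rated (theoretical) throughput

def effective_tflops(peak, utilization):
    """Achieved throughput at a given software-adaptation utilization."""
    return peak * utilization

# The same chip, two workloads with different adaptation quality:
well_adapted = effective_tflops(PEAK_TFLOPS, 0.90)    # 90.0 TFLOPS
poorly_adapted = effective_tflops(PEAK_TFLOPS, 0.80)  # 80.0 TFLOPS
```

The spread between those two numbers is pure software: same silicon, different effective compute.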

As mentioned above, Nvidia has a clear advantage here. As early as 2006, Nvidia launched CUDA, a parallel computing platform. The CUDA framework bundles much of the code needed to invoke GPU compute, so engineers can use it directly instead of writing everything from scratch. Developers use CUDA to run AI training and inference more efficiently and to extract more of the GPU's compute. Today CUDA has become AI infrastructure: mainstream AI frameworks, libraries, and tools are all built on top of it.

Without this programming layer, it becomes vastly harder for software engineers to realize the hardware's value.

GPUs and AI chips other than Nvidia's need to supply their own adaptation software to plug into the CUDA-based stack. One industry insider recounted dealing with a non-Nvidia GPU vendor: although the vendor quoted lower prices for chips and services and promised more responsive support, the overall training and development cost of using its GPUs would still have been higher than Nvidia's, with added uncertainty in both results and delivery time.
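The adaptation burden the insider describes can be sketched as a backend-dispatch pattern: a framework routes every operator to a backend, and a new vendor must implement the full operator set before existing models run at all. All backend and operator names below are hypothetical; real stacks are far larger.

```python
# Minimal sketch of why each non-CUDA chip needs its own adaptation layer.
# A framework's operator set that any backend must cover (hypothetical):
REQUIRED_OPS = {"matmul", "softmax", "conv2d", "layernorm"}

class Backend:
    def __init__(self, name, implemented_ops):
        self.name = name
        self.implemented_ops = set(implemented_ops)

    def coverage(self):
        """Fraction of the framework's operator set this backend supports."""
        return len(self.implemented_ops & REQUIRED_OPS) / len(REQUIRED_OPS)

# The incumbent ships complete coverage via its mature libraries;
# a new entrant typically starts with partial coverage and must fill the gap.
cuda_backend = Backend("cuda", REQUIRED_OPS)
new_npu_backend = Backend("domestic_npu", {"matmul", "conv2d"})
```

Until coverage reaches 1.0, customers either rewrite models around the gaps or wait, which is exactly the hidden cost the insider is pointing at.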

Although Nvidia GPUs are expensive, in practice they end up being the cheapest to use. For companies intent on seizing the large-model opportunity, money is usually not the constraint; time is the scarcer resource, and everyone must secure enough advanced compute as fast as possible to lock in a first-mover advantage.

So for domestic chip suppliers, even if stacking more chips can yield a product with comparable aggregate compute, software adaptation and compatibility make it hard to win customers over. On top of that, from a server-operations perspective, the extra motherboards, electricity, operating expenses, power draw, and heat dissipation all drive up a data center's running costs considerably.
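That total-cost argument can be put into rough numbers. Every figure below is invented for illustration; the point is only that a lower chip price can be swamped by adaptation and operating costs.

```python
# Illustrative total-cost-of-ownership comparison (all numbers made up).
def total_cost(hardware, software_adaptation, annual_ops, years):
    """Hardware price plus one-off adaptation work plus running costs."""
    return hardware + software_adaptation + annual_ops * years

# Incumbent GPU: expensive hardware, mature software, efficient to run.
incumbent = total_cost(hardware=10_000_000, software_adaptation=500_000,
                       annual_ops=1_000_000, years=3)

# Alternative chip: cheaper per unit, but more chips are stacked to match
# compute, adaptation is costly, and power/cooling overhead is higher.
alternative = total_cost(hardware=7_000_000, software_adaptation=4_000_000,
                         annual_ops=1_800_000, years=3)

print(incumbent, alternative)  # 13500000 16400000
```

Under these assumptions the "cheaper" chip costs about 20% more over three years, which is the article's claim in miniature.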

Because compute resources usually have to be presented as a pool, data centers generally prefer to deploy a single chip model, or at least chips from a single company, to keep pooling manageable.

Turning a chip's theoretical compute into effective compute takes intricate hardware-software coordination. For customers, putting domestic AI chips to work is not easy: swapping the AI chips behind a cloud service carries real migration cost and risk. Unless the new product has a performance edge, or solves a problem in some dimension that nobody else can, customers have little appetite to switch.
Isn't the problem that Chinese companies are going to be, or already are being, cut off from Nvidia chips, so they will have to use domestic chips in the future?
 

paiemon

Junior Member
Registered Member
Isn't the problem that Chinese companies are going to be, or already are being, cut off from Nvidia chips, so they will have to use domestic chips in the future?
It's a convenience thing: if and when they switch more fully to domestic chips, they will have to rewrite all the libraries, extensions, etc. that make Nvidia chips easy to adopt. That's not to say they can't or won't do the legwork, but it will take time. Nvidia has had well over a decade to build up the CUDA ecosystem; you can't expect Chinese vendors to reproduce the same tooling for running SW/ML models on domestic chips with a snap of the fingers, even if everyone switched over today. The current approach is the most practical: keep buying and using Nvidia as long as it's available, since work still needs to get done, while taking a parallel track to build up a domestic ecosystem, whether China-specific or open-sourced.
 

antiterror13

Brigadier

Mediatek is still not allowed to export anything to Huawei. Remember, the restrictions are designed to help American companies like Qualcomm while restricting Huawei to just 4G RF parts.

The big jump here for Huawei isn't the 7nm Kirin SoC but rather the RF chips. Those only became available in the past couple of months, as I've documented here.

When Huawei unveils its 5G phones, it will be the first time anyone has broken into the US-controlled 5G mobile RF market.

Very interesting that the US controlled the 5G mobile RF market almost entirely. Is it that hard to master?
On the other hand, Huawei and ZTE dominate other 5G technologies (outside mobile RF). Why is that?
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Very interesting that the US controlled the 5G mobile RF market almost entirely. Is it that hard to master?
On the other hand, Huawei and ZTE dominate other 5G technologies (outside mobile RF). Why is that?
Who told you they dominate the non-RF side of 5G mobile? They may dominate the general technology and the base stations, but for the RF chips they have always had to rely on the supply chain.

If you read what I posted in this thread (please read it), you will see that Guobo Electronics is the main provider of 5G base-station RF chips in China.
 

olalavn

Senior Member
Registered Member
Jingjia Micro GPU industry project settles in Wuxi High-tech Zone; planned output value to exceed 5 billion yuan

On July 30, Zeng Wanhui, chairman and president of Changsha Jingjia Microelectronics Co., Ltd., led a delegation to the high-tech zone, where the Jingjia Micro Wuxi GPU industry project was officially signed.


 

sunnymaxi

Major
Registered Member
LOOL


Intel's new Chinese chip innovation centre is a collaboration with a Shenzhen district, deepening ties amid US scrutiny

  • The US chip giant partnered with the Nanshan district government and local tech firms on a centre focusing on AI, chip applications and edge computing
  • Intel has been seeking to maintain business in the world’s second-largest economy, with CEO Patrick Gelsinger travelling there twice in three months


US chip giant Intel is increasing its business ties in China with a new innovation hub in Shenzhen meant to help domestic companies, even as Washington puts increasing pressure on semiconductor firms to reduce trade with the country.

The Santa Clara-based chip giant and the Nanshan district government in Shenzhen, a technology hub in southern China, launched the Intel Greater Bay Area Innovation Centre on Saturday, according to a post published to the centre's official account. The centre will focus on artificial intelligence (AI), chip applications and edge computing, among other technologies, the statement said.

The district government aims to use the partnership to grow into a global “innovation highland” through a combination of industrial policy, Intel’s product and technology ecosystem, and innovation from local partners, according to the statement.
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
LOOL

Intel's new Chinese chip innovation centre is a collaboration with a Shenzhen district, deepening ties amid US scrutiny
Generally, these are good things. Whatever you think of Intel, working with or for Intel should help train more engineers in China in AI and edge-computing chip design and integration.
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Please, Log in or Register to view URLs content!

So the question is: how confident do the people at Xinhua Wang need to be to post this?

I am not saying this proves anything. I am not even sure they got the right model number. But what is this about? Did something new happen? Are they announcing this unofficially ahead of the upcoming Dutch ban?

Did something new happen with SMIC testing with them?
 
Last edited:

ansy1968

Brigadier
Registered Member
generally, these are good things. Whatever you think of Intel, working with or for Intel should help train more engineers in China in the field of AI & edge computing chip design and integration
Sir, because they have a brain and are not stupid, China's semiconductor industry will grow and may dwarf that of the US. Since Intel is a major political contributor and an American poster boy, it can navigate this easily.
 
Last edited: