Chinese semiconductor thread II

tokenanalyst

Brigadier
Registered Member

Zhilun New Materials Technology (Xi'an) Co., Ltd. Binzhou production base officially put into production


The Binzhou production base of Zhilun New Materials Technology (Xi'an) Co., Ltd. officially began production, with the opening ceremony held in Binzhou City, Shaanxi Province. The milestone marks a solid step by Zhilun New Materials toward localizing new semiconductor materials and injects fresh momentum into Binzhou's economic development.

Zhilun New Materials Technology (Xi'an) Co., Ltd. was established in May 2022. It is a high-tech enterprise covering the R&D, production, and sales of ultra-high-purity, low-chlorine electronic-grade epoxy resin, and a member of the Electronic Chemical New Materials Industry Alliance.
The company offers a diverse, high-quality product line, providing standard high-purity electronic-grade epoxy materials, customized epoxy resins, and integrated special-materials solutions to customers in the semiconductor, composite materials, new energy, and other industries.

The company has filed 23 invention patent applications (including 2 in the United States, 1 in Europe, 1 in Japan, and 1 in South Korea), participated in drafting 1 industry standard, and led the drafting of 2 group standards. It is also actively pursuing EU REACH certification.

 

OptimusLion

New Member
Registered Member
[Huawei publishes patent for "semiconductor structure of a gate-all-around nanosheet device"] According to the National Intellectual Property Administration, a patent application by Huawei Technologies Co., Ltd. entitled "semiconductor structure of a gate-all-around nanosheet device" was published today, with an application date of April 2022. According to the abstract, the patent provides a semiconductor structure (100) and a manufacturing method for a GAA nanosheet device.

The semiconductor structure (100) includes a substrate (101) and a gate stack on the substrate (101), where the gate stack has a plurality of gate regions (103) and silicon-based channel regions (102) arranged alternately. Because each gate region (103) is shorter than the adjacent channel regions (102), cavities (104) are formed on a first side of the gate stack, each cavity (104) sitting next to a gate region (103) and between the two channel regions (102) adjacent to that gate region. A first silicon-based contact region (105) extends within a distance from the first side of the gate stack, and a silicon-based filling material (106) is arranged between the first contact region (105) and the first side of the gate stack, located in each cavity (104).

 

Enestori

New Member
Registered Member
So how bad is the TSMC ban on advanced nodes going to be for Chinese AI development? EUV is many years away, and I don't think SMIC can scale up 7nm/5nm fast enough to cover the needs of a very rapidly growing industry. The good news is that LLMs are clearly seeing diminishing returns and will likely hit a wall where throwing more processing power and data at them only yields negligible improvements, meaning America won't pull too far ahead in AI development while China is constrained on computing power.
This is a good question. According to Google, 5nm semiconductors are 15% faster than 7nm semiconductors. That means Chinese LLMs would be 15% slower for five years - my estimate of the remaining time China needs for EUV. 5nm chips are also smaller. That can obviously be ameliorated with a bigger data center.

In my opinion, using a bigger node for LLMs is not really a big deal.
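
A rough back-of-envelope sketch of that point in Python (the cluster size and the 15% figure are illustrative assumptions, not benchmarks): for a workload that parallelizes well, a 15% per-chip deficit translates into roughly 18% more chips, assuming near-linear scaling.

import math

def chips_needed(baseline_chips: int, speed_ratio: float) -> int:
    # Chips required to match the baseline cluster's throughput when each
    # chip runs at speed_ratio (< 1.0) of the baseline chip's speed,
    # assuming throughput scales roughly linearly with chip count.
    return math.ceil(baseline_chips / speed_ratio)

baseline = 1000       # hypothetical 5nm-class cluster size (assumption)
slower_ratio = 0.85   # assume the older-node chip is 15% slower per chip

print(chips_needed(baseline, slower_ratio))   # -> 1177, i.e. about 18% more chips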

I also believe, frankly, that LLMs are overrated. Based on my experience, ChatGPT is unable to solve engineering problems whose solutions are not available on the Internet. LLMs cannot think imo. They just regurgitate Reddit in sophisticated language.

It appears that LLMs have fundamental problems. They seem to be consuming exponentially more data and power for only minor improvements in performance. They also appear fundamentally unable to resolve the "ghost data" problem. ChatGPT 5.0 still isn't out.

I still remember when Silicon Valley convinced me that self-driving cars were just around the corner. That was ten years ago.
 

GiantPanda

Junior Member
Registered Member

China is able to do pretty well with the AI chips it has.

Self-driving cars are a reality in China. There are robotaxis in operation across the country.

But the really big impact of practical AI is in industry. That is the path AI is taking in China, unlike in the US, and I don't see any ban on chips impacting that.


This is something I don't see the US doing in the near future, even with far better chips for ChatGPT. They'll be able to write better sex stories with ChatGPT 5/6/7 or whatever, but I highly doubt they'll achieve full automation on the scale of a port, a mine, or even a simple bus service.
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Again, please read what I wrote after him and move on from this topic. We have an AI thread and you can see the situation there. It's really not on topic to discuss LLMs on the semi thread.
 

tokenanalyst

Brigadier
Registered Member
I don't think that's how things work. There is more to speed than node scaling: a good architecture, software optimization, hardware optimization, and so on. Node scaling helps with power, but speed is not that simple.
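
A toy illustration of the software point (Python with NumPy; timings are machine-dependent and the example says nothing about any specific chip): a software-level optimization can dwarf a 15% clock difference.

import time
import numpy as np

x = np.random.rand(2_000_000)

t0 = time.perf_counter()
total_loop = 0.0
for v in x:                       # naive per-element Python loop
    total_loop += v * v
t1 = time.perf_counter()

total_vec = float(np.dot(x, x))   # the same sum of squares, vectorized
t2 = time.perf_counter()

print(f"loop:       {t1 - t0:.3f} s")
print(f"vectorized: {t2 - t1:.3f} s")   # typically orders of magnitude faster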
 

iewgnem

Junior Member
Registered Member
Companies don't use the same number of chips, chip costs aren't the same, energy prices aren't the same, and software isn't the same. For anything that involves parallel compute, the speed difference at the individual chip level is not nearly as important as a lot of people think, and even then 15% is a very small difference.

Case in point: if you build a datacenter entirely powered by cheap solar, inside naturally cooled geographic features, and in proximity to other datacenters, you can get far more compute for every dollar spent using a larger but non-linearly cheaper node.
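
A toy cost model of that tradeoff, with made-up numbers (the chip prices, TFLOPS, power draw, and electricity rates are all assumptions for illustration, not real products): at the datacenter level, what matters is delivered compute per dollar, not per-chip speed.

def tflop_hours_per_dollar(tflops, chip_price, watts, power_price_per_kwh, hours):
    # Total TFLOP-hours delivered per dollar of chip plus electricity cost.
    energy_cost = (watts / 1000) * hours * power_price_per_kwh
    return (tflops * hours) / (chip_price + energy_cost)

HOURS = 3 * 365 * 24   # three years of continuous operation

# Hypothetical leading-node chip: 15% faster, much more expensive, grid power.
leading = tflop_hours_per_dollar(tflops=115, chip_price=30000, watts=700,
                                 power_price_per_kwh=0.10, hours=HOURS)

# Hypothetical trailing-node chip: slower but far cheaper, running on cheap solar.
trailing = tflop_hours_per_dollar(tflops=100, chip_price=12000, watts=750,
                                  power_price_per_kwh=0.04, hours=HOURS)

print(f"leading node : {leading:.1f} TFLOP-hours per dollar")
print(f"trailing node: {trailing:.1f} TFLOP-hours per dollar")

With these assumed numbers the trailing-node chip delivers roughly twice the compute per dollar, which is the "non-linearly cheaper" effect described above.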
 