Chinese semiconductor industry

Status
Not open for further replies.

olalavn

Senior Member
Registered Member
TBH I always expected the US to tighten sanctions after this news came out. It would be way too embarrassing to announce this less than 10 months ago, twist the arms of other countries into playing along against their own economic interests… and then just give up because Huawei shipped one phone.

No way. The US will probably lock down all direct US products to SMIC / Huawei within the next month, no more Qualcomm licenses. They’ll try to lock down ASML, but they’ll fail.
In 2-4 years the number of Qualcomm chips shipped will drop to its lowest level... I'll be grateful to Trump... he pulled China out of the quagmire of chip-fund corruption and made China start over from the beginning...
 

BoraTas

Captain
Registered Member
In 2-4 years the number of Qualcomm chips shipped will drop to its lowest level... I'll be grateful to Trump... he pulled China out of the quagmire of chip-fund corruption and made China start over from the beginning...
iPhone sales will likely peak in the coming years too. Especially with Apple de-risking from China, and with GDP growth in Henan (the last stronghold of cheap assembly work in China), the Chinese govt won't be as accommodating to Apple as it has been. People's view of Apple will change too.
 

BlackWindMnt

Captain
Registered Member
I've met him irl. Dylan comes off as a ML enthusiast that thinks he's qualified to talk about chips and GPUs just because he's got low-level connections in Taiwan and Nvidia. The only bigshots that pay any attention to him are VC and software numbskulls like Sam Altman or Elon who don't know the first thing about hardware.


The Kirin 9000S is the first ever mobile ARM SoC to implement SMT. Has anybody posted about SMT hyperthreading perf for the Kirin chip?
Really depends on the code.
Code with a high cache-miss rate that is also highly threaded might get some extra performance out of it.
When a piece of code takes a cache miss, because the data is not in cache, the CPU has to fetch it from memory, making the code wait 300~400 cycles (or an order of magnitude less, 30~40 cycles, if it comes from a lower cache level) before the data arrives and the core can continue. A core with hyper-threading/SMT will try to do something else in those 300~400 cycles, interleaving other pieces of code that can run because their data is already in cache.

It does nothing for the theoretical max performance of a core, but it might make badly optimised threaded code perform better.
It has been a while since I did high-performance coding though, so I might be mistaken.
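A rough way to see the effect: a toy cycle-accounting model (illustrative numbers only, not a simulation of any real core) where each thread alternates a short compute burst with a long memory stall. A single hardware thread idles through every stall; an idealized SMT core overlaps one thread's stall with the other thread's compute.

```python
# Toy cycle-accounting model of SMT latency hiding.
# Numbers are illustrative only -- not a simulation of any real core.

COMPUTE = 10   # cycles of useful work per chunk
MISS = 300     # stall cycles per cache miss (data fetched from DRAM)

def runtime_cycles(n_threads: int, chunks_per_thread: int) -> float:
    """Idealized runtime: stalls in one thread overlap compute in another."""
    # Fraction of time a lone thread keeps the core busy.
    busy_fraction = COMPUTE / (COMPUTE + MISS)
    # With n threads the core is busy up to n times as often, capped at 100%.
    utilization = min(1.0, n_threads * busy_fraction)
    total_compute = n_threads * chunks_per_thread * COMPUTE
    return total_compute / utilization

one = runtime_cycles(1, 1000)  # 1 thread, 1000 chunks of work
two = runtime_cycles(2, 1000)  # 2 SMT threads, 1000 chunks each

print(round(one))        # 310000 cycles for 1000 chunks
print(round(two))        # 310000 cycles for 2000 chunks -- twice the work
print(2 * one / two)     # throughput ratio: ~2x
```

With these miss-heavy numbers the core is stalled over 96% of the time, so a second thread roughly doubles throughput for free; peak single-thread performance is unchanged, matching the point above.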
 

KampfAlwin

Senior Member
Registered Member
iPhone sales will likely peak in the coming years too. Especially with Apple de-risking from China, and with GDP growth in Henan (the last stronghold of cheap assembly work in China), the Chinese govt won't be as accommodating to Apple as it has been. People's view of Apple will change too.
Yeah, have you seen the iPhone 15 reveal? I have never seen people so excited about features already present in Chinese phones years ago.
 

BlackWindMnt

Captain
Registered Member
Really depends on the code.
Code with a high cache-miss rate that is also highly threaded might get some extra performance out of it.
When a piece of code takes a cache miss, because the data is not in cache, the CPU has to fetch it from memory, making the code wait 300~400 cycles (or an order of magnitude less, 30~40 cycles, if it comes from a lower cache level) before the data arrives and the core can continue. A core with hyper-threading/SMT will try to do something else in those 300~400 cycles, interleaving other pieces of code that can run because their data is already in cache.

It does nothing for the theoretical max performance of a core, but it might make badly optimised threaded code perform better.
It has been a while since I did high-performance coding though, so I might be mistaken.
Wanted to edit something but I was past my edit time.

Let's say you have a core that runs at 3 GHz; that's 3,000 cycles per 1 µs (microsecond). Average access time to an SSD is around 20~25 µs, so reading some data from the SSD into cache takes around 75,000 cycles. Usually when a piece of code does this, it gets moved to the smaller cores in a big.LITTLE SoC setup so the big core can continue to work on other, more immediate tasks. And say you want to do a network call that takes around 10 ms (milliseconds): for a 3 GHz CPU that's about 30,000,000 cycles it has to wait before it can continue to do something with the requested data.

That's why you see this setup of one really fast, big core, three medium-powered cores and four slower, smaller cores in CPUs these days. Things like accessing data from the SSD get parked onto the three medium or four slower cores, while network calls just get parked on the slow cores.
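Those figures are easy to sanity-check with a couple of lines. The latency numbers below are the rough ones quoted above, not measurements; note that a 10 ms wait at 3 GHz works out to 30 million cycles.

```python
# Back-of-the-envelope latency-to-cycles conversion at 3 GHz.
# Latency figures are rough ballpark numbers, not measurements.

FREQ_HZ = 3_000_000_000  # 3 GHz core -> 3,000 cycles per microsecond

def cycles_waited(latency_seconds: float) -> int:
    """Core cycles burned while waiting out a given latency."""
    return round(FREQ_HZ * latency_seconds)

print(cycles_waited(100e-9))  # ~100 ns DRAM access  -> 300 cycles
print(cycles_waited(25e-6))   # ~25 us SSD read      -> 75,000 cycles
print(cycles_waited(10e-3))   # ~10 ms network call  -> 30,000,000 cycles
```

The 300-cycle DRAM figure is the same order as the 300~400 cycle cache-miss stall mentioned earlier in the thread, and the SSD and network numbers show why those waits get shunted to smaller cores rather than stalling the big one.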
 

Proton

Junior Member
Registered Member
Really depends on the code.
Code with a high cache-miss rate that is also highly threaded might get some extra performance out of it.
When a piece of code takes a cache miss, because the data is not in cache, the CPU has to fetch it from memory, making the code wait 300~400 cycles (or an order of magnitude less, 30~40 cycles, if it comes from a lower cache level) before the data arrives and the core can continue. A core with hyper-threading/SMT will try to do something else in those 300~400 cycles, interleaving other pieces of code that can run because their data is already in cache.

It does nothing for the theoretical max performance of a core, but it might make badly optimised threaded code perform better.
It has been a while since I did high-performance coding though, so I might be mistaken.

I don't know much about the technical aspect.

But the practical situation seems to be the opposite of your conclusion, with more modern PC applications greatly benefiting from hyper-threading, while back in 2010 or so you'd often get slightly worse performance.

Not sure if you're perhaps missing some points about branch prediction - where mispredictions cause stalls. Or certain cases where the threads don't conflict.
 

Staedler

Junior Member
Registered Member
I don't know much about the technical aspect.

But the practical situation seems to be the opposite of your conclusion, with more modern PC applications greatly benefiting from hyper-threading, while back in 2010 or so you'd often get slightly worse performance.

Not sure if you're perhaps missing some points about branch prediction - where mispredictions cause stalls. Or certain cases where the threads don't conflict.

Well most code that is written is poorly optimized just because the cost of optimizing the code is higher in terms of manpower to write and manpower to maintain afterwards. We're also typically several layers up in stuff like Java (Android) so we lose even more context. The trend is towards less and less optimized code, simply because it gives us the throughput to build stuff at the speed business demands.

That said, I think there's a lot of latency that is baked in and can't really be improved algorithmically, such as network calls and the cache misses BlackWindMnt mentioned. A lot of apps and productivity work these days really are just network calls to some DB, so that is indeed a lot of dead time that SMT helps with. An example would be trading and personal banking apps.
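The "dead time" point can be sketched in a few lines. Here `time.sleep` stands in for a hypothetical 50 ms network/DB round-trip; this shows overlap at the OS-thread level rather than in SMT hardware, but it's the same idea of filling one task's wait with another task's work.

```python
# Sketch: overlapping the dead time of blocking calls.
# time.sleep stands in for a network/DB round-trip (simulated, not real I/O).
import threading
import time

def fake_db_call(ms: float) -> None:
    time.sleep(ms / 1000)  # simulated round-trip latency

start = time.perf_counter()
threads = [threading.Thread(target=fake_db_call, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed_ms = (time.perf_counter() - start) * 1000

# Four 50 ms waits overlap: total wall time is ~50 ms, not ~200 ms.
print(f"{elapsed_ms:.0f} ms")
```

The same overlap is why SMT pays off for these workloads: while one logical thread is parked on a long-latency request, its sibling keeps the core fed.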
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Sai Micro has started mass production of micro-galvanometers, which will enable its customer (most likely Robosense here) to make MEMS lidar.


A previous teardown of a Robosense lidar showed that almost all the core chips are sourced from abroad, including Micron DRAM, a Xilinx FPGA, and chips from TI and Analog Devices. This is hopefully another step on the way toward establishing a domestic supply chain.
 