Chinese semiconductor industry


BoraTas

Captain
Registered Member
Re the restrictions on accessing the latest ARM designs: that may be moot, since those designs may have required more advanced processes that are now subject to the recent export controls placed on the foundries. Unless they could have those fabbed at SMIC, having licenses to the new ARM cores would be useless if they required access to export-controlled TSMC/Samsung nodes. Typically new ARM cores require a node shrink to implement, so I suspect that is the case here.

I am not an expert in this area, but is RISC-V viable for consumer applications (i.e., mobile)? I know the focus right now is on it as a B2B/enterprise solution. Even if RISC-V ensures continuity for datacenters, government, industrial, etc., I feel that another alternative should be established to minimize disruptions to consumer operations like Huawei smartphones or Lenovo solutions in case they cut off ARM licenses for those cores, or x86.
Actually, it is not moot, and it makes domestic software and OS development more critical. I think it is time for China to subsidize RISC-V development and prepare emulators and virtualization tools. The ISA matters because most software is platform specific. Shouldn't be too hard. Apple changed ISAs multiple times in the past, and for RISC-V such tools already exist. And yes, RISC-V is viable. It just needs someone to invest a few billion USD.
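For a sense of why migrating software across ISAs is tractable, here is a minimal, illustrative sketch (my own example, not from any tool mentioned above) of what a user-mode emulator fundamentally does: fetch, decode, and execute guest instructions against emulated register state. Production tools such as QEMU or Apple's Rosetta add dynamic binary translation and syscall handling on top of the same loop. Only two RV32I instructions are decoded here, just enough to show the principle.

```python
# Minimal sketch of a user-mode ISA emulator's core loop for two RV32I
# instructions (ADDI and ADD). Real emulators/translators build JIT
# compilation and OS interfaces on top of this fetch-decode-execute idea.

def run_rv32i(program, regs=None, pc=0):
    """Interpret a list of 32-bit RV32I instruction words."""
    regs = regs or [0] * 32                      # x0..x31; x0 is hard-wired to 0
    while 0 <= pc // 4 < len(program):
        inst = program[pc // 4]
        opcode = inst & 0x7F
        rd = (inst >> 7) & 0x1F
        rs1 = (inst >> 15) & 0x1F
        rs2 = (inst >> 20) & 0x1F
        if opcode == 0x13:                       # ADDI rd, rs1, imm
            imm = inst >> 20
            if imm & 0x800:                      # sign-extend the 12-bit immediate
                imm -= 0x1000
            regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
        elif opcode == 0x33:                     # ADD rd, rs1, rs2 (funct7 = 0 assumed)
            regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
        else:
            break                                # anything else is out of scope here
        regs[0] = 0                              # x0 always reads as zero
        pc += 4
    return regs

# addi x1, x0, 5 ; addi x2, x0, 7 ; add x3, x1, x2  ->  x3 == 12
demo = [0x00500093, 0x00700113, 0x002081B3]
print(run_rv32i(demo)[3])                        # 12
```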
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Oppo is coming out with its second chip, called MariSilicon Y. It surpasses 12 Mbps Bluetooth transmission speed and achieves 192 kHz/24-bit lossless audio transmission for the first time.

MariSilicon Y achieves AI compute power of up to 590 GOPS for the fast-growing field of computational audio.

Looks to be using TSMC's N6RF process. It really is annoying that they are not even trying to use SMIC for something that doesn't seem to require that advanced a process.
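As a rough sanity check on the quoted lossless-audio figure, here is my own back-of-the-envelope arithmetic (assuming an uncompressed stereo PCM stream; the article does not state the channel count), showing why the 12 Mbps link speed matters:

```python
# Raw PCM bitrate for the quoted 192 kHz / 24-bit lossless stream,
# assuming stereo (2 channels) -- an assumption, not from the article.
sample_rate_hz = 192_000
bit_depth = 24
channels = 2

raw_bitrate_bps = sample_rate_hz * bit_depth * channels
print(f"raw PCM: {raw_bitrate_bps / 1e6:.3f} Mbit/s")   # ~9.216 Mbit/s

# ~9.2 Mbit/s fits under the quoted 12 Mbit/s link, whereas conventional
# Bluetooth audio links (roughly 1-3 Mbit/s) would force lossy compression.
print(raw_bitrate_bps < 12_000_000)                      # True
```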
 

tokenanalyst

Brigadier
Registered Member

The dual-mode hardware verification system is here! In-depth analysis of the six core highlights of HuaPro P2E​


As SoC and chiplet innovation continues, high-performance hardware verification systems face growing requirements for virtual and physical verification, in-depth debugging, and advanced software development, which often force users to switch between multiple EDA tools. In the verification of large-scale chips, hardware emulation and prototype verification are both necessary steps that address different verification needs. However, typical hardware emulation and prototyping products are separate systems with separate workflows, which creates various challenges for users. Solving these challenges requires an overall solution spanning software, hardware, and debugging, so that the two verification systems can be tightly integrated and switched between seamlessly.
Xinhuazhang's new cloud-native software and hardware architecture supports a P2E cloud mode. Everything from the dynamic management of cloud computing and FPGA resources, to the allocation of user tasks, to real-time management and monitoring of the underlying hardware, can be accessed through a clean, easy-to-use cloud management interface that dynamically allocates resources, supports multiple users and tasks, and comprehensively improves the efficiency of the hardware verification system.

The launch of Xinhuazhang's HuaPro P2E products is a comprehensive innovation in the field of hardware emulation. In addition to bringing cost and efficiency benefits to chip verification users, it will also drive reform of the EDA industry. In the future we will see more EDA companies take the road of building hardware verification products on a unified software and hardware platform, jointly advancing the EDA industry toward a 2.0 era.




 

FriedButter

Major
Registered Member
Pardon me if it's off topic, but while people claim that the military doesn't need small-node chips, this OSINT account argued otherwise


Where? I don’t see him mentioning anything about chips or the size of the node anywhere.

This is what I mean when I say the biggest advances are on the software side, not the hardware side. Data processing and analysis are going to be the greatest enablers on the battlefields of true 21st-century conflicts.

This capability is essentially what Palantir is building for the DoD through Gotham, and it is currently being pursued by both the PLA and the US military.
 

tokenanalyst

Brigadier
Registered Member
Pardon me if it's off topic, but while people claim that the military doesn't need small-node chips, this OSINT account argued otherwise

The vast majority of weapons require reliability rather than cutting-edge nodes; you don't want your missile to fail in a critical mission because a chip got too hot. Your GPU may crash in a game, but the GPU that powers the sensor fusion in a fighter jet is something no pilot wants to fail.
Now to the question. When power is not an issue, I think you can get away with using custom architectures; a data-processing ASIC may be the best solution for this application even if it is made on an older node. Or you can use 3D IC stacking and packaging to get more performance out of the chips.
China is also developing compute-in-memory AI solutions that could get a lot of compute power out of chips. That is probably the next step in the evolution of AI chips.
Yizhu Technology focuses on domestic compute-in-memory AI chips with large computing power, achieving 10 times the energy efficiency on a 28 nm process
Finally, at low volumes I think the Chinese military has the capability to make its own advanced chips, using EUV, immersion lithography, or other means.
 

ZeEa5KPul

Colonel
Registered Member
I've glanced over the paper that account linked and nothing about it indicates that it's a real-time algorithm. Genetic algorithms aren't real-time because it takes many generations for a solution to emerge. The paper explicitly mentions:
However, since this algorithm is based on the framework of evolutionary computation, it cannot be performed in a real-time environment. Therefore, this proposed algorithm can be used in a non real-time and high-precision environment.
I'm not aware of any "real-time" training algorithms for AIs - in fact, I'm not sure how such a thing could be defined. A trained AI can make decisions in "real-time", but that's a different matter. And genetic algorithms aren't really considered "AI" (which has come to mean artificial neural networks). Training an AI and running more generations of a genetic algorithm are usually constrained by how much time the researcher is willing to spend.
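To make the "many generations" point concrete, here is a toy genetic algorithm of my own (unrelated to the linked paper), evolving a bitstring toward all ones. The structure is what matters: the answer only emerges after repeated rounds of selection, crossover, and mutation, which is why the loop as a whole doesn't fit inside a real-time deadline.

```python
# Toy genetic algorithm (onemax): illustrative only. A solution appears only
# after many generations of selection, crossover, and mutation.
import random

def evolve(n_bits=32, pop_size=50, generations=200, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)                    # count of ones
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n_bits:
            return gen, pop[0]                        # solved after `gen` generations
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                 # one-point crossover
            child = [bit ^ (random.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = children
    return generations, max(pop, key=fitness)         # best found if not solved

gens, best = evolve()
print(f"best fitness {sum(best)} after {gens} generations")
```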

Data is fed in and the algorithm is given time to crunch it; specialized hardware can help but hardware generally isn't the constraint. Algorithmic improvements are often what drives performance gains, and it's not unusual for a superior algorithm to complete a task that used to take days in minutes on the same hardware.

The Twitter account you linked is a know-nothing osintbro; it's not worth our attention.
 

ZeEa5KPul

Colonel
Registered Member
This issue actually has some interesting implications. Think of a problem like bitcoin mining (but without the artificially inflated difficulty): at first it was done on regular desktop CPUs, then on GPUs and FPGAs, and now on ASICs designed specifically for mining. The speedups came from adapting the hardware to the problem, not from using faster CPUs, and I think the same holds here.

If this class of algorithms proves particularly effective in solving the problem, China will design hardware optimized for running this specific form of genetic algorithm.

It's still true that algorithmic improvements are the biggest source of speedups, but there is a role for specialized hardware to play once the juice is squeezed out of the algorithm lemon.
 