Chinese semiconductor thread II

tokenanalyst

Brigadier
Registered Member

EFEM for Lithography Machines​



FORTREND is the only domestic supplier of a front-end automation module (EFEM) that docks directly with lithography machines, providing direct-connect automatic wafer loading and unloading.

Its robot transfers wafers from the wafer carrier to the lithography machine's docking port, and it is configured with a dual-robot system (a regular robot plus a small Z-axis robot).

- Supports transfer of 6-inch, 8-inch, and 12-inch wafers.
- The small Z-axis robot covers docking platform heights from 620 mm to 800 mm.
- Internal cleanliness reaches up to ISO Class 1.
- Compatible with all FOUPs, FOSBs, SMIF pods, and cassettes that meet SEMI standards.
- Outstanding CoO and CoC, helping customers reduce costs and improve efficiency.
- Offers customization and can be designed to customer requirements.
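
For concreteness, here is a minimal sketch of the spec above as a config object with a simple range check; all names (EfemSpec, can_dock) are hypothetical and illustrative only, not part of any FORTREND software.

```python
from dataclasses import dataclass

# Hypothetical encoding of the quoted EFEM spec; names are illustrative,
# not part of any FORTREND product interface.
@dataclass(frozen=True)
class EfemSpec:
    wafer_sizes_in: tuple = (6, 8, 12)          # supported wafer diameters (inches)
    dock_height_mm: tuple = (620, 800)          # small Z-axis robot coverage
    carriers: tuple = ("FOUP", "FOSB", "SMIF POD", "Cassette")
    cleanliness: str = "ISO Class 1"

    def can_dock(self, wafer_in: int, height_mm: float, carrier: str) -> bool:
        lo, hi = self.dock_height_mm
        return (wafer_in in self.wafer_sizes_in
                and lo <= height_mm <= hi
                and carrier in self.carriers)

spec = EfemSpec()
print(spec.can_dock(12, 700, "FOUP"))   # True
print(spec.can_dock(8, 850, "FOSB"))    # False: above the 800 mm docking height
```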

 

def333

New Member
Registered Member
TechInsights did a teardown of CXMT's G4 (D1z) 16Gb DDR5 DRAM chip

TechInsights’ teardown analysis revealed that CXMT’s DDR5 chips are manufactured using a 16nm process. Given that U.S. sanctions prohibit CXMT from accessing technologies below 18nm, this implies that the DDR5 chips were likely produced using domestically developed lithography equipment.
 

Michael90

Junior Member
Registered Member
Citing the Personal Information Protection Law, the Cyberspace Administration of China summoned Nvidia on July 31, 2025, requesting that the company explain the security risks of a backdoor vulnerability in its H20 computing chips sold to China and submit relevant supporting documentation.
Lmao, what do they expect them to say? That they agree their chips have backdoors for tracking and kill switches? Lol. Even a fool would not confess to such a thing, so it's a waste of time. If they can't do their own thorough examination and in-depth inspection of Nvidia hardware and software products, then they might as well just let things be. Lol
 

Michael90

Junior Member
Registered Member

Indeed, this could be a light protectionist measure to help new Chinese startups. Not that it is without merit; it's just that the current H20s are the same ones as last year, and last year there was apparently no problem with tracking.

Of course, you can also believe that they only now realized the H20 has backdoors, but IMO, more realistically, last year there was no alternative: TSMC was banned from making Chinese AI chips and SMIC's advanced-node capacity was 100% dedicated to HW.

I read this as an indirect hint that SMIC capacity at advanced nodes is ramping up and that Chinese AI startups are maturing their software; replacing CUDA is indeed a big challenge. It is still not clear to me why Chinese firms do not consolidate around a single, open-sourced, local CUDA alternative: it is a gigantic challenge even for Huawei to replace CUDA, and I don't see how smaller players could do it.
Simple: the government should push and encourage tech companies to adopt Huawei's CUDA alternative, since Huawei has the best chance and the most advanced software ecosystem, along with AI chip alternatives to Nvidia in China, and it also has the scale to sustain such operations and adoption in China. It's surprising that, despite the urgency, this has not been happening yet in China.
 

vincent

Grumpy Old Man
Staff member
Moderator - World Affairs
Simple: the government should push and encourage tech companies to adopt Huawei's CUDA alternative, since Huawei has the best chance and the most advanced software ecosystem, along with AI chip alternatives to Nvidia in China, and it also has the scale to sustain such operations and adoption in China. It's surprising that, despite the urgency, this has not been happening yet in China.
Because the Chinese government is not the totalitarian one smeared by Western propaganda? The Chinese government can't force private corporations to adopt anything.
 

gelgoog

Lieutenant General
Registered Member
The industry itself should come up with a standard.
In the US, the industry came up with OpenCL as an alternative to CUDA, but it never took off, for several reasons.
Too much middleware and too many final applications have been written on top of CUDA at this point. NVIDIA itself has had loads of programmers working on CUDA acceleration for clients for about a decade, and then there is the third-party software.
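
To make the lock-in concrete, here is a short sketch of how deeply the CUDA backend is baked into everyday framework code; the commented-out migration path assumes an out-of-tree backend plug-in such as Torch-MUSA exposing its own device string (as PyTorch vendor backends typically do), and the exact names there are assumptions, not confirmed APIs.

```python
import torch

# Typical downstream code hard-codes the CUDA backend; this pattern is
# repeated across years of middleware and applications.
def run_step(model, batch):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    return model(batch.to(device))

# A domestic alternative has to slot in underneath the same call sites.
# Assumed sketch: an out-of-tree backend registers its own device string,
# so every ".to('cuda')" above still has to be found and changed, or
# shimmed, in every codebase:
#
#   import torch_musa                      # assumed plug-in import
#   device = "musa" if torch.musa.is_available() else "cpu"
#   model = model.to(device)
#
# Multiply that by a decade of CUDA-only kernels, libraries, and tuning
# work, and the scale of the porting problem becomes clear.
```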
 

tokenanalyst

Brigadier
Registered Member

A disruptor in the post-Moore era: Wuyuan Semiconductor leads China's 3D integrated manufacturing industry​

Building on my previous post about Wuyuan Semiconductor's rise in China's 3D integrated manufacturing space, let me provide more granular details about their equipment capabilities and product lineup - the foundation of their competitive edge.

Building on hybrid bonding technology and driven by independent innovation, Wuyuan Semiconductor has developed deep expertise in the core processes of 3D integrated manufacturing:

-Lithography Machine Milestone: On August 1, 2025, they installed their first lithography machine (a critical piece of equipment for precision pattern transfer in advanced packaging). The machine is a high-precision tool capable of sub-10nm feature resolution, essential for their hybrid bonding processes.

-Core Process Equipment: Their production line features specialized equipment for:​
  • Hybrid bonding: For wafer-level stacking with micron-level precision​
  • TSV (Through-Silicon Vias) equipment: Creating vertical interconnects through silicon​
  • CMP (Chemical Mechanical Polishing) systems: For ultra-smooth wafer surfaces​
  • ECP (Electrochemical Plating) systems: For reliable copper interconnects​
  • Advanced grinding and deep trenching equipment: For precise wafer thinning and structure creation​
-Localization Achievements:​
  • Equipment localization rate >70%​
  • Key equipment developed by domestic suppliers like Shenzhen Key Laboratory and Qingdao-based manufacturers​
  • Raw materials localization >85% (including silicon wafers, bonding materials, and interconnect metals)​
-Production Capacity:​
  • Line 1: 2,000 wafers/month (12-inch) capacity​
  • Line 2 (completed): Expected to double capacity to 4,000 wafers/month by November 2025​
  • Full production line expected to reach 10,000 wafers/month by 2026​
Wuyuan's product focus targets high-growth markets where 3D integration delivers critical advantages:
  1. Near-Memory Computing AI Chips:
    • 3D-stacked architecture integrating CPU/GPU with high-bandwidth memory (HBM)​
    • Achieves 30-40% latency reduction vs. traditional 2D designs​
    • Target applications: AI inference accelerators for data centers​
  2. Customized High-Bandwidth Memory (HBM) 1+8 Stacking:
    • Pushing toward 8-high stacking (common configurations are 4-6 stacks)​
    • Achieved a 1+8 configuration (1 base logic die + 8 DRAM dies)​
    • Expected to deliver 1.5x bandwidth vs. current 1+4 configurations​
    • Critical for next-gen AI accelerators requiring massive data throughput​
  3. New Memory Solutions:
    • 3D-stacked DRAM with embedded logic​
    • 20% lower power consumption vs. conventional 2D DRAM​
    • 30% higher density (40% more storage per unit area)​
  4. Micro-LED Display Chips:
    • 3D integration for micro-LED displays with pixel pitch <50μm​
    • Enables 8K+ resolution displays with 10x brightness​
    • Targeting AR/VR headsets and next-gen TVs​
Technical Differentiation
Hybrid Bonding Technology: Their proprietary bonding process achieves 99.98% yield at a 10μm pitch (vs. an industry average of 98.5% at 15μm)
Wafer-on-Wafer (WoW) & Chip-on-Wafer (CoW): They've optimized both approaches, with WoW achieving 99.5% yield and CoW at 98.7% yield
Process Integration: Their unique combination of TSV, ECP, and CMP processes reduces total process steps by 35% vs. competitors
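
As a back-of-the-envelope illustration of why those yield figures matter for tall stacks (an illustrative independence model, not Wuyuan's published methodology): if each bonding interface succeeds independently, an N-high stack is good only if every interface is, so stack yield is roughly the per-interface yield raised to the power N.

```python
# Illustrative only: assumes the quoted figures behave like independent
# per-bonding-interface yields, which is a simplification.
def stack_yield(interface_yield: float, n_interfaces: int) -> float:
    """Probability that every bonding interface in a stack is good."""
    return interface_yield ** n_interfaces

for y in (0.9998, 0.985):          # 99.98% vs. 98.5% per interface
    for n in (4, 8):               # e.g. 4-high vs. 8-high stacking
        print(f"yield/interface={y:.4%}, {n} interfaces -> "
              f"stack yield={stack_yield(y, n):.2%}")

# 0.9998^8 ≈ 99.84%, while 0.985^8 ≈ 88.6% — the gap widens quickly
# as stacks get taller.
```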

 

tokenanalyst

Brigadier
Registered Member

Flagship Microsemi completes Series C financing to accelerate mass production of high-end automotive chips​

Qixin Microsemi reached a significant milestone in its development: the company successfully completed a financing round worth hundreds of millions of yuan, continuing to gain recognition from prominent market investors, including Xiaomi Group's Beijing Xiaomi Intelligent Manufacturing Equity Investment Fund, Hainan Jimu Venture Capital, the Beijing Advanced Manufacturing and Intelligent Equipment Industry Investment Fund led by Jing Guorui, and the Beijing Shunyi Fund. At the same time, the company will officially relocate to Beijing's Shunyi District. With the support of industry partners, shareholders, and local authorities, it will continue to leverage its technological and product advantages to better serve downstream customers, promote the high-quality development of the automotive industry, and establish a virtuous cycle of technology, industry, and capital.
As an innovative company specializing in core control chips for next-generation vehicle electrical/electronic (E/E) architectures, Qixin Microelectronics has established a significant advantage in technological breakthroughs and market expansion, with a business network covering leading Tier 1 suppliers and major automakers. As the company approaches its fifth anniversary, Qixin Micro's FC4150 series of automotive-grade MCUs based on the Arm Cortex-M4 architecture have shipped over 10 million units. Its next-generation FC7300 series of multi-core ASIL-D high-end automotive-grade MCUs based on the Arm Cortex-M7 architecture will enter mass production in January 2025 and have already secured orders for over one million units. With this round of financing, the company will further consolidate its leading position in China's high-performance, high-functional-safety controller chips, deepen collaboration across the industry chain, and steadily embark on a new phase of commercial development.

 

tokenanalyst

Brigadier
Registered Member

Big Intelligent Computing Clusters: FP8 Training and Reliability Breakthroughs​

Moore Threads recently shared insights at WAIC2025 about the evolving landscape of large-scale AI model training. Here's what's essential to know:

Why Big Clusters Are Now Essential​

The computing power required for AI models has exploded, from about 10^23 FLOPs for top models in 2020 to a projected 10^26 FLOPs for Grok-3 by 2025. This near-1,000x increase is driven by scaling laws, where better performance requires more parameters and more data.

Real-world examples show the challenge:​
  • DeepSeek: ~3.4×10^24 FLOPs​
  • Kimi K2 (trillion-parameter model): 2.98×10^24 FLOPs (85 days on a 1k-GPU cluster)​
  • GPT-4: 10^25 FLOPs (602 days on a 1k-GPU cluster vs. 80 days on a 10k-GPU cluster)​
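
For intuition, the quoted wall-clock figures follow from total training FLOPs divided by the cluster's delivered throughput. A minimal sketch, where the per-GPU sustained rate is an assumed round number chosen only to land near the quoted GPT-4 figures, not a Moore Threads number:

```python
def training_days(total_flops: float, num_gpus: int,
                  sustained_flops_per_gpu: float) -> float:
    """Wall-clock days = total FLOPs / (GPUs * sustained FLOP/s * 86,400 s)."""
    return total_flops / (num_gpus * sustained_flops_per_gpu * 86_400)

# Assumption: ~2e14 sustained FLOP/s per GPU (peak rate x utilization).
SUSTAINED = 2e14

print(training_days(1e25, 1_000, SUSTAINED))    # ~579 days on a 1k-GPU cluster
print(training_days(1e25, 10_000, SUSTAINED))   # ~58 days on a 10k-GPU cluster
```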

FP8: The Precision Sweet Spot​

Recent research shows FP8 has emerged as the optimal precision for large-scale training:​
  • Each precision halving (FP32→FP16→FP8) doubles computing power​
  • FP8 hits the "sweet spot" where model loss is minimized (FP6 and FP4 show increased loss)​
  • Challenges include overflow/underflow risks due to limited value range​
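
The overflow/underflow risk comes from FP8's narrow representable range (e4m3 tops out around 448), which is why per-tensor scaling is the usual workaround. A minimal sketch, assuming a recent PyTorch build with float8 dtypes; purely illustrative, not Moore Threads' implementation:

```python
import torch

# FP8 e4m3 has a very narrow range compared with FP16/FP32.
FP8 = torch.float8_e4m3fn
fp8_max = torch.finfo(FP8).max            # 448.0

x = torch.randn(4, 4) * 1000.0            # values that would overflow raw FP8

# Per-tensor scaling: map the tensor's amax onto the FP8 range before casting,
# then keep the scale around to dequantize after the low-precision compute.
scale = x.abs().max() / fp8_max
x_fp8 = (x / scale).to(FP8)               # quantize
x_back = x_fp8.to(torch.float32) * scale  # dequantize

print(torch.finfo(FP8))                   # range/precision of e4m3
print((x - x_back).abs().max())           # quantization error stays bounded
```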
Moore Threads has developed complete FP8 support:
  • Hardware: Full-featured GPU supporting FP64 to FP8
  • Software stack:
    • Torch-MUSA: a low-level plug-in built on the Torch stack that allows the entire Torch stack to run on MUSA, with full support for the FP8 data type.
    • MT-MegatronLM: a framework for hybrid-parallel training that supports efficient training of dense, multimodal, and MoE models with FP8 mixed precision, the high-performance muDNN library, and the MCCL communication library.
    • MT-TransformerEngine: mainly used for efficient Transformer training and inference optimization; it supports FP8 mixed-precision training and improves efficiency through techniques such as operator fusion and parallel acceleration.
"Using this software stack, we successfully replicated the entire DeepSeek-V3 training process. As you all know, DeepSeek wasn't initially open source, or even if it was open source, it was only partially available. Building on this software stack, we implemented a series of related technologies, such as the MT Flash MLA and DeepGEMM libraries. We were the first vendor in the industry to replicate DeepSeek training with a full version, while others were still attempting to replicate inference."

With thousands of GPUs, even small failure rates become significant. Moore Threads' solution includes:​
  1. Takeoff checks: Comprehensive pre-training inspection of all hardware, network, and small workloads to automatically remove abnormal nodes​
  2. Flight checks: Monitoring during training to detect hangs, abnormal exits, and performance issues​
  3. Slow node mitigation: Identifying and handling slow nodes (which can improve performance by 10-20%)​
  4. Fault-tolerant training: Removing failed nodes from communication groups rather than stopping entire training runs​
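
To make item 4 concrete, here is a generic sketch of shrinking the communication group after a failure using plain torch.distributed; this illustrates the general technique only, not Moore Threads' actual system, and detect_failed_ranks is a hypothetical stand-in for cluster monitoring.

```python
import torch
import torch.distributed as dist

def rebuild_group(healthy_ranks):
    """Create a new communication group containing only healthy ranks,
    so training can continue without the failed nodes."""
    return dist.new_group(ranks=sorted(healthy_ranks))

def allreduce_grads(grads, group):
    # Gradients are averaged only across the surviving ranks.
    for g in grads:
        dist.all_reduce(g, op=dist.ReduceOp.AVG, group=group)

# Sketch of the control flow (assumes init_process_group was called earlier
# and that detect_failed_ranks() is provided by the cluster's monitoring):
#
#   group = dist.group.WORLD
#   for step in range(num_steps):
#       failed = detect_failed_ranks()              # hypothetical health check
#       if failed:
#           healthy = set(range(dist.get_world_size())) - failed
#           group = rebuild_group(healthy)          # drop failed nodes, keep going
#       allreduce_grads(grads, group)
```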

As model training pushes into the trillion-parameter realm, both precision optimization through FP8 and cluster reliability have become critical factors determining the pace and success of large-scale AI development.

 

tokenanalyst

Brigadier
Registered Member

High-end manufacturing transformation accelerates, and Xinyichang's Mini LED die bonder business rises strongly​


Xinyichang revealed the latest progress in its Mini LED die-bonding machine business: it has officially signed a purchase order for Mini LED die-bonding machines with Suzhou Huaxing Optoelectronics Display Co., Ltd. (a wholly-owned subsidiary of TCL Huaxing) and has completed the internal approval procedures for signing the order.

This order confirms the success of the company's strategic transformation. Xinyichang stated that since the beginning of the year, some leading panel manufacturers have begun to deploy Mini LED production capacity, and the company's Mini LED order intake has been good.

Xinyichang's advantage in the Mini LED die bonder market stems from its extensive technical expertise. Its machines reach a processing speed of 180K units per hour, placing them at the forefront of the industry. In 2024, R&D investment reached 97.6192 million yuan, accounting for 10.45% of operating income for the period, focused on new intelligent manufacturing equipment for the semiconductor and Mini/Micro LED fields. The company has independently developed and mastered a number of core technologies, including WMD high-speed mixed-signal wireless transmission, parallel computing, and Mini LED wafer defect detection algorithms.
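
Two quick sanity checks on those figures, using only the numbers quoted above:

```python
# Throughput: 180K units/hour expressed per second.
units_per_hour = 180_000
print(units_per_hour / 3600)            # = 50.0 units placed per second

# Implied 2024 operating income from the quoted R&D figures.
rd_spend_yuan = 97_619_200              # 97.6192 million yuan
rd_share = 0.1045                       # 10.45% of operating income
print(rd_spend_yuan / rd_share / 1e6)   # ≈ 934 million yuan of operating income
```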

Xinyichang has pledged to keep increasing its Mini LED R&D investment through 2025 to further consolidate its technological advantages. These investments have yielded significant product premiums, with the gross profit margin on Mini LED die bonders exceeding 40%, well above the industry average. To meet growing market demand, Xinyichang is actively planning capacity expansion: in January 2025, the company completed the topping-out ceremony for its high-end intelligent equipment manufacturing base, a project in which it has invested 600 million yuan.

The project aims to build a comprehensive base integrating intelligent equipment manufacturing with the group's R&D center, which will include R&D offices, laboratories, testing labs, and other advanced facilities. Upon reaching full production, the project is expected to generate an additional 680 million yuan in annual output value, providing a solid foundation for the company's future growth. Once the base is in use, Xinyichang's development in the Mini/Micro LED equipment field is expected to accelerate further, and its R&D and manufacturing capabilities should improve qualitatively.

 