Cambricon: MLU370-X8 has achieved in-depth cooperation with leading Internet companies, and over one million edge chips have shipped
According to a micro-net news report, Cambricon released its new training accelerator card, the MLU370-X8, at the beginning of this year, and it quickly made waves in the domestic Internet, server and other markets. The MLU370-X8 carries a dual-chip, four-chiplet Siyuan 370 configuration, integrates Cambricon's MLU-Link™ multi-chip interconnect technology, and completed adaptation work with mainstream domestic server manufacturers at launch.
Recently, Cambricon disclosed in an investor briefing that, since the release of the MLU370-X8 accelerator card, the product's strong competitiveness has won it in-depth cooperation with some leading Internet customers in certain scenarios, and the company has already generated revenue at a certain scale. In addition, some customers have completed product introduction and are in business discussions. In the financial sector, the company has held in-depth exchanges with leading banks and well-known enterprises on OCR and other related businesses and product applications, has conducted in-depth technical exchanges on new business scenarios such as natural language processing, and some enterprises are running business trials. As for server manufacturers, the company's products have also been recognized by leading server makers.
It is understood that the MLU370-X8 provides twice the memory bandwidth and, combined with the MLUarch03 architecture and MLU-Link multi-chip interconnect technology, gives full play to the Siyuan 370 chip's advantages in training tasks. Positioned at the mid-to-high end, the MLU370-X8 joins the high-end training products Siyuan 290 and Xuansi 1000 to further enrich Cambricon's ways of delivering training compute, and works alongside the MLU370-S4 smart accelerator card to form a complete cloud training and inference product portfolio.
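To make concrete how a dual-chip training card joined by a chip-to-chip link is typically used, here is a minimal, hypothetical sketch of data-parallel training across the two chips in PyTorch. The `mlu` device string, the assumption that each chip appears as a separate device, and every function name below are illustrative only, not confirmed details of Cambricon's framework integration.

```python
# Hypothetical sketch of data-parallel training across the two chips of a
# dual-chip card. Assumptions (not confirmed Cambricon API): each chip is
# visible as its own device, and the vendor's PyTorch plug-in registers an
# "mlu" device type.
import copy
import torch
import torch.nn as nn

DEVICE_TYPE = "mlu"  # assumption for illustration; swap in the real device type

def data_parallel_step(model: nn.Module, batch, targets, loss_fn, n_devices=2):
    """One data-parallel step: each chip processes half the batch, and the
    resulting gradients are averaged back into the host-side model."""
    replicas = [copy.deepcopy(model).to(f"{DEVICE_TYPE}:{i}")
                for i in range(n_devices)]  # per-step copies keep the sketch simple
    for replica, x, y in zip(replicas, batch.chunk(n_devices), targets.chunk(n_devices)):
        dev = next(replica.parameters()).device
        loss_fn(replica(x.to(dev)), y.to(dev)).backward()
    # Average the per-chip gradients on the host. A real stack would do this
    # over the chip-to-chip interconnect instead of bouncing through the CPU.
    for p, *gs in zip(model.parameters(), *(r.parameters() for r in replicas)):
        p.grad = torch.stack([g.grad.to("cpu") for g in gs]).mean(dim=0)
```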
At present, in the field of intelligent chips, Cambricon has mastered intelligent-processor microarchitecture, intelligent-processor instruction sets, SoC chip design, processor chip functional verification, advanced-process physical design, chip packaging design and mass-production testing, and hardware system design. In the field of basic system software, it has mastered seven categories of core technologies: programming framework adaptation and optimization, smart-chip programming languages, smart-chip compilers, smart-chip math libraries, smart-chip virtualization software, smart-chip core drivers, and an integrated cloud-edge-device development environment. Among these, processor microarchitecture and instruction sets are the lowest-level core technologies; the company's fifth-generation intelligent-processor microarchitecture and fifth-generation instruction set are both under development.
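Since "programming framework adaptation" heads the software list above, a small sketch may help show what such an adaptation layer does at its core: map framework-level operators onto vendor kernels. Everything below is generic and hypothetical; it is not Cambricon's real compiler, math library, or driver API.

```python
# Hypothetical illustration of framework adaptation: a thin dispatch table
# that routes framework-level operators to vendor math-library kernels.
# All names here are invented for illustration.
from typing import Callable, Dict

KERNEL_REGISTRY: Dict[str, Callable] = {}

def register_kernel(op_name: str):
    """Decorator an adaptation layer might use to map a framework op
    (e.g. "matmul") onto the vendor's own implementation."""
    def wrap(fn: Callable) -> Callable:
        KERNEL_REGISTRY[op_name] = fn
        return fn
    return wrap

@register_kernel("matmul")
def matmul_kernel(a, b):
    # Placeholder: a real adaptation layer would call into the vendor's
    # math library here instead of computing in pure Python.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def dispatch(op_name: str, *args):
    """Framework side: look up and execute the vendor kernel for an op."""
    return KERNEL_REGISTRY[op_name](*args)

# Usage: dispatch("matmul", [[1, 2], [3, 4]], [[5, 6], [7, 8]])
```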
At the same time, Cambricon continues to optimize and iterate its basic system software platform. On the inference side, it keeps improving the MagicMind inference acceleration engine and its surrounding ecosystem. Functionally, MagicMind has continued to add basic features such as per-channel quantization and weight hot-update; the number of supported operators has grown to more than 200; and the number of public benchmark models and non-public private models keeps increasing. The latest version fully supports cloud-edge-device inference workloads across image classification, video understanding, semantic segmentation, similarity detection, text detection, OCR, speech processing, natural language processing, search, recommendation and other fields.
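Per-channel quantization, named above as one of MagicMind's basic features, is a standard technique; here is a minimal NumPy sketch of the underlying arithmetic. This is generic int8 quantization, not MagicMind's actual API.

```python
# Generic sketch of symmetric int8 per-channel weight quantization.
import numpy as np

def quantize_per_channel(w: np.ndarray, axis: int = 0):
    """Quantize with one scale per channel along `axis` (e.g. conv output channels)."""
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    max_abs = np.abs(w).max(axis=reduce_axes, keepdims=True)
    scales = np.maximum(max_abs, 1e-8) / 127.0  # avoid divide-by-zero channels
    q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scales

# Per-channel scales track each filter's own range, so a layer mixing large
# and small filters loses less precision than with a single per-tensor scale.
w = np.random.randn(8, 3, 3, 3).astype(np.float32)  # (out_ch, in_ch, kH, kW)
q, s = quantize_per_channel(w)
print("max reconstruction error:", np.abs(w - dequantize(q, s)).max())
```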
As Coreteks said in the video I posted earlier, GPUs are one way to execute AI workloads, but not the only way; what Nvidia really offers is an excellent software ecosystem, CUDA for example, that makes it easy to develop for their GPUs. But what happens when others building frameworks for AI workloads on different, more efficient AI hardware architectures start to catch up with Nvidia? That could spell doom for Nvidia in AI.