Artificial Intelligence thread

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
The director of the Pengcheng Lab revealed that in Oct 2020 they built the "Pengcheng Cloud Brain-2" cluster with 4,000 Huawei Ascend and Kunpeng processors, delivering 1000P FLOPS of compute. According to him, that is about the same as 4,000 Nvidia A100 GPUs combined.

By Jun 2022, when construction was completed, more processors from other suppliers had been added, raising the total to 2300P.

The next version under construction, "Pengcheng Cloud Brain-3", will provide 16000P FLOPS of compute. He expects it to be ready by the end of 2024 or the start of 2025.
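A quick back-of-envelope check of those figures (a sketch only; that the 1000P number refers to FP16 throughput is my assumption):

```python
# Back-of-envelope check of the Cloud Brain-2 numbers (assumptions noted inline).
PFLOPS = 1e15

cluster_flops = 1000 * PFLOPS   # claimed 1000P across ~4,000 chips
chips = 4000

per_chip_tflops = cluster_flops / chips / 1e12
print(f"{per_chip_tflops:.0f} TFLOPS per chip")  # prints "250 TFLOPS per chip"

# An Ascend 910 is rated ~256 TFLOPS FP16 and an Nvidia A100 ~312 TFLOPS
# dense FP16, so "about the same as 4,000 A100s" is in the right ballpark
# (within ~25%), assuming the 1000P figure is FP16 throughput.
```

The implied 250 TFLOPS per chip lines up well with the Ascend 910's rated FP16 throughput, which makes the director's comparison plausible.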

I literally posted this several times on this thread and the semi thread; just look back a couple of pages.
 

Eventine

Junior Member
Registered Member
Industrial AI is very important and it'd be great to see China lead there.

But I want to mention that the hype around generative AI is based on the promise of Artificial General Intelligence, which OpenAI has publicized as being "within this decade." Now, I personally think that's a load of marketing talk, but the fact is that you're never going to get to Artificial General Intelligence with a targeted industrial model. You need a more general intelligence system, and so far the closest anything has gotten to that is multi-modal LLMs.

Thus, the problem that generative AI is attempting to address - whether it succeeds or not - is at a higher abstraction level than industrial AI. It's trying to answer "how to learn" rather than "what to learn." If OpenAI realizes what it promises, then in theory you'll get an AI that can do anything. That is what "Artificial General Intelligence" means - an AI agent capable of learning and executing any task at human or superhuman levels.

If such a technology can be realized and the West manages to achieve it first, specialized industrial AI models will become obsolete: you could just tell the Artificial General Intelligence to "build me a model for weighing cows" and it'll do that faster than any Huawei engineering team can. It'll collect the training data it needs, do the training & programming required, test, validate, and fine-tune the model all by itself. And it'll do that for ANY task you give it.

This is why the West is so desperate to achieve Artificial General Intelligence first, and why they've fast-tracked sanctions on China's chip & AI industries, since China is their only competitor in the field. They believe it is literally the race for the singularity - the god mode of technological evolution where the civilization that achieves it first will dominate forever. The stakes are so high they can't risk China having even a 1% chance of realizing Artificial General Intelligence before they do.

We'll know in the coming years whether they're right or wrong. The field of AI has promised the world before, and failed spectacularly to deliver. If it fails again and we enter a new AI winter, the massive effort the West invested in the race to Artificial General Intelligence will probably mean that China dominates industrial AI. So that's the gamble: either in the next 10-20 years we'll see the rise of Artificial General Intelligence and the beginning of the technological singularity, or we'll see it fail, with a huge waste of the US's resources facilitating its decline.

If I were China, I'd make it a critical goal to be a "fast follower" in generative AI, just in case the US is right, but allow the Americans to waste the most resources chasing it.
 

jshw31

New Member
Registered Member

Quite a few, if not most, of the top ML/AI experts doubt the long-term potential of LLMs. While LLMs have undoubtedly produced some pretty amazing tools, and will continue to do so in the near future, there are many structural limitations that make it very unlikely they'll be the main architecture for AGI. We'll probably see a continued push on LLMs for the foreseeable future, though, as large AI hype/investments are still market-driven. LLMs are currently seen as the safe choice, as they've already been demonstrated to provide pretty good results. Still, success in LLMs or any other form of AI/ML will only draw more funding toward the field, and even if you don't believe in LLMs, they will still likely play at least an indirect role in other AI/ML developments. I would say China's focus on industrial AI produces much the same effect, likely with more efficacy. The increased revenue will translate into increased AI/ML R&D.

It's pretty hard to definitively gauge China's progress in AGI compared to the US (and, I guess, technically the rest of the world), since LLMs are probably not the most accurate measuring stick. Honestly, we are still very much in the wild wild west when it comes to AGI (true AGI, not statistical learning rebranded as ML rebranded as AI), so investment and general scientific output are probably just as good a measuring stick, if not better.
 

Eventine

Junior Member
Registered Member
I agree that LLMs are not likely to be the final architecture for artificial general intelligence. But I'd be really surprised if they didn't play an important role, both as a stepping stone and as a component of artificial general intelligence.

The reason is that if the last few decades of AI research have shown anything, it's that data is critical to achieving intelligence. All foundation models, whether LLMs, diffusion models, GANs, or anything else, rely on having mountains of diverse training data. Once the framework embeddings and initial weights are trained, sure, few-shot or zero-shot fine-tuning is possible. But you'll get nowhere without building the foundation.
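The "foundation first, fine-tune later" point can be sketched in a toy example: freeze a stand-in for a pretrained encoder and train only a tiny head on a handful of labeled examples. Everything here (the random "pretrained" weights, the labels, the sizes) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained "foundation" encoder: its weights are FROZEN.
W_frozen = rng.normal(size=(16, 8))

def encode(x):
    """Frozen feature extractor (the expensively pretrained part)."""
    return np.tanh(x @ W_frozen)

# Few-shot fine-tuning: learn only a small linear head on 8 labeled examples.
X = rng.normal(size=(8, 16))
y = (X[:, 0] > 0).astype(float)           # toy labels

feats = encode(X)
head = np.zeros(8)
for _ in range(500):                      # plain logistic regression by gradient descent
    p = 1 / (1 + np.exp(-feats @ head))
    head -= 0.5 * feats.T @ (p - y) / len(y)

acc = ((feats @ head > 0) == (y > 0.5)).mean()
```

The expensive part (here faked by `W_frozen`) is the foundation; the fine-tuning step touches only a vector of 8 weights, which is why few-shot adaptation is cheap once the foundation exists.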

So how do you get data? Well, language is how humans communicate. Being able to talk to humans, learn from humans, and obtain the knowledge we've built up over thousands of years is the easiest, most efficient way of obtaining data - and it relies on understanding natural language. It's also how you'd communicate with humans to understand our goals and objectives; an AI system that cannot communicate effectively with humans is not useful.

For these reasons, I believe LLMs have an important role to play in the search for artificial general intelligence. They're not the answer, but a component of the answer. What remains is to close the self-learning, self-organizing loop, such that the AI system can do its own discovery and evolution. No one has accomplished this yet, but everyone recognizes its importance. It's why academics have been exploring techniques like reinforcement learning and evolutionary algorithms.
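As a concrete (toy) illustration of the evolutionary-algorithm idea, here is a minimal (1+λ) evolution strategy that "discovers" a target vector by mutation and selection alone, with no gradients. The target and fitness function are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
target = np.array([1.0, -2.0, 0.5])       # the "task" to be discovered

def fitness(candidate):
    # Higher is better: negative squared distance to the target.
    return -np.sum((candidate - target) ** 2)

# (1 + lambda) evolution: keep the best individual, mutate it into offspring.
best = np.zeros(3)
for generation in range(200):
    offspring = best + rng.normal(scale=0.1, size=(32, 3))  # mutate
    scores = [fitness(c) for c in offspring]
    challenger = offspring[int(np.argmax(scores))]
    if fitness(challenger) > fitness(best):                 # select
        best = challenger

error = float(np.sum((best - target) ** 2))
```

No gradient of the fitness function is ever computed; selection pressure alone drives `best` toward the target, which is why such methods are attractive for problems where the objective is not differentiable.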

I don't know if we'll see artificial general intelligence realized in our lifetime. It may be that a major revolution in hardware is required for the software to function, much like how neural networks were considered a lost cause until advancements in chips, distributed computing, and data gathering allowed for the training of billion-plus-parameter "deep" models.

It may be that quantum computing is necessary before we can realize the promise of artificial general intelligence - yet still, we're a lot closer to that today than we've ever been, and advancements in AI go hand in hand with advancements in hardware, because the two are mutually reinforcing. Better AI allows for faster hardware advancements, and faster hardware allows for even better AI.

Indeed, this kind of effect is exactly what was predicted, decades ago, for the final stage before the singularity - that technological change would accelerate as we approach the event horizon, due to the interaction of so many multiplicative effects across different but mutually reinforcing industries. It took centuries to go from steam engines to modern cars, but only fifty years from room-sized computers to mobile devices, and only a few decades from dial-up to 5G networks. In times like these, it's hard not to believe that anything can happen, and that we must, as such, be prepared for everything.
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
Another example from today.
iFlyTek is releasing their SparkDesk LLM to the Southeast Asian market. They also said the following today:

星火大模型 is SparkDesk. Keep in mind iFlyTek is on the entity list, so it made a lot of sense for them to go with domestic options, and they clearly have.

Since the Spark Model was announced 2 months ago, iFlyTek has added 850k developers!

AI development is really taking off. Back at the end of June, iFlyTek had said they had 120k developers internationally, including 40k in Southeast Asia.

 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member


iFlyTek did quite well on the back of the AI boom. It had 7.8B in revenue in the first half, as well as 3.1B in profit. It saw 45% growth in developer count over the past year and now has close to 5 million developers on its platform.

This is from Huawei's Weibo account, summarizing their AI conference.

26 companies joined their platform to build LLMs using Ascend AI hardware and developer tools.

They now have 30+ hardware partners, 12,000 ISVs, and 2,500 industrial AI solutions across different industries, plus over 30 large models, close to half of China's total.

Wuhan University unveiled the LuoJia LLM, the world's first dedicated framework for intelligent interpretation of remote-sensing images.

That, along with the other models that got unveiled.

Pangu models are in finance, manufacturing, pharmaceutical research, mining, and railroads.
 

sunnymaxi

Major
Registered Member

The Tianqiao & Chrissy Chen Institute is to spend RMB1 billion on AI in brain-science research.

The Tianqiao and Chrissy Chen Institute (TCCI 陈天桥雒芊芊脑科学研究院), a non-profit organization founded by Chen Tianqiao, chairman of global investment firm Shanda Group, and his wife Chrissy Luo, plans to invest RMB1 billion ($138 million) to support the use of artificial intelligence in brain science research, Yicai Global reported on July 7.

The TCCI will set up an AI brain science laboratory, MindX, with startup capital of $100 million, Gerwin Schalk, director of the Chen Frontier Lab for Applied Neurotechnology, said at the World Artificial Intelligence Conference in Shanghai on July 6.

The institute will also establish frontier labs for specific application scenarios, including ‘AI + sleep’ and ‘AI + anti-aging,’ with startup capital of RMB50 million ($6.9 million) each, he added.

MindX will recruit two specialist directors in neuroscience and AI to lead the development of AI through deep research on aspects of the human brain, including learning, memory, cognition, and emotion, and use AI to unravel the brain’s mysteries.

More investment for the labs will be forthcoming in the future, Yicai Global learned from the TCCI.

Founded in 2016, California-based TCCI funds brain science research projects, including mental health, sleep, and neurodegenerative diseases. In the same year, Chen announced a $1 billion donation to the California Institute of Technology for brain science research.

The TCCI has also joined the generative AI push, announcing in March that it would recruit AI talent from around the world, especially algorithm engineers with experience in large language models and natural language processing.

NeuroXess, a brain-computer interface research and development firm founded by the TCCI, shared some initial findings at the WAIC, which ended on July 8. The Shanghai-based startup has successfully implanted brain-computer interfaces in a two-year-old Labrador retriever and a seven-year-old rhesus monkey, said the Yicai Global report.
 

sunnymaxi

Major
Registered Member
MIIT: China to accelerate AI industry policies

China will accelerate the research and formulation of artificial intelligence industry policies and increase innovation and research on foundational technologies such as computing power and algorithms, the country's top industry regulator MIIT said.

Vice-Minister of Industry and Information Technology Xu Xiaolan said that with the joint efforts of all parties, China's AI industry has flourished, reported China Daily on July 7.

The AI industrial system is gradually improving. The market size of the core AI industry hit RMB500 billion ($69 billion), and there are over 4,300 AI enterprises, Xu said on July 6 at an AI conference in Shanghai, reported China Daily.

Innovative achievements such as smart chips and universal large language models are constantly emerging, Xu added in the China Daily report.
 

tokenanalyst

Brigadier
Registered Member
Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning

ABSTRACT Multimodal knowledge graphs (MKGs), which intuitively organize information in various modalities, can benefit multiple practical downstream tasks, such as recommendation systems, and visual question answering. However, most MKGs are still far from complete, which motivates the flourishing of MKG reasoning models. Recently, with the development of general artificial architectures, the pretrained transformer models have drawn increasing attention, especially for multimodal scenarios. However, the research of multimodal pretrained transformer (MPT) for knowledge graph reasoning (KGR) is still at an early stage. As the biggest difference between MKG and other multimodal data, the rich structural information underlying the MKG still cannot be fully leveraged in existing MPT models. Most of them only utilize the graph structure as a retrieval map for matching images and texts connected with the same entity. This manner hinders their reasoning performances. To this end, we propose the graph Structure Guided Multimodal Pretrained Transformer for knowledge graph reasoning, termed SGMPT. Specifically, the graph structure encoder is adopted for structural feature encoding. Then, a structure-guided fusion module with two different strategies, i.e., weighted summation and alignment constraint, is first designed to inject the structural information into both the textual and visual features. To the best of our knowledge, SGMPT is the first MPT model for multimodal KGR, which mines the structural information underlying the knowledge graph. Extensive experiments on FB15k-237-IMG and WN18-IMG, demonstrate that our SGMPT outperforms existing state-of-the-art models, and prove the effectiveness of the designed strategies.
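Going only by the abstract, the "weighted summation" fusion strategy presumably looks something like the following sketch: structural features from the graph encoder are blended into the textual and visual features before reasoning. The dimensions, names, and fixed blend weight are all invented here; the real SGMPT fusion (and its alignment-constraint variant) is certainly more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64                                     # shared embedding size (assumed)

# Per-entity features from the three encoders described in the abstract.
text_feat   = rng.normal(size=d)           # from the textual encoder
visual_feat = rng.normal(size=d)           # from the visual encoder
struct_feat = rng.normal(size=d)           # from the graph structure encoder

def weighted_sum_fusion(modal, structural, alpha=0.3):
    """Inject structural information into a modality feature via a
    weighted summation (alpha would be learned in practice; fixed here)."""
    return (1 - alpha) * modal + alpha * structural

fused_text   = weighted_sum_fusion(text_feat, struct_feat)
fused_visual = weighted_sum_fusion(visual_feat, struct_feat)
```

The point of the design, per the abstract, is that the graph structure contributes directly to both modalities' representations instead of serving only as a retrieval map between images and texts.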

 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
From HW before HR
Bidding farewell to Nvidia and ChatGPT worship, Huawei has taken a different path with large AI models. Many companies may well find that the Pangu AI model is what they actually wanted; "writing no poems, only doing work" is more likely the real business direction for AI.

The Pangu AI model published an article in the main journal of Nature, which shook the industry. The three scientists who reviewed the article were all European and American meteorologists, and knowledgeable commenters say the industry acknowledges that Europe leads China by at least ten years in weather-forecasting technology. Yet once Huawei's Pangu model entered the field, it surpassed Europe's level in medium-range forecasting.

Pangu 3.0 offers customers a series of foundation models at 10 billion, 38 billion, 71 billion, and 100 billion parameters, matching customers' diverse industry requirements for different scenarios, latencies, and response speeds. That is more than enough for Pangu to enter every industry.

What is especially valuable is that the Pangu model has achieved full-stack independent innovation, from compute (Ascend compute, whose underlying architecture Huawei also created itself) and chip enablement, through the AI framework (the MindSpore AI computing framework), to the AI platform (the ModelArts AI development pipeline). This means Huawei's AI R&D knows, end to end, where its strengths and weaknesses lie and how to improve, and has the means to improve - a chip-device-cloud combination that is unique in the industry.

Pangu models are mainly applied in industrial and scientific fields, delivering enormous value in finance, meteorology, manufacturing, drug R&D, coal mining, railways, and many other industries; that is very different from ChatGPT, which is mainly used for chat.
For example, developing a new drug used to take an average of 10 years and cost $1 billion. The Pangu drug-molecule model helped Professor Liu Bing's team at the First Affiliated Hospital of Xi'an Jiaotong University discover the first antibiotic worldwide in 40 years with a new target and of a new class, shortening the lead-compound development cycle to one month and cutting R&D costs by 70%. That is enormously valuable: in pharmaceutical research, time is money and efficiency is life.

Based on Ascend AI, more than 30 large models have been natively incubated and adapted, supporting roughly half of China's large-model innovation. Huawei has signed up more than 30 hardware partners and over 1,200 ISVs, jointly launching over 2,500 industry AI solutions that serve core scenarios in telecom, internet, finance, and other industries at scale.
For example, COMAC's Shanghai Aircraft Design and Research Institute built the "Dongfang Yifeng" (东方.翼风) large model on Ascend AI, which won this year's top AI-conference award, the SAIL award. Combining AI with general-purpose computing, it performs 3D supercritical-wing fluid simulation and can quickly and accurately simulate a large aircraft's full range of flight conditions in one thousandth of the original time. This revolutionary breakthrough will also accelerate leapfrog innovation in aerodynamics for industries vital to the economy and people's livelihoods.

Huawei's AI developer count has doubled from last year's 900,000 to more than 1.8 million. "The key to AI development is to go deep and get practical, empowering industrial upgrading," Huawei rotating chairman Hu Houkun said at the World Artificial Intelligence Conference on the 6th.
Huawei's large AI models targeted industrial enablement from the very start. China's wealth of industrial scenarios is its advantage over the US in developing industrial AI: whatever gets built immediately has applications to be tested on, which makes a big difference.
Overall, Huawei's large AI models are forging a unique path in the service of industry, which will have a huge impact on the development of China's manufacturing. Above all, it tells the Americans that China can succeed at large AI models without Nvidia GPUs.

Obviously a huge HW supporter posting this. But it's impressive to see them go from 900k to 1.8 million developers in 1 year with the Ascend AI platform.

In terms of AI for industries, Ascend/Pangu appears to offer the best platform, since half of the LLMs in China are developed on the Pangu platform. It also has probably the most complete set of models that customers can base their own customized models on. They support 10, 38, 71 & 100 billion parameters.

Some of the most well-known use cases are drug development, finance, COMAC, NLP, image recognition, communications, and such.

They just signed up Talkweb on their platform
拓维信息 (Talkweb Information, 002261) has officially signed a cooperation agreement with Huawei, becoming a Pangu large-model ecosystem partner.

More have been signing up to Huawei Cloud as partners since last week.

万达信息 (Wonders Information) signed a Pangu large-model cooperation agreement with Huawei Cloud Computing Technologies Co. ("Huawei Cloud"), officially becoming a Pangu large-model ecosystem partner.

恒生电子 (Hundsun Technologies, 600570) has officially signed an AI large-model joint innovation agreement with Huawei Cloud.
Reportedly, LightGPT has accumulated and processed a professional financial corpus and uses a more efficient and stable large-model training approach. It used over 400 billion tokens of financial-domain data and over 40 billion tokens of multilingual reinforcement data as a secondary pre-training corpus, and supports instruction fine-tuning for more than 80 finance-specific tasks, giving LightGPT accurate understanding of the financial domain.
Looks like these guys are better known for financial AI.

all the news updates on Huawei cloud AI can be found here
 