Artificial Intelligence thread

sunnymaxi

Captain
Registered Member

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member

Huawei Cloud CEO talked about AI and computation at the World Internet Conference.

Says AI compute demand will reach 105 ZFLOPS by 2030, about 500x the current amount (rough arithmetic check below).

The Ascend/Kunpeng ecosystem has more than 4.4 million developers, 6,000 partners and 72 industry-education centers.

Huawei has partnered with more than 30 industry leaders on high-quality data, using AI to mine the application value of that data.

Huawei provides AI-for-industries capabilities to these partners.

Ascend AI cloud services allow 100B-parameter large models to be developed and deployed in just one month.
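A quick back-of-the-envelope check of that compute figure, since the talk only gives the 2030 target and the multiplier (the unit conversion is mine; 1 ZFLOPS = 1,000 EFLOPS):

Code:
# Back-of-the-envelope check of the "105 ZFLOPS by 2030, ~500x today" claim.
target_2030_zflops = 105                  # claimed AI compute demand in 2030, in ZFLOPS
growth_factor = 500                       # claimed multiple over today's demand
implied_today_zflops = target_2030_zflops / growth_factor
print(f"Implied current demand: {implied_today_zflops:.2f} ZFLOPS "
      f"(~{implied_today_zflops * 1000:.0f} EFLOPS)")
# -> Implied current demand: 0.21 ZFLOPS (~210 EFLOPS)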
 

gadgetcool5

Senior Member
Registered Member

This guy thinks China may fall permanently behind the US in Generative AI because of onerous regulations. He gives three reasons:
1. In China, access to the GitHub equivalent for AI models is blocked because the data hosted there is deemed "too sensitive", and this is screwing Chinese developers.
2. In China, new AI models take 2-3 months to be approved, which is similar to the license Raj in India in the 1950s. In the AI world, 2-3 months is a very long time and this will discourage a lot of startups.
3. The Chinese language as it is used online is fundamentally more confusing due to strategies that people use to avoid censorship, and therefore it is harder to train AI models.

Do any of these points have validity?

I do notice that although Chinese LLMs were finally allowed to be released to the public by regulators at the end of August, none of them have been offered overseas in other languages. That means that while ChatGPT and other U.S. models are able to take advantage of inputs, training, and usage from people and organizations around the world, Chinese models cannot. They are trapped within the wall of China. Is this something that might eventually change? How would a non-Chinese person, for instance, use ErnieBot?
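On the last question: as far as I know, the consumer ErnieBot app needs a mainland phone number, so the realistic route for an outsider is Baidu's cloud API. A minimal sketch, assuming Baidu's qianfan Python SDK and a Baidu Cloud account with API credentials; the calls, credential names and model identifier here are my best recollection and may not match the current docs:

Code:
# Hedged sketch: calling ErnieBot through Baidu's cloud API via the qianfan SDK
# (pip install qianfan). Credential names, model ID and response field are assumptions.
import os
import qianfan

os.environ["QIANFAN_AK"] = "your-api-key"          # placeholder Baidu Cloud credentials
os.environ["QIANFAN_SK"] = "your-secret-key"

chat = qianfan.ChatCompletion()
resp = chat.do(
    model="ERNIE-Bot",                             # assumed model identifier
    messages=[{"role": "user", "content": "Please introduce yourself in English."}],
)
print(resp["result"])                              # assumed field holding the reply text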
 

tokenanalyst

Brigadier
Registered Member
1. In China, access to the GitHub equivalent for AI models is blocked because the data hosted there is deemed "too sensitive", and this is screwing Chinese developers.
-Git sites like GitHub, GitLab and China's Gitee are for storing code, not model weights; I don't know what the Chinese equivalent of Hugging Face is. But Chinese-made models are pretty popular on Hugging Face: Qwen, Yi and ChatGLM have been trending a lot there (quick download sketch at the end of this post).

2. In China, new AI models take 2-3 months to be approved, which is similar to the license Raj in India in the 1950s. In the AI world, 2-3 months is a very long time and this will discourage a lot of startups.

-I don't know what the 1950s "License Raj" has to do with a 2023 large AI model, but OK, two months doesn't sound that bad.

3. The Chinese language as it is used online is fundamentally more confusing due to strategies that people use to avoid censorship, and therefore it is harder to train AI models.

-All I can say is that, for their size, Chinese LLMs are pretty smart; WizardCoder, WizardLM, ChatGLM, Yi and Qwen are among the smartest models I have tested.

-In terms of censorship, the Chinese models are probably censored on China-sensitive topics like territorial issues, NSFW material and terrorism. But Western models are even more censored than the Chinese ones, because apart from the NSFW and terrorism filtering, you have the whole woke agenda incorporated into these models as well.
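To make the first point concrete: the weights of these Chinese models sit on the Hugging Face hub and load like any other checkpoint. A minimal sketch with the transformers library; the repo ID is just the name I remember for Qwen's chat model, so check the model card for current usage:

Code:
# Hedged sketch: pulling a Chinese open-weights model from the Hugging Face hub.
# The repo ID is an assumption; trust_remote_code is needed for models that ship custom code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Qwen/Qwen-7B-Chat"                      # assumed hub repo name
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("What is the tallest mountain on Earth?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point is just that nothing on the hosting side stops Chinese developers from publishing or pulling weights there.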
 

Maikeru

Major
Registered Member
Can confirm. I gave up on ChatGPT when it censored itself on questions regarding group differences in crime rates, academic achievement, etc. At first you could get round this using the DAN mode prompt, but this got shut down pretty quickly. When it did answer in DAN mode at first, it got very weird and "talked" in a very familiar vernacular, and kept calling me "hunny". Most odd. All societies have their taboos and shibboleths; in the current Western world these concern race and "gender".
 

tygyg1111

Captain
Registered Member
I just wanted to know what the hell happened to Shinzo Abe. #whereisshinzo
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member

This guy thinks China may fall permanently behind the US in Generative AI because of onerous regulations. He gives three reasons:
1. In China, access to the GitHub equivalent for AI models is blocked because the data hosted there is deemed "too sensitive", and this is screwing Chinese developers.
2. In China, new AI models take 2-3 months to be approved, which is similar to the license Raj in India in the 1950s. In the AI world, 2-3 months is a very long time and this will discourage a lot of startups.
3. The Chinese language as it is used online is fundamentally more confusing due to strategies that people use to avoid censorship, and therefore it is harder to train AI models.

Do any of these points have validity?

I do notice that although Chinese LLMs were finally allowed to be released to the public by regulators at the end of August, none of them have been offered overseas in other languages. That means that while ChatGPT and other U.S. models are able to take advantage of inputs, training, and usage from people and organizations around the world, Chinese models cannot. They are trapped within the wall of China. Is this something that might eventually change? How would a non-Chinese person, for instance, use ErnieBot?
why do you take these things seriously?

There is actually one major advantage Chinese LLMs have over Western ones: access to data inside China. I was talking to Taylor about this recently, and he mentioned that China will probably just block Western LLMs' access to Chinese data. That would end it for Western LLMs in terms of training on Chinese-language content.

There hasn't, by the way, been any demonstration that user input actually helps LLMs. LLMs right now improve by digesting online content, books, print media and the like; a rough sketch of what that looks like is at the end of this post.

ErnieBot, according to Taylor, performed better than even GPT-4 on the tests that he ran.
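On the point about digesting content rather than user input: the training signal LLMs actually learn from is next-token prediction over a text corpus. A minimal sketch of one such update step, using GPT-2 and a two-line toy corpus as stand-ins (both my own placeholders, not how any production model is trained at scale):

Code:
# Hedged sketch: the next-token-prediction update that "digesting content" boils down to.
# GPT-2 and the toy corpus are stand-ins; real pretraining uses vastly larger models/data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

corpus = [
    "A paragraph scraped from some web page ...",
    "A passage from a digitised book ...",
]
for text in corpus:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss   # predict each next token
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print("loss after last step:", float(loss))

Chat feedback from users can be folded in later via fine-tuning, but the bulk of the capability comes from this kind of corpus digestion.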
 

tamsen_ikard

Junior Member
Registered Member
All this hype about LLMs is just a pissing contest. There are very few practical applications of LLMs.

What is more important is the use of AI in everyday applications to automate tasks: automatic recognition and detection via vision/sensors, autonomous driving of cars, drones and other vehicles, and automatic detection of patterns of behavior and actions (a rough sketch of the vision case is below).

Compared to these things, LLMs are doing nothing useful in the world.
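For the sake of illustration, here is roughly what "automatic recognition and detection via vision" looks like with an off-the-shelf pretrained detector; the model choice and image path are my own placeholders, not a claim about any particular deployed system:

Code:
# Hedged sketch: object detection with a pretrained torchvision detector.
# "photo.jpg" is a placeholder; any RGB image will do.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained weights
model.eval()

img = convert_image_dtype(read_image("photo.jpg"), torch.float)
with torch.no_grad():
    det = model([img])[0]                            # dict with boxes, labels, scores
for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
    if score > 0.5:
        print(int(label), [round(float(v), 1) for v in box], round(float(score), 2))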
 

tokenanalyst

Brigadier
Registered Member
why do you take these things seriously?

There is actually one major advantage Chinese LLMs have over Western ones: access to data inside China. I was talking to Taylor about this recently, and he mentioned that China will probably just block Western LLMs' access to Chinese data. That would end it for Western LLMs in terms of training on Chinese-language content.

There hasn't, by the way, been any demonstration that user input actually helps LLMs. LLMs right now improve by digesting online content, books, print media and the like.

ErnieBot, according to Taylor, performed better than even GPT-4 on the tests that he ran.
There is some confusion about datasets. Most big, high-quality datasets are behind the iron-curtain walls of the AI companies. Facebook hasn't released its dataset; it just released the weights of its LLM and called it "open": not the code or the dataset, just the weights. The same goes for Google, Mistral, Anthropic, 01.AI, Alibaba, Baidu and Tsinghua. The "open" datasets that are "free" are much smaller, usually for academic use, and no amount of access to "GitHub" will change that.
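You can see this directly in what an "open" release actually ships: the hub repo contains weight shards, tokenizer and config files, and nothing resembling the training corpus. A minimal sketch with the huggingface_hub client; the repo ID is just an example, and gated releases additionally require logging in and accepting a license:

Code:
# Hedged sketch: listing what an "open weights" release actually publishes.
# Repo ID is an example; expect *.safetensors / *.bin shards plus tokenizer/config files,
# with no pretraining dataset anywhere in the repo.
from huggingface_hub import list_repo_files

for name in sorted(list_repo_files("Qwen/Qwen-7B-Chat")):   # assumed repo name
    print(name)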
 