Artificial Intelligence thread

Overbom

Brigadier
Registered Member
I share a lot of Chomsky's views about these systems that he outlines in this article. Yes, I anticipate all your objections: "Who's Chomsky? Does he work at Nvidia? OpenAI?" I still think his take is the correct one.

By the way, there's a simple retort to the proposition that LLMs learn deep insights about the language they were trained on: have the LLM formulate this insight into a theory of linguistics. That would be a strong indication of actual intelligence, not just autocomplete.
I appreciate that you cited something, even though it's a New York Times opinion article. That's quite an old article (I think from even before GPT-4 was released) that gave examples from GPT-3 (and anyone will tell you how bad that was; btw, not even GPT-3.5...). In any case, I consider the real pros' opinions much more credible than whatever these people are saying.

Your "autocomplete" claim has been directly countered by Ilya Sutskever but because I am not of his caliber on AI, I don't have anything of value to add on his comments. Its your opinion of course, and you know what they say about opinions, everyone has one

But anyway, let's look at something interesting from Google's recent report:



And circling back to the original world model argument, some months ago Runway (a leading text2vid company) published this:

Introducing General World Models​

by Anastasis Germanidis / Dec 11, 2023
You can think of video generative systems such as Gen-2 as very early and limited forms of general world models. In order for Gen-2 to generate realistic short videos, it has developed some understanding of physics and motion. However, it's still very limited in its capabilities, struggling with complex camera or object motions, among other things.
To build general world models, there are several open research challenges that we’re working on. For one, those models will need to generate consistent maps of the environment, and the ability to navigate and interact in those environments. They need to capture not just the dynamics of the world, but the dynamics of its inhabitants, which involves also building realistic models of human behavior.
So now we have Nvidia, DeepMind, OpenAI, and Runway all talking about world models, but somehow internet people still proclaim that they are all wrong.
 

fatzergling

Junior Member
Registered Member
You're taking this very personally. Relax. I don't need to have advanced degrees in ML and publish high-impact papers to have an opinion on these things and how they work. Despite this notable absence from my resume, I still think neural networks trained by backpropagation are fundamentally limited and will never achieve AGI.

I share a lot of Chomsky's views about these systems that he outlines in this article. Yes, I anticipate all your objections: "Who's Chomsky? Does he work at Nvidia? OpenAI?" I still think his take is the correct one.

By the way, there's a simple retort to the proposition that LLMs learn deep insights about the language they were trained on: have the LLM formulate this insight into a theory of linguistics. That would be a strong indication of actual intelligence, not just autocomplete.
For good or for ill, Chomsky's methods fell out of favor in natural language processing years ago. Statistical methods performed better on benchmarks and eventually spawned the deep learning models hailed as "AI". Since Chomsky's theories could not generate a comparable model, many AI people don't believe him.
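To make "statistical methods" concrete, here is a toy sketch (the corpus is invented for illustration): a bigram model predicts the next word from raw counts, with no grammar rules anywhere.

# The smallest possible "statistical method": predict the next
# word purely from bigram counts, no grammar rules involved.
from collections import Counter, defaultdict

corpus = "the dog chased the cat . the cat saw the dog .".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    # most frequent observed continuation (ties: first seen wins)
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # 'dog', learned from counts alone

Scale that idea up from bigram counts to neural networks trained on the whole internet and you get today's LLMs; at no point does anyone write down a rule of syntax.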

I appreciate that you cited something, even though it's a New York Times opinion article. That's quite an old article (I think from even before GPT-4 was released) that gave examples from GPT-3 (and anyone will tell you how bad that was; btw, not even GPT-3.5...). In any case, I consider the real pros' opinions much more credible than whatever these people are saying.

Your "autocomplete" claim has been directly countered by Ilya Sutskever but because I am not of his caliber on AI, I don't have anything of value to add on his comments. Its your opinion of course, and you know what they say about opinions, everyone has one

But anyway, let's look at something interesting from Google's recent report:



And circling back to the original world model argument, some months ago Runway (a leading text2vid company) published this:



So now we have Nvidia, DeepMind, OpenAI, and Runway all talking about world models, but somehow internet people still proclaim that they are all wrong.
The current generation of ML scientists are all trained in the deep learning paradigm. If you look at academia, the acronym AI/ML demonstrates how machine learning (more specifically, deep learning) hijacked the AI community by achieving state of the art on tasks in vision and natural language. And nobody can doubt that LLMs have impressive capabilities. However, these domains are perfect for deep learning because of:
1. the ease of obtaining data, real or synthetic;
2. the lack of any good logical model for the tasks (Chomsky's theories are either ignored or rejected outright here).
There are other domains where DL has not managed to make inroads. Just because it's possible to train a multimodal video model on who knows how much synthetic video data doesn't mean that a big neural network is the key to AGI or whatever people are speculating nowadays.
 

ember

New Member
Registered Member
I find that the people most impressed by large language models are the liberals/leftists because they can be perfectly simulated by a chat bot. Both are talking complete nonsense while being utterly convinced that they are right.

For example, I recently asked ChatGPT to give me a mathematical function that includes the gamma function and evaluates to 24.4. Not only were the answers not even close; at one point it claimed that 24.4 is not a positive real number. Then it gave me a Python snippet to prove its function's correctness, but the program produced a value of about 4.9. ChatGPT doesn't even evaluate its own output.
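The funny thing is that a correct answer is trivial to construct and verify; here is a minimal sketch with scipy (the bracket [5, 6] is my own choice, since gamma(5) = 24 and gamma(6) = 120):

# Solve gamma(x) = 24.4 numerically, then verify the result.
from scipy.special import gamma
from scipy.optimize import brentq

# gamma is increasing on [5, 6], so the root of gamma(x) - 24.4
# sits just above x = 5; brentq finds it on that bracket.
x = brentq(lambda t: gamma(t) - 24.4, 5, 6)
print(x, gamma(x))  # gamma(x) comes back as 24.4, as required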

The difference between a chat bot and a liberal is that the bot will always apologize profusely when I point out an error.
 

Overbom

Brigadier
Registered Member
I find that the people most impressed by large language models are the liberals/leftists because they can be perfectly simulated by a chat bot. Both are talking complete nonsense while being utterly convinced that they are right.

For example, I recently asked ChatGPT to give me a mathematical function that includes the gamma function and evaluates to 24.4. Not only were the answers not even close; at one point it claimed that 24.4 is not a positive real number. Then it gave me a Python snippet to prove its function's correctness, but the program produced a value of about 4.9. ChatGPT doesn't even evaluate its own output.
ChatGPT? It's garbage; there are open-source models out there that are better. You should use GPT-4 instead (costs real money though).

As for your problem, it doesn't surprise me at all. LLMs, due to their token-based architecture, are inherently weak at maths.
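To see why, here is a minimal sketch using the tiktoken package (assuming GPT-4's cl100k_base encoding; the example strings are mine):

# Numbers get split into arbitrary multi-digit chunks, not single
# digits, so the model never sees aligned columns to carry over.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4
for text in ["24.4", "1234567", "gamma function"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {pieces}")
# e.g. '1234567' comes out as chunks like ['123', '456', '7']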

However, there are credible rumours that all maths problems up to (and including) grade-school level were solvable by a new internal algorithm. Hopefully it will be implemented when GPT-5 is released. From the Reuters report:
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
 

GZDRefugee

Junior Member
Registered Member
You're taking this very personally. Relax. I don't need to have advanced degrees in ML and publish high-impact papers to have an opinion on these things and how they work. Despite this notable absence from my resume, I still think neural networks trained by backpropagation are fundamentally limited and will never achieve AGI.

I share a lot of Chomsky's views about these systems that he outlines in this article. Yes, I anticipate all your objections: "Who's Chomsky? Does he work at Nvidia? OpenAI?" I still think his take is the correct one.

By the way, there's a simple retort to the proposition that LLMs learn deep insights about the language they were trained on: have the LLM formulate this insight into a theory of linguistics. That would be a strong indication of actual intelligence, not just autocomplete.
If you're gonna cite Chomsky in regard to LLMs, you should definitely cite his book Syntactic Structures (1957). To this day, Chomsky Normal Form is still highly relevant in the analysis of syntax.
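Case in point, a toy sketch (the grammar here is made up): once a context-free grammar is in Chomsky Normal Form, the classic CYK algorithm recognizes any sentence in O(n^3) with a simple dynamic program.

# CYK recognizer over a CNF grammar: rules are either A -> B C
# (binary) or A -> 'word' (terminal).
from itertools import product

binary = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}
terminal = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}

def cyk(words):
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(terminal.get(w, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for b, c in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary.get((b, c), set())
    return "S" in table[0][n - 1]

print(cyk("the dog chased the cat".split()))  # True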
 

FairAndUnbiased

Brigadier
Registered Member
I find that the people most impressed by large language models are the liberals/leftists because they can be perfectly simulated by a chat bot. Both are talking complete nonsense while being utterly convinced that they are right.

For example, I recently asked ChatGPT to give me a mathematical function that includes the gamma function and evaluates to 24.4. Not only were the answers not even close; at one point it claimed that 24.4 is not a positive real number. Then it gave me a Python snippet to prove its function's correctness, but the program produced a value of about 4.9. ChatGPT doesn't even evaluate its own output.

The difference between a chat bot and a liberal is that the bot will always apologize profusely when I point out an error.
Leftists aren't liberals, thank you.

But I have been able to trick ChatGPT with something even simpler: asking it to translate a text to Chinese and then translate it back to English. The first translation is wrong, and the second translation back does not reproduce the original text.
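Anyone can script this round-trip test for themselves; here is a minimal sketch against the OpenAI API (the model name and prompt wording are my own assumptions, not what I used in the ChatGPT UI):

# English -> Chinese -> English round trip, compared with the original.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model; pick whichever you're testing
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

original = "The quick brown fox jumps over the lazy dog."
zh = ask("Translate into Chinese, reply with the translation only: " + original)
back = ask("Translate into English, reply with the translation only: " + zh)
print(zh)
print(back)
print("survived the round trip:", back == original)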
 

BlackWindMnt

Captain
Registered Member
I find that the people most impressed by large language models are the liberals/leftists because they can be perfectly simulated by a chat bot. Both are talking complete nonsense while being utterly convinced that they are right.

For example, I recently asked ChatGPT to give me a mathematical function that includes the gamma function and evaluates to 24.4. Not only were the answers not even close; at one point it claimed that 24.4 is not a positive real number. Then it gave me a Python snippet to prove its function's correctness, but the program produced a value of about 4.9. ChatGPT doesn't even evaluate its own output.

The difference between a chat bot and a liberal is that the bot will always apologize profusely when I point out an error.
This reminds me of a recent Copilot study where it seems GPT-4 and Copilot are generating shittier code over time. So you get shit going in and worse shit coming out.

From friends who are Copilot enjoyers: they seem to spend as much time correcting Copilot as they would have spent writing the code themselves.
 

siegecrossbow

General
Staff member
Super Moderator
This reminds me of a recent Copilot study where it seems GPT-4 and Copilot are generating shittier code over time. So you get shit going in and worse shit coming out.

From friends who are Copilot enjoyers: they seem to spend as much time correcting Copilot as they would have spent writing the code themselves.

Remember how some Chinese liberals complain that China invented gunpowder but mostly used it for fireworks and entertainment instead of improving weapons? I wonder if future generations will have the same revelation if China applies AI to less glamorous things like IoT and drone swarms instead of chatbots and image/video generators.
 

ZeEa5KPul

Colonel
Registered Member
Remember how some Chinese liberals complain that China invented gunpowder but mostly used it for fireworks and entertainment instead of improving weapons? I wonder if future generations will have the same revelation if China applies AI to less glamorous things like IoT and drone swarms instead of chatbots and image/video generators.
Wow... a Chinese liberal had a correct take. Will wonders never cease.

As for the AI, I'm sure future generations will look at it that way. Only this time, the IoT and swarms are the improved weapons, and the chatbots and media generators are the fireworks and entertainment.
 