Artificial Intelligence thread

fatzergling

Junior Member
Registered Member
The fake news firehose is about to become a fake news dam release. It can simply overwhelm the truth with social media bots reposting fake news that takes 10 seconds to make and weeks of investigation to correct, if correction is even possible.
To make it worse, there is no "moat" to speak of with generative models. There are open-source datasets, open-source architectures, hell, even open-source models nowadays. Anything OpenAI puts out is matched by open source within 6 months to a year. Adding safety constraints to current models is useless when bad actors can bypass them by training custom models.
 

luminary

Senior Member
Registered Member
To make it worse, there is no "moat" to speak of with generative models. There are open-source datasets, open-source architectures, hell, even open-source models nowadays. Anything OpenAI puts out is matched by open source within 6 months to a year. Adding safety constraints to current models is useless when bad actors can bypass them by training custom models.
I disagree; open source is the best thing that can happen to AI. Thanks to open source, anti-AI software can be developed and made available for artists to protect their work, like Nightshade.




Very clever methodology at work here. Helped by the fact that it's hard for human labelers to tell.
 

Zhong"Geodaddy"Li

New Member
Registered Member
OpenAI is the Tesla of this generation. I'm sure Chinese companies will eventually catch up, but it still raises the question of why there's no Chinese OpenAI despite all the AI talent. It has to have something to do with the management mindset of state-owned companies and the lack of a strong start-up scene.
I clearly remember using AI filters and effects on Douyin before ChatGPT became mainstream. Where is ByteDance's progress in the AI sector?
 

luminary

Senior Member
Registered Member
To understand the implications of this: in order to create realistic video you basically need to model physics. Whether that happens by imitating your training data or by actually understanding physics is another matter. But as long as you have some kind of physics demonstration (video) in a tokenised format, then given a big enough dataset, your trained AI model will start to have some emergent, basic understanding of physics.
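The "video in a tokenised format" idea can be sketched in a few lines. Public descriptions of video models talk about cutting a clip into "spacetime patches" that play the role tokens play for an LLM. This is a toy illustration only; all shapes and sizes here are my own assumptions, not any real model's:

```python
def video_to_patches(video, pt=2, ph=2, pw=2):
    """Cut a clip (nested lists indexed [t][y][x]) into flat spacetime patches.

    Each patch covers pt frames x ph rows x pw columns, flattened into one
    list - the analogue of one "token" a video model would be trained on.
    """
    T, H, W = len(video), len(video[0]), len(video[0][0])
    patches = []
    for t0 in range(0, T, pt):
        for y0 in range(0, H, ph):
            for x0 in range(0, W, pw):
                patch = [video[t][y][x]
                         for t in range(t0, t0 + pt)
                         for y in range(y0, y0 + ph)
                         for x in range(x0, x0 + pw)]
                patches.append(patch)
    return patches

# A dummy 4-frame, 4x4-pixel "clip" where each pixel records its coordinates.
clip = [[[(t, y, x) for x in range(4)] for y in range(4)] for t in range(4)]
tokens = video_to_patches(clip)
print(len(tokens), len(tokens[0]))   # 8 patches, 8 "pixels" per patch
```

A real system would embed these patches and train a transformer over them; the point here is only that video reduces to a token sequence the same way text does.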

This is what OpenAI is doing, because in order to generate a realistic video you need a simulation of the physical world. Well, would you guess it, check this out:
Insane stuff
The "physics" of the video are taken from stock video. The model does not generate them. You can tell because in some of the samples, the stock video it is based on is a drone video, and the people stop what they're doing to look at the drone. So no simulated world was actually generated. It is similar to rotoscoping.

I would not put much stock in OpenAI's hypebeast marketing. They've been championing the "emergent capabilities" shtick for a while now; it's well known in Silicon Valley to be a nothing-burger.

What certainly is impressive is the consistency of the video, splicing together different stock videos, as well as converting text to the appropriate imagery or stock video. We'll know exactly how impressive once it's out. But the open-source image-processing community has been working on this for a while now as well; they just don't have the same levels of computing power.

Hope this clears up any misconceptions.
 

Overbom

Brigadier
Registered Member
The "physics" of the video are taken from stock video. The model does not generate them. You can tell because in some of the samples, the stock video it is based on is a drone video, and the people stop what they're doing to look at the drone. So no simulated world was actually generated. It is similar to rotoscoping.

I would not put much stock in OpenAI's hypebeast marketing. They've been championing the "emergent capabilities" shtick for a while now; it's well known in Silicon Valley to be a nothing-burger.

What certainly is impressive is the consistency of the video, splicing together different stock videos, as well as converting text to the appropriate imagery or stock video. We'll know exactly how impressive once it's out. But the open-source image-processing community has been working on this for a while now as well; they just don't have the same levels of computing power.

Hope this clears up any misconceptions.
Sorry, but between you and these guys:
(attached screenshots)
I would rather believe these guys:

Unless, of course, you are willing to state your credentials, and they are strong enough to put you in the same league as these two. Unless you are certain you can do so, I would suggest you stop talking nonsense and just say "don't know" if you really don't know, instead of talking so confidently.

There are a lot more people talking about world simulation for video than you think. And contrary to you, I can actually cite credible sources. Your argument reminds me of people saying that LLMs are just text parrots, which was debunked a long time ago.
 

ZeEa5KPul

Colonel
Registered Member
There are a lot more people talking about world simulation for video than you think. And contrary to you, I can actually cite credible sources. Your argument reminds me of people saying that LLMs are just text parrots, which was debunked a long time ago.
LLMs are stochastic parrots that generate plausible-sounding gibberish. It's autocomplete on steroids. There's no way to debunk that because it's true.

OpenAI's new system is the video equivalent of that and you don't need credentials or expertise to see this. There's a very simple test: hands. When these things can do hand movements and articulation that doesn't look like something out of a horror movie, I'll consider it a significant advance.

These systems (LLMs, deep neural networks, etc.) are fundamentally limited by the fact that they don't have concepts. They don't build internal mental models and reason over them. They work by statistical association.
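The "statistical association" picture being argued here is easy to demonstrate at toy scale. A bigram model predicts the next word purely from co-occurrence counts; real LLMs are vastly larger and condition on long contexts, but the training objective (predict the next token) has the same shape. A deliberately crude sketch:

```python
# Toy "autocomplete by statistical association": a bigram model that
# predicts the next word from raw co-occurrence counts alone.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the continuation seen most often after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))   # 'cat' - "the cat" occurs twice, the rest once
```

Whether scaling this objective up yields genuine internal models, as the next posts debate, is exactly the point in dispute; the sketch only shows what "pure statistical association" means.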
 

Overbom

Brigadier
Registered Member
LLMs are stochastic parrots that generate plausible-sounding gibberish. It's autocomplete on steroids. There's no way to debunk that because it's true.

OpenAI's new system is the video equivalent of that and you don't need credentials or expertise to see this. There's a very simple test: hands. When these things can do hand movements and articulation that doesn't look like something out of a horror movie, I'll consider it a significant advance.
Your credentials, sir. Where and what are they? Are you an NVIDIA, Google DeepMind, or OpenAI AI lead?

Very funny seeing all sorts of people talking with such certainty about these things when they have no serious background in the field.

The semiconductors thread holds itself to a high standard, but it seems that on this thread everyone can come and go proclaiming grand statements while disregarding the actual subject experts (btw, I am not one). Release a few critically acclaimed research papers on AI and then come back.
These systems (LLMs, deep neural networks, etc.) are fundamentally limited by the fact that they don't have concepts. They don't build internal mental models and reason over them. They work by statistical association.
Well, let me see how you are going to debunk Ilya Sutskever. If you can "win" against NVIDIA's senior research scientist & lead on AI agents, and DeepMind's lead on creative AI, can you also win against the minnow OpenAI's Chief Scientist and co-author of the AlphaGo paper, Ilya Sutskever?
In an interview between NVIDIA's Jensen Huang and Ilya Sutskever at the beginning of the year, Ilya put forward a point: an LLM does far more than predict the next word based on probability. It is also learning a model of our real world, of which the text is a projection. Here is the text from the video:
You can think of it this way: when we train a huge neural network to accurately predict the next word in all kinds of text on the Internet, we are actually learning a "world model." At first glance, it seems like we are just learning statistical correlations in text. But in fact, in order to accurately learn the statistical correlations in text and effectively compress this information, the neural network actually learns some representation of the process that produced these texts.
These texts are actually a projection of the real world. The outside world seems to cast its own shadow on this text. As a result, neural networks learn not just textual information, but much more about the world, people’s emotional states, their hopes, dreams, motivations, interactions, and the environment in which we live. What the neural network learns is a compressed, abstract and practical expression of this information. This is the knowledge gained by accurately predicting the next word.
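The prediction-compression link in the quote above can be made concrete: a model that assigns probability p to the symbol that actually occurs can encode it in about -log2(p) bits, so better next-symbol prediction directly means better compression. A toy demo with character frequencies standing in for a real language model (the text and figures are illustrative only):

```python
# Better prediction = better compression: compare the bits per character
# needed under a know-nothing uniform model vs. a frequency-based model.
import math
from collections import Counter

text = "the quick brown fox jumps over the lazy dog " * 10
counts = Counter(text)
total = sum(counts.values())

# Uniform model: every distinct character is equally likely.
uniform_bits = math.log2(len(counts))
# Frequency model: Shannon entropy of the observed character distribution.
model_bits = -sum(c / total * math.log2(c / total) for c in counts.values())

print(f"uniform: {uniform_bits:.2f} bits/char, model: {model_bits:.2f} bits/char")
```

The frequency model always needs fewer bits than the uniform one here; Sutskever's argument is this same logic pushed to the limit, where squeezing out the last bits forces the model to capture whatever process generated the text.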
I would recommend everyone interested to watch the interview between NVIDIA's CEO and OpenAI's chief scientist. Maybe ZeEa5KPul and the other sceptics can enlighten them both and correct their amateurish misconceptions about how LLMs work.

Easter egg at 17:40
 

ZeEa5KPul

Colonel
Registered Member
Your credentials, sir. Where and what are they? Are you an NVIDIA, Google DeepMind, or OpenAI AI lead?

Very funny seeing all sorts of people talking with such certainty about these things when they have no serious background in the field.

The semiconductors thread holds itself to a high standard, but it seems that on this thread everyone can come and go proclaiming grand statements while disregarding the actual subject experts (btw, I am not one). Release a few critically acclaimed research papers on AI and then come back.

Well, let me see how you are going to debunk Ilya Sutskever. If you can "win" against NVIDIA's senior research scientist & lead on AI agents, and DeepMind's lead on creative AI, can you also win against the minnow OpenAI's Chief Scientist and co-author of the AlphaGo paper, Ilya Sutskever?



I would recommend everyone interested to watch the interview between NVIDIA's CEO and OpenAI's chief scientist. Maybe ZeEa5KPul and the other sceptics can enlighten them both and correct their amateurish misconceptions about how LLMs work.

Easter egg at 17:40
You're taking this very personally. Relax. I don't need advanced degrees in ML or high-impact publications to have an opinion on these things and how they work. Despite this notable absence from my resume, I still think neural networks trained by backpropagation are fundamentally limited and will never achieve AGI.

I share a lot of Chomsky's views about these systems, which he outlines in this article. Yes, I anticipate all your objections: "Who's Chomsky? Does he work at Nvidia? OpenAI?" I still think his take is the correct one.

By the way, there's a simple retort to the proposition that LLMs learn deep insights about the language they were trained on: have the LLM formulate those insights into a theory of linguistics. That would be a strong indication of actual intelligence, not just autocomplete.
 

gelgoog

Lieutenant General
Registered Member
I think LLMs are a useful tool. This seems like the first major step forward in boosting productivity for knowledge workers since Internet search engines became available. They will likely end up replacing a lot of people who used to do clerical work. Just don't overstate what they can do.
 