Artificial Intelligence thread

BoraTas

Captain
Registered Member
Leftists aren't liberals, thank you.

But I have been able to trip up ChatGPT with something even simpler: asking it to translate a text to Chinese and then translate the result back to English. The first translation is wrong, and the back-translation does not reproduce the original text.
ChatGPT is garbage when you ask it about anything that is not widely available on the internet. It outputs verbose, superficial answers, usually with many mistakes.
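The round-trip check described above is easy to automate. Here is a minimal sketch, where `translate` is a hypothetical stand-in for whatever model or API actually performs the translation, and word overlap is used as a deliberately crude similarity score:

```python
# Round-trip translation check: English -> Chinese -> back to English,
# then compare the result with the original text.
# `translate(text, target=...)` is a hypothetical callable, e.g. a thin
# wrapper around an LLM or translation API.

def round_trip_similarity(text, translate):
    zh = translate(text, target="zh")    # first hop: English -> Chinese
    back = translate(zh, target="en")    # second hop: Chinese -> English
    orig_words = set(text.lower().split())
    back_words = set(back.lower().split())
    # crude score: fraction of original words recovered by the round trip
    return len(orig_words & back_words) / max(len(orig_words), 1)

# A perfect translator recovers everything:
identity = lambda t, target: t
print(round_trip_similarity("the cat sat on the mat", identity))  # 1.0
```

Any score below 1.0 flags that information was lost somewhere in the round trip, which is exactly the failure mode described above.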
 

european_guy

Junior Member
Registered Member
LLMs are stochastic parrots that generate plausible sounding gibberish. It's autocomplete on steroids. There's no way to debunk that because it's true.

Maybe you are referring to the way they are trained, i.e. predicting the next word in a sentence.

But the training procedure and the final result are two different things.

If you see a boxer jumping rope and you know nothing about boxing, you might think he trains in order to jump rope... when in fact he trains to punch people.

At the moment we still have only a limited understanding of these models, and even less of how our brains work... maybe one day scientists will discover that our brains also work as very sophisticated stochastic parrots... who knows?

These systems (LLMs, deep neural networks, etc.) are fundamentally limited by the fact that they don't have concepts. They don't build internal mental models and reason over them. They work by statistical association.

This is not the case. It has already been verified that they build an internal representation of the world called a "latent space" and generate text/images/videos from it. If you search for it, there are plenty of sources out there.
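For a concrete (if drastically simplified) picture of what a latent space is, here is a linear toy example in NumPy: data in 5 dimensions that actually lies on a 2-D subspace is encoded into a 2-D latent code and decoded back almost perfectly. Real models learn nonlinear versions of this; the sketch is only illustrative.

```python
import numpy as np

# Minimal illustration of a "latent space": a linear autoencoder (PCA).
# 5-D data points that really live on a hidden 2-D subspace are encoded
# into a 2-D latent code and decoded back with negligible loss.
rng = np.random.default_rng(0)
latent_true = rng.normal(size=(100, 2))   # hidden 2-D structure
mix = rng.normal(size=(2, 5))             # embed it into 5 dimensions
data = latent_true @ mix

# "Encoder"/"decoder": project onto the top-2 principal directions.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
encode = lambda x: (x - mean) @ vt[:2].T  # 5-D input  -> 2-D latent code
decode = lambda z: z @ vt[:2] + mean      # 2-D latent -> 5-D output

recon = decode(encode(data))
print(np.abs(recon - data).max())  # near zero: the code captures the data
```

Generation then amounts to sampling a point in the latent space and running it through the decoder, rather than retrieving anything from a stored library.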
 

broadsword

Brigadier
Maybe you are referring to the way they are trained, i.e. predicting the next word in a sentence.

But the training procedure and the final result are two different things.

If you see a boxer jumping rope and you know nothing about boxing, you might think he trains in order to jump rope... when in fact he trains to punch people.

At the moment we still have only a limited understanding of these models, and even less of how our brains work... maybe one day scientists will discover that our brains also work as very sophisticated stochastic parrots... who knows?



This is not the case. It has already been verified that they build an internal representation of the world called a "latent space" and generate text/images/videos from it. If you search for it, there are plenty of sources out there.

You mean they were not generated from a vast cloud library of images? If every pixel was machine-generated, the number of GPUs required must be incredible.
 

xypher

Senior Member
Registered Member
You mean they were not generated from a vast cloud library of images? If every pixel was machine-generated, the number of GPUs required must be incredible.
That is the case with all generative models, and no, simple generative models can run even on your CPU. The output resolution and overall image quality would obviously be poor, but every pixel would still be machine-generated. No neural network works as some kind of matching platform that picks "similar" images out of a stored library; that is a fairly severe misunderstanding of how NNs operate. In fact, it is hard for a network to even reproduce the identity of its input if there are no global skip connections (i.e. adding the input after the whole network) in the architecture.

You could argue that the latent space is a compressed cloud library, but in that case the human mind can also be interpreted as a compressed library: we are not born with the knowledge we have today, and our minds, much like the latent space of a neural network, are formed through a learning process, which for humans is constant.
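The point about skip connections can be shown in a few lines. A random untrained layer maps an input far away from itself, whereas the residual form x + f(x) reproduces the input exactly whenever f contributes nothing; the layer sizes and weights below are arbitrary illustrative choices:

```python
import numpy as np

# Why a global skip connection makes the identity mapping easy to express.
rng = np.random.default_rng(1)
x = rng.normal(size=8)  # an arbitrary 8-dimensional input

# Plain layer, no skip: a random weight matrix scrambles the input.
W = rng.normal(size=(8, 8)) * 0.5
plain = np.tanh(W @ x)              # output has no reason to resemble x

# Residual layer: output = input + f(input). With f's weights at zero,
# the layer is exactly the identity, no learning required.
W_res = np.zeros((8, 8))
residual = x + np.tanh(W_res @ x)   # equals x exactly

print(np.allclose(residual, x))     # True
```

Without the skip, the network has to spend capacity learning to reproduce its input; with it, identity is the default and the layers only learn the correction.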

However, it is important not to overrate modern NNs and put them on the same level as the human brain. NNs in their current state are crude, simplified mathematical models of our brains. Their latent space is far more limited in real capacity (it's true that some models have trillions of parameters, but dig further and you will find these models are extremely sparse in their representation, i.e. only a small portion of those trillions carries important information), and current learning methods are extremely inefficient: they need billions upon billions of examples sampled from different distributions/domains in order to learn to generalize, which leads to the problem of "overfitting", where a dataset without enough diversity leads to the model merely learning to perform well on data from the same distribution as its training data. Humans, on the other hand, are "few-shot" learners: a couple of examples can already create some basic understanding of a task.
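The overfitting problem shows up even in toy curve fitting, which serves as a rough stand-in for the data-hungriness of flexible models: given only 8 noisy samples of a line, a degree-7 polynomial memorizes the training points almost exactly but does far worse than a simple linear fit away from them.

```python
import numpy as np

# 8 noisy samples of the underlying line y = 2x.
rng = np.random.default_rng(42)
x_train = np.linspace(-1.0, 1.0, 8)
y_train = 2 * x_train + rng.normal(scale=0.1, size=8)

simple = np.polyfit(x_train, y_train, deg=1)    # matches the true structure
flexible = np.polyfit(x_train, y_train, deg=7)  # enough capacity to memorize

# The degree-7 fit interpolates the noisy points almost exactly.
train_err_flexible = np.abs(np.polyval(flexible, x_train) - y_train).max()

# Off-sample error against the true line on a wider range:
x_test = np.linspace(-2.0, 2.0, 50)
y_test = 2 * x_test
err_simple = np.abs(np.polyval(simple, x_test) - y_test).max()
err_flexible = np.abs(np.polyval(flexible, x_test) - y_test).max()
print(err_simple, err_flexible)  # expect the memorizing fit to be far worse
```

With too little data diversity, the flexible model fits the noise of its training distribution instead of the structure that generalizes, which is the overfitting failure described above.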

Overall, I'd say modern NNs are very convenient and powerful tools that can be used to further increase human productivity rather than real "intelligence".
 

Overbom

Brigadier
Registered Member
Another "just predicting the next word/frame/pixel" breakthrough by DeepMind.


Genie: Generative Interactive Environments

Abstract

We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model.
Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature.
Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.

3. Training Agents

We believe Genie could one day be used as a foundation world model for training generalist agents. In the figure, we show that the model can already be used for generating diverse trajectories in unseen RL environments given starting frames.

5. Conclusion and Future Work

We proposed Genie, a new form of generative AI that enables anyone, even children, to dream up, create, and step into generated worlds as we can with human-designed simulated environments. Genie can be prompted to generate a diverse set of interactive and controllable environments despite training from video-only data.
 

tamsen_ikard

Junior Member
Registered Member
How good is Ernie Bot compared to ChatGPT 4.0?

Is China really lagging behind the US in AI, as Western media keeps claiming in article after article? How big is the gap, and how fast can China close it?
 

Overbom

Brigadier
Registered Member
Is China really lagging behind the US in AI, as Western media keeps claiming in article after article? How big is the gap, and how fast can China close it?
From my casual observations, it seems that AI research is accelerating quickly in the US. These are my rankings:

LLM Ranking:
  1. OpenAI
  2. Google
  3. Everyone else
Novel Research in generative AI areas:
  1. Google
  2. OpenAI
  3. Everyone else

Areas to watch for in 2024:
  • World Simulation
  • Agents
  • Robotics
  • Context length
  • Multimodalities
World simulation is especially important, as it could act as a synthetic multimodal data generator for agents and robotics, which need a huge and adaptable playground in which to train and test themselves.
 

FairAndUnbiased

Brigadier
Registered Member
How good is Ernie Bot compared to ChatGPT 4.0?

Is China really lagging behind the US in AI, as Western media keeps claiming in article after article? How big is the gap, and how fast can China close it?
Let's see what the end result is.

In China there's medical AI that aids radiography diagnostics.

It has already been put into field use during COVID.

So what is more valuable: putting a few thousand journalists out of work with mediocre-quality but nearly zero-cost fake news, or saving lives?
 

measuredingabens

Junior Member
Registered Member
Let's see what the end result is.

In China there's medical AI that aids radiography diagnostics.

It has already been put into field use during COVID.

So what is more valuable: putting a few thousand journalists out of work with mediocre-quality but nearly zero-cost fake news, or saving lives?
Western media also reports very little about Chinese AI, period. Compared with the volume of articles generated about the likes of OpenAI, Google, etc., there is extremely little coverage of Chinese AI in English-language journalism.
 