Artificial Intelligence thread

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
AGI, if really achieved, would have the potential to change almost everything about our society. If you crack the code to consciousness (or intelligence), you have a template you can copy endlessly. It's not that AGI would replace just white-collar jobs; it's that it would replace every job, limited only by the physical instrumentation we give it. Definitionally, if an AGI cannot do every job a human can (given the right physical apparatus), then it is not AGI. If you believe such a thing is possible in the next few years, it's not a bad bet to make (slowing your adversary by any means possible). The problem is that it's very unlikely AGI will be achieved in this decade.
I think what we are trying to say is that LLMs could achieve something that looks like AGI to end users, because AI companies will do a lot of work up front to train the models and feed them information (via RAG or something similar) so that they appear to know everything and be able to answer everything.
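
To make the RAG part concrete, here is a rough sketch of the pattern (the documents, the retrieval method, and the call_llm helper are all made up for illustration): the application looks up relevant text with ordinary, non-AI retrieval and pastes it into the prompt, which is what makes the model look like it "knows" things it was never trained on.

```python
# Minimal RAG sketch: retrieve relevant text, then prepend it to the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The 2024 maintenance window is every Sunday 02:00-04:00 UTC.",
    "Support tickets are triaged within one business day.",
    "The API rate limit is 600 requests per minute per key.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (plain TF-IDF, no AI)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whichever completion API the application uses.
    raise NotImplementedError

def answer(query: str) -> str:
    # The model only looks well-informed because the context was fetched for it.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```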

But behind the scenes, there is a lot going on that end users don't see.

I will give you an example of the limitations. Today, I had a call with some colleagues about implementing some features. Because we've seen hallucinations in its responses, we can't trust GPT-4o to give us the correct answer every time, so we ended up adding additional validation steps using non-AI methods.

It's hard to really appreciate how much hallucination you still get with these models. Dealing with hallucinations is one of the biggest challenges for AI users.
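
For what it's worth, the validation steps I mean are nothing exotic. A minimal sketch of the idea, with a made-up schema and field names purely for illustration: the model's output is parsed and checked deterministically before anything downstream trusts it.

```python
import json

REQUIRED_FIELDS = {"customer_id", "amount", "currency"}  # illustrative schema
ALLOWED_CURRENCIES = {"USD", "EUR", "CNY"}

def validate_llm_output(raw: str) -> dict:
    """Deterministic, non-AI checks on a model response before it is used."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model did not return valid JSON: {exc}") from exc

    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if data["currency"] not in ALLOWED_CURRENCIES:
        raise ValueError(f"unexpected currency: {data['currency']!r}")
    if not isinstance(data["amount"], (int, float)) or data["amount"] <= 0:
        raise ValueError(f"amount out of range: {data['amount']!r}")
    return data
```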
 

Overbom

Brigadier
Registered Member
I will give you an example of the limitations. Today, I had a call with some colleagues about implementing some features. Because we've seen hallucinations in its responses, we can't trust GPT-4o to give us the correct answer every time, so we ended up adding additional validation steps using non-AI methods.

It's hard to really appreciate how much hallucination you still get with these models. Dealing with hallucinations is one of the biggest challenges for AI users.
One mitigation strategy I have seen for the unreliability issue is using multiple LLMs to do the same task, alongside non-AI validation steps.
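
Roughly what I have in mind, sketched with a hypothetical ask_model wrapper (the model names and the voting threshold are illustrative): ask several models the same question, only accept an answer when enough of them agree, and still pass the result through non-AI checks.

```python
from collections import Counter

def ask_model(model_name: str, prompt: str) -> str:
    # Placeholder: call whichever provider hosts `model_name`.
    raise NotImplementedError

def cross_checked_answer(prompt: str, models: list[str], min_agreement: int = 2) -> str | None:
    """Query several LLMs and accept an answer only if enough of them agree."""
    answers = [ask_model(m, prompt).strip().lower() for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes < min_agreement:
        return None  # no consensus: fall back to a human or a non-AI path
    return answer

# Even a consensus answer would still go through non-AI validation
# (schema checks, range checks, lookups against a trusted database).
result = cross_checked_answer("...", ["model-a", "model-b", "model-c"])
```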
 

Xiongmao

Junior Member
Registered Member
If ASI is x months away, China needs to do AR within half of x or 6 months, whichever is sooner
If you are talking about machine consciousness, then I don't believe we will ever achieve it with the way semiconductors are currently built. I subscribe to Roger Penrose's theory that human consciousness arises from quantum entanglement effects between microtubules in neurons. In computers, LLMs and other AI models exist in high-level application space, many levels above the bare metal, so any quantum entanglement effects would occur between adjacent transistor gates and have no logical connection whatsoever to the AI model being executed.
 

ZeEa5KPul

Colonel
Registered Member
It's hard to really appreciate how much hallucination you still get with these models. Dealing with hallucinations is one of the biggest challenges for AI users.
They're not hallucinations. To hallucinate, you have to have a mind that suffers a malfunction in some way. With LLMs there's no mind, just a probability distribution over the next token based on the data the model was trained on.

This isn't something that can be solved within the paradigm of ANNs trained by backpropagation; it's intrinsic to the statistical nature of these systems.
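
To put "just a probability distribution over the next token" in concrete terms, here is a toy illustration with made-up numbers: generation is nothing more than turning the network's output into a distribution and sampling from it, with no step anywhere that checks whether the sampled token is true.

```python
import numpy as np

# Toy next-token distribution: logits over a tiny made-up vocabulary.
vocab = ["Paris", "London", "Berlin", "banana"]
logits = np.array([3.2, 1.1, 0.7, -2.0])  # whatever the network happens to output

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax: turn logits into probabilities

rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)   # sampling; nothing here verifies facts
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```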
 