I think what we're trying to say is that LLMs could achieve something that looks like AGI to end users, because AI companies will do a lot of work up front to train the models and feed them information (via RAG or similar) so they appear to know everything and answer everything.

AGI, if really achieved, would have the potential to change almost everything about our society. If you crack the code to consciousness (or intelligence), you have a template you can copy endlessly. It's not that AGI would replace just white-collar jobs; it would replace every job, limited only by the physical instrumentation we give it. Definitionally, if an AGI cannot do any job a human can (given the right physical apparatus), then it is not AGI. If you believe such a thing is possible in the next few years, it's not a bad bet to make (slowing your adversary by any means possible). The problem is that it's very unlikely AGI will be achieved this decade.

But behind the scenes, there is a lot going on that end users don't see.
I'll give you an example of the limitations. Today I had a call with some colleagues about implementing some features. Because we've seen hallucinations in its responses, we can't trust GPT-4o to give us the correct answer every time, so we ended up adding extra validation steps that don't rely on AI at all.
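To give a rough idea of what I mean by non-AI validation, here's a minimal sketch: the model is asked to return JSON, and plain deterministic checks decide whether to accept it. The field names and rules are made up for illustration, not what we actually validate.

```python
import json
import re

# Illustrative sketch: deterministic (non-AI) validation of a model response.
# Assumes the model was asked to return JSON with "invoice_id", "amount",
# and "currency"; these fields and rules are hypothetical.

ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}
INVOICE_ID_PATTERN = re.compile(r"^INV-\d{6}$")

def validate_model_output(raw: str) -> dict:
    """Reject anything the rules can't verify instead of trusting the model."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Response is not valid JSON: {exc}") from exc

    if not INVOICE_ID_PATTERN.match(str(data.get("invoice_id", ""))):
        raise ValueError("invoice_id does not match the expected format")

    amount = data.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError("amount must be a positive number")

    if data.get("currency") not in ALLOWED_CURRENCIES:
        raise ValueError("currency is not in the allowed list")

    return data

if __name__ == "__main__":
    good = '{"invoice_id": "INV-004217", "amount": 99.5, "currency": "EUR"}'
    print(validate_model_output(good))
```

Anything that fails a check like this gets retried or routed to a human rather than passed downstream.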
It's hard to appreciate how much these models still hallucinate until you work with them. Dealing with hallucinations is one of the biggest challenges for anyone building on AI.