Quite a few of the top ML/AI experts, if not most, doubt the long-term potential of LLMs. While LLMs have undoubtedly produced some pretty amazing tools, and will continue to do so in the near future, there are many structural limitations that make it very unlikely they'll be the main architecture for AGI. We'll probably see a continued push on LLMs for the foreseeable future though, since the big AI hype/investments are still market driven. LLMs are currently seen as the safe choice because they've already been demonstrated to provide pretty good results. That said, success in LLMs or any other form of AI/ML will only draw more funding toward the field, so even if you don't believe in LLMs, they'll still likely play at least an indirect role in other AI/ML developments. I'd say China's focus on industrial AI produces much the same effect, likely with more efficacy: the increased revenue will translate into increased AI/ML R&D.
It's pretty hard to definitively gauge China's progress toward AGI compared to the US (and, technically, the rest of the world), since LLMs are probably not the most accurate measuring stick. Honestly, we're still very much in the wild west when it comes to AGI (true AGI, not statistical learning rebranded as ML rebranded as AI), so investment and general scientific output are probably just as good a measuring stick, if not a better one.
I agree that LLMs are not likely to be the final architecture for artificial general intelligence. But I'd be really surprised if they didn't play an important role, both as a stepping stone and as a component of artificial general intelligence.
The reason is that if the last few decades of research in AI have shown anything, it's that data is critical to achieving intelligence. All foundation models, whether LLMs, diffusion models, GANs, or anything else, rely on having mountains of diverse training data. Once the framework embeddings and initial weights are trained, sure, few-shot or zero-shot fine-tuning is possible. But you'll get nowhere without building that foundation.
So how do you get data? Well, language is how humans communicate. Being able to talk to humans, learn from humans, and obtain the knowledge we've built up over thousands of years is the easiest, most efficient way of obtaining data - and it relies on understanding natural language. It's also how you'd communicate with humans to understand our goals and objectives; an AI system that cannot communicate effectively with humans is not useful.
For these reasons, I believe LLMs have an important role to play in the search for artificial general intelligence. They're not the answer, but a component of the answer. What remains is to close the self-learning, self-organizing loop, so that the AI system can do its own discovery and evolution. No one has solved that problem yet, but everyone recognizes its importance - it's why academics have been exploring techniques like reinforcement learning and evolutionary algorithms.
I don't know if we'll see artificial general intelligence realized in our lifetime. It may be that a major revolution in hardware is required for the software to function, much like how neural networks were considered a lost cause until advancements in chips, distributed computing, and data gathering allowed for the training of billion-plus-parameter "deep" models.
It may be that quantum computing is necessary before we can realize the promise of artificial general intelligence - yet still, we're a lot closer today than we've ever been, and advancements in AI go hand in hand with advancements in hardware, because the two are mutually reinforcing: better AI enables faster hardware advancements, and faster hardware enables even better AI.
Indeed, this kind of effect is exactly what was predicted, decades ago, for the final stage before the singularity - that technological change would accelerate as we approach the event horizon, due to the interaction of so many multiplicative effects across different but mutually reinforcing industries. It took centuries to go from steam engines to modern cars, but only fifty years from room-sized computers to mobile devices, and only a few decades from dial-up to 5G networks. In times like these, it's hard not to believe that anything can happen - and that we must, as such, be prepared for everything.