Just for fun, I asked ChatGPT-4 whether an LLM can reach AGI and whether ChatGPT will develop into an AGI one day.
I agree with some of the points @9dashline brought up about the limitations of LLMs and that the pursuit of AGI requires a more comprehensive approach rather than focusing entirely on LLMs. I personally think the current obstacle to AGI development is not so much an AI architecture problem as an AI system, or composition, problem.
Reducing the brain's intelligence to simply the number of neurons or synapses is a gross simplification of how the brain functions: the brain is composed of many parts, each responsible for a certain function. That is why I think, instead of relying entirely on constantly training an LLM in the hope that it suddenly develops the capabilities we want, why not make the LLM act as the language-processing and/or control unit of the system, with specialized models trained for specific functions as its subsystems? This would make training for each specific function easier and reduce the cost of training the AI system as a whole.
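The "LLM as control unit with specialized subsystems" composition I have in mind could be sketched roughly like this. Everything here is hypothetical: the controller and subsystem callables are toy stand-ins for an orchestrating LLM and task-specific models, not any real API.

```python
from typing import Callable, Dict

class ComposedAgent:
    """Routes each request to a specialized subsystem chosen by a controller.

    In the real system the controller would be an LLM classifying intent,
    and each subsystem a model trained for one narrow function.
    """

    def __init__(self, controller: Callable[[str], str],
                 subsystems: Dict[str, Callable[[str], str]]):
        self.controller = controller      # picks which subsystem handles the request
        self.subsystems = subsystems      # task name -> specialized model

    def handle(self, request: str) -> str:
        task = self.controller(request)
        handler = self.subsystems.get(task)
        if handler is None:
            return f"no subsystem for task '{task}'"
        return handler(request)

# Toy stand-ins: a keyword "controller" and two specialized "models".
def toy_controller(request: str) -> str:
    return "vision" if "image" in request else "math"

agent = ComposedAgent(
    controller=toy_controller,
    subsystems={
        "vision": lambda r: "vision model output",
        "math": lambda r: "math model output",
    },
)

print(agent.handle("solve this equation"))  # routed to the math subsystem
```

The point of the sketch is that each subsystem can be trained and upgraded independently, while the controller only has to learn to dispatch, which is a much narrower task than learning everything end to end.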
On the other hand, what needs or use cases are there that actually require an AGI? Wouldn't training specialized models for specific tasks be more cost-effective and performant? I don't think an AI needs to know how to file taxes in order to design a warship:
Or know that the mitochondria is the powerhouse of the cell in order to run chemical experiments:
Or know how to write poems to be able to track small objects using satellites: