I’ve messed around with GPT-3 and other language models, and while they’re incredibly impressive at generating text or answering questions, you can tell they’re not “thinking.” They’re just very good at predicting the next token based on patterns in their training data. Whenever I’ve asked them things that require reasoning or connecting ideas in a non-obvious way, they either get it wrong or produce something that sounds smart but falls apart once you think about it. That’s not AGI; it’s more like a really advanced autocomplete. AGI would need to handle genuinely new problems and learn the way humans do, not just regurgitate patterns.
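
To make the “advanced autocomplete” point concrete, here’s a minimal sketch of next-word prediction: a toy bigram model that just counts which word followed which in a tiny made-up corpus and then greedily picks the most common continuation. GPT-3 does the same kind of next-token prediction, only with a huge neural network and vastly more data and context; the corpus and the `autocomplete` helper below are purely illustrative, not how any real model is implemented.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for the toy example.
corpus = "the cat sat on the mat . the cat chased the dog . the dog sat on the rug ."
words = corpus.split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt, length=5):
    """Greedily append the most frequent next word; no reasoning involved."""
    out = prompt.split()
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the cat"))  # prints "the cat sat on the cat sat": fluent-looking, zero understanding
```

The output reads like plausible text because it mirrors patterns in the training data, not because anything was understood, which is the gap I’m pointing at between pattern prediction and AGI.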