Artificial Intelligence thread

ougoah

Brigadier
Registered Member
Is this thread still talking about AGI? lol

China's official take on AI is, in my opinion, the correct one (honestly, it's not that often China gets something right from the get-go). The majority of Chinese applications of AI are industrial, narrow-use items. This is correct because machine learning at the moment is only good as a peripheral tool to improve productivity and automation. LLMs cannot replace skilled workers. They don't currently even have the appropriate interfaces to replace any workers. Not a single complete job has been replaced to this day, despite nearly a decade of the usual hypers claiming this nonsense.

This reminds me of my discussions with friends around 2010 about how Elon Musk's supa dupa self-driving cars would be all over global roads by 2015 at the absolute latest. You can bet your last dollar that this won't be the case even in 2025.

Have any of you actually used the GPT bullshit for work? Have you tried completely automating your role? Let me know how well that's going and how many hours you've wasted already on substandard results.

Racing the silicon brats to AGI is quite literally a fool's errand. Keep tabs on what they're doing and how they're doing it, and if by some miracle they do blindly stumble into AGI, copy it and spend 1/100 the time and resources. Meanwhile, continue devoting the lion's share of resources to incorporating machine learning into the emerging manufacturing, industrial and social landscape.

The race to AI supremacy is no doubt important, but so many people are confused about what's real and what's complete propaganda fantasy. AI supremacy, at the time of writing, comes down to whichever side can better utilise the working aspects of AI to improve its competitive edge in every existing and new domain. Real, tangible things. Not glorified search engines that make things up and get facts wrong far too often to be useful. LLMs are not AGI. Nor are they the path to AGI. Consciousness and intelligence are not some emergent property of parameter thresholds and syntax.

There are a surprising number of genuinely intelligent people who still believe this. I have to say many of these mouthpieces must therefore be working under instruction for whatever reason. Reminds me of all the buzz around aliens and UFOs these days. Absolute brain rot happening and mass psyops.
 
Last edited:

9dashline

Captain
Registered Member
ougoah said: (post quoted in full above)

It’s funny how AGI always seems to trigger this mix of skepticism and over-simplification. I mean, yeah, sure, most current AI applications are narrow by design, and no, LLMs aren’t out here literally replacing entire jobs on their own. But to say they’re not transformative or paving the way for broader generalization is just missing the forest for the trees. The reality is that LLMs aren’t supposed to be an end-all solution—on their own, they’re just one piece of a much bigger puzzle. Their real value comes from acting as a connective tissue between all the specialized systems that already exist. You’ve got vision models, reasoning models, dynamic interfaces—things that can complement each other if orchestrated intelligently. That’s where LLMs shine: tying it all together, not replacing people outright.

The “fool’s errand” claim makes me laugh a bit. It’s not about some naïve race to AGI for the sake of racing. It’s about solving practical problems and pushing boundaries at the same time. Modular approaches that use LLMs as a reasoning backbone already let AI systems generalize in ways they couldn’t before. They can dynamically spin up new tools, fine-tune themselves for edge cases, and adapt on the fly without constant human intervention. That’s not science fiction—it’s already happening in limited but real ways. So, to act like all this research is just chasing unicorns is kind of disingenuous. It’s like ignoring the process that leads to those narrow industrial AI tools you’re talking about. They’re not separate things. The progress in AGI feeds directly into making industrial AI more robust and useful.
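The "LLM as reasoning backbone" orchestration idea above can be sketched as a toy dispatch loop. Everything here is an illustration, not a real framework: the tool names, the `tool: payload` routing convention, and the stand-in router are all made up; in a real system the routing decision would come from the model itself.

```python
# Toy sketch of the "LLM as orchestrator" idea: a routing step delegates
# requests to specialized narrow tools instead of answering everything itself.
# All tool names and the routing rule are hypothetical illustrations.
from typing import Callable, Dict

def vision_tool(payload: str) -> str:
    # Stand-in for a specialized vision model.
    return f"vision analysis of {payload}"

def calc_tool(payload: str) -> str:
    # Stand-in for a calculator tool; restricted eval, toy arithmetic only.
    return str(eval(payload, {"__builtins__": {}}))

TOOLS: Dict[str, Callable[[str], str]] = {
    "vision": vision_tool,
    "calc": calc_tool,
}

def orchestrate(request: str) -> str:
    """Stand-in for the LLM routing step: parse 'tool: payload' and
    dispatch to the matching specialized system."""
    tool_name, _, payload = request.partition(":")
    tool = TOOLS.get(tool_name.strip())
    if tool is None:
        return f"no tool for {tool_name.strip()!r}; answering directly"
    return tool(payload.strip())

print(orchestrate("calc: 2 + 3"))        # -> 5
print(orchestrate("vision: photo.png"))  # -> vision analysis of photo.png
```

The point of the sketch is the division of labour: the general model only decides *where* a request goes; the narrow tools do the actual work.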

And no, nobody’s saying AGI is just an emergent property of bigger models or more parameters. That’s such a tired strawman. AGI isn’t some magic “parameter threshold” moment; it’s about designing systems that can reason, adapt, and generalize across tasks in a functional way. It’s not about mimicking human consciousness—it’s about creating tools that can approach problems with a broader context and without needing to be micromanaged for every new situation. We’re already seeing the beginnings of that in reasoning models and multimodal systems that go beyond simple token prediction. It’s iterative, but it’s real.

The idea that we should just throw all our resources into narrow-use AI instead of chasing AGI feels shortsighted. The two aren’t at odds. AGI research doesn’t take away from industrial AI; it supercharges it. When systems become better at adapting to the unexpected or solving problems outside their original scope, that benefits every application, narrow or not. The line between “narrow” and “general” AI is already blurring, and it’s going to keep blurring as these systems get better at orchestrating specialized tools dynamically. That’s not hype—that’s just the direction things are going.

So yeah, it’s easy to dunk on the idea of AGI and lump it in with UFOs or whatever, but dismissing it outright is ignoring the real progress being made. AGI isn’t about a single breakthrough; it’s about connecting the dots between all these specialized tools in ways that add up to something greater than the sum of its parts. It’s happening whether people want to admit it or not.
 

ougoah

Brigadier
Registered Member
9dashline said: (post quoted in full above)

Yes, this is all well and good if true. I personally don't see these things happening. That could be my own ignorance and incomplete understanding of how work towards AGI is blurring the lines. Based on what I have interacted with, and what I see on the ground when it is made available, my thoughts are as expressed in my original post. This is going to be the whole FSD discussion again. Perhaps a lot of work on AGI will make narrow AI better and begin tying it together. We will see how accurate that prediction is in time. Better-informed minds are at the reins of policy, at least in China. US AI policy is run by the $ and confidence men. I find it unwise to blindly follow the paths they pursue.

China doesn't seem to be that enthusiastic about AGI.
 

ougoah

Brigadier
Registered Member
View attachment 140241
View attachment 140242



I'm aware China has a presence and a decent effort in this space. It is not far behind the US in most respects. To me, these represent more token efforts to ensure there are domestic industries and state-affiliated institutions with a stake in this game. They fall well short of the resources China could mobilise if it truly believed this is what everything will be about, in the way AGI is lauded by some.

I've spoken to some friends in China who work in various private enterprises and government offices. They have all spoken about how their respective fields have adopted narrow AI, and in some cases have been pushed by the government to adopt it. It seems to me that China is pouring far greater effort and resources into narrow AI with actual applications today, while letting some of its talent chase that unicorn like the Americans. It just doesn't represent the majority of their focus within the AI question.
 

iewgnem

Junior Member
Registered Member
ougoah said: (post quoted in full above)

For Americans, AGI is like a religion rn. They're trying to climb into heaven by building an exponentially larger pyramidal tower, without knowing how high heaven is.
 

9dashline

Captain
Registered Member
No announcement of GPT-5, the full o1, or AGI on ChatGPT's 2nd bday today.

I guess with QwQ at 20 cents per million tokens, 300x cheaper than the $60/mil that ClosedAI charges... they in bigly troubles now.

Crazy part is they were charging users a 30000% inflated rate for reasoning tokens that were intentionally hidden from the users themselves.

They were discussing selling enterprises full-strength o1 for $2000/mo per user seat...

Now Qwen and Deepseek dropped open weights for free.

The moat is 6 feet underground at age 2.
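The pricing arithmetic above checks out as quoted (figures are the post's, not independently verified): $60/mil over $0.20/mil is a 300x ratio, which is a markup of roughly 30000%.

```python
# Sanity check on the pricing claims, using the figures as quoted in the post.
o1_price = 60.00   # $ per million tokens, as quoted for o1
qwq_price = 0.20   # $ per million tokens, as quoted for QwQ

ratio = o1_price / qwq_price          # how many times cheaper QwQ is
markup_pct = (ratio - 1) * 100        # markup over the cheap price, in percent

print(f"{ratio:.0f}x cheaper")        # -> 300x cheaper
print(f"{markup_pct:.0f}% markup")    # -> 29900% markup, i.e. roughly the 30000% claimed
```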

Please, Log in or Register to view URLs content!
 
Last edited:

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
9dashline said: (post quoted in full above)

by the way, it would help your case if you stopped this AGI-religion preaching to us. I actually work in this industry and use it on a daily basis.

The fact that they have consumed the entire internet and still can't produce a GPT-5 massively better than GPT-4 seems to indicate that the future is more of these extra inference/reasoning steps, which is just nonsensical and takes too long. At some point, your application needs to respond in a couple of seconds.
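The latency objection above is back-of-envelope arithmetic: with serial autoregressive decoding, every hidden reasoning token adds to wall-clock time before the answer finishes. The token counts and decode speed below are assumed round numbers for illustration, not measurements of any particular model.

```python
# Why long hidden reasoning chains clash with a "respond in a couple of
# seconds" budget, assuming serial token-by-token decoding.
def response_latency(reasoning_tokens: int, answer_tokens: int,
                     tokens_per_sec: float) -> float:
    """Seconds until the full response is decoded."""
    return (reasoning_tokens + answer_tokens) / tokens_per_sec

# Assumed: 50 tok/s decode, a 200-token answer, 5000 hidden reasoning tokens.
print(response_latency(0, 200, 50))      # -> 4.0 seconds, direct answer
print(response_latency(5000, 200, 50))   # -> 104.0 seconds with reasoning
```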
 

AndrewS

Brigadier
Registered Member
9dashline said: (post quoted in full above)

@ougoah

The latest "news" coming out of AGI efforts is that:

1. They've literally run out of data to train the latest models on
2. They are facing severely diminishing returns from adding extra compute

It looks like LLM/AGI efforts have hit a practical ceiling, so AGI is not feasible with current tech/approaches.

So narrow-use, sophisticated pattern-recognition and pattern-prediction tools are what we're left with.
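The "diminishing returns" point is usually stated as a power-law scaling curve, where loss falls as compute rises but each extra order of magnitude buys a smaller absolute improvement. The functional form is the standard shorthand; the constants below are made up purely for illustration.

```python
# Illustrative power-law scaling curve: loss ~ a * C**(-alpha).
# The constants a and alpha are arbitrary illustrative values.
def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    return a * compute ** (-alpha)

# Each 10x of compute buys a shrinking absolute improvement in loss.
prev = None
for c in (1e21, 1e22, 1e23, 1e24):
    l = loss(c)
    gain = "" if prev is None else f"  (improvement {prev - l:.3f})"
    print(f"compute={c:.0e}  loss={l:.3f}{gain}")
    prev = l
```

Under this curve, going from 1e21 to 1e22 units of compute improves the loss more than going from 1e22 to 1e23, and so on: the gains shrink even though each step costs ten times as much.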

Source below
techcrunch.com/2024/11/20/ai-scaling-laws-are-showing-diminishing-returns-forcing-ai-labs-to-change-course
 