Artificial Intelligence thread

FairAndUnbiased

Brigadier
Registered Member
Well, so does everyone else, including those working at OpenAI and on Elon's Grok.



It would work well as long as DeepSeek becomes and stays the best AI around. However, OpenAI and Grok can always look through DeepSeek's code and copy or improve whatever features they like into their own products without sharing anything back to DeepSeek. How fast do you think DeepSeek can come up with new features, methods, and ideas compared to how fast OpenAI and Grok can copy and improve them?
If Grok is so good, how come it thinks Steve Cheung is Kim Jong Un?


I think the results show they're not really using DeepSeek's code, or they tried and couldn't get it to work with their changes, only as-is.

Llama is also open source, yet Meta spent billions on it.
 

SDtom

New Member
Registered Member
If Grok is so good, how come it thinks Steve Cheung is Kim Jong Un?

I am not knowledgeable enough to determine whether Grok 3 is good or bad, either in itself or relative to others, based on one instance of it not recognizing a person in an image. I tried to test it against DeepSeek, but DeepSeek's live search is down, it does not recognize photos in web links, and it doesn't do recognition of uploaded images yet.
 

9dashline

Captain
Registered Member
Artificial General Intelligence (AGI) is the hypothetical ability of a machine to perform any intellectual task that a human can do. Large language models, such as OpenAI's GPT-3, have demonstrated impressive capabilities in natural language understanding and generation, leading some to speculate about their potential for achieving AGI. However, despite their notable achievements, large language models have inherent limitations that make them unlikely candidates for true AGI. This essay will discuss these limitations, including their lack of reasoning abilities, reliance on massive amounts of data, absence of common sense understanding, and ethical concerns.

A key aspect of AGI is the ability to engage in complex reasoning and problem-solving. While large language models can generate coherent text and answer questions based on the patterns they have learned from their training data, they lack the ability to engage in deductive or inductive reasoning that is essential for AGI. This is because these models primarily rely on pattern matching and statistical associations rather than understanding the underlying logic or principles behind the information they process. Consequently, they are prone to making errors when faced with novel situations or questions that require logical reasoning.

Large language models depend on vast amounts of data for their training, which presents several challenges for achieving AGI. First, the need for extensive data limits the applicability of these models in domains where data is scarce or expensive to acquire. Second, the sheer scale of computational resources required for training large models raises questions about their efficiency and ecological impact. In contrast, humans can learn and generalize from a relatively small number of examples, highlighting the difference between the learning mechanisms of large language models and true AGI.

Common sense understanding is a fundamental aspect of human intelligence, allowing us to make inferences and predictions about the world based on our background knowledge. However, large language models often lack this basic understanding. Despite being trained on vast amounts of text, these models still make mistakes that a human with common sense would not. This is partly because large language models learn from text data alone, which may not fully capture the richness of human experience and understanding. True AGI would require the integration of various types of knowledge, including visual, auditory, and tactile, as well as an understanding of the underlying structure of the world.

While large language models have undoubtedly advanced the field of AI and demonstrated impressive natural language capabilities, they fall short of achieving true AGI due to their lack of reasoning abilities, reliance on massive amounts of data, absence of common sense understanding, and ethical concerns. To reach AGI, researchers must explore alternative approaches that move beyond the limitations of current large language models, incorporating reasoning, efficient learning mechanisms, and a more comprehensive understanding of the world. Addressing the ethical challenges associated with AI development is also crucial to ensure that AGI benefits all of humanity and avoids causing harm.

Artificial intelligence is rapidly transforming how our world harnesses energy, produces goods, and carries out complex tasks that once depended on human ingenuity. At its heart, AI offers a new way to direct the use of energy without requiring the same high-wage lifestyles that human experts traditionally demand. A skilled professional such as a doctor or corporate executive may appear to consume only 100 watts to power the body or around 25 watts for the brain, yet in reality, their overall consumption extends far beyond metabolic requirements. Salaries translate into large homes, frequent travel, and an array of consumer goods, all of which command significant upstream energy inputs. By contrast, AI that runs on electricity alone bypasses much of that indirect overhead. Once trained, a single model can be duplicated across data centers, potentially outpacing the total energy efficiency of thousands of human workers whose wages would otherwise perpetuate high levels of resource use and associated carbon emissions.


In thermodynamic terms, intelligence—be it human or machine—acts as an agent accelerating the conversion of stored energy into useful work and, ultimately, entropy. Humans did this dramatically by exploiting fossil fuels on an unprecedented scale, but AI can arguably push that capability much further. Modern neural networks, powered by specialized GPUs or other accelerators, require substantial electricity for training, but if deployed wisely, they may achieve a better ratio of energy input to productive output than the traditional model of human labor. A single large language model can handle medical triage, analyze legal documents, write software, or optimize a manufacturing line, all without the web of support systems—housing, transportation, personal consumption—that humans inevitably need. This does not mean AI is inherently low-energy, but rather that the indirect lifestyle costs associated with human labor can often dwarf the electricity demands of a well-run data center.


As a result, a novel concept emerges: energy return on intelligence, where AI may eventually surpass humans in delivering high-value cognition for less net energy cost once we account for the entire life-cycle footprint of employing a human workforce. We see this vividly in tasks like image generation or text composition, where comparisons of carbon emissions suggest AI-based creation can emit orders of magnitude less CO₂ per unit of content compared to a human doing the same job. Yet whether this leads to an overall reduction of global resource consumption remains uncertain, since gains in efficiency often tempt societies to do more—sometimes provoking rebounds in energy use. Still, if AI is steered toward practical tasks that genuinely raise productivity, streamline processes, and reduce waste, it could become a powerful instrument for sustainability, especially if fueled by cleaner energy grids rather than fossil sources. AI thereby offers an opportunity to decouple economic expansion from runaway energy consumption, provided that growth aims for quality and well-being rather than sheer volume of output.
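As a rough illustration of this "energy return on intelligence" idea, the sketch below compares the energy attributable to a human completing a task against a single model inference. Every number in it (the human lifestyle power figure, task time, and per-query energy) is a hypothetical assumption chosen for illustration, not a measurement from any study:

```python
# Back-of-envelope comparison of energy per task: human worker vs. LLM inference.
# All figures are illustrative assumptions, not measured values.

HUMAN_LIFESTYLE_POWER_W = 3000.0   # assumed total footprint per person (direct + indirect)
HUMAN_TASK_HOURS = 1.0             # assumed time for a human to draft one page of text
LLM_ENERGY_PER_QUERY_WH = 5.0      # assumed energy for one inference serving the same task

def human_task_energy_wh(power_w: float = HUMAN_LIFESTYLE_POWER_W,
                         hours: float = HUMAN_TASK_HOURS) -> float:
    """Energy attributed to a human completing the task, in watt-hours."""
    return power_w * hours

def energy_ratio(human_wh: float, llm_wh: float) -> float:
    """How many times more energy the human route uses than the LLM route."""
    return human_wh / llm_wh

human_wh = human_task_energy_wh()  # 3000.0 Wh under these assumptions
ratio = energy_ratio(human_wh, LLM_ENERGY_PER_QUERY_WH)
print(f"human: {human_wh:.0f} Wh, LLM: {LLM_ENERGY_PER_QUERY_WH} Wh, ratio ~{ratio:.0f}x")
```

The point of the exercise is only that the lifestyle term dominates: change the per-query energy by an order of magnitude and the ratio still hinges almost entirely on the assumed human footprint.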


On the human side, the rise of AI raises deeper questions about the purpose of work and the distribution of resources. In an age when machines can handle ever-larger portions of both manual and knowledge-based tasks, societies must redefine how and why people earn a living. Historically, specialized professionals have harnessed their intelligence to capture a significant share of the overall energy budget by way of wages, but widespread AI may destabilize that arrangement. Freed from labor obligations, some individuals might devote themselves to creative endeavors, immersive experiences, or community building, possibly leading to a cultural shift away from endless material consumption. Others might struggle with the loss of traditional roles and identities if no social framework replaces the norms of the work-centered life. The key will be to channel AI’s tremendous cognitive capacity so it enhances human experiences, addresses ecological constraints, and relieves drudgery—rather than simply reinforcing existing inequalities or escalating resource usage. In that sense, artificial intelligence carries within it both the potential to enable a more harmonious balance with our finite planet and the risk of turbocharging entropy if left to pure market forces. The challenge for humanity is to guide AI development in a direction that treasures purpose, fosters creative exploration, and consciously reduces the indirect energy overhead that has, for too long, been taken for granted as the cost of intellect.
 

FairAndUnbiased

Brigadier
Registered Member
Artificial intelligence is rapidly transforming how our world harnesses energy, produces goods, and carries out complex tasks that once depended on human ingenuity. […]

The current model is:

Originally:

Machines do the drudge work like being waiters and janitors, while humans create art and poetry.

Actuality:

Humans do the drudge work like being waiters and janitors, while machines create art and poetry.
 

9dashline

Captain
Registered Member
The current model is:

Originally:

Machines do the drudge work like being waiters and janitors, while humans create art and poetry.

Actuality:

Humans do the drudge work like being waiters and janitors, while machines create art and poetry.
Well... physical AI is not far behind...

A Unitree costs only $16,000 per unit; once its AI brain is fully trained, it can do the work of most low-wage manual labor.

Musk targets his Optimus bot at $25,000, and it can cook, clean, babysit, mow the lawn, wash dishes, do laundry, and run errands... Gig workers are SOL...
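A crude payback calculation makes the economic point concrete. The robot prices come from the figures above; the wage, hours, and upkeep numbers are assumptions for illustration only:

```python
# Rough payback period for a humanoid robot replacing low-wage labor.
# Robot prices are quoted from the post; wage and overhead numbers are assumptions.

def payback_months(robot_cost: float,
                   hourly_wage: float = 15.0,      # assumed low-wage rate, USD/hour
                   hours_per_month: float = 160.0, # assumed full-time schedule
                   monthly_upkeep: float = 100.0   # assumed electricity + maintenance
                   ) -> float:
    """Months until the robot's cost equals the wages it displaces, net of upkeep."""
    monthly_savings = hourly_wage * hours_per_month - monthly_upkeep
    return robot_cost / monthly_savings

# Unitree at $16,000 and the Optimus target of $25,000, as cited above:
print(f"Unitree: ~{payback_months(16000):.1f} months")
print(f"Optimus: ~{payback_months(25000):.1f} months")
```

Under these assumed numbers either robot pays for itself in well under a year of full-time displaced labor, which is why even a large error in the upkeep estimate doesn't change the qualitative conclusion.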
 

Wrought

Junior Member
Registered Member
It would work well as long as DeepSeek becomes and stays the best AI around. However, OpenAI and Grok can always look through DeepSeek's code and copy or improve whatever features they like into their own products without sharing anything back to DeepSeek. How fast do you think DeepSeek can come up with new features, methods, and ideas compared to how fast OpenAI and Grok can copy and improve them?

The whole point of the open-source model is that everyone can see it and everyone can contribute for free, whereas in closed-source the only contributors are the salaried employees on your payroll who have access to the source code. You give for free and you get for free. Or you charge customers and pay employees. Either/or. None of this is new in any way, and neither model is necessarily superior, which is why open-source and closed-source are both common paradigms.

In your example, it's not about how fast DeepSeek vs. OpenAI can come up with new features; it's how fast the entire world vs. OpenAI can come up with new features. Because if all OpenAI is charging for is copied DeepSeek code, then literally everyone has free access to the exact same thing, and why would anyone ever pay them?
 

FairAndUnbiased

Brigadier
Registered Member
Well... physical AI is not far behind...

A Unitree costs only $16,000 per unit; once its AI brain is fully trained, it can do the work of most low-wage manual labor.

Musk targets his Optimus bot at $25,000, and it can cook, clean, babysit, mow the lawn, wash dishes, do laundry, and run errands... Gig workers are SOL...
People don't pay waiters and janitors just to get the work done. Until you reach extremely high income levels, it's more cost-effective to do the work yourself.

People pay waiters and janitors for the privilege of commanding the time of another human.
 

OptimusLion

Junior Member
Registered Member
Loongson DeepSeek large-model inference all-in-one machine released: 3C5000 processor on the independent LoongArch instruction set architecture, equipped with the Taichu Yuanqi T100 accelerator card

According to the Loongson Anhui official account, Loongson Zhongke has released a full-stack inference machine based on the DeepSeek large model. The product is built on a processor using Loongson's independent instruction set architecture (LoongArch) and is equipped with the Taichu Yuanqi T100 accelerator card...



 

OptimusLion

Junior Member
Registered Member
At the "Pujiang AI Ecosystem Forum" of the 2025 Global Developers Conference (GDC) held yesterday, the Shanghai Artificial Intelligence Laboratory released Taoyuan 2.0 (GRUtopia 2.0), a general embodied-intelligence simulation platform, and opened it to global developers.


Taoyuan 2.0 upgrades Taoyuan 1.0, the "city-level" simulation platform released in July 2024, through three core technology upgrades: a modular architecture, automated scenario generation, and efficient data collection.


 