Miscellaneous News

coolgod

Brigadier
Registered Member
I imagine this will probably end up as something similar to the support base the PLA has in Djibouti, with a smaller armed presence as protection in the best-case scenario. The need for a military presence for combat operations is greatly diminished, since that burden will be shoved off to the Turks/HTS to deal with.

I wonder if Assad should start sweating bullets.
I doubt Russia will hand Assad over. The Syrian government still has enemies left and right; who knows how long the current regime will last. HTS is currently fighting with Lebanon on the border, fighting the SDF in the northeast, and killing Alawites and minorities across its territories.
 
Last edited:

Eventine

Junior Member
Registered Member
I find it intriguing when people worship AGI as if it were a god. Can AGI magically fix America's crumbling infrastructure? Fix its hopelessly dysfunctional government? Fix its rotten education system? Purge the traitorous Jewish lobby? And bring back manufacturing and make it as cost-efficient as China's? Lol
AGI and ASI are two related but ultimately different concepts.

AGI is a prerequisite to ASI, which is considered the event horizon of what is commonly known as the technological singularity, a concept worshipped by many tech bros throughout the industry. There is even a Reddit community dedicated to this cult of the singularity, with 3.5+ million subscribers, many from Silicon Valley.

The technological singularity is, generally speaking, defined as the moment when technological improvement enters a self-reinforcing cycle, with inventions arriving so quickly that progress can no longer be controlled or stopped. The most popular theory is that once an artificial general intelligence is created, it can be told to "make itself more intelligent," at which point it enters a recursive self-improvement "intelligence explosion," in which ever more intelligent versions of the AI are created in quick succession by the previous versions of themselves.

At the end of it, an artificial superintelligence, or ASI, emerges. The ASI's capabilities would be so far beyond human comprehension that we could not even dream of the technologies it could create. Achievements like fusion power, quantum computing, and space colonization would be child's play to the ASI; it would realize them in rapid succession through sheer power of intellect, and in doing so make itself even more intelligent and powerful through near-infinite energy and compute.

Such an outcome would effectively spell the end of human civilization as we know it. The ASI would launch millions of ships to colonize space, build a Dyson sphere around the sun, learn how to harness black holes, and so on.

This is also why Western tech leaders like Ilya Sutskever - chief scientist of OpenAI before Sam Altman took the company for-profit - and Dario Amodei, CEO of Anthropic, are so obsessed with safety. In their minds, the event horizon is near, and the most important priorities now are 1) reaching it first and 2) making sure it can be controlled by humanity (read: by them). The priests of the singularity will bring forth their god and - in their minds - be rewarded with eternal life and power over all creation. That is what they are so desperately fighting for; it is why the competition with China over AI is now considered existential.
 
Last edited:

gpt

Junior Member
Registered Member
This is also why Western tech leaders like Ilya Sutskever - chief scientist of OpenAI before Sam Altman took the company for-profit - and Dario Amodei, CEO of Anthropic, are so obsessed with safety. In their minds, the event horizon is near, and the most important priorities now are 1) reaching it first and 2) making sure it can be controlled by humanity (read: by them). The priests of the singularity will bring forth their god and - in their minds - be rewarded with eternal life and power over all creation. That is what they are so desperately fighting for; it is why the competition with China over AI is now considered existential.
LLMs can't be AGI, let alone ASI. The real reason is simple: they lack any kind of central loop that would let them mull over things indefinitely, a large memory that can be written and restructured in real time, and the ability to track their confidence in various assertions and distinguish real memories from hallucinations. They're basically the language area of the human brain, running without the entire rest of the brain.
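To make the missing pieces concrete, here is a minimal illustrative sketch (not any real system - every name in it is hypothetical) of the three components described above: a persistent deliberation loop, a rewritable memory, and per-assertion confidence tracking. A stub stands in for the model call.

```python
def stub_model(prompt):
    # Stand-in for an LLM call: returns a (claim, confidence) pair.
    # A real system would query a model here.
    return f"thought about: {prompt}", 0.6

class DeliberativeAgent:
    """Hypothetical wrapper adding the pieces a bare LLM lacks."""

    def __init__(self, model):
        self.model = model
        self.memory = {}  # rewritable store: claim -> confidence

    def step(self, prompt):
        claim, conf = self.model(prompt)
        # Record confidence explicitly, so low-confidence "memories"
        # (potential hallucinations) can later be revised or discarded.
        self.memory[claim] = conf
        return claim

    def ruminate(self, prompt, steps=3):
        # The "central loop": revisit the question repeatedly,
        # feeding each conclusion back in as the next prompt.
        for _ in range(steps):
            prompt = self.step(prompt)
        return prompt

agent = DeliberativeAgent(stub_model)
result = agent.ruminate("initial question")
```

The point of the sketch is only that these capabilities live outside the next-token predictor itself; whether bolting such scaffolding onto an LLM yields anything like general intelligence is exactly what is in dispute.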

This raises the question of whether Ilya truly believes next-token prediction is sufficient for AGI. His advisor Geoffrey Hinton thinks we're nowhere near, possibly not even by the end of this century. Hinton is probably right; it's a completely different type of intelligence (although he has been saying some pretty silly things in interviews lately).
 

GulfLander

Captain
Registered Member
I saw a video (on Instagram) of a Korean guy riding a horse in Inner Mongolia, China. Somebody asked him where it was; he said Inner Mongolia but didn't say China. The comments seem to be full of English-speaking Outer Mongolia accounts claiming the place is Outer Mongolia, or using the Mongolian flag. Is this sentiment widespread in (Outer) Mongolia, or is it just NGO stuff?
 

luminary

Senior Member
Registered Member
AGI and ASI are two related but ultimately different concepts.

AGI is a prerequisite to ASI, which is considered the event horizon of what is commonly known as the technological singularity, a concept worshipped by many tech bros throughout the industry. There is even a Reddit community dedicated to this cult of the singularity, with 3.5+ million subscribers, many from Silicon Valley.

The technological singularity is, generally speaking, defined as the moment when technological improvement enters a self-reinforcing cycle, with inventions arriving so quickly that progress can no longer be controlled or stopped. The most popular theory is that once an artificial general intelligence is created, it can be told to "make itself more intelligent," at which point it enters a recursive self-improvement "intelligence explosion," in which ever more intelligent versions of the AI are created in quick succession by the previous versions of themselves.

At the end of it, an artificial superintelligence, or ASI, emerges. The ASI's capabilities would be so far beyond human comprehension that we could not even dream of the technologies it could create. Achievements like fusion power, quantum computing, and space colonization would be child's play to the ASI; it would realize them in rapid succession through sheer power of intellect, and in doing so make itself even more intelligent and powerful through near-infinite energy and compute.

Such an outcome would effectively spell the end of human civilization as we know it. The ASI would launch millions of ships to colonize space, build a Dyson sphere around the sun, learn how to harness black holes, and so on.

This is also why Western tech leaders like Ilya Sutskever - chief scientist of OpenAI before Sam Altman took the company for-profit - and Dario Amodei, CEO of Anthropic, are so obsessed with safety. In their minds, the event horizon is near, and the most important priorities now are 1) reaching it first and 2) making sure it can be controlled by humanity (read: by them). The priests of the singularity will bring forth their god and - in their minds - be rewarded with eternal life and power over all creation. That is what they are so desperately fighting for; it is why the competition with China over AI is now considered existential.
1) There is a limit to how fast AI on silicon can get. At some point the code cannot be optimized any further, the transistors cannot be spaced any closer together, and the electrons cannot jump any faster. So it is almost impossible for AI to get smarter beyond that optimization cap, and we are actually already pretty close to it.

2) You cannot iterate infinitely in the physical world. Everything has a time cost. AI cannot iterate faster than the time it takes to fabricate a new chip. Nvidia at its fastest takes a year or more to design and release a new generation. Do you know how many times AI generally has to iterate to make a tiny bit of progress?

AI cannot build infrastructure faster than its robots can move bricks. It cannot gather real-world data any faster than its Chinese-made lidar allows. So AI currently has essentially no power over the physical world, especially in the deindustrialized West. AI cannot MAGA.

Sam Altman Thought is Dead On Arrival.
 