I need this answer from someone who isn't an average Joe confusing real life with AI apocalypse movies.
When I was a little kid my father gave me a floppy disk with a neat little program on it called "Mopy Fish". It was a screensaver that doubled as a virtual fish tank (your PC monitor became the tank once the screensaver kicked in) and contained a single orange fish called a "Mopy". It was commissioned by HP (the computer company) and created by a Japanese "AI" company called Virtual Creatures. Anyway, I was too young back then to realize it was all smoke and mirrors, and for the longest time I thought this was an "AI" fish that really had digital feelings and thoughts and was "alive"... I still remember it was this moment that really got me interested in the whole notion of artificial intelligence... I could never understand how the "sentience" of a fish could just somehow be encoded onto the 1.44 MB capacity of a floppy disk...
Fast forward to late 2022, when OpenAI (the quasi non-profit that Elon Musk co-founded and later left) debuted "ChatGPT" to the world, a chatbot built on top of the language-trained GPT-3.5 artificial neural network... Make no mistake, technologies like ChatGPT will indeed displace a LOT of jobs, not by replacing skilled workers outright but by acting as an intellectual multiplier that lets one skilled worker do the intellectual work of what used to take a whole team.
But my real interest in AI, ever since childhood, has been in this notion of "sentience": a digital, simulated being able to subjectively feel the canonical 'redness of red', to subjectively experience in its inner mind the direct raw sensation behind the neural correlates of consciousness, these ineffable qualia, as it were... Going back to Descartes: I feel, and therefore I know I can experience qualia and that qualia are real. The real holy grail is whether, and when, we can create an artificial digital being capable of experiencing this same level of sentience as us. I believe it is possible, because we ourselves are nothing more than a collection of atoms and molecules; in actuality it is the universe itself and the so-called laws of physics/math that do the "computing", and all other forms of computer, from the abacus, to the slide rule, to the Intel processor, to the human brain, are all higher-order simulated 'emulators'...
Thought experiment: in a universe in which qualia never existed and could not exist, where everything was just subjectively 'dark', would any intelligent beings that evolved in such a universe ever have, or be capable of having, conversations about why they feel qualia, even though (by hypothesis) they don't?
So there is no reason why 'sentience' has to be encased in a biological body/brain; in fact, this sentience/consciousness/qualia-experiencing ability should be, and most likely is, substrate independent. To borrow tech parlance again: a bare-metal hypervisor does not know and does not care what operating systems the virtual machines on top of it are running, and from the perspective of the individual virtual machines, they neither know nor care that they are actually simulated instances running within a larger computer "out there" in the "real world".
If 'sentience' is indeed substrate independent, then in theory we should be able to use computers to create virtual digital minds that can think, feel, act, and most importantly experience subjective qualia exactly the way we do... basically, digital humans that not only pass the Turing Test outwardly but are actually "alive" on the inside... the fire in the equations giving rise to the ghost in the machine, so to speak.
But for this to be possible it has to be an emulation from the ground up. Basically, each neuron has to be simulated at the cellular level, if not the molecular or perhaps even quantum level... There have been some attempts. A project called OpenWorm has tried to simulate an entire worm (the nematode C. elegans) in this manner, and years ago the Blue Brain Project (an EPFL effort running on IBM Blue Gene supercomputers) set out to simulate a mammalian brain bottom-up; it got as far as a portion of a rat's brain, and at that level of fidelity even all the computers in the world combined would not be enough to fully simulate the brain of a cat.
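To give a feel for what "ground-up" emulation means at even the coarsest level, here is a minimal sketch of a single leaky integrate-and-fire neuron in Python. The parameter values are illustrative textbook defaults, not anything from OpenWorm or Blue Brain, and note that this is already a drastic simplification: real ground-up emulation models ion channels, dendritic geometry, and synaptic chemistry, not just a single membrane voltage.

```python
# Toy leaky integrate-and-fire (LIF) neuron, stepped with Euler integration.
# A real cellular-level simulation (OpenWorm, Blue Brain) goes far deeper:
# this models one voltage variable per neuron and nothing else.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, r_m=10.0):
    """Return the list of time steps at which the neuron spikes.

    input_current: injected current (nA) per time step
    dt: time step (ms); tau: membrane time constant (ms)
    r_m: membrane resistance (MOhm); voltages in mV
    """
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Membrane equation: dV/dt = (-(V - V_rest) + R_m * I) / tau
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dv * dt
        if v >= v_threshold:   # threshold crossed: emit a spike
            spikes.append(step)
            v = v_reset        # reset membrane potential
    return spikes

# Constant 2 nA input for 100 ms (1000 steps of 0.1 ms) makes it fire
# periodically; zero input leaves it silent at its resting potential.
spike_steps = simulate_lif([2.0] * 1000)
print(f"{len(spike_steps)} spikes in 100 ms")
```

Even this single-variable caricature needs thousands of update steps per simulated second per neuron; scale that to ~86 billion neurons at molecular fidelity and the hardware estimates above stop sounding pessimistic.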
All of the prominent "AI" today, ChatGPT and the like included, is really just smoke and mirrors and fancy parlor tricks. These AIs will never be 'sentient', not even a trillion years from now. The only way for true 'sentience' to happen is to emulate a digital connectome from the ground up.
But our current paradigms and hardware/technology aren't capable of simulating 'sentience'. It will take many more fundamental breakthroughs in algorithms, in methods of machine learning, and in raw processor speed to approach anything close to fully emulating a sentient AI. Quantum AI comes to mind... And because of larger geopolitical issues and globally diminishing EROEI (energy return on energy invested), we may never reach this level of AI development.
The current paradigm of deep machine learning and artificial neural networks was only made possible by the advent of advanced graphics card technology: GPUs that doubled as AI inference chips, plus other ASICs such as Google's TPU (Tensor Processing Unit). Even if the algorithms and techniques had been discovered much earlier, the hardware simply wasn't available. Before the age of deep learning, game AI for something like Go relied on piecemeal algorithms, handcrafted and fleshed out with brute force or Monte Carlo search; but, unlike Chess, Go could never be brute-forced, since it has more board positions than there are atoms in the known universe. Deep neural networks came to the rescue because they are good at the kind of pattern recognition that can emulate intuition, dramatically reducing the "search space" and making brute force unnecessary.

Likewise, chatbots have existed since the dawn of the computer age, but traditionally they were just a bunch of "if...then...else" branching statements that searched for keywords in a string or substring and replied with canned messages from a canned database of sentences. It wasn't fooling anyone. With deep learning and neural networks trained on essentially the entire text-based internet, these advanced networks can almost fool some of the people some of the time, but it's all lexical card tricks. The AI doesn't have a brain, isn't really capable of thinking on its own, and is not at all sentient.
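The old-school "if...then...else" chatbot described above can be sketched in a few lines, in the spirit of ELIZA. The keywords and canned replies here are made up purely for illustration:

```python
# A toy keyword-matching chatbot in the pre-deep-learning style:
# scan the input for known substrings, answer with a canned reply.
# The rules below are invented for illustration only.

CANNED_RULES = [
    ("mother", "Tell me more about your family."),
    ("hello",  "Hi there! How are you feeling today?"),
    ("feel",   "Why do you feel that way?"),
]
FALLBACK = "I see. Please go on."

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, canned in CANNED_RULES:
        if keyword in text:   # naive substring match, no understanding at all
            return canned
    return FALLBACK           # nothing matched: generic deflection

print(reply("Hello!"))               # keyword "hello" matched
print(reply("I feel tired"))         # keyword "feel" matched
print(reply("The weather is nice"))  # no keyword, canned fallback
```

Every possible response is literally written out in advance by a human; the program contributes nothing but string matching, which is why, as the paragraph above says, it fooled no one.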
Suffice it to say, I bring all this up to make the point that ChatGPT is not true AI. It is very neat smoke and mirrors, but it is nonetheless nothing more than a cheap parlor trick.
This is because language is a reductive symbology meant to represent and map reality itself as experienced by a human; and since ChatGPT doesn't have even 0.00001% of the processing power needed to fully simulate a human brain/connectome, its words, no matter how seemingly convincing at first, will never reach the depths of a real, alive, deep-thinking human being.
Thought experiment: if language were limited to only ~20 words, the Turing Test would be a lot easier for an AI to pass.
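A back-of-envelope calculation shows why. Assuming (for illustration) a 20-word vocabulary and sentences capped at 5 words, the entire space of possible utterances is small enough to enumerate and pre-script canned answers for, ELIZA-style; a realistic vocabulary blows that space up beyond any hope of enumeration:

```python
# Back-of-envelope size of the utterance space of a tiny language.
# Assumptions (chosen for illustration): sentences of 1 to 5 words,
# word order matters, grammar ignored.

vocab_size = 20
max_len = 5

# Count all word sequences of length 1..max_len.
utterances = sum(vocab_size ** n for n in range(1, max_len + 1))
print(f"{utterances:,} possible utterances")   # 3,368,420 -- enumerable

# Contrast with even a modest 10,000-word vocabulary:
big = sum(10_000 ** n for n in range(1, max_len + 1))
print(f"{big:.2e} possible utterances")        # ~1e20 -- hopeless to enumerate
```

With only a few million possible inputs, a lookup table of human-written replies could pass for conversation; with ~10^20 of them, nothing short of actual generalization will do, which is the gap the thought experiment is pointing at.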