I can tell you right now, as we speak: at the company I work for, one person's position has already been eliminated by an LLM. They were doing document processing.
The edge cases where something goes wrong are handled by another existing office employee.
That's not "many years if not decades away". That's now.
I think LLMs are going to replace those sorts of roles, because on balance they will be the better financial decision for corporations. That doesn't mean they can replace people in all specialised professions. More likely they'll gradually consolidate and trim teams as they become a more widespread tool for assisting with and speeding up processes.
As for the discussion on general intelligence: LLMs are not general intelligence and maybe never will be. At the moment, an LLM couldn't perform 1% of my job, and squeezing out even that 1% would require real effort unless the process were somehow streamlined for the role. I imagine this is true for countless professions. In theory an LLM may be able to do some roles here and there, but it will require a lot of checking and refining over time. The thing is, it gets so many things hilariously wrong. Where I work, they've spent some coin investigating and developing the tech to take over roles, with hilariously poor results.
LLMs can obviously be useful, but they are not the intelligent, world-changing technology many voices have proclaimed them to be. They can't navigate real nuance or complexity. In specialised fields they currently lack even the basic information, and it will take time, possibly years or decades, to develop them to the point where they can perform a lot of grunt work.

Software engineers are obviously not going to learn accounting or geophysics to produce LLMs specifically designed to assist those professions. So I wonder how many of the people progressing AI are mindlessly investing in the belief that general AI will magically appear and solve those problems, creating a truly infallible piece of digital intelligence that can navigate everything on its own. The thing with LLMs is that they don't actually learn. They don't actually think. They just do clever pattern recognition, if I understand it at all (admittedly, I don't understand how it works in its entirety). All I know is that if it were everything it's claimed to be, it wouldn't be this ridiculously imperfect. Stringing together barely coherent sentences to produce an illusion of intelligible interaction is all it does. This field just happens to overlap a tiny, pathetic bit with some ability to write code (not that well) and do a few peripheral tricks by happy coincidence.
Even in the shining examples where it has replaced some roles in a limited capacity, I think it has already hit its current limits. Graphic design teams have not suffered massive redundancies around the world. Law clerks are still being employed, and those areas are the bread and butter of what LLMs do best so far.