Bullshit arguments about AI (not) replacing jobs


This is a transcript from the Lex Fridman Podcast, where he interviewed Sam Altman:

You know, when Kasparov lost to Deep Blue, somebody said, and maybe it was him, that, like, chess is over now. If an AI can beat a human at chess, then no one's gonna bother to keep playing, right? Because like, what's the purpose of us, or whatever? That was 30 years ago, 25 years ago, something like that. I believe that chess has never been more popular than it is right now. And people keep wanting to play and wanting to watch. And, by the way, we don't watch two AIs play each other. Which would be a far better game, in some sense, than whatever else. But that's not what we choose to do. Like, we are somehow much more interested in what humans do.
I think this argument is bullshit.

Just as a racing car doesn't replace an Olympic sprinter, and a forklift doesn't replace a strongman, AI chess didn't replace human chess because the way it plays is fundamentally different. People watch chess for entertainment and competition, not purely for performance. Programming is not a sport[1], and no one watches it for fun.

Programming is a creative process based fundamentally on solving problems by creating something new or improving something that already exists. If every problem could be solved instantaneously by an AI at scale, no one would ask humans to do so. Furthermore, companies have no incentive to keep hundreds of programmers on the payroll when a single person guiding an AI could do all the work.

If AI automates art creation, people will still seek out human-made art for its emotional value. Many artists will lose their jobs to automation, but the inherent value of human-made art will persist. On the other hand, no one will prefer human-written code if it behaves identically to AI-written code.

The same goes for the argument that since calculators didn't replace mathematicians, AI won't replace humans.
A calculator doesn't do "math"; it performs arithmetic calculations. A mathematician's work rarely involves arithmetic, and instead consists of working with axioms, conjectures, and theorems, among other things.
I feel silly just pointing out this obvious fact, and I don't understand how people came up with this argument without second-guessing it.
If instead someone built an AI model that outputs mathematical theorems, something like a proof assistant but more advanced and using natural language, then the job of mathematicians would be in danger.

I postulate that if AI keeps getting better at programming, human programmers will start losing their jobs roughly in order of each job's complexity and the amount of documentation available online for AIs to train on. Web development will probably be automated first, followed by more niche subjects, and potentially ending with theoretical subjects like AI research.
If instead a superintelligent general AI came to exist, I don't see any reason for some of these jobs to last longer than the others.

Right now I'm scared of AIs, not because of a potential end-of-the-world scenario caused by a misaligned AI, but because of the existential fear that a superintelligent AI will render human intelligence obsolete. No more engineering jobs, no more science, no more programming.
I don't care about losing my job or money; however, I enjoy solving problems, and part of that joy comes from the existence of said problems. If an AI solves everything at an unprecedented speed, there will be no need to think anymore, and that scares me.
I don't know when that will happen, or if it will happen at all. And I hope that in a few years I will look back at this and think it was stupid.
My best bet right now is to release as much open source code as possible, so I can feel like I had an impact on the potential future super AI, even if it's a minuscule one.