Why you shouldn't build your career around existential risk
I feel weird writing this because the core of the argument is almost metaphysical for me. I believe that attention is the most powerful thing in the world and I have a very deep sense that whatever we pay attention to – whether positively or negatively – we bring more of into the universe. [1]
Patrick McKenzie once noted that if you want a problem solved, you give it to someone as a project. If you don’t want a problem to be solved, you give it to someone as a job. [2]
“The Department of X, for the 25th straight year, has reported that they did a lot about X, that they have made progress on initiatives A, B, and C with metrics to show for it, that X is nonetheless more pressing than last year, and that they need more headcount.”
If you’re anti-capitalist, you need capitalism. If you’re anti-communist, you need communism. “Any PR is good PR.” Any attention is good attention. If you’re anti-something, it means that something exists and matters enough to be worth being anti. In fact, the bigger it is, the better for your career.
I’m especially bothered by people building jobs and careers around existential risk. If you built your entire career around a particular existential risk, what happens to you if that risk gets dealt with? You no longer have a job. You no longer have a career.
I mean, what happens to the career of Eliezer Yudkowsky – the most prominent advocate of stopping all AI research because of AI existential risk – if it turns out that AI is simply not an existential concern? Would anyone care about him at all? And what would he do with his life then? Become an e/acc? People believe what they must believe. And they bring their beliefs into the world with all of their life force and intelligence.
(Notably, Nick Bostrom, who taught Yudkowsky about AI risk – but hasn’t centered his entire career around it – has since reconsidered and now believes the risks are overblown. [3])
There’s clearly a way in which this argument is stupid. Like, if there were a giant asteroid hurtling towards Earth, due to hit us in 10 years and cause a mass extinction, that would be an existential risk. And I think working on it would be amazing. But it would be amazing because it’s a concrete problem facing us, and nobody would build their career around it. We’d deal with the asteroid and move on to other things.
There’s also the immense opportunity cost of working on existential risks. All of these incredibly talented and smart people, all of this capital – and instead of working towards building a better future and solving real problems, they got one-shotted by scary thought experiments in high school and college, built their entire careers around those thought experiments, and are now stuck. That’s just so sad.
How many diseases would we have cured? How much physics and engineering progress would we have made? How much great art would’ve been created? But instead, we have some of the smartest minds of their generation staring into the abyss for most of their waking hours, waiting for the abyss to stare back.
In fact, it has already stared back at many of them. Sam Altman noted that Eliezer Yudkowsky probably did more than anyone else to speed up the advent of AGI: by waking everyone up to AI, by inspiring Altman himself to start OpenAI, and by helping Demis Hassabis fundraise for DeepMind very early on. [4]
Let’s not wait for the abyss to stare back at the rest of us as well. Let’s work towards the future we want, not against the future we don’t want. After all, the fate of the universe might depend on it.