7 Comments
Jan 4 · Liked by Maxim Lott

This is a pretty disappointing article. I say this as a fan of your other articles, like the one on Covid; I appreciate your strong desire to be objective and not fall into tribalism and motivated reasoning, and I think you generally do a good job of that. But I feel like you've abandoned that approach here. Rather than present an impartial analysis of the arguments for and against AI being dangerous, you only went looking for evidence on one side: reasons it might not be dangerous. And then you conclude that because there's a *chance* it won't be dangerous, we should be optimistic.

This seems highly irrational. I could come up with a long list of reasons why I *might* not die in a car crash, but that doesn't mean I should throw caution to the winds and drive as recklessly as I want to. No matter how risky an activity is, there will always be a plethora of reasons why it "might" go ok. This is like someone buying a lottery ticket under the reasoning that they *might* win. It's true! They might! But that's still not a good reason to take the gamble. What matters is what that chance actually is, and what the costs and benefits are on each side.

In particular, the fact that you say a 5-10% chance of AI doom is plausible and then conclude that this means we should be optimistic about AI is flabbergasting. Take something like BASE jumping, one of the most dangerous recreational activities in the world. The death rate for an individual in BASE jumping is about 1/2300 per jump, or roughly 0.04%. That means your estimate of AI risk (averaging "between 5% and 10%" to 7%) makes it about 160 times as dangerous for you as going on a BASE jump. And unlike BASE jumping, this is a risk being inflicted on everyone on Earth without their consent, with no way to opt out. I have a hard time believing that you'd want to play Russian roulette with one bullet somewhere inside two 6-chambered pistols, yet that's roughly the same chance of death as your estimate of AI risk.
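For concreteness, here is the arithmetic behind those two comparisons, using only the figures quoted in the comment (the 7% midpoint is just the average of the 5% and 10% bounds):

```python
# Figures quoted in the comment above.
base_jump_death_rate = 1 / 2300   # cited per-jump fatality rate for BASE jumping
ai_doom_estimate = 0.07           # midpoint of the article's 5-10% range

# How many BASE jumps' worth of risk is the AI estimate?
ratio = ai_doom_estimate / base_jump_death_rate
print(round(ratio))  # 161

# One bullet hidden somewhere across two 6-chambered revolvers:
# 1 chance in 12 of landing on the loaded chamber.
roulette_death_chance = 1 / 12
print(round(roulette_death_chance, 3))  # 0.083, i.e. about 8.3%
```

So the precise ratio is about 161, and the two-pistol roulette analogy (8.3%) slightly overshoots the 7% midpoint, which is why "roughly the same chance" is the fair phrasing.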

Of course the headline claim, that AI "probably" won't kill us all, is true per that estimate; 7% is less than 50%. But the positive framing you put on this fact is bizarre; in any other context we'd consider a 7% chance of death to be an extreme emergency, with mitigating it as the top priority.


Thanks for this. I wrote a similarly level-headed post a few weeks ago that touches on many of the same points you cover in this article. One thing I'd add is that some of the most important problems in computer science (planning, scheduling, pathfinding) are computationally hard, and if P != NP they will remain so forever. So even if an AI had exponentially more intelligence and computing power than us, it would be at best only linearly better at these tasks: exponential growth in compute buys only linear growth in the size of the instances it can solve.
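A toy illustration of that scaling, under the simplifying assumption that solving an instance of size n requires brute-force ~2^n work (a hypothetical worst case for illustration, not a claim about any specific algorithm):

```python
import math

# Assume (hypothetically) that solving an instance of size n takes ~2**n steps.
# A compute budget of B steps then handles instances up to n ~= log2(B).
def max_instance_size(budget: float) -> int:
    return math.floor(math.log2(budget))

print(max_instance_size(1e12))  # 39
print(max_instance_size(1e15))  # 49 -- 1000x more compute buys only +10
```

A thousandfold jump in compute moves the solvable instance size only from 39 to 49: exponential resources yield merely additive gains on exponential-time problems.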


Interesting article. I genuinely appreciate that you made an effort to be concise, as opposed to the very long and complex essays that deter most people who aren't specifically curious about AI alignment.
