7 Comments
Jan 4 · Liked by Maxim Lott

This is a pretty disappointing article. I say this as a fan of your other articles, like the one on Covid; I appreciate your strong desire to be objective and not fall into tribalism and motivated reasoning, and I think you generally do a good job of that. But I feel like you've abandoned that approach here. Rather than present an impartial analysis of the arguments for and against AI being dangerous, you only went looking for evidence on one side: reasons it might not be dangerous. And then you conclude that because there's a *chance* it won't be dangerous, we should be optimistic.

This seems highly irrational. I could come up with a long list of reasons why I *might* not die in a car crash, but that doesn't mean I should throw caution to the winds and drive as recklessly as I want to. No matter how risky an activity is, there will always be a plethora of reasons why it "might" go ok. This is like someone buying a lottery ticket under the reasoning that they *might* win. It's true! They might! But that's still not a good reason to take the gamble. What matters is what that chance actually is, and what the costs and benefits are on each side.

In particular, the fact that you say a 5-10% chance of AI doom is plausible and then conclude that this means we should be optimistic about AI is flabbergasting. Take something like BASE jumping, one of the most dangerous recreational activities in the world. The death rate for an individual BASE jumper is about 1/2300 per jump, or roughly 0.04%. That means that your estimate of AI risk (averaging "between 5% and 10%" to 7%) makes it roughly 160 times as dangerous for you as a single BASE jump. And unlike BASE jumping, this is a risk being inflicted on everyone on Earth without their consent, with no way to opt out. I have a hard time believing that you'd want to play Russian roulette with one bullet somewhere inside two 6-chambered pistols, yet that 1-in-12 chance (about 8%) is roughly the same chance of death as your estimate of AI risk.
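For anyone who wants to check the arithmetic above, here is a quick sketch; the only inputs are the 1/2300 per-jump rate and the 7% midpoint of the 5-10% range cited in the comment:

```python
base_jump_death_rate = 1 / 2300   # per-jump fatality rate cited above (~0.043%)
ai_doom_estimate = 0.07           # midpoint of the "between 5% and 10%" range

# How many BASE jumps carry the same risk as the AI estimate?
ratio = ai_doom_estimate / base_jump_death_rate
print(ratio)  # ~161, i.e. roughly 160 times as dangerous

# Russian roulette: one bullet hidden among two 6-chamber pistols
roulette_risk = 1 / 12
print(roulette_risk)  # ~0.083, inside the 5-10% range
```

Note the rounded 0.04% figure would give ~175x; using the unrounded 1/2300 rate gives ~161x, which is why "roughly 160 times" is the safer statement.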

Of course the headline claim, that AI "probably" won't kill us all, is true per that estimate; 7% is less than 50%. But the positive framing you put on this fact is bizarre; in any other context we'd consider a 7% chance of death to be an extreme emergency, with mitigating it as the top priority.

author

Thanks for sharing your concerns.

I agree this post isn't a fully-rounded deep dive. Not every post will be. Consider it more of "here are some points that I think aren't getting enough attention in the discussion."

Regarding your points about probabilities, I think I agree that we shouldn't play Russian roulette with those odds. However, we have the advantage of not having a single trigger. I agree with Hanson that we need to push further through the fog here before we really have a good handle on what the chances are, and how to counter risks. Otherwise it's perhaps like trying to design the safety features on an imagined Boeing 737 in 1904, just after the Wright Brothers flew.


Why did you say 5% - 10% seems reasonable then? What's the actual chance that you think things will go wrong?

author

There are other people who have thought about the exact odds more. I will say this: I think the odds of going past the point of no return, in say the next two years, are extremely low (say 0.1%) and so we should go ahead for now, with eyes open, and keep updating as we learn more.


I don't understand how a probability that low can be justified given how poorly we understand AI, but I appreciate you putting a specific number on it!


Thanks for this. I wrote a similarly level-headed post a few weeks ago, which touches upon many of the same points you discuss in this article. One thing I'd add is that some of the most important problems in computer science (planning, scheduling, pathfinding) are generally very hard to solve, and if P != NP they will remain so forever. So even if an AI has exponentially more intelligence and computing power than we do, it will be at best only linearly better at these tasks, in terms of the size of the problem instances it can handle.
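A minimal sketch of the commenter's scaling point, under the assumption of a brute-force search costing 2^n operations (the function name and budget figures are illustrative, not from the original):

```python
import math

def max_solvable_n(ops_budget):
    # For an algorithm costing 2**n operations, the largest instance
    # size solvable within the budget is floor(log2(budget)).
    return int(math.log2(ops_budget))

budget = 10**12
print(max_solvable_n(budget))         # 39
print(max_solvable_n(budget * 1000))  # 49: 1000x more compute adds only ~10 to n
```

The point being: multiplying compute exponentially over time only grows the solvable instance size linearly, so long as the best-known algorithm remains exponential-time.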


Interesting article. I genuinely appreciate that you made an effort to be concise, as opposed to other very long and complex essays that deter most people who are not specifically interested in AI alignment.
