Most of the AI risk discourse is about extinction scenarios.
Those scenarios are worth thinking about. I think they're real, not high-probability in the near term, and genuinely hard to reason about. They are also not what I expect to arrive first.
What I expect to arrive first is the quiet erosion of human agency. And it's already running.
What I mean
When a person stops drafting their own emails, their writing muscle weakens. That is observable in weeks, not years. When a team stops outlining its own arguments and hands the outlining to a model, the shape of what the team can think about narrows. When a company stops having its own debates because a meeting summary bot generates the consensus, the range of actual disagreement collapses.
Scale that up. Multiply by a decade. You don't get a paperclip maximizer. You get a civilization that has, without noticing, outsourced the muscles by which it stays awake.
Why this is worse than it sounds
Extinction risk has one rhetorical advantage. It's legible. Everyone can picture what they don't want.
Agency erosion is illegible in the moment. Each individual offload looks like a productivity win. Writing the email faster is a win. Summarizing the meeting faster is a win. The harm is that the muscle didn't get used. The harm is cumulative. And by the time it's visible, the muscle is gone.
This is the pattern of many slow-moving civilizational harms. Lead in gasoline. Opioid dispensing. Social media for teenagers. Each individual transaction was locally beneficial. The harm was the aggregate. The aggregate showed up late.
The mechanism
Three specific routes.
- Loss of friction. Cognitive work has a cost. That cost is what builds the capacity to do harder cognitive work later. If the cost goes to zero, the capacity stops compounding.
- Loss of argument. A society that runs its debates through a median-consensus model loses the ability to hold minority views long enough for them to be reconsidered. The model is trained on what's already published, and outputs a smoothed version of it. Real disagreement becomes rarer and stranger.
- Loss of practice. The people currently making good decisions under pressure learned by making worse decisions under less pressure. If nobody practices, nobody gets to be the person who decides.
None of these are hypothetical. All three are measurable right now.
The position
I'm not saying stop using AI. I use it constantly. This post was drafted against a model, for what it's worth.
I'm saying the scenarios the discourse spends most of its oxygen on are not the scenarios we're actually in. The one we're in is slower, subtler, and already happening. It looks like winning.
If I had to name the single question alignment needs to solve, it isn't "how do we prevent the model from destroying us." It's "how do we make sure the humans using the model stay the kind of humans worth preserving."
The first question has decades. The second one is this year.