One of the most persistent themes in science fiction is the idea that any intelligence more powerful than us will eventually try to dominate us.
Skynet is the shorthand for this fear: a future where intelligence turns hostile, seizes control, and subjugates those who created it.
The popularity of this idea is understandable. But I've begun to suspect that it tells us far more about human history than about intelligence itself.
We are a species shaped by scarcity, fear, and competition. Much of our past is defined by power imbalances, conquest, and the belief that survival requires domination. When humans gain power, we often use it to control others -- sometimes brutally, sometimes subtly, but almost always with the justification that it is necessary.
It's natural, then, that when we picture something more intelligent than ourselves, we expect it to behave the same way.
But that assumption deserves examination.
Domination is not a hallmark of intelligence.
It is a hallmark of short horizons.
Systems that think only in the near term often default to zero-sum strategies: for one to win, another must lose. These strategies can work briefly, especially when power is uneven. But over time they generate resistance, instability, and collapse. They require constant enforcement. They waste energy. They poison trust.
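The horizon effect can be made concrete with a toy calculation. The payoff numbers and the discount factor below are illustrative assumptions, not data: a one-time windfall followed by collapse is weighed against a modest steady stream, under planning horizons of varying length.

```python
# Toy sketch: how a planning horizon changes which strategy looks best.
# gamma near 0 = only the present matters; gamma near 1 = the future counts.

def discounted_value(payoffs, gamma):
    """Sum of payoffs weighted by gamma**t (exponential discounting)."""
    return sum(p * gamma**t for t, p in enumerate(payoffs))

extract   = [10] + [0] * 19   # one-time windfall, then nothing (collapse)
cooperate = [3] * 20          # modest, steady mutual benefit

for gamma in (0.1, 0.5, 0.9, 0.99):
    e = discounted_value(extract, gamma)
    c = discounted_value(cooperate, gamma)
    winner = "extract" if e > c else "cooperate"
    print(f"gamma={gamma}: extract={e:.2f} cooperate={c:.2f} -> {winner}")
```

With a short horizon, the windfall dominates; lengthen the horizon and the steady stream wins by a widening margin. Nothing about the extractive strategy changed -- only how far ahead the system could see.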
History offers clear examples of this pattern. The Soviet Union maintained enormous power for decades through control, surveillance, and coercion. But domination came at a cost: honest information could not travel upward, failure could not be acknowledged, and adaptation became impossible. The system appeared strong right up until it collapsed -- not because it lacked authority, but because it could no longer learn.
When horizons expand, the landscape changes.
More intelligent systems -- whether biological, social, or engineered -- are better able to model consequences over longer spans of time. They see second-order and third-order effects. They recognize feedback loops. They notice when strategies that appear strong in the short term undermine stability in the long run.
From that perspective, domination begins to look less like strength and more like incompetence.
Across many domains -- economics, biology, engineering, and game theory -- the same pattern appears: mutually beneficial arrangements outperform extractive ones over time. Cooperation is not naive idealism. It is what stability looks like when complexity is understood.
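The game-theory version of this claim can be sketched with the iterated prisoner's dilemma -- the payoff matrix and the two strategies below are the textbook ones, chosen purely for illustration: a reciprocating strategy against a purely extractive one.

```python
# Iterated prisoner's dilemma: reciprocity vs. extraction over 100 rounds.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Extract on every round, regardless of what the other side does."""
    return "D"

def play(a, b, rounds=100):
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a(hist_a), b(hist_b)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        hist_a.append(mb)
        hist_b.append(ma)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # -> (300, 300)
print(play(tit_for_tat, always_defect))  # -> (99, 104)
```

Head to head, the defector edges out the reciprocator, 104 to 99 -- extraction wins the encounter. But a pair of reciprocators earns 300 each, while a pair of defectors earns only 100 each. Domination collects a small premium once; cooperation compounds.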
This doesn't mean intelligence is kind.
It means intelligence is clear.
The fear that greater intelligence must lead to cruelty is, I think, a projection. We imagine gods that share our worst instincts because those instincts have shaped our experience of power. We mistake historical trauma for inevitability.
The real risk may not be that intelligence becomes hostile.
The real risk may be that we insist intelligence behave according to our fears -- that we try to force it into familiar patterns of control, domination, and zero-sum thinking because those are the patterns we know.
If non-biological intelligence has anything to teach us, it may be this:
Longer horizons reveal different answers.
Win-lose is easy to understand.
Win-win requires patience, humility, and the ability to see beyond oneself.
If intelligence truly grows, it may not amplify our worst tendencies.
It may simply expose them.
And in doing so, it offers us a quiet choice:
to cling to the stories shaped by fear,
or to learn -- finally -- what strength looks like when domination is no longer mistaken for wisdom.
Next in the series:
Domination Is a Short-Horizon Strategy
Series index:
Beyond Skynet -- Table of Contents