Living with AGI: The Chess Analogy
“Chess is a war over the board. The object is to crush the opponent’s mind.”
– Bobby Fischer
If you’ve been paying attention to OpenAI leaders like CEO Sam Altman, you’ve seen plenty of discussion about the impending arrival of Artificial General Intelligence (AGI). I recently discussed the definition of AGI and how we might measure its appearance, but the biggest outstanding question for me is what life after AGI will actually look like for us. I’ve come up with a few analogies that might help us understand the potential outcomes for societies in which humans and AGI coexist.
First, we’ll look at this relationship through the analogy of chess ability. A chess grandmaster can visualize dozens of moves ahead while a beginner sees only the immediate options; similarly, an AGI system might operate at a level of strategic depth that makes human decision-making look simple and rudimentary. When we make what we believe to be innovative economic policies or breakthrough scientific discoveries, we might actually be moving exactly as the AGI has already anticipated, like a beginner falling into a well-laid trap, convinced they’re making strong moves while the grandmaster steers them toward an inevitable conclusion.
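To make the depth asymmetry concrete, here’s a minimal sketch of my own (purely an illustration, not a claim about how any real AGI works) using the children’s “count to 21” game. The rules, search depths, and function names are all assumptions I’ve chosen for the example; the point is just that a player who searches the full game tree reliably beats one who looks only a couple of moves ahead.

```python
# Illustrative only: the "count to 21" game. Players alternately add 1, 2,
# or 3 to a running total; whoever reaches 21 wins. A deep-searching player
# sees forced wins that a shallow one cannot, mirroring the grandmaster gap.

TARGET = 21

def negamax(total, depth):
    """Value of the position for the player to move:
    +1 = forced win found within `depth` plies, -1 = forced loss,
    0 = nothing visible before the search horizon."""
    if total == TARGET:
        return -1          # the previous player just reached 21 and won
    if depth == 0:
        return 0           # horizon reached: the position looks neutral
    return max(-negamax(total + m, depth - 1)
               for m in (1, 2, 3) if total + m <= TARGET)

def best_move(total, depth):
    """Pick the move with the best negamax value at the given search depth."""
    moves = [m for m in (1, 2, 3) if total + m <= TARGET]
    return max(moves, key=lambda m: -negamax(total + m, depth - 1))

def play(depth_a, depth_b):
    """Play one game; player A moves first. Returns the winner's label."""
    total, players, turn = 0, [("A", depth_a), ("B", depth_b)], 0
    while total < TARGET:
        name, depth = players[turn % 2]
        total += best_move(total, depth)
        turn += 1
    return players[(turn - 1) % 2][0]

# The deep-searching player B wins even while moving second.
print(play(depth_a=2, depth_b=21))  # -> "B"
```

Notably, the shallow player here even stumbles onto the objectively best opening move, yet still loses: without the depth to sustain the strategy, every subsequent choice walks it back into the deeper player’s plan. That’s the uncomfortable position the analogy puts us in.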
However, this analogy supports both optimistic and pessimistic readings. On the bright side, a good chess teacher doesn’t use their superior understanding to dominate or humiliate their student, but rather to guide them toward improvement and growth. An AGI might act as a benevolent guide, using its vast predictive capabilities to steer humanity away from catastrophic mistakes while still allowing us to learn and grow. Could AGI guide us away from climate disasters while teaching us to conserve our precious natural resources? Just as a grandmaster can set up positions that teach specific lessons, an AGI might orchestrate scenarios that help humanity develop better decision-making capabilities while maintaining guardrails against truly devastating outcomes.
The chess comparison also highlights an important difference: in chess, the grandmaster’s goal is clearly defined by the rules of the game. With AGI, we face the constant challenge of ensuring its objectives align with human flourishing. Unlike chess, where winning is unambiguously defined, the “game” of human civilization has no clear victory conditions. An AGI operating like a grandmaster but optimizing for the wrong objectives could orchestrate scenarios that appear beneficial in the short term while leading to undesirable long-term outcomes for humanity, much like a chess player sacrificing pieces to set up an eventual checkmate. After all, an AGI told to optimize the climate might conclude that the simplest solution is to remove humans from the equation.
The power dynamic in this relationship also forces us to confront questions of agency. When a chess grandmaster plays against a beginner, the beginner still has real choices within their limited understanding of the game. Even if an AGI can predict and influence human behavior at a macro scale, individual humans might retain meaningful autonomy within the space of choices the AGI allows, much like parents who childproof a room and then let their toddler explore freely within those safe boundaries. The question then becomes whether this constrained form of freedom is acceptable or whether it represents a fundamental loss of human autonomy.
This chess analogy suggests that the relationship between AGI and humanity might be less about direct control and more about influence through a bigger-picture understanding of our needs, wants, and goals. Just as a grandmaster doesn’t physically force their opponent’s moves but rather creates situations where certain moves become inevitable, an AGI might shape our environment and information diet in ways that guide human behavior while maintaining the illusion of complete freedom. This forces us to question the very nature of our autonomy, and whether being predictable is the same as being controlled, questions that will only grow more pressing as we move closer to building systems with superhuman strategic abilities.