From the May–June 2026 issue, Volume 114, Number 3, page 184.
THE LAWS OF THOUGHT: The Quest for a Mathematical Theory of the Mind. Tom Griffiths. 400 pp. Henry Holt, 2026. $31.99.
I discovered cognitive science almost by accident, after years of meandering through other fields that posed huge questions, including cosmology (“How did the universe come to be?”) and evolutionary biology (“How did life as we know it emerge?”). Cognitive science, I came to realize, was where all these questions converged—in the study of the inquiring mind itself. But cognitive science remains a field that most people never encounter.
Tom Griffiths, who directs both the Computational Cognitive Science Lab and the Laboratory for Artificial Intelligence at Princeton University, wants to change that. In The Laws of Thought: The Quest for a Mathematical Theory of the Mind, he aims to make his work on understanding the human mind accessible to anyone. Most of us leave school knowing something about the laws of nature: that what goes up must come down, that every action has an equal and opposite reaction. Griffiths makes a compelling argument that the workings of the human mind are governed by mathematical principles as fundamental as those behind gravity and motion—and that “in the twenty-first century, knowing the Laws of Thought is just as important to scientific literacy as knowing the Laws of Nature.”
The book traces the search for those principles from the early Enlightenment through the contemporary age of AI and argues that progress has been propelled through the development of three major mathematical frameworks for understanding the mind, each capturing something different about what it means to think: rules and symbols, neural networks, and probability theory.
The first framework formulates the laws of thought as a set of rules, expressed in symbolic terms. Griffiths traces this idea back centuries, but it became tangible in the 1950s when Herbert Simon, a political scientist by training, and Allen Newell, a computer scientist, wrote a program called the Logic Theorist, which proved theorems automatically by following the rules of formal logic. Before the program ever ran on a machine, Simon tested the idea on his family. He gathered his wife, his three children, and some graduate students in a room, then gave each a card with part of the program and had them work through proofs by following the rules step-by-step. It worked. The experience emboldened Simon and Newell to propose that a system of rules and symbols “has the necessary and sufficient means for general intelligent action.” If true, all of human thought could, in principle, be simulated. As Griffiths puts it, “The claims of necessity and sufficiency were [to be] tested through the fields of cognitive science and AI, respectively. As cognitive scientists build models of human cognition using systems of rules and symbols instantiated on digital computers, we get to find out exactly how much of our intelligence can be captured in this way.”
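The flavor of such a rule-and-symbol system can be conveyed with a toy example. The sketch below is illustrative only, not the Logic Theorist itself: it derives new symbolic facts by repeatedly applying rules of the form "if P, then Q," the way Simon's family worked through proofs step-by-step. The fact and rule names are invented for the example.

```python
# A toy rule-and-symbol system in the spirit of (but far simpler than)
# the Logic Theorist: derive new facts by repeatedly applying rules of
# the form (premise, conclusion) until nothing new can be inferred.

def forward_chain(facts, rules):
    """Apply (premise, conclusion) rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical facts and rules, purely for illustration.
facts = {"socrates_is_human"}
rules = [("socrates_is_human", "socrates_is_mortal")]
derived = forward_chain(facts, rules)
```

Each step here is mechanical, which is exactly the point: nothing in the procedure requires a human in the loop once the rules are written down.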
The second framework, one based on neural networks, proposes that the laws of thought cannot be written down as rules, but instead emerge from the behavior of interconnected neurons, the cells that are the basic working components of the brain. If you model those neurons and the connections between them, and allow those connections to strengthen or weaken through experience, you arrive at a system that can learn without anyone having to spell out the rules. In Griffiths’s words, “What if, rather than trying to copy the mathematician’s thoughts and actions, we instead try to copy her brain?”
Frank Rosenblatt, a psychologist working in the 1950s, attempted to do just that. He created a device called the Perceptron that learned to classify visual images by adjusting its own connections. When computer scientists soon exposed the limits of those early neural networks, the field seemed to move on. But psychologists David Rumelhart and Jay McClelland, among others, continued to develop more complex and capable neural network architectures, laying the groundwork for the modern neural network–based systems that are generating so much excitement today.
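The learning idea behind Rosenblatt's device can be sketched in a few lines. This is a minimal modern rendering of the perceptron learning rule, not his original hardware or code, and the training data (the logical AND function, which is linearly separable) is chosen only for illustration.

```python
# Minimal sketch of perceptron learning: a single artificial neuron
# adjusts its connection weights in response to its own errors,
# without anyone spelling out classification rules in advance.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for binary classification."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Fire (output 1) if the weighted sum exceeds the threshold.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Strengthen or weaken connections in proportion to the error.
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The limits that computer scientists exposed concern exactly this single-layer architecture: functions that are not linearly separable, such as exclusive-or, are beyond it, which is part of what motivated the later, more complex architectures.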
The third framework, grounded in probability theory, approaches thinking as a problem of reasoning under uncertainty. Whereas the first two frameworks offer accounts of how the mind represents and processes information, the probabilistic framework asks a different question: How should an individual decide what to believe, given incomplete and ambiguous evidence? Griffiths argues that this situation is one we face constantly, whether we are learning a language, recognizing a face, or deciding whether a mushroom is safe to eat. What makes probability theory so powerful as a tool for studying the mind is that it can be used not only to describe human reasoning, but to derive general principles of cognition. As Griffiths writes, “The key was to stop focusing on human minds and instead ask about principles of intelligence that might apply to any mind, anywhere in the universe.”
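The mushroom example can be made concrete with Bayes's rule, the workhorse of the probabilistic framework. The numbers below are invented purely for illustration.

```python
# A toy Bayesian update: revise belief in a hypothesis after seeing
# evidence. The probabilities here are made up for the example.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis given one piece of evidence."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Hypothesis: this mushroom is poisonous. Suppose (hypothetically) that
# 10% of local mushrooms are poisonous, that 70% of poisonous ones have
# red caps, and that only 20% of safe ones do. Seeing a red cap yields:
posterior = bayes_update(prior=0.10,
                         likelihood_if_true=0.70,
                         likelihood_if_false=0.20)
# The posterior rises well above the 10% prior, but the mushroom is
# still more likely safe than not.
```

The appeal of this framework is that the update rule is not specific to mushrooms, faces, or language: it is a general prescription for what any reasoner should believe given the evidence.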
One of the most striking examples of such general principles is what Griffiths describes as psychology’s first universal law: Roger Shepard’s 1987 demonstration that the probability of generalizing from one object to another decreases exponentially with the distance between them in psychological space, a result that holds for humans, rats, and pigeons alike.
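Shepard's law has a strikingly simple mathematical form. The sketch below shows the exponential decay at its core; the scale parameter, which sets how quickly generalization falls off, is a free parameter chosen here for illustration.

```python
import math

# Sketch of Shepard's universal law of generalization: the probability
# of extending a response from one stimulus to another decays
# exponentially with their distance in psychological space.

def generalization(distance, scale=1.0):
    """Exponential decay of generalization with psychological distance."""
    return math.exp(-distance / scale)

# Identical stimuli generalize perfectly; distant ones barely at all.
same = generalization(0.0)      # 1.0
near = generalization(1.0)
far = generalization(3.0)
```

The remarkable empirical finding is that this one curve, with an appropriately measured notion of distance, fits generalization data across species.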
Griffiths shows readers that the AI technologies that now dominate our attention, such as chatbots, image generators, and agentic code generators, ultimately have their origins in the quest to answer scientific questions about the human mind. His book arrives at a moment when tech companies and the architects of AI systems command the spotlight. But Griffiths redirects our attention to the psychologists and neuroscientists who share responsibility for the foundations on which these technologies were built, and he reminds us that we are all responsible for ensuring that the science of the mind doesn’t get lost in the rush to build smarter machines.
The way we understand thought does not have to rely on only one framework; instead, we can acknowledge that all three frameworks have their place in understanding how we think. Indeed, a great pleasure of the book is seeing how often breakthroughs came from researchers working together across disciplines. Cognitive science, as Griffiths tells it, has never been the work of any single field. The Laws of Thought is a beautiful tribute to that collaborative ethos and an invitation to the next generation to join this enterprise.