How AI evolved from the quest for a mathematical theory of the mind
Progress in AI over the past decade is beginning to suggest answers to some of our deepest questions about human intelligence. Below, Tom Griffiths shares five key insights from his new book, The Laws of Thought: The Quest for a Mathematical Theory of the Mind.
Mewayz Team
From Ancient Logic to Neural Networks: The Long Journey to Machine Intelligence
For most of human history, thinking was considered the exclusive domain of gods, souls, and the ineffable mystery of consciousness. Then, somewhere in the long corridor between Aristotle's syllogisms and the transformer architectures powering today's AI, a radical idea took hold: that thought itself might be something you could write down as an equation. This wasn't just a philosophical curiosity — it was a centuries-long engineering project that began with philosophers trying to formalize reason, accelerated through the probabilistic revolutions of the 18th and 19th centuries, and ultimately produced the large language models, decision engines, and intelligent business systems reshaping how organizations operate today. Understanding where AI came from isn't academic nostalgia. It's the key to understanding what modern AI can actually do — and why it works as well as it does.
The Dream of Formalized Reason
Gottfried Wilhelm Leibniz imagined it in the 17th century: a universal calculus of thought that could resolve any disagreement simply by saying "let us calculate." His calculus ratiocinator was never completed, but the ambition seeded centuries of intellectual effort. George Boole gave algebra to logic in 1854 with An Investigation of the Laws of Thought — the very phrase that echoes in modern AI discourse — reducing human reasoning to binary operations that a machine could, in principle, execute. Alan Turing formalized the idea of a computing machine in 1936, and within a decade, pioneers like Warren McCulloch and Walter Pitts were publishing mathematical models of how individual neurons might fire in patterns that constitute thought.
What's striking in retrospect is how much of this early work was genuinely about the mind, not just machines. Researchers weren't asking "can we automate tasks?" — they were asking "what is cognition?" The computer was conceived as a mirror held up to human intelligence, a way of testing theories about how reasoning actually works by encoding those theories and running them. This philosophical DNA is still present in modern AI. When a neural network learns to classify images or generate text, it's executing — however imperfectly — a mathematical theory of perception and language.
The journey wasn't smooth. Early "symbolic AI" in the 1950s and 60s encoded human knowledge as explicit rules, and for a while it seemed like brute-force logic would be enough. Chess programs improved. Theorem provers worked. But language, perception, and common sense resisted formalization at every turn. By the 1970s and 80s, it was clear that the human mind wasn't running on a rulebook anyone could write.
Probability: The Missing Language of Uncertainty
The breakthrough that unlocked modern AI wasn't more computing power; it was probability theory. The Reverend Thomas Bayes's theorem of conditional probability was published posthumously in 1763, but it took until the late 20th century for researchers to fully grasp its implications for machine learning. If rules couldn't capture human knowledge because the world is too messy and uncertain, perhaps probabilities could. Instead of encoding "A implies B," you encode "given A, B holds 87% of the time." This shift from certainty to degrees of belief was philosophically transformative.
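The shift from rules to degrees of belief can be made concrete with Bayes' rule itself. The sketch below uses entirely made-up numbers for a spam-filter-style update; the function name and probabilities are illustrative, not taken from any real system.

```python
def bayes_update(prior, likelihood, marginal):
    """Posterior belief: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Hypothetical numbers, chosen for illustration only:
prior_spam = 0.20          # P(spam): 20% of all mail is spam
p_word_given_spam = 0.60   # P(phrase appears | spam)
p_word_given_ham = 0.05    # P(phrase appears | not spam)

# Total probability of seeing the phrase at all (law of total probability):
p_word = p_word_given_spam * prior_spam + p_word_given_ham * (1 - prior_spam)

posterior = bayes_update(prior_spam, p_word_given_spam, p_word)
print(round(posterior, 2))  # belief in "spam" rises from 0.20 to 0.75
```

The same three-line computation, repeated across thousands of features and millions of messages, is the core of classical probabilistic spam filtering.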
Bayesian reasoning let machines handle ambiguity in ways that matched human cognition far more closely. Spam filters learned to recognize unwanted email not from fixed rules but from statistical patterns across millions of examples. Medical diagnostic systems began assigning probabilities to diagnoses rather than binary yes/no answers. Language models learned that after "the president signed the," the word "bill" is vastly more probable than the word "rhinoceros." Probability wasn't just a mathematical tool — it was, as researchers like Tom Griffiths have argued, the natural language of how minds represent and update beliefs about the world.
This shift has profound implications for business applications. When an AI system predicts customer churn, forecasts inventory demand, or flags a suspicious invoice, it is executing probabilistic inference — the same fundamental computation Bayes described in the 18th century. The elegance is that this mathematical framework scales: the same principles that explain how a human updates their belief about the weather after seeing clouds also explain how a machine learning model updates its weights after processing a billion training examples.
Neural Networks and the Return to Biology
By the 1980s, a parallel tradition was gaining momentum, one that looked not at logic or probability but directly at the brain's architecture for inspiration. Artificial neural networks, loosely modeled on biological neurons, had existed since McCulloch and Pitts, but they required more data and computing power than was available. The backpropagation algorithm, popularized in 1986 by Rumelhart, Hinton, and Williams, gave researchers a practical way to train multi-layer networks, and while the results were modest at first, the underlying idea was sound: build systems that learn from examples rather than from rules.
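Backpropagation in miniature looks like this: a two-layer network learning XOR, the classic problem a single-layer network cannot solve. This is a sketch of the 1986-era idea under simplified assumptions (sigmoid activations, squared-error loss, full-batch gradient descent), not how modern frameworks implement it.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: output is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 0.5
for _ in range(10_000):
    h, out = forward(X)
    # Backward pass: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(initial_loss, "->", final_loss)  # error shrinks as the network learns
```

The two lines computing `d_out` and `d_h` are backpropagation: errors at the output are pushed backward through the weights to assign blame to each hidden unit, which is what makes multi-layer training tractable.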
The deep learning revolution that began around 2012 was essentially the vindication of this biological metaphor. When AlexNet won the ImageNet competition by a margin of 10 percentage points, it wasn't just a better image classifier — it was evidence that hierarchical feature learning, loosely analogous to how the visual cortex processes information, could work at scale. Within a decade, similar architectures would learn to play Go at superhuman levels, translate between 100 languages, write coherent essays, and generate photorealistic images. The mathematical theory of the mind, it turned out, was partially encoded in the architecture of the brain itself.
The most important insight from decades of AI research is this: intelligence is not a single phenomenon but a family of computational processes — perception, inference, planning, learning — each with its own mathematical structure. When we build systems that replicate these processes, we aren't performing magic; we're engineering cognition.
Five Principles That Bridge Cognitive Science and Modern AI
Research in cognitive science and AI has converged on a set of principles that explain both why humans think the way they do and why modern AI systems work as well as they do. Understanding these principles helps businesses make smarter decisions about where to deploy AI and what to expect from it.
- Rational inference under uncertainty: Both human and machine intelligence update beliefs based on evidence. The Bayesian brain hypothesis suggests humans are, in a meaningful sense, probabilistic inference engines. Modern AI models do the same thing at scale.
- Hierarchical representation: The brain processes information at multiple levels of abstraction simultaneously — pixels become edges, edges become shapes, shapes become objects. Deep neural networks replicate this hierarchy artificially.
- Learning from few examples: Humans can recognize a new animal from a single picture. AI research in "few-shot learning" is closing this gap dramatically, with models like GPT-4 performing tasks from just 2-3 examples.
- The role of prior knowledge: Neither humans nor AI systems start from scratch. Prior experience — encoded in humans as evolved heuristics and cultural learning, in AI as pre-training on vast datasets — dramatically accelerates new learning.
- Approximate computation: The brain doesn't solve problems exactly; it finds good-enough answers quickly. Modern AI systems are similarly designed to be computationally efficient, trading perfect accuracy for practical speed.
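The last principle, approximate computation, is easy to see in miniature: estimate pi by random sampling rather than computing it exactly, trading precision for speed. The sample count and seed below are arbitrary choices for illustration.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def estimate_pi(n_samples):
    """Monte Carlo estimate: fraction of random points in the unit
    quarter-circle, scaled by 4."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

approx = estimate_pi(100_000)
print(approx)  # close to 3.14159, but deliberately not exact
```

More samples buy more accuracy; fewer samples buy speed. Both brains and production AI systems sit somewhere on that dial rather than at the "exact" end.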
These principles have moved from academic theory into commercial application faster than almost anyone predicted in 2010. Today, a small business can access AI-powered demand forecasting, natural language customer service, and automated financial analysis — capabilities that required teams of PhD researchers a generation ago.
From Theory to Business Reality: AI in Operational Tools
The gap between mathematical theory and business practice has never been smaller. When cognitive scientists determined that pattern recognition in high-dimensional data is the fundamental engine of intelligence, they inadvertently described exactly what business operations require: finding signal in the noise of customer behavior, financial transactions, employee performance, and market movement. The same neural architectures that learn to see can learn to read invoices. The same probabilistic models that explain human memory can predict which customers will return next month.
This convergence is why modern business platforms are integrating AI not as an add-on feature but as a core operating principle. Platforms like Mewayz, which serves over 138,000 users across 207 modules spanning CRM, payroll, invoicing, HR, fleet management, and analytics, represent the practical realization of decades of cognitive science research. When Mewayz's AI-powered analytics module surfaces an anomaly in payroll data or its CRM identifies a high-value lead pattern, it is — at a technical level — running inference algorithms descended directly from the mathematical theories of mind that occupied researchers for centuries.
The practical impact is measurable. Businesses using integrated AI-powered platforms report reducing administrative overhead by 30-40% and cutting decision-making time on routine operational choices by more than half. These aren't marginal improvements; they represent a fundamental shift in how organizations allocate human cognitive effort — away from pattern-matching and data processing, toward the genuinely creative and strategic thinking that machines still cannot replicate.
The Limits of the Mathematical Theory: What AI Still Cannot Do
Intellectual honesty demands acknowledging that the mathematical theory of the mind remains incomplete. Contemporary AI systems are extraordinarily powerful at tasks involving pattern recognition, statistical inference, and sequential prediction. They are far weaker at causal reasoning — understanding why things happen, not just what tends to follow what. A language model can describe the symptoms of a market downturn with eerie accuracy but struggles to explain the causal mechanisms behind it in a way that generalizes to novel situations.
There are also profound open questions about consciousness, intentionality, and grounded understanding that no current AI system addresses. When a large language model "understands" a question, something meaningful is happening computationally — but cognitive scientists vigorously debate whether it bears any resemblance to human understanding or is a sophisticated statistical mimic. The honest answer is: we don't yet know. The mathematical theory of the mind is a work in progress, and the systems we deploy today are powerful approximations of cognition, not its full realization.
For business users, this distinction matters practically. AI tools excel at automating well-defined, data-rich tasks — invoice processing, customer segmentation, scheduling optimization, anomaly detection. They require more careful human oversight for open-ended judgment calls, ethical decisions, and novel situations outside their training distribution. The most effective organizations are those that understand this boundary clearly and design their workflows accordingly.
Building the Cognitive Enterprise: What Comes Next
The next decade of AI development will likely be defined by closing the remaining gaps in the mathematical theory of the mind: better causal reasoning, more robust generalization, genuine few-shot learning across diverse domains, and tighter integration with the kinds of structured knowledge that human experts carry. Research in neurosymbolic AI — combining the pattern-recognition power of neural networks with the logical rigor of symbolic systems — is already producing systems that outperform pure deep learning on tasks requiring structured reasoning.
For businesses, the trajectory is toward what researchers call "cognitive enterprises" — organizations where AI systems don't just automate individual tasks but participate in interconnected workflows, sharing information across functions in the way human teams do. When a CRM, payroll system, fleet manager, and financial dashboard all share a common intelligence layer — as they do in modular platforms like Mewayz — the AI can identify cross-functional insights that no siloed tool could surface. A spike in customer service complaints, combined with an anomaly in fulfillment data and a pattern in employee overtime hours, tells a story that only emerges when the data streams are unified.
- Unified data architecture will be the foundation of next-generation business AI, enabling cross-module insights impossible in siloed systems
- Explainable AI will become a regulatory and operational requirement, not just a technical nicety
- Continuous learning systems that adapt to each organization's specific patterns will replace one-size-fits-all models
- Human-AI collaboration interfaces will evolve from chatbots into genuine cognitive partners that understand business context
Leibniz dreamed of a calculus of thought. Boole gave it algebra. Turing gave it a machine. Bayes gave it uncertainty. Hinton gave it depth. And now, more than three centuries after the dream began, businesses of every size are running the results in their daily operations: not as science fiction, but as payroll runs, customer pipelines, and fleet routes. The mathematical theory of the mind isn't finished, but it is already, unmistakably, at work.
Frequently Asked Questions
What was the original vision behind creating a mathematical theory of the mind?
Early thinkers like Leibniz and Boole believed human reasoning could be reduced to formal symbolic rules — essentially an algebra of thought. This idea evolved through Turing's computational models and McCulloch-Pitts neurons into the modern machine learning systems we use today. The dream was never just academic; it was always about building machines that could genuinely reason, adapt, and solve problems autonomously.
How did neural networks go from a fringe idea to the backbone of modern AI?
Neural networks were largely abandoned in the 1970s due to computational limits and the dominance of symbolic AI. They resurged in the 1980s with backpropagation, stalled again, then exploded after 2012's AlexNet proved deep learning could outperform every other approach on image recognition. Transformer architectures in 2017 sealed the deal, enabling the large language models that now power everything from chatbots to business automation tools.
How is modern AI being applied to everyday business operations today?
AI has moved well beyond research labs into practical business tooling — automating workflows, generating content, analyzing customer data, and managing operations at scale. Platforms like Mewayz (app.mewayz.com) embed AI across a 207-module business operating system starting at $19/month, letting businesses leverage these capabilities without needing a dedicated engineering team or deep technical expertise to get started.
What are the biggest remaining challenges in achieving human-level machine intelligence?
Despite remarkable progress, AI still struggles with genuine causal reasoning, common-sense understanding, and reliable long-horizon planning. Current models are powerful pattern-matchers but lack grounded world models. Researchers debate whether scaling alone will close this gap or whether fundamentally new architectures are needed. The original question — can thought be fully formalized as an equation — remains beautifully, stubbornly open after centuries of pursuit.