Category Theory, Abstraction and Computer Science
I am by training a computer scientist. One of the things I find most fascinating about computer science is its reliance on layers of abstraction. Much of what we do with computers is built upon carefully hidden complexity. When you run a word processor, for instance, you simply tap an icon, and a window appears. You interact with menus, buttons, and icons, rarely stopping to consider what lies beneath.
But behind that user interface are pixels—just colored dots on a screen—that visually represent program logic. That logic is itself encoded in software: programs that appear to us as applications, but are in fact binary sequences of 0s and 1s. These sequences are not arbitrary; they carry meaning specific to the architecture of the computer on which they run. Whether it’s a laptop, a smartphone, or a server, each interprets that binary differently, but all follow the same principle: abstraction.
Even at this level, abstraction has already begun. We often forget that computers are fundamentally electronic machines. They operate on flows of electric current, with binary digits representing its ON and OFF states. These states correspond to the physical behavior of transistors—tiny switches that form the foundation of all computing hardware. A computer program, then, is ultimately just a highly structured set of instructions that controls the ON/OFF patterns of billions of switches, based on inputs like keyboard strokes and mouse clicks.
But this raises a deeper question: How can such electronic switching give rise to something as sophisticated as a word processor or even a programming language?
To answer that, we must go back almost a century, to a time when mathematicians began asking a profound question:
Can all problems be solved algorithmically?
Surprisingly, the answer is no.
But while the answer itself is sobering, the journey that led to this conclusion is full of rich ideas—and it is here that abstraction reaches one of its highest forms.
Formalization of Mathematics
The story begins in the early twentieth century, a time when many mathematicians believed they could formalize all of mathematics. But what does formalization actually mean?
Think back to high school geometry—specifically, Euclidean geometry. You might recall that this entire system is built upon just a few basic assumptions, known as Euclid’s postulates. There are five of them:
- A straight line segment can be drawn joining any two points.
- A straight line segment can be extended indefinitely in both directions.
- A circle can be drawn with any center and any radius.
- All right angles are equal to one another.
- If two lines are drawn such that a third line crossing them forms interior angles on the same side that sum to less than two right angles, then the two lines will eventually intersect on that side (this is the parallel postulate).

From these five seemingly simple rules, Euclidean geometry unfolds. Every theorem, every construction in this system follows logically from these axioms.
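To give a flavor of what "follows logically from these axioms" looks like once it is made fully formal, here is a toy sketch in Lean. The names Point, Line, lies_on, and line_through are stand-ins of my own, not Euclid's actual axiom system; the point is simply that a postulate is declared and a statement is then derived from it by deduction alone.

```lean
-- Toy stand-ins for geometric primitives (not Euclid's real axioms).
axiom Point : Type
axiom Line  : Type
axiom lies_on : Point → Line → Prop

-- A postulate in the spirit of Euclid's first:
-- through any two points there is a line containing both.
axiom line_through : ∀ p q : Point, ∃ l : Line, lies_on p l ∧ lies_on q l

-- A (deliberately trivial) theorem derived from that postulate alone.
theorem some_line_through (p q : Point) : ∃ l : Line, lies_on p l :=
  (line_through p q).elim (fun l h => ⟨l, h.left⟩)
```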
This is the power of formalization: you start with a small set of precise, foundational rules, and then derive everything else through logical deduction. So naturally, mathematicians wondered—can we do the same for all of mathematics?
That’s what Bertrand Russell and Alfred North Whitehead tried to do in their massive work Principia Mathematica. Their goal was to find a complete, formal foundation for all of mathematics built from logic alone. The title of their work was a deliberate echo of Newton’s Principia, a nod to their desire to give mathematics a similarly rigorous grounding.
But this grand vision was soon interrupted.
A young mathematician named Kurt Gödel published a result in 1931 that shook the foundations of this pursuit: the incompleteness theorems. Gödel proved that in any consistent formal system expressive enough to include arithmetic, there exist true mathematical statements that cannot be proven from within that system.
In other words, some truths can't be reached through logic alone. This result showed that no such system can be both complete and consistent.
It was a turning point. The idea of reducing all mathematical truth to logic suddenly seemed not just difficult, but impossible.
For a vivid picture of this chapter of intellectual history, the graphic novel Logicomix offers a beautifully illustrated account of Gödel's work and its impact. His theorems didn't just challenge mathematical certainty; they revealed the limits of abstraction and logic themselves.
But at the same time, these limits opened new doors—leading to new ways of thinking about computation, formal systems, and abstraction. One path led to the birth of computer science. Another, to the rise of category theory. Interestingly, the two are not far apart, as we will see in this writeup.
The Abstraction of Computation
Let’s now turn our attention to computer science.
To be honest, I’ve always had an issue with the name “computer science.” As the great computer scientist Edsger W. Dijkstra once said,
“Computer science is no more about computers than astronomy is about telescopes.” That quote resonates deeply with me.
For a long time, the term “computer” didn’t refer to a machine. It referred to a person — someone who performed computations. These human computers worked with pencils, erasers, and sheets of paper, often performing tedious arithmetic for scientific, military, or commercial purposes. Their work mostly involved long multiplications, divisions, and logarithmic calculations.
But if you step back and observe what they were doing, you’ll notice something remarkably abstract: they were manipulating symbols on paper.
They followed fixed mathematical rules or algorithms. They performed operations step by step, often erasing and rewriting as needed. Occasionally they kept intermediate values in their heads, but because human working memory is limited, most of the computation was offloaded to paper. The paper, in effect, was their working memory.
Let’s ground this idea in a simple example, say, multiplying 123 by 145. You could approach this task in various ways, but let’s use the method I learned in school:
- Multiply 123 by 5 → 615
- Multiply 123 by 40 (which is 4 × 10) → 4,920
- Multiply 123 by 100 → 12,300
- Add all the results: 615 + 4,920 + 12,300 = 17,835
Now, unless you have an exceptional memory, you probably wouldn’t keep all those intermediate results in your head. You’d write them down. You’d perform one step, record it, and move to the next, all while following a repeatable process.
This is the essence of computation.
What matters here is not the specific problem, but the structure of the process itself: a finite set of rules, a medium for recording state (like paper), and a series of steps leading to a result.
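As a small illustration of that structure, here is a sketch in Python (the function name and layout are mine, purely for illustration). The list named paper plays the role of the sheet of paper: every intermediate result is written down rather than held in memory, and the final answer is just the sum of what was recorded.

```python
def schoolbook_multiply(a: int, b: int) -> int:
    """Multiply a and b the way a human computer would on paper."""
    paper = []  # the "paper": a written record of every intermediate result

    # Take the digits of b from least to most significant, as in the schoolbook method.
    for position, digit in enumerate(int(d) for d in reversed(str(b))):
        partial = a * digit * (10 ** position)  # e.g. 123 * 4 * 10 = 4,920
        paper.append(partial)                   # write the step down

    return sum(paper)                           # add up everything on the paper


print(schoolbook_multiply(123, 145))  # 17835, matching the steps above
```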
Now imagine trying to formalize this entire idea. Not just a single algorithm, but the very nature of following an algorithm — of mechanical reasoning. How do we describe that mathematically?
This is the question that Alan Turing set out to answer.
In the 1930s, as part of his attempt to address a famous problem posed by David Hilbert (the Entscheidungsproblem, or “decision problem”), Turing proposed an abstract mathematical model of computation. It had:
- A tape (like the paper),
- A read/write head (like the pencil),
- A finite set of states (like memory),
- And a fixed set of rules (the algorithm).
He called it the Turing machine.
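To make those four ingredients concrete, here is a minimal sketch of a Turing machine simulator in Python. The rule format and the example machine are my own choices for illustration, not Turing's original notation: a rule maps the current state and the symbol under the head to a symbol to write, a direction to move, and the next state.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state), where
    move is -1 (left) or +1 (right). The machine halts when no rule
    applies to the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))  # the tape: an unbounded row of cells
    head = 0                       # the read/write head

    for _ in range(max_steps):     # a step budget, so the sketch always stops
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol   # write
        head += move               # move the head

    return "".join(cells[i] for i in sorted(cells))


# A tiny example machine: walk right, flipping 0s and 1s, and halt at the first blank.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}

print(run_turing_machine(flip_rules, "01101"))  # -> 10010
```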
What made Turing’s model so powerful was its generality. It didn’t matter what you were computing (multiplication, sorting, logic): the model could simulate any algorithmic process. It was an abstraction of computation itself.
This is where we see abstraction in its highest form. Turing stripped away all the specifics (the numbers, the symbols, the tools) and gave us a formal framework for what it means to compute. He wasn’t interested in computers, per se, but in the very idea of computability.
To be continued...