How you invent math: From counting to complex numbers
What is a number?
I don't mean "what does the symbol 3 look like" or "how do you count to ten." Where does the number three actually live? You can hold three apples, but you can't hold threeness. You can write the digit 3, but that's ink on paper. So what is it?
You can swap apples for stars or boxes, but the count stays the same. "Three" is whatever all groups-of-three have in common. But that's circular. What is the thing they have in common?
There are so many numbers: 1, 3/7, pi, 3.12. Where did these come from? Mathematics is people making up a rule and following it to see what happens.
We'll do that here. We start with nothing: no numbers, no arithmetic, no assumptions. By the end we'll have built the natural numbers, the integers, the rationals, the reals, and the complex numbers.
Counting by rule
We need a starting point. Say there is one thing, a starting object we'll call zero. Not zero the quantity you already know, but just a name for where we begin.
Now one rule. Given any object, you can produce the next one. Write this as a function S, where S(n) means "the successor of n." S(0) is the first object we generate, S(S(0)) is the second, and so on. There's no addition and no multiplication yet. All you can do is say "give me the next one."
That's all counting is. Keep applying the one rule. You don't need to know what 7 "means" to get there. Apply S seven times and you land on the seventh object.
This chain never stops. There's no largest natural number, because every number has a successor. Mathematicians call this infinite: the process of taking successors has no end. We write this unbounded behavior with symbols like ∞ in limits and extended settings. But ∞ itself is not a natural number. A finite value is any specific natural number you can reach by stopping at some point, like 7, 400, or a trillion. When we say something "goes to infinity," we mean it keeps increasing past every finite value, never arriving anywhere.
You can always go forward, but you can't go back. There's no predecessor of zero, and there's no shortcut. To reach 5 you have to pass through 1, 2, 3, and 4 first.
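The two rules so far, a starting object and "give me the next one," fit in a few lines of Python. This is a sketch, and the nested-tuple encoding is just one illustrative choice, not a canonical one:

```python
# A minimal sketch of the successor construction: each object is just
# "succ applied some number of times to ZERO". No arithmetic yet.
ZERO = ()                      # the starting object: only a name, not a quantity

def succ(n):
    return (n,)                # wrap the previous object: "the next one"

# Apply succ seven times starting from ZERO to reach the seventh object.
n = ZERO
for _ in range(7):
    n = succ(n)

def depth(obj):
    """Count how many times succ was applied (for display only)."""
    count = 0
    while obj != ZERO:
        obj = obj[0]
        count += 1
    return count

print(depth(n))  # 7
```

Note that `depth` is only there so we can look at the result; inside the system itself, "7" is nothing but the seventh object in the chain.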
But how do we know this chain is all there is? Our rules say "there's a starting point" and "every object has a successor." Nothing prevents a rogue cluster of objects (say A, B, C) forming their own loop, each one the successor of the previous, disconnected from zero. The successor rule would still be satisfied. We need something extra.
Induction. Picture a chain of dominoes. Show that domino 0 falls (the base case). Then show that whenever domino n falls, it knocks over domino n + 1 (the inductive step). If both hold, every domino in the chain falls, and no domino outside the chain can claim to be part of the system.
Once domino 0 falls, and each domino tips the next, the whole chain goes down. In the strongest version of this idea, the natural numbers are exactly the things you can reach from 0 by repeating successor. A rogue cluster disconnected from zero? This rule excludes it, because the chain from 0 never reaches it. (Logic side note: in some weaker rule systems, mathematicians can build versions with extra "numbers" beyond this chain.)
Mathematicians call this the natural numbers, written ℕ. A standard axiom list (axioms are just starting rules) for this is called the Peano axioms. The core ingredients are a starting point (zero), a rule (successor), and induction (nothing else sneaks in), plus two guardrails: 0 is not anyone's successor, and different numbers have different successors.
From counting to adding
Counting is fine, but we need arithmetic. Addition comes from S and two rules:

a + 0 = a
a + S(b) = S(a + b)

Adding zero changes nothing. Adding the successor of b is the same as taking the successor of a + b. It's recursive, meaning the rule calls itself on a smaller input until b hits 0.
To compute 2 + 3, the rules unwind: 2 + 3 = S(2 + 2), then S(S(2 + 1)), then S(S(S(2 + 0))) = S(S(S(2))) = 5. Each step peels one S off b and wraps one around the result, until the base case stops the recursion.
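The two addition rules translate directly into code. A sketch in Python, with ordinary ints and n + 1 standing in for the successor step:

```python
def succ(n):
    return n + 1               # stand-in for the successor function S

def add(a, b):
    # Rule 1: a + 0 = a (base case stops the recursion)
    if b == 0:
        return a
    # Rule 2: a + S(b) = S(a + b) — peel one off b, wrap succ around the result
    return succ(add(a, b - 1))

print(add(2, 3))  # 5
```

Each recursive call is one step of the unwinding described above.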
Multiplication follows the same recursive pattern, defined in terms of addition:

a × 0 = 0
a × S(b) = a × b + a

Multiplying by zero gives zero. Multiplying a by the successor of b means computing a × b and adding one more a. Recursive again, where each layer adds one copy of a. So 2 × 3 unwinds to 2 + 2 + 2.
Every operation reduces to repeated applications of simpler ones. Multiplication is repeated addition, and addition is repeated successor. The whole thing rests on the successor function.
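The whole reduction chain can be sketched in Python, with ints standing in for the successor chain: multiplication calls addition, and addition bottoms out in the successor step:

```python
def add(a, b):
    # addition as repeated successor: a + S(b) = S(a + b)
    return a if b == 0 else 1 + add(a, b - 1)

def mul(a, b):
    # Rule 1: a × 0 = 0
    if b == 0:
        return 0
    # Rule 2: a × S(b) = a × b + a — each layer adds one more copy of a
    return add(mul(a, b - 1), a)

print(mul(2, 3))  # 6, unwinding to 2 + 2 + 2
```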
Adding any two natural numbers always produces another natural number. Multiplying them does too. This property is called closure, meaning the naturals are closed under addition and multiplication. Every result stays inside the system. The next operation we try won't.
Subtraction breaks
The natural numbers feel complete. Add or multiply any two of them and you always get another natural number.
But addition raises a question. If a + b = c, can you recover a from c and b? "I had some apples, you gave me 3, now I have 8. How many did I start with?" That's subtraction.
5 minus 3 is 2, that works. 7 minus 7 is 0, still works. Try 3 minus 5.
In the natural numbers, 3 minus 5 has no answer. You can't go below zero, so the operation just fails. Subtraction breaks closure. You can set up a subtraction with two natural numbers, but the result might not be one.
Subtraction produces results our system can't represent. We could ban it when it fails, or expand the system so it always works.
How do you represent "below zero" when all you have are natural numbers? Represent every integer as a pair (a, b), where the pair means "a minus b."
So (3, 0) means 3. But (4, 1) also means 3, and so does (5, 2), and (103, 100). All these pairs represent the same integer.
This is an equivalence class, a collection of representations that all behave identically. The integer 3 is the whole collection of pairs whose difference is 3.
Defining integers as pairs is only useful if we can do arithmetic with them. Adding two pairs: (a, b) + (c, d) = (a + c, b + d). Combine the positives and the negatives separately. If different pairs representing the same integer produce the same answer, the definition is consistent.
This consistency isn't obvious. Different pairs represent the same integer, so arithmetic has to give the same result no matter which representative you pick. It does. Mathematicians call this well-definedness: the answer doesn't depend on which equivalent form you started with.
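The construction and its well-definedness check fit in a few lines of Python. A sketch, mirroring the pair encoding from the text:

```python
# Integers as pairs (a, b) of naturals, meaning "a minus b".
def same_int(p, q):
    # (a, b) and (c, d) represent the same integer exactly when a + d == c + b
    (a, b), (c, d) = p, q
    return a + d == c + b

def add_pairs(p, q):
    # combine the positives and the negatives separately
    (a, b), (c, d) = p, q
    return (a + c, b + d)

# (5, 2) and (8, 5) both represent the integer 3.
x, y = (5, 2), (8, 5)
assert same_int(x, y)

# Well-definedness: adding 1 to either representative gives equivalent answers.
assert same_int(add_pairs(x, (1, 0)), add_pairs(y, (1, 0)))

# 3 minus 5 as pairs: (3, 0) + (0, 5) = (3, 5), which represents −2.
assert same_int(add_pairs((3, 0), (0, 5)), (0, 2))
```

The last line is the payoff: a subtraction that failed in the naturals now has a representative inside the system.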
This gives us the integers, ℤ. Every natural number is still here (write it as (n, 0)), but now we also have negative numbers. Subtraction demanded they exist.
The number line extends left of zero, and the operations that broke before work.
3 minus 5 kicked us out of ℕ, but in ℤ it's just −2, a point on the number line two steps left of zero. Every subtraction lands somewhere.
Now that we have negative numbers, what happens when we multiply them? Negative times negative being positive isn't a convention. It follows from rules we already have. Multiplication distributes over addition: a × (b + c) = a × b + a × c. This is the distributive law. Since multiplication is repeated addition, a × (b + c) means "jump by a total of b + c times." Do all b + c jumps at once, or b first then c more, and you end up at the same spot either way.
Try applying it. (−1) × (1 + (−1)) = (−1) × 0 = 0. Distributing the left side gives (−1) × 1 + (−1) × (−1), which simplifies to −1 + (−1) × (−1), forcing (−1) × (−1) = 1. The distributive law leaves no room for a negative result.
Division breaks
The integers handle subtraction. Subtract any two integers and you stay inside ℤ. Closed under addition, subtraction, and multiplication. But try dividing.
6 divided by 3 is 2, that works. 10 divided by 2 is 5, still works. What about 1 divided by 3?
There's no integer equal to one-third. Division breaks closure the same way subtraction broke the naturals.
Same fix as before: pairs. This time (a, b) means "a divided by b." The top number is the numerator, the bottom is the denominator. These are the rational numbers.
Like integers, different pairs can represent the same value. 1/2 and 2/4 and 3/6 are all the same rational number.
The numerator and denominator grow, but the proportion stays constant. Each scaled version is a different pair, same equivalence class. The rational number one-half is the collection of pairs (n, 2n) for every positive integer n.
Arithmetic works like it did for integers. Adding two fractions: a/b + c/d = (ad + cb)/(bd). Cross-multiply the numerators, multiply the denominators.
Multiplying: a/b × c/d = (ac)/(bd). Straight across, numerator times numerator, denominator times denominator.
These are the fraction rules you already know, but we derived them from the pair construction instead of assuming them. And well-definedness holds: different representatives of the same rational give the same results.
To divide: a/b ÷ c/d = a/b × d/c. Flip the second fraction and multiply. Try 7 ÷ 4, the operation that had no integer answer: 7/1 ÷ 4/1 = 7/1 × 1/4 = 7/4.
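All four operations on pair-rationals can be sketched in Python, with the equivalence test doing the work of the equivalence class:

```python
# Rationals as pairs (a, b) of integers, b nonzero, meaning "a divided by b".
def same_rat(p, q):
    (a, b), (c, d) = p, q
    return a * d == c * b            # a/b equals c/d exactly when ad == cb

def add_rat(p, q):
    (a, b), (c, d) = p, q
    return (a * d + c * b, b * d)    # cross-multiply, multiply denominators

def mul_rat(p, q):
    (a, b), (c, d) = p, q
    return (a * c, b * d)            # straight across

def div_rat(p, q):
    (a, b), (c, d) = p, q
    return mul_rat((a, b), (d, c))   # flip the second fraction and multiply

# 1/2 and 2/4 are the same rational number.
assert same_rat((1, 2), (2, 4))

# 7 ÷ 4, the division with no integer answer, is just the pair (7, 4).
assert same_rat(div_rat((7, 1), (4, 1)), (7, 4))
```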
The rationals, ℚ, fix division. Add, subtract, multiply, or divide any two rationals (except by zero) and the result is always rational. Closed under all four operations. A system like this is called a field, a number system where these operations behave the usual way.
Pick any two rationals, say 1/3 and 1/2. Their average, 5/12, is rational and sits between them. Now pick 1/3 and 5/12. Their average is between those. You can keep going forever and never run out of rationals to wedge in.
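You can watch the wedging happen with Python's exact Fraction type, which never rounds:

```python
# Repeatedly average two rationals: the average is always rational and
# always lands strictly between them, so the process never runs dry.
from fractions import Fraction

lo, hi = Fraction(1, 3), Fraction(1, 2)
for _ in range(5):
    mid = (lo + hi) / 2      # exact rational average
    assert lo < mid < hi     # strictly between, every time
    hi = mid                 # zoom in and repeat

print(hi)  # 65/192
```

Five steps in and the gap is already tiny, but it never closes: between any two rationals there is always another one.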
Mathematicians call this density. Between any two rationals, there's always another one. They seem to fill the number line completely.
They don't.
The number line has holes
Every crisis so far came from reversing an operation. Subtraction reverses addition, division reverses multiplication. There's one more reversal. You can square a number (3² = 9), so what number times itself produces a given result? That's the square root. √9 = 3 because 3 × 3 = 9.
Every rational number can be written as a decimal. Divide numerator by denominator one digit at a time: take the remainder, bring down a zero (multiply by 10), see how many times the denominator fits. The quotient digit goes after the decimal point, repeat.
When the remainder hits zero, the decimal terminates. When a remainder you've already seen comes back, the same digits repeat forever. Try 1/4 (terminates), then 1/3 (repeats).
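The long-division procedure just described fits in a short Python sketch: track the remainders, and stop when one hits zero or comes back:

```python
# Generate decimal digits of a/b by long division. Terminate when the
# remainder is 0; detect repetition when a remainder reappears.
def decimal_digits(a, b, limit=20):
    digits, seen = [], set()
    r = a % b
    while r and r not in seen and len(digits) < limit:
        seen.add(r)
        r *= 10                   # bring down a zero
        digits.append(r // b)     # how many times the denominator fits
        r %= b                    # new remainder
    status = "terminates" if r == 0 else "repeats"
    return digits, status

print(decimal_digits(1, 4))  # ([2, 5], 'terminates')
print(decimal_digits(1, 3))  # ([3], 'repeats')
```

1/4 ends after two digits; 1/3 produces remainder 1 forever, so the digit 3 cycles without end.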
Some fractions terminate: 1/4 = 0.25, 1/2 = 0.5. Others repeat forever: 1/3 = 0.333..., where the long division produces remainder 1 every step, so digit 3 comes out every step, without end. That trailing "..." hides something important. You need infinitely many digits to pin down the value, and no finite string of them lands on it exactly. Fractions and decimals are two notations for the same values.
For perfect squares this works. √1 = 1, √4 = 2, √9 = 3. But what about √2? What rational number, multiplied by itself, gives exactly 2? Try 1.4: 1.4² = 1.96, a little too small. Try 1.5: 1.5² = 2.25, too big. So √2 is between 1.4 and 1.5. Keep going: 1.41² = 1.9881, 1.42² = 2.0164. Each digit narrows the bracket, and you can zoom in forever. But no fraction will ever land on it exactly. Not because there's an interval with no rationals (there isn't, they're dense), but because this specific limit point is missing from ℚ.
No matter how far you zoom, the square root of 2 sits between rational ticks, never on one. The rationals are dense, meaning between any two of them there's always another one. But they're not complete.
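The digit-by-digit bracketing can be sketched with exact integer arithmetic, so no floating-point rounding blurs the picture:

```python
# Find the first n decimal digits of √2 by bracketing: at each step, push
# the last digit as high as it can go while the square stays below 2.
def sqrt2_digits(n):
    x = 1                                    # current truncation, scaled by 10^k
    for k in range(1, n + 1):
        x *= 10                              # open up one more digit
        while (x + 1) ** 2 <= 2 * 100 ** k:  # compare x² against 2·10^(2k)
            x += 1
    return x

print(sqrt2_digits(4))  # 14142, i.e. 1.4142
```

Every truncation is rational; the number they bracket is not.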
To see what "complete" means, we need two ideas first. A sequence is an ordered list of numbers that never ends. It has a first term, a second term, a third, and keeps going. This is what the "..." in 0.333... was hiding. The values 0.3, 0.33, 0.333, 0.3333, ... form a sequence of finite decimals that edges closer and closer to 1/3 without any single term equaling it. The decimal approximations of √2 form a sequence too: 1, 1.4, 1.41, 1.414, 1.4142, ....
A sequence converges when its terms settle toward a single value. The gap between each term and that target keeps shrinking, and it shrinks past any threshold you name. That target value is called the limit of the sequence. The sequence 1, 1.4, 1.41, 1.414, ... converges toward √2. By the fifth term the gap is less than 0.001, and by the tenth it's less than 0.0000000001. It never reaches the limit, but it gets arbitrarily close.
Now we can say what completeness means. If the terms of a sequence keep bunching closer and closer together (mathematicians call this a Cauchy sequence), a complete system guarantees the sequence converges to something in the system. The rationals fail this test. You can write a Cauchy sequence of rationals whose terms squeeze tighter and tighter around √2, but there's no rational for it to land on.
To fill the holes, we construct the real numbers, ℝ. A real number is an equivalence class of Cauchy sequences of rationals. Just as the integer 3 was the whole collection of pairs with difference 3, the real number √2 is the whole collection of rational sequences converging toward it.
Each term of the sequence 1, 14/10, 141/100, 1414/1000, ... is a fraction. Each one gets closer to √2, but the target itself lives in the gap between all of them.
That sequence is one representative. But there are infinitely many other rational sequences converging to the same gap. Here are three completely different strategies, each producing a sequence of fractions that closes in on √2:
One approach is to average guesses. Start with 1. Since 1² = 1 is too small, √2 must be bigger, so the correction 2/1 = 2 is too big. Average the guess and the correction: (1 + 2)/2 = 3/2. Now 3/2 is too big ((3/2)² = 9/4 > 2), and 2/(3/2) = 4/3 is too small, so average again: (3/2 + 4/3)/2 = 17/12. Each step squares the error, so the fractions lock onto √2 fast.
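This averaging strategy (often called the Babylonian method) can be run exactly with Python fractions, so every term stays a genuine rational:

```python
# Average the guess x with the correction 2/x; repeat. Exact arithmetic
# via Fraction keeps every approximation a true ratio of integers.
from fractions import Fraction

x = Fraction(1)
approximations = []
for _ in range(4):
    x = (x + 2 / x) / 2
    approximations.append(x)

print([str(a) for a in approximations])
# ['3/2', '17/12', '577/408', '665857/470832']
```

577/408 is already within a millionth of √2; the error roughly squares at each step.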
Another approach looks for the fraction with the smallest denominator closest to the target. For √2, it produces 1, 3/2, 7/5, 17/12, 41/29, .... These are the best rational approximations you can get without using bigger denominators, and they alternate between slightly too small and slightly too big.
The simplest approach is brute force: chop the decimal expansion at each digit. 1, 14/10, 141/100, 1414/1000, .... Each fraction is just the decimal truncation written as a ratio. It converges, but slowly compared to the other two.
All three strategies generate different sequences of fractions, but they all close in on the same spot. The real number √2 is all of them at once, the whole equivalence class.
The reals complete the number line. Every Cauchy sequence of real numbers converges to a real number. There are no more holes.
The reals also include other numbers that the rationals miss. Take a circle whose diameter is exactly 1. How long is its circumference, the distance around it? You can measure a straight line by laying a ruler along it. A shape made entirely of straight sides, called a polygon (triangle, square, hexagon, and so on), is almost as easy since you just measure each side and add them up. But a circle has no straight sides. You can't break it into flat pieces and add their lengths.
You can get close, though. Inscribe a polygon inside the circle so that every corner touches the edge. A triangle (3 sides) gives a rough underestimate. A square (4 sides) is better. A hexagon (6 sides), better still. Each polygon is made of straight segments you can measure exactly, and as you add more sides the polygon hugs the circle more tightly. Its perimeter creeps toward the true circumference. That value is π = 3.14159....
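A numerical sketch of this polygon-doubling idea (essentially Archimedes' method). The side-doubling formula below follows from the half-angle identity for a circle of diameter 1; treat it as an assumption of the sketch:

```python
# Inscribe a regular hexagon in a circle of diameter 1 (radius 1/2), where
# each side equals the radius, then keep doubling the side count.
import math

n, s = 6, 0.5                     # hexagon: 6 sides of length 1/2, perimeter 3
for _ in range(10):
    # new side length after doubling the side count (radius = 1/2):
    s = math.sqrt(0.5 - 0.5 * math.sqrt(1 - s * s))
    n *= 2

print(n, n * s)  # a 6144-gon; its perimeter is π to about 7 digits
```

Every perimeter along the way is a number you could measure with straight segments; the limit is not.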
No fraction equals π exactly. Like √2, it sits in a gap between rationals. Numbers like this are called irrational (not equal to any ratio of whole numbers). It's another equivalence class of Cauchy sequences: 3, 3.1, 3.14, 3.141, 3.1415, ....
Another irrational number shows up when you ask about growth. Suppose you invest €1 at 100% annual interest. Compounded once at year's end, you get €2. Compounded twice (50% every six months), you get (1 + 1/2)² = €2.25. Compounded four times (25% each quarter), (1 + 1/4)⁴ ≈ €2.44. As you compound more and more often, the result keeps growing but never explodes. It settles toward a specific value. Remember, the value a sequence settles toward is its limit, the number the terms keep getting closer to without overshooting. The limit of (1 + 1/n)ⁿ as n grows toward infinity is e = 2.71828....
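The compounding sequence is easy to tabulate; a quick sketch:

```python
# €1 at 100% annual interest, compounded n times per year: (1 + 1/n)**n.
# Growing n pushes the result toward e without ever overshooting it.
for n in (1, 2, 4, 12, 365, 1_000_000):
    print(n, (1 + 1 / n) ** n)
```

n = 1 gives 2.0, n = 2 gives 2.25, n = 4 gives about 2.4414, and by n = 1,000,000 the value agrees with e = 2.718281828... to six decimal places.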
Like π, no fraction equals e exactly. Both sit in gaps between rationals, just like √2. The reals fill all of these gaps at once.
Square roots break again
The reals filled every hole on the number line. Addition, subtraction, multiplication, division, and square roots of positive numbers all work.
But we've only tested square roots on positive numbers. Every previous operation eventually hit an input that broke the system. Does this one?
Try √4 = 2. Fine. √2 = 1.414.... Fine. Now try √−1. What number, squared, gives −1?
No real number does. We saw earlier that the distributive law forces negative times negative to be positive, and squaring zero gives zero, so nothing in the reals has a negative square.
Same pattern. A reasonable operation produces a result the system can't hold.
Extend the system using pairs again. A complex number is a pair of real numbers, written a + bi, where i is defined by one rule: i² = −1. Calling it "imaginary" is a historical accident, no more imaginary than negative numbers were when people first invented them.
Where does i live? No real number squares to −1, so i isn't on the number line. It's genuinely new. Since it's not to the left or right of any real number, put it off the line entirely, one unit above zero on a new vertical axis. Every complex number a + bi is a point on this plane: a tells you how far along the real line, b how far along the new axis. 3 + 2i sits 3 units right and 2 units up. A one-dimensional line became a two-dimensional plane, because the new object didn't fit on the old line.
All the arithmetic rules from before still apply. The only new fact is i² = −1. Everything else follows.
Watch what multiplying by i does. Start at 1 on the real axis. 1 × i = i, one unit up. i × i = −1, one unit left. −1 × i = −i, one unit down. −i × i = 1, back where you started. Four multiplications, a full loop.
Split a full circle into 360 degrees. Four equal steps make 360, so each step is 90 degrees. Multiplying by i rotates a point 90 degrees counterclockwise.
Two multiplications rotate 180 degrees, from 1 to −1. That's i² = −1, two quarter-turns. The operation that seemed impossible on the number line becomes a rotation in the plane.
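The pair construction makes this concrete. A sketch in Python, with the single rule i² = −1 baked into the multiplication formula:

```python
# Complex numbers as pairs (a, b) meaning a + bi. Expanding
# (a + bi)(c + di) and using i² = −1 gives (ac − bd) + (ad + bc)i.
def cmul(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
z = (1, 0)                  # start at 1 on the real axis
path = [z]
for _ in range(4):
    z = cmul(z, i)          # each multiplication by i is a quarter-turn
    path.append(z)

print(path)  # [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 0)]
```

Four quarter-turns and the point is back at 1, exactly the loop described above.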
3 − 5 had no natural number. 1 ÷ 3 had no integer. √2 had no rational. √−1 had no real. Each time we expanded the system. But what about an equation like x⁵ − x + 1 = 0? Expressions like these (adding, subtracting, multiplying a variable by constants, raising to whole-number powers) are polynomials. The complex numbers settle all of them. Every non-constant polynomial (non-constant just means "not a plain number") with complex coefficients (the numbers in front of each power of x) has at least one complex root. Equivalently, over ℂ, you can break any such polynomial into linear factors. That doesn't mean every polynomial has a simple radical formula, but it does mean roots exist in ℂ.
The expansion engine
Every number system followed one cycle:
- Define a system with rules and operations.
- Push those operations until one fails.
- Extend the system with new objects that make it work.
- Check that the extension preserves the old rules.
Each system nests inside the next like concentric rings, with the naturals at the center and each expansion adding a ring around the previous ones.
The cycle repeats across all five systems.
The naturals couldn't subtract, so we built the integers. The integers couldn't divide, so we built the rationals. The rationals had holes, so we built the reals. The reals couldn't handle all square roots, so we built the complex numbers.
Each system contains all the previous ones: naturals inside integers, inside rationals, inside reals, inside complex numbers. Nothing is lost, and each step adds something new.
Division by zero never gets fixed while keeping the usual field laws. If 1/0 were some number q and division is still inverse to multiplication, then q × 0 = 1. But x × 0 = 0 for every x, so 1 = 0, and then x = x × 1 = x × 0 = 0 for every x, collapsing the distinction between all numbers. (There are alternative algebraic systems that assign values to division by zero, but they give up familiar rules.)
The expansion doesn't stop at ℂ either. Quaternions (four-dimensional) and octonions (eight-dimensional) go further. But each extension costs something. Quaternions give up commutativity (ab ≠ ba in general), octonions give up associativity ((ab)c ≠ a(bc) in general). The pattern continues, but the price gets steeper.
Every number you use today exists because an earlier system couldn't handle some operation, and someone decided to build a new one rather than accept the limitation.