Digital Logic: Building Computation from Switches

Switches such as transistors are the most basic building blocks of modern electronic computation. But computer architects don't think in terms of individual switches. Instead, they build up a hierarchy of increasingly complex structures, eventually forming a CPU. The next layer up consists of logic gates: devices formed from a few switches in standard circuits that implement basic Boolean functions like AND and OR.

Logic gates can be connected to build even larger structures: simple machines for arithmetic, memory, and control. In this chapter, we'll explore how this hierarchy works, from the transistors that implement gates all the way up to networks of gates that perform computation.


Claude Shannon and the Birth of Digital Logic

By 1936, complex electronic switching circuits were in heavy use in telephone exchanges. These circuits automated the work previously done by human telephone operators, making and breaking connections between wires to route calls. As more telephone networks connected, call-routing logic grew increasingly complex. There was urgent economic pressure to reduce the wiring and complexity of these circuits.

Many ad-hoc hacks existed for simplifying circuits, but nobody understood how to do this reliably or optimally. The fundamental problem: switches convert some electrical energy to heat, so outputs carry less energy than inputs. This made it difficult to chain switches together: the voltage would drop with each stage until the signal became unrecognizable.

Everything changed in 1936 when Claude Shannon started his master's degree at MIT. His thesis (published as "A Symbolic Analysis of Relay and Switching Circuits") is widely regarded as one of the greatest master's theses ever written. Shannon made two revolutionary contributions:

First, he defined logic gates as a higher-level abstraction above switches. A logic gate takes binary inputs and produces binary outputs using the same representation. Crucially, a logic gate refreshes the signal: it uses external power to restore any voltage lost to switching. This means the output of one gate can be cleanly used as input to the next, allowing arbitrarily long chains of gates without signal degradation.

Second, Shannon showed that any computation could be performed by combining small sets of standard gates like AND, OR, and NOT. He mapped these to George Boole's Boolean algebra, a mathematical system developed roughly eighty years earlier. Boolean algebra provided proven laws to simplify circuits while preserving functionality.

Why Shannon Mattered So Much

Shannon separated the logical problem (what function do we want to compute?) from the physical problem (how do we implement it with transistors?). Circuit designers no longer had to track low-level energy losses; they could think purely about logic.

This abstraction was so powerful that it enabled the entire field of computer architecture. Everything built in the decades that followed, from early computers to modern processors, is built on Shannon's insight.


Logic Gates: The Fundamental Building Blocks

A logic gate is any device with binary inputs and outputs where both use the same physical representation for 0 and 1, and the function is completely described by a truth table.

While infinitely many gates could be designed, the most common are those with one or two inputs and one output: AND, OR, NOT, XOR, NOR, and NAND. Let's explore them interactively.

Interactive: Logic Gates & Truth Tables

AND Gate

Output is 1 if and only if both inputs are 1

Input X   Input Y   Output
   0         0        0
   0         1        0
   1         0        0
   1         1        1

The truth table for each gate lists every possible input combination and shows the corresponding output. In the AND table above, for example, only the final row (both inputs 1) produces an output of 1.
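If you prefer to see this in code, here is a minimal Python sketch (purely illustrative, not part of the interactive tool) that models a few gates as functions and prints a truth table:

from itertools import product

# Model each two-input gate as a function from bits to a bit.
GATES = {
    "AND":  lambda x, y: x & y,
    "OR":   lambda x, y: x | y,
    "XOR":  lambda x, y: x ^ y,
    "NAND": lambda x, y: 1 - (x & y),
    "NOR":  lambda x, y: 1 - (x | y),
}

def print_truth_table(name):
    """List every input combination and the gate's output."""
    print(f"{name}: X Y -> Q")
    for x, y in product((0, 1), repeat=2):
        print(f"       {x} {y} -> {GATES[name](x, y)}")

print_truth_table("AND")   # only the 1,1 row outputs 1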

Testing Logic Gates Interactively

Use the interactive tester below to see how each gate responds to different inputs. Toggle the checkboxes to change inputs and watch the output change in real-time.

Interactive: Test Logic Gates


Universal Gates: NAND and NOR

One of Shannon's key insights: not all gates are equally important. Some sets of gates are universal: they can be combined to build any other gate.

For example, if you have AND and NOT, you can build anything. Same with OR and NOT. But here's the remarkable part: you can build everything from only NAND gates (or only NOR gates). This is why NAND is "universal."

This matters enormously for manufacturing. Instead of building multiple types of gates, chip makers can build just one type of physical gate and reconfigure combinations of them to implement any desired logic.

Interactive: Building Gates from NAND

NAND is a universal gate—you can build any other gate from only NAND gates.

Can you figure out how?

Hint: If you connect both inputs to the same signal, what does NAND do?

De Morgan's Laws

The mathematical principle behind this is De Morgan's pair of laws: NOT(A AND B) = (NOT A) OR (NOT B), and NOT(A OR B) = (NOT A) AND (NOT B)

These equivalences let you trade ANDs for ORs (and vice versa) at the cost of some inversions, which is how NAND can replicate every other gate type.
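Here is one way to build NOT, AND, and OR from nothing but NAND, sketched in Python with illustrative helper names (it does give away the puzzle above):

def nand(x, y):
    """The only primitive we allow ourselves."""
    return 1 - (x & y)

def not_(x):
    # Tie both NAND inputs to the same signal: NAND(x, x) = NOT x.
    return nand(x, x)

def and_(x, y):
    # NAND followed by an inverter recovers AND.
    return not_(nand(x, y))

def or_(x, y):
    # By De Morgan: A OR B = NOT(NOT A AND NOT B) = NAND(NOT A, NOT B).
    return nand(not_(x), not_(y))

# Spot-check against the expected truth tables.
for x in (0, 1):
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)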


Making Logic Gates from Transistors

How do we actually build a logic gate from transistors? We can't just use one transistor: it acts as a switch, losing energy in the process. Instead, we combine several transistors while using external power to refresh the signal.

Modern chips use CMOS (Complementary Metal-Oxide-Semiconductor) technology. A CMOS NAND gate uses four transistors: two n-type (pulling the output toward ground) and two p-type (pulling it toward power). By combining them strategically, the output is always refreshed to full voltage.

Since NAND is universal, every other gate can be built from NAND gates. In principle, an entire chip could be built from nothing but NAND gates, which are themselves built from transistors, all the way down.


Boolean Logic: The Mathematics of Gates

Shannon's second insight was connecting logic gates to George Boole's algebra. Boole (1815-1864) developed a system where truth values behave like numbers: true = 1, false = 0.

In Boolean algebra, AND becomes multiplication, OR becomes addition, and NOT becomes inversion (1 − x). For example, 1 · 0 = 0 matches the AND truth table, and 1 + 1 = 1 because OR saturates at 1 rather than carrying.

This isn't just a notational trick; it means we can use the laws of arithmetic to simplify logical expressions. For instance, the distributive law still works: A(B + C) = AB + AC
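As a quick sanity check, here is a small Python sketch (my own illustration, using min to cap OR at 1) showing that the arithmetic view really does obey the distributive law:

def AND(x, y):
    return x * y            # Boolean AND as multiplication

def NOT(x):
    return 1 - x            # Boolean NOT as inversion

def OR(x, y):
    return min(x + y, 1)    # Boolean OR as addition, capped at 1

# The distributive law A(B + C) = AB + AC holds for every input.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))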

Evaluating Boolean Expressions

Try evaluating your own Boolean expressions below. The tool generates a truth table showing all possible combinations of input values:

Interactive: Boolean Algebra Evaluation

Use A, B, C for variables, + for OR, and juxtaposition (no operator) for AND.

A   B   C   A + BC
0   0   0      0
0   0   1      0
0   1   0      0
0   1   1      1
1   0   0      1
1   0   1      1
1   1   0      1
1   1   1      1

Laws of Boolean Algebra

Just as arithmetic has laws (associativity, commutativity, etc.), Boolean algebra has laws we can use to simplify circuits. Here are the most useful ones:

Law            AND Form               OR Form
Identity       1 · A = A              0 + A = A
Null           0 · A = 0              1 + A = 1
Idempotent     A · A = A              A + A = A
Inverse        A · Ā = 0              A + Ā = 1
Commutative    AB = BA                A + B = B + A
Distributive   A(B + C) = AB + AC     A + BC = (A + B)(A + C)
De Morgan's    NOT(AB) = Ā + B̄        NOT(A + B) = Ā · B̄

Building Complex Circuits

With these laws, we can simplify logic circuits. A complex circuit with many gates can often be simplified to use fewer gates while computing the same function. Modern CAD tools do this automatically, but Shannon's theoretical foundation made it possible.

For example, consider the expression (A + B)(A + C). Using the distributive law, this simplifies to A + BC, which computes the same function with one OR gate and one AND gate instead of two ORs and an AND.

This matters because each gate requires transistors, which consume chip area and power. Smaller circuits mean lower costs and better performance.
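A simplification is only safe if both expressions agree on every input. The short Python check below (an illustrative sketch, not a CAD tool) compares the two forms exhaustively:

from itertools import product

def or_(x, y):  return x | y
def and_(x, y): return x & y

for a, b, c in product((0, 1), repeat=3):
    original   = and_(or_(a, b), or_(a, c))   # (A + B)(A + C): two ORs, one AND
    simplified = or_(a, and_(b, c))           # A + BC: one AND, one OR
    assert original == simplified             # same function, fewer gates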


Implementing Circuits on Hardware

TTL Chips

Since the 1960s, standardized logic gate chips have been available. The 7400 series from Texas Instruments contains just a few gates per chip (e.g., 7408 has four AND gates). You can wire these chips together on a breadboard with a 5V power source to build working digital logic networks.

This approach is still used for hobbyist projects and rapid prototyping, even though modern systems mostly use ASICs or FPGAs.

Custom ASICs

For high-volume production, custom application-specific integrated circuits (ASICs) are manufactured using photolithography. This is where the chip designs from the previous chapter become reality. Multiple layers of masks create transistors and interconnects that form logic gates.

Programmable Logic Arrays (PLAs)

A PLA is a generic chip with many inputs and outputs connected through AND and OR planes, controlled by programmable fuses. You "burn" your specific circuit by blowing out unwanted fuse connections. This avoids the $5 million cost of custom mask sets.

Field-Programmable Gate Arrays (FPGAs)

FPGAs go further: they're electrically programmable and can be reprogrammed an unlimited number of times. An FPGA chip contains blocks of configurable logic connected by programmable switches. You download a bitstream to the device to configure it for your design.

FPGAs are invaluable for prototyping CPU designs before committing to expensive photolithography. Consumer FPGA development boards cost around $30 and are available from Xilinx and Altera (now AMD and Intel, respectively).


Beyond Gates: Simple Machines

With logic gates as building blocks, computer architects build more complex structures called simple machines. These are standard circuits that perform specific, useful operations like arithmetic, selection, and storage. Just as gates are universal via NAND, simple machines can be combined to build CPUs.

Simple machines fall into two categories: combinatorial logic machines (which compute instantly based on inputs alone) and sequential logic machines (which depend on time and memory, using feedback and clocks).


Shifters: Fast Multiplication and Division

In decimal arithmetic, multiplying by 10 is easy: just shift digits left (append a zero). Similarly, in binary, multiplying by 2ⁿ is fast: shift the bits left by n positions. Division by 2ⁿ works the same way to the right.

Modern CPUs use dedicated shifter circuits to make operations like x << 2 (multiply by 4) nearly free compared to full multiplication. This is why performance-critical code (games, codecs) often exploits powers-of-two arithmetic.

Interactive: Left Shifter

Example: input 5 (binary 0101) shifted left by one position becomes 10 (binary 1010).

Formula: Shifting left by 1 position is equivalent to multiplying by 2

A shifter works by using multiplexers on each bit position. When the shift enable signal is high, each bit takes its value from the position to its right (a left shift). Otherwise, it passes through unchanged.
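Here is a behavioral Python sketch of that mux-per-bit structure (assuming an 8-bit word with zero shifted in at the right; the function names are illustrative):

def mux(select, a, b):
    """2-to-1 multiplexer: output a when select is 0, b when select is 1."""
    return b if select else a

def left_shift_1(bits, enable):
    """bits is a list of 0/1 values, most significant bit first."""
    n = len(bits)
    out = []
    for i in range(n):
        passthrough = bits[i]
        # The bit to the right; the least significant position shifts in 0.
        shifted_in = bits[i + 1] if i + 1 < n else 0
        out.append(mux(enable, passthrough, shifted_in))
    return out

word = [0, 0, 0, 0, 0, 1, 0, 1]          # 5 in 8-bit binary
print(left_shift_1(word, enable=1))      # [0, 0, 0, 0, 1, 0, 1, 0] = 10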


Decoders and Multiplexers

A decoder converts a binary number into a "one-hot" signal. For example, input 3 (binary 011) produces an output in which only line 3 is 1. Decoders are essential for selecting one address among many in memory.

Interactive: 3-to-8 Decoder

A decoder converts a binary input to a 1-of-N output. Only the output at position (input) is 1.

Example: with input 011, only output line 3 of the eight output lines (0–7) is active.

A multiplexer (mux) does the opposite: it selects one of many inputs based on a selector code and outputs just that one signal. Together, decoders and muxes let us dynamically connect sources to destinations, just like the Analytical Engine mechanically routed gears between different components.
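Both behaviors are easy to sketch in Python (sizes and names here are illustrative, not a standard API):

def decoder(address, n_outputs=8):
    """One-hot decode: only the line at position `address` is 1."""
    return [1 if i == address else 0 for i in range(n_outputs)]

def multiplexer(inputs, select):
    """Select exactly one of the input lines and pass it through."""
    return inputs[select]

print(decoder(3))                     # [0, 0, 0, 1, 0, 0, 0, 0]
print(multiplexer([0, 1, 1, 0], 2))   # 1: routes input line 2 to the output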

The shared wires that connect mux outputs to decoder-selected destinations are called a bus. Buses are how CPUs move data between registers, memory, and the arithmetic logic unit (ALU).


Adders: Building Arithmetic

Addition is the fundamental arithmetic operation. Binary addition works just like decimal addition: you add column by column, propagating carries to the next position.

A full adder is a simple machine that adds two 1-bit values plus an input carry, producing a sum and output carry. Multiple full adders chained together form a ripple-carry adder that can add multi-bit numbers.

Interactive: Full Adder

A full adder combines two 1-bit values and a carry input to produce a sum and carry output.

X   Y   Cin   Sum   Cout
0   0    0     0     0
0   0    1     1     0
0   1    0     1     0
0   1    1     0     1
1   0    0     1     0
1   0    1     0     1
1   1    0     0     1
1   1    1     1     1


The truth table above shows all 8 possible inputs and their outputs. Notice the pattern: the sum is the XOR of all three inputs, while the carry is 1 whenever at least two inputs are 1.
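That observation translates directly into a behavioral sketch in Python (assuming unsigned values and bit lists written most-significant-bit first):

def full_adder(x, y, cin):
    """Sum is the XOR of all three inputs; carry out is the majority."""
    s = x ^ y ^ cin
    cout = (x & y) | (x & cin) | (y & cin)
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (MSB first), carry rippling right to left."""
    carry = 0
    result = []
    for x, y in zip(reversed(a_bits), reversed(b_bits)):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    result.append(carry)          # final carry becomes the top bit
    return list(reversed(result))

print(ripple_carry_add([0, 1, 1], [0, 1, 1]))   # 3 + 3 = [0, 1, 1, 0] = 6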

Faster Addition: Carry-Lookahead vs Ripple-Carry

A ripple-carry adder has a latency problem: the carry ripples left through all positions, so a 64-bit add takes time proportional to 64. In parallel hardware, this is slow.

Modern CPUs use carry-lookahead adders, which compute carries in parallel using more transistors but finish in O(log n) time instead of O(n). It's a classic speed-versus-area tradeoff.


Sequential Logic: Memory and Time

So far, all our circuits (gates and simple machines) are combinatorial: their output depends only on the current inputs. But computers need memory; they must remember data between operations. This requires feedback: circuits whose outputs feed back into their inputs.

Feedback is tricky. A naive feedback loop like Q = NOT Q is paradoxical. The circuit would oscillate uncontrollably or enter an undefined state. To tame feedback, we use clocks: synchronous control signals that tell the circuit when to update state.

The rising edge of a clock is called a tick. States update only on clock edges; between edges, the circuit remains stable. This synchronization lets us control exactly when each state change happens and how long each state lasts.


Flip-Flops: 1-Bit Memory

A flip-flop is the simplest sequential machine. It stores a single bit that persists across clock cycles. There are several types. An SR flip-flop (Set-Reset) has two inputs: Set makes Q=1, Reset makes Q=0. A D flip-flop (Data) captures whatever signal is on its D input at the clock edge.

Interactive: Flip-Flop


D flip-flops are the most common in practice. They're the basis of shift registers, counters, and all memory structures. Chain 64 D flip-flops together and you have a 64-bit register that holds a value from one clock edge to the next.
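Here is a behavioral sketch in Python (ignoring setup and hold timing; it simply captures D whenever tick() is called to model the clock edge):

class DFlipFlop:
    """Stores one bit; updates only when tick() is called (the clock edge)."""
    def __init__(self):
        self.q = 0

    def tick(self, d):
        self.q = d          # capture whatever is on D at the clock edge
        return self.q

class Register:
    """A bank of D flip-flops sharing one clock."""
    def __init__(self, width=64):
        self.bits = [DFlipFlop() for _ in range(width)]

    def tick(self, value_bits):
        return [ff.tick(d) for ff, d in zip(self.bits, value_bits)]

reg = Register(width=4)
print(reg.tick([1, 0, 1, 1]))   # [1, 0, 1, 1] held until the next clock edge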


Counters: Counting Events

A counter is a simple machine that increments on each clock tick. Internally, it's flip-flops connected in a feedback loop where the output of each flip-flop becomes the input to the next, implementing binary counting.

Counters are everywhere: they divide clocks (each bit halves the frequency, so a counter driven by a 1 GHz clock produces a roughly 1 Hz signal on its 30th bit), control loop iterations, and timestamp events. The CPU's program counter is a specialized counter that steps through memory addresses.
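The arithmetic behind clock division is easy to check (a quick illustrative calculation, not a hardware model): each counter bit toggles at half the rate of the one before it.

clock_hz = 1_000_000_000            # 1 GHz input clock

# Each counter bit halves the frequency of the one before it.
for bit in (0, 9, 19, 29):
    toggle_hz = clock_hz / (2 ** (bit + 1))
    print(f"bit {bit:2d} toggles at about {toggle_hz:,.2f} Hz")
# bit 29 toggles at roughly 0.93 Hz -- close to a 1 Hz "seconds" signal.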


RAM: Addressable Storage

Memory isn't just flip-flops. Random-access memory (RAM) is organized as an array of addressable words. Each address selects a different word, which can be read or written using control signals.

Internally, RAM uses a decoder to select which word's flip-flops to activate, and multiplexers to route data in and out. This is exactly the principle Babbage used in the Analytical Engine, more than a century before transistors existed.

How RAM Works

A 1024-word, 8-bit RAM has 1024 rows (one per address) and 8 columns (one per bit). A 10-bit address enters a decoder, which enables the matching row. Data lines then read from or write to that row's flip-flops.

The decoder requires 2¹⁰ = 1024 output lines, one for each address. This is why large memories use hierarchical decoders: a primary decoder selects the row, secondary decoders select the column, and the intersection picks the exact cell.
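A behavioral sketch of a small RAM in Python (using a flat lookup rather than the hierarchical row/column decoders described above; the class and method names are illustrative):

class RAM:
    """Word-addressable memory: a decoder-style lookup selects one word."""
    def __init__(self, words=1024, width=8):
        self.storage = [[0] * width for _ in range(words)]

    def read(self, address):
        # The address selects exactly one row of flip-flops to read.
        return list(self.storage[address])

    def write(self, address, data_bits):
        # The same address selects which row captures the data lines.
        self.storage[address] = list(data_bits)

mem = RAM()
mem.write(42, [1, 0, 1, 1, 0, 0, 1, 0])
print(mem.read(42))   # [1, 0, 1, 1, 0, 0, 1, 0]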


Summary

Logic gates are the language in which we describe computation. Transistors are the vocabulary. Simple machines are the sentences. Together, they form the foundation of all digital computers.

What's next: Now that we understand logic gates and simple machines like adders, multiplexers, and memory, we have all the pieces to build a complete CPU. The next step is to wire these machines together to fetch, decode, and execute instructions.