
Switches: The Foundation of Computation

What separates a general-purpose computer from a music box or a Jacquard loom? The loom can only play through a fixed sequence of operations. A computer can look at the state of a value and change what it does next. That ability comes down to one thing: a switch that can turn one signal on or off in response to another signal.

From switches, we build logic gates, arithmetic units, and CPUs. But what makes a switch work in the first place?


A one-way gate

Let's start with something concrete: a one-way water valve. A gate is mounted on a hinge inside a pipe. It can swing open in one direction, but a block prevents it from swinging the other way.

[Figure: One-Way Water Valve. Forward pressure swings the gate open and water flows.]

When water pressure pushes from left to right (forward bias), the gate swings open and water flows through. When pressure comes from the other direction (reverse bias), the gate is pushed against the block and seals shut.

Here's the subtle part: as water particles hit the gate, they transfer kinetic energy to it. That energy gets absorbed by the block and released as heat. Without this energy dissipation, the gate would bounce forever and the system wouldn't have a preferred direction.

That's the key idea. The system is directional because energy flows out as heat. And when a particle passes through, it loses some velocity. If you wanted to use those particles as input to another valve downstream, you'd need to add energy back. Every operation costs energy, which comes out as heat.


From water to electrons

Water valves are fine for demonstrating the principle, but we need something that operates at electrical speeds. The earliest electrical switches were vacuum tube diodes.

A metal cathode (the electron emitter) sits inside a cylindrical anode (the electron absorber), separated by vacuum. An external energy source heats the cathode.

[Figure: Vacuum Tube Diode. Cathode (C) inside anode (A); current flows in forward bias.]

In forward bias, electrons boil off the heated cathode (thermionic emission), gain kinetic energy, and fly across the vacuum to the anode. Current flows. In reverse bias, electrons arrive at the anode from outside, but the anode isn't heated, so they don't have the energy to jump back. Current is blocked.

Notice the pattern: we pump heat energy into the system, that energy dissipates to the environment, and the result is a one-way flow. Same principle as the water valve, different medium.

But vacuum tubes are bulky, fragile, and run hot. What if we could get the same directional behavior from a solid material?


Semiconductors and the p-n junction

Modern diodes are made from silicon. Silicon has 14 electrons, with 4 in its outermost shell (which can hold 8). Atoms prefer full outer shells, so silicon atoms naturally form bonds with their neighbors, sharing electrons to fill those shells.

We can change silicon's electrical behavior by doping it with other elements. Add aluminum (3 outer electrons) and you get p-doped silicon with holes, missing electrons that act like positive charge carriers. Add phosphorus (5 outer electrons) and you get n-doped silicon with excess electrons.

When you place a p-doped region and an n-doped region next to each other, something happens at the boundary.

[Figure: p-n Junction Formation. An n-region with excess electrons beside a p-region with holes.]

Excess electrons near the boundary drift into the p-region to fill holes. This creates a depletion zone where atoms have full outer shells, a low-energy state. The energy released in forming this zone escapes as heat. The depletion zone acts as an insulator, and the junction is now directional.

Forward and reverse bias

In forward bias, we pump electrons in from outside, forcing them to work against the chemical forces that created the depletion zone. This costs energy, which is released as heat when the system relaxes back to equilibrium.

In reverse bias, we try to pull electrons out of the p-region. The atoms resist, enlarging the depletion zone instead. Push hard enough and the junction breaks down, releasing all that stored energy at once, usually destructively.
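The forward/reverse asymmetry can be made quantitative with the standard Shockley ideal-diode equation. Here is a minimal Python sketch; the saturation current and thermal voltage values are typical room-temperature figures, chosen for illustration:

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.02585):
    """Shockley ideal-diode equation: I = I_S * (exp(V / V_T) - 1).
    i_s is the tiny reverse saturation current; v_t is the thermal
    voltage at room temperature. Values here are illustrative."""
    return i_s * (math.exp(v / v_t) - 1)

forward = diode_current(0.6)    # forward bias: substantial current
reverse = diode_current(-0.6)   # reverse bias: essentially blocked
print(f"forward: {forward:.2e} A, reverse: {reverse:.2e} A")
```

The exponential means a fraction of a volt in the forward direction produces roughly ten orders of magnitude more current than the same voltage in reverse: a one-way gate.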

We've solved the vacuum tube problem. A tiny piece of doped silicon gives us a one-way switch with no vacuum, no filament, and much less heat. But a diode only lets current through or blocks it. What if we want one signal to control another?


Switches that control switches

A bipolar junction transistor is, in essence, two diodes back-to-back: three doped regions in a p-n-p (or n-p-n) sequence. The middle region is called the base, and a wire connected to it acts as a control input.

[Figure: p-n-p Transistor. Emitter (E), base (B), collector (C), with the base off.]

Inject a small current into the base and it opens both junctions, allowing a much larger current to flow from the emitter to the collector. A small signal controls a big one. That's the core of computation.
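To see why "a small signal controls a big one" is enough for computation, here is a minimal Python sketch that models a transistor as a boolean switch and composes switches into gates. The function names are illustrative, not a real circuit simulator:

```python
def transistor(base, signal):
    """A switch: passes the main signal only while the base input is on."""
    return signal and base

# Two gated switches in series behave like a NAND gate: the output is
# pulled low only when both controls are on. NAND is universal, so
# every other gate can be built from it.
def nand(a, b):
    return not transistor(b, transistor(a, True))

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

print([int(nand(a, b)) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]
```

From NAND you can build NOT, AND, OR, adders, and ultimately a CPU, which is the arc the rest of the article gestures at.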

But there's a problem. The electrons we pump into the base have to go somewhere: they flow out of the device as a real current and are dissipated as heat. The control signal itself costs energy.


A better switch

The field-effect transistor (FET) fixes this. Instead of physically injecting electrons into the base, we apply an electric field across a thin insulator. The field attracts or repels charge carriers in the channel, opening or closing it, without any actual current flowing into the gate.

[Figure: Field-Effect Transistor (FET). Source, gate, and drain separated by an insulator; with no field applied, the channel is closed.]

The control current is nearly zero. This makes FETs vastly more efficient than p-n-p transistors. Modern silicon chips use FETs (specifically MOSFETs: Metal-Oxide-Semiconductor FETs).

BJT vs FET

p-n-p transistor: controlled by an actual current into the base; lower efficiency; more heat.
Field-effect transistor: controlled by an electric field alone; higher efficiency; less heat.

Making things tick

Often we need to switch a signal on and off at a regular interval. This kind of repeating binary signal is called a clock, a square wave that oscillates between high and low.

[Figure: Clock Signal. A square wave alternating between 1 and 0 over time.]

Fast electrical clocks are built around materials with piezoelectric properties: they vibrate mechanically in response to voltage. Quartz crystals resonate at precise frequencies, typically kilohertz to tens of megahertz, and on-chip circuits multiply that reference up to the megahertz and gigahertz clock rates CPUs use.

Clock signals are what make sequential logic possible: circuits whose state updates at regular time intervals. Sequential logic is the foundation of CPUs. You can buy quartz crystal oscillators for a few dollars to use in your own projects.
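A minimal sketch of sequential logic in Python: a D flip-flop, the basic one-bit memory element, which captures its input only on the rising edge of a clock. The class and signal names are illustrative:

```python
class DFlipFlop:
    """One bit of state; updates only on the clock's rising edge."""

    def __init__(self):
        self.q = 0          # stored bit
        self._prev_clk = 0  # last clock level, used to detect edges

    def tick(self, clk, d):
        if clk and not self._prev_clk:  # rising edge: clock went 0 -> 1
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
outputs = []
# Drive with a square-wave clock; d is only latched on rising edges.
for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0), (0, 1)]:
    outputs.append(ff.tick(clk, d))
print(outputs)  # [0, 1, 1, 0, 0]
```

Between edges the stored bit ignores its input entirely, which is what lets a circuit hold state from one clock cycle to the next.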


Fabrication

Modern chips contain billions of transistors on a single piece of silicon. Silicon starts as purified sand, formed into ingots, sliced into wafers, and cut into small chips.

Creating transistors and wires on those chips is called fabrication. It uses the same masking concept as screen printing but at microscopic scales. You design the circuit in a CAD program, specifying where each doping chemical goes (to create p and n regions) and where copper wires connect them. You create masks for each layer, spray atoms through them onto the wafer, let each layer dry, and repeat. Through many iterations, you build up the complete circuit.

This is expensive. A fabrication plant costs around $5 billion to build, and a single mask set costs around $5 million. You really don't want bugs in your circuit design.


The power wall

Throughout the transistor age, the number of transistors per chip has roughly doubled every two years (Moore's law). The Intel 4004 (1971) had about 2,300 transistors. Modern chips have billions.

As transistors got smaller, they also got faster, so clock speeds doubled too. But this couldn't last forever. Power consumption scales as P = CV²f: as frequency (f) increases, power rises steeply. At 3.5 GHz, a CPU generates enough heat to fry an egg. If the trend had continued, CPUs would have reached the temperature of the sun's surface by 2010.
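The scaling in P = CV²f can be made concrete in a few lines of Python; the capacitance and voltage values below are made up for illustration:

```python
def dynamic_power(c_farads, v_volts, f_hertz):
    """Dynamic switching power of a CMOS chip: P = C * V^2 * f."""
    return c_farads * v_volts**2 * f_hertz

# Hypothetical chip: 1 nF of switched capacitance at 1.2 V.
p_2ghz = dynamic_power(1e-9, 1.2, 2e9)  # power at 2 GHz
p_4ghz = dynamic_power(1e-9, 1.2, 4e9)  # doubling f doubles P
print(p_2ghz, p_4ghz)
```

Note that voltage enters squared, which is why lowering supply voltage was the main lever against the power wall for as long as it could be lowered.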

Transistor density kept doubling, but clock speeds plateaued around 3.5 GHz. Smaller transistors use less power individually, so packing more of them doesn't burn the chip up. But the end of clock scaling changed computer architecture forever. We couldn't make processors faster, so we added more cores, deeper caches, and smarter pipelines, all topics that follow from the physics we've covered here.


Summary

Every concept in this article traces back to one principle: switches work by directing energy. A water valve dissipates kinetic energy as heat in the block. A vacuum tube burns off energy heating the cathode. A p-n junction releases energy forming the depletion zone. A transistor uses a control signal to open or close a channel. And a FET does it with an electric field instead of actual current, making it efficient enough to pack billions onto a chip.

Understanding this deeply changes how you think about computers. They're not magic. They're physics.