Building BigInt in JavaScript from Scratch
If you've been using JavaScript for a while, you might have encountered BigInt without thinking too much about it. It's that thing you add an 'n' to when you need really big numbers, right?
const big = 9007199254740993n; // Notice the 'n'
But have you ever wondered how BigInt actually works under the hood? What happens when you write 9007199254740993n + 1n? How does JavaScript store a number that doesn't fit in a CPU register?
In this post, we'll build a BigInt implementation from scratch and discover the clever design decisions that make it work. We'll start with a fundamental problem and watch solutions emerge naturally, each one introducing new challenges we'll need to solve.
Let's get started!
The Breaking Point
JavaScript's Number type can only safely represent integers up to 2⁵³ - 1. What happens when you need to work with larger numbers, as you do in cryptography, financial systems, or scientific computing?
Let's see where JavaScript's regular numbers break:
// This looks fine...
const a = 9007199254740992; // Exactly 2^53
const b = 1;
console.log(a); // 9007199254740992 ✓
// But watch what happens when we add 1:
console.log(a + b); // 9007199254740992 (WRONG!)
console.log(a + b === a); // true (!!!)
// The number literally can't change anymore
console.log(a + 1); // 9007199254740992
console.log(a + 2); // 9007199254740994 (skipped 93!)
console.log(a + 3); // 9007199254740996 (skipped 95!)
Try it yourself: at 2⁵³, JavaScript's Number type starts skipping integers. It's not a bug; it's a fundamental limitation of how numbers are stored.
Why does this happen?
JavaScript's Number uses the IEEE 754 double-precision format: 1 sign bit, 11 exponent bits, and 52 mantissa bits (53 significant bits, counting the implicit leading 1).
Beyond 2⁵³, there aren't enough mantissa bits to represent every integer exactly.
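You can inspect this layout yourself. Here's a quick sketch (the doubleBits helper name is ours) that dumps the raw bits of a Number using a DataView:
// Dump the 64 raw bits of a double: 1 sign, 11 exponent, 52 mantissa
function doubleBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0).toString(2).padStart(64, '0');
}
console.log(doubleBits(9007199254740992)); // 2^53: mantissa is all zeros
console.log(doubleBits(9007199254740993)); // same bit pattern: 2^53 + 1 can't be stored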
Now watch what happens with BigInt:
const x = 9007199254740992n; // The 'n' makes it a BigInt
const y = 1n;
console.log(x + y); // 9007199254740993n ✓ CORRECT!
console.log(x + y === x); // false ✓
// Every integer works perfectly
console.log(x + 1n); // 9007199254740993n
console.log(x + 2n); // 9007199254740994n
console.log(x + 3n); // 9007199254740995n
This isn't just a theoretical problem. Real-world applications hit this limit constantly:
- Cryptography: RSA-2048 keys use 617-digit numbers
- Financial systems: nanosecond timestamps passed 2⁵³ within months of the Unix epoch, and even signed 64-bit nanosecond timestamps overflow around the year 2262
- Database IDs: Twitter's Snowflake IDs are 64-bit integers (see the snippet after this list)
- Blockchain: Ethereum uses 256-bit integers for everything
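Here's a quick demonstration of that ID failure mode. The ID value is made up and the bit layout is hypothetical, but the precision loss is real:
// A 64-bit ID above 2^53 can't survive a round-trip through Number:
const id = 1234567890123456789n;
console.log(BigInt(Number(id)) === id); // false: the low bits were rounded away
// With BigInt, extracting bit fields stays exact (hypothetical layout):
console.log(id >> 22n); // the timestamp portion of the ID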
The Core Trade-off: We're trading raw speed for unlimited precision. BigInt can represent any integer, but operations will be slower because we can't use simple CPU instructions anymore.
The Mental Model: Think in Chunks
Here's the fundamental insight that makes BigInt work: we can't store a million-bit number in a single CPU register, but we can store many 32-bit numbers in memory.
Think of how you write large numbers on paper: 18,446,744,073,709,551,616
Each comma separates groups of three digits (base 1000). BigInt does the same thing, but uses base 2³² instead of base 1000:
// The number 2^64 (18,446,744,073,709,551,616)
// is TOO BIG for a single 32-bit or even 64-bit slot.
// But we can split it into 32-bit "digits":
{
  sign: 1,          // 1 for positive, -1 for negative
  num_digits: 3,    // We need three 32-bit chunks
  digits: [0, 0, 1] // [2^0 place, 2^32 place, 2^64 place]
}
// How to read this:
// digits[0] + (2³² × digits[1]) + (2⁶⁴ × digits[2])
// = 0 + (2³² × 0) + (2⁶⁴ × 1)
// = 2⁶⁴
Let's see this with a smaller example:
Number: 4,294,967,297 (which is 2³² + 1)
{
  sign: 1,
  num_digits: 2,
  digits: [1, 1]
}
Calculation:
digits[0] × (2³²)⁰ → 1 × 1 = 1
digits[1] × (2³²)¹ → 1 × 4,294,967,296 = 4,294,967,296
Sum = 4,294,967,297 ✓
Why Base 2³²? This isn't arbitrary:
- 32 bits fits perfectly in a CPU register on 32-bit systems
- On 64-bit systems, you can multiply two 32-bit numbers and get a 64-bit result that still fits in a register
- It's a power of 2, so we can use bit shifts instead of division
- Modern CPUs have special instructions (adc, mulx) designed for exactly this
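To make the representation tangible, here's a small conversion sketch. The toChunks and fromChunks names are ours, and we lean on the built-in BigInt only to check that our chunked form round-trips:
const BASE = 2n ** 32n; // our digit base, 2³²

function toChunks(value) {
  const sign = value < 0n ? -1 : 1;
  let v = value < 0n ? -value : value;
  const digits = [];
  while (v > 0n) {
    digits.push(Number(v % BASE)); // the low 32 bits become the next digit
    v /= BASE;                     // shift right by one digit
  }
  if (digits.length === 0) digits.push(0);
  return { sign, num_digits: digits.length, digits };
}

function fromChunks({ sign, digits }) {
  let v = 0n;
  for (let i = digits.length - 1; i >= 0; i--) {
    v = v * BASE + BigInt(digits[i]);
  }
  return sign < 0 ? -v : v;
}

console.log(toChunks(4294967297n)); // { sign: 1, num_digits: 2, digits: [1, 1] }
console.log(fromChunks(toChunks(2n ** 64n)) === 2n ** 64n); // true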
Problem #1: Addition Overflow
Now we have a way to represent huge numbers. But there's an immediate problem:
When we add two 32-bit digits, the result can overflow to 33 bits. Where does that extra bit go?
Let's see this concretely:
// Maximum 32-bit value
const MAX = 0xFFFFFFFF; // 4,294,967,295
// What happens when we add two?
const sum = MAX + MAX;
console.log(sum.toString(16)); // 0x1fffffffe
// In binary:
// 0xFFFFFFFF = 11111111111111111111111111111111 (32 ones)
// 0xFFFFFFFF = 11111111111111111111111111111111 (32 ones)
// ───────────────────────────────────
// = 1 11111111111111111111111111111110 (33 bits!)
// ↑
// This bit doesn't fit in 32 bits!
The solution is the same algorithm you learned in elementary school: carry propagation.
    9 9 9
  +     1
  ───────
   ¹ ¹ ¹    ← carries
  1 0 0 0
When a digit addition exceeds the base (10 in elementary school, 2³² for BigInt), we:
- Keep the remainder in the current digit
- Carry the overflow to the next digit
Here's the implementation:
function addBigInts(a, b) {
  const result = { sign: 1, digits: [] };
  let carry = 0;
  const maxLen = Math.max(a.num_digits, b.num_digits);
  // Process each digit position
  for (let i = 0; i < maxLen || carry; i++) {
    const digitA = a.digits[i] || 0; // Treat missing digits as 0
    const digitB = b.digits[i] || 0;
    // Add the two digits plus any carry from the previous position
    const sum = digitA + digitB + carry;
    // The low 32 bits stay in this position
    result.digits[i] = sum % (2**32);
    // The overflow becomes carry for the next position
    carry = Math.floor(sum / (2**32));
  }
  result.num_digits = result.digits.length;
  return result;
}
Let's trace through an example step-by-step:
Adding: [0xFFFFFFFF, 0x00000001] + [0x00000002, 0x00000000]
Step 1 (i=0):
  digitA = 0xFFFFFFFF (4,294,967,295)
  digitB = 0x00000002 (2)
  carry  = 0
  sum    = 4,294,967,295 + 2 + 0 = 4,294,967,297
  result[0] = 4,294,967,297 % 2³² = 1
  carry     = floor(4,294,967,297 / 2³²) = 1

Step 2 (i=1):
  digitA = 0x00000001 (1)
  digitB = 0x00000000 (0)
  carry  = 1 ← from previous step!
  sum    = 1 + 0 + 1 = 2
  result[1] = 2 % 2³² = 2
  carry     = floor(2 / 2³²) = 0

Step 3 (i=2):
  Loop ends (i >= maxLen and carry is 0)

Final result: [0x00000001, 0x00000002]
Which equals: 1 + (2 × 2³²) = 8,589,934,593 ✓
Worst-Case Carry Propagation: What happens when you add 999...999 + 1? The carry ripples through every single digit. Addition is O(n) where n is the number of digits; you can't do better than this, because you have to visit every position.
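You can watch the worst case happen with the addBigInts function above:
// Every digit is at the maximum, so the carry ripples through all of them
const allOnes = { sign: 1, num_digits: 2, digits: [0xFFFFFFFF, 0xFFFFFFFF] };
const one = { sign: 1, num_digits: 1, digits: [1] };
console.log(addBigInts(allOnes, one).digits); // [0, 0, 1], i.e. exactly 2^64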
🔧 CPU Hardware Acceleration
Modern CPUs have a special instruction for exactly this: adc (add with carry) on x86-64 processors.
; x86-64 assembly for adding two BigInts
mov rax, [digit_a]     ; Load first digit
add rax, [digit_b]     ; Add second digit
; Carry flag is now set if overflow occurred!
mov rcx, [digit_a+8]   ; Load next digit of a
adc rcx, [digit_b+8]   ; Add WITH carry from previous operation
; CPU automatically adds the carry bit!
JavaScript engines can generate this assembly code when JIT-compiling BigInt operations, making addition surprisingly fast.
Problem #2: Multiplication Gets Complex
Subtraction works the same way as addition (using borrow instead of carry), so let's tackle the next challenge: multiplication.
Multiplying two arbitrary-size numbers is fundamentally more complex than adding them. How do we do it efficiently?
Remember long multiplication from elementary school?
    123
  ×  45
  ─────
    615   ← 123 × 5
   4920   ← 123 × 4 (shifted left)
  ─────
  5,535
BigInt multiplication uses the exact same algorithm, but with 32-bit "digits" instead of base-10 digits:
function multiply(a, b) {
  const result = { sign: a.sign * b.sign, digits: [] };
  // Multiply each digit of 'a' by each digit of 'b'
  for (let i = 0; i < a.num_digits; i++) {
    let carry = 0;
    for (let j = 0; j < b.num_digits || carry; j++) {
      // Multiply digits (can produce up to a 64-bit result).
      // Caveat: in real JavaScript this product can exceed 2^53 and lose
      // precision; production code sidesteps this with smaller half-digits
      // or 64-bit machine words. Kept simple here for clarity.
      const product = (a.digits[i] || 0) * (b.digits[j] || 0);
      const position = i + j; // This digit goes at position i+j
      // Add to existing value (partial products overlap!)
      const current = result.digits[position] || 0;
      const sum = current + product + carry;
      // Store the low 32 bits, carry the rest
      result.digits[position] = sum % (2**32);
      carry = Math.floor(sum / (2**32));
    }
  }
  result.num_digits = result.digits.length;
  return result;
}
Let's trace through two examples: a trivial one, then one where a digit product overflows:
Simple case: [5] × [3]

  i=0, j=0: product = 5 × 3 = 15
            result[0] = 15

  Result: [15] ✓

Larger example: [0xFFFFFFFF] × [2]

  i=0, j=0: product = 4,294,967,295 × 2 = 8,589,934,590
            This is bigger than 2³²!
            result[0] = 8,589,934,590 % 2³² = 4,294,967,294
            carry = floor(8,589,934,590 / 2³²) = 1

  i=0, j=1: j >= b.num_digits, but carry remains
            result[1] = carry = 1

  Result: [4,294,967,294, 1]
  Which is: 4,294,967,294 + (2³² × 1) = 8,589,934,590 ✓
The O(n²) Problem: This algorithm visits every pair of digits (i, j). For an n-digit number, that's n² digit multiplications. When n = 1,000, that's 1,000,000 multiplications!
This is why production systems switch algorithms based on size:
- Small numbers (< 70 digits): Use schoolbook multiplication
- Medium numbers (70-1500 digits): Karatsuba algorithm (O(n^1.585); sketched below)
- Large numbers (> 1500 digits): FFT-based methods (O(n log n))
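To make the divide-and-conquer idea concrete, here's a minimal Karatsuba sketch. It operates on non-negative native BigInts purely for readability (a real implementation works on digit arrays), and the 64-bit base-case threshold is an illustrative choice, not a tuned one:
function bitLength(x) {
  return x === 0n ? 0 : x.toString(2).length;
}

function karatsuba(x, y) {
  // Base case: small enough to multiply directly
  if (x < (1n << 64n) || y < (1n << 64n)) return x * y;
  // Split both numbers around the middle bit
  const half = BigInt(Math.max(bitLength(x), bitLength(y)) >> 1);
  const mask = (1n << half) - 1n;
  const xHi = x >> half, xLo = x & mask;
  const yHi = y >> half, yLo = y & mask;
  // Three recursive multiplications instead of four
  const lo  = karatsuba(xLo, yLo);
  const hi  = karatsuba(xHi, yHi);
  const mid = karatsuba(xLo + xHi, yLo + yHi) - lo - hi;
  return (hi << (2n * half)) + (mid << half) + lo;
}

// Sanity check against the built-in multiplication:
const p = 123456789012345678901234567890n;
const q = 987654321098765432109876543210n;
console.log(karatsuba(p, q) === p * q); // true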
🔧 CPU Hardware: The MULX Instruction
Modern x86-64 processors have a mulx instruction that multiplies two 64-bit numbers and produces the full 128-bit result:
; Multiply two 64-bit values, get the full 128-bit result.
; One source operand of mulx is implicitly RDX.
mulx r9, r8, rbx   ; r9:r8 = rdx × rbx (high:low, 128 bits total)
; Unlike the older mul, mulx leaves the flags alone, so adc carry
; chains survive across it. Perfect for digit multiplication!
This is why implementations use 64-bit digits on 64-bit machines: one instruction does the entire digit multiplication with carry extraction.
Problem #3: How Do We Compare?
We can add and multiply, but we also need comparisons. How do we know if a < b efficiently?
The naive approach compares digits from high to low, looking for the first difference. But can we do better using CPU instructions?
Here's the naive approach:
function lessThan_naive(a, b) {
  // Different number of digits? Easy case
  if (a.num_digits !== b.num_digits) {
    return a.num_digits < b.num_digits;
  }
  // Same length: compare from the most significant digit
  for (let i = a.num_digits - 1; i >= 0; i--) {
    if (a.digits[i] !== b.digits[i]) {
      return a.digits[i] < b.digits[i];
    }
  }
  return false; // They're equal
}
This works, but there's a clever trick: use subtraction with borrow instead!
function lessThan_borrow(a, b) {
  let borrow = 0;
  const maxLen = Math.max(a.num_digits, b.num_digits);
  for (let i = 0; i < maxLen; i++) {
    const digitA = a.digits[i] || 0;
    const digitB = b.digits[i] || 0;
    // Subtract: digitA - digitB - borrow
    const diff = digitA - digitB - borrow;
    // If diff is negative, we need to borrow
    borrow = diff < 0 ? 1 : 0;
  }
  // If we still have a borrow at the end, a < b
  return borrow !== 0;
}
Let's trace through an example:
Is [3, 5] less than [7, 5]?
Step 1 (i=0):
  digitA = 3
  digitB = 7
  borrow = 0
  diff = 3 - 7 - 0 = -4
  borrow = 1 (because diff < 0)

Step 2 (i=1):
  digitA = 5
  digitB = 5
  borrow = 1 ← from previous step!
  diff = 5 - 5 - 1 = -1
  borrow = 1 (because diff < 0)

Final: borrow = 1, so [3, 5] < [7, 5] ✓
Is [9, 3] less than [2, 3]?
Step 1 (i=0):
  digitA = 9
  digitB = 2
  borrow = 0
  diff = 9 - 2 - 0 = 7
  borrow = 0 (diff >= 0)

Step 2 (i=1):
  digitA = 3
  digitB = 3
  borrow = 0
  diff = 3 - 3 - 0 = 0
  borrow = 0 (diff >= 0)

Final: borrow = 0, so [9, 3] >= [2, 3] ✓
Why This Is Clever: The CPU has an sbb (subtract with borrow) instruction that does exactly this. By reframing comparison as subtraction, we:
- Use CPU instructions that execute without branches (faster!)
- Avoid conditional logic that causes pipeline stalls
- Get the comparison "for free" as a side effect of subtraction
It's an example of hardware-aware algorithm design.
💡 Short-Circuit Optimization
In practice, implementations combine both approaches:
- If the number of digits differs, return immediately (O(1))
- Otherwise, use the borrow trick (O(n) worst case)
- Note that the borrow scan itself can't abort early: a borrow set at a low digit can still be cancelled by a difference at a higher digit, so it always visits every position (but does so branch-free)
Problem #4: The Negative Number Paradox
So far we've only handled positive numbers. Adding negative support seems straightforward, just track a sign bit. But there's a subtle problem:
Our BigInt stores sign separately (sign: 1 or sign: -1). But JavaScript's BigInt specification requires that bitwise operations behave as if BigInts use two's complement representation. How do we reconcile this?
Let's understand the conflict:
Our representation (sign-magnitude):
-5 is stored as:
{
  sign: -1,
  num_digits: 1,
  digits: [5]
}
Two's complement representation:
-5 in 8-bit two's complement:

  Step 1: 5 in binary = 00000101
  Step 2: Flip bits   = 11111010
  Step 3: Add 1       = 11111011

So -5 is: 11111011 (251 if read as unsigned!)
The high bit (1) indicates negative.
To get -1, ALL bits must be 1: 11111111
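You can reproduce those three steps directly in JavaScript, masking to 8 bits for the example:
// Two's complement of 5 in 8 bits: flip the bits, add 1, mask to 8 bits
const neg5 = ((~5 & 0xFF) + 1) & 0xFF;
console.log(neg5.toString(2)); // 11111011
console.log(neg5);             // 251, when the same bits are read as unsigned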
The problem: in two's complement, negative numbers have infinite leading 1s:
// In two's complement (conceptually infinite bits):
-1 = ...11111111111111111111111111111111
-2 = ...11111111111111111111111111111110
-5 = ...11111111111111111111111111111011
If we stored BigInts this way, we'd need infinite memory for negative numbers! The solution is elegant: store sign separately, but convert to two's complement only when needed.
function bitwiseAnd(a, b) {
  // Convert negative numbers to two's complement
  const aTC = a.sign < 0 ? toTwosComplement(a) : a;
  const bTC = b.sign < 0 ? toTwosComplement(b) : b;
  // Perform bitwise AND
  const result = { sign: 1, digits: [] };
  const maxLen = Math.max(aTC.num_digits, bTC.num_digits);
  for (let i = 0; i < maxLen; i++) {
    // For negative numbers, simulate the infinite leading 1s.
    // Note: we check the ORIGINAL signs (a.sign, b.sign), since after
    // conversion aTC and bTC are stored with sign: 1, and we index
    // explicitly so a legitimate 0 digit isn't mistaken for a missing one.
    const digitA = i < aTC.num_digits ? aTC.digits[i] : (a.sign < 0 ? 0xFFFFFFFF : 0);
    const digitB = i < bTC.num_digits ? bTC.digits[i] : (b.sign < 0 ? 0xFFFFFFFF : 0);
    result.digits[i] = (digitA & digitB) >>> 0; // >>> 0 keeps it unsigned
  }
  result.num_digits = result.digits.length;
  // Convert back from two's complement if needed
  return fromTwosComplement(result);
}
function toTwosComplement(bigint) {
  if (bigint.sign >= 0) return bigint;
  // Flip all bits
  const flipped = {
    sign: 1,
    num_digits: bigint.num_digits,
    digits: bigint.digits.map(d => (~d) >>> 0) // >>> 0 keeps it 32-bit
  };
  // Add 1
  return addBigInts(flipped, { sign: 1, digits: [1], num_digits: 1 });
}
Let's see this in action:
Computing: -5 & 3
Step 1: Convert -5 to two's complement
-5 stored as: { sign: -1, digits: [5] }
Flip bits: ~0x00000005 = 0xFFFFFFFA
Add 1: 0xFFFFFFFA + 1 = 0xFFFFFFFB
Two's complement: { sign: 1, digits: [0xFFFFFFFB] }
Step 2: 3 is already positive
{ sign: 1, digits: [3] }
Step 3: AND them together
0xFFFFFFFB & 0x00000003 = 0x00000003
Result: { sign: 1, digits: [3] }
Step 4: Check if result is negative (high bit set)
High bit of 0x00000003 is 0, so it's positive.
Final: 3 ✓
Computing: -5 & -3
Step 1: -5 in two's complement = 0xFFFFFFFB
Step 2: -3 in two's complement = 0xFFFFFFFD
Step 3: AND them together
  0xFFFFFFFB & 0xFFFFFFFD = 0xFFFFFFF9
Step 4: Convert back
  High bit is 1, so the result is negative
  Subtract 1: 0xFFFFFFF9 - 1 = 0xFFFFFFF8
  Flip bits:  ~0xFFFFFFF8 = 0x00000007
  Result: -7 ✓

Verify: (-5) & (-3) === -7 in JavaScript
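The convert-back step in both traces relies on fromTwosComplement, which we haven't defined. Here's a minimal sketch, assuming the high bit of the top stored digit marks a negative result (a full implementation would pad an extra zero digit so large positive values aren't misread):
function fromTwosComplement(bigint) {
  const top = bigint.digits[bigint.num_digits - 1] || 0;
  // High bit clear: already an ordinary positive number
  if ((top & 0x80000000) === 0) return bigint;
  // High bit set: undo two's complement (subtract 1, then flip all bits)
  const digits = bigint.digits.slice();
  for (let i = 0; i < digits.length; i++) {
    if (digits[i] > 0) { digits[i] -= 1; break; }
    digits[i] = 0xFFFFFFFF; // borrow from the next digit
  }
  return {
    sign: -1,
    num_digits: digits.length,
    digits: digits.map(d => (~d) >>> 0)
  };
}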
The Performance Trade-off: Bitwise operations on negative BigInts require:
- Convert operand A to two's complement
- Convert operand B to two's complement
- Perform the bitwise operation
- Convert result back
This is expensive! But it optimizes for the common case: most code uses arithmetic (add, subtract, multiply) on mixed positive/negative numbers, which doesn't pay this conversion cost.
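You can probe this cost on your own engine with native BigInts (timings vary by engine and version; the pattern, not the exact numbers, is the point):
// Compare bitwise AND on large positive vs. negative BigInts
const pos1 = (1n << 4096n) - 1n, pos2 = (1n << 4096n) - 3n;
const neg1 = -pos1, neg2 = -pos2;
console.time('AND, positive');
for (let i = 0; i < 100000; i++) { const r = pos1 & pos2; }
console.timeEnd('AND, positive');
console.time('AND, negative'); // pays two conversions in, one out
for (let i = 0; i < 100000; i++) { const r = neg1 & neg2; }
console.timeEnd('AND, negative');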
⚠️ Design Philosophy
This reveals a core principle: Optimize for the common path.
- Arithmetic on positive numbers: Fast (no conversion)
- Arithmetic on mixed signs: Still fast (just track sign separately)
- Bitwise on negative numbers: Slower (but rare in practice)
Real-world code rarely does bitwise operations on negative BigInts, so the performance hit is acceptable.
Real-World Performance
How fast is BigInt compared to regular Number? The honest answer: it depends dramatically on size.
⚡ Fast Operations
- Small values: BigInts fitting in 64 bits run almost as fast as Number
- Addition/Subtraction: Linear time with minimal overhead
- Comparison: Often short-circuits early
🐌 Slower Operations
- Multiplication: O(n²) for naive algorithm
- Division: No single CPU instruction handles multi-word division; it requires digit-by-digit iteration
- Bitwise on negatives: Requires two's complement conversion
Let's look at actual benchmarks:
// Benchmark: Addition
const a = 9007199254740991n; // Near 2^53
const b = 12345n;
console.time('BigInt addition');
for (let i = 0; i < 1000000; i++) {
  const sum = a + b;
}
console.timeEnd('BigInt addition'); // ~10ms
// Compare with Number
const x = 9007199254740991;
const y = 12345;
console.time('Number addition');
for (let i = 0; i < 1000000; i++) {
  const sum = x + y;
}
console.timeEnd('Number addition'); // ~2ms
// BigInt is ~5x slower, but still very fast!
// Benchmark: Multiplication (large numbers)
const big1 = 123456789012345678901234567890n;
const big2 = 987654321098765432109876543210n;
console.time('BigInt multiplication');
const product = big1 * big2;
console.timeEnd('BigInt multiplication');
// ~0.05ms (uses optimized algorithm)
// vs Number (loses precision!)
const num1 = 1.234567890123456789e+29;
const num2 = 9.876543210987654321e+29;
console.time('Number multiplication');
const numProduct = num1 * num2;
console.timeEnd('Number multiplication');
// ~0.0001ms (but result is approximate!)When to Use BigInt:
- ✅ Exact integer arithmetic beyond 2⁵³
- ✅ Cryptography, finance, or ID generation
- ✅ When correctness matters more than raw speed
When to Use Number:
- ✅ Numerical computing where approximation is acceptable
- ✅ Graphics, physics, simulations
- ✅ Performance-critical tight loops
Conclusion: The Art of Trade-offs
Building BigInt from scratch reveals a cascade of design decisions, each optimized for real-world usage patterns:
- Store digits in base 2³² → Maps to CPU word size
- Use carry propagation → Leverages the hardware adc instruction
- Sign-magnitude representation → Fast arithmetic, slower bitwise
- Adaptive multiplication algorithms → Karatsuba for medium, FFT for huge
- Convert to two's complement only when needed → Optimize common case
These aren't arbitrary choices; they're carefully balanced trade-offs between:
- Memory usage vs precision
- Common-case performance vs worst-case complexity
- Hardware utilization vs algorithmic elegance
The next time you write const x = 123n, you'll know there's an entire world of engineering behind that little 'n'.
Further Reading: