Functional Programming in Python
Learn to write cleaner, more maintainable Python code using functional programming principles. From pure functions to higher-order functions, discover how FP makes your code more predictable and easier to test.
You've written Python code with loops, classes, and mutable state. It works, but debugging is hard. Tests break when you change one thing. Side effects hide everywhere. There's a better way.
Functional programming isn't about abandoning everything you know. It's about adding powerful patterns to your toolkit: pure functions that always return the same output for the same input, immutable data that never changes, and higher-order functions that treat functions as values.
What you'll learn
By the end of this article, you'll understand:
- Pure functions and why they matter
- Immutability and avoiding side effects
- First-class functions and closures
- Map, filter, and reduce
- List comprehensions as functional patterns
- Lambda functions and when to use them
- Function composition and pipelines
- Higher-order functions
The Problem with Mutable State
When functions modify global state or mutate their inputs, debugging becomes a nightmare. Which function changed this value? When? Why?
Let's start with a typical imperative approach to calculating a shopping cart discount:
class ShoppingCart:
    def __init__(self):
        self.items = []
        self.book_added = False

    def add_item(self, item):
        self.items.append(item)
        if item == "Book":
            self.book_added = True

    def get_discount_percentage(self):
        if self.book_added:
            return 5
        else:
            return 0

    def get_items(self):
        return self.items

    def remove_item(self, item):
        self.items.remove(item)
        if item == "Book":
            self.book_added = False
This code has serious problems. The book_added flag can get out of sync with the actual items. If you add two books and remove one, the discount incorrectly becomes 0%. The state is spread across multiple methods, making it hard to reason about.
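A quick way to see the bug in action, using the class exactly as written above:
cart = ShoppingCart()
cart.add_item("Book")
cart.add_item("Book")
cart.remove_item("Book")
print(cart.get_items())                # ['Book'] - a book is still in the cart
print(cart.get_discount_percentage())  # 0 - but the discount is gone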
Write pure functions that calculate values from inputs instead of maintaining mutable state. Pass data as arguments, return new values, never modify originals.
Here's the functional approach:
def get_discount_percentage(items):
    """
    Pure function: same input always gives same output.
    No state, no side effects.
    """
    if "Book" in items:
        return 5
    else:
        return 0

# Usage
items = ["Apple", "Banana"]
print(get_discount_percentage(items))  # 0

items = ["Apple", "Book", "Banana"]
print(get_discount_percentage(items))  # 5
The entire logic is in one function. No hidden state. No flags to maintain. The function takes a list, returns a number. Same input, same output, every time.
Pure Functions
A pure function is one that follows two simple rules:
- Same input → same output: Call it with the same arguments, you always get the same result
- No side effects: Doesn't modify anything outside itself (no global variables, no file I/O, no database writes)
def pure_increment(n):
    return n + 1
# Same input → Same output
# No side effects

global_state = {'clicks': 0}

def impure_increment():
    global_state['clicks'] += 1
    return global_state['clicks']
# Modifies global state
# Side effects!
Pure functions are easier to test (no setup required), easier to debug (no hidden state to track), and easier to reason about (just look at the arguments and return value).
Side effects aren't evil
Every program needs side effects eventually: printing to the console, writing to files, making HTTP requests. The goal isn't to eliminate them but to isolate them. Keep most of your code pure and push side effects to the edges.
Core business logic? Pure functions. Database writes and API calls? Separate layer. This makes testing trivial: test pure functions with assertions, mock the side effects.
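Here's a minimal sketch of that split. The names validate_username and register_user, and the database object, are hypothetical stand-ins: the pure function decides what should happen, and a thin outer function performs the effects.
# Pure core: decides what should happen (easy to unit test)
def validate_username(name):
    if not name:
        return "Username is required"
    if len(name) < 3:
        return "Username must be at least 3 characters"
    return None  # no error

# Impure shell: performs the side effects (kept thin, mocked in tests)
def register_user(name, database):
    error = validate_username(name)
    if error:
        print(error)           # side effect: console output
        return False
    database.insert(name)      # side effect: write via a stand-in database object
    return True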
Testing Pure vs Impure Functions
# Impure: depends on global state
global_count = 0

def increment():
    global global_count
    global_count += 1
    return global_count

# Test requires setup
def test_increment():
    global global_count
    global_count = 0  # Reset state
    assert increment() == 1
    assert increment() == 2
    global_count = 0  # Cleanup

# Pure: just input → output
def increment(n):
    return n + 1

# Test is simple
def test_increment():
    assert increment(5) == 6
    assert increment(0) == 1
    assert increment(-1) == 0
# No setup, no cleanup!
Immutability
When you mutate a list or dictionary, all references to it see the change. This leads to bugs where data changes unexpectedly because some other part of the code modified a shared object.
In functional programming, we don't modify data structures, we create new ones. This sounds expensive, but Python's list/dict operations are optimized, and the benefits are huge.
# Mutates the list in place
nums = [1, 2, 3]
nums.append(4)  # Changes original!

# Creates new list
nums = [1, 2, 3]
new_nums = [*nums, 4]  # Original unchanged!
Immutable Operations in Python
# Lists
original = [1, 2, 3]
# DON'T: Mutate original
original.append(4)
# DO: Create new list
new_list = [*original, 4]
# or
new_list = original + [4]
# Dictionaries
person = {"name": "Alice", "age": 30}
# DON'T: Mutate original
person["age"] = 31
# DO: Create new dict
updated_person = {**person, "age": 31}
# or
from copy import copy
updated_person = copy(person)
updated_person["age"] = 31
# Removing items
numbers = [1, 2, 3, 4, 5]
# DON'T: Mutate original
numbers.remove(3)
# DO: Create new list
new_numbers = [n for n in numbers if n != 3]
# or with filter
new_numbers = list(filter(lambda n: n != 3, numbers))
Performance considerations
"Doesn't copying everything make it slow?" Sometimes. But:
- Python's list/dict operations are implemented in C and highly optimized
- You avoid entire classes of bugs (worth the slight overhead)
- For hot paths, you can still use mutation, but keep it local, not across function boundaries
- Libraries like pyrsistent provide truly persistent data structures with structural sharing (see the sketch below)
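A small sketch of what that looks like with pyrsistent (assuming it's installed, e.g. via pip install pyrsistent); the originals are never touched:
from pyrsistent import pvector, pmap

v1 = pvector([1, 2, 3])
v2 = v1.append(4)            # new vector; v1 is untouched, structure is shared
print(list(v1), list(v2))    # [1, 2, 3] [1, 2, 3, 4]

m1 = pmap({"name": "Alice", "age": 30})
m2 = m1.set("age", 31)       # new map; m1 still has age 30
print(m1["age"], m2["age"])  # 30 31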
First-Class Functions
In Python, functions are first-class citizens. You can:
- Assign them to variables
- Pass them as arguments to other functions
- Return them from functions
- Store them in data structures
# Assign to variable
def greet(name):
return f"Hello, {name}!"
say_hello = greet
print(say_hello("Alice")) # "Hello, Alice!"
# Store in data structure
operations = {
"add": lambda x, y: x + y,
"subtract": lambda x, y: x - y,
"multiply": lambda x, y: x * y,
}
print(operations["add"](5, 3)) # 8
# Return from function
def make_multiplier(factor):
def multiply(x):
return x * factor
return multiply
times_three = make_multiplier(3)
print(times_three(4)) # 12
print(times_three(7)) # 21This is powerful. It means you can write functions that operate on functions, creating flexible, reusable abstractions.
Map, Filter, and Reduce
These three functions are the workhorses of functional programming. They let you transform collections without loops.
map applies a function to each element and returns an iterator yielding one result per element (wrap it in list() to materialize it):
numbers = [1, 2, 3, 4, 5]
doubled = list(map(lambda x: x * 2, numbers))
# or with comprehension:
doubled = [x * 2 for x in numbers]
Why use map/filter instead of loops?
Loops say how to iterate. Map/filter say what you want to do:
# "How to iterate"
squared = []
for num in numbers:
result = num * num
squared.append(result)
evens = []
for num in numbers:
if num % 2 == 0:
evens.append(num)# "What to compute"
squared = [num * num for num in numbers]
evens = [num for num in numbers if num % 2 == 0]
# Or with map/filter
squared = list(map(lambda x: x * x, numbers))
evens = list(filter(lambda x: x % 2 == 0, numbers))The declarative version is shorter and clearer. You immediately see the transformation
(x * x) and the condition (x % 2 == 0) without parsing loop
syntax.
Reduce: Combining Values
from functools import reduce
numbers = [1, 2, 3, 4, 5]
# Sum
total = reduce(lambda acc, x: acc + x, numbers, 0)
# Or just: sum(numbers)
# Product
product = reduce(lambda acc, x: acc * x, numbers, 1)
# Find maximum
max_num = reduce(lambda acc, x: max(acc, x), numbers)
# Or just: max(numbers)
# Flatten nested lists
nested = [[1, 2], [3, 4], [5, 6]]
flat = reduce(lambda acc, lst: acc + lst, nested, [])
# Result: [1, 2, 3, 4, 5, 6]
# Build dictionary from list of pairs
pairs = [("a", 1), ("b", 2), ("c", 3)]
dictionary = reduce(
    lambda acc, pair: {**acc, pair[0]: pair[1]},
    pairs,
    {}
)
# Result: {"a": 1, "b": 2, "c": 3}
# Or just: dict(pairs)
When to use reduce
reduce is powerful but can be hard to read. For common operations like
sum, max, or min, use the built-in functions instead. Use reduce when
you're building up a complex value (like a dictionary) from a sequence.
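As a slightly richer example of that last case, here's a sketch that folds a list of (customer, amount) pairs, made-up data, into per-customer totals, something dict(pairs) alone can't do:
from functools import reduce

orders = [("alice", 30), ("bob", 20), ("alice", 15)]

totals = reduce(
    lambda acc, order: {**acc, order[0]: acc.get(order[0], 0) + order[1]},
    orders,
    {}
)
print(totals)  # {'alice': 45, 'bob': 20}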
List Comprehensions
Python's list comprehensions are a functional pattern baked into the language.
They're more readable than map and filter for most cases.
squares = []
for n in numbers:
    squares.append(n * n)

squares = [n * n for n in numbers]
Comprehension Types
numbers = [1, 2, 3, 4, 5]
# List comprehension
squares = [n * n for n in numbers]
# Set comprehension
unique_squares = {n * n for n in numbers}
# Dict comprehension
number_to_square = {n: n * n for n in numbers}
# Generator expression (lazy evaluation)
squares_gen = (n * n for n in numbers)
# Nested comprehensions
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
flat = [num for row in matrix for num in row]
# Result: [1, 2, 3, 4, 5, 6, 7, 8, 9]
# With multiple conditions
result = [
    n
    for n in range(100)
    if n % 2 == 0
    if n % 3 == 0
]
# Numbers divisible by both 2 and 3

# With if-else (transform, not filter)
labels = ["even" if n % 2 == 0 else "odd" for n in numbers]
Lambda Functions
Lambda creates anonymous functions, functions without names. Use them for short, one-line operations.
Lambda is a shorthand for simple functions
def add(x, y):
    return x + y

result = add(3, 5)  # 8

add = lambda x, y: x + y
result = add(3, 5)  # 8
When to use lambda vs def
# Good lambda use: short, obvious
squared = map(lambda x: x * x, numbers)
sorted_by_age = sorted(people, key=lambda p: p['age'])
# Bad lambda use: complex logic
# DON'T
calculate = lambda x, y: x * 2 + y if x > 10 else x - y * 3
# DO
def calculate(x, y):
    if x > 10:
        return x * 2 + y
    else:
        return x - y * 3
Lambda limitations
Lambdas in Python can only contain a single expression: no statements, no multiple lines. If you need statements (like assignments, loops, or try/except) or more than one step, use def.
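For instance, an assignment inside a lambda is a syntax error, while the same logic as a def is fine; a minimal illustration:
# SyntaxError: assignment is a statement, not an expression
# normalize = lambda s: s = s.strip().lower()

# Fine as a def
def normalize(s):
    s = s.strip().lower()
    return s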
Higher-Order Functions
A higher-order function either:
- Takes one or more functions as arguments, or
- Returns a function
Functions that take other functions as arguments or return functions
def apply_twice(f, x):
    """Apply function f twice to x"""
    return f(f(x))

def add_three(n):
    return n + 3

# Function as argument
result = apply_twice(add_three, 5)
# result = 11
Real-World Example: Retry Logic
import requests
from time import sleep
from typing import Callable, TypeVar

T = TypeVar('T')

def retry(attempts: int, delay: float):
    """
    Higher-order function that returns a decorator.
    """
    def decorator(func: Callable[..., T]) -> Callable[..., T]:
        def wrapper(*args, **kwargs) -> T:
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if attempt == attempts - 1:
                        raise
                    print(f"Attempt {attempt + 1} failed: {e}")
                    sleep(delay)
        return wrapper
    return decorator

# Usage
@retry(attempts=3, delay=1.0)
def fetch_data(url: str):
    # Might fail due to network issues
    return requests.get(url)

# The retry logic is separate from business logic
# You can reuse it anywhere
Decorators are Higher-Order Functions
Python decorators are just higher-order functions with special syntax:
def add(x, y):
    return x + y

result = add(3, 5)
# result = 8

def log_calls(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}{args}")
        result = func(*args, **kwargs)
        print(f"Returned {result}")
        return result
    return wrapper

@log_calls
def add(x, y):
    return x + y

result = add(3, 5)
# Calling add(3, 5)
# Returned 8
# result = 8
The @decorator syntax is just sugar for passing a function to another
function. The decorator wraps the original function, adding behavior before/after
the call.
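Concretely, applying @log_calls is equivalent to rebinding the name yourself; a short sketch using the log_calls and add from above:
def add(x, y):
    return x + y

# What @log_calls does behind the scenes:
add = log_calls(add)

result = add(3, 5)
# Calling add(3, 5)
# Returned 8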
# Common decorator patterns

# 1. Timing decorator
import time

def timer(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(f"{func.__name__} took {end - start:.2f}s")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(1)
    return "done"

# 2. Memoization decorator
def memoize(func):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

# 3. Parameterized decorator
def repeat(times):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(times=3)
def greet(name):
    print(f"Hello, {name}!")
Passing Functions as Arguments
One of the most powerful features of functional programming is passing functions as
arguments to other functions. Python's built-in sorted() function
is a perfect example.
The key parameter takes a function that extracts a comparison key from each element
words = ['ada', 'haskell', 'scala', 'java', 'rust']
sorted(words, key=len)
The key parameter is powerful because it separates what to
extract from how to sort. You can pass any function that transforms each
element into a comparable value.
# Sort people by age
people = [
    {'name': 'Alice', 'age': 30},
    {'name': 'Bob', 'age': 25},
    {'name': 'Charlie', 'age': 35}
]
sorted(people, key=lambda p: p['age'])
# [{'name': 'Bob', 'age': 25}, ...]
# Sort strings by last character
words = ['apple', 'pie', 'cherry']
sorted(words, key=lambda s: s[-1])
# ['apple', 'pie', 'cherry']
# Sort with multiple keys (tuple comparison)
sorted(people, key=lambda p: (p['age'], p['name']))
# Sort case-insensitive
sorted(words, key=str.lower)
Real-World Example: Word Ranking
Let's build a word ranking system that scores words by different criteria. This demonstrates how passing functions makes code flexible and composable:
def score(word):
    return len(word.replace('a', ''))

def bonus(word):
    return 5 if 'c' in word else 0

def penalty(word):
    return 7 if 's' in word else 0

# Compose scoring functions
def rank_words(words):
    return sorted(
        words,
        key=lambda w: score(w) + bonus(w) - penalty(w),
        reverse=True
    )
Notice how we can combine multiple scoring functions in a single lambda. Each scoring rule is a separate function, making the code modular and testable. When requirements change (add a new bonus or penalty), you just write a new function, no need to change the ranking logic.
Design principle: Small, composable functions
Instead of one large scoreWithBonusAndPenalty() function, we have
three small functions: score(), bonus(), and
penalty(). Benefits:
- Each function is trivial to test (see the quick checks below)
- Easy to add new scoring rules
- Can combine them in any way
- Function names document what each piece does
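To make the first point concrete, here are a few quick checks using example words; the expected values follow directly from each function's definition:
assert score("banana") == 3    # "bnn" after removing the a's
assert bonus("cat") == 5       # contains 'c'
assert bonus("dog") == 0
assert penalty("sun") == 7     # contains 's'
assert penalty("moon") == 0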
Function Composition
Function composition is the process of combining simple functions to build more complex
ones. If you have f(x) and g(x), composition gives you
f(g(x)).
Chain functions together: each function's output becomes the next function's input
def add_three(x):
    return x + 3

def double(x):
    return x * 2

def square(x):
    return x * x

# Compose them:
x = 5
result = square(double(add_three(x)))
# add_three(5) = 8, double(8) = 16, square(16) = 256
Building a compose Function
from functools import reduce
from typing import Callable

def compose(*functions: Callable) -> Callable:
    """
    Compose functions right-to-left.
    compose(f, g, h)(x) = f(g(h(x)))
    """
    return reduce(
        lambda f, g: lambda x: f(g(x)),
        functions,
        lambda x: x
    )

# Or with reversed order (left-to-right pipeline)
def pipe(*functions: Callable) -> Callable:
    """
    Pipe functions left-to-right.
    pipe(f, g, h)(x) = h(g(f(x)))
    """
    return reduce(
        lambda f, g: lambda x: g(f(x)),
        functions,
        lambda x: x
    )

# Usage
add_three = lambda x: x + 3
double = lambda x: x * 2
square = lambda x: x * x

# Compose (right to left)
transform = compose(square, double, add_three)
result = transform(5)  # square(double(add_three(5))) = square(double(8)) = square(16) = 256

# Pipe (left to right, more natural)
transform = pipe(add_three, double, square)
result = transform(5)  # square(double(add_three(5))) = square(double(8)) = square(16) = 256
Method Chaining as Composition
Modern Python libraries often use method chaining, which is a form of function composition:
import pandas as pd
# Pandas method chaining
result = (
    df
    .filter(['name', 'age', 'salary'])
    .query('age > 25')
    .groupby('department')
    .agg({'salary': 'mean'})
    .sort_values('salary', ascending=False)
)
# Each method returns a new DataFrame
# No mutation, composable transformations
Practical Patterns
Partial Application
Create specialized versions of general functions by fixing some arguments:
from functools import partial
def power(base, exponent):
    return base ** exponent

# Create specialized functions
square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(5))  # 25
print(cube(5))    # 125

# Real-world use: configure API client
def make_request(method, url, headers=None, timeout=30):
    # ... make HTTP request
    pass

# Create specialized request function
api_get = partial(
    make_request,
    method='GET',
    headers={'Authorization': 'Bearer token'},
    timeout=60
)

# Use it (pass url as a keyword, since method is already bound)
response = api_get(url='https://api.example.com/users')
Currying
Transform a function that takes multiple arguments into a sequence of functions that each take a single argument:
# Regular function
def add(x, y):
    return x + y

# Curried version
def add_curried(x):
    def inner(y):
        return x + y
    return inner

# Usage
add_five = add_curried(5)
print(add_five(3))  # 8
print(add_five(7))  # 12

# Or in one line
print(add_curried(5)(3))  # 8
Function Pipelines
Process data through a series of transformations:
from typing import Callable, Any

def pipeline(value: Any, *functions: Callable) -> Any:
    """
    Pass value through a sequence of functions.
    """
    result = value
    for func in functions:
        result = func(result)
    return result

# Example: process text
text = " Hello, World! "
result = pipeline(
    text,
    str.strip,
    str.lower,
    lambda s: s.replace(',', ''),
    lambda s: s.split(),
)
print(result)  # ['hello', 'world!']

# Example: data transformation
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
result = pipeline(
    numbers,
    lambda ns: filter(lambda n: n % 2 == 0, ns),
    lambda ns: map(lambda n: n * n, ns),
    sum,
)
print(result)  # 220 (2² + 4² + 6² + 8² + 10² = 4 + 16 + 36 + 64 + 100)
When NOT to Use FP in Python
Functional programming is powerful, but it's not always the right choice:
- Performance-critical code: Plain loops and comprehensions can be faster than map/filter with lambdas on large datasets. Profile first (see the timeit sketch below).
- I/O-heavy operations: File reading, database access, and network calls are inherently stateful. Don't fight it.
- Complex state machines: Sometimes mutable state is clearer. Game loops, event systems, and UIs often need mutation.
- Team familiarity: If your team doesn't know FP, imperative code might be more maintainable. Don't be clever at the expense of clarity.
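A minimal way to check the performance point on your own workload, using the standard library's timeit (the numbers are illustrative and will vary by machine and Python version):
from timeit import timeit

nums = list(range(100_000))

comp = timeit(lambda: [n * n for n in nums], number=100)
mapped = timeit(lambda: list(map(lambda n: n * n, nums)), number=100)

print(f"comprehension: {comp:.3f}s, map + lambda: {mapped:.3f}s")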
Pragmatic FP
Python isn't a pure functional language like Haskell or Erlang. Use FP patterns where they make code clearer, but don't force it. Mix paradigms:
- Pure functions for business logic
- Classes for domain objects and state
- Procedural code for scripts and glue
The goal is maintainable code, not ideological purity.
Key Takeaways
- Pure functions are predictable and easy to test. Same input always gives same output, no side effects.
- Immutability prevents bugs from shared mutable state. Create new values instead of modifying existing ones.
- First-class functions let you pass behavior around like data. This enables powerful abstractions.
- Map, filter, reduce replace most loops with declarative transformations.
- List comprehensions are Pythonic functional patterns, use them over map/filter when possible.
- Higher-order functions operate on functions, enabling composition and reuse.
- Composition builds complex behavior from simple functions.
- Be pragmatic: Use FP where it helps, mix paradigms where it doesn't.
Want to go deeper? Check out the toolz library for
more functional utilities, pyrsistent for persistent data structures,
and returns for monads and railway-oriented programming in Python.