Timeit Module: Precision Performance Measurement
TL;DR
The timeit module measures the execution time of small code snippets with high precision, using the most accurate available clock and avoiding common timing pitfalls, and it offers both a programmatic API and a command-line interface for performance benchmarking.
Interesting!
By default, timeit temporarily disables garbage collection during measurement, and its command-line interface automatically calibrates the number of loop iterations (via Timer.autorange()) until the timing is stable, making it far more reliable than naive time.time() deltas.
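A minimal sketch of both behaviors (the loop count autorange() picks depends on your machine):
python code snippet start
import timeit

# autorange() raises the loop count until a run takes at least 0.2 seconds
timer = timeit.Timer('sum(range(1000))')
loops, total = timer.autorange()
print(f"{loops} loops took {total:.4f}s")

# GC is disabled by default; opt back in via setup to include collection costs
with_gc = timeit.timeit('sum(range(1000))', setup='gc.enable()', number=10000)
python code snippet end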
Basic Timing
Simple Function Timing
python code snippet start
import timeit
# Time a simple operation
time_taken = timeit.timeit('sum([1, 2, 3, 4, 5])', number=100000)
print(f"Time: {time_taken:.6f} seconds")
# Time with setup code
time_taken = timeit.timeit(
    stmt='result = func(data)',
    setup='func = lambda x: sum(x); data = [1, 2, 3, 4, 5]',
    number=50000
)
python code snippet end
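Two conveniences worth knowing: passing globals=globals() (Python 3.5+) lets the timed statement see your module's names without writing a setup string, and timeit also accepts a callable directly. A minimal sketch of both:
python code snippet start
import timeit

data = [1, 2, 3, 4, 5]

# Reuse the current namespace instead of a setup string
t1 = timeit.timeit('sum(data)', globals=globals(), number=50000)

# Or pass a callable directly (adds a small call overhead per loop)
t2 = timeit.timeit(lambda: sum(data), number=50000)
python code snippet end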
Comparing Approaches
python code snippet start
import timeit

# Compare string-building methods
method1 = timeit.timeit('"".join(map(str, range(100)))', number=10000)
method2 = timeit.timeit('"".join([str(i) for i in range(100)])', number=10000)
print(f"Join + map: {method1:.6f}s")
print(f"Join + comprehension: {method2:.6f}s")
python code snippet end
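For contrast, a sketch that adds the classic slow baseline, repeated += concatenation (timeit accepts multi-line statements):
python code snippet start
import timeit

# Naive repeated concatenation: may build a new string on every +=
method3 = timeit.timeit(
    's = ""\nfor i in range(100): s += str(i)',
    number=10000
)
print(f"Concatenation: {method3:.6f}s")
python code snippet end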
Advanced Usage
Repeat for Statistical Accuracy
python code snippet start
import timeit
# Run multiple timing tests
times = timeit.repeat(
    stmt='sorted([3, 1, 4, 1, 5, 9, 2, 6])',
    repeat=5,
    number=100000
)
print(f"Best time: {min(times):.6f}s")
print(f"Average: {sum(times)/len(times):.6f}s")
python code snippet end
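The documentation recommends reporting the minimum rather than the mean: higher results usually reflect interference from other processes, not your code. Continuing from the times list above, a small sketch of the spread:
python code snippet start
import statistics

# A large spread relative to min(times) signals a noisy measurement
print(f"Spread: {statistics.stdev(times):.6f}s")
python code snippet end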
Timer Class for Complex Testing
python code snippet start
import timeit
class MyTimer:
    def __init__(self):
        self.data = list(range(1000))

    def sort_builtin(self):
        return sorted(self.data)

    def sort_manual(self):
        result = self.data.copy()
        result.sort()
        return result

# Create timer with custom setup
timer = timeit.Timer(
    stmt='obj.sort_builtin()',
    setup='from __main__ import MyTimer; obj = MyTimer()'
)
print(f"Builtin sort: {timer.timeit(1000):.6f}s")
python code snippet end
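sort_manual is defined above but never timed; a sketch that measures it the same way, using Timer.repeat() to get several runs and report the best:
python code snippet start
timer2 = timeit.Timer(
    stmt='obj.sort_manual()',
    setup='from __main__ import MyTimer; obj = MyTimer()'
)
# Timer.repeat() returns one total per run; min() is the representative figure
print(f"Manual sort: {min(timer2.repeat(repeat=3, number=1000)):.6f}s")
python code snippet end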
Command Line Usage
bash code snippet start
# Time from command line
python -m timeit "sum([1, 2, 3, 4, 5])"
# With setup code
python -m timeit -s "data = list(range(100))" "sum(data)"
# Multiple statements
python -m timeit -s "import random" "random.random(); random.random()"
bash code snippet end
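The CLI calibrates the loop count automatically, but you can pin it down: -n sets loops per repeat, -r the number of repeats, and -u the output unit (nsec, usec, msec, or sec).
bash code snippet start
# Explicit loop count, repeat count, and output unit
python -m timeit -n 1000 -r 7 -u usec "sorted(range(100))"
bash code snippet end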
Best Practices
Performance Comparison Function
python code snippet start
import timeit

def compare_performance(*funcs, number=10000):
    """Compare performance of multiple functions."""
    results = {}
    for func in funcs:
        # timeit accepts a callable directly; no wrapper lambda needed
        results[func.__name__] = timeit.timeit(func, number=number)
    # Print fastest first
    for name, time_taken in sorted(results.items(), key=lambda x: x[1]):
        print(f"{name}: {time_taken:.6f}s")

# Usage
def list_comp():
    return [x**2 for x in range(100)]

def map_func():
    return list(map(lambda x: x**2, range(100)))

compare_performance(list_comp, map_func)
python code snippet end
Use timeit for micro-benchmarks and algorithm comparisons; it is the gold standard for measuring the performance of small code snippets. For longer-running measurements, compare with datetime, and consider functools.lru_cache when caching can eliminate repeated work. For statistical analysis of your results, see statistical functions, and use random data generation to build test inputs.
Reference: Python Timeit Documentation