# Time Complexity Analysis
This guide covers time complexity analysis, Big O notation, and how to analyze algorithm efficiency across different programming languages.
## 🎯 What is Time Complexity?
Time complexity measures how the runtime of an algorithm grows as the input size increases. It helps us predict performance and compare different algorithms.
### Big O Notation

Big O notation describes the upper bound of an algorithm's growth rate:

- O(1): Constant time - execution time doesn't depend on input size
- O(log n): Logarithmic time - grows very slowly
- O(n): Linear time - grows proportionally with input size
- O(n log n): Linearithmic time - common in efficient sorting
- O(n²): Quadratic time - grows quickly, common in nested loops
- O(2ⁿ): Exponential time - grows extremely fast
- O(n!): Factorial time - grows even faster than exponential
## 🔧 Complexity Analysis Examples
### O(1) - Constant Time

```python
# Accessing an array element by index
def get_first_element(arr):
    return arr[0]  # Always one operation

# Hash table lookup (average case)
def lookup_value(dictionary, key):
    return dictionary.get(key)  # Direct access
```
### O(log n) - Logarithmic Time

```python
# Binary search (requires a sorted array)
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```
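The standard library's `bisect` module applies the same halving idea; a minimal sketch (assuming, as with binary search, that the input is already sorted):

```python
import bisect

sorted_nums = [1, 3, 5, 7, 9, 11]

# bisect_left returns the leftmost insertion point in O(log n);
# the target is present if that position holds an equal element.
i = bisect.bisect_left(sorted_nums, 7)
found = i < len(sorted_nums) and sorted_nums[i] == 7
print(i, found)  # 3 True
```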
### O(n) - Linear Time

```python
# Linear search
def linear_search(arr, target):
    for i in range(len(arr)):  # n iterations
        if arr[i] == target:
            return i
    return -1

# Finding the maximum in an array
def find_max(arr):
    max_val = arr[0]
    for num in arr:  # n iterations
        if num > max_val:
            max_val = num
    return max_val
```
### O(n log n) - Linearithmic Time

```python
# Merge sort
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```
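Merge sort's cost follows the recurrence T(n) = 2T(n/2) + n, which solves to about n log₂ n. One way to see this empirically is to count comparisons; the sketch below mirrors the merge sort above with a counter added (the exact count varies with input order, but stays near n log₂ n):

```python
import random

def merge_sort_counting(arr, counter):
    # Same structure as merge sort above, but tallies comparisons
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort_counting(arr[:mid], counter)
    right = merge_sort_counting(arr[mid:], counter)
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison per loop iteration
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

counter = [0]
data = list(range(1024))
random.shuffle(data)
result = merge_sort_counting(data, counter)
# For n = 1024, comparisons stay near n*log2(n) = 10,240,
# far below the n² = 1,048,576 of a quadratic sort.
print(counter[0])
```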
### O(n²) - Quadratic Time

```python
# Bubble sort
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):                  # Outer loop: n iterations
        for j in range(0, n - i - 1):   # Inner loop: up to n-1 iterations
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Finding all pairs
def find_pairs(arr):
    pairs = []
    for i in range(len(arr)):               # n iterations
        for j in range(i + 1, len(arr)):    # up to n-1 iterations each
            pairs.append((arr[i], arr[j]))
    return pairs
```
### O(2ⁿ) - Exponential Time

```python
# Recursive Fibonacci (inefficient)
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)  # Two recursive calls per level
```
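For contrast, caching each result drops the same recursion to O(n), since every value of n is computed only once; a sketch using the standard library's `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each distinct n is computed once and then cached,
    # so the total work is O(n) instead of O(2^n).
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025 - instant; the naive version would take hours
```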
## 📊 Complexity Comparison

Approximate operation counts for each growth rate:

| Input Size | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ) |
|---|---|---|---|---|---|---|
| 10 | 1 | 3 | 10 | 33 | 100 | 1,024 |
| 100 | 1 | 7 | 100 | 664 | 10,000 | 1.27×10³⁰ |
| 1,000 | 1 | 10 | 1,000 | 9,966 | 1,000,000 | 1.07×10³⁰¹ |
| 10,000 | 1 | 13 | 10,000 | 132,877 | 100,000,000 | 1.99×10³⁰¹⁰ |
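The table entries (rounded; the 2ⁿ column in scientific notation) can be reproduced with a short script:

```python
import math

def sci(x_log10):
    # Render a number given as its base-10 logarithm in "m.mm x 10^e" form,
    # avoiding float overflow for huge values like 2**10000.
    e = math.floor(x_log10)
    m = 10 ** (x_log10 - e)
    return f"{m:.2f}x10^{e}"

for n in [10, 100, 1000, 10000]:
    print(n, 1, round(math.log2(n)), n,
          round(n * math.log2(n)), n * n,
          sci(n * math.log10(2)))  # 2^n = 10^(n*log10(2))
```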
## 🎯 Language-Specific Analysis

### Java Complexity

```java
// ArrayList.get() - O(1)
ArrayList<Integer> arrayList = new ArrayList<>();
arrayList.add(42);
int value = arrayList.get(0);    // O(1) - direct array index

// ArrayList.add() - O(1) amortized
arrayList.add(value);            // O(1) amortized

// LinkedList.get() - O(n)
LinkedList<Integer> linkedList = new LinkedList<>();
linkedList.add(42);
int first = linkedList.get(0);   // O(n) - walks node by node

// HashMap.get() - O(1) average case
HashMap<String, Integer> map = new HashMap<>();
map.put("key", 1);
int mapped = map.get("key");     // O(1) average case
```
### Python Complexity

```python
# List operations
my_list = [1, 2, 3, 4, 5]
value = my_list[0]        # O(1)
my_list.append(6)         # O(1) amortized
my_list.insert(0, 0)      # O(n) - shifts every element right

# Dictionary operations
my_dict = {"a": 1, "b": 2}
value = my_dict["a"]      # O(1) average case
my_dict["c"] = 3          # O(1) average case
```
### C Complexity

```c
// Array access
int arr[100];
int value = arr[0];  // O(1) - pointer arithmetic

// Linked list traversal
struct Node {
    int data;
    struct Node* next;
};

struct Node* current = head;  // head points to the first node
while (current != NULL) {     // O(n) - visits every node
    current = current->next;
}
```
## 🚀 Optimization Strategies

### Choose the Right Data Structure

```python
# Bad: list for frequent lookups - O(n) per lookup
user_list = [("alice", 25), ("bob", 30), ("charlie", 35)]

def find_age(name):
    for user in user_list:  # O(n)
        if user[0] == name:
            return user[1]
    return None

# Good: dictionary for frequent lookups - O(1) per lookup
user_dict = {"alice": 25, "bob": 30, "charlie": 35}

def find_age_fast(name):
    return user_dict.get(name)  # O(1) average case
```
### Avoid Nested Loops When Possible

```python
# Bad: O(n × m) - nested loops (also appends repeated matches)
def find_duplicates(arr1, arr2):
    duplicates = []
    for item1 in arr1:       # n iterations
        for item2 in arr2:   # m iterations each
            if item1 == item2:
                duplicates.append(item1)
    return duplicates

# Good: O(n + m) - using sets
def find_duplicates_optimized(arr1, arr2):
    set1 = set(arr1)  # O(n)
    set2 = set(arr2)  # O(m)
    return list(set1.intersection(set2))  # O(min(n, m))
```
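The set intersection itself does the heavy lifting; a self-contained check (note that sets are unordered, so sort the result if a stable order matters):

```python
a = [1, 2, 3, 4, 2]
b = [3, 4, 5, 4]

# Building each set is linear; the intersection walks the smaller set.
common = sorted(set(a) & set(b))
print(common)  # [3, 4]
```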
### Use Efficient Algorithms

```python
# Bad: O(n²) - bubble sort
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Good: O(n log n) - merge sort (uses merge() from the example above)
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)
```
## 📈 Measuring Complexity

### Empirical Analysis

```python
import random
import time

def measure_time(func, input_size):
    # Generate shuffled test data
    data = list(range(input_size))
    random.shuffle(data)
    # perf_counter has higher resolution than time.time for benchmarks
    start_time = time.perf_counter()
    func(data)
    return time.perf_counter() - start_time

# Double the input size each step: for an O(n²) algorithm like
# bubble_sort, each doubling should take roughly 4x as long.
for size in [1000, 2000, 4000, 8000]:
    time_taken = measure_time(bubble_sort, size)
    print(f"Size: {size}, Time: {time_taken:.6f}s")
```
### Theoretical Analysis Steps

1. Identify basic operations: count the fundamental operations (comparisons, assignments)
2. Express in terms of n: write the operation count as a function of input size
3. Find the dominant term: keep the fastest-growing term
4. Apply Big O: drop constants and lower-order terms
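Applied to the pair-finding example earlier: the inner body runs (n-1) + (n-2) + … + 1 = n(n-1)/2 times; the dominant term n²/2 gives O(n²) after dropping the constant. A quick numeric check of that count:

```python
def pair_count(n):
    # Count how many times the inner body of a find-all-pairs loop runs
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count

for n in [10, 100, 1000]:
    assert pair_count(n) == n * (n - 1) // 2  # matches the closed form

print(pair_count(100))  # 4950
```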
## 🧪 Practice Problems

### Easy
- Analyze Linear Search: What is the time complexity of searching an unsorted array?
- Array Access: What is the complexity of accessing an element by index?
- Hash Table: What is the average-case complexity of hash table lookup?
### Medium
- Binary Search: Prove that binary search is O(log n).
- String Matching: Analyze the complexity of naive string matching vs. KMP algorithm.
- Tree Traversal: What is the complexity of traversing a binary tree?
### Hard
- Quick Sort: Analyze the average and worst-case complexity of quicksort.
- Dynamic Programming: Analyze the complexity of the Fibonacci sequence with memoization.
- Graph Algorithms: Compare Dijkstra's algorithm vs. Floyd-Warshall algorithm.
## 📚 Related Resources
- Introduction to Algorithms - Classic algorithms textbook
- Big O Cheat Sheet - Quick reference
- Algorithm Visualizer - Interactive algorithm visualization
## 🔗 Related Guides
- Space Complexity - Memory usage analysis
- Algorithm Analysis - Comprehensive analysis techniques
- Optimization Techniques - Performance improvement strategies
- Benchmarking Methods - Measuring actual performance
## 🔗 Language-Specific Performance
- Java Performance Tips - Java optimization
- Python Performance Tips - Python optimization
- C Performance Tips - C optimization
- Oracle Performance Tips - Oracle optimization