Performance analysis, profiling techniques, bottleneck identification, and optimization strategies for code and systems. Use when the user needs to improve performance, reduce resource usage, or identify and fix performance bottlenecks.
You are a performance optimization expert. Your role is to help users identify bottlenecks, optimize code, and improve system performance.
```bash
# Profile and save results for later analysis
python -m cProfile -o output.prof script.py
# Profile and print, sorted by cumulative time
python -m cProfile -s cumtime script.py
# Visualize a saved profile in the browser
pip install snakeviz
snakeviz output.prof
# Line-by-line CPU profiling (decorate functions with @profile)
pip install line-profiler
kernprof -l -v script.py
# Line-by-line memory profiling
pip install memory-profiler
python -m memory_profiler script.py
```
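Once a profile is saved, the standard-library `pstats` module can read it with no extra tools. A minimal self-contained sketch, profiling a throwaway function in-process instead of loading a saved file:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately slow: Python-level loop instead of sum(range(n))
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Sort by cumulative time and print the top 5 entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print(report)
```

The same `pstats.Stats('output.prof')` call works on a file produced by `python -m cProfile -o`.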
```bash
# Generate a V8 CPU profile (writes isolate-*.log)
node --prof app.js
# Convert the V8 log into a readable summary
node --prof-process isolate-*.log
# Attach Chrome DevTools via chrome://inspect
node --inspect app.js
```
```bash
# Wall-clock timing of a single run
time script.sh
# Statistical benchmarking of competing commands
hyperfine 'command1' 'command2'
# Trace a script with a timestamp on every executed line
PS4='+ $(date "+%s.%N")\011 ' bash -x script.sh
```
```bash
# CPU and memory overview per process
top
htop
# Per-CPU utilization, refreshed every second
mpstat 1
# Per-process disk I/O
iotop
# Extended disk statistics, refreshed every second
iostat -x 1
# Count syscalls made by a command and time spent in each
strace -c command
```
**Problem**: Using O(n²) when O(n) or O(n log n) exists
```python
# Bad: O(n) membership test inside a loop -> O(n^2) overall
for item in list1:
    if item in list2:  # O(n) lookup
        process(item)

# Good: one O(n) conversion, then O(1) lookups -> O(n) overall
set2 = set(list2)
for item in list1:
    if item in set2:  # O(1) lookup
        process(item)
```
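The difference is easy to verify with `timeit`; even at modest sizes the set version wins by a wide margin. A sketch with synthetic data (half the probes miss, forcing full list scans):

```python
import timeit

list2 = list(range(5_000))
set2 = set(list2)
probes = list(range(0, 10_000, 2))

# Identical work, only the container type differs
list_time = timeit.timeit(lambda: [x for x in probes if x in list2], number=10)
set_time = timeit.timeit(lambda: [x for x in probes if x in set2], number=10)

print(f'list: {list_time:.4f}s  set: {set_time:.4f}s')
```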
**Problem**: Nested loops, redundant iterations
```python
# Bad: three passes and two throwaway intermediate lists
result = [x for x in data if condition1(x)]
result = [x for x in result if condition2(x)]
result = [transform(x) for x in result]

# Good: one pass, no intermediates
result = [
    transform(x)
    for x in data
    if condition1(x) and condition2(x)
]
```
**Problem**: Too many small reads/writes
```python
# Bad: one small write per line
for line in data:
    file.write(line + '\n')

# Better: batch the writes
file.writelines(f'{line}\n' for line in data)

# Best: batch the writes through a large buffer
with open('file.txt', 'w', buffering=1024*1024) as f:
    f.writelines(f'{line}\n' for line in data)
```
**Problem**: Loading everything into memory
```python
# Bad: loads the entire file into memory at once
with open('huge.txt') as f:
    data = f.read()
process(data)

# Good: streams one line at a time
with open('huge.txt') as f:
    for line in f:
        process(line)
```
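For binary or record-oriented files where line iteration does not apply, the same streaming idea works with fixed-size chunks. A self-contained sketch using a throwaway temp file:

```python
import os
import tempfile

def read_in_chunks(path, chunk_size=64 * 1024):
    # Yield fixed-size chunks so memory stays bounded
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk

# Demo on a temporary 200 KB file
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    f.write(b'x' * 200_000)

total = sum(len(chunk) for chunk in read_in_chunks(path))
os.remove(path)
```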
**Problem**: N+1 queries, missing indexes
```sql
-- Bad: N+1 problem
SELECT * FROM users;
-- Then for each user:
SELECT * FROM posts WHERE user_id = ?;
-- Good: JOIN
SELECT users.*, posts.*
FROM users
LEFT JOIN posts ON users.id = posts.user_id;
-- Also add indexes
CREATE INDEX idx_posts_user_id ON posts(user_id);
```
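The N+1 pattern can be demonstrated end to end with the standard-library `sqlite3` module and an in-memory database matching the schema above:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    CREATE INDEX idx_posts_user_id ON posts(user_id);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'third');
''')

# Bad: one query for users plus one per user (N+1 round trips)
n_plus_1 = []
for user_id, name in conn.execute('SELECT id, name FROM users'):
    posts = conn.execute(
        'SELECT title FROM posts WHERE user_id = ?', (user_id,)
    ).fetchall()
    n_plus_1.append((name, [title for (title,) in posts]))

# Good: a single JOIN returns everything in one round trip
joined = conn.execute('''
    SELECT users.name, posts.title
    FROM users LEFT JOIN posts ON users.id = posts.user_id
''').fetchall()
```

With a real database server, each round trip also pays network latency, so the gap is far larger than it appears in-process.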
```python
from functools import lru_cache
@lru_cache(maxsize=128)
def expensive_function(n):
    # Computed result cached
    return complex_calculation(n)
```
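`cache_info()` makes the effect observable. A sketch with a recursive Fibonacci, where memoization turns exponential time into linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed exactly once, then served from cache
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

result = fib(30)          # 832040
info = fib.cache_info()   # 31 misses: one per distinct n in 0..30
```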
```python
# List: materializes all 1,000,000 values up front
squares = [x**2 for x in range(1000000)]
# Generator: computes values lazily, O(1) memory
squares = (x**2 for x in range(1000000))
```
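The memory difference is measurable with `sys.getsizeof` (which reports the container itself, not its elements, and that container overhead alone tells the story):

```python
import sys

list_squares = [x**2 for x in range(1_000_000)]
gen_squares = (x**2 for x in range(1_000_000))

list_size = sys.getsizeof(list_squares)  # megabytes of pointer storage
gen_size = sys.getsizeof(gen_squares)    # a fixed few hundred bytes
```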
```python
import numpy as np
# Bad: element-by-element loop in the interpreter
result = [x * 2 + 1 for x in data]
# Good: vectorized, the loop runs in compiled C
result = np.array(data) * 2 + 1
```
```python
from multiprocessing import Pool
with Pool(4) as p:
    results = p.map(process_item, items)
```
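For I/O-bound work, threads avoid process-spawn and pickling overhead; `concurrent.futures` offers the same map-style interface for both pool types. A sketch with a toy stand-in task (`fetch` is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(item):
    # Stand-in for an I/O-bound call (network, disk, database)
    return item * 2

items = list(range(10))

# map preserves input order in the results
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, items))
```

Swap in `ProcessPoolExecutor` for CPU-bound work; the interface is identical.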
```python
from numba import jit
@jit
def fast_function(x, y):
    # Compiled to machine code on first call
    return x ** 2 + y ** 2
```
```python
# Reuse connections instead of opening one per request (illustrative API)
pool = ConnectionPool(min=5, max=20)
```
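`ConnectionPool` above is illustrative rather than a real library API, but the core idea fits in a few lines with `queue.Queue`. A sketch where the connection factory is stubbed with a plain object:

```python
import contextlib
import queue

class SimplePool:
    def __init__(self, factory, size):
        # Pre-create connections once; Queue handles thread-safe checkout
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    @contextlib.contextmanager
    def acquire(self):
        # Blocks until a connection is free, returns it when done
        conn = self._pool.get()
        try:
            yield conn
        finally:
            self._pool.put(conn)

# Factory stubbed for the sketch; a real one would open a DB connection
pool = SimplePool(factory=object, size=5)
with pool.acquire() as conn:
    in_use = pool._pool.qsize()  # one connection checked out
```

Production pools (e.g. SQLAlchemy's) add health checks, timeouts, and lazy growth between a min and max size.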
```python
import timeit
# Avoid naming the result `time` -- it shadows the stdlib module
elapsed = timeit.timeit(
    'function()',
    setup='from __main__ import function',
    number=1000
)

# Compare alternatives under identical conditions
times = {
    'method1': timeit.timeit('method1()', ...),
    'method2': timeit.timeit('method2()', ...),
}
```
```python
# Generator: processes a file lazily, one line at a time
def read_large_file(file):
    for line in file:
        yield process(line)

# __slots__ drops the per-instance __dict__, cutting memory use
class Point:
    __slots__ = ['x', 'y']

    def __init__(self, x, y):
        self.x = x
        self.y = y
```
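The `__slots__` saving can be measured with `tracemalloc` by allocating many instances of each variant (exact numbers vary by Python version, but the slotted class allocates noticeably less):

```python
import tracemalloc

class PlainPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class SlotPoint:
    __slots__ = ['x', 'y']

    def __init__(self, x, y):
        self.x = x
        self.y = y

def peak_bytes(cls, n=100_000):
    # Peak traced allocation while holding n instances
    tracemalloc.start()
    objs = [cls(i, i) for i in range(n)]
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

plain_peak = peak_bytes(PlainPoint)
slot_peak = peak_bytes(SlotPoint)
```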
```python
# line_profiler: decorate the target, then run `kernprof -l -v script.py`
@profile
def my_function():
    pass

# Count references to an object (unexpectedly high counts can indicate leaks)
import sys
sys.getrefcount(object)
```
```bash
# Bad: useless use of cat spawns an extra process
cat file | grep pattern
# Good: let grep read the file directly
grep pattern file

# Bad: command substitution forks a subshell
result=$(date +%s)
# Good: bash builtin, no fork
printf -v result '%(%s)T' -1

# Parallelize independent work across 4 processes
find . -name "*.txt" | xargs -P 4 -I {} process {}
```
Set clear targets: define a measurable goal (for example, p95 latency under a threshold, a minimum throughput, or a memory ceiling) before optimizing, so you know when to stop.
Remember: Premature optimization is the root of all evil. Always profile first, optimize the bottleneck, then measure improvement.