AI-powered pprof profiling toolkit for CPU and heap analysis. Convert .pb.gz profiles to Markdown, detect regressions, and generate optimization recommendations backed by evidence.
This skill enables AI assistants to analyze pprof profiles (.pb.gz format) from Node.js, Go, and other platforms. It converts binary profiles to structured Markdown, identifies performance hotspots, compares profiles to detect regressions, and generates actionable recommendations with concrete steps.
Key capabilities:
- Convert binary pprof profiles (.pb.gz) to structured Markdown
- Identify CPU and heap hotspots with file locations
- Compare baseline and current profiles to detect regressions
- Generate evidence-backed, actionable optimization recommendations
Before using this skill, install the `perf-skill` package:
```bash
npm install -g perf-skill
```
Or use npx for one-off commands:
```bash
npx perf-skill analyze profile.pb.gz
```
When the user provides a pprof profile or asks to analyze performance:
**Step 1:** Check if the file exists and is a valid .pb.gz or .pprof file.
**Step 2:** Run basic analysis to convert to Markdown:
```bash
perf-skill analyze <profile.pb.gz> -o report.md
```
**Step 3:** Read the generated report.md and summarize key findings:
- Top hotspots with their flat and cumulative percentages
- File and line locations for each hotspot (e.g., `handler.ts:142`)
- Any anomalies worth deeper analysis
**Step 4:** If the user wants recommendations, ensure analysis mode is enabled:
```bash
perf-skill analyze <profile.pb.gz> --mode analyze -o report.md
```
This requires an LLM API key (OpenAI, Anthropic, etc.) set in environment variables.
When the user wants to profile their Node.js code:
**Step 1:** Identify the entry point (main file to run).
**Step 2:** Run profiling with analysis:
```bash
perf-skill run <entry.js> --duration 10s -o cpu-report.md
```
For both CPU and heap profiling:
```bash
perf-skill run <entry.js> --duration 10s --heap --output cpu-report.md --heap-output heap-report.md
```
**Step 3:** Read both reports and present findings organized by profile type.
When the user provides baseline and current profiles, or asks to detect regressions:
**Step 1:** Verify both profile files exist.
**Step 2:** Run diff analysis:
```bash
perf-skill diff base.pb.gz current.pb.gz -o diff-report.md
```
**Step 3:** Parse the diff report and highlight:
- Functions with significant regressions (e.g., >5% increase)
- Functions that improved
- Functions that are new or removed in the current profile
**Step 4:** Present findings in a table format for easy scanning.
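One way to produce that table, assuming per-function flat percentages have already been extracted from the diff report (the column set here is illustrative, not the report's documented format):

```python
# Render diff findings as a Markdown table for easy scanning.
def diff_table(rows: list[tuple[str, float, float]]) -> str:
    """rows: (function, base_percent, current_percent) tuples."""
    lines = ["| Function | Base % | Current % | Delta |",
             "|---|---|---|---|"]
    for fn, base, cur in rows:
        lines.append(f"| {fn} | {base:.1f} | {cur:.1f} | {cur - base:+.1f} |")
    return "\n".join(lines)
```

The signed delta column (`+8.5` vs `-2.1`) makes regressions stand out when scanning.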
When using `--mode analyze`, the tool produces structured recommendations. Each recommendation includes:
- A description of the issue and the profiling evidence behind it
- An estimated impact level (high/medium/low)
- An estimated risk level for making the change
- Concrete steps to implement the fix
**Your role:** Summarize these recommendations and explain why they matter in the user's context.
**Custom LLM provider:**
```bash
perf-skill analyze profile.pb.gz --llm-provider anthropic --llm-model claude-opus-4-5-20251101
```
**Include source code context:**
```bash
perf-skill analyze profile.pb.gz --source-dir ./src
```
**Limit hotspots:**
```bash
perf-skill analyze profile.pb.gz --max-hotspots 5
```
**Output JSON for programmatic parsing:**
```bash
perf-skill analyze profile.pb.gz -j results.json
```
Choose based on user needs: `--mode convert-only` for fast triage without an LLM, `--mode analyze` for full AI-powered recommendations (requires an API key).
By default, sensitive information is redacted (API keys, tokens, absolute paths). To disable:
```bash
perf-skill analyze profile.pb.gz --no-redact
```
**Important:** Only disable redaction if the user explicitly requests it and confirms the environment is secure.
If the user wants a persistent API:
```bash
perf-skill server --port 3000
```
Endpoints:
Security defaults: CORS enabled, Helmet enabled, rate limiting (60 req/min).
**Workflow: performance investigation**
1. User reports slow performance or high memory usage
2. Ask if they have an existing profile or need to create one
3. If creating: `perf-skill run <entry> --duration 10s --heap`
4. Read generated reports (cpu.md, heap.md)
5. Summarize top 3-5 hotspots with file locations
6. Generate recommendations with `--mode analyze`
7. Present actionable steps prioritized by impact
**Workflow: regression detection**
1. User has baseline profile (before change) and current profile (after change)
2. Run: `perf-skill diff base.pb.gz current.pb.gz -o diff.md`
3. Identify functions with significant regressions (e.g., >5% increase)
4. Cross-reference with recent code changes in those files
5. Suggest rollback or fixes for regressed functions
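The >5% threshold in step 3 can be applied mechanically. A sketch, assuming flat percentages per function have already been extracted from both profiles:

```python
# Flag functions whose flat % grew past a threshold between profiles.
def find_regressions(base: dict[str, float],
                     current: dict[str, float],
                     threshold: float = 5.0) -> list[tuple[str, float]]:
    """Return (function, delta) pairs sorted by severity."""
    deltas = [(fn, pct - base.get(fn, 0.0)) for fn, pct in current.items()]
    return sorted([d for d in deltas if d[1] > threshold],
                  key=lambda d: d[1], reverse=True)
```

Functions absent from the baseline are treated as starting at 0%, so brand-new hot functions are flagged too.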
**Workflow: continuous monitoring in CI/CD**
1. User provides multiple profiles from CI/CD pipeline
2. For each pair of consecutive profiles, run diff
3. Track trends over time (which functions are getting slower)
4. Alert on sustained regressions across multiple runs
5. Generate weekly/monthly summary reports
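For step 4, "sustained" can be defined as growth across several consecutive runs. A sketch under that assumption, where each history entry maps function name to flat % for one run:

```python
# Flag functions whose flat % rose in `streak` consecutive runs.
def sustained_regressions(history: list[dict[str, float]],
                          streak: int = 3) -> list[str]:
    if len(history) <= streak:
        return []  # not enough runs to establish a trend
    flagged = []
    for fn in history[-1]:
        # Look at the last streak+1 measurements for this function.
        pcts = [run.get(fn, 0.0) for run in history[-(streak + 1):]]
        if all(a < b for a, b in zip(pcts, pcts[1:])):
            flagged.append(fn)
    return sorted(flagged)
```

Requiring strictly increasing values over a streak filters out one-off noise between individual runs.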
For AI-powered analysis, ensure the API key for your chosen LLM provider (OpenAI, Anthropic, etc.) is set as an environment variable.
Optional tuning:
1. **Start with convert-only** (`--mode convert-only`) for fast triage, then upgrade to full analysis if needed
2. **Use adaptive format** (default) for best balance of detail and readability
3. **Set context** via `--service`, `--scenario` for more relevant recommendations
4. **Compare normalized profiles** with `--normalize scale-to-base-total` for structural comparison
5. **Chain commands**: Profile → Convert → Analyze → Diff as needed
6. **Reference line numbers** from reports when suggesting code changes (e.g., `handler.ts:142`)
**User:** "My API is slow under load. Can you help me profile it?"
**Assistant:**
1. Runs: `perf-skill run server.js --duration 15s --heap`
2. Reads cpu.md and heap.md
3. Reports: "Top CPU hotspot is `JSON.parse` at 23.4%, called from `handler.ts:142`. Heap shows 15MB allocated in `cacheManager.ts:89`."
4. Runs: `perf-skill analyze cpu.pb.gz --mode analyze --service api-server --scenario "load test"`
5. Presents 3 recommendations:
- Replace JSON.parse with streaming parser (high impact, low risk)
- Add LRU cache to reduce repeated parsing (medium impact, low risk)
- Review cache eviction policy in cacheManager.ts (medium impact, medium risk)
6. Offers to implement top recommendation or investigate further