Use when appending structured perf investigation notes and evidence.
Use when managing perf baselines, consolidating results, or comparing versions. Ensures one baseline JSON per version.
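The "one baseline JSON per version" rule above can be sketched as a small helper that always writes to a version-keyed filename, so re-saving a version replaces its baseline instead of accumulating duplicates. The file layout, filename pattern, and field names here are illustrative assumptions, not the skill's actual schema.

```python
import json
import pathlib


def save_baseline(results_dir, version, results):
    """Consolidate benchmark results into a single baseline JSON per version.

    Writing the same version again overwrites the previous baseline, which is
    what guarantees at most one baseline file per version. The filename
    pattern `baseline-<version>.json` is an assumption for illustration.
    """
    path = pathlib.Path(results_dir) / f"baseline-{version}.json"
    payload = {"version": version, "results": results}
    path.write_text(json.dumps(payload, indent=2))
    return path
```

Keying the filename on the version (rather than a timestamp) makes the overwrite behavior automatic: comparing two versions is then just loading two files.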
Use when synthesizing perf findings into evidence-backed recommendations and decisions.
Use when generating performance hypotheses backed by git history and code evidence.
Use when mapping code paths, entrypoints, and likely hot files before profiling.
Use when profiling CPU/memory hot paths, generating flame graphs, or capturing JFR/perf evidence.
Use when running performance benchmarks, establishing baselines, or validating suspected regressions with sequential runs. Enforces a 60s minimum run duration (30s allowed only during binary search) and forbids parallel benchmarks.
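The run policy above (sequential only, 60s minimum, 30s only while bisecting) can be sketched as a minimal runner. The function and parameter names are hypothetical; only the duration and no-parallelism constraints come from the description.

```python
import time

MIN_RUN_SECONDS = 60      # policy: minimum duration for a normal benchmark run
BISECT_RUN_SECONDS = 30   # policy: shorter runs permitted only during binary search


def run_benchmarks(benchmarks, bisecting=False, min_seconds=None):
    """Run benchmarks strictly one at a time, never in parallel.

    `benchmarks` maps a name to a zero-argument callable that executes one
    iteration and returns a sample (e.g. latency). Each benchmark is repeated
    until the policy's minimum wall-clock duration has elapsed, then its
    samples are averaged. `min_seconds` exists only to make testing practical.
    """
    if min_seconds is None:
        min_seconds = BISECT_RUN_SECONDS if bisecting else MIN_RUN_SECONDS
    results = {}
    for name, bench in benchmarks.items():  # sequential: one benchmark at a time
        samples = []
        start = time.monotonic()
        while time.monotonic() - start < min_seconds:
            samples.append(bench())
        results[name] = sum(samples) / len(samples)
    return results
```

Running benchmarks sequentially keeps them from contending for CPU, cache, and I/O, which is what makes runs comparable across versions; the long minimum duration amortizes warm-up and scheduler noise.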
Use when running controlled perf experiments to validate hypotheses.