Prometheus monitoring — scrape configuration, service discovery, recording rules, alert rules, and production deployment for infrastructure and application metrics.
Use when creating demo GIFs for Swift package READMEs, recording iOS simulator videos, or setting up demo apps for SwiftUI libraries.
List, create, and assign iterations for Azure DevOps projects and teams.
[Architecture] Use when designing or implementing cross-service communication, data synchronization, or service boundary patterns.
This skill retrieves and displays work items assigned to the user in Azure DevOps, organized by type and sorted by most recent change. It prompts for a project name or lists available projects if none is provided, then fetches work item details and displays them in a formatted table with clickable links.
Database design patterns, SQL best practices, ORM usage (Prisma/Drizzle), and migration strategies. Use when designing schemas, writing queries, or optimizing database operations.
Self-healing skill that improves signal mapper keyword coverage through iterative problem generation and keyword patching. Use when user says "heal signal mapper", "improve keyword coverage", "generate problem statements", "run healing loop", "patch signal mapper", or asks about "signal mapper gaps". Do NOT use for project architecture or deployment.
List, inspect, and troubleshoot Azure DevOps pipeline builds. Shows recent builds for a pipeline definition, drills into a specific build's status and result, displays logs for failed steps, and lists the changes associated with a build.
Add a new blog post to this Docusaurus site from a staging folder.
Apply task-specific templates to AI session plans using ai-update-plan. Use when starting a new task to load the appropriate plan structure (feature, bugfix, refactor, documentation, security).
Use when encountering Swift 6 concurrency errors, Sendable conformance warnings, actor isolation issues, "global variable is not concurrency-safe" errors, or migrating codebases to Swift 6 language mode.
Amplifier design philosophy using a Linux kernel metaphor. Covers mechanism vs. policy, module architecture, event-driven design, and kernel principles. Use when designing new modules or making architectural decisions.
Summarize a single Azure DevOps work item (and its links and comments) by ID.
SQLite disaster recovery and streaming replication to cloud storage (S3, GCS, Azure, SFTP, NATS). Use this skill for configuring Litestream, deploying to cloud platforms, troubleshooting WAL replication issues, implementing point-in-time recovery, and setting up VFS read replicas.
Document Claude Code sessions by extracting knowledge into cross-referenced documentation. Triggers on "recap the session", "summarize the work", or after significant code changes.
The Engineer skill deploys Fabric items in dependency-ordered waves and remediates issues found during validation. Use when user says "deploy", "create items", "run deployment", "fix deployment issues", "remediate", or after user sign-off is complete. Do NOT use for design (use fabric-design) or testing (use fabric-test).
Check code against the project's style guidelines and design system standards.
Collects intake (project name + problem statement), scaffolds the project, infers architectural signals, and produces a Discovery Brief. This is the DEFAULT first skill — invoke it for ANY new user request that describes a data, analytics, reporting, or integration problem, no matter how it is phrased. The signal mapper handles keyword matching; routing just needs to get the user here first. When in doubt, invoke fabric-discover — a false-positive intake is recoverable; a freelanced answer is not. Common triggers include "IoT sensors", "batch ETL", "real-time analytics", "data warehouse", "compliance", "dashboard", "streaming", "we need analytics", or "what task flow fits my needs", but any description of a data problem qualifies. Do NOT use for architecture design (use fabric-design), deployment (use fabric-deploy), or validation (use fabric-test).
Test and evaluate AI agents and LLM outputs using code-first evaluation framework with strong typing. Use when the user wants to: (1) Create evaluation datasets with test cases for AI agents, (2) Define evaluators (deterministic, LLM-as-Judge, custom, or span-based), (3) Run evaluations and generate reports, (4) Compare model performance across experiments, (5) Integrate evaluations with Pydantic AI agents, (6) Set up observability with Logfire, (7) Generate test datasets using LLMs, (8) Implement regression testing for AI systems.
List and review Advanced Security alerts for an Azure DevOps repository. Shows dependency vulnerabilities, exposed secrets, and code scanning findings with filtering by severity, state, and alert type.