Open Source Claude Plugin

Turn your AI coding sessions into skill growth


Sparkey Reflect analyzes how you use Claude Code and generates personalized coaching insights grounded in DORA, SPACE, DevEx, GitClear, and METR research benchmarks. All analysis runs locally — no data leaves your machine.

Install Plugin · View on GitHub · View on PyPI
7 Scoring Dimensions

Industry-benchmarked analysis

Smooth, continuous scoring curves across every aspect of AI-assisted development — not step-function grades, but signals that reflect gradual improvement.
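As a rough illustration of what a smooth, continuous curve looks like versus a step-function grade, here is a minimal sketch of sigmoid-based scoring. The function name, metric, and parameter values are hypothetical examples, not the plugin's actual implementation:

```python
import math

def sigmoid_score(value: float, midpoint: float, steepness: float) -> float:
    """Map a raw metric onto a smooth 0-100 score.

    Values near `midpoint` land near 50; the curve saturates gradually
    instead of jumping between letter-grade buckets.
    """
    return 100.0 / (1.0 + math.exp(-steepness * (value - midpoint)))

# A hypothetical "context richness" metric on a 0-10 raw scale:
print(round(sigmoid_score(5.0, midpoint=5.0, steepness=1.0)))  # 50
print(round(sigmoid_score(8.0, midpoint=5.0, steepness=1.0)))  # 95
```

Small improvements in the raw metric always move the score, which is what makes week-over-week trends meaningful.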

Prompt Quality: 78/100

Specificity, context richness, clarity, efficiency, chain of thought

GitClear: specific prompts → 40% less churn

Conversation Flow: 71/100

Turns to resolution, correction rate, context retention, iteration velocity

DORA: fewer iterations = faster lead time

Context Management: 62/100

File references, error context, code snippets, scope clarity

METR: code context → better completions

Session Patterns: 55/100

Duration, frequency, diversity, fatigue detection, deep work alignment

DORA 2024: uninterrupted blocks boost throughput

Tool Usage: 80/100

Tool diversity, MCP utilization, slash commands, automation

Specialized tools = mastery signal

Rule File Quality: 41/100

Completeness, specificity, actionability, currency, ecosystem coverage

DORA: stale docs = risk

Outcome Tracker: 73/100

AI commit rate, productivity, rework rate, quality signals

GitClear 2024: AI rework benchmarks

Scoring grounded in industry-standard frameworks (DORA, SPACE, DevEx) and validated against published research from GitClear and METR.

DORA is a program of Google Cloud. GitClear and METR are independent organizations. Sparkey is not affiliated with or endorsed by any of these entities.

3 Modes

Analyze. Dive deep. Improve.

Report Mode

/sparkey:reflect

Run daily, weekly, monthly, or full analysis to get severity-ranked insights with real session evidence and next-step recommendations.

Deep Dive Mode

/sparkey:reflect deep-dive <skill>

Get an in-depth analysis of one skill area with before/after examples and tailored practice exercises.

Update Rules Mode

/sparkey:reflect update-rules

Automatically improve your CLAUDE.md based on real session data — close the gap between how you work and what your AI knows.

Use Cases

Real-world impact

01
23% → 8%
Correction rate

Weekly Skill Check-In

A senior developer runs /sparkey:reflect every Monday to track AI coding effectiveness. They discover a 23% correction rate — nearly 1 in 4 AI responses need fixing. The deep-dive reveals they skip error context when debugging, forcing extra back-and-forth. After two weeks of including stack traces, their correction rate drops to 8%, saving ~30 minutes per day.

02
48 → 71
Team score

Onboarding Teams to AI-Assisted Development

An engineering manager installs the plugin for a team of 6 developers. Junior developers run /sparkey:reflect monthly for a comprehensive baseline. Reports highlight specific patterns — one developer uses Bash(sed) instead of Edit, another has 3-hour marathon sessions with declining quality. After one quarter, the team's average score improves from 48 to 71.

03
52% → 74%
First acceptance

Improving Project Rule Files

A tech lead runs /sparkey:reflect update-rules to optimize their CLAUDE.md. The analysis finds a single instruction file with no code examples. After the update, First Response Acceptance jumps from 52% to 74% because Claude follows project conventions on the first try.

How It Works

From sessions to insights

Your Claude Code Sessions (conversations, tool calls, edits) → Session Reader (parses conversation logs) → 7 Analyzers (smooth scoring curves) → Insight Generator (LLM-powered coaching) → Personalized Report (actionable insights)
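The pipeline above can be sketched end to end in a few lines of Python. Every type, function, and metric here is a simplified stand-in (only one dimension shown), not the plugin's real internals:

```python
from dataclasses import dataclass

@dataclass
class Session:
    prompts: list[str]
    corrections: int

def read_sessions(raw_logs: list[dict]) -> list[Session]:
    """Session Reader: parse conversation logs into structured sessions."""
    return [Session(prompts=log["prompts"],
                    corrections=log.get("corrections", 0))
            for log in raw_logs]

def analyze(session: Session) -> dict[str, float]:
    """Analyzers: one smooth score per dimension (one shown here)."""
    turns = max(len(session.prompts), 1)
    correction_rate = session.corrections / turns
    return {"conversation_flow": 100.0 * (1.0 - correction_rate)}

def generate_report(scores: list[dict[str, float]]) -> str:
    """Insight Generator: turn scores into a human-readable summary."""
    avg = sum(s["conversation_flow"] for s in scores) / len(scores)
    return f"Conversation Flow: {avg:.0f}/100"

logs = [{"prompts": ["fix bug", "add error context"], "corrections": 1}]
report = generate_report([analyze(s) for s in read_sessions(logs)])
print(report)  # Conversation Flow: 50/100
```

The real plugin adds the remaining dimensions, trend storage, and optional LLM-generated coaching on top of this same read → analyze → report shape.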
Installation

Up and running in 30 seconds

Install via Plugin Marketplace

Requires Claude Code 1.0.33+ and Python 3.11+

# In Claude Code, run:
/plugin marketplace add Sparkey-AI/sparkey-reflect
/plugin install sparkey@sparkey-reflect

Install CLI Standalone

Optional — use outside of Claude Code

pip install sparkey-reflect

Quick Start

# Run your first analysis
/sparkey:reflect
# Deep dive into a specific skill
/sparkey:reflect deep-dive prompt_engineering
# Improve your CLAUDE.md
/sparkey:reflect update-rules
Pricing

Free and open source

Open Source
Free

MIT License — forever

  • Full scoring engine (7 dimensions, 35 sub-dimensions)
  • CLI analysis (sparkey-reflect analyze)
  • Claude Code plugin (/sparkey:reflect)
  • Local SQLite trend storage
Teams Edition
Custom

via sparkey.ai

  • Everything in Open Source
  • Version Control + Project Management integrations
  • Multi-developer trend tracking and comparison
  • Team-level usage analytics
  • Industry benchmark comparisons
  • Manager dashboard
Contact Us
Technical Details

Built for privacy and speed

Language
Python 3.11+
Framework
Click CLI + Claude Code plugin system
Scoring
Sigmoid, bell, diminishing return curves
Storage
Local SQLite for trend history
AI
Optional LLM-powered insights via Claude API
Privacy
All analysis runs locally
License
MIT
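To make "local SQLite trend storage" concrete, here is a minimal sketch of how score history could be stored and compared week over week. The table name and schema are assumptions for illustration; the plugin's actual layout is not documented here:

```python
import sqlite3

# In-memory DB for the example; the plugin would use a local file.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS trend (
           run_date  TEXT NOT NULL,
           dimension TEXT NOT NULL,
           score     REAL NOT NULL
       )"""
)
conn.executemany(
    "INSERT INTO trend VALUES (?, ?, ?)",
    [("2024-06-03", "prompt_quality", 71.0),
     ("2024-06-10", "prompt_quality", 78.0)],
)

# Week-over-week delta for one dimension, computed entirely locally.
rows = conn.execute(
    "SELECT score FROM trend WHERE dimension = ? ORDER BY run_date",
    ("prompt_quality",),
).fetchall()
delta = rows[-1][0] - rows[0][0]
print(f"prompt_quality changed by {delta:+.1f} points")  # +7.0 points
```

Because the history lives in a local SQLite file, trend tracking works offline and nothing leaves the machine.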

Ready to level up your AI coding skills?

Install Sparkey Reflect — takes 30 seconds, runs entirely on your machine.

/plugin marketplace add Sparkey-AI/sparkey-reflect
/plugin install sparkey@sparkey-reflect
/sparkey:reflect