PERSONAL AI USAGE COACH
Personal DORA metrics from your local Claude Code, Codex and Copilot sessions. Runs on your laptop. No telemetry, no ranking — just a clear weekly read of what your AI is helping with and where it is hurting you.
For developers who want a personal weekly read of their own AI use — not a corporate dashboard watching them.
Healthy teams ship more with AI; struggling teams ship more chaos with AI. The bottleneck is rarely the model — it is your loop, your goals and your verification habits. The Personal AI Usage Coach is a local-first mirror that surfaces what your AI is amplifying in your week, so you can correct it at the source.
“AI is an amplifier.”
Healthy team. Clear goals, fast feedback, small batches. AI compounds the throughput.
Struggling team. Vague prompts, no tests, big diffs. AI compounds the rework.
DORA's 2026 report is honest about the timeline. Productivity often dips first as teams climb a learning curve, pay a verification tax on AI-generated code, and adapt their pipelines to a new cadence. Only after that does throughput actually grow. The Personal AI Usage Coach does not promise to skip the valley — it gives you a local mirror so you can see exactly where you are in it, and what to correct next.
“Realism is essential when forecasting the timeline to ROI for AI.”
Figure: J-Curve of AI value realisation. Original illustration.
DORA's four classic metrics measure team outcomes — they are lagging by design and require a whole team to move them. The Personal AI Usage Coach surfaces the leading, individual signals you can correct this week, in the loop of your own AI sessions. Same vocabulary, different scope.
| DORA team metric (lagging) | Personal coach signal (leading) | First corrective step |
|---|---|---|
| Deployment Frequency | `throughput_per_week` | Reduce sessions abandoned mid-flow. |
| Lead Time for Changes | `time_to_deliverable_p50` | Tighten goal clarity at session start. |
| Change Failure Rate | `personal_failure_rate` | Cut course-changes mid-session. |
| Time to Restore Service | `recovery_days_after_chaos` | Schedule a recovery day after a red day. |
| Wellbeing | `wellbeing_flags` | Cap late-night sessions; protect weekends. |
| Goal clarity | `goal_clarity_rate` | Write acceptance criteria before prompting. |
| Batch / iteration mix | `mode_mix` | Match the mode to the work; do not deliver in discovery mode. |
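To make the mapping concrete, here is a minimal sketch of how a few of these signals could be derived from local session records. The `Session` fields, thresholds and data model are illustrative assumptions, not the coach's actual schema.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Session:
    # Hypothetical per-session record; field names are illustrative,
    # not the coach's real schema.
    delivered: bool              # produced a commit / merged branch
    hours_to_deliverable: float  # elapsed time to the shipped artefact
    course_changes: int          # mid-session goal pivots
    had_clear_goal: bool         # acceptance criteria written up front

def weekly_signals(sessions: list[Session]) -> dict:
    """Compute a subset of the personal signals for one week of sessions."""
    delivered = [s for s in sessions if s.delivered]
    return {
        "throughput_per_week": len(delivered),
        "time_to_deliverable_p50": (
            median(s.hours_to_deliverable for s in delivered) if delivered else None
        ),
        # Illustrative definition: a session that pivots mid-flow counts
        # toward the personal failure rate.
        "personal_failure_rate": sum(s.course_changes > 0 for s in sessions) / len(sessions),
        "goal_clarity_rate": sum(s.had_clear_goal for s in sessions) / len(sessions),
    }
```

Each signal is deliberately computable from data that never leaves the machine, which is what makes the "same vocabulary, different scope" framing workable.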
The Personal DORA radar plots the seven canonical signals exposed by the local coach. The example below uses synthetic values; on your machine the radar is computed from your real Claude Code, Codex and Copilot sessions.
Synthetic example. Real values come from your local sessions.
A noisy week of LLM sessions can look productive on a dashboard while shipping nothing. The Personal AI Usage Coach distinguishes activity (sessions, retries, course-changes, message count) from delivery (commits, tests passing, branches merged, evidence screenshots). The point is not to optimise the noise — it is to widen the gap between activity and delivery, in the right direction.
“We don’t measure AI by the code it writes but by the bottlenecks it clears.”
Figure: Activity (LLM session noise) vs Delivery (commits, tests passing, branches merged, evidence) by day.
Monday and Tuesday show high activity and low delivery — many sessions, little merged. Thursday and Friday flip: fewer sessions, more shipped. The coach surfaces the inversion so you can ask whether the early days were exploration that paid off, or just churn.
If your week shows high activity and low delivery, the coach surfaces the pattern — not to shame, to redirect.
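The activity/delivery inversion described above can be sketched as a simple per-day check. The daily counts and the thresholds are illustrative assumptions, not the coach's actual heuristics.

```python
def activity_delivery_gap(days: dict[str, tuple[int, int]]) -> list[str]:
    """Flag days with high activity but zero delivery.

    `days` maps day name -> (activity, delivery), where activity is the
    session count and delivery is the count of merged/shipped units.
    The threshold of 5 sessions is illustrative only.
    """
    return [
        day for day, (activity, delivery) in days.items()
        if activity >= 5 and delivery == 0
    ]
```

On the week described above, Monday and Tuesday would be flagged while Thursday and Friday would not; the flag is a prompt for the "exploration or churn?" question, not a verdict.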
Some weeks you do not ship features — you map the terrain. The coach treats discovery as a first-class mode and refuses to penalise it. The mode-mix below shows a real exploratory week, framed for what it was: a successful expedition, not a slow delivery week.
The objective was to map the surface of a mainframe stack — PL/I, COBOL, CICS, BMS, VSAM — and prove that a local interpreter plus a Hercules host could run a non-trivial sample. By Friday a 3270 terminal connected end-to-end and CARDDEMO was running on the host. Commits were few and small; learning was vast and concrete. The coach renders this as a 70 percent discovery week, with the artefacts that did ship.
Figure: what shipped, and where the time went — Delivery, Discovery, Maintenance, Research.
A 70% discovery week is healthy when the goal is mapping new terrain. The coach surfaces the mix so you can verify the mode matched the intent.
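A mode mix like the 70% discovery week above reduces to shares of logged session time. A minimal sketch, where the input format and mode names are assumptions rather than the tool's schema:

```python
from collections import Counter

def mode_mix(session_hours: list[tuple[str, float]]) -> dict[str, int]:
    """Share of weekly session time per mode, as whole percentages.

    `session_hours` is a list of (mode, hours) pairs, e.g. modes like
    "delivery", "discovery", "maintenance", "research" (names illustrative).
    """
    totals: Counter = Counter()
    for mode, hours in session_hours:
        totals[mode] += hours
    grand = sum(totals.values())
    return {mode: round(100 * h / grand) for mode, h in totals.items()}
```

With 14 hours of discovery, 4 of delivery and 2 of maintenance, this yields a 70/20/10 split; the coach's job is then only to ask whether that split matched the week's stated intent.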
The coach is local-first because measuring AI at the individual level only works if developers trust the tool. No telemetry leaves your machine; no leaderboard ranks you against your peers; the analysis core ships zero external dependencies.
Runs locally — no telemetry leaves your machine.
No ranking between developers. This is a personal weekly read for you, not a tool for managers to compare or monitor people.
Open source. Tier 1: zero external dependencies in the analysis core.
Two minutes from install to your first weekly note. Works with Claude Code, Codex and Copilot sessions out of the box.
```shell
pipx install obsly-ai
```