RELEASE v0.1 April 13, 2026
5 shipped · 7 planned · 9.60 router avg · MIT
// the tacit skills collection

One command
for any lonely
technical decision.

Five Claude Code skills for senior engineers making decisions alone — design reviews before the review meeting, postmortems nobody has time to run, decision memos that don't warrant an offsite but do warrant a shared doc. Described in plain English. Structured as artifacts you can forward.

one-liner — install all five skills
curl -fsSL https://tacit.sh/install.sh | bash
Installs into ~/.claude/commands/. Re-run the same command to update. Audit the script first: tacit.sh/install.sh.
// 01 — release notes

v0.1 — what's in this release.

Five skills installed today. Seven planned for follow-up releases. Each shipped skill passed a dual-evaluator review (Russian Judge + Grumpy Staff Engineer) at the bar listed below. The router was held to a stricter 9.0 bar because it gates every interaction with the collection.

Skill       Round   Avg    Min    Fixtures   Evaluators
/scrutiny   R2      9.15   9.07   3          RJ
/verdict    R1      9.41   9.10   3          RJ + GSE
/autopsy    R1      9.47   9.29   3          RJ + GSE
/fracture   R1      9.33   9.29   3          RJ + GSE
/tacit      R2      9.60   9.39   6          RJ + GSE

Avg / min are weighted scores out of 10 per the dual-evaluator rubric (Interrogation / Specificity / Persona / Structure / Shareability). Eval transcripts and fixtures live in the dev repo; methodology is detailed in Evaluation methodology below.

// 02 — why this exists

The generic chatbot produces generic chatbot output.

Senior engineers make a specific class of decision alone.

Design review the week before the design review meeting. Postmortem at 11pm because the all-hands is tomorrow. Decision memo for a build-vs-buy call that doesn't warrant an offsite but does warrant a shared doc that survives the meeting.

These decisions are too small for a full consultation and too consequential to guess at. The default alternative is a blank chat window and a generic helpful assistant. That produces output that reads like "have you considered the tradeoffs?" — which is not what the situation needs.

A skill in this collection is what happens when you take the specific expert you'd want in the room, encode their voice and questioning discipline and structured output format, and give them one command each.

Not pair-programming tools. Not chatbot wrappers with a clever preamble. Not generic productivity skills dressed up in engineer voice. Five distinct personas, each calibrated to one decision type.

// 03 — the five skills

What makes each skill unique.

Every skill shares the same underlying discipline (see Shared discipline). What makes each one load-bearing is the specific lens — the specific expert — the specific questioning protocol. Five characters, not one chatbot five times.

/scrutiny
The Reviewer

Architecture review by a paranoid Staff+ engineer. Walks through your system the way a principal walks through a post-incident Zoom.

Output: A Failure Modes table with Trigger / Impact / Current Mitigation. A Scaling Assessment with numbers, not adjectives. A Fix These First list specific enough to turn into tickets.

What it is not: A design consultant who tells you to consider microservices. It names the specific component that will fail, the specific number at which it fails, and the specific thing to do about it.
eval 9.15 avg
live 3 fixtures
/verdict
The Chief of Staff

Turns messy thinking into a decision memo that survives the meeting. State the decision as a question. Steelman every option. Name the stakeholder who has to agree.

Output: A memo. TL;DR. Options with Reversibility ratings. Constraints Applied showing which options get eliminated by which constraint. Stakeholder Concerns matrix. A Recommendation that cites at least one constraint or stakeholder by name.

What it is not: A chatbot that tells you both options have merit. The recommendation is singular and grounded in the constraint set you named.
eval 9.41 avg
live 3 fixtures
/autopsy
The Investigator

Reconstructs an incident or near-miss into a postmortem that teaches the next on-call something. Timeline must have signals. Root cause must be a mechanism.

Output: A full postmortem. What Happened in past-perfect. Timeline with timestamps + signals. Root Cause that names the code path, the cache, the specific assumption. Action Items with owners, dates, and what each one prevents.

What it is not: An apology document. The audience is the next on-call engineer, not the customer. No humans as causes — only mechanisms.
eval 9.47 avg
live 3 fixtures
/fracture
The Interrogator

Closes your spec. Asks you to explain it. Where you hesitate, that's a fracture. Surfaces hidden dependencies, contradictions, scope creep dressed as MVP.

Output: A Fracture Report. Verdict SOUND / FRACTURED / UNSOUND. Fractures typed by category. What's Sound so the report isn't pure demolition. Testability Gaps with Signal-of-Success and Signal-of-Quiet-Failure per requirement.

What it is not: A writing critique — that's /clarity when it ships. /fracture targets logical integrity, not prose flow.
eval 9.33 avg
live 3 fixtures
/tacit
The Triage Operator · Router

The router. One command. Describe what you're working on in plain English. Get a short plan that runs the right skills in the right order — or an honest decline with a real choice.

Asks at most two questions. Proposes a plan with the exact skills it will run, the reason for each, the inputs each needs, the dependencies between them. Waits for yes before running anything. Respects sequencing rules automatically. Decomposes oversized requests instead of chaining 5+ skills.

Three decline cases

Case A — on roadmap, not yet shipped

"The skill for this is /foresight — designed and on the roadmap, not yet shipped. Want me to route to the partial fit now, or wait for /foresight?"

Case B — outside Tacit, external adjacent exists

"No Tacit skill fits cold-email sequences. Closest external: /marketing-skills:cold-email."

Case C — genuinely outside scope

"No Tacit skill fits. I don't know an external skill that does either." No apology. No invented skill. No force-fitting.

What it is not: It does not perform expertise — the router routes; the skills do the work. It does not invent skills. It does not silently chain skills. It does not run a skill before user approval.
eval 9.60 avg / 9.39 min
live 6 fixtures 9.0 bar
// 04 — shared discipline

Same discipline underneath every persona.

Each skill has its own voice. The underlying discipline is identical — same questioning pattern, same grounding rule, same banned words, same structural ceiling. That discipline is what makes the output forwardable.

  • Progressive questioning. One question per message. Acknowledgment of the prior answer first. An adaptive middle question selected from 6 distinct branches per skill.
  • Structured verdict. A specific markdown template with tables, severity ratings, action items. Not a prose dump.
  • Grounded output. Every finding references something you said. No inventions. When data is missing, the output says "Assessment limited — [specific thing not named]" rather than speculating.
  • Distinct voice per skill. The Reviewer, The Chief of Staff, The Investigator, The Interrogator, The Triage Operator — five characters, each calibrated to the task.
  • Banned hedges. "Likely", "approximately", "might", "perhaps", "sort of", "I'd recommend". Gone. If unknown, labeled unknown. If known, stated directly.
  • 500-1200 word ceiling. The discipline of the brief. Long means the thinking wasn't clear enough yet.
// 05 — evaluation methodology

How the eval scores were measured.

Each skill was built using the same 10-phase recipe and evaluated by two independent simulated evaluators applying a 5-dimension rubric.

The dual-evaluator pattern

  • Russian Judge (RJ). Cold-War Olympic-judge persona. Brutal. Weights specificity and shareability higher.
  • Grumpy Staff Engineer (GSE). 18-years-production persona. Weights actionability and clarity higher; allergic to padding and marketing-disguised-as-engineering.

The 5-dimension rubric

Dimension               Weight   What it measures
Interrogation Quality   25%      Q-protocol discipline, acknowledgment, adaptive Q, no yes/no, progressive depth
Verdict Specificity     25%      User-reference grounding, mechanism-not-category, action items shippable, honest gaps
Persona Consistency     20%      No AI tells, voice carries to the verdict, consequence framing, distinct persona maintained
Structural Compliance   15%      All sections present, table format correct, severity justified, word count in range
Shareability            15%      Standalone summary, would a senior engineer forward this, no branding excess

SHIP bars

  • Content skills (4): overall ≥ 8.75, no dimension below 8.0.
  • Router (/tacit): overall ≥ 9.0, no dimension below 8.5. Stricter because it gates every interaction.
  • Hard ceiling on revisions: 3 rounds. Round 3 fail = the design is structurally wrong, not promptable. Escalate to redesign rather than patch.
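The weighted overall is a simple dot product of the five dimension scores and the rubric weights. A sketch with hypothetical per-dimension scores (the real transcripts live in the dev repo):

```shell
# Compute a weighted overall from five dimension scores.
# Weights per the rubric: Interrogation 25%, Specificity 25%,
# Persona 20%, Structure 15%, Shareability 15%.
# The scores below are hypothetical, for illustration only.
awk 'BEGIN {
  split("9.5 9.4 9.2 9.0 9.1", s)          # hypothetical dimension scores
  split("0.25 0.25 0.20 0.15 0.15", w)     # rubric weights
  for (i = 1; i <= 5; i++) total += s[i] * w[i]
  printf "overall: %.2f\n", total          # prints: overall: 9.28
}'
```

With those scores, the overall of 9.28 clears the router bar (≥ 9.0) and no dimension sits below 8.5.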

Pre-bakes carried forward

The first skill (/scrutiny) discovered four systemic failure modes in Round 1. They were patched and pre-baked into every subsequent skill's initial draft:

  1. Voice must carry into the verdict — frame fixes as consequences ("If you don't fix X, Y will happen"), not recommendations.
  2. Banned hedge words baked into voice rules.
  3. Scale-only fixes (bumping a plan, adding resources) trigger a Q6 probe — they're capacity management, not prevention.
  4. The "Lessons / 10x / What We're Giving Up" section must surface NEW material — never restate earlier findings.

Result: every subsequent content skill (/verdict, /autopsy, /fracture) shipped at Round 1; the router (/tacit), held to the stricter 9.0 bar, shipped at Round 2. The discovered bugs became prevented bugs.

// 06 — roadmap

Seven more skills planned.

The collection is designed as twelve skills. Five are installed (four content skills plus the router). Seven are on the roadmap — designed, specified, not yet built. When you ask /tacit for one of them, it says so honestly and offers the closest installed partial fit.

05
/foresight
Phase-by-phase evolution plan for growth. What breaks at 2x, 10x, and what to build before each break lands.
planned
06
/bridge
Translate technical findings into business narrative for the board, the exec, the stakeholder who does not care about thread pools.
planned
07
/tradeoff
Structured build-vs-buy, vendor-selection, opportunity-cost analysis. Opportunity cost as a first-class column.
planned
08
/armor
Production hardening checklist before launch. Security posture, observability coverage, reliability guarantees.
planned
09
/signal
Turn meeting notes into decisions, owners, action items. Kill the "I thought you were handling that" failure class.
planned
10
/blindspot
Meta-review. What am I missing? Applied to any plan, design, or decision after the primary skill produced its output.
planned
11
/clarity
Anti-jargon pass on documentation. Every sentence earns its place or gets cut. Targets prose, not logic.
planned

Release order is signal-driven — whichever Case A decline the router gets asked for most often gets built first.

// 07 — install

Install. Run. Forward the output.

Each skill is a single markdown file. Drop it into ~/.claude/commands/ and the command is live. No subscription. No cloud. No login. Works in Claude Code out of the box.

all five (recommended)

curl -fsSL https://tacit.sh/install.sh | bash

specific skills only

curl -fsSL https://tacit.sh/install.sh | bash -s tacit
curl -fsSL https://tacit.sh/install.sh | bash -s scrutiny verdict

update later

curl -fsSL https://tacit.sh/install.sh | bash

Same command — the installer overwrites existing files.

audit before running (recommended for any curl | bash)

curl -fsSL https://tacit.sh/install.sh | less
curl -fsSL https://tacit.sh/install.sh -o install.sh && bash install.sh

manual — clone and copy

git clone https://github.com/ketankhairnar/tacit-skills
cp tacit-skills/skills/*.md ~/.claude/commands/
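To confirm all five commands landed, a quick check — a sketch assuming the default install path:

```shell
# Report which of the five tacit skills exist in the default
# Claude Code commands directory (~/.claude/commands/).
for skill in scrutiny verdict autopsy fracture tacit; do
  if [ -f "$HOME/.claude/commands/${skill}.md" ]; then
    echo "installed: /${skill}"
  else
    echo "missing:   /${skill}"
  fi
done
```

One line per skill either way, so a missing file is visible at a glance.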

use it in Claude Code

/tacit had a 34-min outage last night, need the postmortem before all-hands
/tacit deciding between Mixpanel, Amplitude, and building on ClickHouse
/tacit                       # bare invocation → "What are you working on?"
// 08 — use elsewhere

The skills are portable.

The skills are pure prompts — no tool-use, no MCP calls, no Claude Code-specific APIs. Only the install path and slash-command invocation are Claude Code-specific. The SKILL.md content works as a system prompt in any LLM agent that takes one.

OpenAI Codex CLI

Drops in ~/.codex/prompts/. Strip the YAML frontmatter:

mkdir -p ~/.codex/prompts
for skill in scrutiny verdict autopsy fracture tacit; do
  curl -fsSL "https://raw.githubusercontent.com/ketankhairnar/tacit-skills/main/skills/${skill}.md" \
    | sed '1,/^---$/d' \
    > ~/.codex/prompts/${skill}.md
done
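To sanity-check the frontmatter strip before pointing it at live files, a minimal self-contained sketch (hypothetical frontmatter, not a real skill file):

```shell
# Hypothetical skill file: YAML frontmatter followed by the prompt body.
# sed deletes line 1 through the first subsequent '---' line, which
# removes the entire frontmatter block and leaves only the body.
printf -- '---\nname: demo\ndescription: sample\n---\nYou are the demo persona.\n' |
  sed '1,/^---$/d'
# → You are the demo persona.
```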

OpenCode

Drops in ~/.config/opencode/command/ (global) or .opencode/command/ (per-project):

mkdir -p ~/.config/opencode/command
for skill in scrutiny verdict autopsy fracture tacit; do
  curl -fsSL "https://raw.githubusercontent.com/ketankhairnar/tacit-skills/main/skills/${skill}.md" \
    -o ~/.config/opencode/command/${skill}.md
done

Generic ChatGPT / Claude / Gemini chat

Paste the contents of any SKILL.md as the system prompt (or first message). The skill follows its protocol from there.

Caveats: Output quality is bounded by the underlying model. The eval bar (8.75 for content skills, 9.0 for the router) was measured against Anthropic Opus. Behavior on other models may differ — the persona discipline holds, the questioning protocol holds, but specific scoring is Opus-anchored.
// 09 — what's next

v0.2 and beyond.

  • Next skill build: driven by which Case A decline the router gets asked for most often. Most-requested planned skill ships first.
  • Multi-target installer: if there's signal that engineers want first-class Codex / OpenCode install, the installer grows a --target flag.
  • R2-hosted skills: if GitHub raw rate limits or latency become an issue, the installer flips to a Cloudflare R2 bucket. End-user one-liner stays the same.
  • Versioned skill paths: for users who pin a specific skill version (e.g., if a skill update regresses a behavior they relied on).
No subscription. No login. No cloud. Install the markdown file. Run the command. Get an artifact you can forward.