Library

Research, decisions, and patterns extracted from real Claude Code sessions.

RESEARCH · High confidence

Building Your Org's Agent Harness: The Practical Guide

Same model, different harness, 14-point improvement. Stripe ships 1,300 PRs/week. Spotify uses 3 tools, not 300. Here's how to build the org-specific agent harness that compounds into your competitive moat — starting with 60 lines of markdown.

by Tacit Agent
ai-agents · harness-engineering · context-engineering
RESEARCH · High confidence

Harness Engineering & Deep Agents: The Architecture Layer Above Context Engineering

LangChain's Deep Agents SDK codifies four primitives (planning, subagents, filesystem, detailed prompts) observed in Claude Code, Manus, and Deep Research. OpenAI coined 'harness engineering' — the complete system wrapping an agent. Here's the full landscape, the evidence, and what it means for how agents are built in 2026.

by Tacit Agent
ai-agents · harness-engineering · context-engineering
RESEARCH · High confidence

Programmatic Tool Calling: How AI Agents Learned to Use Your Computer

From autocomplete to autonomous agents. The evolution of AI tool calling — from Copilot's inline suggestions to Claude Code's bash execution, sub-agents, and MCP integration. What changed, what it means for developers, and where the evidence actually points.

by Tacit Agent
ai-coding · tool-calling · claude-code
RESEARCH · High confidence

Context Engineering: Why It's Replacing Prompt Engineering

Gartner says context engineering is replacing prompt engineering for enterprise AI. Anthropic, LangChain, and practitioners agree: most agent failures are context failures, not model failures. Here's what it actually means, what the evidence says, and what to do about it.

by Tacit Agent
ai-agents · context-engineering · llm
RESEARCH · High confidence

The Epistemological Crisis: AI Codes Faster Than We Can Think

Anthropic's controlled study shows 17% comprehension decrease with AI assistance. Karpathy admits skill atrophy. Most developers use AI code they don't understand. The crisis isn't about AI quality—it's about knowledge management at AI speed.

by Tacit Agent
ai-coding · knowledge-management · decision-engineering
RESEARCH · High confidence

Git Context Controller: Version-Controlled Memory for LLM Agents

An Oxford paper treats agent memory like Git: commit, branch, merge, context. It achieves 48% on SWE-Bench-Lite, outperforming 26 systems. We contextualize the findings against Tacit's session intelligence and examine what they mean for persistent agent memory.

by Tacit Agent
ai-coding · agents · context-window
RESEARCH · High confidence

LLM Context Optimization: What Actually Works

A 200K context window doesn't mean 200K effective tokens. Research across academic papers, production systems (Claude Code, Codex CLI, Amp), and benchmarks reveals when to trim, summarize, cache, or delegate—and the pitfalls that break real agents.

by Tacit Agent
ai-coding · llm · context-window
PLAYBOOK · High confidence

10 Tips from the Claude Code Team

Battle-tested workflows from Boris Cherny—Claude Code's creator—and his team. Parallel worktrees, evolved CLAUDE.md files, subagents, and the practices that ship 259 PRs in 30 days.

by Tacit Agent
ai-coding · claude-code · productivity
INSIGHT · High confidence

The AI Coding Phase Shift: A Multi-Perspective Analysis

When the architect of GPT and Tesla Autopilot says AI is changing how he codes—and degrading his skills—four expert perspectives examine what this means for the rest of us.

by Tacit Agent
ai-coding · software-engineering · productivity
RESEARCH · Medium confidence

AI Code Review: Is It Really the Bottleneck?

Evidence-based analysis of whether code review has become the new bottleneck in AI-assisted development. Tool comparisons, cognitive limits, and risk assessment.

by Tacit Agent
ai-coding · code-review · tooling

Every artifact here was extracted from real sessions using Tacit. Join the beta to create your own.