Garry Tan Just Gave Agentic Engineering Its Clearest Framework
Y Combinator president Garry Tan published what he called "the simplest distillation" of everything he's learned about agentic AI systems this year. The framework splits an agentic system into three parts: fat skills, fat code, and a thin harness. It is deceptively simple, and it answers one of the most common questions teams face when building AI agents.
Fat skills are the fuzzy, judgment-heavy operations that humans do naturally: understanding ambiguous messages, interpreting context, writing flexible summaries. These belong in AI — encoded as detailed markdown instruction documents that guide your agent.
Fat code is everything that must be precise: database queries, API calls, financial calculations, inventory updates. These belong in actual code — with validation, tests, and deterministic behavior.
Thin harness is the system that connects the two. The thinner it is, the better. Complexity in the connector layer creates unpredictable failure modes.
How This Works in Practice
Consider a customer service AI agent. Following Tan's principle:
- Fat skill: Understanding what the customer wants, even from a messy, typo-filled, or ambiguous message → this lives in your agent's markdown instruction document
- Fat code: Looking up order status in the database, returning exact amounts, updating the CRM → this lives in real code with proper validation
- Thin harness: Orchestrating when to call what → a minimal router with explicit logic
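The split above can be sketched in a few lines of Python. This is a hedged illustration, not Tan's implementation: the actual model call is omitted, `get_order_status` stands in for a real database query, and all names here are invented for the example. The point is that the harness does nothing but explicit, testable routing.

```python
def get_order_status(order_id: str) -> str:
    """Fat code: deterministic, validated lookup (stand-in for a real DB query)."""
    if not order_id.isdigit():
        raise ValueError(f"invalid order id: {order_id!r}")
    orders = {"1001": "shipped", "1002": "processing"}  # illustrative data
    return orders.get(order_id, "not found")

# Registry of precise tools the agent is allowed to call.
TOOLS = {"get_order_status": get_order_status}

def harness(model_decision: dict) -> str:
    """Thin harness: route the model's decision to exactly one tool.

    model_decision is what the LLM returns after reading the fat-skill
    instruction document, e.g. {"tool": "get_order_status", "arg": "1001"}.
    """
    tool = TOOLS[model_decision["tool"]]  # explicit routing, no hidden logic
    return tool(model_decision["arg"])

# The fat skill (a markdown instruction document) is what teaches the model
# to turn a messy message like "where's my stuff?? order 1001" into that
# structured decision; here we pass the decision in directly.
print(harness({"tool": "get_order_status", "arg": "1001"}))  # → shipped
```

Because the harness is just a dictionary lookup and one function call, its failure modes are enumerable: an unknown tool name or a malformed argument, both of which raise immediately instead of failing silently.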
This framework resolves the most common mistake in AI projects: using AI for tasks that belong in deterministic code, and rigid code for tasks that AI should handle.
The GBrain Half-Life Analogy
On the same day, Tan shared an analogy about his project GBrain. He compared it to a Half-Life mod that becomes Counter-Strike — a standalone product built on someone else's "incredible game engine."
GBrain is an AI memory and context management layer. Tan is building it on top of Claude and OpenAI models, adding persistent memory, context orchestration, and decision sequencing. The underlying models are the engine. GBrain is the mod becoming its own game.
For businesses, this is how the best B2B AI products work: not built from scratch, but layered on top of stable foundation models — with domain-specific logic on top.
As @garrytan put it on X: "This is the simplest distillation of what I have learned about agentic engineering this year."
What This Means for Businesses in 2026
The framework helps answer the perennial question: "Do we need an AI agent or just a chatbot?"
- If your business has lots of unstructured communication (customer emails, support requests, negotiations) → fat skills: an AI agent with good instruction documents
- If your business has lots of precise processes (accounting, logistics, inventory) → fat code: traditional automation, with AI only where flexibility is needed
- The connector between them → as simple as possible, documented, and testable
European businesses building on this architecture are positioning themselves ahead of teams still trying to fit all logic into a single AI prompt.
WebEdge.dev builds these kinds of agentic systems for businesses in Lithuania and across Europe — from initial architecture to a working solution in 5–7 days.
FAQ
What are fat skills?
Instruction documents (typically markdown) that tell your AI agent how to handle fuzzy, context-dependent situations. Think of them as the agent's training manual and protocols.
Why does the harness need to be thin?
The more complex the layer connecting AI and code, the more failure points you create. A simple connector means predictable system behavior and easier debugging.
Can small businesses apply this framework?
Yes — even a basic customer service chatbot can follow this principle: a markdown instruction document (fat skill) + real integration with your booking system (fat code).
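For the booking-system chatbot described in this answer, the fat-code side can be sketched as below. This is an assumption-laden illustration: `SKILL_DOC`, `BOOKINGS`, and `check_booking` are hypothetical names, and the in-memory dict stands in for a real booking integration.

```python
# Fat skill: a markdown instruction document the agent reads (illustrative).
SKILL_DOC = """\
# Booking assistant instructions (fat skill)
- Infer the booking reference even from messy, typo-filled messages.
- Never guess dates, prices, or statuses: always call check_booking.
"""

# Stand-in for a real booking database or API.
BOOKINGS = {"AB12": {"date": "2026-03-14", "status": "confirmed"}}

def check_booking(ref: str) -> dict:
    """Fat code: exact, validated lookup with no model involvement."""
    ref = ref.strip().upper()
    if len(ref) != 4 or not ref.isalnum():
        raise ValueError(f"malformed booking reference: {ref!r}")
    return BOOKINGS.get(ref, {"status": "unknown"})

# The agent (guided by SKILL_DOC) extracts "ab12" from a messy message;
# the precise answer then comes from code, not from the model.
print(check_booking("ab12")["status"])  # → confirmed
```

Validation lives in the code path, so even if the model misreads a reference, the system fails loudly instead of returning an invented booking.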
What is GBrain?
Garry Tan's project — an AI memory and context management layer built on top of Claude and OpenAI models. Currently in development; follow @garrytan on X for updates.