Type something to search...
Pre-Selection Beats Post-Selection: How I Made Claude Code 10-30x Faster

Pre-Selection Beats Post-Selection: How I Made Claude Code 10-30x Faster

Pre-Selection Hook Architecture

Every code navigation costs time. When you multiply 300ms delays across hundreds of searches per day, you’re losing hours per week. I found a way to cut those searches to ~50ms—a 10-30x improvement—by intercepting tool choices before they happen.

Here’s the core insight: guidance that arrives before a decision beats guidance buried in instructions.

Why This Matters for Your Security Workflows

If you’re building AI-assisted security workflows—threat hunting, log analysis, configuration review—tool selection directly impacts both speed and accuracy.

Speed: A 300ms search that becomes 50ms doesn’t sound dramatic. But multiply that across hundreds of navigations per day and you’re saving hours per week. For incident response, those delays compound.

Accuracy: When you ask “where is validateToken defined?”, a semantic search finds the actual definition. A text search finds every file that mentions validateToken—definitions, calls, comments, documentation. During an incident, the difference between precise and noisy results can be critical.

The Problem With Post-Selection Guidance

Claude Code’s built-in tool guidance embeds preferences in the system prompt: “Use Grep for pattern matching.” “Prefer Glob over Bash find.”

This is post-selection guidance. By the time Claude chooses a tool, those instructions are background knowledge competing with the immediate task context.

Pre-selection guidance works differently. It intercepts the decision before Claude commits, with full context about what you’re asking.

The Three-Layer Architecture

Here’s the approach you can implement—three hooks that operate at different points in Claude’s decision pipeline:

YOUR PROMPT: "Where is the validateDocker function defined?"

┌─────────────────────────────────────────────────────────────┐
│  LAYER 1: Prompt Enhancement (soft guidance)                │
│  Detects "where...defined" pattern                          │
│  Injects: "Use LSP tool—10-30x faster than text search"     │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│  LAYER 2: Tool Interception (hard block)                    │
│  Claude chose Grep anyway? Block it.                        │
│  Returns redirect message with LSP example                  │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│  LAYER 3: Usage Tracking (advisory)                         │
│  Suggests structured APIs over raw bash commands            │
│  Tracks patterns for continuous improvement                 │
└─────────────────────────────────────────────────────────────┘

RESULT: LSP returns definition in ~50ms instead of 250-350ms

Layer 1: Soft Guidance

The first hook fires when you submit a prompt—before Claude starts reasoning about tools. It pattern-matches your request and injects guidance as part of the request context.

When you ask “Where is validateDocker defined?”, the hook matches the navigation pattern and adds: “For code navigation, use LSP. It’s 10-30x faster and provides semantic accuracy.”

This is fundamentally different from buried system prompt instructions. The guidance arrives with your question.

Layer 2: Hard Redirect

Sometimes Claude ignores the soft guidance. Layer 2 catches these cases by intercepting tool calls before execution.

If Claude attempts a Grep call that looks like a navigation query (searching for function definitions, class declarations, etc.), the hook blocks it and returns a redirect message. Claude then retries with LSP.

Layer 3: Advisory Tracking

The third layer encourages structured API calls over raw bash commands—docker ps becomes a structured JSON response instead of parsed text output. This layer advises rather than blocks, and tracks usage patterns for later analysis.

Validating the Claims

I originally wrote “50x faster” based on general LSP vs text search comparisons. Before publishing, I ran actual benchmarks.

What I Found

MethodAverage TimeRange
Ripgrep287ms246-337ms
LSP~50ms10-100ms
Actual Speedup5-7x typical2.5-34x range

Why the Original Estimate Was Wrong

  1. Ripgrep is fast: Parallel execution, smart filtering, memory-mapped I/O. It’s not your grandfather’s grep.
  2. Codebase size matters: My 22-project workspace isn’t enterprise-scale. A monorepo with millions of lines would show larger differences.
  3. Warm cache: Subsequent searches hit OS file cache. Cold cache searches are slower.

The Honest Numbers

For a typical development codebase:

  • Conservative estimate: 5-7x faster
  • Realistic estimate: 10-30x faster (larger codebases, cold cache)
  • Enterprise scale: 50x+ possible (massive codebases, network filesystems)

I updated this article’s claims based on these results. Inflating numbers undermines credibility.

Lesson learned: Benchmark before you publish.

Performance Impact Summary

ScenarioWithout HooksWith HooksYour Gain
Code navigation250-350ms (text search)~50ms (LSP)10-30x faster
Function lookupEvery mention returnedOnly definitionsNoise eliminated
Docker statusBash output parsingStructured JSONReliable automation

Implementation Considerations

Fail-Open Design

Every hook should fail open. If a hook crashes, Claude continues normally—no workflow interruption. You never want optimization logic to block your actual work.

Cooldown Mechanism

To prevent suggestion fatigue, the advisory layer throttles recommendations. You don’t need the same hint every 30 seconds.

Hook Registration

Hooks are registered in Claude Code’s settings file. Layer 1 fires on UserPromptSubmit. Layer 2 fires on PreToolUse with a matcher for specific tools. Total execution time per hook: 5-15ms.

What You Should Do

Good (5 minutes): Enable Claude Code’s built-in tool guidance in settings. This provides baseline optimization and works for most workflows.

Better (30 minutes): Add a UserPromptSubmit hook that pattern-matches your common queries—code navigation, log analysis, config lookups—and injects tool guidance as context. Soft steering without blocking.

Best (2 hours): Implement the three-layer architecture with hard blocks for high-impact redirects. Add usage tracking to identify optimization opportunities over time. Maximum performance gains with continuous improvement data.

Comparison: Built-in vs Hook-Based

CapabilityBuilt-In GuidanceHook-Based Approach
TimingSystem prompt (static)Per-request injection
EnforcementSuggestions onlyCan block suboptimal choices
Context awarenessGenericKnows your active tools/MCPs
LearningNoneTracks usage patterns
CustomizationLimitedFully configurable

The key insight: pre-selection guidance with blocking capability beats post-selection suggestions.

The Bottom Line

Claude Code’s built-in tool guidance is a solid default. But if you’re running workflows where tool selection impacts performance or accuracy—code review, incident response, security analysis—pre-selection hooks give you enforcement rather than suggestions.

The difference between “here’s a suggestion buried in your instructions” and “I’m intercepting your choice before you commit” is the difference between guidance and control.

Your next step: Audit your Claude Code usage this week. Identify the three queries you run most often. If any involve code navigation, implement at least a soft-guidance hook. One hour of configuration can save ten hours of cumulative delay.


Analysis conducted February 2026. Hook implementations available in the AIfred repository — an open-source Claude Code configuration framework.

Related Posts

4 Essentials for Executive & Business Buyin on your Incident Response Plan

4 Essentials for Executive & Business Buyin on your Incident Response Plan

The impact and subsequent fallout from a business-impacting cyber security attack are stressful at the best of times. Experience time and again shows that organizations without the benefit of an Inci

read more
The CyberSecurity & Evolving Threats

The CyberSecurity & Evolving Threats

Cybersecurity is a critical concern in today's world, as more and more of our daily lives are conducted online. The threat landscape is constantly evolving, and it can be challenging to keep up with t

read more
Top 5 things for a Successful Cyber Response 'IR' Plan

Top 5 things for a Successful Cyber Response 'IR' Plan

Incident Response Planning & Strategy How important is an Incident Response Plan? Some studies show that just having a plan, can reduce the cost of a breach [example one](https://insights.integrity36

read more
I Ran 849 Tests on AI Context Files. Here's What Actually Works.

I Ran 849 Tests on AI Context Files. Here's What Actually Works.

After 849 controlled tests, $20 in API costs, and a week of experiments, I can tell you exactly how to organize your Claude Code reference files. The short version: Put everything in one flat fol

read more
How I Made Claude Code Safer (And You Can Too)

How I Made Claude Code Safer (And You Can Too)

I've been running Claude Code on real projects for months. It's great at writing code — but it doesn't always understand the consequences of what it writes. Claude Code validates which tools can run.

read more
Claude Code Has Two New CVEs — Here's What They Exploit and How to Harden Your Setup

Claude Code Has Two New CVEs — Here's What They Exploit and How to Harden Your Setup

Your engineers cloned repositories today. Probably dozens. If any of those repos contained a malicious .claude/settings.json, they may have executed arbitrary shell code without a single confirmatio

read more
I Scanned 152 Files of My Own AI-Generated Code for Invisible Unicode Malware

I Scanned 152 Files of My Own AI-Generated Code for Invisible Unicode Malware

Two weeks ago, a supply chain attack called Glassworm compromised 150+ GitHub repositories and 72+ browser extensions by hiding malicious payloads in characters that are literally invisible in every

read more
Four Generations of Broken Promises: Why AI SOC Agents Might Actually Be Different

Four Generations of Broken Promises: Why AI SOC Agents Might Actually Be Different

Series: The SIEM & AI Reckoning — Article 1 of 10Over twenty years and hundreds of vendor pitches, one line never changes: "This is going to change everything." 2005, SIEM. 2012, Next-Gen

read more
The Math Problem AI Just Changed for Security Testing

The Math Problem AI Just Changed for Security Testing

Published: 2026-03-22 | RSA 2026 Pre-Conference SeriesHere's the problem every security team lives with but rarely says out loud. Your environment changes every time a developer merges code,

read more
The SIEM Cost Trap — Why Your Data Lake + AI Agents Will Win

The SIEM Cost Trap — Why Your Data Lake + AI Agents Will Win

If you've ever sat across from your CFO, your VP of Engineering, or your board and tried to explain why your SIEM costs what it costs — you already know how this conversation goes. The short version o

read more
Your Data Lake Is Only as Useful as Its Ability to Answer a Question

Your Data Lake Is Only as Useful as Its Ability to Answer a Question

You moved your security data out of the SIEM and into a data lake. Costs dropped. For the first time in years, you had budget to spare. Then an investigation hit — and your team spent two weeks findi

read more