
defense-in-depth

LerianStudio
Tags: Testing, AI, design, data

About

The defense-in-depth skill applies when invalid data causes failures deep in execution, requiring validation at every layer the data passes through. It makes bugs structurally impossible by adding checks across multiple system layers: entry points, business logic, environment guards, and debug instrumentation. This prevents a single validation point from being bypassed by a different code path, a mock, or a refactor.

Documentation

Defense-in-Depth Validation

Overview

When you fix a bug caused by invalid data, adding validation in one place feels sufficient. But that single check can be bypassed by a different code path, a refactor, or a mock.

Core principle: Validate at EVERY layer data passes through. Make the bug structurally impossible.

Why Multiple Layers

Single validation: "We fixed the bug."
Multiple layers: "We made the bug impossible."

Different layers catch different cases:

  • Entry validation catches most bugs
  • Business logic catches edge cases
  • Environment guards prevent context-specific dangers
  • Debug logging helps when other layers fail
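A minimal sketch of how the four layers stack in one call chain (all names below are illustrative, not from the skill; each layer is shown in detail in the next section):

function handleCreate(dir: string) {
  // Layer 1: entry point - reject obviously bad input at the boundary.
  if (!dir || dir.trim() === '') throw new Error('dir cannot be empty');
  initWorkspace(dir);
}

function initWorkspace(dir: string) {
  // Layer 2: business logic - re-check, in case a caller skipped layer 1.
  if (!dir) throw new Error('dir required for workspace init');
  dangerousOp(dir);
}

function dangerousOp(dir: string) {
  // Layer 3: environment guard - refuse risky contexts (the hard-coded
  // '/tmp' prefix is for illustration only).
  if (process.env.NODE_ENV === 'test' && !dir.startsWith('/tmp')) {
    throw new Error(`refusing to operate outside /tmp in tests: ${dir}`);
  }
  // Layer 4: debug instrumentation - record context before acting.
  console.debug('about to operate', { dir, cwd: process.cwd() });
  // ... proceed
}

A bad value now has to slip past four independent checks before it can do any damage.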

The Four Layers

Layer 1: Entry Point Validation

Purpose: Reject obviously invalid input at API boundary

import { existsSync, statSync } from 'fs';

function createProject(name: string, workingDirectory: string) {
  // Layer 1: reject obviously invalid input at the API boundary.
  if (!workingDirectory || workingDirectory.trim() === '') {
    throw new Error('workingDirectory cannot be empty');
  }
  if (!existsSync(workingDirectory)) {
    throw new Error(`workingDirectory does not exist: ${workingDirectory}`);
  }
  if (!statSync(workingDirectory).isDirectory()) {
    throw new Error(`workingDirectory is not a directory: ${workingDirectory}`);
  }
  // ... proceed
}

Layer 2: Business Logic Validation

Purpose: Ensure data makes sense for this operation

function initializeWorkspace(projectDir: string, sessionId: string) {
  // Layer 2: re-check inside the business logic even though layer 1
  // should have caught this - a mock or a different caller may skip it.
  if (!projectDir) {
    throw new Error('projectDir required for workspace initialization');
  }
  if (!sessionId) {
    throw new Error('sessionId required for workspace initialization');
  }
  // ... proceed
}

Layer 3: Environment Guards

Purpose: Prevent dangerous operations in specific contexts

import { normalize, resolve } from 'path';
import { tmpdir } from 'os';

async function gitInit(directory: string) {
  // In tests, refuse git init outside temp directories
  if (process.env.NODE_ENV === 'test') {
    const normalized = normalize(resolve(directory));
    const tmpDir = normalize(resolve(tmpdir()));

    if (!normalized.startsWith(tmpDir)) {
      throw new Error(
        `Refusing git init outside temp dir during tests: ${directory}`
      );
    }
  }
  // ... proceed
}
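One caveat with the startsWith comparison above: it can wrongly accept sibling paths, e.g. /tmpfoo when the temp dir is /tmp. A stricter containment check built on path.relative is sketched below; this isInside helper is a suggested hardening, not part of the original skill:

import { relative, resolve, isAbsolute } from 'path';

// True only if `child` resolves to `parent` itself or somewhere inside it.
function isInside(parent: string, child: string): boolean {
  const rel = relative(resolve(parent), resolve(child));
  return rel === '' || (!rel.startsWith('..') && !isAbsolute(rel));
}

// In the guard above: if (!isInside(tmpdir(), directory)) throw ...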

Layer 4: Debug Instrumentation

Purpose: Capture context for forensics

async function gitInit(directory: string) {
  // Layer 4: record enough context to reconstruct how we got here.
  // `logger` is whatever structured logger the project already uses.
  const stack = new Error().stack;
  logger.debug('About to git init', {
    directory,
    cwd: process.cwd(),
    stack,
  });
  // ... proceed
}

Applying the Pattern

When you find a bug:

  1. Trace the data flow - where does the bad value originate, and where is it used?
  2. Map all checkpoints - list every point the data passes through
  3. Add validation at each layer - entry, business logic, environment, debug
  4. Test each layer - try to bypass layer 1 and verify layer 2 catches it (see the sketch below)
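
A sketch of step 4, assuming a Vitest/Jest-style runner and the createProject and initializeWorkspace functions from Layers 1 and 2 above (imported from wherever the project defines them):

import { describe, it, expect } from 'vitest';

describe('defense-in-depth layers', () => {
  it('layer 1: entry point rejects an empty directory', () => {
    expect(() => createProject('demo', '')).toThrow(/cannot be empty/);
  });

  it('layer 2: business logic catches the value when layer 1 is bypassed', () => {
    // Call the inner function directly, the way a mock or refactor might.
    expect(() => initializeWorkspace('', 'session-1')).toThrow(/required/);
  });
});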

Example from Session

Bug: an empty projectDir caused git init to run in the source tree

Data flow:

  1. Test setup → empty string
  2. Project.create(name, '')
  3. WorkspaceManager.createWorkspace('')
  4. git init runs in process.cwd()

Four layers added:

  • Layer 1: Project.create() validates not empty/exists/writable
  • Layer 2: WorkspaceManager validates projectDir not empty
  • Layer 3: WorktreeManager refuses git init outside tmpdir in tests
  • Layer 4: Stack trace logging before git init

Result: all 1847 tests passed; the bug is impossible to reproduce.
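
A layer 3 regression test along these lines would have caught the original bug. This is a sketch that assumes the gitInit guard from Layer 3 above and that gitInit resolves to undefined on success:

import { it, expect } from 'vitest';
import { mkdtempSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

it('refuses git init outside the temp dir during tests', async () => {
  // NODE_ENV is 'test' under the runner, so the layer 3 guard is active.
  await expect(gitInit(process.cwd())).rejects.toThrow(/Refusing git init/);
});

it('allows git init inside a fresh temp dir', async () => {
  const dir = mkdtempSync(join(tmpdir(), 'ring-test-'));
  await expect(gitInit(dir)).resolves.toBeUndefined();
});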

Key Insight

All four layers were necessary. During testing, each layer caught bugs the others missed:

  • Different code paths bypassed entry validation
  • Mocks bypassed business logic checks
  • Edge cases on different platforms needed environment guards
  • Debug logging identified structural misuse

Don't stop at one validation point. Add checks at every layer.

Quick Install

/plugin add https://github.com/LerianStudio/ring/tree/main/defense-in-depth

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

LerianStudio/ring
Path: skills/defense-in-depth

Related Skills

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
