
cynefin-classifier


About

This skill classifies problems into the Cynefin Framework domains (Clear, Complicated, Complex, Chaotic, Confusion) and recommends the correct response strategy. Use it when you're unsure how to approach a problem, need to choose between expert analysis and experimentation, or are facing analysis paralysis. It helps developers apply the appropriate cognitive approach to problem-solving.

Quick Install

Claude Code

Plugin Command (Recommended)

/plugin add https://github.com/majiayu000/claude-skill-registry

Git Clone (Alternative)

git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/cynefin-classifier

Copy and paste one of these commands into Claude Code to install the skill.

Documentation

Cynefin Classifier

Classify problems into the correct Cynefin domain and recommend the appropriate response strategy. This prevents applying the wrong cognitive approach to problems.

Triggers

Activate when the user says something like:

  • "classify this problem"
  • "cynefin analysis"
  • "which domain is this"
  • "what approach should we take"
  • "how should we tackle this"
  • "problem classification"
  • "should we analyze or experiment"
  • "is this complex or complicated"

The Cynefin Framework

                    UNORDERED                          ORDERED
              ┌─────────────────────────────────┬─────────────────────────────────┐
              │         COMPLEX                 │        COMPLICATED              │
              │                                 │                                 │
              │  Cause-effect visible only      │  Cause-effect discoverable      │
              │  in retrospect                  │  through expert analysis        │
              │                                 │                                 │
              │  Response: PROBE-SENSE-RESPOND  │  Response: SENSE-ANALYZE-RESPOND│
              │  • Safe-to-fail experiments     │  • Expert consultation          │
 NOVEL        │  • Emergent practice            │  • Root cause analysis          │   KNOWN
              │  • Amplify what works           │  • Good practice                │
              ├─────────────────────────────────┼─────────────────────────────────┤
              │         CHAOTIC                 │          CLEAR                  │
              │                                 │                                 │
              │  No discernible cause-effect    │  Cause-effect obvious to all    │
              │  No time for analysis           │                                 │
              │                                 │  Response: SENSE-CATEGORIZE-    │
              │  Response: ACT-SENSE-RESPOND    │           RESPOND               │
              │  • Stabilize first              │  • Apply best practice          │
              │  • Novel practice               │  • Follow procedures            │
              │  • Then move to complex         │  • Standardize                  │
              └─────────────────────────────────┴─────────────────────────────────┘

                                    CONFUSION (center)
                              Domain unknown - gather information

Classification Process

Step 1: Identify Cause-Effect Relationship

Ask: "Can we predict the outcome of an action?"

| If... | Then Domain is Likely... |
| --- | --- |
| Anyone can predict outcome | Clear |
| Experts can predict outcome | Complicated |
| Outcome only knowable after action | Complex |
| No one can predict, crisis mode | Chaotic |
| Insufficient information to determine | Confusion |
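
As a rough illustration, this mapping can be expressed as a lookup table. The sketch below is hypothetical Python, not part of the skill's classify.py script; the enum values and function names are assumptions made for illustration.

```python
# Minimal sketch of the Step 1 heuristic: map the answer to
# "can we predict the outcome?" onto a likely Cynefin domain.
# Names here are illustrative, not the actual classify.py logic.
from enum import Enum

class Predictability(Enum):
    ANYONE = "anyone can predict the outcome"
    EXPERTS = "experts can predict the outcome"
    RETROSPECT = "knowable only after acting"
    NOBODY = "no one can predict, crisis mode"
    UNKNOWN = "insufficient information"

STEP1_DOMAIN = {
    Predictability.ANYONE: "Clear",
    Predictability.EXPERTS: "Complicated",
    Predictability.RETROSPECT: "Complex",
    Predictability.NOBODY: "Chaotic",
    Predictability.UNKNOWN: "Confusion",
}

def step1_classify(p: Predictability) -> str:
    """Return the domain the Step 1 table suggests for this answer."""
    return STEP1_DOMAIN[p]

print(step1_classify(Predictability.EXPERTS))  # Complicated
```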

Step 2: Check Temporal State

Problems can move between domains:

  • Crisis → Stabilization: Chaotic → Complex (after immediate action)
  • Learning → Optimization: Complex → Complicated (after patterns emerge)
  • Maturity → Commoditization: Complicated → Clear (after expertise codified)
  • Disruption → Uncertainty: Clear → Complex/Chaotic (black swan event)
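
These shifts can be modeled as a small transition table. A minimal sketch follows, assuming (domain, event) pairs as keys; the event labels are illustrative and not an interface of this skill.

```python
# Hypothetical sketch of the Step 2 transitions: given the current
# domain and a triggering event, look up the likely next domain.
TRANSITIONS = {
    ("Chaotic", "stabilized"): "Complex",
    ("Complex", "patterns emerged"): "Complicated",
    ("Complicated", "expertise codified"): "Clear",
    ("Clear", "black swan event"): "Complex",  # or Chaotic, depending on severity
}

def next_domain(current: str, event: str) -> str:
    """Return the likely next domain, or stay put if the event is unknown."""
    return TRANSITIONS.get((current, event), current)

print(next_domain("Chaotic", "stabilized"))  # Complex
```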

Step 3: Validate with Diagnostic Questions

Clear Domain Indicators:

  • Is there a documented procedure?
  • Would a junior developer handle this the same way?
  • Is this a "solved problem"?

Complicated Domain Indicators:

  • Do we need an expert to analyze this?
  • Are there multiple valid approaches requiring evaluation?
  • Can we predict the outcome with sufficient analysis?

Complex Domain Indicators:

  • Are multiple independent variables interacting?
  • Has similar analysis failed to predict outcomes before?
  • Do we need to "try and see"?

Chaotic Domain Indicators:

  • Is there immediate harm occurring?
  • Do we lack time for any analysis?
  • Is the situation unprecedented?
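
Taken together, these diagnostic questions amount to a per-domain count of "yes" answers. Below is a minimal sketch of that idea in Python; the abbreviated question strings and the tie-breaking rule (ties or all-zero scores fall back to Confusion) are assumptions, not the skill's actual logic.

```python
# Illustrative scoring pass over the Step 3 indicator questions.
# Question text is abbreviated from the lists above.
INDICATORS = {
    "Clear": ["documented procedure?", "junior would do the same?", "solved problem?"],
    "Complicated": ["needs an expert?", "multiple approaches to weigh?", "predictable with analysis?"],
    "Complex": ["many interacting variables?", "analysis failed before?", "need to try and see?"],
    "Chaotic": ["immediate harm?", "no time for analysis?", "unprecedented?"],
}

def diagnose(answers: dict[str, list[bool]]) -> str:
    """Pick the domain whose indicators collected the most 'yes' answers."""
    scores = {d: sum(answers.get(d, [])) for d in INDICATORS}
    best = max(scores, key=scores.get)
    # A tie or an all-zero score means we cannot classify yet.
    if scores[best] == 0 or list(scores.values()).count(scores[best]) > 1:
        return "Confusion"
    return best

# Three 'yes' answers on the Complex indicators, none elsewhere:
print(diagnose({"Complex": [True, True, True]}))  # Complex
```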

Output Format

## Cynefin Classification

**Problem**: [Restate the problem concisely]

### Domain: [CLEAR | COMPLICATED | COMPLEX | CHAOTIC | CONFUSION]

**Confidence**: [HIGH | MEDIUM | LOW]

### Rationale

[2-3 sentences explaining why this domain fits, based on the cause-effect relationship]

### Response Strategy

**Approach**: [Sense-Categorize-Respond | Sense-Analyze-Respond | Probe-Sense-Respond | Act-Sense-Respond | Gather Information]

### Recommended Actions

1. [First specific action]
2. [Second specific action]
3. [Third specific action]

### Pitfall Warning

[Domain-specific anti-pattern to avoid]

### Related Considerations

- **Temporal**: [Will domain likely shift? When?]
- **Boundary**: [Is this near a domain boundary?]
- **Compound**: [Are sub-problems in different domains?]

Domain-Specific Guidance

Clear Domain

When you see it: Bug with known fix, style violation, typo, standard CRUD operation.

Response: Apply best practice immediately. Don't over-engineer.

Pitfall: Over-complicating simple problems. Creating abstractions where none are needed.

Software Examples:

  • Fixing a null reference with documented pattern
  • Adding a missing import
  • Correcting a typo in documentation
  • Following established coding standards

Complicated Domain

When you see it: Performance issue, security vulnerability assessment, architecture evaluation.

Response: Gather experts, analyze thoroughly, then act decisively.

Pitfall: Analysis paralysis OR acting without sufficient expertise.

Software Examples:

  • Debugging a memory leak
  • Evaluating database schema design
  • Security audit of authentication flow
  • Choosing between well-documented frameworks with clear trade-offs

Complex Domain

When you see it: User behavior prediction, team dynamics, new technology adoption, architectural decisions with uncertainty.

Response: Run safe-to-fail experiments. Probe, sense patterns, respond. Amplify what works.

Pitfall: Trying to fully analyze before acting. Expecting predictable outcomes.

Software Examples:

  • Deciding microservices vs monolith for new product
  • Predicting which features users will adopt
  • Evaluating emerging frameworks with limited production data
  • Team restructuring impacts on productivity
  • A/B testing user experience changes

Chaotic Domain

When you see it: Production outage, data breach, critical security incident.

Response: Act immediately to stabilize. Restore order first. Analyze later.

Pitfall: Forming committees. Waiting for consensus. Deep analysis during crisis.

Software Examples:

  • Database corruption with active users
  • Active security breach
  • Complete service outage
  • Cascading infrastructure failure

Confusion Domain

When you see it: Insufficient information to classify. Contradictory signals. Unknown unknowns.

Response: Gather information. Break problem into smaller pieces. Reclassify components.

Pitfall: Assuming a domain without evidence. Paralysis from uncertainty.

Software Examples:

  • Vague requirement that could be simple or complex
  • Bug report without reproduction steps
  • Performance issue without metrics
  • "System is slow" without specifics

Integration with Other Skills

| Skill | Integration Point |
| --- | --- |
| decision-critic | After classifying as Complicated, use decision-critic to validate analysis |
| planner | After classifying as Complex, use planner to design experiments |
| architect | Complicated architectural decisions benefit from ADR process |
| analyst | Confusion domain benefits from analyst investigation |

Compound Problems

When a problem spans multiple domains:

  1. Decompose the problem into sub-problems
  2. Classify each sub-problem independently
  3. Sequence work by domain priority:
    • Chaotic first (stabilize)
    • Clear next (quick wins)
    • Complicated then (expert analysis)
    • Complex last (experiments need stable foundation)
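
A minimal sketch of this sequencing rule, assuming each sub-problem has already been classified; the priority numbers and example tasks are illustrative only.

```python
# Sort sub-problems by domain priority: stabilize first, experiment last.
DOMAIN_PRIORITY = {"Chaotic": 0, "Clear": 1, "Complicated": 2, "Complex": 3}

subproblems = [
    ("migrate schema", "Complicated"),
    ("fix failing healthcheck", "Chaotic"),
    ("update copyright header", "Clear"),
    ("trial new caching strategy", "Complex"),
]

for task, domain in sorted(subproblems, key=lambda t: DOMAIN_PRIORITY[t[1]]):
    print(f"{domain:12} {task}")
```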

Scripts

classify.py

Structured classification with validation.

python3 .claude/skills/cynefin-classifier/scripts/classify.py \
  --problem "Description of the problem" \
  --context "Additional context about constraints, environment"

Exit Codes:

  • 0: Classification complete
  • 1: Invalid arguments
  • 2: Insufficient information (Confusion domain)
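
A hedged example of driving the script from Python and branching on the documented exit codes; the problem and context strings are borrowed from Example 1 below, and the wrapper itself is not part of the skill.

```python
# Invoke classify.py and branch on its documented exit codes
# (0 = complete, 1 = invalid arguments, 2 = Confusion domain).
import subprocess

result = subprocess.run(
    ["python3", ".claude/skills/cynefin-classifier/scripts/classify.py",
     "--problem", "Tests pass locally but fail randomly in CI",
     "--context", "Shared CI runners, parallel test execution"],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print(result.stdout)  # classification complete
elif result.returncode == 2:
    print("Confusion domain: gather more information, then reclassify.")
else:
    raise SystemExit(f"Invalid arguments: {result.stderr}")
```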

Escalation Criteria

Escalate to human or senior decision-maker when:

  • Confidence is LOW
  • Problem is on domain boundary
  • Stakes are high (production, security, data)
  • Classification contradicts team consensus
  • Chaotic domain with no runbook
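
These criteria collapse into a simple boolean gate. A sketch, assuming one flag per criterion; the parameter names are hypothetical.

```python
# Escalate if any of the listed criteria holds.
def should_escalate(confidence: str, on_boundary: bool, high_stakes: bool,
                    contradicts_consensus: bool, chaotic_without_runbook: bool) -> bool:
    return (confidence == "LOW" or on_boundary or high_stakes
            or contradicts_consensus or chaotic_without_runbook)

print(should_escalate("LOW", False, False, False, False))  # True
```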

Examples

Example 1: CI Test Failures

Input: "Tests pass locally but fail randomly in CI"

Classification: COMPLEX

Rationale: Multiple interacting factors (timing, environment, dependencies, parallelism) make cause-effect unclear. Analysis alone won't solve this.

Strategy: Probe-Sense-Respond

  1. Add instrumentation to failing tests
  2. Run experiments with different configurations
  3. Look for patterns, amplify what works

Pitfall: Don't spend weeks trying to "root cause" before experimenting.

Example 2: Production Database Down

Input: "Production database is unresponsive, customers cannot access the site"

Classification: CHAOTIC

Rationale: Active harm occurring. No time for analysis. Stabilization required.

Strategy: Act-Sense-Respond

  1. Execute failover runbook immediately
  2. Restore service using backup/replica
  3. Only after stable: investigate root cause

Pitfall: Don't form a committee. Don't start analyzing before acting.

Example 3: Framework Choice

Input: "Should we use React or Vue for our new frontend?"

Classification: COMPLEX

Rationale: Team dynamics, learning curves, ecosystem fit, and long-term maintainability only emerge through experience. Trade-off analysis alone is insufficient.

Strategy: Probe-Sense-Respond

  1. Build small prototype with each (timeboxed)
  2. Measure team velocity and satisfaction
  3. Let experience inform decision

Pitfall: Don't try to "perfectly analyze" all trade-offs in a spreadsheet.

Example 4: Memory Leak

Input: "Application memory grows steadily over 24 hours"

Classification: COMPLICATED

Rationale: Cause-effect is discoverable through expert analysis. Heap dumps, profiling, and code review will reveal the source.

Strategy: Sense-Analyze-Respond

  1. Collect heap dumps at intervals
  2. Analyze object retention with profiler
  3. Expert review of suspected areas

Pitfall: Don't guess and patch. Systematic analysis will find the root cause.

Example 5: Vague Bug Report

Input: "The app feels slow sometimes"

Classification: CONFUSION

Rationale: Insufficient information to determine domain. Could be Clear (known fix), Complicated (needs profiling), or Complex (user perception).

Strategy: Gather Information

  1. What operations feel slow?
  2. What device/network conditions?
  3. Can it be reproduced?
  4. What does "slow" mean (seconds or milliseconds)?

Next Step: Reclassify once the information is gathered.


Anti-Patterns

| Anti-Pattern | Description | Consequence |
| --- | --- | --- |
| Complicated-izing Complexity | Applying analysis to emergent problems | Analysis paralysis, wasted effort |
| Simplifying Complicated | Skipping expert analysis for nuanced problems | Rework, technical debt |
| Analyzing Chaos | Forming committees during crisis | Prolonged outage, increased damage |
| Experimenting on Clear | Running A/B tests on solved problems | Wasted time, unnecessary risk |
| Guessing Confusion | Assuming domain without evidence | Wrong approach, compounded problems |

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/cynefin-classifier
