cynefin-classifier
About
This skill classifies problems into the Cynefin Framework domains (Clear, Complicated, Complex, Chaotic, Confusion) and recommends the correct response strategy. Use it when you're unsure how to approach a problem, need to choose between expert analysis and experimentation, or are facing analysis paralysis. It helps developers apply the appropriate cognitive approach to problem-solving.
Quick Install
Claude Code
Recommended: copy and paste one of the following commands in Claude Code to install this skill.

Install as a plugin:

```
/plugin add https://github.com/majiayu000/claude-skill-registry
```

Or clone the registry into your skills directory:

```
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/cynefin-classifier
```
Documentation
Cynefin Classifier
Classify problems into the correct Cynefin domain and recommend the appropriate response strategy. This prevents applying the wrong cognitive approach to problems.
Triggers
Activate when the user says something like:
- "classify this problem"
- "cynefin analysis"
- "which domain is this"
- "what approach should we take"
- "how should we tackle this"
- "problem classification"
- "should we analyze or experiment"
- "is this complex or complicated"
The Cynefin Framework
```
                     UNORDERED                          ORDERED
        ┌─────────────────────────────────┬─────────────────────────────────┐
        │             COMPLEX             │           COMPLICATED           │
        │                                 │                                 │
        │  Cause-effect visible only      │  Cause-effect discoverable      │
        │  in retrospect                  │  through expert analysis        │
        │                                 │                                 │
        │  Response: PROBE-SENSE-RESPOND  │  Response: SENSE-ANALYZE-RESPOND│
        │  • Safe-to-fail experiments     │  • Expert consultation          │
 NOVEL  │  • Emergent practice            │  • Root cause analysis          │  KNOWN
        │  • Amplify what works           │  • Good practice                │
        ├─────────────────────────────────┼─────────────────────────────────┤
        │             CHAOTIC             │              CLEAR              │
        │                                 │                                 │
        │  No discernible cause-effect    │  Cause-effect obvious to all    │
        │  No time for analysis           │                                 │
        │                                 │  Response: SENSE-CATEGORIZE-    │
        │  Response: ACT-SENSE-RESPOND    │            RESPOND              │
        │  • Stabilize first              │  • Apply best practice          │
        │  • Novel practice               │  • Follow procedures            │
        │  • Then move to complex         │  • Standardize                  │
        └─────────────────────────────────┴─────────────────────────────────┘

                             CONFUSION (center)
                    Domain unknown - gather information
```
Classification Process
Step 1: Identify Cause-Effect Relationship
Ask: "Can we predict the outcome of an action?"
| If... | Then Domain is Likely... |
|---|---|
| Anyone can predict outcome | Clear |
| Experts can predict outcome | Complicated |
| Outcome only knowable after action | Complex |
| No one can predict, crisis mode | Chaotic |
| Insufficient information to determine | Confusion |
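The decision rule in this table is mechanical enough to sketch in code. A minimal illustration, assuming you have already answered the four predictability questions; the `Domain` enum and `classify_by_predictability` helper are hypothetical names for illustration, not part of the bundled script:

```python
from enum import Enum

class Domain(Enum):
    CLEAR = "Clear"
    COMPLICATED = "Complicated"
    COMPLEX = "Complex"
    CHAOTIC = "Chaotic"
    CONFUSION = "Confusion"

def classify_by_predictability(anyone_can_predict: bool,
                               experts_can_predict: bool,
                               knowable_after_action: bool,
                               crisis_mode: bool) -> Domain:
    """Map the Step 1 predictability answers onto a Cynefin domain."""
    if crisis_mode:
        # Crisis trumps everything else: no one can predict, act first
        return Domain.CHAOTIC
    if anyone_can_predict:
        return Domain.CLEAR
    if experts_can_predict:
        return Domain.COMPLICATED
    if knowable_after_action:
        return Domain.COMPLEX
    return Domain.CONFUSION  # insufficient information to determine

# Example: experts can predict a memory leak's cause via profiling
print(classify_by_predictability(False, True, False, False))  # Domain.COMPLICATED
```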
Step 2: Check Temporal State
Problems can move between domains:
- Crisis → Stabilization: Chaotic → Complex (after immediate action)
- Learning → Optimization: Complex → Complicated (after patterns emerge)
- Maturity → Commoditization: Complicated → Clear (after expertise codified)
- Disruption → Uncertainty: Clear → Complex/Chaotic (black swan event)
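These transitions can be captured as a simple lookup, which is handy if you want to flag likely next states alongside a classification. A hedged sketch; the `TRANSITIONS` table simply restates the list above:

```python
# Typical domain transitions and their triggers (restated from the list above)
TRANSITIONS = {
    ("Chaotic", "Complex"): "stabilization after immediate action",
    ("Complex", "Complicated"): "patterns emerge through experimentation",
    ("Complicated", "Clear"): "expertise codified into procedures",
    ("Clear", "Complex"): "disruption / black swan event",
    ("Clear", "Chaotic"): "disruption / black swan event",
}

def likely_next_states(domain: str) -> list[tuple[str, str]]:
    """Return (next_domain, trigger) pairs reachable from the given domain."""
    return [(dst, why) for (src, dst), why in TRANSITIONS.items() if src == domain]

print(likely_next_states("Clear"))
# [('Complex', 'disruption / black swan event'), ('Chaotic', 'disruption / black swan event')]
```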
Step 3: Validate with Diagnostic Questions
Clear Domain Indicators:
- Is there a documented procedure?
- Would a junior developer handle this the same way?
- Is this a "solved problem"?
Complicated Domain Indicators:
- Do we need an expert to analyze this?
- Are there multiple valid approaches requiring evaluation?
- Can we predict the outcome with sufficient analysis?
Complex Domain Indicators:
- Are multiple independent variables interacting?
- Has similar analysis failed to predict outcomes before?
- Do we need to "try and see"?
Chaotic Domain Indicators:
- Is there immediate harm occurring?
- Do we lack time for any analysis?
- Is the situation unprecedented?
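One way to operationalize these checklists is to tally "yes" answers per domain and treat a weak or tied signal as Confusion. A minimal sketch; the scoring threshold is an illustrative assumption, not part of the skill:

```python
INDICATORS = {
    "Clear": [
        "Is there a documented procedure?",
        "Would a junior developer handle this the same way?",
        "Is this a 'solved problem'?",
    ],
    "Complicated": [
        "Do we need an expert to analyze this?",
        "Are there multiple valid approaches requiring evaluation?",
        "Can we predict the outcome with sufficient analysis?",
    ],
    "Complex": [
        "Are multiple independent variables interacting?",
        "Has similar analysis failed to predict outcomes before?",
        "Do we need to 'try and see'?",
    ],
    "Chaotic": [
        "Is there immediate harm occurring?",
        "Do we lack time for any analysis?",
        "Is the situation unprecedented?",
    ],
}

def validate(answers: dict[str, bool]) -> str:
    """Tally yes-answers per domain; weak or tied signals mean Confusion."""
    scores = {
        domain: sum(answers.get(q, False) for q in questions)
        for domain, questions in INDICATORS.items()
    }
    best = max(scores.values())
    winners = [d for d, s in scores.items() if s == best]
    if best < 2 or len(winners) > 1:  # threshold of 2 is an illustrative choice
        return "Confusion"
    return winners[0]

print(validate({"Is there immediate harm occurring?": True,
                "Do we lack time for any analysis?": True}))  # Chaotic
```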
Output Format
## Cynefin Classification
**Problem**: [Restate the problem concisely]
### Domain: [CLEAR | COMPLICATED | COMPLEX | CHAOTIC | CONFUSION]
**Confidence**: [HIGH | MEDIUM | LOW]
### Rationale
[2-3 sentences explaining why this domain applies, based on the cause-effect relationship]
### Response Strategy
**Approach**: [Sense-Categorize-Respond | Sense-Analyze-Respond | Probe-Sense-Respond | Act-Sense-Respond | Gather Information]
### Recommended Actions
1. [First specific action]
2. [Second specific action]
3. [Third specific action]
### Pitfall Warning
[Domain-specific anti-pattern to avoid]
### Related Considerations
- **Temporal**: [Will domain likely shift? When?]
- **Boundary**: [Is this near a domain boundary?]
- **Compound**: [Are sub-problems in different domains?]
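If you emit this report programmatically, a small formatter keeps the structure consistent. A sketch only; the `Classification` dataclass and its field names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Classification:
    problem: str
    domain: str        # CLEAR | COMPLICATED | COMPLEX | CHAOTIC | CONFUSION
    confidence: str    # HIGH | MEDIUM | LOW
    rationale: str
    approach: str
    actions: list[str] = field(default_factory=list)
    pitfall: str = ""

def render(c: Classification) -> str:
    """Render a classification using the output format above."""
    steps = "\n".join(f"{i}. {a}" for i, a in enumerate(c.actions, 1))
    return (
        f"## Cynefin Classification\n"
        f"**Problem**: {c.problem}\n"
        f"### Domain: {c.domain}\n"
        f"**Confidence**: {c.confidence}\n"
        f"### Rationale\n{c.rationale}\n"
        f"### Response Strategy\n**Approach**: {c.approach}\n"
        f"### Recommended Actions\n{steps}\n"
        f"### Pitfall Warning\n{c.pitfall}\n"
    )
```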
Domain-Specific Guidance
Clear Domain
When you see it: Bug with known fix, style violation, typo, standard CRUD operation.
Response: Apply best practice immediately. Don't over-engineer.
Pitfall: Over-complicating simple problems. Creating abstractions where none are needed.
Software Examples:
- Fixing a null reference with documented pattern
- Adding a missing import
- Correcting a typo in documentation
- Following established coding standards
Complicated Domain
When you see it: Performance issue, security vulnerability assessment, architecture evaluation.
Response: Gather experts, analyze thoroughly, then act decisively.
Pitfall: Analysis paralysis OR acting without sufficient expertise.
Software Examples:
- Debugging a memory leak
- Evaluating database schema design
- Security audit of authentication flow
- Choosing between well-documented frameworks with clear trade-offs
Complex Domain
When you see it: User behavior prediction, team dynamics, new technology adoption, architectural decisions with uncertainty.
Response: Run safe-to-fail experiments. Probe, sense patterns, respond. Amplify what works.
Pitfall: Trying to fully analyze before acting. Expecting predictable outcomes.
Software Examples:
- Deciding microservices vs monolith for new product
- Predicting which features users will adopt
- Evaluating emerging frameworks with limited production data
- Assessing how team restructuring affects productivity
- A/B testing user experience changes
Chaotic Domain
When you see it: Production outage, data breach, critical security incident.
Response: Act immediately to stabilize. Restore order first. Analyze later.
Pitfall: Forming committees. Waiting for consensus. Deep analysis during crisis.
Software Examples:
- Database corruption with active users
- Active security breach
- Complete service outage
- Cascading infrastructure failure
Confusion Domain
When you see it: Insufficient information to classify. Contradictory signals. Unknown unknowns.
Response: Gather information. Break problem into smaller pieces. Reclassify components.
Pitfall: Assuming a domain without evidence. Paralysis from uncertainty.
Software Examples:
- Vague requirement that could be simple or complex
- Bug report without reproduction steps
- Performance issue without metrics
- "System is slow" without specifics
Integration with Other Skills
| Skill | Integration Point |
|---|---|
| decision-critic | After classifying as Complicated, use decision-critic to validate analysis |
| planner | After classifying as Complex, use planner to design experiments |
| architect | Complicated architectural decisions benefit from ADR process |
| analyst | Confusion domain benefits from analyst investigation |
Compound Problems
When a problem spans multiple domains:
- Decompose the problem into sub-problems
- Classify each sub-problem independently
- Sequence work by domain priority:
  - Chaotic first (stabilize)
  - Clear next (quick wins)
  - Complicated after that (expert analysis)
  - Complex last (experiments need a stable foundation)
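This priority order lends itself to a simple sort key. A sketch, assuming sub-problems have already been classified; the `PRIORITY` table mirrors the list above:

```python
# Lower number = handle earlier (mirrors the sequencing list above)
PRIORITY = {"Chaotic": 0, "Clear": 1, "Complicated": 2, "Complex": 3}

def sequence(subproblems: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (description, domain) pairs by domain priority.

    Unclassified/Confusion items sort first so they get re-examined
    before work is scheduled, an illustrative choice rather than skill policy.
    """
    return sorted(subproblems, key=lambda p: PRIORITY.get(p[1], -1))

work = [
    ("Adopt new caching strategy", "Complex"),
    ("Fix failing health check", "Chaotic"),
    ("Correct typo in runbook", "Clear"),
]
print([desc for desc, _ in sequence(work)])
# ['Fix failing health check', 'Correct typo in runbook', 'Adopt new caching strategy']
```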
Scripts
classify.py
Structured classification with validation.
```bash
python3 .claude/skills/cynefin-classifier/scripts/classify.py \
  --problem "Description of the problem" \
  --context "Additional context about constraints, environment"
```
Exit Codes:
- 0: Classification complete
- 1: Invalid arguments
- 2: Insufficient information (Confusion domain)
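When invoking the script from other tooling, these exit codes can drive control flow. A hedged wrapper sketch: the script path, flags, and exit codes are taken from this README, while the wrapper itself is illustrative:

```python
import subprocess
import sys

def run_classifier(problem: str, context: str = "") -> int:
    """Invoke classify.py and interpret its documented exit codes."""
    cmd = [
        sys.executable,
        ".claude/skills/cynefin-classifier/scripts/classify.py",
        "--problem", problem,
        "--context", context,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        print(result.stdout)  # classification complete
    elif result.returncode == 2:
        print("Confusion domain: gather more information before reclassifying.")
    else:
        print(f"Invalid arguments: {result.stderr}", file=sys.stderr)
    return result.returncode

run_classifier("Tests pass locally but fail randomly in CI",
               context="Shared CI runners, parallel test execution")
```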
Escalation Criteria
Escalate to human or senior decision-maker when:
- Confidence is LOW
- Problem is on domain boundary
- Stakes are high (production, security, data)
- Classification contradicts team consensus
- Chaotic domain with no runbook
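These criteria reduce to a boolean check that tooling can apply after each classification. A minimal sketch; the parameter names are hypothetical:

```python
def should_escalate(confidence: str,
                    on_domain_boundary: bool,
                    high_stakes: bool,
                    contradicts_consensus: bool,
                    chaotic_without_runbook: bool) -> bool:
    """True if any escalation criterion from the list above is met."""
    return (
        confidence == "LOW"
        or on_domain_boundary
        or high_stakes
        or contradicts_consensus
        or chaotic_without_runbook
    )

# A high-stakes incident with no runbook always escalates
print(should_escalate("HIGH", False, True, False, True))  # True
```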
Examples
Example 1: CI Test Failures
Input: "Tests pass locally but fail randomly in CI"
Classification: COMPLEX
Rationale: Multiple interacting factors (timing, environment, dependencies, parallelism) make cause-effect unclear. Analysis alone won't solve this.
Strategy: Probe-Sense-Respond
- Add instrumentation to failing tests
- Run experiments with different configurations
- Look for patterns, amplify what works
Pitfall: Don't spend weeks trying to "root cause" before experimenting.
Example 2: Production Database Down
Input: "Production database is unresponsive, customers cannot access the site"
Classification: CHAOTIC
Rationale: Active harm occurring. No time for analysis. Stabilization required.
Strategy: Act-Sense-Respond
- Execute failover runbook immediately
- Restore service using backup/replica
- Only after stable: investigate root cause
Pitfall: Don't form a committee. Don't start analyzing before acting.
Example 3: Framework Choice
Input: "Should we use React or Vue for our new frontend?"
Classification: COMPLEX
Rationale: Team dynamics, learning curves, ecosystem fit, and long-term maintainability only emerge through experience. Trade-off analysis alone is insufficient.
Strategy: Probe-Sense-Respond
- Build small prototype with each (timeboxed)
- Measure team velocity and satisfaction
- Let experience inform decision
Pitfall: Don't try to "perfectly analyze" all trade-offs in a spreadsheet.
Example 4: Memory Leak
Input: "Application memory grows steadily over 24 hours"
Classification: COMPLICATED
Rationale: Cause-effect is discoverable through expert analysis. Heap dumps, profiling, and code review will reveal the source.
Strategy: Sense-Analyze-Respond
- Collect heap dumps at intervals
- Analyze object retention with profiler
- Expert review of suspected areas
Pitfall: Don't guess and patch. Systematic analysis will find the root cause.
Example 5: Vague Bug Report
Input: "The app feels slow sometimes"
Classification: CONFUSION
Rationale: Insufficient information to determine domain. Could be Clear (known fix), Complicated (needs profiling), or Complex (user perception).
Strategy: Gather Information
- What operations feel slow?
- What device/network conditions?
- Can it be reproduced?
- What does "slow" mean: seconds or milliseconds?
Next Step: Reclassify once information gathered.
References
- Cynefin Framework - Dave Snowden's original framework
- Domain Transitions - How problems move between domains
- Software Engineering Applications - Domain patterns in software
Anti-Patterns
| Anti-Pattern | Description | Consequence |
|---|---|---|
| Complicated-izing Complexity | Applying analysis to emergent problems | Analysis paralysis, wasted effort |
| Simplifying Complicated | Skipping expert analysis for nuanced problems | Rework, technical debt |
| Analyzing Chaos | Forming committees during crisis | Prolonged outage, increased damage |
| Experimenting on Clear | Running A/B tests on solved problems | Wasted time, unnecessary risk |
| Guessing Confusion | Assuming domain without evidence | Wrong approach, compounded problems |