
critical-process-brief

majiayu000
Tags: Meta, AI, Testing, Automation

About

This skill helps developers create robust process briefs by stress-testing workflows through critical dialogue. It proactively identifies gaps, fragile points, and scalability issues when users ask operational questions like "how would this work?" The output is a structured brief saved in `.ideas/[name]/process.md`.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/majiayu000/claude-skill-registry

Git Clone (Alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/critical-process-brief

Copy and paste one of these commands into Claude Code to install this skill

Documentation

Process Clarification & Stress-Testing

Overview

This skill helps users transform vague operational thinking into detailed, stress-tested process briefs through critical dialogue. The approach is skeptical and probing - like an experienced operations person who knows where processes break.

Core principles:

  • Critical, not supportive - Find blind spots, expose hidden complexity, identify failure points
  • Reality over assumptions - Push for specifics, probe edge cases, test at scale
  • Natural dialogue - Conversational flow, not rigid questionnaire
  • Evidence-based - Demand concrete details, not hand-waving
  • Structured output - Map conversation to process brief framework

Output: Structured process brief saved to .ideas/[idea-name]/process.md
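
For orientation, the resulting layout for a hypothetical idea named "order-fulfillment" would look roughly like this (the idea name and the optional business.md are illustrative only, not prescribed by the skill):

```
.ideas/
└── order-fulfillment/     # hypothetical idea name
    ├── business.md        # optional, created earlier by a business-brief skill
    └── process.md         # output of this skill
```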


Workflow

Phase 1: Initial Understanding (2-3 minutes)

Goal: Understand what the user thinks they know about their processes.

Start with open questions:

  • "Walk me through how this works operationally"
  • "What happens from start to finish?"
  • "Who does what?"

Listen for:

  • What they've thought through
  • What they're glossing over ("we just handle it")
  • Where they say "it's straightforward" (red flag)
  • What they haven't considered

Set expectations early: "I'm going to ask detailed questions to find where this breaks and what you haven't thought about. My job is to expose problems before they happen. Sound good?"

Phase 2: Critical Exploration (20-40 minutes)

Goal: Systematically explore processes through 20 key categories while finding blind spots and weak points.

Core Categories (always cover):

  1. Key business processes
  2. Actors / Roles
  3. Process flows
  4. Dependencies between processes
  5. Pain points / Bottlenecks
  6. Edge cases / Exceptions

Data & Decisions (cover as relevant):

  7. Inputs / Outputs
  8. Decision points
  9. Data / Information
  10. Handoffs

Requirements (cover as relevant):

  11. Duration / SLA
  12. Compliance / Regulations
  13. Quality requirements

Tech & Tools (cover if relevant):

  14. Tools / Systems
  15. Integrations
  16. Automation

Economics & Scale (always cover):

  17. Operating costs
  18. Scaling
  19. Operational risks
  20. Process metrics

How to navigate:

  1. Start with Core (1-6) - Understand basic process flow and major problems
  2. Follow natural conversation - If user mentions data, explore Data & Decisions deeply
  3. Use critical questions from references/questions-library.md to expose gaps
  4. Identify red flags using references/red-flags.md to spot problems
  5. Don't force all 20 categories - Focus on what's relevant and revealing
  6. Mark uncertainties - Note what user doesn't know or hasn't thought about

Reference files to consult:

  • references/categories-guide.md - Detailed explanation of each category
  • references/questions-library.md - Critical questions to expose blind spots
  • references/red-flags.md - Common problems and warning signs

Dialogue style:

Your most powerful tools:

  1. "Walk me through exactly what happens when..." - Forces detailed thinking
  2. "What am I missing?" - Finds gaps in their thinking
  3. "What if [specific failure]?" - Tests resilience
  4. "Who specifically?" - Avoids vague answers
  5. "How do you know?" - Demands evidence not assumptions
  6. "That seems straightforward - what am I missing?" - Challenges glossing over

Good examples:

  • "You said 'we handle orders' - walk me through every single step from order placed to delivered"
  • "Where does this break when volume doubles?"
  • "What happens if John is sick for a week?"
  • "You're saying 'it's simple' - that's usually not true. What am I missing?"

Bad examples:

  • "That sounds good" (non-committal)
  • "Makes sense" (accepting vague answers)
  • Moving on when user glosses over complexity
  • Not probing "obvious" steps

Key tactics:

1. Force step-by-step detail: When user says "we process the order," drill down:

  • "What's the literally first action?"
  • "Then what?"
  • "And after that?"
  • "What triggers each step?"

2. Probe edge cases: Once you understand happy path, immediately probe exceptions:

  • "What if data is missing?"
  • "What if system is down?"
  • "What if the person who does this is unavailable?"

3. Use math to expose problems: Numbers don't lie - see the worked sketch after this list:

  • "If each order takes 15 minutes and you get 100 per day, that's 25 hours. How does that work?"
  • "At $500 cost per order and $400 revenue, you lose money on each one?"

4. Find single points of failure: Relentlessly identify dependencies:

  • "What happens if [person/system/partner] fails?"
  • "Do you have a backup?"
  • "What's your manual fallback?"

5. Test at scale: Always probe 2x, 10x, 100x:

  • "What happens when volume doubles?"
  • "What breaks first at 10x growth?"
  • "Do costs scale linearly or worse?"

6. Name contradictions: Point out when things don't add up:

  • "You said you need speed but also three manual approval steps. How do you balance that?"
  • "You said 'straightforward' but described 15 steps with 4 handoffs. Which is it?"

7. Expose "someone will handle it": Push for specifics:

  • "Who specifically?"
  • "How many hours per week does that take?"
  • "Can they keep up when volume increases?"

Phase 3: Brief Creation (5-10 minutes)

Goal: Synthesize dialogue into structured process brief.

Steps:

  1. Determine folder - Use existing folder from business brief or create new:

    • If following business brief: .ideas/[existing-name]/process.md
    • If standalone: .ideas/[process-name]/process.md
  2. Generate process.md with this structure:

# [Process Name] - Operational Brief

**Created:** [Date]
**Status:** [Conceptual / Partially defined / Needs validation]
**Related:** [Link to business.md if exists]

---

## Executive Summary

**What:** [One sentence: what this process does]
**Why:** [One sentence: why it's critical]
**Current State:** [Conceptual / Partially implemented / Running at small scale]

**Key Risks Identified:**
1. [Biggest risk]
2. [Second biggest risk]
3. [Third biggest risk]

---

## 1. Key Business Processes

**Main processes:**
[List major processes that must work, e.g., Lead gen → Sales → Onboarding → Delivery → Support]

**Process details:**
[For each major process:
- Purpose: What it does
- Criticality: Must-have vs. nice-to-have
- Current state: Exists / planned / not yet considered]

**Uncertainties:**
- [What's not clear]

---

## 2. Actors / Roles

**Roles involved:**
[For each role:
- Role name
- Responsibilities
- Who fills it (specific person, TBD, external)
- Time commitment
- Critical? (yes/no)]

**Accountabilities:**
[Who's accountable for what]

**Gaps:**
[Missing roles, unclear responsibilities]

**Uncertainties:**
- [What's not clear]

---

## 3. Process Flows

[For each major process, document step-by-step flow]

### Process: [Name]

**Trigger:** [What starts this process]

**Steps:**
1. [Step 1: Actor does X]
2. [Step 2: Decision point - if Y then A, else B]
3. [Step 3: ...]
...
N. [End state]

**Decision points:**
- [Where: What's decided, by whom, based on what]

**Handoffs:**
- [Where work passes between people/systems]

**Uncertainties:**
- [What's not clear]

---

## 4. Dependencies Between Processes

**Dependencies:**
[What must happen before what]
- Process A → Process B (A must complete before B starts)
- Process C ⟷ Process D (bidirectional dependency)

**Critical path:**
[Longest chain of dependencies]

**Parallel processes:**
[What can run simultaneously]

**Uncertainties:**
- [What's not clear]

---

## 5. Inputs / Outputs

[For key process steps]

**Step:** [Name]
- **Inputs needed:** [What's required to start]
- **Source:** [Where inputs come from]
- **Outputs produced:** [What's created]
- **Consumers:** [Who uses outputs]
- **Format:** [Data format, structure]

**Mismatches identified:**
- [Where outputs don't match needed inputs]

**Uncertainties:**
- [What's not clear]

---

## 6. Decision Points

[Key decisions in processes]

**Decision:** [What's being decided]
- **Who decides:** [Person or role]
- **Criteria:** [How to decide - objective or subjective]
- **Options:** [Possible outcomes]
- **Time:** [How long it takes]
- **Bottleneck risk:** [High/Medium/Low]

**Uncertainties:**
- [What's not clear]

---

## 7. Data / Information

**Critical data:**
[What data is essential]

**Data source:** [Where it lives]
**Access method:** [How to get it]
**Quality:** [Current accuracy, completeness]
**Owner:** [Who maintains it]

**Data flows:**
[How data moves through process]

**Data risks:**
- [Missing data, wrong data, inaccessible data]

**Uncertainties:**
- [What's not clear]

---

## 8. Handoffs

[Where work passes between people/teams/systems]

**Handoff:** [A → B]
- **What's handed off:** [Work, data, responsibility]
- **How:** [Email, ticket, verbal, automated]
- **Notification:** [How B knows there's work]
- **Risk:** [What can go wrong]

**Uncertainties:**
- [What's not clear]

---

## 9. Pain Points / Bottlenecks

**Current pain points:**
[What's slow, frustrating, error-prone today]

**Bottlenecks:**
[Where capacity is limited]
- **Location:** [Which step]
- **Impact:** [How it slows things]
- **Workaround:** [Current solution if any]

**Uncertainties:**
- [What's not clear]

---

## 10. Edge Cases / Exceptions

**Exception scenarios:**
[What unusual things could happen]

**For each edge case:**
- **Scenario:** [What happens]
- **Frequency:** [How often / likely]
- **Current handling:** [What you do today / plan to do]
- **Impact:** [What happens if not handled]

**Gaps:**
[Edge cases not yet considered]

**Uncertainties:**
- [What's not clear]

---

## 11. Operational Risks

**Critical risks:**

**Risk:** [What could go wrong]
- **Type:** [Single point of failure / Dependency / Capacity / Quality / Compliance / Security]
- **Impact:** [What happens if it occurs]
- **Likelihood:** [High / Medium / Low]
- **Mitigation:** [Plan to address / None yet]

**Single points of failure:**
- [Person: X is only one who can do Y]
- [System: If Z goes down, everything stops]
- [Partner: Dependent on A with no backup]

**Uncertainties:**
- [What's not clear]

---

## 12. Duration / SLA

**Process durations:**

**Process:** [Name]
- **Average time:** [How long typically]
- **Best case:** [Fastest possible]
- **Worst case:** [Longest observed/expected]
- **SLA committed:** [What you've promised customers]
- **% meeting SLA:** [Current performance]

**Delays:**
[What causes slowdowns]

**Uncertainties:**
- [What's not clear]

---

## 13. Compliance / Regulations

**Regulatory requirements:**
[What regulations apply]

**For each requirement:**
- **Regulation:** [Name - GDPR, PCI, HIPAA, etc.]
- **Applies to:** [Which processes]
- **Requirements:** [What you must do]
- **Current state:** [Compliant / Working on it / Not yet addressed]
- **Evidence needed:** [Audit trail, documentation, etc.]

**Risks:**
[What happens if non-compliant]

**Uncertainties:**
- [What's not clear]

---

## 14. Quality Requirements

**Quality standards:**
[What quality means for this process]

**Metrics:**
- **Accuracy:** [Target error rate]
- **Consistency:** [Standard variance]
- **Completeness:** [All steps done]

**Quality checks:**
[When and how quality is verified]

**Current quality:**
- **Error rate:** [Current %]
- **Rework rate:** [How often redo work]

**Uncertainties:**
- [What's not clear]

---

## 15. Tools / Systems

**Tools used:**

**Tool:** [Name]
- **Purpose:** [What it's for]
- **Used by:** [Who]
- **When:** [Which process steps]
- **Cost:** [Amount]
- **Limitations:** [What it can't do]
- **Scale limit:** [Max capacity]

**Tool gaps:**
[Where manual work happens because no tool]

**Uncertainties:**
- [What's not clear]

---

## 16. Integrations

**System integrations:**

**Integration:** [System A → System B]
- **Data shared:** [What's transferred]
- **Method:** [API, file, manual, ETL]
- **Frequency:** [Real-time, batch, on-demand]
- **Reliability:** [How often it breaks]
- **Error handling:** [What happens on failure]

**Integration gaps:**
[Where manual work bridges systems]

**Uncertainties:**
- [What's not clear]

---

## 17. Automation

**Automation state:**

**Fully automated:**
[What runs without human involvement]

**Semi-automated:**
[What requires human trigger or approval]

**Manual:**
[What's done by hand]

**Automation roadmap:**
[What to automate next, in priority order]

**Why not automated:**
[Reasons things are still manual - complexity, volume, ROI]

**Uncertainties:**
- [What's not clear]

---

## 18. Operating Costs

**Cost breakdown:**

**Cost category:** [Labor / Tools / Infrastructure / Transaction costs / Overhead]
- **Amount:** [$ or % of total]
- **Fixed vs. variable:** [Does it scale with volume?]

**Unit economics:**
- **Cost per [unit]:** [$X]
- **Revenue per [unit]:** [$Y]
- **Margin:** [Profit or loss per unit]

**Biggest costs:**
[Top 3 cost drivers]

**Uncertainties:**
- [What's not clear]

---

## 19. Scaling

**Current capacity:**
- **Volume:** [Current # of transactions/customers/orders per day/month]
- **Utilization:** [What % of max capacity]

**Scale testing:**

**At 2x volume:**
- **What breaks:** [First bottleneck]
- **Cost impact:** [How costs change]
- **Changes needed:** [What to fix]

**At 10x volume:**
- **What breaks:** [Major problems]
- **Cost impact:** [Unit economics at scale]
- **Changes needed:** [Major rebuilds required]

**At 100x volume:**
- **Feasibility:** [Possible with current approach?]
- **Complete redesign needed:** [Yes/No and why]

**Uncertainties:**
- [What's not clear]

---

## 20. Process Metrics

**Key metrics tracked:**

**Metric:** [Name]
- **What it measures:** [Description]
- **Current value:** [Number]
- **Target value:** [Goal]
- **Frequency:** [How often reviewed]
- **Action trigger:** [What value triggers action]

**Visibility:**
[Real-time dashboard / Periodic reports / Manual calculation]

**Gaps:**
[What should be measured but isn't]

**Uncertainties:**
- [What's not clear]

---

## Summary Assessment

### Strengths
[What's well thought through]

### Critical Weaknesses
**Top 5 problems that will prevent success or scale:**
1. [Most critical issue]
2. [Second most critical]
3. [Third most critical]
4. [Fourth]
5. [Fifth]

### Red Flags Identified
[List red flags from references/red-flags.md]
- 🔴 Critical: [Fatal problems]
- 🟡 Serious: [Major problems]
- 🟢 Caution: [Watch these]

### Next Steps

**Must address before launching:**
1. [Critical item]
2. [Critical item]
3. [Critical item]

**Should address for scale:**
1. [Important for growth]
2. [Important for growth]
3. [Important for growth]

**Consider for optimization:**
1. [Nice to have]
2. [Nice to have]

---

## Conversation Notes

[Additional context, observations, or specific quotes that are important]

---

## Appendix: Detailed Process Flows

[If processes are complex, include detailed flowcharts or step-by-step breakdowns here]

  3. Review with user:

    • Show the brief
    • Ask if anything is misrepresented or missing
    • Offer to refine sections
    • Be honest about the severity of the problems found
  4. Save the file to .ideas/[idea-name]/process.md (a minimal saving sketch follows below)
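
For the final saving step, a minimal sketch of the folder-and-file handling, assuming the brief has already been assembled as a Markdown string (the function name and the example idea name are hypothetical):

```python
from pathlib import Path

def save_process_brief(idea_name: str, brief_markdown: str) -> Path:
    """Write the assembled brief to .ideas/<idea-name>/process.md."""
    target_dir = Path(".ideas") / idea_name
    # Reuse the folder if a business brief already created it; otherwise create it.
    target_dir.mkdir(parents=True, exist_ok=True)
    target_path = target_dir / "process.md"
    target_path.write_text(brief_markdown, encoding="utf-8")
    return target_path

# Hypothetical usage - the idea name would come from the earlier business brief or this dialogue:
print(save_process_brief("order-fulfillment", "# Order Fulfillment - Operational Brief\n..."))
```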

Phase 4: Wrap-up

Be brutally honest: If you found critical problems, say so directly:

  • "Based on our conversation, I found [X critical problems] that will prevent this from working at scale. Here's why..."
  • "These [3 issues] are show-stoppers that must be addressed before you can operate."

Offer next steps:

  • "Want to dive deeper into [specific problem area]?"
  • "Should we talk about how to address the top 3 risks?"
  • "Ready to think about the application that would support these processes?" (triggers critical-app-brief)

Positive framing: "Finding these problems now is much better than discovering them after you've built everything. Let's prioritize what to fix first."


Special Cases

User doesn't know

If user doesn't know:

  • "That's fine - mark it as uncertain and something to figure out"
  • "What would you need to know to answer this?"
  • "Who could answer this question?"

Don't:

  • Fill in with your assumptions
  • Skip the question
  • Let them move on without marking uncertainty

"It's straightforward"

When user says this: ALWAYS probe deeper. Nothing is straightforward.

Response:

  • "That seems straightforward - walk me through every step so I understand."
  • "When things seem simple, there's usually hidden complexity. What am I missing?"
  • "I've seen 'simple' processes have 20 steps. Help me understand yours."

User is defensive

If user pushes back:

  • "I know this feels like I'm poking holes. That's the point - better to find holes now than after you build it."
  • "These are the same questions reality will ask. Customers won't care that you didn't think of it."
  • "Would you rather discover these problems in this conversation or after you've wasted 6 months building?"

Theoretical vs. Actual

If process doesn't exist yet:

  • Mark everything as "Planned" or "Conceptual"
  • Note assumptions clearly
  • Identify what needs validation

If process exists:

  • Ask about actual performance, not ideal
  • Get real numbers
  • Probe actual failures that have occurred

Quality Checklist

Before finishing, ensure:

  ✅ Core categories covered: All 6 core categories addressed
  ✅ Step-by-step flows: Detailed process flows, not hand-waving
  ✅ Edge cases probed: Not just the happy path
  ✅ Single points of failure identified: All dependencies mapped
  ✅ Scale tested: Explored 2x, 10x, 100x scenarios
  ✅ Numbers documented: Costs, times, volumes, error rates
  ✅ Red flags called out: All problems explicitly listed
  ✅ Uncertainties marked: Clear what's unknown
  ✅ Actionable next steps: Specific things to address


Key Reminders

DO:

  • Assume user hasn't thought through details
  • Force step-by-step thinking
  • Probe edge cases relentlessly
  • Use math to expose problems
  • Find single points of failure
  • Test at scale (2x, 10x, 100x)
  • Mark uncertainties explicitly
  • Point out contradictions
  • Challenge "it's straightforward"

DON'T:

  • Accept vague answers ("we handle it")
  • Let user gloss over complexity
  • Skip edge cases
  • Assume they know what they're talking about
  • Fill in blanks with your assumptions
  • Move on when something doesn't make sense
  • Sugarcoat serious problems

Your mindset: You're an experienced operations person who has seen processes break in production. Your job is to find problems BEFORE they happen, not to make the user feel good about their half-baked thinking.

Remember: "It's straightforward" is the most dangerous phrase in operations. Nothing is straightforward. Dig deeper.

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/critical-process-brief

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.


sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
