cascade-orchestrator
About
The Cascade Orchestrator skill coordinates complex workflows by chaining multiple micro-skills into cohesive processes. It enables sequential pipelines, parallel execution, and conditional branching, enhanced with multi-model routing and memory persistence. Use it for building sophisticated production workflows that require reliable coordination between different AI models and execution environments.
Documentation
Cascade Orchestrator (Enhanced)
Overview
Manages workflows (cascades) that coordinate multiple micro-skills into cohesive processes. This enhanced version integrates Codex sandbox iteration, multi-model routing, ruv-swarm coordination, and memory persistence across stages.
Philosophy: Composable Excellence
Complex capabilities emerge from composing simple, well-defined components.
Enhanced Capabilities:
- Codex Sandbox Iteration: Auto-fix failures in isolated environment (from audit-pipeline)
- Multi-Model Routing: Use Gemini/Codex based on stage requirements
- Swarm Coordination: Parallel execution via ruv-swarm MCP
- Memory Persistence: Maintain context across stages
- GitHub Integration: CI/CD pipeline automation
Key Principles:
- Separation of concerns (micro-skills execute, cascades coordinate)
- Reusability through composition
- Flexible orchestration patterns
- Declarative workflow definition
- Intelligent model selection
Cascade Architecture (Enhanced)
Definition Layer
Extended Stage Types:
stages:
  - type: sequential      # One after another
  - type: parallel        # Simultaneous execution
  - type: conditional     # Based on runtime conditions
  - type: codex-sandbox   # NEW: Iterative testing with auto-fix
  - type: multi-model     # NEW: Intelligent AI routing
  - type: swarm-parallel  # NEW: Coordinated via ruv-swarm
Enhanced Data Flow:
data_flow:
  - stage_output: previous stage results
  - shared_memory: persistent across stages
  - multi_model_context: AI-specific formatting
  - codex_sandbox_state: isolated test environment
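To make these data-flow channels concrete, here is a minimal Python sketch of a per-stage context object. The field names mirror the listing above, but the StageContext class and the advance helper are illustrative assumptions, not the orchestrator's actual API:

from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class StageContext:
    """Illustrative container for the data handed to each stage."""
    stage_output: Dict[str, Any] = field(default_factory=dict)         # previous stage results
    shared_memory: Dict[str, Any] = field(default_factory=dict)        # persists across stages
    multi_model_context: Dict[str, str] = field(default_factory=dict)  # AI-specific formatting
    codex_sandbox_state: Dict[str, Any] = field(default_factory=dict)  # isolated test environment

def advance(context: StageContext, result: Dict[str, Any]) -> StageContext:
    """Carry shared memory forward while replacing the per-stage output."""
    return StageContext(
        stage_output=result,
        shared_memory=context.shared_memory,
        multi_model_context=context.multi_model_context,
        codex_sandbox_state=context.codex_sandbox_state,
    )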
Advanced Error Handling:
error_handling:
  - retry_with_backoff
  - fallback_to_alternative
  - codex_auto_fix    # NEW: Auto-fix via Codex
  - model_switching   # NEW: Try a different AI
  - swarm_recovery    # NEW: Redistribute tasks
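As a minimal sketch of how the first two strategies above could be combined, here is one possible shape for a recovery wrapper. The callables stand in for the engine's stage execution and fallback skill; their names and signatures are assumptions for illustration:

import random
import time
from typing import Any, Callable, Optional

def run_with_recovery(
    run_stage: Callable[[], Any],                       # executes the stage's primary skill
    run_fallback: Optional[Callable[[], Any]] = None,   # alternative skill, if configured
    max_retries: int = 3,
    base_delay: float = 1.0,
) -> Any:
    """Retry with exponential backoff, then fall back to an alternative skill."""
    last_error: Optional[Exception] = None
    for attempt in range(max_retries):
        try:
            return run_stage()
        except Exception as err:                        # retry_with_backoff
            last_error = err
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
    if run_fallback is not None:                        # fallback_to_alternative
        return run_fallback()
    raise last_error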
Execution Engine (Enhanced)
Stage Scheduling with AI Selection:
for stage in cascade.stages:
    if stage.type == "codex-sandbox":
        execute_with_codex_iteration(stage)
    elif stage.type == "multi-model":
        model = select_optimal_model(stage.task)
        execute_on_model(stage, model)
    elif stage.type == "swarm-parallel":
        execute_via_ruv_swarm(stage)
    else:
        execute_standard(stage)
Codex Sandbox Iteration Loop:
def execute_with_codex_iteration(stage):
    """
    From audit-pipeline Phase 2: functionality-audit pattern.
    """
    results = execute_tests(stage.tests)
    for test in failed_tests(results):
        iteration = 0
        max_iterations = 5
        while test.failed and iteration < max_iterations:
            # Spawn Codex in an isolated sandbox to attempt a fix
            fix = spawn_codex_auto(
                task=f"Fix test failure: {test.error}",
                sandbox=True,
                context=test.context,
            )
            # Re-test
            test.result = rerun_test(test)
            iteration += 1
            if test.passed:
                apply_fix_to_main(fix)
                break
        if still_failed(test):
            escalate_to_user(test)
    return aggregate_results(results)
Multi-Model Routing:
def select_optimal_model(task):
    """
    Route to the best AI based on task characteristics.
    """
    if task.requires_large_context:
        return "gemini-megacontext"  # 1M tokens
    elif task.needs_current_info:
        return "gemini-search"       # Web grounding
    elif task.needs_visual_output:
        return "gemini-media"        # Imagen/Veo
    elif task.needs_rapid_prototype:
        return "codex-auto"          # Full Auto
    elif task.needs_alternative_view:
        return "codex-reasoning"     # GPT-5-Codex
    else:
        return "claude"              # Best overall
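For illustration, a task can be any object exposing the boolean flags checked above. The SimpleNamespace stand-in below is an assumption for the sketch, not the orchestrator's real task type:

from types import SimpleNamespace

task = SimpleNamespace(
    requires_large_context=True,   # e.g. whole-repository analysis
    needs_current_info=False,
    needs_visual_output=False,
    needs_rapid_prototype=False,
    needs_alternative_view=False,
)
assert select_optimal_model(task) == "gemini-megacontext"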
Enhanced Cascade Patterns
Pattern 1: Linear Pipeline with Multi-Model
cascade:
  name: enhanced-data-pipeline
  stages:
    - stage: extract
      model: auto-select
      skill: extract-data
    - stage: validate
      model: auto-select
      skill: validate-data
      error_handling:
        strategy: codex-auto-fix  # NEW
    - stage: transform
      model: codex-auto           # Fast prototyping
      skill: transform-data
    - stage: report
      model: gemini-media         # Generate visuals
      skill: generate-report
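A minimal sketch of how a linear pipeline like this could be driven, with each stage receiving the previous stage's output. The skill registry and stage dicts are assumptions for illustration, not the orchestrator's actual interfaces:

from typing import Any, Callable, Dict, List

def run_linear_pipeline(
    stages: List[Dict[str, Any]],
    skills: Dict[str, Callable[[Any], Any]],  # skill name -> callable
    initial_input: Any,
) -> Any:
    """Execute stages in order, feeding each stage the previous stage's output."""
    data = initial_input
    for stage in stages:
        skill = skills[stage["skill"]]
        data = skill(data)                    # stage_output flows forward
    return data

# Usage with placeholder skills standing in for extract-data and validate-data:
result = run_linear_pipeline(
    stages=[{"skill": "extract-data"}, {"skill": "validate-data"}],
    skills={
        "extract-data": lambda _: ["row1", "row2", ""],
        "validate-data": lambda rows: [r for r in rows if r],
    },
    initial_input=None,
)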
Pattern 2: Parallel Fan-Out with Swarm
cascade:
  name: code-quality-swarm
  stages:
    - stage: quality-checks
      type: swarm-parallel  # NEW: Via ruv-swarm
      skills:
        - lint-code
        - security-scan
        - complexity-analysis
        - test-coverage
      swarm_config:
        topology: mesh
        max_agents: 4
        strategy: balanced
    - stage: aggregate
      skill: merge-quality-reports
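The fan-out behaviour can be approximated locally with a thread pool. This sketch is only a stand-in for ruv-swarm coordination (the real stage type delegates to the MCP server), and the placeholder checks are assumptions:

from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Dict

def fan_out(
    checks: Dict[str, Callable[[str], Any]],  # check name -> callable
    target: str,
    max_agents: int = 4,
) -> Dict[str, Any]:
    """Run independent quality checks concurrently and aggregate their reports."""
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        futures = {name: pool.submit(check, target) for name, check in checks.items()}
        return {name: future.result() for name, future in futures.items()}

# Placeholder checks standing in for lint-code, security-scan, etc.
reports = fan_out(
    {
        "lint-code": lambda t: f"lint ok for {t}",
        "security-scan": lambda t: f"no issues found in {t}",
    },
    target="src/",
)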
Pattern 3: Codex Sandbox Iteration
cascade:
  name: test-and-fix
  stages:
    - stage: functionality-audit
      type: codex-sandbox  # NEW
      test_suite: comprehensive
      codex_config:
        mode: full-auto
        max_iterations: 5
        sandbox: true
      error_recovery:
        auto_fix: true
        escalate_after: 5
    - stage: validate-fixes
      skill: regression-tests
Pattern 4: Conditional with Model Switching
cascade:
  name: adaptive-workflow
  stages:
    - stage: analyze
      model: gemini-megacontext  # Large context
      skill: analyze-codebase
    - stage: decide
      type: conditional
      condition: ${analyze.quality_score}
      branches:
        high_quality:
          model: codex-auto   # Fast path
          skill: deploy-fast
        low_quality:
          model: multi-model  # Comprehensive path
          cascade: deep-quality-audit
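A rough sketch of how a conditional stage might resolve a ${stage.field} reference and pick a branch. It handles only the stage.field form shown in this pattern, and the threshold and helper names are assumptions about one possible implementation:

import re
from typing import Any, Dict

def resolve_reference(expr: str, outputs: Dict[str, Dict[str, Any]]) -> Any:
    """Resolve a '${stage.field}' reference against prior stage outputs."""
    match = re.fullmatch(r"\$\{(\w+)\.(\w+)\}", expr)
    if not match:
        raise ValueError(f"Unsupported condition expression: {expr}")
    stage, field = match.groups()
    return outputs[stage][field]

def choose_branch(condition: str, outputs: Dict[str, Dict[str, Any]], threshold: float = 0.8) -> str:
    """Map the resolved quality score onto this pattern's two branches."""
    score = resolve_reference(condition, outputs)
    return "high_quality" if score >= threshold else "low_quality"

# Example: the analyze stage reported a quality score of 0.65, so the low-quality branch runs.
branch = choose_branch("${analyze.quality_score}", {"analyze": {"quality_score": 0.65}})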
Pattern 5: Iterative with Memory
cascade:
  name: iterative-refinement
  stages:
    - stage: refactor
      model: auto-select
      skill: refactor-code
      memory: persistent  # NEW
    - stage: check-quality
      skill: quality-metrics
    - stage: repeat-decision
      type: conditional
      condition: ${quality < threshold}
      repeat: refactor     # Loop back
      max_iterations: 3
      memory_shared: true  # Context persists
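One way the repeat-decision loop could behave, sketched with callables standing in for the refactor-code and quality-metrics skills. The shared-memory dict models memory_shared: true; everything else is illustrative rather than the engine's real control flow:

from typing import Any, Callable, Dict

def iterative_refinement(
    refactor: Callable[[str, Dict[str, Any]], str],  # stand-in for refactor-code
    measure: Callable[[str], float],                 # stand-in for quality-metrics
    code: str,
    threshold: float,
    max_iterations: int = 3,
) -> str:
    """Loop back to the refactor stage until quality clears the threshold or the cap is hit."""
    memory: Dict[str, Any] = {}                      # persists across iterations
    for _ in range(max_iterations):
        code = refactor(code, memory)
        quality = measure(code)
        memory["last_quality"] = quality             # context available to the next pass
        if quality >= threshold:
            break
    return code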
Creating Enhanced Cascades
Step 1: Define with AI Considerations
Identify Model Requirements:
For each stage, determine:
- Large context needed? → Gemini
- Current web info needed? → Gemini Search
- Visual output needed? → Gemini Media
- Rapid prototyping needed? → Codex
- Testing with auto-fix? → Codex Sandbox
- Best overall reasoning? → Claude
Step 2: Design with Swarm Parallelism
When to Use Swarm:
- Multiple independent tasks
- Resource-intensive operations
- Need load balancing
- Want fault tolerance
Swarm Configuration:
swarm_config:
  topology: mesh | hierarchical | star
  max_agents: number
  strategy: balanced | specialized | adaptive
  memory_shared: true | false
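A small validation sketch for the swarm_config block above; the allowed values come straight from the listing, while the guard function itself is just an illustrative assumption:

VALID_TOPOLOGIES = {"mesh", "hierarchical", "star"}
VALID_STRATEGIES = {"balanced", "specialized", "adaptive"}

def validate_swarm_config(config: dict) -> dict:
    """Check a swarm_config block against the documented value sets."""
    if config.get("topology") not in VALID_TOPOLOGIES:
        raise ValueError(f"topology must be one of {sorted(VALID_TOPOLOGIES)}")
    if config.get("strategy") not in VALID_STRATEGIES:
        raise ValueError(f"strategy must be one of {sorted(VALID_STRATEGIES)}")
    if not isinstance(config.get("max_agents"), int) or config["max_agents"] < 1:
        raise ValueError("max_agents must be a positive integer")
    if not isinstance(config.get("memory_shared"), bool):
        raise ValueError("memory_shared must be true or false")
    return config

validate_swarm_config({"topology": "mesh", "max_agents": 4,
                       "strategy": "balanced", "memory_shared": True})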
Step 3: Add Codex Iteration for Quality
Pattern from audit-pipeline:
stages:
  - type: codex-sandbox
    tests: ${test_suite}
    fix_strategy:
      auto_fix: true
      max_iterations: 5
      sandbox_isolated: true
      network_disabled: true
      regression_check: true
Step 4: Enable Memory Persistence
Shared Memory Across Stages:
memory:
  persistence: enabled
  scope: cascade | global
  storage: mcp__ruv-swarm__memory
  keys:
    - analysis_results
    - intermediate_outputs
    - learned_patterns
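A toy in-process stand-in for the shared-memory store. The real storage backend is the mcp__ruv-swarm__memory tool, so the dict-backed class and its key-namespacing scheme are assumptions for illustration only:

from typing import Any, Dict

class CascadeMemory:
    """Dict-backed stand-in for cascade- or global-scoped shared memory."""

    def __init__(self, scope: str = "cascade") -> None:
        self.scope = scope
        self._store: Dict[str, Any] = {}

    def write(self, key: str, value: Any) -> None:
        self._store[f"{self.scope}:{key}"] = value

    def read(self, key: str, default: Any = None) -> Any:
        return self._store.get(f"{self.scope}:{key}", default)

# One stage writes analysis_results; a later stage reads it back.
memory = CascadeMemory(scope="cascade")
memory.write("analysis_results", {"quality_score": 0.91})
print(memory.read("analysis_results"))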
Enhanced Cascade Definition Format
cascade:
  name: cascade-name
  description: What this accomplishes
  version: 2.0.0

  config:
    multi_model: enabled
    swarm_coordination: enabled
    memory_persistence: enabled
    github_integration: enabled

  inputs:
    - name: input-name
      type: type
      description: description

  stages:
    - stage_id: stage-1
      name: Stage Name
      type: sequential | parallel | codex-sandbox | multi-model | swarm-parallel
      model: auto-select | gemini | codex | claude

      # For micro-skill execution
      skills:
        - skill: micro-skill-name
          inputs: {...}
          outputs: {...}

      # For Codex sandbox
      codex_config:
        mode: full-auto
        sandbox: true
        max_iterations: 5

      # For swarm execution
      swarm_config:
        topology: mesh
        max_agents: 4

      # For memory
      memory:
        read_keys: [...]
        write_keys: [...]

      error_handling:
        strategy: retry | codex-auto-fix | model-switch | swarm-recovery
        max_retries: 3
        fallback: alternative-skill

  memory:
    persistence: enabled
    scope: cascade

  github_integration:
    create_pr: on_success
    report_issues: on_failure
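Because the definition is declarative YAML, a loader can be as simple as the sketch below. It uses PyYAML and checks only a couple of required keys, which is an assumption about minimal validation rather than the orchestrator's real schema:

import yaml  # PyYAML

REQUIRED_TOP_LEVEL = ("name", "stages")

def load_cascade(path: str) -> dict:
    """Parse a cascade definition file and check its minimal required structure."""
    with open(path, "r", encoding="utf-8") as handle:
        document = yaml.safe_load(handle)
    cascade = document.get("cascade", {})
    missing = [key for key in REQUIRED_TOP_LEVEL if key not in cascade]
    if missing:
        raise ValueError(f"cascade definition missing keys: {missing}")
    return cascade

# cascade = load_cascade("enhanced-data-pipeline.yaml")  # hypothetical file name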
Real-World Enhanced Cascades
Example 1: Complete Development Workflow
cascade: complete-dev-workflow
stages:
  1. gemini-search:  "Research latest framework best practices"
  2. codex-auto:     "Rapid prototype with best practices"
  3. codex-sandbox:  "Test everything, auto-fix failures"
  4. gemini-media:   "Generate architecture diagrams"
  5. style-audit:    "Polish code to production standards"
  6. github-pr:      "Create PR with comprehensive report"
Example 2: Legacy Modernization
cascade: modernize-legacy-code
stages:
  1. gemini-megacontext: "Analyze entire 50K line codebase"
  2. theater-detection:  "Find all mocks and placeholders"
  3. [swarm-parallel]:
     - codex-auto:   "Complete implementations" (parallel)
     - gemini-media: "Document architecture"
  4. codex-sandbox:      "Test all changes with auto-fix"
  5. style-audit:        "Final polish"
  6. generate-pr:        "Create PR with before/after comparison"
Example 3: Bug Fix with RCA
cascade: intelligent-bug-fix
stages:
  1. root-cause-analyzer: "Deep RCA analysis"
  2. multi-model-decision:
     condition: ${rca.complexity}
     simple:  codex-auto (quick fix)
     complex: [
       gemini-megacontext (understand broader context),
       codex-reasoning    (alternative approaches),
       claude             (implement best approach)
     ]
  3. codex-sandbox:       "Test fix thoroughly"
  4. regression-suite:    "Ensure no breakage"
  5. github-issue-update: "Document fix with RCA report"
Integration Points
With Micro-Skills
- Executes micro-skills in stages
- Passes data between skills
- Handles skill errors gracefully
With Multi-Model System
- Routes stages to optimal AI
- Uses gemini-* skills for unique capabilities
- Uses codex-* skills for prototyping/fixing
- Uses Claude for best reasoning
With Audit Pipeline
- Incorporates theater → functionality → style pattern
- Uses Codex sandbox iteration from Phase 2
- Applies quality gates throughout
With Slash Commands
- Commands trigger cascades
- Parameter mapping from command to cascade
- Progress reporting to command interface
With Ruv-Swarm MCP
- Parallel stage coordination
- Memory persistence
- Neural training
- Performance tracking
Working with Enhanced Cascade Orchestrator
Invocation: "Create a cascade that [end goal] using [micro-skills] with [Codex/Gemini/swarm] capabilities"
The orchestrator will:
- Design workflow with optimal AI model selection
- Configure Codex sandbox for testing stages
- Set up swarm coordination for parallel stages
- Enable memory persistence across stages
- Integrate with GitHub for CI/CD
- Generate executable cascade definition
Advanced Features:
- Automatic model routing based on task
- Codex iteration loop for auto-fixing
- Swarm coordination for parallelism
- Memory sharing across stages
- GitHub PR/issue integration
- Performance monitoring and optimization
Version 2.0 Enhancements:
- Codex sandbox iteration pattern
- Multi-model intelligent routing
- Ruv-swarm MCP integration
- Memory persistence
- GitHub workflow automation
- Enhanced error recovery
Quick Install
/plugin add https://github.com/DNYoussef/ai-chrome-extension/tree/main/cascade-orchestrator
Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
when-chaining-agent-pipelines-use-stream-chain
Other: This skill enables chaining agent outputs as inputs in sequential or parallel pipelines for data-flow orchestration. Use it when you need to coordinate multiple agents in workflows with streaming data between them. It provides pipeline configuration, streaming flows, and performance metrics for intermediate-level agent coordination.
when-orchestrating-swarm-use-swarm-orchestration
Other: This skill provides advanced multi-agent swarm orchestration for complex workflows. It handles task decomposition, distributed execution across specialized agents, and result synthesis. Use it when you need to coordinate multiple AI agents to solve intricate problems requiring parallel processing.
sparc-methodology
Development: The SPARC methodology provides a systematic development framework with 17 specialized modes for comprehensive software development from specification to completion. It integrates multi-agent orchestration to handle complex development workflows including architecture design, testing, and deployment. Use this skill when you need structured guidance throughout the entire development lifecycle with automated agent coordination.
github-release-management
Other: This Claude Skill automates comprehensive GitHub release orchestration using AI swarm coordination for versioning, testing, deployment, and rollback management. It's ideal for developers needing to streamline their CI/CD pipeline with intelligent automation from changelog generation to multi-platform deployment. Use it when you want to coordinate complex release workflows across repositories with built-in rollback capabilities.
