Coordinator Helper

majiayu000
About

This skill manages autonomous task delegation through a coordinator daemon, handling task approval/rejection workflows and monitoring agent activities. Use it when developers need to delegate tasks, check status, review agent work, or manage GitHub-driven approval processes. It provides interactive task exploration with diff reviews and log monitoring capabilities.

Quick Install

Claude Code

Plugin command (recommended):
/plugin add https://github.com/majiayu000/claude-skill-registry

Git clone (alternative):
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/Coordinator\ Helper

Copy and paste one of these commands into Claude Code to install this skill.

Documentation

Coordinator Helper

Manage the coordinator daemon for autonomous task delegation, approval workflows, and monitoring agent work.

Quick Start

Most common usage:

# User says: "Delegate this bug fix to an agent"
# This skill will:
# 1. Check if coordinator daemon is running
# 2. Send the task via ailang messages
# 3. Monitor the task status
# 4. Guide you through approval when complete

# User says: "What tasks are pending?"
# This skill will:
# 1. Run ailang coordinator list --pending
# 2. Show interactive task explorer
# 3. Let you review diffs, logs, and approve/reject

When to Use This Skill

Invoke this skill when:

  • User asks to "delegate a task" or "send to coordinator"
  • User wants to "check task status" or "see what's running"
  • User asks to "review agent work" or "approve/reject tasks"
  • User says "start the coordinator" or "stop the daemon"
  • User wants to "clean up worktrees" or manage coordinator state

Available Scripts

scripts/check_daemon.sh

Check if the coordinator daemon is running and show status.

scripts/delegate_task.sh <type> <title> <description>

Send a task to the coordinator for autonomous execution.

scripts/quick_status.sh

Show a quick summary of pending, running, and completed tasks.
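To illustrate what quick_status.sh summarizes, the grouping logic can be sketched in Python. This is a hypothetical sketch over made-up task records; the real script queries the coordinator:

```python
from collections import Counter

def quick_status(tasks):
    """Count tasks by status, as quick_status.sh does conceptually."""
    counts = Counter(t["status"] for t in tasks)
    return {s: counts.get(s, 0) for s in ("pending", "running", "completed")}

# Hypothetical task records for illustration only
tasks = [
    {"id": "task-1", "status": "pending"},
    {"id": "task-2", "status": "running"},
    {"id": "task-3", "status": "completed"},
    {"id": "task-4", "status": "pending"},
]
print(quick_status(tasks))  # {'pending': 2, 'running': 1, 'completed': 1}
```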

Core Commands

Starting/Stopping the Daemon

# Start coordinator + server (recommended)
make services-start

# Or just coordinator
ailang coordinator start

# Check status
ailang coordinator status

# Stop all
make services-stop

Delegating Tasks

# Send a task
ailang messages send coordinator "Fix the null pointer bug in parser.go" \
  --title "Bug: Parser NPE" --from "claude-code" --type bug

Monitoring & Approving

# Interactive task list
ailang coordinator list

# Filter by status
ailang coordinator list --pending
ailang coordinator list --running

# Approve from list: select task, press [a]
# Or directly: ailang coordinator approve <task-id>

Task Lifecycle

pending → queued → running → pending_approval → completed
                          ↘ failed
                          ↘ rejected → [feedback] → pending (iteration 2) → running → ...

Feedback Loop (v0.6.4+): When a task is rejected, it can be re-triggered with feedback for up to 3 iterations. Claude uses --resume to continue with the full conversation context.
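The lifecycle above can be modeled as a small state machine. This is an illustrative sketch, not the coordinator's actual implementation; the state names mirror the diagram and the 3-iteration cap mirrors the feedback loop:

```python
# Allowed transitions, mirroring the lifecycle diagram (illustrative only)
TRANSITIONS = {
    "pending": {"queued"},
    "queued": {"running"},
    "running": {"pending_approval", "failed"},
    "pending_approval": {"completed", "rejected"},
    "rejected": {"pending"},  # feedback loop re-queues the task
}

MAX_ITERATIONS = 3  # feedback-loop cap (v0.6.4+)

class Task:
    def __init__(self):
        self.state = "pending"
        self.iteration = 1

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if self.state == "rejected" and new_state == "pending":
            if self.iteration >= MAX_ITERATIONS:
                raise ValueError("max feedback iterations reached")
            self.iteration += 1
        self.state = new_state

t = Task()
for s in ("queued", "running", "pending_approval", "rejected", "pending"):
    t.advance(s)
print(t.state, t.iteration)  # pending 2
```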

Unified Approvals (v0.6.5+)

When an agent has trigger_on_complete configured with auto_approve_handoffs: false, approvals are combined:

Approval Type   Description                On Approve
merge           Simple merge only          Merges code to dev branch
merge_handoff   Combined merge + handoff   Merges code AND triggers next agent

CLI display shows:

⏳ [1] [merge+handoff] → sprint-planner  task-12345678
       Title: Agent completed work on: Fix parser bug

What happens on approve:

  1. Code is merged to dev branch
  2. Handoff message is sent to next agent's inbox with session_id for continuity
  3. Worktree is cleaned up

What happens on reject:

  1. Worktree is preserved
  2. Feedback is sent to same agent's inbox
  3. Agent resumes with --resume <sessionId> (same context, same worktree)
  4. Iteration counter increments (max 3 attempts)

GitHub-Driven Workflow (v0.6.2+)

For tasks linked to GitHub issues, the coordinator supports a fully GitHub-native approval workflow.

How It Works

GitHub Issue
    ↓ (import)
DESIGN STAGE → posts design doc to GitHub → needs-design-approval label
    ↓ (human adds: design-approved)
SPRINT STAGE → posts sprint plan to GitHub → needs-sprint-approval label
    ↓ (human adds: sprint-approved)
IMPLEMENTATION → posts file changes → needs-merge-approval label
    ↓ (human adds: merge-approved)
Changes merged, issue auto-closed

GitHub Labels Reference

You Add This Label   What Happens
design-approved      Advances to sprint planning
sprint-approved      Advances to implementation
merge-approved       Merges changes, closes issue
needs-revision       Pauses pipeline for changes
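The label table can be read as a simple dispatch: given the labels a reviewer adds to an issue, the coordinator picks the next pipeline action. A hypothetical sketch of that lookup (the action names are illustrative, not the daemon's internals):

```python
# Label -> pipeline action, mirroring the table above (illustrative names)
LABEL_ACTIONS = {
    "design-approved": "advance_to_sprint",
    "sprint-approved": "advance_to_implementation",
    "merge-approved": "merge_and_close",
    "needs-revision": "pause_for_changes",
}

def next_action(issue_labels):
    """Return the first recognized action for an issue's labels, if any."""
    for label in issue_labels:
        if label in LABEL_ACTIONS:
            return LABEL_ACTIONS[label]
    return None

print(next_action(["bug", "design-approved"]))  # advance_to_sprint
```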

Quick Commands for GitHub Workflow

# Import GitHub issues as tasks
ailang messages import-github

# Check which issues are being watched
tail -100 ~/.ailang/logs/coordinator.log | grep -i "watching issue"

# Fallback: approve locally if labels aren't detected
ailang coordinator approve <task-id>

# Check pending approvals
ailang coordinator pending

Why Use GitHub Workflow?

  • Review in GitHub UI - See design docs and diffs alongside issue discussion
  • Mobile-friendly - Approve from GitHub mobile app
  • Team collaboration - Multiple reviewers can discuss in comments
  • Audit trail - All approvals tracked in issue history

Workflow

1. Delegate a Task

  1. Describe clearly - Be specific about what needs to be done
  2. Choose type - bug, feature, docs, research, refactor, test
  3. Send message - ailang messages send coordinator "..." --type bug

2. Review Completed Work

  1. Open explorer - ailang coordinator list
  2. Select task - Enter task number
  3. Review:
    • [c] View chat history (turn-by-turn conversation with tool calls)
    • [d] View diff
    • [f] Browse files
    • [l] View logs
  4. Decide:
    • [a] Approve - merge changes to dev branch
    • [r] Reject - prompt for feedback, re-trigger task with context

3. Task Routing

Type       Executor      Use Case
bug-fix    Claude Code   Code fixes
feature    Claude Code   New functionality
docs       Gemini        Documentation
research   Gemini        Investigation
script     Shell         Deterministic workflows (v0.6.4+)
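The routing table amounts to a type-to-executor lookup; a minimal sketch, with executor names taken from the table (the fallback default is an assumption, not documented behavior):

```python
# Task type -> executor, mirroring the routing table (illustrative)
EXECUTORS = {
    "bug-fix": "claude-code",
    "feature": "claude-code",
    "docs": "gemini",
    "research": "gemini",
    "script": "shell",  # v0.6.4+ deterministic workflows
}

def route(task_type):
    # Default executor here is an assumption for the sketch
    return EXECUTORS.get(task_type, "claude-code")

print(route("docs"))  # gemini
```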

4. Script Agents (v0.6.4+)

For deterministic tasks that don't need AI inference:

# In ~/.ailang/config.yaml
coordinator:
  agents:
    - id: echo-demo
      inbox: echo-demo
      invoke:
        type: script
        command: "./scripts/coordinator/echo_payload.sh"
        env_from_payload: true
        timeout: "1m"
      output_markers:
        - "ECHO_COMPLETE:"

Test the demo:

ailang messages send echo-demo '{"model": "gpt5", "benchmark": "fizzbuzz"}' \
  --title "Echo test" --from "user"

What happens:

  • JSON {"model": "gpt5"} → env var MODEL=gpt5
  • Nested JSON {"db": {"host": "x"}} → env var DB_HOST=x
  • Auto-injected: AILANG_TASK_ID, AILANG_MESSAGE_ID, AILANG_WORKSPACE
  • Cost: $0.00 (no AI inference)
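The payload-to-environment mapping described above can be sketched as a recursive flattening function. This is illustrative; the real mapping is done by the coordinator, and details like key casing for edge cases are assumptions:

```python
def flatten_payload(payload, prefix=""):
    """Flatten nested JSON keys into UPPER_SNAKE env-var names."""
    env = {}
    for key, value in payload.items():
        name = f"{prefix}{key}".upper()
        if isinstance(value, dict):
            # Nested objects get an underscore-joined prefix: db.host -> DB_HOST
            env.update(flatten_payload(value, prefix=f"{name}_"))
        else:
            env[name] = str(value)
    return env

print(flatten_payload({"model": "gpt5", "db": {"host": "x"}}))
# {'MODEL': 'gpt5', 'DB_HOST': 'x'}
```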

Auditing Agent Work

After a task completes, audit what the agent actually did before approving:

# View conversation per turn (shows agent reasoning + tool calls)
ailang coordinator logs <task-id> --limit 1000 --json | python3 -c "
import json, sys
data = json.load(sys.stdin)
events = data.get('events', [])
turns = {}; tools = {}
for evt in events:
    tn = evt.get('turn_num', 0); st = evt.get('stream_type', '')
    if st == 'text': turns.setdefault(tn, []).append(evt.get('text', ''))
    elif st == 'tool_use': tools.setdefault(tn, []).append(evt.get('tool_name', '?'))
for tn in sorted(turns.keys()):
    text = ''.join(turns[tn]).strip()
    if len(text) > 20:
        print(f'=== Turn {tn} (tools: {\", \".join(tools.get(tn, []))}) ===')
        print(text[:600]); print()
"

# View tool timeline with spans
ailang dashboard spans --task-id <task-id> --limit 200

# View git changes
ailang coordinator diff <task-id>

Audit checklist:

  • Did the agent modify internal/ code or just create examples/docs?
  • What model was used? (Check executor.model in spans - Haiku may be too weak)
  • Did it run ailang run (runtime test) or just ailang check (compile test)?
  • Did it mark tasks as "already working" without verifying the specific bug scenario?

Per-agent model config (v0.8.0+): Set model: opus in agent config for complex coding tasks:

agents:
  - id: sprint-executor
    model: opus

Troubleshooting

Daemon won't start: Check ailang coordinator status, then make services-stop && make services-start

Task stuck: View logs with [l] in task explorer

Worktree limit: git worktree list then git worktree remove <path> --force

GitHub labels not detected: The ApprovalWatcher may not be detecting labels. Use CLI fallback:

ailang coordinator pending    # List tasks waiting for approval
ailang coordinator approve <task-id>   # Approve locally (syncs label to GitHub)

No logs from ApprovalWatcher: Check coordinator logs for "GitHub approval watcher started". If missing, verify ~/.ailang/config.yaml has github_sync.enabled: true.

Resources

See resources/reference.md for complete CLI reference and advanced options.

Notes

  • Coordinator uses isolated git worktrees per task
  • Worktrees auto-cleanup after approval
  • Events stream to dashboard at http://localhost:1957
  • State stored in ~/.ailang/state/coordinator.db

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/coordinator-helper
