_old-earnings-orchestrator
About
This skill orchestrates batch earnings analysis by processing a company's 8-K filings chronologically. It runs a prediction-to-attribution feedback loop, tracking accuracy across filings while querying Neo4j for structured data. Use it when you need automated, sequential analysis of historical earnings reports with integrated learning from past predictions.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/majiayu000/claude-skill-registry
Or clone manually: git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/_old-earnings-orchestrator
Copy and paste the command into Claude Code to install this skill.
Documentation
Earnings Orchestrator
Goal: Process all 8-K earnings filings for a company chronologically, building a closed-loop prediction→attribution system with accuracy feedback.
Thinking: ALWAYS use ultrathink for maximum reasoning depth.
Input: Company ticker (e.g., "GBX", "AAPL")
Architecture
Master Orchestrator (this skill)
│
├─→ Query Neo4j for all 8-Ks with Item 2.02
│
└─→ For each filing chronologically:
    │
    ├─→ First filing: /earnings-attribution only
    │
    └─→ Subsequent filings:
        │
        ├─→ /earnings-prediction (predict before seeing outcome)
        │
        └─→ /earnings-attribution (verify & learn)
Workflow
Use TodoWrite to track progress through all filings.
Step 1: Query 8-K Universe
Query Neo4j for all 8-Ks with Item 2.02 (earnings) for the ticker:
MATCH (r:Report)-[pf:PRIMARY_FILER]->(c:Company)
WHERE c.ticker = $ticker
AND r.formType = '8-K'
AND any(item IN r.items WHERE item CONTAINS 'Item 2.02')
AND pf.daily_stock IS NOT NULL
RETURN r.accessionNo AS accession_no,
c.ticker AS ticker,
c.name AS company_name,
r.created AS filing_datetime,
pf.daily_stock AS daily_return,
pf.daily_macro AS macro_adj_return,
r.items AS items
ORDER BY r.created ASC
Extract list of accession numbers with filing dates.
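A minimal sketch of running this query with the official neo4j Python driver; the Bolt endpoint and credentials shown are placeholders, not part of the skill:

```python
# Sketch only: assumes a local Bolt endpoint and placeholder credentials.
from neo4j import GraphDatabase

QUERY = """
MATCH (r:Report)-[pf:PRIMARY_FILER]->(c:Company)
WHERE c.ticker = $ticker
  AND r.formType = '8-K'
  AND any(item IN r.items WHERE item CONTAINS 'Item 2.02')
  AND pf.daily_stock IS NOT NULL
RETURN r.accessionNo AS accession_no,
       r.created     AS filing_datetime,
       pf.daily_stock AS daily_return
ORDER BY r.created ASC
"""

def fetch_8k_universe(ticker, uri="bolt://localhost:7687",
                      auth=("neo4j", "password")):  # assumed connection details
    with GraphDatabase.driver(uri, auth=auth) as driver:
        with driver.session() as session:
            records = session.run(QUERY, ticker=ticker)
            # Keep only what the orchestrator needs downstream.
            return [(r["accession_no"], r["filing_datetime"]) for r in records]
```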
Step 2: Check Processing State
Read earnings-analysis/8k_fact_universe.csv and earnings-analysis/predictions.csv to determine:
- Which filings are already completed
- Which predictions exist (and whether they need attribution)
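A sketch of that state check, assuming both CSVs key on an accession_no column, 8k_fact_universe.csv carries a completed flag, and predictions.csv gains actual_* columns after attribution:

```python
# Column names are assumptions based on the tracking files described above.
import csv

def load_state(universe_path="earnings-analysis/8k_fact_universe.csv",
               predictions_path="earnings-analysis/predictions.csv"):
    with open(universe_path, newline="") as f:
        completed = {row["accession_no"] for row in csv.DictReader(f)
                     if row.get("completed", "").upper() == "TRUE"}
    with open(predictions_path, newline="") as f:
        rows = list(csv.DictReader(f))
    predicted = {r["accession_no"] for r in rows}
    # Predictions still missing their actual_* columns need attribution.
    needs_attribution = {r["accession_no"] for r in rows
                         if not r.get("actual_direction")}
    return completed, predicted, needs_attribution
```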
Step 3: Process Each Filing
For each filing in chronological order:
First Filing (no prior history)
/earnings-attribution {accession_no}
Reason: No historical baseline to predict from. Attribution only.
Subsequent Filings
1. /earnings-prediction {accession_no}
→ Outputs prediction to predictions.csv
2. /earnings-attribution {accession_no}
→ Verifies prediction accuracy
→ Updates predictions.csv with actual_* columns
→ Stores company-specific learnings
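The per-filing dispatch logic can be summarized as a small planning loop; this is a hypothetical sketch in which `invoke` order is expressed as the slash commands the orchestrator would issue:

```python
# Hypothetical planner: `filings` is the chronologically sorted list from
# Step 1, `completed` is the set loaded in Step 2.
def plan_invocations(filings, completed):
    for i, (accession_no, _filing_dt) in enumerate(filings):
        if accession_no in completed:
            continue  # already processed in a prior run
        if i == 0:
            # First filing: no history to predict from, attribution only.
            yield f"/earnings-attribution {accession_no}"
        else:
            yield f"/earnings-prediction {accession_no}"
            yield f"/earnings-attribution {accession_no}"
```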
Step 4: Update Tracking
After each filing:
- Mark completed=TRUE in 8k_fact_universe.csv
- Update predictions.csv with actual outcomes
- Compute running accuracy metrics
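One way to flip the completed flag, assuming the same column names as in Step 2:

```python
# Sketch: rewrites 8k_fact_universe.csv with the filing marked complete.
import csv

def mark_completed(accession_no, path="earnings-analysis/8k_fact_universe.csv"):
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        rows, fields = list(reader), reader.fieldnames
    for row in rows:
        if row["accession_no"] == accession_no:
            row["completed"] = "TRUE"
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
```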
Accuracy Tracking
Direction Accuracy
correct = predicted_direction == actual_direction
Magnitude Accuracy
magnitude_correct = predicted_magnitude == actual_magnitude
Running Metrics
After processing, compute:
- Total filings processed
- Predictions made (excludes first filing)
- Direction accuracy: correct / predictions
- Magnitude accuracy: magnitude_correct / predictions
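A sketch of the running-metrics computation over predictions.csv rows (loaded as dicts), scoring only rows that have both a prediction and an attributed actual:

```python
def running_metrics(rows):
    scored = [r for r in rows
              if r.get("predicted_direction") and r.get("actual_direction")]
    n = len(scored)
    dir_hits = sum(r["predicted_direction"] == r["actual_direction"]
                   for r in scored)
    mag_hits = sum(bool(r.get("predicted_magnitude"))
                   and r.get("predicted_magnitude") == r.get("actual_magnitude")
                   for r in scored)
    return {
        "predictions": n,
        "direction_accuracy": dir_hits / n if n else None,
        "magnitude_accuracy": mag_hits / n if n else None,
    }
```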
Resume Logic
If a filing is partially processed:
- Check predictions.csv for existing prediction
- Check Companies/{TICKER}/{accession}.md for attribution
- Skip completed steps, resume where needed
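A sketch of the resume decision for a single filing, using the paths from the Output Files table and the state sets loaded in Step 2:

```python
from pathlib import Path

def next_step(ticker, accession_no, predicted, needs_attribution):
    report = Path(f"earnings-analysis/Companies/{ticker}/{accession_no}.md")
    if report.exists() and accession_no not in needs_attribution:
        return "done"
    if accession_no in predicted:
        return "attribution"   # prediction exists, attribution still missing
    return "prediction"        # start from the top for this filing
```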
Output Files
| File | Purpose |
|---|---|
| earnings-analysis/predictions.csv | All predictions + actuals |
| earnings-analysis/8k_fact_universe.csv | Processing status tracker |
| earnings-analysis/Companies/{TICKER}/{accession}.md | Attribution reports |
| earnings-analysis/Companies/{TICKER}/learnings.md | Company-specific patterns |
| earnings-analysis/orchestrator-runs/{ticker}_{timestamp}.md | Run summary |
Run Summary Format
After completing a ticker, write summary to earnings-analysis/orchestrator-runs/{ticker}_{YYYYMMDD}.md:
# {TICKER} Orchestrator Run - {DATE}
## Summary
- Total 8-Ks found: N
- Already completed: N
- Processed this run: N
- First filing (attribution only): {accession}
## Prediction Accuracy
- Predictions made: N
- Direction correct: N/M (X%)
- Magnitude correct: N/M (X%)
## Filings Processed
| # | Accession | Date | Prediction | Actual | Correct |
|---|-----------|------|------------|--------|---------|
| 1 | xxx | 2023-01-01 | (first) | +5.2% | N/A |
| 2 | xxx | 2023-04-01 | up/medium | up/large | dir: Y, mag: N |
...
## Learnings Applied
- [List company-specific patterns discovered]
Invocation Examples
Process all filings for a ticker
/earnings-orchestrator GBX
Process with limit
/earnings-orchestrator GBX --limit 5
Processes only the first 5 unprocessed filings.
Resume interrupted run
/earnings-orchestrator GBX --resume
Continues from last processed filing.
Error Handling
- Neo4j query fails: Log error, exit gracefully
- Skill invocation fails: Log error, mark filing as failed, continue to next
- Missing data: Note in summary, continue with available data
- Rate limits: Built-in retry with exponential backoff
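One way to implement the retry-with-exponential-backoff behaviour; the retryable exception type and limits are assumptions, not part of the skill:

```python
import random
import time

def with_backoff(fn, max_attempts=5, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:  # narrow this to rate-limit errors in practice
            if attempt == max_attempts:
                raise
            # Double the delay each attempt, plus jitter to avoid thundering herd.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```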
Arguments
| Arg | Type | Default | Description |
|---|---|---|---|
| ticker | string | required | Company ticker to process |
| --limit | int | none | Max filings to process |
| --resume | flag | false | Resume from last completed |
| --dry-run | flag | false | Show plan without executing |
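A hypothetical argument-parsing sketch matching the table above:

```python
import argparse

parser = argparse.ArgumentParser(prog="earnings-orchestrator")
parser.add_argument("ticker", help="Company ticker to process")
parser.add_argument("--limit", type=int, default=None,
                    help="Max filings to process")
parser.add_argument("--resume", action="store_true",
                    help="Resume from last completed filing")
parser.add_argument("--dry-run", action="store_true",
                    help="Show plan without executing")
args = parser.parse_args()
```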
Version 1.0 | 2026-01-16
GitHub Repository
Related Skills
algorithmic-art
Meta: This Claude Skill creates original algorithmic art using p5.js with seeded randomness and interactive parameters. It generates .md files for algorithmic philosophies, plus .html and .js files for interactive generative art implementations. Use it when developers need to create flow fields, particle systems, or other computational art while avoiding copyright issues.
subagent-driven-development
Development: This skill executes implementation plans by dispatching a fresh subagent for each independent task, with code review between tasks. It enables fast iteration while maintaining quality gates through this review process. Use it when working on mostly independent tasks within the same session to ensure continuous progress with built-in quality checks.
executing-plans
Design: Use the executing-plans skill when you have a complete implementation plan to execute in controlled batches with review checkpoints. It loads and critically reviews the plan, then executes tasks in small batches (default 3 tasks) while reporting progress between each batch for architect review. This ensures systematic implementation with built-in quality control checkpoints.
cost-optimization
Other: This Claude Skill helps developers optimize cloud costs through resource rightsizing, tagging strategies, and spending analysis. It provides a framework for reducing cloud expenses and implementing cost governance across AWS, Azure, and GCP. Use it when you need to analyze infrastructure costs, right-size resources, or meet budget constraints.
