competitive-analysis
About
This skill generates a comprehensive competitive analysis pack, including competitor landscapes, battlecards, and positioning hypotheses. Developers can use it to understand why deals are lost, identify market alternatives, and create sales enablement materials. It's ideal for competitor research, win/loss analysis, and strategic product differentiation.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/majiayu000/claude-skill-registry
Or clone manually: git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/competitive-analysis
Copy and paste either command in Claude Code to install this skill.
Documentation
Competitive Analysis
Scope
Covers
- Mapping competitive alternatives (status quo, workarounds, analog/non-consumption, direct + indirect competitors)
- Building a competitor landscape grounded in customer decision criteria
- Turning analysis into actionable artifacts: positioning hypotheses, win themes, battlecards, and a monitoring plan
When to use
- “Do a competitive analysis / competitor landscape for our product.”
- “Why are we losing deals to <competitor>?”
- “What are the real alternatives if we didn’t exist?”
- “Help us differentiate and position vs competitors.”
- “Create sales battlecards and win/loss takeaways.”
When NOT to use
- You need market sizing / TAM/SAM/SOM as the primary output (different workflow)
- You don’t know the target customer, core use case, or the decision this analysis should support
- You only need a quick list of competitors (no synthesis, no artifacts)
- You’re seeking confidential or non-public competitor information (do not attempt)
Inputs
Minimum required
- Product + target customer segment + core use case (what job is being done)
- The decision to support (e.g., positioning, sales enablement, roadmap bets, pricing, market entry)
- 3–10 known competitors/alternatives (or “unknown—please map them”)
- Any available evidence (links, win/loss notes, call transcripts, customer quotes, pricing pages, reviews)
- Constraints: geography, ICP, price band, compliance/regulation (if relevant), time box
Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md.
- If answers aren’t available, proceed with explicit assumptions and label unknowns. Provide 2–3 plausible alternative scopes (narrow vs broad).
Outputs (deliverables)
Produce a Competitive Analysis Pack in Markdown (in-chat; or as files if requested):
- Context snapshot (decision, ICP, use case, constraints, time box)
- Competitive alternatives map (direct/indirect/status quo/workarounds/analog)
- Competitor landscape table (top 5–10) with evidence links + confidence
- Customer decision criteria + comparison matrix (customer POV)
- Differentiation & positioning hypotheses (why win, why lose, proof points)
- Win themes + loss risks (objections, landmines, traps)
- Battlecards (3–5 priority competitors)
- Monitoring plan (signals, cadence, owners, update triggers)
- Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
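A minimal sketch of how the pack might be laid out in Markdown (section headings below are illustrative; the canonical structure lives in references/TEMPLATES.md):

```markdown
# Competitive Analysis Pack: <Product>

## Context snapshot
Decision: <what this analysis should change> | ICP: <segment> | Use case: <job> | Time box: <e.g., 2 weeks>

## Competitive alternatives map
- Status quo / non-consumption: ...
- Workarounds / internal builds: ...
- Direct competitors: ...
- Indirect / analog alternatives: ...

## Competitor landscape (top 5-10)
| Alternative | Positioning | Target ICP | Pricing | Evidence | Confidence |
|---|---|---|---|---|---|

## Decision criteria & comparison matrix
## Differentiation & positioning hypotheses
## Win themes & loss risks
## Battlecards
## Monitoring plan
## Risks / Open questions / Next steps
```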
Workflow (8 steps)
1) Intake + decision framing
- Inputs: User context; references/INTAKE.md.
- Actions: Confirm the decision, ICP, use case, geography, and time box. Define what “good” looks like (who will use this and for what).
- Outputs: Context snapshot.
- Checks: A stakeholder can answer: “What decision will this analysis change?”
2) Map competitive alternatives (not just logos)
- Inputs: Use case + customer job.
- Actions: List what customers do instead: status quo, internal build, manual workaround, analog tools, agencies/outsourcing, and direct/indirect competitors. Identify the “true competitor” for the deal.
- Outputs: Competitive alternatives map + short notes per alternative.
- Checks: At least 1–2 non-obvious alternatives appear (workarounds / analog / non-consumption).
3) Select the focus set + collect evidence (time-boxed)
- Inputs: Alternatives map; available sources.
- Actions: Pick 5–10 focus alternatives (by frequency/impact). Gather publicly available facts (positioning, features, pricing, distribution, target ICP) and internal learnings (win/loss, sales notes). Track confidence and unknowns.
- Outputs: Evidence log + initial landscape table.
- Checks: Each competitor row has at least 2 evidence points (link/quote/data) or is explicitly labeled “low confidence”.
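For example, evidence log rows might look like the following (columns, sources, and dates are illustrative, not a required schema):

```markdown
| Alternative    | Fact / claim                           | Source (link or quote)   | Date       | Confidence |
|----------------|----------------------------------------|--------------------------|------------|------------|
| Competitor X   | Starts at $49/user/mo, annual only     | pricing page (link)      | 2024-05-10 | High       |
| Internal build | 2 lost deals cited "we'll build it"    | win/loss notes, Q1       | 2024-04-02 | Medium     |
| Competitor Y   | Claims SOC 2; no public report found   | website claim, unverified| 2024-05-11 | Low        |
```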
4) Build the comparison from the customer’s perspective
- Inputs: Focus set + evidence.
- Actions: Define 6–10 customer decision criteria (JTBD outcomes, constraints, trust, time-to-value, switching cost, price, ecosystem fit). Compare alternatives on criteria and surface “why they win”.
- Outputs: Decision criteria list + comparison matrix.
- Checks: Criteria are framed as customer outcomes/risks (not internal feature checklists).
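A sketch of what the matrix could look like (criteria and ratings are placeholders, framed as customer outcomes rather than internal features):

```markdown
| Decision criterion (customer POV)       | Us     | Competitor X | Status quo / manual   |
|-----------------------------------------|--------|--------------|-----------------------|
| Time to first value (< 1 week?)         | Strong | Medium       | Weak                  |
| Total cost incl. setup and maintenance  | Medium | Weak         | Strong                |
| Switching cost / migration risk         | Medium | Medium       | n/a (no switch)       |
| Fits existing ecosystem / integrations  | Strong | Strong       | Weak                  |
| Why they win anyway                     | -      | brand trust  | "free", no approvals  |
```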
5) Derive differentiation + positioning hypotheses
- Inputs: Matrix + wins/losses.
- Actions: Write 2–3 positioning hypotheses: (a) who we’re for, (b) the value we deliver, (c) why we’re different vs the true alternative, (d) proof points, (e) tradeoffs/non-goals.
- Outputs: Differentiation & positioning section.
- Checks: Each hypothesis names the competitive alternative it’s positioning against.
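One way to write each hypothesis so that all five parts (a-e) are explicit (wording is illustrative):

```markdown
### Positioning hypothesis 1
- For: <ICP segment> who need <job/outcome>
- We deliver: <primary value>
- Unlike: <the true alternative this positions against, e.g., status quo or Competitor X>
- Because: <proof points: metric, customer quote, capability with evidence link>
- Tradeoffs / non-goals: <what we deliberately do not do>
```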
6) Translate into win themes + battlecards
- Inputs: Positioning hypotheses + competitor notes.
- Actions: Create 3–5 win themes and 3–5 loss risks. Produce battlecards for priority competitors (how to win, landmines, objection handling, traps to avoid).
- Outputs: Win/loss section + battlecards.
- Checks: Battlecards contain do/don’t talk tracks and are usable in a live sales call.
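A battlecard skeleton along these lines keeps the do/don't talk tracks usable in a live call (all content is placeholder):

```markdown
## Battlecard: Competitor X
- When they show up: <deal stage, trigger, buyer persona>
- How we win: <2-3 win themes with proof points>
- Landmines they plant: <claims to pre-empt and how>
- Objection handling:
  - "They're cheaper" -> <do say> / <don't say>
  - "They integrate with Z" -> <do say> / <don't say>
- Traps to avoid: <comparisons or demos that favor them>
- Honest "when we lose": <scenarios where they are the better fit>
```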
7) Recommend actions (product, messaging, GTM)
- Inputs: Findings.
- Actions: Propose 5–10 actions: product bets, messaging changes, pricing/packaging, distribution, partnerships, and “stop doing” items. Tie each action to a win theme or loss risk.
- Outputs: Recommendations list with rationale and owners (if known).
- Checks: Each recommendation is specific enough to execute next week/month.
8) Monitoring + quality gate + finalize
- Inputs: Draft pack.
- Actions: Define monitoring signals, cadence, and update triggers. Run references/CHECKLISTS.md and score with references/RUBRIC.md. Add Risks/Open questions/Next steps.
- Outputs: Final Competitive Analysis Pack.
- Checks: Pack is shareable as-is; assumptions and confidence levels are explicit.
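The monitoring plan can be as simple as a table like this (signals, cadences, and owners are examples only):

```markdown
| Signal                                  | Source                      | Cadence   | Owner     | Update trigger                              |
|-----------------------------------------|-----------------------------|-----------|-----------|---------------------------------------------|
| Competitor X pricing/packaging change   | pricing page, changelog     | Monthly   | PMM       | Any change -> refresh battlecard            |
| New entrant in <category>               | launch sites, funding news  | Quarterly | Product   | Entrant wins a deal -> add to landscape     |
| Win/loss mentions of alternatives       | CRM loss reasons            | Monthly   | Sales ops | >20% losses to one alternative -> deep dive |
```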
Quality gate (required)
- Use references/CHECKLISTS.md and references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.
Examples
Example 1 (B2B SaaS): “We keep losing deals to Competitor X. Build a competitive alternatives map and a battlecard for X.”
Expected: alternatives map (incl. status quo), decision criteria, X battlecard, win themes/loss risks, and a monitoring plan.
Example 2 (Consumer subscription): “We’re repositioning for a new segment. Analyze alternatives and propose 2 positioning hypotheses.”
Expected: comparison matrix by customer criteria and two clear positioning options with proof points and tradeoffs.
Boundary example: “List every competitor in our industry worldwide.”
Response: narrow the scope (ICP, geography, category) and propose a focused set + monitoring plan; otherwise the output becomes a low-signal directory of logos.
Related Skills
algorithmic-art
Meta: This Claude Skill creates original algorithmic art using p5.js with seeded randomness and interactive parameters. It generates .md files for algorithmic philosophies, plus .html and .js files for interactive generative art implementations. Use it when developers need to create flow fields, particle systems, or other computational art while avoiding copyright issues.
subagent-driven-development
Development: This skill executes implementation plans by dispatching a fresh subagent for each independent task, with code review between tasks. It enables fast iteration while maintaining quality gates through this review process. Use it when working on mostly independent tasks within the same session to ensure continuous progress with built-in quality checks.
executing-plans
Design: Use the executing-plans skill when you have a complete implementation plan to execute in controlled batches with review checkpoints. It loads and critically reviews the plan, then executes tasks in small batches (default 3 tasks) while reporting progress between each batch for architect review. This ensures systematic implementation with built-in quality control checkpoints.
cost-optimization
Other: This Claude Skill helps developers optimize cloud costs through resource rightsizing, tagging strategies, and spending analysis. It provides a framework for reducing cloud expenses and implementing cost governance across AWS, Azure, and GCP. Use it when you need to analyze infrastructure costs, right-size resources, or meet budget constraints.
