creative-learnings

majiayu000

About

This skill documents learnings from creative test cycles to capture institutional knowledge. It analyzes performance data to identify patterns, updates a performance database, and generates new hypotheses. Use it after tests to systematically inform future creative strategy.

Quick Install

Claude Code

Plugin Command (Recommended):

/plugin add https://github.com/majiayu000/claude-skill-registry

Git Clone (Alternative):

git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/creative-learnings

Copy and paste one of these commands into Claude Code to install this skill.

Documentation


name: creative-learnings
description: Document learnings from creative tests, including patterns of what worked and what didn't, updating the angle/hook performance database, and identifying new hypotheses to test. Use after test cycles to capture institutional knowledge and inform future creative strategy.

Creative Learnings

Document and systematize learnings from creative tests.

Process

Step 1: Analyze Recent Test Results

Gather Test Data:

  • All creatives tested in period
  • Performance metrics (CPA, CTR, CVR)
  • Spend and volume
  • Test duration

Categorize Results:

  • Clear winners (scale)
  • Promising (iterate)
  • Clear losers (kill)
  • Inconclusive (retest)
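
A minimal sketch of how this categorization could be scripted, assuming each test record carries spend and conversions and that you have a target CPA; the thresholds, field names, and helper are illustrative assumptions, not part of the skill:

```python
from dataclasses import dataclass

# Illustrative thresholds; tune these to your account's economics.
TARGET_CPA = 50.0      # assumed target cost per acquisition
MIN_SPEND = 500.0      # minimum spend before a verdict is trusted
MIN_CONVERSIONS = 10   # minimum conversions for a readable signal

@dataclass
class CreativeTest:
    name: str
    spend: float
    conversions: int

    @property
    def cpa(self) -> float:
        return self.spend / self.conversions if self.conversions else float("inf")

def categorize(test: CreativeTest) -> str:
    """Bucket a creative into the four verdicts used in this step."""
    if test.spend < MIN_SPEND or test.conversions < MIN_CONVERSIONS:
        return "inconclusive (retest)"
    if test.cpa <= TARGET_CPA * 0.8:
        return "clear winner (scale)"
    if test.cpa <= TARGET_CPA * 1.1:
        return "promising (iterate)"
    return "clear loser (kill)"

tests = [
    CreativeTest("UGC hook v3", spend=1200, conversions=35),
    CreativeTest("Static testimonial", spend=300, conversions=4),
]
for t in tests:
    print(f"{t.name}: CPA ${t.cpa:.2f} -> {categorize(t)}")
```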

Step 2: Extract Patterns

What Worked - Analyze:

  • Common elements in winners
  • Hook types that performed
  • Body structures that won
  • CTA formats that converted
  • Visual styles that succeeded
  • Avatars that responded

What Didn't Work - Analyze:

  • Common failure points
  • Hook types that failed
  • Angles that didn't resonate
  • Visual styles that flopped
  • Audiences that didn't respond
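
To make the pattern extraction concrete, here is a small sketch that counts how often each tagged element (hook, angle, style) appears among winners versus losers; the tag fields are assumptions about how your test log is labeled:

```python
from collections import Counter

# Illustrative test log: each creative carries descriptive tags plus its verdict.
creatives = [
    {"verdict": "winner", "hook": "curiosity", "angle": "time-saving", "style": "UGC"},
    {"verdict": "winner", "hook": "curiosity", "angle": "social proof", "style": "UGC"},
    {"verdict": "loser",  "hook": "fear",      "angle": "time-saving", "style": "studio"},
    {"verdict": "loser",  "hook": "greed",     "angle": "discount",    "style": "studio"},
]

def element_counts(verdict: str, field: str) -> Counter:
    """Count how often each value of `field` appears among creatives with `verdict`."""
    return Counter(c[field] for c in creatives if c["verdict"] == verdict)

for field in ("hook", "angle", "style"):
    print(f"{field}: winners={dict(element_counts('winner', field))} "
          f"losers={dict(element_counts('loser', field))}")
```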

Step 3: Update Performance Database

Angle Tracker:

| Angle | Tests | Wins | Win Rate | Best CPA | Notes |
|-------|-------|------|----------|----------|-------|
| [Angle 1] | X | X | X% | $X | [Learning] |

Hook Type Tracker:

| Hook Type | Tests | Wins | Win Rate | Notes |
|-----------|-------|------|----------|-------|
| Greed | X | X | X% | [Learning] |
| Emotion | X | X | X% | [Learning] |

Framework Tracker:

| Framework | Tests | Wins | Win Rate | Notes |
|-----------|-------|------|----------|-------|
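
These trackers can be maintained by hand, but a small helper keeps the win-rate math consistent. A sketch assuming the angle tracker lives in a CSV with the columns shown above; the filename and helper are assumptions:

```python
import csv
from pathlib import Path

TRACKER = Path("angle_tracker.csv")  # assumed location of the tracker file

def record_result(angle: str, won: bool, cpa: float) -> None:
    """Increment tests/wins for an angle, then recompute win rate and best CPA."""
    if TRACKER.exists():
        with TRACKER.open(newline="") as f:
            rows = list(csv.DictReader(f))
    else:
        rows = []
    by_angle = {r["Angle"]: r for r in rows}
    row = by_angle.setdefault(angle, {"Angle": angle, "Tests": "0", "Wins": "0",
                                      "Win Rate": "0%", "Best CPA": "", "Notes": ""})
    row["Tests"] = str(int(row["Tests"]) + 1)
    row["Wins"] = str(int(row["Wins"]) + (1 if won else 0))
    row["Win Rate"] = f"{int(row['Wins']) / int(row['Tests']):.0%}"
    best = row["Best CPA"].lstrip("$")
    if won and (not best or cpa < float(best)):
        row["Best CPA"] = f"${cpa:.2f}"
    with TRACKER.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Angle", "Tests", "Wins",
                                               "Win Rate", "Best CPA", "Notes"])
        writer.writeheader()
        writer.writerows(by_angle.values())

record_result("time-saving", won=True, cpa=34.29)
```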

Step 4: Identify New Hypotheses

From Winners:

  • What can we double down on?
  • What variations should we test?
  • What audiences should we expand to?

From Losers:

  • What should we stop doing?
  • What assumptions were wrong?
  • What variables need isolation?

From Market:

  • What are competitors doing?
  • What trends are emerging?
  • What gaps exist?
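
One lightweight way to keep these hypotheses actionable is to record each one with the fields the next-cycle template expects. A sketch with illustrative field names and example values:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str             # the testable claim
    based_on: str              # which learning inspired it (winner / loser / market)
    test: str                  # what creative to produce
    expected: str              # predicted outcome
    priority: str = "medium"   # high / medium / experimental

backlog = [
    Hypothesis(
        statement="Curiosity hooks will also lift CTR on the social-proof angle",
        based_on="Curiosity hooks appeared in 2 of 2 winners last cycle",
        test="Pair the top social-proof body with a curiosity hook",
        expected="CTR up, CPA within 10% of target",
        priority="high",
    ),
]
order = {"high": 0, "medium": 1, "experimental": 2}
for h in sorted(backlog, key=lambda x: order[x.priority]):
    print(f"[{h.priority}] {h.statement}")
```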

Step 5: Output Learnings Document

## CREATIVE LEARNINGS: [Date Range]

### TEST SUMMARY

**Tests Conducted:**
- Total creatives tested: [#]
- Winners identified: [#]
- Win rate: [X%]
- Total test spend: $[X]

**By Category:**
| Type | Tested | Winners | Win Rate |
|------|--------|---------|----------|
| New angles | X | X | X% |
| Hook variations | X | X | X% |
| Body iterations | X | X | X% |
| CTA tests | X | X | X% |

---

### KEY LEARNINGS

**LEARNING 1: [Title]**
- What we tested: [Description]
- Result: [Outcome]
- Why it worked/failed: [Analysis]
- Application: [How to use this]
- Confidence: [High/Medium/Low]

**LEARNING 2: [Title]**
...

---

### WHAT'S WORKING

**Winning Angles:**
1. [Angle] - Why: [Explanation]
2. [Angle] - Why: [Explanation]

**Winning Hook Types:**
1. [Type] - Performance: [Metrics]
2. [Type] - Performance: [Metrics]

**Winning Formats:**
- [Format description and why]

**Winning Visual Styles:**
- [Style description and why]

**Winning Avatars:**
- [Avatar responding best]

---

### WHAT'S NOT WORKING

**Failed Angles:**
1. [Angle] - Why failed: [Analysis]
   - Action: [Stop/Revise/Retest]

**Failed Hook Types:**
1. [Type] - Why failed: [Analysis]

**Failed Formats:**
- [What and why]

**Avoid:**
- [Thing to stop doing]
- [Thing to stop doing]

---

### PATTERN ANALYSIS

**Successful Patterns:**
- [Pattern 1]: Seen in X winners
- [Pattern 2]: Seen in X winners

**Failure Patterns:**
- [Pattern 1]: Seen in X losers
- [Pattern 2]: Seen in X losers

**Correlations Found:**
- [Variable A] + [Variable B] = [Outcome]

---

### ANGLE/HOOK DATABASE UPDATE

**New Additions:**
| Element | Type | Status | Win Rate | Notes |
|---------|------|--------|----------|-------|
| [New angle] | Angle | Proven | X% | [Note] |
| [New hook] | Hook | Testing | - | [Note] |

**Status Changes:**
- [Element]: [Old status] → [New status]

**Retired:**
- [Element]: Reason: [Why removed]

---

### HYPOTHESES FOR NEXT CYCLE

**High Priority Tests:**
1. **Hypothesis:** [Statement]
   - Based on: [Learning that inspired this]
   - Test: [What to create]
   - Expected outcome: [Prediction]

2. **Hypothesis:** [Statement]
   ...

**Medium Priority Tests:**
1. [Hypothesis and test plan]

**Experimental:**
1. [Wild card ideas worth trying]

---

### COMPETITIVE INSIGHTS

**What competitors are doing:**
- [Observation 1]
- [Observation 2]

**Opportunities identified:**
- [Gap we can exploit]

---

### RECOMMENDATIONS

**Creative Strategy Adjustments:**
1. [Recommendation]
2. [Recommendation]

**Process Improvements:**
1. [Recommendation]

**Resource Allocation:**
- More focus on: [Area]
- Less focus on: [Area]

---

### NEXT STEPS

**Immediate (This Week):**
1. [ ] [Action item]
2. [ ] [Action item]

**Short-term (This Month):**
1. [ ] [Action item]

**Share With Team:**
- Key insight to communicate: [Summary]
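
The document above is normally filled in by hand, but the summary block can also be generated straight from the test log. A minimal sketch with illustrative values:

```python
def render_summary(period: str, tested: int, winners: int, spend: float) -> str:
    """Fill in the header and TEST SUMMARY block of the learnings document."""
    win_rate = winners / tested if tested else 0.0
    return "\n".join([
        f"## CREATIVE LEARNINGS: {period}",
        "",
        "### TEST SUMMARY",
        "",
        "**Tests Conducted:**",
        f"- Total creatives tested: {tested}",
        f"- Winners identified: {winners}",
        f"- Win rate: {win_rate:.0%}",
        f"- Total test spend: ${spend:,.0f}",
    ])

print(render_summary("Nov 1-30", tested=12, winners=3, spend=8400))
```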

Building Institutional Knowledge

Document Everything:

  • Even "obvious" learnings
  • Capture the "why" not just "what"
  • Include context and conditions

Make It Searchable:

  • Consistent naming conventions
  • Tags/categories
  • Regular updates
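
For the naming-convention point, a small helper that builds filenames from a title and tags keeps the archive greppable; the convention itself is an assumption, not something the skill prescribes:

```python
import re
from datetime import date

def learning_filename(title: str, tags: list[str], when: date | None = None) -> str:
    """Build a consistent filename, e.g. 2025-01-15_curiosity-hooks-beat-greed-hooks_angle-hook.md."""
    when = when or date.today()
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    tag_part = "-".join(sorted(t.lower() for t in tags))
    return f"{when:%Y-%m-%d}_{slug}_{tag_part}.md"

print(learning_filename("Curiosity hooks beat greed hooks", ["hook", "angle"]))
```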

Share and Apply:

  • Team access to learnings
  • Reference in creative briefs
  • Update SOPs based on learnings

Source: General creative optimization best practices

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/creative-learnings

Related Skills

content-collections (Meta)

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.


evaluating-llms-harness (Testing)

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


cloudflare-turnstile (Meta)

This skill provides comprehensive guidance for implementing Cloudflare Turnstile as a CAPTCHA-alternative bot protection system. It covers integration for forms, login pages, API endpoints, and frameworks like React/Next.js/Hono, while handling invisible challenges that maintain user experience. Use it when migrating from reCAPTCHA, debugging error codes, or implementing token validation and E2E tests.


llamaindex (Meta)

LlamaIndex is a data framework for building RAG-powered LLM applications, specializing in document ingestion, indexing, and querying. It provides key features like vector indices, query engines, and agents, and supports over 300 data connectors. Use it for document Q&A, chatbots, and knowledge retrieval when building data-centric applications.
