
generating-test-data

jeremylongshore
Meta · ai · testing · automation · data

About

This Claude skill generates realistic test data like users, products, and orders for software development. It's ideal for populating databases, creating test fixtures, or simulating user behavior in testing environments. Developers can trigger it with phrases like "generate test data" or "create fake users" to quickly produce sample data.

Documentation

Overview

This skill empowers Claude to generate realistic and diverse test data, streamlining software testing and development workflows. It leverages the test-data-generator plugin to produce data sets tailored to your specific needs, from user profiles to complex business transactions.

How It Works

  1. Identify Data Requirements: Claude analyzes your request to determine the type and volume of test data required (e.g., users, products, orders, custom schemas).
  2. Generate Data: Claude uses the test-data-generator plugin to create realistic test data based on your specifications.
  3. Present Data: Claude presents the generated data in a suitable format, such as JSON or a data file, ready for use in your testing environment.
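
For illustration, the sketch below (Python) shows the kind of specification a request such as "generate 50 test users" might be reduced to, and the shape of one resulting record. The field names and structure here are assumptions for this example, not the plugin's actual output schema.

    # Hypothetical mapping from a natural-language request to a data spec and
    # one generated record; every field name below is an assumption.
    spec = {
        "entity": "user",                        # type of data requested
        "count": 50,                             # volume of records
        "fields": ["name", "email", "address"],  # attributes to populate
    }

    sample_record = {
        "name": "Jordan Avery",
        "email": "jordan.avery@example.com",
        "address": "42 Elm Street, Springfield, OR 97477",
    }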

When to Use This Skill

This skill activates when you need to:

  • Generate a large number of realistic user profiles for testing authentication and authorization.
  • Create a dataset of products with varying attributes for testing e-commerce functionality.
  • Simulate order placements and transactions for performance testing and load testing.
  • Populate a database with realistic data for demonstration or training purposes.
  • Generate data that adheres to a specific schema or data model.

Examples

Example 1: Generating User Data

User request: "Generate 500 test users with realistic names, emails, and addresses."

The skill will:

  1. Invoke the test-data-generator plugin to create 500 user records.
  2. Populate each record with realistic names, email addresses, and physical addresses.
  3. Provide the generated data in JSON format.
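
As a rough sketch, the JSON returned for this request might resemble the records below (shown here being serialized from Python). The exact fields depend on what you ask for, so treat this as illustrative rather than the plugin's fixed schema.

    import json

    # Hypothetical shape of generated user records; field names are assumptions.
    users = [
        {
            "name": "Maria Oliveira",
            "email": "maria.oliveira@example.com",
            "address": "118 Harbor Lane, Portland, ME 04101",
        },
        # ...remaining records follow the same shape
    ]

    print(json.dumps(users, indent=2))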

Example 2: Creating Product Data

User request: "Create product test data including name, description, price, and category for 100 different products."

The skill will:

  1. Utilize the test-data-generator plugin to generate 100 product records.
  2. Populate each product with relevant details like name, description, price, and category.
  3. Deliver the data in a structured format suitable for database insertion.
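
If you then want that data in a database, a minimal sketch of the hand-off could look like the following. It assumes the generated records were saved to a products.json file and uses a local SQLite table; the file, table, and column names are assumptions for this example, not something the plugin sets up for you.

    import json
    import sqlite3

    # Load hypothetical generated product records and insert them into SQLite.
    with open("products.json") as f:
        products = json.load(f)

    conn = sqlite3.connect("test.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS products "
        "(name TEXT, description TEXT, price REAL, category TEXT)"
    )
    conn.executemany(
        "INSERT INTO products (name, description, price, category) "
        "VALUES (:name, :description, :price, :category)",
        products,
    )
    conn.commit()
    conn.close()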

Best Practices

  • Schema Definition: Provide a clear schema or data model when generating custom data to ensure accuracy and consistency.
  • Locale Considerations: Specify the desired locale when generating data that is sensitive to regional variations (e.g., names, addresses, phone numbers).
  • Seed Values: Use seed values for reproducible test data generation, ensuring consistency across multiple runs.
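
To make the locale and seed advice concrete, here is a small sketch using the standalone Python faker package, which is not part of this plugin but illustrates the same two ideas: a fixed seed makes generation reproducible across runs, and a locale controls region-specific names, addresses, and phone numbers.

    from faker import Faker

    Faker.seed(42)          # fixed seed: identical output on every run
    fake = Faker("de_DE")   # German locale for names and addresses

    for _ in range(3):
        print(fake.name(), "|", fake.address().replace("\n", ", "))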

Integration

This skill can be integrated with other plugins, such as database management tools, to directly populate databases with the generated test data. It can also be used in conjunction with API testing tools to generate realistic request payloads.
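
For example, a short script along these lines could replay generated records against an API under test; the orders.json file and the endpoint URL are assumptions for this sketch.

    import json
    import requests

    # Send each hypothetical generated order to the API being tested.
    with open("orders.json") as f:
        orders = json.load(f)

    for order in orders:
        resp = requests.post("https://staging.example.com/api/orders", json=order)
        resp.raise_for_status()   # stop early if the API rejects a payload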

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/test-data-generator

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/skills-migration-20251108-070147/plugins/testing/test-data-generator/skills/test-data-generator
ai · automation · claude-code · devops · marketplace · mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill