config-skills
About
This skill provides configuration module patterns for LlamaFarm using Pydantic v2 models with JSONSchema validation. It covers YAML/TOML/JSON processing, schema generation, and custom validation logic. Use this when implementing structured configuration systems with type-safe loading and validation.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/majiayu000/claude-skill-registry

Or clone manually: git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/config-skills

Copy and paste the recommended command in Claude Code to install this skill.
Documentation
Config Skills for LlamaFarm
Specialized patterns and best practices for the LlamaFarm configuration module (config/).
Module Overview
The config module provides YAML/TOML/JSON configuration loading with JSONSchema validation:
| File | Purpose |
|---|---|
| datamodel.py | Generated Pydantic v2 models from JSONSchema |
| schema.yaml | Source JSONSchema with $ref references |
| compile_schema.py | Dereferences $ref to create schema.deref.yaml |
| generate_types.py | Generates Python types via datamodel-codegen |
| validators.py | Custom validators beyond JSONSchema capabilities |
| helpers/loader.py | Config loading, saving, and format detection |
| helpers/generator.py | Template-based config generation |
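As a rough illustration of what helpers/loader.py is responsible for, the sketch below shows extension-based format detection and dispatch to the matching parser. The function name and exact behavior are assumptions for illustration only; the real loader's API may differ.

```python
# Hypothetical sketch of extension-based format detection; not the actual
# helpers/loader.py interface.
import json
import tomllib  # stdlib on Python 3.11+
from pathlib import Path
from typing import Any

from ruamel.yaml import YAML


def load_config(path: str | Path) -> dict[str, Any]:
    """Load a config file, choosing the parser from the file extension."""
    path = Path(path)
    text = path.read_text(encoding="utf-8")
    suffix = path.suffix.lower()
    if suffix in {".yaml", ".yml"}:
        return YAML(typ="safe").load(text)
    if suffix == ".toml":
        return tomllib.loads(text)
    if suffix == ".json":
        return json.loads(text)
    raise ValueError(f"Unsupported config format: {suffix}")
```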
Links to Shared Skills
This module follows Python conventions from the shared skills:
| Topic | Link | Key Relevance |
|---|---|---|
| Patterns | python-skills/patterns.md | Pydantic v2, dataclasses |
| Typing | python-skills/typing.md | Type hints, constrained types |
| Testing | python-skills/testing.md | Pytest fixtures, temp files |
| Errors | python-skills/error-handling.md | Custom exceptions |
| Security | python-skills/security.md | Path traversal prevention |
Framework-Specific Checklists
| Checklist | Description |
|---|---|
| pydantic.md | Pydantic v2 configuration patterns, nested models, constraints |
| jsonschema.md | JSONSchema generation, dereferencing, validation |
Tech Stack
- Python: 3.11+
- Pydantic: v2 with ConfigDict, Field, constrained types
- JSONSchema: Draft-07 with $ref dereferencing via jsonref
- YAML: ruamel.yaml for comment-preserving read/write
- Code Generation: datamodel-codegen for schema-to-Pydantic
Key Patterns
Generated Pydantic Models
The datamodel.py file is auto-generated from JSONSchema:
# Generated by datamodel-codegen from schema.deref.yaml
from typing import Any

from pydantic import BaseModel, ConfigDict, Field, conint, constr


class Database(BaseModel):
    model_config = ConfigDict(extra="forbid")

    name: constr(pattern=r"^[a-z][a-z0-9_]*$", min_length=1, max_length=50)
    type: Type  # enum generated elsewhere in datamodel.py
    config: dict[str, Any] | None = Field(None, description="Database-specific configuration")
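A generated model is typically exercised through Pydantic v2's model_validate. The snippet below is a self-contained sketch of that flow using a simplified stand-in model; the real models and the root config model live in datamodel.py and their names may differ.

```python
from typing import Any

from pydantic import BaseModel, ConfigDict, ValidationError, constr


class DatabaseExample(BaseModel):
    """Simplified stand-in for the generated Database model (illustration only)."""
    model_config = ConfigDict(extra="forbid")

    name: constr(pattern=r"^[a-z][a-z0-9_]*$", min_length=1, max_length=50)
    config: dict[str, Any] | None = None


raw = {"name": "primary_db", "config": {"host": "localhost"}}
try:
    db = DatabaseExample.model_validate(raw)  # raises ValidationError on schema violations
    print(db.name)
except ValidationError as exc:
    print(exc.errors())
```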
Custom Validators for Cross-Field Constraints
JSONSchema draft-07 cannot express all constraints. Custom validators extend validation:
from typing import Any


def validate_llamafarm_config(config_dict: dict[str, Any]) -> None:
    """Validate constraints beyond JSONSchema (uniqueness, references)."""
    # Check for duplicate prompt names
    prompt_names = [p.get("name") for p in config_dict.get("prompts", [])]
    duplicates = [name for name in prompt_names if prompt_names.count(name) > 1]
    if duplicates:
        raise ValueError(f"Duplicate prompt set names: {', '.join(set(duplicates))}")
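Calling the validator after the schema-level check is straightforward; the call site below is an illustrative sketch rather than the module's actual entry point.

```python
config = {
    "prompts": [
        {"name": "summarize"},
        {"name": "summarize"},  # duplicate name: not expressible in draft-07 JSONSchema
    ]
}

try:
    validate_llamafarm_config(config)
except ValueError as exc:
    print(f"Config rejected: {exc}")  # Config rejected: Duplicate prompt set names: summarize
```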
Comment-Preserving YAML with ruamel.yaml
Configuration files preserve user comments when modified:
from ruamel.yaml import YAML
from ruamel.yaml.comments import CommentedMap  # round-trip type that carries comments


def _get_ruamel_yaml() -> YAML:
    """Return a round-trip YAML instance that keeps quotes and comments intact."""
    yaml_instance = YAML()  # default typ="rt" (round-trip) preserves comments
    yaml_instance.preserve_quotes = True
    yaml_instance.indent(mapping=2, sequence=4, offset=2)
    return yaml_instance
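A typical round-trip edit using the helper above looks like the sketch below; the file path and the edited key are placeholders, assuming a YAML config on disk.

```python
from pathlib import Path

path = Path("llamafarm.yaml")          # placeholder path
yaml = _get_ruamel_yaml()              # helper defined above

data = yaml.load(path.read_text(encoding="utf-8"))  # CommentedMap: comments are retained
data["version"] = "v2"                              # illustrative in-place edit

with path.open("w", encoding="utf-8") as fh:
    yaml.dump(data, fh)                              # comments and quoting survive the write
```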
Directory Structure
config/
├── pyproject.toml # UV-managed dependencies
├── schema.yaml # Source JSONSchema with $ref
├── schema.deref.yaml # Dereferenced schema (generated)
├── datamodel.py # Pydantic models (generated)
├── compile_schema.py # Schema compilation script
├── generate_types.py # Type generation script
├── validators.py # Custom validation beyond JSONSchema
├── validate_config.py # CLI validation wrapper
├── __init__.py # Public API exports
├── helpers/
│ ├── loader.py # Config loading/saving
│ └── generator.py # Template-based generation
├── templates/
│ └── default.yaml # Default config template
└── tests/
├── conftest.py # Shared fixtures
└── test_*.py # Test modules
Workflow: Schema Changes
When modifying the configuration schema:
- Edit schema.yaml (or referenced schemas like ../rag/schema.yaml)
- Run nx run generate-types to compile and generate types (see the sketch after this list)
- Update validators.py if new cross-field constraints are needed
- Test with uv run pytest config/tests/
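The dereference step performed by compile_schema.py can be approximated with jsonref, which the tech stack lists for $ref resolution. The snippet below is a minimal sketch under that assumption, not the script's actual implementation; cross-file YAML refs (such as ../rag/schema.yaml) would additionally need a custom loader.

```python
# Minimal sketch of $ref dereferencing with jsonref; the real compile_schema.py
# may differ in structure, options, and how it loads referenced YAML schemas.
import json
from pathlib import Path

import jsonref
from ruamel.yaml import YAML

source = Path("schema.yaml")
yaml = YAML(typ="safe")

schema = yaml.load(source.read_text(encoding="utf-8"))
# Resolve $ref pointers relative to the source schema's location.
deref = jsonref.replace_refs(schema, base_uri=source.resolve().as_uri())

# Materialize proxy objects via a JSON round-trip before dumping plain YAML.
plain = json.loads(jsonref.dumps(deref))
with Path("schema.deref.yaml").open("w", encoding="utf-8") as fh:
    yaml.dump(plain, fh)
```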
Common Commands
# Generate types from schema
nx run generate-types
# Validate a config file
uv run python config/validate_config.py path/to/llamafarm.yaml --verbose
# Run tests
uv run pytest config/tests/ -v
# Lint and format
ruff check config/ --fix
ruff format config/
Related Skills
algorithmic-art
Meta: This Claude Skill creates original algorithmic art using p5.js with seeded randomness and interactive parameters. It generates .md files for algorithmic philosophies, plus .html and .js files for interactive generative art implementations. Use it when developers need to create flow fields, particle systems, or other computational art while avoiding copyright issues.
subagent-driven-development
Development: This skill executes implementation plans by dispatching a fresh subagent for each independent task, with code review between tasks. It enables fast iteration while maintaining quality gates through this review process. Use it when working on mostly independent tasks within the same session to ensure continuous progress with built-in quality checks.
executing-plans
Design: Use the executing-plans skill when you have a complete implementation plan to execute in controlled batches with review checkpoints. It loads and critically reviews the plan, then executes tasks in small batches (default 3 tasks) while reporting progress between each batch for architect review. This ensures systematic implementation with built-in quality control checkpoints.
cost-optimization
Other: This Claude Skill helps developers optimize cloud costs through resource rightsizing, tagging strategies, and spending analysis. It provides a framework for reducing cloud expenses and implementing cost governance across AWS, Azure, and GCP. Use it when you need to analyze infrastructure costs, right-size resources, or meet budget constraints.
