
aqe-v2-v3-migration

majiayu000
Updated 14 days ago
Tags: Other, migration, upgrade, v2-to-v3, agentic-qe

About

This skill automates the migration of Agentic QE projects from version 2 to version 3 while ensuring zero data loss. It handles configuration, memory, and patterns, performing analysis, backup, and validation steps. Use it when upgrading your Agentic QE installation to safely transition between major versions.
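The analyze/backup/validate pattern the skill describes can be sketched in plain Python. This is not the skill's actual implementation — the file name, config keys, and `backup_then_migrate` helper below are all hypothetical — but it shows how a backup taken before migration makes the "zero data loss" guarantee possible: if validation fails, the original file is restored.

```python
import json
import shutil
import tempfile
from pathlib import Path

def backup_then_migrate(config_path: Path, migrate):
    """Back up a config file, apply a migration function, validate the result.

    Hypothetical sketch: restores from the backup if the migrated file
    fails validation, so the original data is never lost.
    """
    backup = config_path.with_suffix(config_path.suffix + ".bak")
    shutil.copy2(config_path, backup)           # backup step
    old = json.loads(config_path.read_text())   # analysis step
    new = migrate(old)                          # migration step
    config_path.write_text(json.dumps(new, indent=2))
    try:
        migrated = json.loads(config_path.read_text())  # validation step
        assert migrated.get("version") == 3
    except (ValueError, AssertionError):
        shutil.copy2(backup, config_path)       # roll back on failure
        raise
    return migrated

# Demo with a throwaway v2-style config file (keys are illustrative).
tmp = Path(tempfile.mkdtemp()) / "aqe.json"
tmp.write_text(json.dumps({"version": 2, "memory": {"patterns": []}}))
result = backup_then_migrate(tmp, lambda cfg: {**cfg, "version": 3})
print(result["version"])  # → 3
```

The backup copy is made before the original is touched, which is the design choice that distinguishes a safe migration from an in-place rewrite.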

Quick Install

Claude Code

Recommended (Primary)
npx skills add majiayu000/claude-skill-registry -a claude-code
Plugin Command (Alternative)
/plugin add https://github.com/majiayu000/claude-skill-registry
Git Clone (Alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/aqe-v2-v3-migration

Copy and paste one of these commands into Claude Code to install this skill

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/aqe-v2-v3-migration

Related Skills

database-testing

Other

This skill handles comprehensive database testing including schema validation, data integrity checks, and migration verification. It's designed for testing data persistence, ensuring referential integrity, and validating database migrations. Key capabilities include transaction isolation testing and query performance analysis.

View skill
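The migration-verification capability mentioned above can be illustrated with Python's built-in sqlite3 module. This is a minimal sketch, not the skill's implementation — the `verify_migration` helper, table names, and toy column-rename migration are all hypothetical — but it shows the core idea: after a migration, assert that the expected schema exists and that row counts were preserved.

```python
import sqlite3

def verify_migration(conn, expected_tables, expected_row_counts):
    """Check that a migration produced the expected schema and kept the data.

    Hypothetical helper: raises AssertionError on any mismatch.
    """
    cur = conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
    tables = {row[0] for row in cur.fetchall()}
    missing = expected_tables - tables
    if missing:
        raise AssertionError(f"missing tables after migration: {missing}")
    for table, expected in expected_row_counts.items():
        (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
        if count != expected:
            raise AssertionError(f"{table}: expected {expected} rows, found {count}")
    return True

# Demo: a toy migration that renames a column by copying into a new table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users_old (id INTEGER, fullname TEXT)")
conn.executemany("INSERT INTO users_old VALUES (?, ?)", [(1, "Ada"), (2, "Alan")])
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users SELECT id, fullname FROM users_old")
conn.execute("DROP TABLE users_old")
print(verify_migration(conn, {"users"}, {"users": 2}))  # → True
```

Real migration suites typically add checksums and foreign-key checks on top of this, but schema presence plus row-count equality is the usual first gate.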

llamaguard

Other

LlamaGuard is Meta's family of 7B-8B parameter models for moderating LLM inputs and outputs across six safety categories, such as violence and hate speech. It reports 94-95% accuracy and can be deployed via vLLM, Hugging Face, or Amazon SageMaker. Use this skill to integrate content filtering and safety guardrails into your AI applications.

View skill

cost-optimization

Other

This Claude Skill helps developers optimize cloud costs through resource rightsizing, tagging strategies, and spending analysis. It provides a framework for reducing cloud expenses and implementing cost governance across AWS, Azure, and GCP. Use it when you need to analyze infrastructure costs, right-size resources, or meet budget constraints.

View skill

quantizing-models-bitsandbytes

Other

This skill quantizes LLMs to 8-bit or 4-bit precision using bitsandbytes, achieving 50-75% memory reduction with minimal accuracy loss. It's ideal for running larger models on limited GPU memory or accelerating inference, supporting formats like INT8, NF4, and FP4. The skill integrates with HuggingFace Transformers and enables QLoRA training and 8-bit optimizers.

View skill
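The 50-75% memory-reduction figure quoted above follows directly from the bit widths. A short back-of-the-envelope calculation (the 7B parameter count is just an illustrative model size) shows where those percentages come from relative to an FP16 baseline:

```python
def model_size_gb(n_params: float, bits: int) -> float:
    """Approximate weight-memory footprint in GB (1 GB = 1e9 bytes)."""
    return n_params * bits / 8 / 1e9

N = 7e9  # illustrative 7B-parameter model
fp16 = model_size_gb(N, 16)  # 14.0 GB baseline
int8 = model_size_gb(N, 8)   # 7.0 GB  → 50% reduction
nf4 = model_size_gb(N, 4)    # 3.5 GB  → 75% reduction (NF4/FP4 are 4-bit)

print(f"FP16: {fp16} GB, INT8: {int8} GB, NF4: {nf4} GB")
```

Actual savings are slightly lower in practice because activations, the KV cache, and any layers kept in higher precision are not quantized.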