
hqq-quantization

davila7
Updated 9 days ago
Category: Other
Tags: Quantization, HQQ, Optimization, Memory Efficiency, Inference, Model Compression

About

HQQ enables fast, calibration-free quantization of LLMs down to 4/3/2-bit precision without needing a dataset. It's ideal for rapid quantization workflows and deployment with vLLM or HuggingFace Transformers. Key advantages include significantly faster quantization than methods like GPTQ and support for fine-tuning quantized models.
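To make the idea concrete, here is a minimal, self-contained sketch of 4-bit weight-only quantization in NumPy. This is the plain round-to-nearest baseline, not HQQ itself: HQQ additionally solves for the zero-point with a half-quadratic optimizer to reduce the error this baseline leaves behind, and in practice you would use the hqq library or the Transformers integration rather than code like this. All names below (quantize_4bit, group_size, etc.) are illustrative.

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Round-to-nearest asymmetric 4-bit quantization per group.

    Illustrative baseline only: HQQ improves on this by optimizing
    the zero-point, which this sketch deliberately omits.
    """
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0          # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.clip(np.round((groups - w_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, w_min

def dequantize_4bit(q, scale, w_min, shape):
    """Map 4-bit codes back to approximate float weights."""
    return (q.astype(np.float32) * scale + w_min).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)
q, scale, zero = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, zero, w.shape)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

Each group stores only 4-bit codes plus one scale and one minimum, which is where the memory savings come from; the reconstruction error per weight is bounded by half the group's scale.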

Quick Install

Claude Code

Recommended:
npx skills add davila7/claude-code-templates -a claude-code

Plugin command (alternative):
/plugin add https://github.com/davila7/claude-code-templates

Git clone (alternative):
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/hqq-quantization

Copy and paste one of these commands into Claude Code to install the skill.

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/ai-research/optimization-hqq
Topics: anthropic, anthropic-claude, claude, claude-code

Related Skills

quantizing-models-bitsandbytes

Other

This skill quantizes LLMs to 8-bit or 4-bit precision using bitsandbytes, achieving 50-75% memory reduction with minimal accuracy loss. It's ideal for running larger models on limited GPU memory or accelerating inference, supporting formats like INT8, NF4, and FP4. The skill integrates with HuggingFace Transformers and enables QLoRA training and 8-bit optimizers.

View skill
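The 50-75% memory-reduction figure above follows directly from bit widths. A quick back-of-envelope check (pure arithmetic, not a bitsandbytes API call; real deployments also store per-group scales and zero-points, so actual savings are slightly smaller):

```python
def model_memory_gb(n_params, bits_per_weight):
    """Approximate weight memory in GB (decimal), ignoring
    quantization metadata such as per-group scales."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7_000_000_000                 # a 7B-parameter model
fp16 = model_memory_gb(n, 16)     # 16-bit baseline
int8 = model_memory_gb(n, 8)      # 8-bit -> 50% reduction
nf4 = model_memory_gb(n, 4)       # 4-bit (e.g. NF4) -> 75% reduction
print(f"fp16: {fp16} GB, int8: {int8} GB, 4-bit: {nf4} GB")
```

So a 7B model drops from roughly 14 GB of weights at fp16 to about 7 GB at 8-bit and 3.5 GB at 4-bit, which is the 50-75% range quoted in the skill description.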

gguf-quantization

Design

This skill enables GGUF quantization for efficient model deployment on consumer hardware like CPUs and Apple Silicon. It provides flexible 2-8 bit quantization options without requiring GPU acceleration. Use it when optimizing models for local inference tools or resource-constrained environments.

View skill

awq-quantization

Other

AWQ is a 4-bit weight quantization technique that uses activation patterns to preserve critical weights, enabling 3x faster inference with minimal accuracy loss. It's ideal for deploying large models (7B-70B) on limited GPU memory and is particularly effective for instruction-tuned and multimodal models. This skill integrates with vLLM and Marlin kernels for optimized deployment.

View skill

lambda-labs-gpu-cloud

Other

This Claude Skill provisions dedicated GPU cloud instances from Lambda Labs for ML training and inference. It's ideal for developers needing full SSH access, persistent storage, or large multi-node clusters with pre-installed stacks like PyTorch. Use it for long-running jobs where simple pricing and high-performance GPUs are required.

View skill