create-analyzer
About
This Claude Skill generates template code for creating new Minecraft Bedrock packet analyzers. It helps developers quickly set up analyzers for specific domains by providing documentation links and guiding the testing workflow. Use it when you need to analyze captured Bedrock packets for areas like inventory, movement, or chat systems.
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/majiayu000/claude-skill-registry
Manual install:
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/create-analyzer
Copy and paste the command in Claude Code to install this skill.
Documentation
Create Minecraft Bedrock Packet Analyzer
Generate a new domain-specific analyzer for captured Bedrock packets.
Quick Start
- Identify the domain: inventory, movement, entities, chat, blocks, etc.
- Find relevant packets: search protocol.d.ts for packet types
- Generate analyzer file using the template below
- Export from index.ts
- Test with captured logs
Analyzer Template
Create file at: packages/minecraft-logs-analyzers/src/analyzers/{domain}.ts
import { BaseAnalyzer } from "../base-analyzer.ts";
import type { Direction, LogEntry, AnalyzerConfig } from "../types.ts";
const PACKETS_TO_LOG = [
// Add packet names from protocol.d.ts
];
/**
* Analyzer for {domain}-related packets.
*/
export class {Domain}Analyzer extends BaseAnalyzer {
readonly config: AnalyzerConfig = {
name: "{domain}",
packets: PACKETS_TO_LOG,
};
constructor(basePath: string, registry?: any) {
super(basePath);
if (registry) this.registry = registry;
this.init();
}
protected extractFields(
direction: Direction,
name: string,
packet: any
): LogEntry | null {
const base = this.createBaseEntry(direction, name);
switch (name) {
// Add case for each packet type
// case "packet_name":
// return { ...base, field: packet.field };
default:
return null;
}
}
}
Registration
Add export to packages/minecraft-logs-analyzers/src/index.ts:
export { {Domain}Analyzer } from "./analyzers/{domain}.ts";
Documentation Locations
| Resource | Location |
|---|---|
| Protocol types | packages/mineflayer-bedrock/src/protocol.d.ts |
| Existing analyzers | packages/minecraft-logs-analyzers/src/analyzers/ |
| Base class | packages/minecraft-logs-analyzers/src/base-analyzer.ts |
| Types | packages/minecraft-logs-analyzers/src/types.ts |
| Bedrock plugins | packages/mineflayer/lib/bedrockPlugins/ |
Finding Relevant Packets
# Search protocol.d.ts for packet types
grep -n "packet_" packages/mineflayer-bedrock/src/protocol.d.ts | head -50
# Find packets used by a specific plugin
grep -n "client.on" packages/mineflayer/lib/bedrockPlugins/*.mts
# Search for specific packet handling
grep -rn "move_player" packages/mineflayer/lib/bedrockPlugins/
Common Packet Domains
| Domain | Key Packets |
|---|---|
| Inventory | inventory_slot, inventory_content, inventory_transaction, item_stack_request, item_stack_response, mob_equipment |
| Movement | player_auth_input, move_player, set_actor_motion, move_actor_absolute, move_actor_delta |
| Entities | add_entity, remove_entity, set_entity_data, set_entity_motion, add_player |
| Chat | text, command_request, command_output |
| Blocks | update_block, block_event, level_event, update_sub_chunk_blocks |
| Combat | actor_event, hurt_armor, entity_fall, animate |
| World | level_chunk, sub_chunk, network_chunk_publisher_update |
Testing Workflow
1. Capture Packets from Real Client
Critical: Always capture from the real Minecraft client first to understand the exact packet format.
# Start capture proxy
npm run start --workspace=minecraft-logs-recorder -- -o ./test-logs
# Connect Minecraft Bedrock to localhost:19150
# Perform the exact action you want to implement (crafting, chest interaction, etc.)
# Disconnect to save logs
The .jsonl file contains processed packets, .bin contains raw data for replay.
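As a rough sketch, each .jsonl line is one packet record keyed by a p field naming the packet (that is what the grep patterns in the next step rely on); the other field names below are taken from the createBaseEntry description later in this guide, and their types are assumptions rather than a spec:
// Sketch of one processed log line as a TypeScript type (assumed shape).
interface CapturedEntry {
  t: number;                // capture timestamp
  tick?: number;            // game tick, when the packet carries one
  d: string;                // direction (see Direction in types.ts)
  p: string;                // packet name, e.g. "move_player"
  [field: string]: unknown; // remaining packet-specific fields
}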
2. Analyze Captured Logs
# View first 20 packets
head -20 test-logs/*.jsonl
# Count packets by type
grep -o '"p":"[^"]*"' test-logs/*.jsonl | sort | uniq -c | sort -rn
# Find specific packets
grep '"p":"move_player"' test-logs/*.jsonl | head -5
3. Test Your Analyzer
import { {Domain}Analyzer } from 'minecraft-logs-analyzers';
import { createReplayClient } from 'minecraft-logs-recorder/replay';
// Attach analyzer to replay client
const analyzer = new {Domain}Analyzer('test-output/replay');
const client = createReplayClient('test-logs/1.21.130-*.bin');
// Analyzer will log packets to test-output/replay-{domain}.jsonl
analyzer.attachToBot(client);
// Check output
// cat test-output/replay-{domain}.jsonl
4. Verify Output Format
# View analyzer output
cat test-output/replay-{domain}.jsonl | head -10
# Validate JSON
cat test-output/replay-{domain}.jsonl | jq -c '.' | head -10
BaseAnalyzer Methods
| Method | Description |
|---|---|
| createBaseEntry(direction, name) | Create log entry with t, tick, d, p fields |
| updateTick(packet) | Extract tick from packet.tick |
| itemName(item) | Resolve item name from network_id |
| writeEntry(entry) | Write entry to JSONL file |
| message(msg, data?) | Log custom debug message |
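To see how these helpers combine, here is a minimal sketch of an analyzer that uses itemName. The mob_equipment field names (slot, item) are assumptions and should be verified against protocol.d.ts before use:
import { BaseAnalyzer } from "../base-analyzer.ts";
import type { Direction, LogEntry, AnalyzerConfig } from "../types.ts";

export class EquipmentAnalyzer extends BaseAnalyzer {
  readonly config: AnalyzerConfig = {
    name: "equipment",
    packets: ["mob_equipment"],
  };
  constructor(basePath: string) {
    super(basePath);
    this.init();
  }
  protected extractFields(
    direction: Direction,
    name: string,
    packet: any
  ): LogEntry | null {
    const base = this.createBaseEntry(direction, name);
    if (name === "mob_equipment") {
      return {
        ...base,
        slot: packet.slot,                // assumed field name
        item: this.itemName(packet.item), // resolve readable name from network_id
      };
    }
    return null;
  }
}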
Override Points
| Method | When to Override |
|---|---|
| shouldLog(name, packet) | Custom filtering (e.g., only log non-empty actions) |
| extractFields(direction, name, packet) | Required: extract relevant fields |
IMPORTANT: Start with Full Packets
Always log full packet data initially, not filtered or summarized data. This was critical when implementing crafting:
// BAD: Filtering too early loses critical details
protected shouldLog(name: string, packet: unknown): boolean {
// Only log if it has crafting containers... ❌
return hasCraftingContainer;
}
// GOOD: Log everything first, filter later
protected shouldLog(name: string, packet: unknown): boolean {
return true; // ✅ See all packets first
}
// GOOD: Return full packet data in extractFields
return {
...base,
responses: p.responses, // ✅ Full data, not summarized
};
Real client packet captures revealed crucial details that were being filtered out:
- craft_recipe_auto uses hotbar_and_inventory container, not crafting_input
- results_deprecated action requires result_items array with full item data
- Stack IDs use negative request_id for chained actions
- place goes directly to inventory, not through cursor
Only add filtering after you fully understand the protocol.
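Once the domain is mapped, a narrow shouldLog override inside your analyzer class might look like this sketch; the packet chosen here is only an illustrative example of a high-frequency packet you have already confirmed adds nothing for your domain:
// Added only after confirming this packet carries nothing useful here.
protected shouldLog(name: string, packet: unknown): boolean {
  // Example: drop a known high-frequency packet; keep everything else.
  if (name === "network_chunk_publisher_update") return false; // illustrative choice
  return true;
}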
Example: Movement Analyzer
import { BaseAnalyzer } from "../base-analyzer.ts";
import type { Direction, LogEntry, AnalyzerConfig } from "../types.ts";
const PACKETS_TO_LOG = [
"player_auth_input",
"move_player",
"set_actor_motion",
];
export class MovementAnalyzer extends BaseAnalyzer {
readonly config: AnalyzerConfig = {
name: "movement",
packets: PACKETS_TO_LOG,
};
constructor(basePath: string) {
super(basePath);
this.init();
}
protected extractFields(
direction: Direction,
name: string,
packet: any
): LogEntry | null {
const base = this.createBaseEntry(direction, name);
switch (name) {
case "player_auth_input":
this.updateTick(packet);
return {
...base,
tick: packet.tick,
pos: [packet.position?.x, packet.position?.y, packet.position?.z],
yaw: packet.yaw,
pitch: packet.pitch,
};
case "move_player":
return {
...base,
pos: [packet.position?.x, packet.position?.y, packet.position?.z],
mode: packet.mode,
onGround: packet.on_ground,
};
case "set_actor_motion":
return {
...base,
entityId: packet.runtime_entity_id,
velocity: [packet.velocity?.x, packet.velocity?.y, packet.velocity?.z],
};
default:
return null;
}
}
}
Related Skills
content-collections
Meta: This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
Meta: This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
sglang
Meta: SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness
Testing: This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
