Luau Devstral 24B Instruct v0.2
State-of-the-art Luau code generation through reinforcement learning post-training
A refined version of Luau-Devstral-24B-Instruct-v0.1, enhanced with Dr. GRPO (Zichen Liu et al., 2025) to deliver superior Luau programming capabilities for Roblox development.
Overview
This model is a significant step forward in specialized Luau code generation, building on v0.1's continued pretraining with targeted reinforcement learning to further improve code quality.
Key Achievements:
- State-of-the-art code formatting and linting performance
- Minimal typechecker issues with strict mode compliance
- Concise, direct responses without unnecessary verbosity
- Robust problem-solving capabilities on complex Luau challenges
Model Information
- Developer: Zack Williams (boatbomber)
- Sponsor: Torpedo Software LLC
- Base Model: Luau-Devstral-24B-Instruct-v0.1
- Training Method: Dr. GRPO (Group Relative Policy Optimization Done Right)
Performance Benchmarks
Evaluated on the test split of TorpedoSoftware/LuauLeetcode, containing 226 challenges, with results averaged across 3 runs per challenge.
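The evaluation harness itself is not reproduced in this card; the sketch below only illustrates the protocol under stated assumptions (the generate_solution and passes_unit_tests helpers and the "problem" field name are hypothetical placeholders, not part of any published API).

```python
# Hypothetical sketch of the evaluation protocol: run each of the 226
# test-split challenges three times and average the unit-test pass rate.
# `generate_solution`, `passes_unit_tests`, and the "problem" field name are
# placeholders, not a published harness.
from datasets import load_dataset

RUNS_PER_CHALLENGE = 3

def evaluate(generate_solution, passes_unit_tests):
    challenges = load_dataset("TorpedoSoftware/LuauLeetcode", split="test")
    passed = attempted = 0
    for challenge in challenges:
        for _ in range(RUNS_PER_CHALLENGE):
            solution = generate_solution(challenge["problem"])  # assumed column name
            passed += int(passes_unit_tests(challenge, solution))
            attempted += 1
    return passed / attempted  # unit-test pass rate
```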
Comparison Models
Base Models:
- Luau-Devstral-24B-Instruct-v0.1
Competitive Benchmarks:
- Qwen3-Coder-30B-A3B-Instruct
- gpt-oss-20b (low reasoning)
- GPT-5 nano (minimal reasoning)
- GPT-5 (minimal reasoning)
- Claude Sonnet 4
- Claude Opus 4.1
Note: The OpenAI models still consume some reasoning tokens, as their thinking cannot be fully disabled.
Benchmark Results
Unit Test Pass Rate
Measures problem-solving accuracy and correctness
Result: 4th place overall, demonstrating solid problem-solving capabilities while outperforming OpenAI models.
Linter Errors
Evaluates fundamental code quality
Result: State-of-the-art performance with the lowest error rate by a significant margin.
Linter Warnings
Assesses non-critical code quality issues
Result: State-of-the-art performance in minimizing code warnings.
Type Safety
Strict mode typechecking compliance
Result: 2nd place, closely trailing Claude Opus 4.1. This model favors explicit type annotations for code clarity, which creates more opportunities for typechecker errors than Claude's heavier reliance on inferred types.
Code Formatting
Edit distance from StyLua's standard format
Result: State-of-the-art performance with exceptional adherence to standard formatting conventions.
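As an illustration of this metric (not the exact benchmark implementation), the sketch below pipes a generated snippet through StyLua and measures the character-level Levenshtein distance against the original; the choice of Levenshtein distance here is an assumption.

```python
# Illustrative (assumed) formatting metric: character-level edit distance
# between a model-generated snippet and its StyLua-formatted counterpart.
import subprocess

def stylua_format(code: str) -> str:
    # StyLua formats stdin and writes to stdout when given "-" as the file name.
    out = subprocess.run(["stylua", "-"], input=code, capture_output=True, text=True, check=True)
    return out.stdout

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def formatting_distance(code: str) -> int:
    return levenshtein(code, stylua_format(code))
```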
Response Length
Average response size (excluding reasoning tokens)
Result: Most concise responses among all models, delivering direct solutions without unnecessary preamble. This efficiency suggests potential for further improvements in problem solving through explicit problem decomposition or reasoning.
Training Methodology
Dataset
Primary Source: TorpedoSoftware/LuauLeetcode
- 2.6K LeetCode-style Luau programming challenges
- Structured difficulty progression: Easy → Medium → Hard
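A minimal loading sketch, assuming the dataset exposes train/test splits and a difficulty column (the split names and column name are assumptions about the schema):

```python
# Minimal sketch: load the challenges and bucket the training split by
# difficulty for the Easy -> Medium -> Hard curriculum. The split names and
# the "difficulty" column are assumptions about the dataset schema.
from datasets import load_dataset

train = load_dataset("TorpedoSoftware/LuauLeetcode", split="train")
curriculum = {
    level: train.filter(lambda row, level=level: row["difficulty"] == level)
    for level in ("Easy", "Medium", "Hard")
}
```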
Training Process
Curriculum Learning Approach:
- Easy Difficulty Phase: 6.45M input tokens, 25 hours of training
- Medium Difficulty Phase: 17.02M input tokens, 58 hours of training
- Hard Difficulty Phase: 6.07M input tokens, 20 hours of training
Technical Configuration:
- LoRA adapter with rank=128
- Full precision training
- Final merge to BF16 model
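The training code is not published in this card; the following PEFT sketch is merely consistent with the configuration above (rank-128 LoRA, full-precision training, BF16 merge). The repo id, target modules, and LoRA alpha are assumptions.

```python
# Sketch consistent with the stated configuration: a rank-128 LoRA adapter
# trained in full precision, then merged and exported as a BF16 model.
# The repo id, target_modules, and lora_alpha are assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "TorpedoSoftware/Luau-Devstral-24B-Instruct-v0.1",  # assumed repo id for the base model
    torch_dtype=torch.float32,  # full-precision training
)
model = get_peft_model(
    base,
    LoraConfig(r=128, lora_alpha=128, target_modules="all-linear", task_type="CAUSAL_LM"),
)

# ... Dr. GRPO training loop over the Easy/Medium/Hard curriculum goes here ...

merged = model.merge_and_unload()   # fold adapter weights back into the base model
merged.to(torch.bfloat16).save_pretrained("luau-devstral-24b-instruct-v0.2-bf16")
```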
Reward Function Design
The model was optimized using four complementary reward signals:
- Correctness - Unit testing via Jest-Lua
- Quality - Code linting with Selene
- Type Safety - Strict-mode typechecking via the Luau typechecker
- Formatting - Style conformance via StyLua
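The reward implementation is not included here; the sketch below shows one plausible way to combine the four signals by shelling out to the real CLIs (Selene, StyLua, luau-analyze). The Jest-Lua test runner, the file layout, and the equal weighting are assumptions.

```python
# Hedged sketch of a composite reward built from the four signals above.
# Selene, StyLua, and luau-analyze are invoked via their real CLIs; the
# Jest-Lua runner, file layout, and equal weighting are assumptions.
import pathlib
import subprocess
import tempfile

def _tool_ok(cmd: list[str]) -> bool:
    # The tools exit non-zero when they report problems with the file.
    return subprocess.run(cmd, capture_output=True).returncode == 0

def reward(solution_code: str, run_jest_lua_tests) -> float:
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "solution.luau"
        path.write_text("--!strict\n" + solution_code)  # force strict-mode typechecking

        correctness = run_jest_lua_tests(solution_code)                  # fraction of unit tests passed (hypothetical helper)
        quality     = float(_tool_ok(["selene", str(path)]))             # Selene lint clean
        type_safety = float(_tool_ok(["luau-analyze", str(path)]))       # strict typecheck clean
        formatting  = float(_tool_ok(["stylua", "--check", str(path)]))  # already StyLua-formatted

    # Equal weighting is an assumption; the actual reward shaping may differ.
    return (correctness + quality + type_safety + formatting) / 4.0
```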
Training Progress
Training-progress charts cover the Easy, Medium, and Hard difficulty phases.
Quantization Support
Imatrix Calibration
A custom importance matrix was computed using 5.73 MB of specialized calibration text. This calibration ensures optimal performance on Luau/Roblox tasks while maintaining general intelligence. The imatrix.gguf file is included in the repository for custom quantization needs.
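As a usage note, a custom quant can be produced with llama.cpp's llama-quantize by passing the bundled matrix via --imatrix; the GGUF file names below are placeholders.

```python
# Sketch: produce a custom quantization using the bundled importance matrix
# and llama.cpp's llama-quantize. The GGUF file names are placeholders.
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--imatrix", "imatrix.gguf",   # importance matrix shipped in this repository
        "model-BF16.gguf",             # placeholder: source BF16 GGUF
        "model-Q4_K_M.gguf",           # placeholder: quantized output
        "Q4_K_M",                      # target quantization type
    ],
    check=True,
)
```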
Environmental Impact
Carbon emissions estimated using the Machine Learning Impact calculator (Lacoste et al., 2019):
- Hardware: A100 80GB SXM
- Training Duration: 103 hours
- Carbon Emissions: ~12 kg CO2eq
- Equivalent Impact: ~31 miles driven by an average internal combustion engine vehicle
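As a rough sanity check on this figure, the arithmetic below assumes a ~400 W average draw for the A100 80GB SXM and a grid intensity of about 0.29 kg CO2eq/kWh; both numbers are assumptions, not reported values.

```python
# Back-of-the-envelope check of the emissions estimate. The 0.4 kW draw and
# 0.29 kg CO2eq/kWh grid intensity are assumed, not reported.
power_kw = 0.4            # approximate A100 80GB SXM board power
hours = 103               # training duration reported above
grid_intensity = 0.29     # kg CO2eq per kWh (assumed grid mix)

energy_kwh = power_kw * hours                # ~41 kWh
emissions_kg = energy_kwh * grid_intensity   # ~12 kg CO2eq
print(f"{energy_kwh:.1f} kWh, {emissions_kg:.1f} kg CO2eq")
```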