Luau Devstral 24B Instruct v0.2

State-of-the-art Luau code generation through reinforcement learning post-training

A refined version of Luau-Devstral-24B-Instruct-v0.1, enhanced with Dr. GRPO (Zichen Liu et al., 2025) to deliver superior Luau programming capabilities for Roblox development.

Overview

This model represents a significant advancement in specialized code generation for Luau, building on the continued pretraining behind v0.1 with targeted reinforcement learning to achieve exceptional code quality.

Key Achievements:

  • State-of-the-art code formatting and linting performance
  • Minimal typechecker issues with strict mode compliance
  • Concise, direct responses without unnecessary verbosity
  • Robust problem-solving capabilities on complex Luau challenges

Model Information

Performance Benchmarks

Evaluated on the test split of TorpedoSoftware/LuauLeetcode containing 226 challenges, with results averaged across 3 runs per challenge.

Comparison Models

Base Models:

Competitive Benchmarks:

Note: OpenAI models use reasoning tokens in these comparisons, since fully disabling thinking is not available for them.

Benchmark Results

Unit Test Pass Rate

Measures problem-solving accuracy and correctness

[Figure: Unit Tests]

Result: 4th place overall, demonstrating solid problem-solving capabilities while outperforming OpenAI models.

Linter Errors

Evaluates fundamental code quality

[Figure: Linter Errors]

Result: State-of-the-art performance with the lowest error rate by a significant margin.

Linter Warnings

Assesses non-critical code quality issues

[Figure: Linter Warnings]

Result: State-of-the-art performance in minimizing code warnings.

Type Safety

Strict mode typechecking compliance

[Figure: Typechecker Issues]

Result: 2nd place, closely trailing Claude Opus 4.1. Our model favors explicit type definitions for enhanced code clarity, which creates more opportunities for mistakes compared to Claude's reliance on inferred types.
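To illustrate the tradeoff, here is a small, hypothetical example of the explicit-annotation style the model favors under strict mode; an inference-reliant style would omit the annotations and let the typechecker derive them.

```luau
--!strict
-- Explicit annotations: every signature is spelled out, so the strict
-- typechecker verifies stated intent rather than inferring it.
local function sumEvens(nums: {number}): number
	local total: number = 0
	for _, n in nums do
		if n % 2 == 0 then
			total += n
		end
	end
	return total
end

-- Omitting the ": {number}" and ": number" annotations would also pass
-- strict typechecking here, but explicit types surface mismatches sooner.
```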

Code Formatting

Edit distance from Stylua's standard format

[Figure: Formatter Distance]

Result: State-of-the-art performance with exceptional adherence to standard formatting conventions.
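The metric here is edit distance between the generated code and its Stylua-formatted version. The exact distance function and any normalization used in the evaluation are not specified; a plain Levenshtein distance, as a minimal sketch, looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution (free on match)
            ))
        prev = curr
    return prev[-1]

# Distance between raw model output and the Stylua-formatted reference;
# zero means the generated code already matches the canonical style.
generated = "local x=1"
formatted = "local x = 1"
print(levenshtein(generated, formatted))  # → 2 (two missing spaces)
```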

Response Length

Average response size (excluding reasoning tokens)

[Figure: Response Length]

Result: Most concise responses among all models, delivering direct solutions without unnecessary preamble. This efficiency suggests potential for further improvements in problem solving through explicit problem decomposition or reasoning.

Training Methodology

Dataset

Primary Source: TorpedoSoftware/LuauLeetcode

  • 2.6K LeetCode-style Luau programming challenges
  • Structured difficulty progression: Easy → Medium → Hard

Training Process

Curriculum Learning Approach:

  1. Easy Difficulty Phase

    • 6.45M input tokens
    • 25 hours training
  2. Medium Difficulty Phase

    • 17.02M input tokens
    • 58 hours training
  3. Hard Difficulty Phase

    • 6.07M input tokens
    • 20 hours training
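The three phases above can be summarized in a small sketch (the structure is hypothetical; token counts and hours are the figures listed above):

```python
# Summary of the three-phase curriculum; the dict layout is illustrative,
# but the token counts and hours come from the phase breakdown above.
phases = [
    {"difficulty": "easy",   "input_tokens_m": 6.45,  "hours": 25},
    {"difficulty": "medium", "input_tokens_m": 17.02, "hours": 58},
    {"difficulty": "hard",   "input_tokens_m": 6.07,  "hours": 20},
]

total_tokens_m = sum(p["input_tokens_m"] for p in phases)
total_hours = sum(p["hours"] for p in phases)
print(f"{total_tokens_m:.2f}M input tokens over {total_hours} hours")
# → 29.54M input tokens over 103 hours
```

Note that the phase durations sum to the 103 total training hours reported in the Environmental Impact section.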

Technical Configuration:

  • LoRA adapter with rank=128
  • Full precision training
  • Final merge to BF16 model
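The final merge folds the trained low-rank adapter into the base weights so the shipped checkpoint is a single dense BF16 model. A toy sketch of that merge, in plain Python with exaggeratedly small matrices (the real adapter uses rank=128 and real tensors):

```python
# Minimal sketch of a LoRA merge: the trained low-rank update B @ A,
# scaled by alpha / rank, is added to the frozen base weight W so the
# final checkpoint is one dense matrix. Sizes here are toy values.

def matmul(x, y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*y)]
            for row in x]

d_out, d_in, rank, alpha = 3, 4, 2, 2     # toy sizes; real adapter: rank=128

W = [[0.0] * d_in for _ in range(d_out)]  # frozen base weight (toy zeros)
B = [[1.0] * rank for _ in range(d_out)]  # trained factor, d_out x rank
A = [[0.5] * d_in for _ in range(rank)]   # trained factor, rank x d_in

delta = matmul(B, A)                      # low-rank update, d_out x d_in
scale = alpha / rank
W_merged = [
    [w + scale * dw for w, dw in zip(w_row, d_row)]
    for w_row, d_row in zip(W, delta)
]
print(W_merged[0][0])  # 0 + (2/2) * (1*0.5 + 1*0.5) → 1.0
```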

Reward Function Design

The model was optimized using four complementary reward signals:

  1. Correctness - Unit testing via Jest-Lua
  2. Quality - Code linting with Selene
  3. Type Safety - Strict typechecking using Luau
  4. Formatting - Style conformance via Stylua
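As a hypothetical sketch of how these four signals might be combined into a single scalar reward (the actual weights and scoring functions used during Dr. GRPO training are not published; only the toolchain names match the list above):

```python
# Hypothetical combination of the four reward signals. Equal weighting
# and the 1/(1+x) penalty shape are assumptions, not the published recipe.

def combined_reward(tests_passed: int, tests_total: int,
                    lint_errors: int, type_errors: int,
                    format_distance: int) -> float:
    correctness = tests_passed / tests_total     # Jest-Lua unit tests
    quality = 1.0 / (1.0 + lint_errors)          # Selene lint findings
    type_safety = 1.0 / (1.0 + type_errors)      # Luau strict typecheck
    formatting = 1.0 / (1.0 + format_distance)   # edit distance to Stylua
    return (correctness + quality + type_safety + formatting) / 4.0

print(combined_reward(10, 10, 0, 0, 0))  # perfect solution → 1.0
```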

Training Progress

Easy Difficulty Training

[Figures: Easy Overall Reward Curve; Easy Individual Reward Curves]

Medium Difficulty Training

[Figures: Medium Overall Reward Curve; Medium Individual Reward Curves]

Hard Difficulty Training

[Figures: Hard Overall Reward Curve; Hard Individual Reward Curves]

Quantization Support

Imatrix Calibration

Custom importance matrix computed using 5.73 MB of specialized text data:

Calibration Sources:

This calibration ensures optimal performance for Luau/Roblox tasks while maintaining general intelligence. The imatrix.gguf file is included in the repository for custom quantization needs.
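For custom quantization, the bundled imatrix.gguf can be passed to llama.cpp's llama-quantize tool. The command below is illustrative: file names are placeholders, and Q4_K_M is just one example quantization type.

```shell
# Illustrative llama.cpp workflow (file names are placeholders):
# quantize a BF16 GGUF export using the bundled importance matrix.
./llama-quantize --imatrix imatrix.gguf \
    Luau-Devstral-24B-Instruct-v0.2-BF16.gguf \
    Luau-Devstral-24B-Instruct-v0.2-Q4_K_M.gguf \
    Q4_K_M
```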

Environmental Impact

Carbon emissions estimated using the Machine Learning Impact calculator (Lacoste et al., 2019):

  • Hardware: A100 80GB SXM
  • Training Duration: 103 hours
  • Carbon Emissions: ~12 kg CO2eq
  • Equivalent Impact: ~31 miles driven by an average internal combustion engine vehicle
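A back-of-envelope check is consistent with these figures. The GPU power draw and grid carbon intensity below are assumptions, not values from the calculator run:

```python
# Rough sanity check of the reported emissions. The 400 W draw (A100 80GB
# SXM TDP) and 0.29 kg CO2eq/kWh grid intensity are assumed values.
gpu_power_kw = 0.400        # assumed sustained draw of one A100 80GB SXM
hours = 103                 # total training time reported above
carbon_intensity = 0.29     # kg CO2eq per kWh, assumed regional average

energy_kwh = gpu_power_kw * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"~{emissions_kg:.1f} kg CO2eq")  # ≈ 12 kg, matching the reported figure
```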