# Pixtral 12B Fine-Tuned on Titan-Hohmann-Transfer-Orbit
Updated to the latest suitable Mistral multimodal base: `mistralai/Pixtral-12B-Base-2409`.
## Overview

A fine-tuned variant of Pixtral 12B for orbital mechanics, with an emphasis on Hohmann transfer orbits. The model accepts multimodal (image + text) inputs and produces text outputs.
## Model Details

- Base: `mistralai/Pixtral-12B-Base-2409`
- Type: Multimodal (vision + text)
- Params: ~12B (decoder) + vision encoder
- Languages: English
- License: MIT
## Intended Use

- Hohmann transfer Δv estimation
- Transfer-time approximations
- Orbit-analysis aids and reasoning (see the worked sketch after this list)
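
For context, the Δv and transfer-time quantities the model estimates follow from the vis-viva equation. Below is a minimal sketch, assuming an idealized coplanar, circular-to-circular heliocentric Earth-to-Saturn transfer as a first approximation for reaching Titan (which orbits Saturn); the constants and the `hohmann` helper are illustrative and not part of the model or dataset.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter [m^3/s^2]
R_EARTH = 1.496e11         # Earth's mean heliocentric orbit radius [m] (~1 AU)
R_SATURN = 1.4335e12       # Saturn's mean heliocentric orbit radius [m] (~9.58 AU)

def hohmann(mu: float, r1: float, r2: float):
    """Impulsive burns and transfer time for a coplanar circular-to-circular Hohmann transfer."""
    a_transfer = (r1 + r2) / 2.0  # semi-major axis of the transfer ellipse
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)  # departure burn
    dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))  # arrival burn
    t_transfer = math.pi * math.sqrt(a_transfer**3 / mu)  # half the ellipse's period
    return dv1, dv2, t_transfer

dv1, dv2, t = hohmann(MU_SUN, R_EARTH, R_SATURN)
print(f"dv1 = {dv1 / 1e3:.2f} km/s, dv2 = {dv2 / 1e3:.2f} km/s, "
      f"transfer time = {t / 86400 / 365.25:.1f} years")
```

This idealization gives roughly 10.3 km/s and 5.4 km/s for the two burns and about a six-year transfer; a real Titan mission would additionally account for planetary escape and capture and for Titan's own orbit about Saturn.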
## Quickstart

### vLLM (multimodal)
```python
from vllm import LLM
from vllm.sampling_params import SamplingParams

# Pixtral checkpoints ship with a Mistral-format tokenizer.
llm = LLM(model="mistralai/Pixtral-12B-Base-2409", tokenizer_mode="mistral")
sampling = SamplingParams(max_tokens=512, temperature=0.2)

# OpenAI-style chat message mixing text and an image reference.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Given this diagram, estimate the delta-v for a Hohmann transfer to Titan."},
            {"type": "image_url", "image_url": {"url": "https://example.com/orbit_diagram.png"}},
        ],
    }
]

resp = llm.chat(messages, sampling_params=sampling)
print(resp[0].outputs[0].text)
```
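
If the diagram is a local file, the same OpenAI-style `image_url` field also accepts base64 data URLs in vLLM; a minimal sketch (the file name is illustrative):

```python
import base64

# Encode a local diagram as a data URL instead of pointing at a remote image.
with open("orbit_diagram.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

messages[0]["content"][1]["image_url"]["url"] = f"data:image/png;base64,{encoded}"
resp = llm.chat(messages, sampling_params=sampling)
print(resp[0].outputs[0].text)
```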
### Transformers (text-only demo)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Note: the Mistral-format release may not load directly with Transformers;
# an HF-format conversion (e.g. mistral-community/pixtral-12b) may be needed.
model_id = "mistralai/Pixtral-12B-Base-2409"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Compute approximate delta-v for a Hohmann transfer to Titan. State assumptions."
inputs = tok(prompt, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature to take effect.
out = model.generate(**inputs, max_new_tokens=512, temperature=0.2, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```
## Training Data

- Dataset: `Taylor658/titan-hohmann-transfer-orbit`
- Modalities: text (explanations), code (snippets), images (orbital diagrams)
## Limitations

- Narrowly optimized for Hohmann transfers and related reasoning; may underperform on unrelated tasks
- Requires sufficient GPU VRAM for best throughput
## Acknowledgements

- Base model by Mistral AI (Pixtral 12B)
- Dataset by A Taylor
## Contact Information

- Author: A Taylor
- Email:
- Repository: https://github.com/ATaylorAerospace/HohmannHET