Capella-Qwen3-DS-V3.1-4B
Capella-Qwen3-DS-V3.1-4B is a reasoning-focused model fine-tuned from Qwen3-4B on 10K synthetic traces distilled from DeepSeek v3.1. It specializes in random event simulation, logical problem analysis, and structured reasoning tasks. The model blends symbolic precision, probabilistic logic, and structured output fluency, making it well suited to researchers, educators, and developers working on uncertainty modeling and event-driven analysis.
GGUF: https://huggingface.co/prithivMLmods/Capella-Qwen3-DS-V3.1-4B-GGUF
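The GGUF build can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the quantization filename is an assumption (check the GGUF repo for the files actually published), and the context size is illustrative only.

```python
from llama_cpp import Llama

# Download a quantized build from the GGUF repo and load it locally.
# The filename glob is a hypothetical quant choice; list the repo files
# on Hugging Face for the exact names available.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Capella-Qwen3-DS-V3.1-4B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the probability of rolling doubles with two fair dice?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```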
Key Features
- **Event Simulation & Logical Analysis**: Fine-tuned on 10,000 synthetic traces from DeepSeek v3.1 to model random events, probability-driven reasoning, and logical decision-making.
- **Advanced Code Reasoning & Generation**: Supports multi-language coding with explanations, optimization hints, and error detection, well suited to algorithm synthesis, stochastic simulations, and debugging.
- **Mathematical & Probabilistic Problem Solving**: Performs analytical reasoning across probability, statistics, and mathematics, explaining concepts, solving equations, and simulating uncertain outcomes.
- **Hybrid Symbolic-Probabilistic Thinking**: Combines structured logic, probabilistic inference, and chain-of-thought reasoning for robust performance on uncertainty-driven tasks.
- **Structured Output Mastery**: Generates output in LaTeX, Markdown, JSON, CSV, and YAML, suited to technical documentation, simulations, and structured analysis (see the JSON sketch after this list).
- **Optimized Lightweight Footprint for Versatile Deployment**: Balances performance and efficiency, making it deployable on mid-range GPUs, offline clusters, and edge AI systems.
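To illustrate the structured-output feature, the quickstart setup can be pointed at a JSON-producing prompt and the reply parsed directly. A minimal sketch, assuming the `model` and `tokenizer` objects loaded in the Quickstart section below; valid JSON is not guaranteed, so production code should handle parse failures.

```python
import json

# Reuses `model` and `tokenizer` from the Quickstart section below.
messages = [
    {"role": "system", "content": "Respond with valid JSON only."},
    {"role": "user", "content": "Give the probability of each sum (2-12) when rolling two fair dice, as a JSON object."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

# json.loads raises an error if the model drifts from the requested format
probs = json.loads(reply)
print(probs)
```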
Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Capella-Qwen3-DS-V3.1-4B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Simulate the probability of rolling two dice and getting a sum greater than 9. Show the reasoning."

messages = [
    {"role": "system", "content": "You are a reasoning tutor skilled in probability, logic, and coding."},
    {"role": "user", "content": prompt}
]

# Render the chat template into a single prompt string
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
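The model's answer to the example prompt can be checked with a brute-force enumeration, independent of the model itself: two fair dice yield 36 equally likely outcomes, of which 6 sum to more than 9, giving a probability of 6/36 = 1/6 ≈ 0.167.

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair dice
outcomes = list(product(range(1, 7), repeat=2))
favorable = [o for o in outcomes if sum(o) > 9]  # sums of 10, 11, or 12

print(len(favorable), "/", len(outcomes), "=", len(favorable) / len(outcomes))  # 6 / 36 = 0.1666...
```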
Intended Use
- Random event simulation, probability modeling, and uncertainty analysis
- Logical problem-solving in research and education
- Structured data and technical content generation
- STEM-focused chatbot or API backend for probabilistic reasoning tools (see the serving sketch after this list)
- Deployment in mid-resource environments requiring efficient reasoning
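As an illustration of the API use case, a minimal FastAPI wrapper might look like the following. This is a sketch, not a production server: the endpoint path and request schema are assumptions, and it reuses the loading pattern from the Quickstart section.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

app = FastAPI()
model_name = "prithivMLmods/Capella-Qwen3-DS-V3.1-4B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

class Query(BaseModel):
    prompt: str  # e.g. a probability or logic problem

@app.post("/reason")  # hypothetical endpoint path
def reason(q: Query):
    messages = [{"role": "user", "content": q.prompt}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    answer = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    return {"response": answer}
```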
Limitations
- Not tuned for general-purpose or creative writing
- Context limitations may hinder multi-document or full codebase analysis
- Specialized for simulations and logical reasoning; general chat may underperform
- Prioritizes probabilistic and logical precision over casual or emotional tone