---
language:
- en
license: mit
library_name: transformers
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: question-answering
tags:
- isaac-sim
- omniverse
- robotics
- nvidia
- question-answering
- chat
---

# Qwen2.5‑Coder‑7B‑Instruct‑Omni1.0 (Isaac Sim Assistant)

Purpose‑built coding assistant for NVIDIA Isaac Sim 5.0+ and Omniverse Kit 107.x. Fine‑tuned for Isaac Sim API guidance, robotics code generation, and troubleshooting help.

- Base: Qwen2.5‑Coder‑7B‑Instruct
- Repo: `TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.0`
- Interface: Chat messages or single‑turn text

---

## 0. Changelog

- 2025‑08‑13: Public Inference Endpoint live; added quickstart and examples.

---

## 1. Model Introduction

This model specializes in:
- Isaac Sim API usage and best practices
- Robot + sensor setup, physics, and extension patterns
- Robotics code generation and refactoring
- Diagnosing common Isaac Sim errors and warnings

Key features:
- Chat‑optimized with structure‑aware prompting
- Good defaults for coding (stable, low randomness)
- Works via HTTP; no SDK required

---

## 2. Model Summary

- Architecture: Transformer (7B)
- Context: typically 4K+ tokens (prompt truncation handled internally)
- Input formats: Chat `messages[]` or single‑turn `inputs`
- Output: `generated_text` plus simple token accounting

---

## 3. Try it now (Public Endpoint)

Current live URL (may change if redeployed):
`https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud`

Recommended defaults for coding:
- temperature: 0.2
- top_p: 0.7
- max_new_tokens: 256 (raise as needed)

cURL (chat)
```bash
curl -s -X POST "https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role":"system","content":"You are a helpful coding assistant for NVIDIA Isaac Sim."},
      {"role":"user","content":"Create a minimal script to spawn a URDF and enable PhysX."}
    ],
    "parameters": {"max_new_tokens":256, "temperature":0.2, "top_p":0.7}
  }'
```

Python
```python
import requests

url = "https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud"
payload = {
    "messages": [
        {"role":"system","content":"You are a helpful coding assistant for NVIDIA Isaac Sim."},
        {"role":"user","content":"How do I attach a camera sensor to a robot link?"}
    ],
    "parameters": {"max_new_tokens": 256, "temperature": 0.2, "top_p": 0.7}
}
print(requests.post(url, json=payload).json())
```

Single‑turn (non‑chat)
```bash
curl -s -X POST "https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud" \
  -H "Content-Type: application/json" \
  -d '{"inputs":"Say hello in one sentence.","parameters":{"max_new_tokens":64}}'
```
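
The same single‑turn format also works from Python. A minimal sketch (the prompt string is only an example):

```python
import requests

url = "https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud"
payload = {
    "inputs": "Write a one-line description of Isaac Sim.",
    "parameters": {"max_new_tokens": 64},
}
print(requests.post(url, json=payload, timeout=60).json())
```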

---

## 4. Inputs / Outputs

Inputs (choose one)
- Chat: `messages` as a list of `{role, content}`
- Single‑turn: `inputs` as a string

Common parameters
- `max_new_tokens`, `temperature`, `top_p`, `top_k`, `repetition_penalty`, `num_beams`, `do_sample`, `seed`
- `stop` (string or list)
- `max_input_tokens` (truncate prompt to reserve room for generation)
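
For example, the request below combines several of these parameters in one chat call. This is a minimal sketch: the parameter names come from the list above, while the specific values (seed, stop sequence, token budgets) and the question are illustrative rather than tuned recommendations.

```python
import requests

# Endpoint URL from section 3; parameter names are those listed above.
url = "https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant for NVIDIA Isaac Sim."},
        {"role": "user", "content": "Outline the steps to add a LiDAR sensor to a robot prim."}
    ],
    "parameters": {
        "max_new_tokens": 256,
        "temperature": 0.2,
        "top_p": 0.7,
        "repetition_penalty": 1.05,
        "do_sample": True,
        "seed": 42,
        "stop": ["<|im_end|>"],   # example stop sequence (Qwen chat end-of-turn marker)
        "max_input_tokens": 3072  # truncate the prompt to leave room for generation
    },
}

resp = requests.post(url, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json())
```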

Response shape
```json
{
  "generated_text": "...",
  "input_tokens": 123,
  "generated_tokens": 256,
  "total_tokens": 379,
  "params": { "max_new_tokens": 256, "temperature": 0.2 }
}
```
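
Downstream code usually only needs `generated_text`. A small helper like the sketch below (assuming the response shape documented above) keeps call sites tidy:

```python
import requests

ENDPOINT = "https://k6yeljf74w9gw134.us-east4.gcp.endpoints.huggingface.cloud"

def ask(messages, **params):
    """POST a chat request and return the generated text plus total token count."""
    resp = requests.post(ENDPOINT, json={"messages": messages, "parameters": params}, timeout=120)
    resp.raise_for_status()
    data = resp.json()
    # Field names follow the response shape shown above.
    return data["generated_text"], data.get("total_tokens")

text, used = ask(
    [{"role": "user", "content": "List the steps to import a URDF in Isaac Sim."}],
    max_new_tokens=256, temperature=0.2, top_p=0.7,
)
print(f"{used} tokens used\n{text}")
```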

---

## 5. Local usage (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained(
    "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.0", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.0",
    trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role":"system","content":"You are a helpful coding assistant for NVIDIA Isaac Sim."},
    {"role":"user","content":"Example: spawn a robot and start the simulation loop."}
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to take effect; otherwise decoding is greedy.
out = model.generate(**inputs, max_new_tokens=256, temperature=0.2, do_sample=True)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
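
For interactive local use you can stream tokens to stdout as they are generated. A minimal sketch using `transformers.TextStreamer`, reusing `tok`, `model`, and `inputs` from the block above with the recommended sampling defaults:

```python
from transformers import TextStreamer

# Reuses `tok`, `model`, and `inputs` from the previous example.
streamer = TextStreamer(tok, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.2,
    top_p=0.7,
    do_sample=True,
    streamer=streamer,  # prints decoded tokens to stdout as they are produced
)
```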

---

## 6. Limitations

- May produce version‑specific code; verify imports and extension names for your Isaac Sim version.
- Not a substitute for official safety or hardware guidance.

---

## 7. License

MIT (see LICENSE).

---

## 8. Citation

```bibtex
@misc{qwen25-coder-isaac-sim,
  title = {Qwen2.5-Coder-7B-Instruct-Omni1.0: Fine-tuned for NVIDIA Isaac Sim Development},
  author = {TomBombadyl},
  year = {2024},
  url = {https://huggingface.co/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.0}
}
```

---

## 9. Contact

Open a Discussion on this model page with questions or feedback.