---
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
datasets:
- SWE-bench/SWE-smith
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- agent
- software engineering
---
# SWE-agent LM

[Code](https://github.com/SWE-bench/SWE-smith) • Paper • [Site](https://swesmith.com)
SWE-agent-LM-32B is a Language Model for Software Engineering trained using the [SWE-smith](https://github.com/SWE-bench/SWE-smith) toolkit.
We introduce this model as part of our work: [SWE-smith: Scaling Data for Software Engineering Agents](https://swesmith.com).
SWE-agent-LM-32B is 100% open source.
Training this model was simple: we fine-tuned Qwen 2.5 Coder Instruct on 5k trajectories generated by SWE-agent with Claude 3.7 Sonnet.
The dataset can be found [here](https://huggingface.co/datasets/SWE-bench/SWE-smith-trajs-250429).
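If you want to inspect the training data yourself, the sketch below loads the trajectory dataset with the `datasets` library; the split name and field access are assumptions, so check the dataset card for the exact schema.

```python
# Minimal sketch: browse the SWE-smith training trajectories.
# Assumes a "train" split; see the dataset card for the actual splits and fields.
from datasets import load_dataset

trajs = load_dataset("SWE-bench/SWE-smith-trajs-250429", split="train")

print(len(trajs))        # number of trajectories (~5k)
print(trajs[0].keys())   # fields of a single trajectory record
```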
SWE-agent-LM-32B is compatible with [SWE-agent](https://github.com/SWE-agent/SWE-agent).
Running this model locally only takes a few steps!
See the [SWE-agent](https://github.com/SWE-agent/SWE-agent) documentation for more instructions on how to do so.
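For a quick standalone test outside of SWE-agent, the following is a minimal sketch of loading the model with plain `transformers`; the repo id and generation settings here are assumptions, and for real software-engineering tasks you'll want to run the model behind SWE-agent instead.

```python
# Minimal sketch: chat with SWE-agent-LM-32B via transformers.
# Assumes the repo id "SWE-bench/SWE-agent-LM-32B" and that `accelerate` is installed
# (for device_map="auto"); adjust both to your setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SWE-bench/SWE-agent-LM-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain what a flaky test is and how you would debug one."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```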
If you find this work exciting and want to push SWE-agents further, please feel free to get in touch with us (the [SWE-bench team](https://swe-bench.github.io/))!