Commit 8f6a5bc
1 Parent(s): af8416a
Update arXiv link & citation
README.md CHANGED
@@ -25,7 +25,7 @@ library_name: transformers
 
 We introduce EXAONE Deep, which exhibits superior capabilities in various reasoning tasks including math and coding benchmarks, ranging from 2.4B to 32B parameters developed and released by LG AI Research. Evaluation results show that 1) EXAONE Deep **2.4B** outperforms other models of comparable size, 2) EXAONE Deep **7.8B** outperforms not only open-weight models of comparable scale but also a proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep **32B** demonstrates competitive performance against leading open-weight models.
 
-For more details, please refer to our [documentation](https://
+For more details, please refer to our [documentation](https://arxiv.org/abs/2503.12524), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
 
 <p align="center">
 <img src="assets/exaone_deep_overall_performance.png", width="100%", style="margin: 40 auto;">
@@ -104,7 +104,7 @@ else:
 
 ## Evaluation
 
-You can check the evaluation results of original EXAONE Deep models at [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep) or our [documentation](https://
+You can check the evaluation results of original EXAONE Deep models at [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep) or our [documentation](https://arxiv.org/abs/2503.12524).
 
 ## Deployment
 
@@ -152,7 +152,14 @@ The model is licensed under [EXAONE AI Model License Agreement 1.1 - NC](./LICEN
 
 ## Citation
 
-
+```
+@article{exaone-deep,
+  title={EXAONE Deep: Reasoning Enhanced Language Models},
+  author={{LG AI Research}},
+  journal={arXiv preprint arXiv:2503.12524},
+  year={2025}
+}
+```
 
 ## Contact
 LG AI Research Technical Support: contact_us@lgresearch.ai