Update README.md
README.md
CHANGED
@@ -13,7 +13,7 @@ base_model:
 # Granite-3.3-8B-Instruct
 
 **Model Summary:**
-Granite-3.3-8B-Instruct is an 8-billion-parameter, 128K context length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-8B-Base, the model delivers significant gains on benchmarks measuring general performance, including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It
+Granite-3.3-8B-Instruct is an 8-billion-parameter, 128K context length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-8B-Base, the model delivers significant gains on benchmarks measuring general performance, including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It has been trained with Fill-in-the-Middle (FIM) for code completion tasks and supports structured reasoning through \<think\>\<\/think\> and \<response\>\<\/response\> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks.
 
 - **Developers:** Granite Team, IBM
 - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
@@ -27,7 +27,7 @@ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian,
 This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
 
 **Capabilities**
-*
+* Thinking
 * Summarization
 * Text classification
 * Text extraction
@@ -36,7 +36,7 @@ This model is designed to handle general instruction-following tasks and can be
 * Code related tasks
 * Function-calling tasks
 * Multilingual dialog use cases
-*
+* Fill-in-the-middle
 * Long-context tasks including long document/meeting summarization, long document QA, etc.
 
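The updated summary advertises structured reasoning via \<think\>\<\/think\> and \<response\>\<\/response\> tags. Below is a minimal sketch of how that mode is typically toggled from `transformers`; the `thinking=True` chat-template flag mirrors the convention used by earlier Granite releases and is an assumption here, so verify it against the rendered model card before relying on it.

```python
# Minimal sketch: invoking the model's "thinking" mode via transformers.
# Assumes the ibm-granite/granite-3.3-8b-instruct chat template accepts a
# `thinking` flag (as in earlier Granite releases); check the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)

conversation = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    conversation,
    thinking=True,             # assumed template kwarg enabling <think></think> output
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# With thinking enabled, the decoded text should contain <think>...</think>
# followed by <response>...</response>.
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```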
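The summary also mentions Fill-in-the-Middle training for code completion. A FIM prompt is usually assembled from sentinel tokens around the known prefix and suffix; the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` names below follow the StarCoder-style convention used by earlier Granite code models and are assumptions, so confirm the actual sentinels via the tokenizer's special tokens. The sketch reuses `tokenizer` and `model` from the example above.

```python
# Sketch of a Fill-in-the-Middle (FIM) completion prompt. The sentinel token
# names are assumptions (StarCoder-style, as in earlier Granite code models);
# confirm them via tokenizer.additional_special_tokens.
prefix = "def average(nums):\n    total = "
suffix = "\n    return total / len(nums)"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=24)
# The generated span is the "middle" that fits between prefix and suffix.
print(tokenizer.decode(output[0, inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```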