nielsr (HF Staff) committed (verified)
Commit bd93c4b · 1 parent: 45374cb

Add pipeline tag and library name


This PR adds the missing `pipeline_tag` and `library_name` to the model card metadata. The `pipeline_tag` is set to `text-generation` as the model is a large language model performing text generation tasks. The `library_name` is set to `transformers` as indicated by the provided code example.
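For reference, the full model card front matter after this change, reconstructed from the diff below, reads:

```yaml
---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
```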

Files changed (1):

1. README.md (+7, -5)
README.md CHANGED

````diff
@@ -1,8 +1,10 @@
 ---
-license: apache-2.0
 language:
 - it
 - en
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # Llama-3.1-8B-Italian-LAPT
@@ -14,7 +16,7 @@ language:
 
 The **Llama-3.1-8B-Adapted** collection of large language models (LLMs), is a collection of adapted generative models in 8B (text in/text out), adapted models from **Llama-3.1-8B**.
 
-*Llama-3.1-8B-Italian-LAPT* is a continual trained mistral model.
+*Llama-3.1-8B-Italian-LAPT* is a continually trained Mistral model.
 
 **Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
 
@@ -22,15 +24,15 @@ The **Llama-3.1-8B-Adapted** collection of large language models (LLMs), is a co
 
 ## Data used for the adaptation
 
-The **Mistral-7B-v0.1-Adapted** model are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
-The data are extracted to be skewed toward Italian language with a ration of one over four. Extracting the first 9B tokens from Italian part of CulturaX and the first 3B tokens from English part of CulturaX.
+The **Mistral-7B-v0.1-Adapted** model is trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
+The data are extracted to be skewed toward Italian language with a ratio of one over four. Extracting the first 9B tokens from the Italian part of CulturaX and the first 3B tokens from the English part of CulturaX.
 
 
 ## Use with Transformers
 
 You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
 
-Make sure to update your transformers installation via pip install --upgrade transformers.
+Make sure to update your transformers installation via `pip install --upgrade transformers`.
 
 ```python
 import transformers
````
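
The last hunk ends just inside the README's Python example, so the full usage snippet is not visible in this diff. As a hedged sketch of how the new `pipeline_tag: text-generation` and `library_name: transformers` metadata would typically be exercised with the Transformers pipeline abstraction mentioned in the README, the snippet below assumes a repository ID of `SapienzaNLP/Llama-3.1-8B-Italian-LAPT`; that ID is illustrative and may not match the actual Hub ID.

```python
import torch
import transformers

# Assumed repository ID for illustration only; replace with the model's actual Hub ID.
model_id = "SapienzaNLP/Llama-3.1-8B-Italian-LAPT"

# Text-generation pipeline, matching the `pipeline_tag` added in this PR.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# A plain-text prompt is used because the model is described as a continually
# pre-trained base model rather than an instruction-tuned chat model.
outputs = pipeline("La capitale d'Italia è", max_new_tokens=32)
print(outputs[0]["generated_text"])
```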