psinger committed on
Commit 695913a · 1 Parent(s): 0101397

Update README.md

Files changed (1): README.md (+7 −11)

README.md CHANGED
@@ -8,13 +8,18 @@ tags:
 - large language model
 - h2o-llmstudio
 inference: false
-thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
+thumbnail: >-
+  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
+license: apache-2.0
+datasets:
+- OpenAssistant/oasst1
 ---
 # Model Card
 ## Summary
 
 This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
 - Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
+- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
 
 
 ## Usage
@@ -22,7 +27,7 @@ This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmst
 To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
 
 ```bash
-pip install transformers==4.29.0
+pip install transformers==4.30.2
 pip install accelerate==0.20.3
 pip install torch==2.0.0
 ```
@@ -174,15 +179,6 @@ LlamaForCausalLM(
 This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
 
 
-## Model Validation
-
-Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
-```
-
-
 ## Disclaimer
 
 Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
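For context on the Usage hunk above, which only pins library versions: h2o-llmstudio fine-tunes generally expect a `<|prompt|>…<|endoftext|><|answer|>` chat template before generation. That template is an assumption based on other h2ogpt-gm model cards, not something stated in this diff; a minimal sketch of building such a prompt string:

```python
# Sketch only: the <|prompt|>/<|answer|> template below is assumed from
# other h2ogpt-gm model cards; it is not confirmed by this commit.
def format_prompt(question: str) -> str:
    """Build an h2o-llmstudio-style prompt for a single user turn."""
    return f"<|prompt|>{question}<|endoftext|><|answer|>"

print(format_prompt("Why is drinking water so healthy?"))
```

The resulting string would then be passed to a `transformers` text-generation pipeline loaded from the repository's model id (e.g. `h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b`, as seen in the removed evaluation command).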