AdaptLLM committed on
Commit 0ff0062 · verified · 1 Parent(s): 83152a2

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED

@@ -5,7 +5,7 @@ language:
 base_model:
 - Lin-Chen/open-llava-next-llama3-8b
 tags:
-- remote sensing
+- remote-sensing
 ---
 # Adapting Multimodal Large Language Models to Domains via Post-Training
 
@@ -28,7 +28,7 @@ image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
 
 instruction = "What's in the image?"
 
-model_path='AdaptLLM/remote sensing-LLaVA-NeXT-Llama3-8B'
+model_path='AdaptLLM/remote-sensing-LLaVA-NeXT-Llama3-8B'
 
 # =========================== Do NOT need to modify the following ===============================
 # Load the processor
@@ -60,7 +60,7 @@ print(pred)
 
 ## 2. To Evaluate Any MLLM on Domain-Specific Benchmarks
 
-Refer to the [remote sensing-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/remote sensing-VQA-benchmark) to reproduce our results and evaluate many other MLLMs on domain-specific benchmarks.
+Refer to the [remote-sensing-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/remote-sensing-VQA-benchmark) to reproduce our results and evaluate many other MLLMs on domain-specific benchmarks.
 
 ## 3. To Reproduce this Domain-Adapted MLLM
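The change above replaces the space in "remote sensing" with a hyphen wherever the string is used as a Hub identifier (the tag, the `model_path`, and the dataset link). A minimal sketch of why this matters, assuming the Hub's documented naming rule that repo names may contain only alphanumerics, `-`, `_`, and `.` (the regex below is an illustrative approximation, not the Hub's actual validator):

```python
import re

# Approximate Hub repo-name pattern (assumption): alphanumerics plus "-", "_", ".".
# A space is not in this set, so the pre-fix repo id would be rejected.
REPO_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]*$")

def is_valid_repo_id(repo_id: str) -> bool:
    """Check a 'namespace/name' repo id against the approximate naming pattern."""
    parts = repo_id.split("/")
    if len(parts) != 2:
        return False
    return all(REPO_NAME_RE.match(part) is not None for part in parts)

print(is_valid_repo_id("AdaptLLM/remote sensing-LLaVA-NeXT-Llama3-8B"))  # old id -> False
print(is_valid_repo_id("AdaptLLM/remote-sensing-LLaVA-NeXT-Llama3-8B"))  # fixed id -> True
```

The same reasoning applies to the dataset URL in section 2: a space in the path segment would need percent-encoding and would not resolve to the renamed repo, so the hyphenated form is the one to pass to `model_path` and to cite in links.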