---
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- google/paligemma-3b-mix-448
- Qwen/Qwen2.5-7B-Instruct
- google/siglip-so400m-patch14-384
- timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k
base_model_relation: merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
tags:
- eagle
- VLM
---

# Eagle-2

[\[📂 GitHub\]](https://github.com/NVlabs/EAGLE) [\[📜 Eagle2 Tech Report\]](http://arxiv.org/abs/2501.14818)
[\[🗨️ Chat Demo\]](http://eagle-vlm.xyz/) [\[🤗 HF Demo\]](TODO)

## Introduction

We are thrilled to release our latest Eagle2 series of Vision-Language Models. Open-source Vision-Language Models (VLMs) have made significant strides in narrowing the gap with proprietary models. However, critical details about data strategies and implementation are often missing, limiting reproducibility and innovation. In this project, we focus on VLM post-training from a data-centric perspective, sharing insights into building effective data strategies from scratch. By combining these strategies with robust training recipes and model design, we introduce Eagle2, a family of performant VLMs. Our work aims to empower the open-source community to develop competitive VLMs with transparent processes.

In this repo, we are open-sourcing Eagle2-9B, which strikes a strong balance between performance and inference speed.

## Model Zoo
We provide the following models:

| Model Name | LLM | Vision Encoder | Max Length | HF Link |
| ---------- | --- | -------------- | ---------- | ------- |
| Eagle2-1B | [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) | SigLIP | 16K | [🤗 link](https://huggingface.co/NVIDIA/Eagle2-1B) |
| Eagle2-2B | [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) | SigLIP | 16K | [🤗 link](https://huggingface.co/NVIDIA/Eagle2-2B) |
| Eagle2-9B | [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | SigLIP + ConvNeXt | 16K | [🤗 link](https://huggingface.co/NVIDIA/Eagle2-9B) |
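
If you prefer to fetch the weights ahead of time instead of letting `transformers` download them on first use, one optional approach is `huggingface_hub` (a convenience sketch only; it assumes `huggingface_hub` is installed and is not required by the demo below):

```python
# Optional: pre-download the Eagle2-9B weights into the local cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nvidia/Eagle2-9B")
print(f"Model files cached at: {local_dir}")
```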

## Benchmark Results
| Benchmark | MiniCPM-Llama3-V-2_5 | InternVL-Chat-V1-5 | InternVL2-8B | Qwen2-VL-7B | Eagle2-9B |
| :--------------------------: | :------------------: | :----------------: | :----------: | :---------: | :-------: |
| Model Size | 8.5B | 25.5B | 8.1B | 8.3B | 8.9B |
| | | | | | |
| DocVQA<sub>test</sub> | 84.8 | 90.9 | 91.6 | **94.5** | 92.6 |
| ChartQA<sub>test</sub> | - | 83.8 | 83.3 | 83.0 | **86.4** |
| InfoVQA<sub>test</sub> | - | 72.5 | 74.8 | 74.3 | **77.2** |
| TextVQA<sub>val</sub> | 76.6 | 80.6 | 77.4 | **84.3** | 83.0 |
| OCRBench | 725 | 724 | 794 | 845 | **868** |
| MME<sub>sum</sub> | 2024.6 | 2187.8 | 2210.3 | **2326.8** | 2260 |
| RealWorldQA | 63.5 | 66.0 | 64.4 | **70.1** | 69.3 |
| AI2D<sub>test</sub> | 78.4 | 80.7 | 83.8 | - | **83.9** |
| MMMU<sub>val</sub> | 45.8 | 45.2 / 46.8 | 49.3 / 51.8 | 54.1 | **56.1** |
| MMBench_V11<sub>test</sub> | - | - | 79.5 | 79.4 | **80.6** |
| MMVet<sub>GPT-4-Turbo</sub> | 52.8 | 55.4 | 54.2 | 62.0 | **62.2** |
| SEED-Image | 72.3 | 76.0 | 76.2 | - | **77.1** |
| HallBench<sub>avg</sub> | 42.4 | 49.3 | 45.2 | **50.6** | 49.3 |
| MathVista<sub>testmini</sub> | 54.3 | 53.5 | 58.3 | 58.2 | **63.8** |
| MMStar | - | - | 60.9 | 60.7 | **62.6** |

## Quick Start

We provide a [demo inference script](./demo.py) to help you quickly get started with the model. The following input types are supported:
- pure text input
- single image input
- multiple image input
- video input

### 0. Install the dependencies

```bash
pip install transformers==4.37.2
pip install flash-attn
```
**Note**: The latest version of transformers is not compatible with this model; please use the pinned version above.
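
To catch an incompatible environment early, you can optionally verify the installed version before loading the model (a minimal sketch matching the pin above):

```python
# Optional sanity check: this model card pins transformers==4.37.2.
import transformers

assert transformers.__version__ == "4.37.2", (
    f"Expected transformers 4.37.2, found {transformers.__version__}"
)
```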

### 1. Prepare the Model worker

<details>
<summary>Click to expand</summary>

```python
"""
A model worker executes the model.
Copied and modified from https://github.com/OpenGVLab/InternVL/blob/main/streamlit_demo/model_worker.py
"""
# Importing torch before transformers can cause a `segmentation fault`,
# so transformers is imported first.
from transformers import AutoModel, AutoTokenizer, TextIteratorStreamer, AutoConfig

import argparse
import base64
import json
import os
import decord
import threading
import time
from io import BytesIO
from threading import Thread
import math
import requests
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
import numpy as np


IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

SIGLIP_MEAN = (0.5, 0.5, 0.5)
SIGLIP_STD = (0.5, 0.5, 0.5)


def get_seq_frames(total_num_frames, desired_num_frames=-1, stride=-1):
    """
    Calculate the indices of frames to extract from a video.

    Parameters:
        total_num_frames (int): Total number of frames in the video.
        desired_num_frames (int): Desired number of frames to extract.
        stride (int): Fixed sampling stride (alternative to desired_num_frames).

    Returns:
        list: List of indices of frames to extract.
    """
    # Exactly one of desired_num_frames / stride must be specified.
    assert (desired_num_frames > 0 or stride > 0) and not (desired_num_frames > 0 and stride > 0)

    if stride > 0:
        return list(range(0, total_num_frames, stride))

    # Calculate the size of each segment from which a frame will be extracted
    seg_size = float(total_num_frames - 1) / desired_num_frames

    seq = []
    for i in range(desired_num_frames):
        # Calculate the start and end indices of each segment
        start = int(np.round(seg_size * i))
        end = int(np.round(seg_size * (i + 1)))

        # Append the middle index of the segment to the list
        seq.append((start + end) // 2)

    return seq

def build_video_prompt(meta_list, num_frames, time_position=False):
    # If time_position is True, the frame timestamps are included in the prompt.
    # It can be enabled by 1. passing time_position, or 2. setting the env var TIME_POSITION.
    time_position = os.environ.get("TIME_POSITION", time_position)
    prefix = "This is a video:\n"
    for i in range(num_frames):
        if time_position:
            frame_txt = f"Frame {i+1} sampled at {meta_list[i]:.2f} seconds: <image>\n"
        else:
            frame_txt = f"Frame {i+1}: <image>\n"
        prefix += frame_txt
    return prefix

def load_video(video_path, num_frames=64, frame_cache_root=None):
    if isinstance(video_path, str):
        video = decord.VideoReader(video_path)
    elif isinstance(video_path, dict):
        assert False, 'dict inputs are not supported for "video_path"'
    fps = video.get_avg_fps()
    sampled_frames = get_seq_frames(len(video), num_frames)
    sampled_timestamps = [i / fps for i in sampled_frames]
    frames = video.get_batch(sampled_frames).asnumpy()
    images = [Image.fromarray(frame) for frame in frames]

    return images, build_video_prompt(sampled_timestamps, len(images), time_position=True)

def load_image(image):
    if isinstance(image, str) and os.path.exists(image):
        return Image.open(image)
    elif isinstance(image, dict):
        if 'disk_path' in image:
            return Image.open(image['disk_path'])
        elif 'base64' in image:
            return Image.open(BytesIO(base64.b64decode(image['base64'])))
        elif 'url' in image:
            response = requests.get(image['url'])
            return Image.open(BytesIO(response.content))
        elif 'bytes' in image:
            return Image.open(BytesIO(image['bytes']))
        else:
            raise ValueError(f'Invalid image: {image}')
    else:
        raise ValueError(f'Invalid image: {image}')

def build_transform(input_size, norm_type='imagenet'):
    if norm_type == 'imagenet':
        MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    elif norm_type == 'siglip':
        MEAN, STD = SIGLIP_MEAN, SIGLIP_STD
    else:
        raise ValueError(f'Unknown norm_type: {norm_type}')

    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    """
    The previous version mainly focused on the aspect ratio;
    here we also take the area ratio into account.
    """
    best_factor = float('-inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        # A tiled area covering more than 60% of the original image area is enough,
        # so the area term is capped at 0.6.
        factor_based_on_area_n_ratio = min((ratio[0] * ratio[1] * image_size * image_size) / area, 0.6) * \
            min(target_aspect_ratio / aspect_ratio, aspect_ratio / target_aspect_ratio)

        if factor_based_on_area_n_ratio > best_factor:
            best_factor = factor_based_on_area_n_ratio
            best_ratio = ratio

    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # enumerate the candidate tiling grids
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def split_model(model_path, device):
    # Spread the LLM layers across all GPUs, keeping the vision tower,
    # projector and embeddings on `device`.
    device_map = {}
    world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
    num_layers = config.llm_config.num_hidden_layers

    print('world_size', world_size)
    num_layers_per_gpu_ = math.floor(num_layers / (world_size - 1))
    num_layers_per_gpu = [num_layers_per_gpu_] * world_size
    num_layers_per_gpu[device] = num_layers - num_layers_per_gpu_ * (world_size - 1)
    print(num_layers_per_gpu)
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language_model.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    device_map['vision_model'] = device
    device_map['mlp1'] = device
    device_map['language_model.model.tok_embeddings'] = device
    device_map['language_model.model.embed_tokens'] = device
    device_map['language_model.output'] = device
    device_map['language_model.model.norm'] = device
    device_map['language_model.lm_head'] = device
    device_map['language_model.model.rotary_emb'] = device
    device_map[f'language_model.model.layers.{num_layers - 1}'] = device
    return device_map

class ModelWorker:
    def __init__(self, model_path, model_name,
                 load_8bit, device):

        if model_path.endswith('/'):
            model_path = model_path[:-1]
        if model_name is None:
            model_paths = model_path.split('/')
            if model_paths[-1].startswith('checkpoint-'):
                self.model_name = model_paths[-2] + '_' + model_paths[-1]
            else:
                self.model_name = model_paths[-1]
        else:
            self.model_name = model_name

        print(f'Loading the model {self.model_name}')

        tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
        tokens_to_keep = ['<box>', '</box>', '<ref>', '</ref>']
        tokenizer.additional_special_tokens = [item for item in tokenizer.additional_special_tokens if item not in tokens_to_keep]
        self.tokenizer = tokenizer
        config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
        model_type = config.vision_config.model_type
        self.device = torch.cuda.current_device()
        if model_type == 'siglip_vision_model':
            self.norm_type = 'siglip'
        elif model_type == 'MOB':
            self.norm_type = 'siglip'
        else:
            self.norm_type = 'imagenet'

        # Very large checkpoints are sharded across GPUs; smaller ones fit on one device.
        if any(x in model_path.lower() for x in ['34b']):
            device_map = split_model(model_path, self.device)
        else:
            device_map = None
        self.device_map = device_map

        if device_map is not None:
            self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
                                                   low_cpu_mem_usage=True,
                                                   device_map=device_map,
                                                   trust_remote_code=True,
                                                   load_in_8bit=load_8bit).eval()
        else:
            self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
                                                   trust_remote_code=True,
                                                   load_in_8bit=load_8bit).eval()

        if not load_8bit and device_map is None:
            self.model = self.model.to(device)
        self.load_8bit = load_8bit

        self.model_path = model_path
        self.image_size = self.model.config.force_image_size
        self.context_len = tokenizer.model_max_length
        self.per_tile_len = 256

    def reload_model(self):
        del self.model
        torch.cuda.empty_cache()
        if self.device == 'auto':
            os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
            # This can make distributed deployment work properly
            self.model = AutoModel.from_pretrained(
                self.model_path,
                load_in_8bit=self.load_8bit,
                torch_dtype=torch.bfloat16,
                device_map=self.device_map,
                trust_remote_code=True).eval()
        else:
            self.model = AutoModel.from_pretrained(
                self.model_path,
                load_in_8bit=self.load_8bit,
                torch_dtype=torch.bfloat16,
                trust_remote_code=True).eval()
        if not self.load_8bit and not self.device == 'auto':
            self.model = self.model.cuda()

    @torch.inference_mode()
    def generate(self, params):
        system_message = params['prompt'][0]['content']
        send_messages = params['prompt'][1:]
        max_input_tiles = params['max_input_tiles']
        temperature = params['temperature']
        top_p = params['top_p']
        max_new_tokens = params['max_new_tokens']
        repetition_penalty = params['repetition_penalty']
        video_frame_num = params.get('video_frame_num', 64)
        do_sample = temperature > 0.0

        global_image_cnt = 0
        history, pil_images, max_input_tile_list = [], [], []
        for message in send_messages:
            if message['role'] == 'user':
                prefix = ''
                if 'image' in message:
                    for image_data in message['image']:
                        pil_images.append(load_image(image_data))
                        prefix = prefix + f'<image {global_image_cnt + 1}><image>\n'
                        global_image_cnt += 1
                        max_input_tile_list.append(max_input_tiles)
                if 'video' in message:
                    for video_data in message['video']:
                        video_frames, tmp_prefix = load_video(video_data, num_frames=video_frame_num)
                        pil_images.extend(video_frames)
                        prefix = prefix + tmp_prefix
                        global_image_cnt += len(video_frames)
                        max_input_tile_list.extend([1] * len(video_frames))
                content = prefix + message['content']
                history.append([content, ])
            else:
                history[-1].append(message['content'])
        question, history = history[-1][0], history[:-1]

        if global_image_cnt == 1:
            question = question.replace('<image 1><image>\n', '<image>\n')
            history = [[item[0].replace('<image 1><image>\n', '<image>\n'), item[1]] for item in history]

        try:
            assert len(max_input_tile_list) == len(pil_images), 'The number of max_input_tile_list and pil_images should be the same.'
        except Exception as e:
            print(f'Error: {e}')
            print(f'max_input_tile_list: {max_input_tile_list}, pil_images: {pil_images}')
            raise

        old_system_message = self.model.system_message
        self.model.system_message = system_message

        transform = build_transform(input_size=self.image_size, norm_type=self.norm_type)
        if len(pil_images) > 0:
            max_input_tiles_limited_by_context = params['max_input_tiles']
            while True:
                image_tiles = []
                for current_max_input_tiles, pil_image in zip(max_input_tile_list, pil_images):
                    if self.model.config.dynamic_image_size:
                        tiles = dynamic_preprocess(
                            pil_image, image_size=self.image_size, max_num=min(current_max_input_tiles, max_input_tiles_limited_by_context),
                            use_thumbnail=self.model.config.use_thumbnail)
                    else:
                        tiles = [pil_image]
                    image_tiles += tiles
                if len(image_tiles) * self.per_tile_len < self.context_len:
                    break
                else:
                    # Too many tiles for the context window; shrink the tile budget and retry.
                    max_input_tiles_limited_by_context -= 2

                if max_input_tiles_limited_by_context < 1:
                    break

            pixel_values = [transform(item) for item in image_tiles]
            pixel_values = torch.stack(pixel_values).to(self.model.device, dtype=torch.bfloat16)
            print(f'Split images to {pixel_values.shape}')
        else:
            pixel_values = None

        generation_config = dict(
            num_beams=1,
            max_new_tokens=max_new_tokens,
            do_sample=do_sample,
            temperature=temperature,
            repetition_penalty=repetition_penalty,
            max_length=self.context_len,
            top_p=top_p,
        )

        response = self.model.chat(
            tokenizer=self.tokenizer,
            pixel_values=pixel_values,
            question=question,
            history=history,
            return_history=False,
            generation_config=generation_config,
        )
        self.model.system_message = old_system_message
        return {'text': response, 'error_code': 0}


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model-path', type=str, default='nvidia/Eagle2-9B')
    parser.add_argument('--model-name', type=str, default='Eagle2-9B')
    parser.add_argument('--device', type=str, default='cuda')
    parser.add_argument('--load-8bit', action='store_true')
    args = parser.parse_args()
    print(f'args: {args}')

    worker = ModelWorker(
        args.model_path,
        args.model_name,
        args.load_8bit,
        args.device)
```
</details>
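
As a quick illustration of the tiling logic above, here is a minimal sketch (assumed to run in the same module as the worker code, with a placeholder image) showing how `dynamic_preprocess` splits a high-resolution input into 448×448 tiles plus an optional thumbnail:

```python
# Illustrative only: exercise the tiling helpers defined in the worker above.
from PIL import Image

image = Image.new('RGB', (1920, 1080))  # placeholder for a real photo
tiles = dynamic_preprocess(image, max_num=6, image_size=448, use_thumbnail=True)
print(len(tiles))  # 7 here: a 3x2 grid of 448x448 tiles plus one thumbnail
```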

### 2. Prepare the Prompt

- Single image input
```python
prompt = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Describe this image in detail.',
     'image': [
        {'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/01-nvidia-logo-vert-500x200-2c50-d@2x.png'}
     ],
    }
]
```

- Multiple image input
```python
prompt = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Describe these two images in detail.',
     'image': [
        {'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/01-nvidia-logo-vert-500x200-2c50-d@2x.png'},
        {'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/01-nvidia-logo-vert-500x200-2c50-d@2x.png'}
     ],
    }
]
```

- Video input
```python
prompt = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Describe this video in detail.',
     'video': [
        'path/to/your/video.mp4'
     ],
    }
]
```
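
- Pure text input (also supported, as listed above; a minimal example in the same message format, simply omitting the `image` and `video` fields)
```python
prompt = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Briefly introduce the Eagle2 family of vision-language models.'}
]
```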

### 3. Generate the response
```python
params = {
    'prompt': prompt,
    'max_input_tiles': 24,
    'temperature': 0.7,
    'top_p': 1.0,
    'max_new_tokens': 4096,
    'repetition_penalty': 1.0,
}
worker.generate(params)
```
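
`generate` returns a dict containing the generated text and an error code (see `ModelWorker.generate` above), so a typical call site looks like:

```python
result = worker.generate(params)
print(result['text'])        # the model's reply
print(result['error_code'])  # 0 on success
```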

## TODO
- [ ] Support vLLM Inference
- [ ] Provide AWQ Quantization Weights
- [ ] Provide fine-tuning scripts


## License/Terms of Use
- The code is released under the Apache 2.0 license as found in the [LICENSE](https://huggingface.co/NVEagle/Eagle-X5-13B-Chat/blob/main/LICENSE) file.
- The pretrained model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International](https://spdx.org/licenses/CC-BY-NC-4.0) license.
- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
  - Model License of Qwen2.5-7B-Instruct: [Apache-2.0](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE)
  - Model License of PaliGemma: [Gemma license](https://ai.google.dev/gemma/terms)


## Citation

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).