prithivMLmods committed 08e178f (verified) · 1 parent: 69b949e

Update README.md

Files changed (1): README.md (+104, −0)
README.md

tags:
  - 2B
---

![aWERGSRFH.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/0kXlF1G9NdID1279w_yfw.png)

# **Qwen2-VL-OCR2-2B-Instruct**

The **Qwen2-VL-OCR2-2B-Instruct** model is a fine-tuned version of **Qwen/Qwen2-VL-2B-Instruct**, tailored for **Optical Character Recognition (OCR)**, **English language understanding**, and **math problem solving with LaTeX formatting**. It combines a conversational interface with visual and textual understanding to handle multi-modal tasks effectively.

#### Key Enhancements:

* **SoTA understanding of images of various resolutions and aspect ratios**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, and MTVQA.

* **Understanding videos of 20+ minutes**: Qwen2-VL can understand videos longer than 20 minutes for high-quality video-based question answering, dialogue, and content creation.

* **Agent that can operate your mobile phones, robots, etc.**: With its complex reasoning and decision-making abilities, Qwen2-VL can be integrated with devices such as mobile phones and robots for automatic operation based on the visual environment and text instructions.

* **Multilingual support**: To serve global users, besides English and Chinese, Qwen2-VL understands text in many other languages inside images, including most European languages, Japanese, Korean, Arabic, and Vietnamese.
### How to Use

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen2-VL-OCR2-2B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory savings,
# especially in multi-image and video scenarios (requires `import torch` and flash-attn).
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/Qwen2-VL-OCR2-2B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Default processor
processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen2-VL-OCR2-2B-Instruct")

# The default range for the number of visual tokens per image is 4-16384.
# You can set min_pixels and max_pixels to balance speed and memory usage,
# e.g. a token count range of 256-1280 per image:
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "prithivMLmods/Qwen2-VL-OCR2-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
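The example above always passes `videos=video_inputs` but only supplies an image. As a minimal, hypothetical sketch of a video prompt, the message can use a `"video"` content entry instead, which `process_vision_info` turns into `video_inputs`; the local file path, `max_pixels`, and `fps` values below are illustrative placeholders.

```python
# Hypothetical video prompt; the path, max_pixels, and fps values are placeholders.
video_messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

text = processor.apply_chat_template(video_messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(video_messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# Generate, then trim and decode exactly as in the image example above.
generated_ids = model.generate(**inputs, max_new_tokens=128)
```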
### Buffering Output

The loop below assumes a text `streamer` (for example, a `transformers` `TextIteratorStreamer`) is already producing chunks of generated text inside a generator function; one way to set this up is sketched after this block.

```python
buffer = ""
for new_text in streamer:
    buffer += new_text
    # Remove <|im_end|> or similar special tokens from the output
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
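For context, here is one way the `streamer` above could be created. This is a sketch, not part of the original card: it assumes the `model`, `processor`, and `inputs` from the How to Use section, uses `transformers.TextIteratorStreamer` with `model.generate` running in a background thread, and the `stream_response` name is made up for illustration.

```python
from threading import Thread
from transformers import TextIteratorStreamer

def stream_response(model, processor, inputs, max_new_tokens=128):
    """Yield the partial response as it is generated (sketch)."""
    streamer = TextIteratorStreamer(
        processor.tokenizer, skip_prompt=True, skip_special_tokens=True
    )
    generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=max_new_tokens)

    # Run generation in a background thread so the stream can be consumed here.
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    buffer = ""
    for new_text in streamer:
        buffer += new_text
        # Defensive cleanup in case special tokens slip through
        buffer = buffer.replace("<|im_end|>", "")
        yield buffer

    thread.join()
```

Each yielded `buffer` is the full response so far, which is the shape expected by generator-based streaming chat UIs.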
### **Key Features**

1. **Vision-Language Integration:**
   - Combines **image understanding** with **natural language processing** to convert images into text.

2. **Optical Character Recognition (OCR):**
   - Extracts and processes textual information from images with high accuracy.

3. **Math and LaTeX Support:**
   - Solves math problems and outputs equations in **LaTeX format** (see the prompt sketch after this list).

4. **Conversational Capabilities:**
   - Designed to handle **multi-turn interactions**, providing context-aware responses.

5. **Image-Text-to-Text Generation:**
   - Inputs can include **images, text, or a combination**, and the model generates descriptive or problem-solving text.
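To exercise the OCR and LaTeX capabilities listed above, only the message content needs to change relative to the How to Use example; the prompt wording and the image URL below are hypothetical placeholders.

```python
# Hypothetical OCR / math prompt; only the message changes, the rest of the pipeline is the same.
ocr_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/scanned_worksheet.png"},
            {
                "type": "text",
                "text": "Extract all text from this image. Solve any math problems "
                        "you find and write the equations in LaTeX.",
            },
        ],
    }
]

text = processor.apply_chat_template(ocr_messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(ocr_messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```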