---
license: mit
---

# MLCD-ViT-bigG Model Card

MLCD-ViT-bigG is a state-of-the-art vision transformer enhanced with 2D Rotary Position Embedding (RoPE2D), achieving strong results on document understanding and visual question answering benchmarks. Developed by DeepGlint AI, it is intended to serve as the vision tower of vision-language models.
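As a rough intuition for RoPE2D, each patch token's features are rotated by angles derived from the patch's row and column on the patch grid, so relative 2D positions are encoded directly in the attention interactions. The snippet below is only an illustrative sketch of that idea, not the model's actual implementation; the grid size and feature dimension are toy values.

```python
# Illustrative sketch of 2D rotary position embedding (RoPE2D).
# NOT the model's exact implementation: it only shows rotating half of each
# token's feature pairs by its row index and the other half by its column index.
import torch

def rope_2d(x: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
    """Apply a toy RoPE2D to patch features of shape (grid_h * grid_w, dim)."""
    num_patches, dim = x.shape
    assert num_patches == grid_h * grid_w and dim % 4 == 0

    # Half of the rotation angles encode the row position, half the column position.
    half = dim // 2
    freqs = 1.0 / (10000 ** (torch.arange(0, half, 2, dtype=torch.float32) / half))

    rows = torch.arange(grid_h).repeat_interleave(grid_w).float()  # row index per patch
    cols = torch.arange(grid_w).repeat(grid_h).float()             # column index per patch

    angles = torch.cat([rows[:, None] * freqs, cols[:, None] * freqs], dim=-1)  # (N, dim/2)
    cos, sin = angles.cos(), angles.sin()

    x1, x2 = x[..., 0::2], x[..., 1::2]        # interleaved feature pairs
    rotated = torch.empty_like(x)
    rotated[..., 0::2] = x1 * cos - x2 * sin   # standard 2D rotation per pair
    rotated[..., 1::2] = x1 * sin + x2 * cos
    return rotated

# Example: a 24x24 patch grid (336px image, patch size 14) with a toy feature dim of 64.
patches = torch.randn(24 * 24, 64)
print(rope_2d(patches, 24, 24).shape)  # torch.Size([576, 64])
```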

We use the official LLaVA-NeXT framework and its official training dataset, LLaVA-NeXT-Data, to evaluate the foundation vision models below.

| Vision Tower | RoPE2D | ChartQA | DocVQA | InfoVQA | OCRBench | MMMU |
|---|---|---|---|---|---|---|
| CLIP (ViT-L-14-336px) | × | 66.52 | 75.21 | 38.88 | 525.00 | 44.20 |
| SigLIP (ViT-SO400M-384px) | × | 69.28 | 76.71 | 41.38 | 554.00 | 46.78 |
| DFN5B (ViT-H-14-378px) | × | 64.36 | 70.87 | 38.59 | 473.00 | 48.00 |
| MLCD (ViT-L-14-336px) | × | 67.84 | 76.46 | 43.48 | 531.00 | 44.30 |
| MLCD (ViT-bigG-14-336px) | ✔ | 71.07 | 79.63 | 44.38 | 572.00 | 46.78 |

## Installation

```bash
pip install torch transformers
git clone https://github.com/deepglint/unicom
cd unicom/mlcd
```
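The usage example below imports `MLCDVisionModel` from `vit_rope2d_hf`, which is provided in the cloned repository, so run the snippet from the `unicom/mlcd` directory (or add that directory to `PYTHONPATH`). A minimal check, under that assumption:

```python
# Run from inside unicom/mlcd (or add it to PYTHONPATH) so that the
# vit_rope2d_hf module from the cloned repository is importable.
import vit_rope2d_hf
print(vit_rope2d_hf.__file__)  # should point into the cloned unicom/mlcd directory
```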

## Usage

```python
from vit_rope2d_hf import MLCDVisionModel
from transformers import AutoImageProcessor
from PIL import Image
import torch

# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-336")
processor = AutoImageProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-336")

# Process a single image
image = Image.open("document.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Get visual features
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state

print(f"Extracted features shape: {features.shape}")
```
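For multiple images, the processor also accepts a list and returns a single batched tensor. The sketch below reuses the `model` and `processor` loaded above; the file names are placeholders, and the mean-pooling step is just one illustrative way to turn token features into a per-image embedding.

```python
# Batched feature extraction, reusing the `model` and `processor` loaded above.
from PIL import Image
import torch

paths = ["page_001.jpg", "page_002.jpg"]  # placeholder file names
images = [Image.open(p).convert("RGB") for p in paths]

inputs = processor(images=images, return_tensors="pt")  # stacks images into one batch
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool over the token dimension to get one embedding per image,
# e.g. for retrieval or clustering. Shape: (batch_size, hidden_size).
embeddings = outputs.last_hidden_state.mean(dim=1)
print(embeddings.shape)
```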