|
--- |
|
datasets: |
|
- starriver030515/FUSION-Pretrain-10M |
|
- starriver030515/FUSION-Finetune-12M |
|
base_model: |
|
- meta-llama/Llama-3.1-8B-Instruct |
|
- google/siglip-so400m-patch14-384 |
|
license: apache-2.0 |
|
--- |
|
# Model Card for FUSION |
|
|
|
This is the checkpoint after Stage 1 training of FUSION-LLaMA3.1-8B. |
|
|
|
## Model Details |
|
|
|
**Model Description** |
|
|
|
<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/encoder.jpg" alt="encoder" width="1000px"> |
|
|
|
<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/decoder.jpg" alt="decoder" width="1000px"> |
|
|
|
FUSION is a family of multimodal large language models that adopts a fully integrated vision-language architecture, enabling comprehensive and fine-grained cross-modal understanding. In contrast to prior approaches that primarily perform shallow or late-stage modality fusion during the LLM decoding phase, FUSION achieves deep, dynamic integration across the entire vision-language processing pipeline. |
|
|
|
To enable this, FUSION utilizes Text-Guided Unified Vision Encoding, which incorporates textual context directly into the vision encoder. This design allows for pixel-level vision-language alignment and facilitates early-stage cross-modal interaction. |
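As a rough illustration only (not the released implementation), text-guided encoding can be thought of as letting the vision encoder's patch tokens attend to the question's text embeddings inside the encoder, so the visual features are conditioned on the prompt from the start. All class, module, and variable names below are hypothetical:

```python
import torch
import torch.nn as nn

class TextGuidedVisionBlock(nn.Module):
    """Hypothetical sketch: patch tokens cross-attend to text tokens so that
    vision features are shaped by the question at encoding time."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N_patches, dim); text_tokens: (B, N_text, dim)
        fused, _ = self.cross_attn(query=patch_tokens, key=text_tokens, value=text_tokens)
        # residual fusion injects textual context into the vision stream
        return self.norm(patch_tokens + fused)
```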
|
|
|
During decoding, FUSION employs a Context-Aware Recursive Alignment Decoding strategy. This component dynamically aggregates and refines visual features based on the evolving textual context at each decoding step, allowing the model to capture question-level semantics with high precision.
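Conceptually, this amounts to re-querying the visual features with the current decoding context and feeding the refreshed visual summary back into the language model. The snippet below is only a schematic sketch with invented names, not the actual FUSION decoding code:

```python
import torch
import torch.nn as nn

class RecursiveVisualAligner(nn.Module):
    """Hypothetical sketch: latent query tokens are first conditioned on the
    current textual context, then used to re-aggregate the visual features."""
    def __init__(self, dim: int, num_queries: int = 64, num_heads: int = 8):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.context_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vision_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_feats: torch.Tensor, context_state: torch.Tensor) -> torch.Tensor:
        # vision_feats: (B, N_vis, dim); context_state: (B, N_ctx, dim) from the LLM
        B = vision_feats.size(0)
        queries = self.latent_queries.unsqueeze(0).expand(B, -1, -1)
        # condition the queries on the evolving textual context
        queries, _ = self.context_attn(queries, context_state, context_state)
        # re-aggregate visual features with the context-aware queries
        refined, _ = self.vision_attn(queries, vision_feats, vision_feats)
        return refined  # (B, num_queries, dim) context-aware visual summary
```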
|
|
|
To further enhance alignment and reduce the semantic gap between modalities, FUSION integrates Dual-Supervised Semantic Mapping Loss, which provides simultaneous supervision in both visual and textual embedding spaces. This dual-path guidance strengthens the consistency and semantic coherence of the fused representations. |
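One minimal way to read this dual supervision is as two consistency terms, one in each embedding space. The sketch below assumes simple linear mappings and an MSE objective purely for illustration; it is not the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSupervisedMappingLoss(nn.Module):
    """Hypothetical sketch: map vision features into the text space and text
    features into the vision space, supervising both directions."""
    def __init__(self, vis_dim: int, txt_dim: int):
        super().__init__()
        self.vis_to_txt = nn.Linear(vis_dim, txt_dim)
        self.txt_to_vis = nn.Linear(txt_dim, vis_dim)

    def forward(self, vis_feats: torch.Tensor, txt_feats: torch.Tensor) -> torch.Tensor:
        # vis_feats: (B, vis_dim) pooled visual features
        # txt_feats: (B, txt_dim) pooled textual features
        loss_v2t = F.mse_loss(self.vis_to_txt(vis_feats), txt_feats)  # supervision in text space
        loss_t2v = F.mse_loss(self.txt_to_vis(txt_feats), vis_feats)  # supervision in vision space
        return loss_v2t + loss_t2v
```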
|
|
|
**Base Model** |
|
|
|
**LLM**: [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) |
|
|
|
**Vision Encoder**: [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) |
|
|
|
|
|
## Training Details |
|
|
|
**Training Strategies** |
|
|
|
FUSION is trained with a three-stage training framework, ensuring comprehensive alignment and integration between visual and linguistic modalities. |
|
|
|
- **Stage 1: Foundational Semantic Alignment**: We pretrain the vision encoder on extensive image-caption datasets to establish precise semantic alignment between visual and textual representations.

- **Stage 1.5: Contextual Multimodal Fusion**: In contrast to Stage 1, this intermediate stage incorporates various types of QA data along with image-caption pairs. This phase is designed to enhance the model's adaptability in aligning vision and language representations across a broad spectrum of scenarios.

- **Stage 2: Visual Instruction Tuning**: At this stage, we expose the model to various visual tasks, enabling it to answer downstream vision-related questions effectively.
|
|
|
**Training Data** |
|
|
|
- [10M FUSION Alignment Data](https://huggingface.co/datasets/starriver030515/FUSION-Pretrain-10M) for Stage 1

- [12M FUSION Curated Instruction Tuning Data](https://huggingface.co/datasets/starriver030515/FUSION-Finetune-12M) for Stage 1.5 and Stage 2
|
|
|
## Performance |
|
|
|
<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/performance.jpg" alt="performance" width="1000px"> |
|
|
|
**Where to send questions or comments about the model:** |
|
|
|
https://github.com/starriver030515/FUSION/issues |
|
|
|
## Paper or resources for more information |
|
|
|
- https://github.com/starriver030515/FUSION |
|
- Paper: coming soon
|
|