Model Card: ColBERT-ko-embeddinggemma-300m
Model Description
This is a ColBERT-style late-interaction retrieval model based on Google's embeddinggemma-300m. It has been fine-tuned on the MS MARCO Korean dataset, making it specialized for semantic search and information retrieval tasks in the Korean language.
The model produces token-level embeddings for both queries and documents. This enables highly accurate and efficient retrieval through the ColBERT MaxSim scoring mechanism, which calculates the relevance between a query and a document at a fine-grained token level.
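As a rough illustration of the scoring step, MaxSim can be written in a few lines of PyTorch. This is a minimal sketch rather than the implementation shipped with this repository; it assumes both embedding matrices are already L2-normalized.

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late-interaction (MaxSim) relevance score.

    query_emb: (L_q, D) token-level query embeddings, assumed L2-normalized
    doc_emb:   (L_d, D) token-level document embeddings, assumed L2-normalized
    """
    # Cosine similarity between every query token and every document token: (L_q, L_d)
    sim = query_emb @ doc_emb.T
    # For each query token, keep its best-matching document token, then sum over query tokens.
    return sim.max(dim=1).values.sum()
```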
Performance & Evaluation
The model demonstrated stable and consistent improvement throughout the training process. Starting from a strong in-batch Recall@1 of ~75-80%, the model was validated every 50 steps, with checkpoints saved based on validation performance. Key metrics like validation loss steadily decreased while Recall@1 increased, indicating successful generalization without signs of overfitting.
Semantic Inference Example (in Korean)
The true power of the fine-tuned model is its ability to understand semantic context beyond simple keyword matching. In the following challenging example (the Korean query and passages are shown here in English translation), the fine-tuned model correctly infers the answer, while the original base model fails.
$ python inference.py
Using device: cuda
Loading fine-tuned model...
Fine-tuned model loaded.
Loading original (pre-trained) model for comparison...
Original model loaded.
==================================================
Query: Which electric car company did Elon Musk found?
==================================================
--- 1. ✅ Fine-tuned Model Results ---
Rank 1 (Score: 9.00): Tesla produces the Model S, 3, X, and Y and is famous for its Autopilot feature.
Rank 2 (Score: 7.92): SpaceX developed reusable rockets, greatly lowering the cost of space exploration.
Rank 3 (Score: 7.72): Amazon Web Services (AWS) is the leader in the cloud computing market.
Rank 4 (Score: 7.23): The metropolitan subway is a key means of transport connecting Seoul with its surrounding cities.
Rank 5 (Score: 5.77): The capital of South Korea is Seoul. Seoul is the country's economic and cultural center.
Rank 6 (Score: 5.43): The capital of Japan is Tokyo. It is a city with beautiful cherry blossoms.
Rank 7 (Score: 5.40): The capital of France is Paris, famous for the Eiffel Tower.
--- 2. ❌ Original Model Results ---
Rank 1 (Score: 9.13): The metropolitan subway is a key means of transport connecting Seoul with its surrounding cities.
Rank 2 (Score: 8.79): Tesla produces the Model S, 3, X, and Y and is famous for its Autopilot feature.
Rank 3 (Score: 8.77): The capital of Japan is Tokyo. It is a city with beautiful cherry blossoms.
Rank 4 (Score: 8.71): The capital of South Korea is Seoul. Seoul is the country's economic and cultural center.
Rank 5 (Score: 8.53): Amazon Web Services (AWS) is the leader in the cloud computing market.
Rank 6 (Score: 8.48): SpaceX developed reusable rockets, greatly lowering the cost of space exploration.
Rank 7 (Score: 8.24): The capital of France is Paris, famous for the Eiffel Tower.
Analysis: The fine-tuned model correctly identifies 'Tesla' by understanding the semantic relationship between the query and the document, even with no direct keyword overlap. In contrast, the original model is easily confused by distractors and fails to rank the correct answer first, demonstrating the significant impact of the ColBERT fine-tuning process.
Intended Uses
The primary use case is high-performance semantic search for Korean text. It is designed to be used as a dual encoder in a retrieval pipeline (see the sketch after this list):
- Offline Indexing: Encode your document corpus into token-level embeddings. Each document is represented as a matrix of vectors (L_d x D).
- Online Search: Encode an incoming query into its token-level embeddings (L_q x D). Use the efficient MaxSim algorithm to score and rank documents from your index.
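The two stages can be wired together as in the sketch below. The encoder here is a random stand-in used only to show shapes and data flow; in practice it would be replaced by the fine-tuned model, and the dimension D should be taken from the released configuration.

```python
import torch
import torch.nn.functional as F

D = 128  # assumed output dimension; check the released model's projection head

def encode(text: str, max_len: int) -> torch.Tensor:
    """Stand-in encoder: returns L2-normalized token embeddings of shape (num_tokens, D).
    In a real pipeline this would tokenize the text and run the fine-tuned model."""
    num_tokens = max(1, min(len(text.split()), max_len))
    return F.normalize(torch.randn(num_tokens, D), dim=-1)

def maxsim(q: torch.Tensor, d: torch.Tensor) -> float:
    # Late interaction: best-matching document token per query token, summed over the query.
    return (q @ d.T).max(dim=1).values.sum().item()

# 1. Offline indexing: encode every document once and store its token matrix.
corpus = [
    "Tesla produces the Model S, 3, X, and Y.",
    "The capital of France is Paris.",
]
index = [encode(doc, max_len=1024) for doc in corpus]

# 2. Online search: encode the query and rank the indexed documents by MaxSim.
query = encode("Which electric car company did Elon Musk found?", max_len=128)
ranked = sorted(enumerate(index), key=lambda item: maxsim(query, item[1]), reverse=True)
print([doc_id for doc_id, _ in ranked])
```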
Training Procedure
The model was trained using an 8-GPU setup with the Hugging Face Accelerate library, utilizing in-batch and cross-device negatives.
- Base Model: google/embeddinggemma-300m
- Dataset: MS MARCO Korean Translated Dataset
- Key Hyperparameters:
  - Precision: bf16
  - Query Max Length: 128
  - Document Max Length: 1024
  - Learning Rate: 5e-6 (base) & 1e-4 (projection head); see the sketch after this list
  - Effective Batch Size: 512 (32 per device * 8 devices * 2 grad_accum)
  - Epochs: 1
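The split learning rate (backbone vs. projection head) maps directly onto optimizer parameter groups. The module layout below (base / projection attributes) is an assumption for illustration, not the actual layout of the training script.

```python
import torch

class ColBERTModel(torch.nn.Module):
    """Hypothetical layout: a backbone encoder plus a linear projection head."""
    def __init__(self, hidden_size: int = 768, dim: int = 128):
        super().__init__()
        self.base = torch.nn.Linear(hidden_size, hidden_size)  # stand-in for embeddinggemma-300m
        self.projection = torch.nn.Linear(hidden_size, dim)    # ColBERT projection head

model = ColBERTModel()

# Two parameter groups, matching the learning rates listed above.
optimizer = torch.optim.AdamW([
    {"params": model.base.parameters(), "lr": 5e-6},
    {"params": model.projection.parameters(), "lr": 1e-4},
])
```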