---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Query Generation with LoRA Finetuning

This project fine-tunes a language model using supervised fine-tuning (SFT) and LoRA adapters to generate queries from documents. The model was trained on the [`prdev/qtack-gq-embeddings-unsupervised`](https://huggingface.co/datasets/prdev/qtack-gq-embeddings-unsupervised) dataset using an A100 GPU.

## Overview

- **Objective:**  
  The goal is to train a model that, given a document, generates a relevant query. Each training example is formatted with custom markers (see the sketch after this list):
  - `<|document|>\n` precedes the document text.
  - `<|query|>\n` precedes the query text.
  - An EOS token is appended at the end to signal termination.

- **Text Chunking:**  
  For best results, **chunk your text** into smaller, coherent pieces before passing it to the model; long documents can lead the model to focus on specific details rather than the overall context. One possible chunking helper is sketched after the usage example below.

- **Training Setup:**  
  The model is fine-tuned with the Unsloth framework and LoRA adapters, taking advantage of an A100 GPU for efficient training. The W&B loss curve is available [here](https://wandb.ai/prdev/lora_model_training/panel/jp2r24xk7?nw=nwuserprdev).
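
For concreteness, here is a minimal sketch of how a training example is assembled from these markers. The `format_example` helper and the sample document/query strings are illustrative, not taken from the dataset, and in practice the EOS literal comes from `tokenizer.eos_token`:

```python
# Sketch: assembling one training example from the markers above.
# The document/query strings are made up; use tokenizer.eos_token in practice.
def format_example(document: str, query: str, eos_token: str) -> str:
    return "<|document|>\n" + document + "\n<|query|>\n" + query + eos_token

print(format_example(
    document="Photosynthesis is the process by which plants convert light into chemical energy.",
    query="how do plants turn sunlight into energy",
    eos_token="<|end_of_text|>",  # placeholder; the real value is tokenizer.eos_token
))
```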

## Quick Usage

Below is an example code snippet to load the finetuned model and test it with a chunked document:

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the finetuned model and tokenizer from Hugging Face Hub.
model, tokenizer = FastLanguageModel.from_pretrained("prdev/query-gen", load_in_4bit=True)

# Enable faster inference if supported.
FastLanguageModel.for_inference(model)

# Example document chunk (ensure text is appropriately chunked).
document_chunk = (
    "liberal arts. 1. the academic course of instruction at a college intended to provide general knowledge "
    "and comprising the arts, humanities, natural sciences, and social sciences, as opposed to professional or technical subjects."
)

# Create the prompt using custom markers.
prompt = (
    "<|document|>\n" + document_chunk + "\n<|query|>\n"
)

# Tokenize the prompt.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Set up a TextStreamer to view token-by-token generation.
streamer = TextStreamer(tokenizer, skip_prompt=True)

# Generate a query from the document.
_ = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    streamer=streamer,
    max_new_tokens=100,
    do_sample=True,  # Required for temperature/min_p sampling to take effect.
    temperature=0.7,
    min_p=0.1,
    eos_token_id=tokenizer.eos_token_id,  # Ensures proper termination.
)
```
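
As noted in the Text Chunking section, the model works best on small, coherent pieces of text. Below is a minimal sketch of one possible chunking helper; the sentence-based splitting and the 200-word budget are illustrative assumptions, not values taken from the model's training setup. The loop reuses `model`, `tokenizer`, and `streamer` from the snippet above.

```python
import re

# A simple sentence-aligned chunker. The 200-word budget is an illustrative
# assumption; tune it to your documents.
def chunk_text(text: str, max_words: int = 200) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        words = sentence.split()
        if current and count + len(words) > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if current:
        chunks.append(" ".join(current))
    return chunks

# Generate one query per chunk, reusing model/tokenizer/streamer from above.
long_document = "..."  # any long text you want queries for
for chunk in chunk_text(long_document):
    prompt = "<|document|>\n" + chunk + "\n<|query|>\n"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    _ = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        streamer=streamer,
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        min_p=0.1,
        eos_token_id=tokenizer.eos_token_id,
    )
```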

## Uploaded model

- **Developed by:** prdev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
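
For reference, here is a hedged sketch of the SFT + LoRA training setup described above, following the common Unsloth + TRL pattern. All hyperparameters, the sequence length, the LoRA rank and target modules, and the dataset's `text` column are assumptions, not the values used to train this model.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)

# Attach LoRA adapters; rank, alpha, and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumes the dataset exposes a "text" field already formatted with the
# <|document|> / <|query|> markers; adjust to the actual column names.
dataset = load_dataset("prdev/qtack-gq-embeddings-unsupervised", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=8,  # assumption
        gradient_accumulation_steps=4,  # assumption
        learning_rate=2e-4,             # assumption
        num_train_epochs=1,             # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```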