Backward GPT-2 Model

Overview

A 774M-parameter GPT-2 model fine-tuned for backward generation in Turkish: given an answer, it reconstructs the instruction (and optional input) that could have produced it. Because the model generates token sequences in reverse order, its raw output must be flipped before decoding.

Input Format

### Response:
[answer text in Turkish]
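
A filled prompt for a short answer looks like this (the answer text here is a hypothetical example meaning "Ankara is the capital of Turkey"):

### Response:
Ankara, Türkiye'nin başkentidir.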

Output Format

The model emits the target text token by token in reverse order, so the generated token sequence must be flipped before decoding. After reversal, the text has the form:

### Instruction:
[instruction text in Turkish]
### Input:
[optional input text in Turkish]
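
The reversal happens at the token level, not the character level. A toy illustration with made-up token IDs (the values are hypothetical, not from this model's vocabulary):

import torch

raw = torch.tensor([17, 9, 42])   # generation order: last token of the text comes first
natural = raw.flip(dims=[0])      # tensor([42, 9, 17]): natural reading order, ready to decode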

Generation Parameters

  • Sampling: do_sample=True (without it, temperature/top-p/top-k are ignored and decoding falls back to greedy)
  • Temperature: 1.4
  • Top-p: 0.95
  • Top-k: 20
  • Repetition penalty: 1.5
  • EOS token IDs: [36320, eos_token_id]
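
For reuse across calls, these settings can be bundled into a transformers GenerationConfig instead of being passed as keyword arguments each time. A minimal sketch, assuming the tokenizer has already been loaded as in the example below:

from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.4,
    top_p=0.95,
    top_k=20,
    repetition_penalty=1.5,
    eos_token_id=[36320, tokenizer.eos_token_id],
    pad_token_id=tokenizer.eos_token_id,
)
# Later: outputs = model.generate(**inputs, generation_config=gen_config)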

Example Code

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/backward-cosmos-gpt2-v1", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token; reuse EOS for padding
model = AutoModelForCausalLM.from_pretrained("ytu-ce-cosmos/backward-cosmos-gpt2-v1")

# Turkish answer: "Istanbul is Turkey's most populous city and is famous for its history and cultural richness."
answer = "İstanbul, Türkiye'nin en kalabalık şehridir ve tarihi, kültürel zenginliği ile ünlüdür."
# Build the prompt in the expected input format
prompt = f"\n### Response:\n{answer}"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,               # required for temperature/top-p/top-k to take effect
    temperature=1.4,
    top_p=0.95,
    top_k=20,
    repetition_penalty=1.5,
    eos_token_id=[36320, tokenizer.eos_token_id],  # 36320 serves as an extra stop token
    pad_token_id=tokenizer.eos_token_id
)

# Drop the prompt tokens, then flip the generation back into natural reading order before decoding
generated_tokens = outputs[0][inputs.input_ids.shape[1]:]
reversed_tokens = generated_tokens.flip(dims=[0])
generated_text = tokenizer.decode(reversed_tokens, skip_special_tokens=True)

# Split the recovered text into the instruction and the optional input
parts = generated_text.split("### Input:")
instruction = parts[0].replace("### Instruction:", "").strip()
input_text = parts[1].strip() if len(parts) > 1 else None
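
To inspect the result, the parsed fields can be printed; since sampling is enabled, the recovered instruction will vary from run to run:

print("Instruction:", instruction)
print("Input:", input_text)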