Prompt returned
Hello, the model seems to return the prompt itself rather than the expected response.
Hello there, could you share the prompt you tried?
The model by default will work as a translator unless it detects an explicit instruction.
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("CraneAILabs/ganda-gemma-1b")
tokenizer = AutoTokenizer.from_pretrained("CraneAILabs/ganda-gemma-1b")
# Translate to Luganda
prompt = "Translate to Luganda: Hello, how are you today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, temperature=0.3)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
I got this response:
Translate to Luganda: Hello, how are you today?
@sulaimank this seems a little strange, I can't recreate it on my end. Can you rerun the command a couple of times and let us know if it's consistent?
I actually ran the quick start code from the model card while testing it out, but it still returns the prompt itself on my side.
Here is the quick start code I ran from the model card:
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("CraneAILabs/ganda-gemma-1b")
tokenizer = AutoTokenizer.from_pretrained("CraneAILabs/ganda-gemma-1b")
# Translate to Luganda
prompt = "Translate to Luganda: Hello, how are you today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, temperature=0.3)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
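One thing worth noting about this snippet: model.generate returns the prompt tokens followed by any continuation, so tokenizer.decode(outputs[0], ...) will always start with the echoed prompt. To check whether the model actually produced anything after the prompt, you can decode only the new tokens. A minimal sketch, reusing the inputs, outputs, and tokenizer variables from the code above:

# outputs[0] contains the prompt tokens followed by the newly generated tokens,
# so slice off the prompt before decoding
prompt_length = inputs["input_ids"].shape[-1]
new_tokens = outputs[0][prompt_length:]

# Decode only what the model produced after the prompt
print(tokenizer.decode(new_tokens, skip_special_tokens=True))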
@sulaimank we'd advise you to try the pipeline example for now while we figure this out. It seems to be happening inconsistently across different machines.
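For reference, the pipeline route would look roughly like this. This is a minimal sketch, not necessarily the exact example from the model card, and the chat-message input format assumes a reasonably recent transformers version:

from transformers import pipeline

# Build a text-generation pipeline for the model (downloads weights on first run)
pipe = pipeline("text-generation", model="CraneAILabs/ganda-gemma-1b")

# Chat-style input so the model's chat template is applied automatically
messages = [{"role": "user", "content": "Translate to Luganda: Hello, how are you today?"}]

result = pipe(messages, max_new_tokens=100)

# With chat input, generated_text holds the full conversation;
# the last entry is the model's reply
print(result[0]["generated_text"][-1]["content"])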