---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---

# Model Card for mistral-7b-wiki

mistral-7b-wiki is a Mistral-7B model fine-tuned on Korean data.

## Model Details

* **Model Developers** : shleeeee (Seunghyeon Lee)
* **Repository** : To be added
* **Model Architecture** : mistral-7b-wiki is a fine-tuned version of Mistral-7B-v0.1.
* **LoRA target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj

## Dataset

Korean Custom Dataset

## Prompt template: Mistral

```
[INST]{instruction}[/INST]{output}
```

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer directly
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-7b-wiki")
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-7b-wiki")

# Or use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-7b-wiki")
```

## Evaluation

To be added
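As a sketch of how the prompt template above could be applied before generation, the helper below formats an instruction (and, for training-style examples, an optional target output) into the `[INST] ... [/INST]` form. The function name `build_prompt` and the sample instruction are illustrative, not part of the model card:

```python
def build_prompt(instruction: str, output: str = "") -> str:
    """Wrap an instruction in the Mistral-style template used by this model.

    The optional ``output`` is appended after [/INST], matching the
    training-time format shown in the prompt template section.
    """
    return f"[INST]{instruction}[/INST]{output}"


if __name__ == "__main__":
    # Hypothetical Korean instruction: "Where is the capital of Korea?"
    prompt = build_prompt("한국의 수도는 어디인가요?")
    print(prompt)
```

The resulting string can be passed directly to the tokenizer or pipeline shown in the Usage section.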