---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-7b-wiki

mistral-7b-wiki is a Korean-language fine-tune of the mistral-7b model.

## Model Details

* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : mistral-7b-wiki is a fine-tuned version of Mistral-7B-v0.1.
* **LoRA target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 2
* **Max_step** : 500

## Dataset

Korean custom dataset.

## Prompt template: Mistral

```
[INST]{instruction}[/INST]{output}
```

## Usage

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-7b-wiki")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-7b-wiki")

# Or use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-7b-wiki")
```

## Evaluation

![image/png](https://cdn-uploads.huggingface.co/production/uploads/654495fa893aec5da96e9134/s_Jiv78QB7vM2qBQdDSF1.png)
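Because the model was trained with the Mistral-style instruction template above, prompts at inference time should be wrapped the same way. The sketch below is illustrative: the `format_prompt` helper and the example Korean instruction are not part of the repository, and the generation parameters are assumptions, not values from the training setup.

```python
# Build a prompt following the card's Mistral-style template.
# format_prompt is a hypothetical helper, not part of the repository.
def format_prompt(instruction: str) -> str:
    # The model completes the text after [/INST]
    return f"[INST]{instruction}[/INST]"

prompt = format_prompt("한국의 수도는 어디인가요?")  # "What is the capital of Korea?"
print(prompt)  # [INST]한국의 수도는 어디인가요?[/INST]

# With the pipeline loaded as in the Usage section (downloads the full weights):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="shleeeee/mistral-7b-wiki")
# result = pipe(prompt, max_new_tokens=128, do_sample=True)
# print(result[0]["generated_text"])
```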