---
license: apache-2.0
base_model:
- Writer/palmyra-mini
- Writer/palmyra-mini-thinking-a
- Writer/palmyra-mini-thinking-b
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- coder
- math
---

# **palmyra-mini-thinking-AIO-GGUF**

> The palmyra-mini *[models](https://huggingface.co/Writer)* demonstrate exceptional capabilities in complex reasoning and mathematical problem solving. Their performance is particularly noteworthy on benchmarks that require deep understanding and multi-step thought processes. A key strength of the model is its proficiency in grade-school-level math problems, as evidenced by its score of 0.818 on the gsm8k (strict-match) benchmark. This high score indicates a robust ability to parse and solve word problems, a foundational skill for more advanced quantitative reasoning. This aptitude for mathematics is further confirmed by its strong performance on the MATH500 benchmark, where it also achieved a score of 0.818, underscoring the model's consistent and reliable mathematical capabilities across different problem sets. The model also performs well on the AMC23 benchmark, with a solid score of 0.6. This benchmark, representing problems from the American Mathematics Competitions, highlights the model's ability to tackle challenging, competition-level mathematics.
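
## Quickstart (local GGUF inference)

The quantized GGUF files in this repository can be run locally with llama.cpp-compatible tooling. Below is a minimal sketch using `llama-cpp-python`; the GGUF filename, context size, and sampling settings are placeholders chosen for illustration, not confirmed artifact names, so substitute the actual quant file listed in this repository.

```python
# Minimal usage sketch (assumptions: llama-cpp-python is installed and a GGUF
# quant from this repo has been downloaded locally; the filename below is a
# placeholder, not a confirmed artifact name).
from llama_cpp import Llama

llm = Llama(
    model_path="./palmyra-mini-thinking.Q4_K_M.gguf",  # placeholder path/quant
    n_ctx=4096,  # context window; adjust to the model's supported limit
)

prompt = "Solve step by step: If 3x + 5 = 20, what is x?"
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```

For chat-style use, the "thinking" variants may expect the chat template baked into the GGUF metadata; in that case, prefer `llm.create_chat_completion(messages=[...])` over raw prompting.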