---
base_model:
- nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
pipeline_tag: text-generation
---

Big thanks to ymcki for updating the llama.cpp code to support the 'dummy' layers. If the change hasn't been merged yet, use the llama.cpp branch from this PR: https://github.com/ggml-org/llama.cpp/pull/12843

Note: the imatrix data used for the IQ quants was produced from the Q4 quant!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e6d37e02dee9bcb9d9fa18/g07b9e-9UmPrfsvFBi-So.png)

[](https://devquasar.com)

'Make knowledge free for everyone'

Quantized version of: [nvidia/Llama-3_1-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1)

Buy Me a Coffee at ko-fi.com
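Until that PR is merged, checking out its branch looks roughly like this (a sketch: the local branch name `pr-12843` is arbitrary, and the build flags are the usual llama.cpp CMake defaults, not anything specific to this model):

```shell
# Clone llama.cpp and fetch the PR branch that adds 'dummy' layer support
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
git fetch origin pull/12843/head:pr-12843   # GitHub's pull/<id>/head refspec
git checkout pr-12843

# Standard CMake build (add e.g. -DGGML_CUDA=ON for CUDA GPUs)
cmake -B build
cmake --build build --config Release -j
```

Once the PR lands upstream, a regular clone of the `master` branch should work without the extra fetch/checkout steps.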