n-atlas

#1
by sadiqmw - opened
National Centre for Artificial Intelligence and Robotics org

Please, does anyone know what steps to follow to install the N-ATLaS model on a PC or mobile phone? Thanks.

National Centre for Artificial Intelligence and Robotics org

You can convert the F16 N-ATLaS weights into GGUF format, then run them locally with Ollama.
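
In case it helps, here is a rough sketch of that pipeline in Python (shell steps wrapped in subprocess). The repo id, the llama.cpp checkout path, and the output filenames are placeholders rather than official names, so adjust them to your setup:

```python
import subprocess
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# 1. Download the F16 weights (placeholder repo id -- use the actual N-ATLaS repo).
model_dir = snapshot_download(repo_id="NCAIR/N-ATLaS-8B")

# 2. Convert the Hugging Face checkpoint to an F16 GGUF file
#    (assumes a local llama.cpp checkout at ./llama.cpp).
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", model_dir,
     "--outfile", "n-atlas-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 3. Optionally quantize to Q4_K_M for a smaller memory footprint
#    (llama-quantize is built when you compile llama.cpp).
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize",
     "n-atlas-f16.gguf", "n-atlas-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)

# 4. Register the GGUF with Ollama via a minimal Modelfile, then run it.
with open("Modelfile", "w") as f:
    f.write("FROM ./n-atlas-q4_k_m.gguf\n")
subprocess.run(["ollama", "create", "n-atlas", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "n-atlas", "Hello!"], check=True)
```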

National Centre for Artificial Intelligence and Robotics org

@sadiqmw If you’re looking to run N-ATLaS locally, you essentially have two options depending on your comfort level and compute setup:

  1. Build your own GGUF weights
    The official repo inuwamobarak/N-ATLaS includes a Makefile that streamlines the process. You can simply run the provided targets (make download, make convert, make quantize) to go from the Hugging Face model to quantized GGUF weights optimized for inference on CPU/GPU. This path gives you the most flexibility if you want to customize quantization strategies.

  2. Use pre-quantized weights
    If you’d rather skip the build process, you can pull ready-to-use quantized versions directly from Hugging Face: N-ATLaS-8B-GGUF-Q4_K_M. These are “drop-in” and can be loaded immediately with inference engines like llama.cpp, KoboldCpp, or compatible frontends on both desktop and mobile (see the sketch after this list).
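
As a rough illustration of the plug-and-play route, here is a minimal sketch using the llama-cpp-python bindings; the repo id and the GGUF filename pattern below are assumptions, so point them at the actual N-ATLaS-8B-GGUF-Q4_K_M repository and file:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Download and load the quantized weights straight from the Hub.
# repo_id and filename are placeholders -- replace with the real repo/file.
llm = Llama.from_pretrained(
    repo_id="NCAIR/N-ATLaS-8B-GGUF-Q4_K_M",
    filename="*q4_k_m.gguf",  # glob that matches the quantized weight file
    n_ctx=4096,               # context window; lower it if RAM is tight
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you do?"}]
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF file also works with KoboldCpp or any other llama.cpp-based frontend, so nothing here locks you into Python.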

For PC usage, llama.cpp or KoboldCpp is the most straightforward route. For mobile, llama.cpp-based apps can load the GGUF weights directly; MLC LLM (Android/iOS) is another option, though it uses its own compiled model format rather than GGUF.

So the quick decision tree is: want control? Build via the Makefile. Want plug-and-play? Grab the GGUF release. Happy to see what you build!
