# Apertus-70B-Instruct-2509-GGUF

Static quants of swiss-ai/Apertus-70B-Instruct-2509.

## Quants

| Link | URI | Quant | Size |
|------|-----|-------|------|
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q2_K | Q2_K | 27.3GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q3_K_S | Q3_K_S | 30.8GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q3_K_M | Q3_K_M | 35.5GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q3_K_L | Q3_K_L | 39.6GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_0 | Q4_0 | 40.0GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_K_S | Q4_K_S | 40.4GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_K_M | Q4_K_M | 43.7GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q5_0 | Q5_0 | 48.7GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q5_K_S | Q5_K_S | 48.7GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q5_K_M | Q5_K_M | 50.6GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q6_K | Q6_K | 57.9GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q8_0 | Q8_0 | 75.0GB |
| GGUF | hf:giladgd/Apertus-70B-Instruct-2509-GGUF:F16 | F16 | 141.2GB |
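
As a rough sanity check when picking a quant for your hardware, the file sizes above imply an average bits-per-weight: file size in bytes × 8, divided by the 70.6B parameter count. A minimal sketch (sizes taken from the table above; decimal GB assumed):

```typescript
// Rough average bits-per-weight for a quant, derived from its file size.
// Assumes decimal GB (10^9 bytes) and the model's 70.6B parameter count.
const PARAMS = 70.6e9;

function bitsPerWeight(sizeGB: number): number {
    return (sizeGB * 1e9 * 8) / PARAMS;
}

console.log(bitsPerWeight(141.2).toFixed(2)); // F16    -> "16.00"
console.log(bitsPerWeight(43.7).toFixed(2));  // Q4_K_M -> "4.95"
```

Note that the loaded model also needs extra memory for the KV cache and compute buffers, so the file size is a lower bound on memory use.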

Download a quant using node-llama-cpp (more info):

```shell
npx -y node-llama-cpp pull <URI>
```

## Usage

### Use with node-llama-cpp (recommended)

Ensure you have Node.js installed:

```shell
brew install node
```

#### CLI

Chat with the model:

```shell
npx -y node-llama-cpp chat hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_K_M
```

#### Code

Install the package:

```shell
npm install node-llama-cpp
```

Then use it in your project:

```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";

const modelUri = "hf:giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_K_M";

// download the model if needed, then load it
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);
```

Read the getting started guide to quickly scaffold a new node-llama-cpp project.

### Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

```shell
brew install llama.cpp
```

#### CLI

```shell
llama-cli -hf giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_K_M -p "The meaning of life and the universe is"
```

#### Server

```shell
llama-server -hf giladgd/Apertus-70B-Instruct-2509-GGUF:Q4_K_M -c 2048
```
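
llama-server exposes an OpenAI-compatible HTTP API. As a sketch (assuming the default port 8080; `buildChatRequest` is a hypothetical helper, not part of llama.cpp), a request to its `/v1/chat/completions` endpoint could be assembled like this:

```typescript
// Sketch: build a request body for llama-server's OpenAI-compatible
// /v1/chat/completions endpoint. buildChatRequest is a hypothetical helper.
type ChatMessage = {role: "system" | "user" | "assistant"; content: string};

function buildChatRequest(messages: ChatMessage[], maxTokens = 256) {
    return {
        messages,
        max_tokens: maxTokens
    };
}

// Usage (uncomment once llama-server is running on localhost:8080):
// const res = await fetch("http://localhost:8080/v1/chat/completions", {
//     method: "POST",
//     headers: {"Content-Type": "application/json"},
//     body: JSON.stringify(buildChatRequest([{role: "user", content: "Hi!"}]))
// });
// console.log((await res.json()).choices[0].message.content);
console.log(JSON.stringify(buildChatRequest([{role: "user", content: "Hi!"}])));
```
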
## Model details

- Model size: 70.6B params
- Architecture: apertus
- Quantized from: swiss-ai/Apertus-70B-Instruct-2509