GitHub Repo | Technical Report | Join Us

πŸ‘‹ Contact us in Discord and WeChat

What's New

  • [2025.09.05] The MiniCPM4.1 series is released! These are hybrid reasoning models that can be used in both deep reasoning mode and non-reasoning mode. πŸ”₯πŸ”₯πŸ”₯
  • [2025.06.06] The MiniCPM4 series is released! These models achieve ultimate efficiency improvements while maintaining optimal performance at the same scale, reaching over 5x generation acceleration on typical end-side chips! You can find the technical report here. πŸ”₯πŸ”₯πŸ”₯

MiniCPM4 and MiniCPM4.1 Series

The MiniCPM4 and MiniCPM4.1 series are highly efficient large language models (LLMs) designed explicitly for end-side devices. They achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.

Introduction

MiniCPM4 and MiniCPM4.1 are extremely efficient edge-side large models that have undergone optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.

  • πŸ—οΈ Efficient Model Architecture:

    • InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention architecture in which each token computes relevance against less than 5% of tokens when processing 128K-long contexts, significantly reducing the computational overhead of long texts (see the first sketch after this list)
  • 🧠 Efficient Learning Algorithms:

    • Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces methods that predict downstream-task performance across scales, enabling a more precise search over model training configurations
    • BitCPM -- Ultimate Ternary Quantization: Compresses model weights to three values, an extreme reduction of roughly 90% in parameter bit-width (see the second sketch after this list)
    • Efficient Training Engineering Optimization: Adopts FP8 low-precision computation combined with a multi-token prediction training strategy
  • πŸ“š High-Quality Training Data:

    • UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset UltraFineWeb
    • UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale high-quality supervised fine-tuning datasets covering multiple dimensions including knowledge-intensive data, reasoning-intensive data, instruction-following data, long text understanding data, and tool calling data
  • ⚑ Efficient Inference System:

    • CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding
    • ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities
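To make the sparse-attention idea concrete, here is a minimal NumPy sketch of block-sparse attention in the spirit of InfLLM v2. It is not the actual implementation: the block size, top-k value, and mean-pooled block scoring are illustrative assumptions. Keys are grouped into fixed-size blocks, each query ranks blocks by a representative key, and attention is computed only over the best-scoring blocks.

import numpy as np

def block_sparse_attention(q, K, V, block_size=64, top_k=4):
    """q: (d,); K, V: (n, d). Attend only to the top_k most relevant blocks."""
    n, d = K.shape
    n_blocks = n // block_size
    # Representative (mean-pooled) key per block, used to estimate relevance.
    block_keys = K[:n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    chosen = np.argsort(block_keys @ q)[-top_k:]   # indices of the most relevant blocks
    idx = np.concatenate([np.arange(b * block_size, (b + 1) * block_size) for b in chosen])
    scores = K[idx] @ q / np.sqrt(d)               # dense attention, but only over
    weights = np.exp(scores - scores.max())        # the selected tokens
    return (weights / weights.sum()) @ V[idx]

rng = np.random.default_rng(0)
K, V = rng.normal(size=(4096, 64)), rng.normal(size=(4096, 64))
out = block_sparse_attention(rng.normal(size=64), K, V)  # touches 256 of 4096 tokens (~6%)
print(out.shape)  # (64,)

Each query here touches about 6% of the context; the sub-5% figure quoted above refers to 128K-token contexts.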
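Likewise, here is a minimal sketch of ternary weight quantization in the spirit of BitCPM, using the absmean recipe popularized by BitNet-style methods (BitCPM's exact scheme may differ): each weight matrix is reduced to codes in {-1, 0, +1} plus a single floating-point scale.

import numpy as np

def ternary_quantize(W, eps=1e-8):
    """Map a float weight matrix to int8 codes in {-1, 0, +1} plus one scale."""
    scale = np.abs(W).mean() + eps                # per-tensor absmean scale
    codes = np.clip(np.round(W / scale), -1, 1)   # ternary codes
    return codes.astype(np.int8), scale

def ternary_dequantize(codes, scale):
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(256, 256)).astype(np.float32)
codes, scale = ternary_quantize(W)
err = np.abs(W - ternary_dequantize(codes, scale)).mean()
print(sorted(np.unique(codes)), f"mean abs error: {err:.4f}")

Storing about 1.58 bits per weight instead of 16 is where the roughly 90% bit-width reduction comes from.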

Usage

Prebuilt vLLM

pip install vllm

Inference

import os
import multiprocessing

# Disable the vLLM V1 engine and force the 'spawn' multiprocessing start method.
os.environ['VLLM_USE_V1'] = '0'
multiprocessing.set_start_method('spawn', force=True)

from vllm import LLM, SamplingParams

prompt = "What are some fun places to visit in Beijing?"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=1500)

llm = LLM(model="openbmb/MiniCPM4.1-8B-GPTQ", trust_remote_code=True)
tokenizer = llm.get_tokenizer()
messages = [{"role": "user", "content": prompt}]

# To enable deep reasoning ("think") mode, format the prompt as follows:
formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# To disable thinking mode, pass enable_thinking=False instead:
# formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)

outputs = llm.generate([formatted_prompt], sampling_params)

print("-"*50)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}\nGenerated text: {generated_text!r}")
    print("-"*50)
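Beyond offline generation, vLLM can also serve the model through its OpenAI-compatible HTTP server. The sketch below assumes the server is started with vllm serve and queried with the openai Python client; the model path and sampling values mirror the example above.

# Start the server in a shell first:
#   vllm serve openbmb/MiniCPM4.1-8B-GPTQ --trust-remote-code
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # any key works locally
response = client.chat.completions.create(
    model="openbmb/MiniCPM4.1-8B-GPTQ",
    messages=[{"role": "user", "content": "What are some fun places to visit in Beijing?"}],
    temperature=0.6,
    top_p=0.9,
    max_tokens=1500,
)
print(response.choices[0].message.content)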