Instructions to use spicyneuron/Kimi-K2.6-MLX-3.3bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use spicyneuron/Kimi-K2.6-MLX-3.3bit with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("spicyneuron/Kimi-K2.6-MLX-3.3bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use spicyneuron/Kimi-K2.6-MLX-3.3bit with Pi:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "spicyneuron/Kimi-K2.6-MLX-3.3bit"
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "spicyneuron/Kimi-K2.6-MLX-3.3bit" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use spicyneuron/Kimi-K2.6-MLX-3.3bit with Hermes Agent:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "spicyneuron/Kimi-K2.6-MLX-3.3bit"
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default spicyneuron/Kimi-K2.6-MLX-3.3bit
```
Run Hermes
```bash
hermes
```
- MLX LM
How to use spicyneuron/Kimi-K2.6-MLX-3.3bit with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "spicyneuron/Kimi-K2.6-MLX-3.3bit"
```
Run an OpenAI-compatible server
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "spicyneuron/Kimi-K2.6-MLX-3.3bit"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "spicyneuron/Kimi-K2.6-MLX-3.3bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
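The same endpoint can also be called from Python. A minimal sketch using the OpenAI SDK, assuming `pip install openai` and the port 8000 used in the curl example above:
```python
# Minimal sketch: call the local mlx_lm.server with the OpenAI Python SDK.
# Assumes `pip install openai` and that the server above is listening on
# port 8000; the apiKey value is a placeholder since the server ignores it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="spicyneuron/Kimi-K2.6-MLX-3.3bit",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```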
Kimi K2.6, optimized to run comfortably on a Mac Studio M3 Ultra with 512 GB of memory. This is the smaller, compact version; a larger, quality-first version is published separately.
- A mixed-precision quant that balances speed, memory, and accuracy.
- 3-bit baseline with important layers at 8-bit and BF16.
- Fits in ~430 GB of memory, leaving plenty of room to run a smaller, faster utility model alongside it (e.g., Qwen 3.6 35B, Gemma 4 26B).
- This quant does not support image input.
Usage
```bash
# Start server at http://localhost:8080/v1/chat/completions
# Kimi K2.6 requires tiktoken + remote code for the tokenizer
uvx --from mlx-lm --with tiktoken \
  mlx_lm.server \
  --host 127.0.0.1 \
  --port 8080 \
  --trust-remote-code \
  --model spicyneuron/Kimi-K2.6-MLX-3.3bit
```
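Once the server is up, a quick smoke test from Python. This is a sketch only; it assumes the `requests` package is installed, and any OpenAI-compatible client works the same way:
```python
# Quick smoke test against the local server started above (port 8080).
# Sketch only; assumes `pip install requests`.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "spicyneuron/Kimi-K2.6-MLX-3.3bit",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```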
Benchmarks
| metric | 3.6 bit | 3.3 bit (this model) |
|---|---|---|
| bpw | 3.578 | 3.331 |
| peak memory, GB (1024 prompt / 512 gen) | 460.444 | 428.735 |
| prompt tok/s (1024) | 221.704 ± 0.057 | 223.613 ± 0.098 |
| gen tok/s (512) | 21.095 ± 0.070 | 21.363 ± 0.035 |
| KL mean | 0.022 ± 0.001 | 0.051 ± 0.002 |
| KL p95 | 0.053 ± 0.001 | 0.113 ± 0.002 |
| perplexity | 3.559 ± 0.021 | 3.550 ± 0.020 |
| hellaswag | 0.594 ± 0.022 | 0.590 ± 0.022 |
| piqa | 0.848 ± 0.016 | 0.852 ± 0.016 |
| winogrande | 0.670 ± 0.021 | 0.690 ± 0.021 |
Tested on a Mac Studio M3 Ultra with:
```bash
mlx_lm.kld --baseline-model path/to/mlx-full-precision
mlx_lm.perplexity --sequence-length 512 --seed 123
mlx_lm.benchmark --prompt-tokens 1024 --generation-tokens 512 --num-trials 5
mlx_lm.evaluate --tasks hellaswag --seed 123 --num-shots 0 --limit 500
mlx_lm.evaluate --tasks piqa --seed 123 --num-shots 0 --limit 500
mlx_lm.evaluate --tasks winogrande --seed 123 --num-shots 0 --limit 500
```
Note:
- `mlx_lm.kld` is approximate: it uses `top_k` probabilities rather than the full logits.
- Kimi K2.6 KL divergence was calculated against the largest quant I could run locally (~490 GB), so the real KL is higher.
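For intuition, a top-k KL estimate looks roughly like the sketch below. This is illustrative only, not the actual `mlx_lm.kld` implementation: probability mass outside the baseline's top-k tokens is discarded, which is why the reported numbers are approximate.
```python
# Illustrative sketch of a top-k KL estimate (NOT the mlx_lm.kld code):
# keep only the baseline's top-k token probabilities, renormalize both
# distributions over that set, and average KL over positions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_kl(baseline_logits, quant_logits, k=100):
    p = softmax(baseline_logits)            # (positions, vocab)
    q = softmax(quant_logits)
    idx = np.argsort(-p, axis=-1)[:, :k]    # baseline's top-k token ids
    p_k = np.take_along_axis(p, idx, axis=-1)
    q_k = np.take_along_axis(q, idx, axis=-1)
    p_k /= p_k.sum(axis=-1, keepdims=True)  # renormalize over the top-k set
    q_k /= q_k.sum(axis=-1, keepdims=True)
    return float(np.mean((p_k * np.log(p_k / q_k)).sum(axis=-1)))
```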
Methodology
Quantized with an mlx-lm fork, drawing inspiration from Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX quantization options differ from llama.cpp's, but the principles are the same (a rough sketch follows the list below):
- Sensitive layers like MoE routing, attention, and output embeddings get higher precision
- More tolerant layers like MoE experts get lower precision
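The fork itself isn't reproduced here, but the general shape of such a recipe in stock mlx-lm is a per-layer predicate passed to `mlx_lm.convert`. A sketch, assuming a recent mlx-lm with the `quant_predicate` hook; the layer-name matching and bit widths are illustrative, not the exact recipe used for this model:
```python
# Sketch of a mixed-precision quantization recipe via mlx_lm.convert's
# quant_predicate hook. Layer-name patterns and bit widths below are
# illustrative assumptions, not the recipe used for this model.
from mlx_lm import convert

def mixed_precision(path, module, config):
    # Sensitive layers (MoE routing, embeddings, output head) get 8-bit.
    if any(key in path for key in ("gate", "router", "embed", "lm_head")):
        return {"bits": 8, "group_size": 32}
    # Attention projections also get higher precision.
    if "self_attn" in path:
        return {"bits": 8, "group_size": 64}
    # Everything else (e.g. MoE expert weights) falls back to the 3-bit baseline.
    return True

convert(
    "moonshotai/Kimi-K2.6",
    mlx_path="Kimi-K2.6-MLX-mixed",
    quantize=True,
    q_bits=3,
    q_group_size=64,
    quant_predicate=mixed_precision,
)
```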
Model size: 1T params
Tensor type: BF16 · U32 · F32
Model tree for spicyneuron/Kimi-K2.6-MLX-3.3bit
Base model
moonshotai/Kimi-K2.6