Qwen3 32B - Kimi K2 Thinking Distill

This model was trained on 1,000 high-reasoning examples distilled from Kimi-K2-Thinking.

  • 🧬 Dataset:
    • TeichAI/kimi-k2-thinking-1000x
  • 🏗 Base Model:
    • unsloth/Qwen3-32B
  • ⚡ Use cases (see the inference sketch after this list):
    • Coding
    • Math
    • Chat
    • Deep Research
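
The card does not document a usage recipe, so the following is only a minimal sketch of loading the model with the standard Transformers API. It assumes `transformers` and `accelerate` are installed; the prompt and generation settings are illustrative, not recommended values from the authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # weights are stored in BF16
    device_map="auto",    # requires the accelerate package
)

messages = [{"role": "user", "content": "Explain why the sum of two even integers is even."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning-distilled models tend to produce long outputs; the token budget here is a guess.
output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```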

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
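
The training script and hyperparameters are not published in the card, so the sketch below only illustrates the kind of Unsloth + TRL supervised fine-tuning run described above. The LoRA setup, sequence length, batch sizes, and dataset handling are assumptions, not the authors' actual configuration.

```python
# Hypothetical Unsloth + TRL SFT setup; every hyperparameter here is an assumption.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model through Unsloth (4-bit loading is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Qwen3-32B", max_seq_length=8192, load_in_4bit=True
)
# Attach LoRA adapters; whether the release used LoRA or full fine-tuning is not stated.
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

# The distillation dataset named in the card; its column layout is not documented here,
# and the "train" split name is assumed.
dataset = load_dataset("TeichAI/kimi-k2-thinking-1000x", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions call this argument processing_class
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```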

Model size: 33B params · Tensor type: BF16 · Format: Safetensors

Model tree for TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill:
  • Base model: Qwen/Qwen3-32B → unsloth/Qwen3-32B → this model
  • Quantizations: 2 models

Dataset used to train TeichAI/Qwen3-32B-Kimi-K2-Thinking-Distill: TeichAI/kimi-k2-thinking-1000x