Thien Tran
gaunernst
Gemma 3 QAT INT4 (from Flax)
These are converted from the official QAT INT4 Flax checkpoints on Kaggle. Supported formats: AutoAWQ, GGUF
- gaunernst/gemma-3-1b-it-int4-awq
  Text Generation • Updated • 5.42k • 2
- gaunernst/gemma-3-4b-it-int4-awq
  Image-Text-to-Text • Updated • 56.8k • 6
- gaunernst/gemma-3-12b-it-int4-awq
  Image-Text-to-Text • 12B • Updated • 15.2k • 22
- gaunernst/gemma-3-27b-it-int4-awq
  Image-Text-to-Text • 27B • Updated • 16.6k • 38
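As a rough illustration of why INT4 checkpoints ease deployment, here is a back-of-the-envelope weight-memory estimate. The numbers are arithmetic only and not read from the checkpoints: it assumes 4-bit weights with AWQ-style group quantization (group size 128, one fp16 scale per group) and uses the nominal parameter counts, so real footprints will differ somewhat.

```python
# Back-of-the-envelope weight-memory estimate for INT4 (AWQ-style) checkpoints.
# Assumptions (illustrative): 4-bit packed weights, group size 128,
# one fp16 scale (2 bytes) per group; activations and KV cache not counted.

def int4_weight_gib(n_params: float, group_size: int = 128) -> float:
    """Approximate weight memory in GiB for INT4 group quantization."""
    packed = n_params * 0.5                 # 4 bits = 0.5 bytes per weight
    scales = (n_params / group_size) * 2.0  # one fp16 scale per group
    return (packed + scales) / (1024 ** 3)

for name, params in [("gemma-3-12b", 12e9), ("gemma-3-27b", 27e9)]:
    fp16 = params * 2 / (1024 ** 3)         # 2 bytes per weight in fp16/bf16
    print(f"{name}: fp16 ~{fp16:.1f} GiB -> int4 ~{int4_weight_gib(params):.1f} GiB")
```

Under these assumptions the 27B model's weights drop from roughly 50 GiB in bf16 to about 13 GiB, which is what makes single-GPU serving plausible.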
Face Recognition Models
- gaunernst/vit_small_patch8_gap_112.cosface_ms1mv3
  Image Feature Extraction • Updated • 151 • 2
- gaunernst/vit_tiny_patch8_112.cosface_ms1mv3
  Image Feature Extraction • Updated • 11 • 2
- gaunernst/vit_tiny_patch8_112.arcface_ms1mv3
  Image Feature Extraction • Updated • 237 • 4
- gaunernst/vit_tiny_patch8_112.adaface_ms1mv3
  Image Feature Extraction • Updated • 131 • 2
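Models trained with margin losses such as ArcFace, CosFace, and AdaFace produce fixed-size face embeddings that are typically compared by cosine similarity against a tuned decision threshold. A minimal sketch of that comparison step, using random vectors as stand-ins for real embeddings (the threshold value is illustrative, not calibrated for these checkpoints):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb1: np.ndarray, emb2: np.ndarray, threshold: float = 0.3) -> bool:
    # threshold is illustrative; in practice it is tuned on a verification set
    return cosine_similarity(emb1, emb2) >= threshold

rng = np.random.default_rng(0)
e1 = rng.normal(size=512)                   # stand-in for a model embedding
e2 = e1 + 0.05 * rng.normal(size=512)       # a slightly perturbed embedding
print(cosine_similarity(e1, e2), same_person(e1, e2))
```

Near-duplicate embeddings score close to 1.0, while embeddings of different identities cluster near 0 for well-trained models.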
LLMs 1B - 2B
Smallish LLM pre-training datasets
Llama3-compatible
- nvidia/Llama-3.1-Minitron-4B-Width-Base
  Text Generation • Updated • 1.89k • 193
- nvidia/Llama-3.1-Minitron-4B-Depth-Base
  Text Generation • 5B • Updated • 407 • 21
- meta-llama/Llama-3.1-8B-Instruct
  Text Generation • Updated • 6.51M • 5.5k
- meta-llama/Llama-3.1-8B
  Text Generation • Updated • 1.28M • 2.08k
Gemma 3 QAT INT4 (from GGUF)
Official Gemma 3 QAT GGUF checkpoints converted to AutoAWQ and compressed-tensors formats for ease of deployment
- gaunernst/gemma-3-1b-it-qat-autoawq
  Text Generation • Updated • 69
- gaunernst/gemma-3-4b-it-qat-autoawq
  Image-Text-to-Text • Updated • 2.02k • 2
- gaunernst/gemma-3-12b-it-qat-autoawq
  Image-Text-to-Text • 12B • Updated • 327 • 7
- gaunernst/gemma-3-27b-it-qat-autoawq
  Image-Text-to-Text • 27B • Updated • 885 • 12
Mini BERT models
https://arxiv.org/abs/1908.08962
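The cited paper (arXiv:1908.08962, "Well-Read Students Learn Better") defines a grid of small BERTs from BERT-tiny (L=2, H=128) up to BERT-base (L=12, H=768). A sketch of the approximate parameter count per variant, using the standard BERT architecture (WordPiece vocab 30522, 512 positions, 2 segment types, FFN width 4H); exact published counts may differ slightly from this formula:

```python
# Approximate parameter counts for the BERT variants of arXiv:1908.08962.
# Standard BERT architecture assumed: vocab 30522, 512 positions, 2 segments,
# FFN width 4H, post-layer LayerNorms, and a pooler head.

VOCAB, MAX_POS, SEGMENTS = 30522, 512, 2

def bert_params(layers: int, hidden: int) -> int:
    emb = (VOCAB + MAX_POS + SEGMENTS) * hidden + 2 * hidden   # embeddings + LayerNorm
    attn = 4 * (hidden * hidden + hidden)                      # Q, K, V, output projections
    ffn = 2 * (hidden * 4 * hidden) + 4 * hidden + hidden      # two FFN matrices + biases
    norms = 2 * 2 * hidden                                     # two LayerNorms per layer
    pooler = hidden * hidden + hidden
    return emb + layers * (attn + ffn + norms) + pooler

for name, (l, h) in {"tiny": (2, 128), "mini": (4, 256),
                     "small": (4, 512), "medium": (8, 512),
                     "base": (12, 768)}.items():
    print(f"BERT-{name} (L={l}, H={h}): ~{bert_params(l, h) / 1e6:.1f}M params")
```

This reproduces the familiar sizes: about 4.4M parameters for BERT-tiny and about 110M for BERT-base.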
LLMs < 1B
LLMs 2B - 4B
Llama2-compatible
DeepSeek testing
A collection of MoE+MLA models, serving as testing proxies for DeepSeek-V3/R1