Text Generation • 124k downloads • 577 likes
Text Generation • 0.3B • 103k downloads • 1.01k likes
Image-Text-to-Text • 1.79M downloads • 1.31k likes
Image-Text-to-Text • 4B • 138k downloads • 154 likes
Text Generation • 69.5k downloads • 191 likes
Text Generation • 1.0B • 727k downloads • 921 likes
Image-Text-to-Text • 12B • 20.8k downloads • 89 likes
Image-Text-to-Text • 2.51M downloads • 704 likes
Image-Text-to-Text • 14.8k downloads • 123 likes
Image-Text-to-Text • 27B • 653k downloads • 1.95k likes
Note ^ Transformers-based pre-trained and instruction-tuned models
google/shieldgemma-2-4b-it • Image-Text-to-Text • 4.06k downloads • 159 likes
Note ^ ShieldGemma 2
google/gemma-3-4b-it-qat-q4_0-gguf • Image-Text-to-Text • 4B • 12k downloads • 254 likes
google/gemma-3-4b-pt-qat-q4_0-gguf • Image-Text-to-Text • 4B • 473 downloads • 25 likes
google/gemma-3-1b-it-qat-q4_0-gguf • Text Generation • 1.0B • 1.17k downloads • 124 likes
google/gemma-3-1b-pt-qat-q4_0-gguf • Text Generation • 1.0B • 83 downloads • 14 likes
google/gemma-3-12b-it-qat-q4_0-gguf • Image-Text-to-Text • 12B • 5.55k downloads • 266 likes
google/gemma-3-12b-pt-qat-q4_0-gguf • Image-Text-to-Text • 12B • 47 downloads • 21 likes
google/gemma-3-27b-it-qat-q4_0-gguf • Image-Text-to-Text • 27B • 2.17k downloads • 399 likes
google/gemma-3-27b-pt-qat-q4_0-gguf • Image-Text-to-Text • 27B • 61 downloads • 31 likes
Note ^ GGUF checkpoints for use with llama.cpp and Ollama. We strongly recommend using the IT (instruction-tuned) models.
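A minimal sketch of running one of the GGUF checkpoints above locally, assuming you have Ollama or llama.cpp installed; both tools can pull a GGUF repo directly from Hugging Face by its model id (the 4B IT model is used here as an arbitrary example):

```shell
# Ollama: pull and chat with a Hugging Face GGUF repo by id
ollama run hf.co/google/gemma-3-4b-it-qat-q4_0-gguf

# llama.cpp: the -hf flag downloads the GGUF from the Hub, then runs a prompt
llama-cli -hf google/gemma-3-4b-it-qat-q4_0-gguf -p "Explain QAT in one sentence."
```

The larger 12B and 27B GGUFs work the same way; only the repo id changes.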
google/gemma-3-270m-qat-q4_0-unquantized • Text Generation • 0.3B • 96 downloads • 9 likes
google/gemma-3-270m-it-qat-q4_0-unquantized • Text Generation • 0.3B • 188 downloads • 13 likes
google/gemma-3-4b-it-qat-q4_0-unquantized • Image-Text-to-Text • 4B • 630 downloads • 11 likes
google/gemma-3-27b-it-qat-q4_0-unquantized • Image-Text-to-Text • 22.8k downloads • 41 likes
google/gemma-3-12b-it-qat-q4_0-unquantized • Image-Text-to-Text • 45.1k downloads • 85 likes
google/gemma-3-1b-it-qat-q4_0-unquantized • Text Generation • 1.0B • 363 downloads • 11 likes
google/gemma-3-4b-it-qat-int4-unquantized • Image-Text-to-Text • 136 downloads • 10 likes
google/gemma-3-12b-it-qat-int4-unquantized • Image-Text-to-Text • 12B • 733 downloads • 12 likes
google/gemma-3-1b-it-qat-int4-unquantized • Text Generation • 1.0B • 221 downloads • 14 likes
Note ^ Unquantized checkpoints from quantization-aware training (QAT); they can be quantized at load time while retaining quality close to half precision.
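One hedged sketch of what "quantizing at load time" can look like with Transformers, assuming the `transformers`, `torch`, and `bitsandbytes` packages are installed and access to the gated repo has been granted; the 4-bit settings shown are illustrative, not the only valid choice:

```python
# Sketch: load an unquantized QAT checkpoint and quantize it to 4-bit on the fly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Model id taken from the list above (assumption: you have accepted the license).
model_id = "google/gemma-3-1b-it-qat-q4_0-unquantized"

# 4-bit quantization config applied while loading the half-precision weights.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```

Because the weights were trained with quantization in the loop, the quantized model should degrade less than quantizing an ordinary half-precision checkpoint the same way.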