Active filters: devops

Hugging Face models matching the "devops" tag. Each entry lists repo, task, parameter count, downloads, and likes where the source captured them; update dates and a few model names were not captured, and the two trailing counts are read as downloads then likes, in that order.

- mratsim/Minimax-M2.5-BF16-INT4-AWQ • Text Generation • 39B • 8 downloads
- mratsim/MiniMax-M2.1-FP8-INT4-AWQ • Text Generation • 5.69k downloads • 39 likes
- (model name not captured) • Text Generation • 64 downloads • 3 likes
- mratsim/MiniMax-M2.1-BF16-INT4-AWQ • Text Generation • 39B • 4.64k downloads • 7 likes
- (model name not captured) • Text Generation • 8B • 168k downloads • 103 likes
- mradermacher/WhiteRabbitNeo-V3-7B-i1-GGUF • 8B • 274 downloads • 6 likes
- mradermacher/DeepHat-V1-7B-GGUF • 8B • 1.09k downloads • 12 likes
- Ennon/Gemma-2-9B-PL-DevOps-Instruct • 9B • 77 downloads • 1 like
- mradermacher/Gemma-2-9B-PL-DevOps-Instruct-GGUF • 9B • 641 downloads • 1 like
- mradermacher/Gemma-2-9B-PL-DevOps-Instruct-i1-GGUF • 9B • 1.78k downloads • 1 like
- mradermacher/TerminGen-32B-i1-GGUF • 33B • 4.67k downloads • 1 like
- Phpcool/DeepSeek-R1-Distill-SRE-Qwen-32B-INT8 • Text Generation • 33B • 5 downloads
- lakhera2023/mini-devops-7B • Text Generation • 1 download
- mradermacher/WhiteRabbitNeo-V3-7B-GGUF • 8B • 149 downloads • 3 likes
- Hadisur/WhiteRabbitNeo-V3-7B • Text Generation • 8B • 3 downloads
- (model name not captured) • 1B • 4 downloads
- (model name not captured) • 4B • 16 downloads • 1 like
- AMaslovskyi/qwen-devops-foundation-lora • Text Generation • 30 downloads • 2 likes
- mendrika261/DeepHat-V1-7B-GGUF • Text Generation • 8B • 22 downloads
- jmainformatique/DeepHat-V1-7B-Q4_K_M-GGUF • Text Generation • 8B • 9 downloads
- hobaratio/WhiteRabbitNeo-V3-7B-mlx-8Bit • Text Generation • 8B • 57 downloads
- VISHNUDHAT/DeepHat-V1-7B-Q4_K_M-GGUF • Text Generation • 8B • 55 downloads
- anysecret-io/anysecret-assistant • Text Generation • 13B • 3 downloads
- SoarAILabs/KiteResolve-20B • Text Generation • 21B • 12 downloads • 2 likes
- mradermacher/KiteResolve-20B-GGUF • 21B • 35 downloads
- mradermacher/KiteResolve-20B-i1-GGUF • 21B • 153 downloads
- lakhera2023/devops-slm-v1 • Text Generation • 6 downloads
- (model name not captured) • Text Generation • 0.5B • 4 downloads • 1 like
- aciklab/kubernetes-ai-lora • 2 downloads • 5 likes
- aciklab/kubernetes-ai-GGUF • 12B • 233 downloads • 3 likes
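The download counts in this listing use the Hub's abbreviated k-suffix format (e.g. 5.69k, 168k) mixed with plain integers, which does not sort correctly as text. A minimal sketch of a normalizer for ranking entries by downloads; `parse_count` and the small sample `listing` are illustrative helpers, not part of any Hugging Face library:

```python
def parse_count(s: str) -> int:
    """Convert a Hub-style abbreviated count such as '5.69k' to an integer.

    Plain integers ('274') pass through; 'k' and 'm' suffixes are expanded.
    """
    s = s.strip().lower()
    multipliers = {"k": 1_000, "m": 1_000_000}
    if s and s[-1] in multipliers:
        # round() avoids float truncation (e.g. 4.64 * 1000 -> 4639.999...)
        return round(float(s[:-1]) * multipliers[s[-1]])
    return int(s)

# A few (repo, downloads) pairs taken from the listing above,
# ranked by normalized download count.
listing = [
    ("mradermacher/WhiteRabbitNeo-V3-7B-GGUF", "149"),
    ("mratsim/MiniMax-M2.1-FP8-INT4-AWQ", "5.69k"),
    ("mradermacher/DeepHat-V1-7B-GGUF", "1.09k"),
]
ranked = sorted(listing, key=lambda pair: parse_count(pair[1]), reverse=True)
```

Rounding rather than truncating keeps values like "4.64k" at 4640 despite binary floating-point representation error.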