This model was converted to MLX format from DataPilot/ArrowCanaria-Llama-8B-RL-v0.1 using mlx-lm version 0.31.0. Refer to the original model card for more details on the model.
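The converted weights can be run with the `mlx-lm` Python package on Apple silicon. A minimal usage sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and using the prompt from the sample below:

```python
# Minimal mlx-lm usage sketch (requires Apple silicon and `pip install mlx-lm`).
from mlx_lm import load, generate

# Downloads the 4-bit MLX weights from the Hub (if not cached) and loads them.
model, tokenizer = load("mlx-community/ArrowCanaria-Llama-8B-RL-v0.1-MLX-4bit")

# Apply the chat template so the prompt matches the model's expected format.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "おはよう"}],
    add_generation_prompt=True,
    tokenize=False,
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```

The same model can also be loaded directly in LM Studio, as in the benchmark below.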

Inference benchmark: M3 Ultra, LM Studio MLX v1.4

4-bit (`arrowcanaria-llama-8b-rl-v0.1-mlx@4bit`): 120.84 tok/s

Sample prompt: 「おはよう」 ("Good morning")

Sample response: 今日も一日、素晴らしいことがありますように！😊 何かお手伝いできることはありますか？ ("Wishing you a wonderful day today! 😊 Is there anything I can help you with?")

Downloads last month: 86
Format: Safetensors (MLX)
Model size: 1B params
Tensor types: BF16 · U32
Model tree for mlx-community/ArrowCanaria-Llama-8B-RL-v0.1-MLX-4bit: quantized from DataPilot/ArrowCanaria-Llama-8B-RL-v0.1.