This is the repository card of kernels-community/quantization-gptq, pushed to the Hub. It was built to be used with the kernels library. This card was automatically generated.
How to use
```python
# Make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/quantization-gptq")
gemm_int4_forward = kernel_module.gemm_int4_forward
gemm_int4_forward(...)
```
Available functions
gemm_int4_forward
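The kernel exposes only `gemm_int4_forward`. As background on the kind of data an int4 GEMM operates on, the sketch below (hypothetical helper names, not part of this kernel's API) illustrates how GPTQ-style quantization commonly packs eight 4-bit weights into a single 32-bit word:

```python
# Illustrative sketch only: these helpers are NOT provided by the kernel.
# GPTQ-style int4 quantization typically stores eight 4-bit weights per
# 32-bit integer; an int4 GEMM kernel unpacks this layout on the fly.

def pack_int4(values):
    """Pack a list of 4-bit unsigned ints (0..15) into 32-bit words."""
    assert len(values) % 8 == 0
    words = []
    for i in range(0, len(values), 8):
        word = 0
        for j, v in enumerate(values[i:i + 8]):
            assert 0 <= v < 16
            word |= v << (4 * j)  # value j occupies bits 4j..4j+3
        words.append(word)
    return words

def unpack_int4(words):
    """Inverse of pack_int4: recover the 4-bit values from 32-bit words."""
    values = []
    for word in words:
        for j in range(8):
            values.append((word >> (4 * j)) & 0xF)
    return values

weights = [3, 15, 0, 7, 9, 1, 12, 5]
packed = pack_int4(weights)
assert unpack_int4(packed) == weights  # round-trip is lossless
```

The actual kernel operates on device tensors; consult the repository's source for the exact arguments `gemm_int4_forward` expects.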
Benchmarks
No benchmark available yet.
Downloads last month: 3,346
Inference Providers
This model isn't deployed by any Inference Provider.