# SQL-Genie (LLaMA-3.1-8B Fine-Tuned)

## 🧠 Model Overview
SQL-Genie is a fine-tuned version of LLaMA-3.1-8B, specialized for converting natural language questions into SQL queries.
The model was trained using parameter-efficient fine-tuning (LoRA) on a structured SQL instruction dataset, enabling strong SQL generation performance while remaining lightweight and affordable to train on limited compute (Google Colab).
- Developed by: dhashu
- Base model: `unsloth/meta-llama-3.1-8b-bnb-4bit`
- License: Apache-2.0
- Training stack: Unsloth + Hugging Face TRL
## ⚙️ Training Methodology
This model was trained using LoRA (Low-Rank Adaptation) via the PEFT framework.
### Key Details

- Base model loaded in 4-bit quantization for memory efficiency
- Base weights frozen
- LoRA adapters applied to:
  - Attention layers (`q_proj`, `k_proj`, `v_proj`, `o_proj`)
  - Feed-forward layers (`gate_proj`, `up_proj`, `down_proj`)
- Fine-tuned using Supervised Fine-Tuning (SFT)
This approach allows efficient specialization without full model retraining.
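The steps above amount to training a small low-rank update alongside a frozen base weight. A minimal NumPy sketch of the LoRA parameterization (dimensions and scaling are illustrative, not the model's real sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16                   # hidden size, LoRA rank, scaling alpha
W = rng.standard_normal((d, d))          # frozen base weight (never updated)
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
# With B initialized to zero, the adapted layer matches the base layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` receive gradients, which is why the method fits on limited compute: the trainable parameter count scales with the rank `r`, not with `d * d`.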
## 📊 Dataset
The model was trained on a subset of the b-mc2/sql-create-context dataset, which includes:
- Natural language questions
- Database schema / context
- Corresponding SQL queries
Each sample was formatted as an instruction-style prompt to improve reasoning and structured output.
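A formatting helper in the spirit described above, using the same template that appears in the inference example later in this card (the exact wording of the training template is an assumption):

```python
def format_sample(question: str, context: str, sql: str = "") -> str:
    """Render one dataset row as an instruction-style prompt.

    At training time `sql` holds the target query; at inference time it is
    left empty so the model completes the "### SQL Response:" section.
    """
    return (
        "Below is an input question, context is given to help. Generate a SQL response.\n"
        f"### Input: {question}\n"
        f"### Context: {context}\n"
        f"### SQL Response:\n{sql}"
    )

prompt = format_sample(
    "List all employees hired after 2020",
    "CREATE TABLE employees(id, name, hire_date)",
)
```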
## 🚀 Performance & Efficiency

- 🚀 2× faster fine-tuning using Unsloth
- 💾 Low VRAM usage via 4-bit quantization
- 🧠 Improved SQL syntax and schema understanding
- ⚡ Suitable for real-time inference and lightweight deployments
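The VRAM savings come from storing weights in 4 bits instead of 16. A toy symmetric absmax sketch of the idea (the actual bitsandbytes scheme is blockwise NF4, which is more accurate than this):

```python
import numpy as np

def quantize_4bit(w):
    # Symmetric absmax quantization to 4-bit signed integers in [-7, 7].
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(64).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)

# Each weight is reconstructed to within half a quantization step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

A 4-bit code plus one float scale per block cuts weight memory roughly 4× versus fp16, at the cost of the bounded rounding error shown above.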
## 🧩 Model Variants
This repository contains a merged model:
### 🔹 Merged 4-bit Model
- LoRA adapters merged into base weights
- No PEFT required at inference time
- Ready-to-use single checkpoint
- Optimized for easy deployment
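Merging is a one-time, offline matrix addition: the low-rank product is folded into the base weight, after which the adapter is no longer needed. A NumPy sketch, assuming the standard LoRA parameterization (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, alpha = 8, 2, 16
W = rng.standard_normal((d, d))   # base weight
A = rng.standard_normal((r, d))   # trained LoRA down-projection
B = rng.standard_normal((d, r))   # trained LoRA up-projection

# Fold the scaled low-rank update into the base weight once.
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal((3, d))
adapter_out = x @ W.T + (alpha / r) * (x @ A.T @ B.T)
merged_out = x @ W_merged.T

# The merged checkpoint reproduces the adapted outputs exactly,
# with no PEFT machinery at inference time.
assert np.allclose(adapter_out, merged_out)
```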
## ▶️ How to Use (Inference)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dhashu/sql-genie-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires the bitsandbytes package
)

prompt = """Below is an input question, context is given to help. Generate a SQL response.
### Input: List all employees hired after 2020
### Context: CREATE TABLE employees(id, name, hire_date)
### SQL Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,     # sampling must be enabled for temperature to apply
    temperature=0.7,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
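Because causal LMs echo the prompt, the decoded text contains the template plus the completion. A small post-processing helper to pull out just the SQL (the marker matches this card's prompt template; the stop-at-`###` heuristic is an assumption):

```python
def extract_sql(generated: str) -> str:
    # Take only what follows the last "### SQL Response:" marker,
    # and stop at any further "###" section the model may emit.
    tail = generated.split("### SQL Response:")[-1]
    return tail.split("###")[0].strip()

text = (
    "### Input: List all employees hired after 2020\n"
    "### Context: CREATE TABLE employees(id, name, hire_date)\n"
    "### SQL Response:\nSELECT name FROM employees WHERE hire_date > 2020"
)
# → "SELECT name FROM employees WHERE hire_date > 2020"
print(extract_sql(text))
```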