Dataset: DannyAI/African-History-QA-Dataset
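The training data can be loaded directly with the datasets library (a minimal sketch; the split names are taken from the axolotl config further down):
from datasets import load_dataset

# Load the African history QA dataset used for fine-tuning
ds = load_dataset("DannyAI/African-History-QA-Dataset")
print(ds)  # shows the available splits (train/validation per the config below)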
How to use DannyAI/phi4_lora_axolotl with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-mini-instruct")
model = PeftModel.from_pretrained(base_model, "DannyAI/phi4_lora_axolotl")
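For standalone deployment, the adapter can optionally be folded into the base weights (a minimal sketch using PEFT's merge_and_unload; the output path is illustrative):
# Merge the LoRA deltas into the base weights and drop the adapter wrappers
merged = model.merge_and_unload()
merged.save_pretrained("./phi4-african-history-merged")  # example path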
How to use DannyAI/phi4_lora_axolotl with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="DannyAI/phi4_lora_axolotl", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("DannyAI/phi4_lora_axolotl", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DannyAI/phi4_lora_axolotl", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
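To stream tokens to stdout as they are generated instead of waiting for the full completion, Transformers' TextStreamer can be passed to generate (an optional sketch, not part of the original card):
from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, max_new_tokens=40, streamer=streamer)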
How to use DannyAI/phi4_lora_axolotl with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "DannyAI/phi4_lora_axolotl"
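Note: this repo holds LoRA adapter weights rather than full model weights. If serving the adapter ID directly fails, a hedged alternative is vLLM's LoRA support (this sketch assumes the --enable-lora and --lora-modules flags; the phi4_lora adapter name is arbitrary):
vllm serve "microsoft/Phi-4-mini-instruct" \
  --enable-lora \
  --lora-modules phi4_lora=DannyAI/phi4_lora_axolotl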
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DannyAI/phi4_lora_axolotl",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
}'
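The same endpoint can be called from Python with the OpenAI client (a minimal sketch; assumes the openai package is installed and the server above is running):
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the api_key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="DannyAI/phi4_lora_axolotl",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)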
How to use DannyAI/phi4_lora_axolotl with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "DannyAI/phi4_lora_axolotl" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DannyAI/phi4_lora_axolotl",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
}'
# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "DannyAI/phi4_lora_axolotl" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DannyAI/phi4_lora_axolotl",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
}'
How to use DannyAI/phi4_lora_axolotl with Docker Model Runner:
docker model run hf.co/DannyAI/phi4_lora_axolotl
axolotl version: 0.14.0.dev0
base_model: microsoft/Phi-4-mini-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# 1. Dataset Configuration
datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: train
    type: alpaca_chat.load_qa
    system_prompt: "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked"
test_datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: validation
    type: alpaca_chat.load_qa
    system_prompt: "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked"
# 2. Output & Chat Configuration
output_dir: ./phi4_african_history_lora_out
chat_template: tokenizer_default
train_on_inputs: false
# 3. Batch Size Configuration
micro_batch_size: 2
gradient_accumulation_steps: 4
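# effective batch size per device = micro_batch_size x gradient_accumulation_steps = 2 x 4 = 8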
# 4. LoRA Configuration
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: [q_proj, v_proj, k_proj, o_proj]
# 5. Hardware & Efficiency
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
bf16: true
fp16: false
# 6. Training Duration & Optimizer
max_steps: 650
# num_epochs removed in favour of max_steps
warmup_steps: 20
learning_rate: 0.00002
optimizer: adamw_torch
lr_scheduler: cosine
# 7. Logging & Evaluation
wandb_project: phi4_african_history
wandb_name: phi4_lora_axolotl
eval_strategy: steps
eval_steps: 50
save_strategy: steps
save_steps: 100
logging_steps: 5
# 8. Public Hugging Face Hub Upload
hub_model_id: DannyAI/phi4_lora_axolotl
push_adapter_to_hub: true
hub_private_repo: false
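With this config saved locally, training can be launched via the axolotl CLI (a sketch; the config filename is illustrative):
# Tokenise and cache the dataset, then fine-tune with the config above
axolotl preprocess phi4_lora.yml
axolotl train phi4_lora.yml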
This is a LoRA fine-tuned version of microsoft/Phi-4-mini-instruct for African history question answering, trained on the DannyAI/African-History-QA-Dataset dataset. It achieves a validation loss of 1.7479.
The model is intended for question answering about African history.
It will also respond to questions outside African history, but such use is out of scope and not recommended.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel

model_id = "microsoft/Phi-4-mini-instruct"
tokeniser = AutoTokenizer.from_pretrained(model_id)
# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=False,
)
# Load the fine-tuned LoRA adapter on top of the base model
lora_id = "DannyAI/phi4_lora_axolotl"
lora_model = PeftModel.from_pretrained(model, lora_id)
generator = pipeline(
    "text-generation",
    model=lora_model,
    tokenizer=tokeniser,
)
question = "What is the significance of African feminist scholarly activism in contemporary resistance movements?"
def generate_answer(question: str) -> str:
    """Generate an answer for the given question using the fine-tuned LoRA model."""
    messages = [
        {"role": "system", "content": "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked."},
        {"role": "user", "content": question},
    ]
    output = generator(
        messages,
        max_new_tokens=2048,
        do_sample=False,  # greedy decoding; temperature has no effect when sampling is off
        return_full_text=False,
    )
    return output[0]["generated_text"].strip()
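A call to the helper above (illustrative, not from the original card):
print(generate_answer(question))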
# Example output
African feminist scholarly activism is significant in contemporary resistance movements as it provides a critical framework for understanding and addressing the specific challenges faced by African women in the context of global capitalism, neocolonialism, and patriarchal structures.
| Training Loss | Epoch | Step | Validation Loss | Perplexity | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 2.1184 | 8.3175 | 14.82 | 14.82 | 15.37 |
| 5.394 | 3.8627 | 50 | 2.1004 | 8.1694 | 14.84 | 14.84 | 31.82 |
| 4.4484 | 7.7059 | 100 | 2.0367 | 7.6652 | 14.84 | 14.84 | 31.84 |
| 3.7583 | 11.5490 | 150 | 1.9785 | 7.2316 | 14.84 | 14.84 | 31.84 |
| 3.363 | 15.3922 | 200 | 1.9299 | 6.8886 | 14.84 | 14.84 | 31.84 |
| 3.0568 | 19.2353 | 250 | 1.8664 | 6.4652 | 14.84 | 14.84 | 31.84 |
| 2.8736 | 23.0784 | 300 | 1.8134 | 6.1314 | 14.84 | 14.84 | 31.79 |
| 2.7646 | 26.9412 | 350 | 1.7851 | 5.9604 | 14.84 | 14.84 | 31.79 |
| 2.6891 | 30.7843 | 400 | 1.7668 | 5.8523 | 14.84 | 14.84 | 31.79 |
| 2.6843 | 34.6275 | 450 | 1.7581 | 5.8014 | 14.84 | 14.84 | 31.79 |
| 2.6048 | 38.4706 | 500 | 1.7534 | 5.7739 | 14.84 | 14.84 | 31.79 |
| 2.6118 | 42.3137 | 550 | 1.7505 | 5.7573 | 14.84 | 14.84 | 31.79 |
| 2.6024 | 46.1569 | 600 | 1.7503 | 5.7565 | 14.84 | 14.84 | 31.79 |
| 2.5727 | 50.0 | 650 | 1.7479 | 5.7428 | 14.84 | 14.84 | 31.79 |
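The perplexity column is the exponential of the validation loss; at the final step, exp(1.7479) ≈ 5.743, matching the reported 5.7428 up to rounding.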
Benchmark comparison between the base model and the fine-tuned model:
| Model | BERTScore | TinyMMLU | TinyTruthfulQA |
|---|---|---|---|
| Base model | 0.88868 | 0.6837 | 0.49745 |
| Fine-tuned model | 0.88981 | 0.67371 | 0.46626 |
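As a sketch of how such scores can be reproduced (assumes the evaluate and bert_score packages; the prediction/reference strings are placeholders, not from the card's evaluation):
import evaluate

# BERTScore compares model answers against reference answers
bertscore = evaluate.load("bertscore")
results = bertscore.compute(
    predictions=["Accra is the capital of Ghana."],  # model output (placeholder)
    references=["The capital of Ghana is Accra."],   # gold answer (placeholder)
    lang="en",
)
print(results["f1"])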
Training was run on a Runpod A40 GPU instance.
If you use this model, please cite:
@misc{Ihenacho2026phi4_lora_axolotl,
  author    = {Daniel Ihenacho},
  title     = {phi4_lora_axolotl},
  year      = {2026},
  publisher = {Hugging Face Models},
  url       = {https://huggingface.co/DannyAI/phi4_lora_axolotl},
  urldate   = {2026-01-27},
}
Author: Daniel Ihenacho
Base model: microsoft/Phi-4-mini-instruct