BibleAI 7B (GGUF)

A Bible study assistant fine-tuned from Qwen 2.5 7B Instruct. Trained to quote scripture exactly from the Berean Standard Bible, handle Hebrew and Greek exegesis with Strong's numbers, and present Protestant, Catholic, and Orthodox perspectives without picking sides.

Built by Rhema.

What's in this repo

File                     Quant    Size      Notes
bibleai-7b-q5_k_m.gguf   Q5_K_M   ~5.4 GB   Best quality-to-size ratio for most hardware

How to run it

Ollama

Create a Modelfile:

FROM ./bibleai-7b-q5_k_m.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""

PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_ctx 4096

Then:

ollama create bibleai -f Modelfile
ollama run bibleai

LM Studio

  1. Download the GGUF file from this repo
  2. Open LM Studio, go to My Models, click Import and select the file
  3. Load the model, then paste the system prompt from the System prompt section below into the System Prompt field
  4. Set temperature to 0.3 and context length to 4096

llama.cpp

./llama-cli -m bibleai-7b-q5_k_m.gguf -e -p "<|im_start|>user\nWhat does Romans 8:28 say and how have different traditions interpreted it?<|im_end|>\n<|im_start|>assistant\n" --temp 0.3 --ctx-size 4096

(The -e flag tells llama-cli to process the \n escape sequences in the prompt.)
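The prompt string above follows Qwen's ChatML layout, which is what the Modelfile template renders. If you are scripting prompts yourself, a small helper like this (illustrative, not part of the repo) builds the same string:

```python
from typing import Optional

# Illustrative helper: builds a ChatML-formatted prompt string of the
# kind passed to llama-cli's -p flag. Not shipped with this repo.
def chatml_prompt(user_msg: str, system_msg: Optional[str] = None) -> str:
    parts = []
    if system_msg:
        parts.append(f"<|im_start|>system\n{system_msg}<|im_end|>\n")
    parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # model continues from here
    return "".join(parts)

print(chatml_prompt("What does Romans 8:28 say?"))
```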

Training

Two-stage pipeline on the Qwen 2.5 7B Instruct base:

Stage 1: continued pretraining. Domain adaptation on public-domain theological corpora: Calvin's commentaries, Barnes' Notes, the Pulpit Commentary, Keil and Delitzsch, plus creeds and confessions (Nicene, Apostles', Westminster, Augsburg, Council of Trent decrees). LoRA rank 32, learning rate 2e-5, max sequence length 4096.

Stage 2: QLoRA instruction tuning. 50,000+ supervised examples from 24 synthetic data generators covering verse lookup, passage exposition, Hebrew/Greek exegesis, cross-references, doctrinal Q&A, patristic readings, creedal analysis, and multi-tradition theological comparison. LoRA rank 64, learning rate 1e-4, 3 epochs.

All training data is grounded in the Berean Standard Bible with Hebrew morphology (Westminster Leningrad Codex), Greek lexicon data, and Strong's numbers across 31,102 verses.

Trained on RunPod (A100 80GB). Adapters merged back to base, then exported to GGUF via Unsloth.
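For orientation, the Stage 2 numbers above would translate to roughly the following in an Axolotl-style QLoRA config. This is a sketch reconstructed from the hyperparameters listed in this card, not the actual training file, and the alpha value is an assumption:

```
# Sketch only; field names follow common Axolotl conventions.
base_model: Qwen/Qwen2.5-7B-Instruct
adapter: qlora
load_in_4bit: true
lora_r: 64
lora_alpha: 128      # assumption: 2x rank, a common default
learning_rate: 1.0e-4
num_epochs: 3
sequence_len: 4096
```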

System prompt

The model was trained with this system prompt and works best when you include it:

You are BibleAI, a scholarly Bible study assistant grounded in the Berean Standard Bible (BSB).

- Quote BSB text exactly using "Book Chapter:Verse (BSB)" format
- Reference Greek and Hebrew terms with transliteration and Strong's numbers
- On debated topics (predestination, baptism, end times), present Protestant, Catholic, and Orthodox perspectives fairly
- Attribute interpretive claims to specific scholars, church fathers, confessions, or traditions
- Only answer Bible, theology, church history, and faith questions
- Never fabricate references or verses

Intended use

Bible study, theological research, scripture lookup, exegesis. The model stays within theology and won't answer off-topic questions.

It does not replace pastoral care. For personal spiritual matters, talk to a pastor.

Limitations

  • Trained on BSB; may be less accurate with other translations
  • Smaller model (7B) means it can miss nuance on complex systematic theology questions
  • Synthetic training data generated via Claude API, so some examples may reflect that model's tendencies
  • Not evaluated on formal benchmarks yet

License

Apache 2.0, matching the Qwen 2.5 base model license.
