# Doc-to-LoRA: NIAH Proof of Concept

A 144M-parameter Perceiver hypernetwork trained on Qwen/Qwen3-1.7B. It reads a document once, emits LoRA weight deltas, and lets the base LLM answer questions about that document without it ever appearing in the context window.

Based on Doc-to-LoRA (Charakorn et al., 2026).
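As a minimal sketch of the mechanism, the standard LoRA update applies a low-rank delta to a frozen base weight. This is illustrative only, not the repo's actual code: the dimensions below are placeholders, and `A`/`B` stand in for the factors the hypernetwork would emit.

```python
import torch

# Placeholder dimensions (NOT Qwen3-1.7B's real layer sizes) and the
# rank/alpha reported in the Results table below.
d_in, d_out, rank, alpha = 64, 32, 8, 8.0

W = torch.randn(d_out, d_in)        # frozen base weight (e.g. a down_proj matrix)
A = torch.randn(rank, d_in) * 0.01  # LoRA "down" factor (hypernetwork output)
B = torch.zeros(d_out, rank)        # LoRA "up" factor (hypernetwork output)

# Effective weight: W' = W + (alpha / rank) * B @ A
W_eff = W + (alpha / rank) * (B @ A)
assert W_eff.shape == W.shape
```

With `B` initialized to zero the delta vanishes, so the adapted model starts out identical to the base model; the hypernetwork's outputs then steer it per document.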

(Training loss and accuracy curves: see curves.png.)

## Results

| Metric | Value |
|---|---|
| Base model | Qwen/Qwen3-1.7B |
| Perceiver params | 144M |
| LoRA rank / alpha | 8 / 8.0 |
| Target module | `down_proj` |
| Training steps | 8,000 |
| Final CE loss | 0.2165 |
| Exact-match accuracy (NIAH) | 80.0% |
| Training ctx length | 32–256 tokens |
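For scale, a back-of-envelope count of the LoRA parameters the hypernetwork must produce. The layer sizes below are assumptions about Qwen3-1.7B (hidden 2048, intermediate 6144, 28 layers); check the model's `config.json` for the exact values.

```python
# Rank-8 LoRA on down_proj (maps intermediate -> hidden).
# A is (rank x intermediate), B is (hidden x rank).
hidden, intermediate, rank, layers = 2048, 6144, 8, 28  # assumed sizes

per_layer = rank * (hidden + intermediate)
total = per_layer * layers
print(per_layer, total)  # prints: 65536 1835008
```

Under these assumptions the hypernetwork emits roughly 1.8M delta parameters per document, a small fraction of its own 144M weights.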

## Files

| File | Description |
|---|---|
| `hypernet.pt` | Perceiver weights + full config to rebuild the class |
| `inference_example.py` | Self-contained script (download and run) |
| `training_config.json` | Training hyperparameters |
| `curves.png` | Loss and accuracy curves |

## Quick start

```bash
pip install "transformers>=4.51.0" huggingface_hub torch
```

```python
from huggingface_hub import hf_hub_download
import torch

ckpt = torch.load(
    hf_hub_download("farpluto/doc-to-lora-niah", "hypernet.pt"),
    map_location="cuda",
    weights_only=False,
)
# See inference_example.py for the complete working script.
```

## Qwen3 note

Chain-of-thought reasoning is suppressed by appending `/no_think` to every query, and any residual `<think>` tokens are stripped from the generated output. Both steps are harmless no-ops on non-Qwen3 models.
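A small sketch of both steps, assuming nothing beyond the standard library; the function names are illustrative, not the repo's actual API.

```python
import re

def prepare_query(q: str) -> str:
    """Append Qwen3's /no_think directive to suppress chain-of-thought."""
    return q + " /no_think"

def strip_think(text: str) -> str:
    """Remove any residual <think>...</think> spans from generated output."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_think("<think>\n\n</think>The needle is 7421."))  # -> The needle is 7421.
```

On non-Qwen3 models the `/no_think` suffix is just an ignored token and `strip_think` finds nothing to remove, which is why both are safe no-ops.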
