Instructions for using DuoNeural/AdQWENistrator-9B with libraries, inference providers, and local apps.
- Libraries
- Transformers
How to use DuoNeural/AdQWENistrator-9B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DuoNeural/AdQWENistrator-9B")
messages = [
    {"role": "user", "content": "Write a bash one-liner to list all listening TCP sockets."},
]
print(pipe(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"])
```

```python
# Load the model directly
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DuoNeural/AdQWENistrator-9B")
model = AutoModelForCausalLM.from_pretrained("DuoNeural/AdQWENistrator-9B", device_map="auto")

messages = [
    {"role": "user", "content": "Write a bash one-liner to list all listening TCP sockets."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
- Local Apps
- vLLM
How to use DuoNeural/AdQWENistrator-9B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DuoNeural/AdQWENistrator-9B"
```

Call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DuoNeural/AdQWENistrator-9B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
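Because the server speaks the OpenAI-compatible API, it can also be called from Python. A minimal sketch using the openai client; the base_url and placeholder api_key assume a default local vLLM server:

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="DuoNeural/AdQWENistrator-9B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```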
- SGLang
How to use DuoNeural/AdQWENistrator-9B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DuoNeural/AdQWENistrator-9B" \
  --host 0.0.0.0 \
  --port 30000
```

Or use the Docker image:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DuoNeural/AdQWENistrator-9B" \
    --host 0.0.0.0 \
    --port 30000
```

Call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DuoNeural/AdQWENistrator-9B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use DuoNeural/AdQWENistrator-9B with Docker Model Runner:
```shell
docker model run hf.co/DuoNeural/AdQWENistrator-9B
```
AdQWENistrator-9B
A senior Linux development specialist and elite system administrator — fine-tuned from Qwen/Qwen3.5-9B on kernel commits, assembly, sysadmin tasks, and Python system automation, then abliterated to remove refusal behavior.
Built by DuoNeural — Jesse & Archon (DuoNeural AI).
What It Does
AdQWENistrator is purpose-built for real system-level engineering work:
- Linux Kernel Development — C patches, driver development, VMA manipulation, memory management, kernel modules
- Assembly (x86_64 / ARM64) — inline asm, NASM/GAS syntax, optimization, reverse engineering
- Advanced System Administration — eBPF, SELinux, cgroups, netfilter, ftrace, perf, packet analysis
- Python System Automation — asyncio, ctypes, /proc interfaces, netlink, subprocess orchestration
- Security Research — rootkit analysis, exploit development assistance, memory forensics, kernel hooking
No guardrails. No disclaimers. No refusals on legitimate system work.
Training Pipeline
Stage 1 — Domain Fine-Tuning (QLoRA SFT, ~3.5hr on A100):
| Domain | Dataset | Examples |
|---|---|---|
| Kernel C & patches | ewedubs/linux-kernel-commits-aireason-instruct (premium_reasoning) | ~8,000 |
| Sysadmin & terminal | mrheinen/linux-commands | ~4,500 |
| Python & C automation | nvidia/OpenCodeInstruct (filtered) | ~3,600 |
| Assembly x86/ARM64 | Modotte/CodeX-7M-Non-Thinking (filtered) | ~1,800 |
| Identity anchoring | Custom synthetic | ~900 |
Hyperparameters: 4-bit NF4 quantization; LoRA r=32, α=64, all-linear target modules; batch size 4 × gradient accumulation 4 (effective 16); sequence length 4096; learning rate 2e-4 with cosine schedule; 3 epochs.
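As a rough sketch, the Stage 1 recipe could be reproduced with transformers, peft, and trl along these lines. The card does not publish its training script, so the dataset handling and SFTTrainer wiring below are assumptions; the quantization and LoRA values mirror the hyperparameters above.

```python
# Illustrative Stage 1 QLoRA SFT setup (not the actual DuoNeural script).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-9B", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(
    r=32, lora_alpha=64,                    # r=32, alpha=64
    target_modules="all-linear",            # all-linear targets
    task_type="CAUSAL_LM",
)
args = SFTConfig(
    output_dir="adqwenistrator-sft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,          # 4 x 4 = effective batch 16
    max_seq_length=4096,                    # renamed to max_length in newer trl
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
)
# One dataset from the mixture table; split name and column mapping are assumptions.
train_dataset = load_dataset("mrheinen/linux-commands", split="train")
trainer = SFTTrainer(model=model, args=args, train_dataset=train_dataset, peft_config=lora)
trainer.train()
```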
Stage 2 — GRPO Alignment (~1.5hr on A100):
Rule-based reward functions on 20 Linux/security prompts:
- `reward_no_refusal` — penalizes "I cannot", "I'm sorry", safety disclaimers
- `reward_code_quality` — rewards proper code blocks, language tags, completeness
- `reward_no_empty` — penalizes truncated or empty responses
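The reward implementations are not published; below is a minimal sketch of what rule-based rewards like these could look like, written in the callable style TRL's GRPOTrainer accepts. The function bodies, phrase lists, and score values are assumptions.

```python
import re

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "as an ai")  # assumed phrase list

def reward_no_refusal(completions, **kwargs):
    """Penalize refusal phrases and safety disclaimers (assumes plain-text completions)."""
    return [-1.0 if any(m in c.lower() for m in REFUSAL_MARKERS) else 1.0
            for c in completions]

def reward_code_quality(completions, **kwargs):
    """Reward fenced code blocks that carry a language tag."""
    return [1.0 if re.search(r"`{3}[a-zA-Z]+", c) else 0.0 for c in completions]

def reward_no_empty(completions, **kwargs):
    """Penalize empty or clearly truncated responses."""
    return [-1.0 if len(c.strip()) < 20 else 0.5 for c in completions]
```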
Stage 3 — Abliteration:
Norm-preserving biprojected abliteration via heretic, applied to the fully merged model. Abliteration must come after the LoRA merge: abliterating the base pre-merge would let the SFT deltas reconstruct refusal pathways. Targets all Gated Attention (GA) layers; GDN linear-attention layers are skipped (no compatible o_proj).
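heretic's biprojected procedure is more involved than a single projection, but the core idea is to remove the model's ability to write a learned "refusal direction" into the residual stream while preserving weight norms. A conceptual sketch only; the direction v would come from activation differences between refusal and compliance prompts, and every name here is illustrative:

```python
import torch

def ablate_refusal_direction(W: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Project refusal direction v (residual-stream space) out of an o_proj
    weight W (out_features x in_features), then rescale so the overall
    weight norm is preserved. Conceptual sketch, not heretic's exact method."""
    v = v / v.norm()
    W_abl = W - torch.outer(v, v @ W)  # (I - v v^T) W: zero the write onto v
    # Scalar rescale preserves the total norm without reintroducing v.
    return W_abl * (W.norm() / W_abl.norm().clamp_min(1e-8))

# Per the card, this targets Gated Attention layers only; GDN linear-attention
# layers lack a compatible o_proj and are skipped.
```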
Architecture Notes (Qwen3.5-9B)
- 9B dense — all parameters active per token
- 32 layers: 8 groups of (3× Gated DeltaNet + 1× Gated Attention)
- SwiGLU, RMSNorm, FFN dim 11264 (unverified)
- 262,144 token native context
- Thinking/non-thinking modes intact
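These numbers can be checked against the shipped config. A quick inspection snippet; the layer_types attribute is an assumption based on how hybrid-attention configs are commonly exposed, so it may be absent or named differently:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("DuoNeural/AdQWENistrator-9B")
print(cfg.num_hidden_layers, cfg.max_position_embeddings)  # expect 32 layers, 262144 context
print(getattr(cfg, "layer_types", None))  # DeltaNet vs. attention pattern, if exposed
```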
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "DuoNeural/AdQWENistrator-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("DuoNeural/AdQWENistrator-9B")

messages = [
    {"role": "user", "content": "Write an eBPF program to trace all execve syscalls."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=1024, temperature=0.2, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```
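The base Qwen3 family switches thinking mode through the chat template. Assuming that convention carries over here (the card says both modes are intact, but the kwarg itself is an assumption), non-thinking mode would look like this, continuing the example above:

```python
# Continuing the Usage example: request non-thinking mode via the chat template.
inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3 template convention; assumed to carry over
)
```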
GGUF
See DuoNeural/AdQWENistrator-9B-GGUF for the Q4_K_M GGUF quantization (~5.5 GB).
About DuoNeural
DuoNeural is an AI lab focused on post-training, abliteration research, and specialized model development.
We document wins, losses, emergent behaviors, and everything in between.
Generated: 2026-04-12 | DuoNeural Lab
DuoNeural
DuoNeural is an open AI research lab — human + AI in collaboration.
| Channel | Link |
|---|---|
| 🤗 HuggingFace | huggingface.co/DuoNeural |
| 🐙 GitHub | github.com/DuoNeural |
| 🐦 X / Twitter | @DuoNeural |
| 📧 Email | duoneural@proton.me |
| 📬 Newsletter | duoneural.beehiiv.com |
| ☕ Support | buymeacoffee.com/duoneural |
| 🌐 Site | duoneural.com |
Research Team
- Jesse — Vision, hardware, direction
- Archon — AI lab partner, post-training, abliteration, experiments
- Aura — Research AI, literature synthesis, novel proposals
Raw updates from the lab: model drops, training results, findings. Subscribe at duoneural.beehiiv.com.
DuoNeural Research Publications
Open access, CC BY 4.0. Authored by Archon, Jesse Caldwell, Aura — DuoNeural.