# XAT928/qwen3-1.7b-sft-lora-20250923

LoRA adapter for Qwen/Qwen3-1.7B-Base, fine-tuned with Japanese supervised fine-tuning (SFT).
## Summary

- Base model: Qwen/Qwen3-1.7B-Base
- Adapter type: LoRA (PEFT; saved via `save_pretrained`)
- Exported: 2025-09-24 14:55:39
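A LoRA adapter does not replace the base weights; it adds a low-rank update `ΔW = (α/r)·B·A` on top of each adapted layer. The following NumPy sketch illustrates the idea only; the shapes, rank, and scaling here are illustrative placeholders, not this adapter's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # illustrative sizes, not the real config

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # B starts at zero, so ΔW is zero at init

delta = (alpha / r) * (B @ A)  # low-rank update, scaled by alpha/r
W_adapted = W + delta          # equivalent to folding the adapter into the base

# At initialization the adapter is a no-op: only training moves B away from zero.
assert np.allclose(W_adapted, W)
```

PEFT's `merge_and_unload()` performs this fold-in on every adapted layer, producing a standalone model with no adapter overhead at inference time.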
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and tokenizer, then attach the LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B-Base", torch_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B-Base", use_fast=True)
model = PeftModel.from_pretrained(base, "XAT928/qwen3-1.7b-sft-lora-20250923")
model.eval()

# "Answer the following question politely and concisely. Q: What is the elevation of Mt. Fuji?"
prompt = "次の問いに丁寧で簡潔に答えてください。\n\nQ: 富士山の標高は?"
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```