# thetmon/c4
LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).
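Since this is a standard LoRA adapter on a Hub base model, it can presumably be attached with PEFT. The snippet below is a minimal loading sketch, assuming the adapter is published under this repo id (`thetmon/c4`) and was saved in the usual PEFT format; neither is confirmed by this card.

```python
# Hypothetical usage sketch; repo id and PEFT format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",  # base model listed below
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "thetmon/c4")
```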
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 2048
- Epochs: 3
- Learning rate: 2e-06
- LoRA: r=64, alpha=128
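The hyperparameters above can be sketched as an Unsloth + TRL training setup. Only the values listed on this card (base model, 4-bit loading, max length 2048, 3 epochs, lr 2e-6, r=64, alpha=128, the training dataset) come from the source; everything else (batch size, use of `SFTTrainer`, dataset split) is an assumption for illustration.

```python
# Training sketch under stated assumptions; not the author's exact script.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=2048,   # from the card
    load_in_4bit=True,     # QLoRA: 4-bit quantized base weights
)

model = FastLanguageModel.get_peft_model(
    model,
    r=64,                  # LoRA rank, from the card
    lora_alpha=128,        # from the card
)

# Dataset listed under Sources & Terms; split name is an assumption.
dataset = load_dataset("u-10bei/structured_data_with_cot_dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        num_train_epochs=3,              # from the card
        learning_rate=2e-6,              # from the card
        per_device_train_batch_size=2,   # assumption, not stated on the card
    ),
)
trainer.train()
```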
## Sources & Terms
- Training data: u-10bei/structured_data_with_cot_dataset (MIT License)