# exp_camelcase

Model ID: `ekunish/exp_camelcase`
exp008a plus camelCase-augmented data (21K examples + 1.6K camelCase conversion variants).
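The conversion variants could be produced along the lines of the sketch below. This is a hypothetical illustration of the augmentation step, not the actual data-generation script; the function name and example identifier are invented.

```python
# Hypothetical sketch of the camelCase augmentation: rewrite snake_case
# identifiers in existing SFT samples as camelCase to create variants.
def to_camel_case(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(to_camel_case("user_login_count"))  # -> userLoginCount
```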
## Training Configuration
| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen3-4B-Instruct-2507 |
| Method | QLoRA (4-bit) |
| Max sequence length | 512 |
| Epochs | 1 |
| Learning rate | 1e-6 |
| LoRA r | 64 |
| LoRA alpha | 128 |
| Batch size | 2 × 8 = 16 (effective) |
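These hyperparameters map onto a peft/transformers setup roughly as follows. This is a sketch reconstructed from the table, not the published training script; the NF4 quantization type, the compute dtype, and the per-device/gradient-accumulation split of the batch size are assumptions.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# QLoRA: load the base model in 4-bit (quant type assumed to be NF4)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA rank and alpha from the table above
lora_config = LoraConfig(r=64, lora_alpha=128, task_type="CAUSAL_LM")

# Assumes effective batch 16 = 2 per device x 8 accumulation steps
args = TrainingArguments(
    num_train_epochs=1,
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
)
```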
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "ekunish/exp_camelcase"

# Load the base model and tokenizer, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
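A quick inference check might then look like the following; the prompt is illustrative only, and generation settings are left at their defaults.

```python
# Illustrative prompt; the model targets structured-output tasks
messages = [{"role": "user", "content": "Convert `max_retry_count` to camelCase."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```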
## Training Data
- Dataset: `data/sft_u10bei_camelcase`
- License: CC-BY-4.0 (where applicable)
## Sources & License
- Training data: `u-10bei/structured_data_with_cot_dataset_512_v2`, `daichira/structured-3k-mix-sft`, etc.
- Dataset License: Creative Commons Attribution (CC-BY-4.0)
- Compliance: Users must comply with both the dataset's attribution requirements and the base model's original terms of use.
## Competition
Matsuo Lab (松尾研) LLM Community 2025 Course Main Competition (StructEval-T)