How to use with SGLang

Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Andy-ML-And-AI/HyperThinkCode-Qwen3-8B-v1.5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Andy-ML-And-AI/HyperThinkCode-Qwen3-8B-v1.5",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
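The same request can be issued from Python. Below is a minimal sketch using only the standard library (the endpoint and model name follow the curl example above; `urllib` is used here to avoid extra dependencies):

```python
import json
import urllib.request

# Build the same OpenAI-compatible chat payload as the curl example above.
payload = {
    "model": "Andy-ML-And-AI/HyperThinkCode-Qwen3-8B-v1.5",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

req = urllib.request.Request(
    "http://localhost:30000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send the request and print the assistant's reply (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```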
Use the Docker image
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Andy-ML-And-AI/HyperThinkCode-Qwen3-8B-v1.5" \
        --host 0.0.0.0 \
        --port 30000
```
The server can then be called with the same curl request shown above.
HyperThinkCode-Qwen3-8B-v1

HyperThinkCode-Qwen3-8B-v1 is a LoRA fine-tune of the Qwen3-8B base model.


🛠 Experimental Setup

  • Base model: Qwen3-8B
  • Hardware: 2× NVIDIA Tesla T4 (16 GB VRAM each)
  • 4-bit QLoRA with rank = 16 and alpha = 16
  • LoRA applied to all linear projection layers:
    • Attention: q, k, v, o
    • MLP: gate, up, down
  • Training time: ~1 hour 17 minutes
  • Total steps: 50
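For reference, the setup above corresponds to a LoRA configuration along these lines. This is a hedged sketch as a plain dictionary; the module names assume the standard Qwen-family projection-layer naming (`q_proj`, `gate_proj`, etc.) used by Hugging Face transformers:

```python
# LoRA / QLoRA hyperparameters matching the experimental setup above.
lora_config = {
    "r": 16,               # LoRA rank
    "lora_alpha": 16,      # LoRA scaling factor
    "load_in_4bit": True,  # 4-bit QLoRA quantization
    "target_modules": [
        # Attention projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        # MLP projections
        "gate_proj", "up_proj", "down_proj",
    ],
}
```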

🧠 Dataset & Objective

The model was trained on a 30k-example subset of the
Sashvat/HyperThink-X-Nvidia-Opencode-Reasoning-200K dataset.

  • Uses the chat template with the assistant response placed in the thinking field
  • Objective: encourage explicit reasoning before the direct response
  • Sequence length capped at 4096 tokens (to accommodate code complexity within VRAM constraints)

📉 Training Logs

With only 50 steps, the loss shows the variance expected given the model and dataset complexity.

| Step | Training Loss |
|------|---------------|
| 10   | 0.8177        |
| 25   | 0.7358        |
| 50   | 0.6785        |
  • Global batch size: 8 (1 device × 8 gradient-accumulation steps)
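As a sanity check on scale, the global batch size and step count pin down how much data the run actually saw (assuming one example per device per micro-step, as the batch-size note above implies):

```python
devices = 1
grad_accum_steps = 8
per_device_batch = 1  # assumed; implied by "1 device × 8 gradient steps"

global_batch = devices * per_device_batch * grad_accum_steps  # 8, as reported
steps = 50
examples_seen = global_batch * steps  # 400 examples of the 30k subset

print(global_batch, examples_seen)
```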

📊 Evaluation (Ongoing)

Benchmarks are currently being run with the lm-evaluation-harness (lm-eval) library:

  • HumanEval (Coding)
  • GSM8K (Math)

Comparisons are being made against the base model.
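A typical lm-evaluation-harness invocation for these two benchmarks looks roughly like the following. This is a sketch, not the authors' exact command; task names and flags should be checked against the installed lm-eval version, and HumanEval requires explicitly opting in to code execution:

```shell
# Evaluate the fine-tune on GSM8K and HumanEval with lm-eval (sketch).
lm_eval --model hf \
    --model_args pretrained=Andy-ML-And-AI/HyperThinkCode-Qwen3-8B-v1.5 \
    --tasks gsm8k,humaneval \
    --batch_size 4 \
    --confirm_run_unsafe_code
```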


🔁 Reproduction

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit, matching the training configuration.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Andy-ML-And-AI/HyperThinkCode-Qwen3-8B-v1",
    max_seq_length=4096,
    load_in_4bit=True,
)
```