How to use from vLLM
Install vLLM from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "GetSoloTech/FoodStack"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "GetSoloTech/FoodStack",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
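
The same endpoint can also be called from Python. Below is a minimal sketch using the official openai client; it assumes the server started above is listening on localhost:8000, and passes a placeholder API key since vLLM does not require one by default.

# Minimal sketch: call the vLLM OpenAI-compatible server from Python.
# Assumes the server started above is listening on localhost:8000;
# "EMPTY" is a placeholder, as vLLM needs no API key by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="GetSoloTech/FoodStack",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)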
Use Docker
docker model run hf.co/GetSoloTech/FoodStack

Model Details

Base Model: google/gemma-3-270m-it
Method: LoRA (PEFT)
Parameters: 0.27B
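
For local inference without a server, the model can be loaded with transformers. This is a hedged sketch: it assumes the repository hosts full BF16 weights loadable via AutoModelForCausalLM; if it instead contains only the LoRA adapter, attach it to the base model with peft instead.

# Hedged sketch: local inference with transformers, assuming the repo
# contains full (merged) BF16 weights rather than a bare LoRA adapter.
# If it is adapter-only, load google/gemma-3-270m-it and attach the
# adapter with peft.PeftModel.from_pretrained instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GetSoloTech/FoodStack")
model = AutoModelForCausalLM.from_pretrained(
    "GetSoloTech/FoodStack", torch_dtype=torch.bfloat16
)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))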

Training Hyperparameters

Epochs: 1
Max Steps: 100
Batch Size: 4
Gradient Accumulation: 4
Learning Rate: 0.0002
LoRA r: 4
LoRA Alpha: 4
Max Sequence Length: 2048
Training Duration: 41m 11s
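
For reference, the table above maps onto a PEFT configuration roughly as follows. This is a sketch only: the target modules and the exact trainer Solo uses are not published here, so those parts are assumptions.

# Sketch of a LoRA (PEFT) setup mirroring the hyperparameters above.
# target_modules is an assumption; the card does not state which
# projections Solo adapts. The remaining rows (batch size 4, gradient
# accumulation 4, learning rate 2e-4, max steps 100, max sequence
# length 2048) correspond to trainer arguments, not the LoRA config.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m-it")
lora_config = LoraConfig(
    r=4,                                  # LoRA r
    lora_alpha=4,                         # LoRA Alpha
    target_modules=["q_proj", "v_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()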

Dataset

GetSoloTech/Code-Reasoning
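
The dataset can be inspected with the Hugging Face datasets library; the "train" split name below is an assumption.

# Minimal sketch: load the training dataset for inspection.
from datasets import load_dataset

dataset = load_dataset("GetSoloTech/Code-Reasoning", split="train")
print(dataset[0])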


Trained with Solo

Model size: 0.3B params (Safetensors, BF16)