TNG Technology Consulting fine-tuned the 32-billion-parameter OLMo-2 language model on AMD MI300X GPUs using the Open R1 dataset, with a focus on strengthening the model's reasoning capabilities. The MI300X accelerators, with their multi-chip-module architecture and substantial memory bandwidth, handled the model's training workload efficiently. The Open R1 dataset, curated by Hugging Face, provides a comprehensive collection of mathematical problems with detailed reasoning traces, making it a strong foundation for this fine-tuning. The effort underscores the potential of open-source initiatives combined with advanced hardware to advance AI research.
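For a quick look at the kind of training data involved, here is a minimal sketch of loading the Open R1 math data with the Hugging Face `datasets` library. The dataset ID `open-r1/OpenR1-Math-220k` is an assumption (this card does not name the exact dataset revision), so inspect the columns rather than relying on specific field names:

```python
# Hypothetical sketch: the dataset ID below is an assumption,
# not confirmed by this model card.
from datasets import load_dataset

dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train")

# Discover the actual column names (e.g. problem statements and
# reasoning traces) before building a fine-tuning pipeline on top.
example = dataset[0]
print(example.keys())
```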
Model tree for tngtech/OLMo-2-Instruct-Math-32B
- Base model: allenai/OLMo-2-0325-32B
- Fine-tuned: allenai/OLMo-2-0325-32B-SFT
- Fine-tuned: allenai/OLMo-2-0325-32B-DPO
- Fine-tuned: allenai/OLMo-2-0325-32B-Instruct
Install vLLM from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tngtech/OLMo-2-Instruct-Math-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "tngtech/OLMo-2-Instruct-Math-32B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
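Because the server exposes an OpenAI-compatible API, the same request can also be made from Python with the official `openai` client. This is a minimal sketch assuming the default vLLM server settings shown above; the `api_key` value is a placeholder, since vLLM does not require a real key by default:

```python
# Minimal sketch: call the vLLM server started above via its
# OpenAI-compatible endpoint, assuming default host and port.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; no real key needed for a local vLLM server
)

response = client.chat.completions.create(
    model="tngtech/OLMo-2-Instruct-Math-32B",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)
print(response.choices[0].message.content)
```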