Instructions to use MiniMaxAI/MiniMax-M2.5 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MiniMaxAI/MiniMax-M2.5 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MiniMaxAI/MiniMax-M2.5", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-M2.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("MiniMaxAI/MiniMax-M2.5", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MiniMaxAI/MiniMax-M2.5 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MiniMaxAI/MiniMax-M2.5"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MiniMaxAI/MiniMax-M2.5",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/MiniMaxAI/MiniMax-M2.5
```
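The same OpenAI-compatible endpoint can also be called from Python. As a minimal sketch using only the standard library (assuming a server started with `vllm serve` as above is listening on `localhost:8000`):

```python
import json
import urllib.request

# Build the OpenAI-compatible chat-completions payload.
payload = {
    "model": "MiniMaxAI/MiniMax-M2.5",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}
body = json.dumps(payload).encode("utf-8")

# Sending the request requires a running server; uncomment to send.
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The official `openai` client works the same way by pointing `base_url` at the local server.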
- SGLang
How to use MiniMaxAI/MiniMax-M2.5 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "MiniMaxAI/MiniMax-M2.5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MiniMaxAI/MiniMax-M2.5",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "MiniMaxAI/MiniMax-M2.5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MiniMaxAI/MiniMax-M2.5",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use MiniMaxAI/MiniMax-M2.5 with Docker Model Runner:
```shell
docker model run hf.co/MiniMaxAI/MiniMax-M2.5
```
Question about reproducing SWE-Bench-Pro results
#51
by Clumsy1 - opened
Hello MiniMax Team!
Following the method described in the reply to discussion 17 (https://huggingface.co/MiniMaxAI/MiniMax-M2.5/discussions/17), I tried to reproduce the SWE-Bench Pro score, with minor changes to the user prompt:
```
{problem_statement}
{requirements}
{interface}
Can you help me implement the necessary changes to this repository so that the issue can be resolved?
---------
# INSTRUCTIONS
Follow these steps to resolve the issue:
1. As a first step, it might be a good idea to explore the repo to familiarize yourself with its structure.
2. Create a script to reproduce the error and execute it using the BashTool, to confirm the error
3. Edit the sourcecode of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
Your thinking should be thorough and so it's fine if it's very long.
You should use tools as much as possible, ideally more than 100 times. You should also implement your own tests first before attempting the problem.
I will export your changes and apply suitable test patches to verify if your fix is correct when you finish this task. This means you MUST NOT modify the testing logic or any of the tests in any way!
--------
**Workspace Path**: The repository is located at `{working_dir}`. All file operations should be performed relative to this directory.
```
However, my reproduced score is only around 50%. After investigating, I found no environment or API issues. Is there some other setting that isn't aligned?
May I also ask: I followed the same method, using the claude-code version and prompt from https://huggingface.co/MiniMaxAI/MiniMax-M2.5/discussions/17, but my score came out in the low tens. What am I doing wrong?
When you run this benchmark, do you report Pass@1 or Pass@N, or an average over N runs?
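(The three reporting conventions asked about above differ substantially. As a sketch of the bookkeeping, not MiniMax's actual evaluation script, here is the standard unbiased pass@k estimator, 1 - C(n-c, k)/C(n, k) for n samples of which c are correct, averaged over tasks; the `results` data is purely hypothetical.)

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one task: n samples, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Hypothetical per-task results: (total runs, correct runs).
results = [(4, 4), (4, 2), (4, 0)]
score = sum(pass_at_k(n, c, k=1) for n, c in results) / len(results)
print(f"pass@1 = {score:.3f}")  # pass@1 reduces to the average per-run success rate
```

With k=1 this coincides with "average over N runs", while pass@N (any of N attempts succeeding) can be much higher, which alone could explain a large reported-vs-reproduced gap.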