burtenshaw (HF Staff) committed · verified
Commit 825eff4 · Parent: 9c7f2fb

Add MMLU-Pro evaluation result


## Evaluation Results

This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).

### What This Enables

- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

![Model Evaluation Results](https://huggingface.co/huggingface/documentation-images/resolve/main/evaluation-results/eval-results-previw.png)

### Format Details

Results are stored as YAML files in the `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
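As a sketch of what one entry in that folder looks like, the snippet below emits a minimal eval-results record as YAML text. The helper name and exact field layout are assumptions modeled on the `mmlu_pro.yaml` file added in this PR; consult the linked documentation for the authoritative schema.

```python
# Illustrative sketch: build one .eval_results/ entry as YAML text.
# The function name and field layout mirror the mmlu_pro.yaml file in
# this PR; they are not an official Hub API.
def eval_result_yaml(dataset_id: str, value: float, date: str,
                     source_url: str, source_name: str) -> str:
    return (
        "- dataset:\n"
        f"    id: {dataset_id}\n"
        f"  value: {value}\n"
        f"  date: '{date}'\n"
        "  source:\n"
        f"    url: {source_url}\n"
        f"    name: {source_name}\n"
    )

entry = eval_result_yaml(
    "TIGER-Lab/MMLU-Pro", 80.6, "2026-01-15",
    "https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct", "Model Card",
)
print(entry)
```

Writing the returned string to `.eval_results/mmlu_pro.yaml` in a model repo would reproduce the file added by this PR.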

---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*

Files changed (1)
  1. .eval_results/mmlu_pro.yaml +7 -0
.eval_results/mmlu_pro.yaml ADDED

```diff
@@ -0,0 +1,7 @@
+- dataset:
+    id: TIGER-Lab/MMLU-Pro
+  value: 80.6
+  date: '2026-01-15'
+  source:
+    url: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct
+    name: Model Card
```