Add MMLU-Pro evaluation result
## Evaluation Results
This PR adds a structured evaluation result using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).
### What This Enables
- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

### Format Details
Results are stored as YAML files in the `.eval_results/` folder. See the [Eval Results documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*
```diff
@@ -0,0 +1,7 @@
+- dataset:
+    id: TIGER-Lab/MMLU-Pro
+  value: 80.6
+  date: '2026-01-15'
+  source:
+    url: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct
+    name: Model Card
```
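For reviewers, a quick way to sanity-check an entry like the one above is to validate its shape after parsing. The sketch below is a minimal, hypothetical check: the field names come from the YAML in this PR, but the required-field set is an assumption, not the official `.eval_results/` schema.

```python
# Sketch: validate the shape of one parsed eval-result entry.
# The required-key set below is an assumption based on this PR's YAML,
# not the official .eval_results/ schema.

REQUIRED_KEYS = {"dataset", "value", "date", "source"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems found in one eval-result entry."""
    problems = [f"missing key: {k}" for k in REQUIRED_KEYS - entry.keys()]
    if "dataset" in entry and "id" not in entry.get("dataset", {}):
        problems.append("dataset.id is required")
    if "value" in entry and not isinstance(entry["value"], (int, float)):
        problems.append("value must be numeric")
    return problems

# The entry added by this PR, as it would look after YAML parsing.
entry = {
    "dataset": {"id": "TIGER-Lab/MMLU-Pro"},
    "value": 80.6,
    "date": "2026-01-15",
    "source": {
        "url": "https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct",
        "name": "Model Card",
    },
}
print(validate_entry(entry))  # []
```

An entry with a missing `dataset` block or a string `value` would produce a non-empty problem list instead of passing silently.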