Why is F16 used to represent MXFP4?

#14
by czl - opened

May I know the rationale for calling the MXFP4_MOE quant F16?

  • I understand the MX formats from the Microscaling paper; it shows that MXFP6 performance is comparable to FP32.
  • A comparison in NVIDIA's blog pits MXFP4 against FP8 and notes that MXFP4 may show a noticeable accuracy drop versus FP8.
    • In that case, wouldn't F8 be more suitable?
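For context on the accuracy gap raised above: MXFP4 stores each block of 32 values as 4-bit E2M1 codes sharing one power-of-two (E8M0) scale, so every element must snap to one of only eight magnitudes. A minimal sketch, assuming nearest-value rounding (the function name and rounding choice here are illustrative, not the exact OCP MX spec behavior):

```python
import math

# Representable FP4 (E2M1) magnitudes; the sign is a separate bit.
FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_mxfp4_block(block):
    """MXFP4-style quantization of one 32-element block: a shared
    power-of-two scale (E8M0) plus one 4-bit E2M1 code per element.
    Illustrative sketch only."""
    assert len(block) == 32
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return list(block), 1.0
    # Smallest power of two such that the largest magnitude fits
    # inside FP4's range (max representable magnitude is 6.0).
    scale = 2.0 ** math.ceil(math.log2(amax / 6.0))
    dequant = []
    for v in block:
        # Round the scaled magnitude to the nearest FP4 value.
        mag = min(FP4_E2M1, key=lambda m: abs(abs(v) / scale - m))
        dequant.append(math.copysign(mag, v) * scale)
    return dequant, scale
```

With only eight magnitudes per sign, the rounding step is far coarser than FP8's, which is consistent with the accuracy drop the NVIDIA comparison describes.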
Unsloth AI org

Because OpenAI originally released the model in this format, F16 is technically the model's 'original' precision. Only the MoE layers are quantized. If you want all layers unquantized, go for the B32 version.

Thanks for clarifying.

czl changed discussion status to closed
