EmotionThinker: Prosody-Aware Reinforcement Learning for Explainable Speech Emotion Reasoning

ICLR 2026 (Oral)

Introduction

EmotionThinker is the first RL-enhanced SpeechLLM framework for interpretable speech emotion reasoning. For details, please refer to the paper.

Unlike conventional speech emotion recognition (SER) systems that treat emotion as a flat classification problem, EmotionThinker reframes SER as a deep reasoning problem, enabling models to jointly produce accurate emotion labels and structured, human-aligned explanations.

EmotionThinker offers the following advantages:

  • Higher emotion recognition accuracy than existing SpeechLLMs;
  • Deep reasoning that integrates emotion-related cues to justify its predictions;
  • Fine-grained audio captioning covering speaker traits, prosodic cues, and semantic information (example prompts below).
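
The same chat pipeline shown in the Quickstart below can be pointed at each of these abilities by changing the user prompt. The wordings here are illustrative assumptions on our part, not prompts taken from the paper:

# Hypothetical prompt variants (illustrative only; not from the paper):
caption_prompt = "<audio>Describe this audio clip in detail, covering the speaker's traits, prosodic cues, and semantic content."
explain_prompt = "<audio>What emotion does the speaker express? Explain which prosodic and semantic cues support your answer."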

Quickstart
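
The example below assumes a transformers release with Qwen2.5-Omni support and the qwen-omni-utils helper package; this card does not pin exact versions, so install recent versions of both if the imports fail.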

import torch
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

processor = Qwen2_5OmniProcessor.from_pretrained('ddwang2000/EmotionThinker')

model = Qwen2_5OmniForConditionalGeneration.from_pretrained('ddwang2000/EmotionThinker', torch_dtype="auto", device_map="auto")

print("✅ Model loaded successfully")

audio_path = "angry.wav"  # replace with your audio path
prompt = "<audio>What is the emotion expressed in this audio clip? Please choose one from the following options: neutral, happy, sad, angry, contempt or disgust, confused, whisper, surprise, fear."

messages = [
    {"role": "system", "content": [
        {"type": "text", "text": "A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think><answer> answer here </answer>."},
    ]},
    {"role": "user", "content": [
        {"type": "audio", "audio": audio_path},
        {"type": "text", "text": prompt},
    ]},
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
audios, images, videos = process_mm_info(messages, use_audio_in_video=False)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=False)
inputs = inputs.to(model.device).to(model.dtype)  # casts only floating-point features (e.g. audio) to the model dtype

with torch.no_grad():
    text_ids = model.generate(
        **inputs,
        return_audio=False,  # text-only generation; skip the speech decoder
        max_new_tokens=2048,
    )[:, inputs.input_ids.size(1):]  # keep only the newly generated tokens

output = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
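
The model follows the tag protocol given in the system prompt: reasoning inside <think> </think> and the final label inside <answer> </answer>. A minimal sketch for splitting the two (the parse_response helper is ours, not part of the released code):

import re

def parse_response(response):
    # Split a generation into (reasoning, answer) using the <think>/<answer>
    # tag protocol; fall back to the raw text if the tags are missing.
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if answer is None:
        return None, response.strip()
    return (think.group(1).strip() if think else None), answer.group(1).strip()

reasoning, emotion = parse_response(output)
print("Reasoning:", reasoning)
print("Predicted emotion:", emotion)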

Citation

If you find this model useful in your research, please cite:

@inproceedings{wang2026emotionthinker,
  title={EmotionThinker: Prosody-Aware Reinforcement Learning for Explainable Speech Emotion Reasoning},
  author={Wang, Dingdong and Liu, Shujie and Zhang, Tianhua and Chen, Youjun and Li, Jinyu and Meng, Helen},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026}
}