Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key
Abstract
ScaleLogic demonstrates that reinforcement learning training compute scales as a power law with reasoning depth, with scaling exponents increasing monotonically with logical expressiveness across multiple reasoning tasks.
Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. Our proposed framework supports a wide range of logics, from simple implication-only logic ("if-then") to more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute T follows a power law with respect to reasoning depth D (T ∝ D^γ, R² > 0.99), and that the scaling exponent γ increases monotonically with logical expressiveness, from 1.04 to 2.60. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to +10.66 points) and more compute-efficient transfer than less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and that curriculum-based training substantially improves scaling efficiency.
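The power-law claim T ∝ D^γ is the kind of fit one would obtain by linear regression in log-log space. Below is a minimal sketch of that procedure with entirely illustrative numbers (the depths and compute values are invented for demonstration, not taken from the paper):

```python
import numpy as np

# Hypothetical training-compute measurements at several reasoning depths.
# These numbers are illustrative only, not results from the paper.
depths = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
compute = np.array([1.0, 4.2, 17.5, 70.0, 290.0])  # e.g. relative GPU-hours

# Fit T = c * D^gamma via linear regression in log-log space:
#   log T = gamma * log D + log c
gamma, log_c = np.polyfit(np.log(depths), np.log(compute), 1)

# R^2 of the log-log fit, to mirror the paper's goodness-of-fit check.
pred = gamma * np.log(depths) + log_c
ss_res = np.sum((np.log(compute) - pred) ** 2)
ss_tot = np.sum((np.log(compute) - np.log(compute).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"gamma = {gamma:.2f}, R^2 = {r2:.3f}")
```

Repeating such a fit separately for each expressiveness level, then comparing the estimated γ values, is what the abstract's "γ increases monotonically from 1.04 to 2.60" refers to.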
Community
the thing that sticks with me is the reported power law T ∝ D^γ and the fact that γ grows as you move from simple implication to richer first-order logic. that hints expressiveness, not just horizon, controls learning difficulty in a pretty dramatic way. would love an ablation where horizon is fixed and you vary expressiveness to confirm it's expressiveness driving γ, not just more or different proofs per task. btw the arxivlens breakdown helped me parse the method details, especially the backward proof expansion and the verifiable multiple-choice framing. overall, the transfer gains with expressive synthetic data look nice, but i want to see robustness under distribution shifts.
Thanks, this is a great point. In the current paper, the main evidence comes from fitting the depth-scaling curve separately for each expressiveness level and observing that the exponent increases as the logic becomes richer. One nuance is that γ is estimated across different depths, so fixing the horizon cannot directly test γ itself. But I agree that a tighter ablation would be useful: fixing the horizon and matching proof size while varying only the logical operators. Robustness to broader distribution shifts is also a very good future direction.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SUPERNOVA: Eliciting General Reasoning in LLMs with Reinforcement Learning on Natural Instructions (2026)
- Learning to Generate Formally Verifiable Step-by-Step Logic Reasoning via Structured Formal Intermediaries (2026)
- Apriel-1.5-OpenReasoner: RL Post-Training for General-Purpose and Efficient Reasoning (2026)
- When Can LLMs Learn to Reason with Weak Supervision? (2026)
- AgentV-RL: Scaling Reward Modeling with Agentic Verifier (2026)
- WIST: Web-Grounded Iterative Self-Play Tree for Domain-Targeted Reasoning Improvement (2026)
- Learning from Less: Measuring the Effectiveness of RLVR in Low Data and Compute Regimes (2026)