Understanding the Thinking Process of Reasoning Models: A Perspective from Schoenfeld's Episode Theory
Abstract
Schoenfeld's Episode Theory is applied to analyze the reasoning patterns of Large Reasoning Models on math problems, yielding a publicly available benchmark for fine-grained analysis of machine reasoning.
While Large Reasoning Models (LRMs) generate extensive chain-of-thought reasoning, we lack a principled framework for understanding how these thoughts are structured. In this paper, we introduce a novel approach by applying Schoenfeld's Episode Theory, a classic cognitive framework for human mathematical problem-solving, to analyze the reasoning traces of LRMs. We annotated thousands of sentences and paragraphs from model-generated solutions to math problems using seven cognitive labels (e.g., Plan, Implement, Verify). The result is the first publicly available benchmark for the fine-grained analysis of machine reasoning, including a large annotated corpus and detailed annotation guidebooks. Our preliminary analysis reveals distinct patterns in LRM reasoning, such as the transition dynamics between cognitive states. This framework provides a theoretically grounded methodology for interpreting LRM cognition and enables future work on more controllable and transparent reasoning systems.
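The abstract's mention of "transition dynamics between cognitive states" suggests a first-order Markov view of the annotated traces. As a minimal sketch of what such an analysis could look like (the full label set and all names below are illustrative assumptions, not the paper's released code; only Plan, Implement, and Verify are named in the abstract), one can count consecutive-label transitions and normalize per source label:

```python
from collections import Counter, defaultdict

# Assumed episode label set: the paper names only Plan, Implement, and
# Verify; the remaining four follow Schoenfeld's classic episodes and
# are a guess, not necessarily the paper's published scheme.
LABELS = {"Read", "Analyze", "Explore", "Plan", "Implement", "Verify", "Monitor"}

def transition_matrix(trace):
    """Estimate P(next episode | current episode) from one annotated trace.

    trace: ordered list of episode labels, one per annotated sentence
    or paragraph of a model's reasoning.
    """
    assert all(label in LABELS for label in trace), "unknown episode label"
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):  # consecutive label pairs
        counts[cur][nxt] += 1
    return {
        cur: {nxt: c / sum(nxt_counts.values()) for nxt, c in nxt_counts.items()}
        for cur, nxt_counts in counts.items()
    }

# Toy trace with a repeated Plan -> Implement -> Verify loop, the kind
# of recurring pattern such an analysis is meant to surface.
trace = ["Read", "Analyze", "Plan", "Implement", "Verify",
         "Plan", "Implement", "Verify"]
for cur, dist in transition_matrix(trace).items():
    print(f"{cur:>9} -> {dist}")
```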
Community
Librarian Bot: The following similar papers were recommended by the Semantic Scholar API.
- Meta-R1: Empowering Large Reasoning Models with Metacognition (2025)
- Hop, Skip, and Overthink: Diagnosing Why Reasoning Models Fumble during Multi-Hop Analysis (2025)
- Fast, Slow, and Tool-augmented Thinking for LLMs: A Review (2025)
- From "Aha Moments" to Controllable Thinking: Toward Meta-Cognitive Reasoning in Large Reasoning Models via Decoupled Reasoning and Control (2025)
- A Study on Thinking Patterns of Large Reasoning Models in Code Generation (2025)
- Thinking with Nothinking Calibration: A New In-Context Learning Paradigm in Reasoning Large Language Models (2025)
- Less is More Tokens: Efficient Math Reasoning via Difficulty-Aware Chain-of-Thought Distillation (2025)