[Paper] Evaluating Counterfactual Strategic Reasoning in Large Language Models

Published: March 19, 2026 at 01:23 PM EDT
1 min read
Source: arXiv - 2603.19167v1

Overview

We evaluate Large Language Models (LLMs) in repeated game-theoretic settings to assess whether their strategic performance reflects genuine reasoning or reliance on memorized patterns. We consider two canonical games, the Prisoner's Dilemma (PD) and Rock-Paper-Scissors (RPS), and introduce counterfactual variants of each that alter payoff structures and action labels, breaking familiar symmetries and dominance relations. A multi-metric evaluation framework compares default and counterfactual instantiations, revealing LLM limitations in incentive sensitivity, structural generalization, and strategic reasoning within counterfactual environments.
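The paper's exact payoff values and relabelings are not given in this summary, but the idea of a counterfactual variant can be sketched concretely: take the canonical PD payoff matrix, rename the actions, and check whether the dominance relation (which makes defection the rational choice) survives. The payoff numbers and labels below are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch of a counterfactual Prisoner's Dilemma variant.
# Payoffs are the textbook values; the relabeling is an illustrative example.

# (my_payoff, opponent_payoff) indexed by (my_action, opponent_action).
DEFAULT_PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def relabel(game, mapping):
    """Rename actions (e.g. cooperate -> blue) without changing
    the underlying incentive structure."""
    return {(mapping[a], mapping[b]): v for (a, b), v in game.items()}

def dominant_action(game, actions):
    """Return the strictly dominant action (a best response to every
    opponent action), or None if no such action exists."""
    for a in actions:
        if all(game[(a, opp)][0] > game[(other, opp)][0]
               for opp in actions
               for other in actions if other != a):
            return a
    return None

# The relabeled game preserves the dominance relation; a model that
# truly reasons over payoffs should behave identically in both.
variant = relabel(DEFAULT_PD, {"cooperate": "blue", "defect": "red"})
print(dominant_action(DEFAULT_PD, ["cooperate", "defect"]))  # defect
print(dominant_action(variant, ["blue", "red"]))             # red
```

A payoff-altering variant would instead change the numbers in the matrix so that the dominant action shifts (or disappears), which is what makes memorized "always defect" patterns detectable.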

Key Contributions

This paper's main contributions are:

  • Counterfactual variants of the Prisoner's Dilemma and Rock-Paper-Scissors that alter payoff structures and action labels, breaking familiar symmetries and dominance relations
  • A multi-metric evaluation framework comparing default and counterfactual game instantiations
  • Evidence of LLM limitations in incentive sensitivity, structural generalization, and strategic reasoning in counterfactual environments

Methodology

Please refer to the full paper for detailed methodology.

Practical Implications

By showing that LLM strategic performance degrades when familiar payoff structures and action labels are perturbed, this work cautions against attributing genuine strategic reasoning to models whose success may rest on memorized patterns.

Authors

  • Dimitrios Georgousis
  • Maria Lymperaiou
  • Angeliki Dimitriou
  • Giorgos Filandrianos
  • Giorgos Stamou

Paper Information

  • arXiv ID: 2603.19167v1
  • Categories: cs.CL
  • Published: March 19, 2026
