Journal of System Simulation ›› 2026, Vol. 38 ›› Issue (2): 399-415.doi: 10.16182/j.issn1004731x.joss.25-0996

• Wargaming and Simulation-Based Evaluation •

Evolutionary Game-based Analysis of Responses to Hallucinations in Generative Artificial Intelligence

Yan Qiang, Zhang Qianyu, Wei Na   

  1. School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2025-10-16  Revised: 2025-12-03  Online: 2026-02-18  Published: 2026-02-11
  • Contact: Wei Na

Abstract:

The accelerated deployment of generative artificial intelligence, particularly large language models, has amplified the social risks of hallucinations, posing systemic threats to the credibility of the information ecosystem, the effectiveness of users' cognitive decision-making, and governance security in the public domain. Existing research focuses primarily on hallucination-mitigation mechanisms at the technical level or on the design of regulatory frameworks at the policy level; it lacks a systematic theoretical analysis of how strategic interactions among large language models, users, and regulators evolve under bounded rationality. By introducing evolutionary game theory into the governance of generative artificial intelligence, a tripartite dynamic game model was constructed that integrates the honesty strategies of large language models, user feedback behaviors, and regulatory interventions. The model reveals the dynamic evolutionary paths of strategy selection and their stability conditions for multiple actors under cost-benefit trade-offs. The results show that, under reasonable parameters, the system converges to the optimal equilibrium of honest responses from large language models, active feedback from users, and proactive oversight by regulators. Users' initial willingness to provide positive feedback accelerates both the shift of large language models toward honesty and the intensity of regulatory responses through a dual signaling effect. Incentive mechanisms exhibit asymmetric sensitivity: users respond most strongly to positive incentives, regulatory penalties impose rigid constraints on model compliance, and collaborative benefits play a stabilizing role over the long term. Accordingly, it is necessary to strengthen user feedback incentives, advance the technological empowerment of regulation, and optimize institutional collaboration mechanisms. These measures aim to build a governance ecosystem characterized by tripartite collaboration, cost hedging, and risk sharing, thereby providing theoretical support and policy pathways for building trustworthy AI and governing hallucinations.
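The tripartite dynamics described in the abstract can be sketched with replicator equations for three binary-strategy populations: the share of honest model providers (x), of actively feeding-back users (y), and of proactively overseeing regulators (z). The payoff-advantage functions and all coefficients below are illustrative assumptions for exposition only, not the paper's calibrated parameters:

```python
# Minimal replicator-dynamics sketch of the tripartite game.
# All payoff coefficients are hypothetical, chosen so that the
# cooperative equilibrium (honest, feedback, oversight) is attracting.
def simulate(x=0.5, y=0.5, z=0.5, dt=0.01, steps=20000):
    """x: share of honest LLM providers; y: share of users giving
    active feedback; z: share of regulators exercising oversight."""
    for _ in range(steps):
        # Hypothetical payoff advantages of the cooperative strategy:
        fx = -0.2 + 0.5 * y + 0.6 * z  # honesty pays off once feedback/oversight are strong enough
        fy = 0.1 + 0.3 * x             # feedback incentive grows with the share of honest models
        fz = 0.1 + 0.2 * y             # oversight benefit grows with user feedback signals
        # Replicator dynamics for a two-strategy population: s' = s(1-s) * payoff advantage
        x += dt * x * (1 - x) * fx
        y += dt * y * (1 - y) * fy
        z += dt * z * (1 - z) * fz
    return x, y, z

x, y, z = simulate()
print(round(x, 2), round(y, 2), round(z, 2))  # all three shares approach 1
```

With these (assumed) parameters the system converges to the cooperative equilibrium described in the abstract; lowering the feedback or oversight coefficients in fx below the honesty cost term would instead let the dishonest strategy persist, mirroring the paper's point that penalties and feedback incentives act as binding constraints.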

Key words: generative artificial intelligence, large language model, AI hallucination, evolutionary game theory, collaborative governance
