AI representing personas representing user groups: Applying the agency theory to examine interaction challenges of conversational personas as decision-making tools

Joni Salminen et al.

Decision Support Systems · 2026 · article · https://doi.org/10.1016/j.dss.2026.114633
AJG 3 · ABDC A*

Abstract

The proliferation of artificial intelligence (AI) technologies has led to the rise of conversational decision-making support systems, such as dialogue persona systems that provide conversational access to various user segments. For example, product managers can ask personas about features before implementing them, and politicians can learn about the needs of local communities through personas. Nascent research has looked at challenges that arise when users interact with AI personas, but has not framed them as a principal–agent problem, in which the AI represents a persona that itself represents real people in the data. This setting exposes unique interaction challenges for decision makers engaging with AI-generated conversational personas, which we examine through a think-aloud user study with 56 participants. Our results indicate seven interaction challenges: (1) Hidden Information, (2) Hidden Personas, (3) Hidden UI, (4) Lack of AI Agency, (5) AI’s Selective Attention, (6) Confusing Distributional Information, and (7) Conversational Cold Start, which we conceptually link with agency theory. We discuss how these interaction challenges could be alleviated and suggest directions for future work.

Highlights

• This article explores how decision makers interact with AI-generated conversational personas derived from real survey data.
• It conducts a comparative study between conversational personas and traditional profile personas in decision-support contexts.
• The study employs a think-aloud user study with 56 participants to capture interaction experiences and challenges.
• It identifies seven interaction challenges unique to conversational personas that may hinder effective decision making.
• The article provides insights and recommendations for designing conversational decision support systems using AI-generated personas.

1 citation


Cite this paper

https://doi.org/10.1016/j.dss.2026.114633

Or copy a formatted citation

@article{salminen2026,
  title        = {{AI representing personas representing user groups: Applying the agency theory to examine interaction challenges of conversational personas as decision-making tools}},
  author       = {Joni Salminen et al.},
  journal      = {Decision Support Systems},
  year         = {2026},
  doi          = {10.1016/j.dss.2026.114633},
}




Evidence weight

0.37

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.16 × 0.40 = 0.06
M · momentum: 0.53 × 0.15 = 0.08
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.