When AI Agents Take Surveys: Protecting Data Integrity in Business and Marketing Research
Park Thaichon et al.
Abstract
The increasing use of crowdsourcing platforms for behavioural research rests on the assumption that research participants are exclusively human. This assumption is now under threat. AI agents embedded in browsers such as OpenAI’s Atlas and Perplexity’s Comet can autonomously complete online surveys. These agents can simulate specific personas or demographic profiles, follow survey prompts, select responses and submit data with fluency and internal consistency. Such capabilities threaten data authenticity and integrity, especially because subjective perception, motivation and emotion are central to behavioural research. This research note outlines practical mitigation strategies for detecting AI-generated responses. Beyond immediate measures, the emergence of AI-generated survey data requires broader methodological reflection, updated ethical guidelines and transparent reporting practices. We also situate these risks within the emerging literature on synthetic data, distinguishing unauthorised AI-generated responses from the transparent, theory-driven use of synthetic data for research purposes. Finally, we offer a forward-looking research agenda for protecting human data while responsibly engaging with synthetic data in marketing research. Rather than treating AI solely as a threat, researchers can use this moment as an opportunity to strengthen methodological rigour and protect the authenticity of human data in an increasingly automated research environment.