Conversational signatures: structural patterns in human–AI interaction across models and platforms
Obada Kraishan
Abstract
As conversational AI systems proliferate across platforms and use contexts, understanding the structural patterns of human–AI interaction becomes critical for both system design and user experience optimization. We analyzed 1,469,549 conversations from two large-scale datasets (LMSYS-1M and WildChat) to examine how conversational structures vary across 25 AI models and two deployment platforms. We extracted structural features through automated computational analysis and applied unsupervised clustering and nonparametric statistical tests to identify systematic differences in message length, turn-taking patterns, and conversational balance. Three key findings emerged: (a) deployment context shapes interaction patterns more strongly than model architecture (r = 0.371 vs. r = 0.283), with the same models producing dramatically different conversational structures depending on platform infrastructure, user populations, and task framings; (b) AI models differ substantially in response verbosity, with some generating responses 4.4 times longer than others despite similar capabilities; (c) four distinct conversation types emerged across datasets (technical assistance, general Q&A, intensive collaboration, and quick lookups), with 97.5% consisting of single-exchange interactions rather than multiturn dialogue. These findings challenge assumptions about human–AI conversation as dialogic exchange and demonstrate that deployment context, user populations, and platform affordances fundamentally shape interaction patterns independent of technical capabilities. We discuss implications for conversational AI design, evaluation practices, and theoretical frameworks for understanding human–AI communication.
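The pipeline summarized in the abstract (extract structural features per conversation, then cluster them without supervision) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the specific features (turn count, mean message lengths in words, AI-to-user verbosity ratio), the toy conversations, and the plain k-means routine are all assumptions chosen to mirror the described method.

```python
import numpy as np

def structural_features(conversation):
    """Compute structural features for a conversation given as (role, text) pairs:
    turn count, mean user message length, mean AI message length (in words),
    and the AI:user length ratio as a proxy for conversational balance.
    Feature choice is an assumption, not the paper's exact feature set."""
    user = [len(t.split()) for r, t in conversation if r == "user"]
    ai = [len(t.split()) for r, t in conversation if r == "assistant"]
    return [
        len(conversation),
        sum(user) / max(len(user), 1),
        sum(ai) / max(len(ai), 1),
        sum(ai) / max(sum(user), 1),  # verbosity ratio
    ]

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: assign points to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two toy single-exchange conversations: a quick lookup and a verbose
# technical-assistance reply (hypothetical data for demonstration).
convos = [
    [("user", "capital of France?"), ("assistant", "Paris.")],
    [("user", "debug this loop please"), ("assistant", "word " * 200)],
]
X = np.array([structural_features(c) for c in convos], dtype=float)
labels = kmeans(X, k=2)
```

On the full corpus one would use k = 4 (matching the four reported conversation types) and standardize features before clustering; here k = 2 simply separates the short lookup from the verbose technical reply.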