Efficient Processing of Long Sequence Text Data in Transformer: An Examination of Five Different Approaches
Zihao Jia & Philseok Lee
Abstract
The advent of machine learning and artificial intelligence has profoundly transformed organizational research, especially with the growing application of natural language processing (NLP). Despite these advances, managing long-sequence text input data remains a persistent and significant challenge in NLP analysis within organizational studies. This study introduces five different approaches for handling long-sequence text data: term frequency-inverse document frequency with a random forest algorithm (TF-IDF-RF), Longformer, GPT-4o, truncation with averaged scores, and our proposed construct-relevant text-selection approach. We also present analytical strategies for each approach and evaluate their effectiveness by comparing the psychometric properties of the predicted scores. Among them, GPT-4o, truncation with averaged scores, and the proposed text-selection approach generally demonstrate slightly superior psychometric properties compared to TF-IDF-RF and Longformer. However, no single approach consistently outperforms the others across all psychometric criteria. The discussion explores the practical considerations, limitations, and potential directions for future research on these methods, enriching the dialogue on effective long-sequence text management in NLP-driven organizational research.
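One of the five approaches, truncation with averaged scores, can be illustrated with a minimal sketch: a long text is split into model-sized chunks, each chunk is scored independently, and the chunk scores are averaged into a single prediction. The whitespace tokenizer, chunk size, and `score_fn` below are hypothetical stand-ins, not the paper's actual pipeline.

```python
def chunk_tokens(tokens, max_len=512):
    # Split a token list into consecutive windows of at most max_len tokens,
    # mimicking truncation to a transformer's input limit.
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

def averaged_score(text, score_fn, max_len=512):
    # Score each truncated chunk with score_fn and average the results.
    tokens = text.split()  # placeholder for a real subword tokenizer
    chunks = chunk_tokens(tokens, max_len)
    scores = [score_fn(" ".join(chunk)) for chunk in chunks]
    return sum(scores) / len(scores)
```

In practice `score_fn` would be a fine-tuned scoring model applied per chunk; averaging treats every chunk as equally informative, which is the design choice the paper's construct-relevant text-selection approach is meant to improve on.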