Efficient Processing of Long Sequence Text Data in Transformer: An Examination of Five Different Approaches

Zihao Jia & Philseok Lee

Organizational Research Methods · 2025 · article
https://doi.org/10.1177/10944281251326062
AJG 4 · ABDC A*
Weight: 0.41

Abstract

The advent of machine learning and artificial intelligence has profoundly transformed organizational research, especially with the growing application of natural language processing (NLP). Despite these advances, managing long-sequence text input data remains a persistent and significant challenge in NLP analysis within organizational studies. This study introduces five different approaches for handling long-sequence text data: term frequency-inverse document frequency with a random forest algorithm (TF-IDF-RF), Longformer, GPT-4o, truncation with averaged scores, and our proposed construct-relevant text-selection approach. We also present analytical strategies for each approach and evaluate their effectiveness by comparing the psychometric properties of the predicted scores. Among them, GPT-4o, truncation with averaged scores, and the proposed text-selection approach generally demonstrate slightly superior psychometric properties compared to TF-IDF-RF and Longformer. However, no single approach consistently outperforms the others across all psychometric criteria. The discussion explores the practical considerations, limitations, and potential directions for future research on these methods, enriching the dialogue on effective long-sequence text management in NLP-driven organizational research.

2 citations


Cite this paper

https://doi.org/10.1177/10944281251326062

Or copy a formatted citation

@article{jia2025,
  title        = {{Efficient Processing of Long Sequence Text Data in Transformer: An Examination of Five Different Approaches}},
  author       = {Jia, Zihao and Lee, Philseok},
  journal      = {Organizational Research Methods},
  year         = {2025},
  doi          = {10.1177/10944281251326062},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight: 0.41

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.25 × 0.40 = 0.10
M · momentum: 0.55 × 0.15 = 0.08
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
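The breakdown above is a weighted sum: each component score (F 0.25, M 0.55, V 0.50, R 0.50) is multiplied by its balanced-mode weight (F 0.40, M 0.15, V 0.05, R 0.40), and the products sum to 0.4075, which displays as 0.41. A minimal sketch of that arithmetic, assuming the displayed value is simply this sum rounded half-up to two decimals (the weight names and function are illustrative, not Arbiter's actual code):

```python
from decimal import Decimal, ROUND_HALF_UP

# Balanced-mode weights as shown on this page (assumed fixed per mode).
WEIGHTS = {"F": Decimal("0.40"), "M": Decimal("0.15"),
           "V": Decimal("0.05"), "R": Decimal("0.40")}

def evidence_weight(scores: dict[str, str]) -> Decimal:
    """Weighted sum of component scores, rounded half-up to 2 decimals."""
    total = sum(Decimal(scores[k]) * WEIGHTS[k] for k in WEIGHTS)
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Component scores for this paper: F 0.25, M 0.55, V 0.50, R 0.50
print(evidence_weight({"F": "0.25", "M": "0.55", "V": "0.50", "R": "0.50"}))
# → 0.41  (exact sum is 0.4075)
```

Decimal arithmetic is used so the rounding matches the displayed two-decimal value exactly, avoiding binary floating-point drift near the 0.005 boundary.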