A priority map is all you need: Exploring the roots of neural mechanisms underlying transformer-based large language models.
Koorosh Mirpour et al.
Abstract
The impressive abilities of language AI models are sparking new questions about how artificial and human minds focus. While both seem to selectively pay attention, the underlying mechanisms remain elusive. This research proposes a compelling analogy: the "priority map" concept from visual neuroscience. Priority maps in the brain dynamically guide attention by integrating what is eye-catching (bottom-up) with what is relevant to our goals (top-down). Intriguingly, language AI appears to operate on similar principles, using sophisticated methods to weigh input and task demands. Both systems employ parallel processing, refine information through layered structures, and even use a form of "inhibition" to manage resources efficiently. Despite vast differences in their makeup (brain cells versus computer code), this functional similarity suggests a shared strategy for dynamically prioritizing information. This insight not only illuminates how our own brains prioritize information but could also inspire the development of smarter, more adaptable AI in the future. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
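The priority-map analogy described in the abstract can be illustrated with a minimal sketch: in scaled dot-product attention (the standard transformer mechanism), query-key similarity plays the role of a bottom-up salience signal, an additive bias stands in for top-down goal modulation, and softmax normalization implements a soft "inhibition" in which boosting one item's priority suppresses the weight given to all others. The function name, the NumPy formulation, and the additive `top_down_bias` term are illustrative assumptions for this sketch, not the paper's own model.

```python
import numpy as np

def priority_map_attention(q, K, V, top_down_bias=None):
    """Scaled dot-product attention read as a 'priority map' (sketch).

    Bottom-up scores (query-key similarity) are combined with an
    optional top-down bias, then softmax-normalized so that raising
    one item's priority lowers everything else's weight.
    """
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)          # bottom-up: stimulus-driven similarity
    if top_down_bias is not None:
        scores = scores + top_down_bias  # top-down: goal-driven modulation
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()    # softmax -> a normalized priority map
    return weights @ V, weights

# Hypothetical toy example: three "items" with 2-d keys.
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.eye(3)
q = np.array([1.0, 0.0])

out, w = priority_map_attention(q, K, V)
# Biasing item 1 (goal relevance) raises its weight and, via the
# shared normalization, suppresses the competing items.
out_b, w_b = priority_map_attention(q, K, V, top_down_bias=np.array([0.0, 2.0, 0.0]))
```

The normalization step is what makes the "inhibition" claim concrete in this sketch: the weights are a fixed budget summing to 1, so prioritizing one input necessarily de-prioritizes the rest.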