A priority map is all you need: Exploring the roots of neural mechanisms underlying transformer-based large language models.

Koorosh Mirpour et al.

Psychological Review · 2026 · Article · https://doi.org/10.1037/rev0000616
AJG 4 · ABDC A*

Abstract

The impressive abilities of language AI models are sparking new questions about how artificial and human minds focus. While both seem to selectively pay attention, the underlying mechanisms remain elusive. This research proposes a compelling analogy: the "priority map" concept from visual neuroscience. Priority maps in the brain dynamically guide attention by integrating what is eye-catching (bottom-up) with what is relevant to our goals (top-down). Intriguingly, language AI appears to operate on similar principles, using sophisticated methods to weigh input and task demands. Both systems employ parallel processing, refine information through layered structures, and even use a form of "inhibition" to manage resources efficiently. Despite vast differences in their makeup (brain cells versus computer code), this functional similarity suggests a shared strategy for dynamically prioritizing information. This insight not only illuminates how our own brains prioritize information but could also inspire the development of smarter, more adaptable AI in the future. (PsycInfo Database Record © 2026 APA, all rights reserved).
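To make the analogy concrete, here is a minimal Python sketch of how a priority map could combine bottom-up salience with top-down relevance and normalize the result with a softmax, the same operation transformer attention uses to weigh inputs. This is an illustrative sketch, not the authors' model: the function name, the linear weighting (alpha, beta), and the subtractive inhibition term are all hypothetical assumptions.

import numpy as np

def priority_map(salience, relevance, inhibition, alpha=0.6, beta=0.4, temp=1.0):
    # Hypothetical sketch: linearly integrate bottom-up salience with
    # top-down relevance, subtract an inhibition term for already-attended
    # items, then softmax-normalize (as transformer attention does).
    priority = alpha * salience + beta * relevance - inhibition
    exp = np.exp((priority - priority.max()) / temp)  # numerically stable softmax
    return exp / exp.sum()

# Five locations (or tokens): one eye-catching, one goal-relevant.
salience   = np.array([0.9, 0.1, 0.2, 0.1, 0.1])  # bottom-up signal
relevance  = np.array([0.1, 0.1, 0.2, 0.8, 0.1])  # top-down signal
inhibition = np.zeros(5)                          # nothing suppressed yet

print(priority_map(salience, relevance, inhibition))

Under this toy weighting, the salient item and the goal-relevant item receive the highest attention weights, mirroring the bottom-up/top-down integration the abstract describes.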


Cite this paper

https://doi.org/10.1037/rev0000616

Or copy a formatted citation

@article{mirpour2026,
  title        = {{A priority map is all you need: Exploring the roots of neural mechanisms underlying transformer-based large language models}},
  author       = {Mirpour, Koorosh and others},
  journal      = {Psychological Review},
  year         = {2026},
  doi          = {10.1037/rev0000616},
}

Paste directly into BibTeX, Zotero, or your reference manager.


Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact     0.50 × 0.40 = 0.20
M · momentum            0.50 × 0.15 = 0.07
V · venue signal        0.50 × 0.05 = 0.03
R · text relevance †    0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
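For concreteness, the breakdown above is a weighted sum of the four component scores. The sketch below is a hypothetical reconstruction, not a documented Arbiter formula; the component names and the assumption that displayed contributions are shown to two decimals are inferred from the table.

# Hypothetical reconstruction of the evidence-weight arithmetic shown above.
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # balanced-mode weights
scores  = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}  # this paper's scores

contributions = {k: scores[k] * weights[k] for k in weights}
for k, c in contributions.items():
    print(f"{k}: {scores[k]:.2f} x {weights[k]:.2f} = {c:.3f}")

print(f"total = {sum(contributions.values()):.2f}")  # 0.50, matching the page

Note that the exact contributions for M and V are 0.075 and 0.025; the page displays them to two decimals, which is why the listed terms do not visibly sum to 0.50.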