The credibility transformer

Ronald Richman et al.

European Actuarial Journal · 2025 · article · https://doi.org/10.1007/s13385-025-00413-y
AJG 2 · ABDC B
Weight
0.44

Abstract

Inspired by the great success of Transformers in large language models, these architectures are increasingly applied to tabular data. This is achieved by embedding the tabular data into low-dimensional Euclidean spaces, resulting in structures similar to time-series data. We introduce a novel credibility mechanism into this Transformer architecture. The credibility mechanism is based on a special token that should be seen as an encoder consisting of a credibility-weighted average of prior information and observation-based information. We demonstrate that this novel credibility mechanism is very beneficial for stabilizing training, and that our Credibility Transformer leads to predictive models that are superior to state-of-the-art deep learning models.
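The credibility-weighted average the abstract refers to follows the classical actuarial credibility form, Z · observation + (1 − Z) · prior. A minimal illustrative sketch (the function name and figures are ours, not taken from the paper, where this weighting is applied to learned token encodings rather than scalar values):

```python
def credibility_average(prior: float, observed: float, z: float) -> float:
    """Classical credibility-weighted average: z * observed + (1 - z) * prior.

    z in [0, 1] is the credibility weight: z = 0 trusts only the prior,
    z = 1 trusts only the observation.
    """
    return z * observed + (1.0 - z) * prior

# Illustrative numbers: a prior estimate of 100 and an observation of 140,
# with modest credibility z = 0.3 given to the observation.
print(round(credibility_average(prior=100.0, observed=140.0, z=0.3), 2))  # → 112.0
```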

3 citations


Cite this paper

https://doi.org/10.1007/s13385-025-00413-y

Or copy a formatted citation

@article{richman2025credibility,
  title   = {{The credibility transformer}},
  author  = {Richman, Ronald and others},
  journal = {European Actuarial Journal},
  year    = {2025},
  doi     = {10.1007/s13385-025-00413-y},
}




Evidence weight

0.44

Balanced mode · weights: F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.32 × 0.40 = 0.13
M · momentum: 0.57 × 0.15 = 0.09
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
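The evidence weight above is a weighted sum of the four component scores under the listed mode weights. A minimal sketch of that arithmetic (the function name is ours; the scores and weights are the ones shown on this page):

```python
def evidence_weight(scores: dict, weights: dict) -> float:
    """Weighted sum of component scores, one term per component key."""
    return sum(scores[k] * weights[k] for k in weights)

# Balanced-mode weights and this paper's component scores, as listed above.
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}
scores = {"F": 0.32, "M": 0.57, "V": 0.50, "R": 0.50}

print(round(evidence_weight(scores, weights), 2))  # → 0.44
```

The rounded per-component products (0.13 + 0.09 + 0.03 + 0.20) sum to 0.45; the displayed 0.44 comes from rounding the full sum (0.4385) once at the end.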