On the structural dimension of sliced inverse regression

Dongming Huang et al.

Annals of Statistics · 2026 · article
https://doi.org/10.1214/25-aos2505

AJG 4* · ABDC A*

Weight: 0.50

Abstract

In this work, we address the longstanding puzzle that Sliced Inverse Regression (SIR) often performs poorly for sufficient dimension reduction when the structural dimension d (the dimension of the central space) exceeds 4. We first show that in the multiple index model Y = f(PX) + ϵ, where X is a p-dimensional standard normal vector, ϵ is an independent noise, and P is a projection operator from R^p to R^d, if the link function f follows the law of a Gaussian process, then with high probability the d-th eigenvalue λ_d of Cov[E(X|Y)] satisfies λ_d ≤ C·e^(−θd) for some positive constants C and θ. We then focus on the low signal regime where λ_d can be arbitrarily small and no larger than d^(−8.1), and prove that the minimax risk of estimating the central space is lower bounded by dp/(nλ_d). Combining these two results, we provide a convincing explanation for the poor performance of SIR when d is large, a phenomenon that has perplexed researchers for nearly three decades. The technical tools developed here may be of independent interest for studying other sufficient dimension reduction methods.
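To see why these two bounds together explain SIR's failure for large d, a small numerical sketch helps: the eigenvalue bound C·e^(−θd) decays exponentially in d, so the minimax lower bound dp/(nλ_d) blows up even for moderate d. The constants C, θ, p, and n below are illustrative choices, not values from the paper.

```python
import math

# Illustrative constants -- C, theta, p, n are NOT taken from the paper;
# they are chosen only to visualize the scaling of the two bounds.
C, theta = 1.0, 1.0
p, n = 100, 10_000

for d in range(1, 9):
    lam_d = C * math.exp(-theta * d)   # abstract's upper bound on the d-th eigenvalue
    risk_lb = d * p / (n * lam_d)      # abstract's minimax lower bound dp / (n * lambda_d)
    print(f"d={d}: lambda_d <= {lam_d:.4f}, minimax risk >= {risk_lb:.3f}")
```

Under these (arbitrary) constants, the risk lower bound already exceeds 1 around d = 4, matching the empirical threshold the abstract highlights.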


Cite this paper

https://doi.org/10.1214/25-aos2505

Or copy a formatted citation

@article{huang2026structural,
  title        = {{On the structural dimension of sliced inverse regression}},
  author       = {Huang, Dongming and others},
  journal      = {Annals of Statistics},
  year         = {2026},
  doi          = {10.1214/25-aos2505},
}

Paste directly into BibTeX, Zotero, or your reference manager.



Evidence weight

0.50

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact    0.50 × 0.40 = 0.200
M · momentum           0.50 × 0.15 = 0.075
V · venue signal       0.50 × 0.05 = 0.025
R · text relevance †   0.50 × 0.40 = 0.200

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
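The evidence weight above is a weighted average of the four component scores under the Balanced-mode weights. A minimal sketch of that arithmetic, assuming this simple linear combination is how the total is formed (the dictionary names are illustrative, not Arbiter's actual code):

```python
# Balanced-mode weights and the per-component scores from the breakdown above.
weights = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}
scores  = {"F": 0.50, "M": 0.50, "V": 0.50, "R": 0.50}

# Each contribution is score x weight; the evidence weight is their sum.
contributions = {k: scores[k] * weights[k] for k in weights}
total = sum(contributions.values())

print(contributions)        # per-component contributions, approx. 0.20 / 0.075 / 0.025 / 0.20
print(round(total, 2))      # total evidence weight, approx. 0.50
```

Note that because every component score here is 0.50 and the mode weights sum to 1, the total is exactly half the weight budget.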