On the structural dimension of sliced inverse regression
Dongming Huang et al.
Abstract
In this work, we address the longstanding puzzle that Sliced Inverse Regression (SIR) often performs poorly for sufficient dimension reduction when the structural dimension d (the dimension of the central space) exceeds 4. We first show that, in the multiple index model Y = f(PX) + ϵ, where X is a p-dimensional standard normal vector, ϵ is an independent noise, and P is a projection operator from R^p to R^d, if the link function f follows the law of a Gaussian process, then with high probability the d-th eigenvalue λ_d of Cov[E(X|Y)] satisfies λ_d ≤ C e^{-θd} for some positive constants C and θ. We then focus on the low-signal regime, where λ_d can be arbitrarily small and no larger than d^{-8.1}, and prove that the minimax risk of estimating the central space is lower bounded by dp/(nλ_d). Combining these two results, we provide a convincing explanation for the poor performance of SIR when d is large, a phenomenon that has perplexed researchers for nearly three decades. The technical tools developed here may be of independent interest for studying other sufficient dimension reduction methods.
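To make the object of study concrete, the following is a minimal sketch (not the paper's code) of the classical SIR estimator of Cov[E(X|Y)]: slice the sorted responses, average X within each slice, and form the weighted covariance of the slice means. The toy link function, dimensions, and slice count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multiple index model Y = f(PX) + eps with p = 20, d = 2
# (the link f, dimensions, and noise level are illustrative choices).
p, d, n = 20, 2, 5000
P = np.zeros((d, p))
P[0, 0] = P[1, 1] = 1.0                 # projection onto the first d coordinates
X = rng.standard_normal((n, p))          # p-dimensional standard normal design
Z = X @ P.T
Y = np.sin(Z[:, 0]) + Z[:, 1] ** 3 + 0.1 * rng.standard_normal(n)

def sir_eigvals(X, Y, n_slices=10):
    """Estimate Cov[E(X|Y)] by slicing Y; return its eigenvalues, descending."""
    order = np.argsort(Y)
    slices = np.array_split(order, n_slices)     # slices of Y of (nearly) equal size
    Xc = X - X.mean(axis=0)                      # center X
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in slices:
        m = Xc[idx].mean(axis=0)                 # slice mean of X
        M += (len(idx) / len(Y)) * np.outer(m, m)  # weighted outer product
    return np.sort(np.linalg.eigvalsh(M))[::-1]

lams = sir_eigvals(X, Y)
# With d = 2 active directions, only the top two eigenvalues should be
# substantially above zero; the remaining p - d are near zero.
```

In the paper's setting, the point is that λ_d itself decays exponentially in d, so the d-th "substantial" eigenvalue in such an estimate becomes indistinguishable from the noise eigenvalues once d is moderately large.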