Code and Data Repository for An Efficient Jackknife Model Averaging Method
Ze Chen et al.
Abstract
Model averaging, an effective method for reducing uncertainty in data modeling and learning, has been widely applied across various fields, including economics, finance, and operations research. The leave-one-out cross-validation (LOOCV) criterion is a widely used technique for choosing weights in model averaging. Although it has achieved great success, its application is limited by high computational complexity, especially when the sample size is large. To address this issue, focusing on generalized linear models (GLMs), we propose a computationally efficient, approximate leave-one-out cross-validation (ALOOCV) weight choice criterion that no longer requires repeatedly solving optimization problems. Our theoretical analysis quantifies the discrepancy between the weight estimators produced by LOOCV and ALOOCV, which is proved to vanish asymptotically under certain conditions. Moreover, we propose an efficient jackknife model averaging (EJMA) procedure with ALOOCV for GLMs, and derive its asymptotic optimality under model misspecification together with the convergence rate of the weights selected by ALOOCV. If at least one candidate model is correctly specified, we further establish the over-consistency of the weight estimators and the consistency of the averaged coefficient estimators. Additionally, to reduce the computational burden caused by an excessive number of candidate models, we provide a model screening procedure. Numerical experiments show the remarkable computational efficiency of EJMA relative to the model averaging procedure with LOOCV, as well as the excellent prediction performance of our method compared with several common model averaging and selection procedures.
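As background for the LOOCV weight choice criterion the abstract refers to, the classical jackknife model averaging idea can be sketched for linear candidate models, where leave-one-out residuals have a closed form through the hat matrix, so no refitting is needed. This is an illustrative sketch of the generic criterion only, not the authors' EJMA/ALOOCV procedure for GLMs; the simulated data, the two nested candidate models, and the one-dimensional grid search over the weight simplex are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
# True coefficients use only the first two regressors.
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

def loo_residuals(Xm, y):
    """Closed-form leave-one-out residuals for OLS: e_i / (1 - h_ii)."""
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)  # hat matrix
    e = y - H @ y
    return e / (1.0 - np.diag(H))

# Two nested candidate models: the first 2 regressors vs. all 4.
E = np.column_stack([loo_residuals(X[:, :2], y), loo_residuals(X, y)])

# Jackknife (LOOCV) criterion: CV(w) = ||E w||^2 / n, minimized over the
# weight simplex. With two models the simplex is one-dimensional, so a
# grid search over w in [0, 1] suffices for illustration.
grid = np.linspace(0.0, 1.0, 1001)
cv = np.array([np.mean((E @ np.array([w, 1.0 - w])) ** 2) for w in grid])
w1 = grid[int(np.argmin(cv))]
print("weights:", (w1, 1.0 - w1), "CV value:", cv.min())
```

With many candidate models, this minimization becomes a quadratic program over the simplex; the computational burden the abstract highlights comes from recomputing leave-one-out quantities when no closed form is available, as in general GLMs.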