Non-asymptotic analysis of online noisy stochastic gradient descent
Riddhiman Bhattacharya & Tiefeng Jiang
Abstract
Past research has indicated that the covariance of the noise in stochastic gradient descent (SGD) with minibatching plays a critical role in determining its regularization properties and its escape from low-potential points. Motivated by recent research in this area, we prove universality results showing that noise classes with the same mean and covariance structure as minibatch SGD share similar properties. We mainly consider the SGD algorithm with multiplicative noise introduced in previous work (Wu et al., 2016, Int. Conf. on Machine Learning, PMLR, pp. 10367–10376), which admits a much more general noise class than minibatch SGD. We establish non-asymptotic bounds for the multiplicative SGD algorithm in the Wasserstein distance. We also show that the error term of the algorithm is approximately a scaled Gaussian distribution with mean 0 at any fixed point.
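As an illustration of the setting (a minimal sketch with assumed notation; the symbols $\eta$, $f$, $\xi_k$, and $\Sigma$ below are not taken from the abstract), both noise models can be viewed as a perturbed gradient step
\[
x_{k+1} \;=\; x_k \;-\; \eta\,\nabla f(x_k) \;+\; \eta\,\xi_k(x_k),
\qquad
\mathbb{E}\,\xi_k(x) = 0,
\qquad
\operatorname{Cov}\big(\xi_k(x)\big) = \Sigma(x),
\]
where for minibatch SGD the perturbation $\xi_k(x)$ is the gap between the minibatch gradient and the full gradient, while a more general noise class keeps the same mean-zero, state-dependent covariance structure $\Sigma(x)$ but allows a broader family of noise distributions.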