Denoising Diffusion Probabilistic Models Are Optimally Adaptive to Unknown Low Dimensionality
Zhihan Huang et al.
Abstract
The denoising diffusion probabilistic model (DDPM) has emerged as a mainstream generative model in generative artificial intelligence. Although sharp convergence guarantees have been established for the DDPM, the iteration complexity is, in general, proportional to the ambient data dimension, resulting in overly conservative theory that fails to explain its practical efficiency. This has motivated recent work investigating how the DDPM can achieve sampling speed-ups by automatically exploiting the intrinsic low dimensionality of data. We strengthen this line of work by demonstrating, in some sense, optimal adaptivity to unknown low dimensionality. For a broad class of data distributions, we prove that the iteration complexity of the DDPM scales nearly linearly with its intrinsic dimension, which is optimal when using the Kullback-Leibler divergence to measure distributional discrepancy.
Funding: Y. Wei is supported in part by the National Science Foundation (NSF) [Grant CCF-2418156 and CAREER Award DMS-2143215]. Y. Chen is supported in part by the Alfred P. Sloan Research Fellowship, the Office of Naval Research [Grants N00014-22-1-2354 and N00014-25-1-2344], the NSF [Grants 2221009 and 2218773], the Wharton AI & Analytics Initiative [AI Research Fund], and the Amazon Research Award.
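To make concrete what an "iteration" means in these complexity bounds, below is a minimal sketch of the standard DDPM reverse (ancestral) sampler; the noise predictor eps_model, the schedule betas, and all variable names are illustrative assumptions rather than this paper's notation. Each pass through the loop is one iteration, so the abstract's claim is that the number of loop passes needed for accurate sampling scales nearly linearly with the data's intrinsic dimension rather than the ambient dimension dim.

```python
import numpy as np

def ddpm_sample(eps_model, betas, dim, rng=None):
    """Minimal sketch of the DDPM reverse (ancestral) sampler.

    eps_model(x, t): assumed noise predictor (a trained network in practice,
    approximating the score up to rescaling).
    betas: length-T forward noise schedule; T = len(betas) is the iteration
    count whose scaling the paper's bounds characterize.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(betas)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(dim)        # initialize from pure Gaussian noise
    for t in range(T - 1, -1, -1):      # one pass = one DDPM iteration
        eps_hat = eps_model(x, t)       # predicted noise at step t
        # Standard DDPM update for the reverse-transition mean
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(dim) if t > 0 else np.zeros(dim)
        x = mean + np.sqrt(betas[t]) * noise  # fresh noise except at the last step
    return x

# Toy usage: an (untrained) predictor returning zeros, just to show the
# interface; dim plays the role of the ambient dimension.
if __name__ == "__main__":
    betas = np.linspace(1e-4, 0.02, 1000)  # a common linear schedule
    sample = ddpm_sample(lambda x, t: np.zeros_like(x), betas, dim=64)
    print(sample.shape)  # (64,)
```

This sketch fixes the per-iteration update; the theoretical question the paper addresses is only how large T must be, and the result is that T can track the intrinsic dimension even though each iterate x lives in the ambient space.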