References

[1] T. Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, NeurIPS 2022

[2] D. P. Kingma and R. Gao, Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation, NeurIPS 2023

[3] C. Williams et al., A Unified Framework for U-Net Design and Analysis, NeurIPS 2023

[4] Y. Li et al., SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds, NeurIPS 2023

[5] R. Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, CVPR 2022

[6] K. Clark and P. Jaini, Text-to-Image Diffusion Models are Zero-Shot Classifiers, NeurIPS 2023

[7] J. Gao et al., Can Pre-trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning?, NeurIPS 2023

[8] Y. Kasten et al., Point Cloud Completion with Pretrained Text-to-Image Diffusion Models, NeurIPS 2023

[9] S. Zhao et al., Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models, NeurIPS 2023

[10] M. Janner et al., Planning with Diffusion for Flexible Behavior Synthesis, ICML 2022

[11] A. Ajay et al., Is Conditional Generative Modeling All You Need for Decision-Making?, ICLR 2023

[12] Z. Wang et al., Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning, ICLR 2023

[13] H. Chen et al., Score Regularized Policy Optimization through Diffusion Behavior, ICLR 2024