*Face images generated with a Variational Autoencoder (source: Wojciech Mormul on Github).*

In a previous post, published in January of this year, we discussed in depth Generative Adversarial Networks (GANs) and showed, in particular, how adversarial training can oppose two networks, a generator and a discriminator, to push both of them to improve iteration after iteration.

We now introduce, in this post, the other major kind of deep generative model: Variational Autoencoders (VAEs). In a nutshell, a VAE is an autoencoder whose encoding distribution is regularised during training to ensure that its latent space has good properties that allow us to generate new data. Moreover, the term "variational" comes from the close relation between this regularisation and the variational inference method in statistics.

If these last two sentences summarise the notion of VAEs pretty well, they can also raise a lot of questions. What is an autoencoder? What is the latent space, and why regularise it? How do we generate new data from VAEs? What is the link between VAEs and variational inference? To describe VAEs as well as possible, we will try to answer all these questions (and many others!) and to provide the reader with as many insights as we can, ranging from basic intuitions to more advanced mathematical details. Thus, the purpose of this post is not only to discuss the fundamental notions VAEs rely on but also to build, step by step and starting from the very beginning, the reasoning that leads to these notions.

Outline

In the first section, we will review some important notions about dimensionality reduction and autoencoders that will be useful for understanding VAEs. Then, in the second section, we will show why autoencoders cannot be used to generate new data, and we will introduce Variational Autoencoders: regularised versions of autoencoders that make the generative process possible. Finally, in the last section, we will give a more mathematical presentation of VAEs, based on variational inference. Without further ado, let's (re)discover VAEs together!
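As a rough numerical preview of the ideas developed later in the post, the sketch below illustrates, with plain numpy, the two ingredients behind the "regularised encoding distribution": sampling a latent code from the Gaussian that the encoder would output (the reparameterisation trick), and the KL divergence term that penalises encodings far from a standard normal. The function names and the toy batch are illustrative assumptions, not part of any library API.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); writing the sample
    # this way keeps it differentiable with respect to mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims:
    # 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy batch: encoder outputs for 4 inputs in a 2-dimensional latent space.
mu = np.array([[0.0, 0.0], [1.0, -1.0], [0.5, 0.5], [-2.0, 0.0]])
log_var = np.zeros_like(mu)  # i.e. sigma = 1 for every dimension

z = reparameterize(mu, log_var)
kl = kl_to_standard_normal(mu, log_var)
print(z.shape)  # (4, 2)
print(kl)       # zero only for the row with mu = 0 and sigma = 1
```

In a full VAE, this KL term is added to the reconstruction loss of the decoder; it is precisely what "regularises" the latent space so that sampling from N(0, I) produces meaningful new data.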