Week #8 - Autoencoders

In today's lecture, we'll explore autoencoders - a specialized neural network architecture that learns to compress data into a lower-dimensional representation and then reconstruct it. We'll examine how these self-supervised models work, their various architectures, and their practical applications in dimensionality reduction, denoising, and generative modeling.
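
To make the compress-then-reconstruct idea concrete, here is a minimal sketch of a linear autoencoder in numpy: an encoder matrix projects 8-dimensional inputs down to a 2-dimensional latent code, a decoder matrix maps the code back, and both are trained by gradient descent on the mean-squared reconstruction error. All dimensions, the synthetic data, and the learning rate are illustrative choices, not part of the course material.

```python
import numpy as np

# Illustrative sketch only: a linear autoencoder trained with plain
# gradient descent on MSE reconstruction loss. Sizes are arbitrary.
rng = np.random.default_rng(0)
n, d_in, d_latent = 200, 8, 2

# Synthetic data that truly lies on a 2-D subspace, so a 2-D
# bottleneck can reconstruct it almost perfectly.
basis = rng.normal(size=(d_latent, d_in))
X = rng.normal(size=(n, d_latent)) @ basis

W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))  # decoder weights
lr = 0.01

for step in range(2000):
    Z = X @ W_enc          # encode: compress input to the latent code
    X_hat = Z @ W_dec      # decode: reconstruct the input from the code
    err = X_hat - X
    loss = np.mean(err ** 2)              # MSE reconstruction loss
    # Gradients of the MSE loss w.r.t. both weight matrices
    grad_dec = Z.T @ err * (2 / err.size)
    grad_enc = X.T @ (err @ W_dec.T) * (2 / err.size)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"final reconstruction MSE: {loss:.4f}")
```

Because the data was generated on a 2-dimensional subspace, the 2-dimensional bottleneck loses almost no information and the reconstruction error drops close to zero; with real data the bottleneck forces the network to keep only the most important structure.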

Learning objectives:

  • Understand the fundamental architecture of autoencoders including encoder, latent space, and decoder components
  • Explore different types of autoencoders (vanilla, denoising, variational, sparse) and their specific use cases
  • Master the mathematics behind latent space representations and reconstruction loss
  • Implement and train basic autoencoders for dimensionality reduction and feature learning
  • Learn techniques for regularizing autoencoders to prevent memorization and encourage useful representations
  • Analyze the quality of learned representations through reconstruction visualization and latent space exploration
  • Apply autoencoders to practical tasks like image denoising, anomaly detection, and data compression
  • Understand the connection between autoencoders and modern generative models like VAEs
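
For the reconstruction-loss objective above, one standard formulation (notation ours, written here for reference) denotes the encoder $g_\phi$ and the decoder $f_\theta$, and minimizes the mean squared error between each input and its reconstruction:

```latex
\hat{x}_i = f_\theta\bigl(g_\phi(x_i)\bigr), \qquad
\mathcal{L}(\theta, \phi) = \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert x_i - \hat{x}_i \bigr\rVert_2^2
```

The latent code $z_i = g_\phi(x_i)$ lives in a lower-dimensional space than $x_i$; this bottleneck is what prevents the network from simply copying its input.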

Laboratory - TBA

Resources - TBA