PyTorch Conditional GAN Tutorial

Building a Conditional GAN Model

In this video, the presenter explains how to build a conditional GAN that generates images conditioned on specific class labels. Both the discriminator and the generator are modified to take label information as an additional input.

Modifying the Discriminator Model

  • Add an embedding layer so that label information can be fed to the discriminator (and, later, the generator).
  • Create an embedding layer with num_classes entries, each of size img_size * img_size.
  • Reshape the embedding output into an additional img_size x img_size channel.
  • Concatenate the input image with this label channel before passing the result through the discriminator.
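The steps above can be sketched as follows. This is an illustrative minimal version, not the video's exact code: the class name, layer sizes, and the two-layer convolutional stack are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    def __init__(self, channels_img, num_classes, img_size):
        super().__init__()
        self.img_size = img_size
        # One learned vector per class, sized so it can be reshaped
        # into a single img_size x img_size channel.
        self.embed = nn.Embedding(num_classes, img_size * img_size)
        self.net = nn.Sequential(
            # +1 input channel for the label embedding
            nn.Conv2d(channels_img + 1, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x, labels):
        # Reshape the label embedding into an extra image channel.
        embedding = self.embed(labels).view(
            labels.shape[0], 1, self.img_size, self.img_size
        )
        x = torch.cat([x, embedding], dim=1)  # concatenate along channels
        return self.net(x)

disc = ConditionalDiscriminator(channels_img=1, num_classes=10, img_size=28)
imgs = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
out = disc(imgs, labels)
print(out.shape)  # torch.Size([8, 1, 7, 7])
```

Because the label enters as a full image channel, the discriminator can learn whether an image matches its claimed class, not just whether it looks real.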

Modifying the Generator Model

  • Add label information to the generator by embedding the label and concatenating it with the noise input.
  • Create an embedding layer with num_classes entries, each of size embed_size.
  • Concatenate the noise vector with the label embedding along the channel dimension before passing it through the generator.
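A minimal sketch of the generator-side change follows. The class name, layer choices, and sizes are illustrative assumptions; the key point is that the label embedding is concatenated with the noise before the first layer.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim, num_classes, embed_size, img_channels):
        super().__init__()
        self.embed = nn.Embedding(num_classes, embed_size)
        self.net = nn.Sequential(
            # Input: (z_dim + embed_size) x 1 x 1
            nn.ConvTranspose2d(z_dim + embed_size, 128, kernel_size=7),   # -> 7x7
            nn.ReLU(),
            nn.ConvTranspose2d(128, img_channels, kernel_size=4, stride=4),  # -> 28x28
            nn.Tanh(),
        )

    def forward(self, noise, labels):
        # noise: N x z_dim x 1 x 1; embedding becomes N x embed_size x 1 x 1
        embedding = self.embed(labels).unsqueeze(2).unsqueeze(3)
        x = torch.cat([noise, embedding], dim=1)
        return self.net(x)

gen = ConditionalGenerator(z_dim=100, num_classes=10, embed_size=100, img_channels=1)
noise = torch.randn(8, 100, 1, 1)
labels = torch.randint(0, 10, (8,))
fake = gen(noise, labels)
print(fake.shape)  # torch.Size([8, 1, 28, 28])
```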

Conclusion

The presenter demonstrates how to modify both the discriminator and generator models in order to create a conditional GAN that generates images based on specific labels. By adding an additional channel (or input dimensions) for label information, the model can be trained on a labeled dataset such as MNIST while generating images that match the desired output.

Implementing Conditional GANs

In this section, the presenter explains how to implement conditional GANs, starting with the modifications to the generator and discriminator models needed to include labels for conditional generation.

Modifying Generator and Discriminator Models

  • To modify the generator model, concatenate the input noise with the label embedding using torch.cat and send the result through the generator.
  • The discriminator model also needs modification so that labels are passed as inputs to its forward method.
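The torch.cat step itself is simple; here is a minimal shape illustration (the sizes are arbitrary):

```python
import torch

noise = torch.randn(8, 100, 1, 1)        # N x z_dim x 1 x 1
label_embed = torch.randn(8, 100, 1, 1)  # N x embed_size x 1 x 1
# Join along the channel dimension (dim=1) to form the generator input.
gen_input = torch.cat([noise, label_embed], dim=1)
print(gen_input.shape)  # torch.Size([8, 200, 1, 1])
```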

Adding Parameters to Training File

  • Add parameters for the number of classes, image size, and generator embedding size to the training file.
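These additions might look like the following. The values follow the MNIST example but are assumptions, not the repository's exact constants:

```python
# Hypothetical hyperparameter additions to the training file.
NUM_CLASSES = 10     # MNIST digits 0-9
IMG_SIZE = 28        # discriminator embedding is reshaped to IMG_SIZE x IMG_SIZE
GEN_EMBEDDING = 100  # size of the label embedding concatenated with the noise
```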

Sending Labels as Inputs

  • Pass labels as inputs to both the generator and discriminator models during training.
  • Remove code related to target labels left over from the previous implementation.
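One training step with labels threaded through might look like this. This is a simplified skeleton using a standard BCE GAN loss for illustration (the video's training loop and loss setup, which also involves a gradient penalty, will differ), and the function signature is an assumption:

```python
import torch
import torch.nn as nn

def train_step(gen, disc, real, labels, z_dim, opt_gen, opt_disc, device="cpu"):
    criterion = nn.BCEWithLogitsLoss()
    batch_size = real.shape[0]
    noise = torch.randn(batch_size, z_dim, 1, 1, device=device)

    # Discriminator step: real and fake images both get the same labels.
    fake = gen(noise, labels)
    disc_real = disc(real, labels).reshape(-1)
    disc_fake = disc(fake.detach(), labels).reshape(-1)
    loss_disc = (criterion(disc_real, torch.ones_like(disc_real))
                 + criterion(disc_fake, torch.zeros_like(disc_fake))) / 2
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()

    # Generator step: make the discriminator classify fakes (with
    # their labels) as real.
    output = disc(fake, labels).reshape(-1)
    loss_gen = criterion(output, torch.ones_like(output))
    opt_gen.zero_grad()
    loss_gen.backward()
    opt_gen.step()
    return loss_disc.item(), loss_gen.item()
```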

Testing Implementation

  • After training for a few epochs, the generated images can be inspected.
  • The conditional GAN implementation allows generating specific digits by choosing the input label.
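At inference time, choosing the digit comes down to passing the desired label alongside fresh noise. A hypothetical helper (the name, z_dim default, and generator interface are assumptions) could look like:

```python
import torch

def generate_digit(gen, digit, num_samples=16, z_dim=100, device="cpu"):
    """Sample `num_samples` images of the requested digit from a trained
    conditional generator that takes (noise, labels)."""
    gen.eval()
    with torch.no_grad():
        noise = torch.randn(num_samples, z_dim, 1, 1, device=device)
        labels = torch.full((num_samples,), digit, dtype=torch.long, device=device)
        return gen(noise, labels)  # N x C x H x W images of that digit
```

For example, `generate_digit(gen, 7)` would return a batch of images that should all depict the digit 7, assuming training has converged.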
Video description

In this video we take a look at a way of also deciding what the output from the GAN should be. Specifically, the output is conditioned on the labels that we send in, and as an example we take a look at training on MNIST (of course) ;) But these ideas extend to any dataset you're working with really!

❤️ Support the channel ❤️
https://www.youtube.com/channel/UCkzW5JSFwvKRjXABI-UTAkQ/join

Paid Courses I recommend for learning (affiliate links, no extra cost for you):
⭐ Machine Learning Specialization https://bit.ly/3hjTBBt
⭐ Deep Learning Specialization https://bit.ly/3YcUkoI
📘 MLOps Specialization http://bit.ly/3wibaWy
📘 GAN Specialization https://bit.ly/3FmnZDl
📘 NLP Specialization http://bit.ly/3GXoQuP

✨ Free Resources that are great:
NLP: https://web.stanford.edu/class/cs224n/
CV: http://cs231n.stanford.edu/
Deployment: https://fullstackdeeplearning.com/
FastAI: https://www.fast.ai/

💻 My Deep Learning Setup and Recording Setup:
https://www.amazon.com/shop/aladdinpersson

GitHub Repository:
https://github.com/aladdinpersson/Machine-Learning-Collection

✅ One-Time Donations:
Paypal: https://bit.ly/3buoRYH

▶️ You Can Connect with me on:
Twitter - https://twitter.com/aladdinpersson
LinkedIn - https://www.linkedin.com/in/aladdin-persson-a95384153/
Github - https://github.com/aladdinpersson

OUTLINE:
0:00 - Introduction
0:56 - Modifying Generator and Discriminator
6:58 - Modifying Gradient Penalty
7:35 - Modifying Training
10:43 - Evaluation & Ending