Talk: Gradient-based optimization for Deep Learning

This weekend I gave a talk at the Machine Learning Porto Alegre Meetup about optimization methods for Deep Learning. In this material you will find an overview of first-order methods, second-order methods and approximations of second-order methods, as well as natural gradient descent and approximations to it. I spent many long nights preparing this material, so I hope you like it! You can download the PDF of the slides by clicking on the top-right menu.
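As a small taste of the first-order methods covered in the slides, here is a minimal sketch of vanilla gradient descent on a toy quadratic objective. This illustrative snippet is not taken from the slides; the matrix, vector, step size, and iteration count are all arbitrary choices for the example.

```python
import numpy as np

# Toy quadratic objective f(x) = 0.5 * x^T A x - b^T x,
# whose gradient is A x - b and whose minimizer solves A x = b.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)   # starting point
lr = 0.1          # step size (learning rate)
for _ in range(500):
    x = x - lr * grad(x)  # first-order update: step against the gradient

# Compare against the exact minimizer A^{-1} b
print(np.allclose(x, np.linalg.solve(A, b), atol=1e-6))
```

A second-order method such as Newton's method would instead scale the step by the inverse Hessian (here simply `A`), reaching the minimizer of this quadratic in a single step.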

– Christian S. Perone

6 thoughts to “Talk: Gradient-based optimization for Deep Learning”

  1. Dear Christian,
    Thank you for sharing this material by posting it here! It indeed is worth all the long nights you have put in. It was very educational.

    By any chance would you have a recorded version of your talk that might be available for viewing? If yes, would you be willing to share it with me?

    Thanks!

      1. Hi Christian, absolutely lovely slides. Would appreciate if there was an English version available.
        But seriously this is the most intuitive material out there on chapter 4 of Deep Learning book.
        Thanks so much!!

  2. Hello,
    Amazing work! The presentation was awesome as well. Do you mind sharing the name or source of that beamer template? Any help is greatly appreciated.

Thanks for your attention 🙂

  3. Thank you very much for this material! Although it is already a great resource, please let us know when you have the English version.
