Talk: Gradient-based optimization for Deep Learning
This weekend I gave a talk at the Machine Learning Porto Alegre Meetup about optimization methods for Deep Learning. In this material you will find an overview of first-order methods, second-order methods, and some approximations of second-order methods, as well as the natural gradient descent and approximations to it. I spent some long nights preparing this material, so I hope you like it! You can download the PDF of the slides by clicking on the top-right menu.
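As a small taste of the first-order methods the slides cover, here is a minimal gradient descent sketch (this example is mine, not taken from the talk): it minimizes a simple quadratic by repeatedly stepping against the gradient.

```python
# Minimal gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The gradient is f'(x) = 2 * (x - 3); stepping against it moves x toward 3.

def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0    # starting point (arbitrary)
lr = 0.1   # learning rate (step size), a hyperparameter
for _ in range(100):
    x -= lr * grad(x)

print(x)  # converges very close to 3.0
```

Each update here is `x = x - lr * grad(x)`; the second-order and natural-gradient methods in the slides refine exactly this step by rescaling it with curvature information.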
– Christian S. Perone
Dear Christian,
Thank you for sharing this material by posting it here! It indeed is worth all the long nights you have put in. It was very educational.
By any chance would you have a recorded version of your talk that might be available for viewing? If yes, would you be willing to share it with me?
Thanks!
Hello! It was recorded only in Portuguese (https://www.youtube.com/watch?v=n–7yFNFvhs), but I’m planning to do an English session soon.
Hi Christian, absolutely lovely slides. Would appreciate it if there were an English version available.
But seriously, this is the most intuitive material out there on Chapter 4 of the Deep Learning book.
Thanks so much!!
Thanks a lot Christian. Looking forward to it.
Thanks!
Hello,
Amazing work! The presentation was awesome as well. Do you mind sharing the name or source of that Beamer template? Any help is greatly appreciated.
Thanks for your attention 🙂
Thank you very much for this material! It is already a great resource, but please let us know when you have the English version.