Deep Learning
Instructor: Hamid Beigy | Certificate: Official (bilingual) |
Term: Summer 2025 | Prerequisite: Mathematics for AI & Data Science |
Schedule: Monday 17:00-20:00 | Format: Online |
General Objective
This course covers deep learning, a highly influential area of machine learning that has achieved remarkable performance in numerous applications. The course begins with fundamental concepts including multilayer neural networks, their modeling power, and training methods. It then introduces major architectures like CNNs and RNNs, along with advances in network design, optimization, generalization improvement, and training techniques. Generative models will be examined as an important branch. The course also covers notable deep networks developed in recent years, with emphasis on applications in computer vision and natural language processing.
Topics
- Introduction to Artificial Neural Networks
  - Multi-layer Perceptron (MLP)
  - The MLP as a universal approximator
  - Error backpropagation algorithm
- Optimization in Deep Networks
  - Overview of convex optimization
  - Optimization methods: SGD, Momentum, RMSProp, Adam, etc.
- Deep Network Training, Design, and Generalization Techniques
  - Generalization improvement techniques: regularization, dropout, data augmentation
  - Batch normalization
  - Activation functions, weight initialization, input normalization, etc.
- Convolutional Neural Networks (CNNs)
  - Convolution and pooling layers
  - Popular CNN architectures
  - CNN applications
- Recurrent Neural Networks (RNNs)
  - Sequence modeling
  - Long Short-Term Memory (LSTM) networks
  - Attention networks
  - Language modeling using RNNs
  - Other RNN applications in NLP and other domains
- Transformer Architecture
- Sum-Product Networks
- Generative Models
  - Autoregressive models
  - Variational Autoencoders (VAEs)
  - Generative Adversarial Networks (GANs)
  - Flow-based models
- Deep Reinforcement Learning
  - Deep Q-Learning
  - Policy gradient methods
  - Actor-critic methods
- Adversarial Examples and Network Robustness
- Advanced Topics
  - Dual Networks and Dual Learning
  - Graph Convolutional Networks
  - Self-supervised Learning
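Several of the opening topics above, namely the MLP, error backpropagation, and SGD, can be illustrated together in a few lines of NumPy. The sketch below is not course material: the XOR task, layer sizes, learning rate, and step count are illustrative assumptions chosen so the example runs in a second.

```python
import numpy as np

# A minimal 2-layer MLP trained with backpropagation and full-batch SGD on XOR.
# Hidden width (8), learning rate (0.5), and step count are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # output probabilities
    # Backward pass: with sigmoid output + binary cross-entropy,
    # the gradient w.r.t. the output logits simplifies to (p - y)
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Plain SGD update (here on the full batch)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

The same loop generalizes directly to the optimizers listed above: Momentum, RMSProp, and Adam only change the final update step, not the forward or backward pass.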
Assessment
- Assignments: 30%
- Midterm: 20%
- Final Exam: 30%
- Quizzes: 10%
- Project or Research Work: 10%
References
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016.
- Michael Nielsen, Neural Networks and Deep Learning, Determination Press, 2015.