How Neural Networks and Backpropagation Work
  • BEFORE STARTING...PLEASE READ THIS
  • What Can Deep Learning Do?
  • The Rise of Deep Learning
  • The Essence of Neural Networks
  • The Perceptron
  • Gradient Descent
  • Forward Propagation
  • Before Proceeding with Backpropagation
  • Backpropagation Part 1
  • Backpropagation Part 2
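
A minimal sketch of the mechanics this section builds up to: forward propagation, backpropagation via the chain rule, and a gradient descent update, on toy data. The data, layer sizes, and learning rate here are illustrative assumptions, not the course's example:

```python
# Toy one-hidden-layer network trained with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                        # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy labels

W1 = rng.normal(scale=0.1, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # forward propagation
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # binary cross-entropy loss
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # backpropagation (chain rule, from the output back to W1)
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)          # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```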
Loss Functions
  • Mean Squared Error (MSE)
  • L1 Loss (MAE)
  • Huber Loss
  • Binary Cross Entropy Loss
  • Cross Entropy Loss
  • Softmax Function
  • KL Divergence Loss
  • Contrastive Loss
  • Hinge Loss
  • Triplet Ranking Loss
  • Practical Loss Functions Note
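
As a quick reference, most of the losses above have direct PyTorch equivalents; the tensors below are toy values, and contrastive loss has no built-in, so it is omitted (it is typically written by hand):

```python
import torch
import torch.nn.functional as F

pred = torch.tensor([0.2, 0.7, 1.5])
target = torch.tensor([0.0, 1.0, 1.0])

mse = F.mse_loss(pred, target)             # Mean Squared Error
mae = F.l1_loss(pred, target)              # L1 / Mean Absolute Error
huber = F.smooth_l1_loss(pred, target)     # Huber-style loss
bce = F.binary_cross_entropy(torch.sigmoid(pred), target)

logits = torch.tensor([[1.2, -0.3, 0.5]])  # raw scores for 3 classes
label = torch.tensor([0])
ce = F.cross_entropy(logits, label)        # softmax + NLL in one call
probs = F.softmax(logits, dim=1)           # softmax on its own
kl = F.kl_div(probs.log(), torch.tensor([[0.8, 0.1, 0.1]]),
              reduction="batchmean")       # KL divergence

hinge = F.hinge_embedding_loss(pred, torch.tensor([1., -1., 1.]))
a, p, n = torch.randn(4, 16), torch.randn(4, 16), torch.randn(4, 16)
triplet = F.triplet_margin_loss(a, p, n, margin=1.0)  # triplet ranking
```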
Activation Functions
  • Why We Need Activation Functions
  • Sigmoid Activation
  • Tanh Activation
  • ReLU and PReLU
  • Exponential Linear Units (ELU)
  • Gated Linear Units (GLU)
  • Swish Activation
  • Mish Activation
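
A sketch of the listed activations on a toy tensor; Swish and Mish are written out explicitly since their formulas are short (torch also ships F.silu for Swish):

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)

sig = torch.sigmoid(x)
tanh = torch.tanh(x)
relu = F.relu(x)
prelu = F.prelu(x, torch.tensor([0.25]))  # slope is learnable in practice
elu = F.elu(x)
glu = F.glu(torch.randn(4, 8), dim=1)     # splits into (a, b), returns a * sigmoid(b)
swish = x * torch.sigmoid(x)              # Swish / SiLU
mish = x * torch.tanh(F.softplus(x))      # Mish = x * tanh(softplus(x))
```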
Optimization
  • Batch Gradient Descent
  • Stochastic Gradient Descent
  • Mini-Batch Gradient Descent
  • Exponentially Weighted Average Intuition
  • Exponentially Weighted Average Implementation
  • Bias Correction in Exponentially Weighted Averages
  • Momentum
  • RMSProp
  • Adam Optimization
  • SWATS - Switching from Adam to SGD
  • Weight Decay
  • Decoupling Weight Decay
  • AMSGrad
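
The update rules in this section are compact enough to write out. Below is a NumPy sketch of an exponentially weighted average with bias correction and a bare-bones Adam step with decoupled (AdamW-style) weight decay; it mirrors the published update rules, not any particular library's internals:

```python
import numpy as np

def ewma(values, beta=0.9, bias_correction=True):
    # v_t = beta * v_{t-1} + (1 - beta) * x_t, corrected by 1 - beta^t
    v, out = 0.0, []
    for t, x in enumerate(values, start=1):
        v = beta * v + (1 - beta) * x
        out.append(v / (1 - beta ** t) if bias_correction else v)
    return out

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              weight_decay=0.0):
    w = w - lr * weight_decay * w       # decoupled weight decay
    m = b1 * m + (1 - b1) * grad        # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (RMSProp-style)
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```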
Hyperparameter Tuning and Learning Rate Scheduling
  • Introduction to Hyperparameter Tuning and Learning Rate Recap
  • Step Learning Rate Decay
  • Cyclic Learning Rate
  • Cosine Annealing with Warm Restarts
  • Batch Size vs Learning Rate
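
PyTorch ships schedulers for the decay policies above; a minimal usage sketch, where the model, step sizes, and learning-rate values are placeholders:

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# pick one in practice; the other two are shown for comparison
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)
# torch.optim.lr_scheduler.CyclicLR(opt, base_lr=1e-4, max_lr=0.1)
# torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)

for epoch in range(100):
    opt.step()      # (gradient computation would happen before this)
    sched.step()    # advance the learning-rate schedule
```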
Weight Initialization
  • Normal Distribution
  • What Happens When All Weights Are Initialized to the Same Value?
  • Xavier Initialization
  • He Normal Initialization
  • Practical Weight Initialization Note
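
The schemes above map onto torch.nn.init one-liners; a sketch on a placeholder layer:

```python
import torch
import torch.nn as nn

layer = nn.Linear(256, 128)

nn.init.normal_(layer.weight, mean=0.0, std=0.01)           # plain normal
nn.init.xavier_uniform_(layer.weight)                       # Xavier/Glorot
nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")  # He normal
nn.init.zeros_(layer.bias)

# Initializing every weight to the same constant makes the units identical,
# so their gradients match and they never differentiate:
# nn.init.constant_(layer.weight, 0.5)   # demonstrates the symmetry problem
```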
Regularization and Normalization
  • Overfitting
  • L1 and L2 Regularization
  • Dropout
  • DropBlock in CNNs
  • DropConnect
  • Normalization
  • Batch Normalization
  • Layer Normalization
  • Group Normalization
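
A sketch of how the standard pieces appear in a PyTorch model; DropBlock and DropConnect have no built-in torch modules and are usually implemented by hand, so only the built-ins are shown (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(64, 128),
    nn.BatchNorm1d(128),   # normalizes over the batch dimension
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zeroes activations during training
    nn.Linear(128, 10),
)

# LayerNorm / GroupNorm are drop-in alternatives:
ln = nn.LayerNorm(128)
gn = nn.GroupNorm(num_groups=8, num_channels=128)  # for conv feature maps

# L2 regularization is usually applied via the optimizer's weight_decay:
opt = torch.optim.SGD(block.parameters(), lr=0.01, weight_decay=1e-4)
```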
Introduction to PyTorch
  • CODE FOR THIS COURSE
  • Computation Graphs and Deep Learning Frameworks
  • Installing PyTorch and an Introduction
  • How PyTorch Works
  • Torch Tensors - Part 1
  • Torch Tensors - Part 2
  • NumPy Bridge, Tensor Concatenation and Adding Dimensions
  • Automatic Differentiation
  • Loss Functions in PyTorch
  • Weight Initialization in PyTorch
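
A few of the tensor and autograd operations covered above, as a runnable sketch on toy values:

```python
import numpy as np
import torch

t = torch.tensor([[1., 2.], [3., 4.]])
cat = torch.cat([t, t], dim=0)           # concatenation along rows
unsq = t.unsqueeze(0)                    # add a dimension -> shape (1, 2, 2)
bridge = torch.from_numpy(np.ones(3))    # NumPy bridge (shares memory)

# automatic differentiation
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x
y.backward()
print(x.grad)   # dy/dx = 2x + 3 = 7 at x = 2
```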
Practical Neural Networks in PyTorch - Application 1: Diabetes
  • Download the Dataset
  • Part 1: Data Preprocessing
  • Part 2: Data Normalization
  • Part 3: Creating and Loading the Dataset
  • Part 4: Building the Network
  • Part 5: Training the Network
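
A hedged sketch of this application's overall shape; the feature count, network sizes, and hyperparameters below are stand-ins, and random tensors replace the actual diabetes CSV:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(500, 8)                        # stand-in for the real data
y = torch.randint(0, 2, (500, 1)).float()
X = (X - X.mean(0)) / X.std(0)                 # normalization step

loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                    nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
opt = torch.optim.SGD(net.parameters(), lr=0.1)

for epoch in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(net(xb), yb)
        loss.backward()
        opt.step()
```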
Visualize the Learning Process
  • Visualize Learning Part 1
  • Visualize Learning Part 2
  • Visualize Learning Part 3
  • Visualize Learning Part 4
  • Visualize Learning Part 5
  • Visualize Learning Part 6
  • Neural Networks Playground
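
One common visualization in sections like this is plotting a model's decision boundary over a 2-D grid; the predict function below is a hypothetical stand-in for a trained network:

```python
import numpy as np
import matplotlib.pyplot as plt

def predict(pts):
    # hypothetical stand-in for a trained network's output probability
    return 1 / (1 + np.exp(-(pts[:, 0] * pts[:, 1])))

# evaluate the model on a dense grid and draw the probability surface
xx, yy = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
grid = np.c_[xx.ravel(), yy.ravel()]
zz = predict(grid).reshape(xx.shape)
plt.contourf(xx, yy, zz, levels=20, cmap="RdBu")
plt.colorbar(label="P(class 1)")
plt.title("decision boundary (toy model)")
plt.show()
```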
Implementing a Neural Network from Scratch with NumPy
  • The Dataset and Hyperparameters
  • Understanding the Implementation
  • Forward Propagation
  • Loss Function
  • Prediction
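
A sketch of the pieces named above for a one-hidden-layer binary classifier in NumPy; parameter names and shapes are assumptions, not the course's exact code:

```python
import numpy as np

def forward(X, params):
    # forward propagation through one tanh hidden layer
    z1 = X @ params["W1"] + params["b1"]
    a1 = np.tanh(z1)
    z2 = a1 @ params["W2"] + params["b2"]
    a2 = 1 / (1 + np.exp(-z2))       # sigmoid output
    return a1, a2

def loss(a2, y):
    # binary cross-entropy
    return -np.mean(y * np.log(a2) + (1 - y) * np.log(1 - a2))

def predict(X, params):
    # threshold the output probability at 0.5
    _, a2 = forward(X, params)
    return (a2 > 0.5).astype(int)
```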