Artificial Intelligence with Deep Learning Workshop

Get Course Information

Connect with us for information at info@velocityknowledge.com


Overview

Learn How to Build Neural Networks From Scratch, Then Develop and Deploy Models.

A technological revolution is underway, changing every aspect of our daily lives. Artificial intelligence (AI) has penetrated deep into our activities, interactions, professions, comforts, and experiences.

Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers. This immersive deep learning course teaches you how to train such models on labeled data and put them to work in your own development projects.
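
As a preview of what you will build, here is a minimal sketch of exactly that idea in Keras (the high-level API of TensorFlow, which the course introduces in Part 7): a small multi-layer network trained on a large labeled dataset, the MNIST handwritten digits. The layer sizes and epoch count are illustrative choices, not prescriptions from the course.

```python
import tensorflow as tf

# A standard labeled dataset: 60,000 training images of handwritten
# digits, each a 28x28 grayscale array with a 0-9 label.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A many-layer ("deep") architecture: stacked fully connected layers.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one unit per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Learning by example: fit the parameters to the labeled training set.
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```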

In This Artificial Intelligence and Deep Learning Course You Will:

  • Understand the basics of deep learning
  • Tune models
  • Create convolutional neural networks
  • Use recurrent neural networks
  • Develop and deploy models
  • Use Python and Jupyter notebooks
  • Follow best practices

Course Outline

Part 1: Introduction to Deep Learning

  1. What is a neural network?
  2. Supervised Learning with Neural Networks
  3. Why is Deep Learning taking off?

Part 2: Neural Networks Basics

  1. Binary Classification
  2. Logistic Regression
  3. Logistic Regression Cost Function
  4. Gradient Descent
  5. Derivatives
  6. More Derivative Examples
  7. Computation graph
  8. Derivatives with a Computation Graph
  9. Logistic Regression Gradient Descent
  10. Gradient Descent on m Examples
  11. Vectorization
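
A minimal NumPy sketch of how the Part 2 topics fit together: vectorized logistic regression trained by gradient descent over all m examples at once. The helper names and toy data below are illustrative only, not the course's reference code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, iters=1000):
    """X: (n_features, m) design matrix; y: (1, m) binary labels."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(iters):
        # Forward pass, vectorized over all m examples.
        A = sigmoid(w.T @ X + b)              # (1, m) predictions
        # Backward pass: gradients of the cross-entropy cost.
        dZ = A - y                            # (1, m)
        dw = (X @ dZ.T) / m                   # (n, 1)
        db = np.sum(dZ) / m
        # Gradient-descent update.
        w -= lr * dw
        b -= lr * db
    return w, b

# Toy usage: two features, four examples.
X = np.array([[0., 1., 2., 3.], [1., 0., 1., 0.]])
y = np.array([[0, 0, 1, 1]])
w, b = train_logistic_regression(X, y)
print(sigmoid(w.T @ X + b))  # predicted probabilities
```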

Part 3: Shallow Neural Networks

  1. Neural Networks Overview
  2. Neural Network Representation
  3. Computing a Neural Network’s Output
  4. Vectorizing across multiple examples
  5. Explanation for Vectorized Implementation
  6. Activation functions
  7. Why do you need non-linear activation functions?
  8. Derivatives of activation functions
  9. Gradient descent for Neural Networks
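
To make the shallow-network material concrete, here is a hedged sketch of forward propagation through one hidden layer, vectorized across m examples. The tanh/sigmoid pairing and small random initialization follow common convention; the sizes and data are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, params):
    """One hidden layer with tanh, sigmoid output. X: (n_x, m)."""
    Z1 = params["W1"] @ X + params["b1"]   # (n_h, m)
    A1 = np.tanh(Z1)                       # non-linear activation
    Z2 = params["W2"] @ A1 + params["b2"]  # (1, m)
    A2 = sigmoid(Z2)                       # probabilities
    return A1, A2

n_x, n_h, m = 3, 4, 5
rng = np.random.default_rng(0)
params = {
    "W1": rng.normal(size=(n_h, n_x)) * 0.01,  # small random init,
    "b1": np.zeros((n_h, 1)),                  # so tanh isn't saturated
    "W2": rng.normal(size=(1, n_h)) * 0.01,
    "b2": np.zeros((1, 1)),
}
A1, A2 = forward(rng.normal(size=(n_x, m)), params)
print(A2.shape)  # (1, 5): one prediction per example
```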

Part 4: Deep Neural Networks

  1. Deep L-layer neural network
  2. Forward Propagation in a Deep Network
  3. Getting your matrix dimensions right
  4. Why deep representations?
  5. Building blocks of deep neural networks
  6. Forward and Backward Propagation
  7. Parameters vs Hyperparameters
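
A short sketch of the Part 4 ideas: forward propagation through a deep L-layer network, with an assertion at each layer to "get your matrix dimensions right". The layer sizes are arbitrary, and ReLU is used everywhere for brevity; a real classifier would end with a sigmoid or softmax layer.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def deep_forward(X, weights, biases):
    """Forward propagation through an L-layer ReLU network.
    weights[l] has shape (n_l, n_{l-1}); biases[l] is (n_l, 1)."""
    A = X
    for l, (W, b) in enumerate(zip(weights, biases), start=1):
        assert W.shape[1] == A.shape[0], f"layer {l}: dimension mismatch"
        Z = W @ A + b        # linear step
        A = relu(Z)          # non-linear step
    return A

# Layer sizes 4 -> 5 -> 3 -> 1 (illustrative).
sizes = [4, 5, 3, 1]
rng = np.random.default_rng(1)
weights = [rng.normal(size=(sizes[l], sizes[l - 1])) * 0.01
           for l in range(1, len(sizes))]
biases = [np.zeros((sizes[l], 1)) for l in range(1, len(sizes))]
print(deep_forward(rng.normal(size=(4, 7)), weights, biases).shape)  # (1, 7)
```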

Part 5: Practical Aspects of Deep Learning

  1. Train / Dev / Test sets
  2. Bias / Variance
  3. Basic Recipe for Machine Learning
  4. Regularization
  5. Why does regularization reduce overfitting?
  6. Dropout Regularization
  7. Understanding Dropout
  8. Other regularization methods
  9. Normalizing inputs
  10. Vanishing / Exploding gradients
  11. Weight Initialization for Deep Networks
  12. Numerical approximation of gradients
  13. Gradient checking
  14. Gradient Checking Implementation Notes
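
As an example of the regularization material in this part, here is a sketch of inverted dropout: units are zeroed at random during training, and the survivors are scaled up so expected activations stay unchanged at test time. keep_prob = 0.8 is an illustrative value, not a recommendation.

```python
import numpy as np

def inverted_dropout(A, keep_prob, rng):
    """Zero out each unit with probability 1 - keep_prob, then divide by
    keep_prob so the expected value of the activations is preserved."""
    mask = rng.random(A.shape) < keep_prob
    return (A * mask) / keep_prob

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 6))            # activations: 4 units x 6 examples
print(inverted_dropout(A, keep_prob=0.8, rng=rng))
```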

Part 6: Optimization Algorithms

  1. Mini-batch gradient descent
  2. Understanding mini-batch gradient descent
  3. Exponentially weighted averages
  4. Understanding exponentially weighted averages
  5. Bias correction in exponentially weighted averages
  6. Gradient descent with momentum
  7. RMSprop
  8. Adam optimization algorithm
  9. Learning rate decay
  10. The problem of local optima
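
The Part 6 optimizers all build on exponentially weighted averages; here is a hedged sketch of a single Adam update, which combines a momentum-style average of the gradient with an RMSprop-style average of its square, plus bias correction. The hyperparameter defaults below follow the values commonly cited for Adam.

```python
import numpy as np

def adam_step(w, dw, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on parameter vector w given gradient dw."""
    state["t"] += 1
    state["v"] = beta1 * state["v"] + (1 - beta1) * dw       # 1st moment (momentum)
    state["s"] = beta2 * state["s"] + (1 - beta2) * dw ** 2  # 2nd moment (RMSprop)
    v_hat = state["v"] / (1 - beta1 ** state["t"])           # bias correction
    s_hat = state["s"] / (1 - beta2 ** state["t"])
    return w - lr * v_hat / (np.sqrt(s_hat) + eps)

w = np.array([1.0, -2.0])
state = {"t": 0, "v": np.zeros_like(w), "s": np.zeros_like(w)}
for _ in range(3):
    dw = 2 * w                     # gradient of the toy cost sum(w**2)
    w = adam_step(w, dw, state)
print(w)  # moves toward the minimum at the origin
```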

Part 7: Hyperparameter Tuning, Batch Normalization, and Programming Frameworks

  1. Tuning process
  2. Using an appropriate scale to pick hyperparameters
  3. Hyperparameter tuning in practice: Pandas vs. Caviar
  4. Normalizing activations in a network
  5. Fitting Batch Norm into a neural network
  6. Why does Batch Norm work?
  7. Batch Norm at test time
  8. Softmax Regression
  9. Training a softmax classifier
  10. Deep learning frameworks
  11. TensorFlow
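
To tie Batch Norm, softmax, and TensorFlow together, here is a minimal Keras sketch: a hidden layer whose activations are normalized with Batch Norm, feeding a softmax output layer trained on made-up 3-class data. The data, layer sizes, and epoch count are all illustrative.

```python
import numpy as np
import tensorflow as tf

# Toy 3-class problem: 200 points, 4 features, random labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype("float32")
y = rng.integers(0, 3, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16),
    tf.keras.layers.BatchNormalization(),            # normalize activations
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # softmax regression layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(X[:2]))  # rows of class probabilities summing to 1
```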

Part 8: Foundations of Convolutional Neural Networks

  1. Computer Vision
  2. Edge Detection Example
  3. More Edge Detection
  4. Padding
  5. Strided Convolutions
  6. Convolutions Over Volume
  7. One Layer of a Convolutional Network
  8. Simple Convolutional Network Example
  9. Pooling Layers
  10. CNN Example
  11. Why Convolutions?
  12. Deep convolutional models: case studies
  13. Classic Networks
  14. ResNets
  15. Why ResNets Work
  16. Networks in Networks and 1×1 Convolutions
  17. Inception Network Motivation
  18. Inception Network
  19. Using Open-Source Implementation
  20. Transfer Learning
  21. Data Augmentation
  22. State of Computer Vision
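
A sketch of the operation behind Part 8's edge detection, padding, and stride topics: a 2-D convolution (strictly, cross-correlation, as in most deep learning frameworks) written with explicit loops for clarity. The Sobel-like kernel and toy image are illustrative.

```python
import numpy as np

def conv2d(image, kernel, stride=1, pad=0):
    """2-D cross-correlation on a single-channel image, with optional
    zero padding and stride."""
    if pad:
        image = np.pad(image, pad)
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A vertical edge detector on a toy image: bright left half, dark right half.
img = np.hstack([np.ones((4, 3)) * 10, np.zeros((4, 3))])
sobel_like = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
print(conv2d(img, sobel_like))  # strong response at the vertical edge
```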

Part 9: Recurrent Neural Networks

  1. Why sequence models?
  2. Notation
  3. Recurrent Neural Network Model
  4. Backpropagation through time
  5. Different types of RNNs
  6. Language model and sequence generation
  7. Sampling novel sequences
  8. Vanishing gradients with RNNs
  9. Gated Recurrent Unit (GRU)
  10. Long Short-Term Memory (LSTM)
  11. Bidirectional RNN
  12. Deep RNNs
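
To preview the recurrent model, here is a minimal sketch of a vanilla RNN cell unrolled over a short sequence: the same weights are reused at every time step, and the hidden state carries information forward. The parameter names (Waa, Wax, Wya) follow the notation this kind of course commonly uses; the sizes and inputs are arbitrary.

```python
import numpy as np

def rnn_step(x_t, a_prev, params):
    """One time step of a vanilla RNN: the new hidden state mixes the
    current input with the previous state through a tanh non-linearity."""
    a_next = np.tanh(params["Waa"] @ a_prev + params["Wax"] @ x_t + params["ba"])
    y_t = params["Wya"] @ a_next + params["by"]   # per-step output (logits)
    return a_next, y_t

n_a, n_x, n_y, T = 5, 3, 2, 4
rng = np.random.default_rng(0)
params = {"Waa": rng.normal(size=(n_a, n_a)) * 0.01,
          "Wax": rng.normal(size=(n_a, n_x)) * 0.01,
          "Wya": rng.normal(size=(n_y, n_a)) * 0.01,
          "ba": np.zeros((n_a, 1)), "by": np.zeros((n_y, 1))}

# Unroll over a length-T sequence, reusing the same weights at every step.
a = np.zeros((n_a, 1))
for t in range(T):
    a, y = rnn_step(rng.normal(size=(n_x, 1)), a, params)
print(a.ravel(), y.ravel())
```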

Part 10: Natural Language Processing & Word Embeddings

  1. Word Representation
  2. Using word embeddings
  3. Properties of word embeddings
  4. Embedding matrix
  5. Learning word embeddings
  6. Word2Vec
  7. Negative Sampling
  8. GloVe word vectors
  9. Sentiment Classification
  10. Debiasing word embeddings
  11. Sequence models & Attention mechanism
  12. Basic Models
  13. Picking the most likely sentence
  14. Beam Search
  15. Refinements to Beam Search
  16. Error analysis in beam search
  17. BLEU Score (optional)
  18. Attention Model Intuition
  19. Attention Model
  20. Speech recognition
  21. Trigger Word Detection
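
A tiny sketch of the word-embedding property this part opens with: analogies as vector arithmetic, scored with cosine similarity. The 4-dimensional embedding matrix below is entirely made up for illustration; real embeddings (Word2Vec, GloVe) have hundreds of dimensions learned from large corpora.

```python
import numpy as np

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy embedding matrix: one 4-dimensional vector per word.
E = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.0, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

# The classic analogy test: king - man + woman should land near queen.
target = E["king"] - E["man"] + E["woman"]
best = max((w for w in E if w not in ("king", "man", "woman")),
           key=lambda w: cosine(E[w], target))
print(best)  # "queen" with these toy vectors
```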

Part 11: ML Strategy

  1. Why ML Strategy?
  2. Orthogonalization
  3. Single number evaluation metric
  4. Satisficing and Optimizing metric
  5. Train/dev/test distributions
  6. Size of the dev and test sets
  7. When to change dev/test sets and metrics
  8. Why human-level performance?
  9. Avoidable bias
  10. Understanding human-level performance
  11. Surpassing human-level performance
  12. Improving your model performance
  13. Carrying out error analysis
  14. Cleaning up incorrectly labeled data
  15. Build your first system quickly, then iterate
  16. Training and testing on different distributions
  17. Bias and Variance with mismatched data distributions
  18. Addressing data mismatch
  19. Transfer learning
  20. Multi-task learning
  21. What is end-to-end deep learning?
  22. Whether to use end-to-end deep learning
