4.81 out of 5
1,825 reviews on Udemy

A deep understanding of deep learning (with Python intro)

Master deep learning in PyTorch using an experimental scientific approach, with lots of examples and practice problems.
Instructor:
Mike X Cohen
16,510 students enrolled
English [Auto]
The theory and math underlying deep learning
How to build artificial neural networks
Architectures of feedforward and convolutional networks
Building models in PyTorch
The calculus and code of gradient descent
Fine-tuning deep network models
Learn Python from scratch (no prior coding experience necessary)
How and why autoencoders work
How to use transfer learning
Improving model performance using regularization
Optimizing weight initializations
Understand image convolution using predefined and learned kernels
Whether deep learning models are understandable or mysterious black-boxes!
Using GPUs for deep learning (much faster than CPUs!)

Deep learning is increasingly dominating technology and has major implications for society.

From self-driving cars to medical diagnoses, from face recognition to deep fakes, and from language translation to music generation, deep learning is spreading like wildfire throughout all areas of modern technology.

But deep learning is not only about super-fancy, cutting-edge, highly sophisticated applications. Deep learning is increasingly becoming a standard tool in machine learning, data science, and statistics. Deep learning is used by small startups for data mining and dimension reduction, by governments for detecting tax evasion, and by scientists for detecting patterns in their research data.

Deep learning is now used in most areas of technology, business, and entertainment. And it’s becoming more important every year.

How does deep learning work?

Deep learning is built on a really simple principle: take a super-simple algorithm (weighted sum and nonlinearity), and repeat it many, many times until the result is an incredibly complex and sophisticated learned representation of the data.

Is it really that simple? Mmm, OK, it’s actually a tiny bit more complicated than that 😉 but that’s the core idea, and everything else — literally everything else in deep learning — is just clever ways of putting together these fundamental building blocks. That doesn’t mean that deep neural networks are trivial to understand: there are important architectural differences between feedforward networks, convolutional networks, and recurrent networks.
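
To make that idea concrete, here is a minimal sketch of the weighted-sum-plus-nonlinearity principle, stacked a few times, using PyTorch (the library used throughout this course). The layer sizes and input dimensions here are arbitrary and chosen purely for illustration, not taken from the course materials.

    # A tiny stack of "weighted sum + nonlinearity" blocks
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(10, 32),  # weighted sum: 10 inputs -> 32 units
        nn.ReLU(),          # nonlinearity
        nn.Linear(32, 32),  # ...and repeat...
        nn.ReLU(),
        nn.Linear(32, 1),   # final weighted sum -> one output
    )

    x = torch.randn(5, 10)  # a batch of 5 samples, 10 features each
    y = model(x)            # forward pass; y has shape (5, 1)
    print(y.shape)

Stacking more of these blocks (and varying how they are wired together) is what produces the different architectures mentioned above.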

Given the diversity of deep learning model designs, parameters, and applications, you can only learn deep learning — I mean, really learn deep learning, not just have superficial knowledge from a YouTube video — by having an experienced teacher guide you through the math, implementations, and reasoning. And of course, you need lots of hands-on examples and practice problems to work through. Deep learning is basically just applied math, and, as everyone knows, math is not a spectator sport!

What is this course all about?

Simply put: The purpose of this course is to provide a deep dive into deep learning. You will gain flexible, fundamental, and lasting expertise in deep learning, along with a deep understanding of its core concepts, so that you will be able to learn new topics and trends as they emerge in the future.

Please note: This is not a course for someone who wants a quick overview of deep learning with a few solved examples. Instead, this course is designed for people who really want to understand how and why deep learning works; when and how to select metaparameters like optimizers, normalizations, and learning rates; how to evaluate the performance of deep neural network models; and how to modify and adapt existing models to solve new problems.

You can learn everything about deep learning in this course.

In this course, you will learn

  • Theory: Why are deep learning models built the way they are?

  • Math: What are the formulas and mechanisms of deep learning?

  • Implementation: How are deep learning models actually constructed in Python (using the PyTorch library)?

  • Intuition: Why is this or that metaparameter the right choice? How do you interpret the effects of regularization? And so on.

  • Python: If you’re completely new to Python, go through the 8+ hour coding tutorial appendix. If you’re already a knowledgeable coder, then you’ll still learn some new tricks and code optimizations.

  • Google Colab: Colab is an amazing online tool for running Python code, simulations, and heavy computations using Google’s cloud services. No need to install anything on your computer.

Unique aspects of this course

  • Clear and comprehensible explanations of concepts in deep learning.

  • Several distinct explanations of the same ideas, which is a proven technique for learning.

  • Visualizations using graphs, numbers, and spaces that provide intuition for artificial neural networks.

  • LOTS of exercises, projects, code-challenges, and suggestions for exploring the code. You learn best by doing it yourself!

  • Active Q&A forum where you can ask questions, get feedback, and contribute to the community.

  • 8+ hour Python tutorial. That means you don’t need to master Python before enrolling in this course.

So what are you waiting for??

Watch the course introductory video and free sample videos to learn more about the contents of this course and about my teaching style. If you are unsure whether this course is right for you and want to learn more, feel free to contact me with questions before you sign up.

I hope to see you soon in the course!

Mike

Introduction

1. How to learn from this course
2. Using Udemy like a pro

Download all course materials

1. Downloading and using the code

You can download all course code files from the attached zip, or from my GitHub site (same materials).

2. My policy on code-sharing

Concepts in deep learning

1. What is an artificial neural network?
2. How models "learn"
3. The role of DL in science and knowledge
4. Running experiments to understand DL
5. Are artificial "neurons" like biological neurons?

About the Python tutorial

1. Should you watch the Python tutorial?

Math, numpy, PyTorch

1. PyTorch or TensorFlow?
2. Introduction to this section
3. Spectral theories in mathematics
4. Terms and datatypes in math and computers
5. Converting reality to numbers
6. Vector and matrix transpose
7. OMG it's the dot product!
8. Matrix multiplication
9. Softmax
10. Logarithms
11. Entropy and cross-entropy
12. Min/max and argmin/argmax
13. Mean and variance
14. Random sampling and sampling variability
15. Reproducible randomness via seeding
16. The t-test
17. Derivatives: intuition and polynomials
18. Derivatives find minima
19. Derivatives: product and chain rules

Gradient descent

1. Overview of gradient descent
2. What about local minima?
3. Gradient descent in 1D
4. CodeChallenge: unfortunate starting value
5. Gradient descent in 2D
6. CodeChallenge: 2D gradient ascent
7. Parametric experiments on g.d.
8. CodeChallenge: fixed vs. dynamic learning rate
9. Vanishing and exploding gradients
10. Tangent: Notebook revision history

ANNs (Artificial Neural Networks)

1. The perceptron and ANN architecture
2. A geometric view of ANNs
3. ANN math part 1 (forward prop)
4. ANN math part 2 (errors, loss, cost)
5. ANN math part 3 (backprop)
6. ANN for regression
7. CodeChallenge: manipulate regression slopes
8. ANN for classifying qwerties
9. Learning rates comparison
10. Multilayer ANN
11. Linear solutions to linear problems
12. Why multilayer linear models don't exist
13. Multi-output ANN (iris dataset)
14. CodeChallenge: more qwerties!
15. Comparing the number of hidden units
16. Depth vs. breadth: number of parameters
17. Defining models using sequential vs. class
18. Model depth vs. breadth
19. CodeChallenge: convert sequential to class
20. Diversity of ANN visual representations
21. Reflection: Are DL models understandable yet?

Overfitting and cross-validation

1. What is overfitting and is it as bad as they say?
2. Cross-validation
3. Generalization
4. Cross-validation -- manual separation
5. Cross-validation -- scikitlearn
6. Cross-validation -- DataLoader
7. Splitting data into train, devset, test
8. Cross-validation on regression

Regularization

1. Regularization: Concept and methods
2. train() and eval() modes
3. Dropout regularization
4. Dropout regularization in practice
5. Dropout example 2
6. Weight regularization (L1/L2): math
7. L2 regularization in practice
8. L1 regularization in practice
9. Training in mini-batches
10. Batch training in action
11. The importance of equal batch sizes
12. CodeChallenge: Effects of mini-batch size

Metaparameters (activations, optimizers)

1. What are "metaparameters"?
2. The "wine quality" dataset
3. CodeChallenge: Minibatch size in the wine dataset
4. Data normalization
5. The importance of data normalization
6. Batch normalization
7. Batch normalization in practice
8. CodeChallenge: Batch-normalize the qwerties
9. Activation functions
10. Activation functions in PyTorch
4.8 out of 5
1,825 ratings

Detailed rating

5 stars: 1,365
4 stars: 391
3 stars: 51
2 stars: 10
1 star: 8

30-day money-back guarantee

Includes

57 hours of on-demand video
3 articles
Full lifetime access
Access on mobile and TV
Certificate of completion

Price: $17 (regular price $94.99)