Interested in the field of Machine Learning? Then this course is for you!
This course has been designed by a Data Scientist and a Machine Learning expert so that we can share our knowledge and help you learn complex theory, algorithms, and coding libraries in a simple way.
Over 900,000 students worldwide trust this course.
We will walk you step by step into the world of Machine Learning. With every tutorial, you will develop new skills and improve your understanding of this challenging yet lucrative sub-field of Data Science.
You can complete this course by doing the Python tutorials, the R tutorials, or both – Python & R. Pick the programming language that you need for your career.
This course is fun and exciting, and at the same time, we dive deep into Machine Learning. It is structured the following way:
- Part 1 – Data Preprocessing
- Part 2 – Regression: Simple Linear Regression, Multiple Linear Regression, Polynomial Regression, SVR, Decision Tree Regression, Random Forest Regression
- Part 3 – Classification: Logistic Regression, K-NN, SVM, Kernel SVM, Naive Bayes, Decision Tree Classification, Random Forest Classification
- Part 4 – Clustering: K-Means, Hierarchical Clustering
- Part 5 – Association Rule Learning: Apriori, Eclat
- Part 6 – Reinforcement Learning: Upper Confidence Bound, Thompson Sampling
- Part 7 – Natural Language Processing: Bag-of-words model and algorithms for NLP
- Part 8 – Deep Learning: Artificial Neural Networks, Convolutional Neural Networks
- Part 9 – Dimensionality Reduction: PCA, LDA, Kernel PCA
- Part 10 – Model Selection & Boosting: k-fold Cross Validation, Parameter Tuning, Grid Search, XGBoost
Each section inside each part is independent, so you can either take the whole course from start to finish or jump straight into any specific section and learn what you need for your career right now.
Moreover, the course is packed with practical exercises that are based on real-life case studies. So not only will you learn the theory, but you will also get lots of hands-on practice building your own models.
And as a bonus, this course includes both Python and R code templates which you can download and use on your own projects.
Welcome to the course! Here we will help you get started in the best conditions.
See the power of Machine Learning in action as we create a Logistic Regression predictive model for a real-world marketing and sales use-case!
In this video, Hadelin explains in detail how to install the R programming language and RStudio on your computer so you can swiftly go through the rest of the course.
-------------------- Part 1: Data Preprocessing --------------------
Understand the steps involved in Machine Learning: Data Pre-Processing (Import the data, Clean the data, Split into training & test sets, Feature Scaling), Modelling (Build the model, Train the model, Make predictions), and Evaluation (Calculate performance metrics, Make a verdict).
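For readers who like to see the steps in code, here is a minimal Python sketch of that workflow using scikit-learn; the file name 'data.csv' and the assumption that the categorical target sits in the last column are placeholders for illustration, not part of the course materials:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Import the data (placeholder file name; assumes the target is the last column)
dataset = pd.read_csv('data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values

# 2. Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3. Feature scaling: fit the scaler on the training set only
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# 4. Build and train the model, make predictions, evaluate
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
print(accuracy_score(y_test, y_pred))
```

Note that the scaler is fitted on the training set only, so no information from the test set leaks into training.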
Understand why it's important to split the data into a training set and a test set, how they differ and what they are used for.
There are two types of feature scaling: Normalization and Standardization. The practical tutorials focus on Standardization, and here we will discuss the intuition behind Normalization as well.
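For reference, the two formulas are (computed per feature, where \(\mu\) is the mean and \(\sigma\) the standard deviation):

\[
x_{\text{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \qquad
x_{\text{stand}} = \frac{x - \mu}{\sigma}
\]

Normalization squashes every value into [0, 1], while Standardization centres the feature around 0 with unit variance.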
Data Preprocessing in Python
A short written summary of what you need to know about Object-Oriented Programming, e.g. classes, objects, and methods.
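As a minimal Python illustration of those three ideas (the class name and method below are invented for the example, not taken from the course):

```python
class Robot:                      # a class: the blueprint
    def __init__(self, name):
        self.name = name          # attribute stored on the object when it is built

    def greet(self):              # a method: a function attached to the class
        return f"Hi, I am {self.name}"

robot = Robot("T-800")            # an object: one instance built from the class
print(robot.greet())              # calling the method on that object
```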
Data Preprocessing in R
-------------------- Part 2: Regression --------------------
What is regression? This course covers six types of regression models.
Simple Linear Regression
The math behind Simple Linear Regression.
Finding the best-fitting line with the Ordinary Least Squares method to model the linear relationship between the independent variable and the dependent variable.
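In symbols, the model and the Ordinary Least Squares criterion are

\[
\hat{y} = b_0 + b_1 x, \qquad \min_{b_0,\,b_1} \sum_{i=1}^{n} \bigl(y_i - \hat{y}_i\bigr)^2 ,
\]

which has the closed-form solution \(b_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}\) and \(b_0 = \bar{y} - b_1 \bar{x}\).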
Data preprocessing for Simple Linear Regression in R.
Fitting the Simple Linear Regression (SLR) model to the training set using the R function ‘lm’.
Predicting the test set results with the SLR model using the R function ‘predict’.
Visualizing the training set results and test set results with the R package ‘ggplot2’.
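The tutorials implement this workflow in R; purely as a cross-reference for Python learners, a minimal sketch of the same fit–predict–visualize steps with scikit-learn and matplotlib might look like this (the file name and column layout are assumptions for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Hypothetical dataset: one feature column followed by one numeric target column
dataset = pd.read_csv('data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

# Fit the simple linear regressor (analogue of R's lm)
regressor = LinearRegression()
regressor.fit(X_train, y_train)

# Predict the test set results (analogue of R's predict)
y_pred = regressor.predict(X_test)

# Visualize the training set results (analogue of the ggplot2 charts)
plt.scatter(X_train, y_train, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.title('Simple Linear Regression (training set)')
plt.xlabel('feature')
plt.ylabel('target')
plt.show()
```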
Multiple Linear Regression
An application of Multiple Linear Regression: profit prediction for Startups.
The math behind Multiple Linear Regression: modelling the linear relationship between the independent (explanatory) variables and the dependent (response) variable.
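The model extends the simple case to several predictors:

\[
\hat{y} = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_p x_p
\]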
The 5 assumptions associated with a linear regression model: linearity, homoscedasticity, multivariate normality, independence (no autocorrelation), and lack of multicollinearity - plus an additional check for outliers.
Coding categorical variables in regression with dummy variables.
Dummy variable trap and how to avoid it.
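A minimal pandas sketch of both ideas, assuming an illustrative categorical column named 'State': one-hot encode it, then drop one dummy so the remaining columns are not perfectly collinear with the intercept (the trap).

```python
import pandas as pd

# Illustrative data, not from the course datasets
df = pd.DataFrame({'State': ['New York', 'California', 'Florida', 'New York'],
                   'Profit': [200000, 190000, 180000, 170000]})

# Keeping one dummy column per category would be perfectly collinear with the
# intercept (the dummy variable trap); drop_first=True keeps n-1 dummies instead of n.
X = pd.get_dummies(df, columns=['State'], drop_first=True)
print(X)
```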
An intuitive guide to the 5 methods of building multiple linear regression models, including Stepwise Regression, as sketched below: All-in, Backward Elimination, Forward Selection, Bidirectional Elimination, and Score Comparison.
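As a concrete illustration of one of these methods, here is a rough sketch of Backward Elimination driven by statsmodels' OLS p-values; the 0.05 significance level is an illustrative choice, and this simplification does not handle every edge case:

```python
import numpy as np
import statsmodels.api as sm

def backward_elimination(X, y, significance_level=0.05):
    """Repeatedly drop the predictor whose p-value is highest and above the threshold."""
    X = sm.add_constant(X)            # prepend an intercept column
    cols = list(range(X.shape[1]))    # indices of the columns still in the model
    while True:
        model = sm.OLS(y, X[:, cols]).fit()
        p_values = model.pvalues
        worst = int(np.argmax(p_values))
        if p_values[worst] > significance_level and len(cols) > 1:
            del cols[worst]           # eliminate the least significant predictor
        else:
            return model, cols        # every remaining predictor is significant

# Illustrative usage with random data (not from the course):
# rng = np.random.default_rng(0)
# X = rng.normal(size=(100, 5))
# y = 3 * X[:, 0] + 2 * X[:, 3] + rng.normal(size=100)
# final_model, kept_columns = backward_elimination(X, y)
```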
Polynomial Regression
The math behind Polynomial Regression: modelling the non-linear relationship between the independent variables and the dependent variable.
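In symbols, the features are powers of the same variable, while the model stays linear in its coefficients:

\[
\hat{y} = b_0 + b_1 x + b_2 x^2 + \dots + b_n x^n
\]

That is why it can be fitted with the same Ordinary Least Squares machinery once the polynomial terms are added as extra columns.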