Based on feedback that the course content is at a graduate (501) rather than an undergraduate (101) level, we have renumbered the courses to reflect the depth of the content more accurately. The lectures and exercises are unchanged.

Summary

This course provides an overview of machine learning fundamentals on modern Intel® architecture. Topics covered include:

  • Reviewing the types of problems that machine learning can solve
  • Understanding the basic building blocks of machine learning
  • Learning the fundamentals of building models in machine learning
  • Exploring key algorithms

By the end of this course, students will have practical knowledge of:

  • Supervised learning algorithms
  • Key concepts like under- and over-fitting, regularization, and cross-validation
  • How to identify the type of problem to be solved, choose the right algorithm, tune parameters, and validate a model

The course is structured around 12 weeks of lectures and exercises. Each week requires three hours to complete. The exercises are implemented in Python*, so familiarity with the language is helpful, though you can pick it up along the way.

Prerequisites

Python* programming
Calculus
Linear algebra
Statistics

Week 1

This class introduces the basic data science toolset:

  • Jupyter Notebook* for interactive coding
  • NumPy, SciPy, and pandas for numerical computation and data manipulation
  • Matplotlib and seaborn for data visualization
  • Scikit-learn* for machine learning algorithms

You’ll be using these tools to work through the exercises each week.
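
For instance, a minimal sketch of how these tools fit together might look like the following. The Iris dataset and the specific calls are illustrative choices, not the course exercises, and the snippet assumes a recent scikit-learn:

    # Quick tour of the Week 1 toolset (illustrative only).
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    from sklearn.datasets import load_iris

    # Load a small example dataset into a pandas DataFrame.
    iris = load_iris(as_frame=True)
    df = iris.frame

    # NumPy and pandas for quick numerical summaries.
    print(df.describe())
    print(np.corrcoef(df["sepal length (cm)"], df["petal length (cm)"]))

    # Seaborn and Matplotlib for visualization.
    sns.pairplot(df, hue="target")
    plt.show()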

Download

Week 2

This class introduces the basic concepts and vocabulary of machine learning:

  • Supervised learning and how it can be applied to regression and classification problems
  • K-Nearest Neighbors (KNN) algorithm for classification (see the code sketch after this list)
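
A minimal sketch of KNN classification in scikit-learn; the Iris data, the split, and the choice of five neighbors are illustrative assumptions rather than the course's exercise:

    # Classify Iris flowers by a majority vote of the 5 nearest neighbors.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)
    print("Test accuracy:", knn.score(X_test, y_test))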

Download

Week 3

This class reviews the core principles of model generalization:

  • The difference between over-fitting and under-fitting a model
  • Bias-variance tradeoffs
  • Finding the optimal training and test data set splits, cross-validation, and model complexity versus error
  • Introduction to the linear regression model for supervised learning (an example follows this list)
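
As a rough illustration of model complexity versus error, the sketch below fits polynomial regression models of increasing degree and compares their cross-validated scores; the synthetic data and the degrees shown are arbitrary assumptions:

    # Low polynomial degree tends to under-fit, high degree to over-fit;
    # cross-validated R^2 exposes the trade-off.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 3, size=(60, 1))
    y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.2, size=60)

    for degree in (1, 3, 12):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        scores = cross_val_score(model, X, y, cv=5)
        print(f"degree={degree:2d}  mean CV R^2={scores.mean():.3f}")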

Download

Week 4

This class builds on concepts taught in previous weeks. Additionally, you will:

  • Learn about cost functions, regularization, feature selection, and hyper-parameters
  • Understand more complex statistical optimization algorithms like gradient descent and its application to linear regression (sketched in code below)
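
A bare-bones sketch of batch gradient descent for linear regression, written with NumPy only; the synthetic data, learning rate, and iteration count are illustrative assumptions:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 2, size=(100, 1))
    y = 4 + 3 * X[:, 0] + rng.normal(scale=0.5, size=100)

    X_b = np.hstack([np.ones((100, 1)), X])   # add a bias (intercept) column
    theta = np.zeros(2)                       # parameters: [intercept, slope]
    lr = 0.1                                  # learning rate (a hyper-parameter)

    for _ in range(1000):
        gradient = 2 / len(y) * X_b.T @ (X_b @ theta - y)   # gradient of the MSE cost
        theta -= lr * gradient

    print("Learned parameters:", theta)       # should end up close to [4, 3]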

Download

Week 5

This class discusses the following:

  • Logistic regression and how it differs from linear regression
  • Metrics for classification error and scenarios in which they can be used (see the example below)
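
For example, a short scikit-learn sketch of logistic regression together with a confusion matrix and per-class precision and recall; the breast-cancer dataset and the scaling step are illustrative choices:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report, confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Scaling the features helps the solver converge.
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    clf.fit(X_train, y_train)

    y_pred = clf.predict(X_test)
    print(confusion_matrix(y_test, y_pred))
    print(classification_report(y_test, y_pred))   # precision, recall, F1 per class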

Download

Week 6

During this session, we review:

  • The basics of probability theory and its application to the Naïve Bayes classifier
  • The different types of Naïve Bayes classifiers and how to train a model using this algorithm (a brief example follows)
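
A small Gaussian Naïve Bayes example, shown only as a sketch; GaussianNB assumes normally distributed features, while MultinomialNB and BernoulliNB suit count and binary features:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)

    # Each feature is modeled as a per-class Gaussian; prediction applies Bayes' rule.
    nb = GaussianNB()
    print("Mean CV accuracy:", cross_val_score(nb, X, y, cv=5).mean())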

Download

Week 7

This week covers:

  • Support vector machines (SVMs)—a popular algorithm used for classification problems
  • Examples that illustrate how SVMs are similar to logistic regression
  • How to calculate the cost function of SVMs
  • Regularization in SVMs and tips for obtaining non-linear classification with them (see the sketch below)
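
A compact sketch comparing a linear and an RBF-kernel SVM on a non-linearly separable toy dataset; the make_moons data and the C value are illustrative assumptions:

    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The RBF kernel yields a non-linear decision boundary; C controls regularization.
    for kernel in ("linear", "rbf"):
        svm = SVC(kernel=kernel, C=1.0)
        svm.fit(X_train, y_train)
        print(kernel, "test accuracy:", svm.score(X_test, y_test))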

Download

Week 8

Continuing with the topic of advanced supervised learning algorithms, this class covers:

  • Decision trees and how to use them for classification problems
  • How to identify the best split and the factors for splitting
  • Strengths and weaknesses of decision trees
  • Regression trees, which extend decision trees to predict continuous values (both are sketched below)
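
A brief sketch of both tree types in scikit-learn; the datasets and depth limits are arbitrary illustrations:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

    # Classification tree: splits are chosen to reduce impurity (Gini by default).
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    print("Classification accuracy:", clf.score(X_test, y_test))

    # Regression tree: predicts a continuous target by averaging within each leaf.
    rng = np.random.RandomState(0)
    Xr = rng.uniform(0, 5, size=(80, 1))
    yr = np.sin(Xr[:, 0]) + rng.normal(scale=0.1, size=80)
    reg = DecisionTreeRegressor(max_depth=3)
    print("Regression R^2 (training data):", reg.fit(Xr, yr).score(Xr, yr))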

Download

Week 9

Building on what you learned in Week 8, this class teaches:

  • The concepts of bootstrapping and aggregating (commonly known as “bagging”) to reduce variance
  • The Random Forest algorithm, which further reduces the correlation between trees seen in bagging models (see the sketch below)
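
A sketch comparing plain bagging of decision trees with a random forest on the same data; the dataset and estimator counts are illustrative assumptions:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Bagging: many trees fit on bootstrap samples, with predictions averaged.
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

    # Random forest: bagging plus a random feature subset at each split,
    # which decorrelates the individual trees.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)

    for name, model in (("bagging", bag), ("random forest", rf)):
        print(name, "CV accuracy:", cross_val_score(model, X, y, cv=5).mean())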

Download

Week 10

This week, learn about boosting, an ensemble technique in which models are built sequentially, each focusing on the errors of the previous ones, to help reduce both variance and bias.
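
As a rough illustration (not the course exercise), gradient boosting in scikit-learn looks like the following; the dataset and hyper-parameters are arbitrary choices:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # Shallow trees are added one at a time, each fit to the errors of the
    # ensemble built so far; the learning rate scales each tree's contribution.
    gbt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                     max_depth=2, random_state=0)
    print("Mean CV accuracy:", cross_val_score(gbt, X, y, cv=5).mean())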

Download

Week 11

So far, the course has been heavily focused on supervised learning algorithms. This week, learn about unsupervised learning algorithms and how they can be applied to clustering and dimensionality reduction problems.
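
For instance, a minimal k-means clustering sketch on synthetic data (purely illustrative; the number of clusters is assumed to be known here):

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # No labels are used during fitting; k-means groups points by proximity.
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    labels = kmeans.fit_predict(X)
    print("Cluster sizes:", [int((labels == k).sum()) for k in range(3)])
    print("Cluster centers:\n", kmeans.cluster_centers_)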

Download

Week 12

Dimensionality refers to the number of features in the dataset. In theory, more features should mean better models, but this is not true in practice: too many features can lead to spurious correlations, more noise, and slower performance. This week, learn algorithms for reducing dimensionality, such as the following (sketched briefly after the list):

  • Principal Component Analysis (PCA)
  • Multidimensional Scaling (MDS)
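
A short sketch of both techniques in scikit-learn, reducing the 64-feature digits data to two dimensions; the dataset is an illustrative choice, and MDS is run on a subset because it scales poorly with the number of samples:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.manifold import MDS

    X, _ = load_digits(return_X_y=True)      # 1797 samples, 64 features each

    # PCA: linear projection onto the directions of maximum variance.
    X_pca = PCA(n_components=2).fit_transform(X)

    # MDS: embedding that tries to preserve pairwise distances.
    X_mds = MDS(n_components=2, random_state=0).fit_transform(X[:300])

    print("PCA output shape:", X_pca.shape)
    print("MDS output shape:", X_mds.shape)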

Download