Machine Learning, Upgraded Edition II

Original price: ¥899.00

More than 500 people have already signed up

Lowest price: ¥399.00

>> Click "Read the original text" at the bottom of this article to register

Course Name

Machine Learning, Upgraded Edition II (7-day no-questions-asked refund)

About the Instructor

Qin Zengchang

Master's and PhD from the University of Bristol, UK; postdoctoral fellow at UC Berkeley; visiting scholar at the University of Oxford and Carnegie Mellon University (CMU). His research interests cover data mining, cross-media retrieval, and natural language understanding. He has published one English-language monograph (Springer), edited one collection of papers, and authored more than 90 professional papers and book chapters. He also works as a professional technical consultant to the IT industry on machine learning, big data, and artificial intelligence.

Course Features

1. The teaching focus of this course is understanding and mastering the derivation of classical machine learning algorithms at the mathematical level, and grasping in depth both the basic ideas of machine learning and the specific ideas and methods of each algorithm, from their history to their details.

2. Reinforce the fundamentals of mathematics, probability theory, and mathematical statistics, consolidating the essential background knowledge for machine learning.

3. The course provides rigorous, documented mathematical derivations to help students better master algorithm derivation (essential for interviews).

4. In-class quizzes help students consolidate and understand the key points of each class.

5. The course provides supporting study materials and classic papers carefully curated by the instructor, for review and study at different stages of the course.

Course Format

Classes begin on June 22, 2018

Live online: 20 sessions, 2 hours each

Twice a week (Monday and Friday, 20:00-22:00)

After each live session, the recording can be watched online repeatedly and remains available for one year

Course Outline

Lesson 1: Mathematical Foundations of Machine Learning

1. Mathematical Foundations of Machine Learning

A. Generalization of functions and data

B. Deduction and Induction

2. Linear Algebra

A. Vector and Matrix

B. Eigenvalues and eigenvectors

C. Vectors and Higher-dimensional Spaces

D. Feature Vectors

3. Probability and Statistics

A. Conditional Probability and Classical Problems

B. Marginal Probability

4. Assignment/Practice: a program to compute probabilities in the treasure problem (see the sketch below)
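
The course's exact "treasure problem" isn't spelled out here, so as a taste of the assignment, here is a minimal sketch assuming the classic Monty Hall-style setup from the conditional-probability unit (three doors, one treasure, the host opens an empty door); the course's actual problem statement may differ:

```python
import random

def simulate(trials=100_000):
    stay_wins = switch_wins = 0
    for _ in range(trials):
        treasure = random.randrange(3)   # door hiding the treasure
        pick = random.randrange(3)       # player's initial pick
        # Host opens a door that hides nothing and is not the pick
        opened = next(d for d in range(3) if d != pick and d != treasure)
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == treasure)
        switch_wins += (switched == treasure)
    print(f"P(win | stay)   ~ {stay_wins / trials:.3f}")    # about 1/3
    print(f"P(win | switch) ~ {switch_wins / trials:.3f}")  # about 2/3

simulate()
```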

       

Lesson 2: Mathematical Foundations of Machine Learning

 

1. Statistical Inference

A. Bayesian Principle and Inference

B. Maximum Likelihood Estimation

C. Subjective Probability

D. Maximum a Posteriori Estimation (MAP)

2. Random Variable

A. Independence and Correlation

B. Mean and Variance

C. Covariance

3. Probability Distributions

4. Central Limit Theorem

5. Assignment/Practice: sampling from probability distributions and computing the covariance between different random variables (see the sketch below)
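
A minimal sketch of what this exercise might look like with NumPy; the variables X, Y, Z below are toy illustrations, not course data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(loc=0.0, scale=1.0, size=n)      # X ~ N(0, 1)
y = 2.0 * x + rng.normal(scale=0.5, size=n)     # Y depends on X -> positive covariance
z = rng.uniform(-1.0, 1.0, size=n)              # Z independent of X

# Sample covariance: cov(A, B) = E[(A - E[A]) * (B - E[B])]
def cov(a, b):
    return np.mean((a - a.mean()) * (b - b.mean()))

print("cov(X, Y) ~", cov(x, y))   # close to 2 * Var(X) = 2.0
print("cov(X, Z) ~", cov(x, z))   # close to 0.0 (independence)
print("np.cov agrees:", np.cov(x, y, bias=True)[0, 1])
```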

 

Lesson 3: Mathematical Foundations of Machine Learning

 

1. Gradient Descent

A. Derivative and Gradient

B. Stochastic gradient descent (SGD)

C. Newton’s Method

2. Convex Function

A. Jensen’s Inequality

B. Lagrange Multiplier

3. Assignment/Practice: solve a given equation using Newton's method (see the sketch below)
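
A minimal sketch of Newton's method; the equation x^2 - 2 = 0 stands in for whatever "given equation" the course assigns:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f via the update x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x^2 - 2 = 0, i.e. approximate sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # 1.41421356...
```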


Lesson 4: The Philosophy of Machine Learning

 

1. The Science of Algorithms

A. The Mystery of I/O

B. Occam’s Razor

2. Curse of Dimensionality

A. Geometric Properties

B. High-dimensional Manifold

3. Machine Learning and AI

4. Paradigms of ML

 

Lesson 5: Classical ML Models

 

1. Case-based Reasoning

A. K-nearest Neighbors

B. KNN for Prediction

C. Distance and Metric

2. Naive Bayes Classifier

A. Conditional Independence

B. Naive Bayes for Classification

3. Assignment/Practice: a spam classification case study (see the sketch below)
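
A minimal sketch of the case study, assuming a Naive Bayes classifier (this lesson's model) and a hand-made toy corpus in place of the course's dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hand-made corpus standing in for a real spam dataset
texts = [
    "win a free prize now", "limited offer click here",
    "free money claim now", "meeting at noon tomorrow",
    "please review the attached report", "lunch with the team today",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = ham

vec = CountVectorizer()
X = vec.fit_transform(texts)           # bag-of-words counts
clf = MultinomialNB().fit(X, labels)   # relies on conditional independence

test = vec.transform(["claim your free prize", "see the report tomorrow"])
print(clf.predict(test))               # expected: [1 0]
```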

 

Lesson 6: Classical ML Models

 

1. Decision Tree Learning

A. Information Theory and Probability

B. Information Entropy

C. ID3

2. Classification and Regression Trees (CART)

A. Gini Index

B. Decision Trees and Rule Learning

3. Assignment/Practice: decision tree classification experiment (see the sketch below)
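
A minimal sketch of a decision tree classification experiment; scikit-learn's Iris dataset and entropy criterion stand in for the course's setup (note that sklearn's tree is CART-based, with entropy mirroring the ID3 information-gain idea):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Entropy criterion: split on the feature with the highest information gain
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
```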

 

Lesson 7: Classical ML Models

 

1. Ensemble Learning

A. Bagging and Boosting

B. AdaBoost

C. Bias-variance Decomposition

D. Boosting and Random Forest

2. Model Evaluation

A. Cross-validation

B. ROC (Receiver Operating Characteristic)

C. Cost-sensitive Learning

3. Assignment/Practice: comparing random forest and decision tree classifiers (see the sketch below)
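
A minimal sketch of the comparison using cross-validation (covered above); the breast-cancer dataset is a stand-in for whatever data the course uses:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# A single tree versus a bagged ensemble of 100 randomized trees
for name, model in [
    ("decision tree", DecisionTreeClassifier(random_state=0)),
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```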

 

Lesson 8: Linear Models

 

1. Linear Models

A. Linear Regression

B. Least Squares (LMS)

C. Linear Classifiers

2. Perceptron

3. Logistic Regression

4. Probabilistic Interpretation of Linear Models

5. Assignment/Practice: applying logistic regression to text sentiment analysis (see the sketch below)
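
A minimal sketch of the sentiment-analysis assignment; the six labeled sentences are toy data, not the course corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great movie, loved it", "wonderful acting and story",
    "absolutely fantastic film", "terrible plot, waste of time",
    "boring and badly acted", "awful, would not recommend",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["what a fantastic story", "boring waste of a film"]))
# expected: [1 0]
```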

 

Lesson 9: Linear Models

 

1. Linear Discriminant Analysis

2. Linear Model with Regularization

A. LASSO

B. Ridge Regression

3. Sparse representation and dictionary learning

A. Sparse Representation & Coding

B. Dictionary Learning

 

Lesson 10: Kernel Methods

 

1. Support Vector Machines (SVM)

A. VC Dimension

B. Maximum Margin

C. Support Vectors

2. Assignment/Practice: comparing different SVM kernel functions on an actual classification task (see the sketch below)
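
A minimal sketch of the kernel comparison; the two-moons dataset is a stand-in for the course's real classification task:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Non-linearly separable data, where kernel choice actually matters
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:>6} kernel: mean accuracy = {acc:.3f}")
```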

 

Lesson 11: Kernel Methods

 

1. Lagrangian Duality

2. KKT Conditions

3. Support Vector Regression (SVR)

4. Kernel Methods

 

Lesson 12: Statistical Learning

 

1. Discriminative and Generative Models

A. Latent Variable

2. Mixture Model

A. The Three-Coin Problem

B. Gaussian Mixture Model

3. The EM Algorithm (Expectation Maximization)

A. Expectation Maximization

B. EM for Mixture Models

C. Jensen’s Inequality

D. Derivation and Properties of the EM Algorithm

 

Lesson 13: Statistical Learning

 

1. Hidden Markov Models

A. Dynamic Mixture Model

B. Viterbi Algorithm (see the sketch below)

C. Derivation of the Algorithm

2. Conditional Random Fields (CRF)
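
As referenced above, a minimal sketch of the Viterbi algorithm on the classic toy weather HMM; the states and probabilities are illustrative, not course material:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path; pi: initial probs, A: transitions, B: emissions."""
    T, n = len(obs), len(pi)
    delta = np.zeros((T, n))           # best log-prob of a path ending in each state
    psi = np.zeros((T, n), dtype=int)  # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # scores[i, j]: best path i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):      # trace backpointers to recover the path
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy HMM: states 0 = Rainy, 1 = Sunny; observations 0 = walk, 1 = shop, 2 = clean
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
print(viterbi([0, 1, 2], pi, A, B))  # [1, 0, 0]: Sunny, Rainy, Rainy
```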

 

Lesson 14: Statistical Learning

 

1. Hierarchical Bayesian Model

A. Probabilistic Model

B. From Latent Semantic Analysis to PLSA (LSA to PLSA)

C. The Dirichlet Distribution and Its Properties

D. Conjugate Distribution

 

Lesson 15: Statistical Learning

 

1. Topic Model (LDA)

A. Latent Dirichlet Allocation

B. LDA for Text Classification

2. Topic Modeling for Chinese

3. Other Topic Model Variants

 

Lesson 16: Unsupervised Learning

 

1. The K-means Algorithm (see the sketch below)

A. Kernel Density Estimation

B. Hierarchical Clustering

2. Monte Carlo Methods

A. Monte Carlo Tree Search (MCTS)

B. MCMC (Markov Chain Monte Carlo)

C. Gibbs Sampling
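
As flagged above, a minimal from-scratch sketch of the K-means algorithm on synthetic two-cluster data:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        # Update step: move each center to the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
centers, labels = kmeans(X, k=2)
print(centers)  # one center near (0, 0), the other near (4, 4)
```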

 

Lesson 17: Manifold Learning

 

1. Principal Component Analysis (PCA)

A. PCA and ICA

2. Low-dimensional Embedding

A. Isometric Mapping (Isomap)

B. Locally Linear Embedding

 

Lesson 18: Concept Learning

 

1. Concept Learning

A. Classical Concept Learning

B. One-shot Concept Learning

2. Gaussian Processes for ML

3. Dirichlet Process

 

Lesson 19: Reinforcement Learning

 

1. Reward and Penalty

A. State-space Model

B. Q-learning

2. Path Planning

3. Game AI

4. Assignment/Practice: a self-learning agent for a Flappy Bird-style game (see the sketch below)
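
A full bird-game agent is beyond a sketch, but the Q-learning update at its heart fits in a few lines; this toy corridor world is entirely an illustration, not the course's environment:

```python
import numpy as np

# Tabular Q-learning on a 1-D corridor: states 0..5, reward 1 for reaching state 5
n_states, n_actions = 6, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: action 1 ("right") in every non-terminal state
```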

 

Lesson 20: Neural Networks

 

1. Multilayer Neural Networks

A. Nonlinear Mapping

B. Backpropagation (see the sketch below)

2. Autoencoders
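
As referenced above, a minimal sketch of backpropagation: a tiny two-layer network trained on XOR with hand-written gradients; the architecture and hyperparameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # 4 hidden -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule, layer by layer (squared-error loss)
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # Gradient-descent parameter updates
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```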

To register, ask questions, or browse courses, please click "Read the original text" at the bottom left.