Li Hang’s Statistical Learning Methods can be regarded as an introductory text for machine learning. I reproduced the algorithms in this book in Python and turned them into an online reading version that can be studied in fragments of time. (Huang Haiguang)

Resource introduction

When I first started learning machine learning, Li Hang’s “Statistical Learning Methods” helped me greatly. I searched for code on GitHub, downloaded and modified it myself, and implemented the book with Python code; the project is on GitHub (almost 7,700+ stars):

Github.com/fengdu78/li…

In July this year, I attended the GMIS Summit held by the Heart of Machine, where I met Teacher Li Hang, added him on WeChat, and had a brief chat with him. He gave the project his recognition.

Many friends want to use fragments of time to study on their phones, so I have published the complete code in public-account (WeChat) articles and compiled a reading directory in this article; you can open the links and study online.

Buying the book

Please respect Teacher Li Hang’s work and say no to pirated copies.

About the book

The first edition of Statistical Learning Methods was published in 2012. It covers statistical machine learning methods, mainly the commonly used supervised learning methods. The second edition adds the commonly used unsupervised learning methods, so the book now covers the main content of traditional statistical machine learning. The first edition corresponds to the first twelve chapters of the second edition; the chapters added in the second edition deal with unsupervised learning:

Directory

Part 1 Supervised Learning (code complete; click the corresponding chapter to open it)

Chapter 1 Introduction to statistical learning and supervised learning

Chapter 2 The perceptron

Chapter 3 The k-nearest neighbor method

Chapter 4 The naive Bayes method

Chapter 5 Decision trees

Chapter 6 Logistic regression and the maximum entropy model

Chapter 7 Support vector machines

Chapter 8 Boosting methods

Chapter 9 The EM algorithm and its extensions

Chapter 10 Hidden Markov models

Chapter 11 Conditional random fields

Chapter 12 Summary of supervised learning methods
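To give a sense of what the reproduced code looks like, below is a minimal sketch of the primal-form perceptron from Chapter 2, written in plain NumPy. The function name, parameters, and toy data are illustrative only and are not taken from the repository.

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Primal-form perceptron: labels y are in {-1, +1}, X has shape (n_samples, n_features)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            # A point is misclassified when y * (w . x + b) <= 0
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
                updated = True
        if not updated:  # converged: no misclassified points remain
            break
    return w, b

# Linearly separable toy data in the spirit of the book's Chapter 2 example
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1, 1, -1])
print(train_perceptron(X, y))
```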

Part 2 Unsupervised Learning (code still in development)

Chapter 13 Introduction to unsupervised learning

Chapter 14 Clustering methods

Chapter 15 Singular value decomposition

Chapter 16 Principal component analysis

Chapter 17 Latent semantic analysis

Chapter 18 Probabilistic latent semantic analysis

Chapter 19 Markov chain Monte Carlo method

Chapter 20 Latent Dirichlet allocation

Chapter 21 The PageRank algorithm

Chapter 22 Summary of unsupervised learning methods
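The code for Part 2 is still being written, but to illustrate the kind of algorithm it will cover, here is a small power-iteration sketch for the PageRank method of Chapter 21. It is my own minimal example under assumed inputs (a column-stochastic matrix M and damping factor d), not code from the book or the repository.

```python
import numpy as np

def pagerank(M, d=0.85, tol=1e-8, max_iter=1000):
    """Power iteration for PageRank.

    M is a column-stochastic transition matrix (column j holds the
    out-link probabilities of page j); d is the damping factor.
    """
    n = M.shape[0]
    r = np.full(n, 1.0 / n)             # start from the uniform distribution
    teleport = np.full(n, (1.0 - d) / n)
    for _ in range(max_iter):
        r_next = d * M @ r + teleport   # r_{t+1} = d * M * r_t + (1 - d) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Tiny 3-page web: page 0 links to pages 1 and 2, page 1 links to 2, page 2 links to 0
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
print(pagerank(M))
```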

Appendix A Gradient descent method

Appendix B Newton’s method and quasi-Newton’s method

Appendix C Lagrange duality

Appendix D Basic subspaces of matrices

Appendix E Definition of KL divergence and properties of the Dirichlet distribution
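The appendices cover the basic optimization and matrix tools used throughout the book. As a quick illustration of Appendix A, here is a generic gradient descent sketch; the learning rate, stopping rule, and example objective are my own illustrative choices rather than material from the book.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-6, max_iter=10000):
    """Repeat x <- x - lr * grad(x) until the update step becomes negligible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = lr * grad(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Minimize f(x, y) = (x - 1)^2 + 2 * (y + 2)^2, whose gradient is (2(x - 1), 4(y + 2))
grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 2)])
print(gradient_descent(grad_f, x0=[0.0, 0.0]))  # approaches the minimizer (1, -2)
```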

Recommended learning methods

Save this article in WeChat, then click the links to the relevant chapters to study directly from it.

The articles also contain the complete code. If you want to download the code and study locally, please visit GitHub:

Github.com/fengdu78/li…