This is *Machine Learning Yearning*, Andrew Ng's recent book on machine learning, distilling roughly ten years of his experience. The manuscript is currently being released in installments. I plan to translate this book and take the opportunity to review machine learning systematically. Corrections to any shortcomings in the translation are welcome.
The table of contents:

1. Why Machine Learning Strategy
2. How to use this book to help your team
3. Prerequisites and Notation
4. Scale drives machine learning progress
5. Your development and test sets
6. Your dev and test sets should come from the same distribution
7. How large do the dev/test sets need to be?
8. Establish a single-number evaluation metric for your team to optimize
9. Optimizing and satisficing metrics
10. Having a dev set and metric speeds up iterations
11. When to change dev/test sets and metrics
12. Takeaways: Setting up development and test sets
13. Build your first system quickly, then iterate
14. Error analysis: Look at dev set examples to evaluate ideas
15. Evaluate multiple ideas in parallel during error analysis
16. If you have a large dev set, split it into two subsets, only one of which you look at
17. How big should the Eyeball and Blackbox dev sets be?
18. Takeaways: Basic error analysis
19. Bias and Variance: The two big sources of error
20. Examples of Bias and Variance
21. Comparing to the optimal error rate
22. Addressing Bias and Variance
23. Bias vs. Variance tradeoff
24. Techniques for reducing avoidable bias
25. Techniques for reducing Variance
26. Error analysis on the training set
27. Diagnosing bias and variance: Learning curves
28. Plotting training error
29. Interpreting learning curves: High bias
30. Interpreting learning curves: Other cases
31. Plotting learning curves
32. Why we compare to human-level performance
33. How to define human-level performance
34. Surpassing human-level performance
35. Why train and test on different distributions
36. Whether to use all your data
37. Whether to include inconsistent data
38. Weighting data
39. Generalizing from the training set to the dev set
40. Addressing Bias and Variance
41. Addressing data mismatch
42. Artificial data synthesis
43. The Optimization Verification test
44. General form of Optimization Verification test
45. Reinforcement learning example
46. The rise of end-to-end learning
47. More end-to-end learning examples
48. Pros and cons of end-to-end learning
49. Learned sub-components
50. Directly learning rich outputs
51. Error Analysis by Parts
52. Beyond supervised learning: What's next?
53. Building a superhero team – Get your teammates to read this
54. Big picture
55. Credits
PDF files released so far:
Machine_learning_yearning_v0.5_01.pdf
Machine_learning_yearning_v0.5_02.pdf
Machine_learning_yearning_v0.5_03.pdf