“This is the 16th day of my participation in the November Gwen Challenge. Check out the event details: The Last Gwen Challenge of 2021.”

🌊 Author’s home page: Haiyun 🌊 Author profile: 🏆 CSDN full-stack quality creator, 🥇 HDZ core group member

The term machine learning was coined in 1959 by Arthur Samuel, an American pioneer in computer gaming and artificial intelligence, who described it as giving “computers the ability to learn without being explicitly programmed.” In 1997, Tom Mitchell gave a more formal, “well-posed” definition:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.”

Machine learning has become a buzzword in recent years because it is one of the most interesting subfields of computer science. So what does machine learning really mean?

Let’s try to understand machine learning in layman’s terms. Think about trying to throw a wad of paper into the trash. After the first attempt, you realize that you pushed too hard. After the second try, you realize you are closer to the target but need to increase the throwing angle. What’s happening here is that we learn something after every throw and improve the end result. We are programmed to learn from our experience.

This means that machine learning focuses on tasks that admit a basic operational definition, rather than defining the field in cognitive terms. It follows Alan Turing’s proposal in his paper “Computing Machinery and Intelligence,” in which the question “Can machines think?” is replaced with “Can machines do what we (as thinking entities) can do?”

In data analysis, machine learning is used to design complex models and algorithms that make predictions; in commercial use, this is known as predictive analytics. These analytical models allow researchers, data scientists, engineers, and analysts to “produce reliable, repeatable decisions and results” and uncover “hidden insights” by learning from historical relationships and trends in data sets (the inputs).

Suppose you decide to check out holiday deals. You browse a travel agency website and search for hotels. When you look at a specific hotel, there is a section below the hotel description titled “You may also like these hotels.” This is a common use case for machine learning called a “recommendation engine.” Here, many data points are used to train a model that predicts the best hotels to show you in that section, based on the information the site already knows about you.
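The idea can be sketched with a toy content-based recommender: score every other hotel by how similar its feature vector is to the one being viewed. The hotel names and feature values below are made up purely for illustration.

```python
import math

# Hypothetical hotel feature vectors: [price tier, star rating, km to beach]
hotels = {
    "Sea Breeze":  [2.0, 4.0, 0.5],
    "City Inn":    [1.0, 3.0, 5.0],
    "Ocean View":  [3.0, 4.5, 0.3],
    "Budget Stay": [1.0, 2.0, 4.0],
}

def cosine_similarity(a, b):
    """Similarity of two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recommend(viewed, k=2):
    """Return the k hotels most similar to the one currently being viewed."""
    scores = [(name, cosine_similarity(hotels[viewed], vec))
              for name, vec in hotels.items() if name != viewed]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scores[:k]]

print(recommend("Sea Breeze"))  # beach-side hotels rank highest
```

A production recommendation engine would learn these feature weights from millions of bookings rather than hand-coding them, but the “find similar items” core is the same.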

So, if you want your program to predict, for example, traffic patterns at a busy intersection (task T), you can run it through a machine learning algorithm with data about past traffic patterns (experience E); if it has successfully “learned,” it will then do a better job of predicting future traffic patterns (performance measure P).

However, the high complexity of many real-world problems usually means that it is impractical, if not impossible, to invent specialized algorithms that solve them perfectly every time. Examples of machine learning problems include “Is this cancer?”, “Which of these people are good friends with each other?”, and “Will this person like this movie?” Such problems are excellent targets for machine learning, and in fact machine learning has been applied to them with great success.

Categories of machine learning

Machine learning implementations fall into several broad categories, depending on the nature of the learning “signal” or “feedback” available to the learning system, as follows:

Supervised learning: the algorithm learns from example data with associated target responses, which can be numeric values or string labels such as classes or tags, in order to later predict the correct response for new examples. This approach is indeed similar to human learning under the supervision of a teacher: the teacher provides good examples for the student to memorize, and the student then derives general rules from these specific examples.
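A minimal sketch of learning from labeled examples is a one-nearest-neighbor classifier: the program memorizes the teacher’s “good examples” and answers a new query with the label of the closest one. The features and labels below are invented for illustration.

```python
# Hypothetical labeled examples: (feature vector, label)
training_data = [
    ([1.0, 1.0], "cat"),
    ([1.2, 0.8], "cat"),
    ([4.0, 4.2], "dog"),
    ([4.5, 4.0], "dog"),
]

def predict(point):
    """1-nearest-neighbor: reply with the label of the closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda example: sq_dist(example[0], point))
    return label

print(predict([1.1, 0.9]))  # near the "cat" examples
print(predict([4.2, 4.1]))  # near the "dog" examples
```

The “general rule” here is implicit: new inputs are judged by their similarity to remembered examples, which is the simplest form of supervised prediction.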

Unsupervised learning: the algorithm learns from plain examples without any associated response, leaving it to determine the data patterns on its own. This type of algorithm tends to restructure the data into something else, such as new features that might represent a class, or a new series of uncorrelated values. It is quite useful for giving humans insight into the meaning of data, and for providing new useful inputs to supervised machine learning algorithms.

As a kind of learning, it resembles the methods humans use to figure out that certain objects or events belong to the same class, for example by observing how similar objects are to each other. Some recommendation systems that you find on the web in the form of marketing automation are based on this type of learning.
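A classic example of letting the algorithm find groups on its own is k-means clustering: points are assigned to their nearest centroid, and each centroid then moves to the mean of its group, with no labels involved at any point. This is a bare-bones sketch on made-up two-dimensional points.

```python
import random

def k_means(points, k, iterations=20, seed=0):
    """Minimal k-means: assign points to nearest centroid, then move centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random data points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # recompute centroid as the mean of its cluster
                centroids[i] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
    return centroids, clusters

# Two obvious groups of points; the algorithm discovers them unaided.
points = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
centroids, clusters = k_means(points, k=2)
print(sorted(len(c) for c in clusters))  # two groups of three emerge
```

Nothing told the algorithm what the groups mean; a human can inspect the resulting clusters for insight, or feed the cluster labels to a supervised learner as a new feature.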

Reinforcement learning: here the algorithm is presented with examples that lack labels, as in unsupervised learning. However, you can accompany an example with positive or negative feedback according to the solution the algorithm proposes. This category covers applications where the algorithm must make decisions (so the output is prescriptive rather than merely descriptive, as it is in unsupervised learning) and the decisions carry consequences. In the human world, it is just like learning by trial and error.

Mistakes help you learn because they carry a penalty (cost, lost time, regret, pain, and so on), teaching you that one course of action is less likely to succeed than another. An interesting example of reinforcement learning occurs when computers learn to play video games by themselves.

In this case, the application presents the algorithm with examples of specific situations, such as having the game player navigate a maze while avoiding enemies. The application lets the algorithm know the outcome of each action it takes, and it learns while trying to avoid what it discovers to be dangerous and to pursue survival. You can see how Google DeepMind created a reinforcement learning program that plays old Atari video games. When watching the video, notice how the program is clumsy and unskilled at first but steadily improves with training until it becomes a champion.
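Trial-and-error learning from positive and negative feedback can be sketched with a tiny two-action bandit agent: it tries actions, receives rewards, and gradually prefers the action that pays off more often. The success rates below are invented, and a real game-playing system such as DeepMind’s is vastly more sophisticated, but the feedback loop is the same.

```python
import random

# Hypothetical environment: two actions, and action 1 succeeds more often.
TRUE_SUCCESS_RATE = {0: 0.2, 1: 0.8}

def play(action, rng):
    """The environment grants +1 reward on success, 0 on failure."""
    return 1.0 if rng.random() < TRUE_SUCCESS_RATE[action] else 0.0

def train(episodes=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {0: 0.0, 1: 0.0}  # the agent's estimated value of each action
    count = {0: 0, 1: 0}
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            action = rng.choice([0, 1])
        else:
            action = max(value, key=value.get)
        reward = play(action, rng)
        count[action] += 1
        # Update the running-mean estimate from the reward feedback.
        value[action] += (reward - value[action]) / count[action]
    return value

value = train()
print(max(value, key=value.get))  # the agent learns to prefer action 1
```

Note that nobody labels any action as “correct”; the agent only sees the consequences of its own choices, which is exactly the prescriptive setting described above.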

Semi-supervised learning: an incomplete training signal is given, i.e. a training set in which some (often many) of the target outputs are missing. There is a special case of this principle known as transduction, where the entire set of problem instances is known at learning time and only part of the targets are missing.

Categorization by desired output

Another categorization of machine learning tasks arises when one considers the desired output of a machine learning system:

1. Classification: the inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one of these classes (or to several, in multi-label classification). This is typically tackled in a supervised manner. Spam filtering is an example of classification, where the inputs are email (or other) messages and the classes are “spam” and “non-spam.”

2. Regression: also a supervised problem, but the output is continuous rather than discrete.

3. Clustering: a set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand, which makes this typically an unsupervised task. Machine learning comes into play precisely when such problems cannot be solved by conventional means.
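The spam-filtering example in item 1 can be sketched as a crude keyword classifier: count how many known spam words a message contains and assign it to “spam” or “non-spam.” The keyword list and threshold here are made up; a real filter would learn such weights from labeled mail rather than hard-coding them.

```python
# Hypothetical list of words that suggest spam.
SPAM_WORDS = {"free", "winner", "prize", "urgent"}

def classify(message, threshold=2):
    """Label a message 'spam' if it contains at least `threshold` spam keywords."""
    words = set(message.lower().split())
    hits = len(words & SPAM_WORDS)
    return "spam" if hits >= threshold else "non-spam"

print(classify("You are a winner claim your free prize now"))  # spam
print(classify("Meeting moved to 3pm tomorrow"))               # non-spam
```

The output is discrete, one of a fixed set of classes, which is the defining property of a classification task.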
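Item 2, regression, can be illustrated with ordinary least squares fitting a line to continuous outputs. The house-size and price numbers below are fabricated to lie exactly on a line, so the fit recovers slope 3 and intercept 0.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: house size (sq m) vs. price (thousands), perfectly linear.
xs = [50, 60, 80, 100]
ys = [150, 180, 240, 300]
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # 3.0 0.0
```

Unlike the spam classifier above, the model’s output for a new input is a number on a continuous scale, which is what distinguishes regression from classification.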


Final words

The author is determined to build a slacking-off site with 100 mini-games; update progress: 40/100.

I’ve been blogging about technology for a long time, mostly on Nuggets, and this is Part 1 of my series on machine learning from the ground up. I like to share technology and happiness through my articles. You can visit my blog at juejin.cn/user/204034… for more. Hope you like it! 😊

💌 Your comments and suggestions are welcome in the comments section! 💌