To review: logistic regression is a classification algorithm whose decision boundary is fit to the overall distribution of the classes. This leads to an obvious weakness: outliers can pull the boundary around and seriously distort the classification results. The support vector machine is introduced here to address this.
The Support Vector Machine (SVM) instead maximizes the margin, the minimum distance between the decision boundary and the nearest points of each class, so it is far less sensitive to outliers. Its decision function simply classifies a point by which side of the decision boundary it falls on. Its loss function is more involved: in the general (soft-margin) case, one part rewards a large margin between the boundary and the classes, and the other part penalizes misclassified points. Surprisingly, this form turns out to be equivalent in shape to L2-regularized logistic regression. The two are of course essentially different models with different derivations, but the resemblance proves at least one thing: they search the same space of linear decision boundaries.
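To make the comparison concrete, here is one standard way of writing the two objectives (notation is mine, with labels $y_i \in \{-1, +1\}$ and regularization constants $C$ and $\lambda$):

$$\text{soft-margin SVM:}\quad \min_{w,b}\ \frac{1}{2}\lVert w\rVert_2^2 + C\sum_{i=1}^{n}\max\bigl(0,\ 1 - y_i(w^{\top}x_i + b)\bigr)$$

$$\text{L2 logistic regression:}\quad \min_{w,b}\ \frac{\lambda}{2}\lVert w\rVert_2^2 + \sum_{i=1}^{n}\log\bigl(1 + e^{-y_i(w^{\top}x_i + b)}\bigr)$$

Both have the shape "regularizer plus margin-based loss"; only the loss (hinge versus logistic) differs, and in both cases the boundary $w^{\top}x + b = 0$ is linear.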
The support vector machine comes with several remarkable tricks. One is duality: provided the KKT conditions are satisfied, the original minimization problem can be transformed, via the Lagrange multiplier method, into an equivalent maximization problem, which is then much easier to solve.
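A sketch of this in the hard-margin case (this is the standard derivation; notation is mine). The primal problem is

$$\min_{w,b}\ \frac{1}{2}\lVert w\rVert_2^2 \quad\text{s.t.}\quad y_i(w^{\top}x_i + b) \ge 1,\quad i = 1,\dots,n.$$

Introducing multipliers $\alpha_i \ge 0$ gives the Lagrangian

$$L(w, b, \alpha) = \frac{1}{2}\lVert w\rVert_2^2 - \sum_{i=1}^{n}\alpha_i\bigl[y_i(w^{\top}x_i + b) - 1\bigr].$$

Setting $\partial L/\partial w = 0$ yields $w = \sum_i \alpha_i y_i x_i$, and $\partial L/\partial b = 0$ yields $\sum_i \alpha_i y_i = 0$; substituting back produces the dual:

$$\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\, x_i^{\top}x_j \quad\text{s.t.}\quad \alpha_i \ge 0,\ \sum_{i=1}^{n}\alpha_i y_i = 0.$$

Note that the data enter only through the inner products $x_i^{\top}x_j$, which is exactly what the next trick exploits.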
Another is the kernel trick, which obtains nonlinear boundaries by implicitly mapping the raw data into a higher-dimensional space. Because the dual depends on the data only through inner products, one can replace $x_i^{\top}x_j$ with a kernel function $K(x_i, x_j)$ and never compute the mapping explicitly.
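A minimal sketch of the effect, assuming scikit-learn is available (the dataset and parameters here are just for illustration):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles: not separable by any straight line in 2-D.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# A linear kernel keeps the original space; the RBF kernel implicitly maps
# the data to a higher-dimensional space where the classes become separable.
linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

print("linear kernel accuracy:", linear_svm.score(X, y))  # poor on this data
print("RBF kernel accuracy:", rbf_svm.score(X, y))        # near 1.0
```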
The support vector machine is a binary classifier. For multi-class problems, you can train several binary classifiers and combine them into a multi-class classifier, typically via a one-vs-rest or one-vs-one scheme, as sketched below.
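For instance, one-vs-rest trains one binary SVM per class and predicts the class whose classifier scores highest. A minimal sketch, again assuming scikit-learn (the choice of LinearSVC is mine):

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)

# One binary LinearSVC per class; prediction takes the highest decision score.
clf = OneVsRestClassifier(LinearSVC(max_iter=10000)).fit(X, y)

print(len(clf.estimators_))  # 3 binary classifiers for the 3 iris classes
print(clf.score(X, y))
```

For what it's worth, scikit-learn's kernelized SVC already handles multi-class input internally using a one-vs-one scheme.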
The support vector machine can be seen as a culmination of traditional machine learning, and its ideas have deeply influenced other algorithms. Even in deep learning its traces remain: the tanh of the SVM's sigmoid kernel reappears as an activation function, and duality-based optimization and kernel techniques are still in use.