1. What is logistic regression

Logistic regression is a classification algorithm. We are all familiar with linear regression, whose general form is Y = aX + b, where Y ranges over (-∞, +∞). With so many possible values, how do you classify with it? Don’t worry, great mathematicians have found a way for us.

The idea is to pass the result Y through a nonlinear transformation, the Sigmoid function, which yields a number S in the range (0, 1). S can be regarded as a probability. If we set the probability threshold to 0.5, then a sample with S greater than 0.5 can be treated as a positive sample, one with S less than 0.5 as a negative sample, and classification can be carried out.

2. What is the Sigmoid function

The function formula is as follows:

$$S(t) = \frac{1}{1 + e^{-t}}$$

No matter what the value of t is, the result of the function lies in the interval (0, 1). Recall that a binary classification question has two answers, “yes” and “no”: 1 corresponds to “yes” and 0 corresponds to “no”. Some people then ask: the output lies in the interval (0, 1), so how can there be only 0 and 1? This is a good question. We assume a classification threshold of 0.5: outputs above 0.5 are classified as 1, and those below 0.5 as 0. The threshold can be set to suit the task.

Ok, then we substitute aX + b for t to get the general model equation of logistic regression:

$$P = \frac{1}{1 + e^{-(aX + b)}}$$
The result P can be understood as a probability: a probability greater than 0.5 belongs to class 1, and a probability less than 0.5 belongs to class 0, which achieves the purpose of classification.
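To make this concrete, here is a minimal Python/NumPy sketch (the weights a and b below are made-up values for illustration, not fitted ones):

```python
import numpy as np

def sigmoid(t):
    """Map any real t into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-t))

def predict(X, a, b, threshold=0.5):
    """P = sigmoid(aX + b); classify as 1 if P > threshold, else 0."""
    p = sigmoid(X @ a + b)                   # probability of positive class
    return (p > threshold).astype(int), p

# Toy usage with made-up weights
X = np.array([[1.0, 2.0], [-1.0, -2.0]])
a = np.array([0.5, -0.25])
b = 0.1
labels, probs = predict(X, a, b)
print(labels, probs)
```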

3. What is the loss function

The loss function of logistic regression is the log loss, also known as the log-likelihood loss. The function formula is as follows:

$$\mathrm{Cost}(h(x), y) = \begin{cases} -\log(h(x)), & y = 1 \\ -\log(1 - h(x)), & y = 0 \end{cases}$$

In the formula, y = 1 means the first expression is used when the true label is 1, and the second expression is used to compute the loss when the true label is 0. Why use the log function? Think about it: if the true label is 1 but the model outputs h = 0, then -log(0) = +∞, which is the maximum penalty for the model; when h = 1, -log(1) = 0, so there is no penalty and no loss, which is the optimal result. That is why mathematicians chose the log function for the loss.

Finally, gradient descent is applied to find the minimum of this loss, which yields the desired model.
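As a minimal sketch (the learning rate and iteration count are arbitrary illustrative choices), here are the log loss and a plain batch gradient descent loop; for the log loss, the gradient with respect to w works out to X^T(h - y)/n:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_loss(y, h, eps=1e-12):
    """Average log loss: -log(h) when y = 1, -log(1 - h) when y = 0."""
    h = np.clip(h, eps, 1 - eps)             # avoid log(0)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def fit_gd(X, y, lr=0.1, n_iter=1000):
    """Batch gradient descent on the log loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        h = sigmoid(X @ w + b)
        w -= lr * X.T @ (h - y) / len(y)     # gradient of log loss w.r.t. w
        b -= lr * np.mean(h - y)             # gradient w.r.t. b
    return w, b
```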

4. Can it handle multiple categories?

Yes. In fact, we can extend the binary classification approach to a multi-class problem using one-vs-rest (OvR); the steps are as follows:

1. Take type class1 as the positive class and all other types as negative; train a classifier to get P1, the probability that a sample’s label is class1.

2. Then take the next type, class2, as the positive class and all other types as negative, and obtain P2 in the same way.

3. Repeating this cycle, we obtain Pi, the probability that the label of the sample to be predicted is class i. Finally, we take the class corresponding to the largest of these probabilities as the predicted label.

In short, binary classifiers are applied in turn, and the class with the maximum probability is taken as the result.
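A self-contained sketch of the one-vs-rest idea (the trainer and its hyperparameters are minimal illustrative choices):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_binary(X, y, lr=0.1, n_iter=1000):
    """Minimal binary logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        h = sigmoid(X @ w + b)
        w -= lr * X.T @ (h - y) / len(y)
        b -= lr * np.mean(h - y)
    return w, b

def one_vs_rest(X, y, classes):
    """Train one binary classifier per class: class c vs. all the rest."""
    return {c: fit_binary(X, (y == c).astype(float)) for c in classes}

def predict_ovr(X, models):
    """Pick the class whose classifier outputs the highest probability."""
    classes = list(models)
    probs = np.column_stack(
        [sigmoid(X @ w + b) for w, b in (models[c] for c in classes)]
    )
    return np.array(classes)[np.argmax(probs, axis=1)]
```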

5. What are the advantages of logistic regression

  • LR can output results as probabilities, not just hard 0/1 decisions.
  • LR is highly interpretable and controllable.
  • Training is fast, and with good feature engineering the results are strong.
  • Because the output is a probability, LR can be used as a ranking model.

6. What are the applications of logistic regression

  • CTR estimation / recommendation-system learning to rank / various classification scenarios.
  • The baseline version of a search-engine company’s advertising CTR estimation is LR.
  • The baseline version of an e-commerce company’s search ranking / advertising CTR estimation is LR.
  • A large amount of LR is used in an e-commerce company’s shopping-combination recommendations.
  • The ranking baseline of a news app whose ads now bring in more than 1,000 million a day is LR.

7. What are the commonly used optimization methods for logistic regression

7.1 First-order methods

Gradient descent, stochastic gradient descent, and mini-batch gradient descent. Stochastic gradient descent is not only faster than plain gradient descent; its noisy updates can also suppress convergence to local optima to a certain extent. A mini-batch sketch follows.
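Here is a sketch of the mini-batch stochastic variant (batch size, learning rate, and epoch count are arbitrary illustrative choices); each update uses a random subset of the data, which makes steps cheaper and adds noise that can help escape poor local solutions:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_minibatch_sgd(X, y, lr=0.1, n_epochs=50, batch_size=32, seed=0):
    """Mini-batch SGD: each step uses a random batch instead of all data."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(y)
    for _ in range(n_epochs):
        idx = rng.permutation(n)             # reshuffle every epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            h = sigmoid(X[batch] @ w + b)
            w -= lr * X[batch].T @ (h - y[batch]) / len(batch)
            b -= lr * np.mean(h - y[batch])
    return w, b
```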

7.2 Second-order methods: Newton’s method and quasi-Newton methods

Here is a brief description of the basic principle of Newton’s method and its application. For root finding, Newton’s method repeatedly takes the point where the current tangent line crosses the x-axis as the next estimate, until it converges to the intersection of the curve with the x-axis, i.e., the solution of the equation. In practical applications we usually need to solve a convex optimization problem, that is, to find the position where the first derivative of the function is 0, and Newton’s method can be adapted to this: first select a starting point, perform a second-order Taylor expansion there, solve for the point where the derivative of the expansion is 0, and repeat the update until the requirement is reached. Because it uses second-order information, Newton’s method converges faster than first-order methods. In practice x is usually a multidimensional vector, which leads to the concept of the Hessian matrix (the matrix of second partial derivatives with respect to x).
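For reference, a sketch of the updates just described, written in conventional notation: root finding on the left, one-dimensional optimization in the middle, and the multidimensional optimization case on the right, where H is the Hessian:

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}, \qquad \mathbf{x}_{k+1} = \mathbf{x}_k - H^{-1}\nabla f(\mathbf{x}_k)$$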

Disadvantages: the basic Newton method is a fixed-length iteration without a step-size factor, so it cannot guarantee a stable decrease of the function value, and may even fail in severe cases. Newton’s method also requires the function to be twice differentiable. Moreover, computing the inverse of the Hessian matrix is very expensive.

Quasi-Newton methods: methods that construct an approximate positive-definite symmetric matrix to stand in for the Hessian, without computing second partial derivatives, are called quasi-Newton methods. The idea is to approximate the Hessian matrix or its inverse in a way that satisfies the quasi-Newton condition. The main variants are DFP (approximates the inverse of the Hessian), BFGS (directly approximates the Hessian), and L-BFGS (reduces the storage space required by BFGS). A usage sketch follows.
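In practice quasi-Newton methods are rarely written by hand. As a sketch, SciPy’s general-purpose optimizer exposes an L-BFGS solver; the data below is random toy data and the setup is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loss_and_grad(w, X, y):
    """Log loss and its gradient, in the (f, grad) form SciPy expects."""
    h = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    loss = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    grad = X.T @ (h - y) / len(y)
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy data
y = (X[:, 0] + X[:, 1] > 0).astype(float)

res = minimize(loss_and_grad, x0=np.zeros(3), args=(X, y),
               jac=True, method="L-BFGS-B")   # L-BFGS solver
print(res.x)                                  # fitted weights
```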

8. Why should features be discretized in logistic regression?

  1. Nonlinear! Nonlinear! Nonlinear! Logistic regression is a generalized linear model with limited expressive power. After a single variable is discretized into N dummy variables, each variable has its own weight, which is equivalent to introducing nonlinearity into the model; this improves its expressive power and fit. Discrete features are also easy to add and remove, which makes fast iteration easy.
  2. Fast! Fast! Fast! The inner product of sparse vectors is fast to compute, and the results are easy to store and scale.
  3. Robustness! Robustness! Robustness! Discretized features are robust to abnormal data: for example, if a feature is “age > 30” it is 1, otherwise 0. If the feature were not discretized, an abnormal value such as “300 years old” would greatly disturb the model (a binning sketch follows this list).
  4. Convenient feature crossing and combination: after discretization, feature crosses can turn M + N variables into M * N variables, which further introduces nonlinearity and improves expressive power.
  5. Stability: after feature discretization the model is more stable. For example, if user ages are bucketed into intervals such as 20-30, a user does not become a completely different sample just by turning one year older. Of course, samples near the interval boundaries are the opposite, so how to divide the intervals is an art.
  6. Simplified model: feature discretization simplifies the logistic regression model and reduces the risk of overfitting.
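A small binning sketch (the bin edges here are arbitrary illustrative choices) showing how discretization absorbs an outlier like “300 years old”:

```python
import numpy as np

def discretize_age(ages, bins=(18, 25, 30, 40, 55)):
    """One-hot encode ages into intervals; bin edges are illustrative."""
    idx = np.digitize(ages, bins)            # interval index for each age
    onehot = np.zeros((len(ages), len(bins) + 1))
    onehot[np.arange(len(ages)), idx] = 1.0
    return onehot

ages = np.array([22, 31, 300])               # 300 is an abnormal value
print(discretize_age(ages))
# The outlier 300 simply lands in the last bucket instead of
# dominating a continuous "age" feature.
```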

9. What will be the result of increasing L1 regularization in the objective function of logistic regression?

The weights become sparse: L1 regularization pushes parameters toward exactly 0, and if the regularization strength keeps increasing, eventually all the parameters w will be 0.
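A quick way to observe this with scikit-learn on toy random data (in scikit-learn, smaller C means a stronger L1 penalty):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # only 2 of 10 features matter

for C in (1.0, 0.1, 0.01):                 # smaller C = stronger L1 penalty
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    n_zero = int(np.sum(clf.coef_ == 0))
    print(f"C={C}: {n_zero} of {clf.coef_.size} weights are exactly 0")
```

As C shrinks, the count of zero weights grows, which is the sparsity effect described above.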

10. Code implementation

GitHub:github.com/NLP-LOVE/ML…

Author: @mantchs

GitHub:github.com/NLP-LOVE/ML…

Welcome to join the discussion and help improve this project! Group number: 541954936