1. Univariate linear regression

Data introduction: In this part of the exercise, you will implement linear regression with one variable to predict restaurant profits. Suppose you are the CEO of a restaurant chain and are considering opening a new restaurant in a different city, and you have the profit and population data for a number of cities.

1.1 Plot the raw data

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

data = pd.read_csv('ex1data1.txt', names=['population', 'profit'])  # read the data and name the columns

Show the first five rows of data

data.head()

data.plot(kind='scatter', x='population', y='profit', figsize=(12, 8))
plt.show()

1.2 Data Processing

Insert a feature column whose values are all 1 so that the bias term can be absorbed into the matrix multiplication.
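With the all-ones column x0, the hypothesis used throughout this exercise can be written as a single matrix product:

$$h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 = \theta^T x, \qquad x_0 = 1$$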

# insert a column of ones as x0
data.insert(0, 'Ones', 1)

Extract the features and the target values from the data.

# set X (training data) and y (target variable)
cols = data.shape[1]  # number of columns
X = data.iloc[:, 0:cols-1]  # X is all rows, every column except the last
y = data.iloc[:, cols-1:cols]  # y is all rows, the last column only

Convert the DataFrame to np.matrix.

# construct the matrices and vectors
X = np.matrix(X.values)
y = np.matrix(y.values)
# initialize the weights to zero as a 1*2 vector
theta = np.matrix(np.array([0, 0]))

1.3 Creating a loss function

Mathematical model of the loss function (mean squared error):
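With m the number of samples, the loss implemented by the code below is:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$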

# build the loss function
def computeCost(X, y, theta):
    inner = np.power(((X * theta.T) - y), 2)  # squared errors
    return np.sum(inner) / (2 * X.shape[0])  # X.shape[0] is the number of samples m
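As a quick sanity check (assuming the standard ex1data1.txt dataset used in this exercise), the cost at the initial θ = [0, 0] should come out to roughly 32.07:

# cost with all-zero weights; expect about 32.07 on ex1data1.txt
computeCost(X, y, theta)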

1.4 Creating the gradient descent function

Mathematical model (batch gradient descent):
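The update rule implemented by the code below, applied simultaneously to every θ_j, is:

$$\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$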

# construct the gradient descent function
def gradientDescent(X, y, theta, alpha, iters):
    temp = np.matrix(np.zeros(theta.shape))
    # number of features n (including the bias column)
    feature_num = int(theta.shape[1])
    # used to store the loss values
    cost = np.zeros(iters)
    # used to store the intermediate values of theta
    theta0_list = []
    theta1_list = []

    for i in range(iters):
        # inner term of the gradient descent update
        inner = (X * theta.T) - y
        # update each feature's weight separately
        for j in range(feature_num):
            term = np.multiply(inner, X[:, j])
            temp[0, j] = theta[0, j] - ((alpha / X.shape[0]) * np.sum(term))

        # save the intermediate values
        theta0_list.append(temp[0, 0])
        theta1_list.append(temp[0, 1])
        theta = temp.copy()
        cost[i] = computeCost(X, y, theta)
    return theta, cost, theta0_list, theta1_list

1.5 Train and plot the fit

Comparison of actual and predicted values

# learning rate and iteration count (hyperparameters; 0.01 and 1000 are typical for this exercise)
alpha = 0.01
iters = 1000

# perform gradient descent
g, cost, theta0, theta1 = gradientDescent(X, y, theta, alpha, iters)

# draw the fitted line
x = np.linspace(data.population.min(), data.population.max(), 100)  # 100 equally spaced points over the data range
f = g[0, 0] + (g[0, 1] * x)  # predicted profit

# set up the figure
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(x, f, 'r', label='Prediction')
ax.scatter(data.population, data.profit, label='Training Data')
ax.legend(loc=2)  # place the legend in the upper left
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size')
plt.show()

Loss value versus the number of iterations

# plot how the loss changes over iterations
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(np.arange(iters), cost, 'r')  # np.arange() generates the sequence of iteration indices
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
plt.show()

1.6 Plotting θ values against the loss in 3D

3D plotting in Python is worth exploring; here is the code first.

# Draw 3D graphics
from mpl_toolkits.mplot3d import Axes3D
# create object
fig = plt.figure()
ax = Axes3D(fig)
# specify the value range for each axis
axis = [min(theta0) - 0.5, max(theta0) + 0.5, min(theta1) - 0.5, max(theta1) + 0.5]
# build a coordinate grid; the extent and density are determined by the axis range
X0, X1 = np.meshgrid(
        np.linspace(axis[0], axis[1], int((axis[1] - axis[0]) * 200)).reshape(-1, 1),
        np.linspace(axis[2], axis[3], int((axis[3] - axis[2]) * 200)).reshape(-1, 1),
    )
# ravel() flattens a higher-dimensional array to one dimension, and np.c_[] concatenates the two arrays as columns, forming a matrix
X_grid_matrix = np.c_[X0.ravel(), X1.ravel()]

# compute the cost at every grid point
inner = np.power(((X * X_grid_matrix.T) - y), 2)  # m*K matrix of squared errors (y broadcasts across columns)
inner = inner.sum(axis=0) / (2 * X.shape[0])  # sum each column: one cost per grid point
Z = inner.reshape(X0.shape)

# set the tag
ax.set_xlabel("theta0")
ax.set_ylabel("theta1")
ax.set_zlabel("cost")

# Draw 3D surface map
ax.plot_surface(X0, X1, Z, rstride=10, cstride=10, cmap='rainbow')
ax.view_init(30, 70)  # Change perspective
plt.show()

Notes:

  • np.meshgrid(): builds a coordinate grid from two 1-D arrays of points, where X0 holds the x-coordinates and X1 holds the y-coordinates

  • np.c_[]: concatenates two one-dimensional arrays column-wise to form an N*2 matrix, i.e. the coordinate pair of each grid point

In addition, .sum(axis=0) is used when calculating the loss because it sums each column of the matrix, returning a 1*N matrix.
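A minimal illustration of these three helpers, using toy numbers rather than the exercise data:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20])
X0, X1 = np.meshgrid(a, b)              # X0 and X1 both have shape (2, 3)
grid = np.c_[X0.ravel(), X1.ravel()]    # 6*2 matrix: one (x, y) pair per row
m = np.matrix([[1, 2], [3, 4]])
print(m.sum(axis=0))                    # matrix([[4, 6]]): sums each column, 1*N result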

2. Implementing linear regression with TensorFlow

Data introduction: In this section, you will use multiple linear regression to predict housing prices. Suppose you are selling a house and want to know what a good market price would be. One approach is to first gather information about recently sold homes and build a model of home prices. The ex1data2.txt file contains a housing-price training set for Portland, Oregon: the first column is the size of the house (in square feet), the second column is the number of bedrooms, and the third column is the price of the house.

2.1 Read the data and normalize the features

Because the scales of the features differ greatly, feature normalization is necessary.
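The code below applies the usual z-score standardization to each column, with μ the column mean and σ the column standard deviation:

$$x' = \frac{x - \mu}{\sigma}$$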

# fetch the data
data = pd.read_csv('ex1data2.txt', header=None, names=['Size', 'Bedrooms', 'Price'])
# feature normalization
data = (data - data.mean()) / data.std()
data.head()

2.2 Data Extraction

Because tf placeholders are set to float32 and a DataFrame cannot be fed directly as feed data, we convert DataFrame --> ndarray.

# add a ones column
data.insert(0, 'Ones', 1)

# set X (training data) and y (target variable)
cols = data.shape[1]
X = data.iloc[:, 0:cols-1]
y = data.iloc[:, cols-1:cols]

# DataFrame --> ndarray
X = X.values.astype(np.float32)
y = y.values.astype(np.float32)

2.3 Creating a TF Function

The function returns the list of loss values and the final weights.

import tensorflow as tf  # TensorFlow 1.x API (tf.placeholder / tf.Session)

def linear_regression(X_data, y_data, alpha, epoch):
    # set a placeholder
    X = tf.placeholder(tf.float32, shape=X_data.shape)
    y = tf.placeholder(tf.float32, shape=y_data.shape)

    # Construct linear regression algorithm
    with tf.variable_scope('linear-regression'):
        W = tf.get_variable("weights",
                            (X_data.shape[1], 1),
                            initializer=tf.constant_initializer())  # create weight

        y_pred = tf.matmul(X, W)  # m*n @ n*1 -> m*1
        # Loss function
        loss = 1 / (2 * X_data.shape[0]) * tf.matmul((y_pred - y), (y_pred - y), transpose_a=True)  # (m*1).T @ m*1 = 1*1
    
    # Gradient descent optimization
    train_op = tf.train.GradientDescentOptimizer(alpha).minimize(loss)

    # run the session
    with tf.Session() as sess:
        # Variable initialization
        sess.run(tf.global_variables_initializer())
        loss_data = []

        for i in range(epoch):
            _, loss_val, W_val = sess.run([train_op, loss, W], feed_dict={X: X_data, y: y_data})
            # print(loss_val)
            loss_data.append(loss_val[0, 0])  # each loss_val is a 1*1 ndarray

            if len(loss_data) > 1 and np.abs(loss_data[-1] - loss_data[-2]) < 10 ** -9:  # early break when it's converged
                print('Converged at epoch {}'.format(i))
                break

    # clear the graph
    tf.reset_default_graph()
    return {'loss': loss_data, 'parameters': W_val}  # loss history and the learned weights

2.4 Plot the loss over iterations

Loss value versus the number of epochs

# run gradient descent (learning rate 0.1, up to 1000 epochs)
result = linear_regression(X, y, 0.1, 1000)

# plot how the loss changes over epochs
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(np.arange(len(result['loss'])), result['loss'], 'r')  # np.arange() generates the epoch indices
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
plt.show()