The data set

In deep learning, the quality of an algorithm is reflected in its results on the test set, and that feedback in turn drives updates and iteration of the algorithm.

Commonly used data sets

Reference links: zhuanlan.zhihu.com/p/55775309

  1. DataFountain
  2. Kesci
  3. Kaggle
  4. Google
  5. Microsoft data set
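The training code later in this post assumes the data is already loaded as tensors named x_data and y_data. A minimal sketch with made-up toy values (one input feature, binary labels):

    import torch

    # toy binary-classification data (made-up values): one feature per sample
    x_data = torch.tensor([[1.0], [2.0], [3.0]])
    # labels are 0 or 1, matching the sigmoid output of the model below
    y_data = torch.tensor([[0.0], [0.0], [1.0]])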

Model

If the model is complex, write it in a separate .py file and import it in the main file.

    # design model using class
    class LogisticRegressionModel(torch.nn.Module):
        def __init__(self):
            super(LogisticRegressionModel, self).__init__()
            # a 1 * 1 linear layer: one input feature, one output
            self.linear = torch.nn.Linear(1, 1)

        def forward(self, x):
            # y_pred = F.sigmoid(self.linear(x))  # older form, now deprecated
            y_pred = torch.sigmoid(self.linear(x))
            return y_pred

    model = LogisticRegressionModel()
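For example, assuming the class above is saved in its own file named model.py (a hypothetical name), the main file would pull it in like this:

    # main.py -- import the model class from the separate file (model.py is a hypothetical name)
    from model import LogisticRegressionModel

    model = LogisticRegressionModel()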

Loss function

You can also construct your own loss function by subclassing the nn.Module class.

    # By default the loss is averaged over elements; with size_average=False it is summed instead
    # (newer PyTorch versions use reduction='sum'; size_average is deprecated)
    criterion = torch.nn.BCELoss(size_average=False)
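As a sketch of the subclassing approach mentioned above, here is a hand-written binary cross-entropy (for illustration only; the class name is made up):

    class MyBCELoss(torch.nn.Module):
        """Hand-written BCE, equivalent to torch.nn.BCELoss(size_average=False)."""
        def forward(self, y_pred, y_true):
            eps = 1e-7  # clamp predictions to avoid log(0)
            y_pred = torch.clamp(y_pred, eps, 1 - eps)
            return -(y_true * torch.log(y_pred)
                     + (1 - y_true) * torch.log(1 - y_pred)).sum()

    criterion = MyBCELoss()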

The optimizer

Build or choose the optimizer based on your own use case and on feedback from the test set.

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
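If plain SGD converges too slowly on a given data set, swapping in another optimizer is a one-line change; Adam is a common alternative (a suggestion, not from the original post):

    # Adam adapts per-parameter learning rates and often needs less tuning than SGD
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)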

Training

Compare the average loss (for example MSE) before training with the average loss after several epochs, and plot the loss curve, to observe whether the model converges (reaches an optimum) or overfits.
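A minimal sketch of recording the loss per epoch and plotting the curve (matplotlib assumed to be installed; the list names are made up):

    import matplotlib.pyplot as plt

    epoch_list, loss_list = [], []
    # inside the training loop below, record each epoch:
    #     epoch_list.append(epoch)
    #     loss_list.append(loss.item())

    plt.plot(epoch_list, loss_list)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()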

Overfitting

If the model is too complex, trained for too many epochs, or fed too few samples, the heavily trained model can predict the training samples accurately but fails to generalize to samples it has never seen.
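A common way to spot overfitting is to evaluate the loss on held-out data at each epoch; a sketch assuming x_test and y_test tensors exist (the names are made up):

    # evaluate without tracking gradients
    with torch.no_grad():
        test_loss = criterion(model(x_test), y_test)
    # if training loss keeps falling while test_loss rises, the model is overfitting
    print('test loss:', test_loss.item())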

    # training cycle: forward, backward, update
    for epoch in range(1000):
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)
        print(epoch, loss.item())

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
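Once training finishes, the model outputs a probability for a new input; a usage sketch with a made-up value:

    # predicted probability that y = 1 for a new sample (4.0 is a made-up input)
    x_new = torch.tensor([[4.0]])
    with torch.no_grad():
        print(model(x_new).item())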