Do you still remember Mu Li's Dive into Deep Learning? Recently, a TF2.0 code reproduction project for the book has arrived.

Compiled by Heart of the Machine.

The open-source book Dive into Deep Learning, from UC Berkeley's Mu Li, has been widely praised since its launch. Many developers have reimplemented its code in various deep learning frameworks; as far as Heart of the Machine knows, MXNet (the original) and PyTorch versions already exist.

Recently, the book gained yet another code version: TensorFlow 2.0. The project made it onto GitHub's trending list on December 9, earning 100 stars in a single day.





Project address: https://github.com/TrickyGo/Dive-into-DL-TensorFlow2.0

According to the project author, the project is an update and refactoring based on the Chinese version of the book, with code that references the PyTorch version. It has currently been updated through Chapter 5 and is still being updated.

The two main authors of the project are from the School of Software and Microelectronics at Peking University, and the project has been authorized by Mu Li himself.




What the TF2.0 version of Dive into Deep Learning looks like

The project contains two folders, code and docs: code holds the code as Jupyter notebooks, while docs holds the book text in Markdown format. Because the original book uses MXNet, the code and text differ slightly from the original.

Reading the book content

Considering that Markdown does not render formulas well, the authors used Docsify (https://docsify.js.org/#/zh-cn/) to publish the text to GitHub Pages, so you can read the book as a web page.

https://trickygo.github.io/Dive-into-DL-TensorFlow2.0


According to the web page, the book is currently updated through Chapter 5. Considering that a small team is doing the refactoring, this is already quite a feat.

A look at the code

In the book, code and text are interspersed, so you can write the code as you read and check the results wherever you like.




Taking the construction of an MLP network as an example, the authors provide the most Pythonic implementation: define a class for the model and inherit from the tf.keras.Model base class. In TF2.0 code, this is the safer approach.
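To make this concrete, here is a minimal sketch of that subclassing style, not the project's exact code: the layer sizes below assume a Fashion-MNIST-like setup (28x28 inputs, 10 classes).

import tensorflow as tf

# A sketch of the subclassing style: inherit from tf.keras.Model and
# define the forward pass in call(). Layer sizes are illustrative
# assumptions, not the project's exact configuration.
class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()
        self.hidden = tf.keras.layers.Dense(256, activation='relu')
        self.out_layer = tf.keras.layers.Dense(10)

    def call(self, inputs):
        x = self.flatten(inputs)   # flatten each image to a vector
        x = self.hidden(x)         # hidden layer with ReLU activation
        return self.out_layer(x)   # raw logits for 10 classes

net = MLP()
x = tf.random.uniform((2, 28, 28))  # a dummy batch of two "images"
logits = net(x)                     # forward pass; shape (2, 10)

The advantage of this style is that the forward pass in call() is plain Python, so it is easy to read and to debug.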




There are, of course, simpler implementations as well; see the sketch below. In short, the code is very clean and easy to understand.
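For comparison, a sketch of what a simpler version typically looks like in TF2.0, assuming the common Sequential-API formulation (the layer sizes are again illustrative):

import tensorflow as tf

# The same MLP expressed with the Sequential API instead of subclassing;
# an assumption about which "simpler" code the article refers to.
net = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(10),
])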

The book's table of contents

Since Heart of the Machine has introduced the book before, only the table of contents is provided here for reference.

  • Introduction

  • Reading guide

  • 1. Introduction to deep learning

  • 2. Prerequisite knowledge

  • 2.1 Environment configuration

  • 2.2 Data manipulation

  • 2.3 Automatic gradient computation

  • 2.4 Consulting the documentation

  • 3. Fundamentals of deep learning

  • 3.1 Linear regression

  • 3.2 Implementation of linear regression from scratch

  • 3.3 Concise implementation of linear regression

  • 3.4 Softmax regression

  • 3.5 Image classification dataset (Fashion-MNIST)

  • 3.6 Implementation of softmax regression from scratch

  • 3.7 Concise implementation of softmax regression

  • 3.8 Multilayer perceptron

  • 3.9 Implementation of the multilayer perceptron from scratch

  • 3.10 Concise implementation of the multilayer perceptron

  • 3.11 Model selection, underfitting, and overfitting

  • 3.12 Weight decay

  • 3.13 Dropout

  • 3.14 Forward propagation, backpropagation, and computational graphs

  • 3.15 Numerical stability and model initialization

  • 3.16 Kaggle competition in practice: house price prediction

  • 4. Deep learning computation

  • 4.1 Model construction

  • 4.2 Access, initialization, and sharing of model parameters

  • 4.3 Deferred initialization of model parameters

  • 4.4 Custom layers

  • 4.5 Reading and storing

  • 4.6 GPU computing

  • 5. Convolutional neural networks

  • 5.1 Two-dimensional convolutional layers

  • 5.2 Padding and stride

  • 5.3 Multiple input and output channels

  • 5.4 Pooling layers

  • 5.5 Convolutional neural networks (LeNet)

  • 5.6 Deep convolutional neural networks (AlexNet)

  • 5.7 Networks using repeating elements (VGG)

  • 5.8 Network in network (NiN)

  • 5.9 Networks with parallel connections (GoogLeNet)

  • 5.10 Batch normalization

  • 5.11 Residual networks (ResNet)

  • 5.12 Densely connected networks (DenseNet)

  • 6. Recurrent neural networks

  • 6.1 Language models

  • 6.2 Recurrent neural networks

  • 6.3 Language model dataset (Jay Chou album lyrics)

  • 6.4 Implementation of recurrent neural networks from scratch

  • 6.5 Concise implementation of recurrent neural networks

  • 6.6 Backpropagation through time

  • 6.7 Gated recurrent units (GRU)

  • 6.8 Long short-term memory (LSTM)

  • 6.9 Deep recurrent neural networks

  • 6.10 Bidirectional recurrent neural networks

  • 7. Optimization algorithms

  • 7.1 Optimization and deep learning

  • 7.2 Gradient descent and stochastic gradient descent

  • 7.3 Mini-batch stochastic gradient descent

  • 7.4 Momentum

  • 7.5 AdaGrad algorithm

  • 7.6 RMSProp algorithm

  • 7.7 AdaDelta algorithm

  • 7.8 Adam algorithm

  • 8. Computational performance

  • 8.1 Mixed imperative and symbolic programming

  • 8.2 Asynchronous computing

  • 8.3 Automatic parallel computing

  • 8.4 Multi-GPU computing

  • 9. Computer vision

  • 9.1 Image augmentation

  • 9.2 Fine-tuning

  • 9.3 Object detection and bounding boxes

  • 9.4 Anchor boxes

  • 9.5 Multiscale object detection

  • 9.6 Object detection dataset (Pikachu)

  • To be updated…

  • 10. Natural language processing

  • 10.1 Word embedding (word2vec)

  • 10.2 Approximate training

  • 10.3 Implementation of word2vec

  • 10.4 Subword embedding (fastText)

  • 10.5 Word embedding with global vectors (GloVe)

  • 10.6 Finding synonyms and analogies

  • 10.7 Text sentiment classification: using recurrent neural networks

  • 10.8 Text sentiment classification: using convolutional neural networks (textCNN)

  • 10.9 Encoder-decoder (seq2seq)

  • 10.10 Beam search

  • 10.11 Attention mechanisms

  • 10.12 Machine translation

How to use the project

In the project's introduction, the author provides two ways to use it: read the book and its accompanying code on the web page and follow along step by step, or browse everything locally.

For local browsing, you first need to install the docsify-cli tool:

npm i docsify-cli -g
Then clone the project and enter its directory:
git clone https://github.com/TrickyGo/Dive-into-DL-TensorFlow2.0
cd Dive-into-DL-TensorFlow2.0
Finally, run a local server:

docsify serve docs

You can then open http://localhost:3000 in your browser and read the document with real-time rendering.