The original link: machinelearningmastery.com/use-pre-tra…
Prerequisites:
- Windows 10
- Download and install Anaconda, and configure a conda mirror source (a domestic mirror)
- Install Jupyter Notebook, TensorFlow, and Keras with their dependencies on top of Anaconda
- Install Keras using pip
- Download the pretrained VGG model: 1) from GitHub, or 2) from Baidu Cloud Disk (extraction code: PBZ5)
- A second download from Baidu Cloud Disk (extraction code: Y8fq)
Classification results of the VGG model for the example images:
- 83.12% chance of being a "Border Collie"
- 52.12% chance of being a "drag"
- 93.01% chance of being a "coffee cup"
Step 1: Install and configure Anaconda
1.1 Anaconda installation tutorial on Windows 10
Reference link: juejin.cn/post/684490…
1.2 Configure the conda mirror source. The file is located at: C:\Users\To Kill a MockinBird\.condarc
Edit the .condarc file and insert:
channels:
- http://mirrors.ustc.edu.cn/anaconda/pkgs/free/
show_channel_urls: true
ssl_verify: true
report_errors: true
1.3 Check the Anaconda installation
- Open CMD and run the following command to check the Anaconda version:
conda -V
Note: If the installation succeeded, the Anaconda version number is displayed. If an error occurs, reinstall Anaconda.
Step 2: Install Jupyter Notebook and TensorFlow
2.1 Create an isolated Python 3.6 environment with Anaconda
TensorFlow support for Python 3.7 is not yet fully mature, so Python 3.6 is recommended.
Run the command: conda create -n tf-20190930 python=3.6
(tf-20190930 is a custom environment name.)
- After the Python environment is created, you can list your environments with the following command:
conda info --env
2.2 Install ipykernel for Jupyter in the isolated Python environment
- Before installing Jupyter, install ipykernel, which is Jupyter's kernel
- Activate the isolated Python environment:
conda activate tf-20190930
- Install ipykernel:
conda install ipykernel
2.3 Install Jupyter
- Run the following command to install Jupyter:
conda install nb_conda
- (Very important) Register the ipykernel with Jupyter by running:
python -m ipykernel install --user --name tf-20190930 --display-name 'tf-20190930'
| Parameter | Description |
|---|---|
| --user | Install the kernel spec for the current user |
| --name | The name of the isolated Python environment (tf-20190930) |
| --display-name | The name displayed in Jupyter Notebook (tf-20190930) |
2.4 Install Keras (which ships the VGG16 and VGG19 applications)
- If you cannot download packages directly, configure the pip source; here we use the Aliyun mirror
- The pip configuration file on Windows is located at:
C:\Users\To Kill a MockinBird\pip.ini
(Note: if the file does not exist, create it at this path.)
- The pip configuration file on Linux is located at:
/root/.pip/pip.conf
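A minimal pip.ini / pip.conf sketch pointing pip at the Aliyun mirror (the URL below is the commonly used Aliyun PyPI mirror; adjust it if your mirror differs):

```
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/

[install]
trusted-host = mirrors.aliyun.com
```

With the mirror configured, Keras can then be installed with: pip install keras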
2.5 Two ways to start Jupyter
- The first is from the Anaconda GUI
- The second is from the CMD command line (activate the environment, then run jupyter notebook)
- Startup succeeds
Step 3: Place the VGG model and the class index file (imagenet_class_index.json) in the specified directory
3.1 Default storage path of the pretrained model
- Default path on Windows 10:
C:\Users\To Kill a MockinBird\.keras\models
- Default path on Linux:
/root/.keras/
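The two paths above both resolve from the user's home directory; a minimal sketch of how to compute the cache location on any platform (path handling only, no Keras required):

```python
import os

# Keras caches downloaded weights and imagenet_class_index.json under
# ~/.keras (e.g. C:\Users\<user>\.keras\models on Windows,
# /root/.keras on Linux when running as root).
keras_dir = os.path.join(os.path.expanduser('~'), '.keras')
models_dir = os.path.join(keras_dir, 'models')
print(models_dir)
```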
3.2 In Jupyter, create a Python notebook that invokes the VGG model through the Keras API
3.3 Call the VGG model to classify an image; the code is as follows:
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing.image import load_img, img_to_array

# load the VGG16 model
model = VGG16()
# load the image; default storage path: C:\Users\To Kill a MockinBird\dots.jpg
image = load_img('dots.jpg', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape into a single-sample batch: (samples, rows, cols, channels)
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# predict the probability across all output classes
yhat = model.predict(image)
# convert the probabilities to class labels
labels = decode_predictions(yhat)
# retrieve the most likely result (highest probability)
label = labels[0][0]
# print the classification
print('%s (%.2f%%)' % (label[1], label[2] * 100))
3.4 Click "Run" to perform VGG convolutional-network image classification
(Note: the default image path is C:\Users\To Kill a MockinBird\dots.jpg)
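For reference, decode_predictions returns, for each input image, a list of (class_id, class_name, probability) tuples sorted best-first. The tuple below is a hypothetical top result matching the Border Collie percentage quoted at the start of the article, not real model output:

```python
# Hypothetical top prediction tuple (class_id, class_name, probability);
# decode_predictions(yhat)[0][0] has this shape.
label = ('n02106166', 'Border_collie', 0.8312)

# Same formatting as the classification script above
result = '%s (%.2f%%)' % (label[1], label[2] * 100)
print(result)  # Border_collie (83.12%)
```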