What would you do if you were given 100,000 images and asked to find the 10 most similar to one of them? Don’t give up, and don’t go through them one by one. You can do this easily with Python.

Question

Since writing “How to Recognize Images with Python and Deep Neural Networks?”, I’ve received a lot of feedback from readers. One of the most common questions is:

Is there a good way to recognize the same or similar images?

While I enjoy helping readers solve problems, to be honest, I didn’t quite understand the need at first.

The sample pictures in my article (Doraemon and WALL·E) were all collected from the Internet. If you need to find a similar image on the Web, just use Google’s “search by image” feature.

Soon it dawned on me.

The need is usually not to find a needle in a haystack of similar images on the Internet, but to find similar images within a large private collection.

This collection might be your team’s scientific data. Say you study birds: one day, while looking through images from a field camera, you spot something new.

You wonder when this bird has appeared before, how it lives, and so on. That means searching through a large number of images to find similar ones (most likely of the same bird).

Or the collection might be public security data. Say you work in counterterrorism, and the system suddenly spots a suspected terrorist in a sensitive area. Every time this person has appeared, a violent crime has followed, posing a serious threat to people’s lives and property.

Searching for similarities in clothing, appearance, or means of transportation becomes crucial.

In both cases, Google’s “search by image” engine, however powerful, can’t help, because you’re not going to upload these images to the Internet.

All right, that settles it. The need makes sense.

The next question, of course, is: how complex is this to solve?

Is there a high technical bar to cross? Do you have to spend a lot of money and hire experts?

In this article, I show you how to solve this problem with a dozen or so lines of Python code.

Data

For demonstration purposes, we will continue to use the collection of Doraemon and WALL·E images from “How to Recognize Images with Python and Deep Neural Networks?”.

I’ve prepared 119 Doraemon pictures and 80 WALL·E pictures for you. The images have been uploaded to the GitHub project.

Please click this link to download the zip package, then unzip it locally as our demo directory.

When you unzip it, you’ll see an image folder with two subdirectories, Doraemon and Walle.

The Doraemon directory is full of all kinds of pictures of the chubby blue robot cat.

The pictures in the Walle directory look like this:

Now that we have the data, let’s prepare the environment configuration.

Environment

In this article, we need to use TuriCreate, Apple’s machine learning framework.

Please note that TuriCreate was only recently released, and the list of currently supported operating systems is as follows:

This means that if you’re running Windows 7 or below, TuriCreate isn’t currently supported. If you need it, there are two workarounds:

First, upgrade to Windows 10 and use WSL (Windows Subsystem for Linux). I found a Chinese tutorial on using WSL for you. Please follow it step by step to complete the installation.

The second option is a virtual machine. I recommend VirtualBox, which is free and open source. Similarly, I found a very detailed Chinese tutorial for installing Ubuntu Linux in VirtualBox. You can use it to install Linux.

With system compatibility issues resolved, let’s install Anaconda on a TuriCreate-supported system.

Please download the latest version of Anaconda from this website. Scroll down the page to find the download section. The site will automatically recommend a suitable version based on the system you’re using. I use macOS, so I download the PKG file.

The download page shows Python version 3.6 on the left and 2.7 on the right. Please select version 2.7.

Double-click the downloaded PKG file and follow the installer’s instructions step by step.

After installing Anaconda, we install TuriCreate.

Please open your terminal and change into the sample directory we just downloaded and unzipped.

Execute the following command to create an Anaconda virtual environment named turi. If you already created this virtual environment while following “How to Recognize Images with Python and Deep Neural Networks?”, you can skip this step.

conda create -n turi python=2.7 anaconda

Next, we activate the turi virtual environment.

source activate turi

In this environment, we install (or upgrade to) the latest version of TuriCreate.

pip install -U turicreate

After installation, run the following command:

jupyter notebook

This brings us to the Jupyter Notebook environment. Let’s create a new Python 2 notebook.

A blank notebook appears in the browser.

Click the notebook name in the upper left corner and change it to something meaningful, such as “demo-python-image-similarity”.

Now that we’re ready, we can start writing the program.

Code

First, we import the TuriCreate package.

import turicreate as tc

We specify that the images are in the image folder, and have TuriCreate read all the image files into a data frame named data.

data  = tc.image_analysis.load_images('./image/')

Let’s look at the contents of the data frame:

data

data contains two columns of information. The first column is the path of each picture; the second describes the picture itself, including its height and width.

Next we ask TuriCreate to add a row number to each row of the data frame. It will serve as a label for each image, which we can use later when looking images up.

data = data.add_row_number()

Now let’s look at the contents of the data frame again:

data

Let’s take a look at the pictures that correspond to this information in the data frame.

data.explore()

TuriCreate will pop up a page showing us the contents of the data frame.

Hover your mouse over a thumbnail to see the larger image.

The first picture is of Doraemon:

The second picture is of WALL · E:

Now for the highlight: we ask TuriCreate to build an image similarity model from the input image collection.

model = tc.image_similarity.create(data)

This statement may take some time to execute. If you’re using TuriCreate for the first time, it may also need to download some data from the Web. Please be patient.

Resizing images...
Performing feature extraction on resized images...
Completed 199/199

Note that TuriCreate automatically did the preprocessing for us, such as resizing the images and extracting features from each one.

After the wait, long or short, the model is successfully built.

Now, let’s try to give the model an image and ask TuriCreate to help us pick the 10 most similar images from the current collection.

For convenience, we choose the first image as the query input.

Let’s use the show() function to show this image.

tc.Image(data[0]['path']).show()

Yes, it’s the same Doraemon.

Now let’s run a query, asking the model to find the 10 most similar images.

similar_images = model.query(data[0:1], k=10)

Soon, the system tells us we’ve found it.

We store the results in the similar_images variable, so let’s take a look at what images are in it.

similar_images

A total of 10 rows are returned, which matches what we asked for.

Each row of data contains four columns (see the snippet after this list for the actual column names):

  • the label of the query image
  • the label of the result image
  • the distance between the result image and the query image
  • the rank of the result image by similarity
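
If you want those column names programmatically, you can ask the result data frame directly. The exact names may vary slightly with your TuriCreate version, but they should come back as query_label, reference_label, distance, and rank; reference_label is the one we use below.

similar_images.column_names()
# Expected result (check against your own version):
# ['query_label', 'reference_label', 'distance', 'rank']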

With this information, we can see which images are most similar to the ones we entered.

Notice that the first result is actually our input image itself, which of course tells us nothing.

We extract the label (index) values of all the result images, ignoring the first one (itself).

similar_image_index = similar_images['reference_label'][1:]

The labels of the remaining 9 images are in the result:

similar_image_index
dtype: int
Rows: 9
[194, 158, 110, 185, 5, 15, 79, 91, 53]

Next we want TuriCreate to help us visualize the contents of these 9 images.

We filter the index list of all images, marking which ones appear among the 9 labels above:

filtered_index = data['id'].apply(lambda x : x in similar_image_index)

Look at the filtered index result:

filtered_index
dtype: int
Rows: 199
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ...]

You can count the positions of the entries marked 1 and check that they match the numbers stored in similar_image_index.
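
If counting by eye feels error-prone, here is a quick sanity check; it relies on the same boolean filtering of the data frame that the next statement uses:

# Collect the ids that survive the filter and compare them, ignoring order,
# with the labels returned by the similarity query.
print(sorted(data[filtered_index]['id']) == sorted(similar_image_index))
# Expect True if everything matches.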

Once verified, execute the following statement. We call TuriCreate’s explore() function again to display the images in the similarity query result.

data[filtered_index].explore()

The following dialog box is displayed:

We can see that only Doraemon appears in the query results; none of WALL·E’s pictures came up.

Similar picture search success!

Once you have worked through the sample data in this article, try it out on your own data.
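
To make that easier, here is the whole pipeline gathered in one place; “./my_images/” is just a placeholder for the path to your own image folder, and everything else is exactly the code we ran above.

import turicreate as tc

# Read your own images ('./my_images/' is a placeholder path) and number the rows.
data = tc.image_analysis.load_images('./my_images/')
data = data.add_row_number()

# Build the similarity model, then ask for the 10 images closest to the first one.
model = tc.image_similarity.create(data)
similar_images = model.query(data[0:1], k=10)
print(similar_images)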

Principle

Now that you’ve seen how to find similar images with a dozen or so lines of Python code, let’s talk about the principle behind this powerful yet concise approach.

If you are not interested in the principle, please skip this section and look at the summary.

Although we just built the model with a single statement:

model = tc.image_similarity.create(data)

However, TuriCreate actually does a lot of work for us behind the scenes.

First, it invokes a very complex model trained on a large data set.

As mentioned in “How to Recognize Images with Python and Deep Neural Networks?”, this model is the last entry in the figure above. It’s called ResNet-50: it has 50 layers and was trained, at great length, on millions of pictures.

Here’s a question you’re smart enough to ask: are Doraemon and WALL·E among those millions of pre-training pictures?

No.

That would be weird, wouldn’t it?

If this complex model had never seen Doraemon and WALL·E before, how would it know how to tell them apart? How could it tell that two Doraemon pictures differ from each other less than a Doraemon picture differs from a WALL·E picture?

In “How to Recognize Images with Python and Deep Neural Networks?”, I gave you a key term: transfer learning.

We won’t go into the technical details here. I’m just going to show you at the conceptual level.

Remember this diagram of computer vision (convolutional neural network)?

Before the fully connected layer, the input is convolved and downsampled, then convolved and downsampled again, several times over. These intermediate layers describe the basic features of the image, such as the approximate shapes of edges and the dominant colors of a region.

At the fully connected layer, you are left with a single set of numbers, possibly a long one, that captures the extracted features of your input.

If your input is a cat, the fully connected layer describes all kinds of information about the cat: the color of its fur, the arrangement of its facial features, the shapes of its edges…

This model can help you extract the characteristics of the cat, but it doesn’t know what the concept of “cat” is.

Naturally, you can use it to extract features from a dog.

Similarly, pictures of Doraemon, like pictures of cats, are two-dimensional arrangements of colors.

But can a model trained with other images extract features from Doraemon’s photos?

Of course you can!

The key to transfer learning here is to freeze all the trained intermediate layers and use the model, trained on other image sets, to convert each image directly into a feature description.

In the work that follows, only this final feature description (the fully connected layer) is used for classification and similarity calculation.

The iterative training of dozens of layers of parameters is eliminated.
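
To make the idea concrete, here is a minimal sketch of this kind of frozen feature extraction. It uses Keras’s pretrained ResNet-50 rather than TuriCreate’s internals, so take it as an illustration of the concept, not as what TuriCreate literally runs:

import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image

# Load ResNet-50 with pretrained weights but without its classification head;
# global average pooling leaves a single 2048-dimensional feature vector.
extractor = ResNet50(weights='imagenet', include_top=False, pooling='avg')

def extract_features(path):
    # Resize to the input size the network expects and apply its preprocessing.
    img = image.img_to_array(image.load_img(path, target_size=(224, 224)))
    batch = preprocess_input(np.expand_dims(img, axis=0))
    # The frozen network turns the picture into one feature vector.
    return extractor.predict(batch)[0]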

No wonder such high accuracy can be achieved with such a small data set; no wonder a working model can be obtained in such a short time.

Transfer learning is the ability to transfer experience and knowledge accumulated on one task to a similar new task.

Humans are much more capable of learning by transfer than machines.

In a recent article, Hugo Award author Hao Jingfang described this powerful ability to learn:

Young children can learn quickly, learn from small data, and get the idea of a class. A child can easily tell the difference between the concept of “duck” and each specific duck. The former is an abstract “class”, the latter is a concrete thing. A child doesn’t need to look at many pictures of ducks to get the idea of an abstract “class.”

To describe it with a Chinese idiom, it is roughly “comprehending by analogy”.

If human beings were not good at transfer learning and had to learn everything in life from scratch as something entirely new, the consequences would be unimaginable. Compared to the amount of information we process in a lifetime, that cognitive load would be overwhelming.

Back to our problem: if the model can turn every image into a long list of numbers at the fully connected layer, then telling how similar two images are becomes easy, because it reduces to a simple vector distance problem.

Doing such simple numerical calculations would be tedious for us humans, but the computer crunches through them quite happily.

The vectors separated by the smallest distances are found, and the images they describe are the ones the model judges most similar.
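
As a sketch of that last step (features here is a hypothetical array holding one feature vector per image, such as the vectors produced by the extractor sketched above):

import numpy as np

def nearest_neighbors(features, query_vector, k=10):
    # Euclidean distance from the query vector to every image's feature vector.
    distances = np.linalg.norm(features - query_vector, axis=1)
    # Indices of the k smallest distances, i.e. the k most similar images.
    return np.argsort(distances)[:k]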

Summary

Building on “How to Recognize Images with Python and Deep Neural Networks?”, this article has further introduced:

  • how to use TuriCreate to quickly build an image similarity model;
  • how to query the k images most similar to a given image;
  • how to visually display the set of images returned by a query;
  • the principle behind TuriCreate’s image classification and similarity calculation;
  • the basic concept of transfer learning.

If you haven’t read “How to Recognize Images with Python and Deep Neural Networks?”, I strongly recommend you read it first. It will help you better understand how computer vision based on deep neural networks works.

Discussion

Have you ever had to look for a needle in a haystack of similar images? How did you handle it? What tools and methods worked well for you? How do they compare with the approach in this article? Feel free to leave a comment and share your experience and thoughts, so we can all discuss together.

If you liked this article, please give it a thumbs-up. You can also follow and pin my official WeChat account, “Nkwangshuyi”.

If you’re interested in data science, check out my tutorial index post “How to Get Started in Data Science Effectively”, where you’ll find more interesting problems and solutions.