Preface
Over the previous few articles we installed several mainstream AI frameworks, which involved a lot of code and got a bit dry. In this post we'll build on that, make some interesting AI applications, and jump on the no-code development bandwagon all the way.
The biggest challenges in building AI applications today lie in several areas:
- Data collection and annotation take a lot of time and manpower;
- Model training takes a long time and compute is expensive;
- Models generalize poorly, so a new scene often means a new model;
- The underlying algorithms are hard to grasp, and research institutes and universities are still the main force;
- Many fields lack mature commercial solutions.
As the technology matures, artificial intelligence should become a system's underlying layer, delivering its greatest value in cutting costs and raising efficiency. Like water and electricity, AI should be a resource anyone can tap at any time. The website below captures a bit of that spirit.
Teachable Machine
This website provides a simple AI training platform, mainly for supervised learning, and offers three types of AI applications: image classification, sound classification, and pose recognition. The trained model can be saved as TensorFlow, TensorFlow.js, or TensorFlow Lite, so it can easily be deployed to a mobile phone or a Raspberry Pi.
teachablemachine.withgoogle.com/
Training a model takes three steps: collect the data, train the model, and export the model.
1. Create a project
We'll start with a gesture recognition application: create a new project and select Image Project.
2. Collect data
You can upload images or capture them from the camera, and fill in a name for each category, such as ONE.
The camera is more convenient: hold down the Hold to Record button to record, and you can also choose the frame rate. Unsuitable samples can be deleted at any time during recording, and recording in several passes adds sample diversity.
Then add a second category, a third, and so on up to TEN, for ten valid categories. Remember to also add a "nothing" class as a negative category.
Tip:
Keep the number of images in each category at roughly the same level to avoid class imbalance.
3. Train the model
Epochs defaults to 50; as long as the model isn't overfitting, you can raise it as needed to improve accuracy.
Batch Size defaults to 16. Too small and convergence is harder; too large and generalization may suffer. With enough graphics memory, I'm a bit superstitious about going bigger.
The Learning Rate is the step size of each gradient-descent update. Too large and training tends to oscillate; too small and convergence slows. The default is 0.001.
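For intuition about where these three knobs live, here is a minimal Keras sketch of a comparable training setup. Teachable Machine doesn't expose its internals, so the MobileNetV2 backbone, the samples/ folder, and the preprocessing are all assumptions, not the site's actual code.

```python
import tensorflow as tf

# Hypothetical folder layout: one sub-directory of images per category (ONE/, TWO/, ...).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "samples/",
    image_size=(224, 224),
    batch_size=16,          # Batch Size: the UI default
)

# A small transfer-learning classifier, similar in spirit to what the site trains.
base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(10, activation="softmax"),    # ten gesture classes
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Learning Rate default
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=50)  # Epochs: the UI default
```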
You can also click Under the Hood to watch the curves for the whole training run, a bit like TensorBoard. At the end you can compute the accuracy and confusion matrix for each category. Perfect!
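If you ever want to reproduce the per-class accuracy and the confusion matrix outside the browser, a small sketch with scikit-learn (the label arrays here are made up for illustration) would look like:

```python
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical true and predicted labels from a held-out test set.
y_true = ["ONE", "ONE", "TWO", "nothing", "TWO"]
y_pred = ["ONE", "TWO", "TWO", "nothing", "TWO"]

print(confusion_matrix(y_true, y_pred, labels=["ONE", "TWO", "nothing"]))
print(classification_report(y_true, y_pred))  # per-class precision, recall, accuracy
```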
Tip:
Do not switch tabs during training; keep the tab in the foreground, otherwise the browser may throttle the background page and interrupt training.
4. Validate the model
After training, you can use the camera again to verify the classification results. The panel lists the model's predicted probability for each class, which is intuitive and easy to read.
If a category keeps being misclassified, add more samples to it in time to improve classification accuracy.
5. Export the model
Clicking the Export Model button brings up a dialog where we can select the TensorFlow Lite model and choose the export precision: Floating Point, Quantized, or EdgeTPU.
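For reference, the Floating Point and Quantized options correspond roughly to TensorFlow's own converter settings. A minimal sketch of that conversion, using a tiny stand-in model since the site does this step for you:

```python
import tensorflow as tf

# A tiny stand-in model; in practice this would be the trained classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1280,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_model = converter.convert()                     # Floating Point export

converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
quantized_model = converter.convert()                 # Quantized export

with open("model_quant.tflite", "wb") as f:
    f.write(quantized_model)
```

The EdgeTPU option additionally targets Google's Coral accelerator, which expects a fully integer-quantized model compiled with the Edge TPU compiler.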
Click the Download My Model button, and after a short wait the site packages the converted model into a converted_tflite.zip archive. It contains two files: model_unquant.tflite is the model file, and labels.txt is the classification label file.
If these two files look familiar, it's because they are exactly the two parameters we passed to our TensorFlow Lite application last time. So let's just run that program again and see what happens.
Deploy to Raspberry Pi
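For anyone curious what the deployment program does with those two files, here is a minimal inference sketch using the tflite-runtime interpreter. The file names match the export, but the test image and the [-1, 1] input scaling are assumptions:

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# The two files from converted_tflite.zip.
interpreter = Interpreter(model_path="model_unquant.tflite")
interpreter.allocate_tensors()
labels = [line.strip() for line in open("labels.txt")]

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
_, height, width, _ = input_details["shape"]

# Load a test image and scale it to the model's assumed input range.
img = Image.open("test.jpg").convert("RGB").resize((width, height))
x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0
interpreter.set_tensor(input_details["index"], x[np.newaxis, ...])
interpreter.invoke()

probs = interpreter.get_tensor(output_details["index"])[0]
print(labels[int(np.argmax(probs))], float(np.max(probs)))
```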
Cool!!!!!!
What's remarkable is that this time, without writing a single line of code, we got a gesture recognition AI application running. Extend it slightly, say by training the hand seals from Naruto and wiring the back end to an IFTTT applet, and you can unlock your smart door lock with a proper sense of ritual.
It's easy to imagine turning on the air conditioner with a wave of your hand or turning off the bedroom lamp with a snap of your fingers; the rest of these smart-home applications are left to your imagination.
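As a sketch of that IFTTT hookup: the Webhooks service exposes a trigger URL, so firing an applet when a gesture is recognized takes only a few lines. The event name and key below are placeholders you'd get from your own IFTTT account.

```python
import requests

IFTTT_KEY = "your-webhooks-key"   # placeholder: from the IFTTT Webhooks service
EVENT = "unlock_door"             # placeholder: your applet's event name

def trigger_ifttt(gesture: str) -> None:
    """Fire an IFTTT webhook, passing the recognized gesture as value1."""
    url = f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}"
    requests.post(url, json={"value1": gesture}, timeout=5)

# e.g. after inference:
# if labels[int(np.argmax(probs))] == "SEAL":
#     trigger_ifttt("SEAL")
```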
Next up
Next we'll deploy an Intel Neural Compute Stick on the Raspberry Pi to further speed up inference. Stay tuned…