Abstract: With the development of artificial intelligence, AI can now "colorize" old black-and-white videos, reproducing scenes from the past and making black-and-white footage look lifelike.
Does the title look similar to yesterday's blog? Yesterday it was pictures, today it is video. In principle, the video is split frame by frame, each frame is colorized, and the frames are stitched back into a video… (I guess that's how it works.)
So let's carry on with the 18.04 quick-start demos. This time we follow gitee.com/lovingascen… to colorize a video.
Without further ado, we follow the instructions and take the approach of "cross-compiling the third-party libraries in the development environment, then importing them into the runtime environment for the program to call".
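In outline, that method boils down to three stages (a summary of the steps below, not extra commands to run):
# 1. build the third-party library natively in the x86 development environment
# 2. rebuild it for aarch64 with the cross toolchain
# 3. scp the resulting .so to the 200 DK and put its directory on the library path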
First, install Presenter Agent in the virtual machine environment:
Then follow suit:
bash configure
make -j8 (this compilation takes a while…)
sudo make install
Next, compile the ARM version of the .so in preparation for migrating it to the development board:
make distclean
./configure --build=x86_64-linux-gnu --host=aarch64-linux-gnu --with-protoc=protoc
make -j8 (wait patiently for the compilation to finish…)
sudo make install
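Before copying anything over, a quick sanity check (the path is taken from the scp command below) confirms the library really was built for ARM:
file $HOME/ascend_ddk/arm/lib/libpresenteragent.so
# expected: ELF 64-bit LSB shared object, ARM aarch64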
Synchronize the compiled third-party library to the runtime environment:
Note that the scp source path needs the extra arm directory level, otherwise the file will not be found:
scp $HOME/ascend_ddk/arm/lib/libpresenteragent.so HwHiAiUser@<development-board-IP>:/home/HwHiAiUser
Then log in to the development board and copy the .so file into place:
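A minimal sketch of what this step usually looks like (the target directory /usr/lib is my assumption; any directory the loader searches will do):
ssh HwHiAiUser@<development-board-IP>
# on the board: move the uploaded library somewhere the loader searches
sudo cp ~/libpresenteragent.so /usr/lib/
sudo ldconfig   # refresh the loader cache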
Let’s take the final step to get the video project:
Back to the virtual machine development environment:
unzip colorization_video.zip
Create the directory and download the model and weight files (same as last time):
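For reference, the general shape of this step (the directory name and URLs are placeholders; take the real ones from the manual, as in the image demo):
mkdir -p $HOME/models/colorization_video
cd $HOME/models/colorization_video
wget <model-.prototxt-URL-from-the-manual>
wget <weights-.caffemodel-URL-from-the-manual>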
Back in MindStudio, close the original project and open the newly downloaded one:
The following steps are basically the same as last time: convert the model, load the model into the project, compile, run…
Personally, I feel the model conversion can be skipped, since it was already done when colorizing the image; this time only Load is selected.
Check the IP address of the virtual NIC: 192.168.1.223
Then check that presenter_server_ip in script/presenterserver/display/config/config.conf matches it:
No problem.
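For reference, the relevant line of config.conf should end up looking like this (the key name comes from the path above; the address is the virtual NIC just checked):
presenter_server_ip=192.168.1.223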
src/colorize_process.cpp, line 106:
No problem.
Change the compile parameters. (The manual always tells you to use centos7.6, which is actually the default; meanwhile Target Architecture has to be changed to aarch64 every time, and the manual never says so. I have to say it for the manual…)
Click Build… The build and out folders are generated:
Start the Presenter Server for colorization_video:
Incredibly, it reports an error!!
(On this machine there are several Pythons: python3 installed via apt-get and python3.7.5 built from source.) Since the previous command used 3.7.5, the command should be:
python3.7.5 script/presenterserver/presenter_server.py --app=display &
(as given in the manual, in bold red)
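To confirm the server really came up, one quick check (the port number is taken from the browser step further down) is:
ss -lnt | grep 7009
# a socket listening on 7009 means the web UI is up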
Alternatively, you can switch to running it from MindStudio:
Apply, OK, then Run:
It seems the dynamic library libascendcl.so is missing.
Thanks to a tip from @Jokey, I went to the development board to check for the file:
And make sure LD_LIBRARY_PATH also points to the directory where the .so is located:
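On the board, the two checks together look roughly like this (the acllib path is only a guess at a typical 200 DK layout; use whatever directory find reports):
find / -name libascendcl.so 2>/dev/null
export LD_LIBRARY_PATH=/home/HwHiAiUser/Ascend/acllib/lib64:$LD_LIBRARY_PATH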
In addition, if I go to the project's out directory on the development board and execute it manually, it seems to complete:
I tried writing the LD_LIBRARY_PATH export into run.sh on the board, but it turns out that each Run regenerates run.sh as a single line, wiping the change.
Why that happens will need an expert to answer. Leaving it for now… (more on this later…)
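If anyone wants to experiment in the meantime, one guess at a workaround (untested here) is to set the variable in the board user's ~/.bashrc, which the Run action does not regenerate, rather than in run.sh:
echo 'export LD_LIBRARY_PATH=<directory-containing-libascendcl.so>:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc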
So what does presenter_server show while this is running?
Open a browser on the VM: http://127.0.0.1:7009
Click the view name and a viewing window pops up; while run.sh is executing, the picture updates here.
Judging from the video, playback is fairly smooth: fps=14.
I uploaded both the original video and the colorized video; interested readers can check out the effect. The remaining runtime problem is left for the experts to solve…
In addition, I recorded a 1-minute clip of Chaplin's Modern Times from Tencent Video (see the attachment) and put it into the data directory on the development board:
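The copy itself is just another scp (the filename is my own label for the clip; user and IP are the same as for the library upload earlier):
scp modern_times_1min.mp4 HwHiAiUser@<development-board-IP>:<project-path>/data/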
Then change the run parameter to the name of the mp4:
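For example, with the assumed filename from the sketch above, the Command Arguments field of the run configuration would read something like:
../data/modern_times_1min.mp4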
The final result looks like this:
The converted video (only a short segment; it ran so slowly that it spoiled the mood, so I didn't record the whole thing…) is also uploaded to the attachment.
As you can see from the conversion results, fps=8, which feels a bit slow. Maybe the 200 DK's inference power just isn't that fast; a 300I might be a little faster…
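A quick back-of-the-envelope check of what fps=8 means (assuming the source clip plays at 24 fps): 60 s × 24 frames/s = 1440 frames; at 8 frames/s of inference that is 1440 / 8 = 180 s, so roughly three minutes to process one minute of video.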
Attached is a list of the previous installments in this winding journey:
Huawei Atlas first experience, written on the second day after the Atlas 200 DK launch bbs.huaweicloud.com/blogs/19384…
Atlas 200 DK installation documentary (1): the birth of 18.04.1, in pictures bbs.huaweicloud.com/blogs/19429…
Atlas 200 DK installation documentary (2): the birth of video bbs.huaweicloud.com/blogs/19464…
Atlas 200 DK installation documentary (3): powerful YOLOv3 object detection, extracting what you want to see from the scene bbs.huaweicloud.com/blogs/19481…
Atlas 200 DK installation documentary (4): 18.04.1 software installation and dual-system switching verification bbs.huaweicloud.com/blogs/19522…
Atlas 200 DK installation documentary (5): Atlas restores black-and-white photographs to their original color bbs.huaweicloud.com/blogs/19539…
(End of full text. Thank you for reading; hopefully the problems in this article can be solved next time.)