Forwarded from White Wolf Stack. See the original post.
Filters
What is a filter? Baidu Baike says that "filters are mainly used to achieve various special effects on images…".
We first encountered filters when learning how to transcode with FFmpeg, so let's review the transcoding flow chart from that article.
As you can see from the figure, dotted lines are drawn before and after the filtering step, indicating that it is optional. In terminology, filtering refers to processing the raw data (i.e. audio and video frames) produced by the decoder before it is re-encoded. We can also call such a processing unit a filter.
There are about 400 filters built into FFmpeg. You can list all of them with the ffmpeg -filters command, inspect a single one with ffmpeg -h filter=xxx, or check the official documentation for each filter.
Filters are used in most audio and video processing, so you can see how important they are.
Multiple filters can be combined into a filter chain or filter graph. Within a graph, each filter not only processes its input source: the result of filter A can also serve as the input of filter B, which continues the processing.
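As a small sketch of chaining (the file names in the comment are placeholders, not from the article): filters in a simple chain are joined with commas, so the output of scale flows straight into hflip, FFmpeg's horizontal-flip filter.

```shell
# Two filters joined with a comma form a chain: scale's output feeds hflip.
chain="scale=272:480,hflip"
echo "$chain"
# Would be used as: ffmpeg -i input.mp4 -vf "$chain" -y output.mp4
```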
For filter processing, FFmpeg provides two methods: simple filters and complex filters.
Simple filters
A simple filter has exactly one input and one output, and the input and output streams must be of the same type.
For example, at the end of the previous article on how to select streams, we scaled the original video r3.mp4 down to half its size:
ffmpeg -i r3.mp4 -vf scale=272:480 -y filter.mp4
-vf is short for -filter:v. Similarly, -af is short for -filter:a and is used to process audio streams.
The -filter option syntax is: -filter[:stream_specifier] filtergraph (output, per stream). Here we are using the scale filter.
The scale filter is used to adjust the size of the video, for example scaling it up or down proportionally. Without proportional scaling the output will be distorted, and we generally want to avoid a distorted result.
We know the resolution of the original video r1ori.mp4 is 544×960, so to scale it down by half the command above directly specifies 272:480. The scale filter takes many parameters; let's introduce some commonly used ones.
in_w and in_h (or iw and ih) are the width and height of the input video; out_w and out_h (or ow and oh) are the width and height of the output video.
Of course, the input and output don't have to be video; they can also be images.
So to scale the original video down by half, we can also write:
ffmpeg -i r3.mp4 -vf scale=iw/2:ih/2 -y filter.mp4
Q1: What if we want to set the width of the original video to 300 while keeping the original aspect ratio?
Let's set up the proportion 544/960 = 300/x, so x = 300×960/544 ≈ 529.4, which is awkward because it doesn't divide evenly. Fortunately we don't have to compute it ourselves: specify the height as -1 and FFmpeg will keep the aspect ratio automatically.
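The proportional height can be double-checked with a quick shell calculation (integer arithmetic, just as a sanity check):

```shell
# height that keeps the 544x960 aspect ratio at width 300 (integer division)
h=$(( 300 * 960 / 544 ))
echo "$h"   # 529, an odd number
```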
ffmpeg -i r1ori.mp4 -vf scale=300:-1 -y filter.mp4
The result: transcoding fails with this message:
[libx264 @ 0x7ff509053a00] height not divisible by 2 (300x529)
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[aac @ 0x7ff50904e200] Qavg: 28010.410
[aac @ 0x7ff50904e200] 2 frames left in the queue on closing
Height not divisible by 2 (300×529): some codecs require the width and height of the video to be a multiple of n (here n is 2). So when writing scripts that scale video or image dimensions, remember not to use -1; use -2 instead.
ffmpeg -i r1ori.mp4 -vf scale=300:-2 -y filter.mp4
The output video resolution is 300×530.
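As a rough sketch of what -2 does here (the exact rounding is up to FFmpeg; this illustration just reproduces the 300×530 result from this example):

```shell
# proportional height for width 300 on a 544x960 source...
h=$(( 300 * 960 / 544 ))          # 529
# ...bumped to the nearest even number, as scale=300:-2 requires
even=$(( (h + 1) / 2 * 2 ))       # 530
echo "$even"
```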
Q2: Just to make things difficult, the boss comes up with a new requirement: "I want every output video to be 300×500, with no distortion." How do we do that?
A 3:5 aspect ratio is very rare; the common ratios today are 16:9 and 4:3. In other words, the original video must be processed to satisfy the boss's unusual requirement.
For the original video r1ori.mp4, scaling the width to 300 gives a height of about 530; forcing the height down to 500 would distort it. That means we can only fix the height at 500 and let the width shrink proportionally.
ffmpeg -i r1ori.mp4 -vf scale=-2:500 -y filter.mp4
The resolution of the resulting output video is 284×500.
As shown in the figure above, the blue box represents the actual width and height of the video, and the red box represents the target width and height.
Looking through the documentation, we find that the pad filter solves our problem.
pad=width[:height[:x[:y[:color]]]]
1. ffmpeg -i r1ori.mp4 -vf "scale=-2:500,pad=300:500:(300-iw)/2:0" -y filter2.mp4
2. ffmpeg -i r1ori.mp4 -vf "scale=-2:500,pad=300:500:-1:0" -y filter.mp4
3. ffmpeg -i r1ori.mp4 -vf "scale=-2:500,pad=300:500:-1:0:black" -y filter.mp4
4. ffmpeg -i r1ori.mp4 -vf "scale=-2:500,pad=300:ih:(ow-iw)/2:0:green" -y filter.mp4
Four methods are given above. Let's walk briefly through method 4.
scale=-2:500 scales the original video proportionally to a height of 500, giving the 284×500 you saw above.
pad=300:ih:(ow-iw)/2:0:green: here 300:ih is the width and height of the red box (300×500); (ow-iw)/2 is half the difference between the red box and the blue box, i.e. the amount to fill on each side; the last parameter is the fill color. The default is black, but for debugging purposes we set the color to green.
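Plugging the numbers from this example into that expression (a 284-wide video on a 300-wide canvas):

```shell
# horizontal padding on each side: (ow - iw) / 2, with ow=300 and iw=284
x=$(( (300 - 284) / 2 ))
echo "$x"   # 8 pixels of green on the left (pad fills the rest on the right)
```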
Now we can output the current video at 300×500 without distortion. But note that the boss said "all output videos": the input resolution might be 200×300, 544×960, 500×400, 200×800 and so on, and every one must come out at 300×500. Clearly the method above is not fully general. What now?
Now we know the width and height of the original input video and the desired width and height. In this case, we can formulate a set of processing rules to solve the problem:
- Width and height both within bounds: no stretching, no scaling
- Width and height both out of bounds: scale down proportionally, fit the height
- Only the width out of bounds: scale down proportionally, fit the width
- Only the height out of bounds: scale down proportionally, fit the height
In real development we handle this in code; the command line is usually just for practice. Based on these rules, we have the following code:
<?php

declare(strict_types=1);

class CalculatorService
{
    /**
     * User video resolution conversion.
     * Rules:
     *   - width and height both within bounds: no stretching, no scaling
     *   - width and height both out of bounds: scale down, fit the height
     *   - only the width out of bounds: scale down, fit the width
     *   - only the height out of bounds: scale down, fit the height
     *
     * @param int $inputWidth  width of the input video
     * @param int $inputHeight height of the input video
     * @param int $outWidth    width of the output video
     * @param int $outHeight   height of the output video
     */
    public function getSize(int $inputWidth, int $inputHeight, int $outWidth, int $outHeight): string
    {
        $scale = "";
        if ($inputWidth <= $outWidth && $inputHeight <= $outHeight) {
            $scale = "scale={$inputWidth}:{$inputHeight},pad={$outWidth}:{$outHeight}:-1:-1:green";
        } elseif (($inputWidth > $outWidth && $inputHeight > $outHeight) || ($inputHeight > $outHeight)) {
            $scale = "scale=-2:{$outHeight},pad={$outWidth}:{$outHeight}:-1:0:green";
        } elseif ($inputWidth > $outWidth) {
            $scale = "scale={$outWidth}:-2,pad={$outWidth}:{$outHeight}:0:-1:green";
        }
        return $scale;
    }
}

$calculatorService = new CalculatorService();
var_dump($calculatorService->getSize(200, 300, 300, 500));
var_dump($calculatorService->getSize(544, 960, 300, 500));
var_dump($calculatorService->getSize(500, 400, 300, 500));
var_dump($calculatorService->getSize(200, 600, 300, 500));

// Result:
// string(37) "scale=200:300,pad=300:500:-1:-1:green"
// string(35) "scale=-2:500,pad=300:500:-1:0:green"
// string(35) "scale=300:-2,pad=300:500:0:-1:green"
// string(35) "scale=-2:500,pad=300:500:-1:0:green"
For easy understanding, you can refer to the following figure for one-to-one correspondence.
Complex filters
In contrast to a simple filter, a complex filter is a filter graph that can handle any number of inputs and outputs, and can do almost anything.
Complex filters are specified with the -filter_complex option, which also has an alias, -lavfi.
The previous article on stream selection covered one of the most important and most commonly used techniques. Again we scale the input video r3.mp4 down by half, this time selecting the streams manually:
ffmpeg -i r3.mp4 -filter_complex "[0]scale=272:480[out]" -map 0:a -map "[out]" -y filter.mp4
Simple analysis is as follows:
- In "[0]scale=272:480[out]", [0] refers to the first input file. Since we are processing video here, we could also write [0:v]; to process the audio separately we would use [0:a];
- [0] combined with the scale filter means the first input is fed into scale as its input;
- [out] (the brackets are required): out is a custom label, meaning the output of the scale filter is named [out], but it is not yet the final output;
- -map "[out]" selects the [out] stream for the output file.
As we said, the output of one filter can be the input of another, which saves us from running several commands in a row. Our principle is simple: never use two commands where one will do.
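As an illustration (the labels and output file name are made up for this sketch), a graph where one filter's labeled output feeds the next might look like this:

```shell
# [scaled] names scale's output, which the pad step then consumes
graph="[0:v]scale=-2:500[scaled];[scaled]pad=300:500:-1:0:green[out]"
echo "$graph"
# Would be used as:
# ffmpeg -i r1ori.mp4 -filter_complex "$graph" -map "[out]" -map 0:a -y out.mp4
```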
With lossy codecs, every repeated encode degrades the quality of the original video.
For example, suppose you want to keep only the middle part of the original video r1ori.mp4 while preserving the original 544×960 resolution. How do you do that?
ffmpeg -i r1ori.mp4 -filter_complex "nullsrc=s=544x960[background]; \
crop=iw:(ih/2-110):0:250[middle]; \
[background][middle]overlay=shortest=1:x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2[out]" \
-map "[out]" \
-map 0:a \
-movflags +faststart \
-y fc.mp4
This command uses three common filters: nullsrc, crop, and overlay.
The nullsrc filter creates an empty video: essentially a blank canvas, or a green screen, since the default color is green. s specifies the size of the canvas, 320×240 by default. Here we create a 544×960 canvas and label it background.
There are many other uses of nullsrc. For example, combining nullsrc with CIQRCodeGenerator creates a QR code for the "White Wolf Stack" home page:
ffmpeg -f lavfi -i nullsrc=s=200x200,coreimage=filter=CIQRCodeGenerator@inputMessage=\
http\\://manks.top/@inputCorrectionLevel=H -frames:v 1 manks.png
The crop filter is used to crop the video: any area of the video, at any size, can be cut out. Here, crop=iw:(ih/2-110):0:250[middle] clips the middle of the original video and labels it middle.
The overlay filter superimposes one video on another. About shortest, the documentation says: "If set to 1, force the output to terminate when the shortest input terminates. Default value is 0." Since the canvas we created with nullsrc has no natural end, we need middle's duration to be the final duration, so we set it to 1. main_w and main_h are the width and height of the main (bottom) video; overlay_w and overlay_h are the width and height of the overlaid video. That is, when superimposing video A onto video B, main_w and main_h refer to B, and overlay_w and overlay_h refer to A. Here, middle is superimposed on top of background and centered on it.
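For this example, the centering expressions work out as follows (canvas 544×960; the cropped middle is 544 wide and ih/2-110 = 370 tall):

```shell
# overlay offsets: x=(main_w-overlay_w)/2 and y=(main_h-overlay_h)/2
x=$(( (544 - 544) / 2 ))
y=$(( (960 - 370) / 2 ))
echo "$x $y"   # 0 295: centered horizontally and vertically
```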
The last option is -movflags, which relates to mp4 metadata. Setting it to +faststart moves the moov atom ahead of mdat, so the video starts playing slightly sooner when streamed online.
Homework: the article on audio and video composition introduced two cases; go back and see whether you can solve them with a single command.
That’s all for the basic introduction of filters. If you have any questions, please leave a comment below.