This is the second blog post in my FFmpeg learning series. It mainly uses FFmpeg to decode the first frame of a local video file into a Bitmap, which is then handed to an ImageView in the upper layer for display.
The general process can be divided into:
- Pass in the video file path and demux (unpack) it
- Find the video stream and get the decoder from the stream
- Open the decoder and decode the first complete AVFrame
- Create a Bitmap and use libyuv to convert the YUV data to ARGB for the Bitmap to display
- Release resources
First define the JNI function
public static native Bitmap getCover(String path);
JNIEXPORT jobject JNICALL
// Note: the _1 in the function name is JNI's escape for the underscore in the package name example_ffmpeg
Java_demo_simple_example_1ffmpeg_MainActivity_getCover(JNIEnv *env, jclass clazz, jstring path) {
    // Convert the Java string to a C string
    const char *_path = env->GetStringUTFChars(path, nullptr);
    int ret = -1;
}
Open the video file and unpack it
// The container format (demuxing) context
AVFormatContext *ifmt_ctx = NULL;
// Open the input source
ret = avformat_open_input(&ifmt_ctx, _path, NULL, NULL);
if (ret < 0) {
    logDebug("Demuxing failed -- %s", av_err2str(ret));
    return nullptr;
}
The avformat_open_input() function opens the input file and fills in the formatted I/O context, the AVFormatContext structure. (For encoding/output you would instead create a fresh AVFormatContext with the avformat_alloc_context() function.)
When using avformat_open_input(), remember to call avformat_close_input() to release the resource once you are done. The function returns an int: 0 indicates success and a negative number indicates failure. You can also get a readable description of a failure with av_err2str(), defined in the libavutil/error.h header file.
Here’s a little trick: most FFmpeg functions follow this same convention, returning a value greater than or equal to 0 on success and a negative value on failure.
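Since almost every FFmpeg call follows this convention, it can be handy to wrap the check in a small helper. A minimal sketch (the CHECK_RET macro is my own illustration, reusing the logDebug helper from this blog; it is not part of FFmpeg):
// Hypothetical helper, not an FFmpeg API: log and bail out on any negative return code
#define CHECK_RET(expr)                                        \
    do {                                                       \
        int _ret = (expr);                                     \
        if (_ret < 0) {                                        \
            logDebug(#expr " failed -- %s", av_err2str(_ret)); \
            return nullptr;                                    \
        }                                                      \
    } while (0)

// Usage:
// CHECK_RET(avformat_open_input(&ifmt_ctx, _path, NULL, NULL));
// CHECK_RET(avformat_find_stream_info(ifmt_ctx, NULL));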
Find the stream info and locate the video stream in the streams array
ret = avformat_find_stream_info(ifmt_ctx, NULL);
int video_stream_index = -1;
AVStream *pStream = NULL;
AVCodecParameters *codecpar = NULL;
// Find the video stream
for (int i = 0; i < ifmt_ctx->nb_streams; ++i) {
    pStream = ifmt_ctx->streams[i];
    if (pStream->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
        codecpar = pStream->codecpar;
        video_stream_index = i;
    }
}
The avformat_find_stream_info() function reads packets from the media file to obtain the stream information and fills it into the AVFormatContext.
The streams field in the AVFormatContext structure contains information about all streams in the media file, including video streams, audio streams, subtitle streams, and so on.
The AVCodecParameters structure describes the properties of the encoded stream.
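The code above assumes the file does contain a video stream. A small defensive sketch of what you might check and log at this point (the width, height, and codec_id fields come straight from AVCodecParameters; logDebug is this blog's own helper):
// Defensive check (my addition): bail out if the file has no video stream
if (video_stream_index < 0 || codecpar == NULL) {
    logDebug("No video stream found");
    return nullptr;
}
// AVCodecParameters already carries the basic properties of the encoded stream
logDebug("video size == %d x %d", codecpar->width, codecpar->height);
logDebug("codec name == %s", avcodec_get_name(codecpar->codec_id));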
Locate the decoder and allocate the codec context
logDebug("Decoder == %s", avcodec_get_name(codecpar->codec_id));
AVCodec *codec = avcodec_find_decoder(codecpar->codec_id);
// Allocate a codec context
AVCodecContext *codec_ctx = avcodec_alloc_context3(codec);
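Neither call is guaranteed to succeed: avcodec_find_decoder() returns NULL if no matching decoder was compiled in, and avcodec_alloc_context3() returns NULL on allocation failure. A minimal sketch of the checks you might add:
// No decoder available for this codec id
if (codec == NULL) {
    logDebug("No decoder found for %s", avcodec_get_name(codecpar->codec_id));
    return nullptr;
}
// Allocation of the codec context failed
if (codec_ctx == NULL) {
    logDebug("Failed to allocate the codec context");
    return nullptr;
}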
Open the codec and get the first frame
// Copy the stream parameters into the codec context
ret = avcodec_parameters_to_context(codec_ctx, codecpar);
// Open the codec
ret = avcodec_open2(codec_ctx, codec, NULL);
// Allocate a frame structure
AVFrame *pFrame = av_frame_alloc();
// Only the AVPacket struct itself needs to be allocated; FFmpeg manages pkg.data
AVPacket pkg;
int frameFinished;
while (av_read_frame(ifmt_ctx, &pkg) >= 0) {
    // Skip packets that belong to other streams (audio, subtitles, ...)
    if (pkg.stream_index != video_stream_index) {
        av_packet_unref(&pkg);
        continue;
    }
    ret = avcodec_decode_video2(codec_ctx, pFrame, &frameFinished, &pkg);
    // Not a complete frame yet, keep reading
    if (!frameFinished) {
        av_packet_unref(&pkg);
        continue;
    }
    // Got the first complete frame, stop here
    break;
}
The av_read_frame() function reads the next packet of the stream into the AVPacket variable (pkg here). We only need to allocate the AVPacket structure itself; its data buffer (pkg->data) is allocated by FFmpeg internally. However, we must release it afterwards with the av_packet_unref() function.
The av_read_frame() function returns a value less than 0 when an error occurred or the end of the file was reached.
The avcodec_decode_video2() function decodes a packet into a frame. However, decoding one packet does not necessarily yield a complete frame, which is why the frameFinished flag is used to track this process. Newer FFmpeg versions replace this function with avcodec_send_packet() and avcodec_receive_frame(), but it is still fine to use here.
When frameFinished is 0, no complete frame could be decoded from this packet, so we keep reading; when it is non-zero, a full frame has been decoded and we can stop.
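For reference, here is a minimal sketch of the same first-frame loop written against the newer avcodec_send_packet() / avcodec_receive_frame() API (same variables as above; error handling trimmed for brevity):
// Newer decode loop: feed packets in with send, pull frames out with receive
while (av_read_frame(ifmt_ctx, &pkg) >= 0) {
    if (pkg.stream_index != video_stream_index) {
        av_packet_unref(&pkg);
        continue;
    }
    ret = avcodec_send_packet(codec_ctx, &pkg);
    av_packet_unref(&pkg);
    if (ret < 0) break;                 // real decoder error
    ret = avcodec_receive_frame(codec_ctx, pFrame);
    if (ret == 0) break;                // got the first complete frame
    if (ret != AVERROR(EAGAIN)) break;  // AVERROR(EAGAIN) just means: feed more packets first
}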
Create a Bitmap
jobject createBitmap(JNIEnv *env, int width, int height) {
    jclass bitmapCls = env->FindClass("android/graphics/Bitmap");
    jmethodID createBitmapFunction = env->GetStaticMethodID(bitmapCls,
            "createBitmap",
            "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
    jstring configName = env->NewStringUTF("ARGB_8888");
    jclass bitmapConfigClass = env->FindClass("android/graphics/Bitmap$Config");
    jmethodID valueOfBitmapConfigFunction = env->GetStaticMethodID(bitmapConfigClass,
            "valueOf",
            "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;");
    jobject bitmapConfig = env->CallStaticObjectMethod(bitmapConfigClass,
            valueOfBitmapConfigFunction,
            configName);
    jobject newBitmap = env->CallStaticObjectMethod(bitmapCls,
            createBitmapFunction,
            width, height,
            bitmapConfig);
    return newBitmap;
}
This is all standard JNI: calling a Java-layer method from the native layer. If you are not familiar with JNI, you can check my JNI learning notes.
Write RGB pixel information using libyuv
Use the AndroidBitmap_lockPixels() function from the android/bitmap.h header to get the address of the Bitmap's pixel buffer, then use the I420ToABGR() function in libyuv to convert yuv420p to ARGB. Remember to finally release the buffer with the AndroidBitmap_unlockPixels() function.
jobject bmp;
bmp = createBitmap(env, codec_ctx->width, codec_ctx->height);
void *addr_pixels;
ret = AndroidBitmap_lockPixels(env, bmp, &addr_pixels);
// yuv420p to argb
int linesize = pFrame->width * 4;
libyuv::I420ToABGR(pFrame->data[0], pFrame->linesize[0], // Y
                   pFrame->data[1], pFrame->linesize[1], // U
                   pFrame->data[2], pFrame->linesize[2], // V
                   (uint8_t *) addr_pixels, linesize,    // RGBA
                   pFrame->width, pFrame->height);
The linesize (stride) rule above: one ARGB_8888 pixel takes 4 bytes, so the stride of the destination buffer is the width × 4.
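Strictly speaking, the Bitmap's actual row stride can in principle differ from width × 4, so it is safer to query it rather than assume it. A minimal sketch using AndroidBitmap_getInfo() from the same android/bitmap.h header, run before the I420ToABGR() call:
// Query the Bitmap's real row stride instead of assuming width * 4
AndroidBitmapInfo info;
if (AndroidBitmap_getInfo(env, bmp, &info) == ANDROID_BITMAP_RESULT_SUCCESS) {
    // info.stride is the number of bytes per row the Bitmap actually uses
    linesize = (int) info.stride;
}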
Note that converting yuv420p to Android's ARGB_8888 needs I420ToABGR(), not I420ToARGB(). libyuv names these functions by word order, while ARGB_8888 actually stores bytes in memory as R, G, B, A, which in libyuv's naming is ABGR. I typed the code too fast at first and used the wrong one, so the generated Bitmap's colors were always wrong; I searched for a long time without finding the bug, and even turned up a blogger on Baidu who had made the same mistake. That's what fixed thinking does to you!
Release resources
av_packet_unref(&pkg);
AndroidBitmap_unlockPixels(env, bmp);
// av_frame_free() also releases the frame's data buffers, not just the struct
av_frame_free(&pFrame);
// avcodec_free_context() closes the codec and frees the context allocated above
avcodec_free_context(&codec_ctx);
avformat_close_input(&ifmt_ctx);
env->ReleaseStringUTFChars(path, _path);
// Hand the finished Bitmap back to the Java layer
return bmp;
Code in the Activity
String path = Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator
+ "get_cover.mp4";
Bitmap bitmap = getCover(path);
Log.d(TAG, "bitmap width == " + bitmap.getWidth());
Log.d(TAG, "bitmap height == " + bitmap.getHeight());
Log.d(TAG, "bitmap config == " + bitmap.getConfig().name());
Log.d(TAG, "bitmap byteCount == " + bitmap.getByteCount());
ImageView ivCover = findViewById(R.id.ivCover);
ivCover.setImageBitmap(bitmap);
Log output and interface display
demo.simple.example_ffmpeg D/MainActivity: bitmap width == 1080
demo.simple.example_ffmpeg D/MainActivity: bitmap height == 1920
demo.simple.example_ffmpeg D/MainActivity: bitmap config == ARGB_8888
demo.simple.example_ffmpeg D/MainActivity: bitmap byteCount == 8294400
The complete code
Github.com/simplepeng/…