Overview
Screen sharing is one of the most commonly used features in scenarios such as video conferencing, online classes, and game live streaming. Realizing real-time screen sharing involves several steps from end to end: screen capture, video encoding, real-time transmission, video decoding, and video rendering.
In general, for real-time screen sharing, the sharing end captures a specified screen source (a whole screen, a designated area, a specific program window, and so on) at a fixed sampling frequency (typically 8 to 15 frames per second), compresses it with a video encoder (choosing settings that preserve text and graphics edge detail), and distributes it over a real-time network at the corresponding frame rate.
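As a quick sanity check on those numbers, here is a small, purely illustrative calculation of the interval between captured frames at the sampling rates mentioned above:

```java
// Illustrative only: how much time passes between captured frames
// at typical screen-share sampling rates (8-15 fps)?
public class FrameInterval {
    static long intervalMs(int fps) {
        return 1000L / fps; // milliseconds between two consecutive frames
    }

    public static void main(String[] args) {
        System.out.println(intervalMs(8));  // 125 ms between frames at 8 fps
        System.out.println(intervalMs(15)); // 66 ms between frames at 15 fps
    }
}
```

These long intervals (compared with 30 or 60 fps video) are why screen sharing can afford to spend more bits per frame on crisp text and graphics edges.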
Screen capture is therefore the foundation of real-time screen sharing. As a professional audio and video cloud service provider, we offer a complete workflow and API wrapper for real-time screen sharing, so that developers can add live screen recording capabilities easily and quickly.
Below, we introduce how screen capture is implemented on different platforms. This article walks through the Android screen capture implementation in detail.
The principle
Before sharing how to implement screen capture on Android, let’s take a look at the principle behind it.
Before version 4.4, Android required root permission for screen recording; since most devices today run systems newer than 4.4, we won’t go into that here.
In version 5.0 and above, we can use the MediaProjection and MediaProjectionManager APIs provided by the system for screen recording. We don’t need root permission, but we do need the user to grant permission through a system popup.
So, on Android 5.0 and above, how do we record screen data using MediaProjection?
This is where our two “assist buddies” come in — Surface and VirtualDisplay.
1. Surface
> Handle onto a raw buffer that is being managed by the screen compositor.
>
> A Surface is generally created by or from a consumer of image buffers (such as a SurfaceTexture, MediaRecorder, or Allocation), and is handed to some kind of producer (such as OpenGL, MediaPlayer, or CameraDevice) to draw into.
Per Google’s official documentation, a Surface is a buffer handle that a screen data consumer (SurfaceTexture, MediaRecorder, Allocation) creates and hands to a screen data producer (OpenGL, MediaPlayer, CameraDevice). The producer draws graphic content into the Surface, and the consumer then consumes the generated data — drawing it on screen, encoding it, or converting it into whatever it needs.
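The producer/consumer relationship can be hard to picture, so here is a loose analogy only — it does not use the real Surface API. The consumer creates a bounded buffer and hands it to a producer thread to fill, then drains it, just as a consumer creates a Surface and gives it to a producer to draw into:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A loose analogy for the Surface model (NOT the real API): the consumer
// owns the buffer and hands it to a producer to fill with "frames".
public class SurfaceAnalogy {
    static List<String> transfer(List<String> frames) {
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(4); // the "Surface"
        Thread producer = new Thread(() -> {          // e.g. a VirtualDisplay
            for (String f : frames) {
                try { buffer.put(f); } catch (InterruptedException e) { return; }
            }
        });
        producer.start();
        List<String> consumed = new ArrayList<>();
        try {
            for (int i = 0; i < frames.size(); i++) {
                consumed.add(buffer.take());          // e.g. a MediaRecorder
            }
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(transfer(Arrays.asList("frame-0", "frame-1", "frame-2")));
    }
}
```

The bounded queue also hints at why a Surface exists at all: it decouples the producer’s drawing rate from the consumer’s processing rate.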
2. VirtualDisplay
As the name implies, this is a virtual screen provided by the system. Since we use MediaProjection for recording, we need to create such a VirtualDisplay. So what does this VirtualDisplay have to do with the Surface? Is it the producer or the consumer?
The answer is obvious: the VirtualDisplay is the producer. It is a virtual screen maintained by the system, whose contents can be understood as a copy of the phone’s physical screen — it exists only in memory and is never drawn, so we cannot see it. Since it is a mirror image of the phone screen, it plays the producer role in the overall screen recording architecture.
OK, now that we know the characteristics of these two assists, we have another question to think about. The buffer is there, and so is the producer — what about the consumer? Who should consume the screen data?
This is where the usage scenario comes in. Android lets us record screen data through MediaRecorder and then save it, and also lets us capture screen data through MediaCodec and then transmit it.
Therefore, based on the above principles, we can draw the following overall architecture of screen capture:
Implementation
Now that we know how screen recording works, how do we implement it at the code level? It is mainly divided into the following steps.
Step 1: Apply for permissions. Add the following to the AndroidManifest to request the permissions we need, because we will record audio and write the output file:
```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```
Step 2: Obtain the system service. The screen capture service is obtained through MediaProjectionManager, and using it requires the user’s authorization:
```java
mMediaProjectionManager = (MediaProjectionManager)
        getSystemService(MEDIA_PROJECTION_SERVICE);
```
MediaProjectionManager is the screen recording service provided by the system. Its usage is no different from other system services: the corresponding service is obtained through getSystemService.
Step 3: Create an Intent and request authorization. MediaProjectionManager encapsulates the authorization Intent in createScreenCaptureIntent. Passing that Intent to startActivityForResult triggers the authorization popup, and when the user grants or denies authorization, the result is returned via onActivityResult.
```java
Intent captureIntent = mMediaProjectionManager.createScreenCaptureIntent();
startActivityForResult(captureIntent, REQUEST_CODE);
```
Step 4: Listen in onActivityResult and retrieve the MediaProjection based on the result of the user’s authorization:
```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK) {
        mMediaProjection = mMediaProjectionManager.getMediaProjection(resultCode, data);
    }
}
```
With this we obtain the actual screen recording object, MediaProjection; next we use it to start recording.
Step 5: Create a virtual screen. Now that we have acquired the MediaProjection, the next step is to create the virtual screen — a VirtualDisplay — which is the key to screen recording. Let’s first look at how the MediaProjection API creates a VirtualDisplay, focusing on the parameter definitions.
```java
public VirtualDisplay createVirtualDisplay(@NonNull String name,
        int width, int height, int dpi, int flags, @Nullable Surface surface,
        @Nullable VirtualDisplay.Callback callback, @Nullable Handler handler)
```
Of the parameters to this API, most can be ignored on a first reading, but two deserve attention: Surface surface and int flags.
First, int flags. What exactly does this parameter control? Let’s look at the comment in the source.
```java
* @param flags A combination of virtual display flags. See {@link DisplayManager}
*        for the full list of flags.
```
From the comment, we can see that DisplayManager provides several related flags. So, what are the differences between them?
- VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR: allows content to be mirrored onto the virtual display when no content is being shown on it.
- VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY: displays only the content rendered to this screen; content on other screens is not mirrored.
- VIRTUAL_DISPLAY_FLAG_PRESENTATION: creates a screen for a presentation.
- VIRTUAL_DISPLAY_FLAG_PUBLIC: creates a public screen.
- VIRTUAL_DISPLAY_FLAG_SECURE: creates a secure screen.
If there is no special need, we can set flags to VIRTUAL_DISPLAY_FLAG_PUBLIC to get the screen data.
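Putting the pieces together, a minimal sketch of the call might look like this. Note that mVirtualDisplay, width, height, dpi, and surface are assumed to be defined by your own setup code, and the flag choice follows the discussion above:

```java
// Minimal sketch: create the virtual screen that mirrors the device display
// into the consumer's Surface. All variables besides the flag are assumed
// to come from your own setup code.
mVirtualDisplay = mMediaProjection.createVirtualDisplay(
        "ScreenCapture",                            // a debug name for the display
        width, height, dpi,                         // usually the real screen metrics
        DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC, // flags, as discussed above
        surface,                                    // the consumer's Surface
        null,                                       // no VirtualDisplay.Callback
        null);                                      // callbacks on the calling thread
```

Where surface comes from depends on the consumer we choose next, which is exactly the question the following sections answer.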
Then there is Surface, the other assist buddy. As we said before, a Surface is created by the consumer, so now is a good time to ask: who is our consumer? What is our scenario — will the screen recording be saved as a file, or encoded and sent out as a live stream?
Of course… the ultimate answer may end up being whatever the product manager decides…
1. Saving the recording to a file (MediaRecorder)
OK, let’s say the product manager has made it clear that the requirement is to save the screen recording to a file, just like many screen recording apps on the market today. What should we do?
It’s pretty simple. We just need to ask: is there an API that can record image data and save it as a file?
Android already provides an official tool we can use: MediaRecorder. The key point is that MediaRecorder can provide a Surface through getSurface. This Surface is exactly what VirtualDisplay needs, so the entire call chain and API can be sorted out. The data flows from VirtualDisplay -> Surface -> MediaRecorder (the green arrow in the figure indicates the data flow).
So how do you use MediaRecorder? It can record audio as well as video. The code for setting up MediaRecorder is provided below. Finally, a call to mediaRecorder.start() will start recording, saving the recorded video frames and the sound captured by the microphone into the file we define.
```java
private void initRecorder() {
    File file = new File(Environment.getExternalStorageDirectory(),
            System.currentTimeMillis() + ".mp4");
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    mediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
    mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    mediaRecorder.setOutputFile(file.getAbsolutePath());
    mediaRecorder.setVideoSize(width, height);
    mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    mediaRecorder.setVideoEncodingBitRate(5 * 1024 * 1024);
    mediaRecorder.setVideoFrameRate(30);
    try {
        mediaRecorder.prepare();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```
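As a side note on setVideoEncodingBitRate: here is a rough, purely illustrative estimate of how fast the output file grows at the 5 Mbps configured above (video track only, ignoring audio and container overhead):

```java
// Illustrative back-of-the-envelope estimate only: output growth for the
// video track at a given encoding bitrate, ignoring audio and container
// overhead.
public class SizeEstimate {
    static long bytesPerMinute(long bitsPerSecond) {
        return bitsPerSecond / 8 * 60; // bits -> bytes, then per minute
    }

    public static void main(String[] args) {
        long bitrate = 5 * 1024 * 1024;              // 5 Mbps, as configured above
        System.out.println(bytesPerMinute(bitrate)); // 39321600 bytes (~37.5 MiB/min)
    }
}
```

Numbers like this are worth checking before shipping, since long recordings at this bitrate can fill storage quickly.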
2. Recording for live streaming (MediaCodec)
Now suppose the product manager suddenly changes the requirement: instead of saving the recording to a file, the screen recording should be streamed live.
Then we have to change the solution and replace the consumer, MediaRecorder, with something else that can encode the data, such as Android’s hardware encoder MediaCodec or the well-known FFmpeg. The producer of the data does not change — it is still the VirtualDisplay — and the data buffer is still a Surface.
Taking MediaCodec as an example, the MediaRecorder flow chart becomes:
MediaCodec’s hardware encoding and decoding capabilities on Android deserve a write-up of their own, so we won’t go into much detail here about its capabilities and usage — we only look at it from the consumer’s perspective in our live screen recording solution.
```java
mEncoder = MediaCodec.createEncoderByType(MIME_TYPE);
mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
// The encoder's input Surface is handed to the VirtualDisplay,
// so captured frames feed the encoder directly.
mInputSurface = mEncoder.createInputSurface();
mEncoder.start();
```
Extension
As we know from the above, MediaRecorder can record the audio captured by the microphone together with the screen images provided by MediaProjection. However, MediaProjection itself cannot provide audio data. What if we want to record the screen data from MediaProjection through MediaRecorder, plus an audio source that MediaRecorder does not capture? For example, we record a game video but want to add specific audio, like a Honor of Kings highlight reel with particular sound effects — how do we achieve that?
One approach: simply don’t set an audio source on MediaRecorder when recording, and then use another tool to mix the audio in afterwards. For example, the famous FFmpeg excels at audio and video editing. However, FFmpeg has a real learning curve — building a stable, reliable, and easy-to-use FFmpeg library is not so simple — and greatly increasing the size of our APK just to add one audio mixing feature is penny wise and pound foolish.
So, is there another way? The answer is yes.
The Android system provides the native MediaExtractor class (used together with MediaMuxer), which offers a relatively simple way to mix audio and video. So what should you pay attention to when using it?
When mixing, the audio and video sources can be understood as two different tracks — an audio track and a video track — and the most important part of mixing them is aligning their timestamps. Therefore, when editing, unless the start of the audio can be clearly pinned to a specific point in the video’s timeline, it is recommended to shift both the audio and the video back to a zero start time and then mix.
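To make the timestamp advice concrete, here is a small hypothetical helper (not part of the Android API) that rebases a track’s presentation timestamps so the track starts at zero:

```java
import java.util.Arrays;

// Hypothetical helper illustrating the advice above: shift a track's
// presentation timestamps (in microseconds) so the track starts at zero,
// making the audio and video start times line up before mixing.
public class PtsNormalizer {
    static long[] rebase(long[] ptsUs) {
        if (ptsUs.length == 0) return ptsUs;
        long first = ptsUs[0];
        long[] out = new long[ptsUs.length];
        for (int i = 0; i < ptsUs.length; i++) {
            out[i] = ptsUs[i] - first; // every sample becomes relative to the first
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. an audio track whose first sample arrived 500,000 us into capture
        long[] audioPts = {500_000, 523_219, 546_438};
        System.out.println(Arrays.toString(rebase(audioPts))); // [0, 23219, 46438]
    }
}
```

In a real muxing loop you would apply the same rebasing to the timestamps read from each track before writing the samples out.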
Conclusion
Finally, let’s summarize the main content of Android screen capture.
First, we explained the principle behind the two system services Android provides for screen recording — MediaProjectionManager and MediaProjection — and their two supporting partners: the Surface data buffer and the VirtualDisplay.
Second, we introduced how to implement the two screen capture scenarios: recording and saving to a file, and recording and encoding for a live stream.
Then, we extended the discussion to mixing in background audio from other sources after recording and saving the screen.
One last reminder: release the MediaProjection, VirtualDisplay, and recorder objects when you are no longer using them.