Before Android 5.0, recording the screen required root access, which made it troublesome and hard to implement. Starting with Android 5.0 (API 21), however, the system provides a set of APIs that let an app record the screen with no root at all, only a one-time user authorization, which is much simpler.
Following the official documentation, the screen-recording code is fairly straightforward to write.
Basic steps to implement screen recording
Declare permissions in the Manifest
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
Get the MediaProjectionManager and request permissions
private val mediaProjectionManager by lazy {
    activity.getSystemService(Context.MEDIA_PROJECTION_SERVICE) as? MediaProjectionManager
}
private var mediaProjection: MediaProjection? = null

if (mediaProjectionManager == null) {
    Log.d(TAG, "mediaProjectionManager == null, the device does not support screen recording")
    showToast(R.string.phone_not_support_screen_record)
    return
}
// Request the runtime permissions we need
PermissionUtils.permission(PermissionConstants.STORAGE, PermissionConstants.MICROPHONE)
    .callback(object : PermissionUtils.SimpleCallback {
        override fun onGranted() {
            Log.d(TAG, "start record")
            mediaProjectionManager?.apply {
                // Show the system screen-capture authorization dialog to the user
                val intent = createScreenCaptureIntent()
                if (activity.packageManager.resolveActivity(intent, PackageManager.MATCH_DEFAULT_ONLY) != null) {
                    activity.startActivityForResult(intent, REQUEST_CODE)
                } else {
                    showToast(R.string.phone_not_support_screen_record)
                }
            }
        }

        override fun onDenied() {
            showToast(R.string.permission_denied)
        }
    })
    .request()
Override onActivityResult() to handle user authorization
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    if (requestCode == REQUEST_CODE) {
        if (resultCode == Activity.RESULT_OK && data != null) {
            mediaProjection = mediaProjectionManager!!.getMediaProjection(resultCode, data)
            // Some phones pop up their own dialog when recording starts, so delay by 150 ms
            Handler().postDelayed({
                if (initRecorder()) {
                    mediaRecorder?.start()
                } else {
                    showToast(R.string.phone_not_support_screen_record)
                }
            }, 150)
        } else {
            showToast(R.string.phone_not_support_screen_record)
        }
    }
}
private fun initRecorder(): Boolean {
    Log.d(TAG, "initRecorder")
    var result = true
    // Create the output folder
    val f = File(savePath)
    if (!f.exists()) {
        f.mkdirs()
    }
    // The recording is first written to a temporary file
    saveFile = File(savePath, "$saveName.tmp")
    saveFile?.apply {
        if (exists()) {
            delete()
        }
    }
    mediaRecorder = MediaRecorder()
    // Cap the recording resolution at 1080 x 1920
    val width = Math.min(displayMetrics.widthPixels, 1080)
    val height = Math.min(displayMetrics.heightPixels, 1920)
    mediaRecorder?.apply {
        // Optionally record audio as well
        if (recordAudio) {
            setAudioSource(MediaRecorder.AudioSource.MIC)
        }
        setVideoSource(MediaRecorder.VideoSource.SURFACE)
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        setVideoEncoder(MediaRecorder.VideoEncoder.H264)
        if (recordAudio) {
            setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB)
        }
        setOutputFile(saveFile!!.absolutePath)
        setVideoSize(width, height)
        setVideoEncodingBitRate(8388608)
        setVideoFrameRate(VIDEO_FRAME_RATE)
        try {
            prepare()
            virtualDisplay = mediaProjection?.createVirtualDisplay(
                "MainScreen", width, height, displayMetrics.densityDpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, surface, null, null
            )
            Log.d(TAG, "initRecorder success")
        } catch (e: Exception) {
            Log.e(TAG, "IllegalStateException preparing MediaRecorder: ${e.message}")
            e.printStackTrace()
            result = false
        }
    }
    return result
}
As you can see above, quite a few parameters can be configured; the meaning of each is left for you to look up in the official documentation. The key step is that we create a VirtualDisplay through the MediaProjection. The VirtualDisplay can be thought of as a virtual renderer: it captures what is on the screen and renders it onto the Surface provided by MediaRecorder, which then encodes the frames and packages them into an MP4 file.
When recording finishes, call the stop() method to save the data
private fun stop() {
    if (isRecording) {
        isRecording = false
        try {
            mediaRecorder?.apply {
                setOnErrorListener(null)
                setOnInfoListener(null)
                setPreviewDisplay(null)
                stop()
                Log.d(TAG, "stop success")
            }
        } catch (e: Exception) {
            Log.e(TAG, "stopRecorder() error! ${e.message}")
        } finally {
            mediaRecorder?.reset()
            virtualDisplay?.release()
            mediaProjection?.stop()
            listener?.onEndRecord()
        }
    }
}

/**
 * If you pass the audio parameters, recordAudio will be ignored.
 */
fun stopRecord(videoDuration: Long = 0, audioDuration: Long = 0, afdd: AssetFileDescriptor? = null) {
    stop()
    if (audioDuration != 0L && afdd != null) {
        syntheticAudio(videoDuration, audioDuration, afdd)
    } else {
        // No audio to mix in: finalize the file directly
        if (saveFile != null) {
            val newFile = File(savePath, "$saveName.mp4")
            // Change the suffix to .mp4 once recording is done
            saveFile!!.renameTo(newFile)
            // Notify the media scanner so the video shows up in the gallery
            val intent = Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE)
            intent.data = Uri.fromFile(newFile)
            activity.sendBroadcast(intent)
            showToast(R.string.save_to_album_success)
        }
        saveFile = null
    }
}
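As an aside, the temp-file-then-rename convention used above can be sketched without any Android dependencies. The following is an illustrative, framework-free sketch (finalizeRecording is an invented helper name, and the paths are made up): record into a ".tmp" file, and only give it the ".mp4" suffix once recording has finished cleanly, so the gallery never picks up a half-written video.

```kotlin
import java.io.File
import java.nio.file.Files

// A framework-free sketch of the save flow above (finalizeRecording is an
// invented name): record into a ".tmp" file and only rename it to ".mp4"
// once recording finished cleanly, so a half-written video is never exposed.
fun finalizeRecording(tmpFile: File, saveName: String): File? {
    if (!tmpFile.exists()) return null
    val mp4 = File(tmpFile.parentFile, "$saveName.mp4")
    return if (tmpFile.renameTo(mp4)) mp4 else null
}

fun main() {
    val dir = Files.createTempDirectory("record").toFile()
    val tmp = File(dir, "demo.tmp").apply { writeText("fake video data") }
    println(finalizeRecording(tmp, "demo")?.name) // demo.mp4
    println(tmp.exists())                         // false: it was renamed away
}
```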
Pay special attention to the comment on MediaRecorder's stop() method:
/**
* Stops recording. Call this after start(). Once recording is stopped,
* you will have to configure it again as if it has just been constructed.
* Note that a RuntimeException is intentionally thrown to the
* application, if no valid audio/video data has been received when stop()
* is called. This happens if stop() is called immediately after
* start(). The failure lets the application take action accordingly to
* clean up the output file (delete the output file, for instance), since
* the output file is not properly constructed when this happens.
*
* @throws IllegalStateException if it is called before start()
*/
public native void stop() throws IllegalStateException;
According to this comment, stop() intentionally throws if it is called immediately after start(), before any valid audio/video data has been received, but it says little about other failure modes. In practice, when you actually use MediaRecorder for screen recording, you will find that an IllegalStateException can be thrown even when stop() is not called immediately after start(). So, to be on the safe side, it is best to wrap the call in a try...catch block.
For example, if some parameter in your initRecorder() is misconfigured, stop() will fail because no valid data could ever be written into the file.
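To make the cleanup advice from that Javadoc concrete, here is a small framework-free sketch (the Recorder interface and the safeStop helper are invented for illustration; they are not part of the MediaRecorder API): if stop() throws because no valid data was received, the half-written output file is deleted, exactly as the documentation suggests.

```kotlin
import java.io.File

// Recorder and safeStop are invented for illustration; Recorder stands in
// for MediaRecorder, whose stop() throws when no valid data was received.
interface Recorder {
    fun stop()
}

// Stop defensively: when stop() throws, the output file "is not properly
// constructed" (per the Javadoc), so delete it. Returns true on a clean stop.
fun safeStop(recorder: Recorder, outputFile: File): Boolean =
    try {
        recorder.stop()
        true
    } catch (e: Exception) {
        outputFile.delete()
        false
    }

fun main() {
    val tmp = File.createTempFile("record", ".tmp")
    val broken = object : Recorder {
        override fun stop() {
            throw IllegalStateException("no valid audio/video data received")
        }
    }
    println(safeStop(broken, tmp)) // false
    println(tmp.exists())          // false: the broken output was cleaned up
}
```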
Release resources when finished
fun clearAll() {
    mediaRecorder?.release()
    mediaRecorder = null
    virtualDisplay?.release()
    virtualDisplay = null
    mediaProjection?.stop()
    mediaProjection = null
}
The ambient sound that cannot be avoided
The code above covers basic Android screen recording; of course there is more to do than that, and if you are interested you can check out ScreenRecordHelper.
But that is not really the point. It is extremely easy to run into this requirement: while recording the screen, capture the system (media) audio, but not the ambient sound around the phone.
Recall that we chose an audio source when initializing the MediaRecorder, so let's look at what MediaRecorder.setAudioSource() actually supports.
According to the official documentation, the following audio sources can be configured. The official comments are lengthy, so here is a quick summary of the commonly used options:
// Microphone tuned for video recording, oriented the same way as the camera; falls back to the default microphone
MediaRecorder.AudioSource.CAMCORDER
// Default audio source
MediaRecorder.AudioSource.DEFAULT
// The device's main microphone
MediaRecorder.AudioSource.MIC
// Both sides of a voice call (uplink + downlink)
MediaRecorder.AudioSource.VOICE_CALL
// Microphone tuned for voice communication (e.g. VoIP), with processing such as echo cancellation
MediaRecorder.AudioSource.VOICE_COMMUNICATION
// Voice call downlink (the other party's voice)
MediaRecorder.AudioSource.VOICE_DOWNLINK
// Tuned for voice recognition
MediaRecorder.AudioSource.VOICE_RECOGNITION
// Voice call uplink (this device's voice)
MediaRecorder.AudioSource.VOICE_UPLINK
At first glance none of these is the option we want, and if you test them one by one you will confirm it: the media audio we want to capture is always mixed with ambient sound picked up by the microphone.
Strangely, the built-in screen recorder on some Huawei phones can do exactly this, which is a testament to the magic of ROM customization; then again, quirky third-party ROMs have always made Android adaptation painful.
A workaround: stripping out the ambient sound
Since calling the system APIs never achieved what we wanted, namely to **record the screen while background music plays, and have the saved video contain only the background music and no ambient sound,** we had to find another way.
The idea is not hard to come up with: record the video without setting any audio source at all, which yields a completely silent video. If we then merge the music into that video, it satisfies the user's requirement perfectly.
For mixing audio and video, most people immediately think of the famous FFmpeg. But compiling and optimizing your own stable, usable FFmpeg build takes a lot of time, and, more importantly, we cannot justify a large APK size increase for such a simple feature. So we turn instead to the platform's own MediaExtractor and MediaMuxer.
According to the official documentation, M4A and AAC audio files can be muxed into a video file, and with the reference material it is not hard to write code like the following.
/**
 * https://stackoverflow.com/questions/31572067/android-how-to-mux-audio-file-and-video-file
 */
private fun syntheticAudio(videoDuration: Long, audioDuration: Long, afdd: AssetFileDescriptor) {
    Log.d(TAG, "start syntheticAudio")
    val newFile = File(savePath, "$saveName.mp4")
    if (newFile.exists()) {
        newFile.delete()
    }
    try {
        newFile.createNewFile()
        val videoExtractor = MediaExtractor()
        videoExtractor.setDataSource(saveFile!!.absolutePath)
        val audioExtractor = MediaExtractor()
        afdd.apply {
            // Trim the audio proportionally so it does not outlast the video
            audioExtractor.setDataSource(fileDescriptor, startOffset, length * videoDuration / audioDuration)
        }
        val muxer = MediaMuxer(newFile.absolutePath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        videoExtractor.selectTrack(0)
        val videoFormat = videoExtractor.getTrackFormat(0)
        val videoTrack = muxer.addTrack(videoFormat)
        audioExtractor.selectTrack(0)
        val audioFormat = audioExtractor.getTrackFormat(0)
        val audioTrack = muxer.addTrack(audioFormat)
        var sawEOS = false
        var frameCount = 0
        val offset = 100
        val sampleSize = 1000 * 1024
        val videoBuf = ByteBuffer.allocate(sampleSize)
        val audioBuf = ByteBuffer.allocate(sampleSize)
        val videoBufferInfo = MediaCodec.BufferInfo()
        val audioBufferInfo = MediaCodec.BufferInfo()
        videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC)
        audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC)
        muxer.start()
        // Frames per second; some phones (e.g. the OPPO R9) do not report MediaFormat.KEY_FRAME_RATE
        val frameRate = if (videoFormat.containsKey(MediaFormat.KEY_FRAME_RATE)) {
            videoFormat.getInteger(MediaFormat.KEY_FRAME_RATE)
        } else {
            31
        }
        // Average interval between frames, in microseconds
        val videoSampleTime = 1000 * 1000 / frameRate
        while (!sawEOS) {
            videoBufferInfo.offset = offset
            videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset)
            if (videoBufferInfo.size < 0) {
                sawEOS = true
                videoBufferInfo.size = 0
            } else {
                videoBufferInfo.presentationTimeUs += videoSampleTime
                videoBufferInfo.flags = videoExtractor.sampleFlags
                muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo)
                videoExtractor.advance()
                frameCount++
            }
        }
        var sawEOS2 = false
        var frameCount2 = 0
        while (!sawEOS2) {
            frameCount2++
            audioBufferInfo.offset = offset
            audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset)
            if (audioBufferInfo.size < 0) {
                sawEOS2 = true
                audioBufferInfo.size = 0
            } else {
                audioBufferInfo.presentationTimeUs = audioExtractor.sampleTime
                audioBufferInfo.flags = audioExtractor.sampleFlags
                muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo)
                audioExtractor.advance()
            }
        }
        muxer.stop()
        muxer.release()
        videoExtractor.release()
        audioExtractor.release()
        // Delete the silent temporary video
        saveFile?.delete()
    } catch (e: Exception) {
        Log.e(TAG, "Mixer Error: ${e.message}")
        // Mixing failed; keep the silent video as-is
        saveFile?.renameTo(newFile)
    } finally {
        afdd.close()
        Handler().post {
            refreshVideo(newFile)
            saveFile = null
        }
    }
}
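Two bits of arithmetic in syntheticAudio() are worth calling out on their own: the per-frame presentation interval derived from the frame rate (with the hard-coded 31 fps fallback for ROMs that omit KEY_FRAME_RATE), and the proportional byte trim applied to the audio descriptor so the audio does not outlast the video (which implicitly assumes roughly constant-bitrate audio). A standalone sketch, with function names invented for illustration:

```kotlin
// frameIntervalUs and trimmedAudioLength are invented names wrapping the
// arithmetic from syntheticAudio() so it can be checked in isolation.

// Microseconds between video frames; falls back to 31 fps when the format
// does not report MediaFormat.KEY_FRAME_RATE (as on some ROMs).
fun frameIntervalUs(frameRate: Int?): Long = 1_000_000L / (frameRate ?: 31)

// How many bytes of the audio descriptor to keep so the audio track lasts
// about as long as the video; assumes roughly constant-bitrate audio.
fun trimmedAudioLength(audioByteLength: Long, videoDurationMs: Long, audioDurationMs: Long): Long =
    audioByteLength * videoDurationMs / audioDurationMs

fun main() {
    println(frameIntervalUs(25))   // 40000 µs per frame at 25 fps
    println(frameIntervalUs(null)) // 32258 µs with the 31 fps fallback
    println(trimmedAudioLength(1_000_000, 5_000, 10_000)) // 500000: keep half
}
```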
The resulting screen-recording helper class: ScreenRecordHelper
After a round of compatibility testing, this code has been running stably in two apps that each see more than a million daily active users, so I have extracted it into a small utility library to share. It is very simple to use and the code comments are thorough.
The README (github.com/nanchen2251…) is reproduced below.
Step 1. Add it in your root build.gradle at the end of repositories:
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
Step 2. Add the dependency
dependencies {
    implementation 'com.github.nanchen2251:ScreenRecordHelper:1.0.2'
}
Step 3. Just use it in your project
// Start the screen recording
if (screenRecordHelper == null) {
    screenRecordHelper = ScreenRecordHelper(this, null, PathUtils.getExternalStoragePath() + "/nanchen")
}
screenRecordHelper?.apply {
    if (!isRecording) {
        // If you want to record audio, set recordAudio to true
        screenRecordHelper?.startRecord()
    }
}

// You must override onActivityResult
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP && data != null) {
        screenRecordHelper?.onActivityResult(requestCode, resultCode, data)
    }
}

// Stop the screen recording
screenRecordHelper?.apply {
    if (isRecording) {
        stopRecord()
    }
}
Step 4. If you want to mix audio into your video, just call:
// parameter1 -> the final video duration you want
// parameter2 -> the audio's duration
// parameter3 -> the assets resource
stopRecord(duration, audioDuration, afdd)
Step 5. If anything is still unclear, please refer to the demo.
My own ability is limited; although the library has withstood the test of our company's products, there are surely places where it does not cover everything. I hope more experienced developers will kindly share advice, and please report any compatibility problems directly as issues. Thx.
Reference articles: lastwarmth.win/2018/11/23/… juejin.cn/post/684490…
I am Nanchen. I run my public account with nothing but heart; you are welcome to follow me.
Nanchen, 7K stars on GitHub, a frequent guest on tech blog platforms. From Android, but not only Android. A little technical, a little sentimental. Endless open source, endless rambling; come listen to me brag, you won't regret it ~