[Disclaimer]
First of all, this series of articles is based on my own understanding and practice; there may be mistakes, and corrections are welcome.
Secondly, this is an introductory series that covers only the essentials; for deeper knowledge there are plenty of blog posts online. Finally, while writing I refer to articles shared by others and list them at the end of each article; thanks to those authors for sharing.
Writing is not easy; please credit the source when reposting!
Tutorial code: [portal]
Contents
1. Android audio and video hard decoding:
- 1. Basic audio and video knowledge
- 2. The audio and video hard decoding flow: building a basic decoding framework
- 3. Audio and video playback: audio/video synchronization
- 4. Audio and video demuxing and remuxing: generating an MP4
2. Using OpenGL to render video frames
- 1. A first look at OpenGL ES
- 2. Using OpenGL to render video images
- 3. OpenGL rendering of multiple videos: picture-in-picture
- 4. A closer look at OpenGL's EGL
- 5. OpenGL FBO data buffers
- 6. Android audio and video hard encoding: generating an MP4
3. Android FFmpeg audio and video decoding
- 1. Compiling the FFmpeg so libraries
- 2. Introducing FFmpeg into Android
- 3. Android FFmpeg video decoding and playback
- 4. Android FFmpeg + OpenSL ES audio decoding and playback
- 5. Android FFmpeg + OpenGL ES video playback
- 6. Android FFmpeg simple MP4 synthesis: video demuxing and remuxing
- 7. Android FFmpeg video encoding
What you can learn from this article
Rendering multiple video images is the foundation of audio and video editing. This article explains how to render multiple video images with OpenGL, and how to blend, scale, and move them.
Foreword
It has been two weeks since the last update. A lot has been going on lately, so please bear with me if you are following this series; I will pick up the pace when I have time. Thank you for your attention and for keeping me on track.
Let’s see how to render multiple video images in OpenGL.
First, rendering multiple video images
In the previous article, I explained in detail how to render a video image in OpenGL and how to scale the picture proportionally. With the tools encapsulated over the previous articles, rendering multiple video images in OpenGL is very easy.
The OpenGL Renderer was very simple:
class SimpleRender(private val mDrawer: IDrawer): GLSurfaceView.Renderer {
override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
GLES20.glClearColor(0f, 0f, 0f, 0f)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
mDrawer.setTextureID(OpenGLTools.createTextureIds(1)[0])
}
override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
GLES20.glViewport(0, 0, width, height)
mDrawer.setWorldSize(width, height)
}
override fun onDrawFrame(gl: GL10?) {
mDrawer.draw()
}
}
It only supports one drawer, so change the drawer into a list in order to support multiple drawers.
class SimpleRender: GLSurfaceView.Renderer {
private val drawers = mutableListOf<IDrawer>()
override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
GLES20.glClearColor(0f, 0f, 0f, 0f)
val textureIds = OpenGLTools.createTextureIds(drawers.size)
for ((idx, drawer) in drawers.withIndex()) {
drawer.setTextureID(textureIds[idx])
}
}
override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
GLES20.glViewport(0, 0, width, height)
for (drawer in drawers) {
drawer.setWorldSize(width, height)
}
}
override fun onDrawFrame(gl: GL10?) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT or GLES20.GL_DEPTH_BUFFER_BIT)
drawers.forEach {
it.draw()
}
}
fun addDrawer(drawer: IDrawer) {
drawers.add(drawer)
}
}
Again, very simple:
- An addDrawer method is added so that more than one drawer can be registered.
- In onSurfaceCreated, a texture ID is set for each drawer.
- In onSurfaceChanged, the display area width and height are set for each drawer.
- In onDrawFrame, all drawers are iterated over and drawn.
Next, create a new page and generate multiple decoders and renderers.
<android.support.constraint.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent" xmlns:app="http://schemas.android.com/apk/res-auto">
<android.opengl.GLSurfaceView
android:id="@+id/gl_surface"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
</android.support.constraint.ConstraintLayout>
class MultiOpenGLPlayerActivity: AppCompatActivity() {
private val path = Environment.getExternalStorageDirectory().absolutePath + "/mvtest.mp4"
private val path2 = Environment.getExternalStorageDirectory().absolutePath + "/mvtest_2.mp4"
private val render = SimpleRender()
private val threadPool = Executors.newFixedThreadPool(10)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_opengl_player)
initFirstVideo()
initSecondVideo()
initRender()
}
private fun initFirstVideo() {
val drawer = VideoDrawer()
drawer.setVideoSize(1920, 1080)
drawer.getSurfaceTexture {
initPlayer(path, Surface(it), true)
}
render.addDrawer(drawer)
}
private fun initSecondVideo() {
val drawer = VideoDrawer()
drawer.setVideoSize(1920, 1080)
drawer.getSurfaceTexture {
initPlayer(path2, Surface(it), false)
}
render.addDrawer(drawer)
}
private fun initPlayer(path: String, sf: Surface, withSound: Boolean) {
val videoDecoder = VideoDecoder(path, null, sf)
threadPool.execute(videoDecoder)
videoDecoder.goOn()
if (withSound) {
val audioDecoder = AudioDecoder(path)
threadPool.execute(audioDecoder)
audioDecoder.goOn()
}
}
private fun initRender() {
gl_surface.setEGLContextClientVersion(2)
gl_surface.setRenderer(render)
}
}
The code is relatively simple: using the decoding and drawing tools packaged in earlier articles, two video images are added for rendering.
Of course, you can add more images to the OpenGL renderer in the same way.
Also, notice that rendering multiple videos really just means generating multiple texture IDs, using each ID to create a SurfaceTexture and a Surface, and finally handing that Surface to the decoder MediaCodec as its render target.
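`OpenGLTools.createTextureIds` comes from the earlier articles in this series. In case you don't have it handy, here is a minimal sketch of what such a helper typically looks like (an assumption on my part; the real implementation may also configure texture parameters):

```kotlin
import android.opengl.GLES20

object OpenGLTools {
    // Ask OpenGL for `count` texture names.
    // Must be called on the GL thread (e.g. inside onSurfaceCreated).
    fun createTextureIds(count: Int): IntArray {
        val ids = IntArray(count)
        GLES20.glGenTextures(count, ids, 0)
        return ids
    }
}
```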
Since the two videos used here are both 1920x1080, you will only see one of them, because they are stacked exactly on top of each other.
The two images are as follows:
Second, a taste of video editing
Now the two videos are stacked and the bottom one cannot be seen. Let's change the alpha value of the top video and make it translucent, so the bottom video shows through.
1) Translucency
First, add a new method to the IDrawer interface so that all drawers expose it uniformly:
interface IDrawer {
fun setVideoSize(videoW: Int, videoH: Int)
fun setWorldSize(worldW: Int, worldH: Int)
fun draw()
fun setTextureID(id: Int)
fun getSurfaceTexture(cb: (st: SurfaceTexture) -> Unit) {}
fun release()
// New: adjust the alpha value
fun setAlpha(alpha: Float)
}
In the VideoDrawer, store this value.
For easy reference, the whole VideoDrawer is posted here (skip ahead to the summary below if you only want to see the additions):
class VideoDrawer : IDrawer {
// Vertex coordinates
private val mVertexCoors = floatArrayOf(
-1f, -1f, 1f, -1f,
-1f, 1f, 1f, 1f
)
// Texture coordinates
private val mTextureCoors = floatArrayOf(
0f, 1f, 1f, 1f, 0f, 0f, 1f, 0f
)
private var mWorldWidth: Int = -1
private var mWorldHeight: Int = -1
private var mVideoWidth: Int = -1
private var mVideoHeight: Int = -1
private var mTextureId: Int = -1
private var mSurfaceTexture: SurfaceTexture? = null
private var mSftCb: ((SurfaceTexture) -> Unit)? = null
// OpenGL program ID
private var mProgram: Int = -1
// Matrix transform receiver
private var mVertexMatrixHandler: Int = -1
// Vertex coordinates receiver
private var mVertexPosHandler: Int = -1
// Texture coordinates receiver
private var mTexturePosHandler: Int = -1
// Texture receiver
private var mTextureHandler: Int = -1
// Alpha (translucency) receiver
private var mAlphaHandler: Int = -1
private lateinit var mVertexBuffer: FloatBuffer
private lateinit var mTextureBuffer: FloatBuffer
private var mMatrix: FloatArray? = null
private var mAlpha = 1f
init {
// step 1: Initialize vertex coordinates
initPos()
}
private fun initPos() {
val bb = ByteBuffer.allocateDirect(mVertexCoors.size * 4)
bb.order(ByteOrder.nativeOrder())
// Convert coordinate data to FloatBuffer, which is passed to OpenGL ES program
mVertexBuffer = bb.asFloatBuffer()
mVertexBuffer.put(mVertexCoors)
mVertexBuffer.position(0)
val cc = ByteBuffer.allocateDirect(mTextureCoors.size * 4)
cc.order(ByteOrder.nativeOrder())
mTextureBuffer = cc.asFloatBuffer()
mTextureBuffer.put(mTextureCoors)
mTextureBuffer.position(0)
}
private fun initDefMatrix() {
if (mMatrix != null) return
if (mVideoWidth != -1 && mVideoHeight != -1 && mWorldWidth != -1 && mWorldHeight != -1) {
mMatrix = FloatArray(16)
var prjMatrix = FloatArray(16)
val originRatio = mVideoWidth / mVideoHeight.toFloat()
val worldRatio = mWorldWidth / mWorldHeight.toFloat()
if (mWorldWidth > mWorldHeight) {
if (originRatio > worldRatio) {
val actualRatio = originRatio / worldRatio
Matrix.orthoM(
prjMatrix, 0,
-actualRatio, actualRatio,
-1f, 1f,
3f, 5f
)
} else {
// The video ratio is smaller than the window ratio: scaling the height would overflow, so base the height on the window and scale the width
val actualRatio = worldRatio / originRatio
Matrix.orthoM(
prjMatrix, 0,
-1f.1f,
-actualRatio, actualRatio,
3f, 5f
)
}
} else {
if (originRatio > worldRatio) {
val actualRatio = originRatio / worldRatio
Matrix.orthoM(
prjMatrix, 0,
-1f.1f,
-actualRatio, actualRatio,
3f, 5f
)
} else {
// The video ratio is smaller than the window ratio: scaling the height would overflow, so base the height on the window and scale the width
val actualRatio = worldRatio / originRatio
Matrix.orthoM(
prjMatrix, 0,
-actualRatio, actualRatio,
-1f, 1f,
3f, 5f
)
}
}
// Set the camera position
val viewMatrix = FloatArray(16)
Matrix.setLookAtM(
viewMatrix, 0,
0f, 0f, 5.0f,
0f, 0f, 0f,
0f, 1.0f, 0f
)
// Compute the transformation matrix
Matrix.multiplyMM(mMatrix, 0, prjMatrix, 0, viewMatrix, 0)
}
}
override fun setVideoSize(videoW: Int, videoH: Int) {
mVideoWidth = videoW
mVideoHeight = videoH
}
override fun setWorldSize(worldW: Int, worldH: Int) {
mWorldWidth = worldW
mWorldHeight = worldH
}
override fun setAlpha(alpha: Float) {
mAlpha = alpha
}
override fun setTextureID(id: Int) {
mTextureId = id
mSurfaceTexture = SurfaceTexture(id)
mSftCb?.invoke(mSurfaceTexture!!)
}
override fun getSurfaceTexture(cb: (st: SurfaceTexture) -> Unit) {
mSftCb = cb
}
override fun draw() {
if (mTextureId != -1) {
initDefMatrix()
// step 2: Create, compile, and start the OpenGL shader
createGLPrg()
// step 3: Activate and bind the texture unit
activateTexture()
// [Step 4: Bind image to texture unit]
updateTexture()
// [Step 5: Start rendering]
doDraw()
}
}
private fun createGLPrg() {
if (mProgram == -1) {
val vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, getVertexShader())
val fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, getFragmentShader())
// Create OpenGL ES program, note: need to create in OpenGL render thread, otherwise cannot render
mProgram = GLES20.glCreateProgram()
// Add a vertex shader to the program
GLES20.glAttachShader(mProgram, vertexShader)
// Add the fragment shader to the program
GLES20.glAttachShader(mProgram, fragmentShader)
// Connect to the shader program
GLES20.glLinkProgram(mProgram)
mVertexMatrixHandler = GLES20.glGetUniformLocation(mProgram, "uMatrix")
mVertexPosHandler = GLES20.glGetAttribLocation(mProgram, "aPosition")
mTextureHandler = GLES20.glGetUniformLocation(mProgram, "uTexture")
mTexturePosHandler = GLES20.glGetAttribLocation(mProgram, "aCoordinate")
mAlphaHandler = GLES20.glGetAttribLocation(mProgram, "alpha")
}
// Use the OpenGL program
GLES20.glUseProgram(mProgram)
}
private fun activateTexture() {
// Activate the specified texture unit
GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
// Bind the texture ID to the texture unit
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId)
// Pass the active texture unit to the shader
GLES20.glUniform1i(mTextureHandler, 0)
// Set edge transition parameters
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR.toFloat())
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR.toFloat())
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE)
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE)
}
private fun updateTexture() {
mSurfaceTexture?.updateTexImage()
}
private fun doDraw() {
// Enable vertex handles
GLES20.glEnableVertexAttribArray(mVertexPosHandler)
GLES20.glEnableVertexAttribArray(mTexturePosHandler)
GLES20.glUniformMatrix4fv(mVertexMatrixHandler, 1, false, mMatrix, 0)
// Set the shader parameters. The second argument is the number of values per vertex (x and y), hence 2
GLES20.glVertexAttribPointer(mVertexPosHandler, 2, GLES20.GL_FLOAT, false, 0, mVertexBuffer)
GLES20.glVertexAttribPointer(mTexturePosHandler, 2, GLES20.GL_FLOAT, false, 0, mTextureBuffer)
GLES20.glVertexAttrib1f(mAlphaHandler, mAlpha)
// Start drawing
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)
}
override fun release() {
GLES20.glDisableVertexAttribArray(mVertexPosHandler)
GLES20.glDisableVertexAttribArray(mTexturePosHandler)
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0)
GLES20.glDeleteTextures(1, intArrayOf(mTextureId), 0)
GLES20.glDeleteProgram(mProgram)
}
private fun getVertexShader(): String {
return "attribute vec4 aPosition;" +
"precision mediump float;" +
"uniform mat4 uMatrix;" +
"attribute vec2 aCoordinate;" +
"varying vec2 vCoordinate;" +
"attribute float alpha;" +
"varying float inAlpha;" +
"void main() {" +
" gl_Position = uMatrix*aPosition;" +
" vCoordinate = aCoordinate;" +
" inAlpha = alpha;" +
"}"
}
private fun getFragmentShader(): String {
// Be sure to add a newline "\n", otherwise it will be mixed with the precision on the next line and cause compilation errors
return "#extension GL_OES_EGL_image_external : require\n" +
"precision mediump float;" +
"varying vec2 vCoordinate;" +
"varying float inAlpha;" +
"uniform samplerExternalOES uTexture;" +
"void main() {" +
" vec4 color = texture2D(uTexture, vCoordinate);" +
" gl_FragColor = vec4(color.r, color.g, color.b, inAlpha);" +
"}"
}
private fun loadShader(type: Int, shaderCode: String): Int {
// Create a vertex shader or fragment shader based on the type
val shader = GLES20.glCreateShader(type)
// Add the resource to the shader and compile it
GLES20.glShaderSource(shader, shaderCode)
GLES20.glCompileShader(shader)
return shader
}
}
In fact, very little has changed from the previous renderer:
class VideoDrawer : IDrawer {
// omit extraneous code......
// Alpha (translucency) receiver
private var mAlphaHandler: Int = -1
// Translucency value
private var mAlpha = 1f
override fun setAlpha(alpha: Float) {
mAlpha = alpha
}
private fun createGLPrg() {
if (mProgram == -1) {
// omit extraneous code......
mAlphaHandler = GLES20.glGetAttribLocation(mProgram, "alpha")
// ...
}
// Use the OpenGL program
GLES20.glUseProgram(mProgram)
}
private fun doDraw() {
// omit extraneous code......
GLES20.glVertexAttrib1f(mAlphaHandler, mAlpha)
// ...
}
private fun getVertexShader(): String {
return "attribute vec4 aPosition;" +
"precision mediump float;" +
"uniform mat4 uMatrix;" +
"attribute vec2 aCoordinate;" +
"varying vec2 vCoordinate;" +
"attribute float alpha;" +
"varying float inAlpha;" +
"void main() {" +
" gl_Position = uMatrix*aPosition;" +
" vCoordinate = aCoordinate;" +
" inAlpha = alpha;" +
"}"
}
private fun getFragmentShader(): String {
// Be sure to add a newline "\n", otherwise it will be mixed with the precision on the next line and cause compilation errors
return "#extension GL_OES_EGL_image_external : require\n" +
"precision mediump float;" +
"varying vec2 vCoordinate;" +
"varying float inAlpha;" +
"uniform samplerExternalOES uTexture;" +
"void main() {" +
" vec4 color = texture2D(uTexture, vCoordinate);" +
" gl_FragColor = vec4(color.r, color.g, color.b, inAlpha);" +
"}"}}Copy the code
Focus on the code for two shaders:
In the vertex shader, an alpha attribute is declared; its value is supplied from the Java/Kotlin side, assigned by the vertex shader to inAlpha, and carried through to the fragment shader.
Here is a brief description of how parameters are passed to the fragment shader.
An attribute value set from Java cannot reach the fragment shader directly; it has to be relayed through the vertex shader and handed over via a varying.
Vertex shader input and output
- The input
Built-in variables: built into GLSL and can be regarded as OpenGL's drawing-context information.
Uniform variables: commonly used by the Java program to pass transformation matrices, materials, lighting parameters, colors and so on, e.g. uniform mat4 uMatrix;
Attribute variable: Used to pass in vertex data, such as vertex coordinates, normals, texture coordinates, and vertex colors.
- The output
Built-in variables: GLSL built-in variables, such as gl_Position.
Varying variables: used by the vertex shader to pass data to the fragment shader. The same variable must be declared identically in both the vertex shader and the fragment shader, like inAlpha above.
Fragment shader input and output
- The input
Built-in variables: same as for the vertex shader.
Varying variables: receive the data output by the vertex shader; the declaration must match the one in the vertex shader.
- The output
Built-in variables: GLSL built-in variables, such as gl_FragColor.
Once you know how to pass values, the rest is obvious.
- Get the location of the vertex shader's alpha attribute and pass the value in before drawing.
- In the fragment shader, replace the alpha of the color sampled from the texture, and assign the result to gl_FragColor for output.
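Incidentally, the reason alpha travels the attribute → varying route here is that an attribute can only feed the vertex shader. A uniform, by contrast, can be read by the fragment shader directly, so an alternative approach (not what this article's code does; shown only as a sketch, with a made-up uAlpha name) would be:

```kotlin
// Sketch of an alternative: declare alpha as a uniform in the fragment shader
// and set it directly from the Java/Kotlin side (uAlpha is a hypothetical name).
private fun getFragmentShaderWithUniformAlpha(): String {
    return "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;" +
            "varying vec2 vCoordinate;" +
            "uniform float uAlpha;" +          // set from Kotlin, no varying needed
            "uniform samplerExternalOES uTexture;" +
            "void main() {" +
            "  vec4 color = texture2D(uTexture, vCoordinate);" +
            "  gl_FragColor = vec4(color.rgb, uAlpha);" +
            "}"
}

// In createGLPrg(): mAlphaHandler = GLES20.glGetUniformLocation(mProgram, "uAlpha")
// In doDraw():      GLES20.glUniform1f(mAlphaHandler, mAlpha)
```

The attribute route used in the article works just as well here, because the alpha is constant for the whole picture; an attribute would only matter if the value needed to differ per vertex.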
Then, in MultiOpenGLPlayerActivity, set the translucency of the upper picture:
class MultiOpenGLPlayerActivity: AppCompatActivity() {
// omit irrelevant code...
private fun initSecondVideo() {
val drawer = VideoDrawer()
// Set the translucency (alpha) value
drawer.setAlpha(0.5f)
drawer.setVideoSize(1920, 1080)
drawer.getSurfaceTexture {
initPlayer(path2, Surface(it), false)
}
render.addDrawer(drawer)
}
// ...
}
Just when you think a perfectly translucent picture will appear, you find the picture is still not transparent. Why?
Go back to SimpleRender: the reason is that OpenGL blending was not enabled. Two changes are needed:
- Enable blending mode in onSurfaceCreated;
- Before you start drawing each frame in onDrawFrame, clear the screen, otherwise there will be screen residue.
class SimpleRender: GLSurfaceView.Renderer {
private val drawers = mutableListOf<IDrawer>()
override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
GLES20.glClearColor(0f, 0f, 0f, 0f)
//------ Enable blending for translucency ---------
// Enable blend mode
GLES20.glEnable(GLES20.GL_BLEND)
// Configure the blend function
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA)
//------------------------------
val textureIds = OpenGLTools.createTextureIds(drawers.size)
for ((idx, drawer) in drawers.withIndex()) {
drawer.setTextureID(textureIds[idx])
}
}
override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
GLES20.glViewport(0, 0, width, height)
for (drawer in drawers) {
drawer.setWorldSize(width, height)
}
}
override fun onDrawFrame(gl: GL10?) {
// Clear the screen, otherwise there will be screen residue
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT or GLES20.GL_DEPTH_BUFFER_BIT)
drawers.forEach {
it.draw()
}
}
fun addDrawer(drawer: IDrawer) {
drawers.add(drawer)
}
}
This way, you can see a translucent video superimposed on another video. (With this blend function, the displayed color is src.rgb × src.a + dst.rgb × (1 − src.a), so an alpha of 0.5 mixes the two videos half and half.)
See? Does it feel a bit like video editing already?
This is in fact the most basic principle of video editing: essentially all video editing relies on shaders to transform the picture.
Next, let's look at two more basic transformations: moving and scaling.
2) Moving
Now, let’s see how you can change the position of the video by touching and dragging.
As mentioned in the previous article, the shift and scaling of an image or video is basically done by matrix transformation.
Android's Matrix class provides a method for matrix translation:
/**
* Translates matrix m by x, y, and z in place.
*
* @param m matrix
* @param mOffset index into m where the matrix starts
* @param x translation factor x
* @param y translation factor y
* @param z translation factor z
*/
public static void translateM(
float[] m, int mOffset,
float x, float y, float z) {
for (int i=0 ; i<4 ; i++) {
int mi = mOffset + i;
m[12 + mi] += m[mi] * x + m[4 + mi] * y + m[8 + mi] * z;
}
}
Essentially it updates the translation components of the 4×4 matrix (the last four floats, m[12]..m[15], in OpenGL's column-major layout).
Where x, y, and z are the distances moved relative to the current position.
The thing to notice is that the translation amount gets multiplied by the matrix's scale. You can verify this with pen and paper, or with the small check below.
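If you would rather not do the pen-and-paper exercise, here is a minimal, self-contained Kotlin check. It has no Android dependency: it simply mirrors the scaleM/translateM loops quoted in this article (the helper names are mine):

```kotlin
// Column-major 4x4 helpers that mirror android.opengl.Matrix.scaleM/translateM,
// used only to verify that a scaled matrix also scales any later translation.
fun identity() = FloatArray(16) { if (it % 5 == 0) 1f else 0f }

fun scaleM(m: FloatArray, x: Float, y: Float, z: Float) {
    for (i in 0..3) {
        m[i] *= x
        m[4 + i] *= y
        m[8 + i] *= z
    }
}

fun translateM(m: FloatArray, x: Float, y: Float, z: Float) {
    for (i in 0..3) {
        m[12 + i] += m[i] * x + m[4 + i] * y + m[8 + i] * z
    }
}

fun main() {
    val m = identity()
    scaleM(m, 1f, 2f, 1f)       // magnify 2x in the Y direction
    translateM(m, 0f, 1f, 0f)   // ask for a translation of 1 in Y
    // m[13] is the Y translation component: it ends up as 2.0, not 1.0,
    // because translateM multiplies the requested offset by the matrix's scale.
    println("y translation = ${m[13]}")
}
```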
If the current matrix is the identity matrix, the translateM method above can be used directly for the translation.
But, as explained in the previous article, the video picture has been scaled to correct its aspect ratio, so the current matrix is not the identity matrix.
Therefore, to pan the picture, x, y and z have to be compensated for that scale (otherwise the distance moved is distorted by the scale factor already in the matrix).
So there are two ways to make the picture move by the intended distance:
- Restore the matrix to the identity matrix -> Move -> Scale again
- Keep the current matrix -> scale the move distance to compensate -> move
A lot of people use the first one, but here we use the second one.
- Record scaling
In the last article, you learned how to calculate the scaling factor:
ratio = videoRatio / worldRatio   or   ratio = worldRatio / videoRatio
These correspond to the width or the height scale factor. In the VideoDrawer, record the width and height scale factors.
class VideoDrawer : IDrawer {
// omit extraneous code......
private var mWidthRatio = 1f
private var mHeightRatio = 1f
private fun initDefMatrix() {
if (mMatrix != null) return
if (mVideoWidth != -1 && mVideoHeight != -1 && mWorldWidth != -1 && mWorldHeight != -1) {
mMatrix = FloatArray(16)
var prjMatrix = FloatArray(16)
val originRatio = mVideoWidth / mVideoHeight.toFloat()
val worldRatio = mWorldWidth / mWorldHeight.toFloat()
if (mWorldWidth > mWorldHeight) {
if (originRatio > worldRatio) {
mHeightRatio = originRatio / worldRatio
Matrix.orthoM(
prjMatrix, 0,
-mWidthRatio, mWidthRatio,
-mHeightRatio, mHeightRatio,
3f, 5f
)
} else {
// The video ratio is smaller than the window ratio: scaling the height would overflow, so base the height on the window and scale the width
mWidthRatio = worldRatio / originRatio
Matrix.orthoM(
prjMatrix, 0,
-mWidthRatio, mWidthRatio,
-mHeightRatio, mHeightRatio,
3f, 5f
)
}
} else {
if (originRatio > worldRatio) {
mHeightRatio = originRatio / worldRatio
Matrix.orthoM(
prjMatrix, 0,
-mWidthRatio, mWidthRatio,
-mHeightRatio, mHeightRatio,
3f, 5f
)
} else {
// The video ratio is smaller than the window ratio: scaling the height would overflow, so base the height on the window and scale the width
mWidthRatio = worldRatio / originRatio
Matrix.orthoM(
prjMatrix, 0,
-mWidthRatio, mWidthRatio,
-mHeightRatio, mHeightRatio,
3f, 5f
)
}
}
// Set the camera position
val viewMatrix = FloatArray(16)
Matrix.setLookAtM(
viewMatrix, 0,
0f, 0f, 5.0f,
0f, 0f, 0f,
0f, 1.0f, 0f
)
// Compute the transformation matrix
Matrix.multiplyMM(mMatrix, 0, prjMatrix, 0, viewMatrix, 0)
}
}
// Translation
fun translate(dx: Float, dy: Float) {
Matrix.translateM(mMatrix, 0, dx * mWidthRatio * 2, -dy * mHeightRatio * 2, 0f)
}
// ...
}
In the code, depending on whether the width or the height was scaled, the corresponding width or height scale factor is recorded.
Then, in the translate method, dx and dy are multiplied by these factors. So how are the factors worked out?
- Calculating the scale factor
First, look at how an ordinary matrix translation is affected by scale.
Take an identity matrix magnified 2x in the Y direction: after Matrix.translateM, the actual translation also comes out 2x the requested distance (exactly what the small check earlier demonstrates).
So, to get the intended distance back, this multiple has to be divided out.
The final result is:
sx = dx / w_ratio
sy = dy / h_ratio
Now let's see how to work out the factor for the OpenGL video picture.
Here the matrix involved is the OpenGL orthographic projection matrix. We already know that left/right and top/bottom are negatives of each other and equal to ±w_ratio and ±h_ratio of the video picture, so the projection effectively scales X by 1/w_ratio and Y by 1/h_ratio.
After a Matrix.translateM call, the translation actually obtained is therefore:
X: 1/w_ratio * dx
Y: 1/h_ratio * dy
Therefore, the correct translation can be obtained as:
sx = dx * w_ratio
sy = dy * h_ratio
But why is the translation in the code also multiplied by 2? That is:
fun translate(dx: Float, dy: Float) {
Matrix.translateM(mMatrix, 0, dx*mWidthRatio*2, -dy*mHeightRatio*2, 0f)
}
So first of all, what do we mean by dx and dy?
dx = (curX - prevX) / GLSurfaceView_Width
dy = (curY - prevY) / GLSurfaceView_Height

curX/curY and prevX/prevY are the x/y coordinates of the current and previous finger touch points.
So dx and dy are normalized distances in the range 0 to 1.
They map onto the OpenGL world coordinates as follows:
In the X direction: (left, right) -> (-w_ratio, w_ratio)
In the Y direction: (bottom, top) -> (-h_ratio, h_ratio)
That is, the full width of the OpenGL world coordinates is 2 × w_ratio and the full height is 2 × h_ratio. To convert the normalized (0~1) touch distance into the corresponding distance in world coordinates, it therefore has to be multiplied by 2: dragging a finger across the whole view (dx = 1) should move the picture across the whole world width, which spans 2 × w_ratio.
Finally, note the minus sign in front of the y shift: the positive Y direction of the Android screen points down, while the Y direction of the OpenGL world coordinates points up, exactly the opposite.
- Get the touch distance and pan the picture
To get the finger touch points, a custom GLSurfaceView is needed.
class DefGLSurfaceView : GLSurfaceView {
constructor(context: Context): super(context)
constructor(context: Context, attrs: AttributeSet): super(context, attrs)
private var mPrePoint = PointF()
private var mDrawer: VideoDrawer? = null
override fun onTouchEvent(event: MotionEvent): Boolean {
when (event.action) {
MotionEvent.ACTION_DOWN -> {
mPrePoint.x = event.x
mPrePoint.y = event.y
}
MotionEvent.ACTION_MOVE -> {
val dx = (event.x - mPrePoint.x) / width
val dy = (event.y - mPrePoint.y) / height
mDrawer?.translate(dx, dy)
mPrePoint.x = event.x
mPrePoint.y = event.y
}
}
return true
}
fun addDrawer(drawer: VideoDrawer) {
mDrawer = drawer
}
}
The code is very simple. To keep the demo short, only one drawer is added, and there is no check of whether the finger actually touched the picture: any touch movement pans the picture.
Then use it in the layout:
<android.support.constraint.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent">
<com.cxp.learningvideo.opengl.DefGLSurfaceView
android:id="@+id/gl_surface"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
</android.support.constraint.ConstraintLayout>
Finally, in the Activity, pass the drawer of the top picture to the DefGLSurfaceView via addDrawer.
private fun initSecondVideo() {
val drawer = VideoDrawer()
drawer.setVideoSize(1920, 1080)
drawer.getSurfaceTexture {
initPlayer(path2, Surface(it), false)
}
render.addDrawer(drawer)
// Set the drawer that will be moved by touch
gl_surface.addDrawer(drawer)
}
In this way, you can move the screen around.
3) Scaling
It’s much easier to zoom than to move.
Android's Matrix class also provides a method for matrix scaling:
/**
* Scales matrix m in place by sx, sy, and sz.
*
* @param m matrix to scale
* @param mOffset index into m where the matrix starts
* @param x scale factor x
* @param y scale factor y
* @param z scale factor z
*/
public static void scaleM(float[] m, int mOffset,
float x, float y, float z) {
for (int i=0 ; i<4 ; i++) {
int mi = mOffset + i;
m[ mi] *= x;
m[ 4 + mi] *= y;
m[ 8 + mi] *= z;
}
}
This method is also very simple: it just multiplies the corresponding components of the matrix by x, y and z.
Add a scale method to the VideoDrawer:
class VideoDrawer : IDrawer {
// omit extraneous code.......
fun scale(sx: Float, sy: Float) {
Matrix.scaleM(mMatrix, 0, sx, sy, 1f)
mWidthRatio /= sx
mHeightRatio /= sy
}
// ...
}
One thing to note here: when a scale is applied, it also has to be folded into the scale factors recorded from the projection matrix, so that later translations can still compensate the distance correctly.
Note that this is (original scale factor ÷ new scale factor), not a multiplication, because for the projection matrix's ratio the rule is "the larger the ratio, the smaller the picture".
Finally, set a scale factor for the picture, for example 0.5f.
private fun initSecondVideo() {
val drawer = VideoDrawer()
drawer.setAlpha(0.5f)
drawer.setVideoSize(1920, 1080)
drawer.getSurfaceTexture {
initPlayer(path2, Surface(it), false)
}
render.addDrawer(drawer)
gl_surface.addDrawer(drawer)
// Set the scaling factor
Handler().postDelayed({
drawer.scale(0.5f, 0.5f)
}, 1000)
}
The effect is as follows:
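As a small extension (my own addition, not part of the original demo), scale() can also be driven interactively by a pinch gesture using Android's ScaleGestureDetector, building on the DefGLSurfaceView shown earlier. A rough sketch:

```kotlin
// Sketch: additions inside the DefGLSurfaceView class from the previous section.
private val mScaleDetector = ScaleGestureDetector(context,
    object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
        override fun onScale(detector: ScaleGestureDetector): Boolean {
            // scaleFactor > 1 when the fingers move apart, < 1 when they pinch in
            mDrawer?.scale(detector.scaleFactor, detector.scaleFactor)
            return true
        }
    })

override fun onTouchEvent(event: MotionEvent): Boolean {
    mScaleDetector.onTouchEvent(event)
    // ...keep the single-finger translate handling from before...
    return true
}
```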
Third, final words
The above is the most basic knowledge used in audio and video development, but don't look down on it: many cool, dazzling effects are in fact built from these simplest transformations. I hope everyone has gained something from it.
See you next time!