“Submarine Challenge” is a small game on Douyin that uses face recognition to steer a submarine through obstacles. It has become very popular recently, and many people have probably played it.
On a whim I whipped up my own version with Android custom Views, and found that with a good idea you can build a fun app without any advanced technology. This post shares the development process.
Project address: github.com/vitaviva/ug…
The basic idea
The overall game view is divided into three layers:
- Camera: handles the camera preview and face detection
- Background: handles the obstacle-related logic
- Foreground: handles the submarine

The code is organized along the same three layers: the game layout is simply three views stacked on top of one another, with each layer doing its own work:
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- camera -->
    <TextureView
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

    <!-- background -->
    <com.my.ugame.bg.BackgroundView
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

    <!-- foreground -->
    <com.my.ugame.fg.ForegroundView
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</FrameLayout>
Development involves the following techniques, nothing sophisticated, all everyday tools:
- Camera: Camera2 for the camera preview and face detection
- Custom View: defines and controls the obstacles and the submarine
- Property animation: drives the movement of the obstacles and the submarine, plus various effects

The implementation of each part is described below.
Background
Bar
First, define the obstacle base class Bar, responsible for drawing a bitmap resource into a specified area. Since obstacles appear periodically from the right edge of the screen with random heights, the x, y, w and h of the drawing area must be settable dynamically:
/** Obstacle base class */
sealed class Bar(context: Context) {

    protected open val bmp = context.getDrawable(R.mipmap.bar)!!.toBitmap()

    protected abstract val srcRect: Rect

    private lateinit var dstRect: Rect

    private val paint = Paint()

    var h = 0F
        set(value) {
            field = value
            dstRect = Rect(0, 0, w.toInt(), h.toInt())
        }

    var w = 0F
        set(value) {
            field = value
            dstRect = Rect(0, 0, w.toInt(), h.toInt())
        }

    var x = 0F
        set(value) {
            view.x = value
            field = value
        }

    val y
        get() = view.y

    internal val view by lazy {
        BarView(context) {
            it?.apply {
                drawBitmap(bmp, srcRect, dstRect, paint)
            }
        }
    }
}

internal class BarView(context: Context?, private val block: (Canvas?) -> Unit) :
    View(context) {
    override fun onDraw(canvas: Canvas?) {
        block(canvas)
    }
}
Each obstacle has an upper and a lower part. They share the same image resource but must be drawn differently, so two subclasses are defined: UpBar and DnBar.
/** Obstacle at the top of the screen */
class UpBar(context: Context, container: ViewGroup) : Bar(context) {
private val _srcRect by lazy(LazyThreadSafetyMode.NONE) {
Rect(0, (bmp.height * (1 - (h / container.height))).toInt(), bmp.width, bmp.height)
}
override val srcRect: Rect
get() = _srcRect
}
The lower obstacle draws the same resource rotated 180 degrees:
/** Obstacle at the bottom of the screen */
class DnBar(context: Context, container: ViewGroup) : Bar(context) {

    override val bmp = super.bmp.let {
        Bitmap.createBitmap(
            it, 0, 0, it.width, it.height,
            Matrix().apply { postRotate(-180F) }, true
        )
    }

    private val _srcRect by lazy(LazyThreadSafetyMode.NONE) {
        Rect(0, 0, bmp.width, (bmp.height * (h / container.height)).toInt())
    }

    override val srcRect: Rect
        get() = _srcRect
}
BackgroundView
Next comes BackgroundView, the background container, which periodically creates and moves obstacles. A barsList manages all current obstacles; onLayout places them at the top and bottom of the screen.
/** Background container class */
class BackgroundView(context: Context, attrs: AttributeSet?) : FrameLayout(context, attrs) {

    internal val barsList = mutableListOf<Bars>()

    override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
        barsList.flatMap { listOf(it.up, it.down) }.forEach {
            val w = it.view.measuredWidth
            val h = it.view.measuredHeight
            when (it) {
                is UpBar -> it.view.layout(0, 0, w, h)
                else -> it.view.layout(0, height - h, w, height)
            }
        }
    }
Two methods, start and stop, control the beginning and end of the game:
- stop: when the game ends, all obstacles must stop moving
- start: when the game starts, a Timer periodically refreshes the obstacles
/** Game over: stop all obstacles */
@UiThread
fun stop() {
    _timer.cancel()
    _anims.forEach { it.cancel() }
    _anims.clear()
}

/**
 * Periodically refresh obstacles:
 * 1. Create
 * 2. Add to the view
 * 3. Move
 */
@UiThread
fun start() {
    _clearBars()
    Timer().also { _timer = it }.schedule(object : TimerTask() {
        override fun run() {
            post {
                _createBars(context, barsList.lastOrNull()).let {
                    _addBars(it)
                    _moveBars(it)
                }
            }
        }
    }, FIRST_APPEAR_DELAY_MILLIS, BAR_APPEAR_INTERVAL_MILLIS
    )
}

/** When the game restarts, clear the obstacles */
private fun _clearBars() {
    barsList.clear()
    removeAllViews()
}
Refreshing obstacles
Refreshing an obstacle goes through three steps:
- Create: obstacles are created in pairs, one upper and one lower
- Add: the pair is added to barsList and its Views are added to the container
- Move: a property animation moves them from right to left; once off screen they are removed

When creating an obstacle, it is given a random height. The randomness should not be too extreme, so the height is adjusted relative to the previous obstacle, which keeps the course both random and consistent.
/** Create a pair of obstacles */
private fun _createBars(context: Context, pre: Bars?) = run {
    val up = UpBar(context, this).apply {
        h = pre?.let {
            val step = when {
                it.up.h >= height - _gap - _step -> -_step
                it.up.h <= _step -> _step
                _random.nextBoolean() -> _step
                else -> -_step
            }
            it.up.h + step
        } ?: _barHeight
        w = _barWidth
    }

    val down = DnBar(context, this).apply {
        h = height - up.h - _gap
        w = _barWidth
    }

    Bars(up, down)
}
/** Add to the screen */
private fun _addBars(bars: Bars) {
barsList.add(bars)
bars.asArray().forEach {
addView(
it.view,
ViewGroup.LayoutParams(
it.w.toInt(),
it.h.toInt()
)
)
}
}
/** Move obstacles with a property animation */
private fun _moveBars(bars: Bars) {
    _anims.add(
        ValueAnimator.ofFloat(width.toFloat(), -_barWidth)
            .apply {
                addUpdateListener {
                    bars.asArray().forEach { bar ->
                        bar.x = it.animatedValue as Float
                        if (bar.x + bar.w <= 0) {
                            post { removeView(bar.view) }
                        }
                    }
                }
                duration = BAR_MOVE_DURATION_MILLIS
                interpolator = LinearInterpolator()
                start()
            }
    )
}
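The height rule used in _createBars can be modelled off-device as a pure function. This is an illustrative sketch: the function name and the gap/step default values are made up, standing in for the project's private `_gap`/`_step` constants.

```kotlin
import kotlin.random.Random

// Sketch of the obstacle-height rule: step up or down at random,
// but force the direction back when hitting the top or bottom limit,
// so consecutive bars differ by exactly one step and stay on screen.
// Constants are illustrative, not the project's real values.
fun nextUpBarHeight(
    prevHeight: Float,
    containerHeight: Float,
    gap: Float = 300F,
    step: Float = 100F,
    random: Random = Random.Default
): Float {
    val delta = when {
        prevHeight >= containerHeight - gap - step -> -step // too tall: force downward
        prevHeight <= step -> step                          // too short: force upward
        random.nextBoolean() -> step
        else -> -step
    }
    return prevHeight + delta
}
```

Bounding the step this way guarantees that the gap between the upper and lower bar always fits on screen, while still feeling unpredictable.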
Foreground
Boat
The Boat class wraps a custom View and provides a method to move it to specified coordinates:
/** Submarine */
class Boat(context: Context) {
internal val view by lazy { BoatView(context) }
val h
get() = view.height.toFloat()
val w
get() = view.width.toFloat()
val x
get() = view.x
val y
get() = view.y
/** Move to the specified coordinates */
fun moveTo(x: Int, y: Int) {
view.smoothMoveTo(x, y)
}
}
BoatView
The following things are done in a custom View
- Two resources are switched periodically to achieve the effect of searchlight flashing
- through
OverScroller
Make the movement smoother - Through a
Rotation Animation
, allowing the submarine to turn the Angle when moving, more flexible
internal class BoatView(context: Context?) : AppCompatImageView(context) {

    private val _scroller by lazy { OverScroller(context) }

    private val _res = arrayOf(
        R.mipmap.boat_000,
        R.mipmap.boat_002
    )

    private var _rotationAnimator: ObjectAnimator? = null

    private var _cnt = 0
        set(value) {
            field = if (value > 1) 0 else value
        }

    init {
        scaleType = ScaleType.FIT_CENTER
        _startFlashing()
    }

    private fun _startFlashing() {
        postDelayed({
            setImageResource(_res[_cnt++])
            _startFlashing()
        }, 500)
    }

    override fun computeScroll() {
        super.computeScroll()
        if (_scroller.computeScrollOffset()) {
            x = _scroller.currX.toFloat()
            y = _scroller.currY.toFloat()
            // Keep on drawing until the animation has finished.
            postInvalidateOnAnimation()
        }
    }

    /** Move smoothly to the target position */
    internal fun smoothMoveTo(x: Int, y: Int) {
        if (!_scroller.isFinished) _scroller.abortAnimation()
        _rotationAnimator?.let { if (it.isRunning) it.cancel() }

        val curX = this.x.toInt()
        val curY = this.y.toInt()
        val dx = (x - curX)
        val dy = (y - curY)
        _scroller.startScroll(curX, curY, dx, dy, 250)

        _rotationAnimator = ObjectAnimator.ofFloat(
            this, "rotation",
            rotation,
            Math.toDegrees(atan(dy / 100.toDouble())).toFloat()
        ).apply {
            duration = 100
            start()
        }
        postInvalidateOnAnimation()
    }
}
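The tilt in smoothMoveTo is worth a closer look: the vertical delta (in pixels) is damped by a factor of 100 before atan, so even a large jump produces only a modest angle. Extracted as a pure function (the function name is ours, for illustration):

```kotlin
import kotlin.math.atan

// Same formula as in smoothMoveTo: damp dy, then convert the arctangent
// to degrees. dy = 100px tilts the boat 45 degrees; dy = 0 keeps it level.
fun tiltDegrees(dy: Int): Float =
    Math.toDegrees(atan(dy / 100.0)).toFloat()
```

Because atan saturates, the tilt can never exceed ±90 degrees no matter how far the face jumps between frames.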
ForegroundView
- Holds and controls the submarine object through its boat member
- Implements CameraHelper.FaceDetectListener and moves the submarine to the position reported by the face-detection callback
- Builds the submarine and plays the opening animation when the game starts
/** Foreground container class */
class ForegroundView(context: Context, attrs: AttributeSet?) : FrameLayout(context, attrs),
    CameraHelper.FaceDetectListener {

    private var _isStop: Boolean = false

    internal var boat: Boat? = null

    /** The game stops; the submarine no longer moves */
    @MainThread
    fun stop() {
        _isStop = true
    }

    /** Receive the face-detection callback and move the submarine */
    override fun onFaceDetect(faces: Array<Face>, facesRect: ArrayList<RectF>) {
        if (_isStop) return
        if (facesRect.isNotEmpty()) {
            boat?.run {
                val face = facesRect.first()
                val x = (face.left - _widthOffset).toInt()
                val y = (face.top + _heightOffset).toInt()
                moveTo(x, y)
            }
            _face = facesRect.first()
        }
    }
}
Entrance animation
When the game starts, an animation moves the submarine to its starting position at the vertical midpoint of the screen:
/** The game starts with an entrance animation */
@MainThread
fun start() {
    _isStop = false
    if (boat == null) {
        boat = Boat(context).also {
            post {
                addView(it.view, _width, _width)
                AnimatorSet().apply {
                    play(
                        ObjectAnimator.ofFloat(
                            it.view,
                            "y", 0F, this@ForegroundView.height / 2f
                        )
                    ).with(
                        ObjectAnimator.ofFloat(it.view, "rotation", 0F, 360F)
                    )
                    doOnEnd { _ -> it.view.rotation = 0F }
                    duration = 1000
                }.start()
            }
        }
    }
}
Camera
The camera part consists of a TextureView and a CameraHelper. The TextureView carries the camera preview, while CameraHelper implements the following:
- Opening the camera: via CameraManager
- Camera switching: toggling between the front and rear cameras
- Preview: obtaining the preview sizes offered by the Camera and adapting the TextureView to them
- Face detection: detecting face positions and transforming their coordinates onto the TextureView
Adapting the PreviewSize
The preview sizes offered by the camera hardware may differ from the actual display size (i.e. the TextureView size), so the most suitable PreviewSize must be selected during camera initialization to avoid stretching and other artifacts on the TextureView.
class CameraHelper(val mActivity: Activity, private val mTextureView: TextureView) {
private lateinit var mCameraManager: CameraManager
private var mCameraDevice: CameraDevice? = null
private var mCameraCaptureSession: CameraCaptureSession? = null
private var canExchangeCamera = false // Whether the camera can be switched
private var mFaceDetectMatrix = Matrix() // Face detection coordinate transformation matrix
private var mFacesRect = ArrayList<RectF>() // Save the face coordinate information
private var mFaceDetectListener: FaceDetectListener? = null // Face detection callback
private lateinit var mPreviewSize: Size
/** Initialization */
private fun initCameraInfo() {
mCameraManager = mActivity.getSystemService(Context.CAMERA_SERVICE) as CameraManager
val cameraIdList = mCameraManager.cameraIdList
if (cameraIdList.isEmpty()) {
mActivity.toast("No cameras available")
return
}
// Get the camera direction
mCameraSensorOrientation =
mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)!!
// Get StreamConfigurationMap, which manages all output formats and sizes supported by the camera
val configurationMap =
mCameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
val previewSize = configurationMap.getOutputSizes(SurfaceTexture::class.java) // Available preview sizes
// When the screen is vertical, switch the width and height to make sure that the width is larger than the height
mPreviewSize = getBestSize(
mTextureView.height,
mTextureView.width,
previewSize.toList()
)
// Set TextureView according to the preview size
mTextureView.surfaceTexture.setDefaultBufferSize(mPreviewSize.width, mPreviewSize.height)
mTextureView.setAspectRatio(mPreviewSize.height, mPreviewSize.width)
}
The selection principle: the preview size should match the TextureView's aspect ratio, with an area as close to it as possible.
private fun getBestSize(
targetWidth: Int,
targetHeight: Int,
sizeList: List<Size>): Size {
val bigEnough = ArrayList<Size>() // A list of sizes larger than the specified width
val notBigEnough = ArrayList<Size>() // A list of sizes smaller than the specified width and height
for (size in sizeList) {
// Aspect ratio == Target aspect ratio
if (size.width == size.height * targetWidth / targetHeight
) {
if (size.width >= targetWidth && size.height >= targetHeight)
bigEnough.add(size)
else
notBigEnough.add(size)
}
}
// Select the smallest of bigEnough, or else the largest of notBigEnough
return when {
    bigEnough.size > 0 -> Collections.min(bigEnough, CompareSizesByArea())
    notBigEnough.size > 0 -> Collections.max(notBigEnough, CompareSizesByArea())
    else -> sizeList[0]
}
}
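CompareSizesByArea is referenced above but not shown; it presumably orders Sizes by pixel area. The whole selection rule can be modelled in pure Kotlin with a stand-in Size class (android.util.Size only exists on-device) and the area comparison inlined. This is a sketch of the rule, not the project's exact code:

```kotlin
// Stand-in for android.util.Size so the logic runs off-device.
data class Size(val width: Int, val height: Int)

// Model of getBestSize: among sizes with the target aspect ratio,
// prefer the smallest that still covers the target, else the largest below it.
fun bestSize(targetWidth: Int, targetHeight: Int, sizeList: List<Size>): Size {
    val byArea = compareBy<Size> { it.width.toLong() * it.height }
    // Aspect ratio == target aspect ratio (integer arithmetic, as in the original)
    val matching = sizeList.filter { it.width == it.height * targetWidth / targetHeight }
    val (bigEnough, notBigEnough) = matching.partition {
        it.width >= targetWidth && it.height >= targetHeight
    }
    return bigEnough.minWithOrNull(byArea)    // smallest size that still covers the target
        ?: notBigEnough.maxWithOrNull(byArea) // else the largest one below it
        ?: sizeList[0]                        // fallback: whatever the camera offers
}
```

Preferring the smallest covering size avoids wasting bandwidth on an oversized preview buffer while still filling the TextureView without upscaling.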
initFaceDetect() initializes the face-detection Matrix, described below.
Face recognition
For the camera preview, a CameraCaptureSession is created. The session's CaptureCallback delivers a TotalCaptureResult whose fields include the face-detection information.
/** Create a preview session */
private fun createCaptureSession(cameraDevice: CameraDevice) {
// Create a CameraCaptureSession object for camera preview
cameraDevice.createCaptureSession(
arrayListOf(surface),
object : CameraCaptureSession.StateCallback() {
override fun onConfigured(session: CameraCaptureSession) {
mCameraCaptureSession = session
session.setRepeatingRequest(
captureRequestBuilder.build(),
mCaptureCallBack,
mCameraHandler
)
}
},
mCameraHandler
)
}
private val mCaptureCallBack = object : CameraCaptureSession.CaptureCallback() {
override fun onCaptureCompleted(
session: CameraCaptureSession,
request: CaptureRequest,
result: TotalCaptureResult
) {
super.onCaptureCompleted(session, request, result)
        if (mFaceDetectMode != CaptureRequest.STATISTICS_FACE_DETECT_MODE_OFF)
            handleFaces(result)
    }
}
mFaceDetectMatrix transforms the raw face coordinates so that they are mapped accurately onto the TextureView.
/** Handle face information */
private fun handleFaces(result: TotalCaptureResult) {
val faces = result.get(CaptureResult.STATISTICS_FACES)!!
mFacesRect.clear()
for (face in faces) {
val bounds = face.bounds
val left = bounds.left
val top = bounds.top
val right = bounds.right
val bottom = bounds.bottom
val rawFaceRect =
RectF(left.toFloat(), top.toFloat(), right.toFloat(), bottom.toFloat())
mFaceDetectMatrix.mapRect(rawFaceRect)
var resultFaceRect = if (mCameraFacing == CaptureRequest.LENS_FACING_FRONT) {
rawFaceRect
} else {
RectF(
rawFaceRect.left,
rawFaceRect.top - mPreviewSize.width,
rawFaceRect.right,
rawFaceRect.bottom - mPreviewSize.width
)
}
mFacesRect.add(resultFaceRect)
}
mActivity.runOnUiThread {
mFaceDetectListener?.onFaceDetect(faces, mFacesRect)
}
}
Finally, the Rect list with the face coordinates is delivered on the UI thread through the callback:
mActivity.runOnUiThread {
mFaceDetectListener?.onFaceDetect(faces, mFacesRect)
}
FaceDetectMatrix
mFaceDetectMatrix is created once the PreviewSize is known:
/** Initialize the face-detection information */
private fun initFaceDetect() {
    val faceDetectModes =
        mCameraCharacteristics.get(CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES) // Supported face-detection modes

    mFaceDetectMode = when {
        faceDetectModes!!.contains(CaptureRequest.STATISTICS_FACE_DETECT_MODE_FULL) ->
            CaptureRequest.STATISTICS_FACE_DETECT_MODE_FULL
        faceDetectModes.contains(CaptureRequest.STATISTICS_FACE_DETECT_MODE_SIMPLE) ->
            CaptureRequest.STATISTICS_FACE_DETECT_MODE_SIMPLE
        else -> CaptureRequest.STATISTICS_FACE_DETECT_MODE_OFF
    }

    if (mFaceDetectMode == CaptureRequest.STATISTICS_FACE_DETECT_MODE_OFF) {
        mActivity.toast("Camera hardware does not support face detection.")
        return
    }
val activeArraySizeRect =
mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)!! // Get the image area
val scaledWidth = mPreviewSize.width / activeArraySizeRect.width().toFloat()
val scaledHeight = mPreviewSize.height / activeArraySizeRect.height().toFloat()
val mirror = mCameraFacing == CameraCharacteristics.LENS_FACING_FRONT
mFaceDetectMatrix.setRotate(mCameraSensorOrientation.toFloat())
mFaceDetectMatrix.postScale(if (mirror) -scaledHeight else scaledHeight, scaledWidth)// Switch width and height!
mFaceDetectMatrix.postTranslate(
mPreviewSize.height.toFloat(),
mPreviewSize.width.toFloat()
)
}
Control class (GameController)
With the three view layers assembled, a master control class is needed to drive the game logic. GameController mainly does the following:
- Controls the start/stop of the game
- Computes the current score
- Detects submarine collisions
- Exposes game-state monitoring to the outside (Activity, Fragment, etc.)

Initialization
The game starts by initializing the camera: a CameraHelper is created and a setFaceDetectListener callback is registered, which forwards detection results to the ForegroundView.
class GameController(
private val activity: AppCompatActivity,
private val textureView: AutoFitTextureView,
private val bg: BackgroundView,
private val fg: ForegroundView
) {
private var cameraHelper: CameraHelper? = null

/** Camera initialization */
private fun initCamera() {
    cameraHelper ?: run {
        cameraHelper = CameraHelper(activity, textureView).apply {
            setFaceDetectListener(object : CameraHelper.FaceDetectListener {
override fun onFaceDetect(faces: Array<Face>, facesRect: ArrayList<RectF>) {
if (facesRect.isNotEmpty()) {
fg.onFaceDetect(faces, facesRect)
}
}
})
}
}
}
The game state
GameState provides status monitoring for external observers. Three states are currently supported:
- Start: the game has started
- Over: the game is over
- Score: the current score
sealed class GameState(open val score: Long) {
object Start : GameState(0)
data class Over(override val score: Long) : GameState(score)
data class Score(override val score: Long) : GameState(score)
}
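Because GameState is a sealed class, consumers can handle it with an exhaustive when, no else branch needed. A runnable sketch (the sealed class is repeated for self-containment; the display strings are illustrative, not the app's actual UI text):

```kotlin
sealed class GameState(open val score: Long) {
    object Start : GameState(0)
    data class Over(override val score: Long) : GameState(score)
    data class Score(override val score: Long) : GameState(score)
}

// An exhaustive when over the sealed hierarchy; adding a fourth state
// would turn this into a compile error until it is handled.
fun render(state: GameState): String = when (state) {
    is GameState.Start -> "DANGER AHEAD"
    is GameState.Score -> "${state.score / 10f} m"   // 100ms ticks -> meters
    is GameState.Over -> "Game over at ${state.score / 10f} m"
}
```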
The state is updated in stop and start:
/** Game state */
private val _state = MutableLiveData<GameState>()
internal val gameState: LiveData<GameState>
    get() = _state

/** Game stops */
fun stop() {
    bg.stop()
    fg.stop()
    _state.value = GameState.Over(_score)
    _score = 0L
}

/** Game starts */
fun start() {
    initCamera()
    fg.start()
    bg.start()
    _state.value = GameState.Start
    handler.postDelayed({
        startScoring()
    }, FIRST_APPEAR_DELAY_MILLIS)
}
Scoring
When the game starts, startScoring computes the score and reports it via GameState. The current rule is simple: survival time is the score.
/** Start scoring */
private fun startScoring() {
    handler.postDelayed({
        fg.boat?.run {
            bg.barsList.flatMap { listOf(it.up, it.down) }
                .forEach { bar ->
                    if (isCollision(
                            bar.x, bar.y, bar.w, bar.h,
                            this.x, this.y, this.w, this.h
                        )
                    ) {
                        stop()
                        return@postDelayed
                    }
                }
        }
        _score++
        _state.value = GameState.Score(_score)
        startScoring()
    }, 100)
}
Collision detection
isCollision checks whether the submarine and an obstacle overlap, given their current positions; a collision means Game Over.
/** Collision detection */
private fun isCollision(
x1: Float,
y1: Float,
w1: Float,
h1: Float,
x2: Float,
y2: Float,
w2: Float,
h2: Float
): Boolean {
if (x1 > x2 + w2 || x1 + w1 < x2 || y1 > y2 + h2 || y1 + h1 < y2) {
return false
}
return true
}
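This is the standard axis-aligned bounding-box (AABB) test: two rectangles overlap unless one lies entirely to the left, right, above or below the other. Condensed into a single expression, with a couple of example calls:

```kotlin
// AABB overlap: negate the four "completely separated" cases.
fun isCollision(
    x1: Float, y1: Float, w1: Float, h1: Float,
    x2: Float, y2: Float, w2: Float, h2: Float
): Boolean =
    !(x1 > x2 + w2 || x1 + w1 < x2 || y1 > y2 + h2 || y1 + h1 < y2)

// Overlapping boxes collide; boxes merely touching at an edge also count.
val hit = isCollision(0F, 0F, 10F, 10F, 5F, 5F, 10F, 10F)   // true
val miss = isCollision(0F, 0F, 10F, 10F, 20F, 0F, 5F, 5F)   // false
```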
Activity
The Activity's job is simple:
- Permission request: dynamically request the Camera permission
- GameState listening: create a GameController and observe its gameState
private fun startGame() {
    PermissionUtils.checkPermission(this, Runnable {
        gameController.start()
        gameController.gameState.observe(this, Observer {
            when (it) {
                is GameState.Start ->
                    score.text = "DANGER\nAHEAD"
                is GameState.Score ->
                    score.text = "${it.score / 10f} m"
                is GameState.Over ->
                    AlertDialog.Builder(this)
                        .setMessage("Game over! You advanced ${it.score / 10f} meters!")
                        .setNegativeButton("Quit") { _: DialogInterface, _: Int ->
                            finish()
                        }.setCancelable(false)
                        .setPositiveButton("One more try") { _: DialogInterface, _: Int ->
                            gameController.start()
                        }.show()
            }
        })
    })
}
Finally
The project structure is clear and relies mostly on conventional techniques, so even Android newcomers should find it easy to follow. The game could be improved further, e.g. with background music and more obstacle types. If you like it, leave a star to encourage the author ^^
Github.com/vitaviva/ug…