Have you trained your Douyin recommendation model? The videos it recommends to me are all silly meme clips and crude dudes, with very few pretty girls, but training the feed by scrolling manually takes far too much time. I happened to be looking at ADB, realized it was a treasure, and figured I might as well let ADB do the training for me and save myself the manual work.

For more on ADB, see my previous article, ADB Practical Notes

First, wrap the ADB commands with Node.js. To cope with multiple connected devices, keep the current device name in a module-level variable and set it with a use method each time before issuing commands:

const { exec } = require('child_process')
const path = require('path')

let currentDeviceName = ''
let isVerbose = false

const call = (code) => {
  return new Promise((resolve, reject) => {
    // Target a specific device with -s once one has been selected via use()
    const command = `adb ${currentDeviceName ? `-s ${currentDeviceName}` : ''} ${code}`

    if (isVerbose) console.log(command, '\n')

    exec(command, (err, stdout) => {
      if (err) return reject(err)
      resolve(stdout)
    })
  })
}

const use = (device) => currentDeviceName = device.name
const verbose = (value) => isVerbose = value

Next, add a device-query method that parses the string output of adb devices and returns the devices as an array:

const rawDevices = () => call('devices')

const devices = async () => {
  return (await rawDevices())
    .split(/\n/)
    .map(line => line.split('\t'))
    .filter(line => line.length > 1)
    .map(device => ({ name: device[0].trim(), status: device[1].trim() }))
}
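
For reference, the raw output that devices() parses looks roughly like the comment below (the serial number is made up). The header line contains no tab, so the length > 1 filter drops it:

// `adb devices` prints something like this (the two columns are tab-separated):
//
//   List of devices attached
//   1c2d3e4f	device
//
// devices() turns that into:
//
//   [ { name: '1c2d3e4f', status: 'device' } ]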

The basic ADB device operations are now in place. So the question becomes: how do you know whether a Douyin video features a pretty girl?

My idea is to judge it with a face-recognition API: take a screenshot through ADB, call an AI endpoint to estimate the attractiveness and gender of the person in the screenshot, decide from that whether to follow and like the video, then swipe up to the next video, and repeat.

To tidy things up, the ADB commands that need wrapping are the tap command, the swipe command, and the screenshot command:

const touch = (x, y) => call(`shell input tap ${x} ${y}`)

const swipe = (x1, y1, x2, y2, ms = 200) => call(`shell input swipe ${x1} ${y1} ${x2} ${y2} ${ms}`)

const screenshot = (filename = 's.png', localSavePath = '/') => call(`shell screencap -p > ${path.resolve(localSavePath, filename)}`)
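
One caveat about the screenshot command: redirecting screencap -p straight into a local file like this generally works on macOS and Linux, but on Windows the shell tends to corrupt the binary output. If you end up with broken PNGs, a common workaround is to save the screenshot on the device first and then pull it, roughly like this (a sketch; the /sdcard path is just an example):

const screenshotViaPull = async (filename = 's.png', localSavePath = '/') => {
  // Save on the device, pull the file to the local machine, then clean up
  await call(`shell screencap -p /sdcard/${filename}`)
  await call(`pull /sdcard/${filename} ${path.resolve(localSavePath, filename)}`)
  await call(`shell rm /sdcard/${filename}`)
}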

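The main script later uses these helpers as adb.devices(), adb.use(), adb.touch() and so on, which assumes the wrapper (saved as, say, adb.js) exports them. A minimal sketch of the exports:

module.exports = {
  call,
  use,
  verbose,
  devices,
  touch,
  swipe,
  screenshot
}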

Then I looked at the AI interfaces on the market and chose the Face++ detection API, www.faceplusplus.com.cn/face-detect… :

After registering, you can see the trial API Key and API Secret in the application management console.

As with most open platforms, authentication is done by sending these two values to the backend with each request, and the trial tier is free.

The Detect API page in the face-recognition section of the console documents the request parameters and the corresponding response: console.faceplusplus.com.cn/documents/4…

The gender and beauty fields among the optional return_attributes parameters are what we need to judge gender and attractiveness.

The interface return value is in the following format:
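
Here is a sketch of its shape, reconstructed from the fields the code below relies on (the values are made up):

{
  "faces": [
    {
      "attributes": {
        "gender": { "value": "Female" },
        "age": { "value": 24 },
        "beauty": { "male_score": 77.3, "female_score": 75.1 }
      }
    }
  ]
}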

faces is an array, since there may well be more than one person in a picture. beauty is an object that reports the attractiveness as perceived by men and by women separately; after all, men and women have different tastes!

Based on the above, we can start writing code. The interface can be called with the built-in https module, and the image is sent as base64 for convenience. Roughly:

const https = require('https')
const querystring = require('querystring')
const { base64Sync } = require('base64-img')

module.exports = function (file, scoreLevel = 70) {
  const base64 = base64Sync(file)
  const data = querystring.stringify({
    api_key: 'YOUR_API_KEY',       // your own API Key
    api_secret: 'YOUR_API_SECRET', // your own API Secret
    image_base64: base64,
    return_attributes: 'gender,age,beauty'
  })

  const options = {
    host: 'api-cn.faceplusplus.com',
    path: '/facepp/v3/detect',
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'
    }
  }

  return new Promise((resolve) => {
    const req = https.request(options, (res) => {
      let raw = ''
      res.on('data', (d) => raw += d)
      res.on('end', () => {
        let body = {}
        try {
          body = JSON.parse(raw)
        } catch (err) {
          resolve({ shouldFollow: false })
          return
        }

        const faces = body.faces || []

        let shouldFollow = false
        let score = 0

        for (let i = 0; i < faces.length; i++) {
          const attrs = faces[i].attributes
          score = attrs.beauty.male_score
          // Follow when at least one detected face is female and meets the score requirement
          if (attrs.gender.value === 'Female' && attrs.beauty.male_score >= scoreLevel) {
            shouldFollow = true
            break
          }
        }

        resolve({ shouldFollow, score })
      })
    })

    req.on('error', () => resolve({ shouldFollow: false }))
    req.write(data)
    req.end()
  })
}
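
Assuming the module above is saved as face.js, a quick standalone test might look like this (the image path is just an example):

const path = require('path')
const face = require('./face') // the module above, assumed to be saved as face.js

face(path.resolve(__dirname, 'images/s.png'), 70)
  .then(({ shouldFollow, score }) => console.log(shouldFollow, score))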

The function accepts a file and a minimum attractiveness score; if the returned faces array contains a girl whose score meets the requirement, we follow and like.

For better training, linger a little longer on the videos you like and follow, so Douyin learns that we like watching pretty girls.

Then come the tap operations for the likes and follows. I estimated the coordinates roughly from the screen resolution and button positions of my own phone (a more accurate way is to take a screenshot of Douyin and measure the positions with a tool like FastStone):

Through trial and error, a y-coordinate of 1300 works for the like button and 1200 works for the follow button; my phone's resolution is 1080 x 2280.
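
If you want to keep the coordinates in one place, the two taps can be wrapped in small helpers (a sketch; adb is the wrapper from earlier, and the coordinates are the ones measured for my 1080 x 2280 phone, so adjust them for your device):

// Coordinates measured on a 1080 x 2280 screen; adjust for your own device
const like = () => adb.touch(1000, 1300)   // tap the like (heart) button
const follow = () => adb.touch(1000, 1200) // tap the follow button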

Swiping up is easy to handle: the x-coordinate stays the same and the y-coordinate gets smaller.

With the analysis done, first implement a wait function:

function awaitMoment(time = 2000) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(), time)
  })
}

Once Douyin is open on the device, swipe up to switch to a video:

async function main() {
  const device = (await adb.devices())[0]

  adb.use(device)
  adb.verbose(true)

  await adb.swipe(200, 1000, 200, 100, 200)

  await awaitMoment()
}
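
Note that this script assumes the helpers built above are pulled in at the top of the file; the module paths here are just how I would lay the files out:

const fs = require('fs')
const path = require('path')
const adb = require('./adb')   // the ADB wrapper from earlier
const face = require('./face') // the Face++ helper from earlier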

Then wait the two seconds, take a screenshot, call the Face++ interface, follow and like depending on the result, delete the screenshot, and finally call main() again. These steps go inside main(), after the awaitMoment() call above:


  const fileName = ((Math.random() + '').substr(2, 7)) + '.png'
  await adb.screenshot(fileName, path.resolve(__dirname, 'images'))
  const file = path.resolve(__dirname, 'images', fileName)

  const { shouldFollow, score } = await face(file, 70)

  console.log('shouldFollow', shouldFollow)
  console.log('score', score)

  if (shouldFollow) {
    // Tap like, then follow
    await adb.touch(1000, 1300)
    await adb.touch(1000, 1200)

    // Linger on videos we like so Douyin picks up the signal
    await awaitMoment(5000)
  }

  // Clean up the screenshot, then move on to the next video
  fs.unlinkSync(file)

  await main()
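
Finally, kick the loop off by calling main() once at the bottom of the script; the error handler is just a precaution:

main().catch(console.error)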

And you’re almost done.

Run it, then just wait for the program to automatically follow the pretty girls for us.

I let it train for about two or three hours, and the effect is very noticeable: Douyin went from recommending all kinds of silly meme videos to recommending all kinds of pretty girls. The Douyin recommendation algorithm really is impressive, hahaha; now I have something fun to watch when slacking off at work.

A GIF would be too big and too much trouble, so I simply recorded a video where you can see the program training in action: iQiyi link www.iqiyi.com/w_19saaayji… , YouTube link www.youtube.com/watch?v=-_G…

Finally, take a look at what my account's feed looks like after training:

(There is actually still one problem: the beauty filters are so heavy that a lot of the results are internet celebrities. I don't know whether there is an algorithm that can identify internet celebrities.)

All of the above code is open source on my GitHub: github.com/huruji/find…

Finally, a bit of self-promotion: I recently started a new public account for sharing tech content, and you're welcome to follow it 👇 (the follower count is pitifully small right now 🤕).