This article was originally published on my WeChat official account, "Interesting Things in the World". Please credit the source when reprinting; unauthorized reprints will be pursued for copyright infringement. WeChat ID: A1018998632, QQ group: 859640274

Serial articles

  • 1. Copy a Douyin app from scratch — Start
  • 5. Copy a Douyin App from scratch — App architecture update and network layer customization
  • 6. Copy a Douyin App from scratch — start with audio and video
  • 7. Copy a Douyin App from scratch — a minimalist video player based on FFmpeg
  • 8. Copy a Douyin App from scratch — build a cross-platform video editing SDK project

The GitHub repository for this project is MyTikTok.

The past two weeks have been very busy. The first feature was about to go live, so I worked overtime for a week; the following week I spent three days on a team-building trip, which delayed this article by two weeks. You may have thought I was giving up, but I was actually writing during the trip, so thanks for waiting. This article covers the topics below; skip chapters as needed to save time. Expected reading time is about 10 minutes.

  • 1. Discussion — summarize the meaningful discussions from the comments of the last two weeks and give my answers
  • 2. Logging and event tracking — discuss how logging and event tracking (buried points) are designed and implemented
  • 3. Initial ideas for the backend architecture — discuss how the backend of the future app should be structured and implemented
  • 4. Ubuntu environment initialization — set up the cloud environment to match the macOS environment I am familiar with (Windows readers can follow along too; later articles will cover more operations under Linux)

1. Discussion

Discussion 1: Will the project use Kotlin?

  • 1. At present, my plan is to use Java in the basic modules and Kotlin in some business modules.

Discussion 2: This series is clickbait that rides on Douyin's popularity

  • 1. First, let me be clear about why I use Douyin as the example. My company develops a short-video product, so the technology is similar, but I obviously cannot use the company's own product as the example. By taking Douyin as a stand-in, I hope to reproduce the development process and architecture of a large-company project, which benefits readers and is also good promotion for me.
  • 2. Of course, I cannot deny that putting Douyin in the title has brought me some traffic and attention, but my conscience is clear: I spend more than two weeks of spare time on each article, and the quality of the content is well above average.
  • 3. As the saying goes, popularity attracts criticism, and the same is true for articles. I don't want to waste time on pointless arguments, so from now on offensive or derogatory comments unrelated to the technology or the article will simply be deleted without a response.

2. Logging and event tracking (buried points)

Logging plays a very important supporting role in a project. It lets developers locate bugs easily, lets the backend monitor app performance and stability after the system goes live, and can also collect user behavior data to help analyze user needs. In this section I'll examine five different types of logs and explain how several of them are implemented.

Let me first list the five kinds of logs.

  • 1. Debug logs: used for developers' local debugging
  • 2. AOP debug logs: also for local debugging, but based on AOP; with a simple annotation, methods and classes can be intercepted and logged automatically. They are used for logs that need to be emitted uniformly.
  • 3. Network request logs: used by developers to debug network requests locally
  • 4. Local file logs: used to record bugs that appear after the app goes live. Logs are written to a file, and users can upload that file manually through an entry point in the app.
  • 5. Event tracking (buried point) logs: record user behavior, app performance, and so on. The data structure is agreed between the client and the backend and stored in the backend database for analysis. Points can be buried manually or automatically.

1. Debug logs

  • 1. Debug logging is straightforward. As shown in Figure 1, I wrap the Log class provided by Android itself and add a few features and extension points; a minimal sketch of the idea follows below.
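To make the idea concrete, here is a minimal sketch of such a wrapper. The class name, the global switch, and the default tag are my own placeholders, not the actual code from Figure 1.

```java
import android.util.Log;

// Minimal sketch of a debug-log wrapper around android.util.Log; names and the
// global switch are illustrative placeholders, not the project's actual code.
public final class DebugLogger {
    private static final String DEFAULT_TAG = "MyTikTok";
    // Flip this off for release builds so debug logs disappear entirely.
    public static boolean enabled = true;

    private DebugLogger() { }

    public static void d(String message) {
        d(DEFAULT_TAG, message);
    }

    public static void d(String tag, String message) {
        if (enabled) {
            Log.d(tag, message);
        }
    }

    public static void e(String tag, String message, Throwable error) {
        if (enabled) {
            Log.e(tag, message, error);
        }
    }
}
```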

2. AOP logging

  • 1. Many people think of AOP when faced with repetitive logging code. The technique injects the required template code before and after annotated methods. I covered it in my last article, so here I will only describe it briefly.
  • 2. First we define an annotation that can be applied to classes or methods. The annotation can carry information such as whether to print the method's input arguments (see the sketch after this list).
  • 3. Once the annotation is in use, we need a Gradle Transform. It lets us scan all classes at compile time and find the methods and classes carrying the annotation.
  • 4. Finally, we use Javassist to inject the code we need into the methods we found. Note that the injected logs can be local debug logs, local file logs, or event tracking logs; AOP logging is really just an automated wrapper over the other log types.
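As an illustration of step 2, here is what such an annotation might look like. The name AopLog and its fields are assumptions for the sketch, not the project's real annotation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation for AOP logging; the name and fields are illustrative.
@Target({ElementType.METHOD, ElementType.TYPE})  // usable on methods and classes
@Retention(RetentionPolicy.CLASS)                // kept in .class files so the Gradle Transform can read it
public @interface AopLog {
    boolean printArgs() default true;   // whether the injected code should print input arguments
    String tag() default "AopLog";      // log tag used by the injected code
}
```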

3. Network request logs

  • 1. When debugging network requests, besides capturing packets we also print the requests, and a uniform log format makes that much more convenient.
  • 2. Most teams use OkHttp as the networking library, so I will customize the logging directly on top of it. Since the project's HTTP module has not been developed yet, there is no implementation code here, only a general approach; the implementation will be explained later when the HTTP module is built.
  • 3. Before the solution, we need to know how OkHttp works. As shown in Figure 3, each interceptor is passed through twice during an OkHttp request: once on the way out from the local request to the network, and once on the way back with the response.
  • 4. So we can add a logging interceptor that prints the request headers and body on demand during those two passes (a sketch follows this list). Note that the output can go to debug logs, local file logs, or event tracking logs, which correspond to local debugging, online debugging, and network performance monitoring respectively.
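Since the HTTP module is not written yet, the following is only a rough sketch of what such a logging interceptor could look like with OkHttp; where the output goes (debug log, file log, or tracking log) is left as a placeholder.

```java
import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

// Rough sketch of a logging interceptor; the real implementation will decide
// per request whether to print bodies and which log type to write to.
public class HttpLogInterceptor implements Interceptor {
    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        long start = System.nanoTime();

        // First pass: the request is on its way out to the network.
        android.util.Log.d("HttpLog", "--> " + request.method() + " " + request.url()
                + "\n" + request.headers());

        Response response = chain.proceed(request);

        // Second pass: the response has come back.
        long tookMs = (System.nanoTime() - start) / 1_000_000;
        android.util.Log.d("HttpLog", "<-- " + response.code() + " " + request.url()
                + " (" + tookMs + "ms)");
        return response;
    }
}
```

The interceptor would then be registered with addInterceptor() (or addNetworkInterceptor(), depending on which stage should be visible) when building the OkHttpClient.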

4. Local file logs

  • 1. What should we do when we hit a bug online? Some crash logs can be retrieved through platforms such as Bugly, but there are weird bugs that only happen on certain models, or even only on certain users' phones. This is where local file logs come in handy.
  • 2. During development we can manually add local file logging to some key features. When a user reports a bug, we can ask them to upload the file log to the backend through an entry point in the app, and the developers can then analyze the log to find the problem.
  • 3. Next, I will explain the implementation of local file logging by walking through the code together with Figure 4.
  • 4. Take a look at Figure 4:
    • 1. LocalFileLogger provides the external API of this module. It has two main jobs:
      • 1. Initialize and bind LocalFileLoggerService (a Service that communicates across processes through a Binder)
      • 2. Forward external log-add requests to LocalFileLoggerService through the Binder
    • 2. LocalFileLoggerService initializes a HandlerThread and uses a Handler to post log-add requests to it, keeping the work off the caller's thread.
    • 3. FileLogger is the class responsible for writing logs to disk. It also initializes a HandlerThread and defines a custom LoggerHandler. This Handler accumulates the logs posted by LocalFileLoggerService, and once the count reaches a threshold it sends a write request to the HandlerThread for execution.

  • 5. Now let's follow the process through the code:
    • 1. In Figure 5, after a series of calls inside addLog, the request is finally handed to the Slot.log object, which is a Binder used to operate LocalFileLoggerService.
    • 2. Moving to Figure 6, the Service initializes a HandlerThread and defines a Handler to post requests to it. mBinder's implementation posts the FileLogger.addLog request to the HandlerThread through that Handler.
    • 3. In Figure 7, when FileLogger is initialized it creates a HandlerThread and defines a LoggerHandler to post log-write requests to it. FileLogger.addLog simply sends a request.
    • 4. Finally, in Figure 8, LoggerHandler.add does not write logs to disk immediately; there is a LOG_CACHE_COUNT threshold, and logs are flushed to the file system only after the threshold is exceeded (see the sketch after this list).
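The batching in Figure 8 boils down to something like the sketch below. The value of LOG_CACHE_COUNT, the class layout, and the file-writing step are simplified placeholders rather than the project's exact code.

```java
import android.os.Handler;
import android.os.HandlerThread;

import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the LoggerHandler batching described above.
public class LoggerHandler extends Handler {
    private static final int LOG_CACHE_COUNT = 50;   // assumed threshold
    private final List<String> cache = new ArrayList<>();

    // The HandlerThread must already have been started before this is called.
    public LoggerHandler(HandlerThread thread) {
        super(thread.getLooper());
    }

    public void add(final String log) {
        // Post onto the HandlerThread so file IO never touches the caller's thread.
        post(new Runnable() {
            @Override
            public void run() {
                cache.add(log);
                if (cache.size() >= LOG_CACHE_COUNT) {
                    writeToFile(new ArrayList<>(cache));
                    cache.clear();
                }
            }
        });
    }

    private void writeToFile(List<String> logs) {
        // Placeholder: append the batch to the local log file here.
    }
}
```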

5. Event tracking (buried point) logs

  • 1. Event tracking logs are quite similar to file logs, so I will only explain them briefly with Figure 9; see the project for the full code.
  • 2. There is again an UploadLogManager, which provides the external API and initializes LocalFileLoggerService. What is more involved than the file log is the addition of an UploadLogConfiguration used to assemble various settings.
  • 3. Within LocalFileLoggerService, there are two different ways to add tracking logs:
    • 1. Real-time: if the caller needs the current tracking log reported immediately, the request is posted directly to UploadLogHandler, handed to its HandlerThread for execution, and finally reported over the network by LogSender.
    • 2. Non-real-time: in this mode, LocalFileLoggerService periodically fetches a batch of logs from UploadLogStorage, merges them, and then reports them just as in 1.
  • 4. At present, UploadLogStorage and LogSender are only interfaces, because neither the HTTP module nor the database module has been written yet, but this does not affect the code structure (possible shapes for them are sketched after this list).
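For reference, the two interfaces could take roughly the following shape. The method names here are my assumptions and may differ from the project's final signatures once the HTTP and database modules exist.

```java
import java.util.List;

// Assumed shapes only; the real interfaces live in the project and may differ.
interface UploadLogStorage {
    void save(String trackingLog);          // persist a tracking log until it is reported
    List<String> fetch(int maxCount);       // fetch up to maxCount pending logs for merging
    void remove(List<String> reportedLogs); // delete logs that were reported successfully
}

interface LogSender {
    // Report a merged batch of tracking logs to the backend; the callback is
    // invoked with true on success so the storage can clean up.
    void send(List<String> mergedLogs, Callback callback);

    interface Callback {
        void onResult(boolean success);
    }
}
```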

3. Initial ideas for the backend architecture

Although the focus of this project is the development of the Douyin-style Android app, the backend will also be involved. Next, I will describe the project's backend goals and expected results.

1. RPC

Client-side developers may not be familiar with the term RPC (Remote Procedure Call), so here is a brief introduction.

Take Java as an example: we have two services, A and B, on two servers, and on A we need to call B's service to get the data Foo that B holds. In A we can simply write Foo f = b.xxService();. Here Foo is a data transfer structure agreed on by services A and B, and b is an object abstracting service B, whose internal implementation can use various network protocols, such as HTTP. Simply put, RPC lets you call remote functions as if they were local functions; the toy sketch below illustrates the idea.
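The sketch below only shows the shape of the idea: every name is made up, and the "proxy" is faked in-process rather than generated by a real RPC framework.

```java
// Toy illustration of the RPC idea; all names are made up and the proxy is
// faked in-process instead of being generated by a real RPC framework.
public class RpcIdeaDemo {

    // Interface agreed on by service A (the consumer) and service B (the provider).
    interface FooService {
        String getFoo(long id);
    }

    public static void main(String[] args) {
        // A real framework would hand us a proxy that serializes the call,
        // sends it to service B over the network, and deserializes the reply.
        FooService remoteB = id -> "Foo#" + id;   // stand-in for that proxy

        // From A's point of view this reads like a plain local method call.
        String foo = remoteB.getFoo(42L);
        System.out.println(foo);
    }
}
```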

There are many existing RPC frameworks, and most large companies have open-sourced their own. Here I will introduce and compare several of them and finally pick the one that suits this project.

  • 1. Dubbo: an open-source framework from Alibaba. Alibaba later stopped maintaining it for various reasons, and Dangdang maintained and extended a fork called Dubbox. Its strengths and weaknesses:
    • 1. Advantages:
      • 1. Dubbo is written in Java, which is friendly to Android developers.
      • 2. Dubbo's ecosystem is currently quite good. Last year, when I did Java development at Youzan, I used Dubbo for more than half a year; most of the pitfalls have already been found, and the surrounding libraries are fairly complete.
      • 3. Support for service governance is solid.
    • 2. Disadvantages:
      • 1. Poor cross-platform capability; native Dubbo basically has none.
      • 2. With Java as the main development language, iteration is slow. This project's time budget is skewed toward the Android client, so we need a language that supports fast iteration.
      • 3. Serialization and deserialization speed is not outstanding compared with other RPC frameworks.
      • 4. Performance is weaker compared with the other frameworks.
  • 2. Thrift: an open-source framework from Facebook, donated to the Apache Foundation in 2007, and now a top-level Apache project.
    • 1. Advantages:
      • 1. Strong cross-platform capability, supporting almost all mainstream languages.
      • 2. Good performance.
    • 2. Disadvantages:
      • 1. The cross-language interface definitions are difficult to write.
      • 2. Service governance is not supported.
  • 3. gRPC: an open-source framework from Google; we are currently using it for our backend at work.
    • 1. Advantages:
      • 1. Strong cross-platform capability, supporting most mainstream development languages.
      • 2. The cross-language protocol is Protobuf, which is consistent with our client's technology stack.
      • 3. Good performance.
      • 4. Technical support from my company; it is not official support, of course, but I can learn from our internal experience with gRPC and feed it back into this project.
    • 2. Disadvantages:
      • 1. Service governance is not supported.

After the comparison above, I think you already have the answer in mind. Yes, I decided to use gRPC as the RPC framework for this project's backend. The main development language will be Python, with Java as a secondary language; if there is time later I will implement a small service in Go. The reasons for choosing these languages:

  • 1. First, Python's backend ecosystem is mature, and it is convenient and fast to work with.
  • 2. Second, we will later use TensorFlow to train various deep learning models, which requires being proficient in Python.
  • 3. Some people may ask why implement the backend services in several different languages; isn't that unnecessary? Yes, from a normal development point of view it is redundant, but multi-language environments are perfectly normal in large companies, and part of my goal is to simulate that. Besides, I find a multi-language environment more interesting and want to give it a try.

2. Microservices and service governance

To be honest, I had a lot I wanted to say here, but my current ability is not enough to cover these two topics properly, and I am afraid of misleading people. So I will simply list the goals related to them that the project ultimately needs to achieve.

  • 1. In the future I expect to have 10 service machines; every two machines will form a group providing one type of service, for a total of five categories of services.
  • 2. So the first feature to implement is service discovery and registration, which mainly means interacting with a registry (see the sketch after this list):
    • 1. A service provider registers its services with the registry on startup
    • 2. A consumer subscribes to the services it needs from the registry on startup
    • 3. The registry returns the list of service providers to the consumer
    • 4. The consumer picks a provider from that list using a soft load-balancing algorithm and sends the request
  • 3. To understand and monitor the state of each service, the second feature to implement is service monitoring, that is, counting how many times each service is invoked over time.
  • 4. To separate the internal and external networks and unify authentication, the third feature is a service gateway: all external requests pass through the gateway, which forwards them to the internal machines and returns the results to the caller once the call completes.
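As a rough illustration of the registration and discovery flow in point 2, the interaction could be modeled as below. It is not tied to any concrete registry (ZooKeeper, etcd, Consul, and so on), and all names are made up for the sketch.

```java
import java.util.List;
import java.util.Random;

// Rough model of the registry interaction described above; names are illustrative only.
public class ServiceDiscoverySketch {

    interface ServiceRegistry {
        void register(String serviceName, String providerAddress); // provider side, on startup
        List<String> discover(String serviceName);                  // consumer side, subscribe/lookup
    }

    // The consumer picks one provider from the returned list with a simple
    // "soft" load-balancing strategy, here plain random choice.
    static String chooseProvider(ServiceRegistry registry, String serviceName) {
        List<String> providers = registry.discover(serviceName);
        return providers.get(new Random().nextInt(providers.size()));
    }
}
```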

4. Initializing the Ubuntu environment

I wonder how many of my readers use a Mac. As a dual macOS and Windows user, I know the development benefits of the Mac well. In this light section, I'll show how I made the command-line environment of my Ubuntu cloud server feel like the local macOS setup I am used to.

1. Oh My Zsh

  • 1. First, buy a machine from a cloud provider. I bought one on Alibaba Cloud, chose Ubuntu 16 as the system image, and installed nothing else at first. Then log in to the cloud host locally over SSH.
  • 2. Clone my repository locally and use the two script files in it: Ubuntu initialization.
  • 3. Use the scp command to upload the two files from step 2 to the server: ubuntu_init.sh and ubuntu_init_oh-my-zsh.sh. For example, scp a.jpg root@<server-ip>:/root/a.jpg uploads the local file a.jpg to /root/a.jpg on the cloud server.
  • 4. Run ubuntu_init.sh; it will ask for your password and eventually restart the server.
  • 5. After the restart in step 4, log in to the server again and run ubuntu_init_oh-my-zsh.sh, and we are done. The end result, shown in Figure 10, is much more user-friendly than Ubuntu's default shell and supports a variety of custom plugins.
  • 6. I forgot to mention that this command line comes from the open-source project Oh My Zsh; readers comfortable with English can check the original project and extend their own configuration.

2. Configuring Vim

  • 1. The next step is the Vim configuration. To be honest, I still haven't fully migrated my configuration to Ubuntu, so take this as a work in progress.
  • 2. Ubuntu initialization: in this repository, .vimrc is Vim's configuration file. Vim plugin management: this repository holds my Vim plugins.
  • 3. This part is mostly just to show my results; there is not much for beginners to learn here, and veterans will probably look down on my configuration anyway.

3. Docker configuration

In the past two weeks I also found time to learn Docker. In my understanding, Docker works like a lightweight virtual machine that makes packaging and reuse convenient, so we will also use it on the backend to simplify operations and maintenance. I have only just started learning it, so I am simply posting a few of the resources I used.

  • 1. Getting started with Docker
  • 2. Learning Docker with Python

5. Closing remarks

This article is the fourth in the series on writing a Douyin-style app from scratch, and it is relatively long. Thank you very much for your recognition and support. It has been more than two months since I decided to write this series, and I have grown very quickly during that time, so I will keep writing like this in the future.

No anxiety-peddling, no clickbait. I just share interesting things about the world. Topics include, but are not limited to, science fiction, science, technology, the Internet, programmers, and computer programming. Below is my WeChat official account, Interesting Things in the World, with plenty of good content waiting for you.