As cloud computing evolves, so do the service models for business systems. This article reviews the team's exploration of the BFF development model over the past few years and shares our hands-on experience, along with the underlying principles, in building our own function service with OpenFaaS for a new business scenario.
Background
Let's start with a quick review of the cloud service models with a diagram. Without going over the concepts of each model, it is clear that from IaaS to FaaS developers have less and less to take care of themselves, which essentially reduces our O&M cost and improves R&D efficiency.
Although many of the team's business services are still deployed on IaaS, we have been gradually following the trend and practicing newer development models. Here is a quick look at the technical solutions the team adopted at different stages.
Front-end / back-end separation
A few years ago, during a period of rapid business growth, the department decided, in order to improve collaboration efficiency between the front end and the back end, that the front-end team would take over the Java Web applications and be responsible for interface aggregation, template rendering, and similar work, while several back-end teams focused on business logic and data processing and exposed RPC services. Although the front-end application code was refactored, the original deployment model was kept: the servers were self-managed CentOS virtual machines, eight 8-core / 16 GB VMs in total, deployed across different data centers.
This approach has many advantages over the traditional model, such as a clear division of responsibilities between teams and front-end autonomy over interfaces and presentation. The downside is that for front-end engineers the operations cost rises linearly: releases, rollbacks, scaling, and migrations all require manual intervention, which takes time and effort. The keywords of this stage: virtual machines, manual operations.
Microservices
As the business grew rapidly, the back end evolved into a microservice architecture, while the front-end Java BFF application gradually swelled into a huge monolith with several hundred interfaces and pages, and a more flexible and efficient development model was urgently needed.
After a period of exploration, drawing on technical concepts and best practices from the industry, we gradually migrated the original Java BFF to a Node.js + Docker development model. Later the Web application was split into multiple micro-frontend applications, which greatly reduced the complexity and coupling of the system.
At present this is the main model running in production, and it basically meets the team's needs in terms of development efficiency and O&M cost. The keywords of this stage: microservices, containers.
Cloud native
With the development of cloud technology, concepts such as Kubernetes (K8s), Service Mesh, Serverless, and FaaS came into view, and features such as elastic scaling and low O&M cost brought a new impact on the existing development model.
Since the company's infrastructure was still being built out, the front-end team began to explore the feasibility of the Serverless model for our business. After investigating and comparing OpenFaaS, Knative, OpenWhisk, and other open-source frameworks, we decided to build our own function service with OpenFaaS on the Kubernetes cluster provided by the company, and then applied it in scenarios such as tool services, mini program APIs, and peripheral business APIs.
Maintaining OpenFaaS ourselves does carry a labor cost in the short term, but the flexible development model greatly improves the team's iteration efficiency, and the functions can later be migrated step by step onto the FaaS platform the company is building.
Function service practice
Here we share some of our experience in setting up and using the OpenFaaS service.
Setting up the OpenFaaS service
The core prerequisites are a Kubernetes cluster and a private Docker image registry. Thanks to Kubernetes's Helm package manager, the installation itself is almost as simple as an npm install, although there are a few configuration pitfalls that need attention. The figure below shows the core services and container instances running after OpenFaaS has been installed successfully.
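For reference, the installation boils down to a few commands. The following is only a sketch based on the official faas-netes Helm chart; the chart values and versions in a real cluster will differ.

# Sketch: install OpenFaaS via the official Helm chart (values are illustrative)
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set functionNamespace=openfaas-fn \
  --set generateBasicAuth=true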
Business practices
In day-to-day business development you only need to install the official OpenFaaS CLI (faas-cli) and log in to the intranet Docker registry, and then you can start developing.
Official templates are available for Go, Python, Node.js, and other languages, and any custom Dockerfile is supported as well. Below we share some practical applications from our projects.
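For illustration, a typical CLI workflow might look like the following; the gateway and registry addresses are the internal ones used in this article, and the credentials are placeholders.

# Sketch of the day-to-day CLI workflow (addresses and credentials are illustrative)
docker login docker.demo.domain
echo $OPENFAAS_PASSWORD | faas-cli login --gateway https://openfaas.demo.domain --password-stdin
# Scaffold a new function from an official template, then build, push and deploy it
faas-cli new m-search --lang node12 --prefix docker.demo.domain
faas-cli up -f m-search.yml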
Tool services
General-purpose tool services can be implemented quickly by pulling in open-source third-party libraries directly, for functions such as pinyin conversion and simplified/traditional Chinese conversion. The example below implements a minimal pinyin conversion function in three lines of Python:
# handler.py
import pinyin
def handle(req):
    return pinyin.get(str(req), format="strip", delimiter="")
Once published through the CLI, the service is effectively deployed, skipping the usual steps of requesting resources, installing dependencies, and configuring services. A quick test:
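For example, the deployed function can be invoked directly through the gateway route; the function name and domain below follow this article's setup.

# Invoke the function through the gateway (name and domain are assumptions)
curl https://openfaas.demo.domain/function/pinyin -d "你好世界"
# => nihaoshijie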
Tool services are self-contained and their logic is simple. Next we take the mini program APIs as an example to introduce how business modules are developed.
Mini program APIs
To meet a business need, we quickly built a mini program version of the system on top of the company's IM. Since its functionality is limited, only a dozen or so interfaces are required, and the core business logic is the same as on the Web side. To reduce cost and improve performance we dropped the Nest.js BFF model and organized the API service as multiple functions instead. The directory structure and stack file of the service are as follows:
mobile
├── functions
│   ├── notice              # Notification API
│   │   ├── handler.js
│   │   └── package.json
│   └── search              # Search API
│       ├── handler.js
│       └── package.json
└── mobile.yml
# file: mobile.yml
version: 1.0
provider:
  name: openfaas
  gateway: https://openfaas.demo.domain
functions:
  m-search:
    lang: node12
    handler: ./functions/search
    image: docker.demo.domain/mobile-search:latest
    secrets:
      - docker-auth
  m-notice:
    lang: node14
    handler: ./functions/notice
    image: docker.demo.domain/mobile-notice:latest
    secrets:
      - docker-auth
By default the CLI publishes all functions in the stack file as a batch; since each function is ultimately a separate image, functions can also be published individually by passing a filter.
# Deploy the Search function separately
faas up --filter "*search" -f mobile.yml
Function development
Writing function code is straightforward and similar to most function platforms, and third-party dependencies can be installed as needed. In addition, to support development and testing, information such as third-party service URLs and tokens is generally provided through environment variables.
const axios = require('axios');

// The API address is provided through an environment variable
const API_URL = process.env.API_URL;

module.exports = async (event, context) => {
  const { q } = event.query;
  const { data } = await axios.get(API_URL, { params: { q } });
  /* process stuff */
  return context.status(200).succeed(data);
};
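One way to supply these values is through the environment section of the stack file. A sketch, with an illustrative variable name and URL:

# file: mobile.yml (excerpt; variable name and value are illustrative)
functions:
  m-search:
    lang: node12
    handler: ./functions/search
    image: docker.demo.domain/mobile-search:latest
    environment:
      API_URL: https://api.demo.domain/search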
User authentication
Since the mini program APIs need to be exposed to the public network and have their own permission control, we encapsulated the authentication logic in a separate npm package and export it as a higher-order function.
const mobileAuth = require('@internal/mobile-auth');

const handler = async (event, context) => {
  const { user, token } = event.auth; // injected by the auth wrapper
  const data = {/* process stuff */};
  return context.status(200).succeed(data);
};

module.exports = mobileAuth(handler);
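The package itself is internal; conceptually it is just a higher-order function along the lines of the sketch below, where the token header and the auth-service call are assumptions made for illustration.

// Sketch of the idea behind @internal/mobile-auth (header name and auth endpoint are assumptions)
const axios = require('axios');

module.exports = (handler) => async (event, context) => {
  const token = event.headers['x-access-token'];
  let user = null;
  if (token) {
    try {
      // hypothetical call to the company's auth service
      const res = await axios.get(process.env.AUTH_URL, { headers: { 'x-access-token': token } });
      user = res.data.user;
    } catch (err) {
      user = null;
    }
  }
  if (!user) {
    return context.status(401).succeed({ error: 'unauthorized' });
  }
  event.auth = { user, token }; // made available to the business handler
  return handler(event, context);
};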
Calling other functions
In real business scenarios functions may also need to call each other. In OpenFaaS this can be done directly through the internal gateway, gateway.openfaas.
const axios = require('axios');

// Call other functions through the internal gateway
const faas = axios.create({
  baseURL: 'http://gateway.openfaas:8080/function/'
});

module.exports = async (event, context) => {
  const { q } = event.query;
  const { data: pinyin } = await faas.post('pinyin', q);
  /* process stuff */
  return context.status(200).succeed({ pinyin });
};
Analysis of the underlying principles
The practice itself was also a good learning opportunity. Here is a brief overview of the overall architecture and some of the underlying principles of OpenFaaS.
Overall technology stack
The OpenFaaS framework is built on Kubernetes and Docker, exposes services through the Gateway component, and integrates Prometheus and NATS to implement auto-scaling. At the platform level, the official OpenFaaS Cloud integrates the R&D workflow and can connect to code hosting platforms such as GitHub/GitLab.
Abstract service flow
The figure above shows the abstract service flow of OpenFaaS; each node is briefly introduced below:
- Gateway: the HTTP gateway that receives user requests and internal commands.
- NATS Streaming: used for asynchronous function invocation (see the example after this list).
- Prometheus / AlertManager: collect service metrics and trigger scaling.
- faas-netes: the Kubernetes provider; other providers such as Docker Swarm can be plugged in instead.
- Docker Registry: the registry from which function images are pulled.
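As a side note on the NATS path: an asynchronous invocation goes to the /async-function route, the gateway queues it and returns immediately, and the result can be posted to a callback URL. A sketch in Node.js with illustrative function and callback names:

// Sketch: asynchronous invocation via the /async-function route (names are illustrative)
const axios = require('axios');

async function invokeAsync() {
  const res = await axios.post(
    'http://gateway.openfaas:8080/async-function/pinyin',
    '你好',
    { headers: { 'X-Callback-Url': 'http://gateway.openfaas:8080/function/m-notice' } }
  );
  console.log(res.status); // 202: the request has been queued
}

invokeAsync();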
Automatic scaling
Automatic scaling is a core feature of FaaS. The OpenFaaS scale-up process can be summarized as follows:
- AlertManager triggers scaling based on monitoring metrics;
- the Gateway sends a container-creation request to faas-netes;
- Kubernetes finds suitable nodes;
- the image is pulled;
- the container instances are started.
Some ideas for optimizing this path:
- reduce the image size;
- pre-pull the image onto the nodes;
- for predictable traffic, scale up in advance according to rules, for example with scaling labels in the stack file:
# file: mobile.yml
functions:
  m-search:
    labels:
      com.openfaas.scale.min: 1
      com.openfaas.scale.max: 20
      com.openfaas.scale.factor: 20
      com.openfaas.scale.zero: false
Watchdog
In the official default function templates, each container instance has a Watchdog process that proxies requests from the Gateway and forwards them to the user's function process. The function handles a request by reading standard input and writing the response to standard output.
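With the classic watchdog, the function is simply the fprocess started for each request; a minimal Node.js handler in this style might look like the sketch below (for illustration only, this is not one of the official templates).

// Sketch of a classic-watchdog handler: the watchdog writes the request body to
// stdin and returns whatever the process writes to stdout as the HTTP response
let input = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', (chunk) => (input += chunk));
process.stdin.on('end', () => {
  process.stdout.write(`Hello, ${input}`);
});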
Function runtime
Since functions are usually developed as HTTP APIs, you want access to more of the request context, and it can also help to bring in a framework. Below is an example of wrapping the Node.js runtime with the Express framework.
First comes the base image: it pulls in of-watchdog, switches the mode to HTTP, and specifies the function entry file and the upstream service address.
FROM ghcr.io/openfaas/of-watchdog:0.8.4 as watchdog
FROM node:12-alpine as ship

COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

# Omitted part of the configuration...

WORKDIR /home/app/
COPY function/ .
RUN npm i

ENV fprocess="node index.js"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:3000"

CMD ["fwatchdog"]
In the function entry file we create an Express application; when a request arrives, it is wrapped into function event and context objects and passed to the user code.
const express = require('express');
const handler = require('./function/handler'); // User function code
const app = express();

// Function event: wraps the incoming request
class FunctionEvent {
  constructor(req) {
    this.body = req.body;
    this.headers = req.headers;
    // ...
  }
}

// Function context: wraps the response callback
class FunctionContext {
  constructor(cb) {
    this.statusValue = 200;
    this.cb = cb;
  }
  status(value) { /* record the status code and return this */ }
  succeed(value) { /* call this.cb(null, value) */ }
}

// Build the event/context objects and execute the user code
const middleware = async (req, res) => {
  const cb = (err, functionResult) => {
    // res.status(...).send(functionResult)
  };
  const fnEvent = new FunctionEvent(req);
  const fnContext = new FunctionContext(cb);
  // handler(fnEvent, fnContext)
};

app.get('/*', middleware);

const port = process.env.http_port || 3000;
app.listen(port, () => {
  console.log(`listening on port: ${port}`);
});
Conclusion
To briefly recap: we first introduced the evolution of the BFF development model and the background of the team's exploration of OpenFaaS, then shared some of our practices in the business, and finally gave a brief overview of some of OpenFaaS's underlying principles. I hope this helps; feel free to leave a comment if you would like to discuss.
Finally, a few personal views. As FaaS infrastructure gradually matures, development and operations become more efficient, but the complexity of the business itself does not decrease. How to build abstract yet flexible BaaS services is therefore a direction well worth thinking about. Suppose, for example, we want to quickly build an online video service: the ideal state would be to wire up BaaS services such as user accounts, file storage, video transcoding, and databases directly through FaaS functions, without caring about server details, and to focus on encapsulating the business process.
Author: Yang Pengfei