Content navigation
- What is Tars?
- Tars framework source code deployment
- Tars service deployment management
- Tars configuration center
- Tars service discovery
- Tars Remote logs
- Tars status monitoring
What is Tars?
Tars is a microservice framework with embedded, multi-language service governance that collaborates well with DevOps practices. It provides a complete solution covering development, testing, operation, and maintenance, integrating an extensible protocol codec, a high-performance RPC communication framework, name routing and discovery, release monitoring, log statistics, configuration management, and more. With Tars you can quickly build highly available distributed applications as microservices and achieve complete, effective service governance. In short, Tars is a cross-platform, cross-language runtime environment and a development framework built on the design ideas of Service Mesh.
Tars framework source code deployment
Note: CentOS 7 is recommended; if CentOS 6 is used, glibc must be upgraded first
The deployment environment
- Docker environment installation
- Mysql installation
- Linux/Mac source code deployment
- Windows source code deployment
- TarsDocker deployment
- K8s Docker deployment
- TarsNode deployment
Environment dependencies
Software | Version requirement |
---|---|
Linux kernel | 2.6.18 or later (operating system dependency) |
GCC | 4.8.2 or later, with glibc-devel (C++ framework dependency) |
Bison | 2.5 or later (C++ framework dependency) |
Flex | 2.5 or later (C++ framework dependency) |
CMake | 3.2 or later (C++ framework dependency) |
MySQL | 4.1.17 or later (framework runtime dependency) |
NVM | 0.35.1 or later (web management system dependency; installed automatically by the install script) |
Node.js | 12.13.0 or later (web management system dependency; installed automatically by the install script) |
Tars service deployment management
- The full deployment life cycle is handled through page interactions
- Services are deployed with zero configuration or via configuration templates; operational settings and executable files are configured on the page
- After deployment, you can verify the result and view the service's running status and generated logs, which makes troubleshooting and reporting easier
- Versions can be upgraded or rolled back on the version management page
- Once a VPN or public network connection is available, remote deployment is possible
- The deployment process requires no special technical background and is easy to operate
- Deployment solutions can be cross-platform
Service publication architecture implementation
- The Tars patch component uploads the WAR file to the patch directory (`/usr/local/app/patch/tars.upload/`)
- The Tars registry tells Node to pull the corresponding package locally
- Start the corresponding service
- After the service is started, you can view the status on the web page
- After the service is started, you can view the service startup logs by streaming logs on the web page.
Grayscale release
Tars supports grayscale release: a new version can be rolled out to part of the nodes or traffic first and then expanded gradually.
Circuit-breaking strategy
When the client and server need to interact, the client pulls routes from the registry. After pulling the registration information, the client decides which node to request based on its own judgment of each service's health.
The web management platform provides pages for the following operations:
- Service release
- Operations management
- Service deployment
- Template management
- Release management
- Service management
- Framework check
Tars service release vs. traditional service release
Comparison item | Tars service release | Traditional service release |
---|---|---|
Service release | Visual page operation, essentially foolproof | SSH remote login, file upload, and scripts to start the service |
Service upgrade | Upload the WAR package and select it to publish | Replace the running service files manually, remembering to back up the historical files |
Service rollback | Select the corresponding version on the page and publish | With a backup, restore the service package files and start via script; without one, re-upload the old source package and then start via script |
Required skills | None | Some operations experience: server operation and shell scripting |
Cluster release | Select multiple nodes and publish | Copy the package to each machine and restart the service |
Tars configuration center
The configuration center manages service configuration files in a unified way: it is the central place for updating configuration files, pushing them to services, and letting services actively pull them in real time. Its main advantages are:
- Manage service configurations in a centralized manner and provide an operation page to facilitate configuration modification, prompt notification, and secure configuration changes.
- Keep a history of configuration changes so that configurations can be easily rolled back to previous versions.
- The configuration pull is simple. The service only needs to invoke the interface of the configuration service to obtain the configuration file.
- You can flexibly manage configuration files, which are divided into several levels: application configuration, Set configuration, service configuration, and node configuration.
Configuration Information Maintenance
The Tars framework maintains this configuration information through two data tables (stored in MySQL): `t_config_files` and `t_config_references`.
- The `t_config_files` table mainly stores the configuration file's name, type, owning service name, set group, node IP address, and index ID, along with the service's set group information.
- The `t_config_references` table mainly stores a configuration file's index ID and the index ID of the configuration file it references.
Service configuration
On the Tars web management system you can add configurations, add referenced files, and push configuration files to services; the files are pushed to the corresponding directory on each node.
In service code, the configuration files are pulled from the configuration service to the local machine.
Tars service discovery
The Tars protocol is defined with an Interface Description Language (IDL). It is a binary, extensible protocol with automatic code generation and multi-platform support, which lets objects running on different platforms and programs written in different languages communicate with each other through RPC. It is mainly used as the network transport protocol between backend services and for object serialization and deserialization. Service registration involves three roles: the service provider, the service consumer, and the registry.
- Tars uses the name service to register and discover services
- The Client accesses the name service to obtain the address list of the called service
- The Client then selects an appropriate load balancing mode to invoke the service
Data structures
The protocol supports two kinds of types: basic types and complex types.
- Basic types include: void, bool, byte, short, int, long, float, double, string, unsigned byte, unsigned short, and unsigned int.
- Complex types include enum, const, struct, vector, map, and nesting of struct, vector, and map.
Addressing mode
Automatic addressing: the client caches the endpoint list from the registry and refreshes it actively at a fixed interval (1 minute by default, controlled by `refreshEndpointInterval`), then applies one of several load-balancing algorithms to choose a node.
Direct addressing: you can manually specify an IP and port to address a node directly; this is used in special debugging scenarios.
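A minimal sketch of two of the load-balancing modes automatic addressing can apply to the cached endpoint list (class and method names are illustrative, not the Tars SDK API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: round-robin and hash selection over a cached endpoint list.
public class LoadBalance {
    private final List<String> endpoints;            // address list cached from the registry
    private final AtomicInteger counter = new AtomicInteger();

    public LoadBalance(List<String> endpoints) { this.endpoints = endpoints; }

    // Round-robin: cycle through the cached endpoints.
    public String roundRobin() {
        int i = Math.floorMod(counter.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }

    // Hash: the same key always maps to the same endpoint (useful for affinity).
    public String hash(String key) {
        return endpoints.get(Math.floorMod(key.hashCode(), endpoints.size()));
    }
}
```

The weight mode mentioned later in this article would additionally bias selection toward higher-weight nodes; it is omitted here for brevity.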
Invocation modes
The interface provided by a service is defined using the IDL language, and the communication code for both client and server is generated automatically. The server only needs to implement the service logic, and the client invokes the service through the generated code. Three invocation modes are supported:
- Synchronous invocation: the client issues the request and waits for the server to return the result before continuing.
- Asynchronous invocation: the client issues the request and continues with other business logic; when the server returns the result, a callback class processes it.
- One-way invocation: the client finishes after issuing the request, and the server returns no result.
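The three modes can be sketched in plain Java (this is illustrative code, not the stubs Tars actually generates; `hello` stands in for a remote call):

```java
import java.util.concurrent.CompletableFuture;

// Illustrative sketch of the three invocation modes.
public class InvokeModes {
    // Stand-in for a generated client stub's remote call.
    static String hello(String name) { return "hello " + name; }

    // Synchronous: block until the result is back.
    static String callSync(String name) { return hello(name); }

    // Asynchronous: return immediately; a callback consumes the result later.
    static CompletableFuture<String> callAsync(String name) {
        return CompletableFuture.supplyAsync(() -> hello(name));
    }

    // One-way: fire the request and never wait for (or receive) a result.
    static void callOneWay(String name) {
        CompletableFuture.runAsync(() -> hello(name)); // result discarded
    }
}
```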
Demo: a Tars file defining a struct
The service registry
Tars service registration benefits:
- Easy to use: transparent to developers
- High availability: The failure of a few registries does not result in the failure of the entire service
- Avoid calls across machine rooms: It is best to call services from the same machine room first to reduce network latency
- Cross-language: Allows developers to build microservices in multiple programming languages
- Load balancing: Load balancing is supported in polling, hash, and weight modes.
- Fault tolerant protection: name service exclusion and Client active masking.
Name service exclusion strategy:
Each service proactively reports heartbeats to the name service, so the name service knows the status of the nodes on which the service is deployed. When a node fails, the name service stops returning the faulty node's IP address to clients, which excludes the faulty node. This exclusion involves two stages: the service heartbeat and the client's address-list refresh, so it takes roughly one minute to complete.
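The exclusion idea can be sketched as follows (the timeout threshold and names are illustrative, not the real Tars implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: the name service keeps the last heartbeat time per node
// and only returns nodes whose heartbeat is recent enough.
public class HeartbeatRegistry {
    static final long TIMEOUT_MS = 60_000; // roughly the one-minute window described above

    // lastHeartbeat: node address -> last heartbeat timestamp (ms)
    static List<String> aliveNodes(Map<String, Long> lastHeartbeat, long nowMs) {
        return lastHeartbeat.entrySet().stream()
                .filter(e -> nowMs - e.getValue() <= TIMEOUT_MS) // drop stale nodes
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}
```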
Client active masking:
To mask faulty nodes more quickly, the client detects failures from the exceptions of its own calls. When the timeout ratio of calls to a particular server exceeds a threshold, the client shields that server and distributes traffic to healthy nodes. Shielded nodes are retried at regular intervals; if a node has recovered, traffic is distributed to it normally again.
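A sketch of this masking logic (the thresholds and names are illustrative, not Tars defaults):

```java
// Illustrative sketch of client-side masking: track calls and timeouts per
// node; when the timeout ratio passes a threshold, mask the node for a
// cool-down period and retry it afterwards.
public class NodeBreaker {
    private int calls, timeouts;
    private long maskedUntilMs = -1;
    static final double TIMEOUT_RATIO = 0.5; // mask when >50% of calls time out
    static final int MIN_CALLS = 5;          // require a minimum sample size
    static final long RETRY_AFTER_MS = 30_000;

    public void record(boolean timedOut, long nowMs) {
        calls++;
        if (timedOut) timeouts++;
        if (calls >= MIN_CALLS && (double) timeouts / calls > TIMEOUT_RATIO) {
            maskedUntilMs = nowMs + RETRY_AFTER_MS; // shield this node
            calls = 0; timeouts = 0;                // restart the sampling window
        }
    }

    // Masked nodes receive no traffic until the retry time arrives.
    public boolean isMasked(long nowMs) { return nowMs < maskedUntilMs; }
}
```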
You can also use the Web API to implement automatic upload and deployment
The service registration process is shown below:
The service discovery process is shown in the following figure:
Client implementation principles
Server implementation principles
Tars invocation chain
On the Tars management platform, select the service to enable the call chain and click “Edit”
The final effect is as follows:
Click the call chain entry to see its details
Tars Remote logs
Tarslog is the Tars remote logging service. Built on Logback, it sends log content to local or remote servers and supports in-service log call-chain tracing and log dyeing (coloring).
Tarslog provides very flexible configuration items that give users powerful logging capabilities.
The built-in remote logging function of Tars takes the form of a Logback plug-in: simply add the plug-in to the Logback configuration file. Configuration is easy, and both local and remote log printing can be customized.
The Tars log module uses MDC to trace intra-service log links and to customize log dyeing. MDC internally holds a ThreadLocal object storing a Map, so users can add key-value pairs to it as needed.
The Tars log module's configuration supports high availability.
Log print
The process is as follows:
- Download the source from the website (github.com/TarsCloud/T…
- Build the tars-plugins JAR in TarsJava with the `mvn package` command
- Put the JAR package into your project.
- Configure the Logback file and add the appender, as shown in the configuration figure
- Get the logger object: `Logger logger = LoggerFactory.getLogger("root")`.
- Call `logger.debug("message")` to print logs; the logs are sent to the log server configured by `logserverObjname` in step 4.
Tracing intra-service log links with MDC
- MDC holds an internal ThreadLocal object. Within a request link handled by a single thread, call `MDC.put("traceId", value)` to set the trace ID for the call link.
- In the Logback configuration file, use the pattern to output `traceId`, as shown in the following figure.
- All logs on that single request link will then contain the `traceId` value put into the MDC in step 1.
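The ThreadLocal-plus-Map mechanism behind these steps can be sketched as follows (in real code you would use `org.slf4j.MDC` directly; this mini version only illustrates the idea):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the mechanism MDC uses internally: a ThreadLocal holding a
// Map, so context set on one thread is visible to every log call on that
// same thread.
public class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void clear() { CTX.remove(); } // avoid leaks on pooled threads
}
```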
MDC Log Dyeing
- Implement a dyeing (coloring) ConversionRule by extending Logback's `ForegroundCompositeConverterBase` class.
- In the logback.xml file, add a `conversionRule` tag pointing to the dyeing ConversionRule class from the first step, and use this coloring rule in the appender that outputs remote logs.
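A sketch of what such a logback.xml fragment might look like (the converter class name and the `dye` conversion word are hypothetical, and a ConsoleAppender stands in for the remote appender):

```xml
<configuration>
  <!-- Point a conversion word at the custom coloring converter (hypothetical class) -->
  <conversionRule conversionWord="dye"
                  converterClass="com.example.log.DyeingConverter"/>
  <appender name="remote" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %dye(...) applies the coloring rule to the wrapped output -->
      <pattern>%d{HH:mm:ss} %dye(%-5level %msg%n)</pattern>
    </encoder>
  </appender>
</configuration>
```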
Kafka logback integration
Method 1: Handwritten Appender
- Import the Kafka JAR package into the project.
- Hand-write an Appender that sends logs to Kafka.
- Configure logback.xml.
Method 2: use an open-source JAR
For details, see github.com/danielwegen…
- Import the open source JAR package into the project.
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
- Configure logback.xml: set up the kafkaAppender and the Kafka information (host, topic, etc.).
- Attach the kafkaAppender to the log output:
<root level="INFO"><appender-ref ref="kafkaAppender"/></root>
Conclusion
- Currently, the remote log service does not support cross-service log link tracing; Zipkin is required for that (Tars is already integrated with Zipkin).
- You can configure multiple IP addresses for the remote log service for high availability.
- Functionality can be extended horizontally based on Logback.
Tars status monitoring
To better reflect and monitor the running quality of both individual service processes and the business as a whole, the framework supports the following data reporting functions:
- Provides the function of reporting call statistics between service modules. You can view service traffic, delay, timeout, and exceptions.
- The user-defined property reporting function is provided. You can monitor particular service dimensions or indicators, such as memory usage, queue size, and cache hit ratio.
- Provides the function of reporting service status changes and exceptions. You can view when the service is published, restarted, down, and encountered fatal errors.
Tars statistics
Statistics include call counts, cost time, exceptions, and timeouts. Statistics and aggregation are provided by each language's framework SDK, and calls are reported whether they succeed or fail.
Reporting statistics is the reporting of time consuming information and other information to tarsstat within the Tars framework. There is no need for user development, and it can be reported automatically within the framework after the relevant information is set correctly during program initialization.
After the client invokes the reporting interface, the information is buffered in memory and reported to the tarsstat service at fixed points in time (every minute by default). The span between two reporting points is called a statistical interval, within which entries with the same key are aggregated and compared.
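The per-interval aggregation can be sketched as follows (the field names and flush contract are illustrative, not tarsstat's actual protocol):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: calls with the same key are merged in memory
// (count / total cost / max cost) and the whole map is flushed once per interval.
public class StatInterval {
    public static class Entry {
        public long count, totalCostMs, maxCostMs;
    }

    private final Map<String, Entry> buffer = new HashMap<>();

    // Called on every RPC completion; aggregates instead of reporting each call.
    public void report(String key, long costMs) {
        Entry e = buffer.computeIfAbsent(key, k -> new Entry());
        e.count++;
        e.totalCostMs += costMs;
        e.maxCostMs = Math.max(e.maxCostMs, costMs);
    }

    // Called once per interval (default: every minute) to send and reset.
    public Map<String, Entry> flush() {
        Map<String, Entry> out = new HashMap<>(buffer);
        buffer.clear();
        return out; // in the real framework this batch goes to tarsstat
    }
}
```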
Tars feature monitoring
Feature monitoring reports custom features defined by the service. A feature consists of a feature name, a feature value, and a statistical method, similar to metric monitoring. Each language SDK has default feature metrics, such as Java's jvm.memory and jvm.gc.
You can also customize the extension:
obj = property.create('name', [new property.POLICY.Count, new property.POLICY.Max]);
obj.report(value)
Tars service monitoring
Every machine in the cluster runs a Node service for application management. One important function of the Node service is service monitoring: it starts a monitoring thread that periodically polls (the interval is configurable) the running state of all services it manages and reports that state to the registry.
- The application service SDK periodically reports heartbeats.
- The Node service periodically checks for SDK heartbeat timeouts.
- The Node service periodically checks whether the service process exists.
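A sketch of a single monitoring check combining these signals (interval handling omitted; names and the timeout value are illustrative):

```java
// Illustrative sketch of the Node monitoring check described above: each check
// combines the SDK heartbeat and a process-liveness test, and the resulting
// state is what gets reported to the registry.
public class NodeMonitor {
    public enum State { ACTIVE, INACTIVE }

    static State check(long lastHeartbeatMs, long nowMs,
                       long heartbeatTimeoutMs, boolean processAlive) {
        boolean heartbeatOk = nowMs - lastHeartbeatMs <= heartbeatTimeoutMs;
        return (heartbeatOk && processAlive) ? State.ACTIVE : State.INACTIVE;
    }
}
```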
Closing words
In recent years, with the rapid development of AIOps, demand for IT tools, platform capabilities, solutions, AI scenarios, and usable data sets has exploded across industries. Against this background, the AIOps community was launched by Cloud Intelligence in August 2021, aiming to raise an open-source banner and build an active community of users and developers, so that customers, users, researchers, and developers across industries can contribute, solve industry problems together, and advance the technology in this field.
In the nearly half a year since its founding, the community has open-sourced products such as FlyFish, OMP, Mohr, and the Hours algorithm. FlyFish won the Excellent Open Source Project Award of the China Open Source Cloud Alliance 2021, and the OMP operation and maintenance management platform was selected as a "Most Popular Project" among OSC China open-source projects in 2021. The GAIA data set is the industry's first open-source data set for intelligent operations, filling the gap in AIOps open-source data sets. You are welcome to follow the links below to learn more about Cloud Intelligence's open-source projects.
GitHub address: github.com/CloudWise
Gitee address:gitee.com/CloudWise