1) ZooKeeper, commander of the distributed environment
ZooKeeper is a distributed coordination service used to manage a large set of hosts. Coordinating and managing services in a distributed environment is a complex process. ZooKeeper solves this problem with its simple architecture and API, letting developers focus on the core application logic without worrying about the distributed nature of the application.
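As a quick illustration, here is a minimal sketch using the official ZooKeeper Java client: connect to an ensemble, create a znode, read it back, and register a watch. The connection string, session timeout, znode path and data are placeholders, not values from this article.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

import java.util.concurrent.CountDownLatch;

public class ZkQuickStart {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect to a ZooKeeper ensemble (address and timeout are placeholders).
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Create a persistent znode holding a small piece of shared configuration.
        if (zk.exists("/app-config", false) == null) {
            zk.create("/app-config", "v1".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Read it back and register a watch so this client is notified on change.
        byte[] data = zk.getData("/app-config",
                event -> System.out.println("znode changed: " + event.getPath()),
                new Stat());
        System.out.println("config = " + new String(data));

        zk.close();
    }
}
```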
Advantages of distributed applications
(1) Reliability – the failure of one or a few systems does not cause the failure of the whole system.
(2) Scalability – performance can be increased as needed by adding more machines and making minor changes to the application configuration, without downtime.
(3) Transparency – the complexity of the system is hidden, so it appears as a single entity/application.
The challenges of distributed applications
(1) Race conditions – two or more machines try to perform a task that, at any given moment, should be carried out by only one of them. For example, a shared resource should be modified by only a single machine at any given time (see the sketch after this list).
(2) Deadlock – two or more operations wait for each other to complete indefinitely.
(3) Inconsistency – partial failures leave the data in an inconsistent state.
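One common way ZooKeeper addresses the race-condition case is an ephemeral znode used as a mutual-exclusion flag. The sketch below is illustrative only: the znode path is a placeholder and it assumes the parent /locks znode already exists.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralLockSketch {
    private static final String LOCK_PATH = "/locks/shared-resource"; // hypothetical path

    private final ZooKeeper zk;

    public EphemeralLockSketch(ZooKeeper zk) {
        this.zk = zk;
    }

    /** Returns true if this process created the lock znode and may proceed. */
    public boolean tryAcquire() throws KeeperException, InterruptedException {
        try {
            // EPHEMERAL: the znode vanishes automatically if this session dies,
            // so a crashed holder cannot leave the other machines deadlocked.
            // Assumes the parent znode /locks already exists.
            zk.create(LOCK_PATH, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;
        } catch (KeeperException.NodeExistsException alreadyHeld) {
            return false; // another machine holds the lock right now
        }
    }

    public void release() throws KeeperException, InterruptedException {
        zk.delete(LOCK_PATH, -1); // -1 ignores the znode version
    }
}
```

Because the znode is ephemeral, it disappears when the holder's session ends, which also guards against the deadlock scenario above.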
2) Nginx high-concurrency traffic distribution in practice
How does Nginx achieve high concurrency?
In short, it is asynchronous and non-blocking, using epoll and a lot of low-level code optimization.
In a little more detail, the answer lies in Nginx's distinctive process model and event model.
Process model
Nginx uses one master process and multiple worker processes.
The master process does not serve requests itself: it manages the worker processes, which accept and handle incoming requests.
The master process also monitors the workers' status, respawning failed workers to ensure high reliability.
The number of worker processes is generally set to the number of CPU cores. Nginx's workers are different from Apache's: an Apache process can handle only one request at a time, so Apache needs many processes running, hundreds or even thousands, whereas a single Nginx worker can handle many requests concurrently, limited mainly by available memory.
The event model
Nginx is asynchronous and non-blocking.
Every time a request comes in, a worker process picks it up, but it does not carry the request all the way through in one go. How far does it get? Up to the point where blocking would occur, for example forwarding the request to an upstream (back-end) server and waiting for the response. Instead of waiting, the worker registers an event after sending the request, in effect saying: "notify me when the upstream returns, and I will continue." Then it moves on; if another request arrives in the meantime, the worker can handle it right away. Once the upstream server responds, the event fires, the worker picks the request back up, and processing continues.
Given the nature of a web server's work, most of each request's lifetime is spent on the network, and the actual processing time on the server machine is small. This is the secret to handling high concurrency with just a few processes.
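Nginx is written in C, but the same event-driven idea can be sketched in Java NIO, whose Selector is backed by epoll on Linux: one thread multiplexes many connections instead of blocking on each one. The port and buffer size below are arbitrary, and this is an analogy, not Nginx's actual code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class EventLoopSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();               // backed by epoll on Linux
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));          // port is arbitrary
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                             // sleep until some socket is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {                  // new connection: register it, don't block
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {             // data arrived: read it, then go back to waiting
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {
                        client.close();                    // peer closed the connection
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```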
3) RabbitMQ messaging middleware
(1) Broker: instance of message-oriented middleware, which may be a single node or a logical entity running on a multi-node cluster
(2) Message: a message consists of a header and a body. The header includes standard attributes such as routing-key and priority, plus custom headers that define how RabbitMQ handles the message. The body is a byte stream containing the message content.
(3) Connection: a TCP connection between a client and the Broker.
(4) Channel: a Channel is a logical (virtual) connection over a TCP connection. Multiple Channels multiplex the same TCP connection, avoiding the high cost of establishing a new TCP connection for each one. RabbitMQ's official guidance is that each thread should use its own Channel and that multiple threads should not share one.
(5) Producer (Publisher): the client thread that sends messages.
(6) Consumer: the client thread that processes messages.
(7) Exchange: routes incoming messages to the corresponding queue(s).
(8) Queue: receives and stores the messages delivered by the exchange until consumers successfully consume them. Logically it is a FIFO structure.
(9) Binding: registers a queue in the exchange's routing table.
(10) Virtual host (Vhost): multiple Vhosts can be created under each Broker, and each Vhost has its own independent exchanges, queues, bindings and permissions. Vhosts under the same Broker share the user system, so the same user can be granted access to several Vhosts, while each Connection (and its Channels) is bound to a single Vhost. These pieces fit together in the client sketch below.
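For concreteness, here is a minimal sketch with the official RabbitMQ Java client that exercises the terms above: one Connection, one Channel, a direct exchange, a queue, a binding, then a publish and a consume. The host, exchange/queue names and routing key are placeholders, not values from this article.

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class RabbitSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");          // Broker address (placeholder)
        factory.setVirtualHost("/");           // Vhost

        try (Connection connection = factory.newConnection();   // TCP connection to the Broker
             Channel channel = connection.createChannel()) {    // virtual connection over that TCP link

            channel.exchangeDeclare("orders.exchange", BuiltinExchangeType.DIRECT, true); // Exchange
            channel.queueDeclare("orders.queue", true, false, false, null);               // Queue
            channel.queueBind("orders.queue", "orders.exchange", "order.created");        // Binding

            // Producer: publish a message with a routing key.
            channel.basicPublish("orders.exchange", "order.created", null,
                    "order #42".getBytes(StandardCharsets.UTF_8));

            // Consumer: receive messages from the queue (auto-ack for brevity).
            channel.basicConsume("orders.queue", true,
                    (consumerTag, delivery) ->
                            System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    consumerTag -> { });

            Thread.sleep(500);                 // give the asynchronous consumer a moment before closing
        }
    }
}
```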
4) ActiveMQ messaging middleware
(1) Clients can be written in many languages and protocols. Languages: Java, C, C++, C#, Ruby, Perl, Python, PHP. Application protocols: OpenWire, Stomp, REST, WS Notification, XMPP, AMQP
(2) Full support for the JMS 1.1 and J2EE 1.4 specifications (persistence, XA messages, transactions)
(3) Spring support: ActiveMQ can easily be embedded into a system via Spring, and it also supports Spring 2.0 features
(4) Tested on common J2EE servers (such as Geronimo, JBoss 4, GlassFish, WebLogic); with JCA 1.5 resource adaptors configured, ActiveMQ can be deployed automatically onto any J2EE 1.4-compliant commercial server
(5) Support for multiple transport protocols: in-VM, TCP, SSL, NIO, UDP, JGroups, JXTA
(6) High-speed message persistence through JDBC and the Journal
(7) Designed for high performance in cluster, client-server, and point-to-point configurations
(8) Ajax support
(9) Supports integration with Axis
(10) The embedded JMS provider can easily be used for testing, as in the sketch below
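A minimal JMS sketch against an embedded broker might look like the following; it assumes the ActiveMQ 5.x client (javax.jms), and the vm:// URL and queue name are placeholders.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ActiveMqSketch {
    public static void main(String[] args) throws JMSException {
        // vm:// starts an in-process broker; persistence is disabled for a quick test.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("demo.queue");   // queue name is a placeholder

        // Producer side
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello from JMS"));

        // Consumer side
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(1000);
        System.out.println("received: " + (received != null ? received.getText() : "nothing"));

        connection.close();
    }
}
```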
5) Redis high-performance cache database
Redis data structures and related common commands
**Key:** Redis is based on a key-value data model. Any binary sequence can be used as a Redis key (for example an ordinary string, or even the bytes of a JPEG image).
**String:** String is the basic data type of Redis. Redis has no separate Int, Float, or Boolean types; all basic values are represented as strings.
SET: sets the value of a key. The EX/PX options specify the key's expiration time, and the NX/XX options make the command conditional on whether the key already exists. Time complexity O(1)
GET: Obtains the value of a key. Time complexity O(1)
GETSET: Sets the value of a key and returns the original value of the key, time complexity O(1)
MSET: Set values for multiple keys, time complexity O(N)
MSETNX: like MSET, but if any of the specified keys already exists, no operation is performed. Time complexity O(N)
MGET: Obtain the values of multiple keys, time complexity O(N)
INCR: Increases the value of the key by 1 and returns the value after the increment. Applies only to String data that can be converted to integers. Time complexity O(1)
INCRBY: increments the value of the key by the specified integer amount and returns the incremented value. Applies only to String data that can be converted to an integer. Time complexity O(1)
DECR/DECRBY: same as INCR/INCRBY, but decrementing instead of incrementing (these commands are exercised in the sketch below).
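The sketch below runs the String commands above through the Jedis Java client; the host, port, key names and values are placeholders.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisStringSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {        // host/port are placeholders
            // SET with EX (expire after 30 seconds) and NX (only if the key does not exist yet)
            jedis.set("session:42", "alice", SetParams.setParams().ex(30).nx());

            System.out.println(jedis.get("session:42"));           // GET
            System.out.println(jedis.getSet("session:42", "bob")); // GETSET: returns the old value "alice"

            jedis.mset("a", "1", "b", "2");                        // MSET
            System.out.println(jedis.mget("a", "b"));              // MGET -> [1, 2]

            System.out.println(jedis.incr("counter"));             // INCR   -> 1
            System.out.println(jedis.incrBy("counter", 10));       // INCRBY -> 11
            System.out.println(jedis.decr("counter"));             // DECR   -> 10
        }
    }
}
```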
6) Hands-on project materials
(1) Kafka million-level throughput in practice
(2) Memcached
(3) High-performance cache development in practice
(4) MongoDB advanced practice