The main contents of this article are as follows:

 

The main content

Preface

We all talk about distributed systems, especially in interviews: whether you are interviewing as a junior or a senior software engineer, you will be expected to know about distribution, or even to have some hands-on experience with it. So what exactly is this much-hyped distribution, and what are its advantages?

Borrowing an analogy from the ninjutsu in Naruto

 

Wind Style: Rasenshuriken

Those of you who have watched Naruto will know his signature ninjutsu: the Multi Shadow Clone Jutsu.

  • One of the most powerful aspects of this technique is that the clones share their perceptions and experiences: if one clone goes to Kakashi (Naruto’s teacher) to ask a question, the other clones will also know what was asked.
  • Naruto has another extremely powerful ninjutsu that must be performed by several shadow clones together: the Wind Style: Rasenshuriken. This jutsu is accomplished through the collaboration of three Narutos.

What do these two ninjutsu have to do with distribution?

  • Systems or services that are distributed in different places are interconnected.
  • Distributed systems work cooperatively.

Case study:

  • For example, the Sentinel mechanism of Redis can tell which Redis node has failed in a clustered environment.
  • Kafka elects a new leader from the followers if a node fails. (The leader serves as the entry point for reading and writing data, while the followers replicate the leader’s data.)

So what are the downsides to the multiple incarnations?

  • It consumes a lot of chakra. Distributed systems have the same problem: they require several times more resources to support them.

A popular understanding of distribution

  • It’s a way of working
  • A collection of independent computers that appears to its users as a single coherent system
  • Different parts of the business are deployed in different places

The advantages can be considered at two levels: macro and micro.

  • At the macro level, a system with multiple functional modules is split apart to decouple the calls between services.
  • At the micro level, the services provided by each module are distributed across different machines or containers to scale out service capacity.

Everything has its yin and yang, so what problems does distribution bring?

  • It requires more skilled engineers who understand distributed systems, so labor costs rise
  • Architectural design becomes far more complex, and the learning cost is high
  • The cost of O&M deployment and maintenance increases significantly
  • The call chains between multiple services become longer, making development and troubleshooting harder
  • Higher demands on environment reliability
  • Data idempotency problems
  • Data ordering problems
  • …and so on

Speaking of distribution, you have to know the CAP theorem and BASE theory; here is a quick primer for readers who are not familiar with them.

The CAP theorem

In theoretical computer science, the CAP theorem states that it is impossible for a distributed computing system to satisfy all three of the following requirements at the same time:

  • Consistency

    • All nodes access the same latest copy of data.
  • Availability

    • Each request gets a non-error response, but the data retrieved is not guaranteed to be up to date
  • Partition tolerance

    • The system continues to operate despite network partitions. If data consistency cannot be reached within the time limit, a partition has occurred, and you must choose between C and A for the current operation.

BASE theory

BASE is an acronym for Basically Available, Soft state, and Eventually consistent. BASE theory is an extension of the AP choice in CAP: it trades strong consistency for availability. When a failure occurs, part of the data is allowed to be unavailable as long as core functions stay available, and data is allowed to be inconsistent for a period of time as long as it eventually reaches a consistent state. Transactions that satisfy BASE theory are called flexible transactions.

  • Basically available: when a distributed system fails, it may sacrifice some non-core functions to keep core functions available. For example, if an e-commerce site’s payment function has problems, products can still be browsed normally.
  • Soft state: since strong consistency is not required, BASE allows intermediate states (also called soft states) to exist in the system. Such states do not affect overall availability, for example an order in a “payment in progress” or “data synchronizing” state. Once the data finally becomes consistent, the state changes to “success”.
  • Eventual consistency: the data on all nodes becomes consistent after some period of time. For example, an order’s “payment in progress” status eventually changes to “payment succeeded” or “payment failed”, so that the order status matches the actual transaction result, but this requires a certain delay and waiting.

One, the pits of distributed message queues

How is a message queue distributed?

The messages in a message queue are spread across multiple nodes (machines or containers), and the union of the message queues on all nodes contains all the messages.

1. The pit: message consumption is not idempotent

The concept of idempotency

Idempotency means that no matter how many times an operation is performed, the result is the same as performing it once. If a message is consumed multiple times, it is likely to cause data inconsistency. But if a message is inevitably consumed multiple times, that is acceptable as long as we developers have technical means to keep the data consistent. This reminds me of the ABA problem in Java concurrent programming: even if an [ABA problem] occurs, it is acceptable as long as the final data is consistent.

Scenario analysis

The RabbitMQ, RocketMQ, and Kafka message queue middlewares can all exhibit the problem of repeated message consumption. This is not guaranteed by the MQ itself; it must be guaranteed by the developers.

These are among the most powerful distributed message queues in the industry, and using them means taking the idempotency of messages into account. Let’s take Kafka as an example and look at how to keep consumption idempotent.

Kafka has the concept of an offset, which is the sequence number of a message: every message written to the message queue has an offset. As a consumer consumes data, it commits the offsets of consumed messages at fixed intervals, declaring that they have been consumed; the next round of consumption starts from after the committed offset.

Pit: when a message has been consumed but the system shuts down before its offset is committed, the messages whose offsets were not committed will be consumed again.

As shown in the figure below, data A, B, and C in the queue have offsets 100, 101, and 102 respectively, and all have been consumed by the consumer. However, only offset 100 of data A was committed successfully; the other two offsets were not committed in time because the system restarted.

 

System restart, offset not committed

After the restart, the consumer again fetches data from after offset 100, starting with the message at offset 101. So data B and data C are consumed repeatedly (see the manual-commit sketch after the figure below).

As shown below:

 

After the restart, the message is consumed repeatedly
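
To shrink the window in which this pit can occur, offsets can be committed manually only after processing finishes. Below is a minimal sketch using the Kafka Java client; the broker address, group ID, and topic name order-topic are illustrative assumptions. Note that a crash between process() and commitSync() still causes redelivery, which is exactly why the idempotency measures in the guide below remain necessary.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Disable auto-commit so offsets are committed only after processing.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("order-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // business logic; must still be idempotent
                }
                // Commit after the whole batch is processed. A crash between
                // process() and commitSync() still causes redelivery.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
    }
}
```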

Pit-avoidance guide

  • WeChat payment result notification scenario

    • WeChat’s official documentation notes that a WeChat Pay result notification may be pushed multiple times, so developers must ensure idempotency themselves. The first time, we can directly update the order status (e.g. payment in progress -> payment succeeded); on later notifications, we first check the order status: if it is no longer “payment in progress”, the order-processing logic is skipped.
  • Database Insertion scenario

    • Each time data is inserted, check whether the primary key ID already exists in the database; if it does, update the row instead.
  • Write a Redis scenario

    • Redis’s SET operation is naturally idempotent, so there is no need to worry about writing data to Redis.
  • Other Scenarios

    • When the producer sends each piece of data, it attaches a globally unique ID, similar to an order ID. On each consumption, first check whether this ID exists in Redis. If not, process the message normally and save the ID into Redis; if the ID is found, the message was consumed before, so do not process it again (see the sketch after this list).
    • Different business scenarios may call for different idempotency solutions; choose the one that fits. The options above are only the common ones.
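
As an illustration of the unique-ID approach above, here is a minimal Java sketch using the Jedis client. The key prefix mq:consumed:, the 24-hour TTL, and the Redis address are illustrative assumptions; the point is that SET ... NX EX checks and records the message ID atomically.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class IdempotentConsumer {

    private final Jedis jedis = new Jedis("localhost", 6379);

    /**
     * Returns true only the first time this message ID is seen.
     * SET key value NX EX sets the key only if it does not already exist,
     * so the check and the write happen atomically inside Redis.
     */
    public boolean tryMarkConsumed(String messageId) {
        String result = jedis.set("mq:consumed:" + messageId, "1",
                SetParams.setParams().nx().ex(24 * 3600));
        return "OK".equals(result);
    }

    public void onMessage(String messageId, String payload) {
        if (!tryMarkConsumed(messageId)) {
            return; // duplicate delivery: already consumed, drop it
        }
        // ... normal business processing of payload ...
    }
}
```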

2. The pit: messages in the message queue are lost

Pit: what problems does message loss cause? If messages about order placement, payment result notification, or fee deduction are lost, it may cause financial losses; if the amounts involved are large, the losses to the client can be huge.

Do message queues guarantee that messages are never lost? Answer: no. There are three main scenarios in which messages are lost.

 

Message loss in message queue

(1) The message is lost when the producer sends it to the queue

 

The producer loses the message

The solution

  • Transaction mechanism (not recommended, synchronous)

For RabbitMQ, enable transactions with channel.txSelect. If a message fails to be queued, the producer receives an exception, rolls back with channel.txRollback, and retries sending the message. If the queue receives the message, the transaction can be committed with channel.txCommit. But this is a synchronous operation, which hurts performance.
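
A minimal sketch of the transaction mechanism with the RabbitMQ Java client (amqp-client 5.x assumed; the queue name order-queue is illustrative):

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TxProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.txSelect(); // open a transaction on this channel
            try {
                channel.basicPublish("", "order-queue", null,
                        "order created".getBytes(StandardCharsets.UTF_8));
                channel.txCommit(); // blocks until the broker confirms the commit
            } catch (Exception e) {
                channel.txRollback(); // roll back, then retry the send
                throw e;
            }
        }
    }
}
```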

  • Confirm mechanism (recommended, asynchronous)

There is another mode that solves the performance problem of synchronous transactions: confirm mode. Each message sent by the producer is assigned a unique ID. If the message is successfully written to a RabbitMQ queue, RabbitMQ sends back an ACK indicating that it was received. If RabbitMQ fails to process the message, the nack callback is invoked, and you need to retry sending the message.

You can also combine a custom timeout with the message ID to implement a timeout-and-retry mechanism. One remaining problem: the ACK callback itself may fail, causing the message to be sent twice; in that case you must ensure the consumer handles messages idempotently.
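
A minimal confirm-mode sketch with the same client (the queue name and resend strategy are illustrative; a real producer would re-publish nacked messages instead of just logging them):

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentSkipListMap;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.ConfirmListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConfirmProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.confirmSelect(); // switch the channel into confirm mode

        // Track in-flight messages by publish sequence number so that
        // nacked ones can be found and re-sent.
        ConcurrentSkipListMap<Long, String> outstanding = new ConcurrentSkipListMap<>();

        channel.addConfirmListener(new ConfirmListener() {
            @Override
            public void handleAck(long deliveryTag, boolean multiple) {
                // The broker accepted the message(s): stop tracking them.
                if (multiple) {
                    outstanding.headMap(deliveryTag, true).clear();
                } else {
                    outstanding.remove(deliveryTag);
                }
            }

            @Override
            public void handleNack(long deliveryTag, boolean multiple) {
                // The broker failed to handle the message(s): resend here.
                System.err.println("nack for seqNo " + deliveryTag + ", resend needed");
            }
        });

        String body = "order created";
        outstanding.put(channel.getNextPublishSeqNo(), body);
        channel.basicPublish("", "order-queue", null, body.getBytes(StandardCharsets.UTF_8));
    }
}
```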

Differences between the transaction mechanism and confirm mode:

  • The transaction mechanism is synchronous: after committing a transaction, the producer is blocked until the commit completes.
  • Confirm mode receives notifications asynchronously, but a notification may never arrive, so you need to handle the case in which no notification is received.

(2) The message queue loses messages

 

The message queue loses messages

Messages in a message queue can be kept in memory, or transferred from memory to disk (such as a database); usually both memory and disk hold messages. If messages live only in memory, then they are all lost when the machine restarts. Even with the disk involved, there is an extreme case: the message queue fails while transferring data from memory to disk, so the messages are never persisted.

The solution

  • Set the queue to durable (persistent) when you create it.
  • Set the message deliveryMode to 2 (persistent) when sending a message (see the sketch below).
  • Enable the producer confirm mode so that failed messages can be sent again.
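
A combined sketch of the first two points (durable queue plus persistent delivery mode), again with the RabbitMQ Java client and an illustrative queue name:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // durable = true: the queue definition survives a broker restart.
            channel.queueDeclare("order-queue", true, false, false, null);
            // PERSISTENT_TEXT_PLAIN carries deliveryMode = 2, so the message
            // body is written to disk once routed to the durable queue.
            channel.basicPublish("", "order-queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "order created".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```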

(3) Consumers lose messages

 

The consumer loses the message

The consumer has just received the data and has not yet begun processing the message when the process terminates abnormally; with automatic ack, the consumer never gets a chance to receive the message again.

The solution

  • Turn off RabbitMQ’s automatic ack, which otherwise marks a message as acknowledged as soon as it is delivered to the consumer.
  • After actually processing a message, the consumer explicitly acks it, telling the message queue that processing is finished (see the sketch below).
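
A minimal manual-ack consumer sketch (queue name illustrative):

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        DeliverCallback callback = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            process(body); // business logic runs first
            // Ack only after processing: if the process crashes before this
            // line, RabbitMQ redelivers the message instead of losing it.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        // autoAck = false: RabbitMQ keeps the message until it is explicitly acked.
        channel.basicConsume("order-queue", false, callback, consumerTag -> { });
    }

    private static void process(String body) {
        System.out.println("processing: " + body);
    }
}
```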

Q: What is the flaw in this manual ack? What if the process crashes just before the ack is sent?

The message may be consumed again, and that is exactly where idempotent processing comes in.

Q: What if a message keeps failing and being re-consumed forever?

If the number of retries exceeds a threshold, stop redelivering the message: record it in an exception table, or send it in an alert to the on-duty staff.

(4) Summary of RabbitMQ message loss

 

Processing scheme for RabbitMQ lost messages

(5) Kafka loses messages

Scenario: one of Kafka’s brokers goes down and a new leader (the node that handles writes) is elected. If some data on the followers had not yet been synchronized when the leader died, the message queue loses that data once a follower becomes the new leader.

The solution

  • Set replication.factor on the topic: the value must be greater than 1, so that each partition has at least 2 replicas.
  • Set min.insync.replicas on the Kafka broker to a value greater than 1, meaning a leader must have at least one follower still in sync with it (see the setup sketch below).
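
A minimal setup sketch with Kafka’s Java AdminClient; the topic name, partition count, and broker address are illustrative assumptions. (On the producer side, acks=all is commonly set alongside these so the leader waits for its in-sync replicas before acknowledging.)

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class SafeTopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // replicationFactor = 3: each partition is replicated on 3 brokers.
            NewTopic topic = new NewTopic("order-topic", 3, (short) 3)
                    // min.insync.replicas = 2: writes are accepted only while
                    // the leader has at least one in-sync follower.
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```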

3. The pit: messages in the message queue are out of order

Pit: a user first places an order successfully and then cancels it; if the two operations are processed in reverse order, the database ends up containing a successful order.

The RabbitMQ scenario:

  • The producer sends two messages to the message queue in order: message 1 “add data A” and message 2 “delete data A”.
  • Expected result: data A is deleted.
  • But if there are two consumers, the consumption order may be message 2 then message 1, and the final result is that data A is added.

 

RabbitMQ message sequence is out of order

 


RabbitMQ solution:

  • Split the queue: create multiple in-memory queues, and route message 1 and message 2 into the same queue.
  • Create multiple consumers, each consuming its own queue.

 

RabbitMQ solution

The Kafka scenario:

  • Create a topic with 3 partitions.
  • Create an order record and use the order ID as the message key, so that all messages related to the same order go to the same partition; messages created by the same producer then arrive in order.
  • To consume messages quickly, multiple consumers are created, and for efficiency each consumer may spawn multiple threads to fetch and process messages in parallel, which can put them out of order.

 

Kafka message out-of-order scenario

Kafka solution:

  • The solution is similar to RabbitMQ’s: use multiple in-memory queues, with one thread consuming each queue.
  • Messages with the same key go into the same queue (a producer-side sketch follows the figure below).

 

Kafka message out of order solution
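
On the producer side, keying messages by order ID is enough to route them to the same partition, as in this minimal sketch (topic and broker address are illustrative):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String orderId = "order-1001";
            // Same key -> same partition, so Kafka preserves the relative
            // order of these two messages.
            producer.send(new ProducerRecord<>("order-topic", orderId, "create order"));
            producer.send(new ProducerRecord<>("order-topic", orderId, "cancel order"));
        }
    }
}
```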

4. The pit: messages in the message queue are backlogged

Message backlog: there are so many messages in the message queue that they cannot be consumed in time.

Scenario 1: something goes wrong on the consumer side; for example, no consumers are consuming at all, causing messages to pile up in the queue.

Scenario 2: something goes wrong on the consumer side; for example, consumers consume too slowly, resulting in a backlog of messages.

Pit: suppose an online ordering promotion is running and all orders go through the message queue. If the messages back up, orders cannot complete, and a great deal of business is lost.

 

Messages in the message queue are backlogged

Solution: the one who tied the bell must untie it

  • Fix the consumer-side problem at the code level, so that consumption is restored and runs as fast as possible.
  • Stop the existing consumers.
  • Temporarily create five times the number of queues.
  • Temporarily create five times the number of consumers.
  • Route all the backlogged messages into the temporary queues and let the extra consumers drain them.

 

Message backlog solution

5. The pit: messages in the message queue expire

Pit: RabbitMQ can set an expiration time (TTL) on messages; messages not consumed within that time are cleared by RabbitMQ, and those messages are lost.

 

Message expiration

Solution:

  • Have a batch re-delivery program ready in advance.
  • Manually re-deliver the expired messages in timed batches.

 

Message expiration solution

6. The pit: the message queue is full

Pit: when a backlog causes the message queue to approach full capacity, it can no longer accept new messages, and messages produced by the producer are discarded.

Solution:

  • Determine which messages are useless; in RabbitMQ they can be removed with the Purge Message operation.
  • If the messages are useful, consume them quickly and move their contents into a database.
  • Have a program ready to re-deliver the messages stored in the database back into the message queue.
  • Re-deliver the messages to the message queue during idle hours.

Two, the pits of distributed caching

In scenarios with high-frequency database access, a caching layer is added between the business layer and the data layer to absorb database load; after all, disk I/O is very slow. For example, a lookup from cache may take 5 ms while the same lookup from the database takes 50 ms, an order of magnitude slower. Under high concurrency the database may also lock data, slowing access further.

Redis is one of the most commonly used distributed cache services.

1. The pit: Redis data loss

The sentinel mechanism

Redis achieves cluster high availability through the sentinel mechanism. So what is the sentinel mechanism?

  • English name: Sentinel (Chinese: 哨兵).
  • Cluster monitoring: checks that the master and slave processes are running normally.
  • Message notification: reports failure information to O&M staff.
  • Failover: switches the master role over to a standby node.
  • Configuration center: notifies clients of the new master node’s IP address.
  • Distributed: multiple sentinels are spread across the master and standby nodes and work cooperatively.
  • Distributed election: a master/standby switchover requires the consent of a majority of the sentinels.
  • High availability: even if some sentinels go down, the sentinel cluster still works.

Pit: when the master node fails, a master/standby switchover is required, and the switchover may lose data.

Data loss caused by asynchronous data replication

While the master node is asynchronously replicating data to the standby node, the master goes down, so some data never reaches the standby. The standby node is then elected master, and that data is lost.

Data loss due to split-brain

The machine hosting the master node drops off the cluster network but is in fact still running. The sentinels elect a standby node as the new master, and at that point two master nodes are running: like two brains both telling the cluster what to do. Which one should be obeyed? That is split-brain.

So how does split-brain lose data? If a client is still connected to the first master node before it switches to the new one, some data keeps being written to the first master, and the new master never receives it. When the first master recovers, it rejoins the cluster as a standby node: its data is wiped, and it replicates afresh from the new master. Since the new master lacks the data the client wrote earlier, that data is lost.

Pit-avoidance guide

  • Configure min-slaves-to-write 1 (called min-replicas-to-write in newer Redis versions), requiring at least one reachable standby before the master accepts writes.
  • Configure min-slaves-max-lag 10 (min-replicas-max-lag), so replication lag to the standby may not exceed 10 seconds; at most about 10 seconds of data can then be lost.

Note: cache avalanche, cache penetration, and cache breakdown are not unique to distributed systems; they can occur on a single machine too, so they are not distributed pits.

Three, the pits of splitting databases and tables

1. The pit: scaling out split databases and tables

Database splits, table splits, vertical splits, and horizontal splits

  • Database split: because one database supports only a limited number of concurrent connections, you can split its data across multiple databases to increase the maximum concurrency.

  • Table split: when a single table holds too much data, it becomes hard to query efficiently even with indexes, so you can split the table’s data into multiple tables. A query then only needs to touch one of the split tables, and SQL query performance improves.

  • Benefits of splitting databases and tables: concurrency capacity is multiplied; disk usage per node is greatly reduced; a single table holds less data, so SQL executes more efficiently.

  • Horizontal split: split one table’s rows across multiple databases, each database holding the same table structure. Multiple libraries support higher concurrency. For example, an order table accumulating 5 million rows every month can be split horizontally by month, putting last month’s data into another database.

  • Vertical split: split a table with many fields into multiple tables, in the same library or in multiple libraries. Put frequently accessed fields in one table and rarely accessed fields in another, making use of the database cache for the frequently accessed data. For example, a wide order table is divided into several tables storing different fields (redundant fields are allowed).

  • Ways of splitting databases and tables:

    • Split databases and tables by tenant.
    • Split databases and tables by time range.
    • Split databases and tables by taking the ID modulo the shard count (a routing sketch follows this list).
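
A minimal sketch of ID-modulo routing; the database and table counts and the naming pattern are illustrative assumptions:

```java
public class ShardRouter {
    private static final int DB_COUNT = 4;     // number of databases
    private static final int TABLE_COUNT = 8;  // tables per database

    /** Route an order to a database and table by taking the ID modulo. */
    public static String route(long orderId) {
        long dbIndex = orderId % DB_COUNT;
        long tableIndex = (orderId / DB_COUNT) % TABLE_COUNT;
        return "order_db_" + dbIndex + ".t_order_" + tableIndex;
    }

    public static void main(String[] args) {
        System.out.println(route(123456789L)); // order_db_1.t_order_5
    }
}
```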

Pit: splitting databases and tables is an operations-level job, sometimes requiring a downtime window in the early morning to run the upgrade. You may stay up until dawn, have the upgrade fail, and need to roll back; it is a real ordeal for the technical team.

How can the process be automated, to save the downtime of migrating databases and tables?

  • Double-write migration: during migration, every insert, delete, and update of new data is applied to both the old and the new databases.
  • Use the Sharding-JDBC toolkit to implement the splitting.
  • Use a program to compare the data in the two libraries until they are consistent.

Pit: splitting looks bright and shiny, but what new problems does it introduce?

Problems with vertical split

  • A single table may still end up with too much data.
  • Some tables can no longer be joined in queries and must be composed through interface aggregation, which raises development complexity.
  • Distributed processing becomes complex.

Problems with horizontal splitting

  • Join queries across libraries perform poorly.
  • Expanding capacity repeatedly and maintaining many shards is a heavy burden.
  • Transaction consistency across shards is difficult to guarantee.

2. The pit: unique IDs after splitting databases and tables

Why a split table needs a globally unique ID

  • Once a table is split, its primary key ID must be globally unique. Take an order table split between library A and library B: if both order tables auto-increment from 1, order queries become chaotic, with many duplicated order IDs that are not actually the same order.
  • One desired outcome of splitting is to spread data access across the libraries. Some scenarios require an even spread, so when data is inserted into multiple databases, unique IDs need to be generated in a way that spreads the requests evenly across all the databases.

Pit: there are many ways to generate a unique ID, each with its own use case; do not pick the wrong one.

Principles for generating unique IDs

  • Global uniqueness
  • Trending upward
  • Monotonically increasing
  • Information security

Several ways to generate unique IDs

  • Database auto-increment ID: every time the database adds a record, the ID increases by 1.

    • Disadvantages:

      • IDs in different libraries may collide, so this scheme can be rejected outright; it is not suitable for ID generation after splitting databases and tables.
      • Not information-secure.
  • Universally Unique Identifier (UUID).

    • Disadvantages:

      • A UUID is too long and occupies a lot of space.
      • When used as a primary key, writes cannot be sequential appends and become random inserts, which requires reading a whole B+ tree node into memory and writing the whole node back to disk after the record is inserted; when records are large, performance is poor.
  • Use the current system time as the unique ID.

    • Disadvantages:

      • Under high concurrency, multiple identical IDs may be generated within one millisecond.
      • Not information-secure.
  • Twitter’s Snowflake algorithm: Twitter’s open-source distributed ID generation algorithm. It produces a 64-bit long ID that is divided into four parts.

  • Fundamentals, advantages, and disadvantages:

    • 1 bit: unused; the value is always 0.

    • 41 bits: indicates the millisecond timestamp, which can represent 69 years.

    • 10 bits: 5 bits for the equipment-room (data-center) ID and 5 bits for the machine ID; each of up to 32 equipment rooms can hold up to 32 machines.

    • 12 bits: up to 4096 IDs within the same millisecond, generated by auto-increment.

    • Advantages:

      • The millisecond timestamp occupies the high bits and the increment sequence the low bits, so the whole ID trends upward.
      • It does not rely on third-party systems such as databases, and is deployed as a service, providing higher stability and high performance in ID generation.
      • You can allocate bits according to your service characteristics, which is very flexible.
    • Disadvantages:

      • Heavy reliance on the machine clock: if the clock is rolled back, it can produce duplicate IDs or become unavailable (search for the 2017 leap second, 7:59:60). A reference sketch follows this list.
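
For reference, here is a compact Java sketch of the classic Snowflake layout described above. The custom epoch is an illustrative choice, and production code needs a better clock-rollback strategy than simply throwing:

```java
public class SnowflakeIdGenerator {
    private static final long EPOCH = 1577836800000L; // 2020-01-01T00:00:00Z, illustrative

    private static final long DATACENTER_BITS = 5L;
    private static final long WORKER_BITS = 5L;
    private static final long SEQUENCE_BITS = 12L;

    private static final long MAX_DATACENTER = ~(-1L << DATACENTER_BITS); // 31
    private static final long MAX_WORKER = ~(-1L << WORKER_BITS);         // 31
    private static final long SEQUENCE_MASK = ~(-1L << SEQUENCE_BITS);    // 4095

    private static final long WORKER_SHIFT = SEQUENCE_BITS;
    private static final long DATACENTER_SHIFT = SEQUENCE_BITS + WORKER_BITS;
    private static final long TIMESTAMP_SHIFT = SEQUENCE_BITS + WORKER_BITS + DATACENTER_BITS;

    private final long datacenterId;
    private final long workerId;
    private long sequence = 0L;
    private long lastTimestamp = -1L;

    public SnowflakeIdGenerator(long datacenterId, long workerId) {
        if (datacenterId < 0 || datacenterId > MAX_DATACENTER
                || workerId < 0 || workerId > MAX_WORKER) {
            throw new IllegalArgumentException("datacenterId/workerId out of range");
        }
        this.datacenterId = datacenterId;
        this.workerId = workerId;
    }

    public synchronized long nextId() {
        long timestamp = System.currentTimeMillis();
        if (timestamp < lastTimestamp) {
            // The clock-rollback weakness mentioned above.
            throw new IllegalStateException("clock moved backwards");
        }
        if (timestamp == lastTimestamp) {
            sequence = (sequence + 1) & SEQUENCE_MASK;
            if (sequence == 0) { // 4096 IDs used within this millisecond: spin
                while (timestamp <= lastTimestamp) {
                    timestamp = System.currentTimeMillis();
                }
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = timestamp;
        return ((timestamp - EPOCH) << TIMESTAMP_SHIFT)
                | (datacenterId << DATACENTER_SHIFT)
                | (workerId << WORKER_SHIFT)
                | sequence;
    }

    public static void main(String[] args) {
        System.out.println(new SnowflakeIdGenerator(1, 1).nextId());
    }
}
```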
  • Baidu’s UIDGenerator algorithm.

    • An optimized algorithm based on Snowflake.
    • It borrows from future timestamps and a double buffer to solve the clock-rollback and generation-performance problems, and combines ID allocation with MySQL.
    • Advantages: solves the clock-rollback and generation-performance issues.
    • Disadvantage: depends on a MySQL database.
  • Meituan’s Leaf-Snowflake algorithm.

  • (Image source: Meituan)

    • IDs are obtained by accessing the database through a proxy service, which fetches a batch of IDs (a number segment) at a time.

    • Double buffering: when 10% of the current batch of IDs has been used, the service fetches a new batch from the database and caches it; once the current IDs are used up, the cached batch is used directly.

    • Advantages:

      • Leaf services can easily scale linearly, and the performance is fully capable of supporting most business scenarios.
      • The ID is an ascending 64-bit (8-byte) number, which meets the primary-key requirements of a database.
      • High disaster tolerance: the Leaf service caches number segments internally, so even if the DB is down, Leaf can still serve requests for a short period.
      • The max_id step size can be customized, which makes it easy to migrate from existing IDs.
      • Even if the DB goes down, Leaf can keep issuing numbers for a while.
      • Occasional network jitter does not affect fetching the next number segment.
    • Disadvantages:

      • The IDs are not random enough, so they reveal how many have been issued, which is not secure.

How to choose: for ordinary internal systems, the Snowflake algorithm is enough; if you want something more secure and reliable, choose Baidu’s or Meituan’s unique-ID generation scheme.

Four, the pits of distributed transactions

How should you understand transactions?

  • A transaction can be understood simply as: either all of it is done, or none of it is done, as if it had never happened.
  • In a distributed world, services call each other and the call chains can be very long. If any party fails, the related operations of every other service involved must be rolled back. For example, the order service places an order successfully and calls the marketing center’s coupon interface to issue a coupon, but the WeChat Pay deduction fails; the coupon must then be returned and the order status changed to an abnormal order.

Pit: how to guarantee that a distributed transaction executes correctly is a big problem.

There are several main approaches to distributed transactions:

  • XA scheme (two-phase commit)
  • TCC scheme (Try, Confirm, Cancel)
  • SAGA scheme
  • Reliable-message eventual-consistency scheme
  • Best-effort notification scheme

Principle of the XA scheme

 

XA scheme

  • A transaction manager coordinates the transactions of multiple databases: is each database ready? If all are ready, the operation is executed against the databases; if any database is not ready, the transaction is rolled back.
  • Suitable for monolithic applications, not for microservice architectures, because each service may only access its own database; cross-service access to other microservices’ databases is not allowed.

TCC scheme

  • Try phase: check each service’s resources and lock or reserve them.
  • Confirm phase: perform the actual operations in each service.
  • Cancel phase: if any service’s business method fails, the steps that already succeeded must be compensated (rolled back); a sketch of a participant interface follows this list.
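
To make the three phases concrete, here is a hypothetical TCC participant interface for an account-deduction service; the method names and signatures are illustrative, not from any particular TCC framework:

```java
/**
 * A hypothetical TCC participant for an account-deduction service.
 */
public interface AccountTccAction {

    /** Try: check the balance and freeze (reserve) the amount. */
    boolean tryDeduct(String txId, String accountId, long amountCents);

    /** Confirm: actually deduct the previously frozen amount. */
    boolean confirm(String txId);

    /** Cancel: release the frozen amount when any step failed. */
    boolean cancel(String txId);
}
```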

Application Scenarios:

  • Payment and trading scenarios, where you must ensure the money is correct.
  • Scenarios with strict consistency requirements.

Disadvantages:

  • It is not recommended in other scenarios, because the compensation logic requires a lot of code and is hard to maintain.

SAGA scheme

Basic Principles:

  • If any step in the business process fails, the previously successful steps are compensated.

Applicable scenarios:

  • Long business processes with many steps.
  • Participants include services of other companies or legacy systems.

Advantages:

  • The first phase commits local transactions without locks, giving high performance.
  • Participants can execute asynchronously with high throughput.
  • Compensation services are easy to implement.

Disadvantages:

  • Transaction isolation is not guaranteed.

Reliable-message eventual-consistency scheme

 

Reliable-message eventual-consistency scheme

Basic Principles:

  • Message transactions are implemented with the messaging middleware RocketMQ.
  • Step 1: system A sends a message to MQ, which marks the message status as prepared (a half message); the message cannot yet be subscribed to.
  • Step 2: MQ responds to system A, telling it that the message has been received.
  • Step 3: system A executes its local transaction.
  • Step 4: if system A’s local transaction succeeds, the prepared message is changed to commit, and system B can subscribe to and receive it.
  • Step 5: MQ also periodically polls all prepared messages and calls back to system A to ask how the local transaction is going, so MQ knows whether to keep waiting or to roll back.
  • Step 6: system A checks the execution result of its local transaction.
  • Step 7: if system A’s local transaction failed, MQ receives a rollback signal and discards the message; if it succeeded, MQ receives a commit signal.
  • After system B receives the message, it executes its own local transaction; if that fails, it retries automatically until it succeeds, or system B rolls back and notifies system A to roll back by other means.
  • System B’s processing must be idempotent. (A RocketMQ sketch follows this list.)
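
A minimal sketch of the flow with the RocketMQ 4.x Java client; the group, topic, and name-server address are illustrative, and the local-transaction bodies are stubs:

```java
import java.nio.charset.StandardCharsets;

import org.apache.rocketmq.client.producer.LocalTransactionState;
import org.apache.rocketmq.client.producer.TransactionListener;
import org.apache.rocketmq.client.producer.TransactionMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;

public class ReliableMessageProducer {
    public static void main(String[] args) throws Exception {
        TransactionMQProducer producer = new TransactionMQProducer("order-tx-group");
        producer.setNamesrvAddr("localhost:9876");
        producer.setTransactionListener(new TransactionListener() {
            @Override
            public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
                try {
                    // Step 3: run system A's local transaction here,
                    // e.g. save the order to the local database.
                    return LocalTransactionState.COMMIT_MESSAGE; // step 4
                } catch (Exception e) {
                    return LocalTransactionState.ROLLBACK_MESSAGE; // step 7
                }
            }

            @Override
            public LocalTransactionState checkLocalTransaction(MessageExt msg) {
                // Step 5: MQ polls prepared messages and asks how the local
                // transaction went; answer by checking the local database.
                return LocalTransactionState.COMMIT_MESSAGE;
            }
        });
        producer.start();

        // Steps 1-2: send the half (prepared) message; consumers cannot see
        // it until the local transaction commits.
        Message msg = new Message("order-topic", "create",
                "order created".getBytes(StandardCharsets.UTF_8));
        producer.sendMessageInTransaction(msg, null);
    }
}
```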

Best-effort notification scheme

Basic Principles:

  • After its local transaction completes, system A sends a message to MQ.
  • MQ persists the message.
  • If system B’s local transaction fails, a best-effort service periodically retries calling system B, trying its best to make it succeed. If system B still fails after several attempts, give up: hand it over to developers to investigate, followed by manual compensation.

How to choose among these schemes

  • For payments and trading, prefer TCC.
  • For large systems whose requirements are less stringent, consider message transactions or the SAGA scheme.
  • For monolithic applications, two-phase commit (XA) is recommended.
  • Adding best-effort notification is advisable; after all, you cannot hand every problem straight to developers to investigate. Retry a few times first and see whether it succeeds.

Closing words

This article is just an overview. From these pits we can also see that distribution has both advantages and disadvantages; whether to use it depends entirely on the business, the time, the cost, and the overall strength of the development team. I will continue to share some underlying principles of distributed systems, along with guides for avoiding the pits.

References:

  • Meituan’s Leaf-Snowflake algorithm
  • Baidu’s UIDGenerator algorithm
  • Advanced-Java