
I. Java Basics

1. Methods of the Object class

  • getClass: returns the runtime Class object of this instance.
  • hashCode: returns the object's hash code.
  • clone: copies the object; the class must implement the Cloneable interface. A shallow copy copies primitive fields by value and reference fields by reference, so the copy shares referenced objects with the original; a deep copy also copies the objects those references point to, producing a fully independent object.
  • equals: the default implementation in Object compares memory addresses; classes such as String override it to compare contents.
  • toString: returns the class name, an '@', and the hash code in hexadecimal.
  • notify: wakes up a single thread waiting on this object's monitor.
  • notifyAll: wakes up all threads waiting on this object's monitor.
  • wait: suspends the current thread until it is notified; the overloads wait(long millis) and wait(long millis, int nanos) add a timeout. Unlike Thread.sleep(long), which pauses the current thread without releasing the object's lock, wait releases the lock while waiting.
  • finalize: invoked by the garbage collector before the object is reclaimed.
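
The shallow vs. deep copy distinction above can be sketched as follows; Person and Address are made-up classes for illustration, not anything from a real library.

```java
// Hypothetical classes illustrating shallow vs. deep copy with clone().
class Address implements Cloneable {
    String city;
    Address(String city) { this.city = city; }
    @Override protected Address clone() throws CloneNotSupportedException {
        return (Address) super.clone();
    }
}

class Person implements Cloneable {
    int age;          // primitive field: copied by value either way
    Address address;  // reference field: shared by a shallow copy

    Person(int age, Address address) { this.age = age; this.address = address; }

    // Shallow copy: only the Address reference is copied, not the object.
    Person shallowCopy() throws CloneNotSupportedException {
        return (Person) super.clone();
    }

    // Deep copy: the referenced Address is cloned as well.
    Person deepCopy() throws CloneNotSupportedException {
        Person copy = (Person) super.clone();
        copy.address = this.address.clone();
        return copy;
    }
}

public class CloneDemo {
    public static void main(String[] args) throws CloneNotSupportedException {
        Person original = new Person(30, new Address("Beijing"));
        Person shallow = original.shallowCopy();
        Person deep = original.deepCopy();

        original.address.city = "Shanghai";
        System.out.println(shallow.address.city); // shared reference -> "Shanghai"
        System.out.println(deep.address.city);    // independent copy -> "Beijing"
    }
}
```

Mutating the original's Address shows the difference: the shallow copy observes the change, the deep copy does not.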

2. Basic data types

  • Integral: byte (8), short (16), int (32), long (64)
  • Floating point: float (32), double (64)
  • Boolean: boolean (size not precisely specified by the JVM; commonly treated as 8 bits)
  • Character: char (16)

3. Serialization

A Java object is made serializable by implementing the Serializable interface.

  • Deserialization does not call the constructor; the deserialized object is created directly by the JVM.
  • Reference-type member variables of a serializable object must themselves be serializable; otherwise an error is reported.
  • To keep a field out of serialization, mark it with the transient modifier.
  • A serializable singleton must define a readResolve() method so that deserialization returns the existing instance.
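
A minimal sketch of these rules, assuming a hypothetical User class with a transient password field:

```java
import java.io.*;

// Illustrative serializable class; "User" and its fields are made up.
public class User implements Serializable {
    private static final long serialVersionUID = 1L;

    String name;
    transient String password; // excluded from serialization; null after deserialization

    User(String name, String password) { this.name = name; this.password = password; }

    static byte[] serialize(User u) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(u);
        }
        return bos.toByteArray();
    }

    static User deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (User) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User u = deserialize(serialize(new User("alice", "secret")));
        System.out.println(u.name);     // alice
        System.out.println(u.password); // null, because the field is transient
    }
}
```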

4. String, StringBuffer, StringBuilder

  • A String is backed by a character array (char[] before JDK 9, byte[] since) and is declared final. It is immutable and can be thought of as a constant, which also makes it thread-safe. Every modification produces a new String, and the reference is pointed at the new object.
  • StringBuffer is thread-safe (its methods are synchronized); StringBuilder is not.
  • Use String for small amounts of character data, StringBuilder for heavy string manipulation in a single thread, and StringBuffer for heavy string manipulation shared across threads.
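
A small illustration of String immutability versus StringBuilder's in-place mutation:

```java
public class StringDemo {
    public static void main(String[] args) {
        String s = "hello";
        String t = s.concat(" world"); // concat returns a new String; s is unchanged
        System.out.println(s);  // hello
        System.out.println(t);  // hello world

        // StringBuilder mutates its internal buffer in place (single-threaded use).
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            sb.append(i);
        }
        System.out.println(sb.toString()); // 012
    }
}
```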

5. Overloading and overriding

  • Overloading occurs within one class: the method name is the same but the parameter list differs in type, number, or order; the return type and modifiers may differ.
  • Overriding occurs between parent and child classes: the method name and parameters are identical, the return type and thrown exceptions must be the same as or narrower than the parent's, and the access modifier must be the same as or broader than the parent's. A subclass cannot override a superclass method that is private or final.
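
A compact sketch of both concepts (Animal and Dog are invented example classes):

```java
// Illustrative classes showing overloading vs. overriding.
class Animal {
    // Overloading: same name, different parameter lists, within one class.
    String speak() { return "..."; }
    String speak(String name) { return name + " says ..."; }
}

class Dog extends Animal {
    // Overriding: identical signature; @Override makes the compiler verify it.
    @Override String speak() { return "woof"; }
}

public class OverrideDemo {
    public static void main(String[] args) {
        Animal a = new Dog();
        System.out.println(a.speak());      // dynamic dispatch picks Dog's version: woof
        System.out.println(a.speak("Rex")); // overload resolved at compile time: Rex says ...
    }
}
```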

6. final

  • Modifies a variable of a basic type that cannot be modified once it is initialized.
  • Modifies a variable of reference type that cannot point to another reference.
  • A modified class or method that cannot be inherited or overridden.

7. Reflection

  • Obtains complete class information dynamically at run time.
  • Increases the flexibility of a program.
  • JDK dynamic proxies are built on reflection.
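
A minimal sketch of looking up and invoking a method at run time (the class and method names are made up):

```java
// Reflection sketch: find a method by name at run time and invoke it.
public class ReflectDemo {
    public String greet(String name) { return "hi " + name; }

    // Invokes a one-String-argument method located via reflection.
    static Object call(Object target, String methodName, String arg) throws Exception {
        return target.getClass()
                     .getMethod(methodName, String.class)
                     .invoke(target, arg);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(call(new ReflectDemo(), "greet", "bob")); // hi bob
    }
}
```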

8. JDK dynamic proxy

  • Usage steps

    • Define the interface and its implementation class.
    • Implement InvocationHandler and its invoke(Object proxy, Method method, Object[] args) method.
    • Create the proxy with Proxy.newProxyInstance(ClassLoader loader, Class<?>[] interfaces, InvocationHandler h).
    • Call methods through the proxy instance; every call is routed through invoke().
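
The steps above can be sketched as follows; Greeter and GreeterImpl are invented example types:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Step 1: interface and implementation class.
interface Greeter {
    String greet(String name);
}

class GreeterImpl implements Greeter {
    public String greet(String name) { return "hello " + name; }
}

public class ProxyDemo {
    // Steps 2 and 3: an InvocationHandler, wired up via Proxy.newProxyInstance.
    static Greeter proxyOf(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        // Cross-cutting logic (logging, transactions, ...) would go here.
                        return "[proxied] " + method.invoke(target, args);
                    }
                });
    }

    // Step 4: calls through the proxy are routed through invoke().
    public static void main(String[] args) {
        Greeter g = proxyOf(new GreeterImpl());
        System.out.println(g.greet("world")); // [proxied] hello world
    }
}
```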

9. Java IO

  • Traditional IO (BIO) is stream-oriented and synchronous blocking.
  • NIO is buffer-oriented and synchronous non-blocking.

II. Java Collections Framework

1. List (linear structure)

  • ArrayList is backed by an Object[] array with a default capacity of 10; it supports random access over contiguous memory. Appending at the end is O(1); inserting at position i is O(n-i). Growth uses Arrays.copyOf (System.arraycopy underneath) to copy the elements into a new, larger array and point at it.
  • Vector is similar to ArrayList but thread-safe; by default it doubles its capacity when it grows.
  • LinkedList is a doubly linked list (JDK 1.7); in JDK 1.6 it was a circular doubly linked list. Dropping the circularity makes the head and tail easy to distinguish.

2. Map (K, V)

  • HashMap

    • Underlying data structure: in JDK 1.8, array + linked list + red-black tree; JDK 1.7 has no red-black tree. A bucket's list is converted to a red-black tree once its length exceeds 8, to speed up lookups.
    • The default initial capacity is 16, and tableSizeFor guarantees the capacity is a power of two. Addressing XORs the high bits of the hash into the low bits, then computes the index as (n - 1) & hash, which replaces a modulo and is faster.
    • Resizing: when the element count exceeds capacity x load factor (0.75), the capacity doubles; a new array is allocated and the entries are transferred into it.
    • Implements the Map interface.
    • It is not thread-safe.
  • HashMap (1.7): circular linked list under multithreading

    • In a multithreaded environment, a JDK 1.7 HashMap can form a circular linked list during resizing.
    • How the cycle forms: suppose a HashMap of capacity 2 holds the list A -> B at index 1. Thread 1 performs a put that triggers a resize. Before it proceeds, thread 2 also puts, triggers a resize, and completes it; because transfer uses head insertion, index 1 now holds B -> A, i.e. B.next = A. Thread 1 then resumes its own resize: it moves A first, then B, and when it checks B.next it finds A (set by thread 2), so it links A after B again, giving A.next = B as well and closing the cycle.
  • Hashtable

    • Thread-safe: its methods are synchronized.
    • The initial capacity is 11, and resizing grows it to 2n + 1.
    • Inherits from the legacy Dictionary class.
  • ConcurrentHashMap

    • A thread-safe HashMap.
    • JDK 1.7 uses segment locking; JDK 1.8 uses synchronized plus CAS. If the array slot (Node) is empty, the value is set with CAS; otherwise the first node of the bucket is locked with synchronized before appending to the list. Reading whether the first element is null uses Unsafe.getObjectVolatile to guarantee visibility.
    • For reads, the table array is volatile, and its elements are Nodes whose key is final and whose value and next pointers are volatile, ensuring visibility across threads.
  • LinkedHashMap inherits from HashMap, so its underlying structure is still the array plus linked list/red-black tree layout described above. On top of that it threads a doubly linked list through the entries, so the structure preserves the insertion order of the key-value pairs.
  • TreeMap: an ordered Map backed by a red-black tree; a custom Comparator can define the sort order.
  • Collections.synchronizedMap: how does it make a Map thread-safe? Every method synchronizes on a mutex, in effect locking the wrapped Map object.
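
A short sketch of the wrapper in use; note that iteration still requires the caller to hold the wrapper's lock:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of Collections.synchronizedMap: each method of the returned view
// synchronizes on a shared mutex (effectively the wrapper itself).
public class SyncMapDemo {
    static int putAndGet(String key, int value) {
        Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
        map.put(key, value);  // synchronized internally
        return map.get(key);  // synchronized internally
    }

    public static void main(String[] args) {
        Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
        map.put("a", 1);
        // Iteration is the one operation the wrapper cannot protect on its own:
        // the caller must hold the same lock for the whole traversal.
        synchronized (map) {
            for (Map.Entry<String, Integer> e : map.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
        System.out.println(putAndGet("a", 1)); // 1
    }
}
```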

3. Set (unique values)

  • A HashSet is based on a HashMap: it stores its elements as the map's keys, with a shared new Object() as the value. In add(), if two elements have the same hash value, they are further compared with equals().
  • LinkedHashSet A LinkedHashSet inherits from a HashSet and is implemented internally through a LinkedHashMap.
  • TreeSet: a red-black tree providing ordered, unique elements.

III. Java Multithreading

1. synchronized

  • When it modifies a code block, the bytecode brackets the block with the monitorenter and monitorexit instructions.
  • When it modifies a method, the method is flagged ACC_SYNCHRONIZED.
  • When it modifies a static method (or synchronizes on X.class), the lock is the Class object itself, shared by all instances of the class.
  • The singleton pattern
public class Singleton {

    private static volatile Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (null == instance) {
            synchronized (Singleton.class) {
                if (null == instance) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

  • Bias lock, spin lock, lightweight lock, heavyweight lock

    • With synchronized, the first thread acquires a biased lock. When another thread contends, the lock is upgraded to a lightweight lock, and the contending threads try to acquire it in a loop, called spinning. If the spin count passes a threshold, the lock is upgraded to a heavyweight lock.
    • Note: when a second thread tries to acquire the lock, the JVM first checks whether the original biased owner is still alive; if it is not, the lock need not be upgraded to lightweight.

2. Lock

  • ReentrantLock

    • Based on AbstractQueuedSynchronizer (AQS): essentially a state field (the resource) plus a FIFO queue of waiting threads.
    • Fair vs. non-fair locks: when acquiring, a fair lock first checks whether threads are already waiting in the queue and, if so, joins the queue behind them.
    • Lock with the lock() method and release with unlock().
  • ReentrantReadWriteLock

    • Also based on AQS; it exposes a read lock (shared) and a write lock (exclusive) as inner classes.
  • Why non-fair locks have higher throughput: when a thread wants the lock, a non-fair lock lets it try to grab the lock immediately instead of first checking the queue. This can avoid frequent context switches: an already-active thread may take the lock directly, whereas a queued thread would first have to be woken up to try again. In most programs the exact order in which threads run does not matter.
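
A minimal sketch of the lock()/unlock() pattern with ReentrantLock (the counter example is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// lock()/unlock() sketch; unlock sits in finally so the lock is always released.
public class LockDemo {
    private static final ReentrantLock lock = new ReentrantLock(); // new ReentrantLock(true) would be fair
    private static int counter = 0;

    static void increment() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    static int run() throws InterruptedException {
        counter = 0;
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 2000: no lost updates
    }
}
```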

3. volatile

  • Per the Java Memory Model, volatile guarantees visibility in a multithreaded environment: a write to a volatile variable is flushed to main memory immediately, and every read loads it fresh from main memory.
  • It forbids JVM instruction reordering around the variable.
  • Why must the double-checked-locking singleton field be volatile? To forbid reordering: new Object() is really three steps: allocate memory, initialize the object, and assign the reference to the variable. Without volatile, the assignment can be reordered before initialization, so another thread could observe a non-null but uninitialized instance.

4. Thread states (java.lang.Thread defines six)

1). New

A new thread has been created and has not yet started running.

2). Runnable

A thread is in the Runnable state when it is ready to run.

The Runnable state can be either an actual running thread or a thread ready to run.

In a multi-threaded environment, each thread is allocated a fixed amount of CPU time, and each thread runs for a while before stopping to let the other threads run, so that each thread runs fairly. These waiting and running threads are in the Runnable state.

3). Blocked

For example, if a thread is waiting for an I/O resource, or if the protected code it wants to access is locked by another thread, it is in the Blocked state and becomes Runnable once the required resource is available.

4). Waiting

A thread is in the Waiting state if it is Waiting for another thread to wake up. The following methods put the thread into a wait state:

  • Object.wait()
  • Thread.join()
  • LockSupport.park()

5).Timed Waiting

The thread wakes up automatically after the given time, without needing to be explicitly woken by another thread.

The following methods put the thread into a finite wait state:

  • Thread.sleep(sleeptime)
  • Object.wait(timeout)
  • Thread.join(timeout)
  • LockSupport.parkNanos(timeout)
  • LockSupport.parkUntil(timeout)

6). Terminated

When a thread finishes executing normally, or exits due to an uncaught exception, it enters the Terminated state.

5. wait() vs. sleep()

  • The thread enters the waiting state after the call.
  • wait() releases the lock; sleep() does not.
  • After wait(), the thread is woken by a call to notify() or notifyAll().
  • The wait() method is declared in Object, and the sleep() method is declared in Thread.

6. yield()

  • After the call, the thread enters the Runnable state.
  • It gives up the current CPU time slice; the scheduler may then run another thread, or may reschedule the same thread.

7. join()

  • Calling thread A's join() from thread B blocks thread B until thread A finishes executing.
  • It can therefore enforce a sequential order among threads.
  • join() is only meaningful after the target thread has been started.
  • Internally it is implemented with wait().
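
A small sketch of join() enforcing ordering:

```java
// join() sketch: the caller blocks until the joined thread finishes.
public class JoinDemo {
    static StringBuilder order = new StringBuilder();

    static String run() throws InterruptedException {
        order.setLength(0);
        Thread a = new Thread(() -> order.append("A"));
        a.start();
        a.join();           // wait for A to finish before continuing
        order.append("B");  // guaranteed to run after A's body
        return order.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // AB
    }
}
```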

8. Ways to create a thread

  • Extend the Thread class
  • Implement the Runnable interface
  • Implement the Callable interface (returns a value)

9. Runnable vs. Callable

  1. Different method signatures: void Runnable.run() vs. V Callable.call() throws Exception.
  2. Return values: only Callable can return a value.
  3. Checked exceptions: only Callable can throw them.
  4. Submission: a Callable is submitted with Future<T> submit(Callable<T> task), and the result is read from the returned Future with get(); a Runnable is run with void execute(Runnable command).
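
A short sketch contrasting the two submission styles (the arithmetic task is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Runnable has no result; Callable returns one via a Future.
public class CallableDemo {
    static int runTask() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> 1 + 2;       // may also throw checked exceptions
            Future<Integer> future = pool.submit(task);  // submit(Callable) returns a Future
            pool.execute(() -> System.out.println("fire-and-forget Runnable")); // no result
            return future.get();                         // blocks until the Callable completes
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTask()); // 3
    }
}
```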

10. happens-before

If one action happens before another, then the result of the first action will be visible to the second, and the first action will be executed before the second.

11. ThreadLocal

  • Purpose: keep state on the thread itself and avoid passing parameters around. The main scenario is one instance per thread, with that instance used in many places.
  • Principle: each thread gets its own copy of the variable, invisible to other threads, which guarantees thread safety. Copies are stored in the thread's ThreadLocalMap with the ThreadLocal instance as the key, so one thread can hold several ThreadLocal variables.
  • Example: switching among multiple data sources by name. If one thread sets the current data source name in shared state, another thread might change it underneath; keeping the name in a ThreadLocal gives each thread its own copy and avoids the thread-safety issue.
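
The data-source scenario above can be sketched roughly like this; the data-source names are made up:

```java
// Each thread sees its own copy of the "current data source name".
public class ThreadLocalDemo {
    private static final ThreadLocal<String> DATA_SOURCE =
            ThreadLocal.withInitial(() -> "default");

    static String runInThread(String name) throws InterruptedException {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> {
            DATA_SOURCE.set(name);     // visible only to this thread
            seen[0] = DATA_SOURCE.get();
            DATA_SOURCE.remove();      // avoid leaks when threads are pooled
        });
        t.start();
        t.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runInThread("db-A")); // db-A
        System.out.println(DATA_SOURCE.get());   // default: main thread's copy untouched
    }
}
```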

12. Thread pool

1) Classification

  • FixedThreadPool: a fixed number of threads; suits heavily loaded systems that need to cap thread count
  • SingleThreadExecutor: a single worker thread; suits tasks that must run sequentially
  • CachedThreadPool: an unbounded number of threads; suits many short asynchronous tasks on lightly loaded systems
  • ScheduledThreadPool: for delayed and periodic tasks

2) Several important parameters of thread pool

  • int corePoolSize: number of core threads
  • int maximumPoolSize: maximum number of threads
  • long keepAliveTime, TimeUnit unit: how long threads beyond corePoolSize may stay idle before being reclaimed
  • BlockingQueue<Runnable> workQueue: the queue holding waiting tasks
  • ThreadFactory threadFactory: how new threads are created
  • RejectedExecutionHandler handler: the rejection policy
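
The parameters above, spelled out in a ThreadPoolExecutor constructor call (the specific values are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Every constructor parameter made explicit; the numbers are example choices.
public class PoolDemo {
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime for threads beyond the core
                new ArrayBlockingQueue<>(10),        // workQueue for waiting tasks
                Thread::new,                         // threadFactory
                new ThreadPoolExecutor.AbortPolicy() // rejection policy: throw RejectedExecutionException
        );
    }

    static int coreSizeOf() {
        ThreadPoolExecutor pool = newPool();
        int n = pool.getCorePoolSize();
        pool.shutdown();
        return n;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newPool();
        pool.execute(() -> System.out.println("task running"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```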

3) Thread pool thread working process

corePoolSize -> work queue -> maximumPoolSize -> rejection policy

Core threads stay alive in the pool and execute tasks as they arrive. When tasks outnumber the core threads, new tasks join the waiting queue. When the queue is full, additional threads are created up to the maximum. Idle threads beyond the core count are reclaimed once the keep-alive time elapses. If the thread count has reached the maximum and the queue is still full, new tasks are handled by the rejection policy.

4) Thread pool reject policy (throw exception by default)

| Policy | Behavior |
| --- | --- |
| AbortPolicy | throws RejectedExecutionException (the default) |
| DiscardPolicy | does nothing; the task is silently dropped |
| DiscardOldestPolicy | discards the oldest task in the queue, trying to make room for the newly submitted one |
| CallerRunsPolicy | runs the task directly in the submitting thread |

5) How to design the number of threads in the thread pool according to the number of CPU cores

  • IO-intensive: 2 x nCPU

  • Compute-intensive: nCPU + 1

    • nCPU is the number of CPU cores, obtained with Runtime.getRuntime().availableProcessors().
    • Why + 1: the extra thread keeps CPU cycles from being wasted when a compute-bound thread is occasionally suspended by a page fault or some other stall.
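
The two formulas, as a tiny sketch:

```java
// Pool-sizing sketch based on the formulas above.
public class PoolSizing {
    static int cpuBoundThreads() {
        return Runtime.getRuntime().availableProcessors() + 1; // nCPU + 1
    }

    static int ioBoundThreads() {
        return Runtime.getRuntime().availableProcessors() * 2; // 2 x nCPU
    }

    public static void main(String[] args) {
        System.out.println("CPU-bound pool size: " + cpuBoundThreads());
        System.out.println("IO-bound pool size: " + ioBoundThreads());
    }
}
```

The exact multipliers are rules of thumb; IO-heavy workloads are sometimes sized even higher based on measured wait/compute ratios.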

IV. Java Virtual Machine

1. Java memory structure

  • The heap is shared by all threads, holds newly created objects, and is the garbage collector's main working area.
  • Stacks are per-thread and come in two kinds: the Java virtual machine stack and the native method stack. Each stack frame holds the local variable table, operand stack, dynamic linking, and method return address; calling a method pushes a frame and completing it pops the frame.
  • The method area is shared by threads and holds loaded class information, constants, static variables, and code produced by the just-in-time compiler. In JDK 1.8 the method area's permanent-generation implementation was replaced by the metaspace, which uses native (direct) memory.

2. Java class loading mechanism

  • Loading: reads the class's bytecode into the JVM.

  • Linking

    • Verification: checks that the bytecode is well-formed and safe.
    • Preparation: allocates memory for static variables and assigns default values.
    • Resolution: resolves symbolic references (such as a class's fully qualified name) into direct references (actual memory addresses).
  • Initialization: assigns the real initial values to static variables (runs static initializers).

Parent delegation model

When a class needs to be loaded, the loader first checks whether it is already loaded; loaded classes are returned directly, otherwise loading is attempted. The request is first delegated to the parent loader's loadClass(), so every request eventually reaches the top-level BootstrapClassLoader. Only when the parent loader cannot handle the request does the child loader try itself. A loader whose parent is null uses BootstrapClassLoader as its parent.

3. Garbage collection algorithm

  • The mark-sweep algorithm marks the objects to be reclaimed and then clears them, leaving a lot of memory fragmentation.
  • The copying algorithm splits memory into two halves and uses only one. During collection, the surviving objects are copied to the other half, and the used half is then cleared.
  • The mark-compact algorithm is like mark-sweep, but after marking it moves the live objects to one end and clears everything beyond the boundary.
  • Generational collection splits the heap into a young generation and an old generation. The young generation, where most objects die young, uses the copying algorithm; the old generation, where only a few objects are reclaimed each cycle, uses mark-compact.

4. Typical garbage collector

  • CMS

    • Overview: aims for the shortest possible collection pauses; a concurrent collector based on the mark-sweep algorithm.
    • Use cases: applications that need fast response times, cannot tolerate long pauses, and have plenty of CPU resources.
    • Collection steps
  1. Initial mark (stop-the-world, very short): marks only the objects directly reachable from GC Roots.
  2. Concurrent mark (runs alongside application threads): traces the object graph from GC Roots.
  3. Remark (stop-the-world, slightly longer than the initial mark but far shorter than concurrent marking): fixes marks that changed while the application kept running.
  4. Concurrent sweep: reclaims garbage with the mark-sweep algorithm.
    • Disadvantages

      • Concurrent marking runs alongside the application and occupies threads, so throughput drops.
      • Garbage created during concurrent sweeping must wait for the next GC.
      • Mark-sweep leaves memory fragmentation; allocating a large object may then trigger a Full GC early.
  • G1

    • Overview: a server-oriented collector that exploits multi-CPU, multi-core environments. It is both parallel and concurrent, and offers a predictable pause-time model: the target STW time can be configured.
    • Collection steps: 1. initial mark (stop-the-world); 2. concurrent mark (runs with user threads); 3. final mark (stop-the-world); 4. evacuation (stop-the-world; regions are selected and collected to meet the user's target pause time).
    • Characteristics

      • Parallel and concurrent: uses multiple cores to shorten STW time; some work that would otherwise pause application threads runs concurrently with them.
      • Generational: young generation (eden plus survivor regions) and old generation.
      • Space integration: as a whole G1 behaves like mark-compact; between same-sized Regions it collects by copying, so it avoids fragmentation.
      • Predictable pauses: set the target with -XX:MaxGCPauseMillis=200.

In the Java language, there are four types of objects that can be GC Roots:

A) objects referenced from the virtual machine stack (local variables in stack frames); B) objects referenced by class static fields in the method area; C) objects referenced by constants in the method area; D) objects referenced by JNI (native methods) in the native method stack.

V. MySQL (InnoDB)

1. Clustered index and non-clustered index

  • Both use B+ trees as their data structure.
  • In a clustered index, the row data is stored in the leaf nodes of the primary key index; in a non-clustered index, the data lives in a separate space.
  • With a clustered index, secondary index leaves store the primary key (so a second lookup fetches the row); with a non-clustered index, the leaves store the address of the row.
  • The advantage of a clustered index is that finding the key finds the data, in a single disk IO. When B+ tree nodes move, row addresses change; a non-clustered index would then have to update all stored addresses, adding overhead.

2. Why a B+ tree index rather than a red-black tree?

Indexes are large and are usually stored on disk as files, and each lookup loads part of the index file into memory, so the number of disk IOs is the key measure of an index data structure. When the OS reads from disk it does not fetch only the requested bytes: it prereads whole pages. Database designers therefore size each B+ tree node to one page and allocate a new page per node, so reading a node costs exactly one IO and locating a row costs at most h - 1 IOs, where h is the tree height. The fan-out d (the node degree, or width) is inversely related to h: the larger d, the flatter the tree and the fewer IOs, giving O(log_d N) complexity. This is why a red-black tree, whose d is only 2, is not used as an on-disk index. Since a node packs keys and data into a fixed page size, the smaller each entry, the larger d; B+ trees keep data out of internal nodes, so they achieve a larger d and better performance.

3. Left-most prefix principle

In MySQL an index can span multiple columns; this is a composite index. For index(name, age), the leftmost-prefix rule says a query can hit the index if it filters on the leftmost column or columns (name alone, or name and age). If all the indexed columns are used but in a different order, the query optimizer reorders the predicates to match the index so it can still be used.

4. When to create a B+ tree index

(1) Columns defined as primary keys: the primary key index speeds up locating rows in the table.

(2) Columns defined as foreign keys: foreign key columns are used in joins between tables, and indexing them speeds those joins up.

(3) Columns that are queried frequently.

(4) Columns queried over a range: since the index is already sorted and a range is contiguous, range queries can use the index order to run faster.

(5) Columns that appear often in WHERE clauses or need to be retrieved quickly: they can be read in index order, speeding up the query.

5. Transaction isolation level

  • Read Uncommitted: dirty reads, non-repeatable reads, and phantom reads can all occur.
  • Read Committed: non-repeatable reads and phantom reads can occur.
  • Repeatable Read: phantom reads can still occur.
  • Serializable: locks both reads and writes of the same data, preventing dirty, non-repeatable, and phantom reads, at a cost in performance.

InnoDB's default isolation level is Repeatable Read. Reads are divided into snapshot reads and current reads, and InnoDB prevents phantom reads with record (row) locks plus gap locks.

6. MVCC (multi-version concurrency control)

  • Implementation idea

    • Each row of data carries a version that is updated on every write.
    • A transaction copies the current version and modifies the copy freely; transactions do not interfere with each other.
    • On save, versions are compared: on success the record is overwritten (commit); on failure the copy is abandoned (rollback).
  • InnoDB implementation

InnoDB adds two hidden fields to every row: the version at which the row was created and the version at which it was deleted. The version here is the system version number (which can loosely be thought of as the transaction ID); it increments automatically each time a new transaction starts. The two fields are usually called the create time and delete time.

VI. Spring

1. Bean scopes

| Scope | Description |
| --- | --- |
| singleton | the default; a single instance per Spring container |
| prototype | a new instance for every getBean() call |
| request | one instance per HTTP request |
| session | one instance per HTTP session; different sessions use different instances |

2. Bean lifecycle

In four simple steps

  1. Instantiation
  2. Populating properties
  3. Initialization
  4. Destruction

On top of these four steps, Spring provides some extensions:

  • The bean's own methods: those the bean defines, plus the methods named by init-method and destroy-method on the <bean> element in the configuration file.
  • Bean-level lifecycle interface methods: BeanNameAware, BeanFactoryAware, InitializingBean, and DisposableBean.
  • Container-level lifecycle interface methods: the InstantiationAwareBeanPostProcessor and BeanPostProcessor interfaces; their implementations are generally called post-processors.
  • Factory post-processor interface methods: AspectJWeavingEnabler, ConfigurationClassPostProcessor, CustomAutowireConfigurer, and other very useful factory post-processors. These are also container-level and are invoked right after the application context assembles the configuration file.

3. Spring AOP

There are two implementation methods:

  • JDK dynamic proxy: for targets that implement an interface; the proxy is generated at run time.
  • CGLIB proxy: for classes without an interface; generates a subclass of the target at run time through bytecode generation.

4. Spring transaction propagation

The default is PROPAGATION_REQUIRED: join the current transaction if one exists; otherwise start a new one.

5. Spring IoC

6. Spring MVC workflow

VII. Computer Networks

1. TCP/IP five-layer model

2. What does the browser do after entering the address?

3. Three-way handshake and four-way wave

  • Three-way handshake

  • Four times to wave

4. TIME_WAIT and CLOSE_WAIT

5. TCP sliding window

TCP flow control mainly uses the sliding window protocol. The sliding window is the window size used by the data receiver: it tells the sender how much buffer space the receiver has left, which caps how much data the sender may transmit and thereby achieves flow control. The window size is how much data can be in flight at a time. All data frames are numbered sequentially; the sender maintains a send window and may transmit only frames that fall inside it, while the receiver maintains a receive window and accepts only frames that fall inside that.

6. TCP packet sticking and splitting

  • The phenomenon: TCP is a byte stream with no message boundaries, so several application messages may arrive merged in one read (sticking), or one message may be split across reads (splitting).
  • Possible causes: 1. the data to send is larger than the remaining space in the TCP send buffer (splitting); 2. the data is larger than the MSS (maximum segment size), so TCP splits it before transmission; 3. several small writes fit into the send buffer together and are sent as one segment (sticking); 4. the receiving application does not read its receive buffer promptly, so messages pile up (sticking).
  • Solutions: 1. the sender prefixes every packet with a header containing at least the packet length, so the receiver reads the length field and then extracts exactly that many bytes; 2. the sender pads every packet to a fixed length (zero-filled if short), so the receiver splits the stream by reading fixed-size chunks; 3. a delimiter (a special marker) is placed between packets so the receiver can split on it.
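
Solution 1 (a length-prefix header) can be sketched as follows; this is an illustrative framing scheme, not a specific library's protocol:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Length-prefix framing: each message carries a 4-byte length header so the
// receiver can split a byte stream back into messages, whatever TCP did to them.
public class Framing {
    static byte[] frame(String msg) {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(4 + body.length).putInt(body.length).put(body).array();
    }

    // Parses a stream that may contain several frames stuck together.
    static List<String> deframe(byte[] stream) {
        List<String> out = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            int len = buf.getInt();
            byte[] body = new byte[len];
            buf.get(body);
            out.add(new String(body, StandardCharsets.UTF_8));
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        // Simulate sticking: two frames arrive as one chunk of bytes.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.write(frame("hello"));
        wire.write(frame("world"));
        System.out.println(deframe(wire.toByteArray())); // [hello, world]
    }
}
```

A real implementation would also buffer partial frames across reads; this sketch assumes the stream contains only whole frames.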

VIII. Message Queues (MQ)

1. Typical uses

Peak shaving and valley filling (smoothing traffic spikes), and asynchronous decoupling.

2. How do you ensure that messages are not consumed repeatedly?

Another way to look at this problem is that handling duplicate delivery means making consumption idempotent: however often a message is repeated, the program's result is the same. For example, if consuming a message inserts a database row, query for the corresponding row before inserting, so a duplicate message does not insert twice.

3. How to ensure that the data received from the message queue is executed sequentially?

After receiving messages, the consumer puts them into an in-memory queue, and a single worker then consumes the messages from that queue in order.
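A minimal sketch of this pattern, using a `BlockingQueue` drained by one consumer thread (the message names are made up):

```java
// Sketch of in-order consumption: one consumer thread drains a single
// in-memory queue, so messages are processed in arrival order.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderedConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> handled = new ArrayList<>();

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    handled.add(queue.take()); // blocks until a message arrives
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Messages received from the MQ are enqueued in arrival order.
        queue.put("order-created");
        queue.put("order-paid");
        queue.put("order-shipped");
        consumer.join();
        System.out.println(handled); // [order-created, order-paid, order-shipped]
    }
}
```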

4. How do you handle message delay and expiration in the queue? What happens when the message queue is full? Millions of messages have been backlogged for hours; how do you resolve it?

Expired messages: if messages are not consumed within the retention period, they are discarded; the only remedy is to identify the lost messages and resend them. As for a backlog, the idea is to consume it as quickly as possible:

  • First of all, check the consumption side of the problem, to restore the consumption side of the normal consumption speed.

  • Then work on the backlog of messages in the queue.

    • Stop the existing consumers.
    • Create a new topic with 10 times as many partitions (and correspondingly 10 times as many queues) as before.
    • Write a temporary dispatcher that polls the backlog and distributes the messages evenly across these new queues.
    • Temporarily deploy consumers on 10 times as many machines, with each consumer draining one temporary queue.
    • After the backlog is consumed, restore the original architecture.

If the message queue is full: the only option is to receive and discard messages quickly, then re-send the discarded messages later so they can be consumed.

5. How to ensure the reliability of message transmission (how to deal with message loss)?

Kafka, for example:

  • The consumer loses data: this happens when offsets are auto-committed before processing finishes. Disable auto-commit and manually commit the offset only after each message has been fully processed.
  • Kafka itself loses messages: a common scenario is a broker going down, triggering a partition leader re-election. If a follower that is not fully in sync is elected as the new leader, the data it never replicated is lost. Mitigations:

    • Set the topic's replication.factor parameter: this value must be greater than 1, so that each partition has at least two replicas.
    • Set the min.insync.replicas parameter on the Kafka broker: this value must be greater than 1, requiring the leader to see at least one follower still in sync and not lagging, so a follower is available to take over if the leader fails.
    • Set acks=all on the producer side: a write is considered successful only after it has been written to all in-sync replicas.
    • Set retries=MAX on the producer side: if a write fails, retry indefinitely.
  • The producer loses a message: with acks=all, a write is considered successful only after the leader has received the message and all in-sync followers have replicated it; if this condition is not met, the producer automatically retries an unlimited number of times, so messages are not lost.
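The producer-side settings above can be sketched with plain `java.util.Properties` (the same keys would be passed to a `KafkaProducer` if kafka-clients were on the classpath; the bootstrap address is a placeholder):

```java
// Sketch of the producer-side reliability settings discussed above,
// built as plain Properties (a KafkaProducer would accept these keys).
import java.util.Properties;

public class ReliableProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("acks", "all"); // wait until all in-sync replicas have the write
        props.put("retries", Integer.toString(Integer.MAX_VALUE)); // retry failed sends
        // Topic/broker side (configured on the cluster, noted here for completeness):
        // replication.factor > 1, min.insync.replicas > 1
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks")); // all
    }
}
```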

Nine, Redis

1. Data type

  • String

Common commands: set, get, decr, incr, mget, etc.

  • Hash

Common commands: hget, hset, hgetall, etc.

  • List

Common commands: lpush, rpush, lpop, rpop, lrange, etc.

The lrange command reads a range of elements starting from a given index, so a list can be used to implement paginated queries.
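The paging arithmetic can be sketched locally: page n of size k is the range [n*k, n*k + k - 1], which `List.subList` can emulate (a local stand-in for LRANGE, not a Redis client):

```java
// Sketch of LRANGE-style paging: page n of size k is elements
// [n*k, n*k + k - 1], emulated locally with List.subList.
import java.util.List;

public class ListPaging {
    /** Mimics LRANGE key start stop (stop inclusive, clamped to list size). */
    public static List<String> lrange(List<String> list, int start, int stop) {
        int end = Math.min(stop + 1, list.size());
        if (start >= end) return List.of();
        return list.subList(start, end);
    }

    public static void main(String[] args) {
        List<String> feed = List.of("a", "b", "c", "d", "e");
        int pageSize = 2;
        System.out.println(lrange(feed, 0, pageSize - 1)); // page 0: [a, b]
        System.out.println(lrange(feed, 2, 3));            // page 1: [c, d]
        System.out.println(lrange(feed, 4, 5));            // page 2: [e]
    }
}
```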

  • Set

Common commands: sadd, spop, smembers, sunion, etc.

  • Sorted Set

Common commands: zadd, zrange, zrem, zcard, etc.

2. How does Redis implement the expiration deletion of key?

Redis combines periodic deletion with lazy deletion.

  • Periodic deletion: every so often, Redis randomly samples some keys from the set of keys that have an expiration time, checks whether they have expired, and deletes the expired ones.
  • Lazy deletion: when a key is accessed, Redis checks whether it has expired and deletes it if so.
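Both strategies can be sketched with a map of expiry timestamps (a toy illustration, not Redis's implementation; the class and method names are made up, and the periodic sweep here scans all keys rather than a random sample):

```java
// Sketch of both strategies: keys carry an expiry timestamp; get() deletes
// lazily on access, and sweep() mimics the periodic expiry check.
import java.util.HashMap;
import java.util.Map;

public class ExpiringStore {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, Long> expiresAt = new HashMap<>();

    public void set(String key, String value, long ttlMillis) {
        values.put(key, value);
        expiresAt.put(key, System.currentTimeMillis() + ttlMillis);
    }

    /** Lazy deletion: an expired key is removed when it is read. */
    public String get(String key) {
        Long deadline = expiresAt.get(key);
        if (deadline != null && deadline <= System.currentTimeMillis()) {
            values.remove(key);
            expiresAt.remove(key);
        }
        return values.get(key);
    }

    /** Periodic deletion: scan keys and drop the expired ones. */
    public void sweep() {
        long now = System.currentTimeMillis();
        expiresAt.entrySet().removeIf(e -> {
            if (e.getValue() <= now) { values.remove(e.getKey()); return true; }
            return false;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ExpiringStore store = new ExpiringStore();
        store.set("session", "abc", 50);
        System.out.println(store.get("session")); // abc
        Thread.sleep(60);
        System.out.println(store.get("session")); // null (lazily deleted)
    }
}
```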

3. Redis persistence mechanism

Data snapshots (RDB) plus an append-only file of write commands (AOF).

4. How to solve Redis cache avalanche and cache penetration?

  • A cache avalanche occurs when a large number of cached keys expire at the same time, so all subsequent requests fall on the database, which crashes under the sudden load.

    • The solution

      • Beforehand: keep the Redis cluster stable, replace failed machines promptly, and set an appropriate memory-eviction policy.
      • During: use a local cache plus rate limiting and degradation to keep the flood of requests off the database.
      • Afterwards: use the Redis persistence mechanism to restore the cache as quickly as possible.
  • Cache penetration usually happens when an attacker deliberately requests data that does not exist in the cache, so every request falls through to the database, which crashes under the heavy load in a short period of time.

    • Solutions: keep the set of valid keys in a sufficiently large map, so that when attacked, requests for keys not in the map are intercepted before they reach the database. Alternatively, when a key is missing from the database, cache a null value for it with a short expiration time.
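The null-value approach can be sketched as follows (an in-memory map stands in for both the cache and the database, and the sentinel value and class names are made up; a real cache would also give the sentinel a short TTL):

```java
// Sketch of the null-value approach: cache a sentinel for keys that miss the
// database, so repeated lookups for nonexistent data never reach the DB again.
import java.util.HashMap;
import java.util.Map;

public class NullCachingLookup {
    private static final String NULL_SENTINEL = "<null>"; // marks "known missing"
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> db;   // stand-in for the real database
    private int dbHits = 0;

    public NullCachingLookup(Map<String, String> db) { this.db = db; }

    public String lookup(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            return NULL_SENTINEL.equals(cached) ? null : cached;
        }
        dbHits++;                       // only a real cache miss touches the DB
        String value = db.get(key);
        cache.put(key, value == null ? NULL_SENTINEL : value);
        return value;
    }

    public int getDbHits() { return dbHits; }

    public static void main(String[] args) {
        NullCachingLookup l = new NullCachingLookup(Map.of("user:1", "alice"));
        l.lookup("user:999"); // attacker asks for data that does not exist
        l.lookup("user:999"); // second request is answered from the cache
        System.out.println(l.getDbHits()); // 1
    }
}
```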

5. How to use Redis to implement message queue?

Implementing a message queue on Redis depends on the stability of the Redis cluster and is generally not recommended.

  • Redis's built-in publish/subscribe, based on the publish and subscribe commands.
  • Use a List to store messages, with lpush and rpop to send and receive messages respectively.
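The List-based variant can be sketched locally (an `ArrayDeque` stands in for the Redis list; with a client such as Jedis this would be `lpush`/`rpop` calls against the server):

```java
// Sketch of the List-based queue: ArrayDeque stands in for a Redis list,
// with addFirst/pollLast mirroring LPUSH/RPOP.
import java.util.ArrayDeque;
import java.util.Deque;

public class ListQueue {
    private final Deque<String> list = new ArrayDeque<>();

    public void lpush(String msg) { list.addFirst(msg); }   // producer side
    public String rpop() { return list.pollLast(); }        // consumer side

    public static void main(String[] args) {
        ListQueue q = new ListQueue();
        q.lpush("msg-1");
        q.lpush("msg-2");
        System.out.println(q.rpop()); // msg-1 (FIFO: first pushed, first popped)
        System.out.println(q.rpop()); // msg-2
    }
}
```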

Ten, Nginx

Nginx is a lightweight Web server/reverse proxy server and email (IMAP/POP3) proxy server. Nginx mainly provides reverse proxy, load balancing, dynamic and static separation (static resource service) and other services.

1. Forward proxy and reverse proxy

  • A forward proxy sits in front of the client and accesses the server on its behalf. Typical example: a VPN.
  • A reverse proxy receives client requests on behalf of the server, forwards them to the server, and returns the processed result to the client through the proxy.

2. Load balancing

Spread requests across multiple machines to handle high concurrency and increase throughput.

  • Load balancing algorithm

    • Weighted round-robin
    • fair
    • ip_hash
    • url_hash
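Weighted round-robin can be sketched as follows (a simplified version of the "smooth" weighted round-robin nginx uses for upstream selection; the class and server names are illustrative):

```java
// Sketch of smooth weighted round-robin: each pick raises every server's
// current weight by its configured weight, chooses the largest, then
// subtracts the total weight from the chosen server.
import java.util.ArrayList;
import java.util.List;

public class WeightedRoundRobin {
    static class Server {
        final String name; final int weight; int current = 0;
        Server(String name, int weight) { this.name = name; this.weight = weight; }
    }

    private final List<Server> servers = new ArrayList<>();
    private int totalWeight = 0;

    public void add(String name, int weight) {
        servers.add(new Server(name, weight));
        totalWeight += weight;
    }

    public String next() {
        Server best = null;
        for (Server s : servers) {
            s.current += s.weight;
            if (best == null || s.current > best.current) best = s;
        }
        best.current -= totalWeight;
        return best.name;
    }

    public static void main(String[] args) {
        WeightedRoundRobin lb = new WeightedRoundRobin();
        lb.add("app1", 2); // gets two of every three requests
        lb.add("app2", 1);
        for (int i = 0; i < 3; i++) System.out.print(lb.next() + " "); // app1 app2 app1
        System.out.println();
    }
}
```

The "smooth" variant interleaves picks (app1, app2, app1 rather than app1, app1, app2), which spreads load more evenly over time.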

3. Dynamic and static separation

Dynamic-static separation means splitting a site's resources, according to some rule, into those that rarely change (static) and those that change frequently (dynamic). Once separated, static resources can be cached according to their characteristics; this is the core idea of serving static content.

4. The four components of Nginx

  • Nginx binary executable: a file compiled from the source code of each module
  • nginx.conf configuration file: controls Nginx's behavior
  • access.log access log: records information about each HTTP request
  • error.log error log: used to locate faults
