Java string constant pool
The intern() method
The source code
explain
- Method area and runtime constant pool overflow
- Since the runtime constant pool is part of the method area, overflow testing for the two areas can be done together. HotSpot began phasing out the permanent generation in JDK 7 and replaced it entirely with the metaspace in JDK 8. The test code below shows the practical impact on a program of implementing the method area with a permanent generation versus a metaspace
- String::intern() is a native method: if the string constant pool already contains a string equal to the one the method is called on, it returns a reference to the string in the pool (the first string instance encountered). In JDK 6 and earlier HotSpot virtual machines, the constant pool was allocated in the permanent generation, so we could indirectly limit the constant pool's capacity by limiting the permanent generation size with -XX:PermSize and -XX:MaxPermSize
Code sample
-
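A minimal sketch of the classic intern() test discussed below. The true/false outputs assume a HotSpot VM on JDK 7 or later; on JDK 6 both comparisons print false:

```java
public class InternDemo {
    public static void main(String[] args) {
        // Built at runtime, so this object lives on the heap and is not yet in the pool.
        String str1 = new StringBuilder("58tongcheng").append("meituan").toString();
        // On JDK 7+ the pool just records a reference to this first-encountered
        // instance, so intern() returns the same reference.
        System.out.println(str1.intern() == str1); // true on HotSpot JDK 7+

        // "java" was already interned during JDK startup (sun.misc.Version.launcher),
        // so the pool holds a different, earlier instance.
        String str2 = new StringBuilder("ja").append("va").toString();
        System.out.println(str2.intern() == str2); // false
    }
}
```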
test
why
The test results
- As the code's result shows, any string other than "java" (here "58tongcheng", "meituan", "alibaba") returns true, while the "java" string returns false. That means there must already be a different "java" string instance in the pool. So how was that other "java" string loaded in? Why is that?
The sun.misc.Version class is loaded and initialized while the JDK libraries initialize, which requires default-initializing its static constant field with the specified ConstantValue. The string literal "java" referenced by the static constant field sun.misc.Version.launcher is interned into HotSpot VM's string constant pool, the StringTable, at that point
- When this code runs on JDK 6 it returns two false values; on JDK 7 it returns one true and one false. The reason for the difference: in JDK 6 the intern() method copies the first-encountered string instance into the permanent generation's string constant pool and returns a reference to that copy, whereas the string instances created by StringBuilder live on the Java heap, so the two references can never be the same and the result is false
- In JDK 7 (and some other virtual machines, such as JRockit), the intern() implementation no longer needs to copy the string instance into the permanent generation. Since the string constant pool has been moved to the Java heap, the pool only needs to record a reference to the first instance encountered. So intern() returns the same reference as the string instance created by StringBuilder, and the comparison returns true. The "java" string, however, already existed and was referenced in the string constant pool before StringBuilder.toString() produced it, so it does not satisfy intern()'s "first encountered" rule and str2's comparison returns false
- Now look at the rest of the method area, which holds type-related information such as class names, access modifiers, constant pools, field descriptions, and method descriptions. The basic idea for testing this part is to generate a large number of classes at runtime to fill the method area until it overflows, even though the Java SE API can already generate classes dynamically (e.g. reflection's GeneratedConstructorAccessor and dynamic proxies, etc.)
OpenJDK8 source code description
Recursive step
1. System code parsing
-
System
—initializeSystemClass
—Version
Class loaders and rt.jar
-
The root (bootstrap) class loader is preconfigured to load rt.jar
OpenJDK8 source code
-
Loading file location
Dynamic access
The Version class's launcher_name field has the value "java"
conclusion
Check point
- 1. String intern(): true or false?
- 2. Have you read the classic JVM book (by Zhou Zhiming)?
LeetCode sum of two numbers
-
The title details
-
Algorithm complexity
- Best: O(1), found in a single lookup, like a Redis K-V key-value pair
- Second best: O(log₂N)
- Next: O(N)
- Worst: O(N²)
-
Hash: K (key) maps to V (value)
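The O(N) hash-map approach mentioned above can be sketched as a one-pass solution (class and method names are illustrative, not from the original):

```java
import java.util.HashMap;
import java.util.Map;

public class TwoSum {
    // One-pass hash map: for each number, check whether its complement
    // (target - current) was already seen. O(N) time, O(N) space.
    public static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (seen.containsKey(complement)) {
                return new int[]{seen.get(complement), i};
            }
            seen.put(nums[i], i);
        }
        return new int[0]; // no pair found
    }

    public static void main(String[] args) {
        int[] r = twoSum(new int[]{2, 7, 11, 15}, 9);
        System.out.println(r[0] + "," + r[1]); // 0,1
    }
}
```

Compared with the O(N²) brute force (two nested loops), the map trades memory for a single scan, which is exactly the K-V lookup idea referenced above.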
JUC
Common JUC related topics
- 1. Synchronized related problems
- Have you used synchronized? What is its underlying mechanism?
- You just mentioned acquiring the object's lock. What exactly is that lock? How is an object's lock determined?
- What is reentrant and why is Synchronized reentrant?
- What optimization has the JVM made to Java’s native locking?
- Why is Synchronized unfair?
- What is lock elimination and lock coarsening?
- Why is Synchronized pessimistic? What is the implementation principle of optimistic lock? What is CAS?
- Is optimistic locking always a good choice?
- ReentrantLock and other explicit lock related issues
- How is the implementation principle of ReentrantLock different from Synchronized?
- So what is the AQS framework?
- Compare Synchronized and ReentrantLock in as much detail as possible
- How does ReentrantLock achieve reentrancy?
Reentrant lock
- A reentrant lock, also known as a recursive lock, means that if the same thread acquires the lock from the outer method, the inner method of the same thread automatically acquires the lock (provided that the lock object is the same object and does not block because it has been acquired before).
- In Java, both ReentrantLock and synchronized are reentrant locks. One advantage of reentrant locking is that it can avoid deadlocks to some extent
Reentrant lock interpretation
- Breaking the term apart:
- Re: once again
- Entrant: entering
- Lock: a synchronization lock
- What is entered: the synchronization domain (i.e. synchronized code blocks/methods, or explicitly locked code)
- In a word:
- Multiple method invocations in one thread can acquire the same lock; holding the synchronization lock allows it to be re-entered
- A thread can acquire its own internal lock again
Type of reentrant lock
Implicit locks (i.e. synchronized) are reentrant by default
-
Synchronized block
-
Synchronized methods
-
Code validation for reentrant locks (synchronized code blocks/synchronized methods)
Synchronized
The implementation mechanism of reentrant
-
Each lock object has a lock counter and a pointer to the thread that holds the lock
If the target lock object's counter is zero when monitorenter executes, it is not held by any thread; the Java virtual machine sets the lock's holding thread to the current thread and increments the counter by one
If the target lock object's counter is not zero and the lock's holding thread is the current thread, the Java virtual machine increments the counter by one again; otherwise the current thread waits until the holding thread releases the lock
When monitorexit executes, the Java virtual machine decrements the lock object's counter by one; a counter of zero indicates that the lock has been released
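The counter behavior described above can be validated with a small sketch (class and method names are illustrative). If synchronized were not reentrant, outer() calling inner() on the same lock would deadlock:

```java
public class SynchronizedReentrant {
    static int depth = 0;

    // Both static synchronized methods lock on the same object
    // (the SynchronizedReentrant.class monitor).
    public static synchronized void outer() {
        depth++;        // counter is 1 here
        inner();        // same thread re-enters the same monitor: counter becomes 2
    }

    public static synchronized void inner() {
        depth++;
    }

    public static void main(String[] args) {
        outer();        // completes without deadlock
        System.out.println("depth = " + depth); // reaches 2: the lock was re-entered
    }
}
```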
Explicit locks (i.e. Lock implementations such as ReentrantLock) are also reentrant
-
ReentrantLock code validation
-
Further verify lock release (lock() and unlock() must be paired: every lock() needs a matching unlock())
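The pairing rule can be validated with ReentrantLock's own hold counter (a minimal sketch; getHoldCount() and isLocked() are standard ReentrantLock methods):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                              // hold count 1
        lock.lock();                              // same thread re-enters: hold count 2
        System.out.println(lock.getHoldCount());  // 2
        lock.unlock();                            // back to 1: each lock() needs its unlock()
        lock.unlock();                            // 0: the lock is fully released
        System.out.println(lock.isLocked());      // false
    }
}
```

If one of the unlock() calls is missing, the lock stays held and other threads block forever on lock().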
LockSupport
Why LockSupport?
-
Java — JVM
-
JUC — AQS — Pre-knowledge (reentrant lock, LockSupport)
-
AB — after | before
What is LockSupport
-
The official documentation
-
Basic thread blocking primitives for creating locks and other synchronization classes
LockSupport: thread blocking and wake-up (an enhanced wait/notify)
Park () and unpark() in LockSupport block and unblock threads, respectively
Thread wait/notify mechanism
Three ways to make a thread wait and wake up
- Method 1: use Object's wait() method to make a thread wait, and its notify() method to wake a thread up
- Method 2: use the JUC package's Condition: the await() method makes a thread wait, and signal() wakes it up
- Method 3: the LockSupport class's park() can block the current thread, and unpark() wakes up a specified blocked thread
The wait and notify methods in the Object class implement thread waiting and awakening
-
Code demo
-
1. Normal conditions
-
2. Exception 1: deleting synchronized from around wait and notify throws IllegalMonitorStateException
-
3. Exception 2: if notify runs before the wait method, the thread can never be woken up and the program never ends
-
-
A small summary
- The wait and notify methods must be used in pairs inside a synchronized block or method
- wait must come before notify for the notification to take effect
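The two rules above can be demonstrated with a minimal sketch (the sleep is a crude way to guarantee wait() runs before notify(); class names are illustrative):

```java
public class WaitNotifyDemo {
    static volatile boolean woken = false;

    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();

        Thread t = new Thread(() -> {
            synchronized (lock) {          // wait() must be called while holding the monitor
                try {
                    lock.wait();           // releases the monitor and blocks
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                woken = true;
            }
        });
        t.start();

        Thread.sleep(200);                 // ensure t reaches wait() first
        synchronized (lock) {              // notify() must also hold the monitor
            lock.notify();                 // a notify before wait() would be lost
        }
        t.join();
        System.out.println("woken = " + woken);
    }
}
```

Removing either synchronized block throws IllegalMonitorStateException; removing the sleep risks notify running first, leaving t waiting forever.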
The await and signal methods of the Condition interface implement thread waiting and wake-up
-
Code demo
-
1. Normal conditions
-
2. Exception 1: without lock/unlock, an IllegalMonitorStateException is thrown
-
3. Exception 2: if signal runs before await, the thread is never woken and the program cannot finish
-
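The Condition variant mirrors wait/notify, with lock()/unlock() in place of synchronized (a minimal sketch; the sleep crudely orders await before signal):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    static volatile boolean signalled = false;

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition condition = lock.newCondition();

        Thread t = new Thread(() -> {
            lock.lock();                   // await() requires holding the lock
            try {
                condition.await();         // releases the lock and blocks
                signalled = true;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        });
        t.start();

        Thread.sleep(200);                 // ensure t reaches await() first
        lock.lock();
        try {
            condition.signal();            // signal() also requires holding the lock
        } finally {
            lock.unlock();
        }
        t.join();
        System.out.println("signalled = " + signalled);
    }
}
```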
Traditional synchronized and Lock implement wake-notification constraints (*)
- A thread must first acquire and hold the lock, i.e. be inside a synchronized block or a Lock
- A thread must wait first before it can be woken up: notification must come after waiting
Park waits and unpark wakes in the LockSupport class
What they are
-
The blocking and wake up operations are implemented through the park() and unpark(thread) methods
- park(): disables the current thread for thread scheduling purposes unless a permit is available
- unpark(Thread thread): makes the permit available for the given thread, if it is not already available
- The LockSupport class blocks and wakes threads using a concept called a permit. Each thread has a permit; the permit takes only the values 1 and 0, and defaults to 0
- You can think of the permit as a (0,1) Semaphore, but unlike Semaphore, the permit does not accumulate beyond an upper limit of 1
The main method
-
blocking
-
park() / park(Object blocker)
-
Blocks the current thread; the blocker overload additionally records the object responsible for the parking, for monitoring and diagnostics
The permit defaults to 0, so the first call to park() blocks the current thread, until another thread sets this thread's permit to 1; park then wakes up, sets the permit back to 0, and returns
-
-
Wake up the
unpark(Thread thread)
- Wakes up the specified blocked thread
Code demo
-
1. Normal + no locking block requirements
-
LockSupport (*)
Highlights (*)
-
LockSupport is a basic thread-blocking primitive for creating locks and other synchronization classes
LockSupport is a thread-blocking utility class. All methods are static, allowing a thread to block anywhere. After blocking, there is a corresponding wake up method
-
LockSupport provides the park() and unpark() methods to block and unblock threads
LockSupport has a permit associated with each thread that uses it. Permit is equivalent to a 1,0 switch, which defaults to 0
Calling unpark once sets the permit from 0 to 1
A call to park consumes the permit (1 becomes 0) and returns immediately
Calling park again blocks (the permit is 0, so the thread blocks there until the permit becomes 1); a later call to unpark sets the permit to 1 and releases it
Each thread has one associated permit at most; repeated calls to unpark do not accumulate permits
-
Image understanding
A thread blocking requires a permit, of which there is at most one
-
When the park method is called
If a permit is available, park consumes it directly and returns normally
If no permit is available, the thread must block until a permit becomes available
-
unpark, conversely, adds one permit, but there can be at most one permit: repeated calls do not add up
-
Common topic
1. Why can a thread be woken up first and then not block?
- Answer: because unpark grants a permit (permit + 1 becomes 1); when park is called afterwards, there is a permit to consume, so it does not block
2. Why does calling unpark twice and then park twice still leave the thread blocked?
- Answer: because the same thread holds at most one permit. Two consecutive unpark calls have the same effect as one, adding only a single permit; but the two park calls need to consume two permits. The permits are insufficient, so the second park blocks the thread
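Both answers above can be demonstrated in one sketch (the sleep in the helper thread is only to make the blocking observable; class names are illustrative):

```java
import java.util.concurrent.locks.LockSupport;

public class LockSupportDemo {
    static volatile boolean finished = false;

    public static void main(String[] args) {
        Thread main = Thread.currentThread();

        // unpark() before park(): the permit (0 -> 1) is remembered,
        // so the later park() consumes it and returns immediately.
        LockSupport.unpark(main);
        LockSupport.unpark(main);      // permits do not accumulate: still just 1
        LockSupport.park();            // consumes the single permit, no blocking

        // The permit is back to 0, so this next park() blocks;
        // a helper thread unparks us after a short delay.
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            LockSupport.unpark(main);
        }).start();
        LockSupport.park();            // blocks until the helper's unpark
        finished = true;
        System.out.println("finished = " + finished);
    }
}
```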
The AQS AbstractQueuedSynchronizer
Common topic
-
The subject summed up
Front knowledge
CAS (CompareAndSwap)
-
Compare-and-swap: the CAS mechanism uses three basic operands: the memory address V, the expected old value A, and the new value B to be written
-
Purpose: to use the CPU's CAS instruction, with the help of JNI, to implement non-blocking algorithms in Java; other atomic operations are built on similar capabilities. The whole of JUC is built on CAS, which gives JUC a large performance improvement over synchronized's blocking algorithms
-
CAS operates on optimistic locking, where an operation is performed each time without locking, assuming no conflicts, and retry until it succeeds if it fails due to conflicts
-
Synchronized is a pessimistic lock, in which once a thread acquires a lock, other threads that need to lock are suspended
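The three-operand CAS semantics can be shown with AtomicInteger, whose compareAndSet is backed by the CPU's CAS instruction:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // CAS succeeds only when the current value equals the expected value A (5).
        boolean ok = value.compareAndSet(5, 10);       // expected 5, new value 10
        System.out.println(ok + " " + value.get());    // true 10

        // A second attempt with the now-stale expected value fails
        // without blocking -- the caller can simply retry (spin).
        boolean stale = value.compareAndSet(5, 20);
        System.out.println(stale + " " + value.get()); // false 10
    }
}
```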
Fair lock Unfair lock
-
FairSync: multiple threads acquire the lock in the same order in which they requested it; threads join the queue directly, and it is always the first thread in the queue that obtains the lock
-
Advantages: All threads get resources and do not starve to death in the queue
-
Disadvantages: throughput drops dramatically, all but the first thread in the queue blocks, and it is expensive for the CPU to wake up blocked threads
-
Fair lock creation
```java
final ReentrantLock lock = new ReentrantLock(true);
```
-
-
NonfairSync: when a thread tries to acquire the lock, it first attempts to grab it directly; if the attempt fails it joins the queue, and if it succeeds it simply takes the lock
-
Advantages: it reduces the CPU's overhead of waking threads, so overall throughput is higher; the CPU does not have to wake every waiting thread, reducing the number of wake-ups
-
Disadvantages: As you may have noticed, this can lead to threads in the middle of the queue either not acquiring locks consistently or not acquiring locks for long periods of time, leading to starvation
-
Creation of an unfair lock
```java
final ReentrantLock lock = new ReentrantLock();        // defaults to unfair
final ReentrantLock lock2 = new ReentrantLock(false);  // equivalent
```
-
Reentrant lock
LockSupport
spinlocks
- Spinlock: when a thread tries to acquire a lock that is already held by another thread, it waits in a loop, repeatedly checking whether the lock can be acquired, and exits the loop only once it has acquired the lock
- The thread trying to acquire the lock stays active but performs no useful work
busy-waiting
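A minimal spinlock can be built on the CAS primitive from the previous section (an illustrative sketch, not a production lock; real spinlocks add backoff):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait: keep retrying the CAS until the lock becomes free (owner == null).
        while (!owner.compareAndSet(null, current)) {
            // spin: the thread stays runnable but does no useful work
        }
    }

    public void unlock() {
        // Only the owning thread's CAS succeeds here.
        owner.compareAndSet(Thread.currentThread(), null);
    }

    public static int runDemo() throws InterruptedException {
        SpinLockDemo lock = new SpinLockDemo();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter[0];   // 20000: the increments were mutually exclusive
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```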
A linked list of data structures
Template design patterns for design patterns
What is the
The literal meaning
-
Abstract queue synchronizer
-
The source code
Technical explanation
-
AQS is the heavyweight foundation for building locks and other synchronizer components, and the cornerstone of the whole JUC system. It uses a built-in FIFO queue to queue the threads competing for a resource, and an int variable to represent the lock's state
Why is AQS the most important cornerstone of JUC content
Related to AQS
-
ReentrantLock
-
CountDownLatch
-
ReentrantReadWriteLock
-
Semaphore
-
, etc.
Further understand the relationship between locks and synchronizers
- Lock, facing the lock's users: defines the API layer through which developers interact with the lock, hiding implementation details; you just call it
- Synchronizer, facing the lock's implementers: Java concurrency master Doug Lea proposed a unified specification that simplifies lock implementation, shielding synchronization-state management, blocked-thread queuing, and the notification/wake-up mechanism
Can do
Locking causes blocking
- Where there is blocking, queuing is needed; to implement queuing there must be some form of queue to manage it
explain
- The thread that grabs the resource proceeds directly to the business logic. Threads that fail to grab it inevitably need some queuing mechanism, and they keep waiting (like a bank whose service windows are all busy: customers without a window can only wait in the waiting area). A waiting thread still retains the possibility of acquiring the lock, and its attempt to acquire the lock continues (customers in the waiting area wait for their number to be called and go to a window when it is their turn)
- Speaking of queueing, there must be some sort of queue. What is the data structure of that queue?
- If the shared resource is occupied, a certain blocking/wake-up mechanism is needed to ensure lock allocation. This mechanism is implemented mainly with a CLH queue variant: threads that temporarily cannot acquire the lock are added to the queue, and this queue is the abstract representation of AQS. AQS encapsulates each thread requesting the shared resource as a node (Node) of the queue, and maintains the state of the state variable through CAS, spinning, and LockSupport.park(), achieving synchronization control under concurrency
AQS preliminary
AQS met
-
The official explanation
-
Where there is blocking, queuing is needed; implementing queuing inevitably requires a queue
-
AQS uses a volatile int member variable to represent the synchronization state and a built-in FIFO queue to queue threads competing for the resource. Each thread competing for the resource is encapsulated as a Node node to realize lock allocation, and modification of the state value is done through CAS
-
AQS internal architecture
-
Architecture diagram
AQS itself
-
Int variable of AQS
-
AQS synchronization State member variable
```java
/**
 * The synchronization state.
 */
private volatile int state;
```
-
Similar to the status of the acceptance window for bank transactions
- Equal to 0: no one is there; the window is free and can serve
- Greater than or equal to 1: someone is occupying the window; everyone else must wait
-
-
CLH queue for AQS
-
The CLH queue (named after its three inventors: Craig, Landin, and Hagersten); the variant used in AQS is a doubly linked queue
-
Like waiting for customers in the waiting area of a bank
-
-
A small summary
- A queue is needed where there is blocking; queuing requires a queue
- state variable + CLH-variant doubly linked queue
Inner class Node(Node is inside AQS)
-
The Node int variable
-
The Node waitStatus member variable: volatile int waitStatus
-
The wait state of other customers in the waiting area (threads that did not grab the lock resource)
Each queued individual in a queue is a Node
-
-
Node
-
The internal structure
```java
static final class Node {
    // Marker indicating a node is waiting in shared mode
    static final Node SHARED = new Node();
    // Marker indicating a node is waiting in exclusive mode
    static final Node EXCLUSIVE = null;
    // The thread has been cancelled
    static final int CANCELLED = 1;
    // The successor's thread needs to be woken up
    static final int SIGNAL = -1;
    // The thread is waiting on a condition
    static final int CONDITION = -2;
    // Shared synchronization-state acquisition propagates unconditionally
    static final int PROPAGATE = -3;
    // The initial state is 0
    volatile int waitStatus;
    // Predecessor node
    volatile Node prev;
    // Successor node
    volatile Node next;
    // The thread enqueued in this node
    volatile Thread thread;
    // Next node waiting on a condition
    Node nextWaiter;
}
```
-
Attributes that
-
The basic structure of AQS synchronization queue
-
synchronizer
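How a synchronizer delegates to AQS can be sketched with a minimal non-reentrant mutex, modeled on the Mutex example in the AQS javadoc (class names are illustrative): the subclass only supplies tryAcquire/tryRelease on state, while AQS handles queuing and parking.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    // state 0 = free, state 1 = held
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // One CAS on the shared state; on failure, AQS enqueues the
            // thread as a Node in its CLH-variant queue and parks it.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int releases) {
            setState(0); // free the lock; AQS then unparks the first waiter
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }

    public static int runDemo() throws InterruptedException {
        SimpleMutex mutex = new SimpleMutex();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                mutex.lock();
                try { counter[0]++; } finally { mutex.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter[0]; // 20000 under mutual exclusion
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```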
Start with ReentrantLock to interpret AQS
The implementation class of the Lock interface
- Basically, thread access control is done by aggregating a subclass of queue synchronizer
The principle of ReentrantLock
-
ReentrantLock architecture diagram
Start with the simplest lock method to see if it’s fair
-
It can be seen that the only difference between the lock() method of a fair lock and that of an unfair lock is that the fair lock adds one extra constraint when acquiring the synchronization state: hasQueuedPredecessors()
The hasQueuedPredecessors() method checks, when a fair lock tries to lock, whether a valid node already exists in the wait queue
Use the most common lock/ UNLOCK as the case breakthrough
-
The difference between fair and unfair locks
- Fair lock: A fair lock is first come, first come. When a thread obtains a lock, if there are already threads waiting for the lock, the current thread will enter the wait queue
- Unfair lock: if the lock happens to be available, the thread takes it immediately regardless of the wait queue; that is, the thread at the head of the queue that has just been unpark()ed must still contend for the lock (when there is thread contention)
AQS source analysis to go
-
1. lock()
-
2. acquire()
-
3. tryAcquire(arg)
- nonfairTryAcquire(acquires)
- Returns true: done
- Returns false: the condition is negated and execution advances to the next method, addWaiter
-
4. addWaiter(Node.EXCLUSIVE)
- addWaiter(Node mode): thread B
- enq(node)
- In a bidirectional list, the first node is a virtual node (also known as a sentinel node), which does not store any information. The real first node with data starts at the second node
- Suppose a third thread comes in
- prev
- compareAndSetTail
- next
Thread B preempts the lock resource
C thread preempts lock resources
- addWaiter(Node mode): thread B
-
Practical problem: AQS preemption lock
-
1. If the first thread has seized the lock, does the second thread have to queue?
Answer: yes
-
2. Is the first node in the wait queue the node for thread B, which needs to wait?
Answer: no. The first node is the sentinel (dummy node), which is responsible for wake-up, dequeuing, and so on
-
-
5. acquireQueued(addWaiter(Node.EXCLUSIVE), arg)
-
If the attempt to seize the lock fails again, execution proceeds into the following methods
-
shouldParkAfterFailedAcquire: if the predecessor node's waitStatus is SIGNAL, shouldParkAfterFailedAcquire returns true; the program then continues down to the parkAndCheckInterrupt method, which suspends the current thread
-
parkAndCheckInterrupt
-
-
-
unlock() method
-
sync.release(1);
Wake up and get out
The original sentry node is queued, and thread B becomes the new sentry node
-
Three process trends
-
AQS Acquire has three main processes
AQS small summary
-
The thread that grabs the resource proceeds directly to the business logic. Threads that fail to grab it inevitably need a queuing mechanism and keep waiting (like a bank whose service windows are all busy: customers without a window can only wait in the waiting area), but a waiting thread still retains the possibility of acquiring the lock, and its attempt to acquire the lock continues (customers in the waiting area wait for their number to be called and go to a window when it is their turn)
Speaking of queuing, there must be some sort of queue. What is that queue's data structure?
If shared resources are occupied, a blocking/wake-up mechanism is needed to ensure lock allocation. This mechanism is implemented mainly with a CLH queue variant: threads that temporarily cannot obtain the lock are added to the queue, which is the abstract representation of AQS. AQS encapsulates each thread requesting the shared resource as a Node of the queue and maintains the state of the state variable through CAS, spinning, and LockSupport.park(), achieving synchronization control under concurrency
-
Key Process Description
-
addWaiter: encapsulates the current thread as a Node object and joins the queue, with two different processing paths (1 or 2) depending on whether the queue has been initialized
-
1. The queue is not empty, i.e. it was initialized earlier; in this case simply append the new node to the tail of the queue
-
2. The queue is not initialized: call the enq method. enq creates an empty Node object (new Node()) as the head of the queue, then inserts the node for the queuing thread (the method's parameter) as the head's successor
-
-
-
One of the core and difficult parts of AQS:
-
AcquireQueued, shouldParkAfterFailedAcquire
-
Note the use of the for(;;) spin
-
If the node's predecessor is head, this node holds the next thread eligible to acquire the lock, so tryAcquire is called to try to acquire it. On success, the linked-list relationships are re-maintained (this Node is set as head and the previous head is unlinked from the list), and the method returns
-
If the node's predecessor is not head, or acquiring the lock fails, the predecessor's waitStatus is checked; if it is SIGNAL, the current thread calls park and blocks
- 1. If it == 0, set it to SIGNAL
- 2. If it is > 0 (== 1), the predecessor has been cancelled: remove the cancelled nodes from the queue and re-maintain the queue's linked-list relationships
The for loop then runs the above logic again. Compare with the doAcquireInterruptibly method: the difference is in how the logic is handled after the thread has been interrupted
-
-
parkAndCheckInterrupt
- Blocking ends in two ways:
-
1. The thread holding the lock calls unpark on the queued thread after releasing the lock; the unparked thread must be at the head of the queue (excluding the sentinel head node)
-
2. The thread is interrupted. (Note that an external interrupt here does not throw InterruptedException, unlike blocking in sleep or wait.)
-
- Blocking ends in two ways:
-
Spring
Spring’s AOP sequence
Common Aop annotations
- @Before: before advice, executed before the target method
- @After: after advice, executed after the target method (always runs)
- @AfterReturning: after-returning advice, executed after the method returns normally (not executed on exception)
- @AfterThrowing: after-throwing advice, executed when an exception occurs
- @Around: around advice, wrapped around the target method's execution
Common questions asked
Problem 1: AOP’s full advice execution order
-
1. Moving from Spring 4 to Spring 5 changes the overall execution order of AOP advice; Spring 4 and Spring 5 execute AOP advice in different orders. Spring Boot 1 to Spring Boot 2 is essentially Spring 4 to Spring 5
-
Code demo
- To apply the various pieces of advice before and after the div method, introduce aspect-oriented programming
- Create a new aspect class, MyAspect, and add two annotations to it:
- @Aspect: marks the class as an aspect class
- @Component: hands it to the Spring container for management
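A sketch of what such an aspect class might look like (a non-runnable Spring fragment; the pointcut expression and the target class com.example.CalcService.div are assumptions for illustration, not from the original):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.*;
import org.springframework.stereotype.Component;

@Aspect      // marks this class as an aspect
@Component   // registers it with the Spring container
public class MyAspect {
    @Before("execution(* com.example.CalcService.div(..))")
    public void beforeNotify() { System.out.println("--- @Before"); }

    @After("execution(* com.example.CalcService.div(..))")
    public void afterNotify() { System.out.println("--- @After"); }

    @AfterReturning("execution(* com.example.CalcService.div(..))")
    public void afterReturningNotify() { System.out.println("--- @AfterReturning"); }

    @AfterThrowing("execution(* com.example.CalcService.div(..))")
    public void afterThrowingNotify() { System.out.println("--- @AfterThrowing"); }

    @Around("execution(* com.example.CalcService.div(..))")
    public Object aroundNotify(ProceedingJoinPoint pjp) throws Throwable {
        System.out.println("--- @Around before");
        Object result = pjp.proceed();  // invoke the target method
        System.out.println("--- @Around after");
        return result;
    }
}
```

Running the same aspect under Spring 4 and Spring 5 produces the differing advice orders shown in the results below.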
Problem 2: Common AOP usage issues
- 2. What pitfalls have you encountered in using AOP?
Spring 4 + Spring Boot 1.5.9 demonstration
-
Add dependencies to prevent startup errors
```xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.1.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-access</artifactId>
    <version>1.1.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.3</version>
</dependency>
```
-
Change the Spring Boot version to 1.5.9
-
The results
-
Normal operation result
-
Abnormal operation
-
AOP normal order + exception order under Spring 4
- Normal execution: @Before (before advice) --> @After (after advice) --> @AfterReturning (normal return)
- Exceptional execution: @Before (before advice) --> @After (after advice) --> @AfterThrowing (exception advice)
Spring 5 + Spring Boot 2.3.3 demonstration
-
The results
-
When the normal
-
When abnormal
-
AOP normal order + exception order under Spring 5
- Normal execution: @Before (before advice) --> @AfterReturning (normal return) --> @After (after advice)
- Exceptional execution: @Before (before advice) --> @AfterThrowing (exception advice) --> @After (after advice)
Spring loop dependencies
Common questions
- Explain the three-level cache in Spring
- What are the three levels of caches? What are the similarities and differences between the three maps
- What are circular dependencies? Have you seen the Spring source code? What is a spring container in general?
- How do I detect circular dependencies? Do you see exceptions to loop dependencies in your daily development?
- Why can't circular dependencies be solved in the prototype (multi-instance) case?
What are circular dependencies
-
Multiple beans depend on each other to form a closed loop. For example, A depends on B, B depends on C, and C depends on A
-
In general, when you are asked how the Spring container solves circular dependencies, the question refers to the default singleton beans
The impact of two injection modes on cyclic dependencies
- Constructor injection
- Set method injection
Circular dependencies are described on the official website
-
Circular dependencies
conclusion
- As long as A is injected via a setter and is a singleton, there is no circular dependency problem
Circular dependencies error – BeanCurrentlyInCreationException demonstration
Circular dependencies in the Spring container, there are two types of injection
Constructors inject dependencies
-
Code demo
-
ServiceA
-
ServiceB
-
ClientConstructor
-
-
Conclusion: circular dependencies cannot be supported with constructor injection; the hoped-for constructor circular dependency resolution does not exist
Inject dependencies with the set method
-
Code demo
-
ServiceAA
-
ServiceBB
-
ClientSet
-
Important Code case demonstration
Based on the code
-
A
-
B
-
ClientCode
-
ClientSpringContainer
Add to the Spring container
-
steps
-
applicationContext.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-4.0.xsd">
    <bean id="a" class="com.touch.air.basic.spring.circulardepend.A">
        <property name="b" ref="b"></property>
    </bean>
    <bean id="b" class="com.touch.air.basic.spring.circulardepend.B">
        <property name="a" ref="a"></property>
    </bean>
</beans>
```
The normal operation
-
Change the bean scope to scope="prototype"
Loop dependency error
-
ClientSpringContainer
-
Cyclic dependent exception
-
-
Conclusion:
- The default singleton scenario supports circular dependencies and does not report errors
- Prototype scenarios do not support loop dependencies and will report errors
Important conclusion (Spring internally resolves circular dependencies with a three-level cache)
-
DefaultSingletonBeanRegistry
- Level 1 cache (also called the singleton pool), singletonObjects: holds bean objects that have gone through their full lifecycle
- Level 2 cache, earlySingletonObjects: holds bean objects exposed early, before their lifecycle has finished (properties not yet populated)
- Level 3 cache, singletonFactories: holds the factories that can produce beans
- Level 1 cache (also called singleton pool)
-
Only singleton beans are exposed early through the three-level cache to solve circular dependencies. A non-singleton bean fetched from the container is a brand-new object every time and is recreated on each request, so non-singleton beans are not cached and never enter the three-level cache
Deep Debug loop dependency
Front knowledge
-
Instantiate/initialize
- Example: Apply for a memory space in memory (rent a house, your furniture has not moved in)
- Initialize attribute fill: Complete various attribute assignments (decoration, home appliances and furniture enter)
-
Three maps and four methods, overall related objects
getSingleton: fetches the singleton object
doCreateBean: if not found, creates it
populateBean: fills in the properties one by one during creation
addSingleton: finally puts it into the singleton pool, i.e. the level 1 cache
The first layer, singletonObjects, holds beans that have already been initialized
The second layer, earlySingletonObjects, holds beans that are instantiated but not yet initialized
The third layer, singletonFactories, holds the object factories; once class A's factory is added there, an early reference to the A bean can be obtained from that factory
Description of migration of A/B objects in level 3 cache
- 1. While creating A, A finds it needs B, so A puts itself into the level 3 cache and goes to instantiate B
- 2. While instantiating B, B finds it needs A, so B checks the level 1 cache, then the level 2 cache, and finally the level 3 cache, where it finds A; A is then moved from the level 3 cache into the level 2 cache and removed from the level 3 cache
- 3. B finishes initializing and puts itself into the level 1 cache (at this point the A held by B is still "in creation"); creation of A then resumes, and since B is now complete, B is taken directly from the level 1 cache, A completes its creation and puts itself into the level 1 cache
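The A/B migration just described can be sketched in plain Java. This is a toy model, not Spring's real code: the map names mirror `DefaultSingletonBeanRegistry`, but the wiring steps are inlined by hand instead of being driven by reflection.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of Spring's three-level cache for circular dependencies.
public class ThreeLevelCacheDemo {
    // Level 1: fully initialized singletons
    static final Map<String, Object> singletonObjects = new HashMap<>();
    // Level 2: early-exposed, instantiated-but-uninitialized beans
    static final Map<String, Object> earlySingletonObjects = new HashMap<>();
    // Level 3: factories that can produce an early reference
    static final Map<String, Supplier<Object>> singletonFactories = new HashMap<>();

    static class A { B b; }
    static class B { A a; }

    static Object getSingleton(String name) {
        Object bean = singletonObjects.get(name);          // check level 1
        if (bean == null) bean = earlySingletonObjects.get(name); // then level 2
        if (bean == null) {
            Supplier<Object> factory = singletonFactories.remove(name); // then level 3
            if (factory != null) {                         // move from level 3 to level 2
                bean = factory.get();
                earlySingletonObjects.put(name, bean);
            }
        }
        return bean;
    }

    public static void main(String[] args) {
        A a = new A();                            // instantiate A (constructor only)
        singletonFactories.put("a", () -> a);     // expose A early via level 3
        B b = new B();                            // A needs B, so B is created
        singletonFactories.put("b", () -> b);
        b.a = (A) getSingleton("a");              // B finds A's factory; A moves to level 2
        singletonObjects.put("b", b);             // B finishes: into level 1
        earlySingletonObjects.remove("b");
        singletonFactories.remove("b");
        a.b = (B) getSingleton("b");              // A resumes and gets the finished B
        singletonObjects.put("a", a);             // A finishes: into level 1
        earlySingletonObjects.remove("a");
        System.out.println(a.b == b && b.a == a); // the cycle is closed
    }
}
```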
Summary: How does Spring address cyclic dependencies
-
Overall Debug: Deeply experience the process of solving loop dependencies in Spring
- 1. Call the `doGetBean()` method to get beanA; it calls the `getSingleton()` method to look for beanA in the cache
- 2. In `getSingleton()`, the level 1 cache is checked; nothing is found, so null is returned
- 3. Back in `doGetBean()`, the overload of `getSingleton()` that takes an ObjectFactory is called for beanA
- 4. In that `getSingleton()`, beanA's name is first added to a set that marks the bean as "currently in creation", and then the anonymous inner class's `createBean` callback is invoked
- 5. Enter `AbstractAutowireCapableBeanFactory#doCreateBean`: the constructor is first called reflectively to create the beanA instance, and then three things are checked: is it a singleton, is early exposure of references allowed (generally true for singletons), and is it currently being created (i.e. in the set from step 4); if all are true, beanA is added to the level 3 cache
- 6. beanA's properties are populated; beanA is detected to depend on beanB, so the search for beanB begins
- 7. In `doGetBean()`, just as for beanA above, beanB is looked up in the cache, created, and its properties populated; beanB depends on beanA
- 8. `getSingleton()` is called to obtain beanA; the level 1, level 2 and level 3 caches are searched in turn, beanA's creation factory is found in the level 3 cache, and the factory is used to obtain the `singletonObject`, which at this point is the beanA instance created in `doCreateBean()`
- 9. beanB obtains its beanA dependency and completes its creation, and beanA is moved from the level 3 cache to the level 2 cache
- 10. beanA then continues its property-filling work, now obtains beanB, completes its own creation, and in `getSingleton()` beanA is moved from the level 2 cache to the level 1 cache
-
The process, explained
- 1. Spring creates a bean in two main steps: first create the raw bean object, then populate the object's properties and initialize it
- 2. Every time a bean is created, the cache is first checked for an existing instance; since beans are singletons, there can only be one
- 3. After the raw beanA object is created and put into the level 3 cache, property population begins; beanA is found to depend on beanB, so beanB is created next. The difference is that while creating beanB, the raw beanA object can already be found in the level 3 cache, so there is no need to create beanA again; it is injected into beanB, completing beanB's creation
- 4. With beanB fully created, beanA finishes its property filling and executes the rest of its logic, closing the loop
-
Spring addresses circular dependencies using the concept of a bean's "intermediate state": instantiated but not yet initialized (a semi-finished product). Instantiation happens through the constructor, and an object that has not yet been constructed cannot be exposed early, so **constructor-based circular dependencies cannot be solved this way**
Spring uses a three-level cache to solve circular dependencies between singletons
- The level 1 cache is the singleton pool (`singletonObjects`)
- The level 2 cache holds early-exposed objects (`earlySingletonObjects`)
- The level 3 cache holds early-exposed object factories (`singletonFactories`)
Suppose A and B reference each other circularly. A is instantiated and placed into the level 3 cache, then starts filling its properties and finds it depends on B. B goes through the same process: it is instantiated and placed into the level 3 cache, and while filling its properties finds it depends on A. The early-exposed A is then looked up in the caches; if there is no AOP proxy, the raw A object is injected into B directly, and B completes its property filling and initialization. Once B is complete, the remaining steps of A are finished. If A does need an AOP proxy, the proxy object of A is obtained during early exposure, that proxy is injected into B instead, and the rest of the process continues
Redis
Practical applications of the five traditional Redis data types
The eight types
- 1. String (String type)
- 2. Hash
- 3, List (List type)
- 4, Set (Set type)
- 5. SortedSet (ordered set type, zset for short)
- 6. Bitmap
- 7. HyperLogLog (Statistics)
- 8
Note: Redis commands are case-insensitive, but keys are case-sensitive. Help command: `help @<type>` (e.g. `help @string`)
String
The most commonly used
- set key value
- get key
Sets/gets multiple values simultaneously
- MSET key value [key value ….]
- MGET key [key ….]
Numerical increase or decrease
- Increment by 1: INCR key
- Increment by a specified integer: INCRBY key increment
- Decrement by 1: DECR key
- Decrement by a specified integer: DECRBY key decrement
Get the length of the string
- STRLEN key
Distributed lock SETNX
- setnx key value
- SET key value [EX seconds] [PX milliseconds] [NX|XX]
- EX: The number of seconds after the key expires
- PX: How many milliseconds after the key expires
- NX: Creates a key when it does not exist. The effect is the same as setnx
- XX: Overwrites the key if it exists
Application scenarios
-
The INCR command can generate incrementing numbers such as product IDs and order numbers
-
Click counting: every time an address is clicked, running the INCR key command adds 1 and completes the count
hash
- Corresponding data structure in Java:
Map<String,Map<Object,Object>>
Set one field value at a time
- HSET key field value
Get one field value at a time
- HGET key field
Set multiple field values at once
- HMSET key field value [field value ….]
Get multiple field values at once
- HMGET key field [field ….]
Gets all field values
- hgetall key
Gets the number of fields in a key
- HLEN key
Deletes a field
- HDEL key field
Application scenarios
-
Simple shopping cart
list
Add an element to the left of the list
-
LPUSH key value [value ….]
Add elements to the right of the list
- RPUSH key value [value ….]
Check the list
- LRANGE key start stop
Gets the number of elements in the list
- LLEN key
Application scenarios
-
WeChat official-account subscriptions:
1. For example, I subscribe to the official accounts of Nuggets and CSDN, which publish articles with ids 11 and 22 respectively
2. As soon as they publish a new article, it is pushed onto my list: lpush likeArticle:YY1024 11 22
3. To view all articles from my own subscriptions, page through them: lrange likeArticle:YY1024 0 10
set
Add elements
- SADD key member [member …]
Remove elements
- SREM key member [member …]
Gets all elements in the collection
- SMEMBERS key
Determines whether an element is in a collection
- SISMEMBER key member
Gets the number of elements in the collection
- SCARD key
Gets a random element from the set, without deleting it
- SRANDMEMBER key [number]
Pops a random element from the set and deletes it
- SPOP key [number]
Set operations
-
Difference set operation on sets A-B: The set of elements belonging to A but not to B
SDIFF key [key …]
-
The set of elements jointly owned by BOTH A and B
SINTER key [key …]
-
A union B is A union of elements belonging to A or B
SUNION key [key …]
Application scenarios
-
1. Wechat Lottery small program (SRANDMEMBER, SPOP)
-
2. “Like” in wechat circle
-
3. Weibo friends follow social relations (set operation)
-
People of common interest (intersection)
-
The people I follow also follow him (intersection)
-
-
4. QQ's "people you may know" suggestions (difference set)
zset
Adds an element and that element's score to a sorted set
Add elements
- ZADD key score member [score member …]
Sort, return all elements between indexes
- ZRANGE key start stop [WITHSCORES]
Gets the score of the element
- ZSCORE key member
Remove elements
- ZREM key member [member …]
Gets the element in the specified score range
- ZRANGEBYSCORE key min max [WITHSCORES][LIMIT OFFSET COUNT ]
Increments the score of an element
- ZINCRBY key increment member
Gets the number of elements in the collection
- ZCARD key
Gets the number of elements in the specified score range
- ZCOUNT key min max
Remove elements by rank range
- ZREMRANGEBYRANK key start stop
Gets the ranking of elements
- From small to large: ZRANK key member
- From large to small: ZREVRANK key member
Application scenarios
-
Display items in order of sales
-
Shake out
Distributed locks: why a Redis distributed lock?
- In a distributed microservice architecture, a lock that works across the split-out microservices is needed to avoid conflicts and data corruption: a distributed lock
- Redis distributed lock implementations
- Hand-rolled simple version: string type, `SETNX` plus a Lua script
- In cluster mode: `Redisson`
Common usage scenario: preventing overselling
Common questions
- 1. Besides caching, what else is Redis used for?
- 2. What issues need attention when using Redis for distributed locks?
- 3. If Redis is deployed as a single point, what problems arise? How are those single-point problems solved?
- 4. Are there problems in cluster mode, for example master/slave mode?
- 5. RedLock? Talk about Redisson?
- 6. How does a Redis distributed lock renew itself? Do you know about the watchdog?
Base case (Springboot+Redis)
Usage scenarios
- Multiple services + a guarantee that only one request operates at a time per user (to prevent data conflicts and concurrency errors in critical business)
Code implementation
-
1. Create modules boot_redis01 and boot_redis02
-
2. Update the POM
-
3. Write the YML
-
4. Main startup class
-
5. Business code
Existing problems
1. The standalone version has no lock
The problem
- Without a lock, the counts under concurrent requests are certainly wrong; overselling occurs
Thinking
-
Add synchronized?
- Threads block and wait until the lock is released
-
Add a ReentrantLock?
-
tryLock(), with a wait time set
-
-
Or support both?
- Select the appropriate one based on the specific business requirements
To solve
-
Updated to version 2.0
2. Nginx distributed microservices architecture
The problem
-
2.1 After distributed deployment, the single-machine lock still oversells; a distributed lock is required
-
2.2 Configure Nginx load balancing
Local hosts file
-
2.3 Start the two microservices
-
2.4 Simulate high concurrency (JMeter pressure test)
To solve
-
Distributed lock setnx on Redis
3. Always release the lock (continuing from 2)
The problem
- If an exception occurs, the lock may never be released; the lock must be released in a
`finally`
block at the code level
To solve
4. Server down (continuing from 3)
The problem
- If the machine running the microservice crashes, the code never reaches the finally block at all, so unlocking cannot be guaranteed and the key is never deleted; the key needs an expiration time
To solve
-
The lockKey expiration time must be set
5. Setting the key and its expiration time is not atomic
To solve
-
Merge them into one statement: acquire the lock and set the expiration time together (SET key value NX EX seconds)
6. Deleting the wrong lock
The problem
-
Because business execution or a remote call times out, thread A's execution time exceeds the lock expiration time that was set; A's lock expires and is deleted automatically, and another thread B acquires the lock. When thread A's code finally reaches its delete step, the lock it deletes is actually thread B's lock: it has deleted someone else's lock
To solve
-
A thread may only delete its own lock, never someone else's
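A minimal sketch of the "delete only your own lock" rule. In real Redis this compare-and-delete must happen atomically (typically in a Lua script, as recommended below); here a ConcurrentHashMap stands in for Redis, and its atomic `remove(key, expectedValue)` plays the role of the script's GET + DEL. All names are illustrative.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Simulation of SETNX locking with a per-holder token, so that unlock can
// refuse to delete a lock that now belongs to another thread.
public class DeleteOwnLockDemo {
    static final ConcurrentHashMap<String, String> redis = new ConcurrentHashMap<>();

    static String tryLock(String key) {
        String token = UUID.randomUUID().toString();          // unique lock value per holder
        return redis.putIfAbsent(key, token) == null ? token : null; // SETNX semantics
    }

    static boolean unlock(String key, String token) {
        // deletes only if the stored value is still our own token
        // (the atomic equivalent of the Lua "if GET == token then DEL" script)
        return redis.remove(key, token);
    }

    public static void main(String[] args) {
        String tokenA = tryLock("lockKey");            // thread A acquires
        redis.remove("lockKey");                       // simulate A's lock expiring
        String tokenB = tryLock("lockKey");            // thread B acquires
        System.out.println(unlock("lockKey", tokenA)); // A cannot delete B's lock
        System.out.println(unlock("lockKey", tokenB)); // B deletes its own lock
    }
}
```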
7. The lock check-and-delete is not atomic
The problem
-
The judgment plus DEL deletion in the finally block is not atomic
To solve
-
Use Redis’s own transactions
-
Introduction to transactions
-
Redis transactions use four commands: MULTI, EXEC, DISCARD, and WATCH
-
Redis's individual commands are atomic, so what a transaction guarantees is the atomicity of a collection of commands
-
Redis serializes the command set and guarantees that the commands in a transaction execute consecutively without interruption
-
Redis does not support rollback
-
-
Related commands
-
MULTI: marks the start of a transaction block
Redis queues subsequent commands one by one, then executes the queued sequence atomically on EXEC
-
EXEC: executes all previously queued commands in the transaction, then restores the normal connection state (similar to commit)
-
DISCARD: clears all commands previously queued in the transaction and restores the normal connection state
-
WATCH: puts the given keys into a monitored state (similar to an optimistic lock), for when a transaction needs to execute conditionally
WATCH key [key …]
-
-
-
Lua scripts (recommended)
8. What about the expiration time?
The problem
- 8.1 Ensuring that the Redis lock's expiration time is longer than the business execution time
- How does a Redis lock renew itself?
- 8.2 Cluster + CAP, compared with ZooKeeper
- Comparison with ZooKeeper
- CAP
- Redis (cluster): AP. Redis's asynchronous replication can lose a lock, for example when the master dies right after a SET, before the data has reached the slave
- ZooKeeper: CP
9. Summary
-
In a Redis cluster environment, a hand-written distributed lock is not reliable; go straight to RedLock (the Redis distributed lock algorithm) via its Redisson implementation
-
Redisson's watch dog automatically renews locks
Redisson provides a watchdog that monitors the lock: as long as the Redisson instance holding the lock has not been shut down, it keeps extending the lock's validity. In other words, if the thread holding the lock has not finished its logic, the watchdog keeps extending the lock's timeout so the lock is not released early by expiry. By default the watchdog renewal time is 30s, which can be changed via Config.lockWatchdogTimeout. Redisson's lock method can also take a leaseTime parameter to specify the hold time; after that time the lock unlocks automatically, without any renewal
-
Conclusion
- 1. While the current node is alive, the watch dog extends the distributed lock's key by 30 seconds every 10 seconds
- 2. Once the watch dog mechanism starts, if the code contains no lock-release operation, the watch dog keeps renewing the lock
- 3. If an exception prevents the unlock from executing, the lock can never be released, so the lock must be released in finally {}
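The renewal behavior above can be modeled deterministically. This is a toy model of the watchdog idea only: a manual clock instead of timers, a field instead of a Redis TTL, and none of Redisson's actual API; the 30s lease and 10s renewal cadence follow the notes above.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of watch-dog lock renewal: renew() extends the lease while the
// holder is alive; once renewals stop, the lease lapses on its own.
public class WatchdogDemo {
    static final long LEASE_MS = 30_000;       // default lease, per the notes
    long clock = 0;                            // fake current time in ms
    final AtomicLong expireAt = new AtomicLong(-1);

    void lock()    { expireAt.set(clock + LEASE_MS); }
    void renew()   { if (held()) expireAt.set(clock + LEASE_MS); } // watchdog tick
    boolean held() { return clock < expireAt.get(); }

    public static void main(String[] args) {
        WatchdogDemo d = new WatchdogDemo();
        d.lock();
        for (int i = 0; i < 5; i++) {          // business runs 5 * 10s = 50s > 30s lease
            d.clock += 10_000;
            d.renew();                         // every 10s the watchdog extends the lease
        }
        System.out.println(d.held());          // still held despite exceeding the lease
        d.clock += LEASE_MS + 1;               // renewals stop (holder gone): lease lapses
        System.out.println(d.held());
    }
}
```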
-
-
Pressure-test again with JMeter
Based on Redis's performance, adjust the thread parameters; on my local VM, 100 requests in 0.4 seconds were enough to trigger single-machine-lock overselling
Using Redisson basically solves the overselling problem
-
Further refinement
Under higher concurrency, simply calling Redisson's lock() and unlock() methods produces the following error:
IllegalMonitorStateException: attempt to unlock the lock, Id: 0DA6385F-81a5-4EDaf9ewa Not locked by current thread by node ID: 0DA6385F-81a5-4EDaf9ewa
Redis distributed lock summary
The synchronized standalone version is OK
Distributed systems
- 1. With Nginx and distributed microservices, a single-machine lock does not work
- 2. Drop the single-machine lock and use a Redis distributed lock, SETNX
- 3. Locking only, without ever releasing the lock
- 4. If an exception occurs, the lock may never be released; the lock must be released in finally at the code level
- 5. If the machine goes down, the deployed microservice's code never reaches finally, so unlocking cannot be guaranteed; the lock key needs an expiration time set
- 6. For the Redis distributed lock key, add an expiration time; SETNX + the expiration time must be in the same statement
- 7. It must be stipulated that a thread can only delete its own lock, never someone else's (no A deleting B's lock, or B deleting C's)
- 8. In a Redis cluster environment a hand-written lock is not reliable; go straight to the RedLock Redisson implementation
Redis cache expiration elimination policy
Common questions
- 1. How much redis memory do you set for production?
- 2. How to configure and modify redis memory size
- 3. What do you do if memory is full?
- 4. How does Redis clean up memory? Do you know about periodic deletion and lazy deletion?
- 5. Redis cache elimination strategy
- 6. Do you know about Redis's LRU? Can you write an LRU algorithm?
What if Redis memory is full
How much memory does Redis default? Where to check? How to set and modify?
Check the maximum memory used by Redis
How much memory can redis use by default?
- If the maximum memory is not set, or is set to 0: on a 64-bit operating system memory usage is unlimited, while on a 32-bit operating system the implicit maximum is 3GB
How do you arrange for general production?
- It is generally recommended to set Redis's memory to three quarters of the maximum physical memory
How do I modify redis memory Settings?
-
1. Modify the configuration file
-
2. Run commands to modify the Settings
config get maxmemory
config set maxmemory 104857600
What command is used to check redis memory usage?
-
info memory
What happens if you go beyond the maximum? (OOM)
-
Change the configuration and deliberately set the maximum value to 1 byte
-
Then try SET again; you get an OOM error
conclusion
- Once maxmemory is set, if Redis's memory usage reaches the limit and keys have no expiration times, data fills memory up to maxmemory and further writes fail. To avoid this, memory eviction policies were introduced; see the next chapter
Redis cache elimination strategy
-
The current version has eight cache eviction policies; if none is set, noeviction is used by default
How does data written into Redis go missing?
Redis deletion policy for expired keys
Three different deletion strategies
-
Timed (immediate) deletion: CPU-unfriendly; trades processor time for memory (time for space)
Redis cannot constantly traverse every key that has a TTL set to detect expiry and then delete it
Immediate deletion keeps the data in memory freshest, because a key is removed the moment it expires and its memory is freed right away. But it is the least CPU-friendly option: deletion costs CPU time, and if the CPU happens to be busy, for example computing an intersection or a sort, the deletion adds extra pressure
This incurs a significant performance cost and also affects read operations
-
Lazy deletion: memory-unfriendly; trades memory space for processor time (space for time)
When data reaches its expiration time, nothing is done. On access, if the data has not expired it is returned; if it has expired, it is deleted and "does not exist" is returned
Con: least memory-friendly
If a key has expired but is never deleted, its memory is not released even though the key is no longer usable
Under the lazy deletion policy, if the database contains many expired keys that happen never to be accessed, the memory they occupy is never released (unless the user runs FLUSHDB manually); this can even be considered a memory leak
-
Both schemes are extremes; hence periodic deletion
The periodic deletion policy is a compromise between the two: it runs the expired-key deletion once every so often, and limits the duration and frequency of the deletion work to reduce its impact on CPU time
After all the above steps, are there still loopholes?
-
1. With periodic deletion, a key may never happen to be sampled
-
2. With lazy deletion, a key may never happen to be accessed again
-
A large number of expired keys are piled up in the memory, causing the memory space of Redis to be strained or exhausted quickly
There must be a better bottom-line solution…
Enter the memory elimination strategy
What are the elimination strategies
-
The eight strategies
-
noeviction: do not evict any key
-
allkeys-lru: evict among all keys using the LRU algorithm
-
volatile-lru: evict among keys with an expiration time set, using the LRU algorithm
-
allkeys-random: randomly evict among all keys
-
volatile-random: randomly evict among keys with an expiration time set
-
volatile-ttl: evict the keys closest to expiring
-
allkeys-lfu: evict among all keys using the LFU algorithm
-
volatile-lfu: evict among keys with an expiration time set, using the LFU algorithm
-
-
conclusion
- LRU (Least Recently Used): evict the least recently used data
- LFU (Least Frequently Used): evict the least frequently used data
- Two dimensions
- Filter among expired keys
- Filter among all keys
- Four aspects:
- LRU
- LFU
- random
- ttl
Which one do you usually use?
allkeys-lru
How do I configure and modify it
-
The configuration file
-
Run the config set maxmemory-policy command
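Both configuration routes from this section can be sketched together; the values below are illustrative only (the 104857600 figure, i.e. 100MB, is the one used earlier in these notes), not recommendations:

```
# redis.conf (illustrative values)
maxmemory 100mb
maxmemory-policy allkeys-lru

# or at runtime via redis-cli:
#   config set maxmemory 104857600
#   config set maxmemory-policy allkeys-lru
```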
Redis LRU algorithm introduction
What is Redis's LRU?
- LRU is the abbreviation of Least Recently Used: a common page-replacement algorithm that selects the data unused for the longest time for eviction
Algorithm source
-
leetcode
Design idea
-
Feature requirements
- There must be an ordering that distinguishes recently used data from long-unused data
- Reads and writes in one pass, with O(1) time complexity
- When the capacity (the "slots") is full, delete the least recently used data; every new access inserts the entry at the head of the queue
- Fast lookup, insertion, and deletion, while also maintaining order
-
The algorithmic core of LRU is the hash linked list
In essence HashMap + DoubleLinkedList: the hash table gives O(1) access and the doubly linked list maintains the usage order
How to implement LRU for handwriting coding
Case 1
-
Refer to LinkedHashMap, which relies on the JDK
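A sketch of Case 1, leaning on the JDK's LinkedHashMap: constructing it with accessOrder = true keeps entries ordered by access, and overriding removeEldestEntry provides the eviction.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Case 1: LRU cache built on LinkedHashMap's access-order mode.
public class LruCacheV1<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCacheV1(int capacity) {
        super(capacity, 0.75f, true);     // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;         // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruCacheV1<Integer, String> cache = new LruCacheV1<>(3);
        cache.put(1, "a"); cache.put(2, "b"); cache.put(3, "c");
        cache.get(1);                     // touch 1: now 2 is least recently used
        cache.put(4, "d");                // evicts 2
        System.out.println(cache.keySet()); // [3, 1, 4]
    }
}
```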
Case 2
-
Hand-rolled, without relying on the JDK's LinkedHashMap
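A sketch of Case 2 along the lines described above: a plain HashMap for O(1) lookup plus a hand-rolled doubly linked list with sentinel head/tail nodes for O(1) move-to-front and tail eviction (no LinkedHashMap).

```java
import java.util.HashMap;
import java.util.Map;

// Case 2: hand-rolled LRU cache: HashMap + doubly linked list (hash linked list).
public class LruCacheV2 {
    static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> map = new HashMap<>();
    private final Node head = new Node(-1, -1);   // sentinel: most recently used side
    private final Node tail = new Node(-1, -1);   // sentinel: least recently used side

    public LruCacheV2(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    private void unlink(Node n)   { n.prev.next = n.next; n.next.prev = n.prev; }
    private void addFirst(Node n) {
        n.next = head.next; n.prev = head;
        head.next.prev = n; head.next = n;
    }

    public int get(int key) {
        Node n = map.get(key);
        if (n == null) return -1;
        unlink(n); addFirst(n);                   // touched: move to front
        return n.value;
    }

    public void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) { n.value = value; unlink(n); addFirst(n); return; }
        if (map.size() == capacity) {             // full: evict from the tail (LRU)
            Node lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        n = new Node(key, value);
        map.put(key, n);
        addFirst(n);
    }

    public static void main(String[] args) {
        LruCacheV2 cache = new LruCacheV2(2);
        cache.put(1, 1); cache.put(2, 2);
        System.out.println(cache.get(1));  // 1
        cache.put(3, 3);                   // evicts key 2
        System.out.println(cache.get(2));  // -1 (evicted)
        System.out.println(cache.get(3));  // 3
    }
}
```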