Preface

To be a good Android developer, you need a complete knowledge system. Here, let's grow into what we want to be.

An awesome collection of Android expert interview questions and answers (continuously updated).

A comprehensive and systematic set of advanced Android interview questions, compiled from dozens of top interview repositories and more than 300 high-quality interview write-ups.

Welcome to the 2020 advanced Android big-company interview guide, here to see you through the peak hiring season and into a top company.

Java interview questions

Java basics

1. Object Oriented (⭐⭐⭐)

1. What is your understanding of Java polymorphism?

Polymorphism means that when a method of a parent class is overridden by a subclass, the subclass can provide its own behavior; the same operation performed on different objects can be interpreted differently and produce different results.

Three necessary conditions for polymorphism:

  • Inherits from the parent class.
  • Overrides methods of the superclass.
  • A reference to a parent class points to a subclass object.

What is polymorphism

The three characteristics of object orientation are encapsulation, inheritance, and polymorphism. From a certain perspective, encapsulation and inheritance exist largely in preparation for polymorphism, the last and most important of the three concepts.

Definition of polymorphism: allows objects of different classes to respond to the same message. That is, the same message can behave in many different ways depending on the object it is sent to. (Sending a message is a function call)

The technique for implementing polymorphism is called dynamic binding, which determines the actual type of the referenced object during execution and calls its corresponding method based on its actual type.

Function of polymorphism: eliminate coupling between types.

In reality, there are many examples of polymorphism. For example, pressing the F1 key in the Flash interface pops up the AS 3 help documentation; pressing it in Word pops up Word Help; on the Windows desktop it pops up Windows Help and Support. The same event occurring on different objects produces different results.

Benefits of polymorphism:

1. Substitutability. Polymorphic code can work on existing and future types alike: code written for the Circle class also works for any other circular geometry, such as a ring.

2. Extensibility. Polymorphism makes code extensible: adding a new subclass does not affect the polymorphism, inheritance, or other behavior of existing classes; in fact, new subclasses get polymorphic behavior easily. For example, once cone, half-cone, and half-sphere are handled polymorphically, it is easy to add a sphere.

3. Interface-ability. Polymorphism works by having the superclass provide a common interface to its subclasses through method signatures, which the subclasses complete or override.

4. Flexibility. It enables flexible and varied operations in an application and improves efficiency of use.

5. Simplicity. Polymorphism simplifies the coding and maintenance of application software, especially when dealing with operations on large numbers of objects.

In Java, polymorphism is implemented through: implementing an interface, overriding a method inherited from a parent class, and overloading methods within the same class.
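As a minimal sketch of the three necessary conditions listed above (the class names Shape and Circle are invented for illustration):

// Hypothetical classes used only to illustrate the three conditions of polymorphism.
class Shape {
    void draw() { System.out.println("draw a shape"); }
}

class Circle extends Shape {
    @Override
    void draw() { System.out.println("draw a circle"); } // overrides the parent's method
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Circle(); // a parent-class reference points to a subclass object
        s.draw();               // dynamic binding: prints "draw a circle"
    }
}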

2. What design patterns do you know?

A: There are 23 classic design patterns in Java. You do not need to know all of them, but you should learn the common ones. All of them are listed below; the more you can master, the better.

In general, design patterns fall into three broad categories:

There are five types of creation patterns:

Factory method pattern, Abstract Factory pattern, singleton pattern, Builder pattern, prototype pattern.

There are seven structural patterns:

Adapter pattern, decorator pattern, proxy pattern, facade pattern, bridge pattern, composite pattern, flyweight pattern.

There are eleven behavioral patterns:

Strategy pattern, template method pattern, observer pattern, iterator pattern, chain of responsibility pattern, command pattern, memento pattern, state pattern, visitor pattern, mediator pattern, interpreter pattern.

See my design pattern summary notes for details

3. What are the advantages of implementing singletons with static inner classes?

  1. Thread safety is guaranteed by the class-loading mechanism, without explicit synchronization.
  2. The instance is created lazily: it is only created when getInstance() is first called, saving memory if it is never used. A sketch is shown below.
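A minimal sketch of the static-inner-class singleton (the class name Singleton is illustrative). The JVM only loads Holder when getInstance() is first called, and class loading itself guarantees that the instance is created exactly once:

public class Singleton {
    private Singleton() {}

    // Holder is not loaded until getInstance() is first called.
    private static class Holder {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}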

4. What is the difference between static proxy and dynamic proxy?

The difference between a static proxy and a dynamic proxy lies in when the proxy class is generated: a static proxy class already exists before the program runs, while a dynamic proxy class is generated at runtime. If you need to proxy multiple classes that require the same behavior, writing a static proxy class for each one is tedious; a dynamic proxy can generate the proxy classes dynamically instead.

public Object getProxyInstance() {
    return Proxy.newProxyInstance(
            target.getClass().getClassLoader(),
            target.getClass().getInterfaces(),
            new InvocationHandler() {
                @Override
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    System.out.println("open transaction");
                    Object returnValue = method.invoke(target, args);
                    System.out.println("commit transaction");
                    return returnValue;
                }
            });
}
  • Static proxy usage scenario: The four components communicate with AIDL and AMS across processes
  • Dynamic proxy usage scenario: Retrofit uses dynamic proxies to greatly improve scalability and maintainability.

5. What are the differences between the simple factory, factory method, abstract factory, and Builder patterns?

  • Simple factory pattern: A factory method creates objects of different types (see the sketch after this list).
  • Factory method pattern: A concrete factory class is responsible for creating a concrete object type.
  • Abstract Factory pattern: A concrete factory class is responsible for creating a set of related objects.
  • The Builder pattern: The construction of objects is separated from the presentation, and it focuses more on the creation process of objects.
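A rough sketch contrasting the simple factory and the factory method pattern (all class names here are invented for illustration):

// Simple factory: one factory method creates objects of different types.
interface Product {}
class ProductA implements Product {}
class ProductB implements Product {}

class SimpleFactory {
    static Product create(String type) {
        return "A".equals(type) ? new ProductA() : new ProductB();
    }
}

// Factory method: each concrete factory is responsible for one concrete product type.
interface Factory {
    Product create();
}
class ProductAFactory implements Factory {
    public Product create() { return new ProductA(); }
}
class ProductBFactory implements Factory {
    public Product create() { return new ProductB(); }
}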

6. What are the differences between the decorator pattern and the proxy pattern? What about the bridge pattern?

  • 1. The decorator pattern is an alternative to inheritance; it extends an object's functionality transparently to the client. The proxy pattern provides a proxy object for an object, and the proxy controls access to the original object.
  • 2. The decorator pattern enhances the functionality of the decorated object; the proxy pattern exerts control over the proxied object but does not add functionality to the object itself.
  • 3. The bridge pattern serves a different purpose from proxy and decorator. It is mainly used when a class family varies along multiple dimensions, which would otherwise cause an explosion in the number of subclasses. The bridge pattern isolates the dimensions of change so they can vary independently and then composes them, reducing the number and complexity of subclasses.

7. What is the difference between the facade pattern and the mediator pattern?

The facade pattern focuses on encapsulating a unified high-level interface that makes a subsystem convenient for clients to use. The mediator pattern, on the other hand, avoids direct references among multiple cooperating objects: they interact through a mediator object, which keeps them loosely coupled and able to cope with change.

8. What is the difference between the strategy pattern and the state pattern?

Although the two have the same class structure, their essence differs: the strategy pattern focuses on replacing the whole algorithm, that is, swapping strategies, while the state pattern changes behavior through state transitions.

9. What are the similarities and differences between the adapter pattern, the decorator pattern, and the facade pattern?

What these three patterns have in common is that they all act as an intermediate layer between the client and the real class or system being used, so the client invokes the real class indirectly. They differ, as mentioned above, in their application scenarios and underlying intent.

The main difference between proxy and facade is that a proxy object stands for a single object, while a facade object stands for a subsystem. A proxy's client cannot access the target object directly; access to the single target object is provided only through the proxy. A facade usually provides a simplified, common high-level interface to the components of a subsystem. A proxy is a representative of the original object: all operations on that object go through the representative. An adapter, by contrast, does not invent a representative; it simply wraps the original class for a specific purpose.

Both the facade and the adapter wrap an existing system. A facade defines a new interface, while an adapter reuses an existing one. An adapter makes two existing interfaces work together; a facade provides a more accessible interface to an existing system. If facade is a form of adaptation, then the adapter adapts an object while the facade adapts an entire subsystem; that is, the facade targets objects of much larger granularity.

The proxy pattern provides interfaces that are consistent with the real class. The intent is to use the proxy class to handle the real class and implement some specific service or part of the functionality of the real class. The Facade pattern focuses on simplifying the interface, and the Adapter pattern focuses on transforming the interface.

10. What are common code smells?

1. Code duplication:

Code duplication is probably the most common smell, and removing it is one of the main goals of refactoring. Duplicated code often comes from copy-and-paste programming.

2. The method is too long:

A method should express a single intention, not several intentions bundled together.

3. Classes provide too many functions:

Putting too much responsibility on a class that should only provide a single function.

4. Data clumps:

Certain pieces of data tend to travel in groups, like children: together in the member variables of many classes, together in the parameters of many methods. Such data should probably be extracted into an object of its own.

5. Lazy class:

A class that does not do much work. Maintaining a class carries extra overhead, and if a class has too little responsibility, it should be eliminated.

6. Too many comments:

Often finding yourself writing lots of comments means the code itself is hard to understand. If you feel the need to over-comment, it is time to refactor.

11. Can you give some examples of design patterns used in Android?

The Builder pattern is used to initialize parameters in the AlertDialog and Notification source code:

The Director role is not present in AlertDialog's Builder. In fact, in many scenarios Android does not follow the classic GoF design patterns exactly but adapts them to be easier to use. AlertDialog.Builder acts as the Builder, ConcreteBuilder, and Director at the same time, simplifying the Builder pattern. When a module is relatively stable and not subject to change, it is fine to simplify the classic pattern implementation rather than copy the GoF version mechanically, which would cost the program the elegance of its architecture.
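Typical client code looks roughly like this (a sketch; context is assumed to be an available Context, and the strings are placeholders):

AlertDialog dialog = new AlertDialog.Builder(context)
        .setTitle("Title")
        .setMessage("Message")
        .setPositiveButton("OK", (d, which) -> d.dismiss())
        .create();   // the Builder assembles the dialog's configuration step by step
dialog.show();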

Definition: To separate the construction of a complex object from its representation, so that the same construction process can create different representations. The configuration is isolated from the target class to avoid too many setter methods.

Advantages:

  • 1. Good encapsulation: the client does not need to know the details of the product's internal composition.
  • 2. Builders are independent of each other and easy to extend.

Disadvantages:

  • Extra Builder and Director objects are created, consuming memory.

BaseActivity in everyday development: the abstract factory pattern:

Definition: Provides an interface for creating a set of related or interdependent objects without specifying their concrete classes.

Theme switching application:

For example, our application has two sets of themes: LightTheme and DarkTheme. We can define these two themes by an abstract class or interface, and we have different UI elements under the corresponding theme. For example, Button, TextView, Dialog, ActionBar, etc., each of these UI elements corresponds to a different theme. These UI elements can also be defined by abstract classes or interfaces. The abstract factory pattern is best exemplified by the relationship between abstract themes, concrete themes, abstract UI elements, and concrete UI elements.
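A minimal sketch of that theme example (all class and interface names are invented for illustration):

interface Button {}
interface TextView {}

class LightButton implements Button {}
class LightTextView implements TextView {}
class DarkButton implements Button {}
class DarkTextView implements TextView {}

// The abstract factory: one family of related UI elements per theme.
interface ThemeFactory {
    Button createButton();
    TextView createTextView();
}

class LightThemeFactory implements ThemeFactory {
    public Button createButton() { return new LightButton(); }
    public TextView createTextView() { return new LightTextView(); }
}

class DarkThemeFactory implements ThemeFactory {
    public Button createButton() { return new DarkButton(); }
    public TextView createTextView() { return new DarkTextView(); }
}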

Advantages:

  • Separation of interface and implementation: clients program against interfaces and are decoupled from concrete product implementations, which makes the abstract factory pattern flexible and makes it easy to switch product families.

Disadvantages:

  • Explosive increase in class files.
  • It is not easy to extend with new kinds of products.

OkHttp internally uses the chain of responsibility pattern to run each Interceptor:

Definition: Allows multiple objects to have a chance to process a request, thereby avoiding coupling between the sender and receiver of a request. Chain the objects and pass the request along the chain until an object processes it.

The recursive dispatch of touch events through a ViewGroup works like a chain of responsibility: once the responsible View is found, it owns and consumes the event. This is controlled by the return value of the View's onTouchEvent method: returning false means the current View will not be responsible for the event and will not hold it; returning true means the View holds the event and it is no longer passed down.
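A generic sketch of the chain-of-responsibility idea (not OkHttp's real classes; Handler and the int request are invented for illustration): each handler either consumes the request or passes it to the next one.

abstract class Handler {
    private Handler next;

    Handler setNext(Handler next) {
        this.next = next;
        return next;
    }

    void handle(int request) {
        if (canHandle(request)) {
            process(request);       // this handler consumes the request
        } else if (next != null) {
            next.handle(request);   // otherwise pass it along the chain
        }
    }

    abstract boolean canHandle(int request);
    abstract void process(int request);
}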

Advantages:

Decouple requestor and handler relationships, providing code flexibility.

Disadvantages:

In a traversal of request handlers in the chain, if there are too many handlers, the traversal is bound to affect performance, especially in recursive calls.

RxJava observer mode:

Definition: Defines a one-to-many dependency between objects such that whenever an object changes state, all dependent objects are notified and automatically updated.

ListView/RecyclerView Adapter notifyDataSetChanged method, broadcast, event bus mechanism.

The main purpose of the Observer mode is to decouple objects, completely separating the Observer from the observed, relying only on the Observer and Observable abstractions.
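A minimal observer sketch in plain Java, in the spirit of that decoupling (the names Observer and Observable below are illustrative, not RxJava's API):

import java.util.ArrayList;
import java.util.List;

interface Observer {
    void onChanged(String data);
}

class Observable {
    private final List<Observer> observers = new ArrayList<>();

    void subscribe(Observer o) { observers.add(o); }

    // A state change notifies every registered observer automatically.
    void setData(String data) {
        for (Observer o : observers) {
            o.onChanged(data);
        }
    }
}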

Advantages:

  • There is an abstract coupling between the observer and the observed to respond to business changes.
  • Enhance the flexibility and scalability of the system.

Disadvantages:

  • In Java, notification of messages is executed sequentially by default, and a single observer can affect the overall efficiency of execution. In this case, an asynchronous approach is generally considered.

AIDL: the proxy pattern:

Definition: provides a proxy for other objects to control access to this object.

Static proxy: The class compile file for the proxy class exists before the code runs.

Dynamic proxy: the proxy object is generated dynamically at runtime through reflection; the class to be proxied is decided during execution, and an InvocationHandler does the work on behalf of the proxied class.

Usage scenario:

  • When an object cannot or does not want to be accessed directly or it is difficult to access an object, it can be accessed indirectly through a proxy object. In order to ensure the transparency of client use, the delegate object and the proxy object need to implement the same interface.

Disadvantages:

  • It increases the number of classes.

ListView/RecyclerView/GridView: the adapter pattern:

The adapter pattern transforms the interface of a class into another interface expected by the client, enabling two classes to work together that otherwise would not work together due to interface mismatches.

Usage scenario:

  • The interface is incompatible.
  • You want to create a class that can be reused.
  • A uniform output interface is required, and the type of input is unpredictable.

Advantages:

  • Better reusability: Reusing existing functionality.
  • Better scalability: Extending existing functionality.

Disadvantages:

  • Excessive use of adapters makes a system messy and hard to grasp as a whole. For example, you see interface A being called, but internally it has been adapted to an implementation of interface B; if there are too many such cases, the system becomes a disaster.

Context/ContextImpl: the facade pattern:

The facade pattern provides a high-level interface that makes the subsystem easier to use.

Usage scenario:

  • Provide a simple interface to a complex subsystem.

Advantages:

  • Hiding subsystem details from the client reduces the coupling of the client to the subsystem and enables the client to embrace change.
  • Facade classes encapsulate the subsystem’s interfaces, making the system easier to use.

Disadvantages:

  • The facade class's interface can become bloated.
  • The facade class does not follow the open-closed principle: when business requirements change, the facade class itself may need to be modified directly.

2. Collections Framework (⭐⭐⭐)

1. In the collections framework, what concrete implementation classes do List, Map, and Set have, and how do they differ?

The Java collections framework defines functionality through interfaces in a well-established inheritance hierarchy. Iterable is the root interface from which the collection interfaces derive; it defines traversal of a collection. The Collection interface extends Iterable and is the secondary root of the hierarchy (Map exists independently of it); it defines the common operations of collections.

The class structure of a Java collection looks like this:

List: ordered, allows duplicates; lookups by index are fast; insertions and deletions are slower because data must be moved.

Set: unordered, no duplicates.

Map: key-value pairs; keys are unique, values may repeat.

1. List and Set both extend the Collection interface; Map does not.

2.List features: Elements are placed in order, elements can be repeated;

Set features: elements are stored without a usable order and cannot be repeated.

In addition, List supports for loops (iteration by index) as well as iterators, whereas a Set can only be iterated, because it is unordered and elements cannot be fetched by index.

3.Set and List

Set: Retrieving elements is inefficient, while deleting and inserting elements is efficient. Inserting and deleting elements does not change the position of elements.

List: Like arrays, lists can grow dynamically. Finding elements is efficient, but inserting or deleting elements is inefficient because other elements change position.

4.Map is suitable for storing key-value pairs.

5. Thread-safe collection classes versus non-thread-safe collection classes

LinkedList, ArrayList, and HashSet are non-thread-safe; Vector is thread-safe;

HashMap is non-thread-safe, HashTable is thread-safe;

StringBuilder is non-thread-safe, StringBuffer is thread-safe.

The following is a detailed introduction to the use of these classes:
The difference between ArrayList and LinkedList and the applicable scenarios

ArrayList:

Advantages: ArrayList is based on a dynamic array, so its elements sit at contiguous addresses; once the data is stored, queries are efficient (elements are placed consecutively in memory).

Disadvantages: Because the addresses are contiguous, ArrayList must move data, so insert and delete operations are less efficient.

LinkedList:

Advantages: LinkedList is based on a linked-list structure, so node addresses are arbitrary and no contiguous block of memory is needed. For add and remove operations, LinkedList has the advantage. It is suitable when you mainly operate on the head and tail or insert at specified positions.

Disadvantages: Because LinkedList must traverse node pointers, query performance is low.

Application scenario analysis:

Use ArrayList when you mainly need random access to the data, and LinkedList when you frequently add and delete elements.

How do ArrayList and LinkedList dynamically expand?

ArrayList:

The initial size of the ArrayList is 0 and then becomes 10 when the first element is added. In addition, it will be 1.5 times the current capacity during subsequent expansion.
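The 1.5x growth comes from ArrayList's grow logic, which (slightly simplified from the JDK 8 source) computes the new capacity like this:

// Simplified growth rule from java.util.ArrayList (JDK 8): roughly 1.5x the old capacity.
static int newCapacity(int oldCapacity, int minCapacity) {
    int newCapacity = oldCapacity + (oldCapacity >> 1); // old + old/2
    return (newCapacity < minCapacity) ? minCapacity : newCapacity;
}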

LinkedList:

LinkedList is a doubly linked list; it has no initial capacity and no expansion mechanism.

The difference between ArrayList and Vector and the applicable scenarios

ArrayList has three constructors:

public ArrayList(int initialCapacity)       // Constructs an empty list with the specified initial capacity.
public ArrayList()                          // Constructs an empty list with an initial capacity of ten.
public ArrayList(Collection<? extends E> c) // Constructs a list containing the elements of the specified collection.

Vector has four constructors:

public Vector()                          // Constructs an empty vector with initial capacity 10 and capacity increment 0.
public Vector(int initialCapacity)       // Constructs an empty vector with the specified initial capacity.
public Vector(int initialCapacity, int capacityIncrement) // Constructs an empty vector with the specified initial capacity and capacity increment.
public Vector(Collection<? extends E> c) // Constructs a vector containing the elements of the specified collection.

Both ArrayList and Vector are implemented using arrays. There are four main differences:

1) Vector is thread-safe. Thread safety means that multiple threads accessing the code do not produce indeterminate results. ArrayList is not thread-safe. As you can see from the source code, many methods in the Vector class are modified with synchronized; as a result, Vector is less efficient than ArrayList.

2) Both classes store elements in contiguous linear space, but when the space is insufficient, the two classes grow in different ways.

3)Vector can set the growth factor, but ArrayList cannot.

4) Vector is an older dynamic array; it is thread-synchronized and inefficient, and its use is generally discouraged.

Application scenario:

1. Vector is thread-synchronized and therefore thread-safe, while ArrayList is not synchronized and thus not thread-safe. If thread safety is not a concern, ArrayList is generally more efficient.

2. If the number of elements in the collection will exceed the length of the current array and a large amount of data is involved, Vector has some advantage, because its capacity increment can be configured.

The difference between HashSet and TreeSet and the applicable scenarios

1.TreeSet is implemented as a binary tree (red-black tree data structure). Data in TreeSet is automatically sorted and null values are not allowed.

2. A HashSet is an implementation of a hash table. The data in a HashSet is unordered and can be put into a NULL, but only one NULL.

3. A HashSet requires that the objects put into it implement the hashCode() method; objects are identified by their hash code. Strings with the same content have the same hash code, so duplicate content cannot be stored, but different instances of the same class (with different hash codes) can.

Application scenario analysis:

HashSet is implemented based on hashing and generally outperforms TreeSet. For collections intended for fast lookup, HashSet should normally be used; use TreeSet only when sorted iteration is needed.

The differences between HashMap, TreeMap, and HashTable and their application scenarios

HashMap is not thread safe

HashMap: based on a hash table. Key classes used with a HashMap should clearly define hashCode() and equals() (which can be overridden), and you can tune the initial capacity and load factor to optimize space usage. There are two main ways to resolve hash-table collisions: open addressing and separate chaining; HashMap uses separate chaining (linked lists).

TreeMap: Non-thread-safe implementation based on red-black trees. TreeMap has no tuning preferences because the tree is always in balance.

Application scenario analysis:

HashMap vs. HashTable: HashMap removes HashTable's contains method and provides containsValue() and containsKey() instead. HashTable is synchronized while HashMap is not, so HashMap is more efficient. HashMap allows null keys and values; HashTable does not.

HashMap: Applies to inserting, removing, and locating elements in a Map.

Treemap: For traversing keys in a natural or custom order. (PS: In fact, we use the set very frequently in the process of work, pay attention to and summarize the accumulation, should be very easy to answer in the interview)

2. In principle, how does a Set guarantee that elements are not duplicated?

1) When adding an element to a set, if the specified element does not exist, the element is added successfully.

2) When adding an element to a HashSet, the element's hashcode is computed first and the bucket index is derived from it (in the underlying HashMap this is essentially hash & (table length - 1)). If the bucket at that position is empty, the element is added. If it is not empty, equals is used to compare the elements: if they are equal, the element is not added; if they are not equal, the element is stored in the same bucket as an additional node.

3. What are the main differences between HashMap and HashTable? What data structure is used in the underlying implementation?

The difference between HashMap and HashTable:

Both implement the Map interface, which maps unique keys to specific values. The main differences are:

1) HashMap does not sort its entries and allows one null key and multiple null values, whereas Hashtable does not allow null keys or values;

2) HashMap removes Hashtable's contains method and provides containsValue() and containsKey() instead;

3) Hashtable inherits from Dictionary, while HashMap is an implementation of the Map interface introduced in Java 1.2;

4) The methods of Hashtable are synchronized, while HashMap's are not. Multiple threads accessing a Hashtable do not need to synchronize its methods themselves, whereas HashMap users must provide external synchronization. Hashtable and HashMap use roughly the same hash/rehash algorithm, so there is not much difference in per-operation performance.

The underlying implementation data structure of HashMap and HashTable:

The underlying implementation of both HashMap and Hashtable is an array plus linked lists (prior to JDK 8).

4. How do HashMap and ConcurrentHashMap work, and what about hash()?

How HashMap 1.7 works:

The underlying HashMap is based on an array + linked list, but the implementation is slightly different in JDK1.7 and 1.8.

Load factor:

  • The default capacity given is 16 and the load factor is 0.75. When the Map is in use, data is constantly stored in it. When the number reaches 16 * 0.75 = 12, the current capacity of 16 needs to be expanded. This expansion involves rehash and data replication, which consumes performance.
  • Therefore, it is generally recommended to estimate the size of a HashMap in advance to minimize the performance cost of expansion.

The table is an Entry<K,V>[] array. Entry is a static inner class of HashMap with the member variables key, value, next, and hash (the hash code of the key).

The put method:

  • Determines whether the current array needs to be initialized.
  • If the key is null, the value is stored under the null key.
  • Calculate the Hashcode based on the key.
  • Locate the bucket based on the calculated Hashcode.
  • If the bucket is a linked list, you need to check whether the hashcode and key in the bucket are equal to the passed key. If they are equal, you will override them and return the original value.
  • If the bucket is empty, no data is stored at that position yet, so an Entry object is added there. When addEntry is called to write the Entry, it first checks whether expansion is needed: if so, the capacity is doubled and the current key is rehashed and repositioned. In createEntry, the current bucket head is passed into the new Entry; if the bucket already holds a value, a linked list forms at that position.

The get method:

  • First, calculate the hashcode based on the key, and then locate the specific bucket.
  • Determines whether the location is a linked list.
  • If it is not a linked list, it returns the value based on whether the key and the hashcode of the key are equal.
  • For a linked list, you need to iterate until the key and hashcode are equal and then return a value.
  • If nothing matches, null is returned.

How HashMap 1.8 works:

When hash conflicts are severe, the linked list formed in a bucket grows longer and longer and query efficiency degrades; the time complexity becomes O(n). This lookup path is heavily optimized in 1.8.

TREEIFY_THRESHOLD is used to determine whether a linked list needs to be converted into a red-black tree.

The Entry class is replaced by Node.

The put method:

  • To determine whether the current bucket is empty, an empty bucket needs to be initialized (in the resize method to determine whether to initialize).
  • According to the hashcode of the current key, locate the specific bucket and determine whether it is empty. If it is empty, it indicates that there is no Hash conflict, and directly create a new bucket at the current position.
  • If the current bucket already has a value (a hash conflict), compare the key stored in the bucket and its hashcode with the key being written; if they are equal, assign that node to e, and the value is overwritten and returned uniformly in a later step.
  • If the current bucket is a red-black tree, data is written as a red-black tree.
  • If it is a linked list, you need to encapsulate the current key and value into a new node and write it to the end of the current bucket (forming a linked list).
  • Then determine whether the length of the current list is greater than the preset threshold; if it is, convert the list to a red-black tree.
  • If the same key is found during the traversal, exit the traversal directly.
  • If e != null, an entry with the same key already exists, so its value is overwritten.
  • Finally, determine whether to expand the capacity.

The get method:

  • First hash the key and get the located bucket.
  • Return null if the bucket is empty.
  • Otherwise, determine whether the key in the first position of the bucket (possibly a linked list or a red-black tree) is the key of the query. If yes, return value directly.
  • If the first one does not match, it determines whether its next is a red-black tree or a linked list.
  • The red-black tree returns the value as the tree finds it.
  • Otherwise, the list is traversed to match the returned value.

After changing to a red-black tree, the query efficiency is directly improved to O(logn). But HashMap has its own problems, such as an infinite loop when used in concurrent scenarios:

  • When a HashMap is resized, the resize() method is called; concurrent modification can easily form a circular linked list in a bucket. Then, when looking up a key that does not exist whose computed index happens to be that of the circular list, the lookup never terminates. In 1.7, head insertion during rehashing is what allows a circular list to form under concurrency; once a query lands on that list and the value cannot be found, it loops forever.

How ConcurrentHashMap 1.7 works:

ConcurrentHashMap uses segment locking, where Segment extends ReentrantLock. Unlike HashTable, which synchronizes both put and get, ConcurrentHashMap theoretically supports concurrencyLevel concurrent threads: each writing thread locks only one Segment, and the other Segments are not affected.

The put method:

First, locate the Segment by key, and then put it in the corresponding Segment.

  • Although the value in HashEntry is modified with volatile keywords, concurrency atomicity is not guaranteed, so locking is still required for put operations.

  • The first step is to try to acquire the lock. If this fails, other threads must be competing, then use scanAndLockForPut() to spin the lock:

    Try to spin the lock. If the number of retries reaches MAX_SCAN_RETRIES, change to block lock to ensure success.

  • Locate the table in the current Segment to a HashEntry using the key’s hashcode.

  • The HashEntry chain is traversed: if it is not empty, the passed key is compared with each traversed key, and if they are equal the old value is overwritten.

  • If it is empty, create a HashEntry and add it to the Segment. In addition, the system determines whether to expand the HashEntry.

  • Finally, unlock() is used to unlock the Segment.

The get method:

  • All you need to do is Hash the Key to the specific Segment, and then Hash it to the specific element.
  • Since the value attribute in HashEntry is decorated with volatile keywords, memory visibility is guaranteed, so the latest value is fetched each time.
  • The get method of ConcurrentHashMap is very efficient because the entire process does not require locking.

How ConcurrentHashMap 1.8 works:

1.7 already solves the concurrency problem and supports up to N Segments of concurrent access, but it shares 1.7 HashMap's weakness: linked-list lookups are inefficient. The 1.8 structure is similar to that of the 1.8 HashMap: it abandons the original Segment locking and uses CAS + synchronized to guarantee concurrency safety.

CAS:

If the value in obj equals expect, no other thread has changed the variable, so it is updated to update; if the CAS fails at this step, the operation keeps spinning and retrying.
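The same compare-and-set idea is exposed directly by the JDK's atomic classes; a minimal sketch of a CAS-based counter:

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Spin until the CAS succeeds: only a thread that saw an unchanged value wins.
    public void increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
    }

    public int get() {
        return value.get();
    }
}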

Problems with CAS:

  • An AtomicStampedReference class is currently available in the JDK’s atomic package to address ABA issues. The compareAndSet method of this class first checks to see if the current reference equals the expected reference, and if the current flag equals the expected flag, and if all are equal, sets the reference and flag values atomically to the given update value.
  • If the CAS is unsuccessful, it will spin in place, and if it spins for a long time, it will impose a very high execution overhead on the CPU.

The put method:

  • Calculate the Hashcode based on the key.
  • Determine whether initialization is required.
  • If the Node located by the current key is empty, data can be written at that position; CAS is used to attempt the write, and on failure the thread spins until it succeeds.
  • If the current hashCode == MOVED == -1, you need to expand the capacity.
  • If none of these are met, the synchronized lock is used to write data.
  • Finally, if the number is greater than the TREEIFY_THRESHOLD, the tree is converted to a red-black tree.

The get method:

  • Based on the calculated hashcode addressing, return the value directly if it is on the bucket.
  • If it’s a red-black tree then you get the value as a tree.
  • Otherwise, the linked list is traversed to find the matching value.

1.8 makes major changes to the 1.7 data structure. With red-black trees, query efficiency (O(log n)) is guaranteed, and ReentrantLock is even replaced with synchronized, which shows how well synchronized is optimized in newer JDKs.

Implementation principles of HashMap and ConcurrentHashMap 1.7/1.8

The hash() algorithm is fully resolved

When to expand HashMap:

When an element is added to the container, the current number of elements is checked. If it is greater than or equal to the threshold (the current array length multiplied by the load factor), the map automatically resizes.

What is the algorithm for capacity expansion:

Resizing means recalculating the capacity. As elements are continually added and the internal array can no longer hold them, the HashMap must enlarge the array to accommodate more elements. A Java array cannot grow automatically, so the approach is to replace the existing small array with a new, larger one.

How does Hashmap resolve hash collisions (required)?

In Java, HashMap uses separate chaining (the "zipper method") to handle hashCode collisions. When put or get is called, hashCode is used first to locate the bucket for the key, and equals is used when there is a conflict. HashMap is based on hashing: we store and retrieve objects through put and get. When a key-value pair is passed to put, the key's hashCode() is used to compute the hash and find the bucket in which to store the entry. When retrieving, the correct key-value pair is found via the key's equals() and the value object is returned. When two different keys have the same hashCode, their entries are stored in a linked list at the same bucket position, and equals() is used to find the right key-value pair.
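In JDK 8, the hash spreading and bucket indexing described above look essentially like this in HashMap:

// From java.util.HashMap (JDK 8): mix the high 16 bits into the low 16 bits
// so that small tables still benefit from the whole hash code.
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

// The bucket index is then (table.length - 1) & hash, since the table length is a power of two.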

Why is the underlying Hashmap thread unsafe?
  • When used in concurrent scenarios, infinite loops are likely to occur. When the HashMap is expanded, the resize() method will be called, which means that the concurrent operation here is easy to form a circular list on a bucket. In this case, when obtaining a non-existent key, the index calculated is exactly the index of the circular list, and there will be an infinite loop;
  • In the 1.7 hash collision, the head insertion method is used to form a circular list under concurrent conditions. Once a query falls on the list, when the value cannot be obtained, it will loop indefinitely.

5. How do ArrayMap and SparseArray improve on HashMap?

To store all of its data, a HashMap has to keep expanding, and each expansion involves rehashing, which costs time and wastes memory.

SparseArray:

SparseArray uses less memory than HashMap and performs better in some cases, mainly because it avoids autoboxing of keys (int to Integer). Internally it stores data in two arrays, one for keys and one for values, and to optimize performance it represents the sparse data compactly to save memory. From the source we can see that keys and values are each stored in an array:

private int[] mKeys;
private Object[] mValues;

SparseArray also uses binary search when storing and reading data. When put adds an entry, binary search compares the new key with the existing keys and keeps them sorted in ascending order, so all elements in a SparseArray are stored from the smallest key to the largest. When reading, binary search is again used to locate the element, so retrieval is fast.

ArrayMap:

ArrayMap uses two arrays internally: mHashes stores the hash of each key, and mArray, which is twice the size of mHashes, stores each key and its value in adjacent slots:

mHashes[index] = hash;
mArray[index<<1] = key;
mArray[(index<<1)+1] = value;

When inserting, the hash is computed from the key's hashCode(), the index into mHashes is found by binary search, and the key/value pair is inserted at the corresponding position in mArray; in case of a hash conflict, the entry is inserted adjacent to that index.

Assuming that the data volume is within 1000 levels:

1. If the keys are of type int, use SparseArray, because it avoids autoboxing; for long keys, a LongSparseArray is provided for the same purpose (see the usage sketch after this list).

2. If the key type is any other type, use ArrayMap.
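For reference, a small usage sketch of Android's SparseArray (the keys and values here are arbitrary):

SparseArray<String> titles = new SparseArray<>();
titles.put(1001, "Home");          // int keys, no boxing to Integer
titles.put(1002, "Settings");
String title = titles.get(1001);   // binary search over the sorted key array
titles.remove(1002);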

3. Reflection (⭐⭐⭐)

What is your understanding of Java reflection?

A: Reflection in Java means obtaining the Class object (bytecode) of the class being reflected on at runtime. There are three ways to get it:

1.Class.forName(className)

2. ClassName.class

3. object.getClass()

Then the methods, fields, and constructors in the class are mapped to the corresponding Method, Field, and Constructor objects, which provide rich APIs for us to use.
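A minimal sketch of these APIs (the class com.example.Foo, its method doSomething(String), and its field count are hypothetical):

static void reflectionDemo() throws Exception {
    // Obtain the Class object, then map its members to Method/Field/Constructor.
    Class<?> clazz = Class.forName("com.example.Foo");
    Object instance = clazz.getDeclaredConstructor().newInstance();

    java.lang.reflect.Method method = clazz.getDeclaredMethod("doSomething", String.class);
    method.setAccessible(true);                // allow access even to private members
    Object result = method.invoke(instance, "hello");

    java.lang.reflect.Field field = clazz.getDeclaredField("count");
    field.setAccessible(true);
    field.setInt(instance, 42);
}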

Deep Parsing of Java Reflection (1) – Basics

Java Basics – Reflection (very important)

4. Generics (⭐⭐)

1. Give a brief introduction to generics in Java, type erasure, and related concepts.

Generics are a feature introduced in Java SE 1.5. The essence of generics is parameterized types: the data type being operated on is specified as a parameter. This parameterization can be used in classes, interfaces, and methods, which are then called generic classes, generic interfaces, and generic methods. The benefit of introducing generics into Java is safety and simplicity.

Prior to Java SE 1.5, without generics, "arbitrary" parameters were handled through references of type Object. The downside is that explicit casts are required, and the developer must already know the actual parameter type; a wrong cast is not caught by the compiler and only surfaces as an exception at runtime, which is a safety hazard.

The benefits of generics are that type safety is checked at compile time, and all conversions are automatic and implicit, increasing code reuse.

Generic type arguments can only be class types (including custom classes), not simple types.

2. The same generic type can have multiple versions (because the type parameters differ), and instances of different versions of a generic class are incompatible with each other.

3. A generic type can have more than one type parameter.

4. The extends clause can be used to bound a type parameter, for example <T extends Number>; this is conventionally called a "bounded type".

5. Type parameters can also be wildcard types, such as Class<?> classType = Class.forName("java.lang.String");

Generic erasure and related concepts

Generic information exists only during code compilation and is erased before entering the JVM.

During type erasure, type parameters in a generic class are converted to Object if no upper bound is specified; if an upper bound is specified, they are converted to that bound.

Generics in Java are basically implemented at the compiler level. The generated Java bytecode does not contain the type information in generics. Type arguments that are added when using generics are erased by the compiler at compile time. This process is called type erasure.
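Erasure can be observed directly: at runtime, differently parameterized lists share the same Class object. A small sketch:

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();
        // Both print "class java.util.ArrayList": the type arguments were erased at compile time.
        System.out.println(strings.getClass());
        System.out.println(integers.getClass());
        System.out.println(strings.getClass() == integers.getClass()); // true
    }
}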

Problems caused by type erasure and solutions:

1. Type checking is done first, at compile time, against the compiled objects and the references being passed.

2. Automatic type conversion

3. Conflict and resolution of type erasure and polymorphism

A generic type variable cannot be a basic data type

5. Runtime type query

The use of generics in exceptions

7. Arrays (this is not a problem caused by type erasures)

8. Conflicts after type erasure

9. Problems with generics in static methods and classes

5. Annotations (⭐⭐)

1. What is your understanding of Java annotations?

An annotation is equivalent to a tag; adding an annotation to a program marks it with that tag. The program can then use Java's reflection mechanism to find out whether a class or any of its elements carries the tag and handle each tag accordingly. Tags can be added to packages, classes, fields, methods, method parameters, and local variables.
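A small sketch of defining a runtime annotation and reading it back through reflection (the annotation name Tag and its value are made up):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME) // keep the mark available at runtime
@interface Tag {
    String value();
}

@Tag("example")
class Marked {}

class AnnotationDemo {
    public static void main(String[] args) {
        Tag tag = Marked.class.getAnnotation(Tag.class);
        System.out.println(tag != null ? tag.value() : "not marked");
    }
}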

6. Others (⭐⭐)

1. A Java char is two bytes. How are UTF-8 characters stored?

Familiarity with Java Char and Strings (beginner)
  • Char is 2 bytes and UTF-8 is 1 to 3 bytes.
  • Character set (character set is not encoding) : ASCII and Unicode codes.
  • An emoji character such as 😀 -> 0xD83D 0xDE00 (its UTF-16 surrogate pair; the code point is U+1F600).
Know the mapping and storage details of characters (Intermediate)

Human perception: the character "中" => character set: code point 0x4E2D (a char) => computer storage (bytes): 01001110 = 0x4E, 00101101 = 0x2D.

Encodings: UTF-8, UTF-16.

"中".getBytes("UTF-16") -> fe ff 4e 2d: 4 bytes, where the leading fe ff is just the byte-order mark.

Can you compare other languages by analogy (high level)

Python2 string:

  • byteString = "中"
  • unicodeString = u"中"

Confusing string length

print(len(emoji))

In Java, and in Python 3.2 and below, the length is 2; in Python >= 3.3 it is 1.

Note: Java 9 optimizes the storage of Latin-1 strings, but the string length still != the number of characters.

Conclusion
  • A Java char does not store UTF-8 bytes; it stores UTF-16 code units.
  • Characters in the Unicode basic plane fit in two bytes, such as "中".
  • Characters in the supplementary planes require a pair of chars (a surrogate pair), such as emoji.
  • Unicode is a character set, not an encoding, similar to ASCII.
  • The length of a Java String is not the number of characters.

2. How long can a Java String be?

Have in-depth knowledge of string codec (Intermediate)

Assign to stack:

String longString = "aaa...aaa";

Allocate to the heap:

byte[] bytes = loadFromFile(new File("superLongText.txt"));
String superLongString = new String(bytes);
Have a deep understanding of how strings are stored in memory (advanced)
Do you have sufficient knowledge of Java virtual machine bytecode (advanced)

Source file: *.java

String longString = "aaa...aaa"; // number of bytes <= 65535

Bytecode: *.class

CONSTANT_Utf8_info {
    u1 tag;
    u2 length;        // 0 ~ 65535
    u1 bytes[length]; // up to 65535 bytes
}

The javac compiler has an off-by-one issue here: its check uses < 65535 where it should be <= 65535.

Java String stack allocation

  • The string's final MUTF-8 (modified UTF-8) byte length is limited by the bytecode format to 65535.
  • Latin characters, limited by Javac code, maximum 65534 characters.
  • The final number of bytes for non-Latin characters varies greatly. The maximum number of bytes is 65535.
  • If the runtime method area setting is small, it is also limited by the size of the method area.
Knowledge of Java virtual machine instructions (advanced)

new String(bytes) internally uses a character array; the corresponding VM instruction is newarray. The maximum array length is bounded by Integer.MAX_VALUE, and in practice by MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8.

Java String heap allocation

  • Due to the VM instruction, the theoretical upper limit on the number of characters is Integer.MAX_VALUE.
  • The actual upper limit may be smaller than Integer.MAX_VALUE due to vm implementation restrictions.
  • If the heap memory is small, it is also limited by the heap memory.
Conclusion

Java String literal form

  • The limitation of the bytecode CONSTANT_Utf8_info
  • Limitations of Javac source logic
  • Limits on method area size

The form created on the heap by the Java String runtime

  • Limitations of the Java virtual machine directive newarray
  • Limit on the heap memory size of the Java virtual machine

3. What are the limitations of anonymous inner classes in Java?

Examining the concept and use of anonymous inner classes (elementary)
  • Anonymous inner classes have no human-readable name.
  • They can only extend one parent class or implement one interface.
  • Their compiled name is package.OuterClass$N, where N is the index of the anonymous inner class.
Study of language specifications and horizontal comparison of languages (intermediate)

Anonymous inner class inheritance: in Java, an anonymous inner class cannot extend a class and implement an interface at the same time; only named inner classes can combine inheritance with interfaces. Kotlin's object expressions do support this, for example:

val runnableFoo = object: Foo(),Runnable { 
        override fun run() { 
        
        } 
}
As a pointcut for memory leaks (advanced)

Anonymous inner class constructors (an opportunity to dig into the generated bytecode and explore the nature of the language):

  • Anonymous inner classes hold references to external classes by default, which can cause memory leaks.
  • Generated by the compiler.

The generated constructor's parameter list includes:

  • The outer class instance (when the anonymous class is defined in a non-static context)
  • The outer instance of the parent class (when the parent class is a non-static inner class)
  • The constructor arguments of the parent class (when the parent class has a constructor with a non-empty parameter list)
  • Captured outer variables (when the method body references outer final variables)

Lambda conversion (SAM types; only a single interface type is supported):

If the CallBack is an interface and not an abstract class, it can be converted to a Lambda expression.

CallBack callBack = () -> { 
        ... 
};
Conclusion
  • No human-readable name.
  • Can only extend one parent class or implement one interface.
  • If the parent class is a non-static type, the anonymous class needs to be initialized with an instance of the parent's outer class.
  • If it is defined in a non-static scope, it holds a reference to the outer class instance.
  • It can only capture final variables from the outer scope.
  • When the type being created is an interface with a single method, the anonymous class can be converted to a lambda.
Tips and takeaways.

Pay attention to language version changes:

  • Show a passion for technology
  • Reflect the quality of learning
  • Be professional

4. How are exceptions classified in Java?

Overall classification of exceptions:

Throwable is the root class of the Java exception hierarchy; Exception and Error are its subclasses.

Errors are errors that the program cannot handle, such as OutOfMemoryError and StackOverflowError. When these exceptions occur, the Java Virtual Machine (JVM) typically chooses thread termination.

Exceptions are exceptions that can be handled by the program itself, which are classified into two categories: run-time exceptions and non-run-time exceptions. The program should handle these exceptions as much as possible.

Runtime exceptions are RuntimeException and its subclasses, such as NullPointerException and IndexOutOfBoundsException. They are unchecked exceptions: a program may catch and handle them or leave them unhandled. They are usually caused by logic errors, and a program should avoid them through correct logic as far as possible.

There are two basic principles of exception handling:

1. Try not to catch a general exception such as Exception; catch specific exceptions instead.

2. Do not swallow exceptions.

What is the difference between NoClassDefFoundError and ClassNotFoundException?

ClassNotFoundException occurs for the following reasons: Java supports dynamically loading classes at runtime through reflection. For example, when you use Class.forName to load a class dynamically, the class name is passed as an argument and the class is loaded into JVM memory; if the class is not found on the classpath, a ClassNotFoundException is thrown at runtime. To solve this, make sure the required class and the packages it depends on are on the classpath; a common cause is simply a misspelled class name. Another cause of ClassNotFoundException is when a class has already been loaded by one class loader and another class loader tries to load it dynamically from the same package. This can be avoided by controlling the dynamic class-loading process.

NoClassDefFoundError occurs when the JVM or a ClassLoader instance tries to load a class (through a normal method call or through new) and cannot find the class definition: the class existed at compile time but cannot be found at runtime. This may be due to classes missing during packaging, or a JAR being corrupted or tampered with. The solution is to look for classes that were on the classpath during development but are missing at runtime.

5. Why are Strings designed to be immutable?

String is immutable: when you "modify" a String, the original object in memory is not changed; the reference is simply redirected to a new object. String is declared final and cannot be inherited. A String is essentially backed by a final char[] array, whose reference cannot be reassigned, and String exposes no method to modify that array. Immutability guarantees thread safety and makes the string constant pool possible.

6. Do you know about idempotence in Java?

Idempotence is originally a mathematical concept: f(x) = f(f(x)). For the same system, under the same conditions, one request and repeated identical requests have the same effect on system resources.

One of the most common applications of idempotentality is e-commerce customer payment. Imagine how bad it would be if you failed to pay because of the network and other problems, and then had to pay again. Idempotence is designed to solve problems like this.

Idempotence can be implemented using the Token mechanism.

The core idea is to generate a unique credential (a token) for each operation. A token grants exactly one right of execution at each stage of the operation; once execution succeeds, the result is saved, and repeated requests return the same result.

For example, the order ID on the e-commerce platform is the most suitable token. When a user places an order, it goes through a number of steps, such as order generation, inventory reduction, coupon reduction and so on. When executing each step, it first checks whether the order ID has been executed. For the request that has not been executed, the operation is performed and the result is cached. For the ID that has been executed, the previous execution result is directly returned without any operation. In this way, the problem of repetitive execution of operations can be avoided to the greatest extent, and the cached execution results can also be used for transaction control.

7. Why do anonymous inner classes in Java only have access to final-modified outer variables?

Anonymous inner class

public class TryUsingAnonymousClass {
    public void useMyInterface() {
        final Integer number = 123;
        System.out.println(number);
        MyInterface myInterface = new MyInterface() {
            @Override
            public void doSomething() {
                System.out.println(number);
            }
        };
        myInterface.doSomething();
        System.out.println(number);
    }
}

The compiled result

class TryUsingAnonymousClass$1 implements MyInterface {
    private final TryUsingAnonymousClass this$0;
    private final Integer paramInteger;

    TryUsingAnonymousClass$1(TryUsingAnonymousClass this$0, Integer paramInteger) {
        this.this$0 = this$0;
        this.paramInteger = paramInteger;
    }

    public void doSomething() {
        System.out.println(this.paramInteger);
    }
}

Because an anonymous inner class is ultimately compiled into a separate class, the variables it uses are passed to that class as constructor arguments, such as Integer paramInteger above. If the variable were not final, its copy could be modified inside the anonymous inner class and become inconsistent with the outer paramInteger. To avoid this inconsistency, Java requires that anonymous inner classes can only access final outer variables.

8. How does character encoding work in Java?

Why encoding is needed

The smallest unit of information stored in a computer is the byte (8 bits), which can represent only the values 0 to 255. That range cannot hold all characters, so a larger data type, char, is needed to represent them, and an encoding is required to convert between char and byte.

Common coding methods are as follows:

ASCII: 128 characters in total, represented by the lower 7 bits of a byte; 0-31 are control characters such as newline, carriage return, and delete; 32-126 are printable characters that can be typed on a keyboard and displayed.

GBK: the code range is 8140~FEFE (excluding xx7F), 23940 code points in total, representing 21003 Chinese characters. Its encoding is compatible with GB2312: characters encoded with GB2312 can be decoded with GBK without garbling.

UTF-16: UTF-16 specifies how Unicode characters are accessed in the computer. It uses two-byte code units to represent the Unicode transformation format, a fixed-length representation in which any basic-plane character can be represented in two bytes, i.e. 16 bits. This convenience (one character per two bytes) greatly simplifies string operations and is one of the main reasons Java uses UTF-16 as its in-memory character storage format.

UTF-8: uniformly using two bytes per character (as UTF-16 does) is simple and convenient, but it has a drawback: a large portion of characters that could be stored in one byte now require two, doubling the storage space, and with today's limited network bandwidth this adds unnecessary traffic. UTF-8 therefore uses a variable-length technique, with different code lengths for different coding ranges; a character can consist of 1 to 6 bytes.

The most common areas of coding in Java are character-to-byte conversions, which generally include disk IO and network IO.

The Reader class is the parent class for reading characters in Java I/O, and InputStream is the parent class for reading bytes. The InputStreamReader class is the bridge from bytes to characters: it handles the conversion of bytes to characters during I/O, delegating the actual decoding to a StreamDecoder, and the Charset used for that decoding must be specified by (or defaulted for) the user.
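A small illustration of that byte-to-character bridge (the file name is hypothetical); InputStreamReader is given an explicit Charset so the decoding does not fall back to the platform default:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) throws IOException {
        // InputStreamReader converts the byte stream to characters;
        // the Charset tells the underlying StreamDecoder how to decode the bytes.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream("demo.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}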

What are the differences between String, StringBuffer and StringBuilder?

In terms of execution speed: StringBuilder > StringBuffer > String

Each time String changes a value, a new memory space is created

StringBuilder: Thread unsafe

StringBuffer: thread-safe

Summary of the use of the three:

1. If you only manipulate a small amount of character data, use String.

2. For single-threaded operations on large amounts of character data, use StringBuilder.

3. For multi-threaded operations on large amounts of character data, use StringBuffer.

String is a very basic and important class in the Java language, providing all the basic logic for constructing and managing strings. It is a typical Immutable class. It is declared final and all properties are final. Because of its immutability, actions such as concatenation, clipping, and so on produce new strings. Due to the ubiquity of string operations, the efficiency of these operations often has a significant impact on application performance.

StringBuffer is a class provided to solve the problem mentioned above of concatenation producing too many intermediate objects. We can use its append or insert methods to add characters to the end of an existing sequence or at a specified position. StringBuffer is essentially a thread-safe, modifiable sequence of characters. It guarantees thread safety, but that also carries extra performance overhead, so unless there is a thread-safety requirement, its later sibling StringBuilder is recommended.

StringBuilder, a new addition to Java 1.5, is not substantially different from StringBuffer in terms of capabilities, but it reduces overhead by removing thread-safe elements and is the preferred choice for string concatenation in most cases.

10. What are inner classes? The role of inner classes.

An inner class can have multiple instances, each with its own state information and independent of information about other peripheral objects.

Within a single enclosing class, you can have multiple inner classes implement the same interface in different ways, or inherit from the same class.

The creation of inner class objects does not depend on the creation of outer class objects.

An inner class does not have a confusing “IS-A” relationship; it is a separate entity.

Inner classes provide better encapsulation and are not accessible to other classes except the enclosing class.

11. What are the differences between abstract classes and interfaces?

In common

  • Is the abstraction layer at the top.
  • Can’t be instantiated.
  • Can contain abstract methods that describe the functionality of the class but do not provide a concrete implementation.

The difference between:

  • 1. Non-abstract methods can be written in an abstract class, so subclasses do not have to repeat them, which improves code reuse; this is the advantage of abstract classes. An interface can only declare abstract methods.
  • 2. Multiple inheritance: a class can extend only one direct parent, which may be a concrete or an abstract class, but a class can implement multiple interfaces.
  • 3. Abstract classes can have default method implementations; interfaces have no method implementations at all.
  • 4. A subclass extends an abstract class with the extends keyword; if the subclass is not itself abstract, it must implement all abstract methods declared in the abstract class. A class implements an interface with the implements keyword and must provide implementations for all methods declared in the interface.
  • 5. Constructors: abstract classes can have constructors; interfaces cannot.
  • 6. Difference from ordinary Java classes: an abstract class differs from an ordinary class only in that it cannot be instantiated; an interface is a completely different type.
  • 7. Access modifiers: abstract methods can be public, protected, or default. Interface methods are public by default and cannot use other modifiers.
  • 8. Main method: an abstract class can have a main method, so it can be run; an interface has no main method, so it cannot be run directly.
  • 9. Speed: abstract classes are faster than interfaces; an interface is slightly slower because it takes time to locate the implemented method in the class.
  • 10. Adding new methods: if you add a new method to an abstract class, you can give it a default implementation, so existing code does not need to change. If you add a method to an interface, every class implementing the interface must be changed.

Note that in JDK 1.8 and later, the default method was added to interfaces. This new feature lets us add a non-abstract method implementation to an interface by marking it with the keyword default. A simple example looks like this:

public interface Formula {
    double calculate(int a);

    default double sqrt(int a) {
        return Math.sqrt(a);
    }
}

This feature is also known as extension methods. With this feature, we will be able to conveniently implement the default implementation class of the interface.

12. What does the interface mean?

Specification, extension, callback.

Can a static method of a parent class be overridden by a child class?

No. When a subclass inherits from a parent class and defines a method with the same signature, a non-static method overrides (rewrites) the parent's method, while a static method merely hides the parent's method: if the reference is of the parent type, the hidden parent method is still the one called. As for method overloading, it is something that happens among methods of the same class, so a parent-class method and a subclass method with the same name cannot be regarded as an instance of overloading.

14. The meaning of abstract classes?

Provides a common type for its subclasses, encapsulates duplicate content in subclasses, and defines abstract methods. Subclasses have different implementations but the definitions are consistent.

15, Static inner class, non-static inner class understanding?

Static inner classes: Static inner classes are defined to reduce the depth of the package and to facilitate the use of classes. Static inner classes are designed to be contained within a class, but do not depend on the outer class. They do not use the non-static properties and methods of the outer class, but are defined to facilitate the management of the class structure. When creating a static inner class, no reference to the external class object is required.

Non-static inner class: Holds a reference to an outer class and is free to use all variables and methods of the outer class.
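A minimal sketch contrasting the two (names are illustrative): the static nested class needs no enclosing instance and cannot touch the outer instance state, while the non-static inner class holds an implicit Outer.this reference:

public class Outer {
    private int value = 42;

    // Static nested class: no implicit reference to an Outer instance
    static class StaticNested {
        void show() {
            System.out.println("no access to Outer.value here");
        }
    }

    // Non-static inner class: holds an implicit Outer.this reference
    class Inner {
        void show() {
            System.out.println("outer value = " + value);
        }
    }

    public static void main(String[] args) {
        Outer.StaticNested nested = new Outer.StaticNested(); // no Outer instance needed
        nested.show();

        Outer outer = new Outer();
        Outer.Inner inner = outer.new Inner();                // requires an Outer instance
        inner.show();
    }
}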

16. Do you need to override both equals() and hashCode()? Why?

Consider hash-based collections such as HashMap, Hashtable and HashSet. When we override equals() so that two custom objects compare as equal in our own way, but then use such an object as the key of a HashMap, two objects we consider equal can still end up as two separate entries in the map because their hash values differ. That is why hashCode() must be overridden as well.
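A small illustrative example of that HashMap-key scenario (the Point class is hypothetical):

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // equal objects must produce equal hash codes
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "A");
        // Without the hashCode() override this lookup would usually return null
        System.out.println(map.get(new Point(1, 2))); // "A"
    }
}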

17, What is the relationship between equals and Hashcode?

Hashcode and equals:

  • 1. If two objects are equal, they must have the same hashcode.

  • 2. If the hash values of two objects are equal, they may or may not be equal. (Use equals again)

Why is Java cross-platform?

This is because the compiled code of a Java program is not code that can be run directly by the hardware system, but rather a kind of “intermediate code” — bytecode. Different Java virtual Machines (JVMS) are then installed on different hardware platforms, and the JVMS “translate” the bytecode into code that the corresponding hardware platform can execute. So for the Java programmer, it doesn’t matter what the hardware platform is. So Java is cross-platform.

19. Precise calculation of floating point numbers

Use the BigDecimal class for commercial (monetary) calculations; float and double are only suitable for scientific or engineering calculations because they cannot represent decimal fractions exactly.
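A short illustration of the difference (the values are arbitrary); note that BigDecimal should be built from a String, because new BigDecimal(0.1) would already carry the binary error of the double:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class MoneyDemo {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2); // 0.30000000000000004: binary floating point cannot represent 0.1 exactly

        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));  // 0.3

        BigDecimal price = new BigDecimal("10.00");
        // Division must specify a scale and rounding mode to avoid a non-terminating decimal
        System.out.println(price.divide(new BigDecimal("3"), 2, RoundingMode.HALF_UP)); // 3.33
    }
}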

20, The difference between final, finally, Finalize?

Final can be used to modify classes, methods, and variables, with a different meaning in each case: a final class may not be extended, a final variable may not be modified, and a final method may not be overridden.

Finally is Java’s mechanism for ensuring that key code is executed. We can use try-finally or try-catch-finally to do things like close a JDBC connection or guarantee an unlock.

Finalize is a method of the base class java.lang.Object, designed to allow specific resources to be released before the object is garbage collected. The finalize mechanism is now deprecated and has been marked deprecated since JDK 9. The Java platform is gradually replacing finalize with java.lang.ref.Cleaner. Cleaner is implemented with phantom references, a common "post-mortem" cleanup mechanism: using a phantom reference and a reference queue, we can be sure the object is thoroughly unreachable before doing resource-recycling work such as closing file descriptors (a limited operating-system resource). This is lighter and more reliable than finalize.
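A minimal sketch of that Cleaner-based replacement (the resource and class names are illustrative); the cleanup action lives in a static nested class so it does not hold a reference back to the object being tracked:

import java.lang.ref.Cleaner;

public class ResourceHolder implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup action must not reference the ResourceHolder itself,
    // otherwise the object could never become phantom reachable.
    private static final class State implements Runnable {
        @Override
        public void run() {
            System.out.println("releasing native resource");
        }
    }

    private final Cleaner.Cleanable cleanable;

    public ResourceHolder() {
        this.cleanable = CLEANER.register(this, new State());
    }

    @Override
    public void close() {
        cleanable.clean(); // explicit release; the action runs at most once even if GC also triggers it
    }

    public static void main(String[] args) {
        try (ResourceHolder holder = new ResourceHolder()) {
            System.out.println("using resource");
        } // close() runs the cleanup action here
    }
}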

21. Design intent of static inner classes

There is one big difference between a static inner class and a non-static inner class: while a non-static inner class implicitly holds a reference to the periphery that created it after compilation, a static inner class does not.

The absence of this reference means:

Its creation does not depend on the enclosing class. It cannot use non-static member variables and methods of any enclosing class.

22. Life cycle of objects in Java

In Java, the life cycle of an object consists of the following phases:

1. Created Phase

The JVM loads the class file of the class; at that point all static variables and static code blocks are executed. After that, member variables are assigned (parent first, then child) and the constructor invoked by new is called. Once the object has been created and assigned to a variable, its state switches to the In Use phase.

2. In Use

Object is held by at least one strong reference.

3. Invisible

When an object is invisible, the program can no longer use any strong reference to it, even though such references may still physically exist. Simply put, program execution has already left the scope in which the object was referenced.

4. Unreachable

When an object is unreachable, it is no longer held by any strong reference from program code. Compared with the invisible phase, where the program merely cannot use its references any more, in this phase the object may still be held by special strong references maintained by the JVM itself, such as loaded static variables, running threads, or JNI references. These special references are called "GC roots". In some cases a GC root keeps an object alive and causes a memory leak that prevents it from being reclaimed.

5. Collected

When the garbage collector finds that the object is already in the unreachable phase and is ready to reallocate the object's memory space, the object enters the Collected phase. If the object has overridden the finalize() method, that method will be executed.

6. Finalized

When an object is still unreachable after its finalize() method has been executed, the object enters the Finalized phase, where it waits for the garbage collector to reclaim its space.

7. De-allocated Object Space

When the garbage collector collects or reallocates the memory occupied by the object, the object disappears completely, which is called “object space reallocation phase”.

23. Can static properties and static methods be inherited? Can it be rewritten? And why?

Conclusion: Static properties and static methods can be inherited in Java, but they cannot be overridden but are hidden.

The reason:

1). Static methods and properties belong to the class and are called directly via ClassName.methodName or ClassName.fieldName; no inheritance mechanism is needed to call them. If a subclass defines static methods or properties with the same names, the parent's static methods or properties are "hidden". If you want to call the parent's static methods or properties, use the parent class name directly. So subclasses do inherit static methods and properties, but unlike instance methods and properties, there is this extra notion of "hiding".

2). Polymorphism relies on inheritance, interfaces, overriding and overloading (inheritance and overriding are the most important). Inheritance and overriding make it possible for a parent-class reference to point to objects of different subclasses. An override takes precedence over the parent's method; a hidden method has no such precedence.

3). Static properties, static methods, and non-static properties can be inherited and hidden and cannot be overridden. Therefore, polymorphism cannot be implemented. Non-static methods can be inherited and overridden, thus enabling polymorphism.

24, The equal and Hashcode methods of class object are overridden. Why?

The Java API documentation has the following provisions for the hashCode method (originally from the Java In-depth Parsing book) :

1. If the information used in equals comparisons has not been modified, then whenever hashCode is invoked more than once on the same object during one execution of a Java application, it must consistently return the same integer. This integer need not remain consistent from one execution of the application to another.

2. If two objects are equal through a call to equals, then both objects calling hashCode must return the same integer.

3. If two objects are not equal according to equals, there is no requirement that calling hashCode on them must return different integers. But programmers should be aware that producing distinct hash values for unequal objects may improve the performance of hash tables.

25. What is the difference between equals and hashCode in Java?

The equals method inherited from the superclass Object is by default completely equivalent to '==', but we can override equals to compare objects in the way we want. For example, the String class overrides equals to compare the character sequences rather than the memory addresses. In Java collections, the rule for determining whether two objects are equal is:

1. Check whether the hash codes of the two objects are equal.
2. Determine whether the two objects are equal using the equals operation.

What are the four Java references and usage scenarios?

  • Strong reference: never reclaimed, even when memory runs out. The most common case, such as objects just created with new.
  • SoftReference: reclaimed only when memory is insufficient. Used to implement memory-sensitive caches.
  • WeakReference: reclaimed as soon as the GC discovers it. Used, for example, in map-like structures to refer to large, memory-consuming objects without keeping them alive.
  • PhantomReference: the reference is enqueued into its ReferenceQueue before the referent is reclaimed, and the JVM does not automatically set the referent field to null. (Other reference types are enqueued only after the JVM has reclaimed them.) Used to do cleanup work before an object is reclaimed. A short sketch follows this list.
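A short sketch of the four reference types (array sizes are arbitrary, and the GC behavior noted in the comments is typical rather than guaranteed):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        byte[] strong = new byte[1024];                                     // strong reference

        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);  // cleared only when memory is low
        WeakReference<byte[]> weak = new WeakReference<>(new byte[1024]);  // cleared at the next GC

        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        PhantomReference<byte[]> phantom = new PhantomReference<>(new byte[1024], queue);

        System.gc();
        System.out.println("soft alive:    " + (soft.get() != null));   // typically true
        System.out.println("weak alive:    " + (weak.get() != null));   // typically false after GC
        System.out.println("phantom get(): " + phantom.get());          // always null
        System.out.println("strong alive:  " + (strong != null));       // always true while referenced
    }
}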

27. The object creation process, using Person person = new Person(); as an example.

1). Because new uses Person.class, the Person.class file is first located and loaded into memory;

2). The static blocks in the class, if any, are executed to initialize the Person class;

3). Space is allocated in heap memory and a memory address is assigned;

4). The object's member variables are created in heap memory and given their default initial values;

5). The member variables are then given their explicit initial values;

6). The object's instance initializer blocks run;

7). The constructor matching the new expression runs;

8). Finally, the memory address is assigned to the person variable in stack memory.

28. JAVA constant pool

The Integer cache (-128 ~ 127)

A. When the value is in the range -128 to 127: two Integer objects created with new compare as false with "==" even if they have the same value, but two Integers assigned directly (autoboxed literals) compare as true with "==", which is very similar to String literals.

B. When the value is not between -128 and 127, comparing two Integer objects with "==" gives false even if their values are equal.

C. When an Integer object is compared with an int primitive using "==", the Integer is unboxed and the values are compared, so the result is true whenever the values are equal.

The hash value of an Integer object is the int value itself.

Why -128-127?

In the Integer class there is a static inner class IntegerCache, and inside IntegerCache there is an Integer array that caches the Integer objects whose values range from -128 to 127.
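A quick demonstration of the cache boundary (the commented outputs assume the default cache range of -128..127; the upper bound is configurable on some JVMs):

public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;      // autoboxing uses Integer.valueOf(), which hits IntegerCache
        System.out.println(a == b);    // true: same cached object

        Integer c = 128, d = 128;      // outside -128..127, new objects are created
        System.out.println(c == d);    // false

        Integer e = new Integer(127);  // explicit new always bypasses the cache
        System.out.println(a == e);    // false

        int f = 128;
        System.out.println(c == f);    // true: the Integer is unboxed and values are compared
    }
}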

29. What conventions do I have to follow when overriding equals?

There are general conventions to follow when overriding equals: reflexivity, symmetry, transitivity, consistency, and nonnull

1) Reflexivity

x.equals(x) must return true for any non-null reference value x. — This one is rarely a problem.

2) Symmetry

For any non-null reference values x and y, y.equals(x) is true if and only if x.equals(y) is true.

3) Transitivity

For any non-null reference values x, y and z: if x.equals(y) == true and y.equals(z) == true, then x.equals(z) == true.

4) Consistency

For any non-null reference values x and y, multiple calls to x.equals(y) will consistently return true, or consistently return false, as long as the information used by the equals comparison operation on the object has not been modified.

5) Non-null

For any non-null reference value x, x.equals(null) must return false.

30. The difference between deep copy and shallow copy

31. How the Integer class optimizes int

Java concurrency

1. Thread pool related (⭐⭐⭐)

What is a thread pool and how to use it? Why use thread pools?

A: A thread pool creates multiple thread objects in advance and keeps them in a container. When a task needs to run, instead of creating a new Thread you take one from the pool, which saves the cost of starting threads and improves the efficiency of code execution.

How many thread pools are there in Java?

Java has four thread pools:

The first is newCachedThreadPool

The number of threads is not fixed and supports a maximum value of Integer.MAX_VALUE:

public static ExecutorService newCachedThreadPool() {
    // corePoolSize is 0 and maximumPoolSize is Integer.MAX_VALUE
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
            60L, TimeUnit.SECONDS,
            new SynchronousQueue<Runnable>());
}

Cacheable thread pool:

1. The number of threads is unlimited.
2. If idle threads exist they are reused; if not, new threads are created.
3. This reduces frequent creation/destruction of threads to some extent and lowers system overhead.

The second is newFixedThreadPool

A thread pool with a fixed number of threads:

public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
    // corePoolSize equals maximumPoolSize, and an unbounded blocking queue is passed in
    return new ThreadPoolExecutor(nThreads, nThreads,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(), threadFactory);
}

Fixed length thread pool:

1. It can control the maximum number of concurrent threads (the number of threads executing simultaneously).
2. Excess tasks wait in the queue.

The third: newSingleThreadExecutor

Can be thought of as a FixedThreadPool with a thread count of one:

public static ExecutorService newSingleThreadExecutor() {
    // Only one thread in the pool executes tasks; everything else waits in the blocking queue.
    // The outer FinalizableDelegatedExecutorService wrapper implements finalize(),
    // so the pool is shut down when the JVM garbage collects it.
    return new FinalizableDelegatedExecutorService(
            new ThreadPoolExecutor(1, 1,
                    0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<Runnable>()));
}

Single-threaded thread pools:

1. One and only one worker thread executes the tasks.
2. All tasks are executed in the specified order, following the in-queue and out-queue rules of the queue.

Fourth: newScheduledThreadPool.

Supports periodic execution of tasks at specified periods:

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    return new ScheduledThreadPoolExecutor(corePoolSize);
}

Note: the first three thread pools are ThreadPoolExecutor instances with different configurations; the last one is a ScheduledThreadPoolExecutor instance.

3. How do thread pools work?

From a data-structure perspective, a thread pool is composed primarily of a BlockingQueue (the work queue) and a HashSet (the set of worker threads). From the point of view of task submission, the mechanism a thread pool presents to an external user looks like this:

1. If the number of running threads < corePoolSize, create a core thread to execute the task without queuing.
2. If the number of running threads >= corePoolSize, put the task into the blocking queue.
3. If the queue is full and the number of running threads < maximumPoolSize, create a new non-core thread to execute the task.
4. If the queue is full and the number of running threads >= maximumPoolSize, the thread pool calls the Handler's method to reject the submission.

Understanding memory: 1-2-3-4 correspondence (core threads -> block queue -> non-core threads -> Handler reject commit).

Thread reuse for thread pools:

This is where you need to dig into the source of addWorker(): it is the key to creating new threads and a key entry point for thread reuse. It eventually reaches runWorker(), which can obtain tasks in two ways:

  • firstTask: the first Runnable specified when the worker was created, which is run directly in the Worker thread. A null value means there is no initial task (or it has already been executed).
  • getTask(): essentially an infinite loop in which the worker thread keeps looping until it can fetch a Runnable from the workQueue, or returns after a timeout.

In other words, a worker thread does not only execute the firstTask specified at creation time; it also keeps fetching tasks from the task queue through getTask() and executing them, and core threads block without a time limit while waiting, which keeps the thread alive.
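A minimal sketch of constructing a ThreadPoolExecutor directly, with the parameters mapped to the submission flow described above (the specific numbers and the rejection policy are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        // Flow: core threads -> blocking queue -> non-core threads (up to max) -> rejection handler
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                          // corePoolSize
                4,                                          // maximumPoolSize
                60L, TimeUnit.SECONDS,                      // idle timeout for non-core threads
                new ArrayBlockingQueue<>(8),                // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy

        for (int i = 0; i < 20; i++) {
            final int task = i;
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + " runs task " + task));
        }
        pool.shutdown();
    }
}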

A semaphore

Semaphore can be used for interprocess synchronization or thread synchronization within the same process.

Can be used to ensure that two or more key code segments are not called concurrently. Before entering a critical code segment, the thread must acquire a semaphore; Once the critical code snippet is complete, the thread must release the semaphore. Other threads that want to enter this critical code segment must wait until the first thread releases the semaphore.
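A small sketch of using Semaphore to cap how many threads enter a critical section at once (the permit count and task count are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore permits = new Semaphore(3);       // at most 3 threads inside the critical section
        ExecutorService pool = Executors.newFixedThreadPool(10);

        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                try {
                    permits.acquire();              // block until a permit is available
                    System.out.println(Thread.currentThread().getName() + " working");
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release();              // always give the permit back
                }
            });
        }
        pool.shutdown();
    }
}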

What kinds of work queues do thread pools have?

1, ArrayBlockingQueue

Is an array-based bounded blocking queue that sorts elements on a FIFO (first in, first out) basis.

2, LinkedBlockingQueue

A blocking queue based on a linked-list structure that sorts elements by FIFO (first in, first out) and generally has a higher throughput than ArrayBlockingQueue. The static factory methods Executors.newFixedThreadPool() and Executors.newSingleThreadExecutor() use this queue.

3, SynchronousQueue

A blocking queue that does not store elements. Each insert operation must wait until another thread performs a corresponding remove operation, otherwise the insert stays blocked. Its throughput is usually higher than LinkedBlockingQueue. The static factory method Executors.newCachedThreadPool() uses this queue.

4, PriorityBlockingQueue

An unbounded blocking queue that supports priority ordering.

5. How to understand unbounded queue and bounded queue?

Bounded queue

1. If the current poolSize is smaller than corePoolSize, the submitted Runnable task is executed immediately on a newly created thread.
2. When the number of submitted tasks exceeds corePoolSize, the current Runnable is put into the blocking queue.
3. When the bounded queue is full and poolSize is still smaller than maximumPoolSize, a new thread is created to handle the Runnable task immediately.
4. If step 3 cannot handle it either, the rejection policy is executed.

Unbounded queue

Compared with a bounded queue, an unbounded task queue does not have a task enqueueing failure unless the system resources are exhausted. When a new task arrives and the number of threads in the system is smaller than the corePoolSize, a new thread is created to execute the task. When the corePoolSize is reached, it will not continue to increase. If new tasks are added later without idle thread resources, the tasks will be directly queued and wait. If the speed of task creation and processing varies greatly, the unbounded queue will continue to grow rapidly until it runs out of system memory. When the task cache queue of the thread pool is full and the number of threads in the thread pool reaches maximumPoolSize, a task rejection policy will be adopted if more tasks arrive.

6. What are the typical implementations of thread-safe queues?

The thread-safe queues provided by Java can be divided into blocking queues and non-blocking queues. Blocking queues are typified by the BlockingQueue implementations (such as ArrayBlockingQueue and LinkedBlockingQueue), and non-blocking queues by ConcurrentLinkedQueue.

For a BlockingQueue, use the blocking put(e) and take() methods. ConcurrentLinkedQueue is an unbounded, thread-safe, non-blocking queue based on linked nodes.
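A minimal producer/consumer sketch using a bounded BlockingQueue (the capacity and element counts are arbitrary); put() blocks when the queue is full and take() blocks when it is empty:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(5);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i);                 // blocks when the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    Integer value = queue.take(); // blocks when the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}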

Synchronized, volatile, and Lock(⭐⭐⭐)

1. Synchronized?

Synchronized code blocks are implemented by a pair of monitorenter/monitorexit directives. Monitor objects are the basic implementation of synchronization. Synchronized methods, on the other hand, use the ACC_SYNCHRONIZED access flag to tell whether a method is declared a synchronized method and therefore to make the corresponding synchronized call.

Prior to Java 6, the implementation of Monitor relied entirely on the operating system's internal mutexes, and because of the need to switch from user mode to kernel mode, synchronization was an undifferentiated heavyweight operation.

In modern (Oracle) JDKS, the JVM has significantly improved its performance by providing three different Implementations of Monitor, commonly known as Biased Locking, lightweight Locking, and heavyweight Locking.

Lock escalation/degradation is the mechanism by which the JVM optimizes synchronized execution, and when the JVM detects different race conditions, it automatically switches to the appropriate lock implementation. This switch is called lock escalation/degradation.

When no contention is present, biased locking is used by default. The JVM uses a CAS operation to set the thread ID in the Mark Word section of the object header, indicating that the object is biased toward the current thread, so no true mutex is involved. This is based on the assumption that in many application scenarios most objects are locked by at most one thread during their lifetime, and using biased locking reduces the uncontended overhead.

If another thread tries to lock an object that has already been biased, the JVM needs to revoke the biased lock and switch to a lightweight lock implementation. The lightweight lock relies on a CAS operation on the Mark Word to attempt to acquire the lock. If that succeeds, the normal lightweight lock is used; otherwise it is further upgraded to a heavyweight lock (a spin attempt may be made first, and only if that fails is the upgrade to a heavyweight lock performed).

I have noticed that there is an argument that Java does not degrade locks. In fact, as far as I know, lock degradation does occur, and when the JVM enters SafePoint, it checks for idle monitors and tries to degrade them.

A brief introduction to the optimization of Synchronized mechanism, including spin lock, bias lock, lightweight lock, heavyweight lock?

Spin lock:

Thread spinning simply means keeping the CPU busy with useless work, for example executing a few empty loops or a few empty assembly instructions, in order to hold on to the CPU while waiting for a chance to acquire the lock. If the spin time is too long, overall performance suffers; if it is too short, the goal of delaying blocking is not achieved.

Biased locking

Biased locking means that once a thread has acquired the monitor object for the first time, the monitor is "biased" toward that thread, and subsequent acquisitions can avoid CAS operations. In other words, if the check shows the object is still biased toward the current thread, there is no need to go through the locking/unlocking process again.

Lightweight lock:

Lightweight locks are upgraded from biased locks, which run when one thread enters a synchronized block and are upgraded to lightweight locks when a second thread joins the lock race.

Heavyweight lock

A heavyweight lock is also called the object's monitor inside the JVM. It is similar to a mutex in C: besides providing the mutual-exclusion function of a mutex (0|1), it is also responsible for implementing semaphore-like behavior. That means it contains at least a queue of threads competing for the lock and a wait queue of blocked threads; the former provides mutual exclusion and the latter provides thread synchronization.

Synchronized: class lock, object lock, and reentrant lock.

Synchronized static methods acquire the class lock (the bytecode file object of the class).

Synchronized refers to a common method or code block that acquires an object lock. This mechanism ensures that at most one of the synchronized member functions is in the executable state for each class instance at a time, thus effectively avoiding access conflicts of class member variables.

A thread that has acquired a class lock and a thread that has acquired an object lock do not conflict!

public class Widget {
    public synchronized void doSomething() {
        // ...
    }
}

public class LoggingWidget extends Widget {
    public synchronized void doSomething() {
        System.out.println(toString() + ": calling doSomething");
        super.doSomething(); // re-acquires a lock the thread already holds
    }
}

This is because locks are held by threads, not calls.

Thread A already holds the lock on the LoggingWidget instance, so when super.doSomething() is called it can simply acquire the same lock again.

This is the reentrancy of a built-in lock.

4. The difference between wait and sleep and the notify process.

The difference between wait and sleep

The biggest difference is that while wait releases the lock, sleep always holds it. Wait is usually used for interthread interaction, and sleep is usually used to pause execution.

  • First, keep this distinction in mind: “Sleep is a method of Thread,wait is a method defined in Object.” Although both methods affect the execution behavior of the thread, they are fundamentally different.
  • Thread.sleep does not cause a change in lock behavior. If the current Thread owns the lock, thread. sleep does not cause the Thread to release the lock. If it helps you remember, simply assume that the lock related methods are defined in the Object class, so calling Thread.sleep does not affect the lock related behavior.
  • Both thread. sleep and object. wait pause the current Thread. Either way, the suspended Thread indicates that it no longer needs the CPU’s execution time for the time being. The OS allocates execution time to other threads. The difference is that after a wait is called, another thread must execute the notify/notifyAll to regain the CPU execution time.
  • For thread states, refer to the definition of Thread.State. A newly created thread that has not yet been started (start() has not been called) is in the Thread.State.NEW state.
  • Thread.State.BLOCKED indicates that a thread is trying to acquire a lock and is forced to suspend execution because the lock cannot be acquired until it is released by another thread. In the BLOCKED state, the OS scheduling mechanism must decide which thread acquires the lock next; this lock contention is a time-consuming operation in any case.

Notify running procedure

When thread A (consumer) calls wait(), thread A releases the lock and enters the wait state, joining the wait queue for the lock object. After thread B (producer) acquires the lock, the notify method is called to notify the wait queue of the lock object, causing thread A to enter the blocking queue from the wait queue. After thread A enters the blocking queue until thread B releases the lock, thread A races to acquire the lock and continues execution from the wait() method.
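A minimal wait/notify sketch of that interaction (the flag and lock object are illustrative); note that wait() sits in a loop to guard against spurious wakeups:

public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        Thread consumer = new Thread(() -> {
            synchronized (LOCK) {
                while (!ready) {          // re-check the condition after every wakeup
                    try {
                        LOCK.wait();      // releases LOCK and joins its wait set
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("consumer resumed");
            }
        });
        consumer.start();

        synchronized (LOCK) {             // producer side
            ready = true;
            LOCK.notify();                // moves one waiting thread back to compete for LOCK
        }
    }
}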

5. Do you know the difference between synchronized and Lock? Why does Lock perform better?

Lock's underlying implementation is mainly volatile + CAS. Comparing the two:

  • Level: synchronized is a Java keyword, implemented at the JVM level; Lock is a class.
  • Lock release: with synchronized, the JVM releases the lock even if the thread exits with an exception; with Lock, the lock must be released in finally, otherwise a deadlock can occur.
  • Lock acquisition: with synchronized, if thread A holds the lock and blocks, thread B keeps waiting; Lock offers several acquisition modes, and a thread can try to acquire the lock without having to wait indefinitely.
  • Lock state: with synchronized it cannot be determined whether the lock is held; with Lock it can.
  • Lock type: synchronized is reentrant, non-interruptible and non-fair; Lock is reentrant, interruptible, and can be either fair or non-fair.
  • Performance: synchronized suits a small amount of synchronization; Lock suits heavy synchronization.

The underlying implementation of Lock (ReentrantLock) is mainly Volatile + CAS (optimistic Lock), while Synchronized is a pessimistic Lock, which is more performance expensive. However, after JDK1.6, the Synchronized mechanism has been optimized to include bias locks, lightweight locks, spin locks, and heavyweight locks, which may outperform the Lock mechanism in the case of low concurrency. Therefore, it is recommended to use the synchronized keyword when the number of concurrent requests is not large.

6. Principles of volatile.

In the "core theory" part of Java concurrent programming we have already mentioned visibility, ordering, and atomicity. These problems can usually be solved with the synchronized keyword, but if you know anything about how synchronized works, you know it is a fairly heavyweight operation that can have a significant impact on system performance, so we usually avoid it when another solution exists.

The volatile keyword is another solution Java provides for visibility and ordering problems. Regarding atomicity, one important and often misunderstood point: a single read or write of a volatile variable (including long and double) is guaranteed to be atomic, but a compound operation such as i++ is not, because i++ is essentially a read followed by a write.

Volatile is also a synchronization mechanism, but a very lightweight one; it does not provide mutual exclusion.

What does volatile do?

Volatile has two key semantics:

Ensure that volatile variables are visible to all threads

Instruction reordering is forbidden

To understand the volatile keyword, we need to start with Java’s threading model. As shown in the figure:

The Java memory model specifies that all variables (instance fields, static fields, and so on, but not local variables or method parameters, which are thread-private and never contended) are stored in main memory. Each thread has its own working memory, which holds copies of the main-memory variables the thread uses. A thread can only operate on variables in its working memory and cannot read or write main memory directly; nor can threads access each other's working memory. Main memory is therefore the medium through which threads pass values.

Let’s understand the first sentence:

Ensure that volatile variables are visible to all threads

How do I guarantee visibility?

Volatile variables are forced to be written back to main memory after being modified in working memory, and are forced to be flushed from main memory when used by other threads, thus ensuring consistency.

A common misconception is that a volatile variable is guaranteed to be visible to all threads:

  • Since volatile variables are consistent across threads, operations based on volatile variables are safe when multiple threads are running concurrently.

The first part of the sentence is true, but the second part is wrong, because it ignores whether the operation on the variable is atomic.

Here’s an example:

private volatile int start = 0;

private void volatileKeyword() {
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            for (int i = 0; i < 10; i++) {
                start++;
            }
        }
    };
    for (int i = 0; i < 10; i++) {
        Thread thread = new Thread(runnable);
        thread.start();
    }
    Log.d(TAG, "start = " + start);
}

This code starts 10 threads, incrementing 10 times each, which should result in 100, but it doesn’t.

Why is that?

Take a closer look at start++. It is not actually an atomic operation; it consists of two simple steps:

1. Read the value of start; thanks to volatile, this read sees the latest value.

2. Increment it and write it back. But another thread may have already incremented start in the meantime, so a stale, smaller value can be written back to main memory. Volatile therefore only guarantees visibility, and we still need a lock to guarantee atomicity unless the following conditions hold:

  • The result does not depend on the current value of the variable, or only a single thread ever modifies it (in other words: either the operation is atomic, or the value is written by one thread only).
  • The variable does not need to participate in invariants together with other state variables. For example, a boolean flag used to tell a thread to stop is an ideal use of volatile; see the sketch after this list.
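A minimal sketch of that stop-flag case (the names and sleep duration are illustrative); a single writer thread flips the flag, so volatile alone is enough here:

public class StopFlagDemo {
    // volatile makes the update by main immediately visible to the worker thread
    private static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // do some work
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        stopped = true;   // single write from a single thread: no lock needed
        worker.join();
    }
}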

Let’s understand the second sentence.

Instruction reordering is forbidden

What is instruction reordering?

  • Instruction reordering means instructions execute out of program order: when conditions permit, later instructions whose inputs are already available run first, avoiding the wait caused by fetching the data an earlier instruction needs, and thus improving execution efficiency through out-of-order execution.

  • When assigning to a volatile variable, a memory barrier is added before the assignment, and reordering cannot move subsequent instructions to a position before the memory barrier.

7. The use and difference of synchronized and volatile keywords

Volatile

1) Ensure visibility when different threads operate on the variable. When one thread changes the value of a variable, the new value is immediately visible to other threads.

2) Instruction reordering is prohibited.

role

Volatile essentially tells the JVM that the value of the current variable in the register (working memory) is uncertain and needs to be read from main memory. Synchronized locks the current variable so that only the current thread can access it, blocking all other threads.

The difference between

1. Volatile can only be used at the variable level. Synchronized can be used at the variable, method, and class levels.

2. Volatile only allows visibility of changes to variables, but does not guarantee atomicity; Synchronized guarantees visibility and atomicity of changes to variables.

3. Volatile does not block threads; Synchronized may cause a thread to block.

4. Volatile variables are not optimized by the compiler; Variables with the synchronized tag can be optimized by the compiler.

8. The internal implementation of ReentrantLock.

ReentrantLock is built on AbstractQueuedSynchronizer (AQS for short), which is the core of java.util.concurrent: CountDownLatch, FutureTask, Semaphore, ReentrantLock and others all have an inner class that is a subclass of this abstract class. AQS is based on a FIFO queue, so there must be a Node type; a Node has two modes, shared and exclusive. AQS is the foundation of many synchronization components in the java.util.concurrent package: it uses an int state variable and a FIFO queue to manage acquisition of the shared resource, thread queuing, and so on. AQS is a low-level framework that uses the template method pattern. It defines the general and relatively complex skeleton logic, such as thread queuing, blocking and waking, extracting these complex but essentially common parts that users building synchronization components do not need to care about. The user simply overrides a few specified simple methods (which essentially get and release the shared state variable). Subclasses of AQS generally only need to override tryAcquire(int arg) and tryRelease(int arg).

ReentrantLock processing logic:

It defines three important static inner classes, Sync, NonFairSync, and FairSync. Sync, the common synchronization component in ReentrantLock, inherits AQS (to take advantage of the complex top-level logic of AQS, thread queuing, blocking, waking, etc.). NonFairSync and FairSync inherit from Sync, invoke Sync’s common logic, and then do their own specific logic internally (fair or unfair).

Here’s how the lock() method is implemented:

NonFairSync (non-fair reentrant lock)

1. Obtain the state value first. If the value is 0, it means that no thread has obtained the resource at this time.

2. If the value of state is greater than 0, some thread already holds the resource; check whether that thread is the current thread. If it is, accumulate the state value, which is exactly where reentrancy is implemented.

3. In other cases, the lock fails to be obtained.

FairSync (Fair reentrant lock)

As you can see, the general logic of the fair lock and the non-fair lock is the same; the difference is the extra !hasQueuedPredecessors() check: even if state is 0, the thread cannot simply grab the lock. It must first check whether other threads are already waiting in the queue; only if there are none does it try to acquire. Otherwise it returns false and acquisition fails.

Finally, ReentrantLock’s tryRelease() method is implemented:

If the state value drops to 0, the current thread has completely released the lock and tryRelease() returns true, so the upper-layer AQS knows the resource is free. If it is not 0, the thread still holds the resource (it has only released one level of reentrancy) and false is returned.

ReentrantLock is a reentrant and fair mutex. Its design is based on the AQS framework. The reentrant and fair mutex implementation logic is not difficult to understand. As for fairness, when attempting to acquire a lock, there is an additional judgment: whether there is a thread waiting in the synchronization queue before the request, if so, to wait; If not, it is allowed to preempt.
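A small usage sketch of ReentrantLock reflecting the points above (the counter class is illustrative): fair mode is chosen in the constructor, unlock() must sit in finally, and tryLock() shows the non-blocking acquisition path:

import java.util.concurrent.locks.ReentrantLock;

public class CounterWithLock {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair lock
    private int count;

    public void increment() {
        lock.lock();               // queued through AQS; fair mode checks the queue first
        try {
            count++;               // reentrant: this thread could call lock() again here
        } finally {
            lock.unlock();         // must release in finally, unlike synchronized
        }
    }

    public boolean tryIncrement() {
        if (lock.tryLock()) {      // attempt to acquire immediately, without waiting
            try {
                count++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }

    public int get() {
        return count;
    }
}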

9, synchronized and ReentrantLock?

Synchronized is an implementation of mutually exclusive synchronization.

Synchronized: when a thread accesses a method or block of code marked synchronized, it acquires the lock on the object, and no other thread can access that method or block until the first thread finishes executing it and releases the lock.

We’ve already talked about the volatile keyword, but here’s an example that combines the use of volatile and synchronized.

Here’s an example:

public class Singleton {
    // volatile guarantees visibility and forbids instruction reordering,
    // which is what makes double-checked locking safe
    private volatile static Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            // synchronized on Singleton.class: a class-level (global) lock
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

This is a classic DCL singleton.

Looking at the compiled bytecode, you can see that a synchronized code block is bracketed by a monitorenter and a monitorexit instruction. Both bytecodes need to specify the object to lock and unlock.

About locking and unlocking objects:

Synchronized code block: the scope is the code block itself, and the lock is the object given in the parentheses.

Synchronized instance method: the scope is the entire method, and the lock is the object on which the method is called.

Synchronized static method: the scope is the entire static method, and the lock is the Class object of the class, shared by all its instances.

synchronized(this): the scope is all the code in the object marked synchronized, and the lock is the object itself.

synchronized(ClassName.class): the scope is all the parts of the class marked synchronized, and the lock is the Class object of that class.

Synchronized (this) adds an object lock, and synchronized(classname.class) adds a class lock. The differences between them are as follows:

  • Object lock: every object in Java contains a mutex lock, which is automatically acquired and released by the JVM. A thread entering a synchronized instance method acquires the lock of the object; if another thread already holds that lock, the thread waits. The JVM automatically releases the lock when the synchronized method returns normally or terminates by throwing an exception. This is one of the benefits of synchronized: even when a method throws an exception, the lock is still released automatically by the JVM.

  • Class lock: object locks control synchronization between instance methods, while class locks control synchronization between static synchronized methods (or mutual exclusion on static variables). A class lock is only a concept, not a separate kind of thing; it helps us understand the difference between locking instance methods and locking static methods. As we all know, a Java class can have many objects but only one Class object, meaning different instances of the class share that Class object. The Class object is really just a Java object, only a little special, and since every Java object has a mutex, the static synchronized methods of a class are guarded by the lock of its Class object. There are several ways to obtain a class's Class object; the simplest is MyClass.class. Class locks and object locks are not the same thing: one locks the Class object of the class, the other locks an instance of the class. So while one thread is inside a static synchronized method, another thread can still enter a synchronized instance method of an object of that class, and vice versa, because the locks they require are different.

Iii. Others (⭐⭐⭐)

1. How to use multithreading?

Is it necessarily efficient to use multithreading? Sometimes using multithreading is not for efficiency, but to allow the CPU to process multiple events simultaneously.

  • In order not to block the main thread, start other threads to do things, such as time-consuming operations in the APP are not done in the UI thread.

  • Implement faster applications where the main thread is dedicated to listening for user requests and the child thread is used to process the user requests for high throughput. It seems that in this case, multithreading may not be very efficient. Multithreading in this case is to allow multiple pieces of data to be processed in parallel without waiting. For example, JavaWeb’s main thread is dedicated to listening for the user’s HTTP requests, but it starts a child thread to process the user’s HTTP requests.

  • A low priority service that you don’t do on a regular basis. For example, the Jvm garbage collection.

  • A task that is time-consuming but does not consume CPU operation time can be significantly more efficient by opening a thread. Like reading a file and then processing it. Disk IO is a time-consuming, but cpu-free effort. So one thread can read the data and one thread can process the data. It’s definitely more efficient than one thread reading the data and then processing it. This is because the two-thread time takes full advantage of the CPU idle time waiting for disk IO.

2. Understanding CopyOnWriteArrayList.

What is copy-on-write?

In the computer, it means that when you want to modify a piece of memory, instead of writing to the old memory, you copy it and write to the new memory; when the write is finished, the pointer that used to point to the old memory is switched to the new memory, and the old memory can then be reclaimed.

Principle:

CopyOnWriteArrayList is a thread-safe variant of ArrayList. Its underlying implementation copies the container, adds the new data to the new container, and then points the reference of the old container at the new one. While the data is being added, other threads that want to read still read from the old container.

Advantages and disadvantages:

Advantages:

1. It guarantees data consistency and integrity. Why? Because writes are locked, concurrent modifications do not corrupt the data.

2. It solves the problem of iterating over collections such as ArrayList and Vector from multiple threads. Remember, Vector is "thread safe", but its synchronized methods alone do not solve the iteration problem at all!

Disadvantages:

1. Memory footprint: obviously, two arrays exist in memory at the same time during a write. If the application holds a lot of data and the objects are large, the memory cost is significant; in such cases a container like ConcurrentHashMap may be a better choice.

2. Data consistency: The CopyOnWrite container can only guarantee the final consistency of data, but not the real-time consistency of data. So if you want to write something that you can read immediately, don’t use the CopyOnWrite container.

Usage scenario:

1, read more than write (whitelist, blacklist, commodity access and update scenarios), why? Because when you write it, you copy the new set.

2, the collection is not large, why? Because when you write it, you copy the new set.

3. The real-time requirement is not high, why? Because old collection data may be read.
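A minimal sketch of the read-mostly whitelist scenario (the names are illustrative); iteration works on a snapshot of the array, so concurrent writes never throw ConcurrentModificationException, but they are also not visible to an iterator that is already running:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class WhiteListDemo {
    // read-mostly data: reads are lock-free, each write copies the underlying array
    private static final List<String> WHITE_LIST = new CopyOnWriteArrayList<>();

    public static void main(String[] args) {
        WHITE_LIST.add("user-a");
        WHITE_LIST.add("user-b");

        for (String user : WHITE_LIST) {
            WHITE_LIST.add("user-c");  // safe during iteration, but invisible to this iterator's snapshot
            System.out.println(user);
        }
        System.out.println("final size = " + WHITE_LIST.size());
    }
}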

3, ConcurrentHashMap lock mechanism is what?

Java7 ConcurrentHashMap

ConcurrentHashMap is a thread-safe and efficient hash table solution, thanks especially to its "segmented locking" scheme, which offers a significant performance improvement over HashTable's whole-table locking. The reason HashTable is inefficient in a highly contended concurrent environment is that all threads accessing it must compete for the same lock. If the container has multiple locks, each guarding a portion of its data, then threads accessing different segments do not contend for the same lock, which effectively improves concurrent access. This is the lock segmentation technique used by ConcurrentHashMap: data is divided into segments, each with its own lock, so while one thread holds the lock of one segment, data in the other segments can still be accessed by other threads.

ConcurrentHashMap is an array of Segments. A Segment acquires its lock by inheriting ReentrantLock, so each operation that needs locking locks only one Segment; as long as each Segment is thread-safe, the whole map is thread-safe.

ConcurrencyLevel: concurrencyLevel, concurrency, and number of segments. The default value is 16, which means that ConcurrentHashMap has 16 Segments. In theory, a maximum of 16 concurrent write threads can be supported at this time, as long as they are distributed on different Segments. This value can be set to another value during initialization, but once initialized, it cannot be expanded. Each Segment is like a HashMap, but it has to be thread-safe, so it’s a little more cumbersome to handle.

Initialize the slot: ensureSegment

The first slot segment[0] is initialized when ConcurrentHashMap is initialized, and the other slots are initialized when the first value is inserted. Concurrent operations are controlled using CAS.

Java8 ConcurrentHashMap

Instead of the original Segment lock, CAS + synchronized is adopted to ensure concurrency security. The structure is basically the same as Java8’s HashMap (array + linked list + red-black tree), but it is thread-safe, so the source code is a bit more complex. 1.8 has made major changes in the data structure of 1.7. After adopting red-black tree, the query efficiency (O(logn)) can be guaranteed, and even synchronized instead of ReentrantLock. This shows that the synchronized optimization in the new JDK is in place.

4. Four conditions for thread deadlock?

How do deadlocks occur and how can they be avoided?

When thread A holds exclusive lock A and attempts to acquire exclusive lock B, thread B holds exclusive lock B and attempts to acquire exclusive lock A, A deadlock occurs because thread A and thread B hold the lock each other needs.

public class DeadLockDemo {

    public static void main(String[] args) {
        Thread td1 = new Thread(new Runnable() {
            public void run() {
                DeadLockDemo.method1();
            }
        });
        Thread td2 = new Thread(new Runnable() {
            public void run() {
                DeadLockDemo.method2();
            }
        });
        td1.start();
        td2.start();
    }

    public static void method1() {
        synchronized (String.class) {
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("thread A tries to get Integer.class");
            synchronized (Integer.class) {
            }
        }
    }

    public static void method2() {
        synchronized (Integer.class) {
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("thread B tries to get String.class");
            synchronized (String.class) {
            }
        }
    }
}

Four conditions that cause a deadlock:

  • Mutual exclusion: a resource can only be used by one thread at a time.
  • Request and hold condition: when a thread is blocked by a request for a resource, it holds on to a resource it has acquired.
  • Non-deprivation condition: A thread cannot forcibly deprive resources it has acquired before it uses them up.
  • Loop wait condition: A loop wait resource relationship is formed between several threads, head to tail.

In concurrent programs, deadlock can be avoided by avoiding situations in which several threads hold exclusive locks needed by each other in logic, as shown in the following:

public class BreakDeadLockDemo {

    public static void main(String[] args) {
        Thread td1 = new Thread(new Runnable() {
            public void run() {
                BreakDeadLockDemo.method1();
            }
        });
        Thread td2 = new Thread(new Runnable() {
            public void run() {
                BreakDeadLockDemo.method2();
            }
        });
        td1.start();
        td2.start();
    }

    public static void method1() {
        synchronized (String.class) {
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("thread A tries to get Integer.class");
            synchronized (Integer.class) {
                System.out.println("thread A gets Integer.class");
            }
        }
    }

    public static void method2() {
        // Acquire String.class first, the same order as method1,
        // so thread B no longer holds the Integer.class lock that thread A needs.
        synchronized (String.class) {
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("thread B tries to get Integer.class");
            synchronized (Integer.class) {
                System.out.println("thread B gets Integer.class");
            }
        }
    }
}

5. Introduction to CAS?

Unsafe

Unsafe is the core CAS class. This is because Java does not have direct access to the underlying operating system, but rather through native methods. Despite this, the JVM does have a backdoor. The JDK has an Unsafe class, which provides for hardware-level atomic operations.

CAS

The java.util.concurrent package is completely built on top of CAS. Without CAS, there would be no such package, which shows the importance of CAS. Most current processors support CAS, but different vendors have different implementations. Also, CAS is implemented through Unsafe. Since CAS is a hardware-level operation, it is more efficient than normal locking.
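As a rough illustration (not the JDK source), the spin-and-retry CAS pattern used by classes such as AtomicInteger can be sketched with the public compareAndSet API:

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: read the current value, try to CAS it to value + 1,
    // and retry (spin) if another thread changed it in the meantime.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}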

The disadvantage of the CAS

CAS looks elegant, but it cannot cover every concurrent scenario, and semantically it is not perfect; it has a logical flaw: if a variable was value A when first read and is still A when we are about to assign to it, can we conclude that no other thread has modified it? No: its value may have been changed to B and then back to A in the meantime, yet the CAS operation would consider it never changed. This is known as the “ABA” problem of CAS operations. The java.util.concurrent package addresses it with a stamped atomic reference class, AtomicStampedReference, which guarantees the correctness of CAS by versioning the variable’s value. In most cases the ABA problem does not affect the correctness of concurrent code; if it does have to be solved, traditional mutual-exclusion synchronization may be more efficient than the atomic classes.

6. What’s the difference between a process and a thread?

In short, a program has at least one process, and a process has at least one thread.

  • 1. The partition scale of threads is smaller than the process, which makes the concurrency of multi-threaded programs high.

  • 2, the process has an independent memory unit in the process of execution, and multiple threads share memory, which greatly improves the efficiency of the program.

  • 3. Threads and processes differ during execution. Each thread has its own entry point, its own sequence of execution, and its own exit point, but a thread cannot run on its own; it must live inside an application, and the application controls the execution of its threads.

  • From a logical point of view, the meaning of multithreading is that multiple parts of an application can be executed at the same time. However, the operating system does not treat multiple threads as multiple independent applications to achieve process scheduling and management and resource allocation. This is the important difference between processes and threads.

  • 5. A process is a running activity of a program with certain independent functions on a certain data set. A process is an independent unit of the system for resource allocation and scheduling. A thread is an entity of a process. It is the basic unit of CPU scheduling and dispatching. It is a basic unit smaller than a process that can run independently. A thread owns virtually no system resources of its own, only a few of the resources essential to running (such as program counters, a set of registers, and stacks), but it can share all the resources owned by a process with other threads that belong to the same process.

  • 6, a thread can create and destroy another thread; Multiple threads in the same process can execute concurrently.

  • 7. Processes have separate address spaces: in protected mode, a crashing process does not affect other processes, whereas threads are just different execution paths within one process. Threads have their own stacks and local variables but no separate address space, and the death of one thread can bring the whole process down, so multi-process programs are more robust than multithreaded ones; the trade-off is that switching between processes consumes more resources and is less efficient.

7. What causes threads to block?

Thread blocking

Java introduced its synchronization mechanism to resolve conflicting access to shared storage. But when we look at multiple threads accessing shared resources, synchronization alone is clearly not enough: the resource a thread needs may not be ready to be accessed at a given moment, and conversely several resources may become ready at the same time. To solve the access-control problem in these situations, Java also introduced support for blocking.

Blocking is the process of suspending the execution of a thread to wait for a condition to occur (such as the availability of a resource). Those of you who have studied operating systems should already be familiar with it. Java provides a number of ways to support blocking, and let’s look at each of them.

The sleep() method: sleep() allows you to specify a period of time, in milliseconds, for which the thread will be blocked, unable to get CPU time, and then return to the executable state. Typically, sleep() is used to wait for a resource to be ready: after the test finds that the condition is not met, let the thread block for a period of time and then retest until the condition is met.

Suspend () and resume() methods: The two methods are used together. Suspend () puts the thread into a blocked state and does not automatically resume. Its corresponding resume() must be called in order for the thread to return to the executable state. Typically, suspend() and resume() are used to wait for results from another thread: let the thread block when the test finds that the results have not yet been produced, and call resume() to resume when the other thread has produced the results.

Yield () : Yield () causes the thread to give up the current allotted CPU time, but does not cause the thread to block, i.e. the thread is still in the executable state and can be allotted CPU time again at any time. The effect of calling yield() is equivalent to the scheduler assuming that the thread has executed enough time to move on to another thread.

Wait() and notify() methods: used together, wait() causes a thread to block. It has two forms: one takes a timeout in milliseconds, the other does not. The thread returns to the executable state when the corresponding notify() is called or, for the first form, when the timeout expires; for the second form the corresponding notify() must be called. At first glance they look no different from the suspend() and resume() methods, but they are in fact quite different. The core difference is that all of the methods described above keep holding any lock they own while blocked, whereas this pair does the opposite: wait() releases the lock.

These core differences lead to a series of detailed differences.

First, all of the methods described above belong to the Thread class, but this pair belongs directly to the Object class, that is, all objects have this pair of methods. This may seem strange at first, but it’s actually quite natural, because the pair of methods block to release the lock they hold, which any object has. Calling wait() on any object causes the thread to block and the lock on that object to be released. Calling the notify() method on any object causes a randomly selected thread that was blocked by calling the wait() method on that object to unblock (but not actually execute until the lock is acquired).

Second, all of the methods described above can be called anywhere, but this pair of methods must be called from within a synchronized method or block, for the simple reason that only inside a synchronized method or block does the current thread hold the lock and is therefore able to release it. Likewise, the lock on the object whose wait()/notify() is being called must be owned by the current thread for it to be released. The calls must therefore be placed in a synchronized method or block whose locked object is the same object on which wait()/notify() is called. If this condition is not met, the program still compiles, but an IllegalMonitorStateException is thrown at run time.

The nature of wait() and notify() described above means that they are often used together with synchronized methods or blocks. Comparing them with the inter-process communication mechanisms of an operating system shows the similarity: synchronized methods or blocks provide functionality similar to operating-system primitives, since their execution cannot be disturbed by other threads, which makes them the equivalent of the block and wakeup primitives (both methods must be called from synchronized code). Together they let us implement the sophisticated inter-process communication algorithms found in operating systems (such as semaphore algorithms) and solve all kinds of complex inter-thread communication problems. (In addition, threads can communicate through shared state guarded by the synchronized keyword combined with polling, or through pipes using java.io.PipedInputStream and java.io.PipedOutputStream.)
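A minimal sketch of the guarded-block pattern this implies (class and field names are hypothetical): a consumer waits inside a synchronized block until a producer sets a flag and calls notify():

public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;   // the condition being waited for

    public void consumer() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {          // guard against spurious wakeups
                lock.wait();          // releases the lock while waiting
            }
            System.out.println("condition met, consumer continues");
        }
    }

    public void producer() {
        synchronized (lock) {
            ready = true;
            lock.notify();            // wakes one thread waiting on this lock
        }
    }
}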

Two final points about the wait() and notify() methods:

First: a call to notify() causes the thread to be unblocked to be randomly selected from a thread that was blocked by a call to wait() on the object. We cannot predict which thread will be selected, so be careful when programming to avoid problems caused by this uncertainty.

Second: In addition to notify(), there is a method notifyAll() that serves a similar purpose, except that calling notifyAll() unblocks all threads that have been blocked by calling the object’s wait() method at one time. Of course, only the thread that acquired the lock can enter the executable state.

When we talk about blocking, we can’t talk about deadlocks without a brief analysis showing that both calls to suspend() and wait() without specifying a timeout period can cause deadlocks. Unfortunately, Java does not support deadlock avoidance at the language level, and we must be careful to avoid deadlocks in our programming.

In our analysis of the various methods of thread blocking in Java, we focused on the wait() and notify() methods because they are the most powerful and flexible to use, but this also makes them less efficient and error prone. In practice, we should use various methods flexibly in order to better achieve our goals.

Life cycle of threads

Thread state flow chart

  • NEW: Creation state, after the thread has been created, but not started yet.
  • RUNNABLE: A thread that is in a running state, but may be in a waiting state, such as waiting for CPU, IO, etc.
  • WAITING: waiting state, usually entered by calling wait(), join(), LockSupport.park(), etc.
  • TIMED_WAITING: timed waiting state, generally entered by calling wait(time), join(time), LockSupport.parkNanos(), LockSupport.parkUntil(), and similar methods.
  • BLOCKED: blocked state, waiting for a lock to be released, for example when blocked trying to enter a synchronized block.
  • TERMINATED: terminated state, entered when the thread finishes normally or terminates abnormally.

NEW, WAITING, and TIMED_WAITING are easy to understand, but we focus on the RUNNABLE and BLOCKED states.

There are five common situations when a thread enters the RUNNABLE state:

  • The thread ended the sleep time after calling sleep(time)
  • The blocking I/O called by the thread has returned. The blocking method is complete
  • The thread successfully acquired the resource lock. Procedure
  • The thread was waiting for a notification and successfully received one from another thread
  • The thread is suspended, and the resume() method is called to unsuspend it.

The BLOCKED state is generally divided into five cases:

  • The thread calls the sleep() method to voluntarily give up the resource it owns
  • A thread that calls a blocking IO method is blocked until the method returns.
  • The thread tries to acquire a resource lock that is currently held by another thread.
  • The thread is waiting for a notification
  • The thread scheduler calls the suspend() method to suspend the thread

Let’s look at some methods related to thread state.

  • The sleep() method lets the currently executing thread suspend execution for a specified period of time. The currently executing thread can be obtained via Thread.currentThread().

  • The yield() method gives up the CPU resources held by the thread and gives them to other tasks that use the CPU execution time. But the time to give up is uncertain, it is possible to give up and immediately get CPU time slices.

  • The wait() method makes the thread currently executing the code wait: it places the current thread in the waiting queue and stops execution at the wait() call until the thread is notified or interrupted. The method makes the calling thread release the lock on the shared resource, leave the running state, and stay in the waiting queue until it is woken up again. It can only be invoked inside a synchronized block; otherwise an IllegalMonitorStateException is thrown. The wait(long millis) variant waits to be woken up within the given period and wakes up automatically once the timeout expires.

  • The notify() method is used to notify other threads that may be waiting on this object’s lock. It randomly wakes up one thread waiting on the same shared resource in the waiting queue, making it leave the queue and become runnable.

  • The notifyAll() method makes all threads in the waiting queue that wait on the same shared resource leave the waiting state and become runnable. Usually the higher-priority thread runs first, but the order may also be random, depending on the virtual machine implementation.

  • The join() method makes the current thread wait until the thread it is called on has finished before the code after the join() call continues; in effect it serializes the threads (see the sketch below).
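As a small illustration (with a hypothetical worker task), join() can make the main thread wait for a worker to finish:

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                System.out.println("worker is doing its job");
            }
        });
        worker.start();
        worker.join();   // main thread blocks here until worker terminates
        System.out.println("worker finished, main thread continues");
    }
}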

9. Optimistic locks and pessimistic locks.

Pessimistic locking

Pessimistic locking always assumes the worst case: every time data is fetched, it assumes someone else will modify it, so the data is locked on every access, and any other thread that wants the data blocks until it can obtain the lock (the shared resource is used by only one thread at a time; other threads block and the resource is handed over after use). Exclusive locks such as synchronized and ReentrantLock in Java are implementations of the pessimistic-locking idea.

Optimistic locking

Optimistic locking always assumes the best case: when fetching data it assumes no one else will modify it, so it does not lock. Only when updating does it check whether anyone else changed the data in the meantime, typically using a version-number mechanism or the CAS algorithm. Optimistic locking suits read-heavy applications and improves throughput. The atomic variable classes in the java.util.concurrent.atomic package are an implementation of optimistic locking based on CAS.

Usage scenarios

Optimistic locking is suitable for scenarios with few writes (read-heavy scenarios), where it improves throughput, while pessimistic locking is suitable for write-heavy scenarios.

There are two common ways to implement optimistic locking

1. Version number mechanism

Generally, a version field is added to the database table to indicate how many times the data has been modified; each modification increases the version by 1. When thread A wants to update the data, it reads the version value along with the data. When it submits the update, the update is applied only if the version it read earlier still equals the version currently in the database; otherwise the update operation is retried until it succeeds.

2. CAS algorithm

CAS, compare-and-swap, is a well-known lock-free algorithm. CAS has three operands: the memory value V, the old expected value A, and the new value B to write. The memory value V is changed to B if and only if the expected value A equals the memory value V; otherwise nothing is done. In general it is a spin operation, that is, it keeps retrying.

Disadvantages of optimistic locking

1. ABA problems

If A variable V is first read as A value and is still A value when we are ready to assign, can we say that its value has not been modified by another thread? Obviously not, because in the meantime its value could be changed to something else and then changed back to A, and the CAS operation would assume that it was never changed. This problem is known as the “ABA” problem of CAS operation.

The AtomicStampedReference class, added in JDK 1.5, addresses this problem to some extent: its compareAndSet method first checks whether the current reference equals the expected reference and whether the current stamp equals the expected stamp, and only if both are equal does it atomically set the reference and the stamp to the given new values (see the sketch after this list).

2. Spinning CAS (that is, spinning CAS until it succeeds) can cause a large execution overhead for the CPU if it fails for a long time.

3. CAS is valid only for a single shared variable. CAS is invalid when the operation involves multiple shared variables. However, starting with JDK 1.5, the AtomicReference class is provided to ensure atomicity between reference objects. You can place multiple variables in a single object to perform CAS operations. So we can use locks or use the AtomicReference class to combine multiple shared variables into a single shared variable.
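A minimal sketch of how the AtomicStampedReference mentioned above guards against ABA by carrying a stamp (version) alongside the value; the initial value and stamp here are arbitrary:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<Integer>(100, 0); // value 100, stamp 0

        int stamp = ref.getStamp();          // remember the current stamp
        Integer value = ref.getReference();  // remember the current value

        // Succeeds only if BOTH the value and the stamp are unchanged;
        // an A -> B -> A change elsewhere would have bumped the stamp.
        boolean updated = ref.compareAndSet(value, 101, stamp, stamp + 1);
        System.out.println("updated = " + updated + ", value = " + ref.getReference());
    }
}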

10. What are the differences between the run() and start() methods?

1. The start() method is used to start the thread. In this case, the multithreaded operation is implemented without waiting for the run body to complete.

Calling the start() method of the Thread class puts the thread into the ready state; it is not yet running. The run() method is called the thread body: it contains the work the thread is to execute. When the run() method returns, the thread terminates and the CPU schedules another thread (in Android, often the main thread).

2. If the run() method is called as a normal method, the program executes it sequentially and waits for the body of run() to finish before continuing with the following code:

If you call run() directly, you are just invoking an ordinary method: the program still has only one thread (the main thread) and a single execution path, so no new thread has actually been created.

11. The principle of multi-threaded resumable (breakpoint) downloading.

During a download, the progress of each part of the file is persisted to a database in real time so the download knows where to resume from. In the HTTP GET request, setRequestProperty("Range", "bytes=startIndex-endIndex") tells the server which byte range of the data to return. When writing locally, the seek() method of RandomAccessFile supports writing at any position in the file. Each child thread reports its progress to the Activity’s progress bar through a broadcast or an event-bus mechanism. The HTTP status code for a partial (resumed) response is 206, HttpStatus.SC_PARTIAL_CONTENT.
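A rough sketch of one thread’s share of such a download, assuming a plain HttpURLConnection and hypothetical startIndex/endIndex values (error handling and progress persistence are reduced to comments):

import java.io.InputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeDownloadTask implements Runnable {
    private final String fileUrl;
    private final String localPath;
    private final long startIndex;   // first byte this thread is responsible for
    private final long endIndex;     // last byte this thread is responsible for

    public RangeDownloadTask(String fileUrl, String localPath, long startIndex, long endIndex) {
        this.fileUrl = fileUrl;
        this.localPath = localPath;
        this.startIndex = startIndex;
        this.endIndex = endIndex;
    }

    @Override
    public void run() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
            // Ask the server for only this thread's byte range.
            conn.setRequestProperty("Range", "bytes=" + startIndex + "-" + endIndex);
            if (conn.getResponseCode() == 206) {             // 206: partial content
                InputStream in = conn.getInputStream();
                RandomAccessFile raf = new RandomAccessFile(localPath, "rw");
                raf.seek(startIndex);                         // write at this thread's own offset
                byte[] buffer = new byte[8192];
                int len;
                while ((len = in.read(buffer)) != -1) {
                    raf.write(buffer, 0, len);
                    // In a real downloader the current offset would be persisted here
                    // so the download can resume after an interruption.
                }
                raf.close();
                in.close();
            }
            conn.disconnect();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}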

12. How to safely stop a thread’s task? How does it work? Is there a similar mechanism for thread pools?

Termination of the thread

1. Use a volatile boolean exit flag so that the thread exits normally, that is, the thread terminates when its run() method completes. (Recommended; see the sketch below.)

2. Interrupt the thread with the interrupt() method, but the thread does not necessarily terminate.

3. Use the stop() method to forcibly terminate the thread. The main hazard is that after thread.stop() is called, a ThreadDeath error is raised in the stopped thread and all the locks it holds are released immediately, which can leave shared data in an inconsistent state.
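A minimal sketch of the recommended first approach, with a hypothetical worker loop controlled by a volatile flag:

public class StoppableWorker implements Runnable {
    // volatile guarantees the flag written by one thread is visible to the worker thread
    private volatile boolean exit = false;

    @Override
    public void run() {
        while (!exit) {
            // do one unit of work here
        }
        System.out.println("worker exits normally");
    }

    public void requestStop() {
        exit = true;   // the worker stops after finishing the current iteration
    }
}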

Terminating a thread pool

The ExecutorService thread pool provides life-cycle methods such as shutdown and shutdownNow to shut down the thread pool itself and all the threads it owns.

1. shutdown() closes the thread pool

The thread pool does not exit immediately until all the tasks added to the thread pool have been processed.

2. shutdownNow() closes the thread pool and interrupts its tasks

It stops the tasks that are still waiting to execute and returns them as a list, and it attempts to stop the running threads by calling Thread.interrupt(). However, interrupt() cannot interrupt a thread that is not blocked in sleep, wait, Condition, a timed lock, or the like, so shutdownNow() does not guarantee that the thread pool exits immediately; it may still have to wait for the tasks that are currently running to finish.
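A common shutdown pattern (one possible sketch, not the only way) combines shutdown(), awaitTermination(), and shutdownNow():

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // ... submit tasks to the pool here ...

        pool.shutdown();                          // stop accepting new tasks
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            // tasks are still running after the grace period: interrupt them
            pool.shutdownNow();
        }
    }
}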

13. Your understanding of heap memory and stack memory, and how do the stack and the heap relate to each other?

  • Some basic types of variables defined in a function and reference variables to objects are allocated in the stack memory of the function.
  • Heap memory is used to hold objects and arrays created by new. The “heap” in the JVM refers specifically to the area of memory used to hold Java objects. So by this definition, Java objects are all on the heap. The JVM’s heap is shared by all Java threads in the same JVM instance. It is usually managed by some kind of automatic memory management mechanism, often called garbage Collection (GC).
  • The heap is mainly used to store objects, and the stack is mainly used to execute programs.
  • In fact, variables in the stack point to variables in the heap, which are Pointers in Java!

14. How do you control the number of threads allowed to access a method concurrently?
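One common answer is java.util.concurrent.Semaphore; a sketch assuming a limit of three concurrent callers:

import java.util.concurrent.Semaphore;

public class LimitedAccess {
    // at most 3 threads may execute doWork() at the same time
    private final Semaphore semaphore = new Semaphore(3);

    public void doWork() throws InterruptedException {
        semaphore.acquire();          // blocks while all 3 permits are taken
        try {
            System.out.println(Thread.currentThread().getName() + " is working");
            Thread.sleep(1000);       // simulate some work
        } finally {
            semaphore.release();      // always give the permit back
        }
    }
}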

15. Multi-process development and multi-process application scenarios;

16. The Java threading model;

17. The concept of deadlock, and how to avoid deadlock?

18. How to ensure the security of multi-threaded reading and writing files?

19. How to stop a thread, and how to prevent thread memory leaks?

20. Why have threads, not just processes?

21. How do you issue requests on multiple threads at the same time, wait for all of them to finish, and then combine their results into one piece of data?

22. How to stop a thread?

23. How to ensure data consistency?

24. Can two processes write or read at the same time? How is synchronization between processes handled?

25. Talk about your understanding of multithreading and give examples.

26. Thread states and priorities.

27. Use of ThreadLocal

28. Concurrency tools in Java (CountDownLatch, CyclicBarrier, etc.)
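As a quick sketch of CountDownLatch, which also answers questions 21 and 31: the main thread waits until every worker has counted down and then aggregates the results (the task and result values here are made up):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;

public class GatherResultsDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        final CountDownLatch latch = new CountDownLatch(workers);
        final ConcurrentLinkedQueue<String> results = new ConcurrentLinkedQueue<String>();

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    results.add("result-" + id);  // each thread produces its part
                    latch.countDown();            // signal that this worker is done
                }
            }).start();
        }

        latch.await();                            // block until all workers finish
        List<String> combined = new ArrayList<String>(results);
        System.out.println("combined data: " + combined);
    }
}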

29. How processes and threads are implemented in the operating system

30. Two threads alternately print 12121212... in a synchronized way
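One possible sketch, using a shared lock and a turn flag so the two threads strictly alternate printing 1 and 2:

public class AlternatePrintDemo {
    private static final Object LOCK = new Object();
    private static boolean printOne = true;   // whose turn it is

    public static void main(String[] args) {
        new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 4; i++) {
                    synchronized (LOCK) {
                        while (!printOne) {
                            try { LOCK.wait(); } catch (InterruptedException e) { return; }
                        }
                        System.out.print("1");
                        printOne = false;
                        LOCK.notifyAll();     // wake the other thread
                    }
                }
            }
        }).start();

        new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 4; i++) {
                    synchronized (LOCK) {
                        while (printOne) {
                            try { LOCK.wait(); } catch (InterruptedException e) { return; }
                        }
                        System.out.print("2");
                        printOne = true;
                        LOCK.notifyAll();
                    }
                }
            }
        }).start();
    }
}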

31. Java threads in a concrete scenario: how do you issue requests on multiple threads at the same time, wait for all of them to finish, and combine the results into one piece of data?

32. The server only provides a data-receiving interface; under multithreaded or multi-process conditions, how do you ensure that data arrives in order?

33. A thread pool on a single server is processing services. What if a power outage occurs?

Java Virtual Machine (⭐⭐⭐)

1. JVM memory region.

Basic Structure of JVM

From the figure above, the JVM consists of four main parts:

1. ClassLoader: Loads the required classes into the JVM when the JVM is started or when the class is running. The following figure shows the entire process from the Java source files to the JVM for understanding.

2. Execution engine: responsible for executing the bytecode instructions contained in the class file;

3. Memory area (also called runtime data area) : The area of memory allocated for operations while the JVM is running. The runtime memory area can be mainly divided into five regions, as shown in the figure:

MethodArea: a place to store information about the structure of a class, including constant pools, static constants, constructors, etc. Although the JVM specification describes the method area as a section of the heap, it has the name non-heap, so don’t confuse it. The method area also contains a pool of runtime constants.

Java Heap: the place where Java instances or objects are stored. This is the main area of GC. From what they store, it is easy to see that the method area and the heap are shared by all Java threads.

Java Stack: a Java stack is always associated with a thread. Whenever a thread is created, the JVM creates a corresponding Java stack for it, containing multiple stack frames. Each method’s life, from invocation to completion, corresponds to one stack frame being pushed onto and popped off the Java stack, so the Java stack is thread-private.

Program counter (PCRegister) : Used to hold the memory address of the current thread execution. Since JVM programs are executed by multiple threads (threads switch in turn), a separate counter is required to record where the interruption occurred before, so the program counter is also thread private.

Native MethodStack: the same function as the Java stack, except that it serves the JVM’s use of Native methods.

4. Local method interface: it mainly calls local methods and callback results implemented in C or C++.

Which memory does threading affect?

Every time a thread is created, the JVM allocates a virtual machine stack and a native method stack for it to record the methods being invoked, plus a program counter to record which instruction is currently executing. This memory consumption is the cost of creating a thread.

2. Understanding the MEMORY model of the JVM?

JMM stands for the Java Memory Model. The JMM defines how the Java Virtual Machine (JVM) works with the computer’s memory (RAM). The JVM models an entire computer, so the JMM is part of the JVM.

Communication between Java threads is always implicit and using a shared memory model. The shared memory model mentioned here is the Java Memory Model (JMM), which determines when a write to a shared variable by one thread is visible to another thread. From an abstract point of view, the JMM defines an abstract relationship between threads and main memory: Shared variables between threads are stored in Main Memory, and each thread has a private local memory that stores copies of shared variables that the thread reads/writes to. Local memory is an abstract concept of the JMM and does not really exist. It covers caching, write buffers, registers, and other hardware and compiler optimizations.

In summary, the JMM is a set of rules aimed at the thread-safety problems that can arise in concurrent programming. It provides a built-in solution (the happens-before principle) and externally usable synchronization facilities (synchronized, volatile, etc.), and it guarantees atomicity, visibility, and ordering of program execution in a multithreaded environment.

For a fuller understanding, read the following:

Comprehensive understanding of the Java Memory Model (JMM) and the volatile keyword

Comprehensive understanding of the Java memory model

3, describe the principle of GC and recycling strategy?

When it comes to garbage collection, we can first think about what problems we need to solve if we do garbage collection.

Generally speaking, we have to solve three problems:

1. Which memory to reclaim?

2. When to recycle?

3. How to recycle?

These problems correspond to schemes such as reference management and reclamation policies.

Speaking of references, we all know that there are four types of references in Java:

  • Strong references: Ubiquitous in code, so long as a strong reference exists, the garbage collector will not reclaim the referenced object.
  • SoftReference: SoftReference, which describes a useful but unnecessary object that is reclaimed when memory runs out.
  • WeakReference: describes non-mandatory objects. The object with a WeakReference can only survive until the next GC occurs. When GC occurs, the object is reclaimed regardless of whether the memory is sufficient.
  • PhantomReference: PhantomReference has no effect on the lifetime of an object and cannot be referenced by a virtual reference. Its only purpose is to be notified when the object is reclaimed.

Remember: as long as a strong reference exists, GC will never collect the referenced object.

A simple idea is reference counting: the count goes up by 1 whenever a reference to the object is created and down by 1 whenever a reference is dropped, and the object can be collected when the count reaches 0. However, reference counting cannot handle objects that reference each other in a cycle.

Therefore, the reachability analysis algorithm is used to determine whether the reference of the object exists.

The reachability analysis algorithm uses a series of objects called GCRoots as the starting point and searches from these nodes from top to bottom. The path taken is called reference chain. When an object is not connected to GCRoots by any reference chain, it indicates that the object is unavailable, that is, the object is unreachable.

GC Roots objects typically include:

  • Objects referenced in the virtual machine stack (local variables in the stack frame)
  • An object referenced by a static property of the class in the method
  • Object referenced by a constant in the method area
  • The object referenced by the Native method

The whole process of accessibility analysis algorithm is as follows:

The first marking: after reachability analysis finds that an object has no reference chain to GC Roots, it is marked for the first time and screened. The screening condition is whether it is necessary to execute the object’s finalize() method; if finalize() has not been overridden, or has already been executed once, it is considered unnecessary. If it is necessary, the object is placed in an F-Queue and its finalize() method is later invoked by a low-priority Finalizer thread created by the virtual machine. The VM does not guarantee that it will wait for finalize() to finish, because a finalize() that loops forever or runs for a long time would block the other objects in the F-Queue and affect GC.

The second marking: GC marks the objects in the F-Queue a second time. If an object has re-established a reference during this second marking (for example, by re-attaching itself to a reference chain in finalize()), it is removed from the set of objects to be collected; otherwise it is collected.

In general, when garbage collection is performed by the JVM, all objects in the heap are checked to see if they are referenced by the root set objects. Objects that cannot be referenced are collected by the garbage collector. There are several general collection algorithms as follows:

1). Mark-sweep

The mark-sweep algorithm scans from the root set and marks the surviving objects; after marking, it scans the whole space for unmarked objects and reclaims them. Mark-sweep does not need to move objects and only processes the dead ones, so it is very efficient when many objects survive; however, because it reclaims dead objects in place, it causes memory fragmentation.

2). Mark-compact

The mark-compact algorithm marks objects in the same way as mark-sweep, but the cleanup differs: after the space occupied by dead objects is reclaimed, all surviving objects are moved toward one end of the free space and the corresponding pointers are updated. Mark-compact builds on mark-sweep and moves objects, so it costs more, but it solves the memory-fragmentation problem. This garbage collection algorithm suits scenarios with a high object survival rate (the old generation).

3). Copying

The replication algorithm divides the available memory into two equal sized chunks by capacity and uses only one chunk at a time. When the memory of this piece is used up, the objects that are still alive are copied to another piece, and then the used memory space is cleaned up once again. This algorithm is suitable for scenarios where object survival is low, such as the new generation. This allows the entire half to be recollected each time, and memory allocation does not need to consider the complexity of memory fragmentation.

4). Generational collection algorithm

Different objects have different life cycles (survival rates), and objects with different life cycles live in different regions of the heap, so applying different collection strategies to different regions of heap memory improves JVM performance. Modern commercial virtual machines use generational collection: the new generation has a low survival rate, so the copying algorithm is used; the old generation has a high survival rate, so mark-sweep or mark-compact is used. Java heap memory can be divided into three parts: the new generation, the old generation, and the permanent generation:

New generation:

1. All newly generated objects are first placed in the new generation. The goal of the new generation is to collect objects with short life spans as quickly as possible.

2. The new-generation memory is divided into one Eden zone and two survivor zones in a ratio of 8:1:1. Most objects are generated in Eden. When recycling, the survival objects in Eden zone are first copied to a survivor0 zone, and then the Eden zone is emptied. When the storage of this survivor0 zone is also full, the survival objects in Eden zone and survivor0 zone are copied to another survivor1 zone. Then empty Eden and this survivor0 zone, in which case, survivor0 zone is empty. Then exchange survivor0 zone with survivor1 zone, that is, keep survivor1 zone empty, and so on.

3. When survivor1 cannot hold the surviving objects from Eden and survivor0, the surviving objects go directly into the old generation. If the old generation is also full, a Full GC is triggered, that is, both the new generation and the old generation are collected.

4. GC of the new generation is also called Minor GC, and Minor GC occurs frequently (it is not necessarily triggered only when the Eden zone is full).

Old age:

1. Objects that have survived N garbage collections in the new generation are moved into the old generation, so the old generation can be thought of as a repository of long-lived objects.

2. The memory is much larger than that of the new generation (approximately 1:2). When the memory of the old generation is Full, the Major GC is triggered. Full GC occurs at a lower frequency, and the old age object has a longer lifetime.

Permanent generation:

The permanent generation mainly stores static files such as Java classes, methods, and so on. Permanent generation has no significant impact on garbage collection, but some applications may dynamically generate or call classes, such as bytecode frameworks such as Reflection, dynamic proxy, or CGLib. In such cases, a large permanent generation space is required to store new classes during runtime.

Garbage collector

Garbage collection algorithm is the methodology of memory collection, then garbage collector is the specific implementation of memory collection:

  • Serial collector (replication algorithm): A new generation of single-threaded collector, marking and cleaning are single-threaded, the advantage is simple and efficient;

  • Serial Old collector (mark-sorting algorithm): Old age single-threaded collector, Old age version of Serial collector;

  • ParNew collector (copy algorithm): The new collector parallel collector, which is actually a multi-threaded version of the Serial collector, performs better than Serial in multi-core CPU environments;

  • Concurrent Mark Sweep (CMS) collector (mark-sweep algorithm) : The old parallel collector, which aims to obtain the shortest collection pause time, has the characteristics of high concurrency and low pause, and pursues the shortest GC collection pause time.

  • Parallel Old collector (mark-compact algorithm): the old-generation parallel collector, the old-generation version of the Parallel Scavenge collector;

  • Parallel Scavenge collector (copying algorithm): a new-generation parallel collector that pursues high throughput and efficient CPU utilization. Throughput = user thread time / (user thread time + GC thread time). High throughput makes efficient use of CPU time and finishes the program’s computation as soon as possible, which suits background applications and other scenarios with low interactivity requirements;

  • Garbage First (G1) collector: a parallel collector for the whole Java heap, introduced in JDK 1.7. The G1 collector is based on the mark-compact algorithm, so it does not produce memory fragmentation. In addition, G1 differs from the earlier collectors in one important way: it collects the entire Java heap (both the new generation and the old generation), whereas the six collectors above collect only the new generation or only the old generation.

Memory allocation and reclamation policies

JAVA automatic memory management: allocate memory to objects and reclaim memory allocated to objects.

1. Objects are allocated in Eden first. If there is not enough space in Eden, the VM initiates a MinorGC.

2. Large objects go directly into the old generation, for example very long strings and large arrays.

3. Long-lived objects enter the old generation: when an object has survived a certain number of Minor GCs in the young generation (15 by default), it is promoted to the old generation.

4. Dynamic object-age determination. To better adapt to the memory situation of different programs, the virtual machine does not always require an object’s age to reach MaxTenuringThreshold before promotion: if the total size of all objects of the same age in the Survivor space exceeds half of the Survivor space, objects of that age or older go directly into the old generation without waiting for the age specified by MaxTenuringThreshold.

For a more complete understanding, click here

4. Class loaders, the parent-delegation mechanism, and Android class loaders.

Class loader

As we all know, a Java program is organized into a complete Java application by several.class files. When the program is running, it will call an entry function of the program to call the relevant system functions, and these functions are encapsulated in different class files. So it is common to call methods from this class file to another class file, and if the other file does not exist, a system exception will be thrown.

Instead of loading all class files at once when the program starts, the Java ClassLoader dynamically loads a class file into memory as the program needs it; only after a class file has been loaded into memory can it be referenced by other class files. So a ClassLoader is the component that dynamically loads classes into memory.

Parent-delegation mechanism

A class is loaded when the virtual machine obtains a binary stream of bytes describing a class by its fully qualified name, and the class loader does this.

Classes are related to Class loaders. Determining whether two classes are equal only makes sense if they are loaded by the same Class loader. Otherwise, two classes from the same Class file are not equal even if they are loaded by different Class loaders.

“Equal” here includes the results returned by the class’s equals() method, isAssignableFrom() method, and isInstance() method, as well as the instanceof keyword used to determine an object’s type relationship.

Class loaders can be divided into three categories:

  • Bootstrap ClassLoader: loads the class libraries in the <JAVA_HOME>\lib directory that are recognized by the VM, or those in the path specified by the -Xbootclasspath parameter.

  • Extension ClassLoader: Is responsible for loading all class libraries into memory in <JAVA_HOME>\lib\ext directory or the path specified by the java.ext.dirs system variable.

  • Application ClassLoader: is responsible for loading the specified class libraries on the user’s classpath. If the Application does not implement its own ClassLoader, this ClassLoader will normally load the class libraries in the Application.

1. Principle introduction

Each ClassLoader instance has a reference to its parent ClassLoader (a containment relationship, not inheritance). The VM’s built-in Bootstrap ClassLoader has no parent of its own, but it can act as the parent of other ClassLoader instances.

When a ClassLoader instance needs to load a class, it delegates the task to its parent ClassLoader before attempting to load the class itself, and the same happens at every level, so the check proceeds from the top down. The Bootstrap ClassLoader tries to load the class first; if it cannot, the task passes to the Extension ClassLoader; if that also fails, the App ClassLoader tries; and if the class still has not been loaded, the request returns to the ClassLoader that initiated the delegation, which then loads the class itself from the specified file system or network URL.

If none of them are loaded into the class, a ClassNotFoundException is thrown. Otherwise, the found Class generates a Class definition, loads it into memory, and returns the in-memory Class instance object of the Class.

Class loading mechanism:

Class loading means reading the binary data of a class’s .class file into memory, placing it in the method area of the runtime data area, and then creating a java.lang.Class object in the heap to encapsulate the class’s data structure in the method area. The final product of class loading is this Class object in the heap, which encapsulates the class’s data structure in the method area and gives the Java programmer an interface for accessing it.

There are three ways to load classes:

1) When the command line starts the application, the JVM initializes the loading

2) Dynamic loading via the Class.forName() method

3) Dynamic loading via the ClassLoader.loadClass() method
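A small sketch of the two dynamic approaches, assuming a hypothetical class com.example.Foo is on the classpath:

public class DynamicLoadDemo {
    public static void main(String[] args) throws Exception {
        // Class.forName() loads AND initializes the class (static blocks run)
        Class<?> byForName = Class.forName("com.example.Foo");

        // loadClass() only loads it; initialization is deferred until first use
        ClassLoader loader = DynamicLoadDemo.class.getClassLoader();
        Class<?> byLoader = loader.loadClass("com.example.Foo");

        System.out.println(byForName == byLoader); // true: same loader, same class
    }
}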

With so many class loaders, which class loader is used when the class is loaded?

This is where the parent delegate model of the classloader comes in. The flowchart is as follows:

The entire workflow of the parent delegation model is very simple, as follows:

If a class loader receives a request to load a class, it does not try to load the class itself first; it delegates the request to its parent class loader, and the same happens at every level. Only when the parent class loader reports that it cannot load the class does the child class loader attempt to load it.

Why use the parent delegation model?

1. It avoids duplicate loading: there is no need for the child ClassLoader to load a class again when the parent has already loaded it.

2. For security reasons: imagine that without this delegation model, user code could dynamically replace types defined in the Java core API at will, which would be a huge security risk. Parent delegation avoids this: since the String class is loaded by the BootstrapClassLoader at startup, a user-defined class loader can never load its own String class, unless you change the JDK’s default class-searching algorithm.

3. But when the JVM searches for classes, how does it determine that two classes are the same?

When determining whether two classes are the same, the JVM must not only determine whether the two classes have the same name, but also whether they were loaded by the same class loader instance.

The JVM considers two classes the same only if both are satisfied. Even if two classes are the same class bytecode, the JVM will treat them as two different classes if they are loaded by two different ClassLoader instances.

For example, a Java class org.classloader.simple.NetClassLoaderSimple obtained from the network is compiled by javac into the bytecode file NetClassLoaderSimple.class. Two class loaders, ClassLoaderA and ClassLoaderB, both read this file and each defines a java.lang.Class instance to represent the class. To the JVM these are two different class instances even though they come from the same bytecode file; if you try to cast an object of one to the type defined by the other, a java.lang.ClassCastException is thrown at run time, indicating that they are two different types.

Android class loader

For Android, the final APK file contains a dex file. The DEX file is a repackage of the class file. The packaging rules are not simply compressed, but completely optimize the various function tables inside the class file to produce a new file, namely the dex file. So loading a particular Class file requires a special Class loader, DexClassLoader.

Jars can be loaded dynamically via URLClassLoader

1.ClassLoader isolation problem: The JVM recognizes that a class is created by ClassLoaderid + PackageName + ClassName.

2. Load public classes in different Jar packages:

  • Let the parent ClassLoader load the public Jar and the child ClassLoader load the Jar that contains the public Jar; in this case the child ClassLoader asks the parent ClassLoader to load the public classes first. (Java only)
  • Override loadClass() of the ClassLoader that loads the Jar containing the public Jar, find a ClassLoader that has already loaded the public Jar, and use it in place of the parent ClassLoader. (Java only)
  • Remove the public Jar when the Jar containing the public Jar is generated.

5. Compare the JVM with ART and Dalvik.

  

6. Introduce the G1 collector. How does it divide memory?

(1) Introduction:

The Garbage-First (G1) collector is a server-style collector aimed at multiprocessor machines with large amounts of memory. It meets garbage-collection pause-time goals with high probability while also achieving high throughput. Oracle JDK 7 Update 4 and later releases fully support the G1 garbage collector.

(2) G1’s memory partition mode:

It divides the heap memory into equal-sized heap areas, each of which is a logically contiguous virtual memory. Some of the regions are treated as the same role (Eden, Survivor, old) as the old generation collector, but the number of regions for each role is not fixed. This provides more flexibility in memory usage

7. What is the difference between stack memory and heap memory?

Java divides memory into two types: stack memory and heap memory. The difference is:

1) Stack memory: Some basic types of variables and reference variables of objects defined in the function are allocated in the stack memory of the function. When a variable is defined in a block of code, Java allocates memory for the variable in the stack. When the scope of the variable is exceeded, Java automatically frees the memory allocated for the variable, which can be immediately used for other purposes.

2) Heap memory: Heap memory is used to hold objects and arrays created by new. The memory allocated in the heap is managed by the Java Virtual Machine’s automatic garbage collector.

8. What are some common command-line tools for JVM tuning? What are some common TUNING parameters for the JVM?

(1) Common command tools for JVM tuning include:

1) The JPS command is used to query the running JVM process,

2) JSTAT can display classload, memory, garbage collection, JIT compilation and other data in local or remote JVM processes in real time

3) jinfo is used to query the values of the attributes and parameters of the currently running JVM process.

4) JMAP is used to display the details of the current Java heap and permanent generation

5) Jhat is used to analyze dump files generated using JMap. It is a built-in tool of JDK

6) jstack is used to generate a snapshot of all threads in the current JVM. A thread snapshot is the set of method stacks each thread in the virtual machine is currently executing, and it is used to locate the cause of long pauses such as deadlocks or long waits.

(2) Common JVM tuning parameters include:

-Xmx

Sets the maximum heap size. You can run java -Xmx5000m -version to check how much heap the current system can allocate.

-Xms

Specifies minimum heap memory, usually set to the same as maximum heap memory, which reduces GC

-Xmn

Sets the young generation size. Total heap size = young generation size + tenured generation size. So increasing the young generation will reduce the size of the old generation. The value has a significant impact on system performance. Sun officially recommends that the value be 3/8 of the entire heap.

-Xss

This parameter sets the thread stack size and therefore determines how deep Java method calls can go: a larger value allows deeper call chains, while a value that is too small makes deep calls throw StackOverflowError sooner.

-XX:PermSize

Specifies the initial size of the method (permanent) area, which defaults to 1/64 of physical memory. The permanent generation was removed in Java 8 and replaced by the metaspace, whose initial size is specified with -XX:MetaspaceSize.

-XX:MaxPermSize

Specifies the maximum size of the method (permanent) area; the default is 1/4 of physical memory. In Java 8, the maximum metaspace size is specified with -XX:MaxMetaspaceSize.

-XX:NewRatio=n

The ratio of the old generation to the young generation. -XX:NewRatio=2 means the old generation to young generation ratio is 2:1.

-XX:SurvivorRatio=n

-XX:SurvivorRatio=8 means the Eden to Survivor ratio is 8:1:1, because there are two Survivor spaces (from and to).
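Putting a few of these flags together, a purely illustrative launch command (the heap sizes and app.jar are made up) might look like:

java -Xms2g -Xmx2g -Xmn768m -Xss512k -XX:SurvivorRatio=8 -jar app.jar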

9. What is the meaning of jstack, Jmap and Jutil? How do I troubleshoot problems with the JVM online?

10, whether the CONTENT stored in the JVM method area will be dynamically expanded, whether there will be memory overflow, what are the reasons for the occurrence?

11. How do I solve the problem of object creation and object reclamation?

12. Is there a maximum heap size limit in the JVM?

13, whether the CONTENT stored in the JVM method area will be dynamically expanded, whether there will be memory overflow, what are the reasons for the occurrence?

14, How to understand the Java virtual table?

15. Java runtime data area, cause of memory overflow.

16. Object creation, memory layout, access location, etc.

Public account

My public account JsonChao has been opened. If you want to get the latest articles and the latest developments at the first time, please scan and follow ~