In addition to the questions covered in this article, I have also compiled the latest Java interview questions from other major companies. Since most readers prefer an electronic version, I have put them together as a PDF and am sharing it for free; if you need it, the download link is at the end of the article.

Alibaba, Round 1

Let’s talk about the difference between ArrayList and LinkedList

  1. First, their underlying data structures are different: ArrayList is implemented on top of an array, while LinkedList is implemented on top of a linked list
  2. Because the underlying structures differ, they suit different scenarios: ArrayList is better for random access, while LinkedList is better for insertion and deletion; the time complexities of lookup, insertion, and deletion differ accordingly
  3. Both ArrayList and LinkedList implement the List interface, but LinkedList additionally implements the Deque interface, so it can also be used as a queue or a stack
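As a quick illustration of the third point, here is a minimal sketch; the class and variable names are made up for the example:

```java
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            arrayList.add(i);
        }
        // ArrayList: O(1) random access by index
        System.out.println(arrayList.get(3)); // 3

        // LinkedList also implements Deque, so it works as a queue or a stack
        Deque<Integer> deque = new LinkedList<>();
        deque.offerLast(1); // enqueue at the tail
        deque.offerLast(2);
        System.out.println(deque.pollFirst()); // 1 (FIFO queue behavior)
        deque.push(9);                         // stack behavior
        System.out.println(deque.pop());       // 9
    }
}
```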

Let’s talk about the Put method of a HashMap

Let’s start with the general flow of the HashMap Put method:

  1. Compute the array index from the key using the hash algorithm
  2. If the slot at that index is empty, the key and value are encapsulated as an Entry object (in JDK 1.7; a Node object in JDK 1.8) and placed there
  3. If the slot at that index is not empty, the behavior depends on the version
    1. In JDK 1.7, first determine whether expansion is required. If so, expand; otherwise create an Entry object and add it to the linked list at that position using head insertion
    2. In JDK 1.8, first determine the type of the Node at that position: a red-black tree Node or a linked list Node
      1. If it is a red-black tree Node, the key and value are encapsulated as a red-black tree Node and added to the tree. During this process, the tree is checked for the current key, and the value is updated if the key already exists
      2. If it is a linked list Node, the key and value are encapsulated as a linked list Node and appended to the end of the list (tail insertion). Tail insertion requires traversing the list, and during the traversal the current key is checked for; if it exists, the value is updated instead. After the new Node is appended, the number of nodes in the list is checked: if it is greater than or equal to 8, the list is converted into a red-black tree
      3. After the key and value have been inserted into the linked list or red-black tree, determine whether expansion is needed. If so, expand; otherwise the put method ends
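The index calculation in step 1 can be sketched as follows. It mirrors the bit-spreading trick JDK 1.8's HashMap uses, but the helper names here are my own:

```java
public class HashIndexDemo {
    // Same spreading trick as JDK 1.8's HashMap.hash(): XOR the high 16 bits
    // into the low 16 bits so small tables still see the high bits of the hash
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // With a power-of-two table size n, (n - 1) & hash is equivalent to
    // hash % n but cheaper; this is how the bucket index is chosen
    static int indexFor(Object key, int n) {
        return (n - 1) & hash(key);
    }

    public static void main(String[] args) {
        System.out.println(indexFor("a", 16)); // "a".hashCode() == 97, 97 & 15 == 1
    }
}
```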

Hands-on video: implementing a HashMap by hand

Talk about ThreadLocal

  1. ThreadLocal is a thread-local storage mechanism provided by Java. It lets a thread cache data that can then be retrieved at any time, from any method, within that thread
  2. ThreadLocal is implemented via ThreadLocalMap. Each Thread object contains a ThreadLocalMap whose keys are ThreadLocal objects and whose values are the cached data
  3. Using ThreadLocal in a thread pool can cause a memory leak. After a ThreadLocal is no longer used, its key, value, and Entry object should be reclaimed, but threads in a thread pool are reused rather than destroyed. The Thread object strongly references its ThreadLocalMap, and the ThreadLocalMap strongly references its Entry objects, so as long as the thread is not recycled, the Entry objects are not recycled either, which leaks memory. The solution is to call the ThreadLocal's remove method to clear the Entry manually once you are done with it
  4. A classic use of ThreadLocal is connection management: a thread holds a connection that can be passed between different methods, and threads do not share the same connection
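A minimal sketch of the fix in point 3; the names (CONTEXT, handle, doWork) are invented for the example:

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String handle(String user) {
        CONTEXT.set(user);      // cache data for the current thread
        try {
            return doWork();    // any method running on this thread can read it
        } finally {
            CONTEXT.remove();   // essential in a thread pool: the worker thread
        }                       // lives on, so its Entry must be cleared manually
    }

    static String doWork() {
        return "hello " + CONTEXT.get();
    }

    public static void main(String[] args) {
        System.out.println(handle("alice")); // hello alice
        System.out.println(CONTEXT.get());   // null: the entry was removed
    }
}
```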

Talk about which parts of the JVM are shared and which can be used as GC root

1. The heap and the method area are shared by all threads, while the virtual machine stacks, native method stacks, and program counters are private to each thread

During garbage collection, the JVM does not directly look for unreferenced “garbage” objects; instead it finds the live, “non-garbage” objects by following reference paths from a set of “roots” (GC roots). These roots have the property that they reference other objects but are not themselves referenced by other objects: for example, local variables on the stack, static variables in the method area, variables in the native method stack, and running threads.

How does your project troubleshoot JVM problems

For systems still up and running:

  1. You can use jmap to see the usage of the various regions of the JVM
  2. You can use jstack to see what the threads are doing, for example which are blocked and whether a deadlock has occurred
  3. You can use the jstat command to check garbage collection, especially full GC; if full GC is frequent, tune for it
  4. Analyze the output of these commands, or use a tool such as JVisualVM
  5. If full GC happens frequently but there is no memory overflow, then each full GC is actually reclaiming many objects. Ideally those objects would be reclaimed during young GC instead of being promoted to the old generation, so consider whether these short-lived objects are too large to fit in the young generation and therefore go straight to the old generation. Try increasing the size of the young generation; if full GC becomes less frequent after the change, the tuning was effective
  6. You can also find the thread that consumes the most CPU, locate the specific method, and optimize its execution, for example by avoiding the creation of certain objects to save memory

For systems where OOM has occurred:

  1. Production systems are generally configured to generate a dump file when an OOM occurs (-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/base)
  2. The dump file can be analyzed with a tool such as JVisualVM
  3. From the dump file, find the abnormal instance objects and abnormal threads (high CPU usage), then locate the specific code
  4. Then carry out detailed analysis and debugging

In short, tuning is not accomplished overnight. It requires cycles of analysis, reasoning, practice, summarizing, and re-analysis to finally pinpoint the specific problem

Practical video: JVM performance tuning

How do I view thread deadlocks

  1. You can view this by using the jstack command, which displays the deadlocked threads
  2. Or, if two threads operating on the database caused a deadlock in the database itself, query the database's deadlock status:
    1. Check whether any tables are locked: show OPEN TABLES where In_use > 0;
    2. Show the current process list: show processlist;
    3. View transactions holding locks: SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCKS;
    4. View transactions waiting for locks: SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS;

How do threads communicate with each other

  1. Threads can communicate with each other through shared memory or over a network
  2. If you are communicating through shared memory, you need to consider concurrency: when to block and when to wake up
  3. For example, wait() and notify() in Java block and wake up
  4. Communicating over the network is relatively simple: the two sides send data to each other over a network connection. Concurrency still has to be considered, typically handled with locks and similar mechanisms
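The wait()/notify() pattern from point 3 can be sketched like this (all names are illustrative):

```java
public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static Integer message; // shared memory used for communication

    static int exchange() throws InterruptedException {
        message = null;
        int[] received = new int[1];
        Thread consumer = new Thread(() -> {
            synchronized (LOCK) {
                while (message == null) {   // guard against spurious wakeups
                    try {
                        LOCK.wait();        // block until the producer notifies
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                received[0] = message;
            }
        });
        consumer.start();
        synchronized (LOCK) {
            message = 42;                   // write to shared memory
            LOCK.notify();                  // wake the blocked consumer
        }
        consumer.join();
        return received[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("received " + exchange()); // received 42
    }
}
```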

Practice video: Multi-thread tuning in high concurrency environment

Introduce Spring, read the source code to introduce the general process

  1. Spring is a rapid development framework that helps programmers manage objects
  2. Spring's source code is very well written, with extensive use of design patterns, concurrency-safe implementations, interface-oriented design, and so on
  3. When you create the Spring container, that is, when you start Spring:
    1. A scan is performed, and all BeanDefinition objects are scanned and stored in a Map
    2. Then the non-lazy singleton BeanDefinitions are used to create Beans. Prototype (multi-instance) Beans do not need to be created at startup; their BeanDefinition is used to create a new Bean each time the Bean is requested
    3. Creating a Bean with A BeanDefinition is the Bean’s creation life cycle, which includes merging BeanDefinitions, inferring constructors, instantiating, property filling, pre-initialization, initialization, and post-initialization steps, with AOP taking place in the post-initialization step
  4. After the singleton Bean is created, Spring issues a container start event
  5. Spring Startup End
  6. For example, the source code provides some template methods to be implemented by subclasses, and it also involves the registration of BeanFactoryPostProcessors and BeanPostProcessors. Spring's scanning is done by a BeanFactoryPostProcessor, and dependency injection is done by a BeanPostProcessor
  7. Annotations like @import are also handled during Spring startup

Let’s talk about Spring’s transaction mechanism

  1. Spring transactions are built on top of database transactions and the AOP mechanism
  2. First, Spring creates a proxy object for a Bean that uses the @Transactional annotation
  3. When a method of a proxy object is called, it is checked to see if the method has the @Transactional annotation on it
  4. If it does, a database connection is created using the transaction manager
  5. Change the database connection’s autocommit property to false to disable automatic commit of this connection, which is a very important step in implementing Spring transactions
  6. The current method is then executed, where SQL is executed
  7. After executing the current method, commit the transaction directly if there is no exception
  8. If an exception occurs and the exception needs to be rolled back, the transaction is rolled back, otherwise the transaction remains committed
  9. The isolation level of Spring transactions corresponds to the isolation level of the database
  10. The propagation mechanism of Spring transactions is implemented by Spring transactions themselves and is the most complex of all Spring transactions
  11. Spring's transaction propagation mechanism is based on database connections: one database connection corresponds to one transaction. If the propagation behavior is configured to require a new transaction, a new database connection is obtained and the SQL runs on that new connection

Video tutorial: Spring basic principle analysis

When does @Transactional fail

Because Spring transactions are implemented with proxies, a method annotated with @Transactional only takes effect when it is invoked through the proxy object. If the method is invoked directly on the target object, for example via an internal this call, @Transactional does not take effect.

Cglib proxies are implemented by generating a subclass, and a subclass cannot override the private methods of its parent class, so a private method annotated with @Transactional cannot be proxied, and @Transactional fails.
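The self-invocation pitfall can be reproduced with a plain JDK dynamic proxy standing in for Spring's transaction proxy; the counter plays the role of “transaction opened”. All names here are made up for the sketch:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfInvocationDemo {
    interface OrderService {
        void placeOrder();
        void audit();
    }

    static class OrderServiceImpl implements OrderService {
        public void placeOrder() {
            audit(); // internal call goes through `this`, NOT the proxy
        }
        public void audit() { }
    }

    // Returns how many times the "transaction advice" actually ran
    static int runDemo() {
        AtomicInteger txCount = new AtomicInteger();
        OrderService target = new OrderServiceImpl();
        InvocationHandler advice = (proxy, method, args) -> {
            txCount.incrementAndGet(); // stands in for "open a transaction"
            return method.invoke(target, args);
        };
        OrderService proxied = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[]{OrderService.class}, advice);

        proxied.placeOrder(); // advice runs for placeOrder only,
                              // not for the self-invoked audit()
        return txCount.get();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 1, not 2
    }
}
```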

How does Dubbo interact with the system

Dubbo supports many protocols, such as the default Dubbo protocol, HTTP, REST, and so on. Their underlying implementations differ: for example, the Dubbo protocol uses Netty by default (Mina can also be used), while the HTTP protocol can run on Tomcat or Jetty.

When a service consumer calls a service, it assembles the invoked interface information, the current method, and the method's input parameters into an Invocation object. Depending on the protocol, this object is organized and transmitted to the service provider in different ways. The provider receives the object, finds the corresponding service implementation, executes the method via reflection, and sends the result back to the consumer over the network.

Of course, Dubbo does a lot of other design during this call, such as service fault tolerance, load balancing, Filter mechanisms, dynamic routing mechanisms, and so on, to allow Dubbo to handle more requirements in the enterprise.

Dubbo’s load balancing strategy

Dubbo currently supports:

  1. Weighted round-robin algorithm
  2. Weighted random algorithm
  3. Consistent hashing algorithm
  4. Least active calls algorithm

Production-level load balancing algorithm in detail

What other framework source code have you read? Introduce one you are familiar with

This question is fairly open-ended. You could talk about JDK source code such as HashMap or the thread pool, or the source of Mybatis, Spring Boot, Spring Cloud, a message queue, or other frameworks and middleware

Combat video: Dubbo framework source code analysis

Alibaba, Round 2

What changed in HashMap's underlying implementation from JDK 1.7 to JDK 1.8?

  1. JDK 1.7 uses array + linked list; JDK 1.8 adds the red-black tree (array + linked list + red-black tree), which improves HashMap's overall insertion and query efficiency
  2. JDK 1.7 uses head insertion for the linked list, while JDK 1.8 uses tail insertion. In 1.8, inserting a key-value pair requires traversing the list anyway to count its elements (to decide on treeification), so tail insertion is used directly
  3. In 1.7 the hash algorithm is relatively complex, with various right shifts and XOR operations; its purpose is to improve hash distribution and thus HashMap's overall efficiency. In 1.8 it is simplified, because the red-black tree now bounds worst-case lookups, so the hash algorithm can be simpler and save CPU resources

What changed in the Java virtual machine between JDK 1.7 and JDK 1.8?

JDK 1.7 has a permanent generation; JDK 1.8 removes it and replaces it with the metaspace, whose memory lies not inside the virtual machine but in native (local) memory. Both the permanent generation and the metaspace are concrete implementations of the method area. One reason for the change is that the size of the method area is hard to determine in advance: the amount of class information to be stored, for example, is usually unknown. If the area is too small it easily overflows; if too large, it takes up too much of the virtual machine's memory. After the move to native memory, the method area no longer competes for the VM's own memory.

How is AOP implemented and where is AOP used in the project

AOP is implemented with dynamic proxy technology, such as JDK dynamic proxies or Cglib dynamic proxies. With a dynamic proxy, a proxy object can be generated for a class, and when a method is called on the proxy object, the method's execution can be controlled: for example, print the execution time, execute the method, and print the execution time again after it completes. In projects, features such as transactions, permission control, and method-execution-time logging are all implemented with AOP; anything that needs uniform handling around a set of methods can be done with AOP, and AOP keeps this logic non-invasive to the business code.
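A minimal sketch of the “print execution time” idea using a JDK dynamic proxy (Spring's actual AOP machinery is more involved; the interface and method names here are illustrative):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class TimingAspectDemo {
    interface Greeter {
        String greet(String name);
    }

    // Wrap any Greeter in a proxy that logs each method's execution time
    static Greeter withTiming(Greeter target) {
        InvocationHandler timing = (proxy, method, args) -> {
            long start = System.nanoTime();                 // "before" advice
            Object result = method.invoke(target, args);    // the real call
            System.out.println(method.getName() + " took "
                    + (System.nanoTime() - start) + " ns"); // "after" advice
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, timing);
    }

    public static void main(String[] args) {
        Greeter proxied = withTiming(name -> "hi " + name);
        System.out.println(proxied.greet("world")); // hi world (plus a timing line)
    }
}
```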

The role of the post-processor in Spring

Post-processors in Spring fall into two kinds: BeanFactory post-processors and Bean post-processors. They are very important mechanisms in Spring's underlying architecture and are also extension points for developers. A BeanFactory post-processor (BeanFactoryPostProcessor) processes the BeanFactory: during startup, Spring creates a BeanFactory instance and then applies the BeanFactory post-processors to it; Spring's component scanning, for example, is implemented this way. A Bean post-processor (BeanPostProcessor) is similar: while creating a Bean, Spring first instantiates an object and then applies the Bean post-processors to that instance. Dependency injection, for example, is implemented by a Bean post-processor that assigns values to the instance's @Autowired fields. AOP is also implemented by a Bean post-processor, which decides, based on the original instance, whether AOP is needed and, if so, creates a dynamic proxy around it.

Talk about common SpringBoot annotations and their implementation

  1. @SpringBootApplication: identifies a SpringBoot application and is actually a combination of three other annotations:
    1. @SpringBootConfiguration: essentially @Configuration, indicating that the startup class is also a configuration class
    2. @EnableAutoConfiguration: imports a Selector into the Spring container that loads the auto-configuration classes defined in the spring.factories files on the classpath and registers them as configuration beans
    3. @ComponentScan: defines the scan path; by default no explicit path is configured, so SpringBoot scans the package where the startup class resides
  2. @Bean: used to define beans, similar to the <bean> tag in XML. When Spring starts, it parses methods annotated with @Bean, uses the method name as the beanName, and executes the method to obtain the bean object
  3. @Controller, @Service, @ResponseBody, @Autowired

Spring loop dependency and AOP underlying principles

Tell me what you know about distributed lock implementation

The essence of the problem that distributed locks solve is mutually exclusive access to shared resources by threads distributed across multiple machines. This principle can be implemented in many ways:

  1. Based on MySQL: threads in the distributed environment connect to the same database and use database row locks to achieve mutual exclusion. However, acquiring and releasing locks in MySQL is relatively slow, so this is not well suited to real production environments
  2. Based on Zookeeper: Zookeeper keeps its data in memory, so it performs better than MySQL and suits production use. Zookeeper's sequential nodes, ephemeral nodes, and Watch mechanism make it straightforward to implement a distributed lock
  3. Based on Redis: Redis data is also in memory; features such as publish/subscribe, key expiration, and Lua scripts also make it possible to implement a distributed lock well

Redis data structure and usage scenarios

Redis data structures are:

  1. String: usable for the simplest data caching; it can cache a plain string or a JSON-formatted string. Redis distributed locks use this structure, and it can also implement counters, Session sharing, and distributed IDs
  2. Hash: stores key-value pairs and is well suited to storing objects
  3. List: Redis lists can act as either a stack or a queue depending on the commands used; they can cache message-stream data such as official-account or Weibo feeds
  4. Set: similar to a list in that it stores multiple elements, but without duplicates; sets support intersection, union, and difference operations, which enables features such as mutual follows and likes
  5. Sorted set: unlike a plain set, its elements are ordered by score, which makes it suitable for leaderboards

Redis cluster strategy

Redis provides three clustering strategies:

  1. Master-slave mode: this mode is simple. The master can read and write and synchronizes data to the slaves; the client connects directly to the master or a slave. But if the master or a slave goes down, the client has to switch IPs manually. This mode is also hard to scale: the amount of data the whole cluster can store is capped by one machine's memory, so it cannot support very large data volumes
  2. Sentinel mode: adds sentinel nodes on top of master-slave replication. When the master goes down, a sentinel detects it and promotes one of the slaves to master. The sentinels themselves can be clustered, so if one sentinel goes down the others keep working. This mode gives the Redis cluster better high availability, but still does not solve the capacity ceiling
  3. Cluster mode: the most widely used mode. It supports multiple masters, each of which can have multiple slaves. Keys are assigned to hash slots, so different keys are spread across different master nodes, letting the whole cluster hold a much larger data volume. If a master goes down, a new master is elected from among its slaves

Of these three modes: if the amount of data Redis needs to store is small, the sentinel mode can be chosen; if a large and continuously growing amount of data must be stored, choose the Cluster mode.

In a MySQL database, when is an index that has been set not used?

  1. There are no leftmost prefixes
  2. Fields are implicitly cast to data types
  3. When the optimizer estimates that an index scan would be less efficient than a full table scan

Actual combat video: Redis underlying data structure analysis

How does Innodb implement transactions

Innodb implements transactions through the Buffer Pool, Log Buffer, redo log, and undo log.

  1. When Innodb receives an UPDATE statement, it will first find the page where the data resides based on the criteria and cache the page in the Buffer Pool
  2. Execute the UPDATE statement to modify the data in the Buffer Pool, that is, the data in memory
  3. Generate a RedoLog object for the UPDATE statement and store it in LogBuffer
  4. Generate undolog logs for update statements for transaction rollback
  5. If the transaction commits, then the RedoLog object is persisted, and there are subsequent mechanisms to persist the modified data pages in the Buffer Pool to disk
  6. If the transaction is rolled back, the undolog log is used for the rollback

Talk about your most fulfilling projects

  1. What does the project do
  2. What technology was used
  3. Your role in the project
  4. What has been gained

My most challenging projects and difficulties

  1. What technology was used to solve what project difficulties
  2. What project functionality was optimized using what techniques
  3. How much cost was saved by using what technology

Video: How to pick the right company for you

JD.com, Round 1

What design patterns have you encountered?

I have encountered some design patterns while learning the underlying source code of some frameworks or middleware:

  1. Proxy pattern: Mybatis uses JDK dynamic proxies to generate Mapper proxy objects, and executing a proxy object's method executes the SQL; AOP in Spring and the underlying implementation of the @Configuration annotation also use the proxy pattern
  2. Chain of Responsibility pattern: The Pipeline implementation in Tomcat and the Filter mechanism in Dubbo use the chain of responsibility pattern
  3. Factory pattern: Spring’s BeanFactory is an implementation of the factory pattern
  4. Adapter pattern: The adapter pattern is used in Spring’s Bean destruction lifecycle to accommodate the execution of various Bean destruction logic
  5. Facade pattern: Tomcat uses the facade pattern between Request and RequestFacade
  6. Template method pattern: Spring's refresh method provides hook methods for subclasses to override, using the template method pattern

How can Java deadlocks be avoided?

There are several reasons for deadlocks:

  1. A resource can only be used by one thread at a time
  2. A thread that blocks waiting for a resource does not release the occupied resource
  3. Resources that a thread has acquired cannot be forcibly taken away before they are used up
  4. Several threads form a circular wait for each other's resources

These are the four conditions that must all hold for a deadlock to occur; breaking any one of them avoids deadlock. The first three are inherent to how locks work, so in practice you break the fourth: avoid circular waiting on locks.

During development:

  1. Pay attention to the order in which each thread is locked. Ensure that each thread is locked in the same order
  2. Pay attention to lock timeouts: set a timeout so that a lock attempt gives up after the specified time
  3. Pay attention to deadlock checking, which is a preventive mechanism to ensure that deadlocks are detected and resolved in the first place
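A sketch of point 1, consistent lock ordering, using two illustrative locks:

```java
public class LockOrderingDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Every thread acquires LOCK_A before LOCK_B, so a circular wait
    // (thread 1 holds A and wants B while thread 2 holds B and wants A)
    // is impossible
    static void transfer() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                // critical section touching both resources
            }
        }
    }

    static boolean runBoth() throws InterruptedException {
        Thread t1 = new Thread(LockOrderingDemo::transfer);
        Thread t2 = new Thread(LockOrderingDemo::transfer);
        t1.start();
        t2.start();
        t1.join(1000);
        t2.join(1000);
        return !t1.isAlive() && !t2.isAlive(); // both finished: no deadlock
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("no deadlock: " + runBoth()); // no deadlock: true
    }
}
```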

Deep copy and shallow copy

Deep copy and shallow copy refer to the copy of an object. There are two types of properties in an object, one is the basic data type, and the other is the reference to the instance object.

  1. A shallow copy copies only the values of primitive fields and the reference addresses of object fields; it does not copy the objects those references point to, so the object fields of the copy and of the original point to the same objects
  2. Deep copy refers to copying both the values of the basic data types and the objects to which the reference address of the instance object points. The internal attributes of the deeply copied object point to different objects
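A minimal sketch of the difference, using Object.clone() and made-up Person/Address classes:

```java
public class CopyDemo {
    static class Address implements Cloneable {
        String city;
        Address(String city) { this.city = city; }
        protected Address clone() throws CloneNotSupportedException {
            return (Address) super.clone();
        }
    }

    static class Person implements Cloneable {
        int age;         // primitive: always copied by value
        Address address; // reference: shared by a shallow copy

        Person(int age, Address address) { this.age = age; this.address = address; }

        // Shallow copy: Object.clone copies field values, so the clone's
        // address field points at the SAME Address object
        Person shallowCopy() throws CloneNotSupportedException {
            return (Person) super.clone();
        }

        // Deep copy: also clone the referenced object
        Person deepCopy() throws CloneNotSupportedException {
            Person p = (Person) super.clone();
            p.address = address.clone();
            return p;
        }
    }

    public static void main(String[] args) throws CloneNotSupportedException {
        Person p = new Person(30, new Address("Hangzhou"));
        System.out.println(p.shallowCopy().address == p.address); // true
        System.out.println(p.deepCopy().address == p.address);    // false
    }
}
```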

What happens if the thread pool queue is full when you submit a task

  1. If an unbounded queue is used, tasks can simply keep being submitted; the queue never fills up (though memory can eventually be exhausted)
  2. If a bounded queue is used and the queue is full when a task is submitted, a new non-core thread is created as long as the number of threads has not reached the maximum; if the maximum has been reached, the rejection policy is applied and the task is rejected
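The bounded-queue behavior in point 2 can be demonstrated with ThreadPoolExecutor directly (the pool sizes and the latch are just for the demo):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    // Returns the number of rejected tasks (expected: 1)
    static int run() {
        // 1 core thread, at most 2 threads, bounded queue of capacity 1
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };

        pool.execute(blocker); // runs on the core thread
        pool.execute(blocker); // sits in the queue
        pool.execute(blocker); // queue full -> a non-core thread is created

        int rejected = 0;
        try {
            pool.execute(blocker); // queue full AND thread count == max
        } catch (RejectedExecutionException e) {
            rejected++;            // the default AbortPolicy throws
        }
        release.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("rejected tasks: " + run()); // rejected tasks: 1
    }
}
```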

Talk about the expansion mechanism of ConcurrentHashMap

Version 1.7

  1. Version 1.7 of ConcurrentHashMap is implemented based on Segment segments
  2. Each Segment is essentially a small HashMap
  3. Each Segment is expanded internally, similar to the expansion logic of a HashMap
  4. Create a new array, and then transfer the elements to the new array
  5. Capacity expansion is also judged separately within each Segment to determine whether the threshold is exceeded

Version 1.8

  1. ConcurrentHashMap in version 1.8 is no longer implemented based on segments
  2. When a thread performs put, if ConcurrentHashMap is being expanded, the thread expands the capacity together
  3. If a thread is not expanding capacity when it is put, it adds key-value to ConcurrentHashMap and checks whether the value exceeds the threshold. If the value exceeds the threshold, the thread expands capacity
  4. ConcurrentHashMap supports simultaneous expansion of multiple threads
  5. Also generate a new array before capacity expansion
  6. For element transfer, the original array is first divided into groups, and the groups are assigned to different threads; each thread is responsible for transferring one or more groups of elements

ConcurrentHashMap source code analysis and comparison

Are beans thread-safe in Spring?

Spring itself does not do thread-safe processing for beans, so:

  1. If the Bean is stateless, then the Bean is thread-safe
  2. If the Bean is stateful, then the Bean is not thread-safe

Additionally, whether a Bean is thread-safe has nothing to do with its scope. The scope only determines the Bean's life cycle; within that life cycle, a Bean is just an ordinary object.

Tell me about the basic Linux commands you use

  1. CRUD operations on files and directories (create, delete, view, modify)
  2. Firewall correlation
  3. ssh/scp
  4. Download, decompress, and install software
  5. Modify the permissions

Difference between Package and Install in Maven

  1. Package packages the project into a Jar or War file
  2. Install additionally installs that Jar or War into the local Maven repository

Project and responsible module

Learn more about the core modules, the core functions of the business and the technologies used in the projects you are currently working on

SpringCloud component features, differences from Dubbo

  1. Eureka: Registry for automatic registration and discovery of services
  2. Ribbon: Load balancing component for load balancing when a consumer invokes a service
  3. Feign: Declarative service invocation client based on interface to make invocation easier
  4. Hystrix: Circuit breaker, responsible for service fault tolerance
  5. Zuul: service gateway, which can be used for service routing, service degradation, and load balancing
  6. Nacos: Distributed configuration center and registry
  7. Sentinel: service circuit breaking, degradation, and rate limiting
  8. Seata: Distributed transactions
  9. Spring Cloud Config: Distributed configuration center
  10. Spring Cloud Bus: Message Bus
  11. etc.

Video tutorial: Spring Cloud in a nutshell

Spring Cloud is a microservice framework that provides many components for the microservice domain, whereas Dubbo started as an RPC framework whose core concern is the service call itself. Spring Cloud is large and comprehensive, while Dubbo focuses on service invocation; Dubbo is not as all-encompassing as Spring Cloud, but its invocation performance is higher. The two are not opposites, however, and can be used together.

JD.com, Round 2

Talk about the class loader parent delegate model

There are three default class loaders in the JVM:

  1. BootstrapClassLoader
  2. ExtClassLoader
  3. AppClassLoader

The parent loader of AppClassLoader is ExtClassLoader, and the parent loader of ExtClassLoader is BootstrapClassLoader.

When loading a class, the JVM calls AppClassLoader's loadClass method, but AppClassLoader first delegates to ExtClassLoader's loadClass, which in turn delegates to BootstrapClassLoader. If BootstrapClassLoader can load the class, loading succeeds there; if not, ExtClassLoader tries to load it itself; and if that also fails, AppClassLoader finally loads the class.

So parent delegation means that when the JVM loads a class, the request is delegated upward to Ext and Bootstrap first, and a loader only loads the class itself if its parents could not.
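The parent chain can be inspected directly. Note that on JDK 9+ the extension loader is replaced by the platform class loader, and the bootstrap loader shows up as null:

```java
public class LoaderChainDemo {
    public static void main(String[] args) {
        // The application class loader loads classes from the classpath
        ClassLoader app = LoaderChainDemo.class.getClassLoader();
        // Its parent is the extension loader (the platform loader on JDK 9+)
        ClassLoader ext = app.getParent();
        // The bootstrap loader is implemented natively and appears as null
        System.out.println(ext.getParent());               // null
        System.out.println(String.class.getClassLoader()); // null: loaded by bootstrap
    }
}
```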

The difference between extends and super in generics

  1. <? extends T> represents any subtype of T, including T itself
  2. <? super T> represents any supertype of T, including T itself
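A sketch of the classic PECS usage (“producer extends, consumer super”); the copy method is written for this example:

```java
import java.util.ArrayList;
import java.util.List;

public class PecsDemo {
    // src is a producer: read from it with "? extends T" (any subtype of T)
    // dst is a consumer: write into it with "? super T" (any supertype of T)
    static <T> void copy(List<? super T> dst, List<? extends T> src) {
        for (T item : src) {
            dst.add(item);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        List<Number> numbers = new ArrayList<>();
        // Integer extends Number, so ints can produce and numbers can consume
        copy(numbers, ints);
        System.out.println(numbers); // [1, 2, 3]
    }
}
```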

Three elements of concurrent programming?

  1. Atomicity: An indivisible operation in which multiple steps are guaranteed to succeed or fail simultaneously
  2. Orderliness: The order in which the program is executed is consistent with the order in which the code is executed
  3. Visibility: changes made by one thread to a shared variable are immediately visible to other threads
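Visibility (point 3) can be sketched with a volatile flag; without volatile, the reader thread below might never observe the write:

```java
public class VisibilityDemo {
    // volatile guarantees that a write by one thread is visible to others
    private static volatile boolean stop = false;

    static boolean run() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // spin until the writer thread's update becomes visible
            }
        });
        reader.start();
        Thread.sleep(50);
        stop = true;              // the write the other thread must observe
        reader.join(2000);
        return !reader.isAlive(); // true: the reader saw the update and exited
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader observed the write: " + run());
    }
}
```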

What design patterns Spring uses

Brief introduction to CAP Theory

CAP theory is a very important theory in the distributed field, and many distributed middleware need to comply with this theory in the implementation.

  1. C stands for consistency: the data in the distributed system is consistent across nodes
  2. A stands for availability: the distributed system as a whole remains available
  3. P stands for partition tolerance: the distributed system keeps working when network partitions occur

CAP theory states that a distributed system cannot guarantee C, A, and P at the same time; since partition tolerance is required, a distributed system must choose either CP or AP. In other words, consistency and availability can only take one: if you want data consistency, you must sacrifice some system availability, and if you need high system availability, you must sacrifice data consistency, especially strong consistency.

Because CAP theory is too strict, the BASE theory is more commonly used in real production environments. BASE means a distributed system does not need to guarantee strong consistency of data as long as it reaches eventual consistency, and it does not need to guarantee constant availability, only basic availability.

Depth and breadth traversal of the graph

  1. Depth-first traversal of a graph starts at one node and follows edges as deep as possible; when no unvisited node can be reached, it backtracks to the previous level and tries other nodes
  2. Breadth-first traversal of a graph starts from a node, visits all nodes in the first layer, then all nodes in the second layer, and so on until the last layer is traversed
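The two traversals can be sketched on a small adjacency-list graph (the example graph and class names are my own):

```java
import java.util.*;

public class GraphTraversal {
    static List<Integer> dfs(Map<Integer, List<Integer>> g, int start) {
        List<Integer> order = new ArrayList<>();
        dfsHelper(g, start, new HashSet<>(), order);
        return order;
    }

    static void dfsHelper(Map<Integer, List<Integer>> g, int node,
                          Set<Integer> visited, List<Integer> order) {
        if (!visited.add(node)) return;         // backtrack when already visited
        order.add(node);
        for (int next : g.getOrDefault(node, List.of())) {
            dfsHelper(g, next, visited, order); // go as deep as possible first
        }
    }

    static List<Integer> bfs(Map<Integer, List<Integer>> g, int start) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> visited = new HashSet<>(Set.of(start));
        Deque<Integer> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {              // visit layer by layer
            int node = queue.poll();
            order.add(node);
            for (int next : g.getOrDefault(node, List.of())) {
                if (visited.add(next)) queue.add(next);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // 1 -> {2, 3}, 2 -> {4}, 3 -> {4}
        Map<Integer, List<Integer>> g = Map.of(
                1, List.of(2, 3),
                2, List.of(4),
                3, List.of(4));
        System.out.println(dfs(g, 1)); // [1, 2, 4, 3]
        System.out.println(bfs(g, 1)); // [1, 2, 3, 4]
    }
}
```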

Quicksort algorithm

The quicksort algorithm is based on divide and conquer. The basic idea is:

  1. Take the first number in the sequence as the base number
  2. Place all numbers greater than the base number to its right and all numbers less than the base number to its left
  3. Then repeat the second step for the left and right parts until there is only one number in each interval

Java version algorithm implementation

public class QuickSort {
    public static void quickSort(int[] arr, int low, int high) {
        if (low >= high) {
            return;
        }
        int i = low;
        int j = high;
        // temp is the pivot (base number)
        int temp = arr[low];

        while (i < j) {
            // Scan from the right for a value smaller than the pivot
            while (temp <= arr[j] && i < j) {
                j--;
            }
            // Scan from the left for a value greater than the pivot
            while (temp >= arr[i] && i < j) {
                i++;
            }
            // Swap the two out-of-place values
            if (i < j) {
                int t = arr[j];
                arr[j] = arr[i];
                arr[i] = t;
            }
        }
        // Finally put the pivot into its place (i and j are equal here)
        arr[low] = arr[i];
        arr[i] = temp;
        // Recursively sort the left half
        quickSort(arr, low, j - 1);
        // Recursively sort the right half
        quickSort(arr, j + 1, high);
    }

    public static void main(String[] args) {
        int[] arr = {10, 7, 2, 4, 7, 62, 3, 4, 2, 1, 8, 9, 19};
        quickSort(arr, 0, arr.length - 1);
        for (int i = 0; i < arr.length; i++) {
            System.out.println(arr[i]);
        }
    }
}

TCP’s three handshakes and four waves

TCP is the transport-layer protocol in the seven-layer network model and is responsible for reliable data transmission. Establishing a TCP connection requires a three-way handshake:

  1. The client sends a SYN to the server
  2. When the server receives the SYN, it sends a SYN_ACK to the client
  3. After receiving the SYN_ACK, the client sends an ACK to the server

Closing a TCP connection requires a four-way wave. The process is as follows:

  1. The client sends the FIN to the server
  2. After receiving the FIN, the server sends an ACK to the client, indicating that it has received the disconnection request; the client can stop sending data, but the server may still be processing data
  3. After the server has processed all the data, it sends the FIN to the client to indicate that the server can now disconnect
  4. The client receives the FIN from the server and sends an ACK to the server, indicating that the client will also be disconnected
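The handshake itself is performed by the OS kernel; application code only triggers it. A minimal Java sketch (loopback connection, hypothetical demo class) showing where the three-way handshake and the four-way close happen:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static void main(String[] args) throws IOException {
        // The SYN / SYN+ACK / ACK exchange happens inside the kernel while
        // connect() and accept() run; application code never sees the packets.
        try (ServerSocket server = new ServerSocket(0)) {                    // port 0 = any free port
            Socket client = new Socket("localhost", server.getLocalPort()); // three-way handshake here
            Socket accepted = server.accept();
            System.out.println(client.isConnected()); // true once the handshake completes
            client.close();   // initiates the four-way close (FIN/ACK exchanges)
            accepted.close();
        }
    }
}
```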

How can message queues ensure reliable transmission of messages

Reliable transmission of messages means exactly two things: no extra messages and no missing messages.

  1. Ensuring there are no extra messages means messages cannot be duplicated: the producer must not send duplicates, and the consumer must not consume duplicates
    1. Duplicate sends are rare and hard to control, because when they happen it is largely due to the producer's own behavior; to avoid problems, the control needs to be done on the consumer side
    2. The safest mechanism is for the consumer to implement idempotence, which guarantees that even repeated consumption causes no problem; idempotence therefore also covers duplicate messages sent by the producer
  2. Ensuring there are no missing messages means messages must not be lost: every message sent by the producer must be consumable. Two aspects need to be considered
    1. When the producer sends a message, it must confirm that the broker has received and persisted it; RabbitMQ's confirm mechanism and Kafka's ack setting ensure the producer delivers messages to the broker correctly
    2. The broker should delete a message only after the consumer has confirmed consuming it, usually via a consumer ACK mechanism: the consumer sends an ACK to the broker after it has received and successfully processed the message
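Consumer-side idempotence from point 1.2 can be sketched as follows. The in-memory set of processed message IDs is a deliberate simplification (a real system would use Redis or a database unique key), and all names are illustrative:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {
    // Simplification: a real system would persist processed IDs externally
    private final Set<String> processed = ConcurrentHashMap.newKeySet();
    private int balance = 0;

    /** Returns true if the message was applied, false if it was a duplicate. */
    public boolean handle(String messageId, int amount) {
        if (!processed.add(messageId)) {
            return false;      // already consumed: ack again, but apply nothing
        }
        balance += amount;     // the actual business side effect
        return true;
    }

    public int balance() { return balance; }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        c.handle("msg-1", 100);
        c.handle("msg-1", 100);          // redelivered duplicate: ignored
        System.out.println(c.balance()); // 100, not 200
    }
}
```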

Draw a project architecture diagram that describes the module you are in

Work actively to understand the project architecture

Ant Financial first interview

What is the relationship between a binary search tree and a balanced binary tree?

A balanced binary tree is also called a balanced binary search tree and is an upgraded version of the binary search tree. A binary search tree requires that every node is greater than all nodes in its left subtree and smaller than all nodes in its right subtree; a balanced binary search tree additionally requires that, for every node, the absolute difference in height between its left and right subtrees is no more than 1.

What is the difference between strongly balanced and weakly balanced binary trees

Strongly balanced binary trees are AVL trees, and weakly balanced binary trees are what we call red-black trees.

  1. The AVL tree is stricter about balance than the red-black tree, so for the same number of nodes the height of an AVL tree is lower than that of a red-black tree
  2. A concept of node color has been added to red-black trees
  3. The rotation operation of AVL tree is more time-consuming than that of red-black tree

Why Mysql uses B+ trees

B tree features:

  1. Nodes are ordered
  2. A node can store multiple elements, and those elements are sorted

B+ tree features:

  1. It has the characteristics of a B tree
  2. There are Pointers between the leaf nodes
  3. The elements on the non-leaf node are redundant on the leaf node, that is, the leaf node stores all the elements in order

Mysql indexes use B+ trees because indexes are meant to speed up queries, and the sorted data in a B+ tree supports fast lookup. By storing many elements per node, the height of the B+ tree stays low: in Mysql, one Innodb page is one B+ tree node, and an Innodb page is 16KB by default, so a three-level B+ tree can generally store about 20 million rows of data. Because the leaf nodes store all the data in sorted order and are connected by pointers, the B+ tree also supports SQL operations such as full table scans and range queries well.
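The "about 20 million rows" figure can be reproduced with rough arithmetic. A sketch under common illustrative assumptions (8-byte bigint primary key plus a 6-byte page pointer per internal entry, roughly 1KB per row; these numbers are not from the article):

```java
public class BPlusTreeCapacity {
    public static long estimateRows() {
        long pageSize = 16 * 1024;                // default Innodb page: 16KB
        long internalEntry = 8 + 6;               // bigint key + child-page pointer
        long fanout = pageSize / internalEntry;   // ≈ 1170 children per internal node
        long rowsPerLeaf = pageSize / 1024;       // ≈ 16 rows per leaf page
        return fanout * fanout * rowsPerLeaf;     // 3 levels: root, internal, leaf
    }

    public static void main(String[] args) {
        System.out.println(estimateRows()); // 21902400, i.e. about 20 million rows
    }
}
```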

Difference between epoll and poll

  1. The select model uses an array to store socket file descriptors; its capacity is fixed, and it must poll all descriptors to determine whether an IO event has occurred
  2. The poll model uses a linked list to store socket file descriptors; its capacity is not fixed, but it still needs polling to determine whether an IO event has occurred
  3. The epoll model is completely different: epoll is an event-notification model, and the application performs IO operations only when an IO event actually occurs

What is the blocking queue used by FixedThreadPool

Internally, a thread pool is implemented with a queue plus threads. When we use a thread pool to execute tasks:

  1. If the number of threads in the thread pool is less than corePoolSize at this point, new threads are created to handle the added tasks even if all threads in the thread pool are idle.
  2. If the number of threads in the thread pool is equal to corePoolSize, but the buffer queue workQueue is not full, then the task is put into the buffer queue.
  3. If the number of threads in the pool is greater than or equal to corePoolSize, the buffer queue is full, and the number of threads in the pool is less than maximumPoolSize, a new thread is created to handle the added task.
  4. If the number of threads in the pool is greater than corePoolSize, the buffer queue workQueue is full, and the number of threads in the pool is equal to maximumPoolSize, the task is processed using the policy specified by the handler.
  5. When the number of threads in the thread pool is greater than corePoolSize, if a thread is idle for longer than keepAliveTime, the thread is terminated. In this way, the thread pool can dynamically adjust the number of threads in the pool

FixedThreadPool is a fixed-size thread pool; its underlying work queue is a LinkedBlockingQueue, an unbounded blocking queue.
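This is visible from what `Executors.newFixedThreadPool(n)` constructs in the JDK: core and maximum pool size are equal, so rules 3 and 4 above never fire. A sketch (the demo class name is mine):

```java
import java.util.concurrent.*;

public class FixedPoolDemo {
    public static void main(String[] args) throws Exception {
        // Equivalent to Executors.newFixedThreadPool(2):
        ExecutorService pool = new ThreadPoolExecutor(
                2, 2,                          // corePoolSize == maximumPoolSize
                0L, TimeUnit.MILLISECONDS,     // idle threads never time out
                new LinkedBlockingQueue<>());  // unbounded queue never fills up

        Future<Integer> f = pool.submit(() -> 1 + 1);
        System.out.println(f.get()); // 2
        pool.shutdown();
    }
}
```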

Difference between synchronized and ReentrantLock

  1. synchronized is a keyword, while ReentrantLock is a class
  2. synchronized acquires and releases the lock automatically; ReentrantLock requires the programmer to lock and unlock manually
  3. synchronized is a JVM-level lock, while ReentrantLock is an API-level lock
  4. synchronized is an unfair lock; ReentrantLock can be either a fair or an unfair lock
  5. synchronized locks an object and stores the lock information in the object header; ReentrantLock tracks the lock state through an int state field in code
  6. synchronized has a lock-upgrade process underneath

Introduce synchronized's spin lock, biased lock, lightweight lock, and heavyweight lock, and how they relate

  1. Biased lock: the object header of the lock object records the ID of the thread currently holding the lock; if that same thread requests the lock again, it can acquire it directly
  2. Lightweight lock: upgraded from a biased lock. When one thread holds the lock it is a biased lock; if a second thread then competes for the lock, the biased lock is upgraded to a lightweight lock. It is called lightweight to distinguish it from the heavyweight lock: underneath it is implemented by spinning and does not block the thread
  3. Heavyweight lock: if a thread spins too many times without acquiring the lock, the lock is upgraded to a heavyweight lock, which does block the thread
  4. Spin lock: while acquiring the lock, the thread is neither blocked nor woken up; blocking and waking both require the operating system and are time-consuming. Instead, the thread uses CAS in a loop to try to set an expected flag: if it fails, it keeps looping, and if it succeeds, it has acquired the lock. The thread stays running throughout, which uses relatively few operating system resources and is comparatively lightweight
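The CAS loop in point 4 can be sketched as a toy spin lock. This is an illustrative implementation, not the JVM's internal one; all names are mine:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Keep trying to CAS false -> true: burns CPU while waiting,
        // but avoids an OS-level block/wake context switch.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();  // JDK 9+ hint that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public boolean isLocked() {
        return locked.get();
    }

    public static void main(String[] args) {
        SpinLock l = new SpinLock();
        l.lock();
        System.out.println(l.isLocked()); // true
        l.unlock();
        System.out.println(l.isLocked()); // false
    }
}
```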

How does HTTPS ensure secure transmission

HTTPS uses symmetric encryption, asymmetric encryption, and digital certificates to ensure secure data transmission.

  1. Before the client sends data to the server, a TCP connection must be established. After the TCP connection is established, the server first sends its public key to the client; the client uses the public key to encrypt data, and the server decrypts it with its private key. This is transmitting data with asymmetric encryption
  2. However, asymmetric encryption is slower than symmetric encryption, so it should not be used to transmit the request data directly. Instead, asymmetric encryption is used to transmit the symmetric encryption key, and then symmetric encryption is used to transmit the request data
  3. Still, asymmetric plus symmetric encryption alone cannot guarantee absolutely secure transmission, because the public key may be intercepted while the server is sending it to the client
  4. So, to transmit the public key securely, a digital certificate is needed; digital certificates are publicly trusted and recognized by everyone. When the server wants to send its public key to the client, the public key and the server's information are run through a hash algorithm to generate a message digest, and the digest is then encrypted with the private key issued by the certificate authority to produce a digital signature. The hash algorithm information combined with the digital signature forms the digital certificate, which is sent to the client. After receiving the certificate, the client verifies it with the trusted public key built into the system and thereby obtains the server's public key for asymmetric encryption
  5. Even if a middleman intercepts the server's digital certificate during this process, and even though he can decrypt the asymmetrically encrypted public key, it is hard for him to forge a certificate the client will accept: the digital certificates embedded in the client are publicly trusted worldwide, and a website that wants to support HTTPS must apply to a certificate authority for a certificate signed with the authority's private key. For a middleman to generate a certificate the client can verify, he would also need such a signature, so the scheme is relatively safe
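The digest-then-sign step in point 4 can be sketched with the JDK's `java.security.Signature`, which hashes and signs in one call. This is a simplified illustration (real certificates are X.509 structures; the "certificate content" bytes and class name here are hypothetical):

```java
import java.security.*;

public class SignatureSketch {
    /** CA side: hash the data, then encrypt the digest with the CA private key. */
    public static byte[] sign(byte[] data, PrivateKey caPrivate) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(caPrivate);
        signer.update(data);
        return signer.sign();
    }

    /** Client side: verify with the CA public key built into the OS/browser trust store. */
    public static boolean verify(byte[] data, byte[] sig, PublicKey caPublic) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(caPublic);
        verifier.update(data);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical certificate content: server info plus its public key
        byte[] content = "example.com + server public key".getBytes();
        KeyPair caKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] signature = sign(content, caKeys.getPrivate());
        System.out.println(verify(content, signature, caKeys.getPublic())); // true

        // A tampered certificate fails verification
        byte[] forged = "evil.com + attacker public key".getBytes();
        System.out.println(verify(forged, signature, caKeys.getPublic()));  // false
    }
}
```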

Ant Financial second interview

What are the broad categories of design patterns, and familiar with which design patterns

Design patterns fall into three broad categories:

  1. Creational patterns
    1. Factory Pattern
    2. Abstract Factory Pattern
    3. Singleton Pattern
    4. Builder Pattern
    5. Prototype Pattern
  2. Structural patterns
    1. Adapter Pattern
    2. Bridge Pattern
    3. Filter Pattern (Filter, Criteria Pattern)
    4. Composite Pattern
    5. Decorator Pattern
    6. Facade Pattern
    7. Flyweight Pattern
    8. Proxy Pattern
  3. Behavioral patterns
    1. Chain of Responsibility Pattern
    2. Command Pattern
    3. Interpreter Pattern
    4. Iterator Pattern
    5. Mediator Pattern
    6. Memento Pattern
    7. Observer Pattern
    8. State Pattern
    9. Null Object Pattern
    10. Strategy Pattern
    11. Template Pattern
    12. Visitor Pattern
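As one concrete example from the creational category, a thread-safe lazy singleton with double-checked locking (a common interview follow-up; the class name is illustrative):

```java
public class Singleton {
    // volatile prevents reordering of "allocate object" and "assign reference",
    // which is what makes double-checked locking safe since JDK 5
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(Singleton.getInstance() == Singleton.getInstance()); // true
    }
}
```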

Volatile keyword, how does it guarantee visibility, order

  1. For a volatile variable, writes are flushed from the CPU cache directly back to main memory, and reads go directly to main memory, which guarantees visibility
  2. Memory barriers are inserted around reads and writes of volatile variables, and memory barriers prevent instruction reordering, which guarantees orderliness
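Visibility can be demonstrated with a reader thread spinning on a volatile flag; without `volatile`, the JIT may hoist the read and the loop can run forever. A minimal sketch (class and method names are mine):

```java
public class VolatileVisibility {
    // volatile forces reads/writes of `running` through main memory,
    // so the write in stop() becomes visible to the reader thread promptly
    private volatile boolean running = true;

    public void stop() { running = false; }

    public Thread startReader() {
        Thread reader = new Thread(() -> {
            while (running) {
                Thread.onSpinWait();  // spin until the write is visible
            }
        });
        reader.start();
        return reader;
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileVisibility demo = new VolatileVisibility();
        Thread reader = demo.startReader();
        Thread.sleep(50);
        demo.stop();                  // without volatile this might never be seen
        reader.join(1000);
        System.out.println(reader.isAlive()); // false: the loop exited
    }
}
```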

Java’s memory structure: what parts is the heap divided into, and at what default age does an object enter the old generation

  1. The young generation
    1. Eden District (8)
    2. From Survivor zone (1)
    3. To Survivor zone (1)
  2. The old generation

By default, an object enters the old generation when its age reaches 15.

Mysql locks

Classification by lock granularity:

  1. Row lock: Locks a row of data with minimum granularity and high concurrency
  2. Table lock: Locks the entire table. The lock granularity is the largest and the concurrency is low
  3. Gap lock: An interval is locked

It can also be divided into:

  1. Shared lock: also known as read lock, a transaction to a row of data read lock, other transactions can also read, but not write
  2. Exclusive lock: also known as write lock, a transaction to a row of data write lock, other transactions can not read, can not write

It can also be divided into:

  1. Optimistic locking: a row is not locked, but a version number
  2. Pessimistic locks: The above row locks, table locks, and so on are pessimistic locks

Gap locks are used in the implementation of transaction isolation levels: the repeatable-read level relies on them to solve phantom reads.

How does ConcurrentHashMap keep threads safe? What changed in JDK 1.8

Talk about OOM and how to deal with it. Have you used a log analysis tool

How to create a Mysql index

Introduce the highlights of your projects

How high is the concurrency of the project and what is the Redis bottleneck

How did you deal with online problems in the project? What impressed you most


Well, this article ends here. Interview questions from other companies are below; take them if you need them. I wish everyone can find the job they are most satisfied with!

  • 2021 First-line Internet company interview real questions and interviews