Network

86. How to achieve cross-domain?

Mode 1: Cross-domain via image ping or script tags

Image pings are often used to track how many times a user clicks on a page or is shown a dynamic ad. Script tags can load data from other origins, which is what JSONP relies on.

Mode 2: JSONP cross-domain

JSONP (JSON with Padding) is a “usage pattern” of JSON that allows web pages to request data from other domains. While the XMLHttpRequest object is restricted by the same-origin policy, script tags are not, and JSONP exploits this by loading the cross-domain data as a script that calls a predefined callback function.

Disadvantages:

  • Only GET requests can be used
  • It cannot register event listeners such as success and error, so it is not easy to determine whether a JSONP request failed
  • JSONP executes code loaded from another domain and is vulnerable to cross-site request forgery, so its security cannot be guaranteed

Mode 3: CORS

Cross-Origin Resource Sharing (CORS) is a browser technology specification. It defines a way for web services to deliver data to sandboxed scripts from a different domain, working around the browser's same-origin policy while keeping cross-domain data transfer secure. Modern browsers use CORS in API containers such as XMLHttpRequest to reduce the risks of cross-origin HTTP requests. Unlike JSONP, CORS supports HTTP methods other than GET. The server generally needs to add one or more of the following response headers:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400

By default, cross-domain requests do not carry Cookie information. To carry Cookie information, set the following parameters:

"Access-Control-Allow-Credentials": true
// Ajax is set
"withCredentials": true
Copy the code
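As a server-side illustration, here is a minimal sketch of a Java servlet filter that sets these headers (the allowed origin http://example.com is a placeholder, and the javax.servlet import paths are assumed):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // With credentials allowed, the origin must be concrete, not "*"
        response.setHeader("Access-Control-Allow-Origin", "http://example.com");
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "X-PINGOTHER, Content-Type");
        response.setHeader("Access-Control-Max-Age", "86400");
        response.setHeader("Access-Control-Allow-Credentials", "true");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {}
}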

Mode 4: Proxy

The same-origin policy is enforced only on the browser side, so it can be worked around on the server side:

DomainA client (browser) ==> DomainA server ==> DomainB server ==> DomainA client (browser)

Design patterns

88. Tell me about a design pattern you are familiar with.

Singleton pattern

Simply put, there is only one instance of the class in the application, and you cannot create another with new because the constructor is private; the instance is usually obtained through a getInstance() method.

The return value of getInstance() is a reference to the one existing object, not a new instance, so don't mistake it for multiple objects. The singleton pattern is easy to implement; just look at the demo:

public class Singleton {

    private static Singleton singleton;

    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            singleton = new Singleton();
        }
        return singleton;
    }
}

The above is the lazy style (and it is not thread-safe).

Lazy style (thread-safe)

public class Singleton {
    private static Singleton instance;
    private Singleton() {}
    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}

The enum style

public enum Singleton {
    INSTANCE;
    public void whateverMethod() {}
}

This approach is advocated by Effective Java author Josh Bloch. It not only avoids multithreaded synchronization problems but also prevents deserialization from creating new objects, an ironclad guarantee, though it can feel unfamiliar since the enum feature was only introduced in Java 1.5.
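For completeness, another common thread-safe variant not shown in the original is double-checked locking; a minimal sketch:

public class Singleton {
    // volatile prevents instruction reordering while the instance is being constructed
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check avoids locking on the hot path
            synchronized (Singleton.class) {
                if (instance == null) {          // second check inside the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

Compared with the synchronized getInstance() above, this only takes the lock while the instance is still null, so the steady-state path is lock-free.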

Decorator pattern

The existing business logic is further wrapped to add extra functionality. For example, the IO streams in Java use the decorator pattern: users can assemble them freely to get the effect they want. Suppose I want to eat a sandwich. First I need a big sausage, and I like cream too. So how do we write the code? First, we write a Food class that all the other foods inherit from. Look at the code:

public class Food {

    private String food_name;

    public Food() {}

    public Food(String food_name) {
        this.food_name = food_name;
    }

    public String make() {
        return food_name;
    }
}

The code is so simple that I won’t explain it, and then we’ll write some subclasses to inherit it:

// Bread
public class Bread extends Food {

    private Food basic_food;

    public Bread(Food basic_food) {
        this.basic_food = basic_food;
    }

    public String make() {
        return basic_food.make() + " + bread";
    }
}

// Cream
public class Cream extends Food {

    private Food basic_food;

    public Cream(Food basic_food) {
        this.basic_food = basic_food;
    }

    public String make() {
        return basic_food.make() + " + cream";
    }
}

// Vegetable
public class Vegetable extends Food {

    private Food basic_food;

    public Vegetable(Food basic_food) {
        this.basic_food = basic_food;
    }

    public String make() {
        return basic_food.make() + " + vegetables";
    }
}

Each constructor takes a parameter of type Food, and each make method adds something on top of it. If you still don't see why, just look at my Test class:

public class Test {
    public static void main(String[] args) {
        Food food = new Bread(new Vegetable(new Cream(new Food("Sausage"))));
        System.out.println(food.make());
    }
}

See the layer-by-layer wrapping? Innermost, I new a sausage; around the sausage I wrap a layer of cream; outside the cream I add a layer of vegetables; and the bread goes on the outermost layer. Very vivid, ha ha ~ this design pattern maps straight onto real life. Now let's look at the result:

Output: Sausage + cream + vegetables + bread. One sandwich is ready!
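The same layering is exactly how the JDK IO streams mentioned above work; a minimal sketch (the file name data.txt is made up):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class IoDecoratorDemo {
    public static void main(String[] args) throws IOException {
        // Each wrapper adds behavior around the stream it decorates,
        // just like Bread/Cream/Vegetable wrap a Food
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(
                        new FileInputStream("data.txt"), "UTF-8"))) {
            System.out.println(in.readLine());
        }
    }
}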

Adapter pattern

It connects two otherwise incompatible things, like a transformer in real life. Suppose a mobile phone charger needs 20V, but mains voltage is 220V; we need a transformer that converts 220V into 20V so the phone can be charged.

public class Test {
    public static void main(String[] args) {
        Phone phone = new Phone();
        VoltageAdapter adapter = new VoltageAdapter();
        phone.setAdapter(adapter);
        phone.charge();
    }
}

// Phone
class Phone {

    public static final int V = 220; // The mains voltage is 220V, a constant

    private VoltageAdapter adapter;

    // Charge
    public void charge() {
        adapter.changeVoltage();
    }

    public void setAdapter(VoltageAdapter adapter) {
        this.adapter = adapter;
    }
}

// Transformer
class VoltageAdapter {
    // Change the voltage
    public void changeVoltage() {
        System.out.println("Charging...");
        System.out.println("Original voltage: " + Phone.V + "V");
        System.out.println("Voltage after conversion: " + (Phone.V - 200) + "V");
    }
}

Factory pattern

Simple factory pattern: one abstract interface, multiple implementation classes of the interface, and one factory class used to instantiate the abstract interface

// Abstract product interface
interface Car {
    void run();

    void stop();
}

// Concrete implementation classes
class Benz implements Car {
    public void run() {
        System.out.println("Benz started...");
    }

    public void stop() {
        System.out.println("Benz stopped...");
    }
}

class Ford implements Car {
    public void run() {
        System.out.println("Ford started...");
    }

    public void stop() {
        System.out.println("Ford stopped...");
    }
}

// Factory class
class Factory {
    public static Car getCarInstance(String type) {
        Car c = null;
        if ("Benz".equals(type)) {
            c = new Benz();
        }
        if ("Ford".equals(type)) {
            c = new Ford();
        }
        return c;
    }
}

public class Test {

    public static void main(String[] args) {
        Car c = Factory.getCarInstance("Benz");
        if (c != null) {
            c.run();
            c.stop();
        } else {
            System.out.println("Can't build the car...");
        }
    }
}

Factory method pattern: there are four roles: abstract factory, concrete factory, abstract product, and concrete product. Instead of one factory class instantiating the concrete products, a subclass of the abstract factory instantiates each product

// Abstract product role
public interface Moveable {
    void run();
}

// Concrete product roles
public class Plane implements Moveable {
    @Override
    public void run() {
        System.out.println("plane....");
    }
}

public class Broom implements Moveable {
    @Override
    public void run() {
        System.out.println("broom.....");
    }
}

// Abstract factory
public abstract class VehicleFactory {
    abstract Moveable create();
}

// Concrete factories
public class PlaneFactory extends VehicleFactory {
    public Moveable create() {
        return new Plane();
    }
}

public class BroomFactory extends VehicleFactory {
    public Moveable create() {
        return new Broom();
    }
}

// Test class
public class Test {
    public static void main(String[] args) {
        VehicleFactory factory = new BroomFactory();
        Moveable m = factory.create();
        m.run();
    }
}

Abstract factory pattern: unlike the factory method pattern, where a factory produces a single product, the abstract factory pattern produces a family of products

// Abstract factory class
public abstract class AbstractFactory {
    public abstract Vehicle createVehicle();
    public abstract Weapon createWeapon();
    public abstract Food createFood();
}

// Concrete factory class; here Food, Vehicle, and Weapon are abstract classes
public class DefaultFactory extends AbstractFactory {
    @Override
    public Food createFood() {
        return new Apple();
    }
    @Override
    public Vehicle createVehicle() {
        return new Car();
    }
    @Override
    public Weapon createWeapon() {
        return new AK47();
    }
}

// Test class
public class Test {
    public static void main(String[] args) {
        AbstractFactory f = new DefaultFactory();
        Vehicle v = f.createVehicle();
        v.run();
        Weapon w = f.createWeapon();
        w.shoot();
        Food a = f.createFood();
        a.printName();
    }
}

Proxy pattern

There are two types: static proxies and dynamic proxies. Let's talk about static proxies first. I'm not going to pile up theory here; even if I did, you might not follow it. Real role, abstract role, proxy role, delegate role... it's a mess, and I can't read it either. When I was learning the proxy pattern I looked up a lot of material online, and when I opened the links they basically all analyzed which roles there were, with a lot of theory, which was very laborious to read. So let's skip all that and talk about real life instead. (Note: I'm not dismissing theoretical knowledge here; I just think theory is sometimes hard to understand. You are here to learn, not to nitpick.)

After a certain age we have to get married, and getting married is a very troublesome affair (including for those pushed into it by their parents). A wealthy family may hire a master of ceremonies to preside over the wedding so that it looks lively and stylish. And so the wedding-company business arrives: we only need to pay, and the wedding company arranges the whole wedding for us. The process looks roughly like this: the family urges marriage -> the two families agree on an auspicious wedding date -> find a wedding company -> hold the ceremony at the appointed time -> handle the follow-up after the wedding. How the wedding company plans the program and what it does after the ceremony, we know nothing about... Don't worry, this is no shady agency: we pay them, and they do the work for us. So the wedding company here is the proxy role; now you see what a proxy role is.

See the code implementation:

// Proxy interface
public interface ProxyInterface {
    // If there are other things that need to be proxied, such as eating, sleeping, or going to the toilet, you can add them too
    void marry();
    // Proxy eating: let others eat your food
    //void eat();
    // Proxy pooping: your own business, done by someone else
    //void shit();
}

This being a civilized society, I won't actually write the proxy-eating and proxy-pooping code; it would offend public decency ~~~ as long as you get the idea.

Ok, let’s look at the code of the wedding company:

public class WeddingCompany implements ProxyInterface {

    private ProxyInterface proxyInterface;

    public WeddingCompany(ProxyInterface proxyInterface) {
        this.proxyInterface = proxyInterface;
    }

    @Override
    public void marry() {
        System.out.println("We're from the wedding service.");
        System.out.println("We are making preparations for the wedding.");
        System.out.println("Rehearsal for the show...");
        System.out.println("Gift shopping...");
        System.out.println("The division of labor...");
        System.out.println("It's time to get married.");
        proxyInterface.marry();
        System.out.println("After the marriage, we need to do the follow-up, you can go home, our company will do the rest.");
    }
}

See, the wedding company has a lot to do. Now let's look at the code for the family getting married:

public class NormalHome implements ProxyInterface {
    @Override
    public void marry() {
        System.out.println("We're married.");
    }
}

By now it's quite obvious: the family only needs to get married, while the wedding company takes care of everything else; all the work before and after is done by the wedding company. I hear wedding companies are very profitable these days, and this is why: with that much work, how could they not make money?

Let’s look at the test class code:

public class Test {
    public static void main(String[] args) {
        ProxyInterface proxyInterface = new WeddingCompany(new NormalHome());
        proxyInterface.marry();
    }
}

The running result: the wedding company's preparation messages are printed first, then "We're married.", and finally the follow-up message.

Just as we expected, the result is correct. This is the static proxy. As for dynamic proxies, they involve Java reflection; there is plenty of material on the web, and I will cover them later.
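For comparison, here is a minimal sketch of a JDK dynamic proxy achieving a similar effect at runtime via reflection; it reuses the ProxyInterface and NormalHome classes above:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class DynamicProxyDemo {
    public static void main(String[] args) {
        ProxyInterface target = new NormalHome();
        // The handler plays the wedding-company role at runtime, for any interface method
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("Preparations before " + method.getName());
            Object result = method.invoke(target, methodArgs);
            System.out.println("Follow-up after " + method.getName());
            return result;
        };
        ProxyInterface proxyInterface = (ProxyInterface) Proxy.newProxyInstance(
                ProxyInterface.class.getClassLoader(),
                new Class<?>[]{ProxyInterface.class},
                handler);
        proxyInterface.marry();
    }
}

Unlike the static WeddingCompany class, the proxy class here is generated at runtime, so one handler can wrap many interfaces.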

Spring / Spring MVC

90. Why use Spring?

1. Introduction

  • Objective: To solve the complexity of enterprise application development
  • Features: Use basic Javabeans instead of EJBs, and provide more enterprise application functionality
  • Scope: Any Java application

In simple terms, Spring is a lightweight Inversion of Control (IoC) and AOP oriented container framework.

2. Lightweight

Spring is lightweight in terms of size and overhead. The full Spring framework can be distributed in a JAR file with a size of just over 1MB. And the processing overhead required by Spring is trivial. In addition, Spring is non-intrusive: Typically, objects in Spring applications do not depend on Spring specific classes.

3. Inversion of control

Spring promotes loose coupling through a technique called inversion of control (IoC). When IoC is applied, other objects that an object depends on are passed in passively, rather than being created or found by the object itself. You can think of IoC as the opposite of JNDI — instead of an object looking up a dependency from the container, the container actively passes the dependency to the object at initialization without waiting for the object to request it.

4. Aspect-oriented

Spring provides rich support for aspect-oriented programming, allowing cohesive development by separating application business logic from system-level services such as auditing and transaction management. Application objects just do what they're supposed to do, complete the business logic, and nothing more. They are not responsible for (or even aware of) other system-level concerns, such as logging or transaction support.

5. A container

Spring contains and manages the configuration and life cycle of application objects. It is a container in the sense that you can configure how each of your beans is created: based on a configurable prototype, your bean can be a single instance or a new instance generated every time one is needed, and you can configure how the beans relate to each other. However, Spring should not be confused with traditional heavyweight EJB containers, which are often large, unwieldy, and difficult to use.

6. Framework

Spring can configure and compose simple components into complex applications. In Spring, application objects are combined declaratively, typically in an XML file. Spring also provides a lot of basic functionality (transaction management, persistence framework integration, and so on), leaving the development of application logic up to you.

All of these Spring features enable you to write cleaner, more manageable, and more testable code. They also provide basic support for various modules in Spring.

91. Explain what AOP is.

AOP (Aspect-Oriented Programming) can be seen as a complement and refinement of OOP (Object-Oriented Programming). OOP introduces concepts such as encapsulation, inheritance, and polymorphism to build an object hierarchy that models a collection of common behaviors. But OOP is helpless when we need to introduce common behavior into otherwise unrelated objects. That is, OOP lets you define top-to-bottom relationships, but not left-to-right ones. Take logging as an example: logging code tends to be spread horizontally across all object hierarchies and has nothing to do with the core functionality of the objects it is spread into. The same is true for other kinds of code, such as security, exception handling, and transparent persistence. This scattering of unrelated code is called cross-cutting code, and in OOP design it leads to a great deal of duplication, which hinders reuse of the modules.

AOP, on the other hand, uses a technique called "crosscutting" to dissect the insides of wrapped objects and encapsulate the common behavior that affects multiple classes into a reusable module named an "Aspect". An aspect, simply put, encapsulates logic or responsibilities that have nothing to do with the business itself but are invoked jointly by business modules, thereby reducing duplicate code in the system, lowering the coupling between modules, and making the system easier to operate and maintain in the future. If an "object" is a hollow cylinder encapsulating the object's properties and behavior, then aspect-oriented programming is like a sharp blade that cuts these hollow cylinders open to get at the information inside; the cut surface is the so-called "aspect". Then, with a masterful hand, it restores the cut surfaces without leaving a trace.

Using the "crosscutting" technique, AOP divides a software system into two parts: core concerns and crosscutting concerns. The main flow of business processing is the core concern; the less related parts are the crosscutting concerns. One characteristic of crosscutting concerns is that they occur at multiple points of the core concern and are basically similar everywhere, for example permission authentication, logging, and transaction processing. AOP's role is to separate the various concerns in a system, splitting core concerns from crosscutting concerns. As Adam Magee, senior solution architect at Avanade, says, the core idea of AOP is to "separate the business logic in your application from the generic services that support it."

92. Explain what IOC is.

IOC stands for Inversion of Control.

The concept of IOC was first introduced in 1996 by Michael Mattson in an article exploring object-oriented frameworks. The basic idea of object-oriented design and programming has been discussed at length already, so briefly: a complex system is decomposed into cooperating objects; object classes, through encapsulation, hide their internal implementation from the outside, which reduces the complexity of the problem while remaining flexible, reusable, and extensible.

IOC theory puts forward a general idea: achieve decoupling between dependent objects with the help of a "third party", the IOC container.

With the container in place, objects A, B, C, and D are no longer coupled to each other, so that when you implement A you do not need to consider B, C, and D at all, and the dependencies between objects are reduced to a minimum. If such an IOC container can be implemented, everyone involved only needs to implement their own classes, with no relation to anyone else. What a wonderful thing for system development!

Why is inversion of control (IOC) so called? Let’s compare:

Before an IOC container is introduced into a software system, object A depends on object B: when object A is initialized or runs to a certain point, it must actively create object B or use an already created object B. Whether it creates or merely uses object B, the control lies with A itself.

After an IOC container is introduced, this situation changes completely. Because of the container, object A and object B lose direct contact: when object A runs and needs object B, the IOC container actively creates an object B and injects it where object A needs it.

The process by which object A acquires dependent object B is changed from active to passive, and control is reversed, hence the name “inversion of control”.

93. What are the main modules of Spring?

The Spring framework has so far integrated more than 20 modules. These modules are grouped into the core container, data access/integration, Web, AOP (aspect-oriented programming), instrumentation, messaging, and test modules.

94. What are the common injection methods in Spring?

Spring implements IOC (Inversion of Control) through DI (dependency injection). There are three main injection methods commonly used:

  • Constructor injection

  • Setter injection

  • Annotation-based injection
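A minimal annotation-based sketch (OrderService and OrderRepository are hypothetical names):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository
class OrderRepository {} // hypothetical dependency bean

@Service
public class OrderService {

    private final OrderRepository orderRepository;

    // Constructor injection: the container passes the dependency in when creating the bean;
    // setter injection would instead put @Autowired on a setOrderRepository(...) method
    @Autowired
    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }
}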

95. Are beans in Spring thread-safe?

The Spring container itself does not provide a thread-safety strategy for beans, so the beans in a Spring container are not inherently thread-safe; the real answer depends on the specific scope of the bean.

96. How many scopes does Spring support for beans?

When you create a Bean instance through the Spring container, you can not only complete the instantiation of the Bean instance, but also specify a specific scope for the Bean. Spring supports the following five scopes:

  • Singleton: singleton mode; there is only one instance of a bean defined with singleton scope in the entire Spring IoC container
  • Prototype: prototype mode; every time a prototype-scoped bean is fetched through the container's getBean method, a new instance is produced
  • Request: for each HTTP request, a new instance of a request-scoped bean is created, meaning each HTTP request gets its own bean instance. This scope is valid only when Spring is used in a web application
  • Session: for each HTTP session, a new instance of a session-scoped bean is created. Again, this scope is valid only when Spring is used in a web application
  • GlobalSession: for each global HTTP session, a new bean instance is created. This is typically meaningful only in a portlet context and, again, only when Spring is used in a web application

Singleton and Prototype are two commonly used scopes. For singleton-scoped beans, the same instance will be obtained each time the Bean is requested. The container is responsible for tracking the state of the Bean instance and maintaining the lifecycle behavior of the Bean instance. If a Bean is set to the Prototype scope, Spring creates a new Bean instance each time the program requests a Bean with that ID and returns it to the program. In this case, the Spring container simply creates the Bean instance using the new keyword, and once it has been created, the container does not track the instance nor maintain the state of the Bean instance.

Spring uses the singleton scope by default if a bean's scope is not specified. When Java creates an instance it must allocate memory, and when an instance is destroyed it must be garbage-collected, both of which add overhead to the system. As a result, prototype-scoped beans are expensive to create and destroy, while a singleton-scoped bean instance, once created, can be reused. Therefore, avoid setting beans to the prototype scope unless necessary.
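A minimal sketch of setting a scope with annotations (the bean class is made up):

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype") // a new instance is returned on every getBean() call; the default is "singleton"
public class ReportGenerator {
}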

97. What are the methods for Spring to automate bean assembly?

The Spring container is responsible for creating the beans in the application and coordinating the relationships between these objects by ID. As developers, we need to tell Spring which beans to create and how to assemble them together.

There are two ways to assemble beans in Spring:

  • Implicit bean discovery mechanisms and autowiring
  • Explicit configuration in Java code or XML

Of course, these methods can also be used together.

98. What are the ways in which Spring transactions are implemented?

  1. Programmatic transaction management: the only option for POJO-based applications. We call beginTransaction(), commit(), rollback(), and other transaction-management methods directly in our code; this is programmatic transaction management.
  2. Declarative transaction management based on TransactionProxyFactoryBean
  3. Declarative transaction management based on @Transactional annotations
  4. Transaction configuration based on AspectJ AOP
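A minimal sketch of the @Transactional style from item 3 (service and method names are made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    // Spring opens a transaction before the method and commits it afterwards;
    // an unchecked exception triggers a rollback by default
    @Transactional
    public void transfer(long fromId, long toId, long amount) {
        // debit and credit operations (hypothetical) run inside one transaction
    }
}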

99. What about Transaction isolation in Spring?

The transaction isolation level refers to the degree to which data changes made by one transaction are isolated from those made by another parallel transaction. When multiple transactions access the same data simultaneously, the following problems may occur if the necessary isolation mechanisms are not in place:

  • Dirty read: a transaction reads uncommitted update data from another transaction.
  • Phantom read: for example, the first transaction modifies data in a table, say "all rows" in the table, while at the same time a second transaction inserts a "new row" into the table. The user operating in the first transaction then finds, as if in an illusion, that the table still contains an unmodified row.
  • Unrepeatable read: for example, two identical SELECT statements executed within the same transaction (which itself issues no updates in between) return inconsistent results, because another transaction modified the data in the meantime. This is called an unrepeatable read.

100. What about the Spring MVC workflow?

Spring MVC runtime flow chart (figure omitted). Description of the running process:

1. The user sends a request to the server, which is captured by Spring's front controller servlet, DispatcherServlet;

2. DispatcherServlet parses the request URL to obtain the request resource identifier (URI); then, according to the URI, it calls HandlerMapping to obtain all the related objects configured for the Handler (including the Handler object and the interceptors corresponding to it), and finally returns a HandlerExecutionChain object;

3. DispatcherServlet selects an appropriate HandlerAdapter according to the obtained Handler. (Note: once the HandlerAdapter is obtained, the interceptors' preHandle(...) methods are executed at this point.)

4. Extract the model data from the request, populate the Handler's input parameters, and invoke the Handler (Controller). Depending on your configuration, Spring does some extra work for you while populating the Handler's parameters:

  • HttpMessageConverter: converts the request message (such as JSON or XML data) into an object, and converts an object into the specified response message
  • Data conversion: converts request data, such as String to Integer, Double, and so on
  • Data formatting: formats request data, for example converting a string into a formatted number or a formatted date
  • Data validation: verifies the validity of the data (length, format, etc.) and stores the validation results in BindingResult or Error

5. After Handler completes execution, a ModelAndView object is returned to DispatcherServlet.

6. According to the returned ModelAndView, select a suitable ViewResolver (it must be a ViewResolver registered in the Spring container) and return it to DispatcherServlet;

7. ViewResolver combines Model and View to render the View;

8. Return the render result to the client.

101. What components does Spring MVC have?

The core components of Spring MVC:

  1. DispatcherServlet: the front controller, which forwards requests to the specific controller classes
  2. Controller: the controller that processes requests
  3. HandlerMapping: the mapping handler, the strategy by which requests are mapped and forwarded to controllers
  4. ModelAndView: encapsulates the data returned by the service layer together with the view to render
  5. ViewResolver: the view resolver, which resolves to a concrete view
  6. Interceptors: interceptors, which intercept the requests we define and process them

102. What does @RequestMapping do?

@RequestMapping is an annotation for mapping request addresses; it can be used on a class or a method. Used on a class, the address serves as the parent path for all request-handling methods in that class.

The @RequestMapping annotation has six attributes, which we'll break into three categories below.

value and method:

  • Value: Specifies the actual address of the request, which can be the URI Template pattern (described below).
  • Method: Specifies the request method type, such as GET, POST, PUT, and DELETE.

consumes and produces:

  • Consumes: specifies the Content-Type the request must have for the method to handle it, for example application/json or text/html.
  • Produces: specifies the content type to return; the method handles the request only if the request's Accept header contains this type.

params and headers:

  • Params: Specifies that the request must contain some parameter values for this method to process.
  • Headers: Specifies that the request must contain some specified header value before the method can process the request.
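A minimal controller sketch using these attributes (paths and names are made up):

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/users") // class level: parent path for all handler methods
public class UserController {

    // Handles GET /users/ping only, and declares the returned content type
    @RequestMapping(value = "/ping", method = RequestMethod.GET,
                    produces = "application/json")
    public String ping() {
        return "{\"status\":\"ok\"}";
    }
}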

Spring Boot / Spring Cloud

104. What is Spring Boot?

The Spring framework family has many derivative frameworks, such as Spring and Spring MVC. The core of Spring is inversion of control (IOC) and dependency injection (DI). Inversion of control is not a technique but an idea: the creation of objects is handed over to the Spring container (configured, for example, in a Spring configuration file). Dependency injection means that the Spring container supplies an object in the application with the resources it depends on, such as referenced objects, constant data, and so on.

SpringBoot is a framework and a new programming convention whose purpose is to simplify framework usage. "Simplify" here means doing away with the large and tedious configuration files that many Spring frameworks require, so SpringBoot is a framework that serves frameworks, and what it serves is simplified configuration.

105. Why Spring Boot?

  • Spring Boot makes coding easy
  • Spring Boot makes configuration easy
  • Spring Boot makes deployment easy
  • Spring Boot makes monitoring easy
  • Spring Boot remedies the shortcomings of plain Spring

106. What is the Spring Boot core configuration file?

Spring Boot provides two common configuration files:

  • .properties files
  • .yml files

107. What are the types of Spring Boot configuration files? What’s the difference?

Spring Boot provides two common configuration file formats, .properties files and .yml files. Compared with the .properties format, the .yml format is younger and still has quite a few pitfalls. YML uses whitespace to define hierarchy, which makes the configuration file's structure clearer, but a tiny misplaced space can also break the hierarchy.

108. How can Spring Boot implement hot deployment?

SpringBoot hot deployment can be implemented in two ways:

① Use Spring Loaded

Add the following code to the project:

<build>
        <plugins>
            <plugin>
                <!-- spring-boot plugin -->
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <dependencies>
                    <!-- Spring Loaded hot deployment -->
                    <!-- If this dependency cannot be downloaded here, place it outside the build tag first, then paste it back into the plugin -->
                    <dependency>
                        <groupId>org.springframework</groupId>
                        <artifactId>springloaded</artifactId>
                        <version>1.2.6.RELEASE</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>

After adding it, run the project with an mvn command:

In IDEA, open Edit Configurations and do the following: click the "+" in the top left corner, select Maven, and in the panel that appears enter the run command (for example spring-boot:run). You can name the configuration whatever you like (here it is named MvnSpringBootRun).

Click Save, and the configuration will appear in IDEA's run section. Click the green arrow to run it.

② Use spring-boot-devtools

Add dependencies to project POM files:

 <!-- hot deployment dependency -->
 <dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-devtools</artifactId>
 </dependency>

Then press Ctrl + Shift + Alt + / in IDEA, choose "Registry", and check compiler.automake.allow.when.app.running.

109. What is the difference between JPA and Hibernate?

  • JPA (Java Persistence API) is Java EE 5's standard ORM interface and is also part of the EJB3 specification.
  • Hibernate, a popular ORM framework today, is an implementation of JPA, and its functionality is a superset of JPA's.
  • The relationship between JPA and Hibernate can be simply understood as JPA being the standard interface and Hibernate the implementation. How does Hibernate realize this relationship? Mainly through three components: hibernate-annotations, hibernate-entitymanager, and hibernate-core.
  • hibernate-annotations is the basis of Hibernate's annotation-based configuration; it includes the standard JPA annotations as well as Hibernate's own additional annotations.
  • hibernate-core is Hibernate's core implementation and provides all of Hibernate's core functionality.
  • hibernate-entitymanager implements standard JPA and can be regarded as an adapter between hibernate-core and JPA; it does not provide ORM functionality directly but wraps hibernate-core so that Hibernate conforms to the JPA specification.

110. What is Spring Cloud?

Literally, Spring Cloud is a framework dedicated to distributed systems and Cloud services.

Spring Cloud is a new member of the entire Spring family and a logical outgrowth of the recent boom in Cloud services.

Spring Cloud provides developers with tools to quickly build common patterns in distributed systems, such as:

  • Configuration management
  • Service registration and discovery
  • Circuit breakers
  • Intelligent routing
  • Inter-service invocation
  • Load balancing
  • Micro-proxy
  • Control bus
  • One-time tokens
  • Global locks
  • Leadership election
  • Distributed sessions
  • Cluster state
  • Distributed messaging

Using Spring Cloud, developers can build services and applications that implement these patterns out of the box. These services can run in any environment, including distributed environments, developers' own laptops, and various hosting platforms.

111. What does the Spring Cloud circuit breaker do?

Spring Cloud uses Hystrix to implement the circuit breaker. A circuit breaker prevents an application from repeatedly trying an operation that is likely to fail, allowing it to continue without waiting for recovery or wasting CPU cycles while the fault is known to persist. The breaker pattern also lets the application detect whether the fault has been resolved; if the problem appears to have been corrected, the application can try to invoke the operation again.

Circuit breakers add stability and resilience to a system, providing stability while the system recovers from a failure and minimizing the impact of the failure on performance. They help preserve the system's response time by quickly rejecting requests for an operation that is likely to fail rather than waiting for the operation to time out (or never return). If the circuit breaker raises an event on each state change, that information can be used to monitor the health of the components protected by the breaker, or to alert administrators when a breaker trips into the open state.

112. What are the core components of Spring Cloud?

① Service discovery — Netflix Eureka

A RESTful service used to locate mid-tier services running in an AWS region. It consists of two components: the Eureka server and the Eureka client. The Eureka server serves as the service registry. The Eureka client is a Java client designed to simplify interaction with the server; it acts as a round-robin load balancer and provides failover support for services. Netflix uses an additional client in its production environment that provides weighted load balancing based on traffic, resource utilization, and error status.

② Client-side load balancing — Netflix Ribbon

The Ribbon provides client-side software load balancing algorithms. The Ribbon client component provides a comprehensive set of configuration options, such as connection timeout, retry, retry algorithm, and more. The Ribbon comes with pluggable, customizable load balancing components.

③ Circuit breaker — Netflix Hystrix

A circuit breaker prevents an application from trying multiple times to perform an operation that is likely to fail, allowing it to continue without waiting for recovery or wasting CPU cycles when it determines that the failure is persistent. Breaker mode also enables the application to detect whether the fault has been resolved. If the problem appears to have been corrected, the application can try to invoke the action.

④ Service gateway — Netflix Zuul

Similar to Nginx, with reverse-proxy functionality, but Netflix added some features of its own so that it works together with the other components.

⑤ Distributed configuration — Spring Cloud Config

On its own this is still static; it needs to work with the Spring Cloud Bus to achieve dynamic configuration updates.

Mybatis

125. What is the difference between #{} and ${} in mybatis?

  • ${} is string substitution, while #{} is a preprocessing placeholder;
  • When processing #{}, Mybatis replaces #{} with a ? and calls the set methods of PreparedStatement to assign the value;
  • When processing ${}, Mybatis simply substitutes ${} with the value of the variable;
  • Using #{} effectively prevents SQL injection and improves system security.
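A minimal sketch of the difference in an annotation-based mapper (table and column names are made up; Map is used here so the sketch needs no entity class):

import java.util.List;
import java.util.Map;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

public interface UserMapper {

    // #{} becomes a ? placeholder in a PreparedStatement: safe against SQL injection
    @Select("select * from user where id = #{id}")
    Map<String, Object> findById(@Param("id") long id);

    // ${} is spliced into the SQL string as-is: acceptable only for trusted values
    // such as a column name, never for user input
    @Select("select * from user order by ${column}")
    List<Map<String, Object>> findAllOrdered(@Param("column") String column);
}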

126. How many pagination methods are there in Mybatis?

  1. Array paging

  2. SQL paging

  3. Interceptor paging

  4. RowBounds paging

128. What is the difference between logical paging and physical paging in Mybatis?

  • In terms of speed, physical paging is not necessarily faster than logical paging, and logical paging is not necessarily faster than physical paging either.
  • Nevertheless, physical paging is always preferable to logical paging: there is no need to shift pressure that belongs on the database side onto the application side, and even where logical paging has a speed advantage, physical paging's other performance advantages more than make up for it.

129. Does Mybatis support lazy loading? What is the principle of lazy loading?

Mybatis only supports lazy loading for association objects (association) and collection objects (collection); association is used for one-to-one and collection for one-to-many queries. In the Mybatis configuration file you can configure whether to enable lazy loading: lazyLoadingEnabled = true | false.

The principle is to use CGLIB to create a proxy object for the target object. When a target method is invoked, the interceptor method is entered. For example, when a.getB().getName() is called, the interceptor's invoke() method finds that a.getB() is null; it then separately issues the previously saved SQL that queries the associated B object, loads B, and calls a.setB(b), so a's property b now has a value; it then completes the a.getB().getName() call. This is the basic principle of lazy loading.

Of course, not only Mybatis, almost all including Hibernate, support lazy loading principle is the same.

130. What about the level 1 cache and level 2 cache in Mybatis?

Level 1 Cache: A HashMap local Cache based on PerpetualCache, with storage scope of Session. When a Session is flushed or closed, all caches in that Session are cleared, enabling Level 1 Cache by default.

The level 2 cache has the same mechanism as the level 1 cache: by default it also uses PerpetualCache and HashMap for storage, but its scope is the Mapper (namespace), and the storage source can be customized, for example to Ehcache. The level 2 cache is not enabled by default; to enable it, the classes of the objects placed in the cache must implement the Serializable interface (used to save the object's state), and the cache is configured in the mapping file.

Whenever a C/U/D operation is performed in a Session (level 1) or namespace (level 2), all caches in the related select areas are cleared by default.

131. What are the differences between Mybatis and Hibernate?

(1) Unlike Hibernate, Mybatis is not a complete ORM framework, because Mybatis requires programmers to write their own Sql statements.

(2) Mybatis lets you write raw SQL directly, which gives strict control over SQL execution performance and high flexibility. It is well suited to software with low requirements on the relational data model, because such software changes frequently and must ship results quickly once requirements change. The price of this flexibility is that Mybatis cannot achieve database independence: if the software must support multiple databases, multiple sets of SQL mapping files must be maintained, which is a heavy workload.

(3) Hibernate object/relational mapping ability is strong, database independence is good, for software with high requirements of relational model, if Hibernate development can save a lot of code, improve efficiency.

132. What executors do Mybatis have?

Mybatis has three basic executors:

  1. SimpleExecutor: every time an update or select is executed, a new Statement object is opened and closed immediately afterwards.
  2. ReuseExecutor: when executing an update or select, it looks up a Statement object keyed by the SQL; if one exists it is used, otherwise it is created. After use, the Statement object is kept in the Map for later reuse. In short, Statement objects are reused.
  3. BatchExecutor: executes updates (not selects; JDBC batching does not support select), adding all SQL to a batch (addBatch()) and waiting for execution (executeBatch()); it caches multiple Statement objects, each of which has had addBatch() called, waiting to be executed one by one by executeBatch(). It behaves the same as JDBC batch processing.

133. What is the implementation principle of mybatis paging plug-in?

The basic principle of the paging plugin is to use the plugin interfaces provided by Mybatis to implement a custom plugin, intercept the SQL to be executed inside the plugin's interception method, and then rewrite the SQL according to the database dialect, adding the corresponding physical paging statement and parameters.

134. How to write a custom plug-in for Mybatis?

From: blog.csdn.net/qq_30051265/article/details/80266434

Mybatis plugins can intercept four objects: Executor, StatementHandler, ParameterHandler, and ResultSetHandler.

  • Executor: intercepts executor methods (e.g., for logging)
  • StatementHandler: intercepts the building of SQL statements
  • ParameterHandler: intercepts parameter handling
  • ResultSetHandler: intercepts result-set handling

A custom Mybatis plugin must implement the Interceptor interface:

public interface Interceptor {
    Object intercept(Invocation invocation) throws Throwable;
    Object plugin(Object target);
    void setProperties(Properties properties);
}

intercept method: the interceptor's processing logic

plugin method: generates a dynamic proxy object based on the signatureMap

setProperties method: sets the plugin's Properties

Custom plug-in demo:

// ExamplePlugin.java
@Intercepts({@Signature(type = Executor.class, method = "update",
        args = {MappedStatement.class, Object.class})})
public class ExamplePlugin implements Interceptor {
    public Object intercept(Invocation invocation) throws Throwable {
        Object target = invocation.getTarget(); // the proxied object
        Method method = invocation.getMethod(); // the proxied method
        Object[] args = invocation.getArgs();   // the method parameters
        // do something ...... code executed before the intercepted method
        Object result = invocation.proceed();
        // do something ...... code executed after the intercepted method
        return result;
    }
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
    public void setProperties(Properties properties) {}
}

One @Intercepts can be configured with multiple @Signature annotations. The parameters of @Signature are defined as follows:

  • type: the class to intercept; here it is the Executor implementation;
  • method: the method to intercept; here it is Executor's update method;
  • args: the parameter types of the method.

RabbitMQ

135. What are the usage scenarios for RabbitMQ?

① Asynchronous communication across systems: message queues can be used wherever asynchronous interaction is needed. For example, besides making phone calls (synchronous), we also communicate by text message and e-mail (asynchronous).

② Decoupling multiple applications: because messages are platform- and language-independent and semantically no longer function calls, they are well suited as a loosely coupled interface between applications. Message-queue-based coupling does not require the sender and receiver to be online at the same time. In enterprise application integration (EAI), file transfer, shared databases, message queues, and remote procedure calls can all serve as integration methods.

③ Turning synchronous steps into asynchronous ones within an application: for example, in order processing, the front-end application can put order information into a queue and the back-end application can take messages from the queue and process them in turn; a flood of orders at peak time can pile up in the queue and be processed at a steady pace. Synchronization usually means blocking, and blocking large numbers of threads degrades the computer's performance.

④ Event-driven architecture (EDA): the system is decomposed into message queues, message producers, and message consumers. A process can be split into several stages as needed, with the stages connected by queues: the result of the previous stage is put into a queue, and the next stage takes messages from the queue to continue processing.

⑤ Applications that need more flexible coupling, such as publish/subscribe, for example where routing rules can be specified.

⑥ Cross-LAN, even cross-city communication (for example the CDN industry), such as communication between applications in a Beijing data center and a Guangzhou data center.

136. What are the important roles of RabbitMQ?

The important roles in RabbitMQ are the producer, the consumer, and the broker:

  • Producer: the creator of messages, responsible for creating messages and pushing them to the message server;
  • Consumer: the recipient of messages, which processes the data and acknowledges the messages;
  • Broker: RabbitMQ itself, which acts as a "courier": it does not produce messages itself, it only delivers them.

137. What are the important components of RabbitMQ?

  • ConnectionFactory: the manager used in application code to establish the connection between the application and RabbitMQ.
  • Channel: the channel over which messages are pushed.
  • Exchange: receives and distributes messages.
  • Queue: stores messages coming from producers.
  • RoutingKey: the key a producer attaches to a message so the exchange can route it.
  • BindingKey: the key used to bind a queue to an exchange so matching messages are delivered to the queue.

138. What is vhost for rabbitMQ?

A vhost is a virtual broker, that is, a mini RabbitMQ server. Each vhost has its own queues, exchanges, bindings, and so on, but most importantly its own permission system, which controls users within the scope of the vhost. From RabbitMQ's global perspective, vhosts are a means of isolating permissions (a typical example: different applications running on different vhosts).

139. How are rabbitMQ messages sent?

First a TCP connection is created between the client and the RabbitMQ server. Once the TCP connection is open and authenticated (authentication means sending your username and password to the RabbitMQ server), the client and RabbitMQ create an AMQP channel. A channel is a virtual connection created over the "real" TCP connection; AMQP commands are sent over channels, each channel has a unique ID, and subscribing to queues is also done through a channel.
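A minimal sketch with the Java client (assuming the com.rabbitmq amqp-client library; host, credentials, and queue name are placeholders):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class SendDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");   // broker address (placeholder)
        factory.setUsername("guest");   // authentication happens when the connection opens
        factory.setPassword("guest");
        try (Connection connection = factory.newConnection();     // the TCP connection
             Channel channel = connection.createChannel()) {      // the AMQP channel on top of it
            channel.queueDeclare("demo-queue", false, false, false, null);
            channel.basicPublish("", "demo-queue", null,
                    "hello".getBytes(StandardCharsets.UTF_8));
        }
    }
}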

140. How does RabbitMQ ensure message stability?

  • Providing transaction functionality.
  • Setting the channel to confirm mode.

141. How can RabbitMQ avoid message loss?

  1. Message persistence

  2. ACK acknowledgement mechanism

  3. Set the cluster mirroring mode

  4. Message compensation mechanism

142. What are the conditions for successful message persistence?

  1. The queue must be declared durable (durable set to true).
  2. The message's deliveryMode must be set to persistent, i.e., deliveryMode = 2.
  3. The message has arrived at the persistent exchange.
  4. The message has reached the persistence queue.

All the above four conditions are met to ensure the success of message persistence.
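Continuing the Java client sketch above, the first two conditions might look like this (queue name is a placeholder):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.MessageProperties;

public class PersistentSendDemo {
    // Reuses a Channel obtained as in the earlier sketch
    static void sendPersistent(Channel channel) throws Exception {
        // durable = true: the queue definition survives a broker restart
        channel.queueDeclare("demo-queue", true, false, false, null);
        // PERSISTENT_TEXT_PLAIN sets deliveryMode = 2, so the message body is written to disk
        channel.basicPublish("", "demo-queue",
                MessageProperties.PERSISTENT_TEXT_PLAIN,
                "hello".getBytes(StandardCharsets.UTF_8));
    }
}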

143. What are the disadvantages of RabbitMQ persistence?

The disadvantage of persistence is that it reduces the server's throughput, because it uses disk rather than memory storage. SSDs can be used to mitigate the throughput problem.

144. How many broadcast types are available for RabbitMQ?

Three broadcast modes:

  1. Fanout: all queues bound to the exchange receive the message (pure broadcast; everything bound to the exchange receives it);
  2. Direct: only the queue uniquely determined by the routingKey and the exchange receives the message;
  3. Topic: all queues whose binding key pattern matches the message's routingKey (the binding key can be an expression here) receive the message;

145. How does RabbitMQ implement delayed message queuing?

  1. When a message expires it enters the dead-letter exchange, which forwards it to the delayed-consumption queue, achieving the delay;
  2. Use the rabbitmq-delayed-message-exchange plugin to implement the delay.

146. What are rabbitMQ clusters for?

Clusters are used for the following two purposes:

  • High availability: when a server fails, the entire RabbitMQ service can continue.
  • High volume: The cluster can carry more message volumes.

147. What are the types of RabbitMQ nodes?

  • Disk node: Messages are stored to disk.
  • Memory node: Messages are stored in memory. Messages are lost when the server is restarted. The performance is higher than that of disks.

148. What should I pay attention to when setting up rabbitMQ clusters?

  • Nodes are connected using --link; this attribute cannot be omitted.
  • All nodes must use the same Erlang cookie value, which acts as a "secret key" for mutual authentication between nodes.
  • The whole cluster must contain at least one disk node.

149. Is each rabbitMQ node a full copy of the other nodes? Why is that?

No, for two reasons:

  1. Storage space considerations: If each node has a full copy of all queues, the new node does not add storage space, but adds more redundant data.
  2. Performance considerations: If every message needs to be fully copied to every cluster node, the new node does not improve the ability to process messages, at best maintaining the same or worse performance as a single node.

150. What happens when the only disk node in the RabbitMQ cluster crashes?

If the only disk node crashes, the following operations can no longer be performed:

  • Queues cannot be created
  • Exchanges cannot be created
  • Bindings cannot be created
  • Users cannot be added
  • Permissions cannot be changed
  • Cluster nodes cannot be added or removed

The cluster can keep running when the only disk node crashes, but you can’t change anything.

151. Does RabbitMQ have a stop order for cluster nodes?

RabbitMQ requires cluster nodes to be stopped in a specific order: shut down the memory nodes first and the disk nodes last. If the order is reversed, messages may be lost.

Kafka

152. Can Kafka be used independently of ZooKeeper? Why?

Kafka cannot be used independently of ZooKeeper, because Kafka uses ZooKeeper to manage and coordinate its broker nodes.

153. How many data retention strategies does Kafka have?

Kafka has two data retention strategies: retention by expiration time and retention by the size of stored messages.

154. Kafka is configured to purge data both after 7 days and beyond 10 GB. On the fifth day the stored messages reach 10 GB; how does Kafka handle this?

At this point Kafka performs a data purge: the cleanup runs whenever either condition, time or size, is met.

155. What can cause Kafka to run slowly?

  • CPU performance bottlenecks
  • Disk read/write bottlenecks
  • Network bottlenecks

156. What should I pay attention to when using kafka clusters?

  • It is better not to exceed 7 nodes in a cluster, because the more nodes there are, the longer message replication takes and the lower the throughput of the whole group.
  • It is better to set the number of cluster nodes to an odd number: the cluster becomes unusable once more than half of its nodes fail, so an odd number gives higher fault tolerance.

Zookeeper

157. What is Zookeeper?

Zookeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google Chubby and an important component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, domain name service, distributed synchronization, and group service.

158. What functions does ZooKeeper have?

  • Cluster management: monitors node status and the requests in flight.
  • Primary node election: after the primary node fails, a new primary node can be elected from the standby nodes; ZooKeeper can help complete this election process.
  • Distributed locks: ZooKeeper provides two kinds of locks: exclusive locks and shared locks. An exclusive lock means only one thread at a time can use the resource; a shared lock means read locks are shared and multiple threads can read the same resource at the same time, while read and write locks are mutually exclusive. ZooKeeper can coordinate such distributed locks.
  • Naming service: in a distributed system, a client application can use the naming service to obtain information such as the address and the provider of a resource or service.

159. How many deployment modes are available for ZooKeeper?

Zookeeper can be deployed in three modes:

  • Standalone deployment: runs on a single machine;
  • Cluster deployment: runs on multiple machines;
  • Pseudo-cluster deployment: a single machine runs multiple ZooKeeper instances.

160. How does ZooKeeper synchronize the status of the primary and secondary nodes?

At the core of ZooKeeper is atomic broadcasting, which ensures synchronization between servers. The protocol that implements this is called the ZAB protocol. Zab supports two modes: recovery mode (master selection) and broadcast mode (synchronization). Zab goes into recovery mode when the service starts or after the leader crashes, and the recovery mode ends when the leader is elected and most servers have completed state synchronization with the Leader. State synchronization ensures that the leader and Server have the same system state.

161. Why have a primary node in a cluster?

In a distributed environment, some business logic only needs to be executed by one machine in the cluster, with the result shared by the other machines. This greatly reduces duplicate computation and improves performance, which is why a primary node is needed.

162. There are three servers in the cluster and one node is down. Is ZooKeeper still available?

Yes, it can still be used: with an odd number of servers, the cluster remains available as long as more than half of the servers are up (2 of 3 in this case).

163. What is the notification mechanism of ZooKeeper?

The client registers a Watcher on a znode. When the znode changes, the client receives a notification from ZooKeeper and can then react to the change in its business logic.
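A minimal sketch with the official Java client, assuming a local server and an existing /config/app znode (both illustrative):

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatcherDemo {
    public static void main(String[] args) throws Exception {
        // Connect and register a default watcher
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // Called when the watched znode changes
                System.out.println(event.getType() + " on " + event.getPath());
            }
        });
        // true = watch this znode with the default watcher registered above
        zk.getData("/config/app", true, null);
        // Note: a watch fires only once; re-register it after each notification
        Thread.sleep(Long.MAX_VALUE); // keep the session alive for the demo
    }
}
```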

MySql

164. What are the three paradigms of databases?

  • First normal Form: Emphasizes atomicity of columns, that is, each column of a database table is an indivisible atomic data item.
  • Second normal Form: Requires that the attributes of the entity depend entirely on the primary key. Complete dependency means that there cannot be attributes that depend on only part of the primary key.
  • Third normal form: no non-primary attribute depends on another non-primary attribute; that is, non-key attributes must depend on the key directly, not transitively.

165. A table has an auto-increment primary key. After inserting 17 records, you delete records 15, 16, and 17, restart MySQL, and then insert one more record. What is the id of the new record?

  • If the table type is MyISAM, then the id is 18.
  • If the table type is InnoDB, the id is 15.

InnoDB tables (before MySQL 8.0, which persists the counter) keep the maximum auto-increment id only in memory, so the counter is lost after a restart and is re-derived from the largest remaining id.

166. How do I obtain the current database version?

Use select version() to get the current MySQL database version.

167. What is ACID?

  • Atomicity: All operations in a transaction, either complete or not complete, do not end at some intermediate stage. If a transaction fails during execution, it will be rolled back to the state before the transaction began, as if the transaction had never been executed. That is, transactions are indivisible and irreducible.
  • Consistency: The integrity of the database is not compromised before and after a transaction. This means that the data written must fully comply with all preset constraints, triggers, cascading rollback, and so on.
  • Isolation: The ability of a database to allow multiple concurrent transactions to read, write, and modify its data at the same time. Isolation prevents data inconsistencies due to cross-execution when multiple transactions are executed concurrently. Transaction isolation can be divided into different levels, including Read uncommitted, Read Committed, Repeatable Read, and Serializable.
  • Durability: After a transaction ends, its modifications to the data are permanent and are not lost even if the system fails.

168. What is the difference between char and varchar?

Char(n): fixed-length type. For example, with char(10), storing "abc" still occupies 10 characters; the remaining 7 are padded with spaces.

Char advantages: high efficiency. Disadvantages: wastes space. Typical scenario: char suits fixed-length values such as the MD5 hash of a password.

Varchar(n): variable length. The storage used is the actual number of bytes of each value plus one or two bytes that record its length.

Therefore, varchar is the better choice when space matters, and char when efficiency matters; it is a trade-off between the two.

169. What is the difference between float and double?

  • A float holds about 7 significant decimal digits and occupies 4 bytes of memory.
  • A double holds about 15–16 significant decimal digits and occupies 8 bytes of memory.

170. What is the difference between inner join, left join, right join in mysql?

Inner join keywords: inner join; Left join: left join; Right join: right join.

An inner join returns only the rows that match in both tables; a left join returns all rows from the left table plus the matching rows from the right table; a right join is the opposite.

171. How is the mysql index implemented?

An index is a data structure that satisfies a particular lookup algorithm and that points to the data in a way that enables efficient lookup of the data.

Specifically, different storage engines in MySQL implement indexes differently, but the mainstream engines all use B+ trees. A B+ tree lookup approaches binary-search efficiency, and because the leaf nodes hold the full records (or pointers to them), the row can be retrieved as soon as the matching leaf is found, which also makes range scans efficient.

172. How to verify that the mysql index meets the requirement?

Use explain to view how MySQL executes a query and check whether your index is being used as expected.

Explain syntax: explain select * from table where type=1

173. What about transaction isolation of databases?

MySQL's default transaction isolation level can be set in the my.ini (or my.cnf) configuration file, at the end of the file: transaction-isolation = REPEATABLE-READ

Available configuration values: read-uncommitted, read-committed, REPEATABLE-READ, SERIALIZABLE.

  • READ-UNCOMMITTED: the lowest isolation level; changes made by a transaction can be read by others before it commits (allows dirty reads, non-repeatable reads, and phantom reads).
  • READ-COMMITTED: a transaction's changes can be read by others only after it commits (still allows non-repeatable reads and phantom reads).
  • REPEATABLE-READ: repeatable read, the default level; guarantees that reading the same data several times within a transaction returns the value it had when the transaction began, and forbids reading data uncommitted by other transactions (still allows phantom reads).
  • SERIALIZABLE: The most expensive and reliable isolation level that can prevent dirty reads, unrepeatable reads, and phantom reads.
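On the application side, the isolation level can also be set per connection through standard JDBC; a minimal sketch (URL and credentials are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "password");
        // Applies to this connection only, not the whole server
        conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
        conn.setAutoCommit(false);
        // ... queries here see a consistent snapshot of the data ...
        conn.commit();
        conn.close();
    }
}
```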

Dirty read: one transaction reads data that another transaction has not yet committed. For example, transaction B reads record A that transaction A has inserted but not yet committed; if transaction A then rolls back, B has read data that never officially existed.

Non-repeatable read: reading the same row several times within one transaction returns different values, because another transaction modified and committed the row in between.

Phantom read: multiple queries within the same transaction return different result sets. For example, transaction A's first query returns n rows, but a second query under the same conditions returns n+1 rows, as if hallucinating. Phantom reads occur when another transaction inserts or deletes rows that fall within the first transaction's query range; a non-repeatable read, by contrast, concerns changed values of the same rows.

174. What is the common mysql engine?

InnoDB engine: The InnoDB engine provides ACID transaction support, row-level locking, and foreign key constraints, and is designed for database systems with large data volumes. When MySQL starts, InnoDB creates a buffer pool in memory to cache data and indexes. However, this engine does not support full-text search and is slow to start, and it does not store the number of rows in a table, so select count(*) from table must scan the whole table. Because its lock granularity is small, writes do not lock the entire table, which improves efficiency in high-concurrency scenarios.

MyISAM engine: the default engine in MySQL before version 5.5, but it does not support transactions, row-level locks, or foreign keys. Inserts and updates therefore have to lock the whole table, which costs efficiency. Unlike InnoDB, MyISAM stores the number of rows in a table, so select count(*) from table reads the saved value directly without scanning the table. MyISAM can therefore be the engine of choice when reads far outnumber writes and transaction support is not required.

175. What about row and table locks in mysql?

MyISAM supports only table locks; InnoDB supports both table locks and row locks, and uses row locks by default.

  • Table-level locks: low overhead, fast to acquire, no deadlocks; coarse granularity, so lock conflicts are most likely and concurrency is lowest.
  • Row-level locks: higher overhead, slower to acquire, deadlocks possible; fine granularity, so lock conflicts are least likely and concurrency is highest.

176. What about optimistic and pessimistic locks?

  • Optimistic lock: do not lock when reading, on the assumption that others will not modify the data; only when submitting the update, check whether anyone else changed the data in the meantime.
  • Pessimistic lock: assume that others will modify the data, so lock every time data is fetched; anyone else who wants the data blocks until the lock is released.

A common optimistic-lock implementation adds a version field to the table, incremented by 1 on every successful change. Each update checks whether the version read earlier still matches the current version in the database; if not, the update is rejected and must be retried, as in the sketch below.
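A minimal JDBC sketch of that version check; the account table and its balance/version columns are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticLockDemo {
    public boolean updateBalance(Connection conn, long id,
                                 long newBalance, int expectedVersion) throws SQLException {
        String sql = "UPDATE account SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, newBalance);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            // 0 rows updated means another writer got there first:
            // re-read the row and retry, or give up
            return ps.executeUpdate() == 1;
        }
    }
}
```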

177. What are the methods used to troubleshoot mysql problems?

  • Run the show processlist command to view all current connection information.
  • Query SQL statement execution plans using the Explain command.
  • Enable the slow query log function to view the SQL that is slowly queried.

178. How to optimize mysql performance?

  • Create an index for the search field.
  • Avoid select * and list the fields you want to query.
  • Vertical split table.
  • Select the correct storage engine.

Redis

179. What is redis? What are the usage scenarios?

Redis is an open-source, network-capable, memory-based, persistable key-value database written in ANSI C, providing APIs for multiple languages.

Redis usage scenarios:

  • High concurrency of data reads and writes
  • Read and write massive data
  • Data with high scalability requirements

180. What functions does Redis have?

  • Data caching
  • Distributed lock functionality
  • Data persistence is supported
  • Support transactions
  • Message queue support

181. What’s the difference between Redis and memcached?

  • Redis supports richer data types, whereas all values in memcached are simple strings
  • Redis is much faster than memcached
  • Redis can persist its data

182. Why is Redis single threaded?

Since CPU is not the bottleneck of Redis, the bottleneck of Redis is most likely machine memory or network bandwidth. Since single threading is easy to implement and the CPU is not a bottleneck, it makes sense to adopt a single threaded solution.

As for Redis performance, official benchmarks show that an ordinary laptop can easily handle hundreds of thousands of requests per second.

And single-threaded doesn’t mean slow. Nginx and NodeJS are both examples of high-performance single-threaded performance.

183. What is cache penetration? How to solve it?

Cache penetration: a query for data that does not exist. The request misses the cache, goes to the database, finds nothing, and therefore never writes anything back to the cache, so every subsequent request hits the database again.

Solution: the simplest and most crude way is that if a query returns empty data (whether the data does not exist or the system failed), we still cache the empty result, but with a short expiration time, no more than five minutes.
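A minimal Jedis sketch of caching the empty result; the key layout and the queryDb placeholder are illustrative:

```java
import redis.clients.jedis.Jedis;

public class CachePenetrationDemo {
    public String getUser(Jedis jedis, String userId) {
        String key = "user:" + userId;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached.isEmpty() ? null : cached; // "" marks a known miss
        }
        String value = queryDb(userId);
        if (value == null) {
            jedis.setex(key, 300, "");   // cache the empty result for 5 minutes
            return null;
        }
        jedis.setex(key, 3600, value);   // cache real data for longer
        return value;
    }

    private String queryDb(String userId) {
        return null; // placeholder for the real database lookup
    }
}
```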

184. What data types are supported by Redis?

String, list, hash, set, zset.

185. What Java clients does Redis support?

Redisson, Jedis, lettuce, etc. Redisson is officially recommended.

186. What are the differences between Jedis and Redisson?

Jedis is the client of the Java implementation of Redis, and its API provides comprehensive support for Redis commands.

Redisson implements distributed and scalable Java data structures on top of Redis. Compared with Jedis, its API is deliberately simpler: it does not expose string manipulation, sorting, transactions, pipelines, partitioning, and other low-level Redis features. The goal of Redisson is a separation of concerns from Redis itself, so that users can focus on business logic.

187. How to ensure the consistency of cache and database data?

  • Set the cache expiration time appropriately.
  • Update Redis synchronously when adding, changing, and deleting databases. Transaction mechanism can be used to ensure data consistency.

188. How many ways can Redis persist data?

Redis persists in two ways, or two strategies:

  • RDB (Redis Database): writes a snapshot of the dataset to disk at specified intervals.
  • AOF (Append Only File): appends every received write command to a file (using the write function).

189. How does Redis implement distributed lock?

A Redis distributed lock essentially "occupies a pit" in the system: other programs that want the same pit must wait, retry later, or give up, while whoever occupies it successfully proceeds with execution.

The pit is usually occupied with the setnx (SET if Not eXists) instruction, typically together with an expiration time, as in the sketch below.
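A minimal Jedis sketch; the key and value names are illustrative. Setting the value and the expiry in one atomic command avoids a lock that never expires if the client crashes between SETNX and EXPIRE:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLockDemo {
    public boolean tryLock(Jedis jedis, String lockKey, String requestId, int seconds) {
        // NX: set only if the key does not exist; EX: expire after N seconds
        String result = jedis.set(lockKey, requestId, SetParams.setParams().nx().ex(seconds));
        return "OK".equals(result); // null means someone else holds the lock
    }
}
```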

190. What are the defects of redis distributed locks?

Redis distributed locks do not fully solve the timeout problem: a lock carries an expiration time, and if the program's execution takes longer than that, the lock is released while the code is still running, which causes problems.

191. How does Redis optimize memory?

Use hashes whenever possible. Small hashes are encoded very compactly in memory, so you should abstract your data model into hashes as much as possible.

For example, if you have a user object in your web system, do not set a separate key for the user's name, last name, email address, and password. Instead, store all of the user's information in a single hash, as sketched below.
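A minimal Jedis sketch of the hash layout; the key scheme and field names are illustrative:

```java
import redis.clients.jedis.Jedis;

public class UserHashDemo {
    public void saveUser(Jedis jedis, String userId) {
        // One hash per user instead of user:1:name, user:1:email, ... as separate keys
        String key = "user:" + userId;
        jedis.hset(key, "name", "alice");
        jedis.hset(key, "email", "alice@example.com");
        // A single field can be read back without fetching the whole object
        String email = jedis.hget(key, "email");
        System.out.println(email);
    }
}
```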

192. What are the Redis elimination strategies?

  • volatile-lru: evict the least recently used key among those with an expiration set (server.db[i].expires);
  • volatile-ttl: evict the key closest to expiration among those with an expiration set;
  • volatile-random: evict a random key among those with an expiration set;
  • allkeys-lru: evict the least recently used key from the whole dataset (server.db[i].dict);
  • allkeys-random: evict a random key from the whole dataset;
  • noeviction: eviction is prohibited; writes fail once memory is full.

193. What are the common performance issues with Redis? How to solve it?

  • When the primary server writes a memory snapshot, the main process is blocked. When a large number of snapshots are taken, the performance is greatly affected and services are interrupted intermittently. Therefore, do not write a memory snapshot on the primary server.
  • Redis master/slave replication performance problem, in order to master/slave replication speed and connection stability, the master/slave library had better be in the same LAN.

JVM

194. What are the main components of the JVM? And its role?

  • ClassLoader
  • Runtime Data Area
  • Execution Engine
  • Native Library Interface

What these components do: the ClassLoader converts Java code into bytecode, and the Runtime Data Area loads the bytecode into memory. Bytecode is only a set of instructions for the JVM and cannot be handed directly to the underlying operating system, so a specific command parser, the Execution Engine, translates the bytecode into native instructions for the CPU, calling the Native Interface of other languages along the way to realize the whole program's functionality.

195. What about the JVM runtime data area?

  • Program counter
  • The virtual machine stack
  • Local method stack
  • The heap
  • Method area

Some regions exist with the start of a virtual machine process, and some regions are created and destroyed depending on the start and end of a user process.

196. What is the difference between the stack and the heap?

  1. Stack memory stores local variables and heap memory stores entities;

  2. Stack memory is updated faster than heap memory because local variables have a short lifetime;

  3. Variables stored in stack memory are released as soon as their life cycle ends, while entities stored in heap memory are collected irregularly by the garbage collection mechanism.

197. What are queues and stacks? What’s the difference?

  • Queues and stacks are both used to store data temporarily.
  • A queue retrieves elements first-in, first-out; the exception is the Deque interface, which allows retrieval from both ends.
  • A stack is similar to a queue, but it retrieves elements last-in, first-out.

198. What is the parental delegation model?

Before introducing the parental delegation model, a word about class loaders themselves. For any class, its uniqueness within the JVM is established jointly by the class loader that loads it and by the class itself; each class loader has a separate class namespace. A class loader simply loads a class file with the specified fully qualified name into JVM memory and converts it into a Class object.

Class loader classification:

  • Bootstrap ClassLoader: part of the VM itself; loads the libraries in the JAVA_HOME/lib/ directory, or in the path specified by the -Xbootclasspath parameter, that are recognized by the VM.
  • Other class loaders:
  • Extension ClassLoader: loads all libraries in the JAVA_HOME/lib/ext directory or in the path specified by the java.ext.dirs system property;
  • Application ClassLoader: loads the libraries on the user's classpath. This is the loader we normally use directly; if no custom class loader is defined, it is the default.

Parental delegation model: when a class loader receives a class-loading request, it does not try to load the class itself first. Instead it delegates the request to its parent loader, at every level, so all load requests eventually reach the top-level bootstrap loader. Only when the parent reports that it cannot complete the request (it did not find the class in its search scope) does the child loader attempt to load the class itself, as in the sketch below.
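A simplified sketch of that delegation logic; the real java.lang.ClassLoader.loadClass also handles locking and the bootstrap loader, which are omitted here:

```java
public class DelegatingLoader extends ClassLoader {
    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // 1. Has this loader already loaded the class?
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            try {
                // 2. Always delegate to the parent chain first
                c = super.loadClass(name, false);
            } catch (ClassNotFoundException e) {
                // 3. Only when the parents fail does this loader try itself
                c = findClass(name);
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}
```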

199. What about the execution of class loading?

Class loading consists of the following five steps:

  1. Load: according to the search path to find the corresponding class file and import;
  2. Verification: check the correctness of the loaded class file;
  3. Preparation: allocate memory space for static variables in a class;
  4. Resolution: The process by which the virtual machine replaces symbolic references in a constant pool with direct references. A symbolic reference is understood as an identifier, whereas a direct reference points directly to an address in memory.
  5. Initialization: Performs initialization on static variables and static code blocks.

200. How to determine whether an object can be reclaimed?

There are two ways to judge:

  • Reference counter: Creates a reference count for each object, which is +1 when an object is referenced, -1 when a reference is released, and can be reclaimed when the counter is 0. It has a disadvantage that it does not solve the problem of circular references;
  • Reachability analysis: Search down from GC Roots, the path taken by the search is called the reference chain. When an object is not connected to GC Roots by any reference chain, the object is proven to be recyclable.

201. What reference types are available in Java?

  • Strong reference
  • Soft reference
  • Weak reference
  • Phantom reference (also called virtual reference)
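A small sketch of how soft and weak references behave around a GC; the exact outcome depends on the collector, so the comments describe only the typical result:

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        byte[] strong = new byte[1024]; // strong reference: never collected while reachable

        // Soft: collected only when memory is about to run out (useful for caches)
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);
        // Weak: collected at the next GC once no strong reference remains
        WeakReference<byte[]> weak = new WeakReference<>(new byte[1024]);

        System.gc(); // only a hint to the JVM
        System.out.println("soft alive: " + (soft.get() != null)); // typically true
        System.out.println("weak alive: " + (weak.get() != null)); // typically false
    }
}
```

A phantom reference always returns null from get(); it is used together with a ReferenceQueue only to learn that an object has been collected.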

202. What garbage collection algorithms does the JVM have?

  • Mark-sweep algorithm
  • Mark-compact algorithm
  • Copying algorithm
  • Generational algorithm

203. What garbage collectors does the JVM have?

  • Serial: The original single-threaded Serial garbage collector.
  • Serial Old: An older version of the Serial garbage collector, which is also single-threaded, can be an alternative to the CMS garbage collector.
  • ParNew: is the multithreaded version of Serial.
  • Parallel Scavenge: multithreaded like ParNew, but it is a throughput-first collector that can sacrifice pause time for overall system throughput.
  • Parallel Old: the old generation version of Parallel Scavenge; Parallel Scavenge uses the copying algorithm, while Parallel Old uses mark-compact.
  • CMS: A collector whose goal is to achieve minimum pause times, ideal for B/S systems.
  • G1: A GC implementation that takes into account both throughput and pause times and is the default GC option in JDK 9 onwards.

204. What about CMS garbage collector?

CMS, short for Concurrent Mark-Sweep, is a garbage collector that achieves the shortest collection pause time at the expense of throughput. It is ideal for applications that require fast server response. Specify it by adding "-XX:+UseConcMarkSweepGC" to the JVM startup arguments.

CMS is implemented with the mark-sweep algorithm, so it produces a lot of memory fragmentation during GC. When the remaining memory cannot satisfy the program's needs, a Concurrent Mode Failure occurs and CMS temporarily falls back to the Serial Old collector for that collection, degrading performance.

205. What are the new generation garbage collectors and old generation garbage collectors? What’s the difference?

  • New generation collectors: Serial, ParNew, Parallel Scavenge
  • Old generation collectors: Serial Old, Parallel Old, CMS
  • Whole heap collector: G1

New generation collectors generally use the copying algorithm, which is efficient but has low memory utilization. Old generation collectors generally use the mark-compact algorithm.

206. Describe briefly how generational garbage collector works.

The generational collector has two partitions: the old generation and the new generation. By default the new generation occupies 1/3 of the heap and the old generation 2/3.

The new generation uses the copying algorithm and has three partitions: Eden, From Survivor, and To Survivor, with a default ratio of 8:1:1. A minor GC proceeds as follows:

  • Copy the live objects in Eden + From Survivor into the To Survivor zone;
  • Clear the Eden and From Survivor partitions;
  • Swap the roles of From Survivor and To Survivor: the old To Survivor becomes the next From Survivor.

An object that survives a move from From Survivor to To Survivor has its age incremented by 1, and it is promoted to the old generation once its age reaches the threshold (15 by default). Large objects also go directly into the old generation.

The old generation triggers a full garbage collection when its occupancy reaches a certain threshold, usually using the mark-compact algorithm. These cycles make up the whole process of generational garbage collection.

207. What about JVM tuning tools?

The JDK comes with many monitoring tools, which are located in the BIN directory of the JDK. The two most commonly used view monitoring tools are JConsole and JVisualVM.

  • jconsole: monitors memory, threads, classes, etc. in the JVM;
  • jvisualvm: an all-purpose analysis tool bundled with the JDK; it can analyze memory snapshots and thread snapshots, detect program deadlocks, and monitor memory changes, GC activity, and so on.

208. What are the common JVM tuning parameters?

  • -Xms2g: sets the initial heap size to 2 GB;
  • -Xmx2g: sets the maximum heap size to 2 GB;
  • -XX:NewRatio=4: sets the ratio of the new generation to the old generation to 1:4;
  • -XX:SurvivorRatio=8: sets the ratio of Eden to a single Survivor space to 8:1 (so Eden:S0:S1 = 8:1:1);
  • -XX:+UseParNewGC: selects the ParNew + Serial Old collector combination;
  • -XX:+UseParallelOldGC: selects the Parallel Scavenge + Parallel Old collector combination;
  • -XX:+UseConcMarkSweepGC: selects the ParNew + CMS combination, with Serial Old as the fallback;
  • -XX:+PrintGC: enables GC information printing;
  • -XX:+PrintGCDetails: prints detailed GC information.
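Put together, a startup command using several of these flags might look like this (the jar name is illustrative):

```
java -Xms2g -Xmx2g -XX:NewRatio=4 -XX:SurvivorRatio=8 \
     -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -jar app.jar
```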