
Preface

A design pattern is a catalogued, familiar, reusable code design lesson. It describes recurring problems in the software design process and the solutions to those problems. In other words, it is a set of routines for solving a specific kind of problem, a summary of earlier developers' code design experience that is general enough to be reused. The goal is to improve code reusability, readability, and reliability.

Design patterns are a summary of the most valuable experience in object-oriented design. You may already be familiar with some of them, but thinking them through carefully still raises questions. Since I currently have some spare time, I am re-learning the design patterns.

This article is about the singleton pattern. It has two parts: first, the definition and purpose of the pattern; second, concrete Java implementations and some open questions.

Definition

You are probably already familiar with the singleton pattern; it is something we often have to write by hand.

So let's go back to the original definition.

The original definition of the singleton pattern appeared in Design Patterns (Addison-Wesley, 1994): "Ensure that a class has only one instance and provide a global point of access to it."

We can take two messages from this:

  1. Ensure that a class has only one instance

    That is, it cannot be instantiated by the outside world. The constructor must be private, and the single instance belongs to the class itself, i.e. it exists as a static member.

  2. Provide a global access point for this instance

    Provide a public static method through which the outside world obtains this member.

So why design such a singleton class?

  1. For example, a web site has a login window. Think of the login window as an object: should a new login window be created every time the user clicks login, and destroyed after each use? The same goes for database access: you make a connection and then destroy it, yet you are using exactly the same thing every time.

  2. Global counters: a counter is useless if every caller gets a different counter object.

Two points: 1. reduce overhead; 2. share resources.

The hungry (eager) style

To sum up, the definition says that a class implementing the singleton pattern has the following characteristics:

  1. A private constructor
  2. The instance exists as a static member
  3. A public method that returns the instance

It is then easy to write the following code, the familiar hungry (eager) implementation:

public class Single {
    private static final Single single = new Single();

    private Single() {}

    public static Single getSingle() {
        return single;
    }
}

The lazy style

The difference between the hungry and lazy singletons should be clear: in the hungry version the instance is created when the class is loaded, while in the lazy version the object is only created when getSingle() is called, as shown below:

public class Single {
    private static Single single;

    private Single() {}

    public static Single getSingle() {
        if (single == null) {
            single = new Single();
        }
        return single;
    }
}

The advantage is that no space is taken up until the instance is needed. The problem is that creating the object now takes two steps: first the null check, then the creation. This introduces a thread-safety issue that can create multiple instances and break the singleton, as shown by the test below.
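A minimal sketch of the kind of test that exposes the race (the thread count and output format are illustrative; the original article showed a screenshot of similar results):

// Launch several threads that all call getSingle() at roughly the same time
// and print the identity hash code of the object each one receives. If the
// lazy implementation were thread-safe, every line would show the same value.
public class SingleTest {
    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() ->
                System.out.println(System.identityHashCode(Single.getSingle()))
            ).start();
        }
    }
}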

You can see that different instances appear in the first few results, so this get operation has a thread-safety problem.

The solution is simple:

public class Single {
    private static Single single;

    private Single() {}

    public synchronized static Single getSingle() {
        if (single == null) {
            single = new Single();
        }
        return single;
    }
}

or

public class Single {
    private static Single single;

    private Single() {}

    public static Single getSingle() {
        synchronized (Single.class) {
            if (single == null) {
                single = new Single();
            }
        }
        return single;
    }
}

synchronized guarantees that the code block is executed by only one thread at a time. In fact, though, the unique instance only needs to be created the first time it is requested; every call after that just returns the existing instance. To make that first call safe, every later get is synchronized as well. Isn't the synchronization scope too large? Isn't most of it unnecessary?

DCL implementation

Anyone familiar with multithreading will recognize the double-checked locking (DCL) idiom; here is the DCL implementation of the singleton:

public class Single {
    private static Single single;

    private Single() {}

    public static Single getSingle() {
        if (single == null) {
            synchronized (Single.class) {
                if (single == null) {
                    single = new Single();
                }
            }
        }
        return single;
    }
}

The extra check outside the synchronized block means that once the instance has been created, a call can return it after a single null check without ever synchronizing. The inner check still guarantees a unique instance: the batch of threads that pass the outer check at the very beginning enter the synchronized block one at a time, and the second null check allows only the first of them to actually create the object.

Those who have seen this singleton implementation before know that the instance member also needs to be volatile, because creating an object with new is not atomic. There are roughly three steps:

  1. Allocate the memory space
  2. Initialize the object in that space
  3. Assign the reference to that address

Steps 2 and 3 can be swapped; in the bytecode shown in the original diagram these correspond to instructions 21 and 24. Normally the field only becomes non-null after the assignment completes, but if instructions 21 and 24 are reordered, the field can be non-null while the object is not yet initialized. Because the first check in double-checked locking is outside the lock, another thread that sees the non-null reference will return an object that has not finished being constructed. Declaring the field volatile forbids this reordering:

private static volatile Single single;
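Putting the two snippets above together, a sketch of the complete DCL singleton with the volatile field:

public class Single {
    // volatile forbids the store/initialize reordering described above
    private static volatile Single single;

    private Single() {}

    public static Single getSingle() {
        if (single == null) {                   // first check, no lock
            synchronized (Single.class) {
                if (single == null) {           // second check, under the lock
                    single = new Single();
                }
            }
        }
        return single;
    }
}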

In the questions section below: can the same goal be achieved without using volatile?

Inner class implementation

A lazy implementation creates the instance only when it is first used. Besides the null check above, Java has another way to delay creation: a static inner class.

public class Single {
    private Single() {}

    private static class CreateSingle {
        private static final Single SINGLE = new Single();
    }

    public static Single getSingle() {
        return CreateSingle.SINGLE;
    }
}

This takes advantage of two features of Java inner classes:

  1. The outer class can access the inner class's private members
  2. The inner class is not loaded when the outer class is loaded; it is loaded only when it is first accessed

So you get lazy loading, like the lazy singleton, but without the multi-step creation: class initialization is guaranteed by the JVM to happen exactly once, so it is naturally thread-safe.
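A small sketch (the print statements are illustrative additions to the code above) showing that the holder class, and therefore the instance, is only initialized on the first call to getSingle():

public class Single {
    private Single() {
        System.out.println("Single instance created");
    }

    private static class CreateSingle {
        private static final Single SINGLE = new Single();
    }

    public static Single getSingle() {
        return CreateSingle.SINGLE;
    }

    public static void main(String[] args) {
        System.out.println("Single is loaded, but nothing has been created yet");
        Single s = Single.getSingle(); // "Single instance created" is printed only now
    }
}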

Enumeration

Why use an enumeration when we already have good singleton implementations above?

Because of how Java enums work, an enum is a natural singleton:

public enum EnumSingle {
    SINGLE;
}
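In practice the singleton's behaviour goes into methods on the enum and is invoked through SINGLE (a sketch; doSomething is a made-up method name):

public enum EnumSingle {
    SINGLE;

    public void doSomething() {
        System.out.println("called on the single enum instance");
    }
}

// usage
EnumSingle.SINGLE.doSomething();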

We all know that an enum type is a class whose instances are fixed and enumerated. Using an enum restricts the possible values: the instances are listed inside the enum class itself, and the constructor is private.

The following is a trimmed decompilation of the enum class (some methods removed, leaving only the constructor and the member instance):

public final class EnumSingle extends Enum {
    public static final EnumSingle SINGLE = new EnumSingle("SINGLE", 0);

    private EnumSingle(String s, int i) {
        super(s, i);
    }
}

Questions

Before I started, I thought all of this was familiar. After sorting it out, many questions remain. They are listed below together with my thoughts.

  1. Why is the best implementation of a singleton stateless? (I have seen this claim in many places.)

What is statelessness? It means the class has no member attributes, or none that can be modified. A stateless singleton is completely safe under concurrency: there is nothing to change, and getting the instance just means calling its methods. This is how utility classes are typically used. The other case is a stateful singleton that shares a resource, such as a global counter, which must itself be thread-safe and provide safe operations, as sketched below.
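A sketch of such a stateful singleton, a global counter that keeps its shared state thread-safe with an AtomicLong (the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicLong;

public class GlobalCounter {
    private static final GlobalCounter INSTANCE = new GlobalCounter();

    // shared mutable state, protected by an atomic type
    private final AtomicLong count = new AtomicLong();

    private GlobalCounter() {}

    public static GlobalCounter getInstance() {
        return INSTANCE;
    }

    public long increment() {
        return count.incrementAndGet();
    }

    public long current() {
        return count.get();
    }
}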

  2. We know that Java has a reflection mechanism; can these singleton implementations be broken by it?

Yes, they can be broken if reflection is used. Reflection can obtain the constructor and create new objects, for example:

// 1. Get the no-argument constructor
Constructor<Single> constructor = Single.class.getDeclaredConstructor();
// 2. Suppress the access check on the private constructor
constructor.setAccessible(true);
// 3. Call the constructor twice to get two objects
Single single1 = constructor.newInstance();
Single single2 = constructor.newInstance();
// 4. Test
System.out.println(single1);
System.out.println(single2);

Reflection can invoke the singleton class's constructor at will, so a normal singleton can indeed be broken. But what happens if we try the same newInstance trick on an enum?
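A sketch of the attempt (the enum constructor implicitly takes a String name and an int ordinal):

Constructor<EnumSingle> constructor =
        EnumSingle.class.getDeclaredConstructor(String.class, int.class);
constructor.setAccessible(true);
// this call throws IllegalArgumentException
EnumSingle broken = constructor.newInstance("BROKEN", 1);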

You may be surprised to find that enums do not allow the constructor to be invoked by reflection; newInstance throws an exception:

throw new IllegalArgumentException("Cannot reflectively create enum objects");
  3. Is the enum approach hungry or lazy?

From the decompiled class code we can see that the enum member is actually a static final field, created when the class is initialized. So it is hungry:

public static final EnumSingle SINGLE = new EnumSingle("SINGLE", 0);

But while looking this up I found some blogs saying that enums are lazy, because that initialization statement itself only runs when the class is initialized. Judging from the decompiled code, the instance is created as soon as the class is initialized, which is hungry behaviour. Removing the code that uses the instance and checking the memory analysis is consistent with this: the class loads and the instance is created. Enumeration is hungry.

  4. Is it the same instance after serialization?

We probably all know that serializing and then deserializing is a deep-copy process that produces a new object with the same properties and contents. So if a singleton class is serializable, serialization really can break the singleton and produce a new object. For any ordinary singleton, the comparison below prints false:

// Note: Single must implement Serializable for this test to work
// Serialize
ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("tempFile"));
out.writeObject(Single.getSingle());
File file = new File("tempFile");
// Deserialize
ObjectInputStream input = new ObjectInputStream(new FileInputStream(file));
Object newInstance = input.readObject();
// Check whether it is the same object
System.out.println(newInstance == Single.getSingle());

But when tested, enums behave differently.

The Java specification dictates that each enum type and the enum constants it defines are unique in the JVM, so Java makes special provisions for the serialization and deserialization of enum types. During serialization Java simply writes the enum constant's name into the result, and during deserialization it looks the constant up by name through the valueOf() method of java.lang.Enum. In other words, when serializing the DATASOURCE, only the name of the enum constant is written out; when deserializing, the constant is found again by that name within its enum type. Therefore the deserialized instance is the same instance as the one that was serialized.

So readObject, as described above, obtains the enum instance from the enum's name property and returns the existing instance.
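A sketch of the same round-trip test with the enum from above; here the comparison prints true:

ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("tempEnum"));
out.writeObject(EnumSingle.SINGLE);
ObjectInputStream input = new ObjectInputStream(new FileInputStream("tempEnum"));
Object restored = input.readObject();
// Prints true: deserialization returns the same enum instance
System.out.println(restored == EnumSingle.SINGLE);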

  5. Can the DCL implementation work without volatile?

Theoretically, yes. But volatile is itself justified theoretically, and I have found no practical way to test this.

How do you guarantee that you never get a partially initialized object without using volatile? Quite simply: although the JVM may indeed reorder instructions to improve efficiency, it only does so when the reordering does not affect the result.

So besides marking the member volatile so that the writes are not reordered, is it possible to construct the object in a way the JVM cannot reorder past, so that only a fully created object is ever assigned to the singleton field single?

Here is my guess: use a try-catch to delay the assignment until the instructions that create the object are complete. The write to single then becomes a single assignment of a finished object, so if the field is not null it must refer to a complete object.

Single temp = null;
try {
    temp = new Single();
} catch (Exception e) {
    // ignored; only here to keep the construction inside the try block
}
single = temp;

This is, of course, only a conjecture. Verifying it would mean checking how the instructions are actually reordered and tracing into the JVM to see the final operations, and I have not found a suitable way to do that.

Conclusion

First is the idea behind singleton design: a globally unique object, which is easy to understand. Second, turning that idea into code requires concrete decisions: 1. lazy or hungry? 2. must thread safety be guaranteed? 3. is protection against reflection and serialization needed? and so on. As for advantages and disadvantages, some trade-offs are unavoidable: for example, a singleton cannot be subclassed and is hard to extend. These shortcomings follow directly from the singleton idea, so they cannot be removed; the real question is not whether they are disadvantages but whether the usage scenario actually calls for a singleton.