Preface

Nice to meet you

HashMap is a very important collection, used constantly in daily development, and a frequent interview focus. This article does not cover basic API usage; instead it drills down into the internals of HashMap. Some familiarity with hash tables and HashMap is assumed.

HashMap is essentially a hash table, so it is inseparable from the hash function, hash collisions, and the capacity expansion scheme; at the same time, as a data structure, concurrent multi-threaded access, that is, thread safety, must be considered. These four points are the keys to learning HashMap, and also the keys to its design.

HashMap belongs to the Map collection hierarchy and implements the Serializable interface, so it can be serialized, and the Cloneable interface, so it can be copied. Its inheritance structure is as follows:

HashMap does not cover every need; other classes extend it for special situations, such as the thread-safe ConcurrentHashMap, LinkedHashMap which records insertion order, TreeMap which sorts keys, and so on.

This article focuses on four key topics: hash functions, hash collisions, capacity expansion, and thread safety, supplemented by analysis of key source code and related questions.

All content in this article is based on JDK 1.8 unless otherwise stated.

The hash function

The purpose of a hash function is to compute the index of a key in the array. A hash function is judged by whether it distributes keys uniformly and whether it is cheap to compute.

The steps of the HashMap hash function:

  1. Perturb the hashCode of the key object
  2. Compute the array index by taking the modulus

The purpose of the perturbation is to make the hashcode more random, so that the modulus in the second step does not cluster the keys together, improving hash uniformity. The perturbation lives in the hash() method:

static final int hash(Object key) {
    int h;
    // Get the hashCode of the key and xor its high 16 bits into the low 16 bits
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

That is, the lower 16 bits are xor'ed with the upper 16 bits while the upper 16 bits stay unchanged. Since the array is usually short, only the low bits take part in the modulus operation; xor'ing the high bits into the low bits lets the high bits participate in the hash as well, making the distribution more uniform. The calculation is shown in the figure below (8 bits are used for convenience; the same holds for 32 bits):
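A small sketch makes the effect concrete (the class name PerturbDemo is mine, not JDK code): two hashcodes that differ only in their high 16 bits collide in a length-16 table without perturbation, but separate once the high bits are mixed in.

```java
public class PerturbDemo {
    // Same spreading step as HashMap.hash(): xor the high 16 bits into the low 16
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;       // small table: only the low 4 bits pick the bucket
        int h1 = 0x10000; // differs from h2 only in the high bits
        int h2 = 0x20000;
        // Without perturbation both land in bucket 0
        System.out.println(((n - 1) & h1) + " " + ((n - 1) & h2));
        // After spreading, the high bits influence the bucket choice
        System.out.println(((n - 1) & spread(h1)) + " " + ((n - 1) & spread(h2)));
    }
}
```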

After the hashcode is perturbed, the result must be reduced to an index. JDK 1.8's HashMap takes a different, higher-performance approach than simply applying %. HashMap keeps the array length at an integer power of 2, which means a bitwise AND between the hashcode and (length - 1) has exactly the same effect as the modulus. The diagram below:

And the bitwise AND is much more efficient than the remainder operation, which improves performance. This property is also used in expansion, as discussed later. The putVal() method is called inside put():

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    ...
    // AND with (array length - 1) to get the index
    if ((p = tab[i = (n - 1) & hash]) == null)
        ...
}
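The equivalence can be checked directly with a minimal sketch (my own demo class, not JDK code): for a power-of-two length n, (n - 1) & hash selects the same bucket as hash % n for non-negative hashes.

```java
public class IndexDemo {
    // All low bits of n-1 are 1, so the AND keeps exactly hash's low bits
    static int indexByMask(int hash, int n) {
        return (n - 1) & hash;
    }

    public static void main(String[] args) {
        int n = 16; // power of two, like HashMap's default capacity
        for (int hash : new int[]{0, 5, 17, 123456}) {
            System.out.println(hash + " -> " + indexByMask(hash, n)
                    + " == " + (hash % n));
        }
    }
}
```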

For the complete hash calculation process, see the following figure:

We mentioned above that the HashMap array length is an integer power of 2. How does HashMap enforce this? The array length changes in two ways:

  1. Length specified during initialization
  2. Length increment during capacity expansion

Consider the first case. If no length is given to the HashMap constructor, the initial length defaults to 16. 16 is an empirical value: it is a power of 2, large enough to avoid triggering expansion too frequently, and small enough not to waste space. If a length that is not a power of 2 is specified, it is automatically rounded up to the smallest power of 2 greater than or equal to it: specify 6 and you get 8; specify 11 and you get 16. When initialized with such a value, HashMap calls tableSizeFor():

public HashMap(int initialCapacity, float loadFactor) {
    ...
    this.loadFactor = loadFactor;
    // The tableSizeFor method is called
    this.threshold = tableSizeFor(initialCapacity);
}

static final int tableSizeFor(int cap) {
    // Note that you must subtract one here
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}

The tableSizeFor() method looks complicated, but its purpose is simple: spread the highest 1 bit into all lower bits and then add 1, producing the smallest power of 2 greater than or equal to initialCapacity. As shown below (8 bits are used for demonstration; the same holds for 32 bits):

So why must cap first be decremented by 1? If the specified number is already an exact power of 2 and the -1 were omitted, the result would be twice the intended value:

Without -1: 00100 --fill 1s--> 00111 --+1--> 01000  (4 becomes 8)
With    -1: 00011 --fill 1s--> 00011 --+1--> 00100  (4 stays 4)
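Here is a runnable sketch of the same bit trick (the class name is mine, and the MAXIMUM_CAPACITY clamp is omitted for brevity):

```java
public class TableSizeDemo {
    // Same bit trick as HashMap.tableSizeFor(): smallest power of two >= cap
    static int tableSizeFor(int cap) {
        int n = cap - 1;   // the -1 keeps an exact power of two from doubling
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(6));   // 8
        System.out.println(tableSizeFor(11));  // 16
        System.out.println(tableSizeFor(16));  // 16, thanks to the -1
    }
}
```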

The second way the array length changes is expansion. HashMap doubles the array length each time it expands, so a length that starts as an integer power of 2 remains one:

final Node<K,V>[] resize() {
    ...
    if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
             oldCap >= DEFAULT_INITIAL_CAPACITY)
        // Double the threshold as well
        newThr = oldThr << 1;
    ...
}

Summary:

  1. HashMap improves the hash quality by XORing the high 16 bits into the low 16 bits.
  2. HashMap keeps the array length at an integer power of 2 to replace the modulus with a bitwise AND and improve performance.
  3. HashMap maintains the power-of-two length by rounding the initial capacity up to a power of 2 and doubling on each expansion.

Hashing conflict resolution

Even a good hash algorithm cannot avoid hash collisions. A hash collision occurs when two different keys hash to the same array index. There are many ways to resolve collisions, such as open addressing, rehashing, a public overflow area, and separate chaining. HashMap uses separate chaining (the chained address method), and JDK 1.8 added a red-black tree optimization, as shown in the following figure:

When collisions occur, the nodes at an index form a linked list; when the list grows too long it is automatically converted into a red-black tree to improve lookup efficiency. A red-black tree offers O(log N) lookups, but it only pays off when the data volume is large, so HashMap places the following restrictions on the conversion:

  • If the list length >=8 and the array length >=64, the list is converted to a red-black tree.
  • If the list length is greater than or equal to 8, but the array length is less than 64, it will be expanded preferentially rather than converted to a red-black tree.
  • When the number of nodes in the red-black tree <=6, it is automatically converted into a linked list.

That raises the following questions:

  • Why must the array length reach 64 before a list is converted to a red-black tree?

    When the array is short, say 16, a linked list of length 8 already accounts for a large share of the stored data, meaning the load is close to its ceiling and an expansion is imminent. If the list were converted to a red-black tree first, the tree would immediately have to be split across the new array during expansion, which costs performance rather than improving it. Therefore, when the array length is less than 64, expansion takes priority.

  • Why does it have to be greater than or equal to 8 to turn into a red-black tree, rather than 7 or 9?

    Tree nodes are larger than ordinary nodes. For short lists, red-black trees show no clear performance advantage and only waste space, so linked lists are used instead. Under the Poisson model assumed in the JDK source (load factor 0.75), the probability of a list reaching length 8 is about 0.00000006, vanishingly small; 8 therefore serves as the watershed: at 8 or above the list becomes a red-black tree, below that it stays a list. Red-black trees exist to withstand heavy hash collisions in extreme cases; in ordinary cases a linked list is more appropriate.
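The quoted probability can be reproduced from the Poisson analysis in the JDK source comment (the demo class is mine; the mean of 0.5 is the value that comment assumes):

```java
public class PoissonDemo {
    // P(k) for a Poisson distribution with the given mean
    static double poisson(double lambda, int k) {
        double p = Math.exp(-lambda);
        for (int i = 1; i <= k; i++) {
            p = p * lambda / i;
        }
        return p;
    }

    public static void main(String[] args) {
        // Probability of a bucket's list reaching each length, mean 0.5
        for (int k = 0; k <= 8; k++) {
            System.out.printf("P(list length == %d) = %.8f%n", k, poisson(0.5, k));
        }
    }
}
```

The value printed for k = 8 matches the 0.00000006 cited in the HashMap source comment.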

Note that red-black trees were introduced in JDK 1.8; earlier versions use only the array + linked list scheme.

Summary:

  1. HashMap adopts the chained address method, which is converted to a linked list in case of conflicts, and to a red-black tree in case of long lists to improve efficiency.
  2. HashMap restricts red-black tree conversion so that trees handle only rare extreme cases.

Capacity expansion

As more and more data is stored in a HashMap, the probability of hash collisions rises. Expanding the array trades space for time and keeps lookups at constant time complexity. So when does expansion happen? It is controlled by a key HashMap parameter: the load factor.

The load factor is a ratio: the threshold equals the array length times the load factor, and when the number of nodes in the HashMap reaches that threshold, expansion is triggered. In other words, the load factor controls how many nodes the current array may carry. If the length is 16 and the load factor is 0.75, the array can hold 16 * 0.75 = 12 nodes before expanding. The value of the load factor needs careful weighing: the larger it is, the higher the array utilization but also the higher the probability of hash collisions; the smaller it is, the lower the utilization and the lower the collision probability. So the load factor balances space against time. Analysis suggests 0.75 is an appropriate value: above 0.75 the collision probability rises sharply, while below 0.75 it does not drop noticeably. The default load factor in HashMap is 0.75, and changing it is not recommended without special requirements.
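The threshold arithmetic is simple enough to sketch directly (the demo class name is mine):

```java
public class ThresholdDemo {
    // threshold = capacity * loadFactor: the node count that triggers resize
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16, 0.75f)); // 12: the 13th put resizes to 32
        System.out.println(threshold(32, 0.75f)); // 24
    }
}
```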

So how does HashMap expand once the threshold is reached? HashMap doubles the array length and migrates data from the old array to the new one, and it optimizes the migration: because the array length is kept at an integer power of 2, the data can be migrated in a more efficient way.

Data migration in JDK 1.7 and earlier was relatively simple: iterate over all nodes, recompute each node's new index with the hash function, and insert it into the new array's lists with head insertion. This has two drawbacks: 1. every node needs a fresh modulus calculation; 2. head insertion into the new array can form a linked-list ring in a multi-threaded environment. JDK 1.8 optimizes this because the array length is always a power of 2 and each expansion doubles it. The benefit is that a key's position in the new array has only two possibilities: its original index, or its original index plus the old array length. See the figure below:

As the figure shows, the position in the new array depends only on the newly significant high bit: if it is 0, the node stays at its original index; if it is 1, the old array length is added. So we only need to test whether that bit is 1 or 0 to get a node's position in the new array, without rehashing. HashMap splits each list into two lists, one for the original index and one for the original index plus the old array length, and inserts them into the new array while preserving node order, as follows:
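The split rule can be verified with a small sketch (my own demo, not JDK code): testing the single bit hash & oldCap gives the same bucket as a full rehash against the doubled length.

```java
public class SplitDemo {
    // The resize test: bit (hash & oldCap) decides low list vs high list
    static int newIndex(int hash, int oldCap) {
        int oldIndex = hash & (oldCap - 1);
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        for (int hash : new int[]{5, 21}) { // both sit in bucket 5 when length is 16
            System.out.println(hash + ": " + (hash & (oldCap - 1)) + " -> "
                    + newIndex(hash, oldCap)
                    + " (full rehash gives " + (hash & (newCap - 1)) + ")");
        }
    }
}
```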

One problem remains: how head insertion creates a linked-list ring. This is covered in the thread safety section.

Summary:

  1. Load factor determines the threshold of HashMap expansion, which needs to weigh time and space. Generally, 0.75 is kept unchanged.
  2. The expansion mechanism exploits the power-of-two array length to migrate data more efficiently, and it also avoids the linked-list ring caused by head insertion.

Thread safety

As a collection, HashMap's main job is adding, deleting, looking up, and updating data, so it may well be accessed by multiple threads concurrently, and concurrency issues deserve special attention.

HashMap is not thread-safe and cannot guarantee data consistency under multiple threads. For example, suppose thread A wants to insert node X at index 2, which is currently null. After the null check, thread A is suspended; thread B inserts a new node Y at index 2; when thread A resumes, it inserts node X at index 2 directly, overwriting node Y and losing data, as shown below:

Expansion in JDK 1.7 and earlier uses head insertion, which is fast, but in a multi-threaded environment it can create linked-list rings; a traversal then never finds the tail of the list and falls into an infinite loop. For reasons of space, see the article "Why is HashMap thread-unsafe?", where the author covers HashMap's concurrency problems in detail. JDK 1.8 switches expansion to tail insertion, which solves the ring problem, but it still does not solve data consistency.

What about data consistency? There are three solutions:

  • Use Hashtable
  • Call the Collections.synchronizedMap() method to wrap the HashMap
  • Use ConcurrentHashMap

The first two schemes share the same idea: lock the entire object in every method. Hashtable is a legacy collection class with many outdated designs; it marks every method with the synchronized keyword to ensure thread safety:

// Hashtable
public synchronized V get(Object key) { ... }
public synchronized V put(K key, V value) { ... }
public synchronized V remove(Object key) { ... }
public synchronized V replace(K key, V value) { ... }
...

The second method returns a SynchronizedMap wrapper object which, by default, synchronizes every method on a mutex, locking the entire object.

What is this mutex? Look directly at the constructors:

final Object      mutex;        // Object on which to synchronize
SynchronizedMap(Map<K,V> m) {
    this.m = Objects.requireNonNull(m);
    // The default is this object
    mutex = this;
}
SynchronizedMap(Map<K,V> m, Object mutex) {
    this.m = m;
    this.mutex = mutex;
}

You can see that the default lock is the wrapper object itself, so the effect is the same as with Hashtable. The consequences of crudely locking the entire object are:

  • Locks are very heavyweight and can seriously affect performance.
  • Only one thread can read or write at a time, limiting concurrency efficiency.
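One point worth showing in code: even with the wrapper, iteration must be synchronized manually, as the Collections.synchronizedMap() javadoc requires. A minimal sketch:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
        map.put("a", 1); // every call synchronizes on the wrapper itself

        // Iteration is NOT covered by the per-call lock: the javadoc requires
        // holding the wrapper's monitor manually while iterating
        synchronized (map) {
            for (Map.Entry<String, Integer> e : map.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```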

ConcurrentHashMap is designed to solve this problem. It improves efficiency by reducing lock granularity and using CAS. ConcurrentHashMap locks only a single bucket of the array rather than the whole object, so threads touching different buckets do not block each other, greatly improving concurrency. Meanwhile, ConcurrentHashMap's read operations do not need to acquire the lock at all, as shown below:

More on ConcurrentHashMap and Hashtable, due to space limitations, will be covered in another article.

So, is using all three of these solutions absolutely thread-safe? Take a look at the following code:

ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
map.put("abc", "123");

// Thread 1:
if (map.containsKey("abc")) {
    String s = map.get("abc");
}

// Thread 2:
map.remove("abc");

If Thread 1 calls containsKey and then releases the lock before calling get, Thread 2 can acquire the lock, remove "abc", and release it; Thread 1 then reads a null into s. So ConcurrentHashMap, the Collections.synchronizedMap() wrapper, and Hashtable only ensure thread safety to a certain extent; none of them guarantees absolute thread safety for compound actions.
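When a check-then-act sequence must be atomic, ConcurrentHashMap's compound operations are the idiomatic fix. A sketch (computeIfPresent and getOrDefault are real ConcurrentHashMap methods; the surrounding class is mine):

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicAccessDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("abc", "123");

        // containsKey + get is two separate operations; another thread can
        // remove the key in between. computeIfPresent runs as one atomic step.
        String s = map.computeIfPresent("abc", (k, v) -> v + "!");
        System.out.println(s); // "123!" here, or null if the key had been removed

        // getOrDefault also avoids the null ambiguity of a bare get()
        System.out.println(map.getOrDefault("missing", "fallback"));
    }
}
```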

Thread safety also involves the fail-fast mechanism. While iterating over a HashMap with its iterator, if the HashMap undergoes a structural change, such as inserting new data, removing data, or expanding, the iterator throws a ConcurrentModificationException (it "fails fast"). This surfaces concurrent modification immediately and ensures traversal safety to a certain extent. See the source:

final Node<K,V> nextNode() {
    ...
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();
    ...
}

The HashMap's modCount is recorded when the iterator is created, and every structural change to the HashMap increments modCount by one. During iteration, comparing the HashMap's current modCount with the iterator's saved expectedModCount reveals whether a structural change has occurred.

The fail-fast exception only serves as a safety check during traversal; it is not a mechanism for safe concurrent multi-threaded access to a HashMap. If you have concurrency requirements, use one of the three approaches described above.
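The fail-fast behavior is easy to trigger deliberately. A sketch (my own demo) that also shows the safe way to remove during iteration, via the iterator itself:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Iterator<String> it = map.keySet().iterator();
        it.next();
        map.put("c", 3); // structural change bumps modCount behind the iterator's back
        try {
            it.next();   // modCount != expectedModCount -> fail fast
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }

        // iterator.remove() keeps the two counters in sync, so it is safe:
        for (Iterator<Map.Entry<String, Integer>> e = map.entrySet().iterator(); e.hasNext(); ) {
            e.next();
            e.remove();
        }
        System.out.println(map.size()); // 0
    }
}
```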

Summary

  1. HashMap is not thread-safe; concurrent multi-threaded access can cause unexpected problems such as data loss
  2. JDK 1.8 switches expansion to tail insertion, preventing the infinite loop caused by linked-list rings
  3. The solutions to the concurrency problem are Hashtable, Collections.synchronizedMap(), and ConcurrentHashMap; the best of the three is ConcurrentHashMap
  4. Even these solutions cannot guarantee absolute thread safety for compound actions
  5. Fail-fast is a safety check in the HashMap iteration mechanism

Source code analysis

Understanding of key variables

There are many internal variables in the HashMap source code, and these variables will appear frequently in the following source code analysis. First, you need to understand the meaning of these variables.

// An array of data
transient Node<K,V>[] table;
// The number of key-value pairs stored
transient int size;
// Number of changes to the HashMap structure, mainly used to determine fast-fail
transient int modCount;
// Maximum number of key-value pairs that can be stored (threshold = table.length * loadFactor), also called the threshold
int threshold;
// Load factor, which represents the proportion of the maximum amount of data that can be held
final float loadFactor;
// Static inner class: the node type stored in HashMap; holds a key-value pair and is itself a linked-list node
static class Node<K,V> implements Map.Entry<K,V> { ... }

Expansion

The HashMap expansion method also handles initialization, so its source splits into two parts: determining the new array size, then migrating the data. The detailed analysis follows; I have added thorough comments, so read them as needed. The expansion steps were described above, and the source shows how HashMap implements that design.

final Node<K,V>[] resize() {
    // The variables are the original array, the original array size, the original threshold; New array size, new threshold
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    
    // If the original array length is greater than 0
    if (oldCap > 0) {
        // If the maximum capacity (1 << 30) has already been reached
        if (oldCap >= MAXIMUM_CAPACITY) {
            // Set the threshold directly to the maximum positive number
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            // Set it to double
            newThr = oldThr << 1; 
    }
    
    // The original array length is 0 but the threshold is not: set the length to the threshold
    // This happens when the array length was specified at HashMap creation
    else if (oldThr > 0) 
        newCap = oldThr;
    // First initialization, default 16 and 0.75
    // Create a HashMap object using the default constructor
    else {               
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // If the original array length is less than 16 or the maximum length is exceeded after doubling, the threshold is recalculated
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    
    @SuppressWarnings({"rawtypes","unchecked"})
    // Create a new array
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        // Loop through the array and compute the new position for each node
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                // If there is no successor node, then just use the new array length modulo to get the new index
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                // If it is a red-black tree, call the red-black tree disassembly method
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                // The new location has only two possibilities: the original location, the original location + the old array length
                // Split the original list into two lists and insert them into two positions in the new array
                // Don't call the put method more than once
                else { 
                    // Are the same position of the list and the original position + the original length of the list
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    // Split by whether the new high bit is 0 or 1
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    // Finally assign to the new array
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    // Returns a new array
    return newTab;
}

Adding values

Call the put() method to add the key-value pair, and eventually call putVal() to actually implement the add logic. Code parsing is as follows:

public V put(K key, V value) {
    // Get the hash value and call putVal to insert the data
    return putVal(hash(key), key, value, false, true);
}

// onlyIfAbsent Indicates whether the old value is overwritten. True indicates that the old value is not overwritten. False indicates that the old value is overwritten
// Evict is related to LinkHashMap callback methods and is beyond the scope of this article
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    
    // TAB is the internal array of the HashMap, n is the length of the array, I is the subscript to insert, and p is the node corresponding to that subscript
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    
    // Check whether the array is null or empty. If so, call resize() to expand the array
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    
    // Use bit and operations instead of modulo to get subscripts
    // Check whether the current subscript is null, if so, create a node directly insert, if not, enter the following else logic
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        
        // e indicates the node with the same key as the current key. If the node does not exist, it is null
        // k is the key of the current array subscript node
        Node<K,V> e; K k;
        
        // Check whether the current node is the same as the key to be inserted. If yes, the existing key is found
        if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // Check whether the node is a tree node, if the red-black tree method is called to insert
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // The last case is direct list insertion
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // If the length is greater than or equal to 8, convert to a red-black tree
                    // Note that the treeifyBin method does the array length judgment,
                    // If less than 64, array expansion is preferred over tree conversion
                    if (binCount >= TREEIFY_THRESHOLD - 1) 
                        treeifyBin(tab, hash);
                    break;
                }
                // Found the same key: exit the loop directly
                if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        // If the same key was found, check onlyIfAbsent and whether the old value is null
        // Perform the update or do nothing, and return the old value
        if (e != null) {
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    // No existing key was updated, so the number of key-value pairs has changed
    // modCount + 1 indicates a structural change
    ++modCount;
    // Check whether the length reaches the upper limit. If so, expand the capacity
    if (++size > threshold)
        resize();
    // afterNodeInsertion is a LinkedHashMap callback
    afterNodeInsertion(evict);
    return null;
}

Each step is explained in detail in the code, so here’s a summary:

  1. Generally, there are two cases: finding the same key and not finding the same key. If yes, determine whether to update and return the old value. If no, insert a new Node, update the number of nodes and determine whether to expand.
  2. Lookups fall into three cases: array slot, linked list, and red-black tree. If the slot at index i is non-empty and its key does not match, we check whether the node is a tree node or a list node and search accordingly.
  3. When a list reaches a certain length it is converted to a red-black tree, if and only if the list length >= 8 and the array length >= 64.

Finally, here is a diagram to leave a clearer impression of the whole process:

Other questions

Why does Hashtable keep its array length prime, while HashMap uses an integer power of 2?

A: A prime length can effectively reduce hash collisions; HashMap uses an integer power of 2 to make the modulus and expansion operations efficient, combined with the high/low-bit XOR to make the hash more uniform.

Why do prime numbers reduce hash collisions? If the keys' hashcodes are evenly distributed, prime and composite lengths work equally well. For instance, hashcodes spread evenly over 1 to 20 distribute uniformly whether the length is the composite 4 or the prime 5. But if the hashcodes arrive with a stride of 2, such as 1, 3, 5, ..., then in a length-4 array the even indices never hold data, while a length-5 array has no such problem. A composite length lets hashcodes spaced by one of its factors cluster together, weakening the hash. For more details, see the blog post "Algorithm Analysis: Why hash table sizes are prime", which uses data analysis to show why primes hash better.
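The clustering effect is easy to reproduce (the demo class name is mine): hashcodes with a stride of 2 leave the even buckets of a length-4 table empty, while a length-5 table spreads them evenly.

```java
import java.util.Arrays;

public class PrimeDemo {
    // Bucket occupancy counts for hashes distributed by h % len
    static int[] distribute(int[] hashes, int len) {
        int[] counts = new int[len];
        for (int h : hashes) counts[h % len]++;
        return counts;
    }

    public static void main(String[] args) {
        int[] hashes = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}; // stride-2 hashcodes
        System.out.println(Arrays.toString(distribute(hashes, 4))); // [0, 5, 0, 5]
        System.out.println(Arrays.toString(distribute(hashes, 5))); // [2, 2, 2, 2, 2]
    }
}
```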

Why does inserting HashMap data require implementing hashCode and equals methods? What are the requirements for these two methods?

A: The hashCode determines the index to insert at, and equals comparison finds the exact entry; two keys that are equal must have equal hashCodes, but objects with the same hashCode are not necessarily equal.

A hashCode is like a person's name: the same person always has the same name, but two people with the same name are not necessarily the same person. equals compares content and can be overridden by a class; by default it compares reference addresses. The == operator compares reference addresses for objects and values for primitives.

Inside a HashMap, the hashCode determines a key's index. If two objects that are equal had different hashCodes, the "same" key could occupy two buckets, leaving duplicate keys in the map. So keys that are equal by equals() must have the same hashCode. HashMap decides whether two elements are the same with the combined comparison: p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))). For a deeper look at equals() and hashCode(), see the article "Java Enhancement: equals() and hashCode()", which dissects the differences between these methods in detail.
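A sketch of the contract in action (BadKey and GoodKey are hypothetical example classes): a key that overrides equals without hashCode is effectively lost after insertion.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyContractDemo {
    // Hypothetical key that overrides equals but NOT hashCode
    static class BadKey {
        final String id;
        BadKey(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id.equals(id);
        }
        // hashCode() inherited from Object: equal BadKeys hash differently
    }

    // Hypothetical key that honors the equals/hashCode contract
    static class GoodKey {
        final String id;
        GoodKey(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof GoodKey && ((GoodKey) o).id.equals(id);
        }
        @Override public int hashCode() { return Objects.hash(id); }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> bad = new HashMap<>();
        bad.put(new BadKey("x"), 1);
        // Practically always null: a second BadKey("x") lands with a different hash
        System.out.println(bad.get(new BadKey("x")));

        Map<GoodKey, Integer> good = new HashMap<>();
        good.put(new GoodKey("x"), 1);
        System.out.println(good.get(new GoodKey("x"))); // 1
    }
}
```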

Final words

The content of HashMap is hard to cover in a single article, and its design touches on much more. For example, the thread-safety discussion extends to ConcurrentHashMap and Hashtable; the differences between those two classes and HashMap, as well as their internal designs, matter a great deal and will be covered in another article.

Finally, I hope the article is helpful to you.

That's the full article. Original writing is not easy; if it helped, a like, bookmark, comment, or share is appreciated. Ideas and corrections are welcome in the comments. If you want to reprint this article, please ask in the comments or contact me privately.

And welcome to my blog: Portal