catalogue

  • 3.0.0.1 What is the difference between the System.arraycopy() and Arrays.copyOf() methods in ArrayList? Comparing System.arraycopy() and Arrays.copyOf()
  • Why does SparseArray perform better than HashMap?
  • 3.0.0.3 How do the sort implementations in Arrays and Collections differ? Tell me the difference…
  • 3.0.0.4 What classes are in the Java Collections framework? What are their features? What is fail-fast, the fast-failure mechanism of Java collections?
  • 3.0.0.5 What are the differences between ArrayList, Vector and LinkedList? What is a load factor?
  • 3.0.0.6 How can I understand the capacity-expansion cost of ArrayList? Can Arrays.asList be expanded? How do you serialize an ArrayList?
  • 3.0.0.7 How to understand the read/write mechanism and efficiency of List collections? What is CopyOnWriteArrayList, and how is it different from ArrayList?
  • 3.0.1.0 What is the difference between HashSet and TreeSet? How do you guarantee unique values? How does the underlying layer do that?
  • 3.0.1.5 What is the difference between HashMap and Hashtable? How does a HashMap put and get elements? What data structures does it use?
  • 3.0.1.6 How to ensure HashMap thread safety? How is it implemented underneath? Is a HashMap ordered? How to achieve order?
  • 3.0.1.7 What happens when a HashMap stores two objects with the same hashCode? If two keys have the same hashCode, how do you get the value object?
  • 3.0.1.8 Why doesn't HashMap use the raw hashCode() value as the table index?
  • 3.0.1.9 Why are wrapper classes such as String and Integer suitable as HashMap keys? Why not use a different key?
  • 3.0.2.0 How does HashMap expand? How do I understand that the size of the HashMap exceeds the capacity defined by the load factor? Are there any problems with resizing a HashMap?

Good news

  • A summary of my blog notes [October 2015 to present], including Java basics and in-depth topics, Android technical blogs, Python study notes, and more, plus a summary of bugs encountered in daily development. I have also collected many interview questions in my spare time; the notes are updated, maintained and corrected over the long term, and keep improving… The open-source files are in Markdown format! I have also open-sourced a life blog: since 2012 it has accumulated about 500 articles [nearly 1 million words], which will be published online. For reprints please indicate the source, thank you!
  • Link address: Github.com/yangchong21…
  • If you find it useful, please star it, thank you! Suggestions are also welcome; everything starts small, and quantitative change leads to qualitative change! All blogs will be open-sourced to GitHub!

3.0.0.1 What is the difference between the System.arraycopy() and Arrays.copyOf() methods in ArrayList? Comparing System.arraycopy() and Arrays.copyOf()

  • How System.arraycopy() differs from Arrays.copyOf()
    • For example, the add(int index, E element) method uses System.arraycopy() to shift the elements:
      /**
       * Inserts the specified element at the specified position in this list.
       * Calls rangeCheckForAdd to check the bounds of the index, then calls
       * ensureCapacityInternal to make sure the capacity is large enough,
       * moves all members from index onwards back one place, inserts the
       * element at index, and finally increments size.
       */
      public void add(int index, E element) {
          rangeCheckForAdd(index);
          ensureCapacityInternal(size + 1);
          // System.arraycopy() performs the shift itself.
          // elementData: source array; index: start position in the source array;
          // elementData: target array; index + 1: start position in the target array;
          // size - index: number of elements to copy
          System.arraycopy(elementData, index, elementData, index + 1, size - index);
          elementData[index] = element;
          size++;
      }
    • The toArray() method uses the Arrays.copyOf() method:
      /**
       * Returns an array containing all of the elements in this list in proper
       * sequence (from first to last). The returned array is "safe" in that the
       * list does not retain a reference to it; in other words, the method must
       * allocate a new array, so the caller is free to modify it. This method
       * acts as a bridge between array-based and collection-based APIs.
       */
      public Object[] toArray() {
          // elementData: the array to copy; size: the number of elements to copy
          return Arrays.copyOf(elementData, size);
      }
    • How the two are related and how they differ
      • Comparing the two pieces of source code above, you can see that copyOf() internally calls System.arraycopy()
      • The differences:
        • 1. System.arraycopy() requires a target array: you copy the source array into an array of your choice (or back into the source array), and you can choose the starting point and length of the copy as well as the position in the target array
        • 2. Arrays.copyOf() creates a new array internally and returns it.
  • Comparing System.arraycopy() and Arrays.copyOf() in use
    • Using the System.arraycopy() method
      public static void main(String[] args) {
      	// TODO Auto-generated method stub
      	int[] a = new int[10];
      	a[0] = 0;
      	a[1] = 1;
      	a[2] = 2;
      	a[3] = 3;
      	System.arraycopy(a, 2, a, 3, 3);
      	a[2]=99;
      	for (int i = 0; i < a.length; i++) {
      		System.out.println(a[i]);
      	}
      }
      // Output: 0 1 99 2 3 0 0 0 0 0
    • Using the Arrays.copyOf() method
      public static void main(String[] args) {
      	int[] a = new int[3];
      	a[0] = 0;
      	a[1] = 1;
      	a[2] = 2;
      	int[] b = Arrays.copyOf(a, 10);
      	System.out.println("b.length = " + b.length);
      }
      // Output: b.length = 10
    • Conclusion
      • System.arraycopy() requires a target array: you copy the source array into your own array (or back into the source array), choosing the starting point, the length, and the placement in the target array. Arrays.copyOf() creates a new array internally and returns it.

Why does SparseArray perform better than HashMap?

  • SparseArray is a data structure in android.util, optimized for mobile. With small data sets its performance is better than HashMap. It is similar to HashMap but maps int keys to Object values.
  • 1. Keys and values are stored in separate arrays. The key array is of primitive type int, so keys need no boxing, which improves speed.
  • 2. Inserts use binary search to find the position, so the key array is always sorted in ascending order.
  • 3. Lookups also use binary search, which is fast when the amount of data is small.
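  • The idea can be sketched in plain Java (a simplified illustration of the principle, not the Android source; the class name SimpleSparseArray is hypothetical):

```java
import java.util.Arrays;

// Minimal sketch of the SparseArray idea: two parallel arrays, int keys
// kept sorted, binary search on both insert and lookup.
public class SimpleSparseArray {
    private int[] keys = new int[4];          // primitive keys: no boxing
    private Object[] values = new Object[4];
    private int size = 0;

    public void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {                         // key exists: overwrite value
            values[i] = value;
            return;
        }
        i = ~i;                               // insertion point keeps keys sorted
        if (size == keys.length) {            // grow both arrays when full
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    public Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? values[i] : null;     // binary search: O(log n)
    }
}
```

The real android.util.SparseArray adds deletion markers and other optimizations, but the boxing-free int keys and binary search shown here are the reason it beats HashMap for small int-keyed maps.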

3.0.0.3 How do the sort implementations in Arrays and Collections differ? Tell me the difference…

  • 1. Arrays.sort()
    • For primitive arrays the algorithm is a tuned (dual-pivot) quicksort that offers N*log(N) performance on many data sets that cause other quicksorts to degrade to quadratic performance; for object arrays it uses the same stable merge sort as Collections.sort().
  • 2. Collections.sort()
    • The algorithm is a modified merge sort (the merge is skipped if the highest element in the low sublist is less than the lowest element in the high sublist). This algorithm provides guaranteed N*log(N) performance. The implementation dumps the specified list into an array, sorts the array, and then iterates over the list, resetting each element from the corresponding position in the sorted array.
  • The difference between them
    • Arrays.sort() works on arrays, while Collections.sort() works on Lists by copying to an array, sorting it, and writing the elements back. The merge sort used for objects is stable, so equal elements keep their relative order.
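  • A short sketch showing both entry points (the class name SortDemo is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortDemo {
    // Arrays.sort() on a primitive array: tuned dual-pivot quicksort.
    public static int[] sortPrimitives() {
        int[] a = {3, 1, 2};
        Arrays.sort(a);
        return a;
    }

    // Collections.sort() on a List: dumps the list into an array,
    // merge-sorts it, then writes the elements back into the list.
    public static List<String> sortList() {
        List<String> list = new ArrayList<>(Arrays.asList("b", "c", "a"));
        Collections.sort(list);
        return list;
    }
}
```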

3.0.0.4 What classes are in the Java Collections framework? What are their features? What is fail-fast, the fast-failure mechanism of Java collections?

  • The Java collection framework can be roughly divided into four systems: Set, List, Queue and Map
    • Set: represents an unordered, non-repeatable collection. Common classes are HashSet and TreeSet
    • List: represents an ordered, repeatable collection. Common classes are ArrayList, LinkedList, and the mutable array Vector
    • Map: represents a collection of key-value mappings. Common classes include HashMap, LinkedHashMap, and TreeMap
    • Queue: represents a queue collection
  • Fail-fast: A fast failure mechanism for Java Collections
    • Fail-fast is a mechanism for detecting errors in Java collections; it can fire when multiple threads make structural changes to a collection.
      • For example: suppose there are two threads (thread 1 and thread 2). Thread 1 iterates over the elements of collection A with an Iterator, and at some point thread 2 changes the structure of collection A (a structural change, not merely a change to an element's contents). The program then throws a ConcurrentModificationException; this is the fail-fast mechanism.
    • The reason:
      • The iterator accesses the collection's contents directly during traversal and uses a modCount variable. If the collection's structure changes during traversal, the value of modCount changes. Whenever the iterator moves to the next element via hasNext()/next(), it checks whether modCount equals the expectedModCount value; if so, it continues the traversal, otherwise it throws the exception and terminates the traversal.
    • Solutions:
      • 1. Add synchronized to all parts of the traversal that involve changing modCount.
      • 2. Replace ArrayList with CopyOnWriteArrayList
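  • The mechanism is easy to reproduce (a minimal sketch; the class name FailFastDemo is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;

public class FailFastDemo {
    // Structurally modifying an ArrayList while iterating over it changes
    // modCount, so the iterator's next() detects the mismatch and throws.
    public static boolean triggersFailFast() {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
        try {
            for (Integer i : list) {
                if (i == 1) {
                    list.remove(i);   // structural change: modCount diverges
                }
            }
        } catch (ConcurrentModificationException e) {
            return true;              // fail-fast fired
        }
        return false;
    }
}
```

Note that removing via the iterator itself (Iterator.remove()) keeps expectedModCount in sync and does not throw.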

3.0.0.5 What are the differences between ArrayList, Vector and LinkedList? How is storage expanded? What is a load factor?

  • ArrayList
    • The underlying structure of ArrayList is an array, so elements can be looked up quickly by index. It is a dynamic array: unlike a plain array, it can grow dynamically.
    • ArrayList is not thread-safe. It is recommended for single-threaded use; in multithreaded code choose Vector or CopyOnWriteArrayList instead. The default initial capacity is 10, and each expansion grows the array to 1.5 times its previous capacity
  • Vector
    • Much like ArrayList, but Vector uses the synchronized keyword and is thread-safe; it is more expensive and slower than ArrayList. The default initial capacity is 10, and by default the capacity is doubled on each expansion; the increment can be set via the capacityIncrement property
  • LinkedList
    • The underlying structure of LinkedList is a doubly linked list, so additions and deletions are fast (in early JDKs it was a circular doubly linked list). It can also be used as a stack, a queue, or a double-ended queue

3.0.0.6 How can I understand the capacity-expansion cost of ArrayList? Can Arrays.asList be expanded? How do you serialize an ArrayList?

  • How to understand the expansion cost of ArrayList
    • ArrayList expands with elementData = Arrays.copyOf(elementData, newCapacity); each expansion creates a new array of newCapacity length and copies the old elements into it, so the space complexity of expansion is O(n) and the time complexity is O(n).
    public static <T,U> T[] copyOf(U[] original, int newLength, Class<? extends T[]> newType) {
        T[] copy = ((Object)newType == (Object)Object[].class)
            ? (T[]) new Object[newLength]
            : (T[]) Array.newInstance(newType.getComponentType(), newLength);
        System.arraycopy(original, 0, copy, 0,
                         Math.min(original.length, newLength));
        return copy;
    }
  • Can Arrays.asList be expanded?
    • No. The List returned by asList is fixed-size: it is an inner class of Arrays (Arrays$ArrayList) that does not override add or remove, so those methods throw UnsupportedOperationException
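  • This is easy to verify (the class name AsListDemo is hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class AsListDemo {
    // Arrays.asList returns Arrays$ArrayList, which does not override add();
    // the inherited AbstractList.add() throws UnsupportedOperationException.
    public static boolean addThrows() {
        List<Integer> list = Arrays.asList(1, 2, 3);
        try {
            list.add(4);
        } catch (UnsupportedOperationException e) {
            return true;
        }
        return false;
    }
}
```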
  • How do you sort a List?
    • Use a custom Comparator: list.sort(new Comparator(){…})
    • Or use the natural ordering with Collections.sort(list)
  • How to serialize an ArrayList?
    • ArrayList is implemented based on arrays and is dynamically scalable, so not all arrays holding elements will be used and there is no need to serialize them all.
    • The array elementData that holds elements uses the transient modifier, which states that the array will not be serialized by default.
    transient Object[] elementData; // non-private to simplify nested class access
    • ArrayList therefore implements its own writeObject() and readObject(), which serialize only the part of the array that is actually populated with elements.
    private void readObject(java.io.ObjectInputStream s)
        throws java.io.IOException, ClassNotFoundException {
        elementData = EMPTY_ELEMENTDATA;
        s.defaultReadObject();
        s.readInt(); // ignored
        if (size > 0) {
            ensureCapacityInternal(size);
            Object[] a = elementData;
            for (int i=0; i<size; i++) {
                a[i] = s.readObject();
            }
        }
    }
    
    private void writeObject(java.io.ObjectOutputStream s)
        throws java.io.IOException{
        int expectedModCount = modCount;
        s.defaultWriteObject();
        s.writeInt(size);
        for (int i=0; i<size; i++) {
            s.writeObject(elementData[i]);
        }
        if (modCount != expectedModCount) {
            throw new ConcurrentModificationException();
        }
    }
    • Serialization uses ObjectOutputStream's writeObject() to convert the object into a byte stream and output it. That writeObject() method checks, via reflection, whether the passed object defines its own writeObject(); if it does, it invokes that method to perform the serialization. Deserialization uses ObjectInputStream's readObject() method, which works the same way.
    ArrayList list = new ArrayList();
    ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file));
    oos.writeObject(list);

3.0.0.7 How to understand the read/write mechanism and efficiency of List collections? What is CopyOnWriteArrayList, and how is it different from ArrayList?

  • Read/write mechanism
    • When the number of elements inserted into an ArrayList exceeds the current array's capacity, the array must be expanded; during expansion the underlying System.arraycopy() method is called, performing a large amount of array copying. Deleting elements does not shrink the array (call trimToSize() if you need to reduce the capacity). Lookups iterate over the array, using equals() for non-null elements.
    • When the LinkedList inserts an element, it creates a new Entry object and updates references to the elements before and after the corresponding element. To find elements, you need to traverse the linked list; To delete an element, you simply traverse the list, find the element to delete, and remove it from the list.
    • Vector and ArrayList differ only in their capacity-expansion mechanism when inserting elements. Vector creates an Object array of size 10 by default and sets capacityIncrement to 0. When the array is too small: if capacityIncrement is greater than 0, the Object array grows to the existing size + capacityIncrement; if capacityIncrement <= 0, the Object array is doubled in size.
  • Writing and reading efficiency
    • The addition and deletion of elements by an ArrayList causes the memory allocation of the array to change dynamically. Therefore, it is slow to insert and delete, but fast to retrieve.
    • LinkedList is fast to add and delete elements, but slow to retrieve because it stores data in a LinkedList.
  • What is CopyOnWriteArrayList, and how is it different from ArrayList?
    • CopyOnWriteArrayList is a thread-safe variant of ArrayList in which all mutating operations (add, set, and so on) are implemented by making a fresh copy of the underlying array. Writing is slower than ArrayList because every write copies the whole array.
    • In CopyOnWriteArrayList a write must copy the entire array, so write performance is poor. Reads, however, operate on a different array object from writes, and there are no locks between reads; synchronization between reads and writes consists of simply pointing the reference at the new array object with an assignment, which takes almost no time. Reads are therefore fast and safe, suitable for multithreaded use, and a ConcurrentModificationException can never occur. CopyOnWriteArrayList is thus suited to scenarios where reads far outnumber writes, such as caches.
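  • The snapshot behaviour can be demonstrated directly (the class name CowDemo is hypothetical):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // The iterator works on the array snapshot taken when iteration began,
    // so writes during iteration neither appear in the loop nor throw a
    // ConcurrentModificationException.
    public static int iterateWhileWriting() {
        List<Integer> list = new CopyOnWriteArrayList<>();
        list.add(1);
        list.add(2);
        int seen = 0;
        for (Integer i : list) {
            list.add(i + 10);   // each write copies the underlying array
            seen++;
        }
        return seen;            // only the original two elements are seen
    }
}
```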

3.0.1.0 What is the difference between HashSet and TreeSet? How do you guarantee unique values? How does the bottom layer do that?

  • HashSet
    • The order of elements is not guaranteed. A hash algorithm stores the elements, giving good access and lookup performance. Two elements are considered equal when equals() returns true and their hashCode() return values are equal
  • TreeSet
    • TreeSet is an implementation class of the SortedSet interface, sorting elements by their actual values; it stores set elements in a red-black tree. Two sort modes are supported: natural ordering (the default) and custom ordering. The former compares two elements via compareTo() from the Comparable interface and sorts them in ascending order; the latter compares two elements via compare() from a Comparator to implement a custom order
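  • A short sketch of the two TreeSet sort modes (the class name SetDemo is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class SetDemo {
    // Natural ordering: duplicates dropped, ascending via compareTo().
    public static List<Integer> naturalOrder() {
        TreeSet<Integer> set = new TreeSet<>(Arrays.asList(3, 1, 2, 1));
        return new ArrayList<>(set);
    }

    // Custom ordering: descending via a Comparator.
    public static List<Integer> customOrder() {
        TreeSet<Integer> set = new TreeSet<>(Comparator.reverseOrder());
        set.addAll(Arrays.asList(3, 1, 2));
        return new ArrayList<>(set);
    }
}
```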

3.0.1.5 What is the difference between HashMap and Hashtable? How does a HashMap put and get elements? What data structures does it use?

  • HashMap
    • Extends AbstractMap and implements the Map, Cloneable (cloning) and Serializable (serialization) interfaces. Not thread-safe. Allows one null key and any number of null values. Uses a linked-list hash structure, i.e. a combination of array and linked lists. The initial capacity is 16 and the default load factor is 0.75; on expansion the capacity doubles, i.e. becomes 2 × capacity
  • Hashtable
    • Based on the Map interface and the Dictionary class. Thread-safe, with more overhead than HashMap; if multiple threads access one map object, Hashtable is the safer choice. Nulls are not allowed as keys or values. The underlying layer is a hash table structure. The initial capacity is 11 and the default load factor is 0.75; on expansion the capacity becomes 2 × capacity + 1
    • Hashtable uses the synchronized keyword, which in effect locks the whole object. As a Hashtable grows, performance degrades sharply, because iteration requires holding the lock for a long time.
  • How a HashMap puts and gets elements
    • When putting an element into a HashMap, if the key is null, putForNullKey() is called and the entry is stored in bucket 0; otherwise the key's hash is computed to locate the array index. If that slot holds no element, the entry is stored directly; if it does, the keys are compared, and when an equal key exists its value is overwritten; otherwise the new entry is inserted at the head of the linked list, so the earliest entry ends up at the tail.
    • When getting an element from a HashMap, the hash of the key is computed to find the corresponding array index, and the value for that key is returned. If there is a conflict, the linked list at that position is traversed to find the entry whose key is equal, and its value is returned
  • What data structures are represented
    • HashMap uses a linked-list hash structure, a combination of an array and linked lists. Since Java 8 a red-black tree is also used: a bucket's list is converted to a red-black tree when it holds more than eight elements
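  • Collisions and the equals()-based chain walk can be forced with a key whose hashCode() is constant (the BadKey class is a hypothetical example):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Every BadKey hashes to 42, so all entries land in the same bucket and
    // get() must walk the chain, distinguishing keys with equals().
    static class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 42; }   // always collides
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static String lookup() {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey("a"), "first");
        map.put(new BadKey("b"), "second");   // same bucket, chained
        return map.get(new BadKey("b"));      // found via equals() on the chain
    }
}
```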

3.0.1.6 How to ensure HashMap thread safety? How is it implemented underneath? Is a HashMap ordered? How to achieve order?

  • Use ConcurrentHashMap to ensure thread safety
    • ConcurrentHashMap is a thread-safe HashMap. In JDK1.7 it divides the data into Segments and assigns each segment its own lock; while one thread accesses one segment, the other segments remain accessible to other threads. JDK1.8 makes two improvements:
      • The Segment field is eliminated; data is stored in a transient volatile HashEntry<K,V>[] table, and the head node of each bucket is locked individually, reducing the probability of concurrent conflicts
      • The data structure changes from "array + singly linked list" to "array + singly linked list + red-black tree", which reduces query time complexity to O(logN) and improves performance.
    • In plain terms (the JDK1.7 design): ConcurrentHashMap uses Segment partitioning to split one large Map into N smaller hash tables. The put method uses hash(paramK.hashCode()) to decide which Segment to store into. Looking at a Segment's put operation, its internal synchronization is lock-based, so only part of the Map (one Segment) is locked; this affects only puts of elements that land in the same Segment, instead of locking the whole Map (which is what Hashtable does). In multithreaded environments ConcurrentHashMap therefore outperforms Hashtable, and Hashtable is effectively obsolete.
  • Order is achieved using LinkedHashMap
    • A HashMap is unordered, while LinkedHashMap is an ordered HashMap. By default it iterates in insertion order, and it can also be configured for access order. The underlying principle is that a doubly linked list is maintained internally through Entry objects to preserve the map's iteration order
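  • Both orders are easy to observe (the class name OrderDemo is hypothetical):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OrderDemo {
    // Default LinkedHashMap: iteration follows insertion order.
    public static List<String> insertionOrder() {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("c", 3);
        map.put("a", 1);
        map.put("b", 2);
        return new ArrayList<>(map.keySet());
    }

    // accessOrder = true: a get() moves the entry to the tail,
    // which is the basis of LRU caches built on LinkedHashMap.
    public static List<String> accessOrder() {
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.get("a");   // "a" moves to the tail
        return new ArrayList<>(map.keySet());
    }
}
```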

3.0.1.7 What happens when a HashMap stores two objects with the same hashCode? If two keys have the same hashCode, how do you get the value object?

  • What happens when a HashMap stores two objects with the same hashCode?
    • Incorrect answer: because the hashCodes are the same, the two objects are equal, and the HashMap will throw an exception, or will not store them.
    • Correct answer: two objects can have the same hashCode without being equal. (If that is unclear, see my blog post on hash and hashCode for further understanding.) Because the hashCodes are the same, their bucket positions are the same and a "collision" occurs; since HashMap uses linked lists to store entries, the new Entry (a Map.Entry object holding the key-value pair) is stored in that bucket's linked list.
  • Differences between HashMap1.7 and 1.8
    • In JDK1.6 and JDK1.7, HashMap is implemented with array + linked list, i.e. linked lists handle conflicts: all entries with the same hash value are stored in one list. When a list holds many entries, i.e. many elements share a hash value, sequential search by key is inefficient.
    • In JDK1.8, HashMap is implemented with array + linked list + red-black tree. When a list's length exceeds the threshold (8), the list is converted to a red-black tree, greatly reducing the search time.
  • If two keys have the same Hashcode, how do you get the value object?
    • When the get() method is called, the HashMap uses the hashcode of the key object to find the bucket location and then retrieves the value object. Of course, if there are two value objects stored in the same bucket, the list will be traversed until the value object is found.
    • If several entries share the bucket, how do you determine which value object is the right one? Since HashMap stores key-value pairs in the linked list, once the bucket position is found, key.equals() is called on each node to find the correct node in the list and, ultimately, the value object being looked for.

3.0.1.8 Why doesn't HashMap use the raw hashCode() value as the table index?

  • The value returned by hashCode() is not used directly
    • hashCode() returns an int ranging from -(2^31) to 2^31 - 1, about 4 billion possible values, while a HashMap's capacity ranges from 16 (the default) up to 2^30. No device could provide that much table space, and the raw hashCode() value may fall outside the array's index range, so it cannot be used to locate a storage slot directly.
  • What methods do hashMaps use to effectively resolve hash conflicts
    • 1. Use the chaining (separate chaining) method: entries with the same hash value are linked into one list;
    • 2. Use the perturbation function in hash() to reduce the probability of hash conflicts and make the data distribution more even;
    • 3. Introduce the red-black tree to further reduce traversal time complexity, making lookups faster;
  • How to solve the matching storage location problem
    • HashMap implements its own hash() method, which XORs the high 16 bits of the hash value into the low 16 bits, reducing the probability of collisions and making the data distribution more even;
    • When the array length is a power of 2, using hash() & (length - 1) to compute the array index is very efficient: h & (length - 1) is equivalent to h % length. This also solves the problem of "the hash value not matching the array index range".
  • Why is the length of the array guaranteed to be a power of two?
    • Only when the array length is a power of 2 is h & (length - 1) equivalent to h % length, which is what makes key positioning work; a power-of-2 length also reduces the number of collisions and improves the HashMap's query efficiency.
    • If length is a power of 2, then length - 1 in binary is all ones (1111…), so the AND operation with h is very fast and no slots are wasted. If length is not a power of 2, say length = 15, then length - 1 = 14, which is 1110 in binary; after ANDing with h the last bit is always 0, so the slots 0001, 0011, 0101, 1001, 1011, 0111 and 1101 can never hold elements. That wastes a great deal of space and, worse, leaves far fewer usable positions than the array length, which further increases collisions and slows down queries!
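  • Both points can be checked numerically (the class name IndexDemo is hypothetical):

```java
public class IndexDemo {
    // For a power-of-two length, h & (length - 1) equals h % length for
    // non-negative h, and is computed with a single AND instruction.
    public static boolean maskEqualsMod(int length) {
        for (int h = 0; h < 1000; h++) {
            if ((h & (length - 1)) != (h % length)) {
                return false;
            }
        }
        return true;
    }

    // Simplified form of HashMap.hash(): XOR the high 16 bits into the low
    // 16 so that small tables still see the high bits (the perturbation).
    public static int hash(Object key) {
        int h;
        return key == null ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
}
```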

3.0.1.9 Why are wrapper classes such as String and Integer suitable as HashMap keys? Why not use a different key?

  • Why are wrapper classes like String and Integer suitable as keys?
    • The characteristics of wrapper classes such as String and Integer guarantee the immutability of the hash value and the correctness of comparisons, effectively reducing the probability of hash collisions
      • Both types are final, i.e. immutable, which guarantees that keys cannot be modified and will not yield different hash values at different times
      • They internally override equals(), hashCode() and related methods in accordance with HashMap's internal contract (see the putValue procedure above), so hash errors are unlikely;
  • What should I do if I want my own object to be a key?
    • Override the hashCode() and equals() methods
      • Override hashCode() because the storage position must be computed from it; be careful not to try to improve performance by excluding essential parts of the object from the hash computation: it may be faster, but can cause more hash collisions;
      • Override equals() observing reflexivity, symmetry, transitivity and consistency, and x.equals(null) must return false for any non-null reference x, to guarantee the uniqueness of keys in the hash table;
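  • A minimal well-behaved key class might look like this (the Point class is a hypothetical example):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyDemo {
    // Immutable fields, with equals() and hashCode() overridden consistently
    // over the same fields, so equal keys always hash to the same bucket.
    static final class Point {
        private final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static String lookup() {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "value");
        // A different instance with equal state finds the same entry:
        return map.get(new Point(1, 2));
    }
}
```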
  • Conclusion
    • Appropriate equals() and hashCode() implementations reduce collisions and improve efficiency. Immutability allows the hashCode of a key to be cached, which speeds up retrieving the whole object. Using wrapper classes like String and Integer as keys is therefore a good choice.

3.0.2.0 How does HashMap expand? How do I understand that the size of the HashMap exceeds the capacity defined by the load factor? Are there any problems with resizing a HashMap?

  • Why HashMap needs to expand
    • When the number of entries exceeds initial capacity × load factor (default 0.75), the hash table doubles the bucket array and moves the old entries into the new array. Why use a load factor at all, and why expand? A high fill ratio makes good use of space, but without expansion the linked lists grow longer and longer, making lookups very slow. After expansion, each old bucket's list is split into two sub-lists according to the new bit of the hash, each hung at its position in the new array, shortening each list and improving lookup efficiency.
  • How do you understand that the size of a HashMap exceeds the capacity defined by the Load factor?
    • The default load factor is 0.75, which means that when a map's buckets are 75% full, then, as with other collection classes such as ArrayList, a bucket array twice the size of the original is created to resize the map, and the existing entries are put into the new bucket array. This process is called rehashing, because the hash is applied again to find each entry's new bucket position.
  • Are there any problems with resizing a HashMap?
    • Under multiple threads a race condition can occur. There is indeed a race when resizing a HashMap: if two threads both find that the HashMap needs resizing, they will both try to resize it at the same time. During resizing (in JDK1.7 and earlier) the order of elements stored in a list is reversed, because when moving entries to the new bucket positions HashMap inserts them at the head of the list rather than the tail, to avoid traversing to the tail. If the race occurs, this head insertion can link nodes into a cycle and make a later lookup loop forever. (JDK1.8 preserves list order during resizing, avoiding the infinite loop, but HashMap is still not thread-safe.)
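  • The threshold arithmetic behind expansion is simple (the class name ResizeDemo is hypothetical):

```java
public class ResizeDemo {
    // threshold = capacity * loadFactor; once size exceeds it, the table
    // doubles. With the defaults (16, 0.75f) the 13th put triggers a resize.
    public static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    // HashMap doubles the capacity on resize: newCap = oldCap << 1.
    public static int grow(int capacity) {
        return capacity << 1;
    }
}
```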

Other information

01. About blog summary links

  • 1. Tech blog round-up
  • 2. Open source project summary
  • 3. Life Blog Summary
  • 4. Himalayan audio summary
  • 5. Other summaries

02. About my blog

  • My personal website: www.yczbj.org, www.ycbjie.cn
  • Github: github.com/yangchong21…
  • Zhihu: www.zhihu.com/people/yang…
  • Jianshu: www.jianshu.com/u/b7b2c6ed9…
  • CSDN: my.csdn.net/m0_37700275
  • Ximalaya: www.ximalaya.com/zhubo/71989…
  • OSChina: my.oschina.net/zbj1618/blo…
  • Jcodecraeer: www.jcodecraeer.com/member/cont.
  • Email address: [email protected]
  • Aliyun blog: yq.aliyun.com/users/artic… 239.headeruserinfo.3.dT4bcV
  • Segmentfault: segmentfault.com/u/xiangjian…
  • Juejin: juejin.cn/user/197877…