In the previous part we shared interview questions from several big Internet companies. This article gives a detailed analysis of the answers to those questions, together with review references and collated interview materials, xiaomin's personal treasure 🐶.

First, the answers to the interview questions are presented. The explanation is organized into the following sections: language fundamentals, middleware, operating systems, computer networks, handwritten algorithms, open questions and project experience. The questions and the knowledge points they involve are grouped by topic so that they are easier to follow, which means the original order of the questions is not always preserved.

The second part is the Java review reference and the collated interview materials. Since the content is quite large, only the main points and the table of contents are introduced here; the complete document can be found in the PDF provided by the author.

Analysis of interview Questions

How to use Jenkins?

Jenkins is a DevOps tool used to automate integration: it continuously and automatically builds and tests software projects and monitors scheduled tasks.

Continuous integration (CI) has become a common practice for many software development teams today to focus on ensuring code quality throughout the software development lifecycle. It is a practice designed to ease and stabilize the process of building software.

What does MVCC do for reads?

Most of MySQL's transactional storage engines do not rely on simple row-level locking alone. To improve concurrency, they generally implement multi-version concurrency control (MVCC) as well. MVCC implementations vary between storage engines and are typically based on either optimistic or pessimistic concurrency control.

MVCC simply means that committing a change to the database does not directly overwrite the previous data; instead it produces a new version that coexists with the old one, which allows reads to proceed without any locking. When reading a particular piece of data, the transaction can then choose which version to read based on its isolation level.

You can think of MVCC as a variant of row-level locking, but it avoids locking in many cases and is therefore cheaper. Most MVCC implementations provide non-blocking reads, and write operations lock only the necessary rows. MVCC works only at the REPEATABLE READ and READ COMMITTED isolation levels. READ UNCOMMITTED cannot use it because it always reads the latest version of a row rather than the version matching the transaction's view. SERIALIZABLE cannot use it because it locks every row it reads.

MVCC is implemented by saving a snapshot of the data at a point in time. That is, each transaction sees the same data regardless of how long it takes to execute. Depending on when the transaction started, each transaction may see different data for the same table at the same time.

How does MySQL implement ORDER BY?

MySQL has two ways to implement ORDER BY:

  • Generate ordered results by index scan
  • Using file sorting (filesort)
Use ordered index to get ordered data

Fetch the sort fields of the rows that match the filter conditions, together with row pointers that can directly locate the row data; perform the actual sorting in the sort buffer; then use the row pointers of the sorted data to fetch the other fields requested by the client and return them to the client.

When such a query is analyzed with EXPLAIN, the index path shows Using index, while file sorting shows Using filesort.

In SQL statements, indexes can be used in both the WHERE and ORDER BY clauses: the WHERE clause uses indexes to avoid full table scans, and the ORDER BY clause uses indexes to avoid filesort ("avoid" may be too strong, because in some scenarios filesort is not necessarily slower than a full table scan), thereby improving query efficiency.

While indexes improve query efficiency, a single SQL query can generally use only one index (one B+ tree), so when the columns in the WHERE clause and the ORDER BY clause do not match, only one of the two can be served by the index.

In other words, there are three possible scenarios for using indexes in SQL that have both WHERE and ORDER BY:

  • It is used only for the WHERE clause to filter out data that meets the condition
  • Used only when the ORDER BY clause returns the sorted result
  • Used for both WHERE and ORDER BY to filter out data that meets the criteria and return the sorted results
File sorting (filesort)

Despite the name, a filesort is simply a sort; whether it touches disk depends on the amount of data being sorted. There are two cases:

  • Quicksort in memory when the amount of data is small
  • When the amount of data is large, quicksort the data blocks in memory and merge the blocks on disk

When the data volume is large, disk I/O is involved, so efficiency is lower. Based on the number of times it has to go back to the table, filesort can be divided into two modes:

  • Two-pass: a two-pass transmission sort
  • Single-pass: single-pass transmission sort

The downside of the single-pass sort is that it puts all the requested columns into the sort buffer, which reduces the number of tuples the buffer can hold at one time and increases the likelihood of a merge sort. The larger the column data, the more merge passes are required, which adds extra I/O overhead. Therefore, when the column data is very large, a single-pass sort may be less efficient than a two-pass sort.

In the two-pass mode, the first pass back to the table fetches, for the rows that satisfy the WHERE clause, the rowid and the column values referenced by ORDER BY. After the ORDER BY columns have been sorted, a second pass uses the rowid to go back to the table and retrieve the fields required by the SELECT clause.

Adding the ORDER BY field to a MySQL index

If the final result set is filtered or sorted on the ORDER BY field, adding that field to the index, in the right position within the index, can provide a significant performance improvement.

Eureka source code

Eureka is a service registry and discovery component recommended by Spring Cloud, including Eureka Server and Eureka Client:

  1. The Eureka Server provides service registration services. After each node starts, it registers with the Eureka Server. In this way, the service registry of the Server stores information about all available service nodes.

  2. Eureka Client is a Java client designed to simplify interaction with the Eureka Server. The client also has a built-in load balancer that uses a round-robin load balancing algorithm.

  3. After an application is started, it sends heartbeat messages to the Eureka Server (default interval: 30s). If the Eureka Server does not receive a heartbeat from a node within multiple heartbeat cycles, it removes the node from the service registry (default timeout: 90s).

  4. Data is synchronized between Eureka servers through replication.

  5. The Eureka Client has a caching mechanism. If the Eureka Server is down, the Client can still use the cached information to consume other service apis.

  6. The Eureka region can be considered as a geographical partition without specific size limitation.

  7. Eureka zone can be understood as the specific equipment room information in a region.

  8. The Jersey framework is used to implement its Restful HTTP interface, synchronization and service registration between peers are implemented through HTTP protocol, timed tasks (sending heartbeat, periodically clearing expired services, node synchronization, etc.) are implemented through JDK Timer, and memory cache is implemented through Google guava.

  9. When the service registry Server detects the downtime of the service provider, it sets the service to DOWN state in the service center and publishes the service to other subscribers, who update the local cache information.

  10. If the Eureka Server node loses too many clients in a short period of time, the node enters the self-protection mode and does not log out any services.
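To make the server/client split concrete, here is a minimal sketch using the standard Spring Cloud Netflix annotations (it assumes spring-cloud-starter-netflix-eureka-server is on the classpath; the class name and setup are illustrative, not taken from the interview answer itself):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Registry side: exposes the registration, renewal (heartbeat) and fetch endpoints described above.
@SpringBootApplication
@EnableEurekaServer
public class RegistryApplication {
    public static void main(String[] args) {
        SpringApplication.run(RegistryApplication.class, args);
    }
}

On the client side, a service annotated with @EnableEurekaClient (or @EnableDiscoveryClient) registers itself on startup with the server address configured under eureka.client.service-url.defaultZone and then sends heartbeats at the default 30s interval.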

Hystrix source code

In a distributed system environment, similar dependencies between services are very common, and a business call often depends on multiple underlying services. As shown in the figure below, for synchronous invocation, when the inventory service is unavailable, the commodity service request thread will be blocked. When the inventory service is invoked with a large number of requests, the entire commodity service resource may eventually be exhausted and the external service cannot be provided. And this unavailability possibility is passed up the request-call chain, a phenomenon known as the avalanche effect.

Hystrix means porcupine; its back is covered with quills, which give it the ability to protect itself.

Hystrix design objectives
  • Guard against and control delays and failures from dependencies that are usually accessed over the network
  • Stop the chain reaction of failures
  • Fail quickly and recover quickly
  • Fall back and degrade gracefully
  • Provides near-real-time monitoring and alarms
Hystrix design principles
  • Prevents any single dependency from depleting resources (threads)
  • Overload cuts off immediately and fails quickly to prevent queuing
  • Provide fallbacks wherever possible to protect users from failures
  • Use isolation techniques (such as bulkheads, swim lanes and circuit breaker patterns) to limit the impact of any one dependency
  • Provide near-real-time metrics, monitoring and alerting so that faults are discovered in time
  • Support dynamically modifying configuration properties so that faults can be rectified in a timely manner
  • Protect against failures across the entire dependency client execution, not just network traffic
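As a concrete illustration of fallback and isolation, here is a hedged sketch of a command wrapping the inventory call from the avalanche example above, using Hystrix's HystrixCommand API (the group key, return type and messages are illustrative):

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Wraps a call to the inventory service; if run() throws, times out, or the
// circuit is open, Hystrix returns getFallback() instead of cascading the failure.
public class InventoryCommand extends HystrixCommand<String> {

    private final String productId;

    public InventoryCommand(String productId) {
        super(HystrixCommandGroupKey.Factory.asKey("InventoryGroup"));
        this.productId = productId;
    }

    @Override
    protected String run() throws Exception {
        // the remote call to the inventory service would go here
        return "stock for " + productId;
    }

    @Override
    protected String getFallback() {
        return "inventory temporarily unavailable";
    }
}

Calling new InventoryCommand("sku-1").execute() runs run() on a thread pool isolated per group, so a slow inventory service cannot exhaust the commodity service's own resources.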

Distributed lock implementation?

Distributed locking is very common in microservices architecture and can be implemented in several ways:

Do distributed locking based on database

Distributed locking based on a unique primary key in a table

If multiple requests insert the same primary key at the same time, the database guarantees that only one insert succeeds, so the thread whose insert succeeds is considered to hold the lock for the method. When the method finishes executing, the lock is released by deleting that row (a minimal sketch follows the list below).

  • A single database is a single point of failure and a strong dependency; you can use multiple databases with primary/secondary switching
  • Use a timer to delete timed-out records to avoid deadlock
  • Spin on the insert (CAS-style retry) to implement blocking
  • Reentrancy can be implemented by checking whether the corresponding record already exists
  • Fair locking can be achieved with a table of waiting threads
  • Under heavy concurrency, primary-key conflicts can easily lock the table, so try to generate the primary key in the application to prevent duplicates
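A minimal JDBC sketch of the primary-key approach (the method_lock table and its unique lock_name column are hypothetical; any unique key works the same way):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbLock {

    // Try to acquire the lock: only one competing INSERT can succeed
    // because lock_name is a unique (primary) key.
    public boolean tryLock(Connection conn, String lockName) {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO method_lock (lock_name) VALUES (?)")) {
            ps.setString(1, lockName);
            return ps.executeUpdate() == 1;
        } catch (SQLException duplicateKey) {
            return false; // another thread or instance already holds the lock
        }
    }

    // Release the lock by deleting the row.
    public void unlock(Connection conn, String lockName) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM method_lock WHERE lock_name = ?")) {
            ps.setString(1, lockName);
            ps.executeUpdate();
        }
    }
}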

Do distributed locking based on table field version number

Similar to MySQL's MVCC mechanism: a row can be modified only when its version number matches the one that was read, and the version number is incremented by 1 after each modification.

Do distributed locks based on database exclusive locks

Using a transaction plus a SELECT ... FOR UPDATE statement, the database places an exclusive lock on the selected rows within that transaction. In the InnoDB engine, row-level locks are used only when the query goes through an index; otherwise table-level locks are used.

Do distributed locking based on Redis

Redis-based SETNX() and EXPIRE() methods for distributed locking

The setnx method is SET if Not Exists and takes two main arguments, setnx(key, value). This method is atomic. If the key does not exist, set the current key successfully and return 1. If the current key already exists, setting the current key fails and 0 is returned.

The expire() method sets the expiration time of the key. The setnx command cannot set the expiration time of the key. Only the expire() method can set the expiration time of the key.

If setnx succeeds but the process crashes before expire is executed, the key never expires and a deadlock may occur.
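Because setnx and expire are two separate commands, the usual remedy is a single SET with the NX and EX options, which is atomic. A minimal sketch with the Jedis client (key name, value and timeout are illustrative; the exact method signature may vary slightly between Jedis versions):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLockDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // NX: set only if the key does not exist; EX: expire after 30 seconds.
            // Doing both in one command closes the deadlock window described above.
            String ok = jedis.set("lock:order", "request-123",
                    SetParams.setParams().nx().ex(30));
            if ("OK".equals(ok)) {
                try {
                    // business logic while holding the lock
                } finally {
                    jedis.del("lock:order");
                }
            }
        }
    }
}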

Redis-based SETNX(), GET(), GETSET() methods to do distributed locking

  • The getset method takes two arguments, getset(key, newValue). It is atomic: it sets newValue for the key and returns the key's old value. For example, getset(key, "value1") returns null the first time and sets the key to value1; a subsequent getset(key, "value2") returns value1 and sets the key to value2. Usage steps:

    • setnx(lockKey, current time + expiration timeout): if 1 is returned, the lock is acquired successfully; if 0 is returned and the lock was not acquired, go to step 2.
    • get(lockKey) obtains the value oldExpireTime and compares it with the current system time. If it is smaller than the current system time, the lock is considered to have timed out and can be re-acquired by other requests; go to step 3.
    • Compute newExpireTime = current time + expiration timeout, then call getset(lockKey, newExpireTime), which returns currentExpireTime. If currentExpireTime equals oldExpireTime, the getset succeeded and the lock is acquired. If not, the lock has already been acquired by another request, and the current request can return failure directly or keep retrying.

After obtaining the lock, the current thread can start its business processing. When processing is complete, it compares the time it took with the timeout set for the lock: if the processing time is less than the lock timeout, it executes delete directly to release the lock; if it is greater, the lock has already expired and nothing needs to be done.

Do distributed lock based on REDLOCK

Redlock is a cluster mode Redis distributed lock developed by the author of Redis, which is based on N completely independent Redis nodes.

The client gets the current time in milliseconds. The client then attempts to acquire the lock on each of the N nodes (each node is acquired in the same way as a single-node cache lock), using the same key and value on all N nodes. The client must set an access timeout for each node that is much shorter than the lock timeout; for example, if the lock auto-releases after 10s, the per-node timeout might be 5-50ms. In this way, when a Redis node is down, access to that node times out as soon as possible, minimizing the impact on normal lock usage.

The client calculates how long lock acquisition took by subtracting the time recorded in step 1 from the current time. The distributed lock is considered acquired only when the client has acquired the lock on a majority of the nodes (for example, at least 3 out of 5) and the total time taken is less than the lock timeout.

The effective validity time of the acquired lock is the lock timeout minus the acquisition time calculated in the previous step. If the client fails to acquire the lock, it deletes the lock on every node in turn.

Do distributed lock based on REDISSON

Redisson is a Redis distributed lock component (the Java client recommended on the Redis website). In Redisson, if a thread succeeds in acquiring the lock, Redisson starts a watchdog timer task in the background that periodically checks the lock and calls renewExpirationAsync(threadId) to renew it. The interval between renewals is internalLockLeaseTime/3, that is, every 10 seconds by default, and the default lock lease time is 30 seconds.
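A hedged usage sketch with Redisson (the lock name and Redis address are illustrative); lock() uses the 30-second default lease, and the watchdog renewal described above happens in the background while the lock is held:

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockDemo {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("order:lock");
        lock.lock(); // the watchdog keeps renewing the lease while this thread holds the lock
        try {
            // business logic
        } finally {
            lock.unlock();
        }
        redisson.shutdown();
    }
}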

Distributed locks based on Consul

Consul-based distributed locks are implemented using the acquire and release operations of the Key/Value store API, which are similar to check-and-set operations. An acquire operation returns true only if the Key has no owner; it sets the Value, and the session that performed the operation then holds the lock on that Key; otherwise it returns false. A release operation releases the lock on the Key using the specified session: if the specified session is invalid it returns false, otherwise it sets the Value and returns true.

ZooKeeper principles and source code

Zookeeper has the following roles:

  • The leader is responsible for initiating and deciding votes and updating the system status
  • Learners, including followers and observers, receive requests from clients, return results to clients, and (in the case of followers) participate in voting during leader election
  • The Observer can accept client connections and forward write requests to the leader. However, the Observer does not participate in the voting process and only synchronizes the state of the leader. The purpose of the Observer is to extend the system and improve the read speed
  • The client is the initiator of the request

At the heart of Zookeeper is atomic broadcasting, a mechanism that ensures synchronization between servers. The protocol that implements this mechanism is called the Zab protocol. The Zab protocol has two modes: recovery mode and broadcast mode.

  • Zab goes into recovery mode when the service is started or after the leader crashes, and the recovery mode ends when the leader is elected and most servers are in sync with the leader state. State synchronization ensures that the leader and server have the same system state

  • Once the leader has synchronized its state with most of the followers, it can start broadcasting messages. When a server joins the ZooKeeper service, it starts in recovery mode, discovers the leader, and synchronizes its state with the leader. At the end of the synchronization, it also participates in message broadcasts. The ZooKeeper service remains in the broadcast state until the leader crashes or loses the support of most followers.

To ensure the sequential consistency of transactions, ZooKeeper uses an incrementing transaction ID (zxid) to identify transactions. Every proposal carries a zxid when it is made. In the implementation, the zxid is a 64-bit number: the high 32 bits are the epoch, which indicates whether the leader has changed, and the low 32 bits are an incrementing counter.
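A tiny sketch of how the two halves of a zxid can be separated with bit operations (the example value is made up):

public class ZxidDemo {
    public static void main(String[] args) {
        long zxid = 0x0000000500000003L;    // hypothetical zxid
        long epoch = zxid >>> 32;           // high 32 bits: leader epoch (5 here)
        long counter = zxid & 0xFFFFFFFFL;  // low 32 bits: incrementing counter (3 here)
        System.out.println("epoch=" + epoch + ", counter=" + counter);
    }
}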

Senior development engineer interviews generally touch on source code. Why do many students feel that principles and source code are "building rockets for the interview"? In fact, this is closely related to their own experience. To be fair, part of interview questioning really is rocket building. Many students work on relatively simple projects; taking Java as an example, the business code may be mostly CRUD, and over time they fall into a routine that never requires reading source code, or even into the illusion that reading source code and understanding principles is of little help to them. Reading good source code is actually a deeper form of learning.

Serialization methods

First understand serialization and deserialization.

  • Serialization: Converts an object into a sequence of bytes for easy storage.
  • Deserialization: Restores an object from a serialized sequence of bytes.

Advantages: You can implement object “persistence”, which means that the life cycle of the object is not dependent on the program.

Serialization in Java includes Java native stream serialization, Json serialization, FastJson serialization, Protobuff serialization. Protobuff serialization is supported across languages.

Java native serialization

Java native serialization uses the JDK's object streams (conversion via InputStream and OutputStream). Note that the JavaBean entity class must implement the Serializable interface, otherwise it cannot be serialized.
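A minimal round-trip sketch with the JDK object streams (the User class and its field are illustrative):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class NativeSerializationDemo {

    // The entity must implement Serializable, otherwise writeObject throws NotSerializableException.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize the object to a byte array
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new User("xiaomin"));
        }
        // Deserialize it back
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            User copy = (User) in.readObject();
            System.out.println(copy.name);
        }
    }
}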

Json serialization

JSON serialization typically uses the Jackson package: the ObjectMapper class converts objects to JSON strings or byte arrays and parses JSON strings back into objects. Most companies today use JSON as the data format returned by the server; for example, a typical call to a server interface looks like xxx.json?a=xxx&b=xxx.
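A minimal Jackson sketch (the User class is illustrative):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonDemo {

    public static class User {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        User u = new User();
        u.name = "xiaomin";
        u.age = 25;

        String json = mapper.writeValueAsString(u);      // object -> {"name":"xiaomin","age":25}
        User parsed = mapper.readValue(json, User.class); // JSON string -> object
        System.out.println(parsed.name);
    }
}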

FastJson serialization

FastJson is a high-performance JSON parser and generator for Java developed by Alibaba. Its features: fast (FastJson's own benchmarks claim performance surpassing other Java JSON parsers); powerful (full support for JavaBeans, collections, maps, dates and enums, plus generics and introspection); and dependency-free (it runs directly on Java SE 5.0 and above and supports Android). It is used by adding the FastJson third-party JAR package.
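For comparison, the same round trip with FastJson's static JSON facade (the entity is again illustrative):

import com.alibaba.fastjson.JSON;

public class FastJsonDemo {

    public static class User {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        User u = new User();
        u.setName("xiaomin");

        String json = JSON.toJSONString(u);               // object -> {"name":"xiaomin"}
        User parsed = JSON.parseObject(json, User.class); // JSON string -> object
        System.out.println(parsed.getName());
    }
}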

ProtoBuff serialization

Protocol Buffers (protobuf) is a lightweight and efficient structured data storage format that can be used to serialize structured data, suitable for data storage or as an RPC data exchange format. It is a language-independent, platform-independent, extensible format for serialized structured data that can be used in communication protocols, data storage and other fields.

Advantages: cross-language; the serialized data occupies less space than JSON, because JSON carries its textual format while protobuf encodes the same data much more compactly.

Disadvantages: the data is stored in binary and cannot be read or edited directly; unless you have the .proto definition, you cannot interpret anything inside a protobuf payload.

Compared with Thrift: both have similar syntax and support backward and forward compatible versioning. Thrift focuses on building scalable cross-language services and provides a complete RPC solution, so services can be built directly without much extra work, whereas protobuf is primarily a serialization mechanism; for pure data serialization, protobuf is the stronger choice.

Protobuf-serialized objects can be compressed to a large extent, which greatly reduces the data transfer size and improves system performance; for caching large amounts of data, it also increases how much data fits in the cache. With the original protobuf you had to write a .proto file and run the compiler to turn it into a Java file, which was quite tedious. Baidu's JProtobuf framework wraps Google's original protobuf and simplifies it by exposing only serialization and deserialization methods, so there is no need to write a .proto file or use the compiler to generate a .java file; JProtobuf does all of that for us.

Redis persistence

Redis is an in-memory database. Data is stored in memory, but we all know that data in memory changes quickly and is easy to be lost. Fortunately, Redis also provides persistence mechanisms in RDB(Redis DataBase) and AOF(Append Only File).

RDB mechanism

The RDB actually saves data on disk as a snapshot. A snapshot can be understood as taking a picture of the current data and saving it.

RDB persistence refers to writing a snapshot of the data set in memory to disk at a specified interval. In this way, the data in memory is written as a snapshot to a binary file named dump.rdb by default.

The RDB mechanism generates and saves a snapshot of all data at a certain point in time, so something must trigger this process. RDB has three trigger mechanisms: save, bgsave, and automatic triggering. Let's look at each of them:

  • Save: This command blocks the current Redis server. During the execution of the save command, Redis cannot process other commands until the RDB process is complete.
  • Bgsave triggering: when this command is executed, Redis performs the snapshot operation asynchronously in the background, and the server can still respond to client requests while the snapshot is taken.
  • Automatic triggering: Automatic triggering is done by our configuration file.

The RDB has the following advantages:

  • The compact, full-backup RDB file is ideal for backup and disaster recovery.
  • When the RDB file is generated, the main Redis process forks a child process to handle all the saving. The main process does not need to perform any disk I/O.
  • RDB is faster than AOF in restoring large data sets.

On the downside, an RDB snapshot is a full backup that stores the binary serialized form of in-memory data and is very compact in storage. When snapshot persistence is performed, a child process is started for snapshot persistence. The child process owns the memory data of the parent process, and the child process does not respond to the memory changes made by the parent process. Therefore, data modified during snapshot persistence is not saved and may be lost.

AOF

Full backups are always time-consuming, so Redis also provides a more efficient mechanism, AOF. The idea is simple: Redis appends every write command it receives to a file using the write function; in plain terms, it is logging.

The AOF approach poses another problem: the persistence file keeps growing. To compact the AOF file, Redis provides the bgrewriteaof command, which forks a new process that writes the in-memory data out as commands to a temporary file; instead of reading the old AOF file, it rewrites the entire in-memory database as commands into a new AOF file, similar to taking a snapshot.

AOF also has three triggering mechanisms:

  • always: synchronous persistence; every change is immediately written to disk. Poor performance but good data integrity
  • everysec: an asynchronous fsync is performed once per second; if the system goes down, up to one second of data can be lost
  • no: never fsync; the operating system decides when to flush

AOF protects data better against loss: normally AOF performs an fsync on a background thread once per second, so at most one second of data is lost. AOF is append-only, so writes have no disk-seek overhead, write performance is very high, and the file is not prone to corruption; even if the AOF log file grows too large, the background rewrite does not affect client reads and writes. AOF files record commands in a very readable way, which is ideal for emergency recovery after a catastrophic mistaken deletion: for example, if someone flushes all data with the flushall command, you can immediately copy the AOF file, delete the trailing flushall command, and put the file back, and all data can then be restored automatically from it.

Of course, for the same data an AOF log file is usually larger than an RDB snapshot file. With AOF enabled, write QPS is lower than with RDB, because AOF is typically configured to fsync the log file once per second. AOF has also had bugs in the past in which data recovered from the AOF log was not exactly identical to the original.

Redis master/slave replication

As with MySQL, Redis supports master/slave replication. Redis reads and writes are very fast, but read pressure can still be very heavy. To share the read pressure, Redis supports master-slave replication, which can use a one-master-many-slaves layout or a cascading structure. Depending on whether the data copied is complete or not, Redis master-slave replication is divided into full synchronization and incremental synchronization. The cascading structure is shown below.

Full synchronization

Redis full replication occurs during the initialization of the Slave. In this case, the Slave needs to copy all the data on the Master. The specific steps are as follows:

  • The secondary server connects to the primary server and sends the SYNC command.
  • After receiving the SYNC command, the primary server runs the BGSAVE command to generate an RDB file and uses a buffer to record all subsequent write commands.
  • After BGSAVE finishes, the primary server sends the snapshot file to all secondary servers and keeps recording the write commands executed during the transfer.
  • After the secondary server receives the snapshot file, it discards all old data and loads the received snapshot.
  • After the snapshot has been sent, the primary server sends the write commands in the buffer to the secondary server.
  • After the secondary server finishes loading the snapshot, it begins to accept command requests and executes the write commands from the primary server's buffer.
Incremental synchronization

Redis incremental replication refers to the process of synchronizing the write operations that occur on the primary server to the secondary server when the Slave is initialized and starts working properly.

Incremental replication is a process in which the primary server sends the same write command to the secondary server every time it executes a write command, and the secondary server receives and executes the received write command.

Redis Primary/secondary synchronization policy

When the master and slave are connected, full synchronization is performed. After full synchronization, incremental synchronization is performed. Of course, the slave can initiate full synchronization at any time if needed. The Redis policy is that incremental synchronization is first attempted anyway, and if that fails, full synchronization is required from the slave.

The configuration of Redis master-slave replication is very simple: it makes the slave server a full copy of the master server. Some important things to be clear about in Redis master/slave replication:

  • Redis uses asynchronous replication. However, starting with Redis 2.8, slave servers periodically acknowledge the amount of data they have processed from the replication stream.
  • A master server can have multiple slave servers.
  • Slave servers can also accept connections from other slave servers. In addition to multiple slave servers connecting to a master server, multiple slave servers can also connect to a slave server to form a graph structure.
  • The Redis primary-secondary replication does not block the primary server. This means that while several secondary servers are initially synchronizing, the primary server can still process requests.
  • Master-slave replication does not block the slave server either. While the slave is performing the initial synchronization, it can serve queries using the old version of the data, depending on how you configured it in redis.conf; alternatively, you can configure the slave to return an error to clients when the replication stream is down. However, when the initial synchronization completes, the old dataset must be deleted and the new one loaded, and during that short period the slave blocks incoming connection requests.
  • Master-slave replication can be used for scalability, using multiple slave servers to handle read-only requests (for example, heavy sorting operations can be placed on the slave server), or simply for data redundancy.
  • Master-slave replication can remove the need to write data to disk on the primary server: comment out all "save" directives in the master's redis.conf so it does not persist, and connect it to a slave server that is configured to save. With this configuration, however, you must make sure the master does not restart automatically.

In a master-slave replication structure, slave servers are generally not written to, but that is not a hard rule; it simply makes it easier to keep the data on the master and each slave consistent. If data on a slave server could be modified, making sure all slaves stayed consistent would require more complex structure and processing logic. That said, you can also configure a slave server to support write operations.

The master and slave servers communicate with each other regularly, but if the master is protected by a password, a slave that does not have that password cannot do anything with the master. So if your master has a password, configure the password on the slave as well.

Why use Spring Cloud

Spring Cloud uses Spring Boot to encapsulate a series of mature and stable service frameworks developed by various companies, eliminates cumbersome configuration, and gives developers a distributed system toolkit that is easy to use, convenient to deploy and easy to maintain. Spring Boot is a fast-configuration scaffold for Spring; a single microservice can be developed quickly on top of it. Spring Boot focuses on quickly and conveniently assembling an individual service, while Spring Cloud focuses on the global service governance framework.

Spring Boot can be independently used for development projects without Spring Cloud, but Spring Cloud is dependent on Spring Boot.

Microservice registry principles

There are three main roles involved in the registry:

  • Service provider
  • Service consumer
  • The registry

The relationship between them is roughly as follows:

  • When each microservice starts, it registers its network address and other information with the registry, and the registry stores these data.
  • The service consumer queries the address of the service provider from the registry and invokes the interface of the service provider from that address.
  • Individual microservices communicate with the registry using certain mechanisms, such as heartbeat. If the registry cannot communicate with a microservice for a long time, the instance is deregistered.
  • When a microservice's network address changes (for example, instances are added or an IP changes), it re-registers with the registry. This saves the service consumer from having to manually update the provider's network address.

How does the registration service determine whether the service is online or offline

  • When the service starts and its port is successfully up, it invokes the service registration API provided by the registry to submit its network address and other data to the service registration and discovery component.
  • The registry maintains communication with the microservice instance in the registry through certain mechanisms (such as heartbeat). If the registry cannot communicate with the microservice instance for a long time, the service instance information will be removed from the registry.
  • When the microservice instance is closed, it will actively invoke the offline interface provided by the registry to log its own microservice instance data out of the registry.

Operating systems and computer networks

What happens when you access a URL

What happens when you type www.google.com into your browser’s address bar and hit enter?

DNS domain name resolution is performed based on the domain name
  • The Chrome browser first searches its own DNS cache (the cache time is short and it holds only about 1,000 entries) to check whether there is an unexpired entry for www.google.com; if there is, resolution ends here.
  • If the browser's own cache has no matching entry, Chrome searches the operating system's DNS cache; if an unexpired entry is found, searching and resolution stop.
  • If nothing is found in the system's DNS cache, the hosts file is read to see whether it maps the domain name to an IP address. If it does, resolution succeeds.
  • If no entry is found in the hosts file either, the browser makes a DNS system call and sends a resolution request to the locally configured preferred DNS server. The carrier's DNS server first searches its own cache; if it finds an unexpired entry, resolution succeeds. If not, the carrier's DNS server initiates an iterative DNS resolution on behalf of our browser.
The IP address is resolved and a TCP connection is established

After obtaining the IP address for the domain name, the user agent (generally the browser) uses a random port (1024 < port < 65535) to initiate a TCP connection request to port 80 of the server's web program (commonly httpd, nginx, etc.). The connection request (the original HTTP request wrapped into packets by the four layers of the TCP/IP stack) travels through various routing devices (apart from the LAN) to reach the server, enters the network card, and then the kernel's TCP/IP stack (which identifies the connection request and strips the packet layer by layer). It may also pass through the Netfilter firewall (a kernel module) before finally reaching the web program (nginx in this example), at which point the TCP/IP connection is established.

Sending an HTTP request

After the TCP 3-way handshake, the browser makes an HTTP request (see packet 1) using the HTTP method GET. The request URL is/and the protocol is HTTP/1.0.

The server processes the request and returns a response

Once the server-side web program receives the HTTP request, it processes the request and returns an HTML file to the browser, after which the TCP connection is closed.

The browser parses HTML

When the browser gets the index.html file, it begins to parse the HTML code in it, and when it comes to static resources such as JS/CSS /image, it requests the server to download it (it will use multiple threads, each browser has a different number of threads). This is where the keep-alive feature is used. A single HTTP connection can request multiple resources, and the resources are downloaded in the same order as in the code. However, since each resource has a different size and the browser requests multiple resources, the order shown in the figure below is not necessarily the order in the code.

When the browser requests a static resource that it has already cached (and the cache has not expired), it asks the server with a conditional HTTP request (has the resource been modified since the last update time?). If the server returns a 304 status code (telling the browser nothing has changed), the browser can read the resource directly from its local cache file.

Browser layout Rendering

Finally, the browser uses its own internal working mechanism to render the static resources and HTML code requested, and then renders to the user.

Is it possible for two operating system processes to write to one location in the shared memory?

Shared memory allows two or more processes to share a storage area, as if malloc() returned pointers to the same physical memory region to different processes. When one process changes the contents of an address, the other processes see the change. This is the fastest type of IPC because the data does not need to be copied between client and server: it is written directly to memory without multiple data copies.

However, problems can arise when multiple programs are simultaneously reading and writing data to shared memory. Shared memory does not provide synchronization mechanism, so when we use shared memory for interprocess communication, we often have to use other means to synchronize work between processes.
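Java has no direct System V shared-memory API, but a memory-mapped file gives two JVM processes a region backed by the same pages, with the same caveat about needing external synchronization. A minimal sketch (the file path and region size are arbitrary):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMemoryWriter {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/shm-demo", "rw");
             FileChannel channel = file.getChannel()) {
            // Map 1 KB of the file into memory; another process mapping the same
            // file sees writes made here without any explicit copying.
            MappedByteBuffer shared = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            shared.putInt(0, 42);
        }
    }
}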

How is the HTTP request API timeout implemented

Configure the gateway and RPC load balancing: for example, configure timeout durations for interfaces on the API gateway, and within individual microservices configure timeouts for calls between services, such as the Ribbon timeout.

What is the HTTP protocol

HTTP is short for HyperText Transfer Protocol. It is used to transfer hypertext from World Wide Web servers to local browsers. It is an application layer protocol built on TCP and does not concern itself with the details of the underlying data transmission. HTTP follows a request/response pattern and is stateless. Because HTTP requests follow a unified format, the server can correctly parse requests sent by different clients; likewise, because the server follows a unified response format, the client can correctly parse responses from different sites.

An HTTP request consists of a request line, request header, blank line, and request body

Request line: Request mode + URL + protocol version

  • Common request methods include GET, POST, PUT, DELETE, and HEAD
  • The path of the resource to be fetched by the client (the so-called URL)
  • HTTP protocol version number used by the client (http1.1 is used)

Request headers: additional information the client sends to the server

  • Host: Request address
  • User-agent: indicates the name and version of the operating system and browser used by the client.
  • Content-length: indicates the Length of the data sent to the HTTP server.
  • Content-type: indicates the data Type of the parameter
  • Cookie: Sends the value of the Cookie to the HTTP server
  • Accept-Charset: indicates the character sets accepted by the browser
  • Accept-Language: indicates the languages accepted by the browser
  • Accept: Indicates the media type accepted by the browser

Request body: Commonly carried request parameters

application/json: {"name":"value","name1":"value1"}
application/x-www-form-urlencoded: name1=value1&name2=value2
multipart/form-data: form data that may include file uploads
text/xml: XML content
application/octet-stream: a raw binary stream
What are the fields in the POST request

In the HTTP request header, you can use content-type to specify the request information in different formats. The HTTP protocol states that the data submitted by a POST must be placed in the entity-body of the message, but the protocol does not specify how the data must be encoded. In fact, it is entirely up to the developer to decide the format of the message body, as long as the last HTTP request sent satisfies the above format.

The data sent out is meaningful only if the server parses it successfully. Common server-side languages such as PHP, Python, and their frameworks have built-in capabilities to automatically parse common data formats. The server usually obtains the encoding mode of the message body in the request based on the Content-type field in the headers and then parses the body.

The encType attribute of a form can be used to control how the form data is encoded before it is sent. There are three enctypes:

  • multipart/form-data does not encode characters and is used to send binary files; the other two types cannot be used to send files.
  • text/plain: spaces are converted to "+" signs, but special characters are not encoded.
  • application/x-www-form-urlencoded: all characters are encoded before sending (spaces are converted to "+" and special symbols are converted to ASCII HEX values).

Application/X-www-form-urlencoded is the default type.
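A minimal sketch of sending the default application/x-www-form-urlencoded body with HttpURLConnection (the URL and parameters are placeholders):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FormPostDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/login");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Tell the server how the body is encoded so it can parse it correctly
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        byte[] body = "name1=value1&name2=value2".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println(conn.getResponseCode());
    }
}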

What is the difference between HTTPS and HTTP? How does the HTTPS client server interact?

HTTPS is an HTTP channel built for security. Simply put, it is the secure version of HTTP: an SSL layer is added beneath HTTP, and SSL is the security foundation of HTTPS.

The HTTPS protocol can be divided into two functions. One is to establish an information security channel to ensure the security of data transmission. The other is to verify the authenticity of the site.

The data transmitted over HTTP is unencrypted, that is, in plain text. Therefore, it is very insecure to use HTTP to transmit private information. To ensure that the private data can be encrypted and transmitted, Netscape designed the Secure Sockets Layer (SSL) protocol to encrypt the data transmitted over HTTP. Thus HTTPS was born. In short, HTTPS is a network protocol built by SSL and HTTP for encrypted transmission and identity authentication. It is more secure than HTTP.

The differences between HTTPS and HTTP are as follows:

  • For HTTPS, you need to apply for a certificate from the CA. Generally, there are few free certificates, so a certain fee is required.
  • HTTP is the hypertext transfer protocol and transmits information in plain text, while HTTPS is a secure SSL-encrypted transfer protocol.
  • HTTP and HTTPS use completely different connections and ports (80 and 443).
  • HTTP connections are simple and stateless; HTTPS is a network protocol based on SSL and HTTP for encrypted transmission and identity authentication. It is more secure than HTTP.

The interaction between an HTTPS client and server proceeds roughly as follows:

  • The client accesses the Web server using an HTTPS URL and must establish an SSL connection with the Web server.
  • After receiving the request from the client, the Web server sends the certificate information (including the public key) of the website to the client.
  • The browser on the client and the Web server begin to negotiate the security level of the SSL connection, that is, the level of information encryption.
  • The browser of the client establishes the session key according to the security level agreed by both parties, encrypts the session key using the public key of the website, and sends it to the website.
  • The Web server uses its own private key to decrypt the session key.
  • The Web server encrypts the communication with the client using the session key.

Although HTTPS is not completely secure, and organizations with root certificates and encryption algorithms can also carry out man-in-the-middle attacks, HTTPS is still the most secure solution under the current architecture.

Sliding window -> what regions exist on the client and server (sent-and-acknowledged vs. sent-but-unacknowledged)

As you all know, when we send data from one machine to another, the data is not and cannot be transmitted to the recipient in one go. This is not difficult to understand, because the network environment is particularly complex, some places are fast and some places are slow. Therefore, the operating system writes the data as a continuous packet and sends it to each other at a certain rate.

We need to consider the bandwidth buffer and other factors, if all the data sent at once will only increase the network pressure, resulting in packet loss retry, light transmission slower, heavy network crash. Because TCP is sequential, the operating system sends these packets in batches to each other, like a window, moving backwards, so we call it the TCP sliding window protocol.

In TCP, the size of the window is agreed after the TCP three-way handshake, and the size of the window is not fixed, but will be adjusted according to the network conditions. TCP adjusts the window size for better transmission efficiency. TCP detects packet loss through the sliding window mechanism and adjusts the data transfer rate when packet loss occurs.

On the sending side, the packets to be sent form a queue, and they fall into four categories. Packets in front of the window have been sent to the receiver and a reply has been received from the receiver; we call these sent and acknowledged. Inside the window there are two states: packets that have been sent to the receiver but not yet acknowledged (sent but unacknowledged), and packets that are allowed to be sent but have not yet been sent (sendable but unsent). The last category lies outside the window; we call these not sendable, and they are not sent until the window slides over them.

Once the previous data has been confirmed by the server, the window will slowly slide back. As shown in the figure below, after the two data packets P1 and P2 are confirmed, the window will move back, and then the new data packets will change from untransmittable to transmittable.

What does TCP's sliding window protocol give us? The first thing, of course, is reliability: the window slides forward only after the packets at the front of the queue have been acknowledged, which ensures that packets are received and acknowledged by the receiver. The second is transmission efficiency: without a window the sender would emit packets chaotically, and because of TCP's head-of-line behavior, an earlier packet that is not delivered successfully keeps being retried, which hurts efficiency. Finally, stability: TCP's sliding window size is the outcome of negotiation over the whole complex network and is adjusted dynamically, which avoids network congestion and is more stable.

How does DNS work

Most of the current network access is based on TCP/IP protocol development, and TCP/IP is based on IP addressing. Because most people can’t remember meaningless IP addresses, they want to use simple domain names instead of addresses, and DNS acts as such a “translator.” We input the domain name, DNS help us to query the corresponding domain name binding IP address.

Wait time for the HTTP handshake

In the TCP four-way close sequence, the party that actively closes the connection enters the TIME_WAIT state after sending the last ACK packet. This state lasts for 2MSL (MSL is the maximum segment lifetime: a packet not accepted within MSL is discarded; the value recommended in RFC 793 is 2 minutes). When the timer expires, the TCP connection enters the CLOSED state and is closed.

The purpose of the 2MSL TIME_WAIT state on the side that actively closes the connection is as follows:

  • Ensure that all packets generated in this TCP connection die out on the network, so that new TCP connections are not affected by packets generated in this TCP connection but arriving late.
  • Ensure that the passive closing party receives the final ACK. If the last ACK is lost in network transmission, the passive closing party will retransmit the FIN packet. The active closing party can receive the FIN packet within the 2MSL time and resend the ACK packet to restart the 2MSL timing.

By extension, what should you do if a large number of TIME_WAIT states occur on TCP clients?

Hint: connection reuse.

Algorithm related

Calculation: the probability of drawing both jokers

On the first draw, the probability of drawing one of the two jokers from 54 cards is 2/54 = 1/27. On the second draw, you must draw the remaining joker from the 53 cards left, with probability 1/53. So the probability of drawing both jokers in two draws is (1/27) * (1/53) = 1/1431.

The actual effect of algorithm stability

Assuming that in the sequence of records to be sorted, there are multiple records with the same keyword. If the relative order of these records remains unchanged after sorting, that is, in the original order, Ai=Aj and Ai is before Aj, and in the sorted sequence, Ai is still before Aj, then the sorting algorithm is said to be stable. Otherwise, the algorithm is unstable.

Stability matters when the objects being sorted have multiple attributes and their original order is itself meaningful, so that a second sort needs to preserve the meaning of the first. For example, if a set of objects is already sorted by price and now needs to be ranked by sales volume, a stable algorithm keeps objects with the same sales volume in their original price order, and only objects with different sales volumes are reordered. (Of course, if there is no need to preserve the original ordering, using a stable algorithm brings no particular benefit.)

  • Stability is meaningless if you simply order numbers.
  • If what is being ordered is just one numeric property of a complex object, stability is still meaningless (as mentioned above, the cost of the algorithm itself is then what matters, though it is not the only deciding factor).
  • If the contents to be sorted are multiple numeric properties of a complex object, but their original initial order is meaningless, then stability will still be meaningless.

Algorithm: the Kth largest element

Find the Kth largest element (N >= k) in an unordered array of length N.

There are several ways to do this:

  • Method 1: sorting. First sort the unordered array from largest to smallest; after sorting, the Kth element is naturally the Kth largest element in the array. However, the time complexity of this method is O(n log n), so its performance is somewhat poor.
  • Method 2: insertion. Maintain an ordered array A of length k that stores the k largest elements known so far. Then iterate over the unordered array, one element at a time, and compare it with the smallest element in A: if it is less than or equal to the smallest element in A, continue iterating; if it is greater, insert it into A and "squeeze out" what used to be the smallest element. The time complexity is O(n*k), which is acceptable when k is small relative to n.
  • Method 3: small top heap. Maintain a min-heap of capacity k whose k nodes represent the current k largest elements; the top of the heap is obviously the minimum of those k elements. Iterate over the array, comparing each element with the top of the heap: if the current element is less than or equal to the heap top, continue; if it is larger, place it at the top of the heap and adjust the binary heap (a sink operation). At the end of the traversal, the heap top is the smallest of the array's k largest elements, i.e. the Kth largest element. When k is much smaller than n, the time complexity is approximately O(n log k).
  • Method 4: divide and conquer. Quicksort uses divide and conquer to split the array each time into a part with larger elements and a part with smaller elements. We can use the same idea when looking for the Kth largest element: using an element A as the pivot, swap elements greater than A to the left side of the array and elements less than A to the right side.

As follows, we implement the algorithm based on method 3:

public class FindKthLargest {

    public static int findNumberK(int[] array, int k) {
        // 1. Build a small top heap with the first k elements
        buildHeap(array, k);
        // 2. Continue to traverse the array and compare with the top of the heap
        for (int i = k; i < array.length; i++) {
            if (array[i] > array[0]) {
                array[0] = array[i];
                downAdjust(array, 0, k);
            }
        }
        // 3. Return the top element of the heap
        return array[0];
    }

    private static void buildHeap(int[] array, int length) {
        // Start with the last non-leaf node and sink each node down
        for (int i = (length - 2) / 2; i >= 0; i--) {
            downAdjust(array, i, length);
        }
    }

    /**
     * Sink adjustment
     * @param array  the heap to be adjusted
     * @param index  the node to sink
     * @param length the effective size of the heap
     */
    private static void downAdjust(int[] array, int index, int length) {
        // temp stores the value of the parent node for the final assignment
        int temp = array[index];
        int childIndex = 2 * index + 1;
        while (childIndex < length) {
            // If there is a right child and it is smaller than the left child, move to the right child
            if (childIndex + 1 < length && array[childIndex + 1] < array[childIndex]) {
                childIndex++;
            }
            // If the parent node is smaller than or equal to the smaller child, stop
            if (temp <= array[childIndex])
                break;
            // Single assignment instead of a full swap
            array[index] = array[childIndex];
            index = childIndex;
            childIndex = 2 * childIndex + 1;
        }
        array[index] = temp;
    }

    public static void main(String[] args) {
        int[] array = new int[] {7, 5, 15, 3, 17, 2, 20, 24, 1, 9, 12, 8};
        System.out.println(findNumberK(array, 5));
    }
}

The specific operation steps are as follows:

  1. Build the first K elements of the array into a heap.
  2. Continue traversing the array, comparing it to the top of the heap, if less than or equal to the top of the heap, continue traversing; If larger than the top of the heap, replace the top of the heap element and adjust the heap.
  3. The top of the heap, at this point, is the smallest element in the heap, which is the Kth largest element in the array.

Algorithm: pick m numbers from an array of n numbers with equal probability

This is essentially the lottery model: there are n lottery tickets, one for each of n people, and m winners must be selected. In other words, we just have to simulate a fair lottery process so that the m people are chosen with equal probability.

As we all know, a lottery is order-independent: everyone has an equal chance of winning. So the easiest way is to shuffle the n people into a random order and pick the first m as the winners.

The problem itself is not difficult to understand; the key points are equal probability and an implicit condition, namely that no one is selected twice.

One family of approaches: when examining the current element, use a probability (which may change dynamically or stay constant) to decide whether to keep it; if it is kept, it replaces one of the already selected elements. Here is another way to do it.

Let A be the source array and B the auxiliary array (which holds the selected elements). A has length n and B has length m. We must take m numbers from A and put them into B so that every element is equally likely to be chosen.

Traversing A, when faced with the ith element x, mark p as the number of elements to be selected from A. Q is the number of numbers counting backwards from x to the end of A. Contains the x. The probability of x being selected is set to P /q. This is also equal to the probability.

  • 1: The probability that the 0th element is selected is m/n
  • 2: The probability that the 1st element is selected is m/n * (m-1)/(n-1) + (1 - m/n) * m/(n-1) = m/n
  • 3: The probability that the 2nd element is selected is ... = m/n
  • …

And so on: every element is selected with probability m/n. We now show why the probability of any given element being selected is m/n.

Proving this directly would be quite complicated, but there is a very clever way to see it.

Looking at the model of this problem, it is in fact a raffle: a box contains n tickets, each marked either "win" or "miss", and exactly m of them are marked "win". What is the probability of winning on the kth draw? It is obviously m/n.

Remember that "a draw is fair regardless of order"? So let us write out, independently, the probability of winning on the kth draw:

C(m,1) * A(n-1,m-1) / A(n,m) = m/n

Therefore, with the above procedure, the probability of any element being selected is m/n.
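
A minimal Java sketch of this selection process, following the p/q rule described above (the class and method names are illustrative, not from the original answer):

import java.util.Random;

public class EqualProbabilitySelect {

    // Select m numbers from source (length n) so that each is chosen with probability m/n.
    // The current element is kept with probability (numbers still needed) / (numbers remaining).
    public static int[] select(int[] source, int m) {
        int n = source.length;
        int[] selected = new int[m];
        Random random = new Random();
        int taken = 0;
        for (int i = 0; i < n && taken < m; i++) {
            int need = m - taken;      // p: numbers still to be selected
            int remaining = n - i;     // q: numbers from the current one to the end, inclusive
            if (random.nextInt(remaining) < need) {
                selected[taken++] = source[i];
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        int[] source = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        for (int x : select(source, 3)) {
            System.out.print(x + " ");
        }
    }
}

random.nextInt(remaining) is uniform over 0..remaining-1, so the condition holds with probability need/remaining, which is exactly the p/q rule above.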

Algorithm: Finds the longest repeating substring of a string

We work with suffixes: every substring of S is a prefix of some suffix of S, so all the suffixes of S indirectly represent all the substrings of S. The specific steps are as follows:

  • Store all suffixes of the string S
  • Sort all the suffixes (natural ordering)
  • Find the longest repeated substring by comparing adjacent suffixes (take the longest common prefix of each pair, comparing character by character from the first character)
import java.util.ArrayList;
import java.util.Collections;
import java.util.Scanner;

public class Main {
    // Holds the longest repeated substring found so far
    static String result = "";

    public static void main(String[] args) throws InterruptedException {
        Scanner sc = new Scanner(System.in);
        String s = sc.nextLine();
        ArrayList<String> list = new ArrayList<String>();
        // Collect all suffixes of the string
        for (int i = s.length() - 1; i >= 0; i--) {
            list.add(s.substring(i));
        }
        Collections.sort(list);
        int maxLen = 0;
        for (int i = 0; i < s.length() - 1; i++) {
            int len = getComlen(list.get(i), list.get(i + 1));
            maxLen = Math.max(maxLen, len);
        }
        System.out.println(result + ":" + maxLen);
    }

    // Returns the length of the common prefix of two adjacent suffixes
    public static int getComlen(String str1, String str2) {
        int i;
        for (i = 0; i < str1.length() && i < str2.length(); i++) {
            if (str1.charAt(i) != str2.charAt(i)) {
                break;
            }
        }
        String temp = str1.substring(0, i);
        if (temp.length() > result.length()) {
            result = temp;
        }
        return i;
    }
}

Algorithm: reverse every group of k nodes

Reverse a singly linked list in groups of k nodes (the last group is left unreversed if it has fewer than k nodes).

One of the solutions provided by leetcode is as follows:

public class LinkReverse {

    public static Node mhead = null;
    public static Node mtail = null;

    // Build a linked list 1 -> 2 -> ... -> n
    // (Node is assumed to be a simple POJO with data/next fields and getters/setters)
    public static Node newLink(int n) {
        Node head = new Node();
        head.setData(1);
        head.setNext(null);
        Node tmp = head;
        for (int i = 2; i <= n; i++) {
            Node newNode = new Node();
            newNode.setData(i);
            newNode.setNext(null);
            tmp.setNext(newNode);
            tmp = newNode;
        }
        return head;
    }

    public static void main(String[] args) {
        Node node = newLink(10);
        pritNode(node);
//        Node node1 = reverseKLink(node,3);
//        Node node1 = reverse(node,2);
        Node node1 = reverseLink3(node, 4);
        pritNode(node1);
    }

    public static void pritNode(Node head) {
        Node temp = head;
        while (temp != null) {
            System.out.print(temp.getData() + "->");
            temp = temp.getNext();
        }
        System.out.println();
    }

    /*public static Node reverseLink(Node head){
        Node pre=null,cur=head,next=null;
        while(cur!=null){
            next=cur.getNext();
            cur.setNext(pre);
            pre=cur;
            cur=next;
        }
        return pre;
    }*/

    /*public static Node reverseKLink(Node head,int k){
        Node pre=null,cur=head,next=null;
        Node pnext=null,global_head=null;
        int i=0;
        Node tmp=null;

        while(cur!=null){
            next = cur.getNext();
            if(tmp==null) tmp=cur;              // tmp records the tail of the current group after reversal
            if(pnext!=null) pnext.setNext(cur); // pnext is the tail of the previous reversed group
            cur.setNext(pre);
            pre=cur;
            cur=next;
            i++;
            if(i>=k){                           // the current group is fully reversed
                if(global_head==null){
                    global_head=pre;
                }
                if(pnext!=null){
                    pnext.setNext(pre);
                }
                pnext=tmp;                      // prepare for the next group
                i=0;                            // reset the per-group state
                tmp=null;
                pre=null;
            }
        }
        return global_head;
    }*/

    // Reverse the nodes from left to right (inclusive);
    // mhead/mtail record the new head and tail of the reversed group
    public static void inreverse(Node left, Node right) {
        Node pre = null, cur = left, next = null;
        while (pre != right) {
            next = cur.getNext();
            cur.setNext(pre);
            pre = cur;
            cur = next;
        }
        if (mtail != null) {
            mtail.setNext(right);
        }
        mhead = right;
        mtail = left;
    }

    public static Node reverseLink3(Node head, int k) {
        Node cur = head, global_head = head;
        int i = 1;
        Node left = null, right = null;
        while (cur != null) {
            if (i % k == 1) {
                left = cur;
            }
            right = cur;
            cur = cur.getNext();
            if (i % k == 0) {
                inreverse(left, right);
                if (mtail != null) {
                    mtail.setNext(cur);
                }
                if (global_head == head) {
                    global_head = mhead;
                }
            }
            i++;
        }
        return global_head;
    }
}

Algorithm: determine whether an IP address is within the country

Input: the database holds hundreds of thousands of domestic IP segments (start_ip, end_ip), plus one IP address to be checked. Output: YES or NO.

First, prepare the IP segment data. To make comparisons easy, each IP is converted to a long.

  • Load the data and take the two fields start_ip and end_ip, converting each with ip2Long
  • Save the start of each segment in left and the end in right
  • Then sort the segments by left

Results similar to the following are obtained:

('16842752', '16843007'), ('16843264', '16859135')

Next, a binary search over the sorted segments is enough; it is relatively simple.
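
A minimal Java sketch of this approach, assuming the segment bounds have already been converted to long values and sorted by their left endpoints (class, method, and variable names are illustrative):

public class IpLookup {

    // Convert a dotted IPv4 string such as "1.2.3.4" to a long
    static long ipToLong(String ip) {
        long value = 0;
        for (String part : ip.split("\\.")) {
            value = (value << 8) | Long.parseLong(part);
        }
        return value;
    }

    // sortedLeft/sortedRight hold the segment bounds, sorted by left.
    // Binary search for the last segment whose start <= ip, then check its end.
    static boolean isDomestic(long[] sortedLeft, long[] sortedRight, String ip) {
        long target = ipToLong(ip);
        int lo = 0, hi = sortedLeft.length - 1, candidate = -1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (sortedLeft[mid] <= target) {
                candidate = mid;
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return candidate >= 0 && target <= sortedRight[candidate];
    }
}

This works as long as the segments do not overlap, which is the usual case for IP allocation tables.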

Merge N linked lists -> optimize to log(N) -> null checks -> how to implement without modifying the data structure

The simplest way to merge n sorted lists is to create a collection, traverse all the lists and add their elements to the collection, sort the collection in ascending order via an array, build a new list from it and return it.

Time complexity of this method: O(n²): the outer layer traverses the lists in the array and the inner layer traverses each list's elements, a double traversal plus one more single traversal, so the result is roughly O(n²). Space complexity: O(n) for the new list nodes and O(n) for the collection holding all the elements.

Consider optimizing the above approach. The code below merges with a priority queue (min-heap); the mergeTwoLists helper can also drive a pairwise divide-and-conquer merge.

public ListNode mergeKLists(List<ListNode> lists) {
    // write your code here
    if (lists == null || lists.size() == 0) {
        return null;
    }
    ListNode dummy = new ListNode(0);
    ListNode p = dummy;
    // Min-heap ordered by node value
    Queue<ListNode> queue = new PriorityQueue<ListNode>(lists.size(), new Comparator<ListNode>() {
        @Override
        public int compare(ListNode o1, ListNode o2) {
            return o1.val - o2.val;
        }
    });
    for (ListNode node : lists) {
        if (node != null) {
            queue.offer(node);
        }
    }
    while (!queue.isEmpty()) {
        ListNode node = queue.poll();
        p.next = node;
        if (node.next != null) {
            queue.offer(node.next);
        }
        p = p.next;
    }
    return dummy.next;
}

public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
    ListNode dummy = new ListNode(-1);
    ListNode p = dummy;
    ListNode p1 = l1;
    ListNode p2 = l2;
    while (p1 != null && p2 != null) {
        if (p1.val < p2.val) {
            p.next = p1;
            p1 = p1.next;
        } else {
            p.next = p2;
            p2 = p2.next;
        }
        p = p.next;
    }
    if (p1 != null) {
        p.next = p1;
    }
    if (p2 != null) {
        p.next = p2;
    }
    return dummy.next;
}
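
The mergeTwoLists helper above can also drive a pairwise divide-and-conquer merge, which reaches the same O(N log k) bound as the priority-queue version. The following is only a sketch, assuming the same ListNode class (the method names are illustrative):

public ListNode mergeKListsDivide(List<ListNode> lists) {
    if (lists == null || lists.size() == 0) {
        return null;
    }
    return merge(lists, 0, lists.size() - 1);
}

// Recursively merge lists[lo..hi] by splitting the range in half
private ListNode merge(List<ListNode> lists, int lo, int hi) {
    if (lo == hi) {
        return lists.get(lo);
    }
    int mid = lo + (hi - lo) / 2;
    ListNode left = merge(lists, lo, mid);
    ListNode right = merge(lists, mid + 1, hi);
    return mergeTwoLists(left, right);
}

Each list is touched about log k times instead of once per output node, which is where the speedup over the naive O(n²) approach comes from.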

Open questions and project experience

Open questions and project experience need to be sorted out in advance according to their own practical experience.

What if one version of a service is upgraded but the others are not

Compatibility handling: maintain compatibility at the interface level and add a layer of adapters. This needs to be handled with care, especially in large, long-maintained projects.

Design patterns in the Spring Cloud/JDK Design Patterns project

This depends on your usual habits; keep an inquisitive mindset and read some excellent source code.

  • Observer: the publish/subscribe model. The JDK ships with java.util.Observable for the publisher and java.util.Observer for the subscriber.
  • Adapter: converts one interface or class into another, e.g. java.util.Arrays#asList().
  • Decorator: dynamically adds behavior to an object without creating a large number of subclasses for each combination of behaviors. This pattern is almost ubiquitous in the JDK; a typical example is java.io.BufferedInputStream(InputStream).
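
As a quick illustration of the adapter and decorator examples above (a minimal sketch; the demo values are made up):

import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.List;

public class JdkPatternDemo {
    public static void main(String[] args) throws IOException {
        // Adapter: Arrays.asList adapts an array to the List interface
        List<Integer> list = Arrays.asList(1, 2, 3);
        System.out.println(list);

        // Decorator: BufferedInputStream adds buffering to any InputStream
        try (InputStream in = new BufferedInputStream(new ByteArrayInputStream("hi".getBytes()))) {
            System.out.println(in.read()); // prints 104, the byte value of 'h'
        }
    }
}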

Common language? Compare Python and Java

Java and Python are both very popular languages at the moment. The main differences are as follows:

  • In terms of difficulty: Python is far simpler than Java.
  • Development speed: Python is far faster than Java.
  • Running speed: Java is far faster than standard Python; PyPy and Cython can close the gap, but neither is mature enough for real projects.
  • Available resources: Java has plenty; Python has fewer, especially Chinese-language resources.
  • Stability: Python 3 and 2 are incompatible, which caused some confusion and broke a large number of libraries; Java is much more stable because it has enterprise support behind it.
  • Open source: Python has been fully open source from the start; Java was developed by Sun, and the open-source OpenJDK is now available, so there is no need to worry.
  • Compiled or interpreted: both are interpreted.

Did you look back at your junior-year interview? They asked why that interview failed (I said my algorithms were weak at the time), and also asked about this year's written test

Summarize the lessons learned from past failures.

Future career plans?

Answer from the two dimensions of work and study:

  • The first point is that I have thought about this problem carefully. My plan is designed based on the current actual situation, not out of thin air.
  • Secondly, in terms of work, I will highlight my intention to become a professional in this field by actively completing work tasks and accumulating experience in various aspects. I also hope to have the opportunity to lead a team and become an excellent manager, so as to make greater contributions to the unit and achieve win-win results.
  • Thirdly, in terms of study, I plan to do further study and research in my professional field, combine practical experience with professional knowledge, and lay a good foundation for my career growth.

This is important not only to deal with the interviewer, but also to think about your own life.

Do you have a blog? Open source projects?

A blog and open-source projects are a plus; it is best to have them, so pay attention to accumulating them over time.

The internship? What do you find difficult? How do you coordinate with the rest of the team?

Test teamwork and problem solving skills.

What is the growth after the internship? Where can the business improve?

Same as above. Note the summary of previous experience.

Project: introduce difficult implementation details, similar to the second-round interview

Pick familiar, large projects that are easier to deal with.

What books have you been reading lately? How to Study

HR uses this question to examine the path you take to acquire knowledge and experience; your reading habits reveal your knowledge structure and your appetite for learning.

Describe your learning method according to your own situation, such as repeated practice, reinforcing memory, applying scientific methods, and so on. The ability to learn is important, and you need to highlight it.

Tips for interview

The interview questions listed above can only be used to help students understand most of the questions, and will not cover everything. Here, I recommend some of the interview review materials for reference only.

He has received offers from Alibaba, Douyin, Tencent and Meituan, and now works at Alibaba. First, I will share the review materials he prepared.

Private collection

As a bonus, reply "interview" in the background of the official account to get the PDF files. The following references some review materials of very high quality. For interviews at first-tier companies, algorithms are also essential, so links to classic algorithm problems are included as well.

Basic computer and Java
  • cyc2018.github.io/CS-Notes/#/…
  • github.com/AobingJava/…
  • github.com/Snailclimb/…
algorithm
  • Longest Increasing Subsequence variants:

Leetcode.com/problems/lo… Leetcode.com/problems/la… Leetcode.com/problems/ru… Leetcode.com/problems/ma… Leetcode.com/problems/nu… Leetcode.com/problems/de… Leetcode.com/problems/lo…

  • Partition Subset:

Leetcode.com/problems/pa…

  • BitMasking:

Leetcode.com/problems/pa…

  • Longest Common Subsequence Variant:

Leetcode.com/problems/lo… Leetcode.com/problems/ed… Leetcode.com/problems/di… Leetcode.com/problems/mi…

  • Palindrome:

Leetcode.com/problems/pa… Leetcode.com/problems/pa…

  • Coin Change variant:

Leetcode.com/problems/co… Leetcode.com/problems/co… Leetcode.com/problems/co… Leetcode.com/problems/pe… Leetcode.com/problems/mi… Leetcode.com/problems/la…

  • Matrix multiplication variant:

Leetcode.com/problems/mi… Leetcode.com/problems/mi… Leetcode.com/problems/bu…

  • Matrix/2D Array:

Leetcode.com/problems/ma… Leetcode.com/problems/ra… Leetcode.com/problems/du… Leetcode.com/problems/tr… Leetcode.com/problems/ma… Leetcode.com/problems/mi…

  • Hash + DP:

Leetcode.com/problems/ta… Leetcode.com/problems/lo… Leetcode.com/problems/lo… Leetcode.com/problems/ma…

  • State machine:

Leetcode.com/problems/be… Leetcode.com/problems/be… Leetcode.com/problems/be… Leetcode.com/problems/be… Leetcode.com/problems/be… Leetcode.com/problems/be…

  • Depth First Search + DP:

Leetcode.com/problems/ou… Leetcode.com/problems/kn…

  • Minimax DP:

Leetcode.com/problems/pr… Leetcode.com/problems/st…

  • Misc:

Leetcode.com/problems/gr… Leetcode.com/problems/de… Leetcode.com/problems/pe… Leetcode.com/problems/co… Leetcode.com/problems/lo… Leetcode.com/problems/nu…

Interview experience write-ups

For interview experience write-ups, we mostly read Nowcoder, which is very comprehensive.

As a bonus, I have put together a copy.

Written at the end

Mastering the basics is a process of accumulation. There are plenty of interview techniques you can cram at the last minute, but if we keep good study habits and, above all, a thirst for knowledge, we can get twice the result with half the effort when preparing for interviews.

Finally, here is a problem for everyone to test the effect of your study 😀. Feel free to write your ideas in the comment area or message me directly; if you cannot solve it, the author will eventually provide the approach and answer.

  • Algorithm: calculate the peak of concurrent online users

Input: user logs (time, user_id, login | logout). Output: the number of users online at the same time, the peak, and the peak period (90%), e.g. a time range with a peak of 300 million and a low of 270 million.

Recommended reading

The interview collection

Subscribe to the latest articles; welcome to follow my official account.