StringBuilder and StringBuffer

String: a string constant. It is immutable, so each concatenation produces a new object in a separate memory space.

StringBuffer: a mutable string. It is thread-safe, and concatenation appends directly to the existing character buffer.

StringBuilder: a mutable string. It is not thread-safe, and concatenation appends directly to the existing character buffer.

1. In terms of execution efficiency: StringBuilder > StringBuffer > String.

2. String is a constant and immutable, so a new object is created for each += assignment. Both StringBuffer and StringBuilder are mutable and append to the original buffer when concatenating strings, so they perform better than String. Because StringBuffer is thread-safe (its methods are synchronized) while StringBuilder is not, StringBuilder is more efficient than StringBuffer.

3. For concatenating large amounts of string data, use StringBuffer or StringBuilder instead of String.
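A minimal sketch of the difference (the class and the loop count are illustrative, not from the original): concatenating with String += allocates a new object on every iteration, while StringBuilder.append reuses one internal buffer.

public class ConcatDemo {
    public static void main(String[] args) {
        // String is immutable: each += creates a brand-new String object
        String s = "";
        for (int i = 0; i < 10000; i++) {
            s += i;                       // new object allocated every iteration
        }

        // StringBuilder is mutable: append() writes into one internal buffer
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            sb.append(i);                 // no new object per iteration
        }
        String result = sb.toString();

        System.out.println(s.length() + " " + result.length());
    }
}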

The difference between Vector, ArrayList, and LinkedList:

ArrayList and LinkedList are not synchronized; Vector is synchronized. So if thread safety is not required, ArrayList or LinkedList can be used to save the synchronization overhead. In multithreaded situations, however, Vector is sometimes necessary. Of course, ArrayList and LinkedList can also be wrapped to make them synchronized, but that may be less efficient.

Internally, ArrayList and Vector store their elements in an array of Object. When you add elements to either of them, if the number of elements exceeds the current size of the internal array, the internal array must be expanded. Vector doubles the size of the array by default, while ArrayList grows it by 50%, so the resulting collection always takes up more space than you actually need. If you want to store a large amount of data in a collection, Vector has some advantage, because you can avoid unnecessary resource overhead by setting the initial size of the collection.

In ArrayList and Vector, retrieving an object at a given position (by index), or inserting or deleting an object at the end of the collection, takes constant time, O(1). However, if you add or remove elements elsewhere in the collection, the time taken grows linearly: O(n-i), where n is the number of elements and i is the index where the element is added or removed, because every element after position i must be shifted. LinkedList takes constant time, O(1), to insert or delete an element anywhere in the collection once the position is located, but accessing an element by index is slower, O(i), where i is the index position.
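As a quick illustration of the wrapping mentioned above (a minimal sketch, not from the original text), java.util.Collections can wrap an ArrayList or LinkedList in a synchronized view:

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

public class SyncWrapDemo {
    public static void main(String[] args) {
        // Wrapped views: every method call is synchronized, similar in spirit to Vector
        List<String> syncArrayList  = Collections.synchronizedList(new ArrayList<String>());
        List<String> syncLinkedList = Collections.synchronizedList(new LinkedList<String>());

        syncArrayList.add("a");
        syncLinkedList.add("b");

        // Iterating over a synchronized wrapper still requires manual locking
        synchronized (syncArrayList) {
            for (String s : syncArrayList) {
                System.out.println(s);
            }
        }
    }
}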

Differences between HashMap, Hashtable, and TreeMap:


HashMap is not thread-safe; Hashtable is thread-safe.

HashMap allows null keys and values; Hashtable does not.

HashMap performs better than Hashtable.

All elements in a TreeMap are kept in a fixed (sorted) order, so use TreeMap if you want results in order (the order of elements in a HashMap is not fixed).
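A small sketch of the ordering difference (the keys used here are just an illustration): iterating a TreeMap yields keys in sorted order, while a HashMap makes no ordering guarantee.

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<String, Integer>();
        Map<String, Integer> treeMap = new TreeMap<String, Integer>();

        for (String key : new String[] {"banana", "apple", "cherry"}) {
            hashMap.put(key, key.length());
            treeMap.put(key, key.length());
        }

        // Iteration order is unspecified for HashMap
        System.out.println("HashMap: " + hashMap);
        // TreeMap iterates in natural key order: apple, banana, cherry
        System.out.println("TreeMap: " + treeMap);
    }
}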

What are the parts of an HTTP request message?

Request line;

Request header information;

A blank line;

Request body;


GET request protocol format:

GET /Hello/index.jsp HTTP/1.1: a GET request; the requested server path is /Hello/index.jsp and the protocol version is HTTP/1.1;

Host: localhost: indicates the requested host name, localhost;

User-Agent: Mozilla/4.0 (compatible; MSIE 8.0… : browser- and operating-system-related information. Some sites detect the user's system version and browser version by reading the User-Agent header;

Accept: */*: tells the server which document types the client can accept; */* means it accepts anything;

Accept-Language: zh-cn: indicates the language supported by the client. The language setting can be found in the browser's options;

Accept-Encoding: gzip, deflate: the compression formats supported by the client. The server compresses data before sending it to reduce what travels over the network;

Connection: keep-alive: indicates the connection mode supported by the client; the connection is kept alive for a period of time.
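A minimal sketch of the format in action (assuming a local servlet container on port 8080; the host and port are illustrative): a raw socket sends the request line, a few headers, and the blank line that ends the header section, then prints the response.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RawHttpGet {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 8080);
             PrintWriter out = new PrintWriter(socket.getOutputStream());
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.print("GET /Hello/index.jsp HTTP/1.1\r\n");   // request line
            out.print("Host: localhost\r\n");                  // request headers
            out.print("Accept: */*\r\n");
            out.print("Connection: close\r\n");
            out.print("\r\n");                                 // blank line ends the headers
            out.flush();

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                      // status line, headers, body
            }
        }
    }
}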

What is SQL injection? How do you prevent SQL injection?

SQL injection means tricking the server into executing malicious SQL commands by inserting SQL into a web form submission, or into the query string of a domain name or page request. In other words, it exploits an application's ability to pass (malicious) SQL to the back-end database engine for execution: by typing (malicious) SQL statements into a web form, an attacker can query the database of a vulnerable website instead of running the SQL the designer intended. To prevent SQL injection: use stored procedures to execute all queries; check the validity of user input; and encrypt user data such as login names and passwords before saving them.
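As a concrete Java illustration (a sketch: the users table, its columns, and the methods are hypothetical, and parameterized queries complement the measures listed above), compare concatenating user input into SQL with binding it through a PreparedStatement:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {

    // Vulnerable: user input is concatenated straight into the SQL text,
    // so an input like  ' OR '1'='1  changes the meaning of the query.
    public boolean loginUnsafe(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT * FROM users WHERE username = '" + user
                   + "' AND password = '" + pass + "'";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            return rs.next();
        }
    }

    // Safer: the SQL text is fixed and user input is bound as parameters,
    // so it is always treated as data, never as SQL.
    public boolean loginSafe(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT * FROM users WHERE username = ? AND password = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}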

What’s the difference between Redirect and Forward?

1. In terms of data sharing

Forward is a continuation of the same request and can share the data in that request.

Redirect starts a new request and does not share the original request's data.

2. In terms of the address bar

With Forward, the address in the browser's address bar does not change.

With Redirect, the address in the address bar changes.
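A minimal servlet sketch of the two (the paths and parameter are illustrative): forward dispatches within the same request on the server side, while sendRedirect tells the browser to issue a new request.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DispatchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        if ("forward".equals(req.getParameter("mode"))) {
            // Forward: same request object, request attributes are shared,
            // and the address bar still shows this servlet's URL.
            req.setAttribute("message", "shared via the same request");
            req.getRequestDispatcher("/target.jsp").forward(req, resp);
        } else {
            // Redirect: the server answers 302 and the browser sends a brand-new
            // request to /target.jsp, so request data is not shared and the
            // address bar changes.
            resp.sendRedirect(req.getContextPath() + "/target.jsp");
        }
    }
}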

Talk about thread synchronization: how are optimistic locks and pessimistic locks implemented?

Pessimistic locks, as the name implies, are pessimistic: every time you fetch the data, you assume someone else will modify it, so you lock it each time you read it; anyone else who wants the data then blocks until the lock is released. Traditional relational databases make heavy use of this kind of locking mechanism, such as row locks, table locks, read locks, and write locks, all of which acquire the lock before the operation.

Optimistic locks, as the name implies, are optimistic: every time you fetch the data, you assume others will not modify it, so you do not lock it. When updating, however, you check whether anyone else has updated the data in the meantime, typically using a version number or a similar mechanism. Optimistic locks are suitable for read-heavy applications and improve throughput. If a database provides a mechanism similar to write_condition, it is providing an optimistic lock.

The two kinds of locks each have advantages and disadvantages; one cannot simply be considered better than the other. Optimistic locking suits situations with few writes, that is, when conflicts rarely occur; it saves the locking overhead and increases the overall throughput of the system. However, if conflicts are frequent, the upper-layer application will keep retrying, which degrades performance, so pessimistic locking is more appropriate in that case.

1. Pessimistic lock: a pessimistic lock is implemented with the locking mechanism inside the database. A typical pessimistic-lock call relying on the database is: select * from account where name = 'Erica' for update; these records cannot be modified by others until the transaction commits (the locks taken during the transaction are released then). Using FOR UPDATE to lock the data before the change is committed prevents other threads from updating it and ensures that updates are not lost.

1.1 Performance issues caused by pessimistic locking. Imagine a scenario in a financial system: an operator reads a user's data and modifies it on that basis (for example, changing the user's account balance). With a pessimistic locking mechanism, the database record stays locked for the whole operation, from the moment the operator reads the data, through the changes, until the result is committed. As you can imagine, with hundreds or thousands of concurrent users, what would the consequences be? In that case we can use optimistic locks instead.

2. Optimistic lock: an optimistic lock can be implemented by adding a version-number column to the table. Here is an example.
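A sketch of the idea in JDBC (the account table, its columns, and the method are assumptions, since the original example code is not shown): the update only succeeds if the version read earlier is still current, and it bumps the version in the same statement.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticLockDao {

    /**
     * Tries to set a new balance for the given account, but only if the row
     * still carries the version number we read earlier. Returns true when the
     * update succeeded, false when someone else updated the row first.
     */
    public boolean updateBalance(Connection conn, long accountId,
                                 long expectedVersion, double newBalance) throws SQLException {
        String sql = "UPDATE account "
                   + "SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDouble(1, newBalance);
            ps.setLong(2, accountId);
            ps.setLong(3, expectedVersion);
            // 0 rows affected means the version check failed: the caller should
            // re-read the row and retry (or report a conflict).
            return ps.executeUpdate() == 1;
        }
    }
}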

Explanation: when updating, each client checks whether the current version number in the table matches the version number it obtained when it queried. If the version numbers differ, the update fails; if they match, the record is updated and the version number is incremented.

SQL query optimization? Database index usage scenarios?

1. Create indexes on the table, giving priority to columns used in WHERE and GROUP BY clauses.

2. Try to avoid SELECT *; returning unused fields reduces query efficiency. For example: SELECT * FROM t. Optimization: replace * with the specific fields that are actually used.

3. Try to avoid IN and NOT IN, which cause the database engine to abandon the index and do a full table scan. For example: SELECT * FROM t WHERE id IN (2,3); SELECT * FROM t1 WHERE username IN (SELECT username FROM t2). If the values are continuous, use BETWEEN instead: SELECT * FROM t WHERE id BETWEEN 2 AND 3. If the values come from a subquery, use EXISTS instead: SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.username = t2.username).

4. Try to avoid OR, which causes the database engine to abandon the index and do a full table scan. For example: SELECT * FROM t WHERE id = 1 OR id = 3. Optimization: SELECT * FROM t WHERE id = 1 UNION SELECT * FROM t WHERE id = 3. Although the two look about equally efficient, the UNION version can use the index while the OR version scans the whole table.

5. Try to avoid fuzzy queries with a leading wildcard, which cause the database engine to abandon the index and do a full table scan. For example: SELECT * FROM t WHERE username LIKE '%li%'. Optimization: put the wildcard only at the end of the pattern: SELECT * FROM t WHERE username LIKE 'li%'.

6. Try to avoid NULL value judgments, which cause the database engine to abandon the index and do a full table scan. For example: SELECT * FROM t WHERE score IS NULL. Optimization: give the field a default value of 0 and test against 0 instead: SELECT * FROM t WHERE score = 0.

7. Try to avoid expressions or function operations on the left side of the WHERE condition, which cause the database engine to abandon the index and do a full table scan. For example: SELECT * FROM t2 WHERE score/10 = 9; SELECT * FROM t2 WHERE SUBSTR(username,1,2) = 'li'. Optimization: move the calculation to the right side or rewrite the condition: SELECT * FROM t2 WHERE score = 10*9; SELECT * FROM t2 WHERE username LIKE 'li%'.

8. When the data volume is large, avoid the WHERE 1=1 condition. It is usually added by default to make assembling dynamic query conditions easier, but it causes the database engine to abandon the index and do a full table scan. For example: SELECT * FROM t WHERE 1=1. Optimization: when assembling SQL dynamically, add WHERE only when there is a real condition; otherwise omit it, as sketched below.
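A small sketch of the dynamic-assembly alternative in item 8 (the table and parameters are illustrative): the WHERE clause is built only from conditions that are actually present instead of starting from WHERE 1=1.

import java.util.ArrayList;
import java.util.List;

public class QueryBuilder {
    /** Builds "SELECT * FROM t" plus a WHERE clause only if there are conditions. */
    public static String build(String name, Integer minScore) {
        List<String> conditions = new ArrayList<String>();
        if (name != null)     conditions.add("name = ?");
        if (minScore != null) conditions.add("score >= ?");

        StringBuilder sql = new StringBuilder("SELECT * FROM t");
        if (!conditions.isEmpty()) {
            sql.append(" WHERE ").append(String.join(" AND ", conditions));
        }
        return sql.toString();   // bind the parameters with PreparedStatement afterwards
    }

    public static void main(String[] args) {
        System.out.println(build(null, null));   // SELECT * FROM t
        System.out.println(build("li", 60));     // SELECT * FROM t WHERE name = ? AND score >= ?
    }
}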

How does Spring implement IoC and AOP? What are their principles and usage scenarios in projects?

IoC dependency injection is implemented through reflection: when instantiating a class, the container injects the class's attributes (previously stored, for example, in a HashMap of bean definitions) into the instance by calling its setter methods via reflection. In summary, in traditional object creation the caller usually creates the instance of the callee, whereas in Spring the container creates the callee and then injects it into the caller; this is known as dependency injection or inversion of control. There are two injection styles: constructor injection and setter injection. Advantages of IoC: it reduces coupling between components, reduces the complexity of substituting business objects, and enables flexible object management.
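A minimal sketch of the setter-injection-via-reflection idea described above (the class names and the hard-coded property map are hypothetical; a real container reads them from configuration):

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// The bean to be managed by the "container"
class UserService {
    private String dataSource;
    public void setDataSource(String dataSource) { this.dataSource = dataSource; }
    public String getDataSource() { return dataSource; }
}

public class TinyIocDemo {
    public static void main(String[] args) throws Exception {
        // Properties that would normally come from XML or annotations
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("dataSource", "jdbc:mysql://localhost/test");

        // 1. The container, not the caller, instantiates the bean
        Object bean = UserService.class.getDeclaredConstructor().newInstance();

        // 2. For each property, find the matching setter and invoke it reflectively
        for (Map.Entry<String, Object> entry : properties.entrySet()) {
            String setterName = "set" + Character.toUpperCase(entry.getKey().charAt(0))
                              + entry.getKey().substring(1);
            Method setter = bean.getClass().getMethod(setterName, entry.getValue().getClass());
            setter.invoke(bean, entry.getValue());
        }

        System.out.println(((UserService) bean).getDataSource());
    }
}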

AOP uses a technique called "crosscutting" to open up the internals of encapsulated objects and gather common behavior that affects multiple classes into a reusable module called an "Aspect". Simply put, an aspect encapsulates logic or responsibilities that have nothing to do with the business itself but are commonly invoked by business modules, such as logging. This reduces duplicate code in the system, lowers coupling between modules, and improves future operability and maintainability.

AOP techniques fall mainly into two categories: one uses dynamic proxies, intercepting messages and decorating them in order to replace the behavior of the original object; the other is static weaving, which introduces special syntax to create "aspects" so that the compiler can weave the aspect code in at compile time.
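A sketch of the dynamic-proxy flavor using the JDK's java.lang.reflect.Proxy (the OrderService interface and the logging "aspect" are illustrative):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface OrderService {
    void placeOrder(String item);
}

class OrderServiceImpl implements OrderService {
    @Override
    public void placeOrder(String item) {
        System.out.println("Placing order for " + item);
    }
}

public class LoggingProxyDemo {
    public static void main(String[] args) {
        final OrderService target = new OrderServiceImpl();

        // The InvocationHandler plays the role of the "aspect": logging is woven
        // around every method call without touching the business code.
        OrderService proxy = (OrderService) Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                new Class<?>[] {OrderService.class},
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object p, Method method, Object[] methodArgs) throws Throwable {
                        System.out.println("before " + method.getName());
                        Object result = method.invoke(target, methodArgs);
                        System.out.println("after " + method.getName());
                        return result;
                    }
                });

        proxy.placeOrder("book");
    }
}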

AOP usage scenarios:

Authentication: permission checks

Caching

Context passing

Error handling

Lazy loading

Debugging

Logging, tracing, profiling, and monitoring

Performance optimization

Persistence

Resource pooling

Synchronization

Transaction management

Asynchronous programming:

Asynchronous programming provides a non-blocking, event-driven programming model. It achieves parallelism by using multiple cores in the system to execute tasks, thereby improving application throughput (throughput here means the number of tasks completed per unit of time). In this programming style, a unit of work executes independently of the main application thread and notifies the calling thread of its status: success, in progress, or failure.

We need asynchrony to eliminate the blocking model. In fact, the asynchronous programming model can use the same thread to handle multiple requests without blocking it. Imagine an application using one thread that executes a task and then waits for it to complete before moving to the next step. A logging framework is a good example: typically you want to log exceptions and errors to a target such as a file, a database, or a similar place, but you don't want your program to wait for the logging to finish, or its responsiveness will suffer. Conversely, if calls to the logging framework are made asynchronously, the application can execute other tasks concurrently without waiting. This is an example of non-blocking execution.

To implement asynchronous execution in Java, you can use Future and FutureTask, which live in the java.util.concurrent package. Future is an interface and FutureTask is one of its implementation classes. In practice, if you use a Future in your code, the asynchronous task starts executing right away and the calling thread gets back a result "promise" it can query later.

The following code snippet defines an interface containing two methods. One is synchronous and the other is asynchronous.

import java.util.concurrent.Future;
public interface IDataManager {
   // synchronous method
   public String getDataSynchronously();
   // asynchronous method
   public Future<String> getDataAsynchronously();
}
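A possible implementation and usage of this interface (a sketch: the DataManager class, the executor setup, and the simulated work are assumptions, not code from the original article):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class DataManager implements IDataManager {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    @Override
    public String getDataSynchronously() {
        return loadData();                       // blocks the caller until done
    }

    @Override
    public Future<String> getDataAsynchronously() {
        // FutureTask wraps the work; the executor runs it on another thread
        FutureTask<String> task = new FutureTask<String>(new Callable<String>() {
            @Override
            public String call() {
                return loadData();
            }
        });
        executor.submit(task);
        return task;                             // the caller holds a "promise" for the result
    }

    private String loadData() {
        try {
            Thread.sleep(500);                   // simulate slow I/O
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "data";
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        DataManager manager = new DataManager();
        Future<String> future = manager.getDataAsynchronously();
        System.out.println("doing other work while the data loads...");
        System.out.println("result: " + future.get());   // blocks only when the result is needed
        manager.executor.shutdown();
    }
}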

It is worth noting that the downside of the callback model is that it is troublesome when callbacks are nested.