Of two correct programs that do the same thing, the smaller one generally runs faster.

The more code there is, the longer compilation takes; the more objects are created and destroyed, the more work garbage collection must do and the longer GC cycles become; the more classes have to be loaded from disk into the JVM, the longer program startup takes; and the more code there is, the lower the hit rate of the hardware caches and the longer execution takes. Early programs were relatively small, so these differences were not significant, and improvements in hardware allowed new programs to run at acceptable speeds.

Java source files are compiled into bytecode files that run on the JVM, where the JIT compiler further compiles hot code into machine code, applying a number of optimizations that improve performance.

We shouldn’t spend a lot of time on small performance improvements; premature optimization is the root of all evil.

We should write code that is clear, straightforward, easy to read, and easy to understand. Optimization means changing algorithms, designs, and complex program structures to obtain better performance, but real optimization should happen only after performance analysis shows that such measures bring significant benefits. (This does not excuse code structures that are known to be bad for performance.)

Be careful with exceptions

Exceptions are detrimental to performance. Throwing an exception first requires creating a new object: the constructor of the Throwable class calls a synchronized native method named fillInStackTrace(), which walks the stack to collect call trace information. Whenever an exception is thrown, the JVM must also adjust the call stack. Exceptions should be used only for error handling and should not be used to control program flow.
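As an illustration (the map-lookup scenario and names here are invented for the example), compare using an exception for an expected, ordinary condition with a plain check:

```java
import java.util.Map;

class FlowControl {
    // Bad: an absent key is an expected condition, yet it is "handled"
    // by constructing and catching an exception (stack walk included).
    static int lookupWithException(Map<String, Integer> m, String key) {
        try {
            return m.get(key);              // null unboxing throws NPE when absent
        } catch (NullPointerException e) {  // exception used as flow control
            return 0;
        }
    }

    // Good: an ordinary check; exceptions stay reserved for real errors.
    static int lookupWithCheck(Map<String, Integer> m, String key) {
        Integer v = m.get(key);
        return (v != null) ? v : 0;
    }
}
```

Both methods return the same results, but the first pays the cost of object creation and fillInStackTrace() on every miss.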

Minimize repeated computation of the same value

Be aware that a method call, even when the method contains only one statement, has a cost: creating a stack frame, saving state when the method is invoked, and restoring it when the method returns. So, for example:

for (int i = 0; i < list.size(); i++) { ... }

should be rewritten as:

for (int i = 0, length = list.size(); i < length; i++) { ... }

This saves a lot of overhead when list.size() is large.

Try-catch inside vs. outside a loop

public void test1() {
    while (true) {
        try {
            Thread.sleep(30 * 60 * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

public void test2() {
    try {
        while (true) {
            Thread.sleep(30 * 60 * 1000);
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

test1 puts the try-catch inside the loop; test2 puts it outside the loop.

The difference is that if test2 throws an exception, it breaks out of the loop, whereas if test1 throws an exception during execution, it catches it and continues the loop.

Which form to use depends on the requirement: ① if an exception should terminate the loop, write it as test2; ② if a server thread must keep processing data from other threads even when an exception occurs, it must be coded as test1 to ensure system stability.

Some people like to compare the performance of the two forms. I don’t think that’s necessary: if the requirement is ②, there is no choice; if the requirement is ①, writing the try-catch on the outside is clean and easy to understand, and performs just as well as putting it inside.

Some ask: the runtime already catches uncaught exceptions, so why write try-catch in your own code?

If you don’t catch an exception at the exact spot where it occurs, the runtime will indeed catch it eventually, but the result is that your program exits. Suppose your program tries to new an object and memory runs out: new fails and throws an exception. If you continue without catching and handling that exception, the program will probably crash. Catching exceptions where they occur keeps the consequences of an error to a minimum.

Use a lazy creation strategy: create objects only when they are needed

Such as:

String str = "aaa";
if (i == 1) {
    list.add(str);
}

You are advised to replace it with:

if (i == 1) {
    String str = "aaa";
    list.add(str);
}

Specify final modifiers for classes and methods whenever possible

Specifying the final modifier for a class prevents it from being subclassed, and specifying final for a method prevents it from being overridden. If a class is declared final, all of its methods are implicitly final. The Java compiler looks for opportunities to inline final methods, and inlining is a significant boost to Java performance; improvements of up to 50% have been cited, though modern JIT compilers can often inline non-final methods as well.
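A minimal sketch of the idea (the class and its members are invented for illustration):

```java
// final prevents subclassing and overriding, which also gives the
// compiler a guarantee that the call target can never change.
final class Point {                  // no class may extend Point
    private final int x;
    Point(int x) { this.x = x; }
    int getX() { return x; }         // effectively final: the class is final
}
```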

Use stack variables (local variables) whenever possible

Parameters passed to a method, as well as temporary variables created during the call, are stored in the stack and accessed quickly, while other variables, such as static variables and instance variables, are created in the heap and are slower to access. In addition, variables in the stack disappear when the method finishes running, so they require no extra garbage collection.
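A small sketch (names invented) contrasting a stack-resident local variable with a heap-resident static field used as a loop accumulator:

```java
class Counters {
    static int staticSum;              // lives in the heap (class data)

    // The accumulator is a local variable in the stack frame.
    static int sumLocal(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;                    // the frame (and sum) vanish on return
    }

    // Same logic, but the accumulator is a static (heap) variable.
    static int sumStatic(int n) {
        staticSum = 0;
        for (int i = 0; i < n; i++) staticSum += i;
        return staticSum;              // staticSum lingers after the call
    }
}
```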

Close streams promptly

During Java programming, be careful with database connections and I/O stream operations: close them promptly after use to release the resources. Operating on such heavyweight objects imposes a large overhead on the system, and a slight mistake can lead to serious consequences.
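A minimal sketch using try-with-resources, the idiomatic way to guarantee a stream is closed (the example wraps an in-memory string rather than a real file, and the names are invented):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

class CloseStreams {
    // try-with-resources closes the reader automatically,
    // even if readLine() throws.
    static String firstLine(String text) {
        try (BufferedReader r = new BufferedReader(new StringReader(text))) {
            return r.readLine();
        } catch (IOException e) {
            return null;               // r.close() already ran by this point
        }
    }
}
```

The same pattern applies to database connections: any object implementing AutoCloseable can be declared in the try header.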

If you can estimate the number of elements to be added, specify an initial capacity for array-backed collections and utility classes

Examples include ArrayList, StringBuilder, StringBuffer, HashMap, HashSet, and so on. Take StringBuilder as an example:

  1. StringBuilder() // allocates space for 16 characters by default
  2. StringBuilder(int size) // allocates space for size characters
  3. StringBuilder(String str) // allocates space for 16 + str.length() characters

Setting an appropriate initial capacity for such a class (not just StringBuilder) can significantly improve performance. For StringBuilder, the capacity is the number of characters it can currently hold. When a StringBuilder reaches its capacity, it grows to twice the current capacity plus two, which means creating a new character array and copying the contents of the old array into it — a very expensive operation. Imagine the builder needs to hold about 5000 characters and no capacity was specified; the power of two nearest 5000 is 4096, and each expansion doubles the capacity plus two:

  1. Growing past 4096 allocates a new array of 4096 × 2 + 2 = 8194 characters; together with the 4096 already allocated, that is 12290 characters requested in total. Specifying 5000 at the start would have used less than half as much space.
  2. The original 4096 characters must all be copied into the new character array.

Failing to do this wastes memory and reduces efficiency, so setting a reasonable initial capacity for array-backed collections and utility classes never hurts and brings immediate results. Note, however, that for a collection like HashMap, which is implemented as an array plus linked lists, do not set the initial size to exactly your estimate, because the chance of each bucket holding exactly one object is almost zero. The recommended initial size is a power of two, and — given the default load factor of 0.75 — it should comfortably exceed the expected element count; for roughly 2000 elements, new HashMap(4096) leaves enough headroom.
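A minimal example (names invented) of presizing a StringBuilder so the internal array is allocated once instead of being repeatedly grown and copied:

```java
class Presize {
    // Builds an n-character string; presizing avoids every grow-and-copy step.
    static String build(int n) {
        StringBuilder sb = new StringBuilder(n); // one allocation up front
        for (int i = 0; i < n; i++) {
            sb.append('x');                      // never triggers a resize
        }
        return sb.toString();
    }
}
```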

Use shift operations for multiplication and division by powers of two

Such as:

for (val = 0; val < 100000; val += 5)
{
a = val * 8;
b = val / 2;
}

Using shift operations can improve performance, because at the machine level bit operations are the cheapest and fastest, so it is recommended to change this to:

for (val = 0; val < 100000; val += 5)
{
a = val << 3;
b = val >> 1;
}

The shift operation, while fast, can make the code difficult to understand, so it is best to comment accordingly.

Do not keep creating object references within the loop

If the loop count is large, creating a fresh object reference on every iteration wastes memory and gives the garbage collector extra work.
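A sketch (names invented) of hoisting an object out of a loop and reusing it instead of allocating one per iteration:

```java
import java.util.ArrayList;
import java.util.List;

class LoopAlloc {
    // Wasteful: a new StringBuilder is allocated on every iteration.
    static List<String> labelsNaive(int count) {
        List<String> out = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            StringBuilder sb = new StringBuilder(16); // count allocations
            out.add(sb.append("item-").append(i).toString());
        }
        return out;
    }

    // Better: one builder allocated outside the loop and reset each time.
    static List<String> labelsReuse(int count) {
        List<String> out = new ArrayList<>(count);
        StringBuilder sb = new StringBuilder(16);     // allocated once
        for (int i = 0; i < count; i++) {
            sb.setLength(0);                          // reset, keep the buffer
            out.add(sb.append("item-").append(i).toString());
        }
        return out;
    }
}
```

(Modern JITs can sometimes eliminate such allocations via escape analysis, but not reliably, so the hoisted form remains the safer habit in hot loops.)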

Prefer arrays; use ArrayList when the size cannot be determined

For reasons of efficiency and type checking, use a plain array whenever possible; resort to ArrayList only when the number of elements cannot be determined in advance.

Do not declare arrays to be public static final

Declaring an array public is a security hole: even with static final, only the reference is constant, and the array’s contents can still be changed by external classes.
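A small demonstration (names invented) of why a public static final array is not really constant, plus two safer alternatives:

```java
import java.util.List;

class Constants {
    // final fixes the reference, NOT the contents: any caller can mutate it.
    public static final int[] LEAKY = {1, 2, 3};

    // Safer: keep the array private and hand out defensive copies.
    private static final int[] SAFE = {1, 2, 3};
    static int[] safeCopy() { return SAFE.clone(); }

    // Or expose a genuinely immutable view (Java 9+).
    public static final List<Integer> SAFE_LIST = List.of(1, 2, 3);
}
```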

Use HashMap, ArrayList, StringBuilder whenever possible

Unless thread safety is required, the use of Hashtable, Vector, and StringBuffer is not recommended; because their methods are synchronized, they incur a performance overhead that their unsynchronized counterparts avoid.

Try to avoid arbitrary use of static variables

Remember that as long as an object is referenced by a variable declared static, the GC will not reclaim the heap memory that object occupies. For example:

public class A
{ 
private static B b = new B();
}

The static variable b has the same lifetime as class A: if class A is not unloaded, the B object referenced by b stays in memory until the program terminates.

Clear sessions that are no longer needed in a timely manner

To clear out inactive sessions, many application servers have a default session timeout, typically 30 minutes. When the application server needs to hold more sessions and memory runs low, the operating system may page some data out to disk, and the application server may passivate inactive sessions to disk according to an LRU-style (least recently used) algorithm, or even throw an out-of-memory error. Before a session can be dumped to disk it must be serialized, and in a large cluster serializing objects is expensive. Therefore, when a session is no longer needed, clear it immediately by calling HttpSession’s invalidate() method.

Myth about multithreading: The more threads, the better performance

A physical CPU core can execute only one thread at a time, and running more threads means context switching between them, which is costly. The number of threads must therefore be appropriate: the ideal case is N busy threads on N cores, with every core at 100% utilization, which maximizes system throughput. In reality, though, a single thread cannot monopolize a CPU at 100%: I/O operations, whether disk I/O or network I/O, are slow, and a thread waits while they complete, so its efficiency drops. That is why running several threads per physical core can feel more efficient: when one thread waits, the scheduler gives other threads a chance to run, making the most of the CPU.
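A hedged sketch (names invented) of sizing a fixed thread pool to the number of available cores; as the text notes, I/O-heavy workloads may justify a larger pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class PoolSizing {
    // Runs `tasks` trivial CPU-bound jobs on a pool with one thread per core.
    static int runTasks(int tasks) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> { done.incrementAndGet(); });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```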

Do not suspend threads if possible

Synchronization is unavoidable in multithreaded programs, and the most straightforward approach is a lock: only one thread may enter the critical section at a time, while the other related threads wait. There are two kinds of waiting: suspending the thread directly via operating system instructions, and spin waiting. Suspending directly through the operating system is a crude approach with poor performance, unsuitable for high-concurrency scenarios because of the large number of thread context switches it causes. If possible, try a limited spin wait first and suspend the thread only afterwards; this can avoid unnecessary cost. The implementation of ConcurrentHashMap in the JDK uses spin waiting in places, and at the JVM level the synchronized keyword is also optimized with spin waiting.
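A simplified sketch of the spin-then-park idea (the names and the spin limit are invented; real implementations such as those inside java.util.concurrent are far more sophisticated):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

class SpinThenBlock {
    // Spin a bounded number of times before falling back to parking.
    static void acquire(AtomicBoolean lock) {
        int spins = 0;
        while (!lock.compareAndSet(false, true)) {
            if (++spins < 1000) {
                Thread.onSpinWait();           // busy-wait hint (Java 9+)
            } else {
                LockSupport.parkNanos(1_000);  // back off: yield the CPU
                spins = 0;
            }
        }
    }

    static void release(AtomicBoolean lock) {
        lock.set(false);
    }
}
```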

A simple getter/setter method should be made final, which tells the compiler that the method will not be overridden

class MAF {
    private int _size;
    public void setSize(int size) { _size = size; }
}

Corrected:

class DAF_fixed {
    private int _size;
    public final void setSize(int size) { _size = size; }
}

Don’t overuse the negation operator (!)

The negation operator (!) reduces program readability, so don’t use it more than necessary.