As Java backend developers, much of the code we write directly shapes the user experience. If our back-end code performs poorly, users waste time waiting for the server to respond whenever they visit our site, which makes for a poor experience and can lead to complaints or even user churn.

Performance tuning is a big topic. The book Java Program Performance Tuning describes five levels of performance tuning: design tuning, code tuning, JVM tuning, database tuning, and operating system tuning. Each level contains a number of methodologies and best practices. This article covers just a few common Java code optimizations that can actually be applied to a project.

1. Use singletons

The singleton pattern is useful when the number of instances must be limited, or when a shared instance should always be reused to save system overhead, for example in resource-intensive operations such as I/O handling, database connections, and configuration file parsing and loading.
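As a minimal sketch (the class name ConfigLoader is just an illustrative placeholder, not from the original article), a lazily initialized, thread-safe singleton can be written with the initialization-on-demand holder idiom:

public class ConfigLoader {
    private ConfigLoader() {
        // Expensive work such as parsing a configuration file happens only once.
    }

    // The holder class is loaded on the first call to getInstance();
    // class loading guarantees thread safety without explicit locking.
    private static class Holder {
        private static final ConfigLoader INSTANCE = new ConfigLoader();
    }

    public static ConfigLoader getInstance() {
        return Holder.INSTANCE;
    }
}

Callers then share ConfigLoader.getInstance() instead of constructing new instances each time.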

2. Use the Future pattern

Suppose a task takes some time to execute. To avoid unnecessary waiting, we first obtain a "pickup ticket", which is the Future, and then continue with other work. When the "goods" arrive, that is, when the task finishes executing, we use the ticket to pick them up: the return value is obtained from the Future object.

import java.util.concurrent.*;

public class RealData implements Callable<String> {
    protected String data;

    public RealData(String data) {
        this.data = data;
    }

    @Override
    public String call() throws Exception {
        // Simulate a time-consuming computation
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return data;
    }
}

public class Application {
    public static void main(String[] args) throws Exception {
        FutureTask<String> futureTask = new FutureTask<String>(new RealData("name"));
        ExecutorService executor = Executors.newFixedThreadPool(1);
        // Submit the FutureTask to the thread pool, equivalent to client.request("name") sending the request
        executor.submit(futureTask);
        // The main thread can do other work here while the task runs
        Thread.sleep(2000);
        // Retrieve the real data; if call() has not finished yet, get() blocks until it does
        System.out.println("Data = " + futureTask.get());
        executor.shutdown();
    }
}

3. Use thread pools

Using thread pools properly brings three benefits. First, reduced resource consumption: reusing already-created threads lowers the cost of thread creation and destruction. Second, improved response speed: when a task arrives, it can be executed immediately without waiting for a thread to be created. Third, better thread manageability: threads are a scarce resource, and creating them without limit not only consumes system resources but also reduces system stability. A thread pool allows unified allocation, tuning, and monitoring.

Since Java 5, the concurrency library has provided a set of new APIs for starting, scheduling, and managing threads. The Executor framework, introduced in Java 5 in the java.util.concurrent package, uses a thread pool mechanism to control the starting, execution, and shutdown of threads and simplifies concurrent programming.

import com.google.common.util.concurrent.ThreadFactoryBuilder; // Guava's ThreadFactoryBuilder
import java.util.concurrent.*;

public class MultiThreadTest {
    public static void main(String[] args) {
        ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("thread-%d").build();
        ExecutorService executor = new ThreadPoolExecutor(2, 5, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), threadFactory);
        executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("hello world !");
            }
        });
        System.out.println(" ===> main Thread! ");
        executor.shutdown(); // Let the pool's threads exit so the JVM can terminate
    }
}

4. Use NIO

Since version 1.4, the JDK has offered a new I/O library, NIO for short. It introduces not only efficient buffers and channels but also a Selector-based non-blocking I/O mechanism, in which multiple asynchronous I/O operations are handled by one or a few threads. Using NIO instead of blocking I/O can improve a program's concurrent throughput and reduce system overhead. If a separate thread is opened for each request, and the client transfers data only intermittently rather than continuously, the corresponding threads spend much of their time waiting on I/O and context switching. The Selector mechanism introduced by NIO alleviates this and improves the concurrency efficiency of the program.

import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class NioTest {
    public static void main(String[] args) throws Exception {
        FileInputStream fin = new FileInputStream("c:\\test.txt");
        // Get the channel
        FileChannel fc = fin.getChannel();
        // Create a buffer
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        // Read data into the buffer
        fc.read(buffer);
        buffer.flip();
        while (buffer.remaining() > 0) {
            byte b = buffer.get();
            System.out.print((char) b);
        }
        fin.close();
    }
}
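The Selector mechanism mentioned above is not shown in that snippet. As a rough sketch only (the port number 8080 and the minimal read loop are illustrative assumptions, not from the original article), a single thread can multiplex many connections like this:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // Block until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {            // New connection: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {       // Data available: read without blocking a thread
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {
                        client.close();
                    }
                }
            }
        }
    }
}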

5. Lock optimization

In concurrent scenarios, locks are often used in our code. Where there are locks, there is lock contention, and lock contention consumes a lot of resources. So how do we optimize locks in our Java code? The following aspects are worth considering:

  • Reduce lock holding time

    You can use a synchronized block instead of a synchronized method so that the lock is held only around the code that actually needs it. This reduces the time the lock is held.
  • Reduce lock granularity

    When using maps in concurrent scenarios, remember to use ConcurrentHashMap instead of Hashtable or a synchronized HashMap, since it locks at a finer granularity.
  • Lock separation

    An ordinary lock (such as synchronized) blocks readers and writers alike: reads block reads as well as writes. Read and write operations can be separated, for example with a read-write lock; see the sketch after this list.
  • Lock coarsening

    In some cases we want to consolidate multiple lock requests into one request to reduce the performance cost of a large number of lock requests, synchronizations, and releases in a short period of time.
  • Lock elimination

    Lock elimination is an optimization in which the Java virtual machine, during JIT compilation, scans the running context and removes locks on objects that cannot possibly be contended. Lock elimination saves the cost of meaningless lock requests.
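As a rough illustration of lock separation (ReadWriteCache is a hypothetical class, not from the original article), a ReentrantReadWriteLock lets many readers proceed in parallel while writers still get exclusive access:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache<K, V> {
    private final Map<K, V> map = new HashMap<K, V>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key) {
        lock.readLock().lock();          // Many readers may hold the read lock at the same time
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock();         // Writers get exclusive access
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}

The JDK's ReentrantReadWriteLock (and, since Java 8, StampedLock) are the usual tools for this kind of separation.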

6. Compress transmitted data

Before transmitting data, compress it to reduce the number of bytes sent over the network and improve transmission speed; the receiver then decompresses the data to restore it. Compressed data also takes less space on storage media (disk or memory) and less network bandwidth, reducing cost. Of course, compression is not free: it requires a lot of CPU computation, and both the computational complexity and the compression ratio vary greatly between algorithms, so you generally need to choose a compression algorithm based on the business scenario.
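As a minimal sketch of the idea using the JDK's built-in GZIP support (the byte-array round trip is only for illustration):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipUtil {
    // Compress raw bytes before sending them over the network
    public static byte[] compress(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(bos);
        gzip.write(data);
        gzip.close();                    // Flushes and writes the GZIP trailer
        return bos.toByteArray();
    }

    // Decompress on the receiving side to restore the original bytes
    public static byte[] decompress(byte[] compressed) throws Exception {
        GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int len;
        while ((len = gzip.read(buf)) > 0) {
            bos.write(buf, 0, len);
        }
        return bos.toByteArray();
    }
}

For hot paths, faster algorithms with lower compression ratios (such as Snappy or LZ4, via third-party libraries) are commonly preferred over GZIP.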

7. Cache results

For identical user requests, querying the database and recomputing the result every time wastes a lot of time and resources. Caching the computed result in local memory, or in a distributed cache, saves precious CPU resources and avoids duplicate database queries or disk I/O, turning physical head movement on disk into electronic movement in memory. This improves response speed, and because threads are released sooner, the application's capacity also increases.
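A very small local-cache sketch using the JDK's ConcurrentHashMap (UserCache and loadFromDatabase are hypothetical placeholders for the expensive query being cached):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class UserCache {
    private final ConcurrentMap<Long, String> cache = new ConcurrentHashMap<Long, String>();

    public String getUser(long id) {
        // Compute the value only on a cache miss; later calls reuse the cached result
        return cache.computeIfAbsent(id, key -> loadFromDatabase(key));
    }

    private String loadFromDatabase(long id) {
        // Placeholder for an expensive database query or computation
        return "user-" + id;
    }
}

For caches that need expiry or size limits, a caching library or the distributed cache mentioned above is usually a better fit than a bare map.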