To test it, the following program times `System.currentTimeMillis()` calls three ways: from a single thread, from many threads calling it directly, and from many threads reading the cache utility class.
```java
import java.util.concurrent.CountDownLatch;

public class TimeDemo {

    public static int systemCore = Runtime.getRuntime().availableProcessors();

    public static void main(String[] args) throws InterruptedException {
        int size = 10000000;  // calls per thread
        int coreSize = 300;   // number of threads

        CountDownLatch countDownLatch = new CountDownLatch(coreSize);
        Thread[] thread = getThread(coreSize, countDownLatch, size, false);
        CountDownLatch countDownLatch1 = new CountDownLatch(coreSize);
        // The cache test must use MyRunnableCache, so pass useCache = true.
        Thread[] thread1 = getThread(coreSize, countDownLatch1, size, true);

        singleThread(size);
        multiThread(size, thread, countDownLatch);
        multiThreadCache(size, thread1, countDownLatch1);
    }

    public static void singleThread(int size) {
        long start = System.nanoTime();
        for (int i = 0; i < size; i++) {
            System.currentTimeMillis();
        }
        long ns = System.nanoTime() - start;
        System.out.println("Single thread; calls: " + size
                + "; time (ns): " + ns + "; ms: " + (ns / 1000000));
    }

    public static void multiThread(int size, Thread[] threads, CountDownLatch countDownLatch)
            throws InterruptedException {
        long start = System.nanoTime();
        for (Thread thread : threads) {
            thread.start();
        }
        countDownLatch.await();
        long ns = System.nanoTime() - start;
        System.out.println("Cores: " + systemCore + "; threads: " + threads.length
                + "; calls per thread: " + size
                + "; time (ns): " + ns + "; ms: " + (ns / 1000000));
    }

    public static void multiThreadCache(int size, Thread[] threads, CountDownLatch countDownLatch)
            throws InterruptedException {
        long start = System.nanoTime();
        for (Thread thread : threads) {
            thread.start();
        }
        countDownLatch.await();
        long ns = System.nanoTime() - start;
        System.out.println("Cache - cores: " + systemCore + "; threads: " + threads.length
                + "; calls per thread: " + size
                + "; time (ns): " + ns + "; ms: " + (ns / 1000000));
    }

    // Builds the worker threads; useCache selects MyRunnableCache instead of MyRunnable.
    public static Thread[] getThread(int threadSize, CountDownLatch countDownLatch,
                                     int size, boolean useCache) {
        Thread[] threads = new Thread[threadSize];
        for (int i = 0; i < threadSize; i++) {
            Runnable r = useCache
                    ? new MyRunnableCache(countDownLatch, size)
                    : new MyRunnable(countDownLatch, size);
            threads[i] = new Thread(r);
        }
        return threads;
    }

    static class MyRunnable implements Runnable {
        private final CountDownLatch countDownLatch;
        private final int size;

        public MyRunnable(CountDownLatch countDownLatch, int size) {
            this.countDownLatch = countDownLatch;
            this.size = size;
        }

        @Override
        public void run() {
            for (int i = 0; i < size; i++) {
                System.currentTimeMillis();
            }
            countDownLatch.countDown();
        }
    }

    static class MyRunnableCache implements Runnable {
        private final CountDownLatch countDownLatch;
        private final int size;

        public MyRunnableCache(CountDownLatch countDownLatch, int size) {
            this.countDownLatch = countDownLatch;
            this.size = size;
        }

        @Override
        public void run() {
            for (int i = 0; i < size; i++) {
                CacheMillisecondClock.getInstance().getTime();
            }
            countDownLatch.countDown();
        }
    }
}
```
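The measurement pattern above (start the clock, launch all workers, `await()` the latch, stop the clock) can be reduced to a small reusable helper. This is a sketch of mine, not from the article; the class name `LatchTimer` is an assumption:

```java
import java.util.concurrent.CountDownLatch;

public class LatchTimer {
    // Runs `task` once on each of `threads` new threads and returns
    // the elapsed wall-clock time in nanoseconds.
    public static long time(int threads, Runnable task) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(threads);
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    task.run();
                } finally {
                    done.countDown(); // always count down, even if the task throws
                }
            }).start();
        }
        done.await(); // block until every worker has finished
        return System.nanoTime() - start;
    }
}
```

Note that `nanoTime`-based measurements like this include JIT warm-up and thread start-up costs, so the ratios below are rough indicators rather than precise benchmarks.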
The cache utility class:
```java
import java.util.concurrent.TimeUnit;

public class CacheMillisecondClock {

    private static volatile long time = 0L;
    private static long interval = 1000;
    private volatile static CacheMillisecondClock cacheMillisecondClock;

    private CacheMillisecondClock() {
        startCacheClock();
    }

    // Background thread refreshes the cached timestamp every `interval` ms.
    // Without the loop the thread would update the time once and exit.
    private void startCacheClock() {
        Thread clock = new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    time = System.currentTimeMillis();
                    try {
                        TimeUnit.MILLISECONDS.sleep(interval);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        clock.setDaemon(true); // don't keep the JVM alive for the clock
        clock.start();
    }

    // Double-checked locking singleton.
    public static CacheMillisecondClock getInstance() {
        if (cacheMillisecondClock == null) {
            synchronized (CacheMillisecondClock.class) {
                if (cacheMillisecondClock == null) {
                    cacheMillisecondClock = new CacheMillisecondClock();
                }
            }
        }
        return cacheMillisecondClock;
    }

    public long getTime() {
        return time; // plain volatile read, no native call
    }
}
```
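The same cache can be built on a `ScheduledExecutorService` instead of a hand-rolled sleep loop. This is an alternative sketch of mine, not part of the article; the class name `ScheduledCacheClock` and the 1 ms refresh period are my choices:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Variant of the cached clock driven by a single-threaded scheduler.
public class ScheduledCacheClock {
    private static volatile long time = System.currentTimeMillis();

    static {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "cache-clock");
            t.setDaemon(true); // don't keep the JVM alive for the clock
            return t;
        });
        // Refresh the cached timestamp on a fixed schedule.
        scheduler.scheduleAtFixedRate(
                () -> time = System.currentTimeMillis(),
                1, 1, TimeUnit.MILLISECONDS);
    }

    public static long getTime() {
        return time;
    }
}
```

Reading `time` is still just a volatile read, which is why a cached clock can beat the native `System.currentTimeMillis()` call on some platforms.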
Spring Boot's embedded Tomcat defaults to 200 worker threads, so I first tested with 200 threads.
Windows (200 threads, 6 cores)
With 200 threads each making 10,000,000 calls, the total is 200 * 10,000,000 = 2,000,000,000 calls. Since the machine has only 6 cores, at most 6 threads run in parallel, so the multi-threaded run should ideally take about 200/6 = 33.33 times as long as the single-threaded run; the measured ratio was 1395 ms / 37 ms = 37.70. The gap from the ideal is small, which shows that `System.currentTimeMillis()` has no obvious contention problem here. The cache utility class came in at 1181 ms / 37 ms = 31.91 times.
Windows (300 threads, 6 cores)
The cache utility class took slightly less time than calling `System.currentTimeMillis()` directly, so the performance difference turned out to be very small.
Next, I tested whether there are any performance issues on Linux.
There the gap is also very small, and it turns out the cache utility class works correctly even under high concurrency.
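The trade-off to keep in mind is accuracy: with `interval = 1000`, the cached value can lag real time by up to about one second. The following standalone sketch (mine, not from the article) samples that lag; the class name `StalenessDemo` is an assumption:

```java
import java.util.concurrent.TimeUnit;

// Measures how far a 1-second cached clock can drift from the real time.
public class StalenessDemo {
    private static volatile long cached = System.currentTimeMillis();

    static {
        Thread refresher = new Thread(() -> {
            while (true) {
                cached = System.currentTimeMillis();
                try {
                    TimeUnit.SECONDS.sleep(1); // same interval as CacheMillisecondClock
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        refresher.setDaemon(true);
        refresher.start();
    }

    // Samples the lag over ~1.5 seconds and returns the largest value seen.
    public static long maxObservedLag() {
        long max = 0;
        try {
            for (int i = 0; i < 5; i++) {
                max = Math.max(max, System.currentTimeMillis() - cached);
                TimeUnit.MILLISECONDS.sleep(300);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return max;
    }
}
```

In other words, the cached clock is a good fit for logging or coarse timeouts, where sub-second accuracy does not matter.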