Background
While stress-testing a web application built on Netty, the file upload feature threw the exception `io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 939524103, max: 954728448)`.
Comparing against the official sample code showed that the way the form is parsed affects resource release. This article walks through implementing file upload with Netty, the memory leak problem, and the several places where memory must be released during a file upload.
File upload form parsing
In this project, uploaded files do not need to be written to local disk. The upload flow is:
- The handler parses the file form, reads the file's byte data, and puts it on a business processing queue.
- Service threads consume the queue and analyze the uploaded files.
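The hand-off between the two steps can be sketched as a plain blocking queue; since only `byte[]` copies cross the thread boundary, no `ByteBuf` reference counting is involved (the class and method names below are illustrative, not from the project):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Minimal sketch of the handler -> queue -> service-thread hand-off. */
public class UploadPipeline {

    // Unbounded for brevity; a real deployment would cap this.
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    /** Called from the Netty handler after the form is parsed: enqueue a byte[] copy. */
    public void submit(byte[] fileBytes) {
        queue.offer(fileBytes);
    }

    /** Called from a service thread: block until a file is available, then analyze it. */
    public byte[] take() throws InterruptedException {
        return queue.take();
    }
}
```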
The initial implementation was adapted from sample code found online ("implementing file upload with Netty"). It worked fine in demo verification, but after a couple of days under stress testing it reported a large number of memory leaks.
After repeatedly checking the original code for leaks and releasing every necessary resource, we finally arrived at correct code. The MultipartRequest class parses the file form. Its output is a Map whose values are the files' binary byte[] contents, and it supports multi-file forms.
```java
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.multipart.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.HashMap;
import java.util.Map;

public class MultipartRequest {

    private final static Logger logger = LoggerFactory.getLogger(MultipartRequest.class);

    /** Decoder factory, shared by all requests. */
    private final static HttpDataFactory factory =
            new DefaultHttpDataFactory(DefaultHttpDataFactory.MINSIZE);

    private Map<String, byte[]> fileDatas;

    public Map<String, byte[]> getFileDatas() {
        return fileDatas;
    }

    public void setFileDatas(Map<String, byte[]> fileDatas) {
        this.fileDatas = fileDatas;
    }

    /**
     * Parses the form request and separates out the file byte data.
     *
     * @param request the full HTTP request
     * @return a MultipartRequest holding the files' byte[] contents, or null on failure
     */
    public static MultipartRequest createMultipartBody(FullHttpRequest request) {
        HttpPostRequestDecoder httpDecoder = null;
        try {
            httpDecoder = new HttpPostRequestDecoder(factory, request);
            httpDecoder.setDiscardThreshold(0);
            // Feed the full request into the decoder
            final HttpContent chunk = request;
            httpDecoder.offer(chunk);

            MultipartRequest multipartRequest = new MultipartRequest();
            Map<String, byte[]> fileContents = new HashMap<>();
            try {
                while (httpDecoder.hasNext()) {
                    InterfaceHttpData formData = httpDecoder.next();
                    if (formData == null) {
                        continue;
                    }
                    // File form field: copy its content out as byte[]
                    if (formData.getHttpDataType() == InterfaceHttpData.HttpDataType.FileUpload) {
                        FileUpload fileUpload = (FileUpload) formData;
                        if (fileUpload.isCompleted()) {
                            byte[] readData = fileUpload.get();
                            // 1. First release point: detach the FileUpload from the
                            //    decoder's cleanup list, then release it ourselves
                            httpDecoder.removeHttpDataFromClean(fileUpload);
                            fileContents.put(formData.getName(), readData);
                            fileUpload.release();
                        }
                    }
                }
            } catch (HttpPostRequestDecoder.EndOfDataDecoderException e) {
                // 3. Thrown when all data has been consumed; safe to ignore
            }
            // Store the file contents
            multipartRequest.setFileDatas(fileContents);
            return multipartRequest;
        } catch (Exception e) {
            logger.error("failed to parse the upload form", e);
        } finally {
            // 2. Second release point: decoder resources, which easily leak
            //    temporary files on disk
            if (httpDecoder != null) {
                try {
                    httpDecoder.cleanFiles();
                    httpDecoder.destroy();
                    httpDecoder = null;
                } catch (Exception e) {
                    logger.error("failed to release decoder resources", e);
                }
            }
        }
        return null;
    }
}
```
The code releases resources in three places, and one exception type is involved:
- `FileUpload` — the file form object, possibly backed by a temporary file on disk; released after its bytes are copied out.
- `HttpPostRequestDecoder` — the decoder itself, released in the `finally` block via `cleanFiles()` and `destroy()`.
- `EndOfDataDecoderException` — thrown when iteration reaches the end of the data; can be ignored.
Check whether memory leaks occur
Step one: add the detection parameter. Once resource leaks appear, first start the application with `-Dio.netty.leakDetectionLevel=advanced`; the leak report points at the decoder resources that still need to be released.

Step two: release the `httpDecoder`.
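For reference, the same level can also be set programmatically through Netty's `ResourceLeakDetector`, which is equivalent to the JVM flag (a minimal sketch):

```java
import io.netty.util.ResourceLeakDetector;

public class LeakDetection {
    public static void main(String[] args) {
        // Equivalent to -Dio.netty.leakDetectionLevel=paranoid on the command line.
        // Levels: DISABLED, SIMPLE, ADVANCED, PARANOID (sampling rate increases
        // with each level; PARANOID tracks every buffer).
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
        System.out.println(ResourceLeakDetector.getLevel()); // PARANOID
    }
}
```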
Releasing this resource was quite a winding road. Initially the decoder's data was traversed with the following code:
```java
List<InterfaceHttpData> interfaceHttpDataList = httpDecoder.getBodyHttpDatas();
for (InterfaceHttpData data : interfaceHttpDataList) {
    // If the entry is a file upload, keep it in the fileUploads map
    if (data != null && InterfaceHttpData.HttpDataType.FileUpload.equals(data.getHttpDataType())) {
        FileUpload fileUpload = (FileUpload) data;
        fileUploads.put(data.getName(), fileUpload);
    }
}
```
After calling `httpDecoder.destroy()`, debugging showed the reference count dropping to zero, yet the leak log stubbornly insisted the buffers were never released, round after round. The code looked fine, but the off-heap memory problem persisted under stress tests and long runs.

The fix was to stop traversing `getBodyHttpDatas()` and instead iterate with `while (httpDecoder.hasNext())`, releasing each `FileUpload` explicitly.
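A minimal end-to-end sketch of that while/hasNext release pattern, using a hand-built multipart request (the boundary, field names, and helper methods are illustrative, not from the project):

```java
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
import io.netty.handler.codec.http.multipart.FileUpload;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.multipart.InterfaceHttpData;
import io.netty.util.CharsetUtil;

public class DecodeLoopDemo {

    /** Builds a single-file multipart request by hand. */
    static DefaultFullHttpRequest buildRequest(String fileContent) {
        String boundary = "demoBoundary";
        String body = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"a.txt\"\r\n"
                + "Content-Type: text/plain\r\n\r\n"
                + fileContent + "\r\n"
                + "--" + boundary + "--\r\n";
        DefaultFullHttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.POST, "/upload",
                Unpooled.copiedBuffer(body, CharsetUtil.UTF_8));
        request.headers().set(HttpHeaderNames.CONTENT_TYPE,
                "multipart/form-data; boundary=" + boundary);
        request.headers().set(HttpHeaderNames.CONTENT_LENGTH,
                request.content().readableBytes());
        return request;
    }

    /** The while/hasNext pattern: copy bytes out, detach from the cleanup list, release. */
    static byte[] decodeFirstFile(DefaultFullHttpRequest request) throws Exception {
        // false => memory-backed data, no temp files on disk
        HttpPostRequestDecoder decoder =
                new HttpPostRequestDecoder(new DefaultHttpDataFactory(false), request);
        byte[] content = null;
        try {
            while (decoder.hasNext()) {
                InterfaceHttpData data = decoder.next();
                if (data instanceof FileUpload) {
                    FileUpload upload = (FileUpload) data;
                    if (upload.isCompleted() && content == null) {
                        content = upload.get();                  // copy out as byte[]
                        decoder.removeHttpDataFromClean(upload); // take ownership
                        upload.release();                        // release explicitly
                    }
                }
            }
        } catch (HttpPostRequestDecoder.EndOfDataDecoderException e) {
            // expected once all parts are consumed; safe to ignore
        } finally {
            decoder.destroy();
            request.release();
        }
        return content;
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = decodeFirstFile(buildRequest("hello"));
        System.out.println(new String(bytes, CharsetUtil.UTF_8));
    }
}
```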
Step three: continued observation still showed off-heap memory overflow. The overflow log pointed at the file form object behind `fileUpload.get()` and its related references. Once the code was completed with an explicit `fileUpload.release()`, no more memory leaks appeared.
Takeaways
The strange part of this leak is that when the decoder was traversed with a for loop and release was called, the reference counter appeared to reach zero somewhere, yet the buffer was never actually freed, producing the leak.
Also, on configuring the leak log level: at the advanced level only about 1% of buffers are sampled, so leaks are hard to observe. After releasing the decoder resources I thought the problem was solved; after switching to paranoid, the leak showed up within an hour of running. During testing and troubleshooting, set the level to the highest setting, which helps locate problems quickly.
Finally, about using the file form: since files do not need to be written to disk, it is best to extract byte[] directly with get(), because the ByteBuf behind the file form object also carries a leak risk and is harder to control.
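A small sketch of that get-then-release idea, using a memory-backed `FileUpload` standing in for a decoded form field (class and method names are illustrative):

```java
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.multipart.FileUpload;
import io.netty.handler.codec.http.multipart.MemoryFileUpload;
import java.nio.charset.StandardCharsets;

public class GetThenRelease {

    /** get() copies the bytes out of the ByteBuf, so the upload can be released at once. */
    static byte[] copyAndRelease(FileUpload upload) throws Exception {
        byte[] data = upload.get(); // fresh byte[] copy, independent of the ByteBuf
        upload.release();           // underlying buffer freed; data stays valid
        return data;
    }

    public static void main(String[] args) throws Exception {
        // Memory-backed FileUpload standing in for a decoded form field.
        MemoryFileUpload upload = new MemoryFileUpload(
                "file", "demo.txt", "text/plain", null, StandardCharsets.UTF_8, 5);
        upload.setContent(Unpooled.copiedBuffer("hello", StandardCharsets.UTF_8));
        byte[] data = copyAndRelease(upload);
        System.out.println(new String(data, StandardCharsets.UTF_8)); // hello
        System.out.println(upload.refCnt());                          // 0
    }
}
```

Because `get()` returns a copy, the byte array can safely outlive the released buffer, which is exactly why it is the easier option when files never touch disk.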