Currently, our project needs to store files, videos, and similar assets, so I looked into several distributed file systems: Lustre, HDFS, GlusterFS, Alluxio, Ceph, and FastDFS. Here is a brief introduction to each:

  1. Lustre is a large-scale, secure, and highly available clustered file system developed and maintained by Sun. The project's main objective is to develop a next-generation clustered file system; it currently supports more than 10,000 nodes and petabytes of data storage.

  2. HDFS, the Hadoop Distributed File System, is a highly fault-tolerant distributed file system designed to run on inexpensive machines. It provides high-throughput data access and is well suited to applications that operate on large data sets.

  3. GlusterFS is a clustered file system that supports petabytes of data. GlusterFS aggregates storage space distributed across different servers into a large networked parallel file system using RDMA and TCP/IP.

  4. Alluxio, formerly known as Tachyon, is a memory-centric distributed file system with high performance and fault tolerance. It provides reliable, memory-speed file sharing services for cluster frameworks such as Spark and MapReduce.

  5. Ceph is a new-generation open-source distributed file system. Its main goals are improved data fault tolerance and seamless replication; it is a POSIX-based distributed file system with no single point of failure.

  6. FastDFS is an open-source lightweight distributed file system. It manages files, including file storage, file synchronization, and file access (upload and download), and it solves the problems of large-capacity storage and load balancing. It is especially suitable for online services built around files, such as photo album and video websites. FastDFS is tailor-made for the Internet: it takes redundant backup, load balancing, and linear scaling into account, emphasizes high availability and high performance, and makes it easy to set up a high-performance file server cluster that provides file upload and download services.

After comparing the six file systems above, FastDFS fits our business best, so we studied it and found it genuinely powerful. Here we would like to thank Yu Qing, a senior architect at Taobao, for open-sourcing such an excellent lightweight distributed file system. This article documents the installation and configuration of FastDFS 5.0.9 on CentOS 7.

1. FastDFS overview

FastDFS is an open-source lightweight distributed file system consisting of tracker servers, storage servers, and clients. It solves the problem of storing massive amounts of data and is especially suitable for online services built on small and medium-sized files (recommended size: 4 KB < file_size < 500 MB).

The FastDFS system structure is shown below:

Both the tracker and the storage nodes can consist of one or more servers. Servers in either role can be added or taken offline at any time without affecting online services. All tracker servers are peers, and their number can be increased or reduced at any time according to server load.

To support large capacity, storage nodes (servers) are organized into volumes (also called groups). A storage system consists of one or more volumes, and the files in different volumes are independent of each other; the capacity of the whole storage system is the sum of the capacities of all its volumes. A volume can consist of one or more storage servers, and the files on all storage servers within a volume are identical, so the multiple storage servers in a volume provide redundancy and load balancing.

When a server is added to a volume, the system automatically synchronizes existing files. After the synchronization is complete, the system automatically switches the new server to online services.

When the storage space is insufficient or about to be used up, you can dynamically add volumes. You only need to add one or more servers and configure them as a new volume, thus increasing the capacity of the storage system.
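As a concrete illustration of volumes: the file ID that FastDFS returns after an upload starts with the volume (group) name, followed by the file's path on the storage server. Below is a minimal sketch, using a hypothetical file ID, of how the two parts are separated:

public class FileIdDemo {
    public static void main(String[] args) {
        // Hypothetical file ID returned by an upload: "<group name>/<path on the storage server>"
        String fileId = "group1/M00/00/00/wKgAcFsExample.jpg";
        int slash = fileId.indexOf('/');
        String group = fileId.substring(0, slash);        // "group1" - the volume the file lives in
        String remotePath = fileId.substring(slash + 1);  // path later used for download/delete
        System.out.println(group + " | " + remotePath);
    }
}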

2. FastDFS download

FastDFS stable version download address

3. FastDFS installation

For details, see CentOS 7 Installation and Configuration FastDFS 5.0.5

4. SpringMVC uploads files to FastDFS

4.1 fdfs_client.conf configuration

# connection timeout in seconds
connect_timeout = 2
# network timeout in seconds
network_timeout = 30
# character set
charset = UTF-8
# tracker server HTTP port
http.tracker_http_port = 9099
http.anti_steal_token = no
http.secret_key = FastDFS1234567890
# tracker server address; the tracker serves primarily as a load balancer
tracker_server = 192.168.0.116:22122

4.2 FastDFS File Upload Process

The interaction flow for uploading a file is as follows (a minimal client sketch follows the list):

  1. The client asks the tracker which storage server it should upload to; no parameters are required.
  2. The tracker returns an available storage server.
  3. The client communicates with that storage server to upload the file.
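A minimal sketch of these three steps with the fastdfs-client-java API used later in this article; the configuration path and the local file path are hypothetical:

import org.csource.fastdfs.*;

import java.nio.file.Files;
import java.nio.file.Paths;

public class UploadFlowDemo {
    public static void main(String[] args) throws Exception {
        // Load the client configuration (hypothetical path)
        ClientGlobal.init("/etc/fdfs/fdfs_client.conf");
        // 1. Ask the tracker for a storage server
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        // 2. The tracker returns an available storage server; build a StorageClient1 from it
        StorageClient1 storageClient = new StorageClient1(trackerServer, null);
        // 3. Upload the file content to the storage server; the returned file ID is "group/path"
        byte[] content = Files.readAllBytes(Paths.get("/tmp/demo.jpg")); // hypothetical local file
        String fileId = storageClient.upload_file1(content, "jpg", null);
        System.out.println("fileId = " + fileId);
        trackerServer.close();
    }
}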

4.3 FastDFS File Download Process

The interaction flow for downloading a file is as follows (a matching sketch follows the list):

  1. The client asks the tracker which storage server holds the file; the parameter is the file ID (group name and file name).
  2. The tracker returns an available storage server.
  3. The client communicates with that storage server to download the file.
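And a matching sketch of the download steps; the group name and remote file name are hypothetical placeholders for a real file ID:

import org.csource.fastdfs.*;

public class DownloadFlowDemo {
    public static void main(String[] args) throws Exception {
        // Load the client configuration (hypothetical path)
        ClientGlobal.init("/etc/fdfs/fdfs_client.conf");
        // 1. Ask the tracker which storage server holds the file
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        // 2. Build a StorageClient against the storage server the tracker returns
        StorageClient storageClient = new StorageClient(trackerServer, null);
        // 3. Download by group name + remote file name (hypothetical values)
        byte[] content = storageClient.download_file("group1", "M00/00/00/wKgAcFsExample.jpg");
        System.out.println(content == null ? "file not found" : content.length + " bytes downloaded");
        trackerServer.close();
    }
}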

Note that the "client" here is whatever program uses the FastDFS service; when it calls the tracker and storage servers it plays the client role, but it is usually itself an application server.

4.4 FastDFSUtil Encapsulation

package com.lidong.dubbo.util;

import org.csource.common.MyException;
import org.csource.common.NameValuePair;
import org.csource.fastdfs.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.io.ClassPathResource;
import org.springframework.web.multipart.MultipartFile;

import javax.servlet.http.HttpServletResponse;
import java.io.*;
import java.util.HashMap;
import java.util.Map;

/**
 * Project name: lidong-dubbo
 * Class name: FastDFSUtil
 * Description: FastDFS uploads files to the file server
 * Author: lidong
 * Created: 5:23 PM
 * Company: chni
 * QQ/Email: [email protected]
 */
public class FastDFSUtil {

    private final static Logger logger = LoggerFactory.getLogger(FastDFSUtil.class);


    /**
     * Upload a file that already exists on the server by invoking the Linux
     * fdfs_upload_file client command.
     *
     * @param filePath absolute file path on the server
     * @return Map<String,Object>: code - return code, group - file group, msg - file path / error message
     */
    public static Map<String, Object> uploadLocalFile(String filePath) {
        Map<String, Object> retMap = new HashMap<String, Object>();
        // 1. Command used to upload the file
        String command = "fdfs_upload_file /etc/fdfs/client.conf " + filePath;
        // 2. Holds the information returned by the upload
        String fileId = "";
        InputStreamReader inputStreamReader = null;
        BufferedReader bufferedReader = null;
        try {
            // 3. Run the Linux command to upload the file
            Process process = Runtime.getRuntime().exec(command);
            // 4. Read the information returned after the upload
            inputStreamReader = new InputStreamReader(process.getInputStream());
            bufferedReader = new BufferedReader(inputStreamReader);
            String line;
            if ((line = bufferedReader.readLine()) != null) {
                fileId = line;
            }
            // 5. If fileId contains "M00", the upload succeeded; otherwise it failed
            if (fileId.contains("M00")) {
                retMap.put("code", "0000");
                retMap.put("group", fileId.substring(0, 6));
                retMap.put("msg", fileId.substring(7, fileId.length()));
            } else {
                retMap.put("code", "0001");  // upload error
                retMap.put("msg", fileId);   // returned information
            }
        } catch (Exception e) {
            logger.error("IOException:" + e.getMessage());
            retMap.put("code", "0002");
            retMap.put("msg", e.getMessage());
        } finally {
            if (inputStreamReader != null) {
                try {
                    inputStreamReader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (bufferedReader != null) {
                try {
                    bufferedReader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        return retMap;
    }


    /**
     * Upload a local file directly to the server via the FastDFS Java client.
     *
     * @param filePath absolute local file path
     * @return Map<String,Object>: code - return code, group - file group, msg - file path / error message
     */
    public static Map<String, Object> upload(String filePath) {
        Map<String, Object> retMap = new HashMap<String, Object>();
        File file = new File(filePath);
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        if (file.isFile()) {
            try {
                String tempFileName = file.getName();
                // FileUtil is a project-local helper that reads the whole file into a byte array
                byte[] fileBuff = FileUtil.getBytesFromFile(file);
                String fileId = "";
                // extract the file name suffix
                String fileExtName = tempFileName.substring(tempFileName.lastIndexOf(".") + 1);
                ConfigAndConnectionServer configAndConnectionServer = new ConfigAndConnectionServer().invoke(1);
                StorageClient1 storageClient1 = configAndConnectionServer.getStorageClient1();
                storageServer = configAndConnectionServer.getStorageServer();
                trackerServer = configAndConnectionServer.getTrackerServer();

                // 4. Set the file meta data, then call upload_file1 on the client to upload the file
                NameValuePair[] metaList = new NameValuePair[3];
                // original file name
                metaList[0] = new NameValuePair("fileName", tempFileName);
                // file suffix
                metaList[1] = new NameValuePair("fileExtName", fileExtName);
                // file size
                metaList[2] = new NameValuePair("fileLength", String.valueOf(file.length()));
                // start uploading the file
                fileId = storageClient1.upload_file1(fileBuff, fileExtName, metaList);
                retMap = handleResult(retMap, fileId);
            } catch (Exception e) {
                e.printStackTrace();
                retMap.put("code", "0002");
                retMap.put("msg", e.getMessage());
            } finally {
                // 5. Close the tracker server connection
                colse(storageServer, trackerServer);
            }
        } else {
            retMap.put("code", "0001");
            retMap.put("msg", "Error: local file does not exist!");
        }
        return retMap;
    }


    /**
     * Upload a file received from a remote client, via MultipartFile.
     *
     * @param file uploaded file stream
     * @return Map<String,Object>: code - return code, group - file group, msg - file path / error message
     */
    public static Map<String, Object> upload(MultipartFile file) {
        Map<String, Object> retMap = new HashMap<String, Object>();
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        try {
            if (file.isEmpty()) {
                retMap.put("code", "0001");
                retMap.put("msg", "Error: File is empty!");
            } else {
                ConfigAndConnectionServer configAndConnectionServer = new ConfigAndConnectionServer().invoke(1);
                StorageClient1 storageClient1 = configAndConnectionServer.getStorageClient1();
                storageServer = configAndConnectionServer.getStorageServer();
                trackerServer = configAndConnectionServer.getTrackerServer();
                String tempFileName = file.getOriginalFilename();
                // set the meta information
                NameValuePair[] metaList = new NameValuePair[3];
                // original file name
                metaList[0] = new NameValuePair("fileName", tempFileName);
                byte[] fileBuff = file.getBytes();
                String fileId = "";
                // extract the file name suffix
                String fileExtName = tempFileName.substring(tempFileName.lastIndexOf(".") + 1);
                // file suffix
                metaList[1] = new NameValuePair("fileExtName", fileExtName);
                // file size
                metaList[2] = new NameValuePair("fileLength", String.valueOf(file.getSize()));
                // 4. Call upload_file1 to upload the file
                fileId = storageClient1.upload_file1(fileBuff, fileExtName, metaList);
                retMap = handleResult(retMap, fileId);
            }
        } catch (Exception e) {
            retMap.put("code", "0002");
            retMap.put("msg", "Error: Failed to upload file!");
        } finally {
            // 5. Close the tracker server connection
            colse(storageServer, trackerServer);
        }
        return retMap;
    }


    /**
     * Download a file.
     *
     * @param response HTTP response used to write the file back to the caller
     * @param group    file group, for example "group0"
     * @param filepath file path on the storage server, starting with "M00/"
     * @param downname file name presented to the downloader
     */
    public static void download(HttpServletResponse response, String group, String filepath, String downname) {
        StorageServer storageServer = null;
        TrackerServer trackerServer = null;
        try {
            ConfigAndConnectionServer configAndConnectionServer = new ConfigAndConnectionServer().invoke(0);
            StorageClient storageClient = configAndConnectionServer.getStorageClient();
            storageServer = configAndConnectionServer.getStorageServer();
            trackerServer = configAndConnectionServer.getTrackerServer();

            // 4. Call download_file on the client to download the file
            byte[] b = storageClient.download_file(group, filepath);
            if (b == null) {
                logger.error("Error1 : file not Found!");
                response.getWriter().write("Error1 : file not Found!");
            } else {
                logger.info("Download file...");
                downname = new String(downname.getBytes("utf-8"), "ISO8859-1");
                response.setHeader("Content-Disposition", "attachment; fileName=" + downname);
                OutputStream out = response.getOutputStream();
                out.write(b);
                out.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
            try {
                response.getWriter().write("Error1 : file not Found!");
            } catch (IOException e1) {
                e1.printStackTrace();
            }
        } finally {
            // 5. Close the tracker server connection
            colse(storageServer, trackerServer);
        }
    }

    /**
     * Delete a file.
     *
     * @param group    file group
     * @param filepath file path starting with "M00/"
     * @return Map<String,Object>: code - return code, msg - error message
     */
    public static Map<String, Object> delete(String group, String filepath) {
        Map<String, Object> retMap = new HashMap<String, Object>();
        StorageServer storageServer = null;
        TrackerServer trackerServer = null;
        try {
            ConfigAndConnectionServer configAndConnectionServer = new ConfigAndConnectionServer().invoke(0);
            StorageClient storageClient = configAndConnectionServer.getStorageClient();
            storageServer = configAndConnectionServer.getStorageServer();
            trackerServer = configAndConnectionServer.getTrackerServer();
            // 4. Call delete_file on the client to delete the file (0 means success)
            int i = storageClient.delete_file(group, filepath);
            if (i == 0) {
                retMap.put("code", "0000");
                retMap.put("msg", "Delete successful!");
            } else {
                retMap.put("code", "0001");
                retMap.put("msg", "File does not exist!");
            }
        } catch (Exception e) {
            e.printStackTrace();
            retMap.put("code", "0002");
            retMap.put("msg", "Delete failed!");
        } finally {
            // 5. Close the tracker server connection
            colse(storageServer, trackerServer);
        }
        return retMap;
    }

    /**
     * Close the connections to the storage and tracker servers.
     *
     * @param storageServer storage server connection
     * @param trackerServer tracker server connection
     */
    private static void colse(StorageServer storageServer, TrackerServer trackerServer) {
        if (storageServer != null && trackerServer != null) {
            try {
                storageServer.close();
                trackerServer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Handle the result returned by the file server after an upload.
     *
     * @param retMap map to fill with the result
     * @param fileId file ID returned by the server
     * @return the filled map
     */
    private static Map<String, Object> handleResult(Map<String, Object> retMap, String fileId) {
        if (fileId != null && !fileId.equals("")) {
            retMap.put("code", "0000");
            retMap.put("group", fileId.substring(0, 6));
            retMap.put("msg", fileId.substring(7, fileId.length()));
        } else {
            retMap.put("code", "0003");
            retMap.put("msg", "Error: Upload failed!");
        }
        return retMap;
    }

    /**
     * Project name: lidong-dubbo
     * Class name: ConfigAndConnectionServer
     * Description: loads the client configuration and opens the tracker/storage connections
     * Author: lidong
     * Created: 2017/2/7
     * Company: chni
     * QQ/Email: [email protected]
     */
    private static class ConfigAndConnectionServer {
        private TrackerServer trackerServer;
        private StorageServer storageServer;
        private StorageClient storageClient;
        private StorageClient1 storageClient1;

        public TrackerServer getTrackerServer() {
            return trackerServer;
        }

        public StorageServer getStorageServer() {
            return storageServer;
        }

        public StorageClient getStorageClient() {
            return storageClient;
        }

        public StorageClient1 getStorageClient1() {
            return storageClient1;
        }

        public ConfigAndConnectionServer invoke(int flag) throws IOException, MyException {
            // 1. Locate the FastDFS client configuration file on the classpath
            ClassPathResource cpr = new ClassPathResource("fdfs_client.conf");
            // 2. Initialize the client with the configuration file
            ClientGlobal.init(cpr.getClassLoader().getResource("fdfs_client.conf").getPath());
            TrackerClient tracker = new TrackerClient();
            // 3. Establish a connection to the tracker
            trackerServer = tracker.getConnection();
            storageServer = null;
            // Construct a StorageClient if flag == 0, otherwise a StorageClient1
            if (flag == 0) {
                storageClient = new StorageClient(trackerServer, storageServer);
            } else {
                storageClient1 = new StorageClient1(trackerServer, storageServer);
            }
            return this;
        }
    }
}
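A brief usage sketch of the utility class above; the local file path and the values shown in the comments are hypothetical:

import com.lidong.dubbo.util.FastDFSUtil;

import java.util.Map;

public class FastDFSUtilDemo {
    public static void main(String[] args) {
        // Upload a local file (hypothetical path) through the FastDFS Java client
        Map<String, Object> up = FastDFSUtil.upload("/tmp/demo.jpg");
        if ("0000".equals(up.get("code"))) {
            String group = (String) up.get("group"); // e.g. "group1"
            String path = (String) up.get("msg");    // e.g. "M00/00/00/xxxx.jpg"
            // Delete the file again using the group and path returned by the upload
            Map<String, Object> del = FastDFSUtil.delete(group, path);
            System.out.println("delete result: " + del.get("msg"));
        } else {
            System.out.println("upload failed: " + up.get("msg"));
        }
    }
}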

4.5 SpringMVC uploads files to the FastDFS file server

@RequestMapping("/upload")
    public String addUser(@RequestParam("file") CommonsMultipartFile[] files,
                          HttpServletRequest request) {

        for (int i = 0; i < files.length; i++) {
            logger.info("fileName-->" + files[i].getOriginalFilename() + " file-size--->" + files[i].getSize());
            Map<String, Object> retMap = FastDFSUtil.upload(files[i]);
            String code = (String) retMap.get("code");
            String group = (String) retMap.get("group");
            String msg = (String) retMap.get("msg");

            if ("0000".equals(code)) {
                logger.info("File uploaded successfully");
                // TODO: save the uploaded file path to the MySQL database
            } else {
                logger.info("File upload failed");
            }
        }
        return "/success";
    }
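For the CommonsMultipartFile[] binding above to work, Spring MVC needs a multipart resolver and the commons-fileupload dependency on the classpath. Below is a minimal sketch, assuming Java-based Spring configuration (the original project may declare the same bean in XML instead); the class name and size limit are illustrative:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.multipart.commons.CommonsMultipartResolver;

@Configuration
public class UploadConfig {

    // Spring MVC looks up the multipart resolver by the bean name "multipartResolver"
    @Bean(name = "multipartResolver")
    public CommonsMultipartResolver multipartResolver() {
        CommonsMultipartResolver resolver = new CommonsMultipartResolver();
        resolver.setDefaultEncoding("UTF-8");
        resolver.setMaxUploadSize(50 * 1024 * 1024); // illustrative 50 MB limit
        return resolver;
    }
}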

That’s basically it. If you run into any problems while following along, feel free to leave a comment below.