Introduction to FastDFS

FastDFS is an open-source, lightweight distributed file system. It manages files, including file storage, file synchronization, and file access (upload and download), and solves the problems of large-capacity storage and load balancing. It is especially suitable for online services that use files as the carrier, such as photo album and video websites. FastDFS is tailor-made for the Internet: it takes redundant backup, load balancing, and linear expansion into account, and emphasizes high availability and performance. It makes it easy to set up a high-performance file server cluster that provides file upload and download services.

Advantages

I. Reduced system complexity and higher processing efficiency

II. Supports online capacity expansion, enhancing system scalability

III. Implements soft RAID, enhancing concurrent processing capability and data fault tolerance and recovery

IV. Supports primary and secondary (slave) files and custom extension names

V. Active/standby Tracker services enhance system availability

Disadvantages

I. Does not support resumable (breakpoint) transfers and is not suitable for storing large files

II. Does not support the POSIX (Portable Operating System Interface) API, so general-purpose access is limited

III. File synchronization across the public network has high latency, so fault-tolerance policies need to be applied

IV. Downloading through the API has a single point of performance bottleneck

V. Does not provide permission control directly; you need to implement it yourself

Comparison of major file systems

FTP: good security when passwords are enabled (files are hard to access without the password), supports resumable transfer, but has no capacity expansion and no disaster recovery (DR)

HDFS: suitable for large file storage and cloud computing; not suitable for simple small-file storage (low efficiency)

TFS: from Taobao, suitable for files smaller than 1 MB; relatively troublesome to use and has relatively little documentation

GlusterFS: powerful and flexible, suitable for large files, supports multiple data types; cumbersome to use, high hardware requirements (at least two nodes), and little Chinese documentation

MogileFS: FastDFS's architecture was inspired by MogileFS, and FastDFS is more efficient than MogileFS

FastDFS: from Taobao, suitable for large numbers of small files (recommended range: 4KB < file_size < 500MB); relatively simple to use, and many websites recommend it for storing small files

FastDFS roles

The FastDFS server has three roles: Tracker Server, Storage Server and Client.

A) Tracker server: the tracking server, mainly used for scheduling and load balancing. It keeps the status of all storage groups and storage servers in the cluster in memory and is the hub through which clients and data servers interact. Compared with the master in GFS it is much leaner: it does not record file index information and consumes very little memory

B) Storage server: the storage server (also called a storage node or data server), where files and file metadata are stored. The storage server manages files directly through the operating system's file system calls

C) Client: the client that initiates service requests; it uses TCP/IP to exchange data with the tracker server or storage nodes through a dedicated interface. FastDFS provides users with basic file access operations such as upload, download, append, and delete in the form of a client library

Docker + Nginx + FastDFS single-machine mode

Environment and software version

System: CentOS 7.7

Nginx: nginx/1.12.2

Host: 192.168.0.191

Docker installation configuration

The Docker installation and configuration are not repeated here; just remember to configure the Aliyun image accelerator, for example as sketched below.
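A minimal sketch of the accelerator configuration (the mirror URL below is a placeholder; use the personal address from your own Aliyun console):

    # write /etc/docker/daemon.json with your registry mirror, then restart Docker
    tee /etc/docker/daemon.json <<'EOF'
    {
      "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
    }
    EOF
    systemctl daemon-reload
    systemctl restart docker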

Nginx installation configuration

  • Pull the Nginx image

    docker pull nginx

  • View the images

    docker images

  • Run nginx

    docker run --name nginx -d -p 80:80 nginx

    Note: this first run does not mount any of the nginx container's directories, so every later configuration change would require entering the container, which is very inconvenient. After starting it as above, use the following command

    docker exec -it nginx nginx -t

    to check where the nginx configuration file is located. As shown in the figure below, the configuration file is /etc/nginx/nginx.conf

  • Do not mount an empty host directory straight over the nginx configuration directory (the container's default configuration would be hidden and nginx would fail to start); first copy the configuration files from the container to the host:

    docker cp -a nginx:/etc/nginx/ /home/nginx/conf/

    After copying, stop and delete the already running Nginx container, and start again as follows

    docker stop nginx
    docker rm nginx

  • Run nginx again

     docker run --name nginx -d -p 80:80 --restart always -v /home/nginx/conf/nginx/:/etc/nginx/ -v /home/nginx/log/:/var/log/nginx/ nginx

    1. -v: mounts host directories into the container so that data and configuration stay in sync; changes made on the host are reflected in the container (the mounted nginx directories are shown in the figure below)

    2. -p: maps the specified port

    3. --restart always: automatically restarts the container when Docker or the host restarts

  • access

    http://192.168.0.191/

    Since the mapped port is 80, it can be accessed without specifying a port

    This completes the Nginx installation; the detailed configuration is covered later together with FastDFS

FastDFS installation configuration

  • Pull the FastDFS image

    docker pull delron/fastdfs

    Pull the latest version

  • View the images

    docker images

  • Use the image to build the tracker container (the tracking server, used for scheduling)

    docker run -dti --network=host --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker

    -v: mounts the host directory to a container

  • Use the image to build the storage container (the storage server, providing storage capacity and backup)

    docker run -dti --network=host --name storage -e TRACKER_SERVER=192.168.0.191:22122 -v /var/fdfs/storage:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs storage

    -v: mounts the host directory to a container

    TRACKER_SERVER=<local IP>:22122; do not use 127.0.0.1 as the local IP

  • Enter the storage container and configure the HTTP port in the storage configuration file storage.conf, which is located in the /etc/fdfs/ directory

    docker exec -it storage bash

    The default port is 8888 and it does not have to be changed; I keep the default here (the relevant line is sketched below)
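    For reference, the relevant setting in storage.conf is the http.server_port key. This is a sketch based on the stock FastDFS configuration; verify the exact line in your image:

    # /etc/fdfs/storage.conf -- port served by the nginx/fastdfs module inside the storage container
    http.server_port=8888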

  • To configure nginx inside the storage container, modify the nginx.conf file in /usr/local/nginx/conf/

    docker exec -it storage bash
    cd /usr/local/nginx/conf/
    vi nginx.conf

    The listen port here also defaults to 8888; you can leave the default settings unchanged, and I use the default here

    Note: if the storage.conf port in the previous step is changed, then the nginx configuration should be changed as well
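    For orientation, the server block inside the storage container typically looks like the sketch below (assuming the delron/fastdfs image, which builds nginx with the fastdfs-nginx-module; the exact file in your image may differ slightly):

    server {
        listen       8888;
        server_name  localhost;

        # file requests such as /group1/M00/... are handled by the fastdfs-nginx-module
        location ~/group[0-9]/ {
            ngx_fastdfs_module;
        }
    }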

  • Test file upload

    Use the web module to upload files to the FastDFS file system

    1. First upload a photo to the host directory /var/fdfs/storage/

    2. Enter the storage container and run the following command, as shown in the following figure

      /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /var/fdfs/1.jpg

      At this point the picture has been uploaded to the file system, and the returned storage path is group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg

    3. Visit http://192.168.0.191:8888/group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg in the browser to view the picture

      From the returned path group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg you can tell that the picture is stored in the server's /var/fdfs/storage/data/00/00/ directory, as shown in the figure below
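      As a quick check (a hedged example; the generated file name on your machine will differ), you can list that directory on the host, since it is mounted into the container:

      ls -l /var/fdfs/storage/data/00/00/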

  • Start the containers automatically on boot

    docker update --restart=always tracker
    docker update --restart=always storage

  • Q&A

    1. Storage fails to start

    You can delete the fdfs_storaged.pid file in the /var/fdfs/storage/data/ directory and start the storage container again

Spring Boot integration with FastDFS

  • pom.xml

    <!-- https://mvnrepository.com/artifact/com.github.tobato/fastdfs-client -->
    <dependency>
        <groupId>com.github.tobato</groupId>
        <artifactId>fastdfs-client</artifactId>
        <version>1.26.7</version>
    </dependency>
  • Yml configuration

    # FastDFS service configuration
    fdfs:
      so-timeout: 1500
      connect-timeout: 600
      # tracker-list: 192.168.0.191:22122
      tracker-list: 192.168.0.192:22122,192.168.0.193:22122   # cluster connection configuration
      # visit-host: 192.168.0.191:8888
      visit-host: 192.168.0.191
  • Java code

    package com.xy.controller.fastdfs;
    
    import com.xy.entity.FastdfsFile;
    import com.xy.service.IFastdfsFileService;
    import com.github.tobato.fastdfs.domain.fdfs.StorePath;
    import com.github.tobato.fastdfs.exception.FdfsUnsupportStorePathException;
    import com.github.tobato.fastdfs.service.FastFileStorageClient;
    import io.swagger.annotations.ApiOperation;
    import io.swagger.annotations.ApiParam;
    import org.apache.commons.io.FilenameUtils;
    import org.apache.commons.lang3.StringUtils;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.multipart.MultipartFile;
    
    import java.io.IOException;
    
    import static com.xy.utils.common.GenerateUniqueCode.getTimeAddRandom7;
    import static com.xy.utils.date.DateUtils.getCurDateTimeFull;
    
    /**
     * @Description Fastdfs file operation
     * @Author xy
     * @Date 2019/12/6
     */
    @RestController
    @RequestMapping(value = "/fastdfs")
    public class FastdfsFileTestController {
    
        @Autowired
        private FastFileStorageClient storageClient;
    
        @Autowired
        IFastdfsFileService fastdfsFileService;
    
        @Value("${fdfs.visit-host}")
        private String hostIP;
    
    
        /**
         * @param multipartFile file
         * @return java.lang.String
         * @Description Fastdfs file upload
         * @Author xy
         * @Date 2019/12/6 17:49
         */
        @ApiOperation(value = "file upload")
        @PostMapping(path = "/fileUpload", name = "upload file")
        public String uploadFile(
                @ApiParam(value = "file")
                @RequestParam(name = "file", required = true) MultipartFile multipartFile
        ) throws IOException {
            String fullPath = "";
    
            StringBuffer stringBuffer = new StringBuffer();
            try {
            // file upload
                StorePath storePath = storageClient.uploadFile(multipartFile.getInputStream(), multipartFile.getSize(),
                        FilenameUtils.getExtension(multipartFile.getOriginalFilename()), null);
                String filePath = storePath.getFullPath();
    
            // save the record to the database
                stringBuffer.append(hostIP).append("/").append(filePath);
                fullPath = new String(stringBuffer);
    
                FastdfsFile fastdfsFile = new FastdfsFile()
                        .setFdfsFileName(multipartFile.getOriginalFilename())
                        .setFdfsFileUrl(filePath)
                        .setFdfsFileFullUrl(fullPath)
                        .setFdfsCode(getTimeAddRandom7())
                        .setCreeTime(getCurDateTimeFull())
                        .setCreeUser("xy");
                boolean bool = fastdfsFileService.save(fastdfsFile);
                System.out.println(bool);
                System.out.println(filePath);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return fullPath;
        }
    
    
        /**
         * @param fileUrl file address
         * @return java.lang.String
         * @Description delete file
         * @Author xy
         * @Date 2019/12/9 9:08
         */
        @ApiOperation(value = "delete file")
        @PostMapping(path = "/deleteFile", name = "delete file")
        public String deleteFile(
                @ApiParam(value = "file address")
                @RequestParam(name = "fileUrl", required = true) String fileUrl
        ) {
            if (StringUtils.isEmpty(fileUrl)) {
                return "Parameters cannot be empty.";
            }
            try {
                StorePath storePath = StorePath.parseFromUrl(fileUrl);
                storageClient.deleteFile(storePath.getGroup(), storePath.getPath());
            } catch (FdfsUnsupportStorePathException e) {
                e.printStackTrace();
            }
            return "Operation successful"; }}Copy the code
  • Pay special attention to

    The FastDFS configuration in the code uses two different ports, as shown in the following figure

    The service connection port is 22122, the TRACKER_SERVER=192.168.0.191:22122 we specified when starting the storage container

    The access port is 8888, which is the default port of the nginx configuration above

Docker + Nginx + FastDFS cluster

Environment

System: same as above

Nginx: same as above

Fastdfs: same as above

Host:

A) 192.168.0.191: nginx

B) 192.168.0.192: tracker1 storage1

C) 192.168.0.193: tracker2 storage2

FastDFS cluster setup

The following references were used during setup:

[FastDFS cluster setup (CSDN blog)](blog.csdn.net/weixin_4024…)

[Deploying a FastDFS cluster with Docker (CSDN blog)](blog.csdn.net/zhanngle/ar…)

  • Pull the FastDFS image on both hosts, 192.168.0.192 and 192.168.0.193

    docker pull delron/fastdfs

  • Start the tracker on both hosts

    1. 192.168.0.192:

      docker run -dti --network=host --restart always --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker

      -v: mounts the host directory into the container

      --restart always: start the container automatically on boot/restart

    2. 192.168.0.193:

      docker run -dti --network=host --restart always --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker
  • Start the storage on both machines

    1. 192.168.0.192:

      docker run -dti --network=host --restart always --name storage -e TRACKER_SERVER=192.168.0.192:22122 -v /var/fdfs/storage:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs storage
    2. 192.168.0.193:

      docker run -dti --network=host --restart always --name storage -e TRACKER_SERVER=192.168.0.193:22122 -v /var/fdfs/storage:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs storage

    3. Enter the storage containers of the two machines respectively

      docker exec -it storage bash

      Go to the /etc/fdfs/ directory and modify the following files

      In the three files shown in the figure below (typically client.conf, storage.conf, and mod_fastdfs.conf), add the tracker_server entries for both machines (note that both hosts need to be configured)

      tracker_server = 192.168.0.192:22122

      tracker_server = 192.168.0.193:22122

  • Finally, modify the nginx.conf configuration files of storage containers on both machines

    1. In /usr/local/nginx/conf/nginx.conf, make sure the location block matches /group1/ requests, the same as in the single-machine setup above

  • Once configured, you can verify it

    1. Restart the storage service

      fdfs_storaged /etc/fdfs/storage.conf restart
    2. View storage node information

      fdfs_monitor /etc/fdfs/storage.conf

      The following information appears

      Group count: the total number of groups

      Storage Server Count: indicates the total number of Storage services

      Active Server Count: indicates the total number of storage services in use

      .

  • Enter one of the storage services and upload an image to test, as shown in the following figure

    1. The command

      /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /var/fdfs/3.jpg

      The picture gets into the container because -v /var/fdfs/storage:/var/fdfs was used at startup to mount the host directory into the container, so simply placing a picture in the host directory makes it available inside the container

    2. Access

      http://192.168.0.192:8888/group1/M00/00/00/wKgAwV3wkkuAfs7yABWOZCYlEqY065.jpg

      http://192.168.0.193:8888/group1/M00/00/00/wKgAwV3wkkuAfs7yABWOZCYlEqY065.jpg

  • Now stop the storage service on the 192 machine; 193 should still be accessible

    As shown in the following figure, 192 is inaccessible, but 193 is still accessible

    After restarting the service, 192 is accessible again

Nginx reverse proxy

After the FastDFS cluster is set up, testing still requires switching between IP addresses, so recall the nginx installed on the 191 machine at the beginning: the 191 server is used as the reverse proxy

The host and container directories are already mounted, so on the host cd into /home/nginx/conf/nginx/ and modify the nginx configuration file

Added the following configuration:

    # fastdfs servers and access port; the default port 8888 was used in the configuration above, if it was changed it must match here
    upstream fdfs {
        server 192.168.0.192:8888;
        server 192.168.0.193:8888;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
            # http://fdfs refers to the upstream block named fdfs above
            proxy_pass http://fdfs;
        }
    }

Save and exit, then run docker restart nginx to restart nginx

  • Access via the nginx host IP plus the image path; since nginx was started on port 80, no port needs to be written

    http://192.168.0.191/group1/M00/00/00/wKgAwF3wUBWAFW58ABWOZCYlEqY420.jpg

  • Nginx proxies with round-robin by default, so how do you know whether both 192 and 193 are being proxied?

    Stop the storage service on 192; if access still succeeds, nginx has switched to 193, which shows that both services are behind the proxy

    As shown below, it can still be accessed

    If both storage services are shut down, as shown in the following figure, access fails, which proves that the nginx proxy is in effect

  • Since nginx uses round-robin by default, I expected that after shutting down one storage, only every other refresh would succeed. However, I tried several times, and even with one storage shut down every request still succeeded

    As soon as I asked this question I realized it was a non-issue: if a website used a round-robin proxy and one of its backends died, the site would behave exactly as described in my question, which is obviously not what happens on real sites

    "Refresh once and it works, refresh again and it fails" clearly does not happen in practice

    The correct conclusion is that when a backend crashes, nginx immediately takes that server out of the rotation to avoid failed requests

  • Extension: Six ways to load balance nginx
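    As a rough illustration (not from the original article; the parameter values are arbitrary), two common methods are weighted round-robin and ip_hash, and max_fails/fail_timeout control how long a failed backend stays out of rotation:

    # weighted round-robin: 192 receives roughly twice the traffic of 193;
    # a backend that fails 3 times within 30s is skipped for the next 30s
    upstream fdfs {
        server 192.168.0.192:8888 weight=2 max_fails=3 fail_timeout=30s;
        server 192.168.0.193:8888 weight=1 max_fails=3 fail_timeout=30s;
    }

    # ip_hash: requests from the same client IP always go to the same backend
    # upstream fdfs {
    #     ip_hash;
    #     server 192.168.0.192:8888;
    #     server 192.168.0.193:8888;
    # }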

Spring Boot connection configuration

The connection configuration simply requires adding the cluster services to the configuration file, as shown in the figure below and sketched after the next paragraph

For the access host, since nginx is used as the proxy, just use the nginx host; because nginx listens on port 80, no port needs to be written
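A minimal sketch of the cluster configuration, using the hosts from this article (mirroring the YML shown earlier):

    fdfs:
      so-timeout: 1500
      connect-timeout: 600
      tracker-list: 192.168.0.192:22122,192.168.0.193:22122
      # nginx reverse proxy host; port 80 can be omitted
      visit-host: 192.168.0.191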

The Java code is the same as in the single-machine example above.