1. FastDFS overview

Why FastDFS?

  1. In a distributed cluster, a file uploaded to node A cannot be found when a later request is routed to node B by the load-balancing algorithm, so the file is sometimes accessible and sometimes not, depending on which node serves the request.
  2. Single-node file storage also lacks file redundancy, load balancing, and linear scalability, all of which must be considered in a cluster.

2. FastDFS architecture

FastDFS is an open-source, lightweight distributed file system. It manages files end to end, covering file storage, file synchronization, and file access (upload and download), and it solves the problems of large-capacity storage and load balancing. It is especially suitable for online services built around files, such as photo-album and video websites.

FastDFS is tailor-made for the Internet: it takes redundant backup, load balancing, and linear expansion into account and emphasizes high availability and performance, making it easy to set up a high-performance file-server cluster that provides file upload and download services.

The FastDFS architecture consists of Tracker Servers and Storage Servers. The client asks the Tracker Server which server to use for an upload or download; the actual file transfer is then performed by the Storage Server that the tracker selects.

The Tracker Server handles load balancing and scheduling: during an upload it selects a Storage Server, according to a configurable policy, to serve the request. A tracker can therefore be called a tracking server or scheduling server. The Storage Server stores the files: everything the client uploads ends up on a Storage Server, which manages files using the operating system's file system rather than implementing its own. A storage node can therefore be called a storage server.
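The tracker's scheduling role can be pictured with a small sketch. This is illustrative only, not the real FastDFS implementation (which supports several selection policies); the class and method names are made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual sketch of what a tracker does: pick a storage node for each
// upload request. Here a simple round-robin policy stands in for the
// configurable selection policies FastDFS actually offers.
class TrackerSketch {
    private final List<String> storageNodes;
    private final AtomicInteger next = new AtomicInteger();

    TrackerSketch(List<String> storageNodes) {
        this.storageNodes = storageNodes;
    }

    // Return the storage node that should handle the next upload.
    String selectStorage() {
        int i = Math.floorMod(next.getAndIncrement(), storageNodes.size());
        return storageNodes.get(i);
    }
}
```

Each client request first hits a method like `selectStorage()`; the client then talks directly to the returned storage node for the actual file transfer.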

3. Upload process

After the client uploads a file, the Storage Server returns a file ID to the client; this ID is used to locate the file later. The file ID consists of a group name, a virtual disk path, a two-level data directory, and a file name.

  • Group name: the name of the storage group the file was uploaded to. It is returned by the storage server after a successful upload, and the client must save it itself.
  • Virtual disk path: virtual path configured for the storage, which corresponds to the disk option store_path*. If store_path0 is configured, it is M00, if store_path1 is configured, it is M01, and so on.
  • Two-level data directory: a two-level directory that the storage server creates under each virtual disk path to store data files.
  • File name: different from the name of the uploaded file. It is generated by the storage server from the source storage server's IP address, the file-creation timestamp, the file size, a random number, and the file extension.
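As a sketch of how a client might split a returned file ID into these four parts (this helper is hypothetical, not part of the fastdfs-client API):

```java
// Splits a FastDFS file ID such as
// "group1/M00/00/00/wKjcZF8ekIWAEyMAAABzzes71pI891.jpg"
// into {group name, virtual disk path, two-level data directory, file name}.
public class FileIdParser {
    public static String[] parse(String fileId) {
        // tolerate a leading slash
        if (fileId.startsWith("/")) fileId = fileId.substring(1);
        int slash = fileId.indexOf('/');
        String group = fileId.substring(0, slash);       // e.g. "group1"
        String rest = fileId.substring(slash + 1);       // e.g. "M00/00/00/xxx.jpg"
        String diskPath = rest.substring(0, 3);          // virtual disk path, e.g. "M00"
        String remainder = rest.substring(4);            // e.g. "00/00/xxx.jpg"
        int lastSlash = remainder.lastIndexOf('/');
        String dataDirs = remainder.substring(0, lastSlash); // two-level dir, e.g. "00/00"
        String fileName = remainder.substring(lastSlash + 1);
        return new String[]{group, diskPath, dataDirs, fileName};
    }
}
```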

4. FastDFS installation

Pull the image

docker pull morunchang/fastdfs

Run the tracker

docker run -d --name tracker --net=host morunchang/fastdfs sh tracker.sh

Run the storage

docker run -d --name storage --net=host -e TRACKER_IP=XXX.XXX.XXX.XXX:22122 -e GROUP_NAME=group1 morunchang/fastdfs sh storage.sh
  • --net=host: uses the host network mode, so the container's ports do not need to be mapped to the host; replace the IP address with your machine's address
  • GROUP_NAME: the group name of this storage node
  • To add another storage server, run this command again with a different group name

Modify nginx configuration

Go to the storage container and modify nginx.conf

docker exec -it storage /bin/bash

Once inside the container, edit the file:

vim /etc/nginx/conf/nginx.conf

Confirm that the following content is present (it already exists by default; do not change it):

location ~ /M00 { 
    root /data/fast_data/data; 
    ngx_fastdfs_module; 
}

Disable caching

location ~ /M00 {
    add_header Cache-Control no-store;
    root /data/fast_data/data;
    ngx_fastdfs_module;
}

Restart the storage container

docker restart storage

Check the tracker.conf and storage.conf configuration files

docker exec -it storage /bin/bash 
cd /etc/fdfs 
vim tracker.conf 
vim storage.conf

5. File upload microservice

1. Create the upload-service microservice and implement file upload and deletion with the fastdfs-client library

Add the dependencies

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>
    <dependency>
        <groupId>com.github.tobato</groupId>
        <artifactId>fastdfs-client</artifactId>
        <version>1.26.7</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

2. application.yml

server:
  port: 9007

logging:
  #file: demo.log
  pattern:
    console: "%d - %msg%n"
  level:
    org.springframework.web: debug
    com.lxs: debug

spring:
  application:
    name: upload-service
  servlet:
    multipart:
      enabled: true
      max-file-size: 10MB # Maximum size of a single uploaded file
      max-request-size: 20MB # Maximum total size of an upload request

fdfs:
  # connection timeout
  connect-timeout: 60
  # network read timeout
  so-timeout: 60
  # Generate thumbnail parameters
  thumb-image:
    width: 150
    height: 150
  tracker-list: 124.XXX.XXX.XXX:22122

eureka:
  client:
    service-url:
      defaultZone: http://peer1:9004/eureka
  instance:
    # Prefer the IP address over the host name
    prefer-ip-address: true
    # the IP address
    ip-address: 127.0.0.1
    # Renewal interval (default is 30 seconds)
    lease-renewal-interval-in-seconds: 5
    # Expiration duration (default is 90 seconds)
    lease-expiration-duration-in-seconds: 5

3. Startup class UploadServiceApplication

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class UploadServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(UploadServiceApplication.class, args);
    }
}

4. FastDfs configuration class DfsConfig

import com.github.tobato.fastdfs.FdfsClientConfig;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

@Configuration
@Import(FdfsClientConfig.class)
public class DfsConfig {
}

5. Tool class FileDfsUtil

Call fastdfS-client tool methods to upload and delete files

import com.github.tobato.fastdfs.domain.fdfs.StorePath;
import com.github.tobato.fastdfs.service.FastFileStorageClient;
import org.apache.commons.io.FilenameUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.multipart.MultipartFile;

@Component
public class FileDfsUtil {

    @Autowired
    private FastFileStorageClient storageClient;

    /**
     * Upload a file
     * @param multipartFile the file to upload
     * @return the full path of the stored file
     * @throws Exception if the upload fails
     */
    public String upload(MultipartFile multipartFile) throws Exception {
        String extName = FilenameUtils.getExtension(multipartFile.getOriginalFilename());
        StorePath storePath = storageClient.uploadImageAndCrtThumbImage(
                multipartFile.getInputStream(), multipartFile.getSize(), extName, null);
        return storePath.getFullPath();
    }

    /**
     * Delete a file
     * @param fileUrl the full path of the file to delete
     */
    public void deleteFile(String fileUrl) {
        StorePath storePath = StorePath.parseFromUrl(fileUrl);
        storageClient.deleteFile(storePath.getGroup(), storePath.getPath());
    }
}
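The upload method above derives the file extension from the original file name via commons-io. For reference, a simplified pure-JDK stand-in for what `FilenameUtils.getExtension` does might look like this (illustrative only, not the commons-io implementation):

```java
// Returns the extension of a file name: the text after the last dot,
// provided that dot comes after the last path separator.
public class ExtUtil {
    public static String getExtension(String filename) {
        if (filename == null) return null;
        int dot = filename.lastIndexOf('.');
        int sep = Math.max(filename.lastIndexOf('/'), filename.lastIndexOf('\\'));
        return (dot > sep) ? filename.substring(dot + 1) : "";
    }
}
```

FastDFS needs the extension because the generated storage file name keeps it, so the file can still be served with the right content type.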

6. FileController

Create a controller that exposes the file upload and deletion endpoints

import com.lxs.config.FileDfsUtil;
import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;


@RestController
public class FileController {
    @Autowired
    private FileDfsUtil fileDfsUtil;

    /**
     * Upload a file
     * @param file the file to upload
     * @return the stored file path, or an error message
     */
    @RequestMapping(value = "/uploadFile", method = RequestMethod.POST, headers = "content-type=multipart/form-data")
    public ResponseEntity<String> uploadFile(@RequestParam("file") MultipartFile file) {
        String result = "";
        try {
            String path = fileDfsUtil.upload(file); // e.g. "/group1/M00/00/00/wKjcZF8ekIWAEyMAAABzzes71pI891.jpg"
            if (StringUtils.isEmpty(path)) {
                result = "Upload failed";
            } else {
                result = path;
            }
        } catch (Exception e) {
            e.printStackTrace();
            result = "Server exception";
        }
        return ResponseEntity.ok(result);
    }

    /**
     * Delete a file
     * @param filePathName e.g. "/group1/M00/00/00/wKjcZF8ekIWAEyMAAABzzes71pI891.jpg"
     * @return a success message
     */
    @RequestMapping(value = "/deleteByPath", method = RequestMethod.GET)
    public ResponseEntity<String> deleteByPath(String filePathName) {
        fileDfsUtil.deleteFile(filePathName);
        return ResponseEntity.ok("success delete");
    }
}

7. Postman test

Upload a file

Delete the file