Set up the file management server
1 Preparation
In day-to-day projects, the server that provides the main business logic often also keeps a lot of static resources, such as files and images, which take up a great deal of disk space. Resources that are unrelated to the business process can be stored separately, so we need to prepare a server dedicated to file management, including maintenance operations such as adding and deleting files. Here the distributed file system FastDFS is used to build it, and it is relatively simple to use.
1.1 Hardware Preparations
You can prepare a Linux machine, either a real server or a virtual machine; the procedure is the same.
1.2 Environment Preparations
We will build the environment with Docker first. The non-Docker setup has quite a few steps, and a separate article will be written to record it. If you are not familiar with Docker, read some articles about Docker first; my blog also covers it, so you can look there if you don't want to search for other articles.
- Pull the container
docker pull morunchang/fastdfs
- Run the tracker container
docker run -d --name tracker --net=host morunchang/fastdfs sh tracker.sh
- Run the storage container
docker run -d --name storage --net=host -e TRACKER_IP=<host IP address>:22122 -e GROUP_NAME=<group name> morunchang/fastdfs sh storage.sh
The host IP address is the IP address of the virtual machine (tip: if you find that the IP changes every time the virtual machine restarts, you can pin it directly in the network address configuration of your network manager). The group name can in principle be anything you define, but for convenience you can use a sequential scheme such as group1; it will be used later when downloading files.
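For example, assuming the virtual machine's IP address is 192.168.200.128 and the group is named group1 (both values are placeholders for illustration), the storage command would look like this:
docker run -d --name storage --net=host -e TRACKER_IP=192.168.200.128:22122 -e GROUP_NAME=group1 morunchang/fastdfs sh storage.sh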
1.3 Configuring the System
- After the two containers above are started, enter the storage container with the docker command to configure some related information.
docker exec -it storage /bin/bash
- Modify the built-in Nginx configuration file to disable caching (this step is optional).
vi /etc/nginx/conf/nginx.conf
Locate the following block:
location ~ /M00 {
    root /data/fast_data/data;
    ngx_fastdfs_module;
}
Adding add_header Cache-Control no-store; to this block disables caching.
If cross-domain access is involved later, you can also come back here to set up cross-domain requests.
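For illustration, the modified block might look roughly like the sketch below; the Access-Control-* lines are only one possible way to allow cross-domain access and are not required for the basic setup:
location ~ /M00 {
    root /data/fast_data/data;
    ngx_fastdfs_module;
    # disable caching
    add_header Cache-Control no-store;
    # optional: allow cross-domain requests
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
}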
- Open the http.conf file to enable the token check and set the secret key. Requests without a valid token are served the default image, that is, the file specified at the end of that configuration file. The same key is also used to generate tokens, so once it is exposed anyone can generate valid tokens; keep it confidential.
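For reference, the token-related settings in http.conf typically look like the sketch below; the key value and the fallback image path are placeholders that depend on your environment:
# enable token-based access checking
http.anti_steal.check_token=true
# how long a generated token stays valid, in seconds
http.anti_steal.token_ttl=900
# the secret key; keep it private and reuse it in the application
http.anti_steal.secret_key=<your secret key>
# the file returned when the token check fails
http.anti_steal.token_check_fail=/etc/fdfs/anti-steal.jpg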
- Exit the container and restart the container service.
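A minimal sketch of this step, assuming the container names used above:
exit                            # leave the storage container
docker restart tracker storage  # restart both containers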
- If you are using a cloud server, open ports 22122, 23000, and 8080 in the security group, and expose them in the firewall as well. The default ports can be changed in the configuration files, followed by a restart of the services.
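For example, on a system that uses firewalld (an assumption; adjust the commands for your distribution), the ports could be opened like this:
firewall-cmd --permanent --add-port=22122/tcp
firewall-cmd --permanent --add-port=23000/tcp
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload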
2 FastDFS in Detail
Take a look at this section if you want to learn more about FastDFS, or go straight to section 3 if you don’t.
2.1 Introduction
FastDFS is an open-source, lightweight distributed file system. It manages files, covering file storage, file synchronization, and file access (upload and download), and it solves the problems of large-capacity storage and load balancing. It is especially suitable for online services built around files, such as photo album and video websites.
FastDFS is tailor-made for the Internet. It takes into account redundant backup, load balancing, and linear expansion, and emphasizes high availability and performance. It is easy to set up a high-performance file server cluster to provide file uploading and downloading services.
2.2 Architecture Description
The FastDFS architecture consists of the Tracker Server and the Storage Server. The client asks the Tracker Server to upload or download a file, and through the Tracker Server's scheduling a Storage Server ultimately completes the upload or download.
- The Tracker server is responsible for load balancing and scheduling. During an upload it selects a Storage Server according to certain policies to serve the upload. A tracker can be called a tracking server or a scheduling server; you can think of it as a receptionist.
- The Storage server stores the files. Files uploaded by the client are ultimately stored on a Storage Server. Instead of implementing its own file system, the Storage Server manages files with the operating system's file system (such as NTFS); a storage can be called a storage server, that is, the component that actually holds the file data.
2.3 Upload Process
- The file ID
The file ID consists of the group name, virtual disk path, two-level data directory, and file name.
group1/M00/00/00/rB6S8GGFLuWARl13AAC-WxoW0Zk101.jpg
Group name: the name of the storage group the file was uploaded to. The storage server returns it after a successful upload, and the client needs to save it itself. It is the group name specified when the storage container was started.
Virtual disk path: the virtual path configured for the storage, corresponding to the disk option store_path* in the storage configuration. store_path0 maps to M00, store_path1 to M01, and so on.
Two-level data directory: a two-level directory that the storage server creates under each virtual disk path to store data files; it helps prevent files from being overwritten because of duplicate file names.
File name: different from the uploaded file's original name. It is generated by the storage server from specific information, including the source storage server's IP address, the file creation timestamp, the file size, a random number, and the file name extension.
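The client library used in section 3 can split such a file ID back into its group and path parts with StorePath.parseFromUrl, the same call the utility class below relies on; a minimal sketch:
import com.github.tobato.fastdfs.domain.fdfs.StorePath;

public class FileIdDemo {
    public static void main(String[] args) {
        // A file ID as returned by storePath.getFullPath() after an upload
        String fileId = "group1/M00/00/00/rB6S8GGFLuWARl13AAC-WxoW0Zk101.jpg";
        // parseFromUrl separates the group name from the rest of the path
        StorePath storePath = StorePath.parseFromUrl(fileId);
        System.out.println(storePath.getGroup()); // group1
        System.out.println(storePath.getPath());  // M00/00/00/rB6S8GGFLuWARl13AAC-WxoW0Zk101.jpg
    }
}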
3 Setting Up the Backend Environment
3.1 Dependency Import
Here we use Spring Boot to build the project structure; other frameworks can be used as well, but you will need to set them up yourself.
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>1.3.2</version>
</dependency>
<dependency>
<groupId>com.github.tobato</groupId>
<artifactId>fastdfs-client</artifactId>
<version>1.26.7</version>
</dependency>
<dependency>
<groupId>net.oschina.zcx7878</groupId>
<artifactId>fastdfs-client-java</artifactId>
<version>1.27.0.0</version>
</dependency>
3.2 Related Configurations
Add the DFS configuration to your project’s application.yaml configuration file.
Configure port information
server:
  port: 80
  tomcat:
    uri-encoding: UTF-8
spring:
  http:
    encoding:
      charset: utf-8
      force: true
      enabled: true
  servlet:
    multipart:
      enabled: true
      max-file-size: 10MB      # maximum size of a single uploaded file
      max-request-size: 20MB   # maximum total size of one upload request
fdfs:
  # connection timeout
  connect-timeout: 5000
  # read timeout
  so-timeout: 5000
  # thumbnail generation parameters
  thumb-image:
    width: 150
    height: 150
  # tracker address of the file server, port 22122
  tracker-list: <file server IP>:22122
3.3 Configuration Class
If the client is only used for file upload and download, very little configuration is needed here.
package com.beordie.config;
import com.github.tobato.fastdfs.FdfsClientConfig;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableMBeanExport;
import org.springframework.context.annotation.Import;
import org.springframework.jmx.support.RegistrationPolicy;
/**
 * @Description File upload and download configuration
 * @Date 2021/11/2 17:28
 * @Created 30500
 */
@Configuration
@Import(FdfsClientConfig.class)
// Prevent bean re-injection
@EnableMBeanExport(registration = RegistrationPolicy.IGNORE_EXISTING)
public class DfsConfig {
}
3.4 Utility Class
File upload and download are handled in this utility class, so this part of the code is important; read it carefully.
package com.beordie.utils;
import com.github.tobato.fastdfs.domain.fdfs.StorePath;
import com.github.tobato.fastdfs.domain.proto.storage.DownloadByteArray;
import com.github.tobato.fastdfs.service.FastFileStorageClient;
import org.csource.common.MyException;
import org.csource.fastdfs.ProtoCommon;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.security.NoSuchAlgorithmException;
/**
 * @Description File upload and download
 * @Date 2021/11/2
 * @Created 30500
 */
@Component
public class DfsUtil{
    /** Secret key used to generate access tokens; it must match the key configured in http.conf */
    private final static String SECRET_KEY = "Set key";
    /** File server host, in the form http://IP:port */
    private final static String HOST_NAME = "http://IP:port";
    /** Logger used to record messages */
    private static final Logger LOGGING = LoggerFactory.getLogger(DfsUtil.class);
    /** The main client object for file operations */
    @Autowired
    private FastFileStorageClient storageClient;
    /**
     * Upload a file
     * @param file the file to be uploaded
     * @return the file ID
     * @throws IOException
     */
    public String uploadImage(MultipartFile file) throws IOException {
        // Extract the file extension from the original file name
        String originalFilename = file.getOriginalFilename()
                .substring(file.getOriginalFilename().lastIndexOf('.') + 1);
        StorePath storePath = this.storageClient.uploadImageAndCrtThumbImage(file.getInputStream(),
                file.getSize(), originalFilename, null);
        return storePath.getFullPath();
    }
    /**
     * Delete a file
     * @param fileName the file ID
     * @return whether the deletion succeeded
     */
public boolean deleteFile(String fileName) {
if(StringUtils.isEmpty(fileName)) {
LOGGING.info("File path is empty");
return false;
}
StorePath storePath = null;
try {
storePath = StorePath.parseFromUrl(fileName);
storageClient.deleteFile(storePath.getGroup(), storePath.getPath());
} catch (Exception e) {
LOGGING.info(e.getMessage());
return false;
}
return true;
}
    /**
     * Download a file by its file ID without obtaining a token
     * @param fileName the file ID
     * @param downName the name the file is saved as locally
     * @return the file content
     */
    public byte[] downloadFile(String fileName, String downName) {
        byte[] content = null;
        HttpHeaders headers = new HttpHeaders();
        try {
            StorePath storePath = StorePath.parseFromUrl(fileName);
            // DownloadByteArray collects the downloaded bytes into a byte array
            content = storageClient.downloadFile(storePath.getGroup(), storePath.getPath(), new DownloadByteArray());
            // Headers prepared for a possible HTTP response (not returned here)
            headers.setContentDispositionFormData("attachment", new String(downName.getBytes("UTF-8"), "iso-8859-1"));
            headers.setContentType(MediaType.APPLICATION_OCTET_STREAM);
        } catch (Exception e) {
            LOGGING.info(e.getMessage());
        }
        return content;
    }

    /**
     * Obtain the file access URL carrying a token
     * @param fileName the file ID
     * @return the file address carrying the token
     * @throws UnsupportedEncodingException
     * @throws NoSuchAlgorithmException
     * @throws MyException
     */
    public String getResourceUrl(String fileName) throws UnsupportedEncodingException, NoSuchAlgorithmException, MyException {
        // The token is generated from the path without the group name
        String url = fileName.substring(fileName.indexOf("/") + 1);
        int lts = (int) (System.currentTimeMillis() / 1000);
        String token = ProtoCommon.getToken(url, lts, SECRET_KEY);
        return HOST_NAME + "/" + fileName + "?token=" + token + "&ts=" + lts;
    }
}
4 Code Testing
Once the configuration is written, you can write a controller to check the effect. Use Postman to complete the test, since file uploads require POST requests.
package com.beordie.contrller;
import com.beordie.common.Response;
import com.beordie.utils.DfsUtil;
import com.beordie.utils.StringUtils;
import org.csource.common.MyException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.security.NoSuchAlgorithmException;
/**
 * @Description File upload and download
 * @Date 2021/11/6
 * @Created 30500
 */
@RestController
@RequestMapping("file")
public class FileController {
@Autowired
private DfsUtil dfsUtil;
    /** Prints logs */
private final Logger LOGGING = LoggerFactory.getLogger(FileController.class);
    /**
     * @param image the image resource
     * @return the file ID
     */
@RequestMapping(value = "image", method = RequestMethod.POST)
    public Response uploadImage(@RequestParam("image") MultipartFile image) {
        Response response = new Response();
        try {
            String imageId = dfsUtil.uploadImage(image);
            if (!StringUtils.isEmpty(imageId)) {
                response.setMessage(imageId);
            } else {
                response.setMessage("Upload failed");
            }
        } catch (IOException e) {
            LOGGING.info("Service exception");
        }
        return response;
    }
    /**
     * Obtain a token-carrying URL
     * @param fileId the file ID
     * @return the address carrying the token
     */
@RequestMapping(value = "token")
public Response getToken(String fileId) {
Response response = new Response();
try {
String url = dfsUtil.getResourceUrl(fileId);
response.setMessage(url);
} catch (UnsupportedEncodingException e) {
LOGGING.info(e.getMessage());
} catch (NoSuchAlgorithmException e) {
LOGGING.info(e.getMessage());
} catch (MyException e) {
LOGGING.info(e.getMessage());
}
        return response;
    }
}
4.1 Uploading Files
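If you prefer the command line to Postman, a request of roughly this shape should work (assuming the application runs locally on port 80 and a file ./test.jpg exists):
curl -X POST -F "image=@./test.jpg" http://localhost/file/image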
4.2 Obtaining a Token
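Similarly, the token endpoint can be called with the file ID returned by the upload (the ID below is just an example):
curl "http://localhost/file/token?fileId=group1/M00/00/00/rB6S8GGFLuWARl13AAC-WxoW0Zk101.jpg"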
4.3 Resource Access
Request the returned address to access the resource.
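Based on getResourceUrl above, the returned address has roughly this shape (the host, token, and timestamp values are illustrative):
http://IP:port/group1/M00/00/00/rB6S8GGFLuWARl13AAC-WxoW0Zk101.jpg?token=<generated token>&ts=<timestamp>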