FastDFS is an open source, lightweight distributed file system for managing files. It provides file storage, file synchronization, and file access (file upload and download), and solves the problems of large-capacity storage and load balancing. In a cluster environment, a file uploaded from one machine is backed up by the other nodes in the cluster. When developing distributed systems, one of the problems to solve is sharing and backing up pictures, audio, video, and other files, and a distributed file system meets exactly this need. The FastDFS service has two main roles: Tracker and Storage. The Tracker schedules communication between Storage nodes and clients, load-balances access, and records the running status of each Storage node. A Storage node stores the files.
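To make the two roles concrete, here is a minimal, hypothetical sketch of the client-side flow using the fastdfs-client-java SDK that is integrated later in this article: the client loads the tracker addresses, asks a tracker for a connection, and then uploads the file bytes to the storage node the tracker selects. The configuration file name and the local file path are placeholders, not values from this deployment.

import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FastDfsFlowSketch {
    public static void main(String[] args) throws Exception {
        // 1. load the client configuration listing the tracker servers (placeholder path)
        ClientGlobal.init("fdfs_client.conf");
        // 2. ask a tracker for a connection; the tracker load-balances access to the storage nodes
        TrackerServer trackerServer = new TrackerClient().getConnection();
        // 3. upload the file bytes to the storage node chosen by the tracker
        StorageClient storageClient = new StorageClient(trackerServer, null);
        byte[] bytes = Files.readAllBytes(Paths.get("/data/test.png"));
        String[] result = storageClient.upload_file(bytes, "png", null);
        // result[0] is the group name, result[1] is the remote file name (together, the file access ID)
        System.out.println(result[0] + "/" + result[1]);
    }
}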
-
FastDFS Cluster deployment
-
Overall deployment module diagram
-
Environment preparation
Name                  Description
CentOS                System, version 6.9
libfastcommon         Common function library split out from FastDFS
FastDFS               FastDFS main program
fastdfs-nginx-module  Module that connects FastDFS and nginx
nginx                 nginx 1.15.5
- Install the build environment
yum install git gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl-devel wget vim -y
-
Disk installation path description
Description                              Location
FastDFS installation package location    /usr/local/src
Tracker data                             /data/fdfs/tracker
Storage data                             /data/fdfs/storage
Configuration file path                  /etc/fdfs
-
Install libfastcommon
-
Download libfastcommon
-
Extract and install
unzip libfastcommon-master.zip
cd libfastcommon-master
./make.sh && ./make.sh install   # build and install
-
-
Install FastDFS
-
Download FastDFS
-
Extract and install
unzip fastdfs-master.zip
cd fastdfs-master
./make.sh && ./make.sh install                            # build and install
cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf     # client file, used for testing
cp /usr/local/src/fastdfs/conf/http.conf /etc/fdfs/       # for nginx access
cp /usr/local/src/fastdfs/conf/mime.types /etc/fdfs/      # for nginx access
-
-
Install fastdfs-nginx-module
-
Download fastdfs-nginx-module
-
Extract and install
unzip fastdfs-nginx-module-master.zip
cp /usr/local/src/fastdfs-nginx-module-master/src/mod_fastdfs.conf /etc/fdfs   # copy the configuration file to the fdfs directory
-
-
Install nginx
-
Download nginx
-
Extract and install
tar -zxvf nginx-1.15.5.tar.gz
cd nginx-1.15.5
# add the fastdfs-nginx-module module
./configure --add-module=/usr/local/src/fastdfs-nginx-module-master/src/
make && make install   # build and install
-
-
FastDFS Cluster deployment configuration
- Tracker configuration
The server IP addresses are xxx.xxx.78.12 and xxx.xxx.78.13.

vim /etc/fdfs/tracker.conf
# The contents that need to be modified are as follows
port=22122                      # tracker server port (default: 22122)
base_path=/data/fdfs/tracker    # root directory for storing logs and data
-
Storage configuration
vim /etc/fdfs/storage.conf
# The contents that need to be modified are as follows
port=23000                          # storage service port (default: 23000)
base_path=/data/fdfs/storage        # root directory for storing data and log files
store_path0=/data/fdfs/storage      # the first storage directory
tracker_server=xxx.xxx.78.12:22122  # server 1
tracker_server=xxx.xxx.78.13:22122  # server 2
http.server_port=8888               # HTTP file access port (default 8888, same as in nginx)
- Client configuration
vim /etc/fdfs/client.conf
# The contents that need to be modified are as follows
base_path=/home/moe/dfs             # root directory for client logs
tracker_server=xxx.xxx.78.12:22122  # server 1
tracker_server=xxx.xxx.78.13:22122  # server 2
- Configure nginx access
vim /etc/fdfs/mod_fastdfs.conf
# The contents that need to be modified are as follows
tracker_server=xxx.xxx.78.12:22122  # server 1
tracker_server=xxx.xxx.78.13:22122  # server 2
url_have_group_name=true
store_path0=/data/fdfs/storage

# Configure nginx.conf
vim /usr/local/nginx/conf/nginx.conf
# Add the following configuration
server {
    listen       8888;       ## this port is the same as http.server_port in storage.conf
    server_name  localhost;

    location ~ /group[0-9]/ {
        ngx_fastdfs_module;
    }
    ...
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
-
Start services and test
vim /etc/sysconfig/iptables
# Add the following rules to open the tracker, storage, and nginx ports
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22122 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8888 -j ACCEPT

service iptables restart   # restart the firewall
-
Startup, shutdown, and restart operations for each service
# tracker
/etc/init.d/fdfs_trackerd start     # start the tracker service
/etc/init.d/fdfs_trackerd restart   # restart the tracker service
/etc/init.d/fdfs_trackerd stop      # stop the tracker service
chkconfig fdfs_trackerd on          # start the tracker service automatically on boot

# storage
/etc/init.d/fdfs_storaged start     # start the storage service
/etc/init.d/fdfs_storaged restart   # restart the storage service
/etc/init.d/fdfs_storaged stop      # stop the storage service
chkconfig fdfs_storaged on          # start the storage service automatically on boot

# nginx
/usr/local/nginx/sbin/nginx             # start nginx
/usr/local/nginx/sbin/nginx -s reload   # reload nginx
/usr/local/nginx/sbin/nginx -s stop     # stop nginx
-
Checking the cluster
# Run on storage 1 or storage 2 to check that both storage nodes have joined the cluster
/usr/bin/fdfs_monitor /etc/fdfs/storage.conf
-
Picture upload test
A successful upload returns the file access ID.

# fdfs_upload_file <client configuration file> <path of the file to upload>
fdfs_upload_file /etc/fdfs/client.conf /data/test.png
-
Testing file access
http://xxx.xxx.78.12/group1/M00/00/00/rB9ODFvXuSiAWBYBAALSAkm_6RQ360.png
http://xxx.xxx.78.13/group1/M00/00/00/rB9ODFvXuSiAWBYBAALSAkm_6RQ360.png
Test access through nginx on the default port 80: the newly uploaded file can be accessed from both addresses, which achieves the goal of data backup.
-
The FastDFS server deployment is complete
-
Integrating the FastDFS client into SpringBoot
-
First, following the hints in the official source code, download the source and use Maven to compile it into a jar and install it into the company's Maven private repository (Nexus) or your local Maven repository (other methods such as Ant are also available; see GitHub for details). fastdfs-client-java SDK source download address
# compile the source with Maven and install the jar into the (private or local) repository
mvn clean install
-
Add the dependency to the Maven project's pom.xml
<dependency>
    <groupId>org.csource</groupId>
    <artifactId>fastdfs-client-java</artifactId>
    <version>1.27-SNAPSHOT</version>
</dependency>
-
Next we add the fdfs_client.conf file under the project resources directory
connect_timeout = 30
network_timeout = 30
charset = UTF-8
http.tracker_http_port = 80
http.anti_steal_token = no
http.secret_key = 123456
# the cluster tracker server addresses configured earlier
tracker_server = xxx.xxx.78.12:22122
tracker_server = xxx.xxx.78.13:22122
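Before wiring the client into the application code below, it can help to verify that this configuration loads and that one of the trackers is reachable. A minimal sketch, assuming the file is read from src/main/resources (the class name and the path passed to ClientGlobal.init are illustrative only):

import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;

public class FdfsConfigCheck {
    public static void main(String[] args) throws Exception {
        // load fdfs_client.conf (path is an assumption; in the project it is resolved from the classpath)
        ClientGlobal.init("src/main/resources/fdfs_client.conf");
        // try to obtain a tracker connection; a null result (or an IOException) indicates the trackers are not reachable
        TrackerServer trackerServer = new TrackerClient().getConnection();
        System.out.println("tracker reachable: " + (trackerServer != null));
    }
}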
-
Write a file upload entity class
/**
 * @Author: maoqitian
 * @Date: 2018/10/26 0026 17:57
 * @Description: file upload entity
 */
public class FastDFSFileEntity {
    // file name
    private String name;
    // file content
    private byte[] content;
    // file type (extension)
    private String ext;
    // MD5 value
    private String md5;
    // author
    private String author;

    public FastDFSFileEntity(String name, byte[] content, String ext, String height, String width, String author) {
        super();
        this.name = name;
        this.content = content;
        this.ext = ext;
        this.author = author;
    }

    public FastDFSFileEntity(String name, byte[] content, String ext) {
        super();
        this.name = name;
        this.content = content;
        this.ext = ext;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public byte[] getContent() {
        return content;
    }

    public void setContent(byte[] content) {
        this.content = content;
    }

    public String getExt() {
        return ext;
    }

    public void setExt(String ext) {
        this.ext = ext;
    }

    public String getMd5() {
        return md5;
    }

    public void setMd5(String md5) {
        this.md5 = md5;
    }

    public String getAuthor() {
        return author;
    }

    public void setAuthor(String author) {
        this.author = author;
    }
}
-
Write the FastDFS operation class: a utility class that loads the initial configuration and tracker servers, and provides file upload, download, delete, and other operations.
import org.csource.common.NameValuePair;
import org.csource.fastdfs.*;
import org.slf4j.LoggerFactory;
import org.springframework.core.io.ClassPathResource;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * @Author: maoqitian
 * @Date: 2018/10/29 0029 9:30
 * @Description: FastDFS operation class
 */
public class FastDFSClient {

    private static org.slf4j.Logger logger = LoggerFactory.getLogger(FastDFSClient.class);

    // double-checked locking singleton
    private static volatile FastDFSClient mInstance;

    static {
        try {
            String filePath = new ClassPathResource("fdfs_client.conf").getFile().getAbsolutePath();
            ClientGlobal.init(filePath);
        } catch (Exception e) {
            logger.error("FastDFS Client Init Fail!", e);
        }
    }

    private FastDFSClient() {
    }

    public static FastDFSClient getInstance() {
        if (mInstance == null) {
            synchronized (FastDFSClient.class) {
                if (mInstance == null) {
                    mInstance = new FastDFSClient();
                }
            }
        }
        return mInstance;
    }

    /**
     * @author maoqitian
     * @description upload a file
     * @date 2018/10/29 0029 9:42
     * @param [fastDFSFileEntity]
     * @return java.lang.String[]
     **/
    public String[] upload(FastDFSFileEntity file) {
        logger.info("File Name: " + file.getName() + ", File Length: " + file.getContent().length);
        NameValuePair[] metalist = new NameValuePair[1];
        metalist[0] = new NameValuePair("author", file.getAuthor());
        long startTime = System.currentTimeMillis();
        String[] uploadResults = null;
        StorageClient storageClient = null;
        try {
            storageClient = getTrackerClient();
            uploadResults = storageClient.upload_file(file.getContent(), file.getExt(), metalist);
        } catch (IOException e) {
            logger.error("IO Exception when uploading the file:" + file.getName(), e);
        } catch (Exception e) {
            logger.error("Non IO Exception when uploading the file:" + file.getName(), e);
        }
        logger.info("upload_file time used:" + (System.currentTimeMillis() - startTime) + " ms");
        if (uploadResults == null) {
            if (storageClient != null) {
                logger.error("upload file fail, error code:" + storageClient.getErrorCode());
            }
            return null;
        }
        String groupName = uploadResults[0];
        String remoteFileName = uploadResults[1];
        logger.info("upload file successfully!!! " + "group_name:" + groupName + ", remoteFileName: " + remoteFileName);
        return uploadResults;
    }

    public FileInfo getFile(String groupName, String remoteFileName) {
        try {
            StorageClient storageClient = getTrackerClient();
            return storageClient.get_file_info(groupName, remoteFileName);
        } catch (IOException e) {
            logger.error("IO Exception: Get File from Fast DFS failed", e);
        } catch (Exception e) {
            logger.error("Non IO Exception: Get File from Fast DFS failed", e);
        }
        return null;
    }

    public InputStream downFile(String groupName, String remoteFileName) {
        try {
            StorageClient storageClient = getTrackerClient();
            byte[] fileByte = storageClient.download_file(groupName, remoteFileName);
            InputStream ins = new ByteArrayInputStream(fileByte);
            return ins;
        } catch (IOException e) {
            logger.error("IO Exception: Get File from Fast DFS failed", e);
        } catch (Exception e) {
            logger.error("Non IO Exception: Get File from Fast DFS failed", e);
        }
        return null;
    }

    /**
     * @Author maoqitian
     * @Description delete a file
     * @Date 2018/10/31 0031 11:19
     * @Param [remoteFileName]
     * @return -1 if failed, 0 if succeeded
     **/
    public int deleteFile(String remoteFileName) throws Exception {
        StorageClient storageClient = getTrackerClient();
        int i = storageClient.delete_file("group1", remoteFileName);
        logger.info("delete file successfully!!! " + i);
        return i;
    }

    public StorageServer[] getStoreStorages(String groupName) throws IOException {
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        return trackerClient.getStoreStorages(trackerServer, groupName);
    }

    public ServerInfo[] getFetchStorages(String groupName, String remoteFileName) throws IOException {
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        return trackerClient.getFetchStorages(trackerServer, groupName, remoteFileName);
    }

    public String getTrackerUrl() throws IOException {
        return "http://" + getTrackerServer().getInetSocketAddress().getHostString() + ":" + ClientGlobal.getG_tracker_http_port() + "/";
    }

    /**
     * @author maoqitian
     * @description obtain a StorageClient
     * @date 2018/10/29 0029 10:33
     * @param []
     * @return org.csource.fastdfs.StorageClient
     **/
    private StorageClient getTrackerClient() throws IOException {
        TrackerServer trackerServer = getTrackerServer();
        StorageClient storageClient = new StorageClient(trackerServer, null);
        return storageClient;
    }

    /**
     * @author maoqitian
     * @description obtain a TrackerServer
     * @date 2018/10/29 0029 10:34
     * @param []
     * @return org.csource.fastdfs.TrackerServer
     **/
    private TrackerServer getTrackerServer() throws IOException {
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        return trackerServer;
    }
}
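As a quick sanity check of the operation class above, the following sketch uploads a small byte array through FastDFSClient and reads its file information back. The class name, file name, and content are made up for illustration; only the upload and getFile methods defined above are used.

import org.csource.fastdfs.FileInfo;

public class FastDfsClientUsageSketch {
    public static void main(String[] args) throws Exception {
        // build a file entity from an in-memory byte array (illustrative content)
        byte[] content = "hello fastdfs".getBytes("UTF-8");
        FastDFSFileEntity entity = new FastDFSFileEntity("hello.txt", content, "txt");
        // upload through the singleton client; result[0] is the group, result[1] the remote file name
        String[] result = FastDFSClient.getInstance().upload(entity);
        if (result != null) {
            // fetch the file information back from the cluster as a simple verification
            FileInfo info = FastDFSClient.getInstance().getFile(result[0], result[1]);
            System.out.println("uploaded " + result[0] + "/" + result[1]
                    + ", size = " + (info == null ? "unknown" : info.getFileSize()));
        }
    }
}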
-
Write the controller, which receives the request, uploads the file, and returns the file access path (only a file upload example is shown here; file download, delete, and other functions can be written in the same way according to your own needs).
    /**
     * @author maoqitian
     * @description receive the upload request
     * @date 2018/10/30 0030 15:07
     * @param [file]
     * @return com.gxxmt.common.utils.ResultApi
     **/
    @RequestMapping("/upload")
    public ResultApi upload(@RequestParam("file") MultipartFile file) throws Exception {
        if (file.isEmpty()) {
            throw new RRException("Upload file cannot be empty");
        }
        String url;
        String domainUrl = OSSFactory.build().getDomainPath();
        logger.info("Domain name configured as " + domainUrl);
        if (StringUtils.isNotBlank(domainUrl)) {
            url = uploadFile(file, domainUrl);
            return ResultApi.success.put("url", url);
        } else {
            return ResultApi.error("Domain name configuration is empty. Please configure the object storage domain name first.");
        }
    }

    /**
     * @author maoqitian
     * @description upload a file to FastDFS
     * @date 2018/10/29 0029 11:11
     * @param [file] the file to upload
     * @param [domainName] the domain name
     * @return the file access path
     **/
    public String uploadFile(MultipartFile file, String domainName) throws IOException {
        String[] fileAbsolutePath = {};
        String fileName = file.getOriginalFilename();
        String ext = fileName.substring(fileName.lastIndexOf(".") + 1);
        byte[] file_buff = null;
        InputStream inputStream = file.getInputStream();
        if (inputStream != null) {
            int available = inputStream.available();
            file_buff = new byte[available];
            inputStream.read(file_buff);
        }
        inputStream.close();
        FastDFSFileEntity fastDFSFileEntity = new FastDFSFileEntity(fileName, file_buff, ext);
        try {
            fileAbsolutePath = FastDFSClient.getInstance().upload(fastDFSFileEntity);
            logger.info(fileAbsolutePath.toString());
        } catch (Exception e) {
            logger.error("upload file Exception!", e);
            throw new RRException("Error uploading file" + e);
        }
        if (fileAbsolutePath == null) {
            logger.error("upload file failed, please upload again!");
            throw new RRException("File upload failed. Please upload again.");
        }
        String path = domainName + fileAbsolutePath[0] + "/" + fileAbsolutePath[1];
        return path;
    }
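The controller above only exposes upload. As a complement, here is a hedged sketch of reusing the downFile method of the operation class to fetch a previously uploaded file and write it to a local path; the group name, remote file name, and output file are placeholder values, not part of the original project.

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class FastDfsDownloadSketch {
    public static void main(String[] args) throws Exception {
        // group name and remote file name come from a previous upload (values here are illustrative)
        InputStream in = FastDFSClient.getInstance().downFile("group1", "M00/00/00/example.png");
        if (in == null) {
            System.out.println("download failed");
            return;
        }
        // copy the downloaded bytes to a local file
        try (InputStream input = in; OutputStream out = new FileOutputStream("downloaded.png")) {
            byte[] buffer = new byte[4096];
            int len;
            while ((len = input.read(buffer)) != -1) {
                out.write(buffer, 0, len);
            }
        }
    }
}

A delete endpoint could be built the same way around the deleteFile method of the operation class.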
-
-
To test this method, upload an image
-
From the log output we can see that the image has been uploaded successfully
-
Test access to uploaded images
-
At this point, the FastDFS server cluster deployment and the integration of the client into SpringBoot are complete, and we can happily use the FastDFS service to store and back up our images and other files. If there is anything wrong in the article, please leave a comment and point it out to me so we can learn and improve together. If you find this article helpful, please give it a like and follow me.