Head figure from: www.zcool.com.cn/u/14943699
FastDFS
I. Basic understanding of FastDFS
1. FastDFS concept
FastDFS is an open source lightweight distributed file system. It manages files, including file storage, file synchronization, file access (file upload, file download), and so on. It solves the problems of mass storage and load balancing. It is especially suitable for online services with file as the carrier, such as photo album website, video website and so on.
FastDFS is customized for the Internet. It takes into account redundant backup, load balancing, and online capacity expansion, and emphasizes high availability and performance. It is easy to set up a high-performance file server cluster to provide file uploading and downloading services.
2. FastDFS application scenarios
FastDFS is suitable for storing user pictures, videos, documents, and other files. For simplicity, FastDFS does not split files into chunks, so it is not well suited for distributed computing scenarios.
3. FastDFS pros and cons
Advantages:
- Suitable for small and medium file storage (recommended range: 4KB < file_size < 500MB)
- Active/standby Tracker services to enhance system availability
- The system does not need to support POSIX, which reduces the complexity of the system and leads to faster processing
- Support master and slave files, support custom extension
- Supports an online capacity expansion mechanism to enhance system scalability
- Soft RAID is implemented to enhance the concurrent processing capability and data fault tolerance and recovery capability of the system
Disadvantages:
- Stored files can be read directly by anyone who knows the URL, so file access lacks security controls
- Through API downloads, there is a single point of performance bottleneck
- Resumable transfer (breakpoint continuation) is not supported, which is painful for large files
- The synchronization mechanism does not support file correctness verification, which reduces system availability
- Does not support POSIX universal interface access, low versatility
- For file synchronization across the public network, there is a relatively large delay, and the corresponding fault tolerance policy needs to be applied
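Since the synchronization mechanism does not verify file correctness, a client can guard against silent corruption itself by comparing a checksum computed before upload with one computed after download. A minimal sketch in Java (the CRC32 strategy is our suggestion, not something FastDFS provides; the class name is illustrative):

```java
import java.util.zip.CRC32;

public class FileChecksum {
    // Compute a CRC32 checksum over file bytes. Comparing the value computed
    // before upload with one computed after download detects corruption.
    public static long crc32(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }
}
```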
4. Related concepts
The FastDFS server has three roles: Tracker Server, Storage Server, and Client.
Tracker server: a server that performs scheduling and load balancing. Tracker is the coordinator of FastDFS. It is responsible for managing all storage servers and groups. After startup, each storage will connect to Tracker, inform it of its own group and other information, and maintain periodic heartbeat. Create a mapping table for group==>[Storage Server list].
Storage Server: A storage server (also called a storage node or data server) on which files and meta data are stored. Storage Server manages files directly using OS file system calls.
Client: A client that initiates service requests and uses TCP/IP to exchange data with the tracker server or storage node through a dedicated interface. FastDFS provides users with basic file access interfaces, such as Upload, Download, Append, and Delete, in the form of client libraries.
II. FastDFS principle
1. FastDFS system structure diagram
- FastDFS consists of Tracker and Storage. Storage stores the files; Tracker performs load balancing and resource scheduling.
- Storage can be divided into multiple groups. The files saved in each group are different. Each group has multiple members, and every member in a group saves the same content.
- FastDFS file storage advantages: FastDFS can cope with massive file storage on the Internet. When there are too many files, the system can be horizontally expanded at any time. In addition, the cluster ensures there is no single point of failure, so users do not lose access to files because one server goes down.
2. FastDFS workflow
File upload
• 1. The client asks the tracker which storage server it should upload to;
• 2. The tracker returns an available storage server;
• 3. The client communicates with that storage server to upload the file.
File download
- The client asks the tracker which storage server holds the file, passing the file id (group name and file name) as the parameter;
- The tracker returns an available storage server;
- The client communicates with that storage server to download the file.
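The file id passed in the download flow is simply the group name joined to the remote file name. A minimal sketch of splitting it (plain string handling, no FastDFS dependency; the class name is ours):

```java
public class FdfsFileId {
    // A FastDFS file id looks like "group1/M00/00/00/xxx.jpg": the part before
    // the first '/' is the group name; the rest is the remote file name on storage.
    public static String[] split(String fileId) {
        int slash = fileId.indexOf('/');
        if (slash < 0) {
            throw new IllegalArgumentException("Not a valid file id: " + fileId);
        }
        return new String[]{fileId.substring(0, slash), fileId.substring(slash + 1)};
    }
}
```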
III. Install FastDFS on Linux
1. Preparation
Install GCC (required at compile time)
yum install -y gcc gcc-c++
Install libevent (required at runtime)
yum -y install libevent
Create folders and upload files
mkdir -p /fileservice/fast
cd /fileservice/fast
Note: you need to upload relevant files to this folder.
2. Install libfastCommon
# Unzip the files
tar -zxvf libfastcommon-1.0.35.tar.gz
# Enter the directory
cd libfastcommon-1.0.35
# Compile
./make.sh
#The installation
./make.sh install
3. Install Fastdfs
Install dependencies
yum install perl pcre pcre-devel zlib zlib-devel openssl openssl-devel -y
Install fastdfs
# Unpack fastdfs
tar -zxvf fastdfs-5.11.tar.gz
# Enter the directory
cd fastdfs-5.11
# Compile
./make.sh
#The installation
./make.sh install
After successful installation
View executable scripts for tracker and storage
ll /etc/init.d/ | grep fdfs
Preparing configuration Files
By default, they are under /etc/fdfs/
cd /etc/fdfs/
Copying configuration Files
cp client.conf.sample client.conf
cp storage.conf.sample storage.conf
cp storage_ids.conf.sample storage_ids.conf
cp tracker.conf.sample tracker.conf
Create the directory the tracker will use for storing data and logs
mkdir -p /home/fastdfs/tracker
Create the directory storage will use for storing data and logs
mkdir -p /home/fastdfs/storage
Configure and start the tracker
configuration
#Switch to /etc/fdfs/
cd /etc/fdfs/
#Modify tracker.conf
vim tracker.conf
Change the default base_path to the newly created directory, /home/fastdfs/tracker
Changes to the tracker.conf file
# Change the default base_path. Note: the root folder must exist; subfolders are created automatically
base_path=/home/fastdfs/tracker
Start the tracker
service fdfs_trackerd start
Note: After the service is successfully started, data and logs are generated in the directory specified by base_path.
Configure and start storage
configuration
#Switch directory
cd /etc/fdfs/
#Modify storage.conf
vim storage.conf
Changes to the storage.conf file
# Group name
group_name=group1
# Note: the root folder must exist; subfolders are created automatically
base_path=/home/fastdfs/storage
# Store file location (store_path)
store_path0=/home/fastdfs/storage
# Note: if multiple disks are mounted, configure
# store_path1=...
# store_path2=...
# ...
# Tracker server address
tracker_server=192.168.10.100:22122
# tracker_server=...
# ...
Start the storage
service fdfs_storaged start
After startup, enter the /home/fastdfs/storage/data directory; it looks like this:
cd /home/fastdfs/storage/data/
4. Use FastDFS built-in tools to test
configuration
#Switch directory to
cd /etc/fdfs/
#Modify client.conf
vim client.conf
Modify base_path and tracker_server as follows
# Base path
base_path=/home/fastdfs/storage
# Tracker server
tracker_server=192.168.10.100:22122
# tracker_server=...
# tracker_server=...
Upload an image to the Linux /root/ directory
test
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/image-20200708110242109.jpg
If a file id is returned as above, FastDFS is set up successfully.
The address of the file in the image above is http://192.168.10.100:80/group1/M00/00/00/wKgKZF8FdfiAIvwVAAETxoCDaLI533.jpg, which corresponds to the file /home/fastdfs/storage/data/00/00/wKgKZF8FdfiAIvwVAAETxoCDaLI533.jpg on the server.
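The URL-to-disk mapping above follows a simple rule: the group name selects the group, "M00" is a virtual disk marker corresponding to store_path0, and the rest of the path lives under that store path's data directory. A minimal sketch of the mapping (assuming a single store path, as configured in this article; the class name is ours):

```java
public class StoragePathMapper {
    // Map a file id such as "group1/M00/00/00/x.jpg" to its on-disk location,
    // assuming M00 corresponds to store_path0 and files live under <store_path>/data.
    public static String toLocalPath(String storePath0, String fileId) {
        String remote = fileId.substring(fileId.indexOf('/') + 1);   // "M00/00/00/x.jpg"
        String subPath = remote.substring(remote.indexOf('/') + 1);  // "00/00/x.jpg"
        return storePath0 + "/data/" + subPath;
    }
}
```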
Note: this image cannot yet be downloaded over HTTP. You can use FastDFS's built-in web server or pair FastDFS with another web server such as Nginx.
5. Why does FastDFS incorporate Nginx?
HTTP service can be provided through FastDFS's built-in HTTP server. However, the built-in HTTP service cannot provide high-performance features such as load balancing, so the author of FastDFS, Yu Qing (an architect at Taobao), provides fastdfs-nginx-module, an Nginx module for FastDFS.
6. Install fastdfs-nginx-module
configuration
# Unzip fastdfs-nginx-module
tar -zxvf fastdfs-nginx-module-1.20.tar.gz
# Switch directory
cd fastdfs-nginx-module-1.20/src
# Modify config
vim config
Modify the content
ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"
Copy the mod_fastdfs.conf configuration and modify it
#Copy mod_fastdfs.conf from fastdfs-nginx-module/src to /etc/fdfs/
cp mod_fastdfs.conf /etc/fdfs/
#Modify the contents of /etc/fdfs/mod_fastdfs.conf
vim /etc/fdfs/mod_fastdfs.conf
The modifications are as follows:
# Tracker server
tracker_server=192.168.10.100:22122
# Include the group name in the URL
url_have_group_name=true
# Specify the file storage path (the configured store path)
store_path0=/home/fastdfs/storage
Copying configuration Files
cp http.conf mime.types /etc/fdfs/
7. Nginx installation
configuration
cd /fileservice/fast/
# Unpack nginx
tar -zxvf nginx-1.15.2.tar.gz
# Go to the directory where nginx was decompressed
cd nginx-1.15.2/
# Configure with the fastdfs module added
./configure --prefix=/opt/nginx --sbin-path=/usr/bin/nginx --add-module=/fileservice/fast/fastdfs-nginx-module-1.20/src
make && make install
#Modify the nginx configuration
cd /opt/nginx/conf
vim nginx.conf
Modify the nginx configuration
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
# root html;
# index index.html index.htm;
ngx_fastdfs_module;
    }
}
Start nginx
cd /usr/bin/
#Start the
./nginx
IV. Build FastDFS with Docker
1. Pull the image and start it
#Start the docker
systemctl start docker
# -v ${HOME}/fastdfs:/var/local/fdfs mounts the host's ${HOME}/fastdfs directory to /var/local/fdfs in the container,
# so uploaded files are persisted to ${HOME}/fastdfs/storage/data
# The IP address is the public IP address of the server or VM
# -e WEB_PORT=80 specifies the nginx port
docker run -d --restart=always --privileged=true --net=host --name=fastdfs -e IP=192.168.10.100 -e WEB_PORT=80 -v ${HOME}/fastdfs:/var/local/fdfs registry.cn-beijing.aliyuncs.com/tianzuo/fastdfs
2. Test upload
#Into the container
docker exec -it fastdfs /bin/bash
#Create a file
echo "Hello FastDFS!" > index.html
#Test file upload
fdfs_test /etc/fdfs/client.conf upload index.html
3. Test access
V. Java file upload
1. Create Maven project
2. Introduce dependencies
<!-- fastdfs-client-java -->
<dependency>
    <groupId>net.oschina.zcx7878</groupId>
    <artifactId>fastdfs-client-java</artifactId>
    <version>1.27.0.0</version>
</dependency>
<!-- spring-core -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.3.25.RELEASE</version>
</dependency>
3. Create fdfs_client.conf
The configuration file
#Connection timeout
connect_timeout=30
#Network timeout
network_timeout=60
#The root directory of the tracker
base_path=/home/fastdfs/tracker
#Change to the IP address of your own server
tracker_server=192.168.10.100:22122
log_level=info
use_connection_pool=false
connection_pool_max_idle_time=3600
load_fdfs_parameters_from_tracker=false
use_storage_id=false
storage_ids_filename=storage_ids.conf
http.tracker_server_port=80
4. Write test classes
public static void main(String[] args) throws Exception {
// The path to the file
String testFilePath = "C:/Users/hxxiapgy/Desktop/fastdfs architecture.png";
// Get fdfs_client.conf
String filePath = new ClassPathResource("fdfs_client.conf").getFile().getAbsolutePath();
// 1. Load the configuration file
ClientGlobal.init(filePath);
// 2. Create a TrackerClient
TrackerClient trackerClient = new TrackerClient();
// 3. Use the TrackerClient object to create a connection and obtain a TrackerServer object.
TrackerServer trackerServer = trackerClient.getConnection();
// 4. Declare StorageServer
StorageServer storageServer = null;
// 5. Create a StorageClient object with two parameters, TrackerServer object and StorageServer reference
StorageClient storageClient = new StorageClient(trackerServer, storageServer);
// 6. Upload the file through storangeClient and return the group name and file name
String[] strings = storageClient.upload_file(testFilePath, "png", null);
// Prints the group name and file name
for (String string : strings) {
System.out.println(string);
}
System.out.println("File uploaded successfully!");
}
VI. SpringBoot file upload
1. Method 1:
Create a project
Introduce dependencies
<!-- FastDFS dependency -->
<dependency>
<groupId>net.oschina.zcx7878</groupId>
<artifactId>fastdfs-client-java</artifactId>
<version>1.27.0.0</version>
</dependency>
<!-- commons-lang3 -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
</dependency>
Modify the configuration file
fastdfs:
  connect_timeout_in_seconds: 120
  network_timeout_in_seconds: 120
  charset: UTF-8
  tracker_servers: 192.168.10.100:22122
Write UploadService
/**
 * @description: File upload
 * @author: hxxiapgy
 * @date: 2020/7/9 20:02
 */
@Service
public class UploadService {
@Value("${fastdfs.connect_timeout_in_seconds}")
private Integer connectTimeout;
@Value("${fastdfs.network_timeout_in_seconds}")
private Integer networkTimeout;
@Value("${fastdfs.charset}")
private String charset;
@Value("${fastdfs.tracker_servers}")
private String trackerServers;
public Map<String,Object> upload(MultipartFile multipartFile){
// Check whether the file exists
        if (multipartFile == null) {
            throw new RuntimeException("The file cannot be empty!");
        }
// Upload the file to fastdfs
String fileId = fdfsUpload(multipartFile);
// Check whether fileId is null
        if (fileId == null) {
            throw new RuntimeException("File upload failed!");
        }
Map<String,Object> map = new HashMap<>();
        map.put("code", 1);
        map.put("msg", "File uploaded successfully");
map.put("fileId", fileId);
return map;
}
    /**
     * Upload the file to FastDFS
     * @return {@link String}
     * @author hxxiapgy
     * @date 2020/7/9 20:08
     */
private String fdfsUpload(MultipartFile multipartFile){
        // Initialize the FastDFS environment
        initFdfsConfid();
// 1. Create a trackerClient
TrackerClient trackerClient = new TrackerClient();
try {
// 2. Obtain trackerService
TrackerServer trackerServer = trackerClient.getConnection();
// 3. Declare StorageService
StorageServer storageServer = null;
// 4. Obtain StorageClient
StorageClient1 storageClient1 = new StorageClient1(trackerServer,storageServer);
// 5. Obtain the file suffix
String originalFilename = multipartFile.getOriginalFilename();
// The file is abnormal
if (StringUtils.isBlank(originalFilename)) {
throw new RuntimeException("File reading exception!");
}
String extName = originalFilename.substring(originalFilename.lastIndexOf(".") + 1);
// 6. Upload the file through the StorageService and return the group name and file name
String fileId = storageClient1.upload_file1(multipartFile.getBytes(), extName , null);
return fileId;
} catch (Exception e) {
e.printStackTrace();
            return null;
        }
    }

    /**
     * Load the FastDFS environment
     * @author hxxiapgy
     * @date 2020/7/9
     */
    private void initFdfsConfid() {
try {
// Set connection timeout
ClientGlobal.setG_connect_timeout(connectTimeout);
// Set the network timeout
ClientGlobal.setG_network_timeout(networkTimeout);
// Set the character encoding
ClientGlobal.setG_charset(charset);
            // Set trackerServers
ClientGlobal.initByTrackers(trackerServers);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Write UploadController
/**
 * @description: File upload
 * @author: hxxiapgy
 * @date: 2020/7/9
 */
@Controller
public class UploadController {
@Resource
private UploadService uploadService;
@PostMapping("/upload")
@ResponseBody
public Map<String,Object> upload(MultipartFile multipartFile){
Map<String, Object> map = uploadService.upload(multipartFile);
return map;
}
@RequestMapping("/index")
    public String index() {
        return "index.html";
    }
}
Write the test entry index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Test file upload</title>
</head>
<body>
<h3>File upload</h3>
<hr/>
<form action="/upload" method="post" enctype="multipart/form-data">
<input type="file" name="multipartFile"/>
<input type="submit" value="Upload"/>
</form>
</body>
</html>
test
2. Method 2:
Create a project
Introduce dependencies
<!-- fastdfs client -->
<dependency>
<groupId>com.github.tobato</groupId>
<artifactId>fastdfs-client</artifactId>
<version>1.26.7</version>
</dependency>
Modify the configuration file
fdfs:
  so-timeout: 2500 # read timeout
  connect-timeout: 600 # connection timeout
  thumb-image: # thumbnail settings
    width: 100
    height: 100
  tracker-list: # tracker server address list
    - 192.168.10.100:22122
upload:
  base-url: http://192.168.10.100/
  allow-types:
    - image/jpeg
    - image/png
    - image/bmp
    - image/gif
Write the FastDFS properties class
/**
 * @description: FastDFS upload properties class
 * @author: hxxiapgy
 * @date: 2020/7/9 21:58
 */
@ConfigurationProperties(prefix = "upload")
@Data
public class UploadProperties {
private String baseUrl;
private List<String> allowTypes;
    public String getBaseUrl() {
return baseUrl;
}
public void setBaseUrl(String baseUrl) {
this.baseUrl = baseUrl;
}
    public List<String> getAllowTypes() {
return allowTypes;
}
    public void setAllowTypes(List<String> allowTypes) {
        this.allowTypes = allowTypes;
    }
}
Write UploadService
/**
 * @description: File upload
 * @author: hxxiapgy
 * @date: 2020/7/9 22:01
 */
@Service
@EnableConfigurationProperties(UploadProperties.class)
public class UploadService {
private Log log= LogFactory.getLog(UploadService.class);
@Resource
private FastFileStorageClient storageClient;
@Resource
private UploadProperties uploadProperties;
    /**
     * Function description: file upload
     * @param multipartFile file to be uploaded
     * @return {@link String} returns the file path
     * @author hxxiapgy
     * @date 2020/7/9 22:05
     */
public String upload(MultipartFile multipartFile){
// 1. Check whether the uploaded file is empty
if (multipartFile == null){
log.error("File does not exist");
throw new RuntimeException("File is empty!");
}
// 1. Verify the file type
// Get the type of file to upload
String contentType = multipartFile.getContentType();
        if (!uploadProperties.getAllowTypes().contains(contentType)) {
            log.info("File type not supported!");
            throw new RuntimeException("File type not supported!");
        }
try {
// 2. Verify file contents
// Read the file
BufferedImage bufferedImage = ImageIO.read(multipartFile.getInputStream());
if (bufferedImage == null || bufferedImage.getWidth() == 0 || bufferedImage.getHeight() == 0){
log.error("There was a problem uploading files");
                throw new RuntimeException("Problem uploading files!");
            }
        } catch (IOException e) {
            log.error("Failed to verify file contents", e);
            throw new RuntimeException("Failed to verify file contents: " + e.getMessage());
}
// 3. Obtain the extension name
String extension = StringUtils.substringAfterLast(multipartFile.getOriginalFilename(),".");
try {
// 4. Upload the file
StorePath storePath = storageClient.uploadFile(multipartFile.getInputStream(), multipartFile.getSize(), extension, null);
// 5. Return to path
return uploadProperties.getBaseUrl() + storePath.getFullPath();
} catch (Exception e) {
            log.error("File upload failed!", e);
            throw new RuntimeException("File upload failed: " + e.getMessage());
        }
    }
}
Write UploadController
/**
 * @description: File upload
 * @author: hxxiapgy
 * @date: 2020/7/9
 */
@Controller
public class UploadController {
@Resource
private UploadService uploadService;
    /**
     * Function description: file upload
     * @param multipartFile file to be uploaded
     * @return {@link Map<String, Object>}
     * @author hxxiapgy
     * @date 2020/7/9
     */
@PostMapping("/upload")
@ResponseBody
public Map<String,Object> upload(MultipartFile multipartFile){
String filePath = uploadService.upload(multipartFile);
Map<String,Object> map = new HashMap<>();
map.put("code"."200");
map.put("msg"."File uploaded successfully");
map.put("filePath", filePath);
        return map;
    }
}
Write the test entry
Same as method one.