Foreword

How do you build a highly available distributed file system based on FastDFS? Don't worry: in this article we will build one step by step.

FastDFS introduction

Reference: www.oschina.net/p/fastdfs

FastDFS is an open-source distributed file system. It manages files, covering file storage, file synchronization, and file access (upload and download), and solves the problems of large-capacity storage and load balancing. It is especially suitable for online services that use files as their payload, such as photo album and video websites.

A FastDFS server plays one of two roles: tracker or storage. The tracker handles scheduling and acts as a load balancer for access; storage nodes store files and implement all file-management functions: storage, synchronization, and the access interface. FastDFS also manages file metadata, i.e. the attributes of a file, expressed as key-value pairs. For example, in width=1024 the key is width and the value is 1024. File metadata is a list of file attributes and can contain multiple key-value pairs. The FastDFS system structure is shown below:

Both the tracker and the storage tier can consist of one or more servers, and servers can be added to or taken offline from either tier at any time without affecting online services. All tracker servers are peers, and their number can be increased or reduced at any time according to load.

To support large capacity, storage servers are organized into volumes (or groups). A storage system consists of one or more volumes whose file sets are independent of each other; the capacity of the whole system is the sum of the capacities of all volumes. A volume consists of one or more storage servers that all hold the same files, so the servers within a volume provide redundant backup and load balancing. When a server is added to a volume, the system automatically synchronizes the existing files to it; once synchronization completes, the new server is switched into online service. When storage space runs low, you can dynamically add volumes: just add one or more servers and configure them as a new volume, which increases the capacity of the whole storage system. A file identifier in FastDFS has two parts: the volume name and the file name.

FastDFS file upload interaction process:

(1) The client asks the tracker which storage server to upload to; no extra parameters are needed;

(2) The tracker returns an available storage;

(3) The client communicates with the storage to upload the file.

The client initiates the upload by connecting to the configured port of a tracker server. Based on the information it holds, the tracker selects a storage server and returns that storage server's address and related information to the client. The client then connects to the selected storage server and sends it the file.

FastDFS file download interaction process:

(1) The client asks the tracker which storage server holds the file, passing the file ID (volume name plus file name);

(2) The tracker returns an available storage;

(3) The client communicates with the storage to download the file.
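Once the cluster is built (see the installation steps below), steps (1)-(3) are performed for you by the bundled client tools. A minimal sketch, assuming the client configuration file /etc/fdfs/client.conf set up later in this article; the file ID is illustrative:

```shell
# Hedged sketch of the download flow; the commands require a configured
# FastDFS client (see the sections that follow), so they are guarded.
conf=/etc/fdfs/client.conf
file_id="group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz"
if command -v fdfs_download_file >/dev/null 2>&1; then
    # steps (1)+(2): the tool asks a tracker from $conf which storage holds the file
    fdfs_file_info "${conf}" "${file_id}"
    # step (3): download the file from the chosen storage server
    fdfs_download_file "${conf}" "${file_id}" /tmp/fetched.tar.gz
else
    echo "FastDFS client tools are not installed yet"
fi
```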

FastDFS cluster planning

Tracker server 1: 192.168.50.135 (liuyazhuang-tracker-1)
Tracker server 2: 192.168.50.136 (liuyazhuang-tracker-2)
Storage server 1: 192.168.50.137 (liuyazhuang-storage-group1-1)
Storage server 2: 192.168.50.138 (liuyazhuang-storage-group1-2)
Storage server 3: 192.168.50.139 (liuyazhuang-storage-group2-1)
Storage server 4: 192.168.50.140 (liuyazhuang-storage-group2-2)

Environment: CentOS 6.5
User: root
Data directory: /fastdfs (note: set the data directory according to your data disk mount path)

Installation packages (download link: download.csdn.net/detail/l102…):
FastDFS_v5.05.tar.gz
libfastcommon-master.zip
fastdfs-nginx-module_v1.16.tar.gz
nginx-1.6.2.tar.gz
fastdfs_client_java_v1.25.tar.gz

Source code address: github.com/happyfish10… Download address: sourceforge.net/projects/fa… Official forum: bbs.chinaunix.net/forum-240-1…

FastDFS installation (perform the following steps on all tracker and storage servers)

1. Install the dependency packages required for compilation

# yum install make cmake gcc gcc-c++

2. Install libfastcommon

Github.com/happyfish10…

(1) Upload or download libfastcommon-master.zip to /usr/local/src and decompress it

# cd /usr/local/src/
# unzip libfastcommon-master.zip
# cd libfastcommon-master

(2) Compile and install

# ./make.sh
# ./make.sh install

By default, libfastcommon installs the following libraries:

/usr/lib64/libfastcommon.so
/usr/lib64/libfdfsclient.so

(3) Establish soft links

Because the FastDFS main program sets the lib directory to /usr/local/lib, you need to create a soft link.

# ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
# ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
# ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
# ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so

3. Install FastDFS

Github.com/happyfish10…

(1) Upload or download the FastDFS source package (fastdfs_v5.05.tar. gz) to the /usr/local/src directory and decompress it

# cd /usr/local/src/
# tar -zxvf FastDFS_v5.05.tar.gz
# cd FastDFS

(2) Compile and install (make sure libfastcommon has been installed successfully first)

# ./make.sh
# ./make.sh install

The default installation mode is adopted. The corresponding files and directories are as follows:

A. The service script is in:

/etc/init.d/fdfs_storaged
/etc/init.d/fdfs_tracker

B. The sample configuration files are in:

/etc/fdfs/client.conf.sample
/etc/fdfs/storage.conf.sample
/etc/fdfs/tracker.conf.sample

C. The command-line tools are installed in /usr/bin:

fdfs_appender_test
fdfs_appender_test1
fdfs_append_file
fdfs_crc32
fdfs_delete_file
fdfs_download_file
fdfs_file_info
fdfs_monitor
fdfs_storaged
fdfs_test
fdfs_test1
fdfs_trackerd
fdfs_upload_appender
fdfs_upload_file
stop.sh
restart.sh

(4) The FastDFS service scripts assume the commands live in /usr/local/bin, but they are actually installed in /usr/bin. You can verify this by listing the fdfs commands:

# cd /usr/bin/
# ls | grep fdfs

Therefore, you need to change the command path of the FastDFS service script to /usr/bin in the /etc/init.d/fdfs_storaged and /etc/init.d/fdfs_tracker scripts.

# vi /etc/init.d/fdfs_trackerd
# vi /etc/init.d/fdfs_storaged
In each file, use the vi find-and-replace command to change every occurrence: :%s+/usr/local/bin+/usr/bin
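If you prefer a non-interactive edit, the same replacement can be done with sed. A sketch, verified on a throwaway copy first (the sample PRG= line is an assumption about the script's contents):

```shell
# Demonstrate the /usr/local/bin -> /usr/bin replacement on a temporary copy.
script_copy="$(mktemp)"
printf 'PRG=/usr/local/bin/fdfs_trackerd\n' > "${script_copy}"
sed -i 's+/usr/local/bin+/usr/bin+g' "${script_copy}"
cat "${script_copy}"   # prints: PRG=/usr/bin/fdfs_trackerd
# Once verified, apply it to the real service scripts:
# sed -i 's+/usr/local/bin+/usr/bin+g' /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged
```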

Note: the steps above are required on both tracker and storage servers; trackers and storages only diverge in the configuration steps that follow.

Configure the FastDFS tracker (192.168.50.135, 192.168.50.136)

1. Copy the FastDFS tracker sample configuration file and rename it

# cd /etc/fdfs/
# cp tracker.conf.sample tracker.conf

2. Edit the tracker configuration file

# vi /etc/fdfs/tracker.conf

The modified contents are as follows:

disabled=false              # enable this configuration file
port=22122                  # tracker default port
base_path=/fastdfs/tracker  # tracker data and log directory

(Keep the defaults for the other parameters. For details, see the official documentation: http://bbs.chinaunix.net/thread-1941456-1-1.html)

3. Create the base data directory (see base_path configuration)

# mkdir -p /fastdfs/tracker

4. Open tracker port in firewall (default: 22122)

# vi /etc/sysconfig/iptables

Add the following port lines:

## FastDFS Tracker Port
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22122 -j ACCEPT

Restart the firewall:

# service iptables restart

5. Start the Tracker

# /etc/init.d/fdfs_trackerd start

You can use the following two methods to check whether the tracker is successfully started:

(1) Check the listening status of port 22122

# netstat -unltp | grep fdfs

(2) Run the following command to view the startup log of the tracker and check whether there is an error

# tail -100f /fastdfs/tracker/logs/trackerd.log

6. Stop the tracker

# /etc/init.d/fdfs_trackerd stop

7. Set the FastDFS tracker to start on boot

# vi /etc/rc.d/rc.local

Add the following

## FastDFS Tracker
/etc/init.d/fdfs_trackerd start

Configure FastDFS storage (192.168.50.137, 192.168.50.138, 192.168.50.139, 192.168.50.140)

1. Copy the FastDFS storage sample configuration file and rename it

# cd /etc/fdfs/
# cp storage.conf.sample storage.conf

2. Edit the storage configuration file

The storage.conf of a storage node in group1 is used as an example:

# vi /etc/fdfs/storage.conf

The modified contents are as follows:

disabled=false               # enable this configuration file
group_name=group1            # group name; nodes in the second group use group2
port=23000                   # storage port; must be the same within a group
base_path=/fastdfs/storage   # storage data and log directory
store_path0=/fastdfs/storage # storage path
store_path_count=1           # number of store paths; must match the number of store_pathN entries
tracker_server=192.168.50.135:22122  # tracker server IP and port
tracker_server=192.168.50.136:22122  # add one tracker_server line per tracker
http.server_port=8888        # HTTP port

(Keep the defaults for the other parameters. For details, see the official documentation: bbs.chinaunix.net/thread-1941…)

3. Create the base data directory (see base_path configuration)

# mkdir -p /fastdfs/storage

4. Open storage port in firewall (default: 23000)

# vi /etc/sysconfig/iptables

Add the following port lines:

## FastDFS Storage Port
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT

Restart the firewall:

# service iptables restart

5. Start Storage

# /etc/init.d/fdfs_storaged start

After startup, the data and logs directories are created under /fastdfs/storage.

While the storages start, run tail -f /fastdfs/storage/logs/storaged.log on each node to watch the storage log; you will see each storage node connect to the trackers and report which tracker is the leader, as well as log entries about the other nodes in the same group. Then check the listening status of port 23000:

# netstat -unltp | grep fdfs

After all Storage nodes are started, you can run the following command on any Storage node to view cluster information:

# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf

If every storage node in the output is in the ACTIVE state, the cluster is working correctly.
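To check the node states without reading the whole monitor report, you can filter its output. A sketch: the sample lines below only mimic the format of fdfs_monitor's status lines and are an assumption, not captured output:

```shell
# Count lines that end in ACTIVE; WAIT_SYNC nodes are still synchronizing.
sample_output='ip_addr = 192.168.50.137  ACTIVE
ip_addr = 192.168.50.138  ACTIVE
ip_addr = 192.168.50.139  WAIT_SYNC'
echo "${sample_output}" | grep -c 'ACTIVE$'   # prints: 2
# On a live cluster, pipe the real command instead:
# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf | grep 'ACTIVE$'
```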

6. Stop storage

# /etc/init.d/fdfs_storaged stop

7. Set FastDFS storage to start on boot

# vi /etc/rc.d/rc.local

Add:

## FastDFS Storage
/etc/init.d/fdfs_storaged start

File Upload Test (192.168.50.135)

1. Modify the client configuration file on the tracker server

# cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
# vi /etc/fdfs/client.conf
Modify the following:
base_path=/fastdfs/tracker
tracker_server=192.168.50.135:22122
tracker_server=192.168.50.136:22122

2. Run the following command to upload files

# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /usr/local/src/FastDFS_v5.05.tar.gz

A file ID is returned:

group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz

(If the preceding file ID is returned, the file is successfully uploaded.)
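The file ID is simply the volume (group) name followed by the file name, so it can be split with plain shell parameter expansion. A sketch using the example ID above:

```shell
file_id="group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz"
group_name="${file_id%%/*}"   # everything before the first "/" -> volume name
remote_file="${file_id#*/}"   # everything after the first "/"  -> file name
echo "group: ${group_name}"   # prints: group: group1
echo "file: ${remote_file}"   # prints: file: M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
```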

Install Nginx on storage nodes (192.168.50.137, 192.168.50.138, 192.168.50.139, and 192.168.50.140)

1. What fastdfs-nginx-module does

FastDFS stores files on the storage servers via the tracker, but files must be replicated between the storage servers of the same group, which introduces a synchronization delay. Suppose the tracker directs an upload to 192.168.50.137 and the file ID is returned to the client; the FastDFS replication mechanism then synchronizes the file to 192.168.50.138 in the same group. If, before synchronization completes, the client uses the file ID to fetch the file from 192.168.50.138, the file cannot be found. fastdfs-nginx-module redirects such requests to the source server, avoiding client errors caused by replication delay. (The unpacked fastdfs-nginx-module is compiled into Nginx during the Nginx installation below.)

2. Upload the fastdfs-nginx-module_v1.16.tar.gz file to /usr/local/src and decompress it

# cd /usr/local/src/
# tar -zxvf fastdfs-nginx-module_v1.16.tar.gz

3. Modify the fastdfs-nginx-module config file

# vi /usr/local/src/fastdfs-nginx-module/src/config
Change
CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"
to
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"

(Note: this path modification is very important, otherwise nginx will report errors when compiling.)

4. Upload the Nginx installation package

Upload the current stable version of Nginx(nginx-1.13.0.tar.gz) to the /usr/local/src directory

5. Install the dependency packages required to compile Nginx

# yum install gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel

6. Install Nginx (with the fastdfs-nginx-module)

# cd /usr/local/src/
# tar -zxvf nginx-1.13.0.tar.gz
# cd nginx-1.13.0
# ./configure --prefix=/usr/local/nginx --add-module=/usr/local/src/fastdfs-nginx-module/src
# make && make install

7. Copy the fastdfs-nginx-module configuration file to /etc/fdfs and modify it

# cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
# vi /etc/fdfs/mod_fastdfs.conf

(1) mod_fastdfs.conf for the group1 storage nodes:

connect_timeout=10
base_path=/tmp
tracker_server=192.168.50.135:22122
tracker_server=192.168.50.136:22122
storage_server_port=23000
group_name=group1
url_have_group_name=true
store_path0=/fastdfs/storage
group_count=2
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/fastdfs/storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/fastdfs/storage

(2) The mod_fastdfs.conf of the group2 storage nodes differs from the group1 configuration only in group_name:

group_name=group2

8. Copy the FastDFS configuration file to /etc/fdfs

# cd /usr/local/src/FastDFS/conf
# cp http.conf mime.types /etc/fdfs/

9. Create a soft link M00 in /fastdfs/storage pointing to the directory where the data is stored

# ln -s /fastdfs/storage/data/ /fastdfs/storage/data/M00

10. Configure Nginx (a concise nginx.conf example)

# vi /usr/local/nginx/conf/nginx.conf
user root;
worker_processes 1;
events {
	worker_connections 1024;
}
http {
	include mime.types;
	default_type application/octet-stream;
	sendfile on;
	keepalive_timeout 65;
	server {
		listen 8888;
		server_name localhost;
		location ~/group([0-9])/M00 {
			#alias /fastdfs/storage/data;
			ngx_fastdfs_module;
		}
		error_page 500 502 503 504 /50x.html;
		location = /50x.html {
			root html;
		}
	}
}

Note: A. The listen port is 8888 because http.server_port=8888 in /etc/fdfs/storage.conf (the default). If you want to use port 80, change both accordingly.

B. If storage has multiple groups, the access path carries the group name, e.g. /group1/M00/00/00/xxx, which corresponds to the Nginx configuration:

location ~/group([0-9])/M00 {
	ngx_fastdfs_module;
} 

C. If you get a 404, change user nobody in the first line of nginx.conf to user root and restart Nginx.

11. Open Nginx port 8888 in the firewall

# vi /etc/sysconfig/iptables
Add:
## Nginx Port
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8888 -j ACCEPT
Restart the firewall:
# service iptables restart

12. Start Nginx

# /usr/local/nginx/sbin/nginx
ngx_http_fastdfs_set pid=xxx

(Restart Nginx with: /usr/local/nginx/sbin/nginx -s reload)

Set Nginx to boot upon startup

# vi /etc/rc.local

Add:

/usr/local/nginx/sbin/nginx

13. Use a browser to access the files uploaded during the test

http://192.168.50.137:8888/group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
http://192.168.50.139:8888/group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz
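The browser URLs above are simply the storage node's HTTP address joined with the file ID. A sketch that builds the URL and, on a live cluster, checks it with curl (the address and file ID are the example values from this article):

```shell
storage_addr="192.168.50.137:8888"   # storage node Nginx address
file_id="group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz"
url="http://${storage_addr}/${file_id}"
echo "${url}"
# curl -I "${url}"   # uncomment on a live cluster; an HTTP 200 means success
```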

Install Nginx on the tracker nodes 192.168.50.135 and 192.168.50.136

1. Nginx is installed on the tracker nodes to provide reverse proxying, load balancing, and caching for HTTP access.

2. Install the dependency packages required to compile Nginx

# yum install gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel

3. Upload ngx_cache_purge-2.3.tar.gz to /usr/local/src and decompress it

# cd /usr/local/src/
# tar -zxvf ngx_cache_purge-2.3.tar.gz

4. Upload the current stable version Nginx(nginx-1.13.0.tar.gz) to the /usr/local/src directory

5. Install Nginx (adding the ngx_cache_purge module)

# cd /usr/local/src/
# tar -zxvf nginx-1.13.0.tar.gz
# cd nginx-1.13.0
# ./configure --prefix=/usr/local/nginx --add-module=/usr/local/src/ngx_cache_purge-2.3
# make && make install

6. Configure Nginx for load balancing and caching

# vi /usr/local/nginx/conf/nginx.conf
user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
	worker_connections 1024;
	use epoll;
}
http {
	include mime.types;
	default_type application/octet-stream;
	#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
	# '$status $body_bytes_sent "$http_referer" '
	# '"$http_user_agent" "$http_x_forwarded_for"';
	#access_log logs/access.log main;
	sendfile on;
	tcp_nopush on;
	#keepalive_timeout 0;
	keepalive_timeout 65;
	#gzip on;
	# set cache
	server_names_hash_bucket_size 128;
	client_header_buffer_size 32k;
	large_client_header_buffers 4 32k;
	client_max_body_size 300m;
	proxy_redirect off;
	proxy_set_header Host $http_host;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_connect_timeout 90;
	proxy_send_timeout 90;
	proxy_read_timeout 90;
	proxy_buffer_size 16k;
	proxy_buffers 4 64k;
	proxy_busy_buffers_size 128k;
	proxy_temp_file_write_size 128k;
	# set cache storage path, storage mode, allocated memory size, maximum disk space, cache duration
	proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2
	keys_zone=http-cache:200m max_size=1g inactive=30d;
	proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;
	# group1 servers
	upstream fdfs_group1 {
		server 192.168.50.137:8888 weight=1 max_fails=2 fail_timeout=30s;
		server 192.168.50.138:8888 weight=1 max_fails=2 fail_timeout=30s;
	}
	# group2 servers
	upstream fdfs_group2 {
		server 192.168.50.139:8888 weight=1 max_fails=2 fail_timeout=30s;
		server 192.168.50.140:8888 weight=1 max_fails=2 fail_timeout=30s;
	}
	server {
		listen 8000;
		server_name localhost;
		#charset koi8-r;
		#access_log logs/host.access.log main;
		# set load balancing for group1 and group2
		location /group1/M00 {
			proxy_next_upstream http_502 http_504 error timeout invalid_header;
			proxy_cache http-cache;
			proxy_cache_valid 200 304 12h;
			proxy_cache_key $uri$is_args$args;
			proxy_pass http://fdfs_group1;
			expires 30d;
		}
		location /group2/M00 {
			proxy_next_upstream http_502 http_504 error timeout invalid_header;
			proxy_cache http-cache;
			proxy_cache_valid 200 304 12h;
			proxy_cache_key $uri$is_args$args;
			proxy_pass http://fdfs_group2;
			expires 30d;
		} 
		# set access for purging the cache
		location ~/purge(/.*) {
			allow 127.0.0.1;
			allow 192.168.50.0/24;
			deny all;
			proxy_cache_purge http-cache $1$is_args$args;
		}
		#error_page 404 /404.html;
		# redirect server error pages to the static page /50x.html
		#error_page 500 502 503 504 /50x.html;
		location = /50x.html {
			root html;
		}
	}
}

Create the cache directories required by the nginx configuration above:

# mkdir -p /fastdfs/cache/nginx/proxy_cache
# mkdir -p /fastdfs/cache/nginx/proxy_cache/tmp

7. Open the corresponding port on the system firewall

# vi /etc/sysconfig/iptables
## Nginx
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8000 -j ACCEPT
# service iptables restart

8. Start Nginx

# /usr/local/nginx/sbin/nginx
Restart Nginx:
# /usr/local/nginx/sbin/nginx -s reload
Set Nginx to start on boot:
# vi /etc/rc.local
Add:
/usr/local/nginx/sbin/nginx

9. File access test

First access the uploaded files directly through Nginx on the storage nodes:
http://192.168.50.137:8888/group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
http://192.168.50.139:8888/group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz

The files can now also be accessed through Nginx on the trackers:

(1) Access via Nginx on Tracker1:
http://192.168.50.135:8000/group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
http://192.168.50.135:8000/group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz

(2) Access via Nginx on Tracker2:
http://192.168.50.136:8000/group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
http://192.168.50.136:8000/group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz

As these URLs show, the Nginx instance on each tracker load-balances across the back-end storage groups. However, for the FastDFS cluster to expose a single, uniform file access address, the Nginx instances on the two trackers also need to be clustered for high availability.

We use a Keepalived + Nginx high-availability load balancing cluster to balance the Nginx instances on the two tracker nodes.

1. Keepalived + Nginx implements high-availability web load balancing

See this blog post for the setup: blog.csdn.net/l1028386804…

2. In the Keepalived + Nginx high-availability load balancing cluster, configure Nginx as a reverse proxy that load-balances the tracker nodes

(Nginx at 192.168.50.133 and 192.168.50.134 performs the same configuration)

# vi /usr/local/nginx/conf/nginx.conf
user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
	worker_connections 1024;
}
http {
	include mime.types;
	default_type application/octet-stream;
	#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
	# '$status $body_bytes_sent "$http_referer" '
	# '"$http_user_agent" "$http_x_forwarded_for"';
	#access_log logs/access.log main;
	sendfile on;
	#tcp_nopush on;
	#keepalive_timeout 0;
	keepalive_timeout 65;
	#gzip on;
	## FastDFS Tracker Proxy
	upstream fastdfs_tracker {
		server 192.168.50.135:8000 weight=1 max_fails=2 fail_timeout=30s;
		server 192.168.50.136:8000 weight=1 max_fails=2 fail_timeout=30s;
	}
	server {
		listen 88;
		server_name localhost;
		#charset koi8-r;
		#access_log logs/host.access.log main;
		location / {
		root html;
			index index.html index.htm;
		}
		#error_page 404 /404.html;
		# redirect server error pages to the static page /50x.html
		error_page 500 502 503 504 /50x.html;
			location = /50x.html {
			root html;
		}
		## FastDFS Proxy
		location /dfs {
			root html;
			index index.html index.htm;
			proxy_pass http://fastdfs_tracker/;
			proxy_set_header Host $http_host;
			proxy_set_header Cookie $http_cookie;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			proxy_set_header X-Forwarded-Proto $scheme;
			client_max_body_size 300m;
		}
	}
}

3. Restart Nginx at 192.168.50.133 and 192.168.50.134

# /usr/local/nginx/sbin/nginx -s reload

4. Access files in the FastDFS cluster through the VIP (192.168.50.130) of the Keepalived + Nginx high-availability load balancing cluster

http://192.168.50.130:88/dfs/group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz
http://192.168.50.130:88/dfs/group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz
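With the VIP in place, every file, regardless of group, is reachable from one address. A sketch that derives the unified URLs from the file IDs (the VIP and port are the values configured above):

```shell
vip="192.168.50.130:88"
for file_id in \
    "group1/M00/00/00/wKgBh1Xtr9-AeTfWAAVFOL7FJU4.tar.gz" \
    "group2/M00/00/00/wKgBiVXtsDmAe3kjAAVFOL7FJU4.tar.gz"
do
    echo "http://${vip}/dfs/${file_id}"
    # curl -I "http://${vip}/dfs/${file_id}"   # uncomment on a live cluster
done
```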

Note: do not use kill -9 to forcibly terminate the FastDFS processes; otherwise binlog data may be lost. You can also download the FastDFS cluster configuration files from: download.csdn.net/detail/l102…

OK, let's call it a day! Don't forget to like and share so that more people can see it, and let's learn and improve together!

Final words

If you found this article helpful, search for and follow the WeChat official account "Glacier Technology" to learn distributed storage technology with Glacier.