Tags
PostgreSQL, ECS, Alibaba Cloud, deployment, physical replication, streaming replication, snapshot backup, backup verification, automatic cleanup
Background
This article describes how to deploy a PostgreSQL environment on Alibaba Cloud (Aliyun) ECS. The setup is very simple, yet its availability and reliability meet the requirements of most enterprises.
It includes:
1. Automatic database startup on boot
2. A physical streaming-replication standby
3. Automatic snapshot backups that complete in seconds
4. Automatic backup-set verification
5. Automatic cleanup of backup sets and archive files older than N days
6. Self-managed monitoring
Deployment environment
1. ECS instances
111.111.111.199 (primary), 111.111.111.223 (standby)
2. Cloud disk
400 GB
3. Software
OS: CentOS 7.4 x64
PostgreSQL: 10.1
ZFS: 0.7.3
Deployment process
On both machines
1. Install the software
yum -y install coreutils glib2 lrzsz dstat sysstat e4fsprogs xfsprogs ntp readline-devel zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc gcc-c++ make smartmontools flex bison perl-devel perl-ExtUtils* openldap-devel jadetex openjade bzip2
2. Modify the kernel configuration
vi /etc/sysctl.conf
# add by digoal.zhou
fs.aio-max-nr = 1048576
fs.file-max = 76724600
kernel.sem = 4096 2147483647 2147483646 512000
kernel.shmall = 107374182
kernel.shmmax = 274877906944
kernel.shmmni = 819200
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144    # default socket receive buffer, in bytes
net.core.rmem_max = 4194304       # maximum socket receive buffer, in bytes
net.core.wmem_default = 262144    # default socket send buffer, in bytes
net.core.wmem_max = 4194304       # maximum socket send buffer, in bytes
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.nf_conntrack_max = 1200000
net.netfilter.nf_conntrack_max = 1200000
vm.dirty_background_bytes = 204800000
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 90
vm.dirty_writeback_centisecs = 50
vm.swappiness = 0
vm.mmap_min_addr = 65536
vm.overcommit_memory = 0
vm.overcommit_ratio = 90
vm.zone_reclaim_mode = 0
net.ipv4.ip_local_port_range = 40000 65535
fs.nr_open = 20480000
3. Modify resource limits
vi /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft nproc unlimited
* hard nproc unlimited
* soft core unlimited
* hard core unlimited
* soft memlock unlimited
* hard memlock unlimited
4. Disable transparent huge pages
chmod +x /etc/rc.d/rc.local
vi /etc/rc.local
# append
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
5. Partition the disk
parted -s /dev/vdb mklabel gpt
parted -s /dev/vdb mkpart primary 1MiB 100%
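As a quick sanity check on the partitioning step (a sketch added here, not part of the original procedure), the 1MiB starting offset keeps the partition aligned with 4KiB physical sectors, which also matches the ashift=12 zpool created later on the standby:

```shell
# 1MiB expressed in 512-byte logical sectors: the partition starts at sector 2048
start_sectors=$(( 1024 * 1024 / 512 ))
echo "partition start: sector $start_sectors"

# the offset divides evenly by 4096, so 4KiB-sector disks (and ashift=12) stay aligned
if [ $(( 1024 * 1024 % 4096 )) -eq 0 ]; then
  echo "1MiB offset is 4KiB-aligned"
fi
```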
On the primary
1. Create a file system
mkfs.ext4 /dev/vdb1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -T largefile -L data01
2. Mount the file system
mkdir /data01
vi /etc/fstab
LABEL=data01 /data01 ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback 0 0
mount -a
On the standby
1. Install ZFS
yum install -y http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
yum install -y kernel-devel-3.10.0-693.2.2.el7.x86_64
yum install -y zfs
2. Manually load the ZFS module
/usr/sbin/modprobe zfs
3. Create the zpool and ZFS datasets for the data and archive directories
zpool create -o ashift=12 zp1 vdb1
zfs set compression=lz4 zp1
zfs set canmount=off zp1
zfs set atime=off zp1
zfs create -o mountpoint=/data01 zp1/data01
zfs create -o mountpoint=/pg_arch zp1/pg_arch
4. Import the zpool automatically at boot
vi /etc/rc.local
/usr/sbin/modprobe zfs
/usr/sbin/zpool import zp1
On both machines
1. Install PostgreSQL and EPEL yum repository
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm
2. Install database software and peripheral software
yum install -y postgresql10* postgis24_10* pgrouting_10* osm2pgrouting_10* plpgsql_check_10* pgbadger pg_top10* hypopg_10* citus_10* cstore_fdw_10* pg_pathman10* orafce10* pgfincore10* pgbson10* pgmemcache-10* pldebugger10* plv8_10* geoip10* ip4r10* pg_repack10* pgsphere10* plr10*
3. Modify environment variables
su - postgres
vi ~/.bash_profile
export PS1="$USER@`/bin/hostname -s`-> "
export PGPORT=3433
export PGDATA=/data01/pg_root$PGPORT
export LANG=en_US.utf8
export PGHOME=/usr/pgsql-10
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
export PATH=$PGHOME/bin:$PATH:.
export MANPATH=$PGHOME/share/man:$MANPATH
export PGHOST=$PGDATA
export PGUSER=postgres
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'
unalias vi
On the primary
1. Create a data directory and modify permissions
cd /data01
mkdir pg_root3433
mkdir -p pg_arch/pg3433
chown postgres:postgres /data01/*
ln -s /data01/pg_arch /pg_arch
On the standby
1. Create a data directory and modify permissions
cd /data01
mkdir pg_root3433
chown postgres:postgres /data01/*
cd /pg_arch
mkdir pg3433
chown postgres:postgres /pg_arch/pg3433
On the primary
1. Initialize the database cluster
su - postgres
initdb -D $PGDATA -U postgres --locale=en_US.UTF8 -E UTF8
2. Modify database parameters
cd $PGDATA
vi postgresql.auto.conf
listen_addresses = '0.0.0.0'
port = 3433
max_connections = 2000            # maximum number of allowed connections
superuser_reserved_connections = 10
unix_socket_directories = '.'
unix_socket_permissions = 0700
tcp_keepalives_idle = 60
tcp_keepalives_interval = 60
tcp_keepalives_count = 10
shared_buffers = 60GB
maintenance_work_mem = 512MB      # suggested: 1GB if host memory > 32GB, 2GB if > 64GB, 4GB if > 128GB
work_mem = 64MB
dynamic_shared_memory_type = posix
vacuum_cost_delay = 0
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 500
bgwriter_lru_multiplier = 5.0
effective_io_concurrency = 0
max_worker_processes = 128
max_parallel_workers_per_gather = 16   # suggested: half the number of host CPU cores
max_parallel_workers = 16              # depends on the AP/TP workload mix and timing; adjust to reality, e.g. host CPU cores - 2
wal_level = replica
fsync = on
synchronous_commit = off
full_page_writes = on             # can be off on block devices with atomic writes >= BLOCK_SIZE (when aligned), or on COW file systems
wal_writer_delay = 10ms
wal_writer_flush_after = 1MB
checkpoint_timeout = 30min
max_wal_size = 32GB               # shared_buffers*2
min_wal_size = 8GB                # max_wal_size/4
archive_mode = always
archive_command = '/bin/date'
hot_standby = on
max_wal_senders = 10
max_replication_slots = 10
wal_receiver_status_interval = 1s
max_logical_replication_workers = 4
max_sync_workers_per_subscription = 2
random_page_cost = 1.2
parallel_tuple_cost = 0.1
parallel_setup_cost = 1000.0
min_parallel_table_scan_size = 8MB
min_parallel_index_scan_size = 512kB
effective_cache_size = 32GB
log_destination = 'csvlog'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
log_min_duration_statement = 5s
log_checkpoints = on
log_connections = on              # if connections are short-lived and you do not need connection auditing, off is recommended
log_disconnections = on           # if connections are short-lived and you do not need connection auditing, off is recommended
log_error_verbosity = verbose
log_line_prefix = '%m [%p] '
log_lock_waits = on
log_statement = 'ddl'
log_timezone = 'PRC'
log_autovacuum_min_duration = 0
autovacuum_max_workers = 5
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_freeze_max_age = 1000000000
autovacuum_multixact_freeze_max_age = 1200000000
autovacuum_vacuum_cost_delay = 0
statement_timeout = 0             # units: ms, s, min, h, d; statement timeout, 0 = no limit
lock_timeout = 0                  # units: ms, s, min, h, d; lock-wait timeout, 0 = no limit
idle_in_transaction_session_timeout = 2h   # units: ms, s, min, h, d; idle-in-transaction timeout, 0 = no limit
vacuum_freeze_min_age = 50000000
vacuum_freeze_table_age = 800000000
vacuum_multixact_freeze_min_age = 50000000
vacuum_multixact_freeze_table_age = 800000000
datestyle = 'iso, ymd'
timezone = 'PRC'
lc_messages = 'en_US.UTF8'
lc_monetary = 'en_US.UTF8'
lc_numeric = 'en_US.UTF8'
lc_time = 'en_US.UTF8'
default_text_search_config = 'pg_catalog.simple'
shared_preload_libraries = 'pg_stat_statements,pg_pathman'
3. Modify the stream-replication parameter file that the standby will use
cp $PGHOME/share/recovery.conf.sample ./
mv recovery.conf.sample recovery.done
vi recovery.done
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=111.111.111.223 port=3433 user=rep password=xxxxxxxxxx'
restore_command = 'BASEDIR="/pg_arch/pg3433" ; find $BASEDIR -name %f -exec cp {} %p \\; '
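The restore_command above searches the whole archive tree for the WAL file PostgreSQL asks for (%f) and copies it to the requested path (%p). The same logic can be dry-run outside PostgreSQL; this sketch substitutes temporary directories for /pg_arch/pg3433 and hard-codes the placeholders:

```shell
# stand-ins for the real paths and for PostgreSQL's %f / %p placeholders
BASEDIR=$(mktemp -d)         # instead of /pg_arch/pg3433
DEST=$(mktemp -d)
f=000000010000000000000001   # %f: the requested WAL file name
p="$DEST/$f"                 # %p: the destination path

# archives live in per-day subdirectories (created by the standby's archive_command)
mkdir -p "$BASEDIR/2017-11-29"
echo "wal-content" > "$BASEDIR/2017-11-29/$f"

# the restore_command body: locate the segment anywhere under BASEDIR and copy it
find "$BASEDIR" -name "$f" -exec cp {} "$p" \;
cat "$p"
```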
4. Modify the database authentication ACL (pg_hba.conf)
vi pg_hba.conf
host all all 0.0.0.0/0 md5
host replication rep 0.0.0.0/0 md5
5. Start the database
pg_ctl start
6. Create the database role used for stream replication
psql -n
create role rep replication login encrypted password 'xxxxxxxxxx';
On the standby
1. Create a standby database
su - postgres
pg_basebackup -D $PGDATA -F p -h 111.111.111.199 -p 3433 -U rep
2. Modify the database configuration
cd $PGDATA
vi postgresql.auto.conf
archive_command = 'DT="/pg_arch/pg3433/`date +%F`" ; test ! -d $DT && mkdir -p $DT ; test ! -f $DT/%f && cp %p $DT/%f'
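The archive_command above files each WAL segment into a per-day subdirectory under /pg_arch/pg3433, creating the directory on first use and skipping segments that are already archived. A standalone sketch of that logic, using temporary paths instead of the real archive directory:

```shell
ARCH=$(mktemp -d)              # stands in for /pg_arch/pg3433
WAL=$(mktemp)                  # stands in for %p, the WAL file to archive
f=00000001000000000000002A     # stands in for %f, the WAL file name
echo "segment" > "$WAL"

# same shape as the archive_command: dated directory, mkdir if missing, cp if absent
DT="$ARCH/`date +%F`"
test ! -d "$DT" && mkdir -p "$DT"
test ! -f "$DT/$f" && cp "$WAL" "$DT/$f"

ls "$DT"
```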
3. Configure the standby's stream-replication parameters
cd $PGDATA
mv recovery.done recovery.conf
vi recovery.conf
primary_conninfo = 'host=111.111.111.199 port=3433 user=rep password=xxxxxxxxxx'
4. Modify data directory permissions
su - root
chmod 700 /data01/pg_root3433
On both machines
1. Start the database automatically at system boot
vi /etc/rc.local
# append
su - postgres -c "pg_ctl start"
2. Reboot the ECS instance to verify
reboot
su - postgres
psql
postgres=# show max_connections ;
max_connections
-----------------
2000
(1 row)
On the standby
1. Configure automatic backup (snapshot backup)
su - root
mkdir /root/script
vi /root/script/zfs_snap.sh
#!/bin/bash
date +%F%T
/sbin/zfs snapshot zp1/data01@`date +%Y%m%d`
# drop the snapshot from 10 days ago; adjust the retention window to taste
/sbin/zfs destroy zp1/data01@`date -d '-10 day' +%Y%m%d`
rm -rf /pg_arch/pg3433/`date -d '-11 day' +%F`
date +%F%T
2. Set script permissions
chmod 500 /root/script/zfs_snap.sh
3. Set an automatic backup task and back up data once a day
# crontab -e
1 8 * * * /root/script/zfs_snap.sh >> /root/zfs_snap.log 2>&1
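Both the snapshot name and the archive directory removed by zfs_snap.sh are pure date arithmetic. With GNU date the calculations look like this (shown against a fixed reference date, 2017-11-29, so the results are reproducible):

```shell
# the snapshot taken on 2017-11-29 is named zp1/data01@20171129
snap=$(date -d '2017-11-29' +%Y%m%d)
echo "zfs snapshot zp1/data01@$snap"

# the archive directory removed that day is the one from 11 days earlier
old=$(date -d '2017-11-29 -11 day' +%F)
echo "rm -rf /pg_arch/pg3433/$old"
```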
Automatic recovery verification
The standby
1. Create the configuration file used for backup verification, change the port number, and disable archiving.
su - root
mkdir /root/test_recovery
vi /root/test_recovery/postgresql.auto.conf
# Do not edit this file manually!
listen_addresses = '0.0.0.0'
port = 11111
max_connections = 2000            # as in the primary configuration
superuser_reserved_connections = 10
unix_socket_directories = '.'
unix_socket_permissions = 0700
tcp_keepalives_idle = 60
tcp_keepalives_interval = 60
tcp_keepalives_count = 10
shared_buffers = 15GB
maintenance_work_mem = 512MB      # suggested: 1GB if host memory > 32GB, 2GB if > 64GB, 4GB if > 128GB
work_mem = 64MB
dynamic_shared_memory_type = posix
vacuum_cost_delay = 0
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 500
bgwriter_lru_multiplier = 5.0
effective_io_concurrency = 0
max_worker_processes = 128
max_parallel_workers_per_gather = 16   # suggested: half the number of host CPU cores
max_parallel_workers = 16              # depends on the AP/TP workload mix and timing; adjust to reality, e.g. host CPU cores - 2
wal_level = replica
fsync = on
synchronous_commit = off
full_page_writes = on             # can be off on block devices with atomic writes >= BLOCK_SIZE (when aligned), or on COW file systems
wal_writer_delay = 10ms
wal_writer_flush_after = 1MB
checkpoint_timeout = 30min
max_wal_size = 32GB               # shared_buffers*2
min_wal_size = 8GB                # max_wal_size/4
archive_mode = off
archive_command = 'DT="/pg_arch/pg3433/`date +%F`" ; test ! -d $DT && mkdir -p $DT ; test ! -f $DT/%f && cp %p $DT/%f'
hot_standby = on
max_wal_senders = 10
max_replication_slots = 10
wal_receiver_status_interval = 1s
max_logical_replication_workers = 4
max_sync_workers_per_subscription = 2
random_page_cost = 1.2
parallel_tuple_cost = 0.1
parallel_setup_cost = 1000.0
min_parallel_table_scan_size = 8MB
min_parallel_index_scan_size = 512kB
effective_cache_size = 32GB       # suggested: 5/8 of host memory
log_destination = 'csvlog'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
log_min_duration_statement = 5s
log_checkpoints = on
log_connections = on              # if connections are short-lived and you do not need connection auditing, off is recommended
log_disconnections = on           # if connections are short-lived and you do not need connection auditing, off is recommended
log_error_verbosity = verbose
log_line_prefix = '%m [%p] '
log_lock_waits = on
log_statement = 'ddl'
log_timezone = 'PRC'
log_autovacuum_min_duration = 0
autovacuum_max_workers = 5
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_freeze_max_age = 1000000000
autovacuum_multixact_freeze_max_age = 1200000000
autovacuum_vacuum_cost_delay = 0
statement_timeout = 0             # units: ms, s, min, h, d; statement timeout, 0 = no limit
lock_timeout = 0                  # units: ms, s, min, h, d; lock-wait timeout, 0 = no limit
idle_in_transaction_session_timeout = 2h   # units: ms, s, min, h, d; idle-in-transaction timeout, 0 = no limit
vacuum_freeze_min_age = 50000000
vacuum_freeze_table_age = 800000000
vacuum_multixact_freeze_min_age = 50000000
vacuum_multixact_freeze_table_age = 800000000
datestyle = 'iso, ymd'
timezone = 'PRC'
lc_messages = 'en_US.UTF8'
lc_monetary = 'en_US.UTF8'
lc_numeric = 'en_US.UTF8'
lc_time = 'en_US.UTF8'
default_text_search_config = 'pg_catalog.simple'
shared_preload_libraries = 'pg_stat_statements,pg_pathman'
2. Create an automatic recovery script: clone the previous day's snapshot and start a database from the clone to check that it recovers normally
vi /root/test_recovery/recovery_test.sh
#!/bin/bash
date +%F%T
. /var/lib/pgsql/.bash_profile
DATE=`date -d '-1 day' +%Y%m%d`
zfs clone -o mountpoint=/test_recovery zp1/data01@$DATE zp1/test_recovery
rm -f /test_recovery/pg_root3433/postgresql.auto.conf
rm -f /test_recovery/pg_root3433/postmaster.pid
rm -f /test_recovery/pg_root3433/pg_wal/*
cp -p /root/test_recovery/postgresql.auto.conf /test_recovery/pg_root3433/postgresql.auto.conf
su - postgres -c "pg_ctl start -D /test_recovery/pg_root3433"
for ((i=1; i<180; i++))
do
  echo $i
  sleep 20
  psql -h 127.0.0.1 -p 11111 postgres -c "select now(),* from pg_database;"
  ret=$?
  if [ $ret -eq 0 ]; then
    break
  fi
done
su - postgres -c "pg_ctl stop -w -t 6000 -m immediate -D /test_recovery/pg_root3433"
sleep 60
zfs destroy zp1/test_recovery
date +%F%T
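recovery_test.sh waits for the cloned instance to come up by polling psql in a bounded loop. The pattern in isolation (with a dummy probe that succeeds on its third call standing in for the psql connection attempt, and the sleep commented out):

```shell
# bounded retry: poll a probe until it succeeds or the attempt budget runs out
attempts=0
probe() {
  # dummy stand-in for: psql -h 127.0.0.1 -p 11111 postgres -c "select 1;"
  attempts=$(( attempts + 1 ))
  [ "$attempts" -ge 3 ]
}

i=1
while [ "$i" -lt 180 ]; do
  if probe; then
    echo "up after $attempts attempts"
    break
  fi
  # sleep 20   # the real script waits between probes
  i=$(( i + 1 ))
done
```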
3. Configure script permissions
chmod 500 /root/test_recovery/recovery_test.sh
4. Test the automatic backup set recovery script
# /root/test_recovery/recovery_test.sh
2017-11-29 23:23:15
/var/lib/pgsql/.bash_profile: line 24: unalias: vi: not found
rm: cannot remove '/test_recovery/pg_root3433/pg_wal/archive_status': Is a directory
/var/lib/pgsql/.bash_profile: line 24: unalias: vi: not found
waiting for server to start....
2017-11-29 23:23:15.879 CST [11051] LOG:  00000: listening on IPv4 address "0.0.0.0", port 11111
2017-11-29 23:23:15.879 CST [11051] LOCATION:  StreamServerPort, pqcomm.c:593
2017-11-29 23:23:16.422 CST [11051] LOG:  00000: redirecting log output to logging collector process
2017-11-29 23:23:16.422 CST [11051] HINT:  Future log output will appear in directory "log".
2017-11-29 23:23:16.422 CST [11051] LOCATION:  SysLogger_Start, syslogger.c:634
done
server started
1
              now               |  datname  | datdba | encoding | datcollate | datctype   | datistemplate | datallowconn | ...
--------------------------------+-----------+--------+----------+------------+------------+---------------+--------------+----
 2017-11-29 23:23:36.572934+08  | postgres  |     10 |        6 | en_US.UTF8 | en_US.UTF8 | f             | t            | ...
 2017-11-29 23:23:36.572934+08  | template1 |     10 |        6 | en_US.UTF8 | en_US.UTF8 | t             | t            | ...
 2017-11-29 23:23:36.572934+08  | template0 |     10 |        6 | en_US.UTF8 | en_US.UTF8 | t             | f            | ...
(3 rows)
/var/lib/pgsql/.bash_profile: line 24: unalias: vi: not found
waiting for server to shut down.... done
server stopped
Other
1. View the existing backup set
# zfs list -t snapshot
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zp1/data01@20171129     0B      -  12.6M  -
zp1/data01@20171128     0B      -  12.6M  -
2. Delete the backup set
# zfs destroy zp1/data01@20171128
3. Clone ZFS based on the existing snapshot
# zfs clone -o mountpoint=/test_recovery zp1/data01@20171128 zp1/test_recovery
4. Delete clone ZFS
# zfs destroy zp1/test_recovery
5. Check the primary/standby replication delay (time)
postgres=# select * from pg_stat_replication ;
-[ RECORD 1 ]----+------------------------------
pid              | 11697
usesysid         | 16384
usename          | rep
application_name | walreceiver
client_addr      | 111.111.111.223
client_hostname  |
client_port      | 53622
backend_start    | 2017-11-29 22:46:17.806483+08
backend_xmin     |
state            | streaming
sent_lsn         | 0/E06B9A30
write_lsn        | 0/E06B9A30
flush_lsn        | 0/E06B9A30
replay_lsn       | 0/E06B9A30
write_lag        |
flush_lag        |
replay_lag       |
sync_priority    | 0
sync_state       | async
6. Check the primary/standby LSN delay (bytes)
postgres=# select *, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_insert_lsn(), sent_lsn)) as sent_delay, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_insert_lsn(), replay_lsn)) as replay_delay from pg_stat_replication ;
-[ RECORD 1 ]----+------------------------------
pid              | 11697
usesysid         | 16384
usename          | rep
application_name | walreceiver
client_addr      | 111.111.111.223
client_hostname  |
client_port      | 53622
backend_start    | 2017-11-29 22:46:17.806483+08
backend_xmin     |
state            | streaming
sent_lsn         | 1/39D5C000
write_lsn        | 1/39000000
flush_lsn        | 1/39000000
replay_lsn       | 1/37A58718
write_lag        | 00:00:00.192577
flush_lag        | 00:00:00.192577
replay_lag       | 00:00:00.502927
sync_priority    | 0
sync_state       | async
sent_delay       | 9650 kB
replay_delay     | 44 MB
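pg_wal_lsn_diff simply subtracts two LSNs viewed as 64-bit byte positions: an LSN written 'X/Y' is X * 2^32 + Y bytes. The same arithmetic in shell, applied to the sent_lsn and replay_lsn shown in the record above:

```shell
# convert an LSN of the form HI/LO (both hex) to an absolute byte position
lsn_to_bytes() {
  hi=$(printf '%d' "0x${1%/*}")
  lo=$(printf '%d' "0x${1#*/}")
  echo $(( hi * 4294967296 + lo ))    # HI * 2^32 + LO
}

sent=$(lsn_to_bytes 1/39D5C000)
replay=$(lsn_to_bytes 1/37A58718)
echo "replay lags sent by $(( sent - replay )) bytes"   # about 35 MB
```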
7. Create a database
CREATE DATABASE name
    [ [ WITH ] [ OWNER [=] user_name ]
           [ TEMPLATE [=] template ]
           [ ENCODING [=] encoding ]
           [ LC_COLLATE [=] lc_collate ]
           [ LC_CTYPE [=] lc_ctype ]
           [ TABLESPACE [=] tablespace_name ]
           [ ALLOW_CONNECTIONS [=] allowconn ]
           [ CONNECTION LIMIT [=] connlimit ]
           [ IS_TEMPLATE [=] istemplate ] ]
Create a database template
Command: ALTER DATABASE
Description: change a database
Syntax:
ALTER DATABASE name [ [ WITH ] option [ ... ] ]
where option can be:
ALLOW_CONNECTIONS allowconn
CONNECTION LIMIT connlimit
IS_TEMPLATE istemplate
ALTER DATABASE name RENAME TO new_name
ALTER DATABASE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
ALTER DATABASE name SET TABLESPACE new_tablespace
ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT }
ALTER DATABASE name SET configuration_parameter FROM CURRENT
ALTER DATABASE name RESET configuration_parameter
ALTER DATABASE name RESET ALL
postgres=# alter database postgres is_template true;
ALTER DATABASE
Clone the database from the template
create database newdb with template template_db;
8. Create a user
Command:     CREATE ROLE
Description: define a new database role
Syntax:
CREATE ROLE name [ [ WITH ] option [ ... ] ]

where option can be:

      SUPERUSER | NOSUPERUSER
    | CREATEDB | NOCREATEDB
    | CREATEROLE | NOCREATEROLE
    | INHERIT | NOINHERIT
    | LOGIN | NOLOGIN
    | REPLICATION | NOREPLICATION
    | BYPASSRLS | NOBYPASSRLS
    | CONNECTION LIMIT connlimit
    | [ ENCRYPTED ] PASSWORD 'password'
    | VALID UNTIL 'timestamp'
    | IN ROLE role_name [, ...]
    | IN GROUP role_name [, ...]
    | ROLE role_name [, ...]
    | ADMIN role_name [, ...]
    | USER role_name [, ...]
    | SYSID uid
9. Create a schema
Command:     CREATE SCHEMA
Description: define a new schema
Syntax:
CREATE SCHEMA schema_name [ AUTHORIZATION role_specification ] [ schema_element [ ... ] ]
CREATE SCHEMA AUTHORIZATION role_specification [ schema_element [ ... ] ]
CREATE SCHEMA IF NOT EXISTS schema_name [ AUTHORIZATION role_specification ]
CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_specification

where role_specification can be:

    user_name
  | CURRENT_USER
  | SESSION_USER
10. Configure user rights
Command:     GRANT
Description: define access privileges
Syntax:
GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER }
    [, ...] | ALL [ PRIVILEGES ] }
    ON { [ TABLE ] table_name [, ...]
         | ALL TABLES IN SCHEMA schema_name [, ...] }
    TO role_specification [, ...] [ WITH GRANT OPTION ]
......
11. Set library level parameters
Command: ALTER DATABASE
Description: change a database
Syntax:
ALTER DATABASE name [ [ WITH ] option [ ... ] ]
where option can be:
ALLOW_CONNECTIONS allowconn
CONNECTION LIMIT connlimit
IS_TEMPLATE istemplate
ALTER DATABASE name RENAME TO new_name
ALTER DATABASE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
ALTER DATABASE name SET TABLESPACE new_tablespace
ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT }
ALTER DATABASE name SET configuration_parameter FROM CURRENT
ALTER DATABASE name RESET configuration_parameter
ALTER DATABASE name RESET ALL
12. Set user level parameters
Command:     ALTER ROLE
Description: change a database role
Syntax:
ALTER ROLE role_specification [ WITH ] option [ ... ]

where option can be:

      SUPERUSER | NOSUPERUSER
    | CREATEDB | NOCREATEDB
    | CREATEROLE | NOCREATEROLE
    | INHERIT | NOINHERIT
    | LOGIN | NOLOGIN
    | REPLICATION | NOREPLICATION
    | BYPASSRLS | NOBYPASSRLS
    | CONNECTION LIMIT connlimit
    | [ ENCRYPTED ] PASSWORD 'password'
    | VALID UNTIL 'timestamp'

ALTER ROLE name RENAME TO new_name

ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter { TO | = } { value | DEFAULT }
ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT
ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] RESET configuration_parameter
ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] RESET ALL

where role_specification can be:

    role_name
  | CURRENT_USER
  | SESSION_USER
13. Configure the function-level parameters
Command:     ALTER FUNCTION
Description: change the definition of a function
Syntax:
ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
    action [ ... ] [ RESTRICT ]
ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
    RENAME TO new_name
ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
    OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
    SET SCHEMA new_schema
ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
    DEPENDS ON EXTENSION extension_name

where action is one of:

    CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
    IMMUTABLE | STABLE | VOLATILE | [ NOT ] LEAKPROOF
    [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
    PARALLEL { UNSAFE | RESTRICTED | SAFE }
    COST execution_cost
    ROWS result_rows
    SET configuration_parameter { TO | = } { value | DEFAULT }
    SET configuration_parameter FROM CURRENT
    RESET configuration_parameter
    RESET ALL
14. Set table level parameters
Command:     ALTER TABLE
Description: change the definition of a table
Syntax:
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
    action [, ... ]
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
    RENAME [ COLUMN ] column_name TO new_column_name
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
    RENAME CONSTRAINT constraint_name TO new_constraint_name
ALTER TABLE [ IF EXISTS ] name
    RENAME TO new_name
ALTER TABLE [ IF EXISTS ] name
    SET SCHEMA new_schema
ALTER TABLE ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ]
    SET TABLESPACE new_tablespace [ NOWAIT ]
ALTER TABLE [ IF EXISTS ] name
    ATTACH PARTITION partition_name FOR VALUES partition_bound_spec
ALTER TABLE [ IF EXISTS ] name
    DETACH PARTITION partition_name

where action is one of:

    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
    ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]
    ALTER [ COLUMN ] column_name SET DEFAULT expression
    ALTER [ COLUMN ] column_name DROP DEFAULT
    ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
    ALTER [ COLUMN ] column_name ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ]
    ALTER [ COLUMN ] column_name { SET GENERATED { ALWAYS | BY DEFAULT } | SET sequence_option | RESTART [ [ WITH ] restart ] } [...]
    ALTER [ COLUMN ] column_name DROP IDENTITY [ IF EXISTS ]
    ALTER [ COLUMN ] column_name SET STATISTICS integer
    ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] )
    ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )
    ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
    ADD table_constraint [ NOT VALID ]
    ADD table_constraint_using_index
    ALTER CONSTRAINT constraint_name [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
    VALIDATE CONSTRAINT constraint_name
    DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ]
    DISABLE TRIGGER [ trigger_name | ALL | USER ]
    ENABLE TRIGGER [ trigger_name | ALL | USER ]
    ENABLE REPLICA TRIGGER trigger_name
    ENABLE ALWAYS TRIGGER trigger_name
    DISABLE RULE rewrite_rule_name
    ENABLE RULE rewrite_rule_name
    ENABLE REPLICA RULE rewrite_rule_name
    ENABLE ALWAYS RULE rewrite_rule_name
    DISABLE ROW LEVEL SECURITY
    ENABLE ROW LEVEL SECURITY
    FORCE ROW LEVEL SECURITY
    NO FORCE ROW LEVEL SECURITY
    CLUSTER ON index_name
    SET WITHOUT CLUSTER
    SET WITH OIDS
    SET WITHOUT OIDS
    SET TABLESPACE new_tablespace
    SET { LOGGED | UNLOGGED }
    SET ( storage_parameter = value [, ... ] )
    RESET ( storage_parameter [, ... ] )
    INHERIT parent_table
    NO INHERIT parent_table
    OF type_name
    NOT OF
    OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
    REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING }

and table_constraint_using_index is:

    [ CONSTRAINT constraint_name ]
    { UNIQUE | PRIMARY KEY } USING INDEX index_name
    [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
15. Configure an authentication ACL
vi $PGDATA/pg_hba.conf
# local      DATABASE  USER  METHOD  [OPTIONS]
# host       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostssl    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
host    all             super_user      0.0.0.0/0               reject
host    all             all             0.0.0.0/0               md5

pg_ctl reload
16. Use psql help
postgres=# \h    -- help on SQL command syntax; supports TAB completion
17. Use psql backslash (meta) commands
postgres=# \?
General
  \copyright             show PostgreSQL usage and distribution terms
  \crosstabview [COLUMNS] execute query and display results in crosstab
  \errverbose            show most recent error message at maximum verbosity
  \g [FILE] or ;         execute query (and send results to file or |pipe)
  \gexec                 execute query, then execute each value in its result
  \gset [PREFIX]         execute query and store results in psql variables
  \gx [FILE]             as \g, but forces expanded output mode
  \q                     quit psql
  \watch [SEC]           execute query every SEC seconds

Help
  \? [commands]          show help on backslash commands
  \? options             show help on psql command-line options
  \? variables           show help on special variables
  \h [NAME]              help on syntax of SQL commands, * for all commands

Query Buffer
  \e [FILE] [LINE]       edit the query buffer (or file) with external editor
  \ef [FUNCNAME [LINE]]  edit function definition with external editor
  \ev [VIEWNAME [LINE]]  edit view definition with external editor
  \p                     show the contents of the query buffer
  \r                     reset (clear) the query buffer
  \s [FILE]              display history or save it to file
  \w FILE                write query buffer to file

Input/Output
  \copy ...              perform SQL COPY with data stream to the client host
  \echo [STRING]         write string to standard output
  \i FILE                execute commands from file
  \ir FILE               as \i, but relative to location of current script
  \o [FILE]              send all query results to file or |pipe
  \qecho [STRING]        write string to query output stream (see \o)

Conditional
  \if EXPR               begin conditional block
  \elif EXPR             alternative within current conditional block
  \else                  final alternative within current conditional block
  \endif                 end conditional block

Informational
  (options: S = show system objects, + = additional detail)
  \d[S+]                 list tables, views, and sequences
  \d[S+]  NAME           describe table, view, sequence, or index
  \da[S]  [PATTERN]      list aggregates
  \dA[+]  [PATTERN]      list access methods
  \db[+]  [PATTERN]      list tablespaces
  \dc[S+] [PATTERN]      list conversions
  \dC[+]  [PATTERN]      list casts
  \dd[S]  [PATTERN]      show object descriptions not displayed elsewhere
  \dD[S+] [PATTERN]      list domains
  \ddp    [PATTERN]      list default privileges
  \dE[S+] [PATTERN]      list foreign tables
  \det[+] [PATTERN]      list foreign tables
  \des[+] [PATTERN]      list foreign servers
  \deu[+] [PATTERN]      list user mappings
  \dew[+] [PATTERN]      list foreign-data wrappers
  \df[antw][S+] [PATRN]  list [only agg/normal/trigger/window] functions
  \dF[+]  [PATTERN]      list text search configurations
  \dFd[+] [PATTERN]      list text search dictionaries
  \dFp[+] [PATTERN]      list text search parsers
  \dFt[+] [PATTERN]      list text search templates
  \dg[S+] [PATTERN]      list roles
  \di[S+] [PATTERN]      list indexes
  \dl                    list large objects, same as \lo_list
  \dL[S+] [PATTERN]      list procedural languages
  \dm[S+] [PATTERN]      list materialized views
  \dn[S+] [PATTERN]      list schemas
  \do[S]  [PATTERN]      list operators
  \dO[S+] [PATTERN]      list collations
  \dp     [PATTERN]      list table, view, and sequence access privileges
  \drds [PATRN1 [PATRN2]] list per-database role settings
  \dRp[+] [PATTERN]      list replication publications
  \dRs[+] [PATTERN]      list replication subscriptions
  \ds[S+] [PATTERN]      list sequences
  \dt[S+] [PATTERN]      list tables
  \dT[S+] [PATTERN]      list data types
  \du[S+] [PATTERN]      list roles
  \dv[S+] [PATTERN]      list views
  \dx[+]  [PATTERN]      list extensions
  \dy     [PATTERN]      list event triggers
  \l[+]   [PATTERN]      list databases
  \sf[+]  FUNCNAME       show a function's definition
  \sv[+]  VIEWNAME       show a view's definition
  \z      [PATTERN]      same as \dp

Formatting
  \a                     toggle between unaligned and aligned output mode
  \C [STRING]            set table title, or unset if none
  \f [STRING]            show or set field separator for unaligned query output
  \H                     toggle HTML output mode (currently off)
  \pset [NAME [VALUE]]   set table output option
                         (NAME := {border|columns|expanded|fieldsep|fieldsep_zero|
                         footer|format|linestyle|null|numericlocale|pager|
                         pager_min_lines|recordsep|recordsep_zero|tableattr|title|
                         tuples_only|unicode_border_linestyle|
                         unicode_column_linestyle|unicode_header_linestyle})
  \t [on|off]            show only rows (currently off)
  \T [STRING]            set HTML <table> tag attributes, or unset if none
  \x [on|off|auto]       toggle expanded output (currently off)

Connection
  \c[onnect] {[DBNAME|- USER|- HOST|- PORT|-] | conninfo}
                         connect to new database (currently "postgres")
  \conninfo              display information about current connection
  \encoding [ENCODING]   show or set client encoding
  \password [USERNAME]   securely change the password for a user

Operating System
  \cd [DIR]              change the current working directory
  \setenv NAME [VALUE]   set or unset environment variable
  \timing [on|off]       toggle timing of commands (currently off)
  \! [COMMAND]           execute command in shell or start interactive shell

Variables
  \prompt [TEXT] NAME    prompt user to set internal variable
  \set [NAME [VALUE]]    set internal variable, or list all if no parameters
  \unset NAME            unset (delete) internal variable

Large Objects
  \lo_export LOBOID FILE
  \lo_import FILE [COMMENT]
  \lo_list
  \lo_unlink LOBOID      large object operations
18. Use pg_stat_statements to find the top SQL statements
create extension pg_stat_statements;
select query,calls,total_time,total_time/calls from pg_stat_statements order by total_time desc limit 10;
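Note that `create extension pg_stat_statements` only succeeds if the module has been preloaded at server start; a minimal configuration sketch (the parameter values below are illustrative, not required):

```sql
-- postgresql.conf (takes effect after a restart)
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000       -- number of statements tracked
pg_stat_statements.track = top       -- top-level statements only

-- start a fresh observation window before sampling top SQL
select pg_stat_statements_reset();
```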
19. Check for bloated objects (table and index bloat)
[PostgreSQL, Greenplum Daily Monitoring and Maintenance Tasks - Best Practices](../201709/20170913_01.md)
20. Check for lock waits
[PostgreSQL Lock Wait Monitoring Collector SQL - Who Blocks Whom](../201705/20170521_01.md)
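Besides the collector SQL in the linked article, PostgreSQL 9.6+ provides the built-in pg_blocking_pids() function, which gives a quick view of who blocks whom; a minimal sketch:

```sql
-- sessions that are currently blocked, and the PIDs blocking them
select pid,
       pg_blocking_pids(pid) as blocked_by,
       wait_event_type,
       query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
```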
21. View the error log
cd $PGDATA/log
# file names depend on the log_filename parameter, e.g. postgresql-Mon.log
less postgresql-*.log
22. View slow SQL, long transactions, and long-running 2PC transactions
select * from pg_stat_activity where now()-xact_start > interval '10s' or now()-query_start > interval '10s';
select * from pg_prepared_xacts ;
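If a runaway query or transaction has to be stopped, the pid values returned by the query above can be fed to pg_cancel_backend() or pg_terminate_backend(); a hedged sketch (the 1-hour threshold is illustrative):

```sql
-- cancel only the current query (gentler)
select pg_cancel_backend(pid)
from pg_stat_activity
where now() - query_start > interval '1h'
  and state = 'active'
  and pid <> pg_backend_pid();

-- pg_terminate_backend(pid) additionally closes the session
-- and rolls back its open transaction; use with care.
```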
23. View active connections
select count(*) from pg_stat_activity where state <> 'idle';
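A breakdown by state is often more useful than a single count, for example to spot piles of idle-in-transaction sessions; a simple sketch:

```sql
-- connection count per state ('active', 'idle', 'idle in transaction', ...)
select state, count(*)
from pg_stat_activity
group by state
order by count(*) desc;
```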
24. Check how many databases there are
select * from pg_database;
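When listing databases it is usually worth seeing their on-disk size as well; a sketch using the built-in size functions:

```sql
-- databases ordered by size, human-readable
select datname,
       pg_size_pretty(pg_database_size(datname)) as size
from pg_database
order by pg_database_size(datname) desc;
```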
25. Check how many schemas there are
select * from pg_namespace ;
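pg_namespace also includes internal schemas (pg_catalog, pg_toast, ...); to list only user-created schemas, something like:

```sql
-- user schemas only, excluding system namespaces
select nspname
from pg_namespace
where nspname !~ '^pg_'
  and nspname <> 'information_schema';
```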
26. View the objects in the schema
\d schema_name.*
27. Change the default schema search path
set search_path = ... ;
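A concrete example (the schema name app_schema is illustrative); unqualified object names are then resolved against the listed schemas in order:

```sql
-- per-session override; use ALTER ROLE ... SET / ALTER DATABASE ... SET
-- to make it persistent
set search_path = app_schema, public;
show search_path;
```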
See the PostgreSQL manual for more information.
Reference documentation
PostgreSQL 10 + PostGIS + Sharding (pg_pathman) + MySQL (fdw foreign tables) on ECS Deployment Guide
PostgreSQL and Greenplum Daily Monitoring and Maintenance Tasks - Best Practices
PostgreSQL Lock Wait Monitoring Collector SQL - Who Blocks Whom
PostgreSQL Nagios Monitor Script (archive, vacuum, age, conn, rollback, standby, lock, xact, seq, index, ...)
PostgreSQL Monitor - Customize Nagios Script
PostgreSQL Monitor - check_postgres Usage - 2
PostgreSQL Monitor - check_postgres Usage - 1
PostgreSQL Monitor - Nagios Client Installation
PostgreSQL Monitor - Nagios Server Installation
ZFS Snapshot Clone + PostgreSQL Stream Replication + Archive Recovery Test Script
PostgreSQL Best Practices - Block-level Incremental Backup (ZFS): Two-node HA and Block-level Backup Deployment
PostgreSQL Best Practices - Block-level Incremental Backup (ZFS): How to Consistently Back Up a Single Database Across Multiple ZFS Volumes (e.g. Tablespaces)
PostgreSQL Best Practices - Block-level Incremental Backup (ZFS): Automatic Validation of Backup Sets
PostgreSQL Best Practices - Block-level Incremental Backup (ZFS)
ZFS Snapshot Incremental Recovery vs PostgreSQL Xlog Incremental Recovery Speed
ZFS Snapshot Increment Size vs PostgreSQL XLOG Size
Fast & Safe Upgrade to PostgreSQL 9.4 Using pg_upgrade & ZFS
PostgreSQL Logical Structure and Permission Architecture