We studied how to set postgresql.conf parameters with reference to Alibaba Cloud best practices and the official PostgreSQL manual. (Only the parameters that need to be changed from their defaults are listed below; parameters whose defaults are already appropriate are omitted.)
I/O-Related Settings
```ini
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 10.0
effective_io_concurrency = 0
max_worker_processes = 256
max_parallel_workers = 28
max_parallel_maintenance_workers = 6
max_parallel_workers_per_gather = 0
synchronous_commit = remote_write
wal_compression = on
wal_writer_delay = 10ms
commit_delay = 10
checkpoint_timeout = 30min
max_wal_size = 32GB
min_wal_size = 16GB
archive_mode = on
max_replication_slots = 64
```
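These values do not have to be edited into postgresql.conf by hand. A minimal sketch using ALTER SYSTEM (which persists settings to postgresql.auto.conf and requires superuser), applying a few of the parameters from the list above:

```sql
-- Apply a few of the settings above via ALTER SYSTEM (superuser required;
-- the values are persisted in postgresql.auto.conf).
ALTER SYSTEM SET bgwriter_delay = '10ms';
ALTER SYSTEM SET checkpoint_timeout = '30min';
ALTER SYSTEM SET max_wal_size = '32GB';

-- Reload to pick up reloadable parameters; parameters such as
-- max_worker_processes and archive_mode require a full server restart.
SELECT pg_reload_conf();

-- Verify the active value.
SHOW checkpoint_timeout;
```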
bgwriter_delay: Delay between rounds of background-writer activity. In each round the writer issues writes for a certain number of dirty buffers, then sleeps for bgwriter_delay and repeats. When there are no dirty buffers in the buffer pool, it enters a longer sleep regardless of bgwriter_delay. The default is 200ms.

bgwriter_lru_maxpages: No more than this many buffers are written by the background writer in each round. Setting this parameter to zero disables background writing. The default is 100 buffers.

bgwriter_lru_multiplier: The number of buffers needed in the next round is estimated by multiplying the recent average buffer demand by bgwriter_lru_multiplier. Dirty buffers are written out until that many clean, reusable buffers are available (but no more than bgwriter_lru_maxpages per round). A setting of 1.0 is a "just in time" policy that writes out exactly the predicted number of buffers; larger values provide headroom for spikes in demand. The default is 2.0.

effective_io_concurrency: Setting this parameter to 0 disables disk prefetching; sequential scans and index scans do not benefit from prefetching, so the recommendation here is to disable it.

max_worker_processes: The maximum number of background worker processes the system supports. The default is 8. When changing this value, also consider adjusting max_parallel_workers, max_parallel_maintenance_workers, and max_parallel_workers_per_gather.

max_parallel_workers: The maximum number of parallel workers the system supports. The default is 8. Note that setting this value higher than max_worker_processes has no effect, because parallel workers are taken from the pool of worker processes established by max_worker_processes.
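Whether bgwriter_lru_maxpages = 1000 is actually generous enough can be judged from the cumulative counters in the pg_stat_bgwriter view (column names as in PostgreSQL 14–16); a steadily rising maxwritten_clean means rounds are being cut short by the limit:

```sql
-- Background-writer activity (cumulative since the last stats reset).
SELECT buffers_clean,      -- buffers written by the background writer
       maxwritten_clean,   -- rounds stopped early: bgwriter_lru_maxpages was hit
       buffers_checkpoint, -- buffers written during checkpoints
       buffers_backend     -- buffers written directly by ordinary backends
FROM pg_stat_bgwriter;
```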
max_parallel_maintenance_workers: The maximum number of parallel workers a single utility command can start. The utility commands that currently support parallel workers are CREATE INDEX (only when building a B-tree index) and VACUUM without the FULL option. The workers are taken from the pool of processes established by max_worker_processes and limited by max_parallel_workers. The default is 2.

max_parallel_workers_per_gather: The maximum number of parallel workers a single query's Gather node can start. Parallel queries may consume more resources than non-parallel queries. Setting this value to 0 disables parallel query execution.

wal_compression: When this parameter is on, the PostgreSQL server compresses the full-page images written to WAL when full_page_writes is on (the default) and during base backups. Compressed page images are decompressed during WAL replay. The default is off. Only superusers can change this setting. Turning it on reduces the space taken up by WAL without any risk of unrecoverable data corruption, at the cost of extra CPU overhead for compression during WAL generation and decompression during WAL replay.

wal_writer_delay: Specifies how often the WAL writer flushes WAL. After a flush, the writer sleeps for wal_writer_delay unless woken early by an asynchronously committed transaction. If the last flush happened less than wal_writer_delay ago and less than wal_writer_flush_after worth of WAL has been produced since, WAL is only written to the operating system, not flushed to disk. The default is 200ms.

commit_delay: Adds a delay before a WAL flush is initiated. If the system load is high enough that additional transactions become ready to commit within that interval, group-commit throughput improves because a single WAL flush commits more transactions.
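As a sketch of max_parallel_maintenance_workers in practice, the limit can be raised for a single session before a B-tree build (the orders table and index name here are hypothetical):

```sql
-- Session-level override; the workers still come out of the
-- max_worker_processes / max_parallel_workers pools.
SET max_parallel_maintenance_workers = 6;

-- B-tree builds are the CREATE INDEX case that can run in parallel.
CREATE INDEX idx_orders_created_at ON orders (created_at);
```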
However, it also increases latency by up to commit_delay for each WAL flush. Because the delay is wasted if no other transaction becomes ready to commit, it is only applied when at least commit_siblings (default: 5) other transactions are active at the moment a flush is about to start. Also, no delay is applied if fsync is disabled.

checkpoint_timeout: The maximum time between automatic WAL checkpoints. A reasonable range is between 30 seconds and 1 day; the default is 5 minutes. Increasing this parameter increases crash-recovery time.

max_wal_size: The maximum size WAL is allowed to grow to between automatic WAL checkpoints. This is a soft limit; WAL size may exceed max_wal_size under special circumstances such as heavy load, an archive_command failure, or a high wal_keep_size setting. The default is 1 GB. Increasing this parameter may lengthen the time it takes to recover from a crash.

min_wal_size: As long as WAL disk usage stays below this setting, old WAL files are always recycled for future use at checkpoints rather than removed. This ensures that enough WAL space is reserved to handle spikes in WAL usage, such as large batch jobs. If the value is specified without units, it is taken as megabytes; the default is 80 MB.

archive_mode: When archive_mode is enabled, completed WAL segments are sent to archive storage via archive_command. Besides off (disabled), there are two modes: on and always. During normal operation there is no difference between the two, but with always the WAL archiver is also enabled during archive recovery and standby mode, and all files restored from the archive or received by streaming are archived (again). archive_mode cannot be enabled when wal_level is set to minimal.

max_replication_slots: The maximum number of replication slots the server supports. The default is 10.
Setting it to a value less than the number of existing replication slots prevents the server from starting. Also, wal_level must be set to replica or higher to allow replication slots to be used.
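To sanity-check checkpoint_timeout = 30min against max_wal_size = 32GB, one rough approach is to measure the WAL generation rate. This sketch assumes psql (its \gset meta-command stores the query result in a variable) and a 60-second sample window:

```sql
-- Capture the current WAL position, wait, and measure how much WAL
-- was generated in between (psql: \gset stores start_lsn as a variable).
SELECT pg_current_wal_lsn() AS start_lsn \gset
SELECT pg_sleep(60);
SELECT pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), :'start_lsn')
       ) AS wal_per_minute;
```

If wal_per_minute multiplied by 30 minutes approaches max_wal_size, checkpoints will be triggered by size rather than by checkpoint_timeout.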
synchronous_commit: The options are on, off, local, remote_write, and remote_apply. In a single-instance environment:
on: When a transaction commits, its WAL records are written to the WAL buffer and then flushed to the WAL log file on disk; success is returned to the client only after the local WAL flush completes. on is the default. It is the safest setting, at some cost in performance.

local: Similar to on in a single-instance environment — success is returned to the client only after the local WAL flush completes.

off: The commit returns success to the client without waiting for the transaction's WAL records to be flushed to disk. This carries a risk: if the database crashes, a small number of the most recent transactions may be lost; after restart, the database treats them as aborted and rolls them back. It suits workloads with high performance requirements that can tolerate losing the very latest transactions.
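Because synchronous_commit can be changed per session or per transaction, off does not have to be a server-wide choice. A sketch (the audit_log table is hypothetical):

```sql
-- Give up the commit-time WAL flush for one low-value transaction only,
-- while the server default stays at its safer setting.
BEGIN;
SET LOCAL synchronous_commit = off;  -- reverts automatically at COMMIT/ROLLBACK
INSERT INTO audit_log (msg) VALUES ('non-critical event');
COMMIT;  -- returns without waiting for the WAL flush
```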
In a streaming replication environment:
on: When the primary commits a transaction, it waits for the standby to receive the WAL stream sent by the primary and flush it to the standby's WAL files before returning success to the client. Simply put, on means the local WAL is on disk and the standby's WAL is also on disk — two persistent copies of the WAL — but the standby has not necessarily finished replaying it. This option gives a higher transaction response time.

remote_write: When the primary commits a transaction, it waits for the standby to receive the WAL stream and write it to the standby's operating system cache before returning success to the client. If the standby's PostgreSQL process crashes, no received WAL is lost; but if the standby's operating system crashes, WAL still sitting in the OS cache may not yet have been written to the standby's WAL files and can be lost. In other words, the local WAL is on disk while the standby's copy is only in its OS cache, so there is just one persistent copy. This option gives a lower transaction response time.

remote_apply: When the primary commits a transaction, it waits for the standby to receive the WAL stream, flush it to the WAL files, and finish replaying (redoing) it before returning success to the client. This setting guarantees two persistent copies of the WAL and that the standby has applied the changes. This option gives the highest transaction response time.
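For any of these replicated modes to wait on a standby at all, the primary must name its synchronous standbys. A minimal postgresql.conf sketch, assuming a single standby that sets application_name = 'standby1' in its primary_conninfo:

```ini
# postgresql.conf on the primary
synchronous_commit = remote_write                 # wait for standby OS write
synchronous_standby_names = 'FIRST 1 (standby1)'  # one synchronous standby
```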