One thing we do a lot for clients is moving databases from one server to another via pg_dump and pg_restore. Since this process often happens during a downtime window, it's critical to get through the pg_dump and pg_restore as quickly as possible. Here are a few tips:
- Use the -j multiprocess option for pg_restore (and, on 9.3 and later, for pg_dump as well). Ideal concurrency is generally two less than the number of cores you have, up to a limit of 8. Users with many (> 1000) tables will benefit from even higher concurrency. See the example commands after this list.
- Doing a compressed pg_dump, copying the dump to the target (using whatever options make your copy tool fastest), and restoring from the local copy is usually faster than piping pg_dump output across the wire, unless you have a very fast network.
- If you're using binary replication, it's faster to disable it while restoring a large database, and then re-clone the replicas from the restored database. This assumes, of course, that no other databases on the system need to stay in replication.
- You should set some postgresql.conf options for fast restore.
- The settings below assume that the restored database will be the only database running on the target system; they are not safe for production databases.
- They also assume that if the pg_restore fails, you'll simply delete the target database and start over.
- These settings will break replication as well as PITR backup.
- These settings will require a restart of PostgreSQL to get back to production settings afterwards.
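As a concrete sketch of the first two tips, the whole cycle looks something like this on a pair of 8-core machines; "mydb" and the paths are placeholders:

pg_dump -Fd -j 6 -f /backup/mydb.dump mydb    # directory format is required for parallel dump (9.3+), and is compressed by default
# copy /backup/mydb.dump/ to the target server, then there:
createdb mydb
pg_restore -j 6 -d mydb /backup/mydb.dump     # parallel restore works with custom or directory format

And here are the postgresql.conf settings themselves: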
maintenance_work_mem = 1GB-2GB
fsync = off
synchronous_commit = off
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0
archive_mode = off
autovacuum = off
all activity logging settings disabled
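A couple of those lines are shorthand rather than literal conf syntax. In an actual postgresql.conf it would look more like the following; the log_* lines are just examples, since which activity-logging settings you have enabled will vary:

maintenance_work_mem = 2GB
checkpoint_segments = 256
log_statement = 'none'
log_duration = off
log_min_duration_statement = -1
log_checkpoints = off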
Some more notes:
- You want to set maintenance_work_mem as high as possible, up to 2GB, for building new indexes. However, since we're doing a concurrent restore, don't get carried away: your limit should be RAM / (2 * concurrency), in order to leave some room for the filesystem cache. This is a reason you might turn concurrency down if you have only a few large tables in the database.
- checkpoint_segments should be set high, but it requires available disk space, at the rate of about 1GB per 32 segments. This is in addition to the space you need for the database itself. See the worked example below.
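To make those two rules concrete, a worked example (the server sizes here are hypothetical): on a machine with 64GB of RAM restoring with -j 4, the maintenance_work_mem ceiling would be 64GB / (2 * 4) = 8GB, so the 2GB maximum applies; with only 8GB of RAM and -j 4, you'd cap it at 8GB / 8 = 1GB. And checkpoint_segments = 256 means budgeting roughly 256 / 32 = 8GB of disk for WAL on top of the restored database.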