This article explains how to set up streaming replication in PostgreSQL. The steps are simple and practical; follow along to build a working primary/standby pair.
With streaming replication, PostgreSQL makes it easy to build a high-availability (HA) environment.
Step 1 Primary: create the replication user
Create the replication user replicator:
testdb=# CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replicator';
CREATE ROLE
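To double-check that the role exists and carries the REPLICATION attribute, you can query pg_roles. This check is not part of the original walkthrough; rolreplication should come back as t:

psql -d testdb -c "SELECT rolname, rolreplication FROM pg_roles WHERE rolname = 'replicator';"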
Step 2 Primary: configure parameters
Set archive_mode and the related parameters in postgresql.conf:
archive_mode = ON
wal_level = replica
max_wal_senders = 10
archive_command = '/home/xdb/archive.sh %p %f'
listen_addresses = '*'
The same parameters can also be changed with ALTER SYSTEM:
ALTER SYSTEM SET wal_level TO 'replica';
ALTER SYSTEM SET archive_mode TO 'ON';
ALTER SYSTEM SET max_wal_senders TO '10';
ALTER SYSTEM SET listen_addresses TO '*';
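The archive_command above calls a user script /home/xdb/archive.sh whose contents are not shown here. A minimal sketch of what such a script might look like, assuming WAL segments are simply copied into /data/archivelog (the directory referenced later by restore_command):

#!/bin/bash
# archive.sh: invoked by archive_command with %p (full path of the WAL segment) and %f (file name).
# Refuse to overwrite a segment that was already archived, then copy it; a non-zero
# exit status tells PostgreSQL that archiving failed and must be retried.
ARCHIVE_DIR=/data/archivelog
test ! -f "$ARCHIVE_DIR/$2" && cp "$1" "$ARCHIVE_DIR/$2"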
Restart the database:
pg_ctl -D $PGDATA restart -mf
Step 3 Primary: configure access
Add a replication entry for the standby to pg_hba.conf:
host replication replicator 192.168.26.26/32 md5
Reload the configuration:
pg_ctl -D $PGDATA reload
Step 4 Standby: restore from a backup of the primary
On the standby, create the standby data directory with pg_basebackup.
192.168.26.25 is the primary's IP and 192.168.26.26 is the standby's IP.
pg_basebackup -h 192.168.26.25 -U replicator -p 5432 -D $PGDATA -P -Xs -R
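Because -R is passed, pg_basebackup also writes a recovery.conf into the target directory with standby_mode and primary_conninfo already filled in. A quick way to confirm this (assuming $PGDATA is set on the standby):

ls -l $PGDATA/recovery.conf
cat $PGDATA/recovery.conf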
Configure postgresql.conf on the standby:
hot_standby = ON
hot_standby_feedback = ON

The same settings expressed with ALTER SYSTEM:

ALTER SYSTEM SET hot_standby TO 'ON';
ALTER SYSTEM SET hot_standby_feedback TO 'ON';
Configure recovery.conf on the standby:
$ cat $PGDATA/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=192.168.26.25 port=5432 user=replicator password=replicator'
restore_command = 'cp /data/archivelog/%f %p'
archive_cleanup_command = 'pg_archivecleanup /data/archivelog %r'
Step 5 Standby: start the database
[xdb@localhost testdb]$ pg_ctl -D $PGDATA start
waiting for server to start....2019-03-13 12:13:30.239 CST [1870] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-03-13 12:13:30.239 CST [1870] LOG:  listening on IPv6 address "::", port 5432
2019-03-13 12:13:30.252 CST [1870] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-03-13 12:13:30.379 CST [1870] LOG:  redirecting log output to logging collector process
2019-03-13 12:13:30.379 CST [1870] HINT:  Future log output will appear in directory "pg_log".
 done
server started
Step 6 Verify the replication setup
Confirm that the walsender, walreceiver, and startup processes are running:
# Primary
[xdb@localhost testdb]$ ps -ef|grep sender
xdb 1646 1532 0 12:13 ? 00:00:00 postgres: walsender replicator 192.168.26.26(35294) streaming 0/43000140
xdb 1659 1440 0 12:17 pts/1 00:00:00 grep --color=auto sender
# Standby
[xdb@localhost testdb]$ ps -ef|grep receiver
xdb 1879 1870 0 12:13 ? 00:00:00 postgres: walreceiver streaming 0/43000140
xdb 1884 1799 0 12:18 pts/0 00:00:00 grep --color=auto receiver
[xdb@localhost testdb]$ ps -ef|grep startup
xdb 1872 1870 0 12:13 ? 00:00:00 postgres: startup recovering 000000100000000000000043
xdb 1887 1799 0 12:18 pts/0 00:00:00 grep --color=auto startup
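Another quick check, not in the original article, is to ask each instance whether it is in recovery; pg_is_in_recovery() should return f on the primary and t on the standby:

# Run on each side; expect f on the primary and t on the standby
psql -d testdb -c "SELECT pg_is_in_recovery();"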
Step 7 Monitoring
Query the pg_stat_replication view on the primary:
testdb=# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 1646
usesysid         | 90113
usename          | replicator
application_name | walreceiver
client_addr      | 192.168.26.26
client_hostname  |
client_port      | 35294
backend_start    | 2019-03-13 12:13:30.852269+08
backend_xmin     |
state            | streaming
sent_lsn         | 0/43000140
write_lsn        | 0/43000140
flush_lsn        | 0/43000140
replay_lsn       | 0/43000140
write_lag        |
flush_lag        |
replay_lag       |
sync_priority    | 0
sync_state       | async
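The LSN columns can also be turned into a byte count of replication lag with pg_wal_lsn_diff(). The query below is a monitoring sketch run on the primary, not something from the original article:

psql -d testdb -c "SELECT application_name, client_addr, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes FROM pg_stat_replication;"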
Synchronous replication
On the standby, add application_name to primary_conninfo in recovery.conf:
primary_conninfo = 'user=replicator password=replicator host=192.168.26.25 port=5432 application_name = standby_26'
On the primary, set:
synchronous_standby_names = 'standby_26'
synchronous_commit = on
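Following the ALTER SYSTEM pattern from Step 2, the same two settings can also be applied from psql before the restart below; a sketch:

psql -d testdb -c "ALTER SYSTEM SET synchronous_standby_names TO 'standby_26';"
psql -d testdb -c "ALTER SYSTEM SET synchronous_commit TO 'on';"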
Restart the databases and verify that the configuration took effect; application_name and sync_state now show the synchronous standby:
testdb=# \x
Expanded display is on.
testdb=# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 2257
usesysid         | 90113
usename          | replicator
application_name | standby_26
client_addr      | 192.168.26.26
client_hostname  |
client_port      | 35418
backend_start    | 2019-03-13 15:17:57.330573+08
backend_xmin     | 634
state            | streaming
sent_lsn         | 0/54D4DBD0
write_lsn        | 0/54D4DBD0
flush_lsn        | 0/54D4DBD0
replay_lsn       | 0/54D4DBD0
write_lag        | 00:00:00.00101
flush_lag        | 00:00:00.001954
replay_lag       | 00:00:00.002145
sync_priority    | 1
sync_state       | sync
This concludes the walkthrough of setting up streaming replication in PostgreSQL. Now that the steps are clear, try them out in your own environment.