This article gives a detailed introduction to Redis high-availability clusters; most of the material covers features you will use regularly, and it is shared here as a practical reference.
Redis is currently the caching middleware we use at the largest scale; its powerful, efficient, and convenient feature set has earned it very wide adoption.
Redis 3.0.0, released in 2015, added official Redis Cluster support and ended the era in which Redis had no native clustering. Before that, the most commonly used clustering solutions were Twemproxy, released by Twitter, and Codis, developed by Wandoujia. This article explains Redis Cluster and then puts it into practice, keeping the explanation as plain as possible.
Redis Cluster was designed to be decentralized and middleware-free: every node in the cluster is an equal peer, each holding its own share of the data plus the state of the whole cluster. Every node keeps live connections to every other node, so connecting to any single node is enough to reach the data held by the others.
Since every node is an equal peer and stores its own data, how does Redis distribute the data across the nodes?
Redis Cluster does not use classic consistent hashing to place data; instead it uses hash slots.
Redis Cluster defines 16384 slots. When we set a key, the CRC16 of the key is taken modulo 16384 to determine the slot it belongs to, and the key is stored on whichever node owns that slot. The formula is: CRC16(key) % 16384.
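If you want to see which slot a particular key maps to, any running cluster node can compute it for you with the CLUSTER KEYSLOT command (a quick check; the IP and port below are from the test environment used later in this article):
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000 cluster keyslot my_name   # prints the slot number for my_name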
Note that a Redis cluster needs at least 3 master nodes: the voting-based fault detection requires more than half of the masters to agree that a node is down before it is marked as failed, so 2 nodes cannot form a cluster.
Suppose we already have 3 nodes deployed as a Redis cluster: A, B, and C. They may be three ports on one machine or three separate servers. The 16384 hash slots are split among them, and each node serves a slot range:
Node A covers 0-5461; node B covers 5462-10922; node C covers 10923-16383.
Now suppose I want to set a key, say my_name: set my_name linux
Applying the Redis Cluster hash-slot formula: CRC16('my_name') % 16384 = 12803 (this matches the redirect we will see later in the hands-on test).
Slot 12803 falls in the range 10923-16383, so the key is stored on node C.
Likewise, when I connect to any of the nodes (A, B, or C) and ask for my_name, the same calculation is performed and the request is redirected internally to node C.
The nice thing about this hash-slot scheme is that it is very transparent. For example, to add a new node D, Redis Cluster takes a portion of slots from the front of each existing node's range and moves them to D, roughly like this (we will try it in the practice section below):
Node A covers 1365-5460; node B covers 6827-10922; node C covers 12288-16383; node D covers 0-1364, 5461-6826, 10923-12287.
Removing a node works the same way in reverse: its slots are moved to the remaining nodes, and once the migration finishes the node can be removed.
To keep data highly available, Redis Cluster adds a master-replica model: each master has one or more replicas. The master serves reads and writes, while the replicas pull a copy of the master's data; when a master goes down, one of its replicas is promoted to master so the cluster keeps running.
In the example above the cluster has three masters A, B, and C with no replicas. If B fails, the whole cluster becomes unavailable: B's slots can no longer be served, and by default the cluster refuses to operate without full slot coverage, so A's and C's slots become unreachable as well. That is why every master should get a replica when the cluster is built. With masters A, B, C and replicas A1, B1, C1, the cluster keeps working even if B fails: the cluster elects B1 as the new master and service continues. Note that when B comes back up it becomes a replica of B1 rather than reclaiming the master role.
If B and B1 fail at the same time, the cluster can no longer serve requests; in practice we do not expect, and should not allow, both to be down at once.
Given how Redis Cluster's internal failover works, a cluster needs at least 3 master nodes, and for high availability each master needs a replica, so the minimum deployment is six Redis instances: three masters and three replicas (ideally on six separate servers).
Resources are limited and this is a test environment, so we will build a pseudo-cluster on two machines by running several Redis instances on different TCP ports. A real production cluster is built in exactly the same way.
There are currently two ways to build a Redis cluster:
1. Manually, by running the cluster commands yourself, step by step.
2. Automatically, using the official cluster management tool.
Both work the same way underneath; the automatic method simply wraps the Redis commands from the manual method into an executable tool. For production, the automatic method is recommended because it is quick and less error-prone. This article demonstrates both.
Environment:
Host A: 172.16.1.100 (CentOS 7.3), running three instances on ports 7000, 7001, 7002; all masters
Host B: 172.16.1.110 (CentOS 7.3), running three instances on ports 8000, 8001, 8002; all replicas
1. Install Redis
Host A:
[root@redis01-server ~]# tar zxf redis-4.0.14.tar.gz
[root@redis01-server ~]# mv redis-4.0.14 /usr/local/redis
[root@redis01-server ~]# cd /usr/local/redis/
[root@redis01-server redis]# make && make install
#Installation done; now edit the configuration file:
[root@redis01-server ~]# vim /usr/local/redis/redis.conf
bind 172.16.1.100                      #bind to this Redis host's IP address
port 7000                              #listening port for this instance
daemonize yes                          #run the instance as a daemon
cluster-enabled yes                    #enable cluster mode
cluster-config-file nodes-7000.conf    #cluster config file for this node
cluster-node-timeout 5000              #node timeout in milliseconds
appendonly yes                         #enable AOF persistence
appendfilename "appendonly-7000.aof"   #AOF file name for this instance
Host B:
[root@redis02-server ~]# tar zxf redis-4.0.14.tar.gz
[root@redis02-server ~]# mv redis-4.0.14 /usr/local/redis
[root@redis02-server ~]# cd /usr/local/redis
[root@redis02-server redis]# make && make install
[root@redis02-server ~]# vim /usr/local/redis/redis.conf
bind 172.16.1.110
port 8000
daemonize yes
cluster-enabled yes
cluster-config-file nodes-8000.conf
cluster-node-timeout 5000
appendonly yes
appendfilename "appendonly-8000.aof"
2. Following the plan above, create a directory to hold each node's configuration file
Host A:
[root@redis01-server ~]# mkdir /usr/local/redis-cluster
[root@redis01-server ~]# cd /usr/local/redis-cluster/
[root@redis01-server redis-cluster]# mkdir {7000,7001,7002}
Host B:
[root@redis02-server ~]# mkdir /usr/local/redis-cluster
[root@redis02-server ~]# cd /usr/local/redis-cluster/
[root@redis02-server redis-cluster]# mkdir {8000,8001,8002}
3. Copy the main configuration file into each node's directory
Host A:
[root@redis01-server ~]# cp /usr/local/redis/redis.conf /usr/local/redis-cluster/7000/
[root@redis01-server ~]# cp /usr/local/redis/redis.conf /usr/local/redis-cluster/7001/
[root@redis01-server ~]# cp /usr/local/redis/redis.conf /usr/local/redis-cluster/7002/
Host B:
[root@redis02-server ~]# cp /usr/local/redis/redis.conf /usr/local/redis-cluster/8000/
[root@redis02-server ~]# cp /usr/local/redis/redis.conf /usr/local/redis-cluster/8001/
[root@redis02-server ~]# cp /usr/local/redis/redis.conf /usr/local/redis-cluster/8002/
#Change the listening TCP port (and the other port-based names) in each copy:
Host A:
[root@redis01-server ~]# sed -i "s/7000/7001/g" /usr/local/redis-cluster/7001/redis.conf
[root@redis01-server ~]# sed -i "s/7000/7002/g" /usr/local/redis-cluster/7002/redis.conf
Host B:
[root@redis02-server ~]# sed -i "s/8000/8001/g" /usr/local/redis-cluster/8001/redis.conf
[root@redis02-server ~]# sed -i "s/8000/8002/g" /usr/local/redis-cluster/8002/redis.conf
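The copy-and-sed steps above can also be written as a small loop; a sketch for host A (the 7000 copy needs no substitution, and host B would use 8001 8002 instead):
[root@redis01-server ~]# for p in 7001 7002; do cp /usr/local/redis/redis.conf /usr/local/redis-cluster/$p/ && sed -i "s/7000/$p/g" /usr/local/redis-cluster/$p/redis.conf; done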
4. Start the Redis services
Host A:
[root@redis01-server ~]# ln -s /usr/local/bin/redis-server /usr/local/sbin/
[root@redis01-server ~]# redis-server /usr/local/redis-cluster/7000/redis.conf
[root@redis01-server ~]# redis-server /usr/local/redis-cluster/7001/redis.conf
[root@redis01-server ~]# redis-server /usr/local/redis-cluster/7002/redis.conf
#Make sure every instance is running:
[root@redis01-server ~]# ps -ef | grep redis
root 19595 1 0 04:55 ? 00:00:00 redis-server 172.16.1.100:7000 [cluster]
root 19602 1 0 04:56 ? 00:00:00 redis-server 172.16.1.100:7001 [cluster]
root 19607 1 0 04:56 ? 00:00:00 redis-server 172.16.1.100:7002 [cluster]
root 19612 2420 0 04:58 pts/0 00:00:00 grep --color=auto redis
Host B:
[root@redis02-server ~]# ln -s /usr/local/bin/redis-server /usr/local/sbin/
[root@redis02-server ~]# redis-server /usr/local/redis-cluster/8000/redis.conf
[root@redis02-server ~]# redis-server /usr/local/redis-cluster/8001/redis.conf
[root@redis02-server ~]# redis-server /usr/local/redis-cluster/8002/redis.conf
[root@redis02-server ~]# ps -ef | grep redis
root 18485 1 0 00:17 ? 00:00:00 redis-server 172.16.1.110:8000 [cluster]
root 18490 1 0 00:17 ? 00:00:00 redis-server 172.16.1.110:8001 [cluster]
root 18495 1 0 00:17 ? 00:00:00 redis-server 172.16.1.110:8002 [cluster]
root 18501 1421 0 00:19 pts/0 00:00:00 grep --color=auto redis
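Before moving on to the handshake, it is worth a quick connectivity check against a couple of instances; each should answer PONG (repeat for the other ports as needed):
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000 ping
[root@redis02-server ~]# redis-cli -h 172.16.1.110 -p 8000 ping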
5. Node handshake
Although all six instances above are running in cluster mode, by default they know nothing about (and do not trust) each other. The node handshake creates connections between the nodes, every node connected to every other node, so they form a complete mesh, i.e. a cluster.
The handshake command is:
cluster meet ip port
To complete the handshake for our six nodes, connect to node A with redis-cli and run the following five commands:
Note: stop the firewall on every host in the cluster (systemctl stop firewalld), or open the required ports; otherwise the handshake cannot complete.
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000
172.16.1.100:7000> cluster meet 172.16.1.100 7001
OK
172.16.1.100:7000> cluster meet 172.16.1.100 7002
OK
172.16.1.100:7000> cluster meet 172.16.1.110 8000
OK
172.16.1.100:7000> cluster meet 172.16.1.110 8001
OK
172.16.1.100:7000> cluster meet 172.16.1.110 8002
OK
#Check that the handshake succeeded:
172.16.1.100:7000> cluster nodes
060a11f6985df66e4b9cf596355bbe334f843587 172.16.1.100:7001@17001 master - 0 1584155029000 1 connected
2fb26d79f703f9fbd8841e4ee93ea88f7df5dad9 172.16.1.110:8002@18002 master - 0 1584155029000 5 connected
6d3ac8cf0dc3c8400d2df8d0559fbe8bdce0c34d 172.16.1.110:8000@18000 master - 0 1584155029243 3 connected
cc3b16e067bf1ce9978c13870f0e1d538102a733 172.16.1.110:8001@18001 master - 0 1584155030000 0 connected
0f74b9e2d07e159fdc0fc1edffd3d0b305adc2fd 172.16.1.100:7000@17000 myself,master - 0 1584155028000 2 connected
c0fefd1442b3fa4e41eb6fba5073dcc1427ca812 172.16.1.100:7002@17002 master - 0 1584155030249 4 connected
As you can see, all nodes in the cluster are now connected to each other; the handshake is complete.
#Although the nodes are connected, the cluster is not online yet. Run cluster info to check its current state:
172.16.1.100:7000> cluster info
cluster_state:fail #the cluster is currently offline
cluster_slots_assigned:0 #0 means no hash slots have been assigned to any node yet
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:0
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:612
cluster_stats_messages_pong_sent:627
cluster_stats_messages_meet_sent:10
cluster_stats_messages_sent:1249
6. Assign hash slots
The cluster only comes online once all 16384 slots have been assigned to the master nodes. The command is: cluster addslots slot [slot ...]
#Following the plan, use cluster addslots to split the 16384 hash slots roughly evenly across masters A, B, and C (the {0..5461} syntax is bash brace expansion, which expands into the individual slot numbers).
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000 cluster addslots {0..5461}
OK
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7001 cluster addslots {5462..10922}
OK
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7002 cluster addslots {10923..16383}
OK
After the slots are assigned, check the cluster state again:
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000 cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:2015
cluster_stats_messages_pong_sent:2308
cluster_stats_messages_meet_sent:10
cluster_stats_messages_sent:4333
cluster_stats_messages_ping_received:2308
cluster_stats_messages_pong_received:2007
cluster_stats_messages_received:4315
cluster_state:ok confirms that the Redis cluster is now online.
#To remove assigned slots, use the cluster delslots command, for example: [root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000 cluster delslots {10923..16383}
#Check the slot assignments:
[root@redis01-server ~]# redis-cli -h 172.16.1.100 -p 7000 cluster nodes
060a11f6985df66e4b9cf596355bbe334f843587 172.16.1.100:7001@17001 master - 0 1584156388000 1 connected 5462-10922
2fb26d79f703f9fbd8841e4ee93ea88f7df5dad9 172.16.1.110:8002@18002 master - 0 1584156387060 5 connected
6d3ac8cf0dc3c8400d2df8d0559fbe8bdce0c34d 172.16.1.110:8000@18000 master - 0 1584156386056 3 connected
cc3b16e067bf1ce9978c13870f0e1d538102a733 172.16.1.110:8001@18001 master - 0 1584156387000 0 connected
0f74b9e2d07e159fdc0fc1edffd3d0b305adc2fd 172.16.1.100:7000@17000 myself,master - 0 1584156387000 2 connected 0-5461
c0fefd1442b3fa4e41eb6fba5073dcc1427ca812 172.16.1.100:7002@17002 master - 0 1584156386000 4 connected 10923-16383
The three masters now have their slots, but the other three nodes are still unused. If one of the masters failed at this point the whole cluster would go down, so we need to attach each remaining node to a master as a replica to get high availability.
7. Master-replica replication
The cluster replication command is:
cluster replicate node-id
1) Connect to any node in the cluster to obtain the node-id of every master
[root@redis01-server ~]# redis-cli -h 172.16.1.110 -p 8000 cluster nodes
2fb26d79f703f9fbd8841e4ee93ea88f7df5dad9 172.16.1.110:8002@18002 master - 0 1584157179543 5 connected
c0fefd1442b3fa4e41eb6fba5073dcc1427ca812 172.16.1.100:7002@17002 master - 0 1584157179946 4 connected 10923-16383
6d3ac8cf0dc3c8400d2df8d0559fbe8bdce0c34d 172.16.1.110:8000@18000 myself,master - 0 1584157179000 3 connected
060a11f6985df66e4b9cf596355bbe334f843587 172.16.1.100:7001@17001 master - 0 1584157180952 1 connected 5462-10922
cc3b16e067bf1ce9978c13870f0e1d538102a733 172.16.1.110:8001@18001 master - 0 1584157180549 0 connected
0f74b9e2d07e159fdc0fc1edffd3d0b305adc2fd 172.16.1.100:7000@17000 master - 0 1584157179544 2 connected 0-5461
2) Run the following three commands to assign each replica its master; the cluster then sets up replication automatically
[root@redis02-server ~]# redis-cli -h 172.16.1.110 -p 8000 cluster replicate 0f74b9e2d07e159fdc0fc1edffd3d0b305adc2fd
OK
[root@redis02-server ~]# redis-cli -h 172.16.1.110 -p 8001 cluster replicate 060a11f6985df66e4b9cf596355bbe334f843587
OK
[root@redis02-server ~]# redis-cli -h 172.16.1.110 -p 8002 cluster replicate c0fefd1442b3fa4e41eb6fba5073dcc1427ca812
OK
3) Check the replication status of every node in the cluster:
[root@redis02-server ~]# redis-cli -h 172.16.1.110 -p 8000 cluster nodes
2fb26d79f703f9fbd8841e4ee93ea88f7df5dad9 172.16.1.110:8002@18002 slave c0fefd1442b3fa4e41eb6fba5073dcc1427ca812 0 1584157699631 5 connected
c0fefd1442b3fa4e41eb6fba5073dcc1427ca812 172.16.1.100:7002@17002 master - 0 1584157700437 4 connected 10923-16383
6d3ac8cf0dc3c8400d2df8d0559fbe8bdce0c34d 172.16.1.110:8000@18000 myself,slave 0f74b9e2d07e159fdc0fc1edffd3d0b305adc2fd 0 1584157699000 3 connected
060a11f6985df66e4b9cf596355bbe334f843587 172.16.1.100:7001@17001 master - 0 1584157701442 1 connected 5462-10922
cc3b16e067bf1ce9978c13870f0e1d538102a733 172.16.1.110:8001@18001 slave 060a11f6985df66e4b9cf596355bbe334f843587 0 1584157699932 1 connected
0f74b9e2d07e159fdc0fc1edffd3d0b305adc2fd 172.16.1.100:7000@17000 master - 0 1584157700000 2 connected 0-5461
Every replica is now backing its corresponding master, and we have successfully built a Redis cluster by hand.
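At this point the cluster can already be smoke-tested with a cluster-aware client; the -c flag makes redis-cli follow slot redirections (test_key is just an illustrative key):
[root@redis01-server ~]# redis-cli -c -h 172.16.1.100 -p 7000 set test_key hello
[root@redis01-server ~]# redis-cli -c -h 172.16.1.100 -p 7000 get test_key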
To summarize, the key steps of the manual build are:
1. Install Redis on every node
2. Edit the configuration files and enable cluster mode
3. Start the Redis service on every node
4. Perform the node handshake
5. Assign hash slots to the master nodes
6. Establish the master-replica relationships
Recommended reading:
The difference between Redis RDB and AOF persistence: https://www.cnblogs.com/shizhengwen/p/9283973.html
Since Redis 3.0 the official distribution has shipped a cluster management tool, redis-trib.rb, found in the src directory of the Redis source package. It wraps the Redis cluster commands used above and is simple and convenient.
Environment:
Host A: 172.16.1.100 (CentOS 7.3), running three instances on ports 7000, 7001, 7002
Host B: 172.16.1.110 (CentOS 7.3), running three instances on ports 8000, 8001, 8002
1. Repeat steps 1 through 4 of the manual build above, up to and including starting the Redis instances (make sure they are all running), and disable the firewall.
2. Build the cluster with the management tool
redis-trib.rb was written by the Redis author in Ruby, so a Ruby runtime has to be installed first.
Note: from Redis 5.0 onward this tool is built into redis-cli and exposed through the --cluster option, whose create subcommand creates a cluster.
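For reference, on Redis 5.0 or later the same cluster could be created without installing Ruby; a sketch of the equivalent command for the six instances used here (not run in this test, which uses Redis 4.0.14):
redis-cli --cluster create 172.16.1.100:7000 172.16.1.100:7001 172.16.1.100:7002 172.16.1.110:8000 172.16.1.110:8001 172.16.1.110:8002 --cluster-replicas 1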
1) Install Ruby and the other dependencies (on both hosts)
[root@redis-01 ~]# yum -y install ruby ruby-devel rubygems rpm-build openssl openssl-devel
#Confirm the installed version:
[root@redis-01 ~]# ruby -v
ruby 2.0.0p648 (2015-12-16) [x86_64-linux]
2) Build the cluster with the redis-trib.rb script
[root@redis-01 ~]# ln -s /usr/local/redis/src/redis-trib.rb /usr/local/sbin/
[root@redis-01 ~]# redis-trib.rb create --replicas 1 172.16.1.100:7000 172.16.1.100:7001 172.16.1.100:7002 172.16.1.110:8000 172.16.1.110:8001 172.16.1.110:8002
#Here we use the create command; --replicas 1 tells it to create one replica for each master (assigned automatically), and the remaining arguments are the addresses of the instances
/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- redis (LoadError)
from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require'
from /usr/local/sbin/redis-trib.rb:25:in `<main>'
The error above means the redis gem, which provides the Ruby interface to Redis, is missing. Download the matching version from https://rubygems.org/gems/redis/ and install it; here we use version 3.3.0:
[root@redis-01 ~]# gem install -l redis-3.3.0.gem
Successfully installed redis-3.3.0
Parsing documentation for redis-3.3.0
Installing ri documentation for redis-3.3.0
1 gem installed
#Create the cluster again
[root@redis-01 ~]# redis-trib.rb create --replicas 1 172.16.1.100:7000 172.16.1.100:7001 172.16.1.100:7002 172.16.1.110:8000 172.16.1.110:8001 172.16.1.110:8002
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.1.100:7000
172.16.1.110:8000
172.16.1.100:7001
Adding replica 172.16.1.110:8002 to 172.16.1.100:7000
Adding replica 172.16.1.100:7002 to 172.16.1.110:8000
Adding replica 172.16.1.110:8001 to 172.16.1.100:7001
M: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots:0-5460 (5461 slots) master
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
replicates 4debd0b5743826d203d1af777824eb1b83105d21
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
S: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
replicates 8613a457f8009aaf784df0ac6d7039034b16b6a6
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 172.16.1.100:7000)
M: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots: (0 slots) slave
replicates 8613a457f8009aaf784df0ac6d7039034b16b6a6
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
redis-trib shows you the configuration it is about to apply; type yes to accept it, and the instances exchange messages and join the cluster.
At this point the cluster is built, all from a single command, which is extremely convenient.
1) Check the cluster state:
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7000
>>> Performing Cluster Check (using node 172.16.1.100:7000)
M: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots: (0 slots) slave
replicates 8613a457f8009aaf784df0ac6d7039034b16b6a6
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
There are 3 masters (M) on ports 7000, 8000, and 7001, and 3 replicas (S) on ports 7002, 8001, and 8002, and every node is in the connected state.
2) Test connecting to the cluster
The cluster is up. Because Redis Cluster is decentralized and every node is an equal peer, you can read and write data through any node. Let's test that:
[root@redis-01 ~]# redis-cli -h 172.16.1.100 -p 7000 -c //cluster mode requires the -c flag
172.16.1.100:7000> set my_name linux //set a value
-> Redirected to slot [12803] located at 172.16.1.100:7001
//As explained earlier, the key's slot is computed with CRC16('my_name') % 16384; here that is slot 12803, which falls in the range 10923-16383 served by node 7001
OK
172.16.1.100:7001> get my_name //read the value back
"linux"
Redis Cluster's approach is very direct: after the key is created the client is redirected to node 7001 instead of staying on 7000. Now connect to the replica node 8002:
[root@redis-01 ~]# redis-cli -h 172.16.1.110 -p 8002 -c
172.16.1.110:8002> get my_name
-> Redirected to slot [12803] located at 172.16.1.100:7001
"linux"
//reading the key my_name again also redirects to 7001 and returns the value
3) Test the cluster's high availability
The cluster currently has 3 masters (7000, 8000, 7001) serving reads and writes and 3 replicas (7002, 8001, 8002) that replicate their masters' data. Let's look at a replica's appendonly.aof to confirm this: the key we created landed on 7001, whose replica is 8001, so we inspect 8001's AOF file.
[root@redis-02 ~]# cat appendonly-8001.aof
*2
$6
SELECT
$1
0
*3
$3
set
$7
my_name
$5
linux
You can see the data was indeed replicated over from the master.
Note: the dump.rdb and appendonly.aof files are created in whatever directory you started Redis from; to use a fixed location, change the dir setting in the config file from a relative path to an absolute one.
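For example (the path is illustrative; the value in effect can be checked at runtime with CONFIG GET):
dir /usr/local/redis-cluster/7000/    # in 7000/redis.conf
[root@redis-01 ~]# redis-cli -h 172.16.1.100 -p 7000 config get dir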
#Next, simulate the failure of one of the masters:
[root@redis-01 ~]# ps -ef | grep redis
root 5598 1 0 01:02 ? 00:00:06 redis-server 172.16.1.100:7000 [cluster]
root 5603 1 0 01:02 ? 00:00:06 redis-server 172.16.1.100:7001 [cluster]
root 5608 1 0 01:02 ? 00:00:06 redis-server 172.16.1.100:7002 [cluster]
root 19735 2242 0 03:32 pts/0 00:00:00 grep --color=auto redis
[root@redis-01 ~]# kill 5598
#Check the cluster state:
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7000
[ERR] Sorry, can't connect to node 172.16.1.100:7000
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7001
>>> Performing Cluster Check (using node 172.16.1.100:7001)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:0-5460 (5461 slots) master
0 additional replica(s)
The output shows that after master 7000 went down, its only replica, 8002, was elected as the new master. The data that 7000 held is not lost; it is now served from 8002, so when a client asks for those keys again it reads them from 8002.
Node 7000 went down because of some failure; once the problem is fixed and 7000 rejoins the cluster, what role will it play?
[root@redis-01 ~]# redis-server /usr/local/redis-cluster/7000/redis.conf
[root@redis-01 ~]# ps -ef | grep redis
root 5603 1 0 01:02 ? 00:00:08 redis-server 172.16.1.100:7001 [cluster]
root 5608 1 0 01:02 ? 00:00:08 redis-server 172.16.1.100:7002 [cluster]
root 19771 1 0 03:50 ? 00:00:00 redis-server 172.16.1.100:7000 [cluster]
root 19789 2242 0 03:51 pts/0 00:00:00 grep --color=auto redis
#Check the cluster state:
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7001
>>> Performing Cluster Check (using node 172.16.1.100:7001)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
Node 7000 has rejoined the cluster, but it is now a replica of 8002 rather than a master.
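If you would rather have 7000 take the master role back, one option (not part of the original test) is a manual failover triggered from the replica side; running CLUSTER FAILOVER on a replica promotes it and demotes its current master to replica:
[root@redis-01 ~]# redis-cli -h 172.16.1.100 -p 7000 cluster failover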
A new node can join the cluster in one of two ways: 1) as a master, or 2) as a replica of an existing master. Let's try both.
1. Joining as a master
1) Create a new 7003 instance to join as a new master:
[root@redis-01 ~]# mkdir /usr/local/redis-cluster/7003
[root@redis-01 ~]# cd /usr/local/redis-cluster/
[root@redis-01 redis-cluster]# cp 7000/redis.conf 7003/
[root@redis-01 redis-cluster]# sed -i "s/7000/7003/g" 7003/redis.conf
#Start the 7003 Redis instance:
[root@redis-01 ~]# redis-server /usr/local/redis-cluster/7003/redis.conf
[root@redis-01 ~]# ps -ef | grep redis
root 5603 1 0 01:02 ? 00:00:09 redis-server 172.16.1.100:7001 [cluster]
root 5608 1 0 01:02 ? 00:00:09 redis-server 172.16.1.100:7002 [cluster]
root 19771 1 0 03:50 ? 00:00:00 redis-server 172.16.1.100:7000 [cluster]
root 19842 1 0 04:06 ? 00:00:00 redis-server 172.16.1.100:7003 [cluster]
root 19847 2242 0 04:06 pts/0 00:00:00 grep --color=auto redis
2) Add node 7003 to the cluster
[root@redis-01 ~]# redis-trib.rb add-node 172.16.1.100:7003 172.16.1.100:7000 //add-node adds a node: the first address is the new node, the second is any existing node in the cluster, used only to identify which cluster to join (in principle any node will do)
>>> Adding node 172.16.1.100:7003 to cluster 172.16.1.100:7000
>>> Performing Cluster Check (using node 172.16.1.100:7000)
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.16.1.100:7003 to make it join the cluster.
[OK] New node added correctly.
The new node connected successfully and has joined the cluster. Let's check again:
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7003
>>> Performing Cluster Check (using node 172.16.1.100:7003)
M: edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.100:7003
slots: (0 slots) master
0 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:5461-10922 (5462 slots) master
1 additional replica(s)
The cluster now has 7 nodes and 7003 is a master, but notice that its slot count is 0: although it is a master, no slots have been assigned to it, so it is not yet responsible for any data. We need to reshard the cluster manually and migrate some slots to it.
3) Migrate slots to the new node
#The reshard command migrates slots; the 172.16.1.100:7000 argument only identifies the cluster, so any node's address will do:
[root@redis-01 ~]# redis-trib.rb reshard 172.16.1.100:7000
How many slots do you want to move (from 1 to 16384)?
#It asks how many slots to move to 7003. For an even balance: 16384 / 4 = 4096, so we move 4096 slots to 7003
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID?
#Next it asks for the ID of the receiving node; 7003's ID can be read from the output above
What is the receiving node ID? edd51c8389ba069d49fe54c24c535716ce06e62b
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:
#redis-trib then asks for the source nodes, i.e. which nodes the 4096 hash slots should be taken from before being moved to 7003. If we do not want to take a specific number of slots from particular nodes, we can type all, in which case every master becomes a source node and redis-trib takes a portion of slots from each until it has 4096, then moves them to 7003. So we type all:
Source node #1:all
#The migration plan is then displayed and you are asked to confirm it:
Do you want to proceed with the proposed reshard plan (yes/no)? yes
After you type yes, redis-trib performs the resharding, moving the chosen hash slots to node 7003 one at a time. When the migration finishes, check again:
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7000
>>> Performing Cluster Check (using node 172.16.1.100:7000)
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.100:7003
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
0 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:1365-5460 (4096 slots) master
1 additional replica(s)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:6827-10922 (4096 slots) master
1 additional replica(s)
Look closely at 7003: "0-1364,5461-6826,10923-12287 (4096 slots)".
Those slots were migrated from the other masters; the ranges are interleaved pieces rather than one contiguous block. Now let's verify that 7003 handles requests:
[root@redis-01 ~]# redis-cli -h 172.16.1.100 -p 7003 -c
172.16.1.100:7003> get my_name
-> Redirected to slot [12803] located at 172.16.1.100:7001
"linux"
//7003 is participating in the cluster and handling requests correctly (my_name lives in slot 12803, which stayed on 7001, hence the redirect)
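As an aside, redis-trib.rb can also reshard non-interactively; to the best of my knowledge it accepts --from, --to, --slots and --yes options (treat the exact flags as an assumption to verify against your redis-trib.rb version). A hypothetical equivalent of the interactive session above:
redis-trib.rb reshard --from all --to edd51c8389ba069d49fe54c24c535716ce06e62b --slots 4096 --yes 172.16.1.100:7000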
2. Joining as a replica
1) Create a new 8003 instance to act as 7003's replica; the setup steps are the same as before, so they are omitted here. After starting the 8003 Redis instance, add it to the cluster as a replica:
//Use add-node --slave --master-id: --master-id selects the master that the new replica should follow, 172.16.1.110:8003 is the new replica to add, and the final argument is any existing node in the cluster
[root@redis-02 ~]# redis-trib.rb add-node --slave --master-id edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.110:8003 172.16.1.110:8000
>>> Adding node 172.16.1.110:8003 to cluster 172.16.1.110:8000
>>> Performing Cluster Check (using node 172.16.1.110:8000)
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:6827-10922 (4096 slots) master
1 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:1365-5460 (4096 slots) master
1 additional replica(s)
M: edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.100:7003
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
0 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.16.1.110:8003 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 172.16.1.100:7003.
[OK] New node added correctly.
The output above confirms that 7003 was chosen as its master and the operation succeeded. Let's check the state of every node in the cluster:
[root@redis-02 ~]# redis-trib.rb check 172.16.1.110:8003
>>> Performing Cluster Check (using node 172.16.1.110:8003)
S: 4ffabff843d32364fc506f3445980f5a04aa4292 172.16.1.110:8003
slots: (0 slots) slave
replicates edd51c8389ba069d49fe54c24c535716ce06e62b
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:1365-5460 (4096 slots) master
1 additional replica(s)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.100:7003
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 4debd0b5743826d203d1af777824eb1b83105d21
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
#Verify that the new replica can serve requests in the cluster:
[root@redis-02 ~]# redis-cli -h 172.16.1.110 -p 8003 -c
172.16.1.110:8003> get my_name
-> Redirected to slot [12803] located at 172.16.1.100:7001
"linux"
//the replica joined successfully and is working normally
Since nodes can be added to a Redis cluster, there will also be a need to remove them; Redis Cluster supports this as well, again through redis-trib.rb.
Syntax:
redis-trib.rb del-node ip:port `<node-id>`
1. Removing a master node
//Unlike adding a node, removal requires the node's node-id. Let's try to remove the 8000 master:
[root@redis-02 ~]# redis-trib.rb del-node 172.16.1.110:8000 4debd0b5743826d203d1af777824eb1b83105d21
>>> Removing node 4debd0b5743826d203d1af777824eb1b83105d21 from cluster 172.16.1.110:8000
[ERR] Node 172.16.1.110:8000 is not empty! Reshard data away and try again.
It fails: node 8000 still holds data, so it cannot be removed until its data is moved elsewhere, which means resharding again. Use the same resharding procedure as when we added the new node:
[root@redis-01 ~]# redis-trib.rb reshard 172.16.1.100:7000
#It asks how many slots to move; 8000 holds 4096 slots, so enter 4096
How many slots do you want to move (from 1 to 16384)? 4096
#It asks which node ID should receive them; we move them to the 8002 master
What is the receiving node ID? 3cc268dfbb918a99159900643b318ec87ba03ad9
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:
#This is the key part: it asks which nodes to take the data from. Since we want to remove 8000, we enter 8000's node-id as the source:
Source node #1:4debd0b5743826d203d1af777824eb1b83105d21
Source node #2:done //type done to finish entering source nodes
Do you want to proceed with the proposed reshard plan (yes/no)? yes //type yes
OK, the data that the 8000 master held has now been migrated away.
Check the node states:
[root@redis-01 ~]# redis-trib.rb check 172.16.1.100:7000
>>> Performing Cluster Check (using node 172.16.1.100:7000)
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
S: 4ffabff843d32364fc506f3445980f5a04aa4292 172.16.1.110:8003
slots: (0 slots) slave
replicates edd51c8389ba069d49fe54c24c535716ce06e62b
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.100:7003
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:1365-5460,6827-10922 (8192 slots) master
2 additional replica(s)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
S: d161ab43746405c2b517e3ffc98321956431191c 172.16.1.100:7002
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
M: 4debd0b5743826d203d1af777824eb1b83105d21 172.16.1.110:8000
slots: (0 slots) master
0 additional replica(s)
//Node 8000 now owns 0 slots; the slots it used to own have been migrated to node 8002.
#Now remove the 8000 master again:
[root@redis-02 ~]# redis-trib.rb del-node 172.16.1.110:8000 4debd0b5743826d203d1af777824eb1b83105d21
>>> Removing node 4debd0b5743826d203d1af777824eb1b83105d21 from cluster 172.16.1.110:8000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis-02 ~]# redis-trib.rb check 172.16.1.110:8000
[ERR] Sorry, can't connect to node 172.16.1.110:8000
OK, the master node was removed successfully.
2. Removing a replica node
Removing a replica is much simpler because no data needs to be migrated. Let's remove the 7002 replica:
[root@redis-02 ~]# redis-trib.rb del-node 172.16.1.100:7002 d161ab43746405c2b517e3ffc98321956431191c
>>> Removing node d161ab43746405c2b517e3ffc98321956431191c from cluster 172.16.1.100:7002
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
//replica 7002 has been removed successfully
[root@redis-02 ~]# redis-trib.rb check 172.16.1.100:7000 //check the current cluster state
>>> Performing Cluster Check (using node 172.16.1.100:7000)
S: 8613a457f8009aaf784df0ac6d7039034b16b6a6 172.16.1.100:7000
slots: (0 slots) slave
replicates 3cc268dfbb918a99159900643b318ec87ba03ad9
S: 4ffabff843d32364fc506f3445980f5a04aa4292 172.16.1.110:8003
slots: (0 slots) slave
replicates edd51c8389ba069d49fe54c24c535716ce06e62b
S: 948421116dd1859002c78a2df0b9845bdc7db631 172.16.1.110:8001
slots: (0 slots) slave
replicates 5721b77733b1809449c6fc5806f38f1cacb1de8c
M: edd51c8389ba069d49fe54c24c535716ce06e62b 172.16.1.100:7003
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
M: 3cc268dfbb918a99159900643b318ec87ba03ad9 172.16.1.110:8002
slots:1365-5460,6827-10922 (8192 slots) master
1 additional replica(s)
M: 5721b77733b1809449c6fc5806f38f1cacb1de8c 172.16.1.100:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
That concludes this detailed walkthrough of Redis high-availability clusters; hopefully it has been useful.