A Redis cluster requires at least 3 master nodes; with one slave per master, that makes 6 nodes in total. I am deploying all the masters and slaves on a single machine. Here is the mapping of my nodes:
127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
1: Download Redis
Download version 3.0.0 from the official site; the earlier 2.x releases do not support cluster mode.
Download URL: https://github.com/antirez/redis/archive/3.0.0-rc2.tar.gz
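If the server has direct internet access, you could also fetch the tarball on the box instead of uploading it; a minimal sketch:

# Assumption: the server can reach github.com directly.
wget -O redis-3.0.0-rc2.tar.gz https://github.com/antirez/redis/archive/3.0.0-rc2.tar.gz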
2: Upload to the server, extract, and compile
tar -zxvf redis-3.0.0-rc2.tar.gz
mv redis-3.0.0-rc2 /usr/local/redis3.0
cd /usr/local/redis3.0
make
make install
2.1 A note on make: if make fails with errors like the following:
cc: error: ../deps/hiredis/libhiredis.a: No such file or directory
cc: error: ../deps/lua/src/liblua.a: No such file or directory
cc: error: ../deps/jemalloc/lib/libjemalloc.a: No such file or directory
make: *** [redis-server] Error 1
then cd into the deps directory under the Redis source tree, run the following command, and retry make:
make lua hiredis linenoise
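If the libjemalloc.a error still appears after that, a workaround I have used (not part of the original steps) is to build against libc's allocator instead of jemalloc:

# Fallback assumption: skipping jemalloc and linking against the libc allocator is acceptable.
cd /usr/local/redis3.0
make MALLOC=libc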
3: Create the directories the cluster needs
mkdir -p /usr/local/cluster
cd /usr/local/cluster
mkdir 7000 7001 7002 7003 7004 7005
4: Edit the redis.conf configuration file
cp /usr/local/redis3.0/redis.conf /usr/local/cluster
cd /usr/local/cluster
vi redis.conf
Change the following options in the file:
port 7000                       # each instance listens on its own port
daemonize yes                   # run in the background
cluster-enabled yes             # start in cluster mode
cluster-config-file nodes.conf  # per-instance state file, written by Redis itself
cluster-node-timeout 5000       # ms before a node is considered failing
appendonly yes                  # enable AOF persistence
After changing these items in redis.conf, copy the file into each of the 7000/7001/7002/7003/7004/7005 directories:
cp /usr/local/cluster/redis.conf /usr/local/cluster/7000
cp /usr/local/cluster/redis.conf /usr/local/cluster/7001
cp /usr/local/cluster/redis.conf /usr/local/cluster/7002
cp /usr/local/cluster/redis.conf /usr/local/cluster/7003
cp /usr/local/cluster/redis.conf /usr/local/cluster/7004
cp /usr/local/cluster/redis.conf /usr/local/cluster/7005
Note: after copying, edit the redis.conf in each of the 7001/7002/7003/7004/7005 directories and change the port parameter to match the directory name; a scripted way to do this is sketched below.
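Rather than editing five files by hand, a small loop can rewrite the port in each copy. This is my own sketch, assuming every copy still reads "port 7000":

# Rewrite "port 7000" so it matches each directory's name.
for p in 7001 7002 7003 7004 7005; do
    sed -i "s/^port 7000/port $p/" /usr/local/cluster/$p/redis.conf
done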
5: Start the 6 Redis instances one by one
cd /usr/local/cluster/7000
redis-server redis.conf
cd /usr/local/cluster/7001
redis-server redis.conf
cd /usr/local/cluster/7002
redis-server redis.conf
cd /usr/local/cluster/7003
redis-server redis.conf
cd /usr/local/cluster/7004
redis-server redis.conf
cd /usr/local/cluster/7005
redis-server redis.conf
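The same start-up sequence as a loop, if you prefer (a sketch using the same paths as above):

# Start each instance from its own directory so each one writes its nodes.conf there.
for p in 7000 7001 7002 7003 7004 7005; do
    (cd /usr/local/cluster/$p && redis-server redis.conf)
done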
After starting them, check the Redis processes with ps -ef | grep redis.
Output like the following means startup succeeded:
root@ubuntu:/usr/local/redis3.0# ps -ef|grep redis
root      1199  1126  0  2016 ?        00:00:00 runsv redis
root      1227  1199  0  2016 ?        00:00:07 svlogd -tt /var/log/gitlab/redis
redis     1986     1  0  2016 ?        00:44:44 /usr/bin/redis-server 127.0.0.1:6379
root      8619  5945  0 14:30 pts/0    00:00:00 grep --color=auto redis
root     11151     1  0 11:47 ?        00:00:04 redis-server *:7000 [cluster]
root     11352     1  0 11:47 ?        00:00:04 redis-server *:7001 [cluster]
root     11419     1  0 11:48 ?        00:00:04 redis-server *:7002 [cluster]
root     11494     1  0 11:48 ?        00:00:04 redis-server *:7003 [cluster]
root     11546     1  0 11:48 ?        00:00:04 redis-server *:7004 [cluster]
root     11626     1  0 11:48 ?        00:00:04 redis-server *:7005 [cluster]
gitlab-+ 19978  1199  0  2016 ?        04:57:33 /opt/gitlab/embedded/bin/redis-server 127.0.0.1:0
6: Run Redis's cluster creation command to create the cluster
cd /usr/local/redis3.0/src
./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
To explain: --replicas 1 means one slave node is automatically assigned to each master node. With the 6 nodes above, the tool follows its own rules to produce 3 masters and 3 slaves.
As warned earlier, the firewall must allow the listening ports, or cluster creation will fail. (Note that each node also uses a cluster bus port at its client port + 10000, so open those as well.)
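With iptables, for instance, that might look like the sketch below; 17000-17005 are the corresponding cluster bus ports, and a default-drop INPUT chain is assumed:

# Open the client ports and the cluster bus ports (client port + 10000).
iptables -A INPUT -p tcp --dport 7000:7005 -j ACCEPT
iptables -A INPUT -p tcp --dport 17000:17005 -j ACCEPT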
6.1 The command above will error out at first, because redis-trib.rb is a Ruby script and needs a Ruby environment.
Error message: /usr/bin/env: ruby: No such file or directory
So Ruby has to be installed; here I recommend installing it via yum:
yum install ruby
6.2 Running the step 6 creation command again fails once more, complaining that the rubygems component is missing; install it with yum.
Error message:
./redis-trib.rb:24:in `require': no such file to load -- rubygems (LoadError)
        from ./redis-trib.rb:24

yum install rubygems
6.3 Executing the step 6 command yet again still errors out, saying it cannot load redis; this is because the redis/Ruby binding is missing, so install it with gem. Error message:
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- redis (LoadError)
        from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from ./redis-trib.rb:25

gem install redis --version 3.0.0
Note: if gem install redis --version 3.0.0 fails, you need to switch the gem source:
gem sources --remove https://rubygems.org/
gem sources -a https://ruby.taobao.org/
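To confirm the source switch took effect (my own check, not from the original steps):

gem sources -l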
6.4 Execute the step 6 command once more; this time it runs normally.
root@ubuntu:/usr/local/redis3.0/src# ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7005: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
M: 13539493225694996c838a8b38a14a20f0b75504 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: d66cdae56bd156c2f0d0a01a9b89ab469a0d49f4 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 8ae2b44a67b6c0225ed21bbc4de768dd411181bc 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
S: c947705f81768f38eb0037f84b8fb20ac09e4d00 127.0.0.1:7003
   replicates 13539493225694996c838a8b38a14a20f0b75504
S: 57c368df7de333f67ab52ac005fb60588715c6d5 127.0.0.1:7004
   replicates d66cdae56bd156c2f0d0a01a9b89ab469a0d49f4
S: b50c77880cd63b57fc023cbbc076962dbf58a850 127.0.0.1:7005
   replicates 8ae2b44a67b6c0225ed21bbc4de768dd411181bc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: 13539493225694996c838a8b38a14a20f0b75504 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: d66cdae56bd156c2f0d0a01a9b89ab469a0d49f4 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 8ae2b44a67b6c0225ed21bbc4de768dd411181bc 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
M: c947705f81768f38eb0037f84b8fb20ac09e4d00 127.0.0.1:7003
   slots: (0 slots) master
   replicates 13539493225694996c838a8b38a14a20f0b75504
M: 57c368df7de333f67ab52ac005fb60588715c6d5 127.0.0.1:7004
   slots: (0 slots) master
   replicates d66cdae56bd156c2f0d0a01a9b89ab469a0d49f4
M: b50c77880cd63b57fc023cbbc076962dbf58a850 127.0.0.1:7005
   slots: (0 slots) master
   replicates 8ae2b44a67b6c0225ed21bbc4de768dd411181bc
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@ubuntu:/usr/local/redis3.0/src# redis-cli -c -p 7000
127.0.0.1:7000> exit
This finally made sense to me: when I first set everything up on a single server there was no waiting at this point, but with two servers the command sits at "Waiting for the cluster to join...." until you go over to Server2 and perform the following steps.
On Server2, use redis-cli -c -p 700* to open a client on each of the local Redis nodes in turn, and in each one run cluster meet 192.168.0.240 7000……
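The same meet step scripted, purely as a sketch; it assumes Server1 is 192.168.0.240 and that Server2 hosts the instances on ports 7003-7005:

# Introduce each local node to the cluster via Server1's 7000 node.
for p in 7003 7004 7005; do
    redis-cli -p $p cluster meet 192.168.0.240 7000
done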
Back on Server1, creation has completed.
Take a look:
/usr/local/redis3.0/src/redis-trib.rb check 127.0.0.1:7000
At this point the cluster is basically set up.
root@ubuntu:~# /usr/local/redis3.0/src/redis-trib.rb check 127.0.0.1:7000
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7005: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7004: OK
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: 13539493225694996c838a8b38a14a20f0b75504 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: d66cdae56bd156c2f0d0a01a9b89ab469a0d49f4 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: c947705f81768f38eb0037f84b8fb20ac09e4d00 127.0.0.1:7003
   slots: (0 slots) slave
   replicates 13539493225694996c838a8b38a14a20f0b75504
S: b50c77880cd63b57fc023cbbc076962dbf58a850 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 8ae2b44a67b6c0225ed21bbc4de768dd411181bc
M: 8ae2b44a67b6c0225ed21bbc4de768dd411181bc 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 57c368df7de333f67ab52ac005fb60588715c6d5 127.0.0.1:7004
   slots: (0 slots) slave
   replicates d66cdae56bd156c2f0d0a01a9b89ab469a0d49f4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
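Another quick sanity check worth knowing (my addition, not from the original): cluster info on any node should report cluster_state:ok once all 16384 slots are covered:

redis-cli -p 7000 cluster info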
Testing
1) set and get data
redis-cli -c -p 7000
At the client prompt, simply run set hello howareyou.
Based on the key's hash, the client is redirected straight to the node that owns the matching slot.
It bears repeating: a Redis cluster consists of 16384 hash slots (numbered 0-16383), which are sharded across multiple nodes; reads and writes all happen on the master nodes.
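If you want to see the mapping for yourself, CLUSTER KEYSLOT prints the slot a given key hashes to (HASH_SLOT = CRC16(key) mod 16384):

# Prints the slot number for the key "hello"; can be run against any node.
redis-cli -p 7000 cluster keyslot hello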
2) Failure test
I went ahead and took Server2 down first (Server2 held 1 master and 2 slaves), then hopped back to Server1 to see what had happened: all 3 of Server1's nodes were now masters, and the Server2 nodes were gone.
Tested again: still no problems, the cluster keeps on working.
Reason: Redis Cluster tolerates faults through an election mechanism, which keeps things running when one server dies: once more than half of all the masters in the cluster detect that another master has failed, that master's slave gets promoted to master.
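You can watch the promotion happen (my own check, not in the original write-up): cluster nodes lists every node's current role, so after the failover a former slave should show up as master:

redis-cli -p 7000 cluster nodes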
Question: what if it's Server1 that dies? I tried it: cluster is down!! Nothing to be done; once more than half of the masters are gone, the whole cluster stops working. If you run three servers with two masters each, just remember never to put a master and its own slave on the same server. Don't ask why, think it through yourself: cross the master/slave pairs over the servers and losing any single server is harmless. And if two servers crash at the same time, well, good luck with that......