Overview
Lab environment: one client plus three storage nodes (client, node1, node2, node3)
1. First, create a distributed volume named v1
gluster volume create v1 node1:/data/xx node2:/data/xx
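The create command above assumes the trusted pool and brick directories are already in place, and the volume still has to be started before clients can mount it. A minimal sketch of the full sequence, run from node1 (hostnames are assumed to resolve on all machines):

```shell
# Run on node1; /data/xx must already exist on every node that contributes a brick.
gluster peer probe node2                  # join node2 to the trusted pool
gluster volume create v1 node1:/data/xx node2:/data/xx
gluster volume start v1                   # required before clients can mount
gluster volume info v1                    # verify: Type: Distribute, 2 bricks
```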
2. Mount v1 on the client and create 100 files
[root@client /]# mount node1:/v1 /v1
[root@client /]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1 xfs 20G 1.3G 19G 7% /
devtmpfs devtmpfs 488M 0 488M 0% /dev
tmpfs tmpfs 497M 0 497M 0% /dev/shm
tmpfs tmpfs 497M 13M 484M 3% /run
tmpfs tmpfs 497M 0 497M 0% /sys/fs/cgroup
tmpfs tmpfs 100M 0 100M 0% /run/user/0
node1:/v1 nfs 4.0G 64M 4.0G 2% /v1
[root@client /]# touch /v1/test{1..100}.txt
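Note that the df output above shows the volume mounted as type nfs, which is what the plain mount command with no filesystem type fell back to here. To use the native FUSE client instead, a sketch (assumes the glusterfs-fuse package is installed on the client):

```shell
# Native GlusterFS (FUSE) mount instead of NFS.
mount -t glusterfs node1:/v1 /v1

# Or persist it across reboots with an /etc/fstab entry:
# node1:/v1  /v1  glusterfs  defaults,_netdev  0 0
```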
3. Check which of the files landed on node1 and which on node2
[root@node1 xx]# ls /data/xx/
test10.txt test26.txt test35.txt test52.txt test70.txt test7.txt test90.txt
test16.txt test27.txt test37.txt test53.txt test71.txt test80.txt test91.txt
test17.txt test29.txt test38.txt test58.txt test72.txt test81.txt test94.txt
test18.txt test30.txt test3.txt test61.txt test73.txt test83.txt test96.txt
test1.txt test31.txt test43.txt test63.txt test74.txt test85.txt test97.txt
test22.txt test32.txt test46.txt test64.txt test75.txt test88.txt test99.txt
test24.txt test34.txt test4.txt test69.txt test79.txt test89.txt
[root@node2 xx]# ls /data/xx/
test100.txt test21.txt test40.txt test50.txt test60.txt test77.txt test93.txt
test11.txt test23.txt test41.txt test51.txt test62.txt test78.txt test95.txt
test12.txt test25.txt test42.txt test54.txt test65.txt test82.txt test98.txt
test13.txt test28.txt test44.txt test55.txt test66.txt test84.txt test9.txt
test14.txt test2.txt test45.txt test56.txt test67.txt test86.txt
test15.txt test33.txt test47.txt test57.txt test68.txt test87.txt
test19.txt test36.txt test48.txt test59.txt test6.txt test8.txt
test20.txt test39.txt test49.txt test5.txt test76.txt test92.txt
4. If v1 runs out of space, we can expand it by adding node3
gluster volume add-brick v1 node3:/data/xx force
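add-brick only succeeds if node3 is already part of the trusted pool and the brick directory exists there. A sketch of the full step, run from node1:

```shell
gluster peer probe node3                  # node3 must be in the pool first
gluster volume add-brick v1 node3:/data/xx force
gluster volume status v1                  # confirm the new brick is online
```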
5. Create another 100 files and check whether node3 now holds any of them
[root@client /]# touch /v1/abc{1..100}.txt
[root@node1 xx]# ls
abc13.txt abc36.txt abc60.txt abc9.txt test29.txt test46.txt test71.txt test85.txt
abc16.txt abc3.txt abc72.txt test10.txt test30.txt test4.txt test72.txt test88.txt
abc17.txt abc42.txt abc73.txt test16.txt test31.txt test52.txt test73.txt test89.txt
abc20.txt abc45.txt abc77.txt test17.txt test32.txt test53.txt test74.txt test90.txt
abc23.txt abc48.txt abc78.txt test18.txt test34.txt test58.txt test75.txt test91.txt
abc24.txt abc49.txt abc81.txt test1.txt test35.txt test61.txt test79.txt test94.txt
abc2.txt abc4.txt abc82.txt test22.txt test37.txt test63.txt test7.txt test96.txt
abc30.txt abc50.txt abc93.txt test24.txt test38.txt test64.txt test80.txt test97.txt
abc32.txt abc53.txt abc98.txt test26.txt test3.txt test69.txt test81.txt test99.txt
abc35.txt abc56.txt abc99.txt test27.txt test43.txt test70.txt test83.txt
[root@node2 xx]# ls
abc10.txt abc41.txt abc7.txt test11.txt test39.txt test57.txt test84.txt
abc11.txt abc43.txt abc80.txt test12.txt test40.txt test59.txt test86.txt
abc15.txt abc44.txt abc83.txt test13.txt test41.txt test5.txt test87.txt
abc18.txt abc47.txt abc84.txt test14.txt test42.txt test60.txt test8.txt
abc19.txt abc51.txt abc87.txt test15.txt test44.txt test62.txt test92.txt
abc1.txt abc54.txt abc88.txt test19.txt test45.txt test65.txt test93.txt
abc22.txt abc55.txt abc89.txt test20.txt test47.txt test66.txt test95.txt
abc25.txt abc5.txt abc91.txt test21.txt test48.txt test67.txt test98.txt
abc28.txt abc62.txt abc92.txt test23.txt test49.txt test68.txt test9.txt
abc29.txt abc66.txt abc94.txt test25.txt test50.txt test6.txt
abc33.txt abc67.txt abc95.txt test28.txt test51.txt test76.txt
abc34.txt abc69.txt abc96.txt test2.txt test54.txt test77.txt
abc37.txt abc74.txt abc97.txt test33.txt test55.txt test78.txt
abc39.txt abc79.txt test100.txt test36.txt test56.txt test82.txt
[root@node3 xx]# ls
abc100.txt abc26.txt abc40.txt abc58.txt abc64.txt abc70.txt abc85.txt
abc12.txt abc27.txt abc46.txt abc59.txt abc65.txt abc71.txt abc86.txt
abc14.txt abc31.txt abc52.txt abc61.txt abc68.txt abc75.txt abc8.txt
abc21.txt abc38.txt abc57.txt abc63.txt abc6.txt abc76.txt abc90.txt
Note: for a distributed volume, you also need to run gluster volume rebalance v1 start so that existing data is redistributed onto the new brick.
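The rebalance runs asynchronously: start it, then poll its status until every node reports completed. A sketch:

```shell
gluster volume rebalance v1 start     # migrate existing files onto node3
gluster volume rebalance v1 status    # repeat until status shows: completed
```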
6. We can also remove node3 again. First check the state of v1 (it still contains node3):
[root@node1 xx]# gluster volume info v1
Volume Name: v1
Type: Distribute
Volume ID: 53a98603-7651-424c-a144-e3f3e301ac51
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/xx
Brick2: node2:/data/xx
Brick3: node3:/data/xx
Options Reconfigured:
performance.readdir-ahead: on
Remove node3:
gluster volume remove-brick v1 node3:/data/xx start
At this point the data on node3 is gone: remove-brick ... start kicks off a migration that moves node3's files onto the remaining bricks. You can check how much data has been migrated with:
[root@node3 xx]# gluster volume remove-brick v1 node3:/data/xx status
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 28 0Bytes 28 0 0 completed 1.00
7. Finally, once the status shows completed, commit the removal to detach the brick for good:
gluster volume remove-brick v1 node3:/data/xx commit
[root@node1 xx]# gluster volume info v1
Volume Name: v1
Type: Distribute
Volume ID: 53a98603-7651-424c-a144-e3f3e301ac51
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/xx
Brick2: node2:/data/xx
Options Reconfigured:
performance.readdir-ahead: on
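Once the brick has been committed out of the volume, node3 can optionally be detached from the trusted pool, and its old brick directory (including GlusterFS's hidden .glusterfs metadata) cleared before the directory is reused. A sketch:

```shell
# Run on node1 after the remove-brick commit.
gluster peer detach node3        # drop node3 from the trusted pool

# Then, on node3, wipe the old brick before reusing the directory:
#   rm -rf /data/xx
```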