Tinkmaster 2017-10-27
Deployment workflow (the blog's Markdown parser does not support flowcharts, so an image is used instead)

Log in to https://cr.console.aliyun.com/#/accelerator to obtain your own Aliyun Docker registry mirror (accelerator) address, then install Docker and configure the mirror:
# curl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh -
# mkdir -p /etc/docker
# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://******.mirror.aliyuncs.com"]
}
EOF
# systemctl daemon-reload
# systemctl restart docker
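To confirm that the mirror configuration took effect, you can check `docker info`, which lists any configured registry mirrors (an optional sanity check):
# docker info | grep -A 1 "Registry Mirrors"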
# systemctl enable docker

Pull the ceph/daemon image:
# docker pull ceph/daemon

On node01, run the following command to start the first mon; adjust MON_IP and CEPH_PUBLIC_NETWORK to match your environment:
# docker run -d \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.3.123 \
-e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
ceph/daemon mon

Check the container:
# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
b79a02c40296        ceph/daemon         "/entrypoint.sh mon"   About a minute ago   Up About a minute                        sad_shannon
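Because no --name was given, Docker assigned the random name sad_shannon, and the commands below address the container by its ID prefix (b79a02). If you would rather have a stable handle, the mon could be started with a --name flag, for example:
# docker run -d \
--name ceph-mon \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.3.123 \
-e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
ceph/daemon mon
Then `docker exec ceph-mon ceph -s` works regardless of the generated ID.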
Check the cluster status:
# docker exec b79a02 ceph -s
cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
health HEALTH_ERR
no osds
monmap e2: 1 mons at {node01=192.168.3.123:6789/0}
election epoch 4, quorum 0 node01
mgr no daemons active
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating

Copy the configuration files and bootstrap keyrings to node02 and node03 (the bootstrap keys are what let new daemons authenticate with the cluster):
# ssh root@node2 mkdir -p /var/lib/ceph
# scp -r /etc/ceph root@node2:/etc
# scp -r /var/lib/ceph/bootstrap* root@node2:/var/lib/ceph
# ssh root@node3 mkdir -p /var/lib/ceph
# scp -r /etc/ceph root@node3:/etc
# scp -r /var/lib/ceph/bootstrap* root@node3:/var/lib/ceph

On node02, run the following command to start mon; note the changed MON_IP:
# docker run -d \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.3.124 \
-e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
ceph/daemon mon

On node03, run the following command to start mon; again, note the MON_IP:
# docker run -d \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.3.125 \
-e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
ceph/daemon mon

Check the cluster status on node01:
# docker exec b79a02 ceph -s
cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds
monmap e4: 3 mons at {node01=192.168.3.123:6789/0,node02=192.168.3.124:6789/0,node03=192.168.3.125:6789/0}
election epoch 12, quorum 0,1,2 node01,node02,node03
mgr no daemons active
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating

You can see that all three mons have started correctly.
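To inspect the monitor quorum directly instead of reading it out of `ceph -s`, the cluster can be queried with `quorum_status` (run through any mon container, here the one on node01):
# docker exec b79a02 ceph quorum_status --format json-pretty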
Each virtual machine has two disks prepared as OSDs. Add them to the cluster one at a time, taking care to change OSD_DEVICE to the right disk:
# docker run -d \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
--privileged=true \
-e OSD_FORCE_ZAP=1 \
-e OSD_DEVICE=/dev/sdb \
ceph/daemon osd_ceph_disk
# docker run -d \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
--privileged=true \
-e OSD_FORCE_ZAP=1 \
-e OSD_DEVICE=/dev/sdc \
ceph/daemon osd_ceph_disk

Following the same procedure, add sdb and sdc on node02 and node03 to the cluster as well.
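Once all six OSDs have joined, `ceph osd tree` is a quick way to confirm that they registered and are mapped to the expected hosts:
# docker exec b79a02 ceph osd tree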
Check the cluster status:
# docker exec b79a ceph -s
cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
health HEALTH_OK
monmap e4: 3 mons at {node01=192.168.3.123:6789/0,node02=192.168.3.124:6789/0,node03=192.168.3.125:6789/0}
election epoch 12, quorum 0,1,2 node01,node02,node03
mgr no daemons active
osdmap e63: 6 osds: 6 up, 6 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v157: 64 pgs, 1 pools, 0 bytes data, 0 objects
212 MB used, 598 GB / 599 GB avail
64 active+clean

You can see that the mons and OSDs are correctly configured, and the cluster status is HEALTH_OK.
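If you want a capacity breakdown as well, `ceph df` reports global usage and per-pool statistics:
# docker exec b79a02 ceph df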
Use the following command to start mds on node01; CEPHFS_CREATE=1 makes the container also create a CephFS filesystem:
# docker run -d \
--net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-e CEPHFS_CREATE=1 \
ceph/daemon mds

Use the following command to start rgw on node01 and bind it to port 80:
# docker run -d \
-p 80:80 \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
ceph/daemon rgw

Check the final cluster status:
# docker exec b79a02 ceph -s
cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
health HEALTH_OK
monmap e4: 3 mons at {node01=192.168.3.123:6789/0,node02=192.168.3.124:6789/0,node03=192.168.3.125:6789/0}
election epoch 12, quorum 0,1,2 node01,node02,node03
fsmap e5: 1/1/1 up {0=mds-node01=up:active}
mgr no daemons active
osdmap e136: 6 osds: 6 up, 6 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v1460: 136 pgs, 10 pools, 3829 bytes data, 223 objects
254 MB used, 598 GB / 599 GB avail
136 active+clean
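As a final check, both new services can be exercised from a client. The CephFS mount below is a sketch: it assumes the client kernel has the ceph module available, and <admin-key> stands for whatever the first command prints:
# docker exec b79a02 ceph auth get-key client.admin
# mount -t ceph 192.168.3.123:6789:/ /mnt -o name=admin,secret=<admin-key>
For the rgw, an anonymous request against port 80 should come back with an empty S3 bucket listing in XML:
# curl http://192.168.3.123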