...
https://docs.ceph.com/en/latest/releases/index.html
Name | Initial release | Latest | End of life (estimated) |
---|---|---|---|
Reef | 2023-08-07 | | 2025-08-01 |
Quincy | 2022-04-19 | | 2024-06-01 |
Pacific | 2021-03-31 | | 2023-10-01 |
Install
```
dnf -y install centos-release-ceph-reef
dnf -y install cephadm
```
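The packages above only provide the cephadm CLI. A minimal bootstrap sketch follows; the monitor IP (and running it on the first host) is an assumed placeholder, not a value from these notes:

```
# bootstrap the first monitor/manager on this host (IP is a placeholder)
cephadm bootstrap --mon-ip 192.168.0.11
# verify the new cluster is reachable
cephadm shell -- ceph -s
```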
...
```
# systemctl status ceph<tab>
ceph.target
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@alertmanager.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@ceph-exporter.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@crash.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@grafana.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mgr.c1.hijqcf.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mon.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@node-exporter.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@prometheus.c1.service

# systemctl status ceph.target
● ceph.target - All Ceph clusters and services
     Loaded: loaded (/etc/systemd/system/ceph.target; enabled; preset: disabled)
     Active: active since Thu 2023-08-17 06:19:24 KST; 44min ago

# podman ps -a
CONTAINER ID
10826ebf2f23  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-mon-c1
52728e97ed80  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-mgr-c1-hijqcf
362f58d17f80  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-ceph-exporter-c1
0997132bdd95  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-crash-c1
63cbf2dabf52  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-node-exporter-c1
7417b1f063c4  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-prometheus-c1
d28c76dcc9cd  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-alertmanager-c1
74b3feaae01f  ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-grafana-c1

# find /etc/systemd/system/ -name 'ceph*'
/etc/systemd/system/multi-user.target.wants/ceph.target
/etc/systemd/system/multi-user.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph.target
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph.target.wants
/etc/systemd/system/ceph.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mon.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mgr.c1.hijqcf.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@ceph-exporter.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@crash.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@node-exporter.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@alertmanager.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@grafana.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@prometheus.c1.service

# systemctl stop ceph.target

# podman ps -a
CONTAINER ID
<nothing>

# rm -rf /etc/ceph /var/lib/ceph /etc/systemd/system/ceph*
# systemctl daemon-reload
```
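Instead of removing unit files and data directories by hand, cephadm has a purge command; a sketch, using the fsid that appears in the unit names above:

```
# tear down all daemons and data for this cluster fsid on the host
cephadm rm-cluster --fsid 7592ec0a-3c7a-11ee-b450-a0369f70d4f8 --force
```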
Shell
```
ca01:~ root# cephadm shell
[ceph: root@ca01 /]#
```
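For one-off commands the interactive shell is not required; cephadm can run a single command inside the same container, for example:

```
# run one command in the ceph container and exit
cephadm shell -- ceph health detail
```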
Shell Alias
Add the following to .zshrc or .bashrc
...
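The exact alias from the original note is not captured above; a hypothetical example of the kind of alias typically used so that `ceph` works outside the container:

```
# hypothetical convenience aliases (not the original note's content)
alias ceph='cephadm shell -- ceph'
alias rados='cephadm shell -- rados'
```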
```
# ceph orch device ls
HOST  PATH      TYPE  DEVICE ID                                          SIZE   AVAILABLE  REFRESHED  REJECT REASONS
ca01  /dev/sda  hdd   DELL_PERC_H710P_001733af8c4056682c00a0fe0420844a   10.9T  Yes        5m ago
ca01  /dev/sdc  hdd   DELL_PERC_H710P_00d5b1608e5d56682c00a0fe0420844a   10.9T  Yes        5m ago
ca01  /dev/sdd  hdd   DELL_PERC_H710P_0015b3388a1756682c00a0fe0420844a   10.9T  Yes        5m ago
ca02  /dev/sda  hdd   DELL_PERC_H710P_001f82551d4b56682c00d4acebe03f08   10.9T  Yes        5m ago
ca02  /dev/sdb  hdd   DELL_PERC_H710P_0039827b1e5e56682c00d4acebe03f08   10.9T  Yes        5m ago
ca02  /dev/sdd  hdd   DELL_PERC_H710P_002611941b2e56682c00d4acebe03f08   10.9T  Yes        5m ago
ca03  /dev/sdb  hdd   DELL_PERC_H710P_0032429e08ee58682c00731ceae03f08   10.9T  Yes        5m ago
ca03  /dev/sdc  hdd   DELL_PERC_H710P_007d58f0090459682c00731ceae03f08   10.9T  Yes        5m ago
ca03  /dev/sdd  hdd   DELL_PERC_H710P_00b5f71007d458682c00731ceae03f08   10.9T  Yes        5m ago
ca04  /dev/sdb  hdd   DELL_PERC_H710_007a36381ece18682c00bf49cf20a782    10.9T  Yes        3m ago
ca04  /dev/sdc  hdd   DELL_PERC_H710_00e4926e1b9f18682c00bf49cf20a782    10.9T  Yes        3m ago
ca04  /dev/sdd  hdd   DELL_PERC_H710_000a560b1a8818682c00bf49cf20a782    10.9T  Yes        3m ago
ca05  /dev/sdb  hdd   DELL_PERC_H710_00efa09a233a1a682c00594bcf20a782    10.9T  Yes        3m ago
ca05  /dev/sdc  hdd   DELL_PERC_H710_0008b06d26691a682c00594bcf20a782    10.9T  Yes        3m ago
ca05  /dev/sdd  hdd   DELL_PERC_H710_00be6f4f25571a682c00594bcf20a782    10.9T  Yes        3m ago
ca06  /dev/sda  hdd   DELL_PERC_H710_006295c228eb1b682c005e47cf20a782    10.9T  Yes        3m ago
ca06  /dev/sdb  hdd   DELL_PERC_H710_0098248e26c61b682c005e47cf20a782    10.9T  Yes        3m ago
ca06  /dev/sdc  hdd   DELL_PERC_H710_003dc7aa27d91b682c005e47cf20a782    10.9T  Yes        3m ago

# ceph orch apply osd --all-available-devices --dry-run
# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
```
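If you do not want every free device consumed by `--all-available-devices`, an OSD service spec can be applied instead. A sketch; the file name, `service_id`, and `host_pattern` are illustrative assumptions, not from the original notes:

```
# hypothetical drive-group spec limiting OSD creation to rotational disks on ca0* hosts
cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: osd_hdd
placement:
  host_pattern: 'ca0*'
spec:
  data_devices:
    rotational: 1
EOF
# preview first, then apply without --dry-run
ceph orch apply -i osd-spec.yaml --dry-run
```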
Mon
```
ceph telemetry on --license sharing-1-0
ceph telemetry enable channel perf
```
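To review what telemetry would report, the read-only commands below can be used (assuming the telemetry module is enabled; `preview` is available in recent releases):

```
# preview the report before opting in (nothing is sent)
ceph telemetry preview
# after enabling, show what is currently being reported
ceph telemetry show
```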
ERR
Device has a filesystem
```
# ceph orch daemon add osd ca05:/dev/sdc
...
/usr/bin/podman: stderr RuntimeError: Device /dev/sdc has a filesystem.
...
```
```
gdisk -Z /dev/sdc
wipefs -a /dev/sdc
```
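After wiping, re-run the `ceph orch daemon add osd` command. Alternatively, the orchestrator can clean the device itself (assuming the host is managed by cephadm):

```
# destroy leftover partitions, LVs, and signatures on the device via the orchestrator
ceph orch device zap ca05 /dev/sdc --force
```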
Ref
https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/
https://somuch.medium.com/sds-cephadm로-ceph-구성하기-778a90ba6cc7
...