Releases

https://docs.ceph.com/en/latest/releases/index.html

Name     Initial release  Latest   End of life (estimated)
Reef     2023-08-07       18.2.0   2025-08-01
Quincy   2022-04-19       17.2.6   2024-06-01
Pacific  2021-03-31       16.2.13  2023-10-01

Install

Code block
dnf -y install centos-release-ceph-reef
dnf -y install cephadm
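
cephadm can check the host prerequisites before bootstrapping; a small pre-flight sketch (assuming podman or docker, chrony/ntpd, and lvm2 are already in place):

Code block
# Verify this host meets cephadm's requirements and report the installed cephadm version.
cephadm check-host
cephadm version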
Code block
# cephadm bootstrap --mon-ip 192.168.16.10
cephadm bootstrap --mon-ip 192.168.10.10
Verifying podman|docker is present...
...
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
…
firewalld ready
Enabling firewalld port 9283/tcp in current zone...
Enabling firewalld port 8765/tcp in current zone...
Enabling firewalld port 8443/tcp in current zone...
…
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host c1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
…
Ceph Dashboard is now available at:

             URL: https://c1:8443/
            User: admin
        Password: ........

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/7592ec0a-3c7a-11ee-b450-a0369f70d4f8/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid 7592ec0a-3c7a-11ee-b450-a0369f70d4f8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.

...
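
Before going further, it is worth confirming that the bootstrap daemons came up; a minimal check from outside the container, using the cephadm shell wrapper:

Code block
# Cluster health summary and the daemons cephadm has deployed so far.
cephadm shell -- ceph -s
cephadm shell -- ceph orch ps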

Code block
# systemctl status ceph<tab>
ceph.target
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@alertmanager.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@ceph-exporter.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@crash.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@grafana.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mgr.c1.hijqcf.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mon.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@node-exporter.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@prometheus.c1.service

# systemctl status ceph.target
● ceph.target - All Ceph clusters and services
     Loaded: loaded (/etc/systemd/system/ceph.target; enabled; preset: disabled)
     Active: active since Thu 2023-08-17 06:19:24 KST; 44min ago

# podman ps -a
CONTAINER ID  
10826ebf2f23 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-mon-c1
52728e97ed80 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-mgr-c1-hijqcf
362f58d17f80 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-ceph-exporter-c1
0997132bdd95 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-crash-c1
63cbf2dabf52 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-node-exporter-c1
7417b1f063c4 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-prometheus-c1
d28c76dcc9cd ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-alertmanager-c1
74b3feaae01f ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-grafana-c1

# find /etc/systemd/system/ -name 'ceph*'
/etc/systemd/system/multi-user.target.wants/ceph.target
/etc/systemd/system/multi-user.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph.target
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph.target.wants
/etc/systemd/system/ceph.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mon.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mgr.c1.hijqcf.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@ceph-exporter.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@crash.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@node-exporter.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@alertmanager.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@grafana.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@prometheus.c1.service

# systemctl stop ceph.target

# podman ps -a
CONTAINER ID  
<nothing>

# rm -rf /etc/ceph /var/lib/ceph /etc/systemd/system/ceph*

# systemctl daemon-reload
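
The manual teardown above (stopping ceph.target and deleting /etc/ceph, /var/lib/ceph, and the unit files) can also be done with cephadm's own removal command; a sketch, assuming the fsid from the bootstrap output:

Code block
# Destructive: removes all cluster daemons and data for this fsid on this host.
cephadm rm-cluster --fsid 7592ec0a-3c7a-11ee-b450-a0369f70d4f8 --force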


Shell

...

Code block
ca01:~ root# cephadm shell
[ceph: root@ca01 /]#


Shell Alias

Add the following to .zshrc or .bashrc:

Code block
alias ceph='cephadm shell -- ceph'
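
With the alias in place, plain ceph commands on the host run inside a short-lived cephadm container, for example:

Code block
# Both commands below are executed via 'cephadm shell -- ceph'.
ceph -s
ceph orch ps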

...

Code block
# ceph orch host ls --detail
HOST  ADDR           LABELS  STATUS  VENDOR/MODEL                  CPU       RAM     HDD       SSD  NIC
ca01  192.168.16.10  _admin          Dell Inc. (PowerEdge R720xd)  12C/144T  63 GiB  5/38.0TB  -    6
ca02  192.168.16.11                  Dell Inc. (PowerEdge R720xd)  12C/144T  63 GiB  5/38.0TB  -    6
ca03  192.168.16.12                  Dell Inc. (PowerEdge R720xd)  12C/144T  63 GiB  5/38.0TB  -    6
ca04  192.168.16.13                  Dell Inc. (PowerEdge R720)    16C/256T  63 GiB  5/38.0TB  -    6
ca05  192.168.16.14                  Dell Inc. (PowerEdge R720)    16C/256T  63 GiB  5/38.0TB  -    6
ca06  192.168.16.15                  Dell Inc. (PowerEdge R720)    16C/256T  63 GiB  5/38.0TB  -    6
6 hosts in cluster
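
If hosts still need to be joined to the cluster, the usual pattern is to copy the cluster SSH key and then register the host; a sketch, with the host name and address taken from the inventory above (the _admin label is optional):

Code block
# Copy the cluster public key, add the host, and optionally label it so it
# receives /etc/ceph/ceph.conf and the admin keyring.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ca02
ceph orch host add ca02 192.168.16.11
ceph orch host label add ca02 _admin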


OSD

Code block
# ceph orch device ls
HOST  PATH      TYPE  DEVICE ID                                          SIZE  AVAILABLE  REFRESHED  REJECT REASONS
ca01  /dev/sda  hdd   DELL_PERC_H710P_001733af8c4056682c00a0fe0420844a  10.9T  Yes        5m ago
ca01  /dev/sdc  hdd   DELL_PERC_H710P_00d5b1608e5d56682c00a0fe0420844a  10.9T  Yes        5m ago
ca01  /dev/sdd  hdd   DELL_PERC_H710P_0015b3388a1756682c00a0fe0420844a  10.9T  Yes        5m ago
ca02  /dev/sda  hdd   DELL_PERC_H710P_001f82551d4b56682c00d4acebe03f08  10.9T  Yes        5m ago
ca02  /dev/sdb  hdd   DELL_PERC_H710P_0039827b1e5e56682c00d4acebe03f08  10.9T  Yes        5m ago
ca02  /dev/sdd  hdd   DELL_PERC_H710P_002611941b2e56682c00d4acebe03f08  10.9T  Yes        5m ago
ca03  /dev/sdb  hdd   DELL_PERC_H710P_0032429e08ee58682c00731ceae03f08  10.9T  Yes        5m ago
ca03  /dev/sdc  hdd   DELL_PERC_H710P_007d58f0090459682c00731ceae03f08  10.9T  Yes        5m ago
ca03  /dev/sdd  hdd   DELL_PERC_H710P_00b5f71007d458682c00731ceae03f08  10.9T  Yes        5m ago
ca04  /dev/sdb  hdd   DELL_PERC_H710_007a36381ece18682c00bf49cf20a782   10.9T  Yes        3m ago
ca04  /dev/sdc  hdd   DELL_PERC_H710_00e4926e1b9f18682c00bf49cf20a782   10.9T  Yes        3m ago
ca04  /dev/sdd  hdd   DELL_PERC_H710_000a560b1a8818682c00bf49cf20a782   10.9T  Yes        3m ago
ca05  /dev/sdb  hdd   DELL_PERC_H710_00efa09a233a1a682c00594bcf20a782   10.9T  Yes        3m ago
ca05  /dev/sdc  hdd   DELL_PERC_H710_0008b06d26691a682c00594bcf20a782   10.9T  Yes        3m ago
ca05  /dev/sdd  hdd   DELL_PERC_H710_00be6f4f25571a682c00594bcf20a782   10.9T  Yes        3m ago
ca06  /dev/sda  hdd   DELL_PERC_H710_006295c228eb1b682c005e47cf20a782   10.9T  Yes        3m ago
ca06  /dev/sdb  hdd   DELL_PERC_H710_0098248e26c61b682c005e47cf20a782   10.9T  Yes        3m ago
ca06  /dev/sdc  hdd   DELL_PERC_H710_003dc7aa27d91b682c005e47cf20a782   10.9T  Yes        3m ago

...

Code block
# ceph orch apply osd --all-available-devices --dry-run

# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
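
Once the spec is applied, the new OSDs can be watched as they come up; a quick check, assuming the shell alias above:

Code block
# OSD daemons managed by the orchestrator, and the resulting CRUSH tree.
ceph orch ps --daemon-type osd
ceph osd tree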


Telemetry

Code block
ceph telemetry on --license sharing-1-0
ceph telemetry enable channel perf
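
The module state can be confirmed afterwards; a short check based on the telemetry mgr module commands:

Code block
# Show the telemetry configuration and preview the report that would be sent.
ceph telemetry status
ceph telemetry show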


ERR

Device has a filesystem

Code block
# ceph orch daemon add osd ca05:/dev/sdc
...
/usr/bin/podman: stderr RuntimeError: Device /dev/sdc has a filesystem.
...
Code block
gdisk -Z /dev/sdc
wipefs -a /dev/sdc
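
After wiping, the same device can be offered to the orchestrator again (the command repeated from the error case above):

Code block
# Retry once the old filesystem signatures are gone.
ceph orch daemon add osd ca05:/dev/sdc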


Ref

https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/

https://somuch.medium.com/sds-cephadm로-ceph-구성하기-778a90ba6cc7

https://sieun908.gitbook.io/ceph20/cephadm