Install

dnf -y install centos-release-ceph-reef
dnf -y install cephadm
# cephadm bootstrap --mon-ip 192.168.16.10
Verifying podman|docker is present...
...
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
…
firewalld ready
Enabling firewalld port 9283/tcp in current zone...
Enabling firewalld port 8765/tcp in current zone...
Enabling firewalld port 8443/tcp in current zone...
…
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host c1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
…
Ceph Dashboard is now available at:

             URL: https://c1:8443/
            User: admin
        Password: ........

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/7592ec0a-3c7a-11ee-b450-a0369f70d4f8/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid 7592ec0a-3c7a-11ee-b450-a0369f70d4f8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.

https://docs.ceph.com/en/latest/cephadm/

https://docs.ceph.com/en/latest/cephadm/install/#install-cephadm
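
To confirm the bootstrap succeeded, the cluster status and the daemons deployed by the orchestrator can be checked from the cephadm shell (standard commands, shown here only as a quick sanity check):

cephadm shell -- ceph -s        # overall cluster health
cephadm shell -- ceph orch ps   # daemons deployed by the orchestrator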


Uninstall

Uninstall the last Ceph node (manual teardown)

# systemctl status ceph<tab>
ceph.target
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@alertmanager.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@ceph-exporter.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@crash.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@grafana.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mgr.c1.hijqcf.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mon.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@node-exporter.c1.service
ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@prometheus.c1.service

# systemctl status ceph.target
● ceph.target - All Ceph clusters and services
     Loaded: loaded (/etc/systemd/system/ceph.target; enabled; preset: disabled)
     Active: active since Thu 2023-08-17 06:19:24 KST; 44min ago

# podman ps -a
CONTAINER ID  
10826ebf2f23 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-mon-c1
52728e97ed80 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-mgr-c1-hijqcf
362f58d17f80 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-ceph-exporter-c1
0997132bdd95 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-crash-c1
63cbf2dabf52 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-node-exporter-c1
7417b1f063c4 ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-prometheus-c1
d28c76dcc9cd ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-alertmanager-c1
74b3feaae01f ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8-grafana-c1

# find /etc/systemd/system/ -name 'ceph*'
/etc/systemd/system/multi-user.target.wants/ceph.target
/etc/systemd/system/multi-user.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph.target
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph.target.wants
/etc/systemd/system/ceph.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mon.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@mgr.c1.hijqcf.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@ceph-exporter.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@crash.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@node-exporter.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@alertmanager.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@grafana.c1.service
/etc/systemd/system/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8.target.wants/ceph-7592ec0a-3c7a-11ee-b450-a0369f70d4f8@prometheus.c1.service

# systemctl stop ceph.target

# podman ps -a
CONTAINER ID  
<nothing>

# rm -rf /etc/ceph /var/lib/ceph /etc/systemd/system/ceph*

# systemctl daemon-reload
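
As an alternative to the manual cleanup above, cephadm itself can tear down the cluster on a host; a sketch using this cluster's fsid (check yours with cephadm ls and adjust):

cephadm rm-cluster --force --fsid 7592ec0a-3c7a-11ee-b450-a0369f70d4f8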


Shell Alias

Add the following to .zshrc or .bashrc:

alias ceph='cephadm shell -- ceph'
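
After sourcing the rc file (or opening a new shell), plain ceph commands then run inside the cephadm container, for example:

source ~/.bashrc   # or ~/.zshrc
ceph -s            # equivalent to: cephadm shell -- ceph -s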


Add Hosts

Register /etc/ceph/ceph.pub in root's authorized_keys on each host to be added.
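
One way to distribute the key from the admin node (host names below are this cluster's; adjust as needed):

for h in ca02 ca03 ca04 ca05 ca06; do
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@$h
done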

ceph orch host add ca02 192.168.16.11
ceph orch host add ca03 192.168.16.12
ceph orch host add ca04 192.168.16.13
ceph orch host add ca05 192.168.16.14
ceph orch host add ca06 192.168.16.15
# ceph orch host ls --detail
HOST  ADDR           LABELS  STATUS  VENDOR/MODEL                  CPU       RAM     HDD       SSD  NIC
ca01  192.168.16.10  _admin          Dell Inc. (PowerEdge R720xd)  12C/144T  63 GiB  5/38.0TB  -    6
ca02  192.168.16.11                  Dell Inc. (PowerEdge R720xd)  12C/144T  63 GiB  5/38.0TB  -    6
ca03  192.168.16.12                  Dell Inc. (PowerEdge R720xd)  12C/144T  63 GiB  5/38.0TB  -    6
ca04  192.168.16.13                  Dell Inc. (PowerEdge R720)    16C/256T  63 GiB  5/38.0TB  -    6
ca05  192.168.16.14                  Dell Inc. (PowerEdge R720)    16C/256T  63 GiB  5/38.0TB  -    6
ca06  192.168.16.15                  Dell Inc. (PowerEdge R720)    16C/256T  63 GiB  5/38.0TB  -    6
6 hosts in cluster
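
Only ca01 carries the _admin label above; if another host should also receive the admin keyring and ceph.conf, a label can be added through the orchestrator (optional, ca02 used as an example):

ceph orch host label add ca02 _admin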


OSD

# ceph orch device ls
HOST  PATH      TYPE  DEVICE ID                                          SIZE  AVAILABLE  REFRESHED  REJECT REASONS
ca01  /dev/sda  hdd   DELL_PERC_H710P_001733af8c4056682c00a0fe0420844a  10.9T  Yes        5m ago
ca01  /dev/sdc  hdd   DELL_PERC_H710P_00d5b1608e5d56682c00a0fe0420844a  10.9T  Yes        5m ago
ca01  /dev/sdd  hdd   DELL_PERC_H710P_0015b3388a1756682c00a0fe0420844a  10.9T  Yes        5m ago
ca02  /dev/sda  hdd   DELL_PERC_H710P_001f82551d4b56682c00d4acebe03f08  10.9T  Yes        5m ago
ca02  /dev/sdb  hdd   DELL_PERC_H710P_0039827b1e5e56682c00d4acebe03f08  10.9T  Yes        5m ago
ca02  /dev/sdd  hdd   DELL_PERC_H710P_002611941b2e56682c00d4acebe03f08  10.9T  Yes        5m ago
ca03  /dev/sdb  hdd   DELL_PERC_H710P_0032429e08ee58682c00731ceae03f08  10.9T  Yes        5m ago
ca03  /dev/sdc  hdd   DELL_PERC_H710P_007d58f0090459682c00731ceae03f08  10.9T  Yes        5m ago
ca03  /dev/sdd  hdd   DELL_PERC_H710P_00b5f71007d458682c00731ceae03f08  10.9T  Yes        5m ago
ca04  /dev/sdb  hdd   DELL_PERC_H710_007a36381ece18682c00bf49cf20a782   10.9T  Yes        3m ago
ca04  /dev/sdc  hdd   DELL_PERC_H710_00e4926e1b9f18682c00bf49cf20a782   10.9T  Yes        3m ago
ca04  /dev/sdd  hdd   DELL_PERC_H710_000a560b1a8818682c00bf49cf20a782   10.9T  Yes        3m ago
ca05  /dev/sdb  hdd   DELL_PERC_H710_00efa09a233a1a682c00594bcf20a782   10.9T  Yes        3m ago
ca05  /dev/sdc  hdd   DELL_PERC_H710_0008b06d26691a682c00594bcf20a782   10.9T  Yes        3m ago
ca05  /dev/sdd  hdd   DELL_PERC_H710_00be6f4f25571a682c00594bcf20a782   10.9T  Yes        3m ago
ca06  /dev/sda  hdd   DELL_PERC_H710_006295c228eb1b682c005e47cf20a782   10.9T  Yes        3m ago
ca06  /dev/sdb  hdd   DELL_PERC_H710_0098248e26c61b682c005e47cf20a782   10.9T  Yes        3m ago
ca06  /dev/sdc  hdd   DELL_PERC_H710_003dc7aa27d91b682c005e47cf20a782   10.9T  Yes        3m ago

# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
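
--all-available-devices consumes every eligible device; to create an OSD on one specific device instead, or to verify the result afterwards, something along these lines works (the device path is an example from the list above):

ceph orch daemon add osd ca02:/dev/sdb   # single OSD on one device
ceph orch ps --daemon-type osd           # OSD daemons per host
ceph osd tree                            # CRUSH tree and OSD status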




