
How to resolve the ceph osd full error, method 1 (node add)

While operating Ceph, you may see an error message like the following.

2016-02-22 08:50:03.543208 mon.0 [INF] pgmap v257903: 960 pgs: 1 remapped, 8 active+clean+scrubbing+deep, 9 activating+remapped, 67 peering, 5 active+remapped+backfilling, 1 activating+undersized+degraded+remapped, 82 active+remapped+wait_backfill, 32 stale+active+clean, 31 active+undersized+degraded, 28 active+undersized+degraded+remapped, 573 active+clean, 7 undersized+degraded+peered, 35 remapped+peering, 81 active+remapped; 16972 GB data, 22523 GB used, 33327 GB / 55851 GB avail; 9/5472478 objects degraded (0.000%); 2024207/5472478 objects misplaced (36.989%) 4 near full osd(s)

This error appears when one or more OSDs (osd.id) are full or near full; it can be resolved by adding OSD nodes or by cleaning up unneeded data.
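
Before adding a node, it is worth confirming which OSDs triggered the warning and how full they are. A quick check from the admin node might look like this (the exact output wording varies by release; ceph osd df is available from Hammer onward):

ceph@mgmt:~/cephcluster$ ceph health detail

ceph@mgmt:~/cephcluster$ ceph osd df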

The procedure for adding an OSD node is as follows.

ceph@mgmt:~$ ceph-deploy install [NEW ceph hostname]

ceph@mgmt:~/cephcluster$ ceph-deploy admin [NEW ceph hostname]

ceph@mgmt:~/cephcluster$ ceph-deploy disk zap [NEW ceph hostname]:sda

ceph@mgmt:~/cephcluster$ ceph-deploy osd prepare [NEW ceph hostname]:sda:[journal device, if present]

ceph@mgmt:~/cephcluster$ ceph-deploy osd activate [NEW ceph hostname]:sda1:[journal device partition, if present]

* Reference (general form)
# ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
# ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
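
As a concrete illustration of the templates above (the hostname osd4 and the devices sdb/sdc are made-up examples, not values from this cluster), adding an OSD whose journal lives on a separate disk would look like:

ceph@mgmt:~/cephcluster$ ceph-deploy disk zap osd4:sdb

ceph@mgmt:~/cephcluster$ ceph-deploy osd prepare osd4:sdb:sdc

ceph@mgmt:~/cephcluster$ ceph-deploy osd activate osd4:sdb1:sdc1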

ceph@mgmt:~/cephcluster$ ceph -s
..
(output omitted)

osdmap e19: 4 osds: 4 up, 4 in <– compare the up/in count against the number of OSDs after adding the new nodes
..
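
Besides the osdmap line, the new OSDs should show up under their host in the CRUSH tree, and the backfill can be watched until the cluster returns to HEALTH_OK, for example:

ceph@mgmt:~/cephcluster$ ceph osd tree

ceph@mgmt:~/cephcluster$ ceph -w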

root@osd3:~# df
..
..
/dev/sda1 2928254976 811125068 2117129908 1% /var/lib/ceph/osd/ceph-1
/dev/sdb1 2928254976 562295376 2365959600 1% /var/lib/ceph/osd/ceph-2
/dev/sdc1 2928254976 879282956 2048972020 1% /var/lib/ceph/osd/ceph-3
/dev/sdd1 2928254976 1314797772 1613457204 1% /var/lib/ceph/osd/ceph-4
/dev/sde1 2928254976 560996136 2367258840 1% /var/lib/ceph/osd/ceph-5
/dev/sdg1 2928254976 620085536 2308169440 1% /var/lib/ceph/osd/ceph-6
/dev/sdh1 2928254976 501566988 2426687988 1% /var/lib/ceph/osd/ceph-7
/dev/sdj1 2928254976 310455672 2617799304 1% /var/lib/ceph/osd/ceph-8
/dev/sdk1 2928254976 374152016 2554102960 1% /var/lib/ceph/osd/ceph-9
/dev/sdl1 2927736380 349956572 2577779808 1% /var/lib/ceph/osd/ceph-10
/dev/sdm1 2928254976 125987400 2802267576 1% /var/lib/ceph/osd/ceph-11
/dev/sdn1 2927736380 125520824 2802215556 1% /var/lib/ceph/osd/ceph-12
/dev/sdo1 2927736380 974992 2926761388 1% /var/lib/ceph/osd/ceph-13
/dev/sdi1 2928254976 81928 2928173048 1% /var/lib/ceph/osd/ceph-14

..
..

Afterwards, as the degraded → clean recovery proceeds, you can see data accumulating on the newly created ceph-4, ceph-5, and ceph-6.

root@osd3:~# df
..
..
/dev/sda1 2928254976 811125068 2117129908 28% /var/lib/ceph/osd/ceph-1
/dev/sdb1 2928254976 562295376 2365959600 20% /var/lib/ceph/osd/ceph-2
/dev/sdc1 2928254976 879282956 2048972020 31% /var/lib/ceph/osd/ceph-3
/dev/sdd1 2928254976 1314797772 1613457204 45% /var/lib/ceph/osd/ceph-4
/dev/sde1 2928254976 560996136 2367258840 20% /var/lib/ceph/osd/ceph-5
/dev/sdg1 2928254976 620085536 2308169440 22% /var/lib/ceph/osd/ceph-6
/dev/sdh1 2928254976 501566988 2426687988 18% /var/lib/ceph/osd/ceph-7
/dev/sdj1 2928254976 310455672 2617799304 11% /var/lib/ceph/osd/ceph-8
/dev/sdk1 2928254976 374152016 2554102960 13% /var/lib/ceph/osd/ceph-9
/dev/sdl1 2927736380 349956572 2577779808 12% /var/lib/ceph/osd/ceph-10
/dev/sdm1 2928254976 125987400 2802267576 5% /var/lib/ceph/osd/ceph-11
/dev/sdn1 2927736380 125520824 2802215556 5% /var/lib/ceph/osd/ceph-12
/dev/sdo1 2927736380 974992 2926761388 1% /var/lib/ceph/osd/ceph-13
/dev/sdi1 2928254976 81928 2928173048 1% /var/lib/ceph/osd/ceph-14
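
Rather than running df on every OSD host, the same per-OSD utilization can be read cluster-wide from the admin node; ceph osd df reports each OSD's size, used space, and %USE (column layout differs slightly by release):

ceph@mgmt:~/cephcluster$ ceph osd df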

* Note that cleaning up data inside the OSDs does not immediately free up OSD space.
(The degraded → clean recovery work has to run first.)
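
To confirm that deleted data has actually been released, watch the degraded/misplaced counters fall and the usage numbers drop as recovery completes, for example:

ceph@mgmt:~/cephcluster$ ceph -s | grep -E 'degraded|misplaced'

ceph@mgmt:~/cephcluster$ ceph df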
