* Issue: Ceph osd rm of one OSD caused 30% objects degraded
@ 2014-11-19 11:29 Qiang
From: Qiang @ 2014-11-19 11:29 UTC (permalink / raw)
  To: ceph-devel

Hi ceph-devel,

I ran into an issue: removing one OSD with "ceph osd rm" caused 30% of 
objects to become degraded.

Step 1: Created an ssd root bucket:

# create a new root bucket named "ssd" in the CRUSH map
ceph osd crush add-bucket ssd root
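
(Side note: to actually place data under this new root, one would normally 
also add a CRUSH rule pointing at it, roughly like the following; ssd_rule 
and the pool name rbd are just example names, shown for context:)

# example only: create a rule that places replicas under the ssd root,
# separated by host, then point a pool at that rule
ceph osd crush rule create-simple ssd_rule ssd host
ceph osd pool set rbd crush_ruleset <rule-id>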

Step 2: Installing osd.100 failed, leaving a stale entry in the OSD tree:

# id	weight	type name	up/down	reweight
94	1			osd.94	up	1
95	1			osd.95	up	1
96	1			osd.96	up	1
97	1			osd.97	up	1
98	1			osd.98	up	1
99	1			osd.99	up	1
100	0	osd.100	down	0
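
(As far as I understand, the usual cleanup for a stale entry like osd.100 
would be roughly:)

# remove the leftover osd.100 from the CRUSH map, from auth, and from the OSD map
ceph osd crush remove osd.100
ceph auth del osd.100
ceph osd rm 100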

Step 3: Installed osd.101 successfully this time, and moved its host into 
root=ssd:
# id	weight	type name	up/down	reweight
-12	1	root ssd
-13	1		host ssd-cephnode1
101	1			osd.101	up	1
-1	100	root default
-2	10		host cephnode1
0	1			osd.0	up	1
1	1			osd.1	up	1
2	1			osd.2	up	1
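
(The move itself was done with a command along these lines; the host name 
matches the tree above:)

# move the host holding osd.101 under the new ssd root
ceph osd crush move ssd-cephnode1 root=ssd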

Step 4: Then I ran "ceph osd rm 100", and ceph health reported 30% of 
objects degraded. I/O performance also dropped to a crawl (about 1 MB/s 
per client).
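
(I'm watching the cluster state with the usual status commands, roughly:)

# check overall health, the degraded object ratio, and the OSD tree
ceph -s
ceph health detail
ceph osd tree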

Does anybody know what the root cause is, or have suggestions on how to 
figure it out?

Thank you very much.
