* ceph osd journal disk in RAID#1?
From: M Ranga Swami Reddy @ 2019-02-14 13:04 UTC
  To: ceph-users, ceph-devel

Hello - Can we put the Ceph OSD journal disks in RAID 1 to achieve HA
for the journal disks?

Thanks
Swami


* Re: ceph osd journal disk in RAID#1?
From: John Petrini @ 2019-02-14 13:33 UTC
  To: M Ranga Swami Reddy; +Cc: ceph-users, ceph-devel

You can, but it's usually not recommended. When you replace a failed
disk, the RAID rebuild will drag down the performance of the remaining
disk and, in turn, every OSD backed by it. This can hamper the
performance of the entire cluster. You could probably tune the rebuild
priority in the RAID controller to limit the impact, but that comes at
the expense of longer rebuild times, which may not be ideal.
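
For what it's worth, if the journal mirror were Linux md RAID rather
than a hardware controller, the equivalent knob is the kernel's md
resync speed limit. A minimal sketch of capping it, assuming root
access and a hypothetical ~10 MB/s ceiling (adjust to taste):

# Sketch: throttle Linux md RAID resync/rebuild throughput so a journal
# mirror rebuild doesn't starve the OSDs sharing the surviving disk.
# Assumes Linux software RAID (md) and root privileges; hardware RAID
# controllers expose a similar "rebuild rate" setting in their own tools.

MAX_KIBPS = 10_000   # hypothetical cap: ~10 MB/s per device, in KiB/s
MIN_KIBPS = 1_000    # guaranteed floor so the rebuild still makes progress

def set_md_rebuild_limits(max_kibps: int = MAX_KIBPS,
                          min_kibps: int = MIN_KIBPS) -> None:
    # /proc/sys/dev/raid/speed_limit_{max,min} are the standard md
    # tunables (KiB/s per device), the same values sysctl exposes as
    # dev.raid.speed_limit_max / dev.raid.speed_limit_min.
    with open("/proc/sys/dev/raid/speed_limit_max", "w") as f:
        f.write(str(max_kibps))
    with open("/proc/sys/dev/raid/speed_limit_min", "w") as f:
        f.write(str(min_kibps))

if __name__ == "__main__":
    set_md_rebuild_limits()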

Ideally, losing a journal disk should not be a cause for concern. As
long as you don't have too many OSDs per journal, your cluster should
keep humming along just fine until you rebuild those OSDs with a
replacement journal.

Cost and available disk slots are also worth considering, since you'll
burn a lot more of both by going RAID 1, which again really isn't
necessary. This may be the most convincing reason not to bother.
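
To put rough numbers on it (purely hypothetical figures): for 12 HDD
OSDs at 4 OSDs per journal SSD, mirroring the journals costs three
extra SSDs and three extra drive bays.

# Sketch: hypothetical bay/slot math for mirrored vs unmirrored journal
# SSDs.  The OSD count and ratio below are illustrative, not from the
# thread.
import math

def slots_needed(n_osds: int, osds_per_journal: int, mirrored: bool) -> int:
    journal_ssds = math.ceil(n_osds / osds_per_journal)
    journal_slots = journal_ssds * (2 if mirrored else 1)
    return n_osds + journal_slots

print(slots_needed(12, 4, mirrored=False))  # 15 bays (12 HDD + 3 SSD)
print(slots_needed(12, 4, mirrored=True))   # 18 bays (12 HDD + 6 SSD)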


* Re: ceph osd journal disk in RAID#1?
From: Sage Weil @ 2019-02-14 14:23 UTC
  To: John Petrini; +Cc: ceph-users, ceph-devel

On Thu, 14 Feb 2019, John Petrini wrote:
> Cost and available disk slots are also worth considering, since you'll
> burn a lot more of both by going RAID 1, which again really isn't
> necessary. This may be the most convincing reason not to bother.

Generally speaking, if the choice is between two SSDs in RAID 1 shared
by 6 HDD OSDs, or each of those SSDs shared by 3 HDD OSDs, I'd take the
latter.  Especially with BlueStore, which can make good use of the
doubled capacity to keep more metadata on fast storage.
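
As a quick back-of-the-envelope illustration of that trade-off
(assuming hypothetical 480 GB SSDs; any size works the same way):

# Sketch: per-OSD share of fast storage under the two layouts above.
# The 480 GB SSD size is a hypothetical example.
SSD_GB = 480

# Layout A: two SSDs in RAID-1 (usable capacity of a single SSD),
# shared by 6 HDD OSDs.
raid1_per_osd = SSD_GB / 6       # 80 GB of fast storage per OSD

# Layout B: each SSD used standalone, shared by 3 HDD OSDs.
standalone_per_osd = SSD_GB / 3  # 160 GB per OSD -- twice the headroom
                                 # for BlueStore metadata (RocksDB/WAL)

print(raid1_per_osd, standalone_per_osd)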

sage
