* Can we have multiple OSDs in a single machine
@ 2012-04-11  7:42 Madhusudhana U
  2012-04-11  7:53 ` Stefan Kleijkers
  2012-04-11  9:45 ` Tomasz Paszkowski
  0 siblings, 2 replies; 9+ messages in thread
From: Madhusudhana U @ 2012-04-11  7:42 UTC (permalink / raw)
  To: ceph-devel

Hi all,
I have a system with a 2 TB SATA drive that I want to add to my Ceph
cluster. Instead of creating one large OSD, couldn't I have 4 OSDs of
roughly 450 GB each? Is this possible? And if so, will it improve
read/write performance?

Thanks
__Madhusudhana



* Re: Can we have multiple OSDs in a single machine
  2012-04-11  7:42 Can we have multiple OSDs in a single machine Madhusudhana U
@ 2012-04-11  7:53 ` Stefan Kleijkers
  2012-04-11 12:42   ` Madhusudhana U
  2012-04-11  9:45 ` Tomasz Paszkowski
  1 sibling, 1 reply; 9+ messages in thread
From: Stefan Kleijkers @ 2012-04-11  7:53 UTC (permalink / raw)
  To: Madhusudhana U; +Cc: ceph-devel

Hello,

Yes, that's no problem; I've been using that configuration for some time
now. Just generate a config with multiple OSD clauses using the same
node/host.

With newer Ceph versions, mkcephfs is smart enough to detect the OSDs
on the same node and will generate a CRUSH map so that objects get
replicated to different nodes.
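
If you want to double-check the result, you can dump the generated CRUSH
map afterwards (a quick sketch; the output paths are arbitrary):

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

and inspect /tmp/crushmap.txt to confirm that replicas land on different
hosts rather than on OSDs within the same host.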

I didn't see any performance impact (provided you have enough processing
power, because running more OSD daemons needs more of it).

I wanted to use just a few OSDs per node on top of mdraid, so I could use
RAID6. That way I could swap a faulty disk without bringing the node
down. But I couldn't get it stable with mdraid.

Stefan

On 04/11/2012 09:42 AM, Madhusudhana U wrote:
> Hi all,
> I have a system with a 2 TB SATA drive that I want to add to my Ceph
> cluster. Instead of creating one large OSD, couldn't I have 4 OSDs of
> roughly 450 GB each? Is this possible? And if so, will it improve
> read/write performance?
>
> Thanks
> __Madhusudhana
>



* Re: Can we have multiple OSDs in a single machine
  2012-04-11  7:42 Can we have multiple OSDs in a single machine Madhusudhana U
  2012-04-11  7:53 ` Stefan Kleijkers
@ 2012-04-11  9:45 ` Tomasz Paszkowski
  2012-04-11 12:38   ` Madhusudhana U
  1 sibling, 1 reply; 9+ messages in thread
From: Tomasz Paszkowski @ 2012-04-11  9:45 UTC (permalink / raw)
  To: ceph-devel

Hi,

Please correct me if I'm wrong. You would like to partition a single drive?


On 11-04-2012 at 09:42, Madhusudhana U
<madhusudhana.u.acharya@gmail.com> wrote:

> Hi all,
> I have a system with a 2 TB SATA drive that I want to add to my Ceph
> cluster. Instead of creating one large OSD, couldn't I have 4 OSDs of
> roughly 450 GB each? Is this possible? And if so, will it improve
> read/write performance?
>
> Thanks
> __Madhusudhana
>


* Re: Can we have multiple OSDs in a single machine
  2012-04-11  9:45 ` Tomasz Paszkowski
@ 2012-04-11 12:38   ` Madhusudhana U
  2012-04-11 13:53     ` Tomasz Paszkowski
  0 siblings, 1 reply; 9+ messages in thread
From: Madhusudhana U @ 2012-04-11 12:38 UTC (permalink / raw)
  To: ceph-devel

Tomasz Paszkowski <ss7pro <at> gmail.com> writes:

> 
> Hi,
> 
> Please correct me if I'm wrong. You would like to partition a single drive?
> 
Yes,
I want to create 4 partitions on a single drive. This will increase the
number of OSDs. Will this increase in OSD count also increase performance?
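
For reference, creating the four partitions might look something like
this (just a sketch, assuming a GPT label and that the drive shows up
as /dev/sdb):

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart osd0 0% 25%
    parted -s /dev/sdb mkpart osd1 25% 50%
    parted -s /dev/sdb mkpart osd2 50% 75%
    parted -s /dev/sdb mkpart osd3 75% 100%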

Thanks






* Re: Can we have multiple OSDs in a single machine
  2012-04-11  7:53 ` Stefan Kleijkers
@ 2012-04-11 12:42   ` Madhusudhana U
  2012-04-11 13:55     ` Tomasz Paszkowski
  2012-04-11 14:33     ` Stefan Kleijkers
  0 siblings, 2 replies; 9+ messages in thread
From: Madhusudhana U @ 2012-04-11 12:42 UTC (permalink / raw)
  To: ceph-devel

Stefan Kleijkers <stefan <at> unilogicnetworks.net> writes:

> 
> Hello,
> 
> Yes, that's no problem; I've been using that configuration for some time
> now. Just generate a config with multiple OSD clauses using the same
> node/host.
> 
> With newer Ceph versions, mkcephfs is smart enough to detect the OSDs
> on the same node and will generate a CRUSH map so that objects get
> replicated to different nodes.
> 
> I didn't see any performance impact (provided you have enough processing
> power, because running more OSD daemons needs more of it).
> 
> I wanted to use just a few OSDs per node on top of mdraid, so I could use
> RAID6. That way I could swap a faulty disk without bringing the node
> down. But I couldn't get it stable with mdraid.
> 
This is how the OSD part of my ceph.conf looks:

[osd.0]
        host = ceph-node-1
        btrfs devs = /dev/sda6

[osd.1]
        host = ceph-node-2
        btrfs devs = /dev/sda6

[osd.2]
        host = ceph-node-3
        btrfs devs = /dev/sda6

[osd.3]
        host = ceph-node-4
        btrfs devs = /dev/sda6



Can you please help me add multiple OSDs on the same machine,
considering that I have 4 partitions created for OSDs?

I have powerful machines with 6 quad-core Intel Xeons and 48 GB of RAM.









* Re: Can we have multiple OSDs in a single machine
  2012-04-11 12:38   ` Madhusudhana U
@ 2012-04-11 13:53     ` Tomasz Paszkowski
  2012-04-11 16:07       ` Sage Weil
  0 siblings, 1 reply; 9+ messages in thread
From: Tomasz Paszkowski @ 2012-04-11 13:53 UTC (permalink / raw)
  To: ceph-devel

Hi,

It will not increase overall storage-system performance. Partitioning a
single disk drive gives you no performance gain.





On Wed, Apr 11, 2012 at 2:38 PM, Madhusudhana U
<madhusudhana.u.acharya@gmail.com> wrote:
> Tomasz Paszkowski <ss7pro <at> gmail.com> writes:
>
>>
>> Hi,
>>
>> Please correct me if I'm wrong. You would like to partition a single drive?
>>
> Yes,
> I want to create 4 partitions on a single drive. This will increase the
> number of OSDs. Will this increase in OSD count also increase performance?
>
> Thanks
>
>
>
>



-- 
Tomasz Paszkowski
SS7, Asterisk, SAN, Datacenter, Cloud Computing
+48500166299


* Re: Can we have multiple OSDs in a single machine
  2012-04-11 12:42   ` Madhusudhana U
@ 2012-04-11 13:55     ` Tomasz Paszkowski
  2012-04-11 14:33     ` Stefan Kleijkers
  1 sibling, 0 replies; 9+ messages in thread
From: Tomasz Paszkowski @ 2012-04-11 13:55 UTC (permalink / raw)
  To: ceph-devel

If you're on a single machine, keep the hostname in the config the same
for every OSD, but the device name needs to be different for each OSD
process on that machine.

On Wed, Apr 11, 2012 at 2:42 PM, Madhusudhana U
<madhusudhana.u.acharya@gmail.com> wrote:
> Stefan Kleijkers <stefan <at> unilogicnetworks.net> writes:
>
>>
>> Hello,
>>
>> Yes, that's no problem; I've been using that configuration for some time
>> now. Just generate a config with multiple OSD clauses using the same
>> node/host.
>>
>> With newer Ceph versions, mkcephfs is smart enough to detect the OSDs
>> on the same node and will generate a CRUSH map so that objects get
>> replicated to different nodes.
>>
>> I didn't see any performance impact (provided you have enough processing
>> power, because running more OSD daemons needs more of it).
>>
>> I wanted to use just a few OSDs per node on top of mdraid, so I could use
>> RAID6. That way I could swap a faulty disk without bringing the node
>> down. But I couldn't get it stable with mdraid.
>>
> This is how the OSD part of my ceph.conf looks:
>
> [osd.0]
>        host = ceph-node-1
>        btrfs devs = /dev/sda6
>
> [osd.1]
>        host = ceph-node-2
>        btrfs devs = /dev/sda6
>
> [osd.2]
>        host = ceph-node-3
>        btrfs devs = /dev/sda6
>
> [osd.3]
>        host = ceph-node-4
>        btrfs devs = /dev/sda6
>
>
>
> Can you please help me add multiple OSDs on the same machine,
> considering that I have 4 partitions created for OSDs?
>
> I have powerful machines with 6 quad-core Intel Xeons and 48 GB of RAM.
>
>
>
>
>
>
>



-- 
Tomasz Paszkowski
SS7, Asterisk, SAN, Datacenter, Cloud Computing
+48500166299


* Re: Can we have multiple OSDs in a single machine
  2012-04-11 12:42   ` Madhusudhana U
  2012-04-11 13:55     ` Tomasz Paszkowski
@ 2012-04-11 14:33     ` Stefan Kleijkers
  1 sibling, 0 replies; 9+ messages in thread
From: Stefan Kleijkers @ 2012-04-11 14:33 UTC (permalink / raw)
  To: Madhusudhana U; +Cc: ceph-devel

Hello,

You will get something like this:

[osd.0]
         host = ceph-node-1
         btrfs devs = /dev/sda6

[osd.1]
         host = ceph-node-1
         btrfs devs = /dev/sda7

[osd.2]
         host = ceph-node-1
         btrfs devs = /dev/sda8

[osd.3]
         host = ceph-node-1
         btrfs devs = /dev/sda9


[osd.4]
         host = ceph-node-2
         btrfs devs = /dev/sda6

[osd.5]
         host = ceph-node-2
         btrfs devs = /dev/sda7

etc...
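
With the config in place, the cluster can then be created and started the
usual way (just a sketch, using the tools of this era; adjust the paths
to your setup):

    mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring
    service ceph -a start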

But as Tomasz mentions, you get no extra performance, because in most cases the disk is the bottleneck.

Besides, I recommend not using "btrfs devs" anymore; that option is going
to be deprecated, so you only get the "osd data = <directory>" option.

If you really want more performance, use more disks or a fast journal
device (I use an SSD).
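
With a journal device, an OSD clause would look something like this (a
sketch; /dev/ssd-part0 is a placeholder for a partition on your SSD):

[osd.0]
         host = ceph-node-1
         btrfs devs = /dev/sda6
         osd journal = /dev/ssd-part0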

Stefan




On 04/11/2012 02:42 PM, Madhusudhana U wrote:
> Stefan Kleijkers <stefan <at> unilogicnetworks.net> writes:
>
>> Hello,
>>
>> Yes, that's no problem; I've been using that configuration for some time
>> now. Just generate a config with multiple OSD clauses using the same
>> node/host.
>>
>> With newer Ceph versions, mkcephfs is smart enough to detect the OSDs
>> on the same node and will generate a CRUSH map so that objects get
>> replicated to different nodes.
>>
>> I didn't see any performance impact (provided you have enough processing
>> power, because running more OSD daemons needs more of it).
>>
>> I wanted to use just a few OSDs per node on top of mdraid, so I could use
>> RAID6. That way I could swap a faulty disk without bringing the node
>> down. But I couldn't get it stable with mdraid.
>>
> This is how the OSD part of my ceph.conf looks:
>
> [osd.0]
>          host = ceph-node-1
>          btrfs devs = /dev/sda6
>
> [osd.1]
>          host = ceph-node-2
>          btrfs devs = /dev/sda6
>
> [osd.2]
>          host = ceph-node-3
>          btrfs devs = /dev/sda6
>
> [osd.3]
>          host = ceph-node-4
>          btrfs devs = /dev/sda6
>
>
>
> Can you please help me add multiple OSDs on the same machine,
> considering that I have 4 partitions created for OSDs?
>
> I have powerful machines with 6 quad-core Intel Xeons and 48 GB of RAM.
>
>
>
>
>
>
>



* Re: Can we have multiple OSDs in a single machine
  2012-04-11 13:53     ` Tomasz Paszkowski
@ 2012-04-11 16:07       ` Sage Weil
  0 siblings, 0 replies; 9+ messages in thread
From: Sage Weil @ 2012-04-11 16:07 UTC (permalink / raw)
  To: Tomasz Paszkowski; +Cc: ceph-devel


On Wed, 11 Apr 2012, Tomasz Paszkowski wrote:
> Hi,
> 
> It will not increase overall storage-system performance. Partitioning a
> single disk drive gives you no performance gain.

It will in fact slow things down, because each ceph-osd instance will be
doing periodic syncfs(2) calls and they will interfere with each other.

sage

> 
> 
> 
> 
> 
> On Wed, Apr 11, 2012 at 2:38 PM, Madhusudhana U
> <madhusudhana.u.acharya@gmail.com> wrote:
> > Tomasz Paszkowski <ss7pro <at> gmail.com> writes:
> >
> >>
> >> Hi,
> >>
> >> Please correct me if I'm wrong. You would like to partition a single drive?
> >>
> > Yes,
> > I want to create 4 partitions on a single drive. This will increase the
> > number of OSDs. Will this increase in OSD count also increase performance?
> >
> > Thanks
> >
> >
> >
> >
> 
> 
> 
> -- 
> Tomasz Paszkowski
> SS7, Asterisk, SAN, Datacenter, Cloud Computing
> +48500166299

