* Increase number of pg in running system
@ 2013-02-05 13:06 Martin B Nielsen
  2013-02-05 14:40 ` ArtemGr
  0 siblings, 1 reply; 7+ messages in thread
From: Martin B Nielsen @ 2013-02-05 13:06 UTC (permalink / raw)
  To: ceph-devel

Hi,

Looking at:
http://ceph.com/docs/master/rados/operations/pools/

It has this description roughly in the middle:

---------------
Important
Increasing the number of placement groups in a pool after you create
the pool is still an experimental feature in Bobtail (v 0.56). We
recommend defining a reasonable number of placement groups and
maintaining that number until Ceph’s placement group splitting and
merging functionality matures.
---------------

However, I cannot find any reference for how to do this.

I'm asking since we have a test system with 10TB of data and only the
default 8 PGs created.

So the system is currently throwing 400GB PGs around whenever we test
removing disks.

The system sometimes wants to put two PGs on an OSD that cannot hold
them, and the cluster winds up full. If there is ~750GB free on an OSD
it might decide to put two 400GB PGs on it (even though there are other
OSDs with even more free space than that; these disks are the exact
same type, size and weight).

The system is running Bobtail 0.56.2.

The system holds 2 large RBD images, and it is not an option to create
a new pool with a higher PG count and copy them over (not enough total
space available).

Thanks in advance,

Cheers,
Martin

* Re: Increase number of pg in running system
  2013-02-05 13:06 Increase number of pg in running system Martin B Nielsen
@ 2013-02-05 14:40 ` ArtemGr
  2013-02-06  1:35   ` Mandell Degerness
  0 siblings, 1 reply; 7+ messages in thread
From: ArtemGr @ 2013-02-05 14:40 UTC (permalink / raw)
  To: ceph-devel

Martin B Nielsen <martin <at> unity3d.com> writes:
> Hi,
> 
> Looking at:
> http://ceph.com/docs/master/rados/operations/pools/
> 
> It has this description roughly in the middle:
> 
> ---------------
> Important
> Increasing the number of placement groups in a pool after you create
> the pool is still an experimental feature in Bobtail (v 0.56). We
> recommend defining a reasonable number of placement groups and
> maintaining that number until Ceph’s placement group splitting and
> merging functionality matures.
> ---------------
> 
> However, I cannot find any reference for how to do this.
> 
> I'm asking since we have a test system with 10TB of data and only the
> default 8 PGs created.

Here's how I do it in ceph.conf:

[osd]
  ; Increase the number of PGs to decrease the size of each scrub
  osd pool default pg num = 64
  osd pool default pgp num = 64
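
These defaults only apply at pool-creation time; to confirm what a pool
is actually using, ceph osd dump prints pg_num and pgp_num per pool:

 ceph osd dump | grep pg_num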



* Re: Increase number of pg in running system
  2013-02-05 14:40 ` ArtemGr
@ 2013-02-06  1:35   ` Mandell Degerness
  2013-02-06  1:49     ` Sage Weil
  0 siblings, 1 reply; 7+ messages in thread
From: Mandell Degerness @ 2013-02-06  1:35 UTC (permalink / raw)
  To: ceph-devel

I would like very much to specify pg_num and pgp_num for the default
pools, but they are defaulting to 64 (no OSDs are defined in the
config file).  I have tried using the options indicated by Artem, but
they didn't seem to have any effect on the data and rbd pools which
are created by default.  Is there something I am missing?

On Tue, Feb 5, 2013 at 6:40 AM, ArtemGr <artemciy@gmail.com> wrote:
> Martin B Nielsen <martin <at> unity3d.com> writes:
>> Hi,
>>
>> Looking at:
>> http://ceph.com/docs/master/rados/operations/pools/
>>
>> It has this description roughly in the middle:
>>
>> ---------------
>> Important
>> Increasing the number of placement groups in a pool after you create
>> the pool is still an experimental feature in Bobtail (v 0.56). We
>> recommend defining a reasonable number of placement groups and
>> maintaining that number until Ceph’s placement group splitting and
>> merging functionality matures.
>> ---------------
>>
>> However, I cannot find any reference for how to do this.
>>
>> I'm asking since we have a test system with 10TB of data and only the
>> default 8 PGs created.
>
> Here's how I do it in ceph.conf:
>
> [osd]
>   ; Increase the number of PGs to decrease the size of each scrub
>   osd pool default pg num = 64
>   osd pool default pgp num = 64

* Re: Increase number of pg in running system
  2013-02-06  1:35   ` Mandell Degerness
@ 2013-02-06  1:49     ` Sage Weil
  2013-02-06  3:22       ` Chen, Xiaoxi
  0 siblings, 1 reply; 7+ messages in thread
From: Sage Weil @ 2013-02-06  1:49 UTC (permalink / raw)
  To: Mandell Degerness; +Cc: ceph-devel

On Tue, 5 Feb 2013, Mandell Degerness wrote:
> I would like very much to specify pg_num and pgp_num for the default
> pools, but they are defaulting to 64 (no OSDs are defined in the
> config file).  I have tried using the options indicated by Artem, but
> they didn't seem to have any effect on the data and rbd pools which
> are created by default.  Is there something I am missing?

Ah, I see.  Specifying this is awkward.  In [mon] or [global],

 osd pg bits = N
 osd pgp bits = N

where N is the number of bits to shift 1 to the left.  So for 1024
PGs, you'd do 10.  (What it's actually doing is MAX(num_osds, 1) << N.
The default N is 6, so you're probably seeing 64 PGs per pool by default.)
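
A ceph.conf sketch of the above (illustrative; with no OSDs defined in
the config the base of the shift is 1, so these values yield 1024 PGs
per initial pool):

 [global]
   ; initial pools get MAX(num_osds, 1) << 10 PGs each
   osd pg bits = 10
   osd pgp bits = 10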

sage


> 
> On Tue, Feb 5, 2013 at 6:40 AM, ArtemGr <artemciy@gmail.com> wrote:
> > Martin B Nielsen <martin <at> unity3d.com> writes:
> >> Hi,
> >>
> >> Looking at:
> >> http://ceph.com/docs/master/rados/operations/pools/
> >>
> >> It has this description roughly in the middle:
> >>
> >> ---------------
> >> Important
> >> Increasing the number of placement groups in a pool after you create
> >> the pool is still an experimental feature in Bobtail (v 0.56). We
> >> recommend defining a reasonable number of placement groups and
> >> maintaining that number until Ceph’s placement group splitting and
> >> merging functionality matures.
> >> ---------------
> >>
> >> However, I cannot find any reference for how to do this.
> >>
> >> I'm asking since we have a test system with 10TB of data and only the
> >> default 8 PGs created.
> >
> > Here's how I do it in ceph.conf:
> >
> > [osd]
> >   ; Increase the number of PGs to decrease the size of each scrub
> >   osd pool default pg num = 64
> >   osd pool default pgp num = 64

* RE: Increase number of pg in running system
  2013-02-06  1:49     ` Sage Weil
@ 2013-02-06  3:22       ` Chen, Xiaoxi
  2013-02-06  4:01         ` Gregory Farnum
  2013-02-06  4:49         ` Sage Weil
  0 siblings, 2 replies; 7+ messages in thread
From: Chen, Xiaoxi @ 2013-02-06  3:22 UTC (permalink / raw)
  To: Sage Weil, Mandell Degerness; +Cc: ceph-devel

But can we change the pg_num of a pool when it already contains data? If so, how?

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
Sent: February 6, 2013 9:50 AM
To: Mandell Degerness
Cc: ceph-devel@vger.kernel.org
Subject: Re: Increase number of pg in running system

On Tue, 5 Feb 2013, Mandell Degerness wrote:
> I would like very much to specify pg_num and pgp_num for the default 
> pools, but they are defaulting to 64 (no OSDs are defined in the 
> config file).  I have tried using the options indicated by Artem, but 
> they didn't seem to have any effect on the data and rbd pools which 
> are created by default.  Is there something I am missing?

Ah, I see.  Specifying this is awkward.  In [mon] or [global],

 osd pg bits = N
 osd pgp bits = N

where N is the number of bits to shift 1 to the left.  So for 1024 PGs, you'd do 10.  (What it's actually doing is MAX(num_osds, 1) << N.
The default N is 6, so you're probably seeing 64 PGs per pool by default.)

sage


> 
> On Tue, Feb 5, 2013 at 6:40 AM, ArtemGr <artemciy@gmail.com> wrote:
> > Martin B Nielsen <martin <at> unity3d.com> writes:
> >> Hi,
> >>
> >> Looking at:
> >> http://ceph.com/docs/master/rados/operations/pools/
> >>
> >> It has this description roughly in the middle:
> >>
> >> ---------------
> >> Important
> >> Increasing the number of placement groups in a pool after you 
> >> create the pool is still an experimental feature in Bobtail (v 
> >> 0.56). We recommend defining a reasonable number of placement 
> >> groups and maintaining that number until Ceph’s placement group 
> >> splitting and merging functionality matures.
> >> ---------------
> >>
> >> However, I cannot find any reference for how to do this.
> >>
> >> I'm asking since we have a test system with 10TB of data and only the
> >> default 8 PGs created.
> >
> > Here's how I do it in ceph.conf:
> >
> > [osd]
> >   ; Increase the number of PGs to decrease the size of each scrub
> >   osd pool default pg num = 64
> >   osd pool default pgp num = 64

* Re: Increase number of pg in running system
  2013-02-06  3:22       ` Chen, Xiaoxi
@ 2013-02-06  4:01         ` Gregory Farnum
  2013-02-06  4:49         ` Sage Weil
  1 sibling, 0 replies; 7+ messages in thread
From: Gregory Farnum @ 2013-02-06  4:01 UTC (permalink / raw)
  To: ceph-devel; +Cc: Sage Weil, Chen, Xiaoxi, Mandell Degerness

On Tuesday, February 5, 2013 at 5:49 PM, Sage Weil wrote:
> On Tue, 5 Feb 2013, Mandell Degerness wrote:
> > I would like very much to specify pg_num and pgp_num for the default
> > pools, but they are defaulting to 64 (no OSDs are defined in the
> > config file). I have tried using the options indicated by Artem, but
> > they didn't seem to have any effect on the data and rbd pools which
> > are created by default. Is there something I am missing?
> 
> Ah, I see. Specifying this is awkward. In [mon] or [global],
>  
> osd pg bits = N
> osd pgp bits = N
>  
> where N is the number of bits to shift 1 to the left. So for 1024
> PGs, you'd do 10. (What it's actually doing is MAX(num_osds, 1) << N.
> The default N is 6, so you're probably seeing 64 PGs per pool by default.)

I see the confusion though — the osd_pool_default_pg_num option is only used for pools which you create through the monitor after the system is already running.
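
For pools created that way you can also pass the PG count directly at
creation time, e.g. (pool name hypothetical):

 ceph osd pool create mypool 1024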



On Tuesday, February 5, 2013 at 7:22 PM, Chen, Xiaoxi wrote:

> But can we change the pg_num of a pool when it already contains data? If so, how?
>  
We advise against that right now; the relevant code isn't well-enough tested.

* RE: Increase number of pg in running system
  2013-02-06  3:22       ` Chen, Xiaoxi
  2013-02-06  4:01         ` Gregory Farnum
@ 2013-02-06  4:49         ` Sage Weil
  1 sibling, 0 replies; 7+ messages in thread
From: Sage Weil @ 2013-02-06  4:49 UTC (permalink / raw)
  To: Chen, Xiaoxi; +Cc: Mandell Degerness, ceph-devel

On Wed, 6 Feb 2013, Chen, Xiaoxi wrote:
> But can we change the pg_num of a pool when it already contains data?
> If so, how?

This functionality is merged, but still a bit experimental.  The 
incantation is

 ceph osd pool set <poolname> pg_num <numpgs> --allow-experimental-feature

Please test, but be careful on clusters with real data.
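
Note that pg_num governs the split itself while pgp_num governs when
data is actually rebalanced into the new groups, so a matching bump is
likely needed as well (same flag assumed; not verified on bobtail):

 ceph osd pool set <poolname> pgp_num <numpgs> --allow-experimental-feature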

sage


> 
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
Sent: February 6, 2013 9:50 AM
> To: Mandell Degerness
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: Increase number of pg in running system
> 
> On Tue, 5 Feb 2013, Mandell Degerness wrote:
> > I would like very much to specify pg_num and pgp_num for the default 
> > pools, but they are defaulting to 64 (no OSDs are defined in the 
> > config file).  I have tried using the options indicated by Artem, but 
> > they didn't seem to have any effect on the data and rbd pools which 
> > are created by default.  Is there something I am missing?
> 
> Ah, I see.  Specifying this is awkward.  In [mon] or [global],
> 
>  osd pg bits = N
>  osd pgp bits = N
> 
> where N is the number of bits to shift 1 to the left.  So for 1024 PGs, you'd do 10.  (What it's actually doing is MAX(num_osds, 1) << N.
> The default N is 6, so you're probably seeing 64 PGs per pool by default.)
> 
> sage
> 
> 
> > 
> > On Tue, Feb 5, 2013 at 6:40 AM, ArtemGr <artemciy@gmail.com> wrote:
> > > Martin B Nielsen <martin <at> unity3d.com> writes:
> > >> Hi,
> > >>
> > >> Looking at:
> > >> http://ceph.com/docs/master/rados/operations/pools/
> > >>
> > >> It has this description roughly in the middle:
> > >>
> > >> ---------------
> > >> Important
> > >> Increasing the number of placement groups in a pool after you 
> > >> create the pool is still an experimental feature in Bobtail (v 
> > >> 0.56). We recommend defining a reasonable number of placement 
> > >> groups and maintaining that number until Ceph’s placement group 
> > >> splitting and merging functionality matures.
> > >> ---------------
> > >>
> > >> However, I cannot find any reference for how to do this.
> > >>
> > >> I'm asking since we have a test system with 10TB of data and only the
> > >> default 8 PGs created.
> > >
> > > Here's how I do it in ceph.conf:
> > >
> > > [osd]
> > >   ; Increase the number of PGs to decrease the size of each scrub
> > >   osd pool default pg num = 64
> > >   osd pool default pgp num = 64


Thread overview: 7 messages
2013-02-05 13:06 Increase number of pg in running system Martin B Nielsen
2013-02-05 14:40 ` ArtemGr
2013-02-06  1:35   ` Mandell Degerness
2013-02-06  1:49     ` Sage Weil
2013-02-06  3:22       ` Chen, Xiaoxi
2013-02-06  4:01         ` Gregory Farnum
2013-02-06  4:49         ` Sage Weil
