* ceph -w output
@ 2011-12-12 19:23 Jens Rehpoehler
  2011-12-13 17:57 ` Samuel Just
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Rehpoehler @ 2011-12-12 19:23 UTC (permalink / raw)
  To: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 642 bytes --]

Hi,

could someone please explain the following to me:

2011-12-12 20:16:16.899140    pg v182377: 1190 pgs: 18 creating, 1172 
active+clean; 332 GB data, 652 GB used, 13270 GB / 14667 GB avail
2011-12-12 20:16:20.769268    pg v182378: 1190 pgs: 18 creating, 1172 
active+clean; 332 GB data, 652 GB used, 13270 GB / 14667 GB avail

I created the filesystem a week ago. The status "18 creating" has remained
since then. I have another cluster where "creating" never shows up.

Anything to worry about? Any way to debug this?

Thanks a lot !!

Jens

PS: it's the first time I've written to a list, so please be patient with me
if I break any rules :)

[-- Attachment #2: jens_rehpoehler.vcf --]
[-- Type: text/x-vcard, Size: 317 bytes --]

begin:vcard
fn:Jens Rehpöhler
n:Rehpöhler;Jens
org:Filoo GmbH
adr:;;Tilsiter Str. 1;Langenberg;NRW;33449;Deutschland
email;internet:jens.rehpoehler@filoo.de
tel;work:+49-5248-1898412
tel;fax:+49-5248-189819
tel;cell:+49-151-54645798
url:www.filoo.de
version:2.1
end:vcard


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-12 19:23 ceph -w output Jens Rehpoehler
@ 2011-12-13 17:57 ` Samuel Just
  2011-12-14  8:36   ` Jens Rehpöhler
  0 siblings, 1 reply; 8+ messages in thread
From: Samuel Just @ 2011-12-13 17:57 UTC (permalink / raw)
  To: ceph-devel

A pg is in the "creating" stage between when its pool is created and
when the OSD responsible for it first creates it.  Did this happen
after creating a pool?  Could you send the output of 'ceph pg dump'
and 'ceph osd dump'?
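
If it helps, one way to capture both dumps and pick out the stuck pgs
(just a sketch, assuming the standard ceph CLI and an admin keyring on
the box):

  # capture the full pg and osd maps to files you can attach
  ceph pg dump > pg_dump.txt
  ceph osd dump > osd_dump.txt

  # quick look at only the pgs still stuck in "creating"
  grep creating pg_dump.txt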

Thanks,
-Sam

On Mon, Dec 12, 2011 at 11:23 AM, Jens Rehpoehler
<jens.rehpoehler@filoo.de> wrote:
> Hi,
>
> could someone please explain the following to me:
>
> 2011-12-12 20:16:16.899140    pg v182377: 1190 pgs: 18 creating, 1172
> active+clean; 332 GB data, 652 GB used, 13270 GB / 14667 GB avail
> 2011-12-12 20:16:20.769268    pg v182378: 1190 pgs: 18 creating, 1172
> active+clean; 332 GB data, 652 GB used, 13270 GB / 14667 GB avail
>
> I created the filesystem a week ago. The status "18 creating" has remained
> since then. I have another cluster where "creating" never shows up.
>
> Anything to worry about? Any way to debug this?
>
> Thanks a lot !!
>
> Jens
>
> PS: it's the first time I've written to a list, so please be patient with
> me if I break any rules :)
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-13 17:57 ` Samuel Just
@ 2011-12-14  8:36   ` Jens Rehpöhler
  2011-12-14 16:43     ` Tommi Virtanen
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Rehpöhler @ 2011-12-14  8:36 UTC (permalink / raw)
  To: Samuel Just; +Cc: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 1349 bytes --]

Hi Sam,

thank you for the reply.

Attached you will find the output you asked for. Is there any limit on
the number of pools? We create a pool for every customer and store
their VM images in those pools, so we will create a lot of pools over
time.
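
For context, the per-customer workflow is roughly the following (a
sketch with hypothetical pool and image names, using the rados and rbd
command line tools):

  # one pool per customer, then that customer's VM images inside it
  rados mkpool customer-4711
  rbd create vm-disk-1 --pool customer-4711 --size 10240   # size in MB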

Thanks a lot

Jens

On 13.12.2011 18:57, Samuel Just wrote:
> A pg is in the "creating" stage between when its pool is created and
> when the OSD responsible for it first creates it.  Did this happen
> after creating a pool?  Could you send the output of 'ceph pg dump'
> and 'ceph osd dump'?
>
> Thanks,
> -Sam
>
> On Mon, Dec 12, 2011 at 11:23 AM, Jens Rehpoehler
> <jens.rehpoehler@filoo.de> wrote:
>> Hi,
>>
>> could someone please explain the following to me:
>>
>> 2011-12-12 20:16:16.899140    pg v182377: 1190 pgs: 18 creating, 1172
>> active+clean; 332 GB data, 652 GB used, 13270 GB / 14667 GB avail
>> 2011-12-12 20:16:20.769268    pg v182378: 1190 pgs: 18 creating, 1172
>> active+clean; 332 GB data, 652 GB used, 13270 GB / 14667 GB avail
>>
>> I created the filesystem a week ago. The status "18 creating" has remained
>> since then. I have another cluster where "creating" never shows up.
>>
>> Anything to worry about? Any way to debug this?
>>
>> Thanks a lot !!
>>
>> Jens
>>
>> PS: it's the first time I've written to a list, so please be patient with
>> me if I break any rules :)

[-- Attachment #2: pg_dump.txt.gz --]
[-- Type: application/x-gzip, Size: 24833 bytes --]

[-- Attachment #3: ods_dump.txt.gz --]
[-- Type: application/x-gzip, Size: 816 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-14  8:36   ` Jens Rehpöhler
@ 2011-12-14 16:43     ` Tommi Virtanen
  2011-12-15  9:45       ` Jens Rehpöhler
  0 siblings, 1 reply; 8+ messages in thread
From: Tommi Virtanen @ 2011-12-14 16:43 UTC (permalink / raw)
  To: Jens Rehpöhler; +Cc: Samuel Just, ceph-devel

On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@filoo.de> wrote:
> Attached you will find the output you asked for. Is there any limit on
> the number of pools? We create a pool for every customer and store
> their VM images in those pools, so we will create a lot of pools over
> time.

Each pool gets its own set of PGs (Placement Groups). An OSD that
manages too many PGs will use a lot of RAM. What is "too many" is
debatable, and really up to benchmarks, but considering we recommend
about 100 PGs/OSD as a starting point, you probably don't want to go
two orders of magnitude above that.
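
As a rough back-of-the-envelope check (the numbers below are purely
illustrative; your pg_num per pool and replication level may differ):

  # pgs per osd ~= pools * pg_num_per_pool * replicas / osd_count
  # e.g. 500 pools * 8 pgs each * 2 replicas spread over 4 osds:
  echo $(( 500 * 8 * 2 / 4 ))   # -> 2000, far beyond the ~100/osd guideline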
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-14 16:43     ` Tommi Virtanen
@ 2011-12-15  9:45       ` Jens Rehpöhler
  2011-12-15 17:35         ` Samuel Just
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Rehpöhler @ 2011-12-15  9:45 UTC (permalink / raw)
  To: Tommi Virtanen; +Cc: Samuel Just, ceph-devel

On 14.12.2011 17:43, Tommi Virtanen wrote:
> On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@filoo.de> wrote:
>> Attached you will find the output you asked for. Is there any limit on
>> the number of pools? We create a pool for every customer and store
>> their VM images in those pools, so we will create a lot of pools over
>> time.
> Each pool gets its own set of PGs (Placement Groups). An OSD that
> manages too many PGs will use a lot of RAM. What is "too many" is
> debatable, and really up to benchmarks, but considering we recommend
> about 100 PGs/OSD as a starting point, you probably don't want to go
> two orders of magnitude above that.
OK, that will serve our needs. Only the "creating" question remains.

Any answers to that?

Thanks a lot !

Jens

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-15  9:45       ` Jens Rehpöhler
@ 2011-12-15 17:35         ` Samuel Just
  2011-12-16 11:14           ` Jens Rehpöhler
  0 siblings, 1 reply; 8+ messages in thread
From: Samuel Just @ 2011-12-15 17:35 UTC (permalink / raw)
  To: ceph-devel

Sorry for the delay.  It looks like you hit a corner case in our crush
implementation.  The short version is that this bug got fixed last
week in commit 14f8f00e579083db542568a60cd23d50055c92a3.

The long version is that you have osd.3 and osd.4, but not osd.0,
osd.1, or osd.2.  The pgs stuck in creating are the ones mapped
specifically to osds 0, 1, and 2.  A pg ending in p# (like pg1.0p0) is
supposed to map to osd.0 if possible.  With the above patch, those pgs
should remap to available osds.
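
A quick way to confirm this is the case (a sketch against the
pg_dump.txt sent earlier; the exact column layout may vary between
versions) is to list the stuck pgs and look for the p0/p1/p2 suffix
that pins them to the missing osd ids:

  grep creating pg_dump.txt | awk '{print $1}'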

-Sam

On Thu, Dec 15, 2011 at 1:45 AM, Jens Rehpöhler
<jens.rehpoehler@filoo.de> wrote:
> On 14.12.2011 17:43, Tommi Virtanen wrote:
>> On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@filoo.de> wrote:
>>> Attached you will find the output you asked for. Is there any limit on
>>> the number of pools? We create a pool for every customer and store
>>> their VM images in those pools, so we will create a lot of pools over
>>> time.
>> Each pool gets its own set of PGs (Placement Groups). An OSD that
>> manages too many PGs will use a lot of RAM. What is "too many" is
>> debatable, and really up to benchmarks, but considering we recommend
>> about 100 PGs/OSD as a starting point, you probably don't want to go
>> two orders of magnitude above that.
> OK, that will serve our needs. Only the "creating" question remains.
>
> Any answers to that?
>
> Thanks a lot !
>
> Jens
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-15 17:35         ` Samuel Just
@ 2011-12-16 11:14           ` Jens Rehpöhler
  2012-01-04 11:51             ` Jens Rehpöhler
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Rehpöhler @ 2011-12-16 11:14 UTC (permalink / raw)
  To: Samuel Just; +Cc: ceph-devel

Hi Sam,

thanks for the answers.

We began with osd3 and osd4 for internal reasons. osd1 to 3 will be
added in the near future.

I will wait until the patch is in the stable branch.

Thanks again

Jens


On 15.12.2011 18:35, Samuel Just wrote:
> Sorry for the delay.  It looks like you hit a corner case in our crush
> implementation.  The short version is that this bug got fixed last
> week in commit 14f8f00e579083db542568a60cd23d50055c92a3.
>
> The long version is that you have osd.3 and osd.4, but not osd.0,
> osd.1, or osd.2.  The pgs stuck in creating are the ones mapped
> specifically to osds 0, 1, and 2.  A pg ending in p# (like pg1.0p0) is
> supposed to map to osd.0 if possible.  With the above patch, those pgs
> should remap to available osds.
>
> -Sam
>
> On Thu, Dec 15, 2011 at 1:45 AM, Jens Rehpöhler
> <jens.rehpoehler@filoo.de> wrote:
>> On 14.12.2011 17:43, Tommi Virtanen wrote:
>>> On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@filoo.de> wrote:
>>>> Attached you will find the output you asked for. Is there any limit on
>>>> the number of pools? We create a pool for every customer and store
>>>> their VM images in those pools, so we will create a lot of pools over
>>>> time.
>>> Each pool gets its own set of PGs (Placement Groups). An OSD that
>>> manages too many PGs will use a lot of RAM. What is "too many" is
>>> debatable, and really up to benchmarks, but considering we recommend
>>> about 100 PGs/OSD as a starting point, you probably don't want to go
>>> two orders of magnitude above that.
>> OK, that will serve our needs. Only the "creating" question remains.
>>
>> Any answers to that?
>>
>> Thanks a lot !
>>
>> Jens
>>
>

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ceph -w output
  2011-12-16 11:14           ` Jens Rehpöhler
@ 2012-01-04 11:51             ` Jens Rehpöhler
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Rehpöhler @ 2012-01-04 11:51 UTC (permalink / raw)
  To: Samuel Just; +Cc: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 2458 bytes --]

Hi Sam,

just to inform you: after upgrading to the latest master (ceph version
0.39-210-gdf84594, commit df84594f205bb37d3b062b5b3b9cfd224e6c57d2),
the pgs in the "creating" state disappeared. The bugfix seems to work.
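
For anyone who hits this later, a quick post-upgrade check (same ceph
CLI assumptions as above) is that nothing is left in "creating":

  ceph pg dump | grep -c creating   # should now print 0
  ceph -s                           # summary should show all pgs active+clean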

Thx again

Jens

On 16.12.2011 12:14, Jens Rehpöhler wrote:
> Hi Sam,
>
> thanks for the answers.
>
> We began with osd3 and osd4 for internal reasons. osd1 to 3 will be
> added in the near future.
>
> I will wait until the patch is in the stable branch.
>
> Thanks again
>
> Jens
>
>
> On 15.12.2011 18:35, Samuel Just wrote:
>> Sorry for the delay.  It looks like you hit a corner case in our crush
>> implementation.  The short version is that this bug got fixed last
>> week in commit 14f8f00e579083db542568a60cd23d50055c92a3.
>>
>> The long version is that you have osd.3 and osd.4, but not osd.0,
>> osd.1, or osd.2.  The pgs stuck in creating are the ones mapped
>> specifically to osds 0, 1, and 2.  A pg ending in p# (like pg1.0p0) is
>> supposed to map to osd.0 if possible.  With the above patch, those pgs
>> should remap to available osds.
>>
>> -Sam
>>
>> On Thu, Dec 15, 2011 at 1:45 AM, Jens Rehpöhler
>> <jens.rehpoehler@filoo.de> wrote:
>>> On 14.12.2011 17:43, Tommi Virtanen wrote:
>>>> On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@filoo.de> wrote:
>>>>> Attached you will find the output you asked for. Is there any limit on
>>>>> the number of pools? We create a pool for every customer and store
>>>>> their VM images in those pools, so we will create a lot of pools over
>>>>> time.
>>>> Each pool gets its own set of PGs (Placement Groups). An OSD that
>>>> manages too many PGs will use a lot of RAM. What is "too many" is
>>>> debatable, and really up to benchmarks, but considering we recommend
>>>> about 100 PGs/OSD as a starting point, you probably don't want to go
>>>> two orders of magnitude above that.
>>> OK, that will serve our needs. Only the "creating" question remains.
>>>
>>> Any answers to that?
>>>
>>> Thanks a lot !
>>>
>>> Jens
>>>


-- 
Kind regards

Jens Rehpöhler

----------------------------------------------------------------------
Filoo GmbH
Moltkestr. 25a
33330 Gütersloh
HRB4355 AG Gütersloh

Managing Directors: S.Grewing | J.Rehpöhler | C.Kunz
Phone: +49 5241 8673012 | Mobile: +49 151 54645798
Hotline: 07000-3378658 (14 ct/min) | Fax: +49 5241 8673020



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 262 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2012-01-04 11:51 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-12-12 19:23 ceph -w output Jens Rehpoehler
2011-12-13 17:57 ` Samuel Just
2011-12-14  8:36   ` Jens Rehpöhler
2011-12-14 16:43     ` Tommi Virtanen
2011-12-15  9:45       ` Jens Rehpöhler
2011-12-15 17:35         ` Samuel Just
2011-12-16 11:14           ` Jens Rehpöhler
2012-01-04 11:51             ` Jens Rehpöhler
