* NO pg created for erasure-coded pool
@ 2014-10-14  9:07 ghislain.chevalier
  2014-10-14 10:11 ` Loic Dachary
  0 siblings, 1 reply; 4+ messages in thread
From: ghislain.chevalier @ 2014-10-14  9:07 UTC (permalink / raw)
  To: ceph-devel

Hi all,

Context :
Ceph : Firefly 0.80.6
Sandbox Platform  : Ubuntu 12.04 LTS, 5 VM (VMware), 3 mons, 10 osd


Issue:
I created an erasure-coded pool using the default profile 
--> ceph osd pool create ecpool 128 128 erasure default
The erasure-code rule was dynamically created and associated with the pool.
root@p-sbceph14:/etc/ceph# ceph osd crush rule dump erasure-code
{ "rule_id": 7,
  "rule_name": "erasure-code",
  "ruleset": 52,
  "type": 3,
  "min_size": 3,
  "max_size": 20,
  "steps": [
        { "op": "set_chooseleaf_tries",
          "num": 5},
        { "op": "take",
          "item": -1,
          "item_name": "default"},
        { "op": "chooseleaf_indep",
          "num": 0,
          "type": "host"},
        { "op": "emit"}]}
root@p-sbceph14:/var/log/ceph# ceph osd pool get ecpool crush_ruleset
crush_ruleset: 52

No error message was displayed at pool creation, but no PGs were created.
--> rados lspools confirms the pool was created, but rados/ceph df shows no PG for this pool.

The command "rados -p ecpool put services /etc/services" hangs (stays inactive),
and the following message appears in ceph.log:
2014-10-14 10:36:50.189432 osd.5 10.192.134.123:6804/21505 799 : [WRN] slow request 960.230073 seconds old, received at 2014-10-14 10:20:49.959255: osd_op(client.1192643.0:1 services [writefull 0~19281] 100.5a48a9c2 ondisk+write e11869) v4 currently waiting for pg to exist locally

I don't know if I missed something or if the problem lies somewhere else.
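
For reference, here are the commands I would use to check whether any PG of this pool maps to OSDs (pool id 100 is deduced from the slow-request line above, and 100.0 is only an example PG id):

ceph osd map ecpool services     # shows which PG and OSDs the object "services" maps to
ceph pg map 100.0                # shows the up/acting OSD set of one PG of the pool
ceph pg dump_stuck inactive      # lists PGs that never became active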

Best regards

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: NO pg created for erasure-coded pool
  2014-10-14  9:07 NO pg created for erasure-coded pool ghislain.chevalier
@ 2014-10-14 10:11 ` Loic Dachary
  2014-10-14 13:20   ` ghislain.chevalier
  0 siblings, 1 reply; 4+ messages in thread
From: Loic Dachary @ 2014-10-14 10:11 UTC (permalink / raw)
  To: ghislain.chevalier, ceph-devel




On 14/10/2014 02:07, ghislain.chevalier@orange.com wrote:
> Hi all,
> 
> Context :
> Ceph : Firefly 0.80.6
> Sandbox Platform  : Ubuntu 12.04 LTS, 5 VM (VMware), 3 mons, 10 osd
> 
> 
> Issue:
> I created an erasure-coded pool using the default profile 
> --> ceph osd pool create ecpool 128 128 erasure default
> the erasure-code rule was dynamically created and associated to the pool.
> root@p-sbceph14:/etc/ceph# ceph osd crush rule dump erasure-code
> { "rule_id": 7,
>   "rule_name": "erasure-code",
>   "ruleset": 52,
>   "type": 3,
>   "min_size": 3,
>   "max_size": 20,
>   "steps": [
>         { "op": "set_chooseleaf_tries",
>           "num": 5},
>         { "op": "take",
>           "item": -1,
>           "item_name": "default"},
>         { "op": "chooseleaf_indep",
>           "num": 0,
>           "type": "host"},
>         { "op": "emit"}]}
> root@p-sbceph14:/var/log/ceph# ceph osd pool get ecpool crush_ruleset
> crush_ruleset: 52

> No error message was displayed at pool creation but no pgs were created.
> --> rados lspools confirms the pool is created but rados/ceph df shows no pg for this pool
> 
> The command  "rados -p ecpool put services /etc/services" is inactive (stalled)
> and the following message is encountered in ceph.log
> 2014-10-14 10:36:50.189432 osd.5 10.192.134.123:6804/21505 799 : [WRN] slow request 960.230073 seconds old, received at 2014-10-14 10:20:49.959255: osd_op(client.1192643.0:1 services [writefull 0~19281] 100.5a48a9c2 ondisk+write e11869) v4 currently waiting for pg to exist locally
> 
> I don't know if I missed something or if the problem is somewhere else..

The erasure-code rule displayed needs at least three hosts. If there are not enough hosts with OSDs, the mapping will fail and the put will hang until enough OSDs become available to complete the mapping of OSDs to the PGs. What does your ceph osd tree show?
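
To confirm, you can check the erasure code profile the pool was created with; assuming it has not been modified, the Firefly default profile is k=2, m=1, i.e. three chunks, and the rule places each chunk on a different host:

ceph osd erasure-code-profile get default
# typically prints something like (values may differ if the profile was changed):
#   k=2
#   m=1
#   plugin=jerasure
#   technique=reed_sol_van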

Cheers

> 
> Best regards 

-- 
Loïc Dachary, Artisan Logiciel Libre



^ permalink raw reply	[flat|nested] 4+ messages in thread

* RE: NO pg created for erasure-coded pool
  2014-10-14 10:11 ` Loic Dachary
@ 2014-10-14 13:20   ` ghislain.chevalier
  2014-10-14 14:44     ` Loic Dachary
  0 siblings, 1 reply; 4+ messages in thread
From: ghislain.chevalier @ 2014-10-14 13:20 UTC (permalink / raw)
  To: Loic Dachary, ceph-devel

Hi,

Thanks Loïc for your quick reply.

Here is the result of ceph osd tree:

As shown at the last Ceph Day in Paris, we have multiple roots, but ruleset 52 takes root default in the crushmap.

# id    weight  type name       up/down reweight
-100    0.09998 root diskroot
-110    0.04999         diskclass fastsata
0       0.009995                        osd.0   up      1
1       0.009995                        osd.1   up      1
2       0.009995                        osd.2   up      1
3       0.009995                        osd.3   up      1
-120    0.04999         diskclass slowsata
4       0.009995                        osd.4   up      1
5       0.009995                        osd.5   up      1
6       0.009995                        osd.6   up      1
7       0.009995                        osd.7   up      1
8       0.009995                        osd.8   up      1
9       0.009995                        osd.9   up      1
-5      0.2     root approot
-50     0.09999         appclient apprgw
-501    0.04999                 appclass fastrgw
0       0.009995                                osd.0   up      1
1       0.009995                                osd.1   up      1
2       0.009995                                osd.2   up      1
3       0.009995                                osd.3   up      1
-502    0.04999                 appclass slowrgw
4       0.009995                                osd.4   up      1
5       0.009995                                osd.5   up      1
6       0.009995                                osd.6   up      1
7       0.009995                                osd.7   up      1
8       0.009995                                osd.8   up      1
9       0.009995                                osd.9   up      1
-51     0.09999         appclient appstd
-511    0.04999                 appclass faststd
0       0.009995                                osd.0   up      1
1       0.009995                                osd.1   up      1
2       0.009995                                osd.2   up      1
3       0.009995                                osd.3   up      1
-512    0.04999                 appclass slowstd
4       0.009995                                osd.4   up      1
5       0.009995                                osd.5   up      1
6       0.009995                                osd.6   up      1
7       0.009995                                osd.7   up      1
8       0.009995                                osd.8   up      1
9       0.009995                                osd.9   up      1
-1      0.09999 root default
-2      0.09999         datacenter nanterre
-3      0.09999                 platform sandbox
-13     0.01999                         host p-sbceph13
0       0.009995                                        osd.0   up      1
5       0.009995                                        osd.5   up      1
-14     0.01999                         host p-sbceph14
1       0.009995                                        osd.1   up      1
6       0.009995                                        osd.6   up      1
-15     0.01999                         host p-sbceph15
2       0.009995                                        osd.2   up      1
7       0.009995                                        osd.7   up      1
-12     0.01999                         host p-sbceph12
3       0.009995                                        osd.3   up      1
8       0.009995                                        osd.8   up      1
-11     0.01999                         host p-sbceph11
4       0.009995                                        osd.4   up      1
9       0.009995                                        osd.9   up      1
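
For completeness, here is the command I would use to double-check the pool definition (it should show crush_ruleset 52 and pg_num 128 on a single line; the exact fields may vary):

ceph osd dump | grep ecpool
# shows the pool's size, min_size, crush_ruleset, pg_num and pgp_num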

Best regards

-----Original Message-----
From: Loic Dachary [mailto:loic@dachary.org]
Sent: Tuesday, October 14, 2014 12:12
To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@vger.kernel.org
Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool



On 14/10/2014 02:07, ghislain.chevalier@orange.com wrote:
> Hi all,
> 
> Context :
> Ceph : Firefly 0.80.6
> Sandbox Platform  : Ubuntu 12.04 LTS, 5 VM (VMware), 3 mons, 10 osd
> 
> 
> Issue:
> I created an erasure-coded pool using the default profile
> --> ceph osd pool create ecpool 128 128 erasure default
> the erasure-code rule was dynamically created and associated to the pool.
> root@p-sbceph14:/etc/ceph# ceph osd crush rule dump erasure-code
> { "rule_id": 7,
>   "rule_name": "erasure-code",
>   "ruleset": 52,
>   "type": 3,
>   "min_size": 3,
>   "max_size": 20,
>   "steps": [
>         { "op": "set_chooseleaf_tries",
>           "num": 5},
>         { "op": "take",
>           "item": -1,
>           "item_name": "default"},
>         { "op": "chooseleaf_indep",
>           "num": 0,
>           "type": "host"},
>         { "op": "emit"}]}
> root@p-sbceph14:/var/log/ceph# ceph osd pool get ecpool crush_ruleset
> crush_ruleset: 52

> No error message was displayed at pool creation but no pgs were created.
> --> rados lspools confirms the pool is created but rados/ceph df shows no pg for this pool
> 
> The command  "rados -p ecpool put services /etc/services" is inactive 
> (stalled) and the following message is encountered in ceph.log
> 2014-10-14 10:36:50.189432 osd.5 10.192.134.123:6804/21505 799 : [WRN] 
> slow request 960.230073 seconds old, received at 2014-10-14 
> 10:20:49.959255: osd_op(client.1192643.0:1 services [writefull 
> 0~19281] 100.5a48a9c2 ondisk+write e11869) v4 currently waiting for pg 
> to exist locally
> 
> I don't know if I missed something or if the problem is somewhere else..

The erasure-code rule displayed will need at least three hosts. If there are not enough hosts with OSDs the mapping will fail and put will hang until an OSD becomes available to complete the mapping of OSDs to the PGs. What does your ceph osd tree shows ?

Cheers

> 
> Best regards

--
Loïc Dachary, Artisan Logiciel Libre



^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: NO pg created for erasure-coded pool
  2014-10-14 13:20   ` ghislain.chevalier
@ 2014-10-14 14:44     ` Loic Dachary
  0 siblings, 0 replies; 4+ messages in thread
From: Loic Dachary @ 2014-10-14 14:44 UTC (permalink / raw)
  To: ghislain.chevalier, ceph-devel


Hi,

The ruleset has

{ "op": "chooseleaf_indep",
          "num": 0,
          "type": "host"},

but it does not look like your tree has a bucket of type host in it.
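
If in doubt, the mapping can be simulated offline with crushtool (ruleset 52 and three chunks are taken from the rule dump above; the file name is arbitrary):

ceph osd getcrushmap -o /tmp/crushmap
crushtool -i /tmp/crushmap --test --rule 52 --num-rep 3 --show-bad-mappings

If crushtool reports bad mappings for rule 52, the rule cannot find enough buckets of type host under the root it takes.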

Cheers

On 14/10/2014 06:20, ghislain.chevalier@orange.com wrote:
> HI,
> 
> THX Loïc for your quick reply.
> 
> Here is the result of ceph osd tree
> 
> As showed at the last ceph day in Paris, we have multiple root but the ruleset 52 entered the crushmap on root default.
> 
> # id    weight  type name       up/down reweight
> -100    0.09998 root diskroot
> -110    0.04999         diskclass fastsata
> 0       0.009995                        osd.0   up      1
> 1       0.009995                        osd.1   up      1
> 2       0.009995                        osd.2   up      1
> 3       0.009995                        osd.3   up      1
> -120    0.04999         diskclass slowsata
> 4       0.009995                        osd.4   up      1
> 5       0.009995                        osd.5   up      1
> 6       0.009995                        osd.6   up      1
> 7       0.009995                        osd.7   up      1
> 8       0.009995                        osd.8   up      1
> 9       0.009995                        osd.9   up      1
> -5      0.2     root approot
> -50     0.09999         appclient apprgw
> -501    0.04999                 appclass fastrgw
> 0       0.009995                                osd.0   up      1
> 1       0.009995                                osd.1   up      1
> 2       0.009995                                osd.2   up      1
> 3       0.009995                                osd.3   up      1
> -502    0.04999                 appclass slowrgw
> 4       0.009995                                osd.4   up      1
> 5       0.009995                                osd.5   up      1
> 6       0.009995                                osd.6   up      1
> 7       0.009995                                osd.7   up      1
> 8       0.009995                                osd.8   up      1
> 9       0.009995                                osd.9   up      1
> -51     0.09999         appclient appstd
> -511    0.04999                 appclass faststd
> 0       0.009995                                osd.0   up      1
> 1       0.009995                                osd.1   up      1
> 2       0.009995                                osd.2   up      1
> 3       0.009995                                osd.3   up      1
> -512    0.04999                 appclass slowstd
> 4       0.009995                                osd.4   up      1
> 5       0.009995                                osd.5   up      1
> 6       0.009995                                osd.6   up      1
> 7       0.009995                                osd.7   up      1
> 8       0.009995                                osd.8   up      1
> 9       0.009995                                osd.9   up      1
> -1      0.09999 root default
> -2      0.09999         datacenter nanterre
> -3      0.09999                 platform sandbox
> -13     0.01999                         host p-sbceph13
> 0       0.009995                                        osd.0   up      1
> 5       0.009995                                        osd.5   up      1
> -14     0.01999                         host p-sbceph14
> 1       0.009995                                        osd.1   up      1
> 6       0.009995                                        osd.6   up      1
> -15     0.01999                         host p-sbceph15
> 2       0.009995                                        osd.2   up      1
> 7       0.009995                                        osd.7   up      1
> -12     0.01999                         host p-sbceph12
> 3       0.009995                                        osd.3   up      1
> 8       0.009995                                        osd.8   up      1
> -11     0.01999                         host p-sbceph11
> 4       0.009995                                        osd.4   up      1
> 9       0.009995                                        osd.9   up      1
> 
> Best regards
> 
> -----Original Message-----
> From: Loic Dachary [mailto:loic@dachary.org]
> Sent: Tuesday, October 14, 2014 12:12
> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@vger.kernel.org
> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
> 
> 
> 
> On 14/10/2014 02:07, ghislain.chevalier@orange.com wrote:
>> Hi all,
>>
>> Context :
>> Ceph : Firefly 0.80.6
>> Sandbox Platform  : Ubuntu 12.04 LTS, 5 VM (VMware), 3 mons, 10 osd
>>
>>
>> Issue:
>> I created an erasure-coded pool using the default profile
>> --> ceph osd pool create ecpool 128 128 erasure default
>> the erasure-code rule was dynamically created and associated to the pool.
>> root@p-sbceph14:/etc/ceph# ceph osd crush rule dump erasure-code
>> { "rule_id": 7,
>>   "rule_name": "erasure-code",
>>   "ruleset": 52,
>>   "type": 3,
>>   "min_size": 3,
>>   "max_size": 20,
>>   "steps": [
>>         { "op": "set_chooseleaf_tries",
>>           "num": 5},
>>         { "op": "take",
>>           "item": -1,
>>           "item_name": "default"},
>>         { "op": "chooseleaf_indep",
>>           "num": 0,
>>           "type": "host"},
>>         { "op": "emit"}]}
>> root@p-sbceph14:/var/log/ceph# ceph osd pool get ecpool crush_ruleset
>> crush_ruleset: 52
> 
>> No error message was displayed at pool creation but no pgs were created.
>> --> rados lspools confirms the pool is created but rados/ceph df shows no pg for this pool
>>
>> The command  "rados -p ecpool put services /etc/services" is inactive 
>> (stalled) and the following message is encountered in ceph.log
>> 2014-10-14 10:36:50.189432 osd.5 10.192.134.123:6804/21505 799 : [WRN] 
>> slow request 960.230073 seconds old, received at 2014-10-14 
>> 10:20:49.959255: osd_op(client.1192643.0:1 services [writefull 
>> 0~19281] 100.5a48a9c2 ondisk+write e11869) v4 currently waiting for pg 
>> to exist locally
>>
>> I don't know if I missed something or if the problem is somewhere else..
> 
> The erasure-code rule displayed will need at least three hosts. If there are not enough hosts with OSDs the mapping will fail and put will hang until an OSD becomes available to complete the mapping of OSDs to the PGs. What does your ceph osd tree shows ?
> 
> Cheers
> 
>>
>> Best regards
> 
> --
> Loïc Dachary, Artisan Logiciel Libre

-- 
Loïc Dachary, Artisan Logiciel Libre



^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2014-10-14 14:44 UTC | newest]

Thread overview: 4+ messages
-- links below jump to the message on this page --
2014-10-14  9:07 NO pg created for erasure-coded pool ghislain.chevalier
2014-10-14 10:11 ` Loic Dachary
2014-10-14 13:20   ` ghislain.chevalier
2014-10-14 14:44     ` Loic Dachary
