* Lots of radosgw-admin commands fail after upgrade
@ 2016-11-01 13:13 Mustafa Muhammad
  2016-11-01 14:04 ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2016-11-01 13:13 UTC (permalink / raw)
  To: ceph-devel

Hello,
I have production cluster configured with multiple placement pools according to:

http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw

After upgrading to Jewel, most radosgw-admin commands are failing, probably
because there is no realm:


# radosgw-admin realm list
{
    "default_info": "",
    "realms": []
}


# radosgw-admin zone get
unable to initialize zone: (2) No such file or directory


# radosgw-admin regionmap get
failed to read current period info: 2016-11-01 16:08:14.099948
7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
such file or directory(2) No such file or directory
{
    "zonegroups": [],
    "master_zonegroup": "",
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}


# radosgw-admin bucket stats
2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
zone for master_zone=
couldn't init storage provider

I have previous region.conf.json and zone.conf.json, how can I make
everything work again? Will creating new realm fix this?

Regards
Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2016-11-01 13:13 Lots of radosgw-admin commands fail after upgrade Mustafa Muhammad
@ 2016-11-01 14:04 ` Orit Wasserman
       [not found]   ` <CAFehDbC6yBsbQCPVmwE+DCod-Xmbafp1MU_r1wYZ0xd_q3Dt3Q@mail.gmail.com>
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2016-11-01 14:04 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

Hi,
What version of Jewel are you using?
Can you try "radosgw-admin zone get --rgw-zone default" and
"radosgw-admin zonegroup get --rgw-zonegroup default"?

Orit

On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
> Hello,
> I have production cluster configured with multiple placement pools according to:
>
> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>
> After upgrading to Jewel, most radosgw-admin are failing, probably
> because there is no realm
>
>
> # radosgw-admin realm list
> {
>     "default_info": "",
>     "realms": []
> }
>
>
> # radosgw-admin zone get
> unable to initialize zone: (2) No such file or directory
>
>
> # radosgw-admin regionmap get
> failed to read current period info: 2016-11-01 16:08:14.099948
> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
> such file or directory(2) No such file or directory
> {
>     "zonegroups": [],
>     "master_zonegroup": "",
>     "bucket_quota": {
>         "enabled": false,
>         "max_size_kb": -1,
>         "max_objects": -1
>     },
>     "user_quota": {
>         "enabled": false,
>         "max_size_kb": -1,
>         "max_objects": -1
>     }
> }
>
>
> # radosgw-admin bucket stats
> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
> zone for master_zone=
> couldn't init storage provider
>
> I have previous region.conf.json and zone.conf.json, how can I make
> everything work again? Will creating new realm fix this?
>
> Regards
> Mustafa Muhammad
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
       [not found]   ` <CAFehDbC6yBsbQCPVmwE+DCod-Xmbafp1MU_r1wYZ0xd_q3Dt3Q@mail.gmail.com>
@ 2016-11-02  9:39     ` Orit Wasserman
       [not found]       ` <CAFehDbC1kRQV+rQbD_r-yFHD2ymWXCUR1go2nu6y7FtoWB_t7g@mail.gmail.com>
  2016-11-07  9:05       ` Mustafa Muhammad
  0 siblings, 2 replies; 21+ messages in thread
From: Orit Wasserman @ 2016-11-02  9:39 UTC (permalink / raw)
  To: Mustafa Muhammad, ceph-devel

Hi,
You have hit the master zone issue.
Here is a fix I prefer:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
It is very important to run the fix while the radosgw instances are down.
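
For reference, the workaround in that post boils down to something like the
following sketch (assuming the zonegroup and zone are both literally named
"default", and zonegroup.json is just a scratch file; the linked post is the
authoritative version):

  radosgw-admin zonegroup get --rgw-zonegroup default > zonegroup.json
  # edit zonegroup.json so that "master_zone" is set to "default", then:
  radosgw-admin zonegroup set --rgw-zonegroup default < zonegroup.json
  radosgw-admin period update --commit

Do all of this with every radosgw stopped, and only start the gateways again
afterwards.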

Good luck,
Orit

On Tue, Nov 1, 2016 at 10:07 PM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> On Tue, Nov 1, 2016 at 5:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> Hi,
>> what version of jewel are you using?
>> can you try raodsgw-admin zone get --rgw-zone default and
>> radosgw-admin zonegroup get --rgw-zonegroup default?
>>
> Hello, I am using 10.2.3
> #radosgw-admin zone get --rgw-zone default
> {
>     "id": "default",
>     "name": "default",
>     "domain_root": ".rgw",
>     "control_pool": ".rgw.control",
>     "gc_pool": ".rgw.gc",
>     "log_pool": ".log",
>     "intent_log_pool": ".intent-log",
>     "usage_log_pool": ".usage",
>     "user_keys_pool": ".users",
>     "user_email_pool": ".users.email",
>     "user_swift_pool": ".users.swift",
>     "user_uid_pool": ".users.uid",
>     "system_key": {
>         "access_key": "",
>         "secret_key": ""
>     },
>     "placement_pools": [],
>     "metadata_heap": ".rgw.meta",
>     "realm_id": ""
> }
>
> # radosgw-admin zonegroup get --rgw-zonegroup default
> {
>     "id": "default",
>     "name": "default",
>     "api_name": "",
>     "is_master": "true",
>     "endpoints": [],
>     "hostnames": [],
>     "hostnames_s3website": [],
>     "master_zone": "",
>     "zones": [
>         {
>             "id": "default",
>             "name": "default",
>             "endpoints": [],
>             "log_meta": "false",
>             "log_data": "false",
>             "bucket_index_max_shards": 0,
>             "read_only": "false"
>         }
>     ],
>     "placement_targets": [
>         {
>             "name": "cinema-placement",
>             "tags": []
>         },
>         {
>             "name": "cinema-source-placement",
>             "tags": []
>         },
>         {
>             "name": "default-placement",
>             "tags": []
>         },
>         {
>             "name": "erasure-placement",
>             "tags": []
>         },
>         {
>             "name": "share-placement",
>             "tags": []
>         },
>         {
>             "name": "share2016-placement",
>             "tags": []
>         },
>         {
>             "name": "test-placement",
>             "tags": []
>         }
>     ],
>     "default_placement": "default-placement",
>     "realm_id": ""
> }
>
>
> Thanks
> Mustafa
>
>> Orit
>>
>> On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>> Hello,
>>> I have production cluster configured with multiple placement pools according to:
>>>
>>> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>>>
>>> After upgrading to Jewel, most radosgw-admin are failing, probably
>>> because there is no realm
>>>
>>>
>>> # radosgw-admin realm list
>>> {
>>>     "default_info": "",
>>>     "realms": []
>>> }
>>>
>>>
>>> # radosgw-admin zone get
>>> unable to initialize zone: (2) No such file or directory
>>>
>>>
>>> # radosgw-admin regionmap get
>>> failed to read current period info: 2016-11-01 16:08:14.099948
>>> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
>>> such file or directory(2) No such file or directory
>>> {
>>>     "zonegroups": [],
>>>     "master_zonegroup": "",
>>>     "bucket_quota": {
>>>         "enabled": false,
>>>         "max_size_kb": -1,
>>>         "max_objects": -1
>>>     },
>>>     "user_quota": {
>>>         "enabled": false,
>>>         "max_size_kb": -1,
>>>         "max_objects": -1
>>>     }
>>> }
>>>
>>>
>>> # radosgw-admin bucket stats
>>> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
>>> zone for master_zone=
>>> couldn't init storage provider
>>>
>>> I have previous region.conf.json and zone.conf.json, how can I make
>>> everything work again? Will creating new realm fix this?
>>>
>>> Regards
>>> Mustafa Muhammad
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
       [not found]       ` <CAFehDbC1kRQV+rQbD_r-yFHD2ymWXCUR1go2nu6y7FtoWB_t7g@mail.gmail.com>
@ 2016-11-02 12:36         ` Orit Wasserman
  0 siblings, 0 replies; 21+ messages in thread
From: Orit Wasserman @ 2016-11-02 12:36 UTC (permalink / raw)
  To: Mustafa Muhammad, ceph-devel

Yes, it is required to stop the gateways while performing the workaround.
Your zone info changes will be preserved.
I recommend using 10.2.3 (the same version) for all gateways.
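
For example, with the stock systemd units something like the following stops
and later restarts every gateway on a host (the target name is an assumption
and depends on how the gateways were deployed):

  systemctl stop ceph-radosgw.target    # stop all radosgw instances on this host
  # ...run the workaround with radosgw-admin while nothing is serving...
  systemctl start ceph-radosgw.target   # bring the gateways back afterwards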


On Wed, Nov 2, 2016 at 1:28 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
> Thanks a lot, I'll apply it when possible, but I've changed zone info
> while RGWs are running before, is it strictly required to stop them?
> They are all Jewel 10.2.2
>
> Regards
> Mustafa
>
> On Wed, Nov 2, 2016 at 12:39 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> Hi,
>> You have hit the master zone issue.
>> Here is a fix I prefer:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
>> It is very important notice to run the fix when the radosgw is down.
>>
>> Good luck,
>> Orit
>>
>> On Tue, Nov 1, 2016 at 10:07 PM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> On Tue, Nov 1, 2016 at 5:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> Hi,
>>>> what version of jewel are you using?
>>>> can you try raodsgw-admin zone get --rgw-zone default and
>>>> radosgw-admin zonegroup get --rgw-zonegroup default?
>>>>
>>> Hello, I am using 10.2.3
>>> #radosgw-admin zone get --rgw-zone default
>>> {
>>>     "id": "default",
>>>     "name": "default",
>>>     "domain_root": ".rgw",
>>>     "control_pool": ".rgw.control",
>>>     "gc_pool": ".rgw.gc",
>>>     "log_pool": ".log",
>>>     "intent_log_pool": ".intent-log",
>>>     "usage_log_pool": ".usage",
>>>     "user_keys_pool": ".users",
>>>     "user_email_pool": ".users.email",
>>>     "user_swift_pool": ".users.swift",
>>>     "user_uid_pool": ".users.uid",
>>>     "system_key": {
>>>         "access_key": "",
>>>         "secret_key": ""
>>>     },
>>>     "placement_pools": [],
>>>     "metadata_heap": ".rgw.meta",
>>>     "realm_id": ""
>>> }
>>>
>>> # radosgw-admin zonegroup get --rgw-zonegroup default
>>> {
>>>     "id": "default",
>>>     "name": "default",
>>>     "api_name": "",
>>>     "is_master": "true",
>>>     "endpoints": [],
>>>     "hostnames": [],
>>>     "hostnames_s3website": [],
>>>     "master_zone": "",
>>>     "zones": [
>>>         {
>>>             "id": "default",
>>>             "name": "default",
>>>             "endpoints": [],
>>>             "log_meta": "false",
>>>             "log_data": "false",
>>>             "bucket_index_max_shards": 0,
>>>             "read_only": "false"
>>>         }
>>>     ],
>>>     "placement_targets": [
>>>         {
>>>             "name": "cinema-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "cinema-source-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "default-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "erasure-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "share-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "share2016-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "test-placement",
>>>             "tags": []
>>>         }
>>>     ],
>>>     "default_placement": "default-placement",
>>>     "realm_id": ""
>>> }
>>>
>>>
>>> Thanks
>>> Mustafa
>>>
>>>> Orit
>>>>
>>>> On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>>>> Hello,
>>>>> I have production cluster configured with multiple placement pools according to:
>>>>>
>>>>> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>>>>>
>>>>> After upgrading to Jewel, most radosgw-admin are failing, probably
>>>>> because there is no realm
>>>>>
>>>>>
>>>>> # radosgw-admin realm list
>>>>> {
>>>>>     "default_info": "",
>>>>>     "realms": []
>>>>> }
>>>>>
>>>>>
>>>>> # radosgw-admin zone get
>>>>> unable to initialize zone: (2) No such file or directory
>>>>>
>>>>>
>>>>> # radosgw-admin regionmap get
>>>>> failed to read current period info: 2016-11-01 16:08:14.099948
>>>>> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
>>>>> such file or directory(2) No such file or directory
>>>>> {
>>>>>     "zonegroups": [],
>>>>>     "master_zonegroup": "",
>>>>>     "bucket_quota": {
>>>>>         "enabled": false,
>>>>>         "max_size_kb": -1,
>>>>>         "max_objects": -1
>>>>>     },
>>>>>     "user_quota": {
>>>>>         "enabled": false,
>>>>>         "max_size_kb": -1,
>>>>>         "max_objects": -1
>>>>>     }
>>>>> }
>>>>>
>>>>>
>>>>> # radosgw-admin bucket stats
>>>>> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
>>>>> zone for master_zone=
>>>>> couldn't init storage provider
>>>>>
>>>>> I have previous region.conf.json and zone.conf.json, how can I make
>>>>> everything work again? Will creating new realm fix this?
>>>>>
>>>>> Regards
>>>>> Mustafa Muhammad
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2016-11-02  9:39     ` Orit Wasserman
       [not found]       ` <CAFehDbC1kRQV+rQbD_r-yFHD2ymWXCUR1go2nu6y7FtoWB_t7g@mail.gmail.com>
@ 2016-11-07  9:05       ` Mustafa Muhammad
  2016-11-08 11:21         ` Orit Wasserman
  1 sibling, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2016-11-07  9:05 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

I understood the script and applied it. "zone get" now works fine and shows
the realm, but "radosgw-admin zonegroup get" shows "master_zone":
"default" and a populated realm id, then after a minute it reverts to an
empty master_zone and realm id.
So I still get:
radosgw-admin bucket stats
2016-11-07 12:04:13.680779 7f7a88e929c0  0 zonegroup default missing
zone for master_zone=
couldn't init storage provider
What should I do?

Thanks
Mustafa

On Wed, Nov 2, 2016 at 12:39 PM, Orit Wasserman <owasserm@redhat.com> wrote:
> Hi,
> You have hit the master zone issue.
> Here is a fix I prefer:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
> It is very important notice to run the fix when the radosgw is down.
>
> Good luck,
> Orit
>
> On Tue, Nov 1, 2016 at 10:07 PM, Mustafa Muhammad
> <mustafa1024m@gmail.com> wrote:
>> On Tue, Nov 1, 2016 at 5:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>> Hi,
>>> what version of jewel are you using?
>>> can you try raodsgw-admin zone get --rgw-zone default and
>>> radosgw-admin zonegroup get --rgw-zonegroup default?
>>>
>> Hello, I am using 10.2.3
>> #radosgw-admin zone get --rgw-zone default
>> {
>>     "id": "default",
>>     "name": "default",
>>     "domain_root": ".rgw",
>>     "control_pool": ".rgw.control",
>>     "gc_pool": ".rgw.gc",
>>     "log_pool": ".log",
>>     "intent_log_pool": ".intent-log",
>>     "usage_log_pool": ".usage",
>>     "user_keys_pool": ".users",
>>     "user_email_pool": ".users.email",
>>     "user_swift_pool": ".users.swift",
>>     "user_uid_pool": ".users.uid",
>>     "system_key": {
>>         "access_key": "",
>>         "secret_key": ""
>>     },
>>     "placement_pools": [],
>>     "metadata_heap": ".rgw.meta",
>>     "realm_id": ""
>> }
>>
>> # radosgw-admin zonegroup get --rgw-zonegroup default
>> {
>>     "id": "default",
>>     "name": "default",
>>     "api_name": "",
>>     "is_master": "true",
>>     "endpoints": [],
>>     "hostnames": [],
>>     "hostnames_s3website": [],
>>     "master_zone": "",
>>     "zones": [
>>         {
>>             "id": "default",
>>             "name": "default",
>>             "endpoints": [],
>>             "log_meta": "false",
>>             "log_data": "false",
>>             "bucket_index_max_shards": 0,
>>             "read_only": "false"
>>         }
>>     ],
>>     "placement_targets": [
>>         {
>>             "name": "cinema-placement",
>>             "tags": []
>>         },
>>         {
>>             "name": "cinema-source-placement",
>>             "tags": []
>>         },
>>         {
>>             "name": "default-placement",
>>             "tags": []
>>         },
>>         {
>>             "name": "erasure-placement",
>>             "tags": []
>>         },
>>         {
>>             "name": "share-placement",
>>             "tags": []
>>         },
>>         {
>>             "name": "share2016-placement",
>>             "tags": []
>>         },
>>         {
>>             "name": "test-placement",
>>             "tags": []
>>         }
>>     ],
>>     "default_placement": "default-placement",
>>     "realm_id": ""
>> }
>>
>>
>> Thanks
>> Mustafa
>>
>>> Orit
>>>
>>> On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>>> Hello,
>>>> I have production cluster configured with multiple placement pools according to:
>>>>
>>>> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>>>>
>>>> After upgrading to Jewel, most radosgw-admin are failing, probably
>>>> because there is no realm
>>>>
>>>>
>>>> # radosgw-admin realm list
>>>> {
>>>>     "default_info": "",
>>>>     "realms": []
>>>> }
>>>>
>>>>
>>>> # radosgw-admin zone get
>>>> unable to initialize zone: (2) No such file or directory
>>>>
>>>>
>>>> # radosgw-admin regionmap get
>>>> failed to read current period info: 2016-11-01 16:08:14.099948
>>>> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
>>>> such file or directory(2) No such file or directory
>>>> {
>>>>     "zonegroups": [],
>>>>     "master_zonegroup": "",
>>>>     "bucket_quota": {
>>>>         "enabled": false,
>>>>         "max_size_kb": -1,
>>>>         "max_objects": -1
>>>>     },
>>>>     "user_quota": {
>>>>         "enabled": false,
>>>>         "max_size_kb": -1,
>>>>         "max_objects": -1
>>>>     }
>>>> }
>>>>
>>>>
>>>> # radosgw-admin bucket stats
>>>> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
>>>> zone for master_zone=
>>>> couldn't init storage provider
>>>>
>>>> I have previous region.conf.json and zone.conf.json, how can I make
>>>> everything work again? Will creating new realm fix this?
>>>>
>>>> Regards
>>>> Mustafa Muhammad
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2016-11-07  9:05       ` Mustafa Muhammad
@ 2016-11-08 11:21         ` Orit Wasserman
       [not found]           ` <CAFehDbDaDdMHTtxLqq8kjd5Xd9RePqDDCXtJm7_7UMCD7Q3LOg@mail.gmail.com>
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2016-11-08 11:21 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Mon, Nov 7, 2016 at 10:05 AM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> I understood the script and applied it, "zone get" works fine now with
> realm, but "radosgw-admin zonegroup get" gives "master_zone":
> "default" and realm id with value, then after a minute it goes back to
> empty master_zone and realm id.

Hi,
Is it possible you have an old radosgw-admin running (from Hammer)?
If so, you have encountered http://tracker.ceph.com/issues/17371; it will be
fixed in 10.2.4.
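
A quick way to spot a stale binary is to print the version on every host that
has radosgw or radosgw-admin installed, for example (the package names assume
an RPM-based install):

  radosgw-admin --version
  rpm -q ceph-radosgw ceph-common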
Can you provide logs?

Try the procedure again, and this time also run at the end:
radosgw-admin period update --commit

Orit

> So I still get:
> radosgw-admin bucket stats
> 2016-11-07 12:04:13.680779 7f7a88e929c0  0 zonegroup default missing
> zone for master_zone=
> couldn't init storage provider
> What should I do?
>
> Thanks
> Mustafa
>
> On Wed, Nov 2, 2016 at 12:39 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> Hi,
>> You have hit the master zone issue.
>> Here is a fix I prefer:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
>> It is very important notice to run the fix when the radosgw is down.
>>
>> Good luck,
>> Orit
>>
>> On Tue, Nov 1, 2016 at 10:07 PM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> On Tue, Nov 1, 2016 at 5:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> Hi,
>>>> what version of jewel are you using?
>>>> can you try raodsgw-admin zone get --rgw-zone default and
>>>> radosgw-admin zonegroup get --rgw-zonegroup default?
>>>>
>>> Hello, I am using 10.2.3
>>> #radosgw-admin zone get --rgw-zone default
>>> {
>>>     "id": "default",
>>>     "name": "default",
>>>     "domain_root": ".rgw",
>>>     "control_pool": ".rgw.control",
>>>     "gc_pool": ".rgw.gc",
>>>     "log_pool": ".log",
>>>     "intent_log_pool": ".intent-log",
>>>     "usage_log_pool": ".usage",
>>>     "user_keys_pool": ".users",
>>>     "user_email_pool": ".users.email",
>>>     "user_swift_pool": ".users.swift",
>>>     "user_uid_pool": ".users.uid",
>>>     "system_key": {
>>>         "access_key": "",
>>>         "secret_key": ""
>>>     },
>>>     "placement_pools": [],
>>>     "metadata_heap": ".rgw.meta",
>>>     "realm_id": ""
>>> }
>>>
>>> # radosgw-admin zonegroup get --rgw-zonegroup default
>>> {
>>>     "id": "default",
>>>     "name": "default",
>>>     "api_name": "",
>>>     "is_master": "true",
>>>     "endpoints": [],
>>>     "hostnames": [],
>>>     "hostnames_s3website": [],
>>>     "master_zone": "",
>>>     "zones": [
>>>         {
>>>             "id": "default",
>>>             "name": "default",
>>>             "endpoints": [],
>>>             "log_meta": "false",
>>>             "log_data": "false",
>>>             "bucket_index_max_shards": 0,
>>>             "read_only": "false"
>>>         }
>>>     ],
>>>     "placement_targets": [
>>>         {
>>>             "name": "cinema-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "cinema-source-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "default-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "erasure-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "share-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "share2016-placement",
>>>             "tags": []
>>>         },
>>>         {
>>>             "name": "test-placement",
>>>             "tags": []
>>>         }
>>>     ],
>>>     "default_placement": "default-placement",
>>>     "realm_id": ""
>>> }
>>>
>>>
>>> Thanks
>>> Mustafa
>>>
>>>> Orit
>>>>
>>>> On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>>>> Hello,
>>>>> I have production cluster configured with multiple placement pools according to:
>>>>>
>>>>> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>>>>>
>>>>> After upgrading to Jewel, most radosgw-admin are failing, probably
>>>>> because there is no realm
>>>>>
>>>>>
>>>>> # radosgw-admin realm list
>>>>> {
>>>>>     "default_info": "",
>>>>>     "realms": []
>>>>> }
>>>>>
>>>>>
>>>>> # radosgw-admin zone get
>>>>> unable to initialize zone: (2) No such file or directory
>>>>>
>>>>>
>>>>> # radosgw-admin regionmap get
>>>>> failed to read current period info: 2016-11-01 16:08:14.099948
>>>>> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
>>>>> such file or directory(2) No such file or directory
>>>>> {
>>>>>     "zonegroups": [],
>>>>>     "master_zonegroup": "",
>>>>>     "bucket_quota": {
>>>>>         "enabled": false,
>>>>>         "max_size_kb": -1,
>>>>>         "max_objects": -1
>>>>>     },
>>>>>     "user_quota": {
>>>>>         "enabled": false,
>>>>>         "max_size_kb": -1,
>>>>>         "max_objects": -1
>>>>>     }
>>>>> }
>>>>>
>>>>>
>>>>> # radosgw-admin bucket stats
>>>>> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
>>>>> zone for master_zone=
>>>>> couldn't init storage provider
>>>>>
>>>>> I have previous region.conf.json and zone.conf.json, how can I make
>>>>> everything work again? Will creating new realm fix this?
>>>>>
>>>>> Regards
>>>>> Mustafa Muhammad
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
       [not found]             ` <CABo9giTWVHYGdrqpmtYP8-iDY5tM+a4MrBczwha27=g-HzmRcw@mail.gmail.com>
@ 2016-11-09  5:45               ` Mustafa Muhammad
  2016-11-09 10:11                 ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2016-11-09  5:45 UTC (permalink / raw)
  To: Orit Wasserman, ceph-devel

On Tue, Nov 8, 2016 at 3:16 PM, Orit Wasserman <owasserm@redhat.com> wrote:
> On Tue, Nov 8, 2016 at 1:11 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>> On Tue, Nov 8, 2016 at 2:21 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>> On Mon, Nov 7, 2016 at 10:05 AM, Mustafa Muhammad
>>> <mustafa1024m@gmail.com> wrote:
>>>> I understood the script and applied it, "zone get" works fine now with
>>>> realm, but "radosgw-admin zonegroup get" gives "master_zone":
>>>> "default" and realm id with value, then after a minute it goes back to
>>>> empty master_zone and realm id.
>>>
>>> Hi,
>>> Is it possible you have an old radosgw-admin running (from hammer)?
>>> if so you encountered http://tracker.ceph.com/issues/17371, it will be
>>> fixed in 10.2.4.
>>
>> I found I have one Infernalis 9.2.1
>>
>
> that explains it ...
>
>>> Can you provides logs?
>>
>> What logs exactly?
>>
> rgw logs but it looks like we know the cause so it is not important.
>
>>>
>>> Try the procedure again and this time also run in the end:
>>> radosgw-admin period update --commit
>>
>> After updating that RGW?
>>
> yes after doing all the steps
>
All RGWs are now on 10.2.2; I can't move them to 10.2.3 because they won't start.
I stopped them all and ran the script again with "radosgw-admin period
update --commit" at the end, but I am still getting:
"zonegroup default missing zone for master_zone="
If I wait for 10.2.4, should it be fixed?

Regards
Mustafa

>>>
>>> Orit
>>>
>>
>> Thanks a lot :)
>>
>> Regards
>> Mustafa
>>
>>>> So I still get:
>>>> radosgw-admin bucket stats
>>>> 2016-11-07 12:04:13.680779 7f7a88e929c0  0 zonegroup default missing
>>>> zone for master_zone=
>>>> couldn't init storage provider
>>>> What should I do?
>>>>
>>>> Thanks
>>>> Mustafa
>>>>
>>>> On Wed, Nov 2, 2016 at 12:39 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>> Hi,
>>>>> You have hit the master zone issue.
>>>>> Here is a fix I prefer:
>>>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
>>>>> It is very important notice to run the fix when the radosgw is down.
>>>>>
>>>>> Good luck,
>>>>> Orit
>>>>>
>>>>> On Tue, Nov 1, 2016 at 10:07 PM, Mustafa Muhammad
>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>> On Tue, Nov 1, 2016 at 5:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>> Hi,
>>>>>>> what version of jewel are you using?
>>>>>>> can you try raodsgw-admin zone get --rgw-zone default and
>>>>>>> radosgw-admin zonegroup get --rgw-zonegroup default?
>>>>>>>
>>>>>> Hello, I am using 10.2.3
>>>>>> #radosgw-admin zone get --rgw-zone default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "domain_root": ".rgw",
>>>>>>     "control_pool": ".rgw.control",
>>>>>>     "gc_pool": ".rgw.gc",
>>>>>>     "log_pool": ".log",
>>>>>>     "intent_log_pool": ".intent-log",
>>>>>>     "usage_log_pool": ".usage",
>>>>>>     "user_keys_pool": ".users",
>>>>>>     "user_email_pool": ".users.email",
>>>>>>     "user_swift_pool": ".users.swift",
>>>>>>     "user_uid_pool": ".users.uid",
>>>>>>     "system_key": {
>>>>>>         "access_key": "",
>>>>>>         "secret_key": ""
>>>>>>     },
>>>>>>     "placement_pools": [],
>>>>>>     "metadata_heap": ".rgw.meta",
>>>>>>     "realm_id": ""
>>>>>> }
>>>>>>
>>>>>> # radosgw-admin zonegroup get --rgw-zonegroup default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "api_name": "",
>>>>>>     "is_master": "true",
>>>>>>     "endpoints": [],
>>>>>>     "hostnames": [],
>>>>>>     "hostnames_s3website": [],
>>>>>>     "master_zone": "",
>>>>>>     "zones": [
>>>>>>         {
>>>>>>             "id": "default",
>>>>>>             "name": "default",
>>>>>>             "endpoints": [],
>>>>>>             "log_meta": "false",
>>>>>>             "log_data": "false",
>>>>>>             "bucket_index_max_shards": 0,
>>>>>>             "read_only": "false"
>>>>>>         }
>>>>>>     ],
>>>>>>     "placement_targets": [
>>>>>>         {
>>>>>>             "name": "cinema-placement",
>>>>>>             "tags": []
>>>>>>         },
>>>>>>         {
>>>>>>             "name": "cinema-source-placement",
>>>>>>             "tags": []
>>>>>>         },
>>>>>>         {
>>>>>>             "name": "default-placement",
>>>>>>             "tags": []
>>>>>>         },
>>>>>>         {
>>>>>>             "name": "erasure-placement",
>>>>>>             "tags": []
>>>>>>         },
>>>>>>         {
>>>>>>             "name": "share-placement",
>>>>>>             "tags": []
>>>>>>         },
>>>>>>         {
>>>>>>             "name": "share2016-placement",
>>>>>>             "tags": []
>>>>>>         },
>>>>>>         {
>>>>>>             "name": "test-placement",
>>>>>>             "tags": []
>>>>>>         }
>>>>>>     ],
>>>>>>     "default_placement": "default-placement",
>>>>>>     "realm_id": ""
>>>>>> }
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>> Mustafa
>>>>>>
>>>>>>> Orit
>>>>>>>
>>>>>>> On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>>>>>>> Hello,
>>>>>>>> I have production cluster configured with multiple placement pools according to:
>>>>>>>>
>>>>>>>> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>>>>>>>>
>>>>>>>> After upgrading to Jewel, most radosgw-admin are failing, probably
>>>>>>>> because there is no realm
>>>>>>>>
>>>>>>>>
>>>>>>>> # radosgw-admin realm list
>>>>>>>> {
>>>>>>>>     "default_info": "",
>>>>>>>>     "realms": []
>>>>>>>> }
>>>>>>>>
>>>>>>>>
>>>>>>>> # radosgw-admin zone get
>>>>>>>> unable to initialize zone: (2) No such file or directory
>>>>>>>>
>>>>>>>>
>>>>>>>> # radosgw-admin regionmap get
>>>>>>>> failed to read current period info: 2016-11-01 16:08:14.099948
>>>>>>>> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
>>>>>>>> such file or directory(2) No such file or directory
>>>>>>>> {
>>>>>>>>     "zonegroups": [],
>>>>>>>>     "master_zonegroup": "",
>>>>>>>>     "bucket_quota": {
>>>>>>>>         "enabled": false,
>>>>>>>>         "max_size_kb": -1,
>>>>>>>>         "max_objects": -1
>>>>>>>>     },
>>>>>>>>     "user_quota": {
>>>>>>>>         "enabled": false,
>>>>>>>>         "max_size_kb": -1,
>>>>>>>>         "max_objects": -1
>>>>>>>>     }
>>>>>>>> }
>>>>>>>>
>>>>>>>>
>>>>>>>> # radosgw-admin bucket stats
>>>>>>>> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
>>>>>>>> zone for master_zone=
>>>>>>>> couldn't init storage provider
>>>>>>>>
>>>>>>>> I have previous region.conf.json and zone.conf.json, how can I make
>>>>>>>> everything work again? Will creating new realm fix this?
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Mustafa Muhammad
>>>>>>>> --
>>>>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2016-11-09  5:45               ` Mustafa Muhammad
@ 2016-11-09 10:11                 ` Orit Wasserman
  2017-01-21  8:24                   ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2016-11-09 10:11 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Wed, Nov 9, 2016 at 6:45 AM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
> On Tue, Nov 8, 2016 at 3:16 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> On Tue, Nov 8, 2016 at 1:11 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>> On Tue, Nov 8, 2016 at 2:21 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> On Mon, Nov 7, 2016 at 10:05 AM, Mustafa Muhammad
>>>> <mustafa1024m@gmail.com> wrote:
>>>>> I understood the script and applied it, "zone get" works fine now with
>>>>> realm, but "radosgw-admin zonegroup get" gives "master_zone":
>>>>> "default" and realm id with value, then after a minute it goes back to
>>>>> empty master_zone and realm id.
>>>>
>>>> Hi,
>>>> Is it possible you have an old radosgw-admin running (from hammer)?
>>>> if so you encountered http://tracker.ceph.com/issues/17371, it will be
>>>> fixed in 10.2.4.
>>>
>>> I found I have one Infernalis 9.2.1
>>>
>>
>> that explains it ...
>>
>>>> Can you provides logs?
>>>
>>> What logs exactly?
>>>
>> rgw logs but it looks like we know the cause so it is not important.
>>
>>>>
>>>> Try the procedure again and this time also run in the end:
>>>> radosgw-admin period update --commit
>>>
>>> After updating that RGW?
>>>
>> yes after doing all the steps
>>
> All RGWs now on 10.2.2, can't make them 10.2.3 because they won't start.
> Stopped them all and run the script again with "radosgw-admin period
> update --commit" at the end, still getting:
> "zonegroup default missing zone for master_zone="
> If I wait till 10.2.4, should it be fixed?
>

Yes, it was fixed in 10.2.4.
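
Once everything is on 10.2.4 and the procedure has been re-applied, one way to
confirm that the fix stuck (a sketch, using the same commands as earlier in
this thread) is:

  radosgw-admin zonegroup get --rgw-zonegroup default   # master_zone should stay "default"
  radosgw-admin period get                              # period should now carry the realm id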

> Regards
> Mustafa
>
>>>>
>>>> Orit
>>>>
>>>
>>> Thanks a lot :)
>>>
>>> Regards
>>> Mustafa
>>>
>>>>> So I still get:
>>>>> radosgw-admin bucket stats
>>>>> 2016-11-07 12:04:13.680779 7f7a88e929c0  0 zonegroup default missing
>>>>> zone for master_zone=
>>>>> couldn't init storage provider
>>>>> What should I do?
>>>>>
>>>>> Thanks
>>>>> Mustafa
>>>>>
>>>>> On Wed, Nov 2, 2016 at 12:39 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>> Hi,
>>>>>> You have hit the master zone issue.
>>>>>> Here is a fix I prefer:
>>>>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
>>>>>> It is very important notice to run the fix when the radosgw is down.
>>>>>>
>>>>>> Good luck,
>>>>>> Orit
>>>>>>
>>>>>> On Tue, Nov 1, 2016 at 10:07 PM, Mustafa Muhammad
>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>> On Tue, Nov 1, 2016 at 5:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>>> Hi,
>>>>>>>> what version of jewel are you using?
>>>>>>>> can you try raodsgw-admin zone get --rgw-zone default and
>>>>>>>> radosgw-admin zonegroup get --rgw-zonegroup default?
>>>>>>>>
>>>>>>> Hello, I am using 10.2.3
>>>>>>> #radosgw-admin zone get --rgw-zone default
>>>>>>> {
>>>>>>>     "id": "default",
>>>>>>>     "name": "default",
>>>>>>>     "domain_root": ".rgw",
>>>>>>>     "control_pool": ".rgw.control",
>>>>>>>     "gc_pool": ".rgw.gc",
>>>>>>>     "log_pool": ".log",
>>>>>>>     "intent_log_pool": ".intent-log",
>>>>>>>     "usage_log_pool": ".usage",
>>>>>>>     "user_keys_pool": ".users",
>>>>>>>     "user_email_pool": ".users.email",
>>>>>>>     "user_swift_pool": ".users.swift",
>>>>>>>     "user_uid_pool": ".users.uid",
>>>>>>>     "system_key": {
>>>>>>>         "access_key": "",
>>>>>>>         "secret_key": ""
>>>>>>>     },
>>>>>>>     "placement_pools": [],
>>>>>>>     "metadata_heap": ".rgw.meta",
>>>>>>>     "realm_id": ""
>>>>>>> }
>>>>>>>
>>>>>>> # radosgw-admin zonegroup get --rgw-zonegroup default
>>>>>>> {
>>>>>>>     "id": "default",
>>>>>>>     "name": "default",
>>>>>>>     "api_name": "",
>>>>>>>     "is_master": "true",
>>>>>>>     "endpoints": [],
>>>>>>>     "hostnames": [],
>>>>>>>     "hostnames_s3website": [],
>>>>>>>     "master_zone": "",
>>>>>>>     "zones": [
>>>>>>>         {
>>>>>>>             "id": "default",
>>>>>>>             "name": "default",
>>>>>>>             "endpoints": [],
>>>>>>>             "log_meta": "false",
>>>>>>>             "log_data": "false",
>>>>>>>             "bucket_index_max_shards": 0,
>>>>>>>             "read_only": "false"
>>>>>>>         }
>>>>>>>     ],
>>>>>>>     "placement_targets": [
>>>>>>>         {
>>>>>>>             "name": "cinema-placement",
>>>>>>>             "tags": []
>>>>>>>         },
>>>>>>>         {
>>>>>>>             "name": "cinema-source-placement",
>>>>>>>             "tags": []
>>>>>>>         },
>>>>>>>         {
>>>>>>>             "name": "default-placement",
>>>>>>>             "tags": []
>>>>>>>         },
>>>>>>>         {
>>>>>>>             "name": "erasure-placement",
>>>>>>>             "tags": []
>>>>>>>         },
>>>>>>>         {
>>>>>>>             "name": "share-placement",
>>>>>>>             "tags": []
>>>>>>>         },
>>>>>>>         {
>>>>>>>             "name": "share2016-placement",
>>>>>>>             "tags": []
>>>>>>>         },
>>>>>>>         {
>>>>>>>             "name": "test-placement",
>>>>>>>             "tags": []
>>>>>>>         }
>>>>>>>     ],
>>>>>>>     "default_placement": "default-placement",
>>>>>>>     "realm_id": ""
>>>>>>> }
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>> Mustafa
>>>>>>>
>>>>>>>> Orit
>>>>>>>>
>>>>>>>> On Tue, Nov 1, 2016 at 2:13 PM, Mustafa Muhammad <mustafa1024m@gmail.com> wrote:
>>>>>>>>> Hello,
>>>>>>>>> I have production cluster configured with multiple placement pools according to:
>>>>>>>>>
>>>>>>>>> http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>>>>>>>>>
>>>>>>>>> After upgrading to Jewel, most radosgw-admin are failing, probably
>>>>>>>>> because there is no realm
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> # radosgw-admin realm list
>>>>>>>>> {
>>>>>>>>>     "default_info": "",
>>>>>>>>>     "realms": []
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> # radosgw-admin zone get
>>>>>>>>> unable to initialize zone: (2) No such file or directory
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> # radosgw-admin regionmap get
>>>>>>>>> failed to read current period info: 2016-11-01 16:08:14.099948
>>>>>>>>> 7f21b55ee9c0  0 RGWPeriod::init failed to init realm  id  : (2) No
>>>>>>>>> such file or directory(2) No such file or directory
>>>>>>>>> {
>>>>>>>>>     "zonegroups": [],
>>>>>>>>>     "master_zonegroup": "",
>>>>>>>>>     "bucket_quota": {
>>>>>>>>>         "enabled": false,
>>>>>>>>>         "max_size_kb": -1,
>>>>>>>>>         "max_objects": -1
>>>>>>>>>     },
>>>>>>>>>     "user_quota": {
>>>>>>>>>         "enabled": false,
>>>>>>>>>         "max_size_kb": -1,
>>>>>>>>>         "max_objects": -1
>>>>>>>>>     }
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> # radosgw-admin bucket stats
>>>>>>>>> 2016-11-01 16:07:55.860053 7f6e747f89c0  0 zonegroup default missing
>>>>>>>>> zone for master_zone=
>>>>>>>>> couldn't init storage provider
>>>>>>>>>
>>>>>>>>> I have previous region.conf.json and zone.conf.json, how can I make
>>>>>>>>> everything work again? Will creating new realm fix this?
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>> Mustafa Muhammad
>>>>>>>>> --
>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2016-11-09 10:11                 ` Orit Wasserman
@ 2017-01-21  8:24                   ` Mustafa Muhammad
  2017-01-22  9:04                     ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-21  8:24 UTC (permalink / raw)
  To: ceph-devel; +Cc: owasserm

[-- Attachment #1: Type: text/plain, Size: 722 bytes --]

Hello again :)

It still doesn't work for me using 10.2.5:

[root@monitor3 ~]# ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
[root@monitor3 ~]# radosgw-admin period update --commit
2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
zone for master_zone=
couldn't init storage provider

I think I am hitting:
http://tracker.ceph.com/issues/17364

So I created new RPMs with this patch:
https://github.com/ceph/ceph/pull/12315

But now it crashes when I try to update the period. I've attached the
output and the details of my zonegroup. I also tried the just-released
Kraken, and it crashes as well.

What do you think? What should I do?

Thanks a lot in advance

Regards
Mustafa Muhammad

[-- Attachment #2: update-commit-crash --]
[-- Type: application/octet-stream, Size: 40202 bytes --]

*** Caught signal (Segmentation fault) **
 in thread 7fc9bcd1c9c0 thread_name:radosgw-admin
2017-01-21 11:15:55.048557 7fc9bcd1c9c0  0 zonegroup default missing master_zone, setting zone default id:default as master
 ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
 1: (()+0x56b32a) [0x7fc9b343332a]
 2: (()+0xf370) [0x7fc9a9337370]
 3: (RGWZoneGroup::get_pool_name(CephContext*)+0x19) [0x7fc9b32abbe9]
 4: (RGWSystemMetaObj::store_info(bool)+0x39) [0x7fc9b32b5b89]
 5: (RGWRados::init_zg_from_period(bool*)+0x6af) [0x7fc9b32f6ccf]
 6: (RGWRados::init_complete()+0x1546) [0x7fc9b3302886]
 7: (RGWStoreManager::init_storage_provider(CephContext*, bool, bool, bool)+0x76) [0x7fc9b32bece6]
 8: (main()+0x46ea) [0x7fc9bcd5b98a]
 9: (__libc_start_main()+0xf5) [0x7fc9a8a69b35]
 10: (()+0x363af) [0x7fc9bcd753af]
2017-01-21 11:15:55.049974 7fc9bcd1c9c0 -1 *** Caught signal (Segmentation fault) **
 in thread 7fc9bcd1c9c0 thread_name:radosgw-admin

 ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
 1: (()+0x56b32a) [0x7fc9b343332a]
 2: (()+0xf370) [0x7fc9a9337370]
 3: (RGWZoneGroup::get_pool_name(CephContext*)+0x19) [0x7fc9b32abbe9]
 4: (RGWSystemMetaObj::store_info(bool)+0x39) [0x7fc9b32b5b89]
 5: (RGWRados::init_zg_from_period(bool*)+0x6af) [0x7fc9b32f6ccf]
 6: (RGWRados::init_complete()+0x1546) [0x7fc9b3302886]
 7: (RGWStoreManager::init_storage_provider(CephContext*, bool, bool, bool)+0x76) [0x7fc9b32bece6]
 8: (main()+0x46ea) [0x7fc9bcd5b98a]
 9: (__libc_start_main()+0xf5) [0x7fc9a8a69b35]
 10: (()+0x363af) [0x7fc9bcd753af]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
  -161> 2017-01-21 11:15:54.965001 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command perfcounters_dump hook 0x7fc9bddad710
  -160> 2017-01-21 11:15:54.965018 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command 1 hook 0x7fc9bddad710
  -159> 2017-01-21 11:15:54.965024 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command perf dump hook 0x7fc9bddad710
  -158> 2017-01-21 11:15:54.965029 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command perfcounters_schema hook 0x7fc9bddad710
  -157> 2017-01-21 11:15:54.965033 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command 2 hook 0x7fc9bddad710
  -156> 2017-01-21 11:15:54.965036 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command perf schema hook 0x7fc9bddad710
  -155> 2017-01-21 11:15:54.965039 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command perf reset hook 0x7fc9bddad710
  -154> 2017-01-21 11:15:54.965042 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command config show hook 0x7fc9bddad710
  -153> 2017-01-21 11:15:54.965045 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command config set hook 0x7fc9bddad710
  -152> 2017-01-21 11:15:54.965049 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command config get hook 0x7fc9bddad710
  -151> 2017-01-21 11:15:54.965054 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command config diff hook 0x7fc9bddad710
  -150> 2017-01-21 11:15:54.965058 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command log flush hook 0x7fc9bddad710
  -149> 2017-01-21 11:15:54.965062 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command log dump hook 0x7fc9bddad710
  -148> 2017-01-21 11:15:54.965065 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command log reopen hook 0x7fc9bddad710
  -147> 2017-01-21 11:15:54.989242 7fc9bcd1c9c0 10 monclient(hunting): build_initial_monmap
  -146> 2017-01-21 11:15:54.989304 7fc9bcd1c9c0  1 librados: starting msgr at :/0
  -145> 2017-01-21 11:15:54.989311 7fc9bcd1c9c0  1 librados: starting objecter
  -144> 2017-01-21 11:15:54.989364 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command objecter_requests hook 0x7fc9bde057e0
  -143> 2017-01-21 11:15:54.989427 7fc9bcd1c9c0  1 -- :/0 messenger.start
  -142> 2017-01-21 11:15:54.989445 7fc9bcd1c9c0  1 librados: setting wanted keys
  -141> 2017-01-21 11:15:54.989449 7fc9bcd1c9c0  1 librados: calling monclient init
  -140> 2017-01-21 11:15:54.989451 7fc9bcd1c9c0 10 monclient(hunting): init
  -139> 2017-01-21 11:15:54.989456 7fc9bcd1c9c0  5 adding auth protocol: cephx
  -138> 2017-01-21 11:15:54.989458 7fc9bcd1c9c0  5 adding auth protocol: none
  -137> 2017-01-21 11:15:54.989459 7fc9bcd1c9c0 10 monclient(hunting): auth_supported 2,1 method cephx, none
  -136> 2017-01-21 11:15:54.989626 7fc9bcd1c9c0  2 auth: KeyRing::load: loaded key file /etc/ceph/ceph.client.admin.keyring
  -135> 2017-01-21 11:15:54.989677 7fc9bcd1c9c0 10 monclient(hunting): _reopen_session rank -1 name 
  -134> 2017-01-21 11:15:54.989728 7fc9bcd1c9c0 10 monclient(hunting): picked mon.noname-c con 0x7fc9bde0c220 addr 192.168.217.203:6789/0
  -133> 2017-01-21 11:15:54.989750 7fc9bcd1c9c0 10 monclient(hunting): _send_mon_message to mon.noname-c at 192.168.217.203:6789/0
  -132> 2017-01-21 11:15:54.989755 7fc9bcd1c9c0  1 -- :/2612449765 --> 192.168.217.203:6789/0 -- auth(proto 0 34 bytes epoch 0) v1 -- ?+0 0x7fc9bde0ffd0 con 0x7fc9bde0c220
  -131> 2017-01-21 11:15:54.989765 7fc9bcd1c9c0 10 monclient(hunting): renew_subs
  -130> 2017-01-21 11:15:54.989776 7fc9bcd1c9c0 10 monclient(hunting): authenticate will time out at 2017-01-21 11:20:54.989775
  -129> 2017-01-21 11:15:54.990265 7fc9bcd12700  1 -- 192.168.217.201:0/2612449765 learned my addr 192.168.217.201:0/2612449765
  -128> 2017-01-21 11:15:54.990751 7fc99e2ad700  2 -- 192.168.217.201:0/2612449765 >> 192.168.217.203:6789/0 pipe(0x7fc9bde0af60 sd=3 :56440 s=2 pgs=3546745 cs=1 l=1 c=0x7fc9bde0c220).reader got KEEPALIVE_ACK
  -127> 2017-01-21 11:15:54.991038 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 <== mon.2 192.168.217.203:6789/0 1 ==== mon_map magic: 0 v1 ==== 494+0+0 (3716576182 0 0) 0x7fc994000d00 con 0x7fc9bde0c220
  -126> 2017-01-21 11:15:54.991062 7fc9a0ab2700 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
  -125> 2017-01-21 11:15:54.991077 7fc9a0ab2700 10 monclient(hunting):  got monmap 32, mon.noname-c is now rank -1
  -124> 2017-01-21 11:15:54.991080 7fc9a0ab2700 10 monclient(hunting): dump:
epoch 32
fsid fbc973b6-2ef9-4929-8b3c-1580b18b8875
last_changed 2016-05-10 10:56:05.251864
created 0.000000
0: 192.168.217.201:6789/0 mon.monitor1
1: 192.168.217.202:6789/0 mon.monitor2
2: 192.168.217.203:6789/0 mon.monitor3

  -123> 2017-01-21 11:15:54.991105 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 <== mon.2 192.168.217.203:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (982320532 0 0) 0x7fc9940011c0 con 0x7fc9bde0c220
  -122> 2017-01-21 11:15:54.991131 7fc9a0ab2700 10 monclient(hunting): my global_id is 107972534
  -121> 2017-01-21 11:15:54.991191 7fc9a0ab2700 10 monclient(hunting): _send_mon_message to mon.monitor3 at 192.168.217.203:6789/0
  -120> 2017-01-21 11:15:54.991198 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.203:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7fc990001880 con 0x7fc9bde0c220
  -119> 2017-01-21 11:15:54.991760 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 <== mon.2 192.168.217.203:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (2609057364 0 0) 0x7fc9940011c0 con 0x7fc9bde0c220
  -118> 2017-01-21 11:15:54.991839 7fc9a0ab2700 10 monclient(hunting): _send_mon_message to mon.monitor3 at 192.168.217.203:6789/0
  -117> 2017-01-21 11:15:54.991846 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.203:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 0x7fc990003390 con 0x7fc9bde0c220
  -116> 2017-01-21 11:15:54.992302 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 <== mon.2 192.168.217.203:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 393+0+0 (1834118384 0 0) 0x7fc994000a30 con 0x7fc9bde0c220
  -115> 2017-01-21 11:15:54.992371 7fc9a0ab2700  1 monclient(hunting): found mon.monitor3
  -114> 2017-01-21 11:15:54.992375 7fc9a0ab2700 10 monclient: _send_mon_message to mon.monitor3 at 192.168.217.203:6789/0
  -113> 2017-01-21 11:15:54.992379 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.203:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7fc9bde11310 con 0x7fc9bde0c220
  -112> 2017-01-21 11:15:54.992402 7fc9bcd1c9c0  5 monclient: authenticate success, global_id 107972534
  -111> 2017-01-21 11:15:54.992423 7fc9bcd1c9c0 10 monclient: renew_subs
  -110> 2017-01-21 11:15:54.992427 7fc9bcd1c9c0 10 monclient: _send_mon_message to mon.monitor3 at 192.168.217.203:6789/0
  -109> 2017-01-21 11:15:54.992432 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.203:6789/0 -- mon_subscribe({osdmap=0}) v2 -- ?+0 0x7fc9bde11610 con 0x7fc9bde0c220
  -108> 2017-01-21 11:15:54.992465 7fc9bcd1c9c0 10 monclient: renew_subs - empty
  -107> 2017-01-21 11:15:54.992487 7fc9bcd1c9c0  1 librados: init done
  -106> 2017-01-21 11:15:54.992494 7fc9bcd1c9c0  5 asok(0x7fc9bddad7f0) register_command cr dump hook 0x7fc9bde10808
  -105> 2017-01-21 11:15:54.992590 7fc99d1aa700  2 RGWDataChangesLog::ChangesRenewThread: start
  -104> 2017-01-21 11:15:54.992734 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 <== mon.2 192.168.217.203:6789/0 5 ==== mon_map magic: 0 v1 ==== 494+0+0 (3716576182 0 0) 0x7fc9940014e0 con 0x7fc9bde0c220
  -103> 2017-01-21 11:15:54.992744 7fc9a0ab2700 10 monclient: handle_monmap mon_map magic: 0 v1
  -102> 2017-01-21 11:15:54.992751 7fc9a0ab2700 10 monclient:  got monmap 32, mon.monitor3 is now rank 2
  -101> 2017-01-21 11:15:54.992754 7fc9a0ab2700 10 monclient: dump:
epoch 32
fsid fbc973b6-2ef9-4929-8b3c-1580b18b8875
last_changed 2016-05-10 10:56:05.251864
created 0.000000
0: 192.168.217.201:6789/0 mon.monitor1
1: 192.168.217.202:6789/0 mon.monitor2
2: 192.168.217.203:6789/0 mon.monitor3

  -100> 2017-01-21 11:15:54.996931 7fc9a0ab2700  1 -- 192.168.217.201:0/2612449765 <== mon.2 192.168.217.203:6789/0 6 ==== osd_map(768946..768946 src has 757979..768946) v3 ==== 421709+0+0 (4098469963 0 0) 0x7fc994000e40 con 0x7fc9bde0c220
   -99> 2017-01-21 11:15:54.998632 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:1 7.85fca992 default.realm [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde17130 con 0x7fc9bde15bd0
   -98> 2017-01-21 11:15:55.000154 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 1 ==== osd_op_reply(1 default.realm [getxattrs,stat] v0'0 uv308 ondisk = 0) v7 ==== 175+0+20 (3135567767 0 2078908795) 0x7fc980000b70 con 0x7fc9bde15bd0
   -97> 2017-01-21 11:15:55.000242 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:2 7.85fca992 default.realm [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde17940 con 0x7fc9bde15bd0
   -96> 2017-01-21 11:15:55.001075 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 2 ==== osd_op_reply(2 default.realm [read 0~46] v0'0 uv308 ondisk = 0) v7 ==== 133+0+46 (1515887977 0 1858460508) 0x7fc980000b70 con 0x7fc9bde15bd0
   -95> 2017-01-21 11:15:55.001174 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:3 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde1a680 con 0x7fc9bde17430
   -94> 2017-01-21 11:15:55.002746 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 1 ==== osd_op_reply(3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv11 ondisk = 0) v7 ==== 205+0+20 (3186222466 0 816618855) 0x7fc978000b90 con 0x7fc9bde17430
   -93> 2017-01-21 11:15:55.002807 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:4 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde1ad60 con 0x7fc9bde17430
   -92> 2017-01-21 11:15:55.003302 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 2 ==== osd_op_reply(4 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~110] v0'0 uv11 ondisk = 0) v7 ==== 163+0+110 (247623876 0 1875257244) 0x7fc978000b90 con 0x7fc9bde17430
   -91> 2017-01-21 11:15:55.003368 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:5 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde137c0 con 0x7fc9bde17430
   -90> 2017-01-21 11:15:55.003715 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 3 ==== osd_op_reply(5 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv11 ondisk = 0) v7 ==== 205+0+20 (3186222466 0 816618855) 0x7fc978000b90 con 0x7fc9bde17430
   -89> 2017-01-21 11:15:55.003775 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:6 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde137c0 con 0x7fc9bde17430
   -88> 2017-01-21 11:15:55.004197 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 4 ==== osd_op_reply(6 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~110] v0'0 uv11 ondisk = 0) v7 ==== 163+0+110 (247623876 0 1875257244) 0x7fc978000b90 con 0x7fc9bde17430
   -87> 2017-01-21 11:15:55.004292 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:7 7.ee842566 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.latest_epoch [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde1e150 con 0x7fc9bde1cd10
   -86> 2017-01-21 11:15:55.007443 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 1 ==== osd_op_reply(7 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.latest_epoch [getxattrs,stat] v0'0 uv101 ondisk = 0) v7 ==== 219+0+20 (3043620126 0 3008374561) 0x7fc970000ba0 con 0x7fc9bde1cd10
   -85> 2017-01-21 11:15:55.007500 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:8 7.ee842566 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.latest_epoch [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde1e150 con 0x7fc9bde1cd10
   -84> 2017-01-21 11:15:55.007913 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 2 ==== osd_op_reply(8 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.latest_epoch [read 0~10] v0'0 uv101 ondisk = 0) v7 ==== 177+0+10 (24676541 0 3274736952) 0x7fc970000ba0 con 0x7fc9bde1cd10
   -83> 2017-01-21 11:15:55.008010 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.125:6816/18946 -- osd_op(client.107972534.0:9 7.91215419 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.1 [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde213f0 con 0x7fc9bde1e360
   -82> 2017-01-21 11:15:55.009916 7fc99c2a2700  1 -- 192.168.217.201:0/2612449765 <== osd.312 192.168.217.125:6816/18946 1 ==== osd_op_reply(9 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.1 [getxattrs,stat] v0'0 uv16 ondisk = 0) v7 ==== 208+0+20 (2814271291 0 2640880969) 0x7fc968000b90 con 0x7fc9bde1e360
   -81> 2017-01-21 11:15:55.009969 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.125:6816/18946 -- osd_op(client.107972534.0:10 7.91215419 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.1 [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde213f0 con 0x7fc9bde1e360
   -80> 2017-01-21 11:15:55.010420 7fc99c2a2700  1 -- 192.168.217.201:0/2612449765 <== osd.312 192.168.217.125:6816/18946 2 ==== osd_op_reply(10 periods.2994bf09-46d2-49a9-9fdf-829d561b270f.1 [read 0~504] v0'0 uv16 ondisk = 0) v7 ==== 166+0+504 (3427335592 0 727302391) 0x7fc9680017b0 con 0x7fc9bde1e360
   -79> 2017-01-21 11:15:55.010509 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:11 7.636fdd3 converted [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde21e40 con 0x7fc9bde17430
   -78> 2017-01-21 11:15:55.011015 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 5 ==== osd_op_reply(11 converted [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v7 ==== 171+0+0 (2095763732 0 0) 0x7fc978000b90 con 0x7fc9bde17430
   -77> 2017-01-21 11:15:55.011111 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:12 7.85fca992 default.realm [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde226c0 con 0x7fc9bde15bd0
   -76> 2017-01-21 11:15:55.011509 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 3 ==== osd_op_reply(12 default.realm [getxattrs,stat] v0'0 uv308 ondisk = 0) v7 ==== 175+0+20 (3135567767 0 2078908795) 0x7fc980000b70 con 0x7fc9bde15bd0
   -75> 2017-01-21 11:15:55.011563 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:13 7.85fca992 default.realm [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde226c0 con 0x7fc9bde15bd0
   -74> 2017-01-21 11:15:55.011883 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 4 ==== osd_op_reply(13 default.realm [read 0~46] v0'0 uv308 ondisk = 0) v7 ==== 133+0+46 (1515887977 0 1858460508) 0x7fc980000b70 con 0x7fc9bde15bd0
   -73> 2017-01-21 11:15:55.011947 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:14 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde21e60 con 0x7fc9bde17430
   -72> 2017-01-21 11:15:55.012380 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 6 ==== osd_op_reply(14 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv11 ondisk = 0) v7 ==== 205+0+20 (3186222466 0 816618855) 0x7fc9780010c0 con 0x7fc9bde17430
   -71> 2017-01-21 11:15:55.012432 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:15 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde22eb0 con 0x7fc9bde17430
   -70> 2017-01-21 11:15:55.012815 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 7 ==== osd_op_reply(15 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~110] v0'0 uv11 ondisk = 0) v7 ==== 163+0+110 (247623876 0 1875257244) 0x7fc9780010c0 con 0x7fc9bde17430
   -69> 2017-01-21 11:15:55.012908 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.133:6816/11297 -- osd_op(client.107972534.0:16 7.9a566808 default.region [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde25b60 con 0x7fc9bde216d0
   -68> 2017-01-21 11:15:55.016552 7fc967fff700  1 -- 192.168.217.201:0/2612449765 <== osd.326 192.168.217.133:6816/11297 1 ==== osd_op_reply(16 default.region [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v7 ==== 176+0+0 (2783885565 0 0) 0x7fc960000a50 con 0x7fc9bde216d0
   -67> 2017-01-21 11:15:55.016644 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.133:6816/11297 -- osd_op(client.107972534.0:17 7.0  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde22480 con 0x7fc9bde216d0
   -66> 2017-01-21 11:15:55.019169 7fc967fff700  1 -- 192.168.217.201:0/2612449765 <== osd.326 192.168.217.133:6816/11297 2 ==== osd_op_reply(17  [pgls start_epoch 0] v768507'6491 uv6491 ondisk = 1) v7 ==== 120+0+134 (3182910090 0 112974557) 0x7fc960000a50 con 0x7fc9bde216d0
   -65> 2017-01-21 11:15:55.019241 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.125:6816/18946 -- osd_op(client.107972534.0:18 7.1  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde25b60 con 0x7fc9bde1e360
   -64> 2017-01-21 11:15:55.020005 7fc99c2a2700  1 -- 192.168.217.201:0/2612449765 <== osd.312 192.168.217.125:6816/18946 3 ==== osd_op_reply(18  [pgls start_epoch 0] v768406'16 uv16 ondisk = 1) v7 ==== 120+0+211 (2350847545 0 2317759760) 0x7fc968001100 con 0x7fc9bde1e360
   -63> 2017-01-21 11:15:55.020060 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:19 7.2  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde25b60 con 0x7fc9bde15bd0
   -62> 2017-01-21 11:15:55.021044 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 5 ==== osd_op_reply(19  [pgls start_epoch 0] v768494'752 uv308 ondisk = 1) v7 ==== 120+0+124 (2314086519 0 2858269825) 0x7fc980000b70 con 0x7fc9bde15bd0
   -61> 2017-01-21 11:15:55.021095 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:20 7.3  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde25b60 con 0x7fc9bde17430
   -60> 2017-01-21 11:15:55.022414 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 8 ==== osd_op_reply(20  [pgls start_epoch 0] v768406'52 uv52 ondisk = 1) v7 ==== 120+0+214 (2775189799 0 426247646) 0x7fc9780010c0 con 0x7fc9bde17430
   -59> 2017-01-21 11:15:55.022494 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.106:6814/7485 -- osd_op(client.107972534.0:21 7.4  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde2a190 con 0x7fc9bde21a80
   -58> 2017-01-21 11:15:55.025215 7fc967dfd700  1 -- 192.168.217.201:0/2612449765 <== osd.101 192.168.217.106:6814/7485 1 ==== osd_op_reply(21  [pgls start_epoch 0] v752061'1 uv1 ondisk = 1) v7 ==== 120+0+75 (1455304318 0 2796451061) 0x7fc95c000b70 con 0x7fc9bde21a80
   -57> 2017-01-21 11:15:55.025340 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6820/12888 -- osd_op(client.107972534.0:22 7.5  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde2d240 con 0x7fc9bde2be00
   -56> 2017-01-21 11:15:55.027640 7fc967bfb700  1 -- 192.168.217.201:0/2612449765 <== osd.127 192.168.217.112:6820/12888 1 ==== osd_op_reply(22  [pgls start_epoch 0] v0'0 uv0 ondisk = 1) v7 ==== 120+0+44 (1329745740 0 3248547820) 0x7fc954000b50 con 0x7fc9bde2be00
   -55> 2017-01-21 11:15:55.027684 7fc967bfb700  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:23 7.6  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc954002590 con 0x7fc9bde1cd10
   -54> 2017-01-21 11:15:55.029248 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 3 ==== osd_op_reply(23  [pgls start_epoch 0] v768507'144 uv144 ondisk = 1) v7 ==== 120+0+207 (1834613730 0 3792236871) 0x7fc970000ba0 con 0x7fc9bde1cd10
   -53> 2017-01-21 11:15:55.029371 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.122:6802/6543 -- osd_op(client.107972534.0:24 7.7  [pgls start_epoch 0] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde2ef60
   -52> 2017-01-21 11:15:55.031277 7fc9679f9700  1 -- 192.168.217.201:0/2612449765 <== osd.280 192.168.217.122:6802/6543 1 ==== osd_op_reply(24  [pgls start_epoch 0] v768507'124 uv124 ondisk = 1) v7 ==== 120+0+102 (3133458913 0 37397930) 0x7fc94c000b80 con 0x7fc9bde2ef60
   -51> 2017-01-21 11:15:55.031378 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.122:6802/6543 -- osd_op(client.107972534.0:25 7.7376dc2f zone_names.default [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde2ef60
   -50> 2017-01-21 11:15:55.032285 7fc9679f9700  1 -- 192.168.217.201:0/2612449765 <== osd.280 192.168.217.122:6802/6543 2 ==== osd_op_reply(25 zone_names.default [getxattrs,stat] v0'0 uv124 ondisk = 0) v7 ==== 180+0+20 (1992310589 0 2160522131) 0x7fc94c000b80 con 0x7fc9bde2ef60
   -49> 2017-01-21 11:15:55.032374 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.122:6802/6543 -- osd_op(client.107972534.0:26 7.7376dc2f zone_names.default [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde2ef60
   -48> 2017-01-21 11:15:55.032767 7fc9679f9700  1 -- 192.168.217.201:0/2612449765 <== osd.280 192.168.217.122:6802/6543 3 ==== osd_op_reply(26 zone_names.default [read 0~17] v0'0 uv124 ondisk = 0) v7 ==== 138+0+17 (3585625190 0 2082767461) 0x7fc94c000b80 con 0x7fc9bde2ef60
   -47> 2017-01-21 11:15:55.032872 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:27 7.e01f9ae zone_info.default [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde26910 con 0x7fc9bde1cd10
   -46> 2017-01-21 11:15:55.033559 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 4 ==== osd_op_reply(27 zone_info.default [getxattrs,stat] v0'0 uv144 ondisk = 0) v7 ==== 179+0+20 (409130783 0 3549236032) 0x7fc970000ba0 con 0x7fc9bde1cd10
   -45> 2017-01-21 11:15:55.033646 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:28 7.e01f9ae zone_info.default [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde1cd10
   -44> 2017-01-21 11:15:55.034298 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 5 ==== osd_op_reply(28 zone_info.default [read 0~1428] v0'0 uv144 ondisk = 0) v7 ==== 137+0+1428 (4142193290 0 313646020) 0x7fc9700009f0 con 0x7fc9bde1cd10
   -43> 2017-01-21 11:15:55.034424 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.122:6802/6543 -- osd_op(client.107972534.0:29 7.1eaca32f zonegroups_names.default [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde30f80 con 0x7fc9bde2ef60
   -42> 2017-01-21 11:15:55.034970 7fc9679f9700  1 -- 192.168.217.201:0/2612449765 <== osd.280 192.168.217.122:6802/6543 4 ==== osd_op_reply(29 zonegroups_names.default [getxattrs,stat] v0'0 uv122 ondisk = 0) v7 ==== 186+0+20 (1958490810 0 3205454620) 0x7fc94c000b80 con 0x7fc9bde2ef60
   -41> 2017-01-21 11:15:55.035054 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.122:6802/6543 -- osd_op(client.107972534.0:30 7.1eaca32f zonegroups_names.default [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde31010 con 0x7fc9bde2ef60
   -40> 2017-01-21 11:15:55.035491 7fc9679f9700  1 -- 192.168.217.201:0/2612449765 <== osd.280 192.168.217.122:6802/6543 5 ==== osd_op_reply(30 zonegroups_names.default [read 0~17] v0'0 uv122 ondisk = 0) v7 ==== 144+0+17 (1233540262 0 2082767461) 0x7fc94c000b80 con 0x7fc9bde2ef60
   -39> 2017-01-21 11:15:55.035590 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.133:6816/11297 -- osd_op(client.107972534.0:31 7.50e8d0b0 zonegroup_info.default [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde30f80 con 0x7fc9bde216d0
   -38> 2017-01-21 11:15:55.036455 7fc967fff700  1 -- 192.168.217.201:0/2612449765 <== osd.326 192.168.217.133:6816/11297 3 ==== osd_op_reply(31 zonegroup_info.default [getxattrs,stat] v0'0 uv6491 ondisk = 0) v7 ==== 184+0+20 (859156410 0 3493044667) 0x7fc960000a50 con 0x7fc9bde216d0
   -37> 2017-01-21 11:15:55.036542 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.133:6816/11297 -- osd_op(client.107972534.0:32 7.50e8d0b0 zonegroup_info.default [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde30fc0 con 0x7fc9bde216d0
   -36> 2017-01-21 11:15:55.037130 7fc967fff700  1 -- 192.168.217.201:0/2612449765 <== osd.326 192.168.217.133:6816/11297 4 ==== osd_op_reply(32 zonegroup_info.default [read 0~568] v0'0 uv6491 ondisk = 0) v7 ==== 142+0+568 (532320958 0 153494648) 0x7fc960001bb0 con 0x7fc9bde216d0
   -35> 2017-01-21 11:15:55.037247 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:33 7.bd31b503 region_map [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -34> 2017-01-21 11:15:55.037714 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 9 ==== osd_op_reply(33 region_map [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v7 ==== 172+0+0 (2563999994 0 0) 0x7fc9780010c0 con 0x7fc9bde17430
   -33> 2017-01-21 11:15:55.037847 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:34 7.85fca992 default.realm [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde15bd0
   -32> 2017-01-21 11:15:55.038483 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 6 ==== osd_op_reply(34 default.realm [getxattrs,stat] v0'0 uv308 ondisk = 0) v7 ==== 175+0+20 (3135567767 0 2078908795) 0x7fc980000b70 con 0x7fc9bde15bd0
   -31> 2017-01-21 11:15:55.038575 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:35 7.85fca992 default.realm [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde15bd0
   -30> 2017-01-21 11:15:55.038985 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 7 ==== osd_op_reply(35 default.realm [read 0~46] v0'0 uv308 ondisk = 0) v7 ==== 133+0+46 (1515887977 0 1858460508) 0x7fc980000b70 con 0x7fc9bde15bd0
   -29> 2017-01-21 11:15:55.039080 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:36 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -28> 2017-01-21 11:15:55.040127 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 10 ==== osd_op_reply(36 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv11 ondisk = 0) v7 ==== 205+0+20 (3186222466 0 816618855) 0x7fc978000fc0 con 0x7fc9bde17430
   -27> 2017-01-21 11:15:55.040216 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:37 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -26> 2017-01-21 11:15:55.040624 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 11 ==== osd_op_reply(37 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~110] v0'0 uv11 ondisk = 0) v7 ==== 163+0+110 (247623876 0 1875257244) 0x7fc978000fc0 con 0x7fc9bde17430
   -25> 2017-01-21 11:15:55.040731 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:38 7.152404fb default.zonegroup.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -24> 2017-01-21 11:15:55.041166 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 12 ==== osd_op_reply(38 default.zonegroup.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv52 ondisk = 0) v7 ==== 216+0+20 (1738696704 0 4277343328) 0x7fc978000fc0 con 0x7fc9bde17430
   -23> 2017-01-21 11:15:55.041254 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:39 7.152404fb default.zonegroup.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -22> 2017-01-21 11:15:55.041646 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 13 ==== osd_op_reply(39 default.zonegroup.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~17] v0'0 uv52 ondisk = 0) v7 ==== 174+0+17 (2391656160 0 2082767461) 0x7fc978000fc0 con 0x7fc9bde17430
   -21> 2017-01-21 11:15:55.041753 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.133:6816/11297 -- osd_op(client.107972534.0:40 7.50e8d0b0 zonegroup_info.default [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde216d0
   -20> 2017-01-21 11:15:55.042472 7fc967fff700  1 -- 192.168.217.201:0/2612449765 <== osd.326 192.168.217.133:6816/11297 5 ==== osd_op_reply(40 zonegroup_info.default [getxattrs,stat] v0'0 uv6491 ondisk = 0) v7 ==== 184+0+20 (859156410 0 3493044667) 0x7fc9600008c0 con 0x7fc9bde216d0
   -19> 2017-01-21 11:15:55.042556 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.133:6816/11297 -- osd_op(client.107972534.0:41 7.50e8d0b0 zonegroup_info.default [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde310f0 con 0x7fc9bde216d0
   -18> 2017-01-21 11:15:55.043226 7fc967fff700  1 -- 192.168.217.201:0/2612449765 <== osd.326 192.168.217.133:6816/11297 6 ==== osd_op_reply(41 zonegroup_info.default [read 0~568] v0'0 uv6491 ondisk = 0) v7 ==== 142+0+568 (532320958 0 153494648) 0x7fc960001bb0 con 0x7fc9bde216d0
   -17> 2017-01-21 11:15:55.043338 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:42 7.85fca992 default.realm [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde15bd0
   -16> 2017-01-21 11:15:55.043837 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 8 ==== osd_op_reply(42 default.realm [getxattrs,stat] v0'0 uv308 ondisk = 0) v7 ==== 175+0+20 (3135567767 0 2078908795) 0x7fc980001de0 con 0x7fc9bde15bd0
   -15> 2017-01-21 11:15:55.043927 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.112:6812/8586 -- osd_op(client.107972534.0:43 7.85fca992 default.realm [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde15bd0
   -14> 2017-01-21 11:15:55.045049 7fc99c8a8700  1 -- 192.168.217.201:0/2612449765 <== osd.123 192.168.217.112:6812/8586 9 ==== osd_op_reply(43 default.realm [read 0~46] v0'0 uv308 ondisk = 0) v7 ==== 133+0+46 (1515887977 0 1858460508) 0x7fc980000b70 con 0x7fc9bde15bd0
   -13> 2017-01-21 11:15:55.045143 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:44 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -12> 2017-01-21 11:15:55.045538 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 14 ==== osd_op_reply(44 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv11 ondisk = 0) v7 ==== 205+0+20 (3186222466 0 816618855) 0x7fc978000fc0 con 0x7fc9bde17430
   -11> 2017-01-21 11:15:55.045623 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:45 7.808727c3 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde303a0 con 0x7fc9bde17430
   -10> 2017-01-21 11:15:55.045956 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 15 ==== osd_op_reply(45 realms.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~110] v0'0 uv11 ondisk = 0) v7 ==== 163+0+110 (247623876 0 1875257244) 0x7fc9780008c0 con 0x7fc9bde17430
    -9> 2017-01-21 11:15:55.046055 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:46 7.2688fcf3 default.zone.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde310b0 con 0x7fc9bde17430
    -8> 2017-01-21 11:15:55.046608 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 16 ==== osd_op_reply(46 default.zone.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [getxattrs,stat] v0'0 uv51 ondisk = 0) v7 ==== 211+0+20 (3092092672 0 1093963375) 0x7fc9780008c0 con 0x7fc9bde17430
    -7> 2017-01-21 11:15:55.046694 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.113:6802/6764 -- osd_op(client.107972534.0:47 7.2688fcf3 default.zone.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde310b0 con 0x7fc9bde17430
    -6> 2017-01-21 11:15:55.047068 7fc99c6a6700  1 -- 192.168.217.201:0/2612449765 <== osd.142 192.168.217.113:6802/6764 17 ==== osd_op_reply(47 default.zone.4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a [read 0~17] v0'0 uv51 ondisk = 0) v7 ==== 169+0+17 (2637904352 0 2082767461) 0x7fc9780008c0 con 0x7fc9bde17430
    -5> 2017-01-21 11:15:55.047164 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:48 7.e01f9ae zone_info.default [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde310b0 con 0x7fc9bde1cd10
    -4> 2017-01-21 11:15:55.047675 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 6 ==== osd_op_reply(48 zone_info.default [getxattrs,stat] v0'0 uv144 ondisk = 0) v7 ==== 179+0+20 (409130783 0 3549236032) 0x7fc9700008c0 con 0x7fc9bde1cd10
    -3> 2017-01-21 11:15:55.047769 7fc9bcd1c9c0  1 -- 192.168.217.201:0/2612449765 --> 192.168.217.131:6800/8112 -- osd_op(client.107972534.0:49 7.e01f9ae zone_info.default [read 0~524288] snapc 0=[] ack+read+known_if_redirected e768946) v7 -- ?+0 0x7fc9bde310b0 con 0x7fc9bde1cd10
    -2> 2017-01-21 11:15:55.048477 7fc99c4a4700  1 -- 192.168.217.201:0/2612449765 <== osd.51 192.168.217.131:6800/8112 7 ==== osd_op_reply(49 zone_info.default [read 0~1428] v0'0 uv144 ondisk = 0) v7 ==== 137+0+1428 (4142193290 0 313646020) 0x7fc9700008c0 con 0x7fc9bde1cd10
    -1> 2017-01-21 11:15:55.048557 7fc9bcd1c9c0  0 zonegroup default missing master_zone, setting zone default id:default as master
     0> 2017-01-21 11:15:55.049974 7fc9bcd1c9c0 -1 *** Caught signal (Segmentation fault) **
 in thread 7fc9bcd1c9c0 thread_name:radosgw-admin

 ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
 1: (()+0x56b32a) [0x7fc9b343332a]
 2: (()+0xf370) [0x7fc9a9337370]
 3: (RGWZoneGroup::get_pool_name(CephContext*)+0x19) [0x7fc9b32abbe9]
 4: (RGWSystemMetaObj::store_info(bool)+0x39) [0x7fc9b32b5b89]
 5: (RGWRados::init_zg_from_period(bool*)+0x6af) [0x7fc9b32f6ccf]
 6: (RGWRados::init_complete()+0x1546) [0x7fc9b3302886]
 7: (RGWStoreManager::init_storage_provider(CephContext*, bool, bool, bool)+0x76) [0x7fc9b32bece6]
 8: (main()+0x46ea) [0x7fc9bcd5b98a]
 9: (__libc_start_main()+0xf5) [0x7fc9a8a69b35]
 10: (()+0x363af) [0x7fc9bcd753af]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 newstore
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   1/ 5 kinetic
   1/ 5 fuse
  -2/-2 (syslog threshold)
  99/99 (stderr threshold)
  max_recent       500
  max_new         1000
  log_file 
--- end dump of recent events ---

[-- Attachment #3: details --]
[-- Type: application/octet-stream, Size: 4417 bytes --]

+ radosgw-admin realm list
{
    "default_info": "4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a",
    "realms": [
        "firstrealm"
    ]
}

+ radosgw-admin zonegroup list
read_default_id : 0
{
    "default_info": "default",
    "zonegroups": [
        "default"
    ]
}

+ radosgw-admin zone list
{
    "default_info": "default",
    "zones": [
        "default"
    ]
}

+ radosgw-admin realm get
{
    "id": "4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a",
    "name": "firstrealm",
    "current_period": "2994bf09-46d2-49a9-9fdf-829d561b270f",
    "epoch": 1
}

+ radosgw-admin zonegroup get
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "default",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "true",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "cinemana-placement",
            "tags": []
        },
        {
            "name": "cinemana-source-placement",
            "tags": []
        },
        {
            "name": "default-placement",
            "tags": []
        },
        {
            "name": "erasure-placement",
            "tags": []
        },
        {
            "name": "share-placement",
            "tags": []
        },
        {
            "name": "share2016-placement",
            "tags": []
        },
        {
            "name": "test-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a"
}

+ radosgw-admin zone get
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "cinemana-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets.cinemana",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        },
        {
            "key": "cinemana-source-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets.cinemana.source",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        },
        {
            "key": "default-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        },
        {
            "key": "erasure-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets.erasure",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        },
        {
            "key": "share-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets.share",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        },
        {
            "key": "share2016-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets.share.2016",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        },
        {
            "key": "test-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets.erasure.test",
                "data_extra_pool": ".rgw.buckets.extra",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": "",
    "realm_id": "4c2538ff-b5b0-4bf4-9b40-b11cc6115c8a"
}

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-21  8:24                   ` Mustafa Muhammad
@ 2017-01-22  9:04                     ` Orit Wasserman
  2017-01-22 10:00                       ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2017-01-22  9:04 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> Hello again :)
>
> It still doesn't work for me using 10.2.5:
>
> [root@monitor3 ~]# ceph -v
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> [root@monitor3 ~]# radosgw-admin period update --commit
> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
> zone for master_zone=
> couldn't init storage provider
>
> I think I am hitting:
> http://tracker.ceph.com/issues/17364
>
> So I created new RPMs with this patch:
> https://github.com/ceph/ceph/pull/12315
>
> But now, it crashes when I try to update the period, I've attached the
> output and the details of my zonegroup, I also tried the just released
> Kraken, also crashes.
>

I am working on a fix.
Will you be able to try it?

Orit
> What do you think, what should I do?
>
> Thanks a lot in advance
>
> Regards
> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-22  9:04                     ` Orit Wasserman
@ 2017-01-22 10:00                       ` Mustafa Muhammad
  2017-01-22 13:34                         ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-22 10:00 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
> <mustafa1024m@gmail.com> wrote:
>> Hello again :)
>>
>> It still doesn't work for me using 10.2.5:
>>
>> [root@monitor3 ~]# ceph -v
>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>> [root@monitor3 ~]# radosgw-admin period update --commit
>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>> zone for master_zone=
>> couldn't init storage provider
>>
>> I think I am hitting:
>> http://tracker.ceph.com/issues/17364
>>
>> So I created new RPMs with this patch:
>> https://github.com/ceph/ceph/pull/12315
>>
>> But now, it crashes when I try to update the period, I've attached the
>> output and the details of my zonegroup, I also tried the just released
>> Kraken, also crashes.
>>
>
> I am working on a fix.
> Will you be able to try it?
>

Yes, of course :)

Thank you

Mustafa

> Orit
>> What do you think, what should I do?
>>
>> Thanks a lot in advance
>>
>> Regards
>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-22 10:00                       ` Mustafa Muhammad
@ 2017-01-22 13:34                         ` Orit Wasserman
  2017-01-23  7:40                           ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2017-01-22 13:34 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> Hello again :)
>>>
>>> It still doesn't work for me using 10.2.5:
>>>
>>> [root@monitor3 ~]# ceph -v
>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>> zone for master_zone=
>>> couldn't init storage provider
>>>
>>> I think I am hitting:
>>> http://tracker.ceph.com/issues/17364
>>>
>>> So I created new RPMs with this patch:
>>> https://github.com/ceph/ceph/pull/12315
>>>
>>> But now, it crashes when I try to update the period, I've attached the
>>> output and the details of my zonegroup, I also tried the just released
>>> Kraken, also crashes.
>>>
>>
>> I am working on a fix.
>> Will you be able to try it?
>>

https://github.com/ceph/ceph/pull/13054

Good luck!

>
> Yes, of course :)
>
> Thank you
>
> Mustafa
>
>> Orit
>>> What do you think, what should I do?
>>>
>>> Thanks a lot in advance
>>>
>>> Regards
>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-22 13:34                         ` Orit Wasserman
@ 2017-01-23  7:40                           ` Mustafa Muhammad
  2017-01-23  7:45                             ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-23  7:40 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
> <mustafa1024m@gmail.com> wrote:
>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>> <mustafa1024m@gmail.com> wrote:
>>>> Hello again :)
>>>>
>>>> It still doesn't work for me using 10.2.5:
>>>>
>>>> [root@monitor3 ~]# ceph -v
>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>> zone for master_zone=
>>>> couldn't init storage provider
>>>>
>>>> I think I am hitting:
>>>> http://tracker.ceph.com/issues/17364
>>>>
>>>> So I created new RPMs with this patch:
>>>> https://github.com/ceph/ceph/pull/12315
>>>>
>>>> But now, it crashes when I try to update the period, I've attached the
>>>> output and the details of my zonegroup, I also tried the just released
>>>> Kraken, also crashes.
>>>>
>>>
>>> I am working on a fix.
>>> Will you be able to try it?
>>>
>
> https://github.com/ceph/ceph/pull/13054
>
> Good luck!

This *kind of* worked: it doesn't crash anymore, and the zonegroup
now has a master zone, but I don't have my placement-targets anymore.
I used:

radosgw-admin zone set --rgw-zone=default < new-default-zone.json
radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
radosgw-admin zonegroupmap set < new-zonegroupmap.json

radosgw-admin zonegroup default --rgw-zonegroup=default
radosgw-admin zone default --rgw-zone=default
radosgw-admin period update --commit

Is there something wrong in what I am doing? Can I update the
zonegroupmap directly (like I did), or should I only set the zone and
zonegroup? I tried several things and am still only getting:

                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": []
                    }
                ]


Regards
Mustafa

>
>>
>> Yes, of course :)
>>
>> Thank you
>>
>> Mustafa
>>
>>> Orit
>>>> What do you think, what should I do?
>>>>
>>>> Thanks a lot in advance
>>>>
>>>> Regards
>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23  7:40                           ` Mustafa Muhammad
@ 2017-01-23  7:45                             ` Orit Wasserman
  2017-01-23  8:52                               ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2017-01-23  7:45 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>> <mustafa1024m@gmail.com> wrote:
>>>>> Hello again :)
>>>>>
>>>>> It still doesn't work for me using 10.2.5:
>>>>>
>>>>> [root@monitor3 ~]# ceph -v
>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>> zone for master_zone=
>>>>> couldn't init storage provider
>>>>>
>>>>> I think I am hitting:
>>>>> http://tracker.ceph.com/issues/17364
>>>>>
>>>>> So I created new RPMs with this patch:
>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>
>>>>> But now, it crashes when I try to update the period, I've attached the
>>>>> output and the details of my zonegroup, I also tried the just released
>>>>> Kraken, also crashes.
>>>>>
>>>>
>>>> I am working on a fix.
>>>> Will you be able to try it?
>>>>
>>
>> https://github.com/ceph/ceph/pull/13054
>>
>> Good luck!
>
> This *kind of* worked, it doesn't crash anymore, and zonegroup now
> have master zone, but now I don't have my placement-targets anymore
> I used:
>
> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>

You don't need this, and I suspect it is the problem.
Can you try without this command?
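
Something like this should do it (just a sketch reusing your own json
files, only with the zonegroupmap step left out):

radosgw-admin zone set --rgw-zone=default < new-default-zone.json
radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
radosgw-admin zonegroup default --rgw-zonegroup=default
radosgw-admin zone default --rgw-zone=default
radosgw-admin period update --commit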

> radosgw-admin zonegroup default --rgw-zonegroup=default
> radosgw-admin zone default --rgw-zone=default
> radosgw-admin period update --commit
>
> Is there something wrong I am doing? Can I update zonegroupmap
> directly (like I did) or should I only set zone and zonegroup, tried
> several things, still only getting:
>
>                 "placement_targets": [
>                     {
>                         "name": "default-placement",
>                         "tags": []
>                     }
>                 ]
>
>
> Regards
> Mustafa
>
>>
>>>
>>> Yes, of course :)
>>>
>>> Thank you
>>>
>>> Mustafa
>>>
>>>> Orit
>>>>> What do you think, what should I do?
>>>>>
>>>>> Thanks a lot in advance
>>>>>
>>>>> Regards
>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23  7:45                             ` Orit Wasserman
@ 2017-01-23  8:52                               ` Mustafa Muhammad
  2017-01-23  8:53                                 ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-23  8:52 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 10:45 AM, Orit Wasserman <owasserm@redhat.com> wrote:
> On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
> <mustafa1024m@gmail.com> wrote:
>> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>>> <mustafa1024m@gmail.com> wrote:
>>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>> Hello again :)
>>>>>>
>>>>>> It still doesn't work for me using 10.2.5:
>>>>>>
>>>>>> [root@monitor3 ~]# ceph -v
>>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>>> zone for master_zone=
>>>>>> couldn't init storage provider
>>>>>>
>>>>>> I think I am hitting:
>>>>>> http://tracker.ceph.com/issues/17364
>>>>>>
>>>>>> So I created new RPMs with this patch:
>>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>>
>>>>>> But now, it crashes when I try to update the period, I've attached the
>>>>>> output and the details of my zonegroup, I also tried the just released
>>>>>> Kraken, also crashes.
>>>>>>
>>>>>
>>>>> I am working on a fix.
>>>>> Will you be able to try it?
>>>>>
>>>
>>> https://github.com/ceph/ceph/pull/13054
>>>
>>> Good luck!
>>
>> This *kind of* worked, it doesn't crash anymore, and zonegroup now
>> have master zone, but now I don't have my placement-targets anymore
>> I used:
>>
>> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
>> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
>> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>>
>
> You don't need this and I suspect this it is the problem.
> Can you try without this command?

It worked fine, but after restarting the RGW containers the
configuration was lost again. After some retries, I found that
starting the older container (10.2.2, because I couldn't use anything
newer before) was causing the revert.

Now I have started only the 10.2.5 RGWs and everything works fine.
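
(For anyone hitting the same thing: an easy sanity check after the
restart, assuming only the 10.2.5 daemons are left running, is to dump
the zonegroup again, e.g.

radosgw-admin zonegroup get --rgw-zonegroup=default

and make sure all the placement_targets are still there.)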

Thank you very much, really appreciated.

Regards
Mustafa

>
>> radosgw-admin zonegroup default --rgw-zonegroup=default
>> radosgw-admin zone default --rgw-zone=default
>> radosgw-admin period update --commit
>>
>> Is there something wrong I am doing? Can I update zonegroupmap
>> directly (like I did) or should I only set zone and zonegroup, tried
>> several things, still only getting:
>>
>>                 "placement_targets": [
>>                     {
>>                         "name": "default-placement",
>>                         "tags": []
>>                     }
>>                 ]
>>
>>
>> Regards
>> Mustafa
>>
>>>
>>>>
>>>> Yes, of course :)
>>>>
>>>> Thank you
>>>>
>>>> Mustafa
>>>>
>>>>> Orit
>>>>>> What do you think, what should I do?
>>>>>>
>>>>>> Thanks a lot in advance
>>>>>>
>>>>>> Regards
>>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23  8:52                               ` Mustafa Muhammad
@ 2017-01-23  8:53                                 ` Orit Wasserman
  2017-01-23  9:03                                   ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2017-01-23  8:53 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 10:52 AM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> On Mon, Jan 23, 2017 at 10:45 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>> On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>>>> <mustafa1024m@gmail.com> wrote:
>>>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>> Hello again :)
>>>>>>>
>>>>>>> It still doesn't work for me using 10.2.5:
>>>>>>>
>>>>>>> [root@monitor3 ~]# ceph -v
>>>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>>>> zone for master_zone=
>>>>>>> couldn't init storage provider
>>>>>>>
>>>>>>> I think I am hitting:
>>>>>>> http://tracker.ceph.com/issues/17364
>>>>>>>
>>>>>>> So I created new RPMs with this patch:
>>>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>>>
>>>>>>> But now, it crashes when I try to update the period, I've attached the
>>>>>>> output and the details of my zonegroup, I also tried the just released
>>>>>>> Kraken, also crashes.
>>>>>>>
>>>>>>
>>>>>> I am working on a fix.
>>>>>> Will you be able to try it?
>>>>>>
>>>>
>>>> https://github.com/ceph/ceph/pull/13054
>>>>
>>>> Good luck!
>>>
>>> This *kind of* worked, it doesn't crash anymore, and zonegroup now
>>> have master zone, but now I don't have my placement-targets anymore
>>> I used:
>>>
>>> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
>>> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
>>> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>>>
>>
>> You don't need this and I suspect this it is the problem.
>> Can you try without this command?
>
> It worked fine, then after restarting the RGW containers, it was lost
> again, after some retries, I found that starting the older container
> (10.2.2 because I couldn't use anything newer before), is causing the
> revert.
>
> Now I only started 10.2.5 RGWs and everything works fine.
>
> Thank you very much, really appreciated.
>

:)

Can you open a tracker issue for the zonegroupmap command problem?
This way it will be documented.

> Regards
> Mustafa
>
>>
>>> radosgw-admin zonegroup default --rgw-zonegroup=default
>>> radosgw-admin zone default --rgw-zone=default
>>> radosgw-admin period update --commit
>>>
>>> Is there something wrong I am doing? Can I update zonegroupmap
>>> directly (like I did) or should I only set zone and zonegroup, tried
>>> several things, still only getting:
>>>
>>>                 "placement_targets": [
>>>                     {
>>>                         "name": "default-placement",
>>>                         "tags": []
>>>                     }
>>>                 ]
>>>
>>>
>>> Regards
>>> Mustafa
>>>
>>>>
>>>>>
>>>>> Yes, of course :)
>>>>>
>>>>> Thank you
>>>>>
>>>>> Mustafa
>>>>>
>>>>>> Orit
>>>>>>> What do you think, what should I do?
>>>>>>>
>>>>>>> Thanks a lot in advance
>>>>>>>
>>>>>>> Regards
>>>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23  8:53                                 ` Orit Wasserman
@ 2017-01-23  9:03                                   ` Mustafa Muhammad
  2017-01-23 10:22                                     ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-23  9:03 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 11:53 AM, Orit Wasserman <owasserm@redhat.com> wrote:
> On Mon, Jan 23, 2017 at 10:52 AM, Mustafa Muhammad
> <mustafa1024m@gmail.com> wrote:
>> On Mon, Jan 23, 2017 at 10:45 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>>> On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
>>> <mustafa1024m@gmail.com> wrote:
>>>> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>>> Hello again :)
>>>>>>>>
>>>>>>>> It still doesn't work for me using 10.2.5:
>>>>>>>>
>>>>>>>> [root@monitor3 ~]# ceph -v
>>>>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>>>>> zone for master_zone=
>>>>>>>> couldn't init storage provider
>>>>>>>>
>>>>>>>> I think I am hitting:
>>>>>>>> http://tracker.ceph.com/issues/17364
>>>>>>>>
>>>>>>>> So I created new RPMs with this patch:
>>>>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>>>>
>>>>>>>> But now, it crashes when I try to update the period, I've attached the
>>>>>>>> output and the details of my zonegroup, I also tried the just released
>>>>>>>> Kraken, also crashes.
>>>>>>>>
>>>>>>>
>>>>>>> I am working on a fix.
>>>>>>> Will you be able to try it?
>>>>>>>
>>>>>
>>>>> https://github.com/ceph/ceph/pull/13054
>>>>>
>>>>> Good luck!
>>>>
>>>> This *kind of* worked, it doesn't crash anymore, and zonegroup now
>>>> have master zone, but now I don't have my placement-targets anymore
>>>> I used:
>>>>
>>>> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
>>>> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
>>>> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>>>>
>>>
>>> You don't need this and I suspect this it is the problem.
>>> Can you try without this command?
>>
>> It worked fine, then after restarting the RGW containers, it was lost
>> again, after some retries, I found that starting the older container
>> (10.2.2 because I couldn't use anything newer before), is causing the
>> revert.
>>
>> Now I only started 10.2.5 RGWs and everything works fine.
>>
>> Thank you very much, really appreciated.
>>
>
> :)
>
> Can you open a tracker issue for the zonegroupmap command problem?
> this way it will documented.

Do you mean the "radosgw-admin zonegroupmap set <
new-zonegroupmap.json" command, or the whole issue? I can't verify
that it was the cause; the older instances might be the cause instead,
so I'm not sure what to write in the issue.

Regards
Mustafa
>
>> Regards
>> Mustafa
>>
>>>
>>>> radosgw-admin zonegroup default --rgw-zonegroup=default
>>>> radosgw-admin zone default --rgw-zone=default
>>>> radosgw-admin period update --commit
>>>>
>>>> Is there something wrong I am doing? Can I update zonegroupmap
>>>> directly (like I did) or should I only set zone and zonegroup, tried
>>>> several things, still only getting:
>>>>
>>>>                 "placement_targets": [
>>>>                     {
>>>>                         "name": "default-placement",
>>>>                         "tags": []
>>>>                     }
>>>>                 ]
>>>>
>>>>
>>>> Regards
>>>> Mustafa
>>>>
>>>>>
>>>>>>
>>>>>> Yes, of course :)
>>>>>>
>>>>>> Thank you
>>>>>>
>>>>>> Mustafa
>>>>>>
>>>>>>> Orit
>>>>>>>> What do you think, what should I do?
>>>>>>>>
>>>>>>>> Thanks a lot in advance
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23  9:03                                   ` Mustafa Muhammad
@ 2017-01-23 10:22                                     ` Orit Wasserman
  2017-01-23 12:40                                       ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2017-01-23 10:22 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 11:03 AM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> On Mon, Jan 23, 2017 at 11:53 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>> On Mon, Jan 23, 2017 at 10:52 AM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> On Mon, Jan 23, 2017 at 10:45 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
>>>> <mustafa1024m@gmail.com> wrote:
>>>>> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>>>> Hello again :)
>>>>>>>>>
>>>>>>>>> It still doesn't work for me using 10.2.5:
>>>>>>>>>
>>>>>>>>> [root@monitor3 ~]# ceph -v
>>>>>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>>>>>> zone for master_zone=
>>>>>>>>> couldn't init storage provider
>>>>>>>>>
>>>>>>>>> I think I am hitting:
>>>>>>>>> http://tracker.ceph.com/issues/17364
>>>>>>>>>
>>>>>>>>> So I created new RPMs with this patch:
>>>>>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>>>>>
>>>>>>>>> But now, it crashes when I try to update the period, I've attached the
>>>>>>>>> output and the details of my zonegroup, I also tried the just released
>>>>>>>>> Kraken, also crashes.
>>>>>>>>>
>>>>>>>>
>>>>>>>> I am working on a fix.
>>>>>>>> Will you be able to try it?
>>>>>>>>
>>>>>>
>>>>>> https://github.com/ceph/ceph/pull/13054
>>>>>>
>>>>>> Good luck!
>>>>>
>>>>> This *kind of* worked, it doesn't crash anymore, and zonegroup now
>>>>> have master zone, but now I don't have my placement-targets anymore
>>>>> I used:
>>>>>
>>>>> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
>>>>> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
>>>>> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>>>>>
>>>>
>>>> You don't need this and I suspect this it is the problem.
>>>> Can you try without this command?
>>>
>>> It worked fine, then after restarting the RGW containers, it was lost
>>> again, after some retries, I found that starting the older container
>>> (10.2.2 because I couldn't use anything newer before), is causing the
>>> revert.
>>>
>>> Now I only started 10.2.5 RGWs and everything works fine.
>>>
>>> Thank you very much, really appreciated.
>>>
>>
>> :)
>>
>> Can you open a tracker issue for the zonegroupmap command problem?
>> this way it will documented.
>
> Do you mean the "radosgw-admin zonegroupmap set  <
> new-zonegroupmap.json" or the whole issue? I can't verify that it was
> the cause, the older instances might be the cause, so not sure what to
> write in the issue?
>
For the crash I have already opened an issue:
http://tracker.ceph.com/issues/18631
I meant opening an issue for the "zonegroupmap set" command that made
the new placement configuration disappear.
Please include in the issue description the commands you used and, if
you can, attach the JSON files.

Thanks,
Orit
> Regards
> Mustafa
>>
>>> Regards
>>> Mustafa
>>>
>>>>
>>>>> radosgw-admin zonegroup default --rgw-zonegroup=default
>>>>> radosgw-admin zone default --rgw-zone=default
>>>>> radosgw-admin period update --commit
>>>>>
>>>>> Is there something wrong I am doing? Can I update zonegroupmap
>>>>> directly (like I did) or should I only set zone and zonegroup, tried
>>>>> several things, still only getting:
>>>>>
>>>>>                 "placement_targets": [
>>>>>                     {
>>>>>                         "name": "default-placement",
>>>>>                         "tags": []
>>>>>                     }
>>>>>                 ]
>>>>>
>>>>>
>>>>> Regards
>>>>> Mustafa
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Yes, of course :)
>>>>>>>
>>>>>>> Thank you
>>>>>>>
>>>>>>> Mustafa
>>>>>>>
>>>>>>>> Orit
>>>>>>>>> What do you think, what should I do?
>>>>>>>>>
>>>>>>>>> Thanks a lot in advance
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23 10:22                                     ` Orit Wasserman
@ 2017-01-23 12:40                                       ` Mustafa Muhammad
  2017-01-23 13:08                                         ` Orit Wasserman
  0 siblings, 1 reply; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-23 12:40 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 1:22 PM, Orit Wasserman <owasserm@redhat.com> wrote:
> On Mon, Jan 23, 2017 at 11:03 AM, Mustafa Muhammad
> <mustafa1024m@gmail.com> wrote:
>> On Mon, Jan 23, 2017 at 11:53 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>>> On Mon, Jan 23, 2017 at 10:52 AM, Mustafa Muhammad
>>> <mustafa1024m@gmail.com> wrote:
>>>> On Mon, Jan 23, 2017 at 10:45 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>> On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>>>>> Hello again :)
>>>>>>>>>>
>>>>>>>>>> It still doesn't work for me using 10.2.5:
>>>>>>>>>>
>>>>>>>>>> [root@monitor3 ~]# ceph -v
>>>>>>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>>>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>>>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>>>>>>> zone for master_zone=
>>>>>>>>>> couldn't init storage provider
>>>>>>>>>>
>>>>>>>>>> I think I am hitting:
>>>>>>>>>> http://tracker.ceph.com/issues/17364
>>>>>>>>>>
>>>>>>>>>> So I created new RPMs with this patch:
>>>>>>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>>>>>>
>>>>>>>>>> But now it crashes when I try to update the period. I've attached the
>>>>>>>>>> output and the details of my zonegroup. I also tried the just-released
>>>>>>>>>> Kraken, and it also crashes.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am working on a fix.
>>>>>>>>> Will you be able to try it?
>>>>>>>>>
>>>>>>>
>>>>>>> https://github.com/ceph/ceph/pull/13054
>>>>>>>
>>>>>>> Good luck!
>>>>>>
>>>>>> This *kind of* worked: it doesn't crash anymore, and the zonegroup now
>>>>>> has a master zone, but I don't have my placement targets anymore.
>>>>>> I used:
>>>>>>
>>>>>> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
>>>>>> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
>>>>>> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>>>>>>
>>>>>
>>>>> You don't need this and I suspect it is the problem.
>>>>> Can you try without this command?
>>>>
>>>> It worked fine, but after restarting the RGW containers it was lost
>>>> again. After some retries, I found that starting the older container
>>>> (10.2.2, because I couldn't use anything newer before) was causing the
>>>> revert.
>>>>
>>>> Now I only started 10.2.5 RGWs and everything works fine.
>>>>
>>>> Thank you very much, really appreciated.
>>>>
>>>
>>> :)
>>>
>>> Can you open a tracker issue for the zonegroupmap command problem?
>>> That way it will be documented.
>>
>> Do you mean the "radosgw-admin zonegroupmap set <
>> new-zonegroupmap.json" command, or the whole issue? I can't verify that
>> it was the cause; the older instances might be responsible, so I'm not
>> sure what to write in the issue.
>>
> For the crash I have already opened an issue:
> http://tracker.ceph.com/issues/18631.
> I meant opening an issue for the "zonegroupmap set" command that made
> the new placement configuration disappear.
> Please add the commands you used to the issue description and, if you
> can, attach the JSON files.

I tested again, and it was not the "zonegroupmap set" that caused this;
it was the older RGWs. I retested "zonegroupmap set" and it didn't mess
anything up, but it also didn't change the map (I changed
bucket_index_max_shards but it stayed the same). So I'll create two
issues: one for the older RGW versions messing up the map, and another
for the map not changing after the set operation.
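Roughly what I tried, to give an idea (the file name and the new shard
value are just examples from my test, nothing special):

radosgw-admin zonegroupmap get > zgmap.json
# edited bucket_index_max_shards in zgmap.json, e.g. 0 -> 8
radosgw-admin zonegroupmap set < zgmap.json
radosgw-admin zonegroupmap get
# bucket_index_max_shards still shows the old value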

Regards
Mustafa

>
> Thanks,
> Orit
>> Regards
>> Mustafa
>>>
>>>> Regards
>>>> Mustafa
>>>>
>>>>>
>>>>>> radosgw-admin zonegroup default --rgw-zonegroup=default
>>>>>> radosgw-admin zone default --rgw-zone=default
>>>>>> radosgw-admin period update --commit
>>>>>>
>>>>>> Is there something wrong in what I am doing? Can I update the
>>>>>> zonegroupmap directly (like I did), or should I only set the zone and
>>>>>> zonegroup? I tried several things and still only get:
>>>>>>
>>>>>>                 "placement_targets": [
>>>>>>                     {
>>>>>>                         "name": "default-placement",
>>>>>>                         "tags": []
>>>>>>                     }
>>>>>>                 ]
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Mustafa
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Yes, of course :)
>>>>>>>>
>>>>>>>> Thank you
>>>>>>>>
>>>>>>>> Mustafa
>>>>>>>>
>>>>>>>>> Orit
>>>>>>>>>> What do you think, what should I do?
>>>>>>>>>>
>>>>>>>>>> Thanks a lot in advance
>>>>>>>>>>
>>>>>>>>>> Regards
>>>>>>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23 12:40                                       ` Mustafa Muhammad
@ 2017-01-23 13:08                                         ` Orit Wasserman
  2017-01-28 12:32                                           ` Mustafa Muhammad
  0 siblings, 1 reply; 21+ messages in thread
From: Orit Wasserman @ 2017-01-23 13:08 UTC (permalink / raw)
  To: Mustafa Muhammad; +Cc: ceph-devel

On Mon, Jan 23, 2017 at 2:40 PM, Mustafa Muhammad
<mustafa1024m@gmail.com> wrote:
> On Mon, Jan 23, 2017 at 1:22 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>> On Mon, Jan 23, 2017 at 11:03 AM, Mustafa Muhammad
>> <mustafa1024m@gmail.com> wrote:
>>> On Mon, Jan 23, 2017 at 11:53 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>> On Mon, Jan 23, 2017 at 10:52 AM, Mustafa Muhammad
>>>> <mustafa1024m@gmail.com> wrote:
>>>>> On Mon, Jan 23, 2017 at 10:45 AM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>> On Mon, Jan 23, 2017 at 9:40 AM, Mustafa Muhammad
>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>> On Sun, Jan 22, 2017 at 4:34 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>>> On Sun, Jan 22, 2017 at 12:00 PM, Mustafa Muhammad
>>>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>>>> On Sun, Jan 22, 2017 at 12:04 PM, Orit Wasserman <owasserm@redhat.com> wrote:
>>>>>>>>>> On Sat, Jan 21, 2017 at 10:24 AM, Mustafa Muhammad
>>>>>>>>>> <mustafa1024m@gmail.com> wrote:
>>>>>>>>>>> Hello again :)
>>>>>>>>>>>
>>>>>>>>>>> It still doesn't work for me using 10.2.5:
>>>>>>>>>>>
>>>>>>>>>>> [root@monitor3 ~]# ceph -v
>>>>>>>>>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>>>>>>>>> [root@monitor3 ~]# radosgw-admin period update --commit
>>>>>>>>>>> 2017-01-21 11:06:20.659487 7f2ca18979c0  0 zonegroup default missing
>>>>>>>>>>> zone for master_zone=
>>>>>>>>>>> couldn't init storage provider
>>>>>>>>>>>
>>>>>>>>>>> I think I am hitting:
>>>>>>>>>>> http://tracker.ceph.com/issues/17364
>>>>>>>>>>>
>>>>>>>>>>> So I created new RPMs with this patch:
>>>>>>>>>>> https://github.com/ceph/ceph/pull/12315
>>>>>>>>>>>
>>>>>>>>>>> But now it crashes when I try to update the period. I've attached the
>>>>>>>>>>> output and the details of my zonegroup. I also tried the just-released
>>>>>>>>>>> Kraken, and it also crashes.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I am working on a fix.
>>>>>>>>>> Will you be able to try it?
>>>>>>>>>>
>>>>>>>>
>>>>>>>> https://github.com/ceph/ceph/pull/13054
>>>>>>>>
>>>>>>>> Good luck!
>>>>>>>
>>>>>>> This *kind of* worked: it doesn't crash anymore, and the zonegroup now
>>>>>>> has a master zone, but I don't have my placement targets anymore.
>>>>>>> I used:
>>>>>>>
>>>>>>> radosgw-admin zone set --rgw-zone=default < new-default-zone.json
>>>>>>> radosgw-admin zonegroup set --rgw-zonegroup=default < new-default-zg.json
>>>>>>> radosgw-admin zonegroupmap set < new-zonegroupmap.json
>>>>>>>
>>>>>>
>>>>>> You don't need this and I suspect it is the problem.
>>>>>> Can you try without this command?
>>>>>
>>>>> It worked fine, but after restarting the RGW containers it was lost
>>>>> again. After some retries, I found that starting the older container
>>>>> (10.2.2, because I couldn't use anything newer before) was causing the
>>>>> revert.
>>>>>
>>>>> Now I only started 10.2.5 RGWs and everything works fine.
>>>>>
>>>>> Thank you very much, really appreciated.
>>>>>
>>>>
>>>> :)
>>>>
>>>> Can you open a tracker issue for the zonegroupmap command problem?
>>>> That way it will be documented.
>>>
>>> Do you mean the "radosgw-admin zonegroupmap set <
>>> new-zonegroupmap.json" command, or the whole issue? I can't verify that
>>> it was the cause; the older instances might be responsible, so I'm not
>>> sure what to write in the issue.
>>>
>> For the crash I have already opened an issue:
>> http://tracker.ceph.com/issues/18631.
>> I meant opening an issue for the "zonegroupmap set" command that made
>> the new placement configuration disappear.
>> Please add the commands you used to the issue description and, if you
>> can, attach the JSON files.
>
> I tested again, and it was not the "zonegroupmap set" that caused this;
> it was the older RGWs. I retested "zonegroupmap set" and it didn't mess
> anything up, but it also didn't change the map (I changed
> bucket_index_max_shards but it stayed the same). So I'll create two
> issues: one for the older RGW versions messing up the map, and another
> for the map not changing after the set operation.
>
Good catch
Thanks,
Orit

> Regards
> Mustafa
>
>>
>> Thanks,
>> Orit
>>> Regards
>>> Mustafa
>>>>
>>>>> Regards
>>>>> Mustafa
>>>>>
>>>>>>
>>>>>>> radosgw-admin zonegroup default --rgw-zonegroup=default
>>>>>>> radosgw-admin zone default --rgw-zone=default
>>>>>>> radosgw-admin period update --commit
>>>>>>>
>>>>>>> Is there something wrong in what I am doing? Can I update the
>>>>>>> zonegroupmap directly (like I did), or should I only set the zone and
>>>>>>> zonegroup? I tried several things and still only get:
>>>>>>>
>>>>>>>                 "placement_targets": [
>>>>>>>                     {
>>>>>>>                         "name": "default-placement",
>>>>>>>                         "tags": []
>>>>>>>                     }
>>>>>>>                 ]
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Mustafa
>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Yes, of course :)
>>>>>>>>>
>>>>>>>>> Thank you
>>>>>>>>>
>>>>>>>>> Mustafa
>>>>>>>>>
>>>>>>>>>> Orit
>>>>>>>>>>> What do you think, what should I do?
>>>>>>>>>>>
>>>>>>>>>>> Thanks a lot in advance
>>>>>>>>>>>
>>>>>>>>>>> Regards
>>>>>>>>>>> Mustafa Muhammad

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Lots of radosgw-admin commands fail after upgrade
  2017-01-23 13:08                                         ` Orit Wasserman
@ 2017-01-28 12:32                                           ` Mustafa Muhammad
  0 siblings, 0 replies; 21+ messages in thread
From: Mustafa Muhammad @ 2017-01-28 12:32 UTC (permalink / raw)
  To: Orit Wasserman; +Cc: ceph-devel

I reported:

http://tracker.ceph.com/issues/18725
http://tracker.ceph.com/issues/18726

Sorry for the delay :)

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2017-01-28 12:32 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-01 13:13 Lots of radosgw-admin commands fail after upgrade Mustafa Muhammad
2016-11-01 14:04 ` Orit Wasserman
     [not found]   ` <CAFehDbC6yBsbQCPVmwE+DCod-Xmbafp1MU_r1wYZ0xd_q3Dt3Q@mail.gmail.com>
2016-11-02  9:39     ` Orit Wasserman
     [not found]       ` <CAFehDbC1kRQV+rQbD_r-yFHD2ymWXCUR1go2nu6y7FtoWB_t7g@mail.gmail.com>
2016-11-02 12:36         ` Orit Wasserman
2016-11-07  9:05       ` Mustafa Muhammad
2016-11-08 11:21         ` Orit Wasserman
     [not found]           ` <CAFehDbDaDdMHTtxLqq8kjd5Xd9RePqDDCXtJm7_7UMCD7Q3LOg@mail.gmail.com>
     [not found]             ` <CABo9giTWVHYGdrqpmtYP8-iDY5tM+a4MrBczwha27=g-HzmRcw@mail.gmail.com>
2016-11-09  5:45               ` Mustafa Muhammad
2016-11-09 10:11                 ` Orit Wasserman
2017-01-21  8:24                   ` Mustafa Muhammad
2017-01-22  9:04                     ` Orit Wasserman
2017-01-22 10:00                       ` Mustafa Muhammad
2017-01-22 13:34                         ` Orit Wasserman
2017-01-23  7:40                           ` Mustafa Muhammad
2017-01-23  7:45                             ` Orit Wasserman
2017-01-23  8:52                               ` Mustafa Muhammad
2017-01-23  8:53                                 ` Orit Wasserman
2017-01-23  9:03                                   ` Mustafa Muhammad
2017-01-23 10:22                                     ` Orit Wasserman
2017-01-23 12:40                                       ` Mustafa Muhammad
2017-01-23 13:08                                         ` Orit Wasserman
2017-01-28 12:32                                           ` Mustafa Muhammad
