* chooseleaf may cause some unnecessary pg migrations
@ 2015-10-13  8:22 Xusangdi
  2015-10-13 14:20 ` Robert LeBlanc
  2015-10-13 16:44 ` Sage Weil
  0 siblings, 2 replies; 20+ messages in thread
From: Xusangdi @ 2015-10-13  8:22 UTC (permalink / raw)
  To: sweil; +Cc: ceph-devel

Hi Sage,

Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause some unnecessary pg migrations when OSDs are outed.
For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the mapping differences would be like this:
pgid    before <-> after        diff    diff_num
0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
Changed pg ratio: 30 / 32
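
(Side note on reproducing such a table: a before/after comparison like the one above can be approximated with
crushtool alone. This is only a sketch of one possible workflow, not necessarily how the table was generated --
the pgids above come from a real osdmap, while crushtool just enumerates test values x -- and it assumes the
--test options --show-mappings and --weight are available in your crushtool build:

  crushtool --test -i crushmap --num_rep 3 --show-mappings > before.txt
  crushtool --test -i crushmap --num_rep 3 --weight 2 0 --weight 3 0 --show-mappings > after.txt
  diff before.txt after.txt

Here "--weight 2 0 --weight 3 0" gives osd.2 and osd.3 a test weight of 0, simulating them being out.)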

I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the modification the result would be like this:
pgid    before <-> after        diff    diff_num
0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
Changed pg ratio: 20 / 32

Currently the only defect I can see from the change is that the chance for a given pg to successfully choose the required number of available OSDs might be a bit lower than before. However, I believe this will cause problems only when the cluster is very small and degraded, and in that case we can still make it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
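
For example, one way this kind of tuning can be expressed is as an explicit rule step (a sketch only; the rule body
and bucket names are assumptions based on the 4-host setup above, and the value 5 is purely illustrative):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step set_chooseleaf_tries 5
        step take root
        step chooseleaf firstn 0 type host
        step emit
}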

Anyway, I'm not sure whether it would raise any other issues. Could you please review it and maybe give me some suggestions? Thank you!

----------
Best regards,
Sangdi

-------------------------------------------------------------------------------------------------------------------------------------
本邮件及其附件含有杭州华三通信技术有限公司的保密信息,仅限于发送给上面地址中列出
的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制、
或散发)本邮件中的信息。如果您错收了本邮件,请您立即电话或邮件通知发件人并删除本
邮件!
This e-mail and its attachments contain confidential information from H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: chooseleaf may cause some unnecessary pg migrations
  2015-10-13  8:22 chooseleaf may cause some unnecessary pg migrations Xusangdi
@ 2015-10-13 14:20 ` Robert LeBlanc
  2015-10-14  2:15   ` Xusangdi
  2015-10-13 16:44 ` Sage Weil
  1 sibling, 1 reply; 20+ messages in thread
From: Robert LeBlanc @ 2015-10-13 14:20 UTC (permalink / raw)
  To: Xusangdi; +Cc: sweil, ceph-devel

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Are you testing with straw or straw2?
- ----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Tue, Oct 13, 2015 at 2:22 AM, Xusangdi  wrote:
> Hi Sage,
>
> Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause some unnecessary pg migrations when OSDs are outed.
> For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the mapping differences would be like this:
> pgid    before <-> after        diff    diff_num
> 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> Changed pg ratio: 30 / 32
>
> I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the modification the result would be like this:
> pgid    before <-> after        diff    diff_num
> 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> Changed pg ratio: 20 / 32
>
> Currently the only defect I can see from the change is that the chance for a given pg to successfully choose required available OSDs might be a bit lower compared with before. However, I believe it will cause problems only when the cluster is pretty small and degraded. And in that case, we can still make it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
>
> Anyway I'm not sure if it would raise any other issues, could you please review it and maybe give me some suggestions? Thank you!
>
> ----------
> Best regards,
> Sangdi
>

-----BEGIN PGP SIGNATURE-----
Version: Mailvelope v1.2.0
Comment: https://www.mailvelope.com

wsFcBAEBCAAQBQJWHRM4CRDmVDuy+mK58QAARVMP/jhhtyRsiUXw4kl2ikso
F8CiAwPuGRMvFSa2CXqzvaHnNjiy8Q4uR8o0KgcR04eiLGPUeahjyAQ73+8k
geryb9ymjoDFjkKX2n7YxCHy/MnB5HayNIuUPi+KUFzpradx1v7S54XL2DHm
mDRR2DDeou9H6WcIqknRh4e6fc1a70E2CbpKr9qu7AiNiEfRZzXod//joavW
h0MkYC0Ug41UG64R9QTCJOKp+wSjri+IUgSSrs3WPYXb5W1jZPFIhsFkigws
VgitZTv3+rO5ZyHbtCR+3yNI5isU18Lhf+Dr01MExUuyCQQz6zODXV0W+xgP
wsMSe8ZXXr84a/8MKoP90mr2pNiiasMwWrcZ/klQ9J4AIqh8DJEHJeAWf+4N
pYWTiRFbq3NZzIUjTBqtP/AliKvCTDQhVP3E8hK1qYg4Gv0gQ0Zu76F5c5/p
rj9HTZa+o8rSQM0TDuiqKSMEJUcuMt/TScWmQNZF1GTb3HSx6LW6H+aOkLuE
N0Fi+rkYupxXC3P3HnU35GMzlum//j/svIFkLOA5V5abVAttcxrGg9jpebUO
i3f4DR6e86RNLMaakNoybYlK9J+7j3JjKydBTqkDn9sKBeMaE/oW21Ft99/z
eJDLf+8xGt02tV512mPDw8SWJZUws3/B4qc4yrkYUe2aWBeHrE7vIX8ZgC1M
icrE
=/pQd
-----END PGP SIGNATURE-----
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: chooseleaf may cause some unnecessary pg migrations
  2015-10-13  8:22 chooseleaf may cause some unnecessary pg migrations Xusangdi
  2015-10-13 14:20 ` Robert LeBlanc
@ 2015-10-13 16:44 ` Sage Weil
  2015-10-14  5:22   ` Xusangdi
  1 sibling, 1 reply; 20+ messages in thread
From: Sage Weil @ 2015-10-13 16:44 UTC (permalink / raw)
  To: Xusangdi; +Cc: ceph-devel

Hi Sangdi,

On Tue, 13 Oct 2015, Xusangdi wrote:
> Hi Sage,
> 
> Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause some unnecessary pg migrations when OSDs are outed.
> For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the mapping differences would be like this:
> pgid    before <-> after        diff    diff_num
> 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> Changed pg ratio: 30 / 32
> 
> I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the modification the result would be like this:

Can you describe the reasoning behind the change?  I can't make sense 
of it.  The recursive call is still picking just 1 item, but it looks 
like it is always choosing a device for slot 0 and not the current 
slot in the caller.  I'm not sure how that could generate a correct 
result.

Perhaps you can share the osd tree output so we can see how your map is 
structured?

It is normal for some mappings to have multiple items change, as I've 
described previously on this list.  It's a result of the fact that we're 
considering totally independent series of items, but whether we accept a 
choice also depends on the previous choices (we do not allow dups).. which 
means that you get echo effects in later slots when we make a different 
choice in earlier slots.
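
As a toy illustration of that echo effect (a first-fit model with dup rejection written just for this note, not
CRUSH itself): each slot draws from its own independent preference order, and changing the choice in an earlier
slot alters which candidates are still free for the later slots:

#include <stdio.h>

/* first-fit pick: take the first candidate not already chosen (toy model
 * of the dup rejection described above, not the real mapper code) */
static int pick(const int *prefs, int nprefs, const int *chosen, int nchosen)
{
    int i, j;
    for (i = 0; i < nprefs; i++) {
        int dup = 0;
        for (j = 0; j < nchosen; j++)
            if (chosen[j] == prefs[i])
                dup = 1;
        if (!dup)
            return prefs[i];
    }
    return -1;
}

int main(void)
{
    /* each slot has its own independent preference order */
    int slot1_prefs[] = { 4, 6, 0 };
    int slot2_prefs[] = { 6, 0, 4 };
    int out[3], n;

    /* before: slot 0 maps to 6, later slots fill in around it */
    n = 0;
    out[n++] = 6;
    out[n] = pick(slot1_prefs, 3, out, n); n++;
    out[n] = pick(slot2_prefs, 3, out, n); n++;
    printf("before: [%d, %d, %d]\n", out[0], out[1], out[2]);

    /* after: slot 0's choice changes to 4; the later slots now see a
     * different set of earlier choices, so their results can change too
     * even though their own preference orders did not */
    n = 0;
    out[n++] = 4;
    out[n] = pick(slot1_prefs, 3, out, n); n++;
    out[n] = pick(slot2_prefs, 3, out, n); n++;
    printf("after:  [%d, %d, %d]\n", out[0], out[1], out[2]);
    return 0;
}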

Thanks!
sage


> pgid    before <-> after        diff    diff_num
> 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> Changed pg ratio: 20 / 32
> 
> Currently the only defect I can see from the change is that the chance for a given pg to successfully choose required available OSDs might be a bit lower compared with before. However, I believe it will cause problems only when the cluster is pretty small and degraded. And in that case, we can still make it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
> 
> Anyway I'm not sure if it would raise any other issues, could you please review it and maybe give me some suggestions? Thank you!
> 
> ----------
> Best regards,
> Sangdi
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-13 14:20 ` Robert LeBlanc
@ 2015-10-14  2:15   ` Xusangdi
  2015-10-14 12:18     ` Sage Weil
  0 siblings, 1 reply; 20+ messages in thread
From: Xusangdi @ 2015-10-14  2:15 UTC (permalink / raw)
  To: Robert LeBlanc; +Cc: ceph-devel

Straw2. But I have also run the same test with the straw algorithm, which generated quite similar results.

> -----Original Message-----
> From: Robert LeBlanc [mailto:robert@leblancnet.us]
> Sent: Tuesday, October 13, 2015 10:21 PM
> To: xusangdi 11976 (RD)
> Cc: sweil@redhat.com; ceph-devel@vger.kernel.org
> Subject: Re: chooseleaf may cause some unnecessary pg migrations
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
> 
> Are you testing with straw or straw2?
> - ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> 
> 
> On Tue, Oct 13, 2015 at 2:22 AM, Xusangdi  wrote:
> > Hi Sage,
> >
> > Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause
> some unnecessary pg migrations when OSDs are outed.
> > For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the
> mapping differences would be like this:
> > pgid    before <-> after        diff    diff_num
> > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > Changed pg ratio: 30 / 32
> >
> > I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the
> modification the result would be like this:
> > pgid    before <-> after        diff    diff_num
> > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > Changed pg ratio: 20 / 32
> >
> > Currently the only defect I can see from the change is that the chance for a given pg to successfully
> choose required available OSDs might be a bit lower compared with before. However, I believe it will
> cause problems only when the cluster is pretty small and degraded. And in that case, we can still make
> it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
> >
> > Anyway I'm not sure if it would raise any other issues, could you please review it and maybe give me
> some suggestions? Thank you!
> >
> > ----------
> > Best regards,
> > Sangdi
> >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-13 16:44 ` Sage Weil
@ 2015-10-14  5:22   ` Xusangdi
  2015-10-14 14:17     ` Sage Weil
  0 siblings, 1 reply; 20+ messages in thread
From: Xusangdi @ 2015-10-14  5:22 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

Please see inline.

> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> Sage Weil
> Sent: Wednesday, October 14, 2015 12:45 AM
> To: xusangdi 11976 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: chooseleaf may cause some unnecessary pg migrations
>
> Hi Sangdi,
>
> On Tue, 13 Oct 2015, Xusangdi wrote:
> > Hi Sage,
> >
> > Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause
> some unnecessary pg migrations when OSDs are outed.
> > For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the
> mapping differences would be like this:
> > pgid    before <-> after        diff    diff_num
> > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > Changed pg ratio: 30 / 32
> >
> > I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the
> modification the result would be like this:
>
> Can you describe the reasoning behind the change?  I can't make sense of it.  The recursive call is still
> picking just 1 item, but it looks like it is always choosing a device in for slot 0 and not the current slot in
> the caller.  I'm not sure how that could generate a correct result.
>
From my understanding, the argument 'outpos' of crush_choose_firstn(...) is always set to 0 for 'choose firstn'
and for the first step of 'chooseleaf firstn' (you can check its caller, i.e. crush_do_rule(...), where the variable 'j'
is always 0). So in fact the code modification won't change anything for those steps; it only affects the recursive
call. And in that call, 'rep' (and 'r') is independent of its caller, which means it is only used to indicate the replica
number (the start value of 'rep' should be 1 less than 'numrep') and is not related to the result position. Since 'outpos'
is correctly passed to the recursive call, the result is put in the right slot, not slot 0.

The current code sets the starting value of 'r' based on the slot number, which means the algorithm may generate
a different result even when only the slot number changes, which I believe is not what we want for replicated pools.
For example, for 0.14 [3, 6, 5] <-> [7, 4, 1], after osd.3 is down and out, the pg on osd.6 is transferred to osd.7 because
the starting value of 'r' changes (probably from 1 to 0), and likewise the pg on osd.5 moves to osd.4. Though it looks like
all of these extra migrations stay within the same chosen bucket, they are unnecessary. If we set the starting
value of 'rep' (and 'r') to a fixed value (e.g. 0), the results would be stable.
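
To make the effect concrete, here is a small toy program (written just for this discussion -- the helper, the hash
constants and the example numbers are all made up, this is not code from crush/mapper.c). It models a 2-OSD host
whose intra-host draw is seeded either with the result slot number or with a fixed value; when the host shifts to a
different slot after a failure, the slot-seeded draw may land on the other OSD, while the fixed-seeded draw cannot:

#include <stdio.h>

/* toy stand-in for a straw-like draw inside one 2-OSD host (made up) */
static int pick_osd_in_host(int x, int host, int r)
{
    unsigned h = (unsigned)x * 2654435761u
               ^ (unsigned)host * 40503u
               ^ (unsigned)r * 2246822519u;
    return (int)(h % 2);            /* 0 or 1: which OSD inside the host */
}

int main(void)
{
    int x = 20;      /* pretend pg */
    int host = 3;    /* this host sits in slot 1 before the failure, slot 0 after */

    /* slot-seeded draw (current behaviour): the chosen OSD may change
     * just because the host moved to a different result slot */
    printf("slot-seeded : slot 1 -> osd%d, slot 0 -> osd%d\n",
           pick_osd_in_host(x, host, 1), pick_osd_in_host(x, host, 0));

    /* fixed-seeded draw (proposed): same OSD regardless of the slot */
    printf("fixed-seeded: slot 1 -> osd%d, slot 0 -> osd%d\n",
           pick_osd_in_host(x, host, 0), pick_osd_in_host(x, host, 0));
    return 0;
}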

The quality of the distribution should still be assured by the hashed value x, since it depends on the pgid. A simple test
result for a relatively large cluster is as follows, which from my view is similar to that of the current code:

crushtool -o crushmap --build --num_osds 64 host straw2 4 rack straw2 4 root straw2 0

crushtool --test -i crushmap --num_rep 3 --show-utilization
rule 0 (replicated_ruleset), x = 0..1023, numrep = 3..3
rule 0 (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
  device 0:              stored : 57     expected : 48
  device 1:              stored : 39     expected : 48
  device 2:              stored : 44     expected : 48
  device 3:              stored : 42     expected : 48
  device 4:              stored : 45     expected : 48
  device 5:              stored : 42     expected : 48
  device 6:              stored : 45     expected : 48
  device 7:              stored : 48     expected : 48
  device 8:              stored : 43     expected : 48
  device 9:              stored : 38     expected : 48
  device 10:             stored : 44     expected : 48
  device 11:             stored : 52     expected : 48
  device 12:             stored : 54     expected : 48
  device 13:             stored : 59     expected : 48
  device 14:             stored : 53     expected : 48
  device 15:             stored : 53     expected : 48
  device 16:             stored : 42     expected : 48
  device 17:             stored : 56     expected : 48
  device 18:             stored : 49     expected : 48
  device 19:             stored : 42     expected : 48
  device 20:             stored : 56     expected : 48
  device 21:             stored : 51     expected : 48
  device 22:             stored : 59     expected : 48
  device 23:             stored : 34     expected : 48
  device 24:             stored : 43     expected : 48
  device 25:             stored : 52     expected : 48
  device 26:             stored : 46     expected : 48
  device 27:             stored : 44     expected : 48
  device 28:             stored : 43     expected : 48
  device 29:             stored : 57     expected : 48
  device 30:             stored : 53     expected : 48
  device 31:             stored : 52     expected : 48
  device 32:             stored : 46     expected : 48
  device 33:             stored : 42     expected : 48
  device 34:             stored : 59     expected : 48
  device 35:             stored : 50     expected : 48
  device 36:             stored : 48     expected : 48
  device 37:             stored : 47     expected : 48
  device 38:             stored : 52     expected : 48
  device 39:             stored : 56     expected : 48
  device 40:             stored : 38     expected : 48
  device 41:             stored : 40     expected : 48
  device 42:             stored : 52     expected : 48
  device 43:             stored : 43     expected : 48
  device 44:             stored : 41     expected : 48
  device 45:             stored : 51     expected : 48
  device 46:             stored : 43     expected : 48
  device 47:             stored : 46     expected : 48
  device 48:             stored : 42     expected : 48
  device 49:             stored : 47     expected : 48
  device 50:             stored : 51     expected : 48
  device 51:             stored : 54     expected : 48
  device 52:             stored : 56     expected : 48
  device 53:             stored : 46     expected : 48
  device 54:             stored : 47     expected : 48
  device 55:             stored : 48     expected : 48
  device 56:             stored : 42     expected : 48
  device 57:             stored : 40     expected : 48
  device 58:             stored : 58     expected : 48
  device 59:             stored : 53     expected : 48
  device 60:             stored : 41     expected : 48
  device 61:             stored : 44     expected : 48
  device 62:             stored : 52     expected : 48
  device 63:             stored : 60     expected : 48

> Perhaps you can share the osd tree output so we can see how your map it structured?
>
The architecture is pretty simple:
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-5 8.00000 root root
-1 2.00000     host host0
 0 1.00000         osd.0       up  1.00000          1.00000
 1 1.00000         osd.1       up  1.00000          1.00000
-2 2.00000     host host1
 2 1.00000         osd.2     down        0          1.00000
 3 1.00000         osd.3     down        0          1.00000
-3 2.00000     host host2
 4 1.00000         osd.4       up  1.00000          1.00000
 5 1.00000         osd.5       up  1.00000          1.00000
-4 2.00000     host host3
 6 1.00000         osd.6       up  1.00000          1.00000
 7 1.00000         osd.7       up  1.00000          1.00000

And here is the rule used, in case you are interested:
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take root
        step chooseleaf firstn 0 type host
        step emit
}

Also, all the tunables are at their default values, and I'm running master from several days ago.
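
(If it helps, one way to double-check which tunables a given map actually carries -- sketched here, the file names
are placeholders -- is to decompile it and look at the 'tunable ...' lines at the top of the text output:

  crushtool -d crushmap -o crushmap.txt
  head crushmap.txt
)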

> It is normal for some mappings to have multiple items change, as I've described previously on this list.
> It's a result of the fact that we're considering totally independent series of items, but whether we
> accept a choice also depends on the previous choices (we do not allow dups).. which means that you
> get echo effects in later slots when we make a different choice in earlier slots.
>
As I described in the previous section, I think these extra migrations are not caused by collisions. And IMO
the ideal result when an OSD fails should be a simple shift, just as you illustrated in the CRUSH paper:

1 2 3 4 5 6
a b c d e f ...

b fails ==>

1 2 3 4 5 6
a c d e f g ...

----------
Best regards,
Sangdi

> Thanks!
> sage
>
>
> > pgid    before <-> after        diff    diff_num
> > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > Changed pg ratio: 20 / 32
> >
> > Currently the only defect I can see from the change is that the chance for a given pg to successfully
> choose required available OSDs might be a bit lower compared with before. However, I believe it will
> cause problems only when the cluster is pretty small and degraded. And in that case, we can still make
> it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
> >
> > Anyway I'm not sure if it would raise any other issues, could you please review it and maybe give me
> some suggestions? Thank you!
> >
> > ----------
> > Best regards,
> > Sangdi
> >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-14  2:15   ` Xusangdi
@ 2015-10-14 12:18     ` Sage Weil
  0 siblings, 0 replies; 20+ messages in thread
From: Sage Weil @ 2015-10-14 12:18 UTC (permalink / raw)
  To: Xusangdi; +Cc: Robert LeBlanc, ceph-devel

On Wed, 14 Oct 2015, Xusangdi wrote:
> Straw2. But I had also run the same test for straw alg, which generated 
> quite similar results.

This post explains the current behavior:

http://marc.info/?l=ceph-devel&m=143862308610881&w=2

sage

> 
> > -----Original Message-----
> > From: Robert LeBlanc [mailto:robert@leblancnet.us]
> > Sent: Tuesday, October 13, 2015 10:21 PM
> > To: xusangdi 11976 (RD)
> > Cc: sweil@redhat.com; ceph-devel@vger.kernel.org
> > Subject: Re: chooseleaf may cause some unnecessary pg migrations
> > 
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA256
> > 
> > Are you testing with straw or straw2?
> > - ----------------
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> > 
> > 
> > On Tue, Oct 13, 2015 at 2:22 AM, Xusangdi  wrote:
> > > Hi Sage,
> > >
> > > Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause
> > some unnecessary pg migrations when OSDs are outed.
> > > For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the
> > mapping differences would be like this:
> > > pgid    before <-> after        diff    diff_num
> > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > Changed pg ratio: 30 / 32
> > >
> > > I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the
> > modification the result would be like this:
> > > pgid    before <-> after        diff    diff_num
> > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > Changed pg ratio: 20 / 32
> > >
> > > Currently the only defect I can see from the change is that the chance for a given pg to successfully
> > choose required available OSDs might be a bit lower compared with before. However, I believe it will
> > cause problems only when the cluster is pretty small and degraded. And in that case, we can still make
> > it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
> > >
> > > Anyway I'm not sure if it would raise any other issues, could you please review it and maybe give me
> > some suggestions? Thank you!
> > >
> > > ----------
> > > Best regards,
> > > Sangdi
> > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-14  5:22   ` Xusangdi
@ 2015-10-14 14:17     ` Sage Weil
       [not found]       ` <FF8D83DAE5DF57468726B79904321E06047677@H3CMLB12-EX.srv.huawei-3com.com>
  0 siblings, 1 reply; 20+ messages in thread
From: Sage Weil @ 2015-10-14 14:17 UTC (permalink / raw)
  To: Xusangdi; +Cc: ceph-devel

On Wed, 14 Oct 2015, Xusangdi wrote:
> Please see inline.
> 
> > -----Original Message-----
> > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> > Sage Weil
> > Sent: Wednesday, October 14, 2015 12:45 AM
> > To: xusangdi 11976 (RD)
> > Cc: ceph-devel@vger.kernel.org
> > Subject: Re: chooseleaf may cause some unnecessary pg migrations
> >
> > Hi Sangdi,
> >
> > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > Hi Sage,
> > >
> > > Recently when I was learning about the crush rules I noticed that the step chooseleaf may cause
> > some unnecessary pg migrations when OSDs are outed.
> > > For example, for a cluster of 4 hosts with 2 OSDs each, after host1(osd.2, osd.3) is down, the
> > mapping differences would be like this:
> > > pgid    before <-> after        diff    diff_num
> > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > Changed pg ratio: 30 / 32
> > >
> > > I tried to change the code (please see https://github.com/ceph/ceph/pull/6242) and after the
> > modification the result would be like this:
> >
> > Can you describe the reasoning behind the change?  I can't make sense of it.  The recursive call is still
> > picking just 1 item, but it looks like it is always choosing a device in for slot 0 and not the current slot in
> > the caller.  I'm not sure how that could generate a correct result.
> >
> From my understanding, the argument 'outpos' of crush_choose_firstn(...) 
> is always set to 0 for 'choose firstn' and the first step of 'chooseleaf 
> firstn' (you may check its caller, i.e. crush_do_rule(...), that the 
> variable 'j' there is always 0). So in fact the code modification won't 
> change anything for these steps. It will only affect the recursive call. 
> And it this call, 'rep'(and 'r') is independent from its caller, which 
> means it's only used to indicate the replica number(the start value of 
> 'rep' should be 1 less than 'numrep'), but is not related to result 
> position. Since 'outpos' is correctly passed to the recursive call, the 
> result would be put in the right slot, not slot 0.

Okay, I understand now.  Thanks!
 
> The current code set the starting value of 'r' based on the slot number, 
> which means the algorithm may generate a different result even when only 
> the slot number changes, which I believe is not what we want for 
> replicated pools. For example, 0.14 [3, 6, 5] <-> [7, 4, 1], after ods.3 
> is down and out, the pg on osd.6 is transferred to osd.7 because the 
> starting value of 'r' is changed(probably from 1 to 0), and so is the pg 
> from osd.5 to osd.4. Though it looks like that all of these extra 
> migrations are always within the same chosen bucket, they are 
> unnecessary. If we set the starting value of 'rep'(and 'r') to be a 
> fixed value(e.g. 0), the results would be stable.

Yeah, that makes sense.

The only problem I see is the collision check here:

	https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c322588cfe026d701R497

For most maps your change isn't problematic because devices only 
appear once in the hierarchy, but if an item does appear twice, we may 
choose it twice and not detect the dup.

Perhaps leaving the argument alone and simply changing the for loop to

	for (rep = 0; rep < (numrep - outpos) && count > 0 ; rep++) {

would do the trick?
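
To make the difference between the two loop headers concrete, here is a tiny standalone illustration (an editorial
sketch, not mapper.c; the '&& count > 0' condition is dropped and the loop bodies are reduced to printing the rep
values):

#include <stdio.h>

int main(void)
{
    int numrep = 3, outpos = 1, rep;

    /* current form: rep runs over the absolute slot numbers (1, 2) */
    printf("slot-based rep:");
    for (rep = outpos; rep < numrep; rep++)
        printf(" %d", rep);
    printf("\n");

    /* suggested form: the same number of iterations, but rep always starts
     * at 0 (0, 1), so the per-replica value no longer depends on which slot
     * is being refilled; outpos itself is left alone, so the duplicate
     * check that scans the entries before outpos still sees all earlier
     * choices */
    printf("fixed rep:     ");
    for (rep = 0; rep < (numrep - outpos); rep++)
        printf(" %d", rep);
    printf("\n");
    return 0;
}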

Thanks!
sage

	

> The quality of distribution could be assured by the hashed value x, 
> since it's dependent on pgid. A simple test result for a relatively 
> large cluster is as followings, which from my view is similar with that 
> of the current code:
> 
> crushtool -o crushmap --build --num_osds 64 host straw2 4 rack straw2 4 root straw2 0
> 
> crushtool --test -i crushmap --num_rep 3 --show-utilization
> rule 0 (replicated_ruleset), x = 0..1023, numrep = 3..3
> rule 0 (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
>   device 0:              stored : 57     expected : 48
>   device 1:              stored : 39     expected : 48
>   device 2:              stored : 44     expected : 48
>   device 3:              stored : 42     expected : 48
>   device 4:              stored : 45     expected : 48
>   device 5:              stored : 42     expected : 48
>   device 6:              stored : 45     expected : 48
>   device 7:              stored : 48     expected : 48
>   device 8:              stored : 43     expected : 48
>   device 9:              stored : 38     expected : 48
>   device 10:             stored : 44     expected : 48
>   device 11:             stored : 52     expected : 48
>   device 12:             stored : 54     expected : 48
>   device 13:             stored : 59     expected : 48
>   device 14:             stored : 53     expected : 48
>   device 15:             stored : 53     expected : 48
>   device 16:             stored : 42     expected : 48
>   device 17:             stored : 56     expected : 48
>   device 18:             stored : 49     expected : 48
>   device 19:             stored : 42     expected : 48
>   device 20:             stored : 56     expected : 48
>   device 21:             stored : 51     expected : 48
>   device 22:             stored : 59     expected : 48
>   device 23:             stored : 34     expected : 48
>   device 24:             stored : 43     expected : 48
>   device 25:             stored : 52     expected : 48
>   device 26:             stored : 46     expected : 48
>   device 27:             stored : 44     expected : 48
>   device 28:             stored : 43     expected : 48
>   device 29:             stored : 57     expected : 48
>   device 30:             stored : 53     expected : 48
>   device 31:             stored : 52     expected : 48
>   device 32:             stored : 46     expected : 48
>   device 33:             stored : 42     expected : 48
>   device 34:             stored : 59     expected : 48
>   device 35:             stored : 50     expected : 48
>   device 36:             stored : 48     expected : 48
>   device 37:             stored : 47     expected : 48
>   device 38:             stored : 52     expected : 48
>   device 39:             stored : 56     expected : 48
>   device 40:             stored : 38     expected : 48
>   device 41:             stored : 40     expected : 48
>   device 42:             stored : 52     expected : 48
>   device 43:             stored : 43     expected : 48
>   device 44:             stored : 41     expected : 48
>   device 45:             stored : 51     expected : 48
>   device 46:             stored : 43     expected : 48
>   device 47:             stored : 46     expected : 48
>   device 48:             stored : 42     expected : 48
>   device 49:             stored : 47     expected : 48
>   device 50:             stored : 51     expected : 48
>   device 51:             stored : 54     expected : 48
>   device 52:             stored : 56     expected : 48
>   device 53:             stored : 46     expected : 48
>   device 54:             stored : 47     expected : 48
>   device 55:             stored : 48     expected : 48
>   device 56:             stored : 42     expected : 48
>   device 57:             stored : 40     expected : 48
>   device 58:             stored : 58     expected : 48
>   device 59:             stored : 53     expected : 48
>   device 60:             stored : 41     expected : 48
>   device 61:             stored : 44     expected : 48
>   device 62:             stored : 52     expected : 48
>   device 63:             stored : 60     expected : 48
> 
> > Perhaps you can share the osd tree output so we can see how your map it structured?
> >
> The architecture is pretty simple:
> ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -5 8.00000 root root
> -1 2.00000     host host0
>  0 1.00000         osd.0       up  1.00000          1.00000
>  1 1.00000         osd.1       up  1.00000          1.00000
> -2 2.00000     host host1
>  2 1.00000         osd.2     down        0          1.00000
>  3 1.00000         osd.3     down        0          1.00000
> -3 2.00000     host host2
>  4 1.00000         osd.4       up  1.00000          1.00000
>  5 1.00000         osd.5       up  1.00000          1.00000
> -4 2.00000     host host3
>  6 1.00000         osd.6       up  1.00000          1.00000
>  7 1.00000         osd.7       up  1.00000          1.00000
> 
> And the rule used if you were interested:
> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take root
>         step chooseleaf firstn 0 type host
>         step emit
> }
> 
> Also all the tunables are default values and I'm running the master from several days ago.
> 
> > It is normal for some mappings to have multiple items change, as I've described previously on this list.
> > It's a result of the fact that we're considering totally independent series of items, but whether we
> > accept a choice also depends on the previous choices (we do not allow dups).. which means that you
> > get echo effects in later slots when we make a different choice in earlier slots.
> >
> As I described in the previous section, I think these extra migrations is not caused by collisions. And IMO
> the ideal result when an osd fails should be a simple shifting, just as you illustrated in the CRUSH paper:
> 
> 1 2 3 4 5 6
> a b c d e f ...
> 
> b fails ==>
> 
> 1 2 3 4 5 6
> a c d e f g ...
> 
> ----------
> Best regards,
> Sangdi
> 
> > Thanks!
> > sage
> >
> >
> > > pgid    before <-> after        diff    diff_num
> > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > Changed pg ratio: 20 / 32
> > >
> > > Currently the only defect I can see from the change is that the chance for a given pg to successfully
> > choose required available OSDs might be a bit lower compared with before. However, I believe it will
> > cause problems only when the cluster is pretty small and degraded. And in that case, we can still make
> > it workable by tuning some of the crushmap parameters such as chooseleaf_tries.
> > >
> > > Anyway I'm not sure if it would raise any other issues, could you please review it and maybe give me
> > some suggestions? Thank you!
> > >
> > > ----------
> > > Best regards,
> > > Sangdi
> > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
       [not found]       ` <FF8D83DAE5DF57468726B79904321E06047677@H3CMLB12-EX.srv.huawei-3com.com>
@ 2015-10-15 19:14         ` Sage Weil
  2015-10-16  4:28           ` Chen, Xiaoxi
  2015-10-16 11:24           ` Xusangdi
  0 siblings, 2 replies; 20+ messages in thread
From: Sage Weil @ 2015-10-15 19:14 UTC (permalink / raw)
  To: Xusangdi; +Cc: ceph-devel

On Thu, 15 Oct 2015, Xusangdi wrote:
> > -----Original Message-----
> > From: Sage Weil [mailto:sweil@redhat.com]
> > Sent: Wednesday, October 14, 2015 10:18 PM
> > To: xusangdi 11976 (RD)
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > Please see inline.
> > >
> > > > -----Original Message-----
> > > > From: ceph-devel-owner@vger.kernel.org
> > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > To: xusangdi 11976 (RD)
> > > > Cc: ceph-devel@vger.kernel.org
> > > > Subject: Re: chooseleaf may cause some unnecessary pg migrations
> > > >
> > > > Hi Sangdi,
> > > >
> > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > Hi Sage,
> > > > >
> > > > > Recently when I was learning about the crush rules I noticed that
> > > > > the step chooseleaf may cause
> > > > some unnecessary pg migrations when OSDs are outed.
> > > > > For example, for a cluster of 4 hosts with 2 OSDs each, after
> > > > > host1(osd.2, osd.3) is down, the
> > > > mapping differences would be like this:
> > > > > pgid    before <-> after        diff    diff_num
> > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > Changed pg ratio: 30 / 32
> > > > >
> > > > > I tried to change the code (please see
> > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > modification the result would be like this:
> > > >
> > > > Can you describe the reasoning behind the change?  I can't make
> > > > sense of it.  The recursive call is still picking just 1 item, but
> > > > it looks like it is always choosing a device in for slot 0 and not the current slot in the caller.  I'm not
> > sure how that could generate a correct result.
> > > >
> > > From my understanding, the argument 'outpos' of
> > > crush_choose_firstn(...) is always set to 0 for 'choose firstn' and
> > > the first step of 'chooseleaf firstn' (you may check its caller, i.e.
> > > crush_do_rule(...), that the variable 'j' there is always 0). So in
> > > fact the code modification won't change anything for these steps. It will only affect the recursive call.
> > > And in this call, 'rep' (and 'r') is independent from its caller, which
> > > means it's only used to indicate the replica number(the start value of
> > > 'rep' should be 1 less than 'numrep'), but is not related to result
> > > position. Since 'outpos' is correctly passed to the recursive call,
> > > the result would be put in the right slot, not slot 0.
> >
> > Okay, I understand now.  Thanks!
> >
> > > The current code set the starting value of 'r' based on the slot
> > > number, which means the algorithm may generate a different result even
> > > when only the slot number changes, which I believe is not what we want
> > > for replicated pools. For example, 0.14 [3, 6, 5] <-> [7, 4, 1], after
> > > ods.3 is down and out, the pg on osd.6 is transferred to osd.7 because
> > > the starting value of 'r' is changed(probably from 1 to 0), and so is
> > > the pg from osd.5 to osd.4. Though it looks like that all of these
> > > extra migrations are always within the same chosen bucket, they are
> > > unnecessary. If we set the starting value of 'rep'(and 'r') to be a
> > > fixed value(e.g. 0), the results would be stable.
> >
> > Yeah, that makes sense.
> >
> > The only problem I see is the collision check here:
> >
> >       https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c322588cfe026d701R497
> >
> > For most maps your change isn't problematic because devices only appear once in the hierarchy, but if
> > an item does appear twice, we may choose it twice and not detect the dup.
> >
> > Perhaps leaving the argument alone and simply changing the for loop to
> >
> >       for (rep = 0; rep < (numrep - outpos) && count > 0 ; rep++) {
> >
> > would do the trick?
> >
> Well I believe it actually will detect the dup as 'numrep' and 'outpos' 
> are two different arguments and 'outpos' is correctly passed, so the 
> 'for loop' you marked for detecting collision will check all previously 
> selected items.

Yep, right again.
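
(For readers following the code: the dup check being discussed is, roughly,
the loop below from crush_choose_firstn() in crush/mapper.c -- reproduced
from memory, so variable names may differ slightly.  It scans every slot
filled so far, out[0..outpos-1], and because outpos is threaded through the
recursive chooseleaf call, duplicates are still caught however 'r' is seeded.)

    /* collision? -- 'item' was just picked for the current slot; compare it
     * against everything placed so far.  out[0..outpos-1] already includes
     * the results of earlier steps because outpos is passed down into the
     * recursive call. */
    collide = 0;
    for (i = 0; i < outpos; i++) {
            if (out[i] == item) {
                    collide = 1;
                    break;
            }
    }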

> I have read the post about current CRUSH behavior and thanks for that
> detailed explanation. However, the prerequisite there is that the crushmap
> is changed (some item is reweighted), while in the current implementation
> the crushmap remains untouched for quite a long time after some devices
> fail. Because of this, I think the code change can make the pg migrations
> optimal when osds go down. When the crushmap does change, like you said,
> the extra migrations would be unavoidable. Anyway, as I really cannot find
> it, could you give me some tip about how or when the crushmap would get
> modified automatically? Thank you!

I think the situation is the same: when the osd is marked "out" the 
weight[i] value goes to zero and we will reject it at 

https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c322588cfe026d701R535

(An osd that goes down but is not out doesn't have any effect on crush--the
ceph code just filters those osds out of the crush result.)
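
(Concretely, the reject happens via the per-device weight check in
crush/mapper.c.  The sketch below is paraphrased from memory rather than
copied from the tree, so details may differ, but it shows the shape of it:
weight[] carries the reweight values from the OSDMap, with 0x10000 meaning
1.0, and crush_hash32_2() comes from crush/hash.h.)

    /* Paraphrased from crush/mapper.c: decide whether a chosen device should
     * be rejected based on its reweight value.  An osd that has been marked
     * "out" has weight[item] == 0, so it is always rejected and the caller
     * retries with a new value of r. */
    static int is_out(const struct crush_map *map,
                      const __u32 *weight, int weight_max,
                      int item, int x)
    {
            if (item >= weight_max)
                    return 1;           /* no weight known: treat as out */
            if (weight[item] >= 0x10000)
                    return 0;           /* full weight: always in */
            if (weight[item] == 0)
                    return 1;           /* marked out: always rejected */
            if ((crush_hash32_2(CRUSH_HASH_RJENKINS1, x, item) & 0xffff)
                < weight[item])
                    return 0;           /* partial weight: keep with
                                         * probability weight/0x10000 */
            return 1;
    }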

So, anyway, yes: this looks like a good change.  One thing that would be 
great would be a way to quantify how much it improves things.

The next step is to introduce a tunable that makes this behavior optional 
(as we can't break/change previous mappings).  That means introducing a 
new field in crush/crush.h for the tunable, adding accessors in 
CrushWrapper, introducing a new 'jewel' preset that sets it that way (and 
making jewel the new 'optimal'), modifying crushtool.cc with an argument 
to modify the tunable, modifying CrushCompiler to both spit out the 
tunable if it's non-default and to parse it if it's specified in the input,
etc.
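
(To make the shape of that concrete, here is a very rough sketch.  The field
name "chooseleaf_stable" is only a placeholder picked for this example, not
something settled in this thread, and the real change would of course be made
inside the existing struct crush_map, CrushWrapper and mapper code rather
than stand alone:)

    /* crush/crush.h (sketch): new tunable next to the existing ones */
    struct crush_map {
            /* ... existing fields ... */
            __u32 chooseleaf_descend_once;  /* existing tunable */
            __u8  chooseleaf_vary_r;        /* existing tunable */
            __u8  chooseleaf_stable;        /* new: if set, seed 'rep'/'r' in
                                             * the recursive chooseleaf call
                                             * with a fixed value instead of
                                             * the parent slot number */
    };

    /* crush/CrushWrapper.h (sketch): accessors for the monitor/crushtool */
    int get_chooseleaf_stable() const { return crush->chooseleaf_stable; }
    void set_chooseleaf_stable(int v) { crush->chooseleaf_stable = v; }

    /* crush/mapper.c (sketch): gate the new seeding on the tunable so that
     * existing maps keep their previous mappings unless it is enabled */
    for (rep = map->chooseleaf_stable ? 0 : outpos;
         rep < numrep && count > 0; rep++) {
            /* ... existing loop body unchanged ... */
    }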

If you're comfortable with those changes, it would be great if you 
could do this part.  If not, I can take care of it!

Thanks-
sage


> 
> ---Sangdi
> 
> > Thanks!
> > sage
> >
> >
> >
> > > The quality of distribution could be assured by the hashed value x,
> > > since it's dependent on pgid. A simple test result for a relatively
> > > large cluster is as followings, which from my view is similar with
> > > that of the current code:
> > >
> > > crushtool -o crushmap --build --num_osds 64 host straw2 4 rack straw2
> > > 4 root straw2 0
> > >
> > > crushtool --test -i crushmap --num_rep 3 --show-utilization rule 0
> > > (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > >   device 0:              stored : 57     expected : 48
> > >   device 1:              stored : 39     expected : 48
> > >   device 2:              stored : 44     expected : 48
> > >   device 3:              stored : 42     expected : 48
> > >   device 4:              stored : 45     expected : 48
> > >   device 5:              stored : 42     expected : 48
> > >   device 6:              stored : 45     expected : 48
> > >   device 7:              stored : 48     expected : 48
> > >   device 8:              stored : 43     expected : 48
> > >   device 9:              stored : 38     expected : 48
> > >   device 10:             stored : 44     expected : 48
> > >   device 11:             stored : 52     expected : 48
> > >   device 12:             stored : 54     expected : 48
> > >   device 13:             stored : 59     expected : 48
> > >   device 14:             stored : 53     expected : 48
> > >   device 15:             stored : 53     expected : 48
> > >   device 16:             stored : 42     expected : 48
> > >   device 17:             stored : 56     expected : 48
> > >   device 18:             stored : 49     expected : 48
> > >   device 19:             stored : 42     expected : 48
> > >   device 20:             stored : 56     expected : 48
> > >   device 21:             stored : 51     expected : 48
> > >   device 22:             stored : 59     expected : 48
> > >   device 23:             stored : 34     expected : 48
> > >   device 24:             stored : 43     expected : 48
> > >   device 25:             stored : 52     expected : 48
> > >   device 26:             stored : 46     expected : 48
> > >   device 27:             stored : 44     expected : 48
> > >   device 28:             stored : 43     expected : 48
> > >   device 29:             stored : 57     expected : 48
> > >   device 30:             stored : 53     expected : 48
> > >   device 31:             stored : 52     expected : 48
> > >   device 32:             stored : 46     expected : 48
> > >   device 33:             stored : 42     expected : 48
> > >   device 34:             stored : 59     expected : 48
> > >   device 35:             stored : 50     expected : 48
> > >   device 36:             stored : 48     expected : 48
> > >   device 37:             stored : 47     expected : 48
> > >   device 38:             stored : 52     expected : 48
> > >   device 39:             stored : 56     expected : 48
> > >   device 40:             stored : 38     expected : 48
> > >   device 41:             stored : 40     expected : 48
> > >   device 42:             stored : 52     expected : 48
> > >   device 43:             stored : 43     expected : 48
> > >   device 44:             stored : 41     expected : 48
> > >   device 45:             stored : 51     expected : 48
> > >   device 46:             stored : 43     expected : 48
> > >   device 47:             stored : 46     expected : 48
> > >   device 48:             stored : 42     expected : 48
> > >   device 49:             stored : 47     expected : 48
> > >   device 50:             stored : 51     expected : 48
> > >   device 51:             stored : 54     expected : 48
> > >   device 52:             stored : 56     expected : 48
> > >   device 53:             stored : 46     expected : 48
> > >   device 54:             stored : 47     expected : 48
> > >   device 55:             stored : 48     expected : 48
> > >   device 56:             stored : 42     expected : 48
> > >   device 57:             stored : 40     expected : 48
> > >   device 58:             stored : 58     expected : 48
> > >   device 59:             stored : 53     expected : 48
> > >   device 60:             stored : 41     expected : 48
> > >   device 61:             stored : 44     expected : 48
> > >   device 62:             stored : 52     expected : 48
> > >   device 63:             stored : 60     expected : 48
> > >
> > > > Perhaps you can share the osd tree output so we can see how your map it structured?
> > > >
> > > The architecture is pretty simple:
> > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> > > -5 8.00000 root root
> > > -1 2.00000     host host0
> > >  0 1.00000         osd.0       up  1.00000          1.00000
> > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > -2 2.00000     host host1
> > >  2 1.00000         osd.2     down        0          1.00000
> > >  3 1.00000         osd.3     down        0          1.00000
> > > -3 2.00000     host host2
> > >  4 1.00000         osd.4       up  1.00000          1.00000
> > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > -4 2.00000     host host3
> > >  6 1.00000         osd.6       up  1.00000          1.00000
> > >  7 1.00000         osd.7       up  1.00000          1.00000
> > >
> > > And the rule used if you were interested:
> > > rule replicated_ruleset {
> > >         ruleset 0
> > >         type replicated
> > >         min_size 1
> > >         max_size 10
> > >         step take root
> > >         step chooseleaf firstn 0 type host
> > >         step emit
> > > }
> > >
> > > Also all the tunables are default values and I'm running the master from several days ago.
> > >
> > > > It is normal for some mappings to have multiple items change, as I've described previously on this
> > list.
> > > > It's a result of the fact that we're considering totally independent
> > > > series of items, but whether we accept a choice also depends on the
> > > > previous choices (we do not allow dups).. which means that you get echo effects in later slots
> > when we make a different choice in earlier slots.
> > > >
> > > As I described in the previous section, I think these extra migrations
> > > are not caused by collisions. And IMO the ideal result when an osd fails should be a simple shift, just
> > as you illustrated in the CRUSH paper:
> > >
> > > 1 2 3 4 5 6
> > > a b c d e f ...
> > >
> > > b fails ==>
> > >
> > > 1 2 3 4 5 6
> > > a c d e f g ...
> > >
> > > ----------
> > > Best regards,
> > > Sangdi
> > >
> > > > Thanks!
> > > > sage
> > > >
> > > >
> > > > > pgid    before <-> after        diff    diff_num
> > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > Changed pg ratio: 20 / 32
> > > > >
> > > > > Currently the only defect I can see from the change is that the
> > > > > chance for a given pg to successfully
> > > > choose required available OSDs might be a bit lower compared with
> > > > before. However, I believe it will cause problems only when the
> > > > cluster is pretty small and degraded. And in that case, we can still make it workable by tuning some
> > of the crushmap parameters such as chooseleaf_tries.
> > > > >
> > > > > Anyway I'm not sure if it would raise any other issues, could you
> > > > > please review it and maybe give me
> > > > some suggestions? Thank you!
> > > > >
> > > > > ----------
> > > > > Best regards,
> > > > > Sangdi
> > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-15 19:14         ` Sage Weil
@ 2015-10-16  4:28           ` Chen, Xiaoxi
  2015-10-16  6:12             ` Xusangdi
  2015-10-16 11:24           ` Xusangdi
  1 sibling, 1 reply; 20+ messages in thread
From: Chen, Xiaoxi @ 2015-10-16  4:28 UTC (permalink / raw)
  To: Sage Weil, Xusangdi; +Cc: ceph-devel

I did some testing using crushtool --test (together with this PR and PR #6004).

It doesn't help in a quantitative way.

In a demo crush map with 40 OSDs, 10 OSDs per node, I tested with rep = 2 and 4096 PGs. In each run I randomly kick out an OSD (reweight it to 0) and compare the PG mapping. If any OSD in the up set of a given PG has changed, the PG is counted as remapped.

    With this patch,  on average 377.35 PGs remapped with stddev = 20.62.
    W/o this patch,  on average 347.13 PGs remapped with stddev = 13.51.

In terms of #OSDs changed (each OSD that differs in a given slot counts as 1), I grabbed one sample from the previous test, the run that kicked out osd.20, as this processing still needs a bit of hand work.
   With this patch,  165 changes in the primary slot, 219 in a secondary slot, 37 in both (347 in total).
   W/O this patch, 165 changes in the primary slot, 215 in a secondary slot, 37 in both (343 in total).


> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> owner@vger.kernel.org] On Behalf Of Sage Weil
> Sent: Friday, October 16, 2015 3:14 AM
> To: Xusangdi
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> On Thu, 15 Oct 2015, Xusangdi wrote:
> > > -----Original Message-----
> > > From: Sage Weil [mailto:sweil@redhat.com]
> > > Sent: Wednesday, October 14, 2015 10:18 PM
> > > To: xusangdi 11976 (RD)
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > > Please see inline.
> > > >
> > > > > -----Original Message-----
> > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > > To: xusangdi 11976 (RD)
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Subject: Re: chooseleaf may cause some unnecessary pg migrations
> > > > >
> > > > > Hi Sangdi,
> > > > >
> > > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > > Hi Sage,
> > > > > >
> > > > > > Recently when I was learning about the crush rules I noticed
> > > > > > that the step chooseleaf may cause
> > > > > some unnecessary pg migrations when OSDs are outed.
> > > > > > For example, for a cluster of 4 hosts with 2 OSDs each, after
> > > > > > host1(osd.2, osd.3) is down, the
> > > > > mapping differences would be like this:
> > > > > > pgid    before <-> after        diff    diff_num
> > > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > > Changed pg ratio: 30 / 32
> > > > > >
> > > > > > I tried to change the code (please see
> > > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > > modification the result would be like this:
> > > > >
> > > > > Can you describe the reasoning behind the change?  I can't make
> > > > > sense of it.  The recursive call is still picking just 1 item,
> > > > > but it looks like it is always choosing a device in for slot 0
> > > > > and not the current slot in the caller.  I'm not
> > > sure how that could generate a correct result.
> > > > >
> > > > From my understanding, the argument 'outpos' of
> > > > crush_choose_firstn(...) is always set to 0 for 'choose firstn'
> > > > and the first step of 'chooseleaf firstn' (you may check its caller, i.e.
> > > > crush_do_rule(...), that the variable 'j' there is always 0). So
> > > > in fact the code modification won't change anything for these steps. It
> will only affect the recursive call.
> > > > And it this call, 'rep'(and 'r') is independent from its caller,
> > > > which means it's only used to indicate the replica number(the
> > > > start value of 'rep' should be 1 less than 'numrep'), but is not
> > > > related to result position. Since 'outpos' is correctly passed to
> > > > the recursive call, the result would be put in the right slot, not slot 0.
> > >
> > > Okay, I undertstand now.  Thanks!
> > >
> > > > The current code set the starting value of 'r' based on the slot
> > > > number, which means the algorithm may generate a different result
> > > > even when only the slot number changes, which I believe is not
> > > > what we want for replicated pools. For example, 0.14 [3, 6, 5] <->
> > > > [7, 4, 1], after
> > > > ods.3 is down and out, the pg on osd.6 is transferred to osd.7
> > > > because the starting value of 'r' is changed(probably from 1 to
> > > > 0), and so is the pg from osd.5 to osd.4. Though it looks like
> > > > that all of these extra migrations are always within the same
> > > > chosen bucket, they are unnecessary. If we set the starting value
> > > > of 'rep'(and 'r') to be a fixed value(e.g. 0), the results would be stable.
> > >
> > > Yeah, that makes sense.
> > >
> > > The only problem I see is the collision check here:
> > >
> > >
> > > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c32
> > > 2588cfe026d701R497
> > >
> > > For most maps your change isn't problematic because devices only
> > > appear once in the hierarchy, but if an item does appear twice, we may
> choose it twice and not detect the dup.
> > >
> > > Perhaps leaving the argument alone and simply changing the for loop
> > > to
> > >
> > >       for (rep = 0; rep < (numrep - outpos) && count > 0 ; rep++) {
> > >
> > > would do the trick?
> > >
> > Well I believe it actually will detect the dup as 'numrep' and 'outpos'
> > are two different arguments and 'outpos' is correctly passed, so the
> > 'for loop' you marked for detecting collision will check all
> > previously selected items.
> 
> Yep, right again.
> 
> > I have read the post about current CRUSH behavior and thanks for that
> > detailed explanation. However, the prerequisite there is the crushmap
> > is changed(some item is reweighted), while in the current
> > implementation the crushmap remains untouched for quite a long time
> > after some devices fail. And because of this, I think the code change
> > can make the pg migrations to be optimal when osds become down. When
> > the crushmap does change, like you said, the extra migrations would be
> unavoidable.
> > Anyway, as I really cannot find it, could you give me some tip about
> > how or when the crushmap would get modified automatically? Thank you!
> 
> I think the situation is the same: when the osd is marked "out" the weight[i]
> value goes to zero and we will reject it at
> 
> https://github.com/ceph/ceph/pull/6242/files#diff-
> 0df13ad294f6585c322588cfe026d701R535
> 
> (An osd goes down but not out doesn't have any effect on crush--the ceph
> code just filters those osds out of the crush result.)
> 
> So, anyway, yes: this looks like a good change.  One thing that would be great
> would be a way to quantify how much it improves things.
> 
> The next step is to introduce a tunable that makes this behavior optional (as
> we can't break/change previous mappings).  That means introducing a new
> field in crush/crush.h for the tunable, adding accessors in CrushWrapper,
> introducing a new 'jewel' preset that sets it that way (and making jewel the
> new 'optimal'), modifying crushtool.cc with an argument to modify the
> tunable, modifying CrushCompiler to both spit out the tunable if it's non-
> default and to parse it if its specified in the input, etc.
> 
> If you're comfortable with those changes, it would be great if you could do
> this part.  If not, I can take care of it!
> 
> Thanks-
> sage
> 
> 
> >
> > ---Sangdi
> >
> > > Thanks!
> > > sage
> > >
> > >
> > >
> > > > The quality of distribution could be assured by the hashed value
> > > > x, since it's dependent on pgid. A simple test result for a
> > > > relatively large cluster is as followings, which from my view is
> > > > similar with that of the current code:
> > > >
> > > > crushtool -o crushmap --build --num_osds 64 host straw2 4 rack
> > > > straw2
> > > > 4 root straw2 0
> > > >
> > > > crushtool --test -i crushmap --num_rep 3 --show-utilization rule 0
> > > > (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > > >   device 0:              stored : 57     expected : 48
> > > >   device 1:              stored : 39     expected : 48
> > > >   device 2:              stored : 44     expected : 48
> > > >   device 3:              stored : 42     expected : 48
> > > >   device 4:              stored : 45     expected : 48
> > > >   device 5:              stored : 42     expected : 48
> > > >   device 6:              stored : 45     expected : 48
> > > >   device 7:              stored : 48     expected : 48
> > > >   device 8:              stored : 43     expected : 48
> > > >   device 9:              stored : 38     expected : 48
> > > >   device 10:             stored : 44     expected : 48
> > > >   device 11:             stored : 52     expected : 48
> > > >   device 12:             stored : 54     expected : 48
> > > >   device 13:             stored : 59     expected : 48
> > > >   device 14:             stored : 53     expected : 48
> > > >   device 15:             stored : 53     expected : 48
> > > >   device 16:             stored : 42     expected : 48
> > > >   device 17:             stored : 56     expected : 48
> > > >   device 18:             stored : 49     expected : 48
> > > >   device 19:             stored : 42     expected : 48
> > > >   device 20:             stored : 56     expected : 48
> > > >   device 21:             stored : 51     expected : 48
> > > >   device 22:             stored : 59     expected : 48
> > > >   device 23:             stored : 34     expected : 48
> > > >   device 24:             stored : 43     expected : 48
> > > >   device 25:             stored : 52     expected : 48
> > > >   device 26:             stored : 46     expected : 48
> > > >   device 27:             stored : 44     expected : 48
> > > >   device 28:             stored : 43     expected : 48
> > > >   device 29:             stored : 57     expected : 48
> > > >   device 30:             stored : 53     expected : 48
> > > >   device 31:             stored : 52     expected : 48
> > > >   device 32:             stored : 46     expected : 48
> > > >   device 33:             stored : 42     expected : 48
> > > >   device 34:             stored : 59     expected : 48
> > > >   device 35:             stored : 50     expected : 48
> > > >   device 36:             stored : 48     expected : 48
> > > >   device 37:             stored : 47     expected : 48
> > > >   device 38:             stored : 52     expected : 48
> > > >   device 39:             stored : 56     expected : 48
> > > >   device 40:             stored : 38     expected : 48
> > > >   device 41:             stored : 40     expected : 48
> > > >   device 42:             stored : 52     expected : 48
> > > >   device 43:             stored : 43     expected : 48
> > > >   device 44:             stored : 41     expected : 48
> > > >   device 45:             stored : 51     expected : 48
> > > >   device 46:             stored : 43     expected : 48
> > > >   device 47:             stored : 46     expected : 48
> > > >   device 48:             stored : 42     expected : 48
> > > >   device 49:             stored : 47     expected : 48
> > > >   device 50:             stored : 51     expected : 48
> > > >   device 51:             stored : 54     expected : 48
> > > >   device 52:             stored : 56     expected : 48
> > > >   device 53:             stored : 46     expected : 48
> > > >   device 54:             stored : 47     expected : 48
> > > >   device 55:             stored : 48     expected : 48
> > > >   device 56:             stored : 42     expected : 48
> > > >   device 57:             stored : 40     expected : 48
> > > >   device 58:             stored : 58     expected : 48
> > > >   device 59:             stored : 53     expected : 48
> > > >   device 60:             stored : 41     expected : 48
> > > >   device 61:             stored : 44     expected : 48
> > > >   device 62:             stored : 52     expected : 48
> > > >   device 63:             stored : 60     expected : 48
> > > >
> > > > > Perhaps you can share the osd tree output so we can see how your
> map it structured?
> > > > >
> > > > The architecture is pretty simple:
> > > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> > > > -5 8.00000 root root
> > > > -1 2.00000     host host0
> > > >  0 1.00000         osd.0       up  1.00000          1.00000
> > > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > > -2 2.00000     host host1
> > > >  2 1.00000         osd.2     down        0          1.00000
> > > >  3 1.00000         osd.3     down        0          1.00000
> > > > -3 2.00000     host host2
> > > >  4 1.00000         osd.4       up  1.00000          1.00000
> > > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > > -4 2.00000     host host3
> > > >  6 1.00000         osd.6       up  1.00000          1.00000
> > > >  7 1.00000         osd.7       up  1.00000          1.00000
> > > >
> > > > And the rule used if you were interested:
> > > > rule replicated_ruleset {
> > > >         ruleset 0
> > > >         type replicated
> > > >         min_size 1
> > > >         max_size 10
> > > >         step take root
> > > >         step chooseleaf firstn 0 type host
> > > >         step emit
> > > > }
> > > >
> > > > Also all the tunables are default values and I'm running the master from
> several days ago.
> > > >
> > > > > It is normal for some mappings to have multiple items change, as
> > > > > I've described previously on this
> > > list.
> > > > > It's a result of the fact that we're considering totally
> > > > > independent series of items, but whether we accept a choice also
> > > > > depends on the previous choices (we do not allow dups).. which
> > > > > means that you get echo effects in later slots
> > > when we make a different choice in earlier slots.
> > > > >
> > > > As I described in the previous section, I think these extra
> > > > migrations is not caused by collisions. And IMO the ideal result
> > > > when an osd fails should be a simple shifting, just
> > > as you illustrated in the CRUSH paper:
> > > >
> > > > 1 2 3 4 5 6
> > > > a b c d e f ...
> > > >
> > > > b fails ==>
> > > >
> > > > 1 2 3 4 5 6
> > > > a c d e f g ...
> > > >
> > > > ----------
> > > > Best regards,
> > > > Sangdi
> > > >
> > > > > Thanks!
> > > > > sage
> > > > >
> > > > >
> > > > > > pgid    before <-> after        diff    diff_num
> > > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > > Changed pg ratio: 20 / 32
> > > > > >
> > > > > > Currently the only defect I can see from the change is that
> > > > > > the chance for a given pg to successfully
> > > > > choose required available OSDs might be a bit lower compared
> > > > > with before. However, I believe it will cause problems only when
> > > > > the cluster is pretty small and degraded. And in that case, we
> > > > > can still make it workable by tuning some
> > > of the crushmap parameters such as chooseleaf_tries.
> > > > > >
> > > > > > Anyway I'm not sure if it would raise any other issues, could
> > > > > > you please review it and maybe give me
> > > > > some suggestions? Thank you!
> > > > > >
> > > > > > ----------
> > > > > > Best regards,
> > > > > > Sangdi
> > > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-16  4:28           ` Chen, Xiaoxi
@ 2015-10-16  6:12             ` Xusangdi
  2015-10-16  6:26               ` Chen, Xiaoxi
  0 siblings, 1 reply; 20+ messages in thread
From: Xusangdi @ 2015-10-16  6:12 UTC (permalink / raw)
  To: Chen, Xiaoxi; +Cc: ceph-devel



> -----Original Message-----
> From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> Sent: Friday, October 16, 2015 12:28 PM
> To: Sage Weil; xusangdi 11976 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
>
> I did some test by using crushtool --test (together with this PR and #PR 6004)
>
> It doesn't help in quantities way.
>
> In a 40 OSDs, 10 OSDs per Nodes demo crush map, test with rep = 2. 4096 PGs, in each run I will
> randomly kick off an OSD (reweight to 0) and compared the PG mapping. If any OSD in the upset of a
> given PG is changed, we said the PG is remapped.
>
>     With this Patch,  on average 377.35 PG remapped with stddev = 20.62.
>     W/o this Patch,  on average 347.13 PG remapped with stddev = 13.51 .
>
> In terms of #OSDs changed( each osd difference in a given slog count for 1), I grab one sample from
> previous test which kickoff OSD.20,  as this processing still need a bit handy work.
>    With this Path,  165 times in Primary , 219 times on Secondary, 37 times for both(347 in total).
>    W/O this patch, 165 times in Primary , 215 times on Secondary, 37 times for both( 343 in total).
>
The reason here is that the crushmap itself is changed when you use crushtool to reweight an item. This PR is not
a complete fix for the extra migrations; it only helps when some devices fail at runtime. For a simple test, you may
set up an experimental cluster (vstart.sh would be a great help) and then shut down one or more osds (please don't
touch the crushmap, just stop the osd service or kill its process).
Thank you.

---Sangdi

>
> > -----Original Message-----
> > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> > owner@vger.kernel.org] On Behalf Of Sage Weil
> > Sent: Friday, October 16, 2015 3:14 AM
> > To: Xusangdi
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > On Thu, 15 Oct 2015, Xusangdi wrote:
> > > > -----Original Message-----
> > > > From: Sage Weil [mailto:sweil@redhat.com]
> > > > Sent: Wednesday, October 14, 2015 10:18 PM
> > > > To: xusangdi 11976 (RD)
> > > > Cc: ceph-devel@vger.kernel.org
> > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > >
> > > > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > > > Please see inline.
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage
> > > > > > Weil
> > > > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > > > To: xusangdi 11976 (RD)
> > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > Subject: Re: chooseleaf may cause some unnecessary pg
> > > > > > migrations
> > > > > >
> > > > > > Hi Sangdi,
> > > > > >
> > > > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > > > Hi Sage,
> > > > > > >
> > > > > > > Recently when I was learning about the crush rules I noticed
> > > > > > > that the step chooseleaf may cause
> > > > > > some unnecessary pg migrations when OSDs are outed.
> > > > > > > For example, for a cluster of 4 hosts with 2 OSDs each,
> > > > > > > after host1(osd.2, osd.3) is down, the
> > > > > > mapping differences would be like this:
> > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > > > Changed pg ratio: 30 / 32
> > > > > > >
> > > > > > > I tried to change the code (please see
> > > > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > > > modification the result would be like this:
> > > > > >
> > > > > > Can you describe the reasoning behind the change?  I can't
> > > > > > make sense of it.  The recursive call is still picking just 1
> > > > > > item, but it looks like it is always choosing a device in for
> > > > > > slot 0 and not the current slot in the caller.  I'm not
> > > > sure how that could generate a correct result.
> > > > > >
> > > > > From my understanding, the argument 'outpos' of
> > > > > crush_choose_firstn(...) is always set to 0 for 'choose firstn'
> > > > > and the first step of 'chooseleaf firstn' (you may check its caller, i.e.
> > > > > crush_do_rule(...), that the variable 'j' there is always 0). So
> > > > > in fact the code modification won't change anything for these
> > > > > steps. It
> > will only affect the recursive call.
> > > > > And it this call, 'rep'(and 'r') is independent from its caller,
> > > > > which means it's only used to indicate the replica number(the
> > > > > start value of 'rep' should be 1 less than 'numrep'), but is not
> > > > > related to result position. Since 'outpos' is correctly passed
> > > > > to the recursive call, the result would be put in the right slot, not slot 0.
> > > >
> > > > Okay, I undertstand now.  Thanks!
> > > >
> > > > > The current code set the starting value of 'r' based on the slot
> > > > > number, which means the algorithm may generate a different
> > > > > result even when only the slot number changes, which I believe
> > > > > is not what we want for replicated pools. For example, 0.14 [3,
> > > > > 6, 5] <-> [7, 4, 1], after
> > > > > ods.3 is down and out, the pg on osd.6 is transferred to osd.7
> > > > > because the starting value of 'r' is changed(probably from 1 to
> > > > > 0), and so is the pg from osd.5 to osd.4. Though it looks like
> > > > > that all of these extra migrations are always within the same
> > > > > chosen bucket, they are unnecessary. If we set the starting
> > > > > value of 'rep'(and 'r') to be a fixed value(e.g. 0), the results would be stable.
> > > >
> > > > Yeah, that makes sense.
> > > >
> > > > The only problem I see is the collision check here:
> > > >
> > > >
> > > > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c
> > > > 32
> > > > 2588cfe026d701R497
> > > >
> > > > For most maps your change isn't problematic because devices only
> > > > appear once in the hierarchy, but if an item does appear twice, we
> > > > may
> > choose it twice and not detect the dup.
> > > >
> > > > Perhaps leaving the argument alone and simply changing the for
> > > > loop to
> > > >
> > > >       for (rep = 0; rep < (numrep - outpos) && count > 0 ; rep++)
> > > > {
> > > >
> > > > would do the trick?
> > > >
> > > Well I believe it actually will detect the dup as 'numrep' and 'outpos'
> > > are two different arguments and 'outpos' is correctly passed, so the
> > > 'for loop' you marked for detecting collision will check all
> > > previously selected items.
> >
> > Yep, right again.
> >
> > > I have read the post about current CRUSH behavior and thanks for
> > > that detailed explanation. However, the prerequisite there is the
> > > crushmap is changed(some item is reweighted), while in the current
> > > implementation the crushmap remains untouched for quite a long time
> > > after some devices fail. And because of this, I think the code
> > > change can make the pg migrations to be optimal when osds become
> > > down. When the crushmap does change, like you said, the extra
> > > migrations would be
> > unavoidable.
> > > Anyway, as I really cannot find it, could you give me some tip about
> > > how or when the crushmap would get modified automatically? Thank you!
> >
> > I think the situation is the same: when the osd is marked "out" the
> > weight[i] value goes to zero and we will reject it at
> >
> > https://github.com/ceph/ceph/pull/6242/files#diff-
> > 0df13ad294f6585c322588cfe026d701R535
> >
> > (An osd goes down but not out doesn't have any effect on crush--the
> > ceph code just filters those osds out of the crush result.)
> >
> > So, anyway, yes: this looks like a good change.  One thing that would
> > be great would be a way to quantify how much it improves things.
> >
> > The next step is to introduce a tunable that makes this behavior
> > optional (as we can't break/change previous mappings).  That means
> > introducing a new field in crush/crush.h for the tunable, adding
> > accessors in CrushWrapper, introducing a new 'jewel' preset that sets
> > it that way (and making jewel the new 'optimal'), modifying
> > crushtool.cc with an argument to modify the tunable, modifying
> > CrushCompiler to both spit out the tunable if it's non- default and to parse it if its specified in the
> input, etc.
> >
> > If you're comfortable with those changes, it would be great if you
> > could do this part.  If not, I can take care of it!
> >
> > Thanks-
> > sage
> >
> >
> > >
> > > ---Sangdi
> > >
> > > > Thanks!
> > > > sage
> > > >
> > > >
> > > >
> > > > > The quality of distribution could be assured by the hashed value
> > > > > x, since it's dependent on pgid. A simple test result for a
> > > > > relatively large cluster is as followings, which from my view is
> > > > > similar with that of the current code:
> > > > >
> > > > > crushtool -o crushmap --build --num_osds 64 host straw2 4 rack
> > > > > straw2
> > > > > 4 root straw2 0
> > > > >
> > > > > crushtool --test -i crushmap --num_rep 3 --show-utilization rule
> > > > > 0 (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > > > >   device 0:              stored : 57     expected : 48
> > > > >   device 1:              stored : 39     expected : 48
> > > > >   device 2:              stored : 44     expected : 48
> > > > >   device 3:              stored : 42     expected : 48
> > > > >   device 4:              stored : 45     expected : 48
> > > > >   device 5:              stored : 42     expected : 48
> > > > >   device 6:              stored : 45     expected : 48
> > > > >   device 7:              stored : 48     expected : 48
> > > > >   device 8:              stored : 43     expected : 48
> > > > >   device 9:              stored : 38     expected : 48
> > > > >   device 10:             stored : 44     expected : 48
> > > > >   device 11:             stored : 52     expected : 48
> > > > >   device 12:             stored : 54     expected : 48
> > > > >   device 13:             stored : 59     expected : 48
> > > > >   device 14:             stored : 53     expected : 48
> > > > >   device 15:             stored : 53     expected : 48
> > > > >   device 16:             stored : 42     expected : 48
> > > > >   device 17:             stored : 56     expected : 48
> > > > >   device 18:             stored : 49     expected : 48
> > > > >   device 19:             stored : 42     expected : 48
> > > > >   device 20:             stored : 56     expected : 48
> > > > >   device 21:             stored : 51     expected : 48
> > > > >   device 22:             stored : 59     expected : 48
> > > > >   device 23:             stored : 34     expected : 48
> > > > >   device 24:             stored : 43     expected : 48
> > > > >   device 25:             stored : 52     expected : 48
> > > > >   device 26:             stored : 46     expected : 48
> > > > >   device 27:             stored : 44     expected : 48
> > > > >   device 28:             stored : 43     expected : 48
> > > > >   device 29:             stored : 57     expected : 48
> > > > >   device 30:             stored : 53     expected : 48
> > > > >   device 31:             stored : 52     expected : 48
> > > > >   device 32:             stored : 46     expected : 48
> > > > >   device 33:             stored : 42     expected : 48
> > > > >   device 34:             stored : 59     expected : 48
> > > > >   device 35:             stored : 50     expected : 48
> > > > >   device 36:             stored : 48     expected : 48
> > > > >   device 37:             stored : 47     expected : 48
> > > > >   device 38:             stored : 52     expected : 48
> > > > >   device 39:             stored : 56     expected : 48
> > > > >   device 40:             stored : 38     expected : 48
> > > > >   device 41:             stored : 40     expected : 48
> > > > >   device 42:             stored : 52     expected : 48
> > > > >   device 43:             stored : 43     expected : 48
> > > > >   device 44:             stored : 41     expected : 48
> > > > >   device 45:             stored : 51     expected : 48
> > > > >   device 46:             stored : 43     expected : 48
> > > > >   device 47:             stored : 46     expected : 48
> > > > >   device 48:             stored : 42     expected : 48
> > > > >   device 49:             stored : 47     expected : 48
> > > > >   device 50:             stored : 51     expected : 48
> > > > >   device 51:             stored : 54     expected : 48
> > > > >   device 52:             stored : 56     expected : 48
> > > > >   device 53:             stored : 46     expected : 48
> > > > >   device 54:             stored : 47     expected : 48
> > > > >   device 55:             stored : 48     expected : 48
> > > > >   device 56:             stored : 42     expected : 48
> > > > >   device 57:             stored : 40     expected : 48
> > > > >   device 58:             stored : 58     expected : 48
> > > > >   device 59:             stored : 53     expected : 48
> > > > >   device 60:             stored : 41     expected : 48
> > > > >   device 61:             stored : 44     expected : 48
> > > > >   device 62:             stored : 52     expected : 48
> > > > >   device 63:             stored : 60     expected : 48
> > > > >
> > > > > > Perhaps you can share the osd tree output so we can see how
> > > > > > your
> > map it structured?
> > > > > >
> > > > > The architecture is pretty simple:
> > > > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> > > > > -5 8.00000 root root
> > > > > -1 2.00000     host host0
> > > > >  0 1.00000         osd.0       up  1.00000          1.00000
> > > > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > > > -2 2.00000     host host1
> > > > >  2 1.00000         osd.2     down        0          1.00000
> > > > >  3 1.00000         osd.3     down        0          1.00000
> > > > > -3 2.00000     host host2
> > > > >  4 1.00000         osd.4       up  1.00000          1.00000
> > > > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > > > -4 2.00000     host host3
> > > > >  6 1.00000         osd.6       up  1.00000          1.00000
> > > > >  7 1.00000         osd.7       up  1.00000          1.00000
> > > > >
> > > > > And the rule used if you were interested:
> > > > > rule replicated_ruleset {
> > > > >         ruleset 0
> > > > >         type replicated
> > > > >         min_size 1
> > > > >         max_size 10
> > > > >         step take root
> > > > >         step chooseleaf firstn 0 type host
> > > > >         step emit
> > > > > }
> > > > >
> > > > > Also all the tunables are default values and I'm running the
> > > > > master from
> > several days ago.
> > > > >
> > > > > > It is normal for some mappings to have multiple items change,
> > > > > > as I've described previously on this
> > > > list.
> > > > > > It's a result of the fact that we're considering totally
> > > > > > independent series of items, but whether we accept a choice
> > > > > > also depends on the previous choices (we do not allow dups)..
> > > > > > which means that you get echo effects in later slots
> > > > when we make a different choice in earlier slots.
> > > > > >
> > > > > As I described in the previous section, I think these extra
> > > > > migrations are not caused by collisions. And IMO the ideal result
> > > > > when an osd fails should be a simple shift, just
> > > > as you illustrated in the CRUSH paper:
> > > > >
> > > > > 1 2 3 4 5 6
> > > > > a b c d e f ...
> > > > >
> > > > > b fails ==>
> > > > >
> > > > > 1 2 3 4 5 6
> > > > > a c d e f g ...
> > > > >
> > > > > ----------
> > > > > Best regards,
> > > > > Sangdi
> > > > >
> > > > > > Thanks!
> > > > > > sage
> > > > > >
> > > > > >
> > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > > > Changed pg ratio: 20 / 32
> > > > > > >
> > > > > > > Currently the only defect I can see from the change is that
> > > > > > > the chance for a given pg to successfully
> > > > > > choose required available OSDs might be a bit lower compared
> > > > > > with before. However, I believe it will cause problems only
> > > > > > when the cluster is pretty small and degraded. And in that
> > > > > > case, we can still make it workable by tuning some
> > > > of the crushmap parameters such as chooseleaf_tries.
> > > > > > >
> > > > > > > Anyway I'm not sure if it would raise any other issues,
> > > > > > > could you please review it and maybe give me
> > > > > > some suggestions? Thank you!
> > > > > > >
> > > > > > > ----------
> > > > > > > Best regards,
> > > > > > > Sangdi
> > > > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-16  6:12             ` Xusangdi
@ 2015-10-16  6:26               ` Chen, Xiaoxi
  2015-10-16  6:36                 ` Dałek, Piotr
  2015-10-16  6:43                 ` Xusangdi
  0 siblings, 2 replies; 20+ messages in thread
From: Chen, Xiaoxi @ 2015-10-16  6:26 UTC (permalink / raw)
  To: Xusangdi; +Cc: ceph-devel

How did you measure "help performance"?

In your case, the OSD is "down" but not "out" in the osdmap.

Maybe extending osdmaptool to support "mark osd out" could make such a test possible?
> -----Original Message-----
> From: Xusangdi [mailto:xu.sangdi@h3c.com]
> Sent: Friday, October 16, 2015 2:12 PM
> To: Chen, Xiaoxi
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> 
> 
> > -----Original Message-----
> > From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> > Sent: Friday, October 16, 2015 12:28 PM
> > To: Sage Weil; xusangdi 11976 (RD)
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > I did some tests using crushtool --test (together with this PR and #PR 6004).
> >
> > It doesn't help in a quantitative way.
> >
> > In a demo crush map with 40 OSDs, 10 OSDs per node, I tested with rep = 2 and
> > 4096 PGs. In each run I randomly kick out an OSD (reweight to 0) and compare
> > the PG mappings. If any OSD in the up set of a given PG is changed, we say the
> > PG is remapped.
> >
> >     With this patch,  on average 377.35 PGs remapped with stddev = 20.62.
> >     W/o this patch,  on average 347.13 PGs remapped with stddev = 13.51.
> >
> > In terms of #OSDs changed (each OSD difference in a given slot counts for 1),
> > I grabbed one sample from the previous test, which kicked out OSD.20, as this
> > processing still needs a bit of hand work.
> >    With this patch,  165 times in primary, 219 times in secondary, 37 times
> > for both (347 in total).
> >    W/O this patch,  165 times in primary, 215 times in secondary, 37 times
> > for both (343 in total).
> >
> The reason here is that the crushmap is changed when you use crushtool to
> reweight an item. This PR cannot serve as a radical solution for the extra
> migrations; it only helps to improve the performance when some devices fail
> at run time. For a simple test, you may set up an experimental cluster
> (vstart.sh would be a great help) and then shut down one or more OSDs
> (please don't touch the crushmap, just stop the osd service or kill its process).
> Thank you.
> 
> ---Sangdi
> 
> >
> > > -----Original Message-----
> > > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> > > owner@vger.kernel.org] On Behalf Of Sage Weil
> > > Sent: Friday, October 16, 2015 3:14 AM
> > > To: Xusangdi
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > > On Thu, 15 Oct 2015, Xusangdi wrote:
> > > > > -----Original Message-----
> > > > > From: Sage Weil [mailto:sweil@redhat.com]
> > > > > Sent: Wednesday, October 14, 2015 10:18 PM
> > > > > To: xusangdi 11976 (RD)
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > > >
> > > > > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > > > > Please see inline.
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage
> > > > > > > Weil
> > > > > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > > > > To: xusangdi 11976 (RD)
> > > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > > Subject: Re: chooseleaf may cause some unnecessary pg
> > > > > > > migrations
> > > > > > >
> > > > > > > Hi Sangdi,
> > > > > > >
> > > > > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > > > > Hi Sage,
> > > > > > > >
> > > > > > > > Recently when I was learning about the crush rules I
> > > > > > > > noticed that the step chooseleaf may cause
> > > > > > > some unnecessary pg migrations when OSDs are outed.
> > > > > > > > For example, for a cluster of 4 hosts with 2 OSDs each,
> > > > > > > > after host1(osd.2, osd.3) is down, the
> > > > > > > mapping differences would be like this:
> > > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > > > > Changed pg ratio: 30 / 32
> > > > > > > >
> > > > > > > > I tried to change the code (please see
> > > > > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > > > > modification the result would be like this:
> > > > > > >
> > > > > > > Can you describe the reasoning behind the change?  I can't
> > > > > > > make sense of it.  The recursive call is still picking just
> > > > > > > 1 item, but it looks like it is always choosing a device in
> > > > > > > for slot 0 and not the current slot in the caller.  I'm not
> > > > > sure how that could generate a correct result.
> > > > > > >
> > > > > > From my understanding, the argument 'outpos' of
> > > > > > crush_choose_firstn(...) is always set to 0 for 'choose firstn'
> > > > > > and the first step of 'chooseleaf firstn' (you may check its caller, i.e.
> > > > > > crush_do_rule(...), that the variable 'j' there is always 0).
> > > > > > So in fact the code modification won't change anything for
> > > > > > these steps. It
> > > will only affect the recursive call.
> > > > > > And in this call, 'rep' (and 'r') is independent from its
> > > > > > caller, which means it's only used to indicate the replica
> > > > > > number(the start value of 'rep' should be 1 less than
> > > > > > 'numrep'), but is not related to result position. Since
> > > > > > 'outpos' is correctly passed to the recursive call, the result would be
> put in the right slot, not slot 0.
> > > > >
> > > > > Okay, I understand now.  Thanks!
> > > > >
> > > > > > The current code sets the starting value of 'r' based on the
> > > > > > slot number, which means the algorithm may generate a
> > > > > > different result even when only the slot number changes, which
> > > > > > I believe is not what we want for replicated pools. For
> > > > > > example, 0.14 [3, 6, 5] <-> [7, 4, 1]: after
> > > > > > osd.3 is down and out, the pg on osd.6 is transferred to osd.7
> > > > > > because the starting value of 'r' is changed (probably from 1
> > > > > > to 0), and so is the pg from osd.5 to osd.4. Though it looks
> > > > > > like all of these extra migrations are always within the
> > > > > > same chosen bucket, they are unnecessary. If we set the
> > > > > > starting value of 'rep' (and 'r') to be a fixed value (e.g. 0), the results
> > > > > > would be stable.
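
A toy model can make this concrete (this is not the real crush_choose_firstn() code; the hash, the two-OSD host and all names below are made up purely for illustration). It keys a straw-like pick inside one host on (pg, host, osd, r) and counts how often that pick flips when r moves from 1 to 2, i.e. when the host shifts one slot to the left after a failure:

import hashlib

def draw(*args):
    # deterministic pseudo-random value in [0, 1) derived from the arguments
    key = ".".join(map(str, args)).encode()
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big") / 2.0**64

def pick_osd(pg, host, osds, r):
    # straw-like pick of one device inside a host, keyed on (pg, host, osd, r)
    return max(osds, key=lambda osd: draw(pg, host, osd, r))

osds_in_host = ["osd.4", "osd.5"]
flipped = 0
for pg in range(1024):
    # current behaviour: the recursive call starts r at the slot number, so the
    # same host asked to fill slot 1 vs slot 2 can hand back a different device
    flipped += pick_osd(pg, "host2", osds_in_host, r=1) != pick_osd(pg, "host2", osds_in_host, r=2)
print("inner picks that flip when the host shifts one slot: %d / 1024" % flipped)

With a fixed starting r, as proposed, the inner pick depends only on (pg, host), so a host moving to an earlier slot cannot shuffle data between its own OSDs; only PGs that actually lose a replica have to move.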
> > > > >
> > > > > Yeah, that makes sense.
> > > > >
> > > > > The only problem I see is the collision check here:
> > > > >
> > > > >
> > > > > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f658
> > > > > 5c
> > > > > 32
> > > > > 2588cfe026d701R497
> > > > >
> > > > > For most maps your change isn't problematic because devices only
> > > > > appear once in the hierarchy, but if an item does appear twice,
> > > > > we may
> > > choose it twice and not detect the dup.
> > > > >
> > > > > Perhaps leaving the argument alone and simply changing the for
> > > > > loop to
> > > > >
> > > > >       for (rep = 0; rep < (numrep - outpos) && count > 0 ;
> > > > > rep++) {
> > > > >
> > > > > would do the trick?
> > > > >
> > > > Well I believe it actually will detect the dup as 'numrep' and 'outpos'
> > > > are two different arguments and 'outpos' is correctly passed, so
> > > > the 'for loop' you marked for detecting collision will check all
> > > > previously selected items.
> > >
> > > Yep, right again.
> > >
> > > > > I have read the post about current CRUSH behavior and thanks for
> > > > > that detailed explanation. However, the prerequisite there is that the
> > > > > crushmap is changed (some item is reweighted), while in the current
> > > > > implementation the crushmap remains untouched for quite a long
> > > > > time after some devices fail. And because of this, I think the
> > > > > code change can make the pg migrations optimal when OSDs
> > > > > go down. When the crushmap does change, like you said, the
> > > > > extra migrations would be unavoidable.
> > > > > Anyway, as I really cannot find it, could you give me some tips
> > > > > about how or when the crushmap would get modified automatically?
> > > > > Thank you!
> > >
> > > I think the situation is the same: when the osd is marked "out" the
> > > weight[i] value goes to zero and we will reject it at
> > >
> > > https://github.com/ceph/ceph/pull/6242/files#diff-
> > > 0df13ad294f6585c322588cfe026d701R535
> > >
> > > (An osd that goes down but not out doesn't have any effect on crush--the
> > > ceph code just filters those osds out of the crush result.)
> > >
> > > So, anyway, yes: this looks like a good change.  One thing that
> > > would be great would be a way to quantify how much it improves things.
> > >
> > > The next step is to introduce a tunable that makes this behavior
> > > optional (as we can't break/change previous mappings).  That means
> > > introducing a new field in crush/crush.h for the tunable, adding
> > > accessors in CrushWrapper, introducing a new 'jewel' preset that
> > > sets it that way (and making jewel the new 'optimal'), modifying
> > > crushtool.cc with an argument to modify the tunable, modifying
> > > CrushCompiler to both spit out the tunable if it's non-default and
> > > to parse it if it's specified in the
> > input, etc.
> > >
> > > If you're comfortable with those changes, it would be great if you
> > > could do this part.  If not, I can take care of it!
> > >
> > > Thanks-
> > > sage
> > >
> > >
> > > >
> > > > ---Sangdi
> > > >
> > > > > Thanks!
> > > > > sage
> > > > >
> > > > >
> > > > >
> > > > > > The quality of distribution could be assured by the hashed
> > > > > > value x, since it's dependent on pgid. A simple test result
> > > > > > for a relatively large cluster is as follows, which from my
> > > > > > view is similar to that of the current code:
> > > > > >
> > > > > > crushtool -o crushmap --build --num_osds 64 host straw2 4 rack
> > > > > > straw2
> > > > > > 4 root straw2 0
> > > > > >
> > > > > > crushtool --test -i crushmap --num_rep 3 --show-utilization
> > > > > > rule
> > > > > > 0 (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > > > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > > > > >   device 0:              stored : 57     expected : 48
> > > > > >   device 1:              stored : 39     expected : 48
> > > > > >   device 2:              stored : 44     expected : 48
> > > > > >   device 3:              stored : 42     expected : 48
> > > > > >   device 4:              stored : 45     expected : 48
> > > > > >   device 5:              stored : 42     expected : 48
> > > > > >   device 6:              stored : 45     expected : 48
> > > > > >   device 7:              stored : 48     expected : 48
> > > > > >   device 8:              stored : 43     expected : 48
> > > > > >   device 9:              stored : 38     expected : 48
> > > > > >   device 10:             stored : 44     expected : 48
> > > > > >   device 11:             stored : 52     expected : 48
> > > > > >   device 12:             stored : 54     expected : 48
> > > > > >   device 13:             stored : 59     expected : 48
> > > > > >   device 14:             stored : 53     expected : 48
> > > > > >   device 15:             stored : 53     expected : 48
> > > > > >   device 16:             stored : 42     expected : 48
> > > > > >   device 17:             stored : 56     expected : 48
> > > > > >   device 18:             stored : 49     expected : 48
> > > > > >   device 19:             stored : 42     expected : 48
> > > > > >   device 20:             stored : 56     expected : 48
> > > > > >   device 21:             stored : 51     expected : 48
> > > > > >   device 22:             stored : 59     expected : 48
> > > > > >   device 23:             stored : 34     expected : 48
> > > > > >   device 24:             stored : 43     expected : 48
> > > > > >   device 25:             stored : 52     expected : 48
> > > > > >   device 26:             stored : 46     expected : 48
> > > > > >   device 27:             stored : 44     expected : 48
> > > > > >   device 28:             stored : 43     expected : 48
> > > > > >   device 29:             stored : 57     expected : 48
> > > > > >   device 30:             stored : 53     expected : 48
> > > > > >   device 31:             stored : 52     expected : 48
> > > > > >   device 32:             stored : 46     expected : 48
> > > > > >   device 33:             stored : 42     expected : 48
> > > > > >   device 34:             stored : 59     expected : 48
> > > > > >   device 35:             stored : 50     expected : 48
> > > > > >   device 36:             stored : 48     expected : 48
> > > > > >   device 37:             stored : 47     expected : 48
> > > > > >   device 38:             stored : 52     expected : 48
> > > > > >   device 39:             stored : 56     expected : 48
> > > > > >   device 40:             stored : 38     expected : 48
> > > > > >   device 41:             stored : 40     expected : 48
> > > > > >   device 42:             stored : 52     expected : 48
> > > > > >   device 43:             stored : 43     expected : 48
> > > > > >   device 44:             stored : 41     expected : 48
> > > > > >   device 45:             stored : 51     expected : 48
> > > > > >   device 46:             stored : 43     expected : 48
> > > > > >   device 47:             stored : 46     expected : 48
> > > > > >   device 48:             stored : 42     expected : 48
> > > > > >   device 49:             stored : 47     expected : 48
> > > > > >   device 50:             stored : 51     expected : 48
> > > > > >   device 51:             stored : 54     expected : 48
> > > > > >   device 52:             stored : 56     expected : 48
> > > > > >   device 53:             stored : 46     expected : 48
> > > > > >   device 54:             stored : 47     expected : 48
> > > > > >   device 55:             stored : 48     expected : 48
> > > > > >   device 56:             stored : 42     expected : 48
> > > > > >   device 57:             stored : 40     expected : 48
> > > > > >   device 58:             stored : 58     expected : 48
> > > > > >   device 59:             stored : 53     expected : 48
> > > > > >   device 60:             stored : 41     expected : 48
> > > > > >   device 61:             stored : 44     expected : 48
> > > > > >   device 62:             stored : 52     expected : 48
> > > > > >   device 63:             stored : 60     expected : 48
> > > > > >
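
As a rough way to put "similar distribution" in numbers, the listing above can be piped through a throwaway parser that reports the spread of the stored column (the line format is assumed to match the --show-utilization output shown here; the script is only an illustration):

import re
import statistics
import sys

# pull "device N:  stored : X  expected : Y" lines off stdin
pat = re.compile(r"device\s+(\d+):\s+stored\s*:\s*(\d+)\s+expected\s*:\s*(\d+)")
stored, expected = [], []
for line in sys.stdin:
    m = pat.search(line)
    if m:
        stored.append(int(m.group(2)))
        expected.append(int(m.group(3)))

print("devices: %d" % len(stored))
print("stored   mean: %.2f   stddev: %.2f"
      % (statistics.mean(stored), statistics.pstdev(stored)))
print("expected per device: %.2f" % statistics.mean(expected))

Running it once against the output of the old mapper and once against the patched one gives a one-line summary of how even each distribution is.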
> > > > > > > Perhaps you can share the osd tree output so we can see how your
> > > > > > > map is structured?
> > > > > > >
> > > > > > The architecture is pretty simple:
> > > > > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> > > > > > -5 8.00000 root root
> > > > > > -1 2.00000     host host0
> > > > > >  0 1.00000         osd.0       up  1.00000          1.00000
> > > > > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > > > > -2 2.00000     host host1
> > > > > >  2 1.00000         osd.2     down        0          1.00000
> > > > > >  3 1.00000         osd.3     down        0          1.00000
> > > > > > -3 2.00000     host host2
> > > > > >  4 1.00000         osd.4       up  1.00000          1.00000
> > > > > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > > > > -4 2.00000     host host3
> > > > > >  6 1.00000         osd.6       up  1.00000          1.00000
> > > > > >  7 1.00000         osd.7       up  1.00000          1.00000
> > > > > >
> > > > > > And the rule used if you were interested:
> > > > > > rule replicated_ruleset {
> > > > > >         ruleset 0
> > > > > >         type replicated
> > > > > >         min_size 1
> > > > > >         max_size 10
> > > > > >         step take root
> > > > > >         step chooseleaf firstn 0 type host
> > > > > >         step emit
> > > > > > }
> > > > > >
> > > > > > Also all the tunables are default values and I'm running the
> > > > > > master from
> > > several days ago.
> > > > > >
> > > > > > > It is normal for some mappings to have multiple items
> > > > > > > change, as I've described previously on this
> > > > > list.
> > > > > > > It's a result of the fact that we're considering totally
> > > > > > > independent series of items, but whether we accept a choice
> > > > > > > also depends on the previous choices (we do not allow dups)..
> > > > > > > which means that you get echo effects in later slots
> > > > > when we make a different choice in earlier slots.
> > > > > > >
> > > > > > As I described in the previous section, I think these extra
> > > > > > migrations are not caused by collisions. And IMO the ideal
> > > > > > result when an osd fails should be a simple shift, just
> > > > > as you illustrated in the CRUSH paper:
> > > > > >
> > > > > > 1 2 3 4 5 6
> > > > > > a b c d e f ...
> > > > > >
> > > > > > b fails ==>
> > > > > >
> > > > > > 1 2 3 4 5 6
> > > > > > a c d e f g ...
> > > > > >
> > > > > > ----------
> > > > > > Best regards,
> > > > > > Sangdi
> > > > > >
> > > > > > > Thanks!
> > > > > > > sage
> > > > > > >
> > > > > > >
> > > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > > > > Changed pg ratio: 20 / 32
> > > > > > > >
> > > > > > > > Currently the only defect I can see from the change is
> > > > > > > > that the chance for a given pg to successfully
> > > > > > > choose required available OSDs might be a bit lower compared
> > > > > > > with before. However, I believe it will cause problems only
> > > > > > > when the cluster is pretty small and degraded. And in that
> > > > > > > case, we can still make it workable by tuning some
> > > > > of the crushmap parameters such as chooseleaf_tries.
> > > > > > > >
> > > > > > > > Anyway I'm not sure if it would raise any other issues,
> > > > > > > > could you please review it and maybe give me
> > > > > > > some suggestions? Thank you!
> > > > > > > >
> > > > > > > > ----------
> > > > > > > > Best regards,
> > > > > > > > Sangdi
> > > > > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-16  6:26               ` Chen, Xiaoxi
@ 2015-10-16  6:36                 ` Dałek, Piotr
  2015-10-16  6:43                 ` Xusangdi
  1 sibling, 0 replies; 20+ messages in thread
From: Dałek, Piotr @ 2015-10-16  6:36 UTC (permalink / raw)
  To: ceph-devel

> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> owner@vger.kernel.org] On Behalf Of Chen, Xiaoxi
> Sent: Friday, October 16, 2015 8:26 AM
> To: Xusangdi
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> How did you measure "help performance"?
> 
> In your case, the OSD is "down" but not "out" in the osdmap.
> 
> Maybe extending osdmaptool to support "mark osd out" could make such a test possible?

Downed OSDs are marked out after around 5 minutes (depending on configuration).

With best regards / Pozdrawiam
Piotr Dałek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-16  6:26               ` Chen, Xiaoxi
  2015-10-16  6:36                 ` Dałek, Piotr
@ 2015-10-16  6:43                 ` Xusangdi
  2015-10-19  1:11                   ` Chen, Xiaoxi
  1 sibling, 1 reply; 20+ messages in thread
From: Xusangdi @ 2015-10-16  6:43 UTC (permalink / raw)
  To: Chen, Xiaoxi; +Cc: ceph-devel

Sorry if I didn't state that clearly.

As in your test, the performance is measured by the number of PGs remapped between the original state
and the state after some OSDs are down and out.

The down OSDs will be marked out after a short period, or you can do it manually using the 'ceph osd out' command.

Thanks,
---Sangdi
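
For reference, once the before/after mappings have been captured (e.g. from a small vstart.sh cluster), the counting itself is only a few lines. A minimal sketch, assuming the two mappings were dumped to text files with one "pgid osd,osd,osd" line per PG (that file format is an assumption for the example, not something ceph emits directly):

import sys

def load(path):
    # one mapping per line: "<pgid> <osd>,<osd>,..."
    mapping = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            pgid, osds = line.split()
            mapping[pgid] = osds.split(",")
    return mapping

before = load(sys.argv[1])
after = load(sys.argv[2])

remapped = 0      # PGs whose up set changed at all
slot_diffs = 0    # per-slot OSD differences, like the diff_num column earlier in the thread
for pgid, old in before.items():
    new = after.get(pgid, [])
    diff = sum(1 for a, b in zip(old, new) if a != b) + abs(len(old) - len(new))
    if diff:
        remapped += 1
        slot_diffs += diff

print("Changed pg ratio: %d / %d   (changed slots: %d)" % (remapped, len(before), slot_diffs))

Invoked as e.g. 'python count_remaps.py before.txt after.txt' (the name is made up), it reproduces the "Changed pg ratio" figure and the per-slot counts used earlier in the thread.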

> -----Original Message-----
> From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> Sent: Friday, October 16, 2015 2:26 PM
> To: xusangdi 11976 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> How did you measure "help performance"?
> 
> In your case, the OSD is "down" but not "out" in the osdmap.
> 
> Maybe extending osdmaptool to support "mark osd out" could make such a test possible?
> > -----Original Message-----
> > From: Xusangdi [mailto:xu.sangdi@h3c.com]
> > Sent: Friday, October 16, 2015 2:12 PM
> > To: Chen, Xiaoxi
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> >
> >
> > > -----Original Message-----
> > > From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> > > Sent: Friday, October 16, 2015 12:28 PM
> > > To: Sage Weil; xusangdi 11976 (RD)
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > > I did some tests using crushtool --test (together with this PR and #PR 6004).
> > >
> > > It doesn't help in a quantitative way.
> > >
> > > In a demo crush map with 40 OSDs, 10 OSDs per node, I tested with rep = 2 and
> > > 4096 PGs. In each run I randomly kick out an OSD (reweight to 0) and compare
> > > the PG mappings. If any OSD in the up set of a given PG is changed, we say the
> > > PG is remapped.
> > >
> > >     With this patch,  on average 377.35 PGs remapped with stddev = 20.62.
> > >     W/o this patch,  on average 347.13 PGs remapped with stddev = 13.51.
> > >
> > > In terms of #OSDs changed (each OSD difference in a given slot counts for 1),
> > > I grabbed one sample from the previous test, which kicked out OSD.20, as this
> > > processing still needs a bit of hand work.
> > >    With this patch,  165 times in primary, 219 times in secondary, 37 times
> > > for both (347 in total).
> > >    W/O this patch,  165 times in primary, 215 times in secondary, 37 times
> > > for both (343 in total).
> > >
> > The reason here is that the crushmap is changed when you use crushtool
> > to reweight an item. This PR cannot serve as a radical solution for
> > the extra migrations; it only helps to improve the performance when
> > some devices fail at run time. For a simple test, you may set
> > up an experimental cluster (vstart.sh would be a great help) and then
> > shut down one or more OSDs (please don't touch the crushmap, just stop
> > the osd service or kill its process).
> > Thank you.
> >
> > ---Sangdi
> >
> > >
> > > > -----Original Message-----
> > > > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> > > > owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > Sent: Friday, October 16, 2015 3:14 AM
> > > > To: Xusangdi
> > > > Cc: ceph-devel@vger.kernel.org
> > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > >
> > > > On Thu, 15 Oct 2015, Xusangdi wrote:
> > > > > > -----Original Message-----
> > > > > > From: Sage Weil [mailto:sweil@redhat.com]
> > > > > > Sent: Wednesday, October 14, 2015 10:18 PM
> > > > > > To: xusangdi 11976 (RD)
> > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > Subject: RE: chooseleaf may cause some unnecessary pg
> > > > > > migrations
> > > > > >
> > > > > > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > > > > > Please see inline.
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> > > > > > > > Sage Weil
> > > > > > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > > > > > To: xusangdi 11976 (RD)
> > > > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > > > Subject: Re: chooseleaf may cause some unnecessary pg
> > > > > > > > migrations
> > > > > > > >
> > > > > > > > Hi Sangdi,
> > > > > > > >
> > > > > > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > > > > > Hi Sage,
> > > > > > > > >
> > > > > > > > > Recently when I was learning about the crush rules I
> > > > > > > > > noticed that the step chooseleaf may cause
> > > > > > > > some unnecessary pg migrations when OSDs are outed.
> > > > > > > > > For example, for a cluster of 4 hosts with 2 OSDs each,
> > > > > > > > > after host1(osd.2, osd.3) is down, the
> > > > > > > > mapping differences would be like this:
> > > > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > > > > > Changed pg ratio: 30 / 32
> > > > > > > > >
> > > > > > > > > I tried to change the code (please see
> > > > > > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > > > > > modification the result would be like this:
> > > > > > > >
> > > > > > > > Can you describe the reasoning behind the change?  I can't
> > > > > > > > make sense of it.  The recursive call is still picking
> > > > > > > > just
> > > > > > > > 1 item, but it looks like it is always choosing a device
> > > > > > > > in for slot 0 and not the current slot in the caller.  I'm
> > > > > > > > not
> > > > > > sure how that could generate a correct result.
> > > > > > > >
> > > > > > > From my understanding, the argument 'outpos' of
> > > > > > > crush_choose_firstn(...) is always set to 0 for 'choose firstn'
> > > > > > > and the first step of 'chooseleaf firstn' (you may check its caller, i.e.
> > > > > > > crush_do_rule(...), that the variable 'j' there is always 0).
> > > > > > > So in fact the code modification won't change anything for
> > > > > > > these steps. It
> > > > will only affect the recursive call.
> > > > > > > And in this call, 'rep' (and 'r') is independent from its
> > > > > > > caller, which means it's only used to indicate the replica
> > > > > > > number(the start value of 'rep' should be 1 less than
> > > > > > > 'numrep'), but is not related to result position. Since
> > > > > > > 'outpos' is correctly passed to the recursive call, the
> > > > > > > result would be
> > put in the right slot, not slot 0.
> > > > > >
> > > > > > Okay, I understand now.  Thanks!
> > > > > >
> > > > > > > The current code sets the starting value of 'r' based on the
> > > > > > > slot number, which means the algorithm may generate a
> > > > > > > different result even when only the slot number changes,
> > > > > > > which I believe is not what we want for replicated pools.
> > > > > > > For example, 0.14 [3, 6, 5] <-> [7, 4, 1]: after
> > > > > > > osd.3 is down and out, the pg on osd.6 is transferred to
> > > > > > > osd.7 because the starting value of 'r' is changed (probably
> > > > > > > from 1 to 0), and so is the pg from osd.5 to osd.4. Though
> > > > > > > it looks like all of these extra migrations are always
> > > > > > > within the same chosen bucket, they are unnecessary. If we
> > > > > > > set the starting value of 'rep' (and 'r') to be a fixed
> > > > > > > value (e.g. 0), the results
> > > > > > > would be stable.
> > > > > >
> > > > > > Yeah, that makes sense.
> > > > > >
> > > > > > The only problem I see is the collision check here:
> > > > > >
> > > > > >
> > > > > > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6
> > > > > > 58
> > > > > > 5c
> > > > > > 32
> > > > > > 2588cfe026d701R497
> > > > > >
> > > > > > For most maps your change isn't problematic because devices
> > > > > > only appear once in the hierarchy, but if an item does appear
> > > > > > twice, we may
> > > > choose it twice and not detect the dup.
> > > > > >
> > > > > > Perhaps leaving the argument alone and simply changing the for
> > > > > > loop to
> > > > > >
> > > > > >       for (rep = 0; rep < (numrep - outpos) && count > 0 ;
> > > > > > rep++) {
> > > > > >
> > > > > > would do the trick?
> > > > > >
> > > > > Well I believe it actually will detect the dup as 'numrep' and 'outpos'
> > > > > are two different arguments and 'outpos' is correctly passed, so
> > > > > the 'for loop' you marked for detecting collision will check all
> > > > > previously selected items.
> > > >
> > > > Yep, right again.
> > > >
> > > > > I have read the post about current CRUSH behavior and thanks for
> > > > > that detailed explanation. However, the prerequisite there is that
> > > > > the crushmap is changed (some item is reweighted), while in the
> > > > > current implementation the crushmap remains untouched for quite
> > > > > a long time after some devices fail. And because of this, I
> > > > > think the code change can make the pg migrations optimal
> > > > > when OSDs go down. When the crushmap does change, like you
> > > > > said, the extra migrations would be unavoidable.
> > > > > Anyway, as I really cannot find it, could you give me some tips
> > > > > about how or when the crushmap would get modified automatically?
> > > > > Thank you!
> > > >
> > > > I think the situation is the same: when the osd is marked "out"
> > > > the weight[i] value goes to zero and we will reject it at
> > > >
> > > > https://github.com/ceph/ceph/pull/6242/files#diff-
> > > > 0df13ad294f6585c322588cfe026d701R535
> > > >
> > > > (An osd that goes down but not out doesn't have any effect on
> > > > crush--the ceph code just filters those osds out of the crush
> > > > result.)
> > > >
> > > > So, anyway, yes: this looks like a good change.  One thing that
> > > > would be great would be a way to quantify how much it improves things.
> > > >
> > > > The next step is to introduce a tunable that makes this behavior
> > > > optional (as we can't break/change previous mappings).  That means
> > > > introducing a new field in crush/crush.h for the tunable, adding
> > > > accessors in CrushWrapper, introducing a new 'jewel' preset that
> > > > sets it that way (and making jewel the new 'optimal'), modifying
> > > > crushtool.cc with an argument to modify the tunable, modifying
> > > > CrushCompiler to both spit out the tunable if it's non-default
> > > > and to parse it if it's specified in the
> > > input, etc.
> > > >
> > > > If you're comfortable with those changes, it would be great if you
> > > > could do this part.  If not, I can take care of it!
> > > >
> > > > Thanks-
> > > > sage
> > > >
> > > >
> > > > >
> > > > > ---Sangdi
> > > > >
> > > > > > Thanks!
> > > > > > sage
> > > > > >
> > > > > >
> > > > > >
> > > > > > > The quality of distribution could be assured by the hashed
> > > > > > > value x, since it's dependent on pgid. A simple test result
> > > > > > > for a relatively large cluster is as follows, which from
> > > > > > > my view is similar to that of the current code:
> > > > > > >
> > > > > > > crushtool -o crushmap --build --num_osds 64 host straw2 4
> > > > > > > rack
> > > > > > > straw2
> > > > > > > 4 root straw2 0
> > > > > > >
> > > > > > > crushtool --test -i crushmap --num_rep 3 --show-utilization
> > > > > > > rule
> > > > > > > 0 (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > > > > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > > > > > >   device 0:              stored : 57     expected : 48
> > > > > > >   device 1:              stored : 39     expected : 48
> > > > > > >   device 2:              stored : 44     expected : 48
> > > > > > >   device 3:              stored : 42     expected : 48
> > > > > > >   device 4:              stored : 45     expected : 48
> > > > > > >   device 5:              stored : 42     expected : 48
> > > > > > >   device 6:              stored : 45     expected : 48
> > > > > > >   device 7:              stored : 48     expected : 48
> > > > > > >   device 8:              stored : 43     expected : 48
> > > > > > >   device 9:              stored : 38     expected : 48
> > > > > > >   device 10:             stored : 44     expected : 48
> > > > > > >   device 11:             stored : 52     expected : 48
> > > > > > >   device 12:             stored : 54     expected : 48
> > > > > > >   device 13:             stored : 59     expected : 48
> > > > > > >   device 14:             stored : 53     expected : 48
> > > > > > >   device 15:             stored : 53     expected : 48
> > > > > > >   device 16:             stored : 42     expected : 48
> > > > > > >   device 17:             stored : 56     expected : 48
> > > > > > >   device 18:             stored : 49     expected : 48
> > > > > > >   device 19:             stored : 42     expected : 48
> > > > > > >   device 20:             stored : 56     expected : 48
> > > > > > >   device 21:             stored : 51     expected : 48
> > > > > > >   device 22:             stored : 59     expected : 48
> > > > > > >   device 23:             stored : 34     expected : 48
> > > > > > >   device 24:             stored : 43     expected : 48
> > > > > > >   device 25:             stored : 52     expected : 48
> > > > > > >   device 26:             stored : 46     expected : 48
> > > > > > >   device 27:             stored : 44     expected : 48
> > > > > > >   device 28:             stored : 43     expected : 48
> > > > > > >   device 29:             stored : 57     expected : 48
> > > > > > >   device 30:             stored : 53     expected : 48
> > > > > > >   device 31:             stored : 52     expected : 48
> > > > > > >   device 32:             stored : 46     expected : 48
> > > > > > >   device 33:             stored : 42     expected : 48
> > > > > > >   device 34:             stored : 59     expected : 48
> > > > > > >   device 35:             stored : 50     expected : 48
> > > > > > >   device 36:             stored : 48     expected : 48
> > > > > > >   device 37:             stored : 47     expected : 48
> > > > > > >   device 38:             stored : 52     expected : 48
> > > > > > >   device 39:             stored : 56     expected : 48
> > > > > > >   device 40:             stored : 38     expected : 48
> > > > > > >   device 41:             stored : 40     expected : 48
> > > > > > >   device 42:             stored : 52     expected : 48
> > > > > > >   device 43:             stored : 43     expected : 48
> > > > > > >   device 44:             stored : 41     expected : 48
> > > > > > >   device 45:             stored : 51     expected : 48
> > > > > > >   device 46:             stored : 43     expected : 48
> > > > > > >   device 47:             stored : 46     expected : 48
> > > > > > >   device 48:             stored : 42     expected : 48
> > > > > > >   device 49:             stored : 47     expected : 48
> > > > > > >   device 50:             stored : 51     expected : 48
> > > > > > >   device 51:             stored : 54     expected : 48
> > > > > > >   device 52:             stored : 56     expected : 48
> > > > > > >   device 53:             stored : 46     expected : 48
> > > > > > >   device 54:             stored : 47     expected : 48
> > > > > > >   device 55:             stored : 48     expected : 48
> > > > > > >   device 56:             stored : 42     expected : 48
> > > > > > >   device 57:             stored : 40     expected : 48
> > > > > > >   device 58:             stored : 58     expected : 48
> > > > > > >   device 59:             stored : 53     expected : 48
> > > > > > >   device 60:             stored : 41     expected : 48
> > > > > > >   device 61:             stored : 44     expected : 48
> > > > > > >   device 62:             stored : 52     expected : 48
> > > > > > >   device 63:             stored : 60     expected : 48
> > > > > > >
> > > > > > > > Perhaps you can share the osd tree output so we can see
> > > > > > > > how your map is structured?
> > > > > > > >
> > > > > > > The architecture is pretty simple:
> > > > > > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> > > > > > > -5 8.00000 root root
> > > > > > > -1 2.00000     host host0
> > > > > > >  0 1.00000         osd.0       up  1.00000          1.00000
> > > > > > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > > > > > -2 2.00000     host host1
> > > > > > >  2 1.00000         osd.2     down        0          1.00000
> > > > > > >  3 1.00000         osd.3     down        0          1.00000
> > > > > > > -3 2.00000     host host2
> > > > > > >  4 1.00000         osd.4       up  1.00000          1.00000
> > > > > > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > > > > > -4 2.00000     host host3
> > > > > > >  6 1.00000         osd.6       up  1.00000          1.00000
> > > > > > >  7 1.00000         osd.7       up  1.00000          1.00000
> > > > > > >
> > > > > > > And the rule used if you were interested:
> > > > > > > rule replicated_ruleset {
> > > > > > >         ruleset 0
> > > > > > >         type replicated
> > > > > > >         min_size 1
> > > > > > >         max_size 10
> > > > > > >         step take root
> > > > > > >         step chooseleaf firstn 0 type host
> > > > > > >         step emit
> > > > > > > }
> > > > > > >
> > > > > > > Also all the tunables are default values and I'm running the
> > > > > > > master from
> > > > several days ago.
> > > > > > >
> > > > > > > > It is normal for some mappings to have multiple items
> > > > > > > > change, as I've described previously on this
> > > > > > list.
> > > > > > > > It's a result of the fact that we're considering totally
> > > > > > > > independent series of items, but whether we accept a
> > > > > > > > choice also depends on the previous choices (we do not allow dups)..
> > > > > > > > which means that you get echo effects in later slots
> > > > > > when we make a different choice in earlier slots.
> > > > > > > >
> > > > > > > As I described in the previous section, I think these extra
> > > > > > > migrations are not caused by collisions. And IMO the ideal
> > > > > > > result when an osd fails should be a simple shift, just
> > > > > > as you illustrated in the CRUSH paper:
> > > > > > >
> > > > > > > 1 2 3 4 5 6
> > > > > > > a b c d e f ...
> > > > > > >
> > > > > > > b fails ==>
> > > > > > >
> > > > > > > 1 2 3 4 5 6
> > > > > > > a c d e f g ...
> > > > > > >
> > > > > > > ----------
> > > > > > > Best regards,
> > > > > > > Sangdi
> > > > > > >
> > > > > > > > Thanks!
> > > > > > > > sage
> > > > > > > >
> > > > > > > >
> > > > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > > > > > Changed pg ratio: 20 / 32
> > > > > > > > >
> > > > > > > > > Currently the only defect I can see from the change is
> > > > > > > > > that the chance for a given pg to successfully
> > > > > > > > choose required available OSDs might be a bit lower
> > > > > > > > compared with before. However, I believe it will cause
> > > > > > > > problems only when the cluster is pretty small and
> > > > > > > > degraded. And in that case, we can still make it workable
> > > > > > > > by tuning some
> > > > > > of the crushmap parameters such as chooseleaf_tries.
> > > > > > > > >
> > > > > > > > > Anyway I'm not sure if it would raise any other issues,
> > > > > > > > > could you please review it and maybe give me
> > > > > > > > some suggestions? Thank you!
> > > > > > > > >
> > > > > > > > > ----------
> > > > > > > > > Best regards,
> > > > > > > > > Sangdi
> > > > > > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-15 19:14         ` Sage Weil
  2015-10-16  4:28           ` Chen, Xiaoxi
@ 2015-10-16 11:24           ` Xusangdi
  1 sibling, 0 replies; 20+ messages in thread
From: Xusangdi @ 2015-10-16 11:24 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel


> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> Sage Weil
> Sent: Friday, October 16, 2015 3:14 AM
> To: xusangdi 11976 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
>
> On Thu, 15 Oct 2015, Xusangdi wrote:
> > > -----Original Message-----
> > > From: Sage Weil [mailto:sweil@redhat.com]
> > > Sent: Wednesday, October 14, 2015 10:18 PM
> > > To: xusangdi 11976 (RD)
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > > Please see inline.
> > > >
> > > > > -----Original Message-----
> > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > > To: xusangdi 11976 (RD)
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Subject: Re: chooseleaf may cause some unnecessary pg migrations
> > > > >
> > > > > Hi Sangdi,
> > > > >
> > > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > > Hi Sage,
> > > > > >
> > > > > > Recently when I was learning about the crush rules I noticed
> > > > > > that the step chooseleaf may cause
> > > > > some unnecessary pg migrations when OSDs are outed.
> > > > > > For example, for a cluster of 4 hosts with 2 OSDs each, after
> > > > > > host1(osd.2, osd.3) is down, the
> > > > > mapping differences would be like this:
> > > > > > pgid    before <-> after        diff    diff_num
> > > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > > Changed pg ratio: 30 / 32
> > > > > >
> > > > > > I tried to change the code (please see
> > > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > > modification the result would be like this:
> > > > >
> > > > > Can you describe the reasoning behind the change?  I can't make
> > > > > sense of it.  The recursive call is still picking just 1 item,
> > > > > but it looks like it is always choosing a device in for slot 0
> > > > > and not the current slot in the caller.  I'm not
> > > sure how that could generate a correct result.
> > > > >
> > > > From my understanding, the argument 'outpos' of
> > > > crush_choose_firstn(...) is always set to 0 for 'choose firstn'
> > > > and the first step of 'chooseleaf firstn' (you may check in its caller, i.e.
> > > > crush_do_rule(...), that the variable 'j' there is always 0). So
> > > > in fact the code modification won't change anything for these steps.
> > > > It will only affect the recursive call.
> > > > And in this call, 'rep' (and 'r') is independent of its caller,
> > > > which means it is only used to indicate the replica number (the
> > > > start value of 'rep' should be 1 less than 'numrep'), but is not
> > > > related to result position. Since 'outpos' is correctly passed to
> > > > the recursive call, the result would be put in the right slot, not slot 0.
> > >
> > > Okay, I understand now.  Thanks!
> > >
> > > > The current code sets the starting value of 'r' based on the slot
> > > > number, which means the algorithm may generate a different result
> > > > even when only the slot number changes, which I believe is not
> > > > what we want for replicated pools. For example, for 0.14 [3, 6, 5] <->
> > > > [7, 4, 1], after
> > > > osd.3 is down and out, the pg on osd.6 is transferred to osd.7
> > > > because the starting value of 'r' is changed (probably from 1 to
> > > > 0), and likewise the pg on osd.5 is moved to osd.4. Though it looks like
> > > > all of these extra migrations are always within the same
> > > > chosen bucket, they are unnecessary. If we set the starting value
> > > > of 'rep' (and 'r') to a fixed value (e.g. 0), the results would be stable.
> > >
> > > Yeah, that makes sense.
> > >
> > > The only problem I see is the collision check here:
> > >
> > >
> > > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c32
> > > 2588cfe026d701R497
> > >
> > > For most maps your change isn't problematic because devices only
> > > appear once in the hierarchy, but if an item does appear twice, we may choose it twice and not
> detect the dup.
> > >
> > > Perhaps leaving the argument alone and simply changing the for loop
> > > to
> > >
> > >       for (rep = 0; rep < (numrep - outpos) && count > 0 ; rep++) {
> > >
> > > would do the trick?
> > >
> > Well, I believe it actually will detect the dup, since 'numrep' and 'outpos'
> > are two different arguments and 'outpos' is passed through correctly, so the
> > for loop you marked for collision detection will check all
> > previously selected items.
>
> Yep, right again.
>
> > I have read the post about the current CRUSH behavior, and thanks for that
> > detailed explanation. However, the prerequisite there is that the crushmap
> > is changed (some item is reweighted), while in the current
> > implementation the crushmap remains untouched for quite a long time
> > after some devices fail. Because of this, I think the code change
> > can make the pg migrations optimal when osds go down. When
> > the crushmap does change, like you said, the extra migrations are unavoidable.
> > Anyway, as I really cannot find it, could you give me a tip about
> > how or when the crushmap would get modified automatically? Thank you!
>
> I think the situation is the same:
>
Well, I am not sure if this is what you were referring to, but I can confirm that 'mark osd out' is definitely
different from 'remove osd from crushmap'. To be clear, by 'crushmap' here
I mean what we can obtain from the 'ceph osd getcrushmap' command.

> when the osd is marked "out" the weight[i] value goes to zero and we
> will reject it at
>
> https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c322588cfe026d701R535
>
Yes, but we do still pick this 'out' osd first via the crush_bucket_choose(...) function, which would
never return this osd if it were removed from the crushmap. From my understanding, CRUSH first
makes a choice based on values parsed from the crushmap (the crush_bucket_choose function), and then
decides whether to actually keep this item depending on the real-time weights (the is_out function). Hence,
if an osd is only marked out but not manually removed from the crushmap (or maybe it will automatically
be removed after a certain time? this is what I asked), the crush_bucket_choose(...) function will return
exactly the same results as before on each pass.
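To make this concrete, here is a small standalone toy model of the select-then-filter behaviour described above (purely illustrative C, not the Ceph source; the hash and the names are made up). The candidate sequence produced by the bucket chooser depends only on the static map plus (x, r), while marking an OSD out only changes the weight vector consulted by the rejection step:

/* toy_choose.c -- illustrative only, not Ceph code */
#include <stdio.h>

static int toy_bucket_choose(const int *items, int n, unsigned x, unsigned r)
{
        /* stand-in for crush_bucket_choose(): depends only on the static
         * bucket contents plus (x, r), never on runtime weights */
        return items[(x * 2654435761u + r * 40503u) % n];
}

static int toy_is_out(const unsigned *weight, int item)
{
        /* stand-in for is_out(): rejects items whose runtime weight is 0,
         * i.e. OSDs that are marked out in the osdmap */
        return weight[item] == 0;
}

int main(void)
{
        int items[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        unsigned weight[8] = { 0x10000, 0x10000, 0, 0,        /* osd.2/3 out */
                               0x10000, 0x10000, 0x10000, 0x10000 };
        unsigned x = 0x14;   /* stands in for the hashed pg id */
        unsigned r;
        int chosen = -1;

        for (r = 0; r < 8; r++) {
                int item = toy_bucket_choose(items, 8, x, r);
                if (toy_is_out(weight, item))
                        continue;        /* rejected; retry with the next r */
                chosen = item;
                break;
        }
        printf("chosen = %d\n", chosen);
        return 0;
}

With osd.2/osd.3 given weight 0, the chooser still proposes the same candidates in the same order as before; they are simply rejected and the retry continues, which is why the mapping stays aligned with the original as long as the map itself is untouched.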

> (An osd goes down but not out doesn't have any effect on crush--the ceph code just filters those osds
> out of the crush result.)
>
> So, anyway, yes: this looks like a good change.  One thing that would be great would be a way to
> quantify how much it improves things.
>
> The next step is to introduce a tunable that makes this behavior optional (as we can't break/change
> previous mappings).  That means introducing a new field in crush/crush.h for the tunable, adding
> accessors in CrushWrapper, introducing a new 'jewel' preset that sets it that way (and making jewel
> the new 'optimal'), modifying crushtool.cc with an argument to modify the tunable, modifying
> CrushCompiler to both spit out the tunable if it's non-default and to parse it if its specified in the input,
> etc.
>
> If you're comfortable with those changes, it would be great if you could do this part.  If not, I can take
> care of it!
>
Thanks for all the tips! For me it is challenging, but I'd like to give it a try first, as it would be a good way
to learn more about CRUSH.
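As a starting point for myself, here is a rough sketch of the kind of plumbing I understand this needs. The field name below (chooseleaf_stable) is just a placeholder I am using for illustration, and the exact loop bounds are still to be worked out; this is not a final implementation:

/* sketch only -- placeholder names, not a final implementation */

struct crush_map_sketch {
        /* ... existing buckets, rules and tunables ... */
        unsigned chooseleaf_descend_once;  /* existing tunable, for context */
        unsigned chooseleaf_vary_r;        /* existing tunable, for context */
        unsigned chooseleaf_stable;        /* hypothetical new tunable: when
                                            * set, the recursive chooseleaf
                                            * pass starts its replica counter
                                            * at a fixed value instead of the
                                            * caller's slot number */
};

/* In crush_choose_firstn()'s recursive chooseleaf pass, the idea would be
 * to switch the loop to something along the lines of Sage's suggestion
 * above when the tunable is set:
 *
 *     for (rep = 0; rep < (numrep - outpos) && count > 0; rep++)
 *
 * and keep the old behaviour (rep starting from outpos) when it is not,
 * so that existing maps keep their existing mappings.
 */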

Thank you,
---Sangdi

> Thanks-
> sage
>
>
> >
> > ---Sangdi
> >
> > > Thanks!
> > > sage
> > >
> > >
> > >
> > > > The quality of the distribution is still assured by the hashed value
> > > > x, since it's dependent on the pgid. A simple test result for a
> > > > relatively large cluster is as follows, which from my view is
> > > > similar to that of the current code:
> > > >
> > > > crushtool -o crushmap --build --num_osds 64 host straw2 4 rack
> > > > straw2
> > > > 4 root straw2 0
> > > >
> > > > crushtool --test -i crushmap --num_rep 3 --show-utilization rule 0
> > > > (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > > >   device 0:              stored : 57     expected : 48
> > > >   device 1:              stored : 39     expected : 48
> > > >   device 2:              stored : 44     expected : 48
> > > >   device 3:              stored : 42     expected : 48
> > > >   device 4:              stored : 45     expected : 48
> > > >   device 5:              stored : 42     expected : 48
> > > >   device 6:              stored : 45     expected : 48
> > > >   device 7:              stored : 48     expected : 48
> > > >   device 8:              stored : 43     expected : 48
> > > >   device 9:              stored : 38     expected : 48
> > > >   device 10:             stored : 44     expected : 48
> > > >   device 11:             stored : 52     expected : 48
> > > >   device 12:             stored : 54     expected : 48
> > > >   device 13:             stored : 59     expected : 48
> > > >   device 14:             stored : 53     expected : 48
> > > >   device 15:             stored : 53     expected : 48
> > > >   device 16:             stored : 42     expected : 48
> > > >   device 17:             stored : 56     expected : 48
> > > >   device 18:             stored : 49     expected : 48
> > > >   device 19:             stored : 42     expected : 48
> > > >   device 20:             stored : 56     expected : 48
> > > >   device 21:             stored : 51     expected : 48
> > > >   device 22:             stored : 59     expected : 48
> > > >   device 23:             stored : 34     expected : 48
> > > >   device 24:             stored : 43     expected : 48
> > > >   device 25:             stored : 52     expected : 48
> > > >   device 26:             stored : 46     expected : 48
> > > >   device 27:             stored : 44     expected : 48
> > > >   device 28:             stored : 43     expected : 48
> > > >   device 29:             stored : 57     expected : 48
> > > >   device 30:             stored : 53     expected : 48
> > > >   device 31:             stored : 52     expected : 48
> > > >   device 32:             stored : 46     expected : 48
> > > >   device 33:             stored : 42     expected : 48
> > > >   device 34:             stored : 59     expected : 48
> > > >   device 35:             stored : 50     expected : 48
> > > >   device 36:             stored : 48     expected : 48
> > > >   device 37:             stored : 47     expected : 48
> > > >   device 38:             stored : 52     expected : 48
> > > >   device 39:             stored : 56     expected : 48
> > > >   device 40:             stored : 38     expected : 48
> > > >   device 41:             stored : 40     expected : 48
> > > >   device 42:             stored : 52     expected : 48
> > > >   device 43:             stored : 43     expected : 48
> > > >   device 44:             stored : 41     expected : 48
> > > >   device 45:             stored : 51     expected : 48
> > > >   device 46:             stored : 43     expected : 48
> > > >   device 47:             stored : 46     expected : 48
> > > >   device 48:             stored : 42     expected : 48
> > > >   device 49:             stored : 47     expected : 48
> > > >   device 50:             stored : 51     expected : 48
> > > >   device 51:             stored : 54     expected : 48
> > > >   device 52:             stored : 56     expected : 48
> > > >   device 53:             stored : 46     expected : 48
> > > >   device 54:             stored : 47     expected : 48
> > > >   device 55:             stored : 48     expected : 48
> > > >   device 56:             stored : 42     expected : 48
> > > >   device 57:             stored : 40     expected : 48
> > > >   device 58:             stored : 58     expected : 48
> > > >   device 59:             stored : 53     expected : 48
> > > >   device 60:             stored : 41     expected : 48
> > > >   device 61:             stored : 44     expected : 48
> > > >   device 62:             stored : 52     expected : 48
> > > >   device 63:             stored : 60     expected : 48
> > > >
> > > > > Perhaps you can share the osd tree output so we can see how your map it structured?
> > > > >
> > > > The architecture is pretty simple:
> > > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> > > > -5 8.00000 root root
> > > > -1 2.00000     host host0
> > > >  0 1.00000         osd.0       up  1.00000          1.00000
> > > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > > -2 2.00000     host host1
> > > >  2 1.00000         osd.2     down        0          1.00000
> > > >  3 1.00000         osd.3     down        0          1.00000
> > > > -3 2.00000     host host2
> > > >  4 1.00000         osd.4       up  1.00000          1.00000
> > > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > > -4 2.00000     host host3
> > > >  6 1.00000         osd.6       up  1.00000          1.00000
> > > >  7 1.00000         osd.7       up  1.00000          1.00000
> > > >
> > > > And the rule used if you were interested:
> > > > rule replicated_ruleset {
> > > >         ruleset 0
> > > >         type replicated
> > > >         min_size 1
> > > >         max_size 10
> > > >         step take root
> > > >         step chooseleaf firstn 0 type host
> > > >         step emit
> > > > }
> > > >
> > > > Also all the tunables are default values and I'm running the master from several days ago.
> > > >
> > > > > It is normal for some mappings to have multiple items change, as
> > > > > I've described previously on this
> > > list.
> > > > > It's a result of the fact that we're considering totally
> > > > > independent series of items, but whether we accept a choice also
> > > > > depends on the previous choices (we do not allow dups).. which
> > > > > means that you get echo effects in later slots
> > > when we make a different choice in earlier slots.
> > > > >
> > > > As I described in the previous section, I think these extra
> > > > migrations are not caused by collisions. And IMO the ideal result
> > > > when an osd fails should be a simple shifting, just
> > > as you illustrated in the CRUSH paper:
> > > >
> > > > 1 2 3 4 5 6
> > > > a b c d e f ...
> > > >
> > > > b fails ==>
> > > >
> > > > 1 2 3 4 5 6
> > > > a c d e f g ...
> > > >
> > > > ----------
> > > > Best regards,
> > > > Sangdi
> > > >
> > > > > Thanks!
> > > > > sage
> > > > >
> > > > >
> > > > > > pgid    before <-> after        diff    diff_num
> > > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > > Changed pg ratio: 20 / 32
> > > > > >
> > > > > > Currently the only defect I can see from the change is that
> > > > > > the chance for a given pg to successfully
> > > > > choose required available OSDs might be a bit lower compared
> > > > > with before. However, I believe it will cause problems only when
> > > > > the cluster is pretty small and degraded. And in that case, we
> > > > > can still make it workable by tuning some
> > > of the crushmap parameters such as chooseleaf_tries.
> > > > > >
> > > > > > Anyway I'm not sure if it would raise any other issues, could
> > > > > > you please review it and maybe give me
> > > > > some suggestions? Thank you!
> > > > > >
> > > > > > ----------
> > > > > > Best regards,
> > > > > > Sangdi
> > > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-16  6:43                 ` Xusangdi
@ 2015-10-19  1:11                   ` Chen, Xiaoxi
  2015-10-19  2:08                     ` Xusangdi
  0 siblings, 1 reply; 20+ messages in thread
From: Chen, Xiaoxi @ 2015-10-19  1:11 UTC (permalink / raw)
  To: Xusangdi; +Cc: ceph-devel

Sorry but not following...

> then shut down one or more osds (please don't touch the crushmap, just stop the osd service or kill its process).

In this case, the OSD is only down but not out; it will be marked out after 300s.

So in which case does your patch help?

	If you said your patch helps in the "down and out" case, then my experiment is exactly that case.

	If you said your patch helps in the "down but not out" case, then we do need a new way to simulate it.





> -----Original Message-----
> From: Xusangdi [mailto:xu.sangdi@h3c.com]
> Sent: Friday, October 16, 2015 2:44 PM
> To: Chen, Xiaoxi
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> Sorry if I didn't state that clearly.
> 
> Like you did, the performance is measured by the number of PGs remapped
> between the original state and after some OSDs are down and out.
> 
> The down OSDs will be marked out after a short period, or you can do it
> manually using 'ceph osd out' command.
> 
> Thanks,
> ---Sangdi
> 
> > -----Original Message-----
> > From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> > Sent: Friday, October 16, 2015 2:26 PM
> > To: xusangdi 11976 (RD)
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > How did you measure "help performance" ?
> >
> > In your case , the OSD is in "down" but not "out" in osdmap
> >
> > Maybe extend the osdmaptool to support "mark osd out" could do the test?
> > > -----Original Message-----
> > > From: Xusangdi [mailto:xu.sangdi@h3c.com]
> > > Sent: Friday, October 16, 2015 2:12 PM
> > > To: Chen, Xiaoxi
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> > > > Sent: Friday, October 16, 2015 12:28 PM
> > > > To: Sage Weil; xusangdi 11976 (RD)
> > > > Cc: ceph-devel@vger.kernel.org
> > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > >
> > > > I did some tests using crushtool --test (together with this PR
> > > > and #PR 6004).
> > > >
> > > > It doesn't help in a quantitative way.
> > > >
> > > > In a demo crush map with 40 OSDs, 10 OSDs per node, tested with rep = 2
> > > > and 4096 PGs, in each run I randomly kick out an OSD (reweight to
> > > > 0) and compare the PG mapping. If any OSD in the up set of a given
> > > > PG is changed, we say the PG is remapped.
> > > >
> > > >     With this patch,  on average 377.35 PGs remapped with stddev = 20.62.
> > > >     W/o this patch,  on average 347.13 PGs remapped with stddev = 13.51.
> > > >
> > > > In terms of #OSDs changed (each osd that differs in a given slot
> > > > counts for 1), I grabbed one sample from the previous test, which kicked off
> > > > OSD.20, as this processing still needs a bit of hand work.
> > > >    With this patch,  165 times on the primary, 219 times on the secondary,
> > > > 37 times for both (347 in total).
> > > >    W/O this patch, 165 times on the primary, 215 times on the secondary,
> > > > 37 times for both (343 in total).
> > > >
> > > The reason here is that the crushmap is changed when you use
> > > crushtool to reweight an item. This PR cannot serve as a fundamental
> > > solution for the extra migrations; it only helps to improve the
> > > behavior when some devices fail at run time. For a simple
> > > test, you may set up an experimental cluster (vstart.sh would be a
> > > great help) and then shut down one or more osds (please don't touch
> > > the crushmap, just stop the osd service or kill its process).
> > > Thank you.
> > >
> > > ---Sangdi
> > >
> > > >
> > > > > -----Original Message-----
> > > > > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> > > > > owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > > Sent: Friday, October 16, 2015 3:14 AM
> > > > > To: Xusangdi
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > > >
> > > > > On Thu, 15 Oct 2015, Xusangdi wrote:
> > > > > > > -----Original Message-----
> > > > > > > From: Sage Weil [mailto:sweil@redhat.com]
> > > > > > > Sent: Wednesday, October 14, 2015 10:18 PM
> > > > > > > To: xusangdi 11976 (RD)
> > > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > > Subject: RE: chooseleaf may cause some unnecessary pg
> > > > > > > migrations
> > > > > > >
> > > > > > > On Wed, 14 Oct 2015, Xusangdi wrote:
> > > > > > > > Please see inline.
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> > > > > > > > > Sage Weil
> > > > > > > > > Sent: Wednesday, October 14, 2015 12:45 AM
> > > > > > > > > To: xusangdi 11976 (RD)
> > > > > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > > > > Subject: Re: chooseleaf may cause some unnecessary pg
> > > > > > > > > migrations
> > > > > > > > >
> > > > > > > > > Hi Sangdi,
> > > > > > > > >
> > > > > > > > > On Tue, 13 Oct 2015, Xusangdi wrote:
> > > > > > > > > > Hi Sage,
> > > > > > > > > >
> > > > > > > > > > Recently when I was learning about the crush rules I
> > > > > > > > > > noticed that the step chooseleaf may cause
> > > > > > > > > some unnecessary pg migrations when OSDs are outed.
> > > > > > > > > > For example, for a cluster of 4 hosts with 2 OSDs
> > > > > > > > > > each, after host1(osd.2, osd.3) is down, the
> > > > > > > > > mapping differences would be like this:
> > > > > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > > > > 0.1e    [5, 1, 2] <-> [5, 1, 7]         [2]     1
> > > > > > > > > > 0.1f    [0, 7, 3] <-> [0, 7, 4]         [3]     1
> > > > > > > > > > 0.1a    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > > > 0.5     [6, 3, 1] <-> [6, 0, 5]         [1, 3]  2
> > > > > > > > > > 0.4     [5, 6, 2] <-> [5, 6, 0]         [2]     1
> > > > > > > > > > 0.7     [3, 7, 0] <-> [7, 0, 4]         [3]     1
> > > > > > > > > > 0.6     [2, 1, 7] <-> [0, 7, 4]         [1, 2]  2
> > > > > > > > > > 0.9     [3, 4, 0] <-> [5, 0, 7]         [3, 4]  2
> > > > > > > > > > 0.15    [2, 6, 1] <-> [6, 0, 5]         [1, 2]  2
> > > > > > > > > > 0.14    [3, 6, 5] <-> [7, 4, 1]         [3, 5, 6]       3
> > > > > > > > > > 0.17    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > > > > 0.16    [0, 4, 2] <-> [0, 4, 7]         [2]     1
> > > > > > > > > > 0.11    [4, 7, 2] <-> [4, 7, 1]         [2]     1
> > > > > > > > > > 0.10    [0, 3, 6] <-> [0, 7, 4]         [3, 6]  2
> > > > > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 4]         [3]     1
> > > > > > > > > > 0.a     [0, 2, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > > > > 0.b     [2, 5, 7] <-> [4, 7, 0]         [2, 5]  2
> > > > > > > > > > 0.18    [7, 2, 4] <-> [7, 4, 0]         [2]     1
> > > > > > > > > > 0.f     [2, 7, 5] <-> [6, 4, 0]         [2, 5, 7]       3
> > > > > > > > > > Changed pg ratio: 30 / 32
> > > > > > > > > >
> > > > > > > > > > I tried to change the code (please see
> > > > > > > > > > https://github.com/ceph/ceph/pull/6242) and after the
> > > > > > > > > modification the result would be like this:
> > > > > > > > >
> > > > > > > > > Can you describe the reasoning behind the change?  I
> > > > > > > > > can't make sense of it.  The recursive call is still
> > > > > > > > > picking just
> > > > > > > > > 1 item, but it looks like it is always choosing a device
> > > > > > > > > in for slot 0 and not the current slot in the caller.
> > > > > > > > > I'm not
> > > > > > > sure how that could generate a correct result.
> > > > > > > > >
> > > > > > > > From my understanding, the argument 'outpos' of
> > > > > > > > crush_choose_firstn(...) is always set to 0 for 'choose firstn'
> > > > > > > > and the first step of 'chooseleaf firstn' (you may check its caller,
> i.e.
> > > > > > > > crush_do_rule(...), that the variable 'j' there is always 0).
> > > > > > > > So in fact the code modification won't change anything for
> > > > > > > > these steps. It
> > > > > will only affect the recursive call.
> > > > > > > > And it this call, 'rep'(and 'r') is independent from its
> > > > > > > > caller, which means it's only used to indicate the replica
> > > > > > > > number(the start value of 'rep' should be 1 less than
> > > > > > > > 'numrep'), but is not related to result position. Since
> > > > > > > > 'outpos' is correctly passed to the recursive call, the
> > > > > > > > result would be
> > > put in the right slot, not slot 0.
> > > > > > >
> > > > > > > Okay, I undertstand now.  Thanks!
> > > > > > >
> > > > > > > > The current code set the starting value of 'r' based on
> > > > > > > > the slot number, which means the algorithm may generate a
> > > > > > > > different result even when only the slot number changes,
> > > > > > > > which I believe is not what we want for replicated pools.
> > > > > > > > For example, 0.14 [3, 6, 5] <-> [7, 4, 1], after
> > > > > > > > ods.3 is down and out, the pg on osd.6 is transferred to
> > > > > > > > osd.7 because the starting value of 'r' is
> > > > > > > > changed(probably from 1 to 0), and so is the pg from osd.5
> > > > > > > > to osd.4. Though it looks like that all of these extra
> > > > > > > > migrations are always within the same chosen bucket, they
> > > > > > > > are unnecessary. If we set the starting value of 'rep'(and
> > > > > > > > 'r') to be a fixed value(e.g. 0), the results
> > > would be stable.
> > > > > > >
> > > > > > > Yeah, that makes sense.
> > > > > > >
> > > > > > > The only problem I see is the collision check here:
> > > > > > >
> > > > > > >
> > > > > > > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294
> > > > > > > f6
> > > > > > > 58
> > > > > > > 5c
> > > > > > > 32
> > > > > > > 2588cfe026d701R497
> > > > > > >
> > > > > > > For most maps your change isn't problematic because devices
> > > > > > > only appear once in the hierarchy, but if an item does
> > > > > > > appear twice, we may
> > > > > choose it twice and not detect the dup.
> > > > > > >
> > > > > > > Perhaps leaving the argument alone and simply changing the
> > > > > > > for loop to
> > > > > > >
> > > > > > >       for (rep = 0; rep < (numrep - outpos) && count > 0 ;
> > > > > > > rep++) {
> > > > > > >
> > > > > > > would do the trick?
> > > > > > >
> > > > > > Well I believe it actually will detect the dup as 'numrep' and 'outpos'
> > > > > > are two different arguments and 'outpos' is correctly passed,
> > > > > > so the 'for loop' you marked for detecting collision will
> > > > > > check all previously selected items.
> > > > >
> > > > > Yep, right again.
> > > > >
> > > > > > I have read the post about current CRUSH behavior and thanks
> > > > > > for that detailed explanation. However, the prerequisite there
> > > > > > is the crushmap is changed(some item is reweighted), while in
> > > > > > the current implementation the crushmap remains untouched for
> > > > > > quite a long time after some devices fail. And because of
> > > > > > this, I think the code change can make the pg migrations to be
> > > > > > optimal when osds become down. When the crushmap does
> change,
> > > > > > like you said, the extra migrations would be
> > > > > unavoidable.
> > > > > > Anyway, as I really cannot find it, could you give me some tip
> > > > > > about how or when the crushmap would get modified automatically?
> > > Thank you!
> > > > >
> > > > > I think the situation is the same: when the osd is marked "out"
> > > > > the weight[i] value goes to zero and we will reject it at
> > > > >
> > > > > https://github.com/ceph/ceph/pull/6242/files#diff-
> > > > > 0df13ad294f6585c322588cfe026d701R535
> > > > >
> > > > > (An osd goes down but not out doesn't have any effect on
> > > > > crush--the ceph code just filters those osds out of the crush
> > > > > result.)
> > > > >
> > > > > So, anyway, yes: this looks like a good change.  One thing that
> > > > > would be great would be a way to quantify how much it improves
> things.
> > > > >
> > > > > The next step is to introduce a tunable that makes this behavior
> > > > > optional (as we can't break/change previous mappings).  That
> > > > > means introducing a new field in crush/crush.h for the tunable,
> > > > > adding accessors in CrushWrapper, introducing a new 'jewel'
> > > > > preset that sets it that way (and making jewel the new
> > > > > 'optimal'), modifying crushtool.cc with an argument to modify
> > > > > the tunable, modifying CrushCompiler to both spit out the
> > > > > tunable if it's non- default and to parse it if its specified in
> > > > > the
> > > > input, etc.
> > > > >
> > > > > If you're comfortable with those changes, it would be great if
> > > > > you could do this part.  If not, I can take care of it!
> > > > >
> > > > > Thanks-
> > > > > sage
> > > > >
> > > > >
> > > > > >
> > > > > > ---Sangdi
> > > > > >
> > > > > > > Thanks!
> > > > > > > sage
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > The quality of distribution could be assured by the hashed
> > > > > > > > value x, since it's dependent on pgid. A simple test
> > > > > > > > result for a relatively large cluster is as followings,
> > > > > > > > which from my view is similar with that of the current code:
> > > > > > > >
> > > > > > > > crushtool -o crushmap --build --num_osds 64 host straw2 4
> > > > > > > > rack
> > > > > > > > straw2
> > > > > > > > 4 root straw2 0
> > > > > > > >
> > > > > > > > crushtool --test -i crushmap --num_rep 3
> > > > > > > > --show-utilization rule
> > > > > > > > 0 (replicated_ruleset), x = 0..1023, numrep = 3..3 rule 0
> > > > > > > > (replicated_ruleset) num_rep 3 result size == 3: 1024/1024
> > > > > > > >   device 0:              stored : 57     expected : 48
> > > > > > > >   device 1:              stored : 39     expected : 48
> > > > > > > >   device 2:              stored : 44     expected : 48
> > > > > > > >   device 3:              stored : 42     expected : 48
> > > > > > > >   device 4:              stored : 45     expected : 48
> > > > > > > >   device 5:              stored : 42     expected : 48
> > > > > > > >   device 6:              stored : 45     expected : 48
> > > > > > > >   device 7:              stored : 48     expected : 48
> > > > > > > >   device 8:              stored : 43     expected : 48
> > > > > > > >   device 9:              stored : 38     expected : 48
> > > > > > > >   device 10:             stored : 44     expected : 48
> > > > > > > >   device 11:             stored : 52     expected : 48
> > > > > > > >   device 12:             stored : 54     expected : 48
> > > > > > > >   device 13:             stored : 59     expected : 48
> > > > > > > >   device 14:             stored : 53     expected : 48
> > > > > > > >   device 15:             stored : 53     expected : 48
> > > > > > > >   device 16:             stored : 42     expected : 48
> > > > > > > >   device 17:             stored : 56     expected : 48
> > > > > > > >   device 18:             stored : 49     expected : 48
> > > > > > > >   device 19:             stored : 42     expected : 48
> > > > > > > >   device 20:             stored : 56     expected : 48
> > > > > > > >   device 21:             stored : 51     expected : 48
> > > > > > > >   device 22:             stored : 59     expected : 48
> > > > > > > >   device 23:             stored : 34     expected : 48
> > > > > > > >   device 24:             stored : 43     expected : 48
> > > > > > > >   device 25:             stored : 52     expected : 48
> > > > > > > >   device 26:             stored : 46     expected : 48
> > > > > > > >   device 27:             stored : 44     expected : 48
> > > > > > > >   device 28:             stored : 43     expected : 48
> > > > > > > >   device 29:             stored : 57     expected : 48
> > > > > > > >   device 30:             stored : 53     expected : 48
> > > > > > > >   device 31:             stored : 52     expected : 48
> > > > > > > >   device 32:             stored : 46     expected : 48
> > > > > > > >   device 33:             stored : 42     expected : 48
> > > > > > > >   device 34:             stored : 59     expected : 48
> > > > > > > >   device 35:             stored : 50     expected : 48
> > > > > > > >   device 36:             stored : 48     expected : 48
> > > > > > > >   device 37:             stored : 47     expected : 48
> > > > > > > >   device 38:             stored : 52     expected : 48
> > > > > > > >   device 39:             stored : 56     expected : 48
> > > > > > > >   device 40:             stored : 38     expected : 48
> > > > > > > >   device 41:             stored : 40     expected : 48
> > > > > > > >   device 42:             stored : 52     expected : 48
> > > > > > > >   device 43:             stored : 43     expected : 48
> > > > > > > >   device 44:             stored : 41     expected : 48
> > > > > > > >   device 45:             stored : 51     expected : 48
> > > > > > > >   device 46:             stored : 43     expected : 48
> > > > > > > >   device 47:             stored : 46     expected : 48
> > > > > > > >   device 48:             stored : 42     expected : 48
> > > > > > > >   device 49:             stored : 47     expected : 48
> > > > > > > >   device 50:             stored : 51     expected : 48
> > > > > > > >   device 51:             stored : 54     expected : 48
> > > > > > > >   device 52:             stored : 56     expected : 48
> > > > > > > >   device 53:             stored : 46     expected : 48
> > > > > > > >   device 54:             stored : 47     expected : 48
> > > > > > > >   device 55:             stored : 48     expected : 48
> > > > > > > >   device 56:             stored : 42     expected : 48
> > > > > > > >   device 57:             stored : 40     expected : 48
> > > > > > > >   device 58:             stored : 58     expected : 48
> > > > > > > >   device 59:             stored : 53     expected : 48
> > > > > > > >   device 60:             stored : 41     expected : 48
> > > > > > > >   device 61:             stored : 44     expected : 48
> > > > > > > >   device 62:             stored : 52     expected : 48
> > > > > > > >   device 63:             stored : 60     expected : 48
> > > > > > > >
> > > > > > > > > Perhaps you can share the osd tree output so we can see
> > > > > > > > > how your
> > > > > map it structured?
> > > > > > > > >
> > > > > > > > The architecture is pretty simple:
> > > > > > > > ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-
> AFFINITY
> > > > > > > > -5 8.00000 root root
> > > > > > > > -1 2.00000     host host0
> > > > > > > >  0 1.00000         osd.0       up  1.00000          1.00000
> > > > > > > >  1 1.00000         osd.1       up  1.00000          1.00000
> > > > > > > > -2 2.00000     host host1
> > > > > > > >  2 1.00000         osd.2     down        0          1.00000
> > > > > > > >  3 1.00000         osd.3     down        0          1.00000
> > > > > > > > -3 2.00000     host host2
> > > > > > > >  4 1.00000         osd.4       up  1.00000          1.00000
> > > > > > > >  5 1.00000         osd.5       up  1.00000          1.00000
> > > > > > > > -4 2.00000     host host3
> > > > > > > >  6 1.00000         osd.6       up  1.00000          1.00000
> > > > > > > >  7 1.00000         osd.7       up  1.00000          1.00000
> > > > > > > >
> > > > > > > > And the rule used if you were interested:
> > > > > > > > rule replicated_ruleset {
> > > > > > > >         ruleset 0
> > > > > > > >         type replicated
> > > > > > > >         min_size 1
> > > > > > > >         max_size 10
> > > > > > > >         step take root
> > > > > > > >         step chooseleaf firstn 0 type host
> > > > > > > >         step emit
> > > > > > > > }
> > > > > > > >
> > > > > > > > Also all the tunables are default values and I'm running
> > > > > > > > the master from
> > > > > several days ago.
> > > > > > > >
> > > > > > > > > It is normal for some mappings to have multiple items
> > > > > > > > > change, as I've described previously on this
> > > > > > > list.
> > > > > > > > > It's a result of the fact that we're considering totally
> > > > > > > > > independent series of items, but whether we accept a
> > > > > > > > > choice also depends on the previous choices (we do not allow
> dups)..
> > > > > > > > > which means that you get echo effects in later slots
> > > > > > > when we make a different choice in earlier slots.
> > > > > > > > >
> > > > > > > > As I described in the previous section, I think these
> > > > > > > > extra migrations is not caused by collisions. And IMO the
> > > > > > > > ideal result when an osd fails should be a simple
> > > > > > > > shifting, just
> > > > > > > as you illustrated in the CRUSH paper:
> > > > > > > >
> > > > > > > > 1 2 3 4 5 6
> > > > > > > > a b c d e f ...
> > > > > > > >
> > > > > > > > b fails ==>
> > > > > > > >
> > > > > > > > 1 2 3 4 5 6
> > > > > > > > a c d e f g ...
> > > > > > > >
> > > > > > > > ----------
> > > > > > > > Best regards,
> > > > > > > > Sangdi
> > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > > sage
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > pgid    before <-> after        diff    diff_num
> > > > > > > > > > 0.1e    [5, 0, 3] <-> [5, 0, 7]         [3]     1
> > > > > > > > > > 0.1f    [0, 6, 3] <-> [0, 6, 4]         [3]     1
> > > > > > > > > > 0.1a    [0, 5, 2] <-> [0, 5, 6]         [2]     1
> > > > > > > > > > 0.5     [6, 3, 0] <-> [6, 0, 5]         [3]     1
> > > > > > > > > > 0.4     [5, 7, 2] <-> [5, 7, 0]         [2]     1
> > > > > > > > > > 0.7     [3, 7, 1] <-> [7, 1, 5]         [3]     1
> > > > > > > > > > 0.6     [2, 0, 7] <-> [0, 7, 4]         [2]     1
> > > > > > > > > > 0.9     [3, 5, 1] <-> [5, 1, 7]         [3]     1
> > > > > > > > > > 0.15    [2, 6, 1] <-> [6, 1, 4]         [2]     1
> > > > > > > > > > 0.14    [3, 7, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > > > > 0.17    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > > > 0.16    [0, 4, 3] <-> [0, 4, 6]         [3]     1
> > > > > > > > > > 0.11    [4, 6, 3] <-> [4, 6, 0]         [3]     1
> > > > > > > > > > 0.10    [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > > > > 0.13    [1, 7, 3] <-> [1, 7, 5]         [3]     1
> > > > > > > > > > 0.a     [0, 3, 6] <-> [0, 6, 5]         [3]     1
> > > > > > > > > > 0.c     [5, 0, 3] <-> [5, 0, 6]         [3]     1
> > > > > > > > > > 0.b     [2, 4, 6] <-> [4, 6, 1]         [2]     1
> > > > > > > > > > 0.18    [7, 3, 5] <-> [7, 5, 1]         [3]     1
> > > > > > > > > > 0.f     [2, 6, 5] <-> [6, 5, 1]         [2]     1
> > > > > > > > > > Changed pg ratio: 20 / 32
> > > > > > > > > >
> > > > > > > > > > Currently the only defect I can see from the change is
> > > > > > > > > > that the chance for a given pg to successfully
> > > > > > > > > choose required available OSDs might be a bit lower
> > > > > > > > > compared with before. However, I believe it will cause
> > > > > > > > > problems only when the cluster is pretty small and
> > > > > > > > > degraded. And in that case, we can still make it
> > > > > > > > > workable by tuning some
> > > > > > > of the crushmap parameters such as chooseleaf_tries.
> > > > > > > > > >
> > > > > > > > > > Anyway I'm not sure if it would raise any other
> > > > > > > > > > issues, could you please review it and maybe give me
> > > > > > > > > some suggestions? Thank you!
> > > > > > > > > >
> > > > > > > > > > ----------
> > > > > > > > > > Best regards,
> > > > > > > > > > Sangdi
> > > > > > > > > >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-19  1:11                   ` Chen, Xiaoxi
@ 2015-10-19  2:08                     ` Xusangdi
  2015-10-19  2:17                       ` Sage Weil
  0 siblings, 1 reply; 20+ messages in thread
From: Xusangdi @ 2015-10-19  2:08 UTC (permalink / raw)
  To: Chen, Xiaoxi; +Cc: ceph-devel


> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> Chen, Xiaoxi
> Sent: Monday, October 19, 2015 9:11 AM
> To: xusangdi 11976 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
>
> Sorry but not following...
>
> > then shut down one or more osds (please don't touch the crushmap, just stop the osd service or kill
> its process).
>
> In this case, the OSD is only down but not out; it will be marked out after 300s.
>
> So in which case does your patch help?
>
> 	If you said your patch helps in the "down and out" case, then my experiment is exactly that case.
>

I am afraid it is probably not. Could you tell me how you simulated the osd "down and out" situation using crushtool? If it was done with arguments such as '--remove-item' or '--reweight-item', it modified the crushmap, which is not what I'm aiming at. Some details, as I said in the previous discussion:
        > I think the situation is the same:
        >
        Well, I am not sure if this is what you were referring to, but I can confirm that 'mark osd out' is definitely different from 'remove osd from crushmap'. To be clear, by 'crushmap' here I mean what we obtain from the 'ceph osd getcrushmap' command.

        > when the osd is marked "out" the weight[i] value goes to zero and we
        > will reject it at
        >
        > https://github.com/ceph/ceph/pull/6242/files#diff-0df13ad294f6585c322588cfe026d701R535
        >
        Yes, but we do pick this 'out' osd first in the crush_bucket_choose(...) function, which would never return this osd only if it were removed from the crushmap. From my understanding, CRUSH first makes a choice based on values parsed from the crushmap (the crush_bucket_choose function), and then decides whether to actually keep the chosen item depending on the real-time weights (the is_out function). Hence, if an osd is only marked out but not manually removed from the crushmap (or maybe it is automatically removed after a certain time? that is what I asked), the crush_bucket_choose(...) function will return exactly the same results as before on each attempt.

        > (An osd goes down but not out doesn't have any effect on crush--the
        > ceph code just filters those osds out of the crush result.)
In short, from the code's point of view the 'crushmap' is stable while a cluster is running, and it does not change even when some device goes down and out. So again, if your tests simulate a running cluster, you will see the difference. Please note that this is only my personal understanding, based on my own knowledge and experience, and it has not yet been confirmed by Sage or any other expert.
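
To make this concrete, here is a minimal, runnable Python sketch of the two-phase selection I am describing (only an illustration, not the actual C code in crush/mapper.c; the hash-based pick is a stand-in for crush_bucket_choose()):

    import hashlib

    def bucket_choose(items, pg, attempt):
        # Stand-in for crush_bucket_choose(): a deterministic pseudo-random pick
        # that looks only at the items compiled into the crushmap, never at the
        # runtime in/out state.
        h = hashlib.md5(("%d-%d" % (pg, attempt)).encode()).hexdigest()
        return items[int(h, 16) % len(items)]

    def choose_osd(crushmap_items, runtime_weights, pg, tries=5):
        # runtime_weights maps osd id -> current weight; 0 means marked out.
        for attempt in range(tries):              # bounded retries, like chooseleaf_tries
            osd = bucket_choose(crushmap_items, pg, attempt)
            if runtime_weights.get(osd, 0) > 0:   # the is_out()-style rejection
                return osd
            # a marked-out osd was still picked first; retry with a new attempt
        return None

    # osd.2 and osd.3 marked out (weight 0) but still present in the crushmap:
    weights = {0: 1, 1: 1, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}
    print(choose_osd(list(range(8)), weights, pg=0x1e))

Marking an osd out only affects the second phase, so the first-phase candidates stay exactly the same as long as the crushmap itself is untouched.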

Thank you,
---Sangdi

>       If you said your patch helps on "down but not out" case, then we do need a new way to
> simulate.
>
>



* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-19  2:08                     ` Xusangdi
@ 2015-10-19  2:17                       ` Sage Weil
  2015-10-19  7:33                         ` Chen, Xiaoxi
  0 siblings, 1 reply; 20+ messages in thread
From: Sage Weil @ 2015-10-19  2:17 UTC (permalink / raw)
  To: Xusangdi; +Cc: Chen, Xiaoxi, ceph-devel

On Mon, 19 Oct 2015, Xusangdi wrote:
> 
> > -----Original Message-----
> > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of
> > Chen, Xiaoxi
> > Sent: Monday, October 19, 2015 9:11 AM
> > To: xusangdi 11976 (RD)
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > Sorry but not following...
> >
> > > then shut down one or more osds (please don't touch the crushmap, just stop the osd service or kill
> > its process).
> >
> > In this case, OSD is only down but not out, but will be marked out after 300s.
> >
> > So in what case your patch is helping?
> >
> >       If you said your patch helps on "down and out" , then my experiment is exactly the case,
> >
> 
> I am afraid it probably is not. Could you tell me how you simulated
> the OSD "down and out" situation using crushtool? If it was done with
> arguments such as '--remove-item' or '--reweight-item', it modified the
> crushmap and is not what I'm aiming for.

There is a --weight argument (noted in usage near --test, which is the 
only piece that uses it).  The crush map is not modified--only the weight 
vector that is passed in when a mapping is calculated (which is the 
equivalent of the in/out state in Ceph's OSDMap).  This should let you 
simulate this case.
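
For example (a rough sketch, assuming a compiled map in crushmap.bin, a replicated rule 0 with size 3, and osd.2 being treated as out; option spellings may vary slightly between versions):

    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings > before.txt
    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings --weight 2 0 > after.txt
    diff before.txt after.txt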

When I'm debugging/understanding these issues I usually change the dprintk 
#define at the top of crush/mapper.c and use crushtool or osdmaptool to 
calculate a single mapping, comparing the log before and after a 
particular change.
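
(Concretely, that might look like uncommenting the printf in the dprintk #define and then running something like

    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --min-x 7 --max-x 7 --show-mappings

once with the old code and once with the new code, then diffing the debug output; the file name and numbers here are only placeholders.)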

sage


* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-19  2:17                       ` Sage Weil
@ 2015-10-19  7:33                         ` Chen, Xiaoxi
  2015-10-19  8:12                           ` Xusangdi
  0 siblings, 1 reply; 20+ messages in thread
From: Chen, Xiaoxi @ 2015-10-19  7:33 UTC (permalink / raw)
  To: Sage Weil, Xusangdi; +Cc: ceph-devel

Thanks sage.

Retesting using --test --weight ${rand} 0, still with 40 OSDs, 10 per host:

The new code averages 204.31, while the old code averages 202.31.


> -----Original Message-----
> From: Sage Weil [mailto:sage@newdream.net]
> Sent: Monday, October 19, 2015 10:18 AM
> To: Xusangdi
> Cc: Chen, Xiaoxi; ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> On Mon, 19 Oct 2015, Xusangdi wrote:
> >
> > > -----Original Message-----
> > > From: ceph-devel-owner@vger.kernel.org
> > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Chen, Xiaoxi
> > > Sent: Monday, October 19, 2015 9:11 AM
> > > To: xusangdi 11976 (RD)
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > > Sorry but not following...
> > >
> > > > then shut down one or more osds (please don't touch the crushmap,
> > > > just stop the osd service or kill
> > > its process).
> > >
> > > In this case, OSD is only down but not out, but will be marked out after
> 300s.
> > >
> > > So in what case your patch is helping?
> > >
> > >       If you said your patch helps on "down and out" , then my
> > > experiment is exactly the case,
> > >
> >
> > I am afraid it probably is not. Could you tell me how you simulated
> > the OSD "down and out" situation using crushtool? If it was done with
> > arguments such as '--remove-item' or '--reweight-item', it modified the
> > crushmap and is not what I'm aiming for.
> 
> There is a --weight argument (noted in usage near --test, which is the only
> piece that uses it).  The crush map is not modified--only the weight vector
> that is passed in when a mapping is calculated (which is the equivalent of the
> in/out state in Ceph's OSDMap).  This should let you simulate this case.
> 
> When I'm debugging/understanding these issues I usually change the dprintk
> #define at the top of crush/mapper.c and use crushtool or osdmaptool to
> calculate a single mapping, comparing the log before and after a particular
> change.
> 
> sage


* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-19  7:33                         ` Chen, Xiaoxi
@ 2015-10-19  8:12                           ` Xusangdi
  2015-10-23 15:41                             ` Chen, Xiaoxi
  0 siblings, 1 reply; 20+ messages in thread
From: Xusangdi @ 2015-10-19  8:12 UTC (permalink / raw)
  To: Chen, Xiaoxi; +Cc: ceph-devel

I just realized the measurement I mentioned last time is not precise. It should be 'number of changed mappings' instead of 'number of remapped PGs'.
For example, [2,1,7] -> [0,7,4] should be regarded differently from [2,1,7] -> [1,7,4], as the first change causes twice as much data transfer.
Is this the reason for your test results?
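
For clarity, a tiny Python sketch of the metric I mean, using the two pairs above:

    def changed_osds(before, after):
        # count the OSDs that appear in the old mapping but not in the new one,
        # i.e. how many copies have to be rebuilt on a different OSD
        return len(set(before) - set(after))

    print(changed_osds([2, 1, 7], [0, 7, 4]))   # -> 2, roughly twice the data movement
    print(changed_osds([2, 1, 7], [1, 7, 4]))   # -> 1, only one copy has to move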

> -----Original Message-----
> From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> Sent: Monday, October 19, 2015 3:34 PM
> To: Sage Weil; xusangdi 11976 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
>
> Thanks sage.
>
> Retesting by using --test --weight ${rand} 0 , still in 40 OSDs, 10 per host case:
>
> New code average on 204.31 while old code average on 202.31
>
>
> > -----Original Message-----
> > From: Sage Weil [mailto:sage@newdream.net]
> > Sent: Monday, October 19, 2015 10:18 AM
> > To: Xusangdi
> > Cc: Chen, Xiaoxi; ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > On Mon, 19 Oct 2015, Xusangdi wrote:
> > >
> > > > -----Original Message-----
> > > > From: ceph-devel-owner@vger.kernel.org
> > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Chen,
> > > > Xiaoxi
> > > > Sent: Monday, October 19, 2015 9:11 AM
> > > > To: xusangdi 11976 (RD)
> > > > Cc: ceph-devel@vger.kernel.org
> > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > >
> > > > Sorry but not following...
> > > >
> > > > > then shut down one or more osds (please don't touch the
> > > > > crushmap, just stop the osd service or kill
> > > > its process).
> > > >
> > > > In this case, OSD is only down but not out, but will be marked out
> > > > after
> > 300s.
> > > >
> > > > So in what case your patch is helping?
> > > >
> > > >       If you said your patch helps on "down and out" , then my
> > > > experiment is exactly the case,
> > > >
> > >
> > > I am afraid it probably is not. Could you tell me how you
> > > simulated the OSD "down and out" situation using crushtool? If it was
> > > done with arguments such as '--remove-item' or '--reweight-item', it
> > > modified the crushmap and is not what I'm aiming for.
> >
> > There is a --weight argument (noted in usage near --test, which is the
> > only piece that uses it).  The crush map is not modified--only the
> > weight vector that is passed in when a mapping is calculated (which is
> > the equivalent of the in/out state in Ceph's OSDMap).  This should let you simulate this case.
> >
> > When I'm debugging/understanding these issues I usually change the
> > dprintk #define at the top of crush/mapper.c and use crushtool or
> > osdmaptool to calculate a single mapping, comparing the log before and
> > after a particular change.
> >
> > sage


* RE: chooseleaf may cause some unnecessary pg migrations
  2015-10-19  8:12                           ` Xusangdi
@ 2015-10-23 15:41                             ` Chen, Xiaoxi
  0 siblings, 0 replies; 20+ messages in thread
From: Chen, Xiaoxi @ 2015-10-23 15:41 UTC (permalink / raw)
  To: Xusangdi; +Cc: ceph-devel

Yes, good catch.

In terms of the number of changed mappings (I sorted and compared the OSD sets), this patch does help a lot, as every remapped PG has only one OSD changing.

Cool, well done.

> -----Original Message-----
> From: Xusangdi [mailto:xu.sangdi@h3c.com]
> Sent: Monday, October 19, 2015 4:13 PM
> To: Chen, Xiaoxi
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
> 
> I just realized the measurement I mentioned last time is not precise. It
> should be 'number of changed mappings' instead of 'number of remapped
> PGs'.
> For example, [2,1,7] -> [0,7,4] should be regarded differently from [2,1,7] ->
> [1,7,4], as the first change causes doubled data transfer.
> Is this the reason for your test results?
> 
> > -----Original Message-----
> > From: Chen, Xiaoxi [mailto:xiaoxi.chen@intel.com]
> > Sent: Monday, October 19, 2015 3:34 PM
> > To: Sage Weil; xusangdi 11976 (RD)
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > Thanks sage.
> >
> > Retesting by using --test --weight ${rand} 0 , still in 40 OSDs, 10 per host
> case:
> >
> > New code average on 204.31 while old code average on 202.31
> >
> >
> > > -----Original Message-----
> > > From: Sage Weil [mailto:sage@newdream.net]
> > > Sent: Monday, October 19, 2015 10:18 AM
> > > To: Xusangdi
> > > Cc: Chen, Xiaoxi; ceph-devel@vger.kernel.org
> > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > >
> > > On Mon, 19 Oct 2015, Xusangdi wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: ceph-devel-owner@vger.kernel.org
> > > > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Chen,
> > > > > Xiaoxi
> > > > > Sent: Monday, October 19, 2015 9:11 AM
> > > > > To: xusangdi 11976 (RD)
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > > >
> > > > > Sorry but not following...
> > > > >
> > > > > > then shut down one or more osds (please don't touch the
> > > > > > crushmap, just stop the osd service or kill
> > > > > its process).
> > > > >
> > > > > In this case, OSD is only down but not out, but will be marked
> > > > > out after
> > > 300s.
> > > > >
> > > > > So in what case your patch is helping?
> > > > >
> > > > >       If you said your patch helps on "down and out" , then my
> > > > > experiment is exactly the case,
> > > > >
> > > >
> > > > I am afraid it probably is not. Could you tell me how you
> > > > simulated the OSD "down and out" situation using crushtool? If it
> > > > was done with arguments such as '--remove-item' or '--reweight-item',
> > > > it modified the crushmap and is not what I'm aiming for.
> > >
> > > There is a --weight argument (noted in usage near --test, which is
> > > the only piece that uses it).  The crush map is not modified--only
> > > the weight vector that is passed in when a mapping is calculated
> > > (which is the equivalent of the in/out state in Ceph's OSDMap).  This
> should let you simulate this case.
> > >
> > > When I'm debugging/understanding these issues I usually change the
> > > dprintk #define at the top of crush/mapper.c and use crushtool or
> > > osdmaptool to calculate a single mapping, comparing the log before
> > > and after a particular change.
> > >
> > > sage



Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-13  8:22 chooseleaf may cause some unnecessary pg migrations Xusangdi
2015-10-13 14:20 ` Robert LeBlanc
2015-10-14  2:15   ` Xusangdi
2015-10-14 12:18     ` Sage Weil
2015-10-13 16:44 ` Sage Weil
2015-10-14  5:22   ` Xusangdi
2015-10-14 14:17     ` Sage Weil
     [not found]       ` <FF8D83DAE5DF57468726B79904321E06047677@H3CMLB12-EX.srv.huawei-3com.com>
2015-10-15 19:14         ` Sage Weil
2015-10-16  4:28           ` Chen, Xiaoxi
2015-10-16  6:12             ` Xusangdi
2015-10-16  6:26               ` Chen, Xiaoxi
2015-10-16  6:36                 ` Dałek, Piotr
2015-10-16  6:43                 ` Xusangdi
2015-10-19  1:11                   ` Chen, Xiaoxi
2015-10-19  2:08                     ` Xusangdi
2015-10-19  2:17                       ` Sage Weil
2015-10-19  7:33                         ` Chen, Xiaoxi
2015-10-19  8:12                           ` Xusangdi
2015-10-23 15:41                             ` Chen, Xiaoxi
2015-10-16 11:24           ` Xusangdi
