From: Loic Dachary <loic@dachary.org>
To: "Pedro López-Adeva" <plopezadeva@gmail.com>
Cc: ceph-devel@vger.kernel.org
Subject: Re: crush multipick anomaly
Date: Sat, 22 Apr 2017 18:51:32 +0200
Message-ID: <d60c247f-865f-03f7-1a5a-f692c666fa65@dachary.org>
In-Reply-To: <CAFdhnjdQSZZJJ0BUG45-QWUj6enr+0ZPpOxVjNhijzFPXL9==Q@mail.gmail.com>

Hi Pedro,

I tried the optimize function you suggested and got it to work[1]! It is my first time with scipy.optimize[2] and I'm not sure this is done right. In a nutshell I chose the Nelder-Mead method[3] because it seemed simpler. The initial guess is set to the target weights and the loss function is simply the standard deviation of the difference between the expected object count per device and the actual object count returned by the simulation (a minimal sketch of the setup follows). I'm pretty sure this is not right, but I don't know what else to do and it's not completely wrong either. The sum of the differences seems simpler and probably gives the same results.
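
To make this concrete, here is a minimal self-contained sketch of that setup. run_simulation is a hypothetical stand-in for the python-crush simulation (the real code is in [1]); it just splits objects in proportion to the weights with multinomial noise, so the numbers are illustrative only.

- BEGIN CODE -
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the python-crush simulation: given per-device
# weights, return the simulated object count per device.
def run_simulation(weights, n_objects=1024, rng=np.random.default_rng(0)):
    p = np.clip(np.asarray(weights, dtype=float), 1e-9, None)
    return rng.multinomial(n_objects, p / p.sum())

# device weights from the example below: six disks of 128 and one of 256
target_weights = np.array([128., 128., 128., 128., 128., 128., 256.])
expected = 1024 * target_weights / target_weights.sum()

def loss(weights):
    # standard deviation of the per-device (expected - actual) differences
    return np.std(expected - run_simulation(weights))

# initial guess: the target weights themselves
result = minimize(loss, x0=target_weights, method='Nelder-Mead')
print(result.x)
- END CODE -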

I ran the optimization to fix the uneven distribution we see when there are not enough samples, because the simulation runs faster than with the multipick anomaly. I suppose it could also work to fix the multipick anomaly. I assume it's ok to use the same method even though the root cause of the uneven distribution is different, because we're not using a gradient-based optimization. But I'm not sure and maybe this is completely wrong...

Before optimization the situation is:

         ~expected~  ~objects~  ~delta~   ~delta%~
~name~                                            
dc1            1024       1024        0   0.000000
host0           256        294       38  14.843750
device0         128        153       25  19.531250
device1         128        141       13  10.156250
host1           256        301       45  17.578125
device2         128        157       29  22.656250
device3         128        144       16  12.500000
host2           512        429      -83 -16.210938
device4         128         96      -32 -25.000000
device5         128        117      -11  -8.593750
device6         256        216      -40 -15.625000

and after optimization we have the following:

         ~expected~  ~objects~  ~delta~  ~delta%~
~name~                                           
dc1            1024       1024        0  0.000000
host0           256        259        3  1.171875
device0         128        129        1  0.781250
device1         128        130        2  1.562500
host1           256        258        2  0.781250
device2         128        129        1  0.781250
device3         128        129        1  0.781250
host2           512        507       -5 -0.976562
device4         128        126       -2 -1.562500
device5         128        127       -1 -0.781250
device6         256        254       -2 -0.781250

Do you think I should keep going in this direction? Now that CRUSH can use multiple weights[4], we have a convenient way to use these optimized values.

Cheers

[1] http://libcrush.org/main/python-crush/merge_requests/40/diffs#614384bdef0ae975388b03cf89fc7226aa7d2566_58_180
[2] https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
[3] https://docs.scipy.org/doc/scipy/reference/optimize.minimize-neldermead.html#optimize-minimize-neldermead
[4] https://github.com/ceph/ceph/pull/14486

On 03/23/2017 04:32 PM, Pedro López-Adeva wrote:
> There are a lot of gradient-free methods. I will first try to run the
> ones available in plain scipy
> (https://docs.scipy.org/doc/scipy-0.18.1/reference/optimize.html).
> Some of them don't require the gradient and some of them can estimate
> it. The reason to go without the gradient is to run the CRUSH
> algorithm as a black box. In that case this would be the pseudo-code:
> 
> - BEGIN CODE -
> def build_target(desired_freqs):
>     def target(weights):
>         # run a simulation of CRUSH for a number of objects
>         sim_freqs = run_crush(weights)
>         # Kullback-Leibler divergence between desired frequencies
>         # and simulated ones
>         return loss(sim_freqs, desired_freqs)
>     return target
>
> # minimize needs an initial guess x0 in addition to the target function
> weights = scipy.optimize.minimize(build_target(desired_freqs),
>                                   x0=initial_weights)
> - END CODE -
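
For the record, my understanding of the KL divergence loss mentioned in the pseudo-code is the sketch below; this is my own reading, not code from your notebook.

- BEGIN CODE -
import numpy as np

def loss(sim_freqs, desired_freqs, eps=1e-12):
    # KL(desired || simulated); eps avoids log(0) when a device
    # received no objects in the simulation
    p = np.asarray(desired_freqs, dtype=float)
    q = np.asarray(sim_freqs, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))
- END CODE -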
> 
> The tricky thing here is that this procedure can be slow if the
> simulation (run_crush) needs to place a lot of objects to get accurate
> simulated frequencies. This is especially true if the minimize method
> attempts to approximate the gradient using finite differences, since
> it will evaluate the target function a number of times proportional
> to the number of weights. Apart from the methods in scipy I would
> also try optimization methods that aim to perform as few evaluations
> as possible, for example HyperOpt
> (http://hyperopt.github.io/hyperopt/), which by the way takes into
> account that the target function can be noisy.
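
For reference, here is my guess at what trying HyperOpt would look like; the search space range and the stand-in target are assumptions, and in practice target would come from build_target above.

- BEGIN CODE -
from hyperopt import fmin, tpe, hp

# stand-in target so the sketch runs on its own; in practice use
# target = build_target(desired_freqs) from the pseudo-code above
def target(weights):
    return sum((w - 0.5) ** 2 for w in weights)

n_devices = 9
# one search dimension per device weight; the (0.01, 1.0) range is a guess
space = [hp.uniform('w%d' % i, 0.01, 1.0) for i in range(n_devices)]

# TPE chooses the next weights from past evaluations, so it needs
# relatively few calls to the (possibly noisy) target
best = fmin(fn=target, space=space, algo=tpe.suggest, max_evals=200)
print(best)  # dict mapping 'w0'..'w8' to the best weights found
- END CODE -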
> 
> This black box approximation is simple to implement and makes the
> computer do all the work instead of us.
> I think this black box approximation is worth trying even if
> it's not the final one, because if this approximation works then we
> know that a more elaborate one that computes the gradient of the CRUSH
> algorithm will work for sure.
> 
> I can try this black box approximation this weekend, not on the real
> CRUSH algorithm but with the simple implementation I did in python. If
> it works it's just a matter of substituting one simulation for
> another and seeing what happens.
> 
> 2017-03-23 15:13 GMT+01:00 Loic Dachary <loic@dachary.org>:
>> Hi Pedro,
>>
>> On 03/23/2017 12:49 PM, Pedro López-Adeva wrote:
>>> Hi Loic,
>>>
>>> From what I see everything seems OK.
>>
>> Cool. I'll keep going in this direction then!
>>
>>> The interesting thing would be to
>>> test on some complex mapping. The reason is that "CrushPolicyFamily"
>>> is right now modeling just a single straw bucket not the full CRUSH
>>> algorithm.
>>
>> A number of use cases use a single straw bucket, maybe the majority of them. Even though it does not reflect the full range of what crush can offer, it could be useful. To be more specific, a crush map that states "place objects so that there is at most one replica per host" or "one replica per rack" is common. Such a crush map can be reduced to a single straw bucket that contains all the hosts, and by using the CrushPolicyFamily we can change the weights of each host to fix the probabilities. The hosts themselves contain disks with varying weights, but I think we can ignore that because crush will only recurse to place one object within a given host.
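
To make the single straw bucket reduction above concrete: a straw bucket hands each item a hash-derived "straw" scaled by its weight and picks the longest one. The sketch below follows the straw2 idea (draw = log(u) / weight, take the maximum); real CRUSH uses the Jenkins hash and fixed-point arithmetic rather than sha256 and floats, so this is illustrative only.

- BEGIN CODE -
import hashlib
import math

def straw2_select(object_name, replica, items):
    # items: list of (name, weight); returns the name with the largest
    # draw, which happens with probability proportional to its weight
    best_name, best_draw = None, -math.inf
    for name, weight in items:
        key = ('%s:%d:%s' % (object_name, replica, name)).encode()
        h = int.from_bytes(hashlib.sha256(key).digest()[:8], 'big')
        u = (h + 1) / 2.0 ** 64          # deterministic uniform in (0, 1]
        draw = math.log(u) / weight      # always <= 0; larger weight, larger draw
        if draw > best_draw:
            best_name, best_draw = name, draw
    return best_name

hosts = [('host0', 256), ('host1', 256), ('host2', 512)]
print(straw2_select('obj1', 0, hosts))
- END CODE -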
>>
>>> That's the work that remains to be done. The only way to
>>> avoid reimplementing the CRUSH algorithm and computing the
>>> gradient would be treating CRUSH as a black box and eliminating the
>>> necessity of computing the gradient, either by using a gradient-free
>>> optimization method or by estimating the gradient.
>>
>> By gradient-free optimization you mean simulated annealing or Monte Carlo?
>>
>> Cheers
>>
>>>
>>>
>>> 2017-03-20 11:49 GMT+01:00 Loic Dachary <loic@dachary.org>:
>>>> Hi,
>>>>
>>>> I modified the crush library to accept two weights (one for the first disk, the other for the remaining disks)[1]. This really is a hack for experimentation purposes only ;-) I was able to run a variation of your code[2] and got the following results, which are encouraging. Do you think what I did is sensible? Or is there a problem I don't see?
>>>>
>>>> Thanks!
>>>>
>>>> Simulation: R=2 devices capacity [10  8  6 10  8  6 10  8  6]
>>>> ------------------------------------------------------------------------
>>>> Before: All replicas on each hard drive
>>>> Expected vs actual use (20000 samples)
>>>>  disk 0: 1.39e-01 1.12e-01
>>>>  disk 1: 1.11e-01 1.10e-01
>>>>  disk 2: 8.33e-02 1.13e-01
>>>>  disk 3: 1.39e-01 1.11e-01
>>>>  disk 4: 1.11e-01 1.11e-01
>>>>  disk 5: 8.33e-02 1.11e-01
>>>>  disk 6: 1.39e-01 1.12e-01
>>>>  disk 7: 1.11e-01 1.12e-01
>>>>  disk 8: 8.33e-02 1.10e-01
>>>> it=    1 jac norm=1.59e-01 loss=5.27e-03
>>>> it=    2 jac norm=1.55e-01 loss=5.03e-03
>>>> ...
>>>> it=  212 jac norm=1.02e-03 loss=2.41e-07
>>>> it=  213 jac norm=1.00e-03 loss=2.31e-07
>>>> Converged to desired accuracy :)
>>>> After: All replicas on each hard drive
>>>> Expected vs actual use (20000 samples)
>>>>  disk 0: 1.39e-01 1.42e-01
>>>>  disk 1: 1.11e-01 1.09e-01
>>>>  disk 2: 8.33e-02 8.37e-02
>>>>  disk 3: 1.39e-01 1.40e-01
>>>>  disk 4: 1.11e-01 1.13e-01
>>>>  disk 5: 8.33e-02 8.08e-02
>>>>  disk 6: 1.39e-01 1.38e-01
>>>>  disk 7: 1.11e-01 1.09e-01
>>>>  disk 8: 8.33e-02 8.48e-02
>>>>
>>>>
>>>> Simulation: R=2 devices capacity [10 10 10 10  1]
>>>> ------------------------------------------------------------------------
>>>> Before: All replicas on each hard drive
>>>> Expected vs actual use (20000 samples)
>>>>  disk 0: 2.44e-01 2.36e-01
>>>>  disk 1: 2.44e-01 2.38e-01
>>>>  disk 2: 2.44e-01 2.34e-01
>>>>  disk 3: 2.44e-01 2.38e-01
>>>>  disk 4: 2.44e-02 5.37e-02
>>>> it=    1 jac norm=2.43e-01 loss=2.98e-03
>>>> it=    2 jac norm=2.28e-01 loss=2.47e-03
>>>> ...
>>>> it=   37 jac norm=1.28e-03 loss=3.48e-08
>>>> it=   38 jac norm=1.07e-03 loss=2.42e-08
>>>> Converged to desired accuracy :)
>>>> After: All replicas on each hard drive
>>>> Expected vs actual use (20000 samples)
>>>>  disk 0: 2.44e-01 2.46e-01
>>>>  disk 1: 2.44e-01 2.44e-01
>>>>  disk 2: 2.44e-01 2.41e-01
>>>>  disk 3: 2.44e-01 2.45e-01
>>>>  disk 4: 2.44e-02 2.33e-02
>>>>
>>>>
>>>> [1] crush hack http://libcrush.org/main/libcrush/commit/6efda297694392d0b07845eb98464a0dcd56fee8
>>>> [2] python-crush hack http://libcrush.org/dachary/python-crush/commit/d9202fcd4d17cd2a82b37ec20c1bd25f8f2c4b68
>>>>
>>>> On 03/19/2017 11:31 PM, Loic Dachary wrote:
>>>>> Hi Pedro,
>>>>>
>>>>> It looks like trying to experiment with crush won't work as expected because crush does not distinguish the probability of selecting the first device from the probability of selecting the second or third device. Am I mistaken?
>>>>>
>>>>> Cheers
>>>>>
>>>>> On 03/18/2017 10:21 AM, Loic Dachary wrote:
>>>>>> Hi Pedro,
>>>>>>
>>>>>> I'm going to experiment with what you did at
>>>>>>
>>>>>> https://github.com/plafl/notebooks/blob/master/replication.ipynb
>>>>>>
>>>>>> and the latest python-crush published today. A comparison function was added that will help measure the data movement. I'm hoping we can release an offline tool based on your solution. Please let me know if I should wait before diving into this, in case you have unpublished drafts or new ideas.
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> On 03/09/2017 09:47 AM, Pedro López-Adeva wrote:
>>>>>>> Great, thanks for the clarifications.
>>>>>>> I also think that the most natural way is to keep just a set of
>>>>>>> weights in the CRUSH map and update them inside the algorithm.
>>>>>>>
>>>>>>> I keep working on it.
>>>>>>>
>>>>>>>
>>>>>>> 2017-03-08 0:06 GMT+01:00 Sage Weil <sage@newdream.net>:
>>>>>>>> Hi Pedro,
>>>>>>>>
>>>>>>>> Thanks for taking a look at this!  It's a frustrating problem and we
>>>>>>>> haven't made much headway.
>>>>>>>>
>>>>>>>> On Thu, 2 Mar 2017, Pedro López-Adeva wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I will have a look. BTW, I have not progressed that much but I have
>>>>>>>>> been thinking about it. In order to adapt the previous algorithm in
>>>>>>>>> the python notebook I need to substitute the iteration over all
>>>>>>>>> possible device permutations with iteration over all the possible
>>>>>>>>> selections that crush would make. That is the main thing I need to
>>>>>>>>> work on.
>>>>>>>>>
>>>>>>>>> The other thing is of course that weights change for each replica.
>>>>>>>>> That is, they cannot really be fixed in the crush map. So the
>>>>>>>>> algorithm inside libcrush, not only the weights in the map, needs to
>>>>>>>>> be changed. The weights in the crush map should then reflect, maybe,
>>>>>>>>> the desired usage frequencies. Or maybe each replica should have its
>>>>>>>>> own crush map, but then the information about the previous selection
>>>>>>>>> should be passed to the next replica placement run so it avoids
>>>>>>>>> selecting the same one again.
>>>>>>>>
>>>>>>>> My suspicion is that the best solution here (whatever that means!)
>>>>>>>> leaves the CRUSH weights intact with the desired distribution, and
>>>>>>>> then generates a set of derivative weights--probably one set for each
>>>>>>>> round/replica/rank.
>>>>>>>>
>>>>>>>> One nice property of this is that once the support is added to encode
>>>>>>>> multiple sets of weights, the algorithm used to generate them is free to
>>>>>>>> change and evolve independently.  (In most cases any change in
>>>>>>>> CRUSH's mapping behavior is difficult to roll out because all
>>>>>>>> parties participating in the cluster have to support any new behavior
>>>>>>>> before it is enabled or used.)
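
My reading of "one set of weights per round/replica/rank" is sketched below, reusing the straw2_select sketch from earlier in this message. The retry-on-collision loop is a stand-in for the real retry_descent and the rank 1 numbers are made up; none of this is code from CRUSH.

- BEGIN CODE -
def place(object_name, weight_sets):
    # weight_sets: one {name: derivative weight} dict per replica rank,
    # generated from the intact CRUSH weights by the optimization
    chosen = []
    for rank, weights in enumerate(weight_sets):
        for attempt in range(100):  # retry on collision, like retry_descent
            pick = straw2_select('%s:%d' % (object_name, attempt), rank,
                                 list(weights.items()))
            if pick not in chosen:
                chosen.append(pick)
                break
    return chosen

weight_sets = [{'host0': 256, 'host1': 256, 'host2': 512},  # rank 0: CRUSH weights
               {'host0': 290, 'host1': 300, 'host2': 434}]  # rank 1: derivative
print(place('obj1', weight_sets))
- END CODE -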
>>>>>>>>
>>>>>>>>> I have a question also. Is there any significant difference between
>>>>>>>>> the device selection algorithm description in the paper and its final
>>>>>>>>> implementation?
>>>>>>>>
>>>>>>>> The main difference is that the "retry_bucket" behavior was found to be
>>>>>>>> a bad idea; any collision or failed()/overload() case triggers the
>>>>>>>> retry_descent.
>>>>>>>>
>>>>>>>> There are other changes, of course, but I don't think they'll impact any
>>>>>>>> solution we come up with here (or at least any solution can be suitably
>>>>>>>> adapted)!
>>>>>>>>
>>>>>>>> sage
>>>>>>
>>>>>
>>>>
>>>> --
>>>> Loïc Dachary, Artisan Logiciel Libre
>>
>> --
>> Loïc Dachary, Artisan Logiciel Libre
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
