> On 24 May 2019, at 08:51, Paolo Valente wrote:
>
>> On 24 May 2019, at 01:43, Srivatsa S. Bhat wrote:
>>
>> On 5/23/19 10:22 AM, Paolo Valente wrote:
>>>
>>>> On 23 May 2019, at 11:19, Paolo Valente wrote:
>>>>
>>>>> On 23 May 2019, at 04:30, Srivatsa S. Bhat wrote:
>>>>>
>> [...]
>>>>> Also, I'm very happy to run additional tests or experiments to help
>>>>> track down this issue. So, please don't hesitate to let me know if
>>>>> you'd like me to try anything else or get you additional traces etc. :)
>>>>>
>>>>
>>>> Here you go! :) I've attached a new small improvement that may
>>>> reduce fluctuations (patch to apply on top of the others, of course).
>>>> Unfortunately, I don't expect this change to boost the throughput,
>>>> though.
>>>>
>>>> On the other hand, I've thought of a solution that might be rather
>>>> effective: making BFQ aware (heuristically) of trivial
>>>> synchronizations between processes in different groups. This will
>>>> require a little more work and time.
>>>>
>>>
>>> Hi Srivatsa,
>>> I'm back :)
>>>
>>> First, there was a mistake in the last patch I sent you, namely in
>>> 0001-block-bfq-re-sample-req-service-times-when-possible.patch.
>>> Please don't apply that patch at all.
>>>
>>> I've attached a new series of patches instead. The first patch in this
>>> series is a fixed version of the faulty patch above (if I'm creating
>>> too much confusion, I'll resend all the patches to apply on top of
>>> mainline).
>>>
>>
>> No problem, I got it :)
>>
>>> This series also implements the more effective idea I described to you
>>> a few hours ago. On my system, the throughput loss is now only around
>>> 10%, even with low_latency on.
>>>
>>
>> When I try to run multiple dd tasks simultaneously, I get the kernel
>> panic shown below (mainline, without these patches, is fine).
>>
>
> Could you please send me the output of list *(bfq_serv_to_charge+0x21)?
> Maybe I've found the cause.

Please also apply the two patches attached and retry.

Thanks,
Paolo
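
[Note on the request above: list *(bfq_serv_to_charge+0x21) is gdb's
source-listing command applied to the symbol+offset reported in the panic
backtrace. A minimal sketch of how that output could be obtained, assuming
the kernel was built with debug info (CONFIG_DEBUG_INFO) and the vmlinux
image is still available in the build tree; the paths are assumptions.]

    # Map the faulting symbol+offset from the oops to a source line.
    $ gdb ./vmlinux
    (gdb) list *(bfq_serv_to_charge+0x21)

    # The kernel's faddr2line helper should give the same mapping:
    $ ./scripts/faddr2line ./vmlinux bfq_serv_to_charge+0x21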
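
[For context, the "multiple dd tasks" workload Srivatsa mentions is a set
of concurrent sequential readers. A purely illustrative sketch follows;
the device name, block size, count, direct-I/O flag, and task count are
assumptions, and the per-cgroup grouping discussed earlier in the thread
is not reproduced here.]

    # Illustrative only: several dd readers running in parallel against
    # the same disk.
    for i in 1 2 3 4; do
        dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct &
    done
    wait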