> On 25 Apr 2018, at 14:13, Joseph Qi wrote:
>
> Hi Paolo,
>

Hi Joseph

> ...
> Could you run blktrace as well when testing your case? There are several
> throtl traces to help analyze whether it is caused by frequent
> upgrades/downgrades.

Certainly. You can find a trace attached. Unfortunately, I'm not familiar
with the internals of blk-throttle and of the low limit, so, if you want me
to analyze the trace, give me some hints on what to look for. Otherwise,
I'll be happy to learn from your analysis.

> If all cgroups are just running under low, I'm afraid the case you
> tested has something to do with how the SSD handles mixed workload IOs.
>

That's a rather important point. To investigate it, I repeated the same
test with both bfq-mq (the development version of bfq, which also contains
improvements not yet in mainline) and bfq (the version you find in
mainline). I set strict_guarantees to 1 for both bfq-mq and bfq (namely,
the value for which bfq-mq/bfq reaches its lowest total throughput), and I
gave the random-I/O group twice the weight of the other groups [1] (just
tentative values).

Results are rather different now. With bfq-mq, the random-I/O group enjoys
about the same throughput as with blk-throttle, precisely 11.16 MB/s, but
the total throughput is now 2.2 times as high: 284.127 MB/s. The
performance of bfq is, as expected, poorer, since some important
improvements have not yet been ported from bfq-mq to bfq: 9.16 MB/s for
the random-I/O group, and 190 MB/s of total throughput.

From this, I guess we can deduce that the cause of the low throughput with
blk-throttle is blk-throttle itself, and not the capabilities of the drive.
As you already pointed out, the attached trace can tell us what went wrong.

Thanks,
Paolo

[1] sudo ./thr-lat-with-interference.sh -b p -t randread -n 5 -w 100 -W 50
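
In case it helps to reproduce the setup, here is a rough sketch of the
commands involved; the device name, cgroup paths and weight values below
are only placeholder examples, not necessarily the exact ones behind the
numbers above:

    # capture throtl trace messages while the test runs
    # (blk-throttle log lines are prefixed with "throtl")
    sudo blktrace -d /dev/sdX -o - | blkparse -i - | grep throtl > throtl-trace.txt

    # switch the scheduler (the name may be bfq or bfq-mq, depending on
    # which version is built) and enable strict_guarantees
    echo bfq > /sys/block/sdX/queue/scheduler
    echo 1 > /sys/block/sdX/queue/iosched/strict_guarantees

    # give the random-I/O group twice the weight of an interferer group
    # (cgroup v1 blkio controller; group names here are hypothetical)
    echo 200 > /sys/fs/cgroup/blkio/randio/blkio.bfq.weight
    echo 100 > /sys/fs/cgroup/blkio/interferer0/blkio.bfq.weight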