Message-ID: <1d0893c3-da0a-e473-e37d-15df8e6d468e@opensource.wdc.com>
Date: Tue, 28 Sep 2021 07:53:59 +0900
Subject: Re: [PATCH v2 4/4] block/mq-deadline: Prioritize high-priority requests
To: Bart Van Assche, Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Jaegeuk Kim,
 Damien Le Moal, Niklas Cassel, Hannes Reinecke
References: <20210927220328.1410161-1-bvanassche@acm.org>
 <20210927220328.1410161-5-bvanassche@acm.org>
From: Damien Le Moal
Organization: Western Digital
In-Reply-To: <20210927220328.1410161-5-bvanassche@acm.org>
X-Mailing-List: linux-block@vger.kernel.org

On 2021/09/28 7:03, Bart Van Assche wrote:
> In addition to reverting commit 7b05bf771084 ("Revert "block/mq-deadline:
> Prioritize high-priority requests""), this patch uses 'jiffies' instead
> of ktime_get() in the code for aging lower priority requests.
>
> This patch has been tested as follows:
>
> Measured QD=1/jobs=1 IOPS for nullb with the mq-deadline scheduler.
> Result without and with this patch: 555 K IOPS.
>
> Measured QD=1/jobs=8 IOPS for nullb with the mq-deadline scheduler.
> Result without and with this patch: about 380 K IOPS.
>
> Ran the following script:
>
> set -e
> scriptdir=$(dirname "$0")
> if [ -e /sys/module/scsi_debug ]; then modprobe -r scsi_debug; fi
> modprobe scsi_debug ndelay=1000000 max_queue=16
> sd=''
> while [ -z "$sd" ]; do
>     sd=$(basename /sys/bus/pseudo/drivers/scsi_debug/adapter*/host*/target*/*/block/*)
> done
> echo $((100*1000)) > "/sys/block/$sd/queue/iosched/prio_aging_expire"
> if [ -e /sys/fs/cgroup/io.prio.class ]; then
>     cd /sys/fs/cgroup
>     echo restrict-to-be >io.prio.class
>     echo +io > cgroup.subtree_control
> else
>     cd /sys/fs/cgroup/blkio/
>     echo restrict-to-be >blkio.prio.class
> fi
> echo $$ >cgroup.procs
> mkdir -p hipri
> cd hipri
> if [ -e io.prio.class ]; then
>     echo none-to-rt >io.prio.class
> else
>     echo none-to-rt >blkio.prio.class
> fi
> { "${scriptdir}/max-iops" -a1 -d32 -j1 -e mq-deadline "/dev/$sd" >& ~/low-pri.txt & }
> echo $$ >cgroup.procs
> "${scriptdir}/max-iops" -a1 -d32 -j1 -e mq-deadline "/dev/$sd" >& ~/hi-pri.txt
>
> Result:
> * 11000 IOPS for the high-priority job
> * 40 IOPS for the low-priority job
>
> If the prio aging expiry time is changed from 100s into 0, the IOPS results
> change into 6712 and 6796 IOPS.
>
> The max-iops script is a script that runs fio with the following arguments:
> --bs=4K --gtod_reduce=1 --ioengine=libaio --ioscheduler=${arg_e} --runtime=60
> --norandommap --rw=read --thread --buffered=0 --numjobs=${arg_j}
> --iodepth=${arg_d} --iodepth_batch_submit=${arg_a}
> --iodepth_batch_complete=$((arg_d / 2)) --name=${positional_argument_1}
> --filename=${positional_argument_1}
>
> Cc: Damien Le Moal
> Cc: Niklas Cassel
> Cc: Hannes Reinecke
> Signed-off-by: Bart Van Assche
> ---
>  block/mq-deadline.c | 77 ++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 73 insertions(+), 4 deletions(-)
>
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index b262f40f32c0..bb723478baf1 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -31,6 +31,11 @@
>   */
>  static const int read_expire = HZ / 2;  /* max time before a read is submitted. */
>  static const int write_expire = 5 * HZ; /* ditto for writes, these limits are SOFT! */
> +/*
> + * Time after which to dispatch lower priority requests even if higher
> + * priority requests are pending.
> + */
> +static const int prio_aging_expire = 10 * HZ;
>  static const int writes_starved = 2;    /* max times reads can starve a write */
>  static const int fifo_batch = 16;       /* # of sequential requests treated as one
>                                             by the above parameters. For throughput. */
> @@ -96,6 +101,7 @@ struct deadline_data {
>  	int writes_starved;
>  	int front_merges;
>  	u32 async_depth;
> +	int prio_aging_expire;
>
>  	spinlock_t lock;
>  	spinlock_t zone_lock;
> @@ -338,12 +344,27 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
>  	return rq;
>  }
>
> +/*
> + * Returns true if and only if @rq started after @latest_start where
> + * @latest_start is in jiffies.
> + */
> +static bool started_after(struct deadline_data *dd, struct request *rq,
> +			  unsigned long latest_start)
> +{
> +	unsigned long start_time = (unsigned long)rq->fifo_time;
> +
> +	start_time -= dd->fifo_expire[rq_data_dir(rq)];
> +
> +	return time_after(start_time, latest_start);
> +}
> +
>  /*
>   * deadline_dispatch_requests selects the best request according to
> - * read/write expire, fifo_batch, etc
> + * read/write expire, fifo_batch, etc and with a start time <= @latest.

s/@latest/@latest_start ?

>   */
>  static struct request *__dd_dispatch_request(struct deadline_data *dd,
> -					     struct dd_per_prio *per_prio)
> +					     struct dd_per_prio *per_prio,
> +					     unsigned long latest_start)
>  {
>  	struct request *rq, *next_rq;
>  	enum dd_data_dir data_dir;
> @@ -355,6 +376,8 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>  	if (!list_empty(&per_prio->dispatch)) {
>  		rq = list_first_entry(&per_prio->dispatch, struct request,
>  				      queuelist);
> +		if (started_after(dd, rq, latest_start))
> +			return NULL;
>  		list_del_init(&rq->queuelist);
>  		goto done;
>  	}
> @@ -432,6 +455,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>  	dd->batching = 0;
>
>  dispatch_request:
> +	if (started_after(dd, rq, latest_start))
> +		return NULL;
> +
>  	/*
>  	 * rq is the selected appropriate request.
>  	 */
> @@ -449,6 +475,34 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>  	return rq;
>  }
>
> +/*
> + * Check whether there are any requests with priority other than DD_RT_PRIO
> + * that were inserted more than prio_aging_expire jiffies ago.
> + */
> +static struct request *dd_dispatch_prio_aged_requests(struct deadline_data *dd,
> +						      unsigned long now)
> +{
> +	struct request *rq;
> +	enum dd_prio prio;
> +	int prio_cnt;
> +
> +	lockdep_assert_held(&dd->lock);
> +
> +	prio_cnt = !!dd_queued(dd, DD_RT_PRIO) + !!dd_queued(dd, DD_BE_PRIO) +
> +		   !!dd_queued(dd, DD_IDLE_PRIO);
> +	if (prio_cnt < 2)
> +		return NULL;
> +
> +	for (prio = DD_BE_PRIO; prio <= DD_PRIO_MAX; prio++) {
> +		rq = __dd_dispatch_request(dd, &dd->per_prio[prio],
> +					   now - dd->prio_aging_expire);
> +		if (rq)
> +			return rq;
> +	}
> +
> +	return NULL;
> +}
> +
>  /*
>   * Called from blk_mq_run_hw_queue() -> __blk_mq_sched_dispatch_requests().
>   *
> @@ -460,15 +514,26 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>  static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
>  {
>  	struct deadline_data *dd = hctx->queue->elevator->elevator_data;
> +	const unsigned long now = jiffies;
>  	struct request *rq;
>  	enum dd_prio prio;
>
>  	spin_lock(&dd->lock);
> +	rq = dd_dispatch_prio_aged_requests(dd, now);
> +	if (rq)
> +		goto unlock;
> +
> +	/*
> +	 * Next, dispatch requests in priority order. Ignore lower priority
> +	 * requests if any higher priority requests are pending.
> +	 */
>  	for (prio = 0; prio <= DD_PRIO_MAX; prio++) {
> -		rq = __dd_dispatch_request(dd, &dd->per_prio[prio]);
> -		if (rq)
> +		rq = __dd_dispatch_request(dd, &dd->per_prio[prio], now);
> +		if (rq || dd_queued(dd, prio))
>  			break;
>  	}
> +
> +unlock:
>  	spin_unlock(&dd->lock);
>
>  	return rq;
> @@ -573,6 +638,7 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
>  	dd->front_merges = 1;
>  	dd->last_dir = DD_WRITE;
>  	dd->fifo_batch = fifo_batch;
> +	dd->prio_aging_expire = prio_aging_expire;
>  	spin_lock_init(&dd->lock);
>  	spin_lock_init(&dd->zone_lock);
>
> @@ -796,6 +862,7 @@ static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
>  #define SHOW_JIFFIES(__FUNC, __VAR) SHOW_INT(__FUNC, jiffies_to_msecs(__VAR))
>  SHOW_JIFFIES(deadline_read_expire_show, dd->fifo_expire[DD_READ]);
>  SHOW_JIFFIES(deadline_write_expire_show, dd->fifo_expire[DD_WRITE]);
> +SHOW_JIFFIES(deadline_prio_aging_expire_show, dd->prio_aging_expire);
>  SHOW_INT(deadline_writes_starved_show, dd->writes_starved);
>  SHOW_INT(deadline_front_merges_show, dd->front_merges);
>  SHOW_INT(deadline_async_depth_show, dd->front_merges);
> @@ -825,6 +892,7 @@ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
>  	STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, msecs_to_jiffies)
>  STORE_JIFFIES(deadline_read_expire_store, &dd->fifo_expire[DD_READ], 0, INT_MAX);
>  STORE_JIFFIES(deadline_write_expire_store, &dd->fifo_expire[DD_WRITE], 0, INT_MAX);
> +STORE_JIFFIES(deadline_prio_aging_expire_store, &dd->prio_aging_expire, 0, INT_MAX);
>  STORE_INT(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX);
>  STORE_INT(deadline_front_merges_store, &dd->front_merges, 0, 1);
>  STORE_INT(deadline_async_depth_store, &dd->front_merges, 1, INT_MAX);
> @@ -843,6 +911,7 @@ static struct elv_fs_entry deadline_attrs[] = {
>  	DD_ATTR(front_merges),
>  	DD_ATTR(async_depth),
>  	DD_ATTR(fifo_batch),
> +	DD_ATTR(prio_aging_expire),
>  	__ATTR_NULL
>  };
>

Apart from the nit above,
looks good to me.

Reviewed-by: Damien Le Moal

-- 
Damien Le Moal
Western Digital Research