From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: Keith Busch <kbusch@kernel.org>, "hch@lst.de" <hch@lst.de>
Cc: Mark Ruijter <MRuijter@onestopsystems.com>,
	Hannes Reinecke <hare@suse.com>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [PATCH] nvmet: introduce use_vfs ns-attr
Date: Mon, 28 Oct 2019 07:26:09 +0000
Message-ID: <BYAPR04MB5749B0AEC4864326D01EEC5D86660@BYAPR04MB5749.namprd04.prod.outlook.com>
In-Reply-To: <20191028005517.GA6693@redsun51.ssa.fujisawa.hgst.com>

I've collected performance numbers with the plugging patch, without it,
and with the use_vfs patch:-

1. With Plugging patch:-
   write: IOPS=43.6k, BW=170MiB/s (179MB/s)(5112MiB/30002msec)
   write: IOPS=42.8k, BW=167MiB/s (175MB/s)(5014MiB/30002msec)

2. Without the plugging patch (baseline):-
   write: IOPS=41.5k, BW=162MiB/s (170MB/s)(4861MiB/30003msec)
   write: IOPS=41.1k, BW=160MiB/s (168MB/s)(4813MiB/30002msec)
   cpu          : usr=0.49%, sys=3.66%, ctx=1244502, majf=0, minf=559
   cpu          : usr=0.53%, sys=3.63%, ctx=1232208, majf=0, minf=581
   slat (usec): min=8, max=437, avg=15.63, stdev= 9.92
   slat (usec): min=8, max=389, avg=15.77, stdev=10.00
   clat (usec): min=56, max=1472, avg=754.31, stdev=172.63
   clat (usec): min=55, max=2405, avg=761.82, stdev=153.19

3. With use_vfs patch where use_vfs=1:-
   write: IOPS=114k, BW=445MiB/s (466MB/s)(13.0GiB/30007msec)
   write: IOPS=114k, BW=445MiB/s (466MB/s)(13.0GiB/30024msec)
   cpu          : usr=1.31%, sys=8.67%, ctx=3415138, majf=0, minf=527
   cpu          : usr=1.28%, sys=8.70%, ctx=3418737, majf=0, minf=570
   slat (usec): min=8, max=6450, avg=13.68, stdev= 8.35
   slat (usec): min=8, max=22847, avg=13.65, stdev=12.77
   clat (usec): min=62, max=6633, avg=265.98, stdev=124.55
   clat (usec): min=69, max=1900, avg=265.70, stdev=125.61

From the above data there is a big difference in the fio clat numbers
between #2 and #3, while CPU and slat are approximately the same
(#1 is close to #2, so I didn't report its full stats).
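
(As a sanity check, and assuming the jobs ran at a fixed queue depth, which
isn't shown above, the clat and IOPS numbers are consistent with Little's
law: 41.5k IOPS * 754 usec ~= 31 outstanding I/Os in #2, and 114k IOPS *
266 usec ~= 30 in #3, i.e. the same effective queue depth with roughly
2.8x lower per-I/O latency.)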

Regards,
Chaitanya

On 10/27/19 5:55 PM, Keith Busch wrote:
> On Sun, Oct 27, 2019 at 04:03:30PM +0100, hch@lst.de wrote:
>> ---
>>   drivers/nvme/target/io-cmd-bdev.c | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
>> index 04a9cd2a2604..ed1a8d0ab30e 100644
>> --- a/drivers/nvme/target/io-cmd-bdev.c
>> +++ b/drivers/nvme/target/io-cmd-bdev.c
>> @@ -147,6 +147,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
>>   	int sg_cnt = req->sg_cnt;
>>   	struct bio *bio;
>>   	struct scatterlist *sg;
>> +	struct blk_plug plug;
>>   	sector_t sector;
>>   	int op, op_flags = 0, i;
>>   
>> @@ -185,6 +186,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
>>   	bio->bi_end_io = nvmet_bio_done;
>>   	bio_set_op_attrs(bio, op, op_flags);
>>   
>> +	blk_start_plug(&plug);
>>   	for_each_sg(req->sg, sg, req->sg_cnt, i) {
>>   		while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
>>   				!= sg->length) {
>> @@ -202,6 +204,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
>>   		sector += sg->length >> 9;
>>   		sg_cnt--;
>>   	}
>> +	blk_finish_plug(&plug);
>>   
>>   	submit_bio(bio);
>>   }
> 
> The blk_finish_plug() should be after the last submit_bio().
> 
> I looked at plugging too since that is a difference between the
> submit_bio and write_iter paths, but I thought we needed to plug the
> entire IO queue drain. Otherwise this random 4k write workload should
> plug a single request, which doesn't sound like it would change anything.
> 
> Using the block plug for the entire IO queue drain requires quite a bit
> larger change, though. Also, I saw a similar performance difference with
> a ramdisk back-end, which doesn't use plugs.
> 
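
For reference, the ordering Keith describes above would keep the plug open
across the final submit_bio() at the tail of nvmet_bdev_execute_rw(),
roughly as follows (abridged context, untested sketch rather than a real
diff):

	blk_start_plug(&plug);
	for_each_sg(req->sg, sg, req->sg_cnt, i) {
		/* ... bio_add_page()/bio chaining loop as in the patch ... */
	}

	submit_bio(bio);
	blk_finish_plug(&plug);	/* only unplug after the last submit_bio() */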

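Keith's broader point, plugging the entire I/O queue drain rather than a
single request, would follow the usual blk_plug batching pattern.  A minimal
sketch is below; the struct, list, and submit helper names are hypothetical
stand-ins, since the real nvmet transports don't necessarily have a loop
shaped like this:

	#include <linux/blkdev.h>
	#include <linux/list.h>

	struct example_cmd {			/* hypothetical queued command */
		struct list_head entry;
	};

	/* hypothetical per-command submission, ending in submit_bio() */
	static void example_submit_cmd(struct example_cmd *cmd);

	static void example_drain_queue(struct list_head *cmds)
	{
		struct example_cmd *cmd, *next;
		struct blk_plug plug;

		blk_start_plug(&plug);
		list_for_each_entry_safe(cmd, next, cmds, entry)
			example_submit_cmd(cmd);
		/* all bios queued above are flushed as one batch here */
		blk_finish_plug(&plug);
	}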
