Linux-NVME Archive on lore.kernel.org
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
To: "hch@lst.de" <hch@lst.de>, "sagi@grimberg.me" <sagi@grimberg.me>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"kbusch@kernel.org" <kbusch@kernel.org>,
	Logan Gunthorpe <logang@deltatee.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [PATCH V3 6/6] nvmet: use inline bio for passthru fast path
Date: Thu, 29 Oct 2020 19:02:27 +0000
Message-ID: <BYAPR04MB4965980B745F28E069B8460186140@BYAPR04MB4965.namprd04.prod.outlook.com> (raw)
In-Reply-To: <9ba9c9ba-7caf-e24c-1471-62c199cfcd4a@deltatee.com>

On 10/22/20 08:58, Logan Gunthorpe wrote:
>
>
> On 2020-10-21 7:02 p.m., Chaitanya Kulkarni wrote:
>> nvmet_passthru_execute_cmd() is a high-frequency function, and it
>> currently uses bio_alloc(), which allocates memory from the fs pool
>> for each I/O.
>>
>> For NVMeOF, nvmet_req already has an inline_bvec allocated as part of
>> request allocation. Since we know the size of the request before
>> allocating a bio, the inline bvec can be used with a preallocated bio
>> instead of calling bio_alloc().
>>
>> Introduce a bio member in the nvmet_req passthru anonymous union. In
>> the fast path, check whether the inline bvec and bio from nvmet_req
>> suffice, setting them up with bio_init(), before falling back to
>> bio_alloc().
>>
>> This avoids new memory allocations under high memory pressure and the
>> extra work of allocation (bio_alloc()) versus initialization
>> (bio_init()) when the transfer length is <= NVMET_MAX_INLINE_DATA_LEN,
>> which the user can configure at compile time.
>>
>> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
>> ---
>>  drivers/nvme/target/nvmet.h    |  1 +
>>  drivers/nvme/target/passthru.c | 20 ++++++++++++++++++--
>>  2 files changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
>> index 559a15ccc322..408a13084fb4 100644
>> --- a/drivers/nvme/target/nvmet.h
>> +++ b/drivers/nvme/target/nvmet.h
>> @@ -330,6 +330,7 @@ struct nvmet_req {
>>  			struct work_struct      work;
>>  		} f;
>>  		struct {
>> +			struct bio		inline_bio;
>>  			struct request		*rq;
>>  			struct work_struct      work;
>>  			bool			use_workqueue;
>> diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
>> index 496ffedb77dc..32498b4302cc 100644
>> --- a/drivers/nvme/target/passthru.c
>> +++ b/drivers/nvme/target/passthru.c
>> @@ -178,6 +178,14 @@ static void nvmet_passthru_req_done(struct request *rq,
>>  	blk_mq_free_request(rq);
>>  }
>>  
>> +static void nvmet_passthru_bio_done(struct bio *bio)
>> +{
>> +	struct nvmet_req *req = bio->bi_private;
>> +
>> +	if (bio != &req->p.inline_bio)
>> +		bio_put(bio);
>> +}
>> +
>>  static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>  {
>>  	int sg_cnt = req->sg_cnt;
>> @@ -186,13 +194,21 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>  	int i;
>>  
>> -	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>> -	bio->bi_end_io = bio_put;
>> +	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
>> +		bio = &req->p.inline_bio;
>> +		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
>> +	} else {
>> +		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>> +	}
>> +
>> +	bio->bi_end_io = nvmet_passthru_bio_done;
> I still think it's cleaner to handle the inline/alloc'd cases by
> simply setting bi_end_io to bio_put() only in the bio_alloc() case.
> This should also be more efficient, as it's one less indirect call
> and condition for the inline case.
>
> Besides that, the entire series looks good to me.
>
> Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
>
> Logan
>
Sagi/Christoph, any comments on this one?

This series has been sitting out for a while now.



Thread overview: 14+ messages
2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 2/6] nvme-core: split nvme_alloc_request() Chaitanya Kulkarni
2020-11-03 18:24   ` Christoph Hellwig
2020-11-04 21:03     ` Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 3/6] nvmet: remove op_flags for passthru commands Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 5/6] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 6/6] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
2020-10-22 15:57   ` Logan Gunthorpe
2020-10-29 19:02     ` Chaitanya Kulkarni [this message]
2020-11-03 18:25       ` hch
2020-11-03 18:32 ` [PATCH V3 0/6] nvmet: passthru fixes and improvements Christoph Hellwig
2020-11-03 23:50   ` Chaitanya Kulkarni

