From: Logan Gunthorpe <logang@deltatee.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>,
Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org,
Douglas Gilbert <dgilbert@interlog.com>
Subject: Re: [PATCH] nvmet-passthru: Cleanup nvmet_passthru_map_sg()
Date: Thu, 15 Oct 2020 12:40:52 -0600 [thread overview]
Message-ID: <b798e7f6-0afc-551c-f6d9-f7900b400137@deltatee.com> (raw)
In-Reply-To: <20201015180148.GA23377@lst.de>
On 2020-10-15 12:01 p.m., Christoph Hellwig wrote:
> On Thu, Oct 15, 2020 at 10:01:30AM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 2020-10-15 1:56 a.m., Christoph Hellwig wrote:
>>> On Fri, Oct 09, 2020 at 05:18:16PM -0600, Logan Gunthorpe wrote:
>>>> static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>>> {
>>>> - int sg_cnt = req->sg_cnt;
>>>> struct scatterlist *sg;
>>>> int op_flags = 0;
>>>> struct bio *bio;
>>>> int i, ret;
>>>>
>>>> + if (req->sg_cnt > BIO_MAX_PAGES)
>>>> + return -EINVAL;
>>>
>>> Don't you need to handle larger requests as well? Or at least
>>> limit MDTS?
>>
>> No and Yes: there is already code in nvmet_passthru_override_id_ctrl()
>> to limit MDTS based on max_segments and max_hw_sectors.
>
> But those are entirely unrelated to the bio size. BIO_MAX_PAGES is
> 256, so with 4k pages and assuming none can be merged that is 1MB,
> while max_segments/max_hw_sectors could be something much larger.

Isn't it constrained by max_segments, which for PCI is set to
NVME_MAX_SEGS (127)? That's already less than BIO_MAX_PAGES.
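
For reference, nvmet_passthru_override_id_ctrl() derives MDTS from the
passthru controller's limits roughly like this (paraphrased from memory,
so the details may differ slightly from the tree):

	/* cap the transfer size by what max_segments worth of pages holds */
	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
				      pctrl->max_hw_sectors);

	page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;

	/* MDTS is expressed as a power of two of the minimum page size */
	id->mdts = ilog2(max_hw_sectors) + 9 - page_shift;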

Would the NVMe driver even work if max_segments were greater than
BIO_MAX_PAGES? Correct me if I'm wrong, but it looks like
blk_rq_map_sg() will only map a single bio within a request, so there
has to be one bio per request by the time it hits nvme_map_data().
So I'm not really sure how we could even construct a valid passthrough
request in nvmet_passthru_map_sg() that is larger than BIO_MAX_PAGES.
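
To illustrate, with this cleanup applied nvmet_passthru_map_sg() builds
exactly one bio for the whole request, something like:

	if (req->sg_cnt > BIO_MAX_PAGES)
		return -EINVAL;

	bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
	bio->bi_end_io = bio_put;
	bio->bi_opf = req_op(rq) | op_flags;

	/* every nvmet segment has to fit into this single bio */
	for_each_sg(req->sg, sg, req->sg_cnt, i) {
		if (bio_add_pc_page(bio, sg_page(sg), sg->length,
				    sg->offset) < sg->length) {
			bio_put(bio);
			return -EINVAL;
		}
	}

	ret = blk_rq_append_bio(rq, &bio);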

If you want me to send a patch to future-proof the MDTS limit with
BIO_MAX_PAGES, I can do that, but it doesn't look like it would have any
effect right now unless something big changes.
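
Something like this untested sketch in nvmet_passthru_override_id_ctrl(),
for example:

	/*
	 * nvmet_passthru_map_sg() builds a single bio, so don't advertise
	 * an MDTS larger than what BIO_MAX_PAGES worth of pages can hold.
	 */
	max_hw_sectors = min_not_zero(BIO_MAX_PAGES << (PAGE_SHIFT - 9),
				      max_hw_sectors);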
Logan