* [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio
@ 2021-04-29 18:43 Chaitanya Kulkarni
2021-04-29 18:43 ` [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns Chaitanya Kulkarni
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2021-04-29 18:43 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi, Chaitanya Kulkarni
Hi,
While testing, I came across a scenario where checking the transfer
size alone is not enough for inline bios. This issue is not easily
reproducible with the test tools that I have.
This small patch series adds an additional check so that we make sure
both the transfer size and req->sg_cnt fit in the inline bio for the
bdev and passthru backends.
-ck
Chaitanya Kulkarni (2):
nvmet: check sg_cnt for inline bio for bdev-ns
nvmet: check sg_cnt for inline bio for passthru
drivers/nvme/target/io-cmd-bdev.c | 3 ++-
drivers/nvme/target/passthru.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
--
2.22.1
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns
2021-04-29 18:43 [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
@ 2021-04-29 18:43 ` Chaitanya Kulkarni
2021-05-04 20:00 ` Sagi Grimberg
2021-05-04 22:03 ` Chaitanya Kulkarni
2021-04-29 18:43 ` [PATCH 2/2] nvmet: check sg_cnt for inline bio for passthru Chaitanya Kulkarni
` (2 subsequent siblings)
3 siblings, 2 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2021-04-29 18:43 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi, Chaitanya Kulkarni
When handling rw commands, for inline bio allocation we only consider
the transfer size. This works well when req->sg_cnt fits into
req->inline_bvec, but it results in a warning when
req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
Consider an I/O of size 32768 where the first buffer is not aligned to
the page boundary; the I/O is then split in the following manner :-
[ 2206.256140] nvmet: sg->length 3440 sg->offset 656
[ 2206.256144] nvmet: sg->length 4096 sg->offset 0
[ 2206.256148] nvmet: sg->length 4096 sg->offset 0
[ 2206.256152] nvmet: sg->length 4096 sg->offset 0
[ 2206.256155] nvmet: sg->length 4096 sg->offset 0
[ 2206.256159] nvmet: sg->length 4096 sg->offset 0
[ 2206.256163] nvmet: sg->length 4096 sg->offset 0
[ 2206.256166] nvmet: sg->length 4096 sg->offset 0
[ 2206.256170] nvmet: sg->length 656 sg->offset 0
Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN, i.e. 32768, but
req->sg_cnt (9) exceeds NVMET_MAX_INLINE_BIOVEC (8).
This results in a warning from the following call path :-
nvmet_bdev_execute_rw()
bio_add_page()
__bio_add_page()
WARN_ON_ONCE(bio_full(bio, len));
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/io-cmd-bdev.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 9a8b3726a37c..5355956fa4cd 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -258,7 +258,8 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
+ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC) {
bio = &req->b.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
--
2.22.1
* [PATCH 2/2] nvmet: check sg_cnt for inline bio for passthru
2021-04-29 18:43 [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
2021-04-29 18:43 ` [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns Chaitanya Kulkarni
@ 2021-04-29 18:43 ` Chaitanya Kulkarni
2021-05-04 20:00 ` Sagi Grimberg
2021-05-04 8:56 ` [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Christoph Hellwig
2021-05-04 20:01 ` Sagi Grimberg
3 siblings, 1 reply; 11+ messages in thread
From: Chaitanya Kulkarni @ 2021-04-29 18:43 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi, Chaitanya Kulkarni
When handling passthru commands, for inline bio allocation we only
consider the transfer size. This works well when req->sg_cnt fits into
req->inline_bvec, but it results in an early return from
bio_add_hw_page() when req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
Consider an I/O of size 32768 where the first buffer is not aligned to
the page boundary; the I/O is then split in the following manner :-
[ 2206.256140] nvmet: sg->length 3440 sg->offset 656
[ 2206.256144] nvmet: sg->length 4096 sg->offset 0
[ 2206.256148] nvmet: sg->length 4096 sg->offset 0
[ 2206.256152] nvmet: sg->length 4096 sg->offset 0
[ 2206.256155] nvmet: sg->length 4096 sg->offset 0
[ 2206.256159] nvmet: sg->length 4096 sg->offset 0
[ 2206.256163] nvmet: sg->length 4096 sg->offset 0
[ 2206.256166] nvmet: sg->length 4096 sg->offset 0
[ 2206.256170] nvmet: sg->length 656 sg->offset 0
Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN, i.e. 32768, but
req->sg_cnt (9) exceeds NVMET_MAX_INLINE_BIOVEC (8).
This results in an early return in the following code path :-
nvmet_passthru_map_sg()
bio_add_pc_page()
bio_add_hw_page()
if (bio_full(bio, len))
return 0;
Add a check to consider req->sg_cnt when using inline bio.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/passthru.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 2798944899b7..d65d76129000 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -194,7 +194,8 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
if (req->sg_cnt > BIO_MAX_VECS)
return -EINVAL;
- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
+ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC) {
bio = &req->p.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
--
2.22.1
* Re: [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio
2021-04-29 18:43 [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
2021-04-29 18:43 ` [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns Chaitanya Kulkarni
2021-04-29 18:43 ` [PATCH 2/2] nvmet: check sg_cnt for inline bio for passthru Chaitanya Kulkarni
@ 2021-05-04 8:56 ` Christoph Hellwig
2021-05-04 21:16 ` Chaitanya Kulkarni
2021-05-04 20:01 ` Sagi Grimberg
3 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2021-05-04 8:56 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi
On Thu, Apr 29, 2021 at 11:43:55AM -0700, Chaitanya Kulkarni wrote:
> Hi,
>
> While testing, I came across a scenario where checking the transfer
> size alone is not enough for inline bios. This issue is not easily
> reproducible with the test tools that I have.
>
> This small patch series adds an additional check so that we make sure
> both the transfer size and req->sg_cnt fit in the inline bio for the
> bdev and passthru backends.
How do you get the fragmented scatterlist? Oh, I guess this is due to
nvme-loop not using our page allocator but pages that come from the
block layer?
* Re: [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns
2021-04-29 18:43 ` [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns Chaitanya Kulkarni
@ 2021-05-04 20:00 ` Sagi Grimberg
2021-05-04 22:03 ` Chaitanya Kulkarni
1 sibling, 0 replies; 11+ messages in thread
From: Sagi Grimberg @ 2021-05-04 20:00 UTC (permalink / raw)
To: Chaitanya Kulkarni, linux-nvme; +Cc: hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH 2/2] nvmet: check sg_cnt for inline bio for passthru
2021-04-29 18:43 ` [PATCH 2/2] nvmet: check sg_cnt for inline bio for passthru Chaitanya Kulkarni
@ 2021-05-04 20:00 ` Sagi Grimberg
0 siblings, 0 replies; 11+ messages in thread
From: Sagi Grimberg @ 2021-05-04 20:00 UTC (permalink / raw)
To: Chaitanya Kulkarni, linux-nvme; +Cc: hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio
2021-04-29 18:43 [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
` (2 preceding siblings ...)
2021-05-04 8:56 ` [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Christoph Hellwig
@ 2021-05-04 20:01 ` Sagi Grimberg
3 siblings, 0 replies; 11+ messages in thread
From: Sagi Grimberg @ 2021-05-04 20:01 UTC (permalink / raw)
To: Chaitanya Kulkarni, linux-nvme; +Cc: hch
> Hi,
>
> While testing, I came across a scenario where checking the transfer
> size alone is not enough for inline bios. This issue is not easily
> reproducible with the test tools that I have.
>
> This small patch series adds an additional check so that we make sure
> both the transfer size and req->sg_cnt fit in the inline bio for the
> bdev and passthru backends.
Looks good, though I'd re-title the patches as bug fixes so they get
picked up for stable kernels.
* Re: [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio
2021-05-04 8:56 ` [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio Christoph Hellwig
@ 2021-05-04 21:16 ` Chaitanya Kulkarni
2021-05-05 4:57 ` Christoph Hellwig
0 siblings, 1 reply; 11+ messages in thread
From: Chaitanya Kulkarni @ 2021-05-04 21:16 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, sagi
> On May 4, 2021, at 1:56 AM, Christoph Hellwig <hch@lst.de> wrote:
>
> How do you get the fragmented scatterlist? Oh, I guess this is due to
> nvme-loop not using our page allocator but pages that come from the
> block layer?
Yes, with nvme-loop and the test framework issuing I/Os using
malloc-allocated buffers through the passthru I/O interface.
With respect to Sagi's comment, do you want me to send a v2 with a
Fixes tag?
* Re: [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns
2021-04-29 18:43 ` [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns Chaitanya Kulkarni
2021-05-04 20:00 ` Sagi Grimberg
@ 2021-05-04 22:03 ` Chaitanya Kulkarni
2021-05-05 5:06 ` hch
1 sibling, 1 reply; 11+ messages in thread
From: Chaitanya Kulkarni @ 2021-05-04 22:03 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi
On 4/29/21 11:44, Chaitanya Kulkarni wrote:
>
> sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
>
> - if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
> + if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
> + req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC) {
> bio = &req->b.inline_bio;
> bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
> } else {
> --
> 2.22.1
One more thing: in the V2, to avoid open-coding this in bdev and
passthru (and ZBD), should we add a helper?
Something like the following (untested) :-
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 9a8b3726a37c..429263ca9b97 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -258,7 +258,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ if (nvmet_use_inline_bvec(req)) {
bio = &req->b.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 5566ed403576..d69a409515d6 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -616,4 +616,10 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
}
+static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
+{
+ return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
+ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
+}
+
#endif /* _NVMET_H */
* Re: [PATCH 0/2] nvmet: consider req->sg_cnt for inline bio
2021-05-04 21:16 ` Chaitanya Kulkarni
@ 2021-05-05 4:57 ` Christoph Hellwig
0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2021-05-05 4:57 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: Christoph Hellwig, linux-nvme, sagi
On Tue, May 04, 2021 at 09:16:00PM +0000, Chaitanya Kulkarni wrote:
>
> > On May 4, 2021, at 1:56 AM, Christoph Hellwig <hch@lst.de> wrote:
> >
> > How do you get the fragmented scatterlist? Oh, I guess this is due to
> > nvme-loop not using our page allocator but pages that come from the
> > block layer?
>
> Yes, with nvme-loop and the test framework issuing I/Os using
> malloc-allocated buffers through the passthru I/O interface.
>
> With respect to Sagi's comment, do you want me to send a v2 with a
> Fixes tag?
If you have a few spare cycles please do, and also document that this
affects nvme-loop only.
* Re: [PATCH 1/2] nvmet: check sg_cnt for inline bio for bdev-ns
2021-05-04 22:03 ` Chaitanya Kulkarni
@ 2021-05-05 5:06 ` hch
0 siblings, 0 replies; 11+ messages in thread
From: hch @ 2021-05-05 5:06 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi
On Tue, May 04, 2021 at 10:03:27PM +0000, Chaitanya Kulkarni wrote:
> One more thing: in the V2, to avoid open-coding this in bdev and
> passthru (and ZBD), should we add a helper?
Fine with me.