* [PATCH 0/2] nvme: target: build bvec from sg directly
@ 2019-03-25 10:07 Ming Lei
2019-03-25 10:07 ` Ming Lei
2019-03-25 10:07 ` [PATCH 2/2] nvme: target: build bvec from sg directly Ming Lei
0 siblings, 2 replies; 16+ messages in thread
From: Ming Lei @ 2019-03-25 10:07 UTC (permalink / raw)
Hi,
The 1st patch fixes nvmet_file_init_bvec() when building bvecs from the
sg list.
The 2nd patch converts to building bvecs from sg entries directly, now
that multi-page bvec support is ready.
Ming Lei (2):
nvme: target: fix nvmet_file_init_bvec()
nvme: target: build bvec from sg directly
drivers/nvme/target/io-cmd-file.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
Cc: Yi Zhang <yi.zhang@redhat.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
--
2.9.5
* [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-25 10:07 [PATCH 0/2] nvme: target: build bvec from sg directly Ming Lei
@ 2019-03-25 10:07 ` Ming Lei
2019-03-25 10:07 ` [PATCH 2/2] nvme: target: build bvec from sg directly Ming Lei
1 sibling, 0 replies; 16+ messages in thread
From: Ming Lei @ 2019-03-25 10:07 UTC (permalink / raw)
To: linux-nvme, Christoph Hellwig
Cc: Ming Lei, Yi Zhang, Sagi Grimberg, Chaitanya Kulkarni, stable
There isn't an sg iterator helper for building bvecs, so open-code one
and fix the issue in nvmet_file_init_bvec().
The issue is that one sg entry may cover multiple contiguous pages:
only the first bvec's .bv_offset can be non-zero, and the length of the
last bvec has to account for the remaining length.
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Fixes: d5eff33ee6f8 ("nvmet: add simple file backed ns support")
Cc: Yi Zhang <yi.zhang@redhat.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
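For reference, the splitting the new inner loop performs can be modeled
in plain userspace C. Everything below is a simplified stand-in (the
struct fields, PAGE_SIZE value, and helper name are illustrative, not
the kernel API); page_idx plays the role of nth_page(sg_page(sg), j):

```c
#include <assert.h>

#define PAGE_SIZE 4096u

/* simplified stand-ins for struct scatterlist and struct bio_vec */
struct sg_ent { unsigned offset, length; };
struct bvec   { unsigned page_idx, offset, len; };

/* Split one sg entry into per-page bvecs, mirroring the patch's inner
 * loop: only the first bvec keeps sg->offset, the last bvec is trimmed
 * to the remaining length, and page j stands in for nth_page(). */
static unsigned sg_to_bvecs(const struct sg_ent *sg, struct bvec *out)
{
	unsigned j = 0, done = 0;

	while (done < sg->length) {
		out[j].offset = j ? 0 : sg->offset;
		out[j].len = PAGE_SIZE - out[j].offset;
		if (out[j].len > sg->length - done)
			out[j].len = sg->length - done;
		out[j].page_idx = j;
		done += out[j].len;
		j++;
	}
	return j;
}
```

An sg entry of 8192 bytes starting at offset 512 splits into three bvecs
of 3584, 4096, and 512 bytes, which is exactly the case the old helper
got wrong (it emitted PAGE_SIZE - sg->offset for every page).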
drivers/nvme/target/io-cmd-file.c | 57 ++++++++++++++++++++-------------------
1 file changed, 29 insertions(+), 28 deletions(-)
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 3e43212d3c1c..1c76e9e2a474 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -75,13 +75,6 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
return ret;
}
-static void nvmet_file_init_bvec(struct bio_vec *bv, struct sg_page_iter *iter)
-{
- bv->bv_page = sg_page_iter_page(iter);
- bv->bv_offset = iter->sg->offset;
- bv->bv_len = PAGE_SIZE - iter->sg->offset;
-}
-
static ssize_t nvmet_file_submit_bvec(struct nvmet_req *req, loff_t pos,
unsigned long nr_segs, size_t count, int ki_flags)
{
@@ -129,13 +122,14 @@ static void nvmet_file_io_done(struct kiocb *iocb, long ret, long ret2)
static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)
{
ssize_t nr_bvec = DIV_ROUND_UP(req->data_len, PAGE_SIZE);
- struct sg_page_iter sg_pg_iter;
+ struct scatterlist *sg;
+ struct bio_vec *bv;
unsigned long bv_cnt = 0;
bool is_sync = false;
size_t len = 0, total_len = 0;
ssize_t ret = 0;
loff_t pos;
-
+ int i, j, sg_done;
if (req->f.mpool_alloc && nr_bvec > NVMET_MAX_MPOOL_BVEC)
is_sync = true;
@@ -147,26 +141,33 @@ static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)
}
memset(&req->f.iocb, 0, sizeof(struct kiocb));
- for_each_sg_page(req->sg, &sg_pg_iter, req->sg_cnt, 0) {
- nvmet_file_init_bvec(&req->f.bvec[bv_cnt], &sg_pg_iter);
- len += req->f.bvec[bv_cnt].bv_len;
- total_len += req->f.bvec[bv_cnt].bv_len;
- bv_cnt++;
-
- WARN_ON_ONCE((nr_bvec - 1) < 0);
-
- if (unlikely(is_sync) &&
- (nr_bvec - 1 == 0 || bv_cnt == NVMET_MAX_MPOOL_BVEC)) {
- ret = nvmet_file_submit_bvec(req, pos, bv_cnt, len, 0);
- if (ret < 0)
- goto complete;
-
- pos += len;
- bv_cnt = 0;
- len = 0;
+ for_each_sg(req->sg, sg, req->sg_cnt, i)
+ for (j = 0, sg_done = 0;
+ (bv = &req->f.bvec[bv_cnt]) && sg_done < sg->length;
+ j++, sg_done += bv->bv_len) {
+ bv->bv_offset = j ? 0 : sg->offset;
+ bv->bv_len = min_t(unsigned, PAGE_SIZE - bv->bv_offset,
+ sg->length - sg_done);
+ bv->bv_page = nth_page(sg_page(sg), j);
+
+ len += bv->bv_len;
+ total_len += bv->bv_len;
+ bv_cnt++;
+
+ WARN_ON_ONCE((nr_bvec - 1) < 0);
+
+ if (unlikely(is_sync) &&
+ (nr_bvec - 1 == 0 || bv_cnt == NVMET_MAX_MPOOL_BVEC)) {
+ ret = nvmet_file_submit_bvec(req, pos, bv_cnt, len, 0);
+ if (ret < 0)
+ goto complete;
+
+ pos += len;
+ bv_cnt = 0;
+ len = 0;
+ }
+ nr_bvec--;
}
- nr_bvec--;
- }
if (WARN_ON_ONCE(total_len != req->data_len)) {
ret = -EIO;
--
2.9.5
* [PATCH 2/2] nvme: target: build bvec from sg directly
2019-03-25 10:07 [PATCH 0/2] nvme: target: build bvec from sg directly Ming Lei
2019-03-25 10:07 ` Ming Lei
@ 2019-03-25 10:07 ` Ming Lei
2019-03-25 10:55 ` Christoph Hellwig
1 sibling, 1 reply; 16+ messages in thread
From: Ming Lei @ 2019-03-25 10:07 UTC (permalink / raw)
Now that multi-page bvecs are supported and the whole I/O stack is
capable of handling them, build each bvec from its sg entry directly.
Cc: Yi Zhang <yi.zhang@redhat.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
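The effect of the conversion can be sketched in userspace C (simplified
stand-ins, not the kernel structs): each sg entry now maps to exactly
one bvec, and the bvec count bound drops from pages of payload to sg
entries:

```c
#include <assert.h>

#define PAGE_SIZE 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* simplified stand-ins for struct scatterlist and struct bio_vec */
struct sg_ent { unsigned offset, length; };
struct bvec   { unsigned offset, len; };

/* old bound: one bvec per page of payload; new bound: one per sg entry */
static unsigned nr_bvec_old(unsigned data_len) { return DIV_ROUND_UP(data_len, PAGE_SIZE); }
static unsigned nr_bvec_new(unsigned sg_cnt)   { return sg_cnt; }

/* multi-page bvec: one bvec per sg entry, bv_len may span pages */
static void sg_to_bvec(const struct sg_ent *sg, struct bvec *bv)
{
	bv->offset = sg->offset;
	bv->len = sg->length;
}
```

A two-page sg entry thus needs one bvec instead of two, which is where
the code gets simpler.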
drivers/nvme/target/io-cmd-file.c | 55 ++++++++++++++++++---------------------
1 file changed, 26 insertions(+), 29 deletions(-)
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 1c76e9e2a474..e2a92b85baa0 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -121,15 +121,14 @@ static void nvmet_file_io_done(struct kiocb *iocb, long ret, long ret2)
static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)
{
- ssize_t nr_bvec = DIV_ROUND_UP(req->data_len, PAGE_SIZE);
+ ssize_t nr_bvec = req->sg_cnt;
struct scatterlist *sg;
- struct bio_vec *bv;
unsigned long bv_cnt = 0;
bool is_sync = false;
size_t len = 0, total_len = 0;
ssize_t ret = 0;
loff_t pos;
- int i, j, sg_done;
+ int i;
if (req->f.mpool_alloc && nr_bvec > NVMET_MAX_MPOOL_BVEC)
is_sync = true;
@@ -141,33 +140,31 @@ static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)
}
memset(&req->f.iocb, 0, sizeof(struct kiocb));
- for_each_sg(req->sg, sg, req->sg_cnt, i)
- for (j = 0, sg_done = 0;
- (bv = &req->f.bvec[bv_cnt]) && sg_done < sg->length;
- j++, sg_done += bv->bv_len) {
- bv->bv_offset = j ? 0 : sg->offset;
- bv->bv_len = min_t(unsigned, PAGE_SIZE - bv->bv_offset,
- sg->length - sg_done);
- bv->bv_page = nth_page(sg_page(sg), j);
-
- len += bv->bv_len;
- total_len += bv->bv_len;
- bv_cnt++;
-
- WARN_ON_ONCE((nr_bvec - 1) < 0);
-
- if (unlikely(is_sync) &&
- (nr_bvec - 1 == 0 || bv_cnt == NVMET_MAX_MPOOL_BVEC)) {
- ret = nvmet_file_submit_bvec(req, pos, bv_cnt, len, 0);
- if (ret < 0)
- goto complete;
-
- pos += len;
- bv_cnt = 0;
- len = 0;
- }
- nr_bvec--;
+ for_each_sg(req->sg, sg, req->sg_cnt, i) {
+ struct bio_vec *bv = &req->f.bvec[bv_cnt];
+
+ bv->bv_offset = sg->offset;
+ bv->bv_len = sg->length;
+ bv->bv_page = sg_page(sg);
+
+ len += bv->bv_len;
+ total_len += bv->bv_len;
+ bv_cnt++;
+
+ WARN_ON_ONCE((nr_bvec - 1) < 0);
+
+ if (unlikely(is_sync) &&
+ (nr_bvec - 1 == 0 || bv_cnt == NVMET_MAX_MPOOL_BVEC)) {
+ ret = nvmet_file_submit_bvec(req, pos, bv_cnt, len, 0);
+ if (ret < 0)
+ goto complete;
+
+ pos += len;
+ bv_cnt = 0;
+ len = 0;
}
+ nr_bvec--;
+ }
if (WARN_ON_ONCE(total_len != req->data_len)) {
ret = -EIO;
--
2.9.5
* Re: [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-25 10:07 ` Ming Lei
@ 2019-03-25 10:52 ` Christoph Hellwig
-1 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2019-03-25 10:52 UTC (permalink / raw)
To: Ming Lei
Cc: linux-nvme, Christoph Hellwig, Yi Zhang, Sagi Grimberg,
Chaitanya Kulkarni, stable
On Mon, Mar 25, 2019 at 06:07:07PM +0800, Ming Lei wrote:
> There isn't sg iterator helper for building bvec, so invent one
> and fix the issue in nvmet_file_init_bvec().
>
> The issue is that one sg may include multipge continuous pages, and
> only the 1st .bv_offset isn't zero, also the length for the last bvec
> has to consider the remained length.
The scatterlist in the nvme target is always allocated by the nvmet
code itself and thus never contains multi-page sg list entries.
* [PATCH 2/2] nvme: target: build bvec from sg directly
2019-03-25 10:07 ` [PATCH 2/2] nvme: target: build bvec from sg directly Ming Lei
@ 2019-03-25 10:55 ` Christoph Hellwig
0 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2019-03-25 10:55 UTC (permalink / raw)
This looks pretty sensible even if we don't end up submitting multi-page
vecs. But can you please just send it on top of mainline directly?
The previous patch just creates pointless churn.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-25 10:52 ` Christoph Hellwig
@ 2019-03-26 1:39 ` Ming Lei
-1 siblings, 0 replies; 16+ messages in thread
From: Ming Lei @ 2019-03-26 1:39 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Yi Zhang, Sagi Grimberg, Chaitanya Kulkarni, stable, linux-nvme
On Mon, Mar 25, 2019 at 11:52:31AM +0100, Christoph Hellwig wrote:
> On Mon, Mar 25, 2019 at 06:07:07PM +0800, Ming Lei wrote:
> > There isn't sg iterator helper for building bvec, so invent one
> > and fix the issue in nvmet_file_init_bvec().
> >
> > The issue is that one sg may include multipge continuous pages, and
> > only the 1st .bv_offset isn't zero, also the length for the last bvec
> > has to consider the remained length.
>
> The scatterlist in the nvme target is always allocated by the nvmet
> code itself an thus never contains multi-page sg list entries.
I am wondering if that is true.
I haven't looked at the other target code yet, but it seems it isn't
true for loop; see the following code in nvme_loop_queue_rq():
iod->req.sg = iod->sg_table.sgl;
iod->req.sg_cnt = blk_rq_map_sg(req->q, req, iod->sg_table.sgl);
iod->req.transfer_len = blk_rq_payload_bytes(req);
And it has been triggered by nvme/011 in Yi's test.
Thanks,
Ming
* Re: [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-26 1:39 ` Ming Lei
@ 2019-03-26 2:03 ` Sagi Grimberg
-1 siblings, 0 replies; 16+ messages in thread
From: Sagi Grimberg @ 2019-03-26 2:03 UTC (permalink / raw)
To: Ming Lei, Christoph Hellwig
Cc: Yi Zhang, Chaitanya Kulkarni, stable, linux-nvme
>> The scatterlist in the nvme target is always allocated by the nvmet
>> code itself an thus never contains multi-page sg list entries.
>
> I am wondering if it is true.
>
> Not look at other target code yet, however seems it isn't true for loop,
> see the following code in nvme_loop_queue_rq():
>
> iod->req.sg = iod->sg_table.sgl;
> iod->req.sg_cnt = blk_rq_map_sg(req->q, req, iod->sg_table.sgl);
> iod->req.transfer_len = blk_rq_payload_bytes(req);
>
> And it has been triggered by nvme/011 in Yi's test.
Yes, loop is an exception in this case.
* Re: [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-26 2:03 ` Sagi Grimberg
@ 2019-03-26 2:16 ` Ming Lei
-1 siblings, 0 replies; 16+ messages in thread
From: Ming Lei @ 2019-03-26 2:16 UTC (permalink / raw)
To: Sagi Grimberg
Cc: Christoph Hellwig, linux-nvme, Yi Zhang, stable, Chaitanya Kulkarni
On Mon, Mar 25, 2019 at 07:03:23PM -0700, Sagi Grimberg wrote:
>
> > > The scatterlist in the nvme target is always allocated by the nvmet
> > > code itself an thus never contains multi-page sg list entries.
> >
> > I am wondering if it is true.
> >
> > Not look at other target code yet, however seems it isn't true for loop,
> > see the following code in nvme_loop_queue_rq():
> >
> > iod->req.sg = iod->sg_table.sgl;
> > iod->req.sg_cnt = blk_rq_map_sg(req->q, req, iod->sg_table.sgl);
> > iod->req.transfer_len = blk_rq_payload_bytes(req);
> >
> > And it has been triggered by nvme/011 in Yi's test.
>
> Yes, loop is an exception in this case.
Thanks for the clarification!
Another candidate fix is to set nvmet-loop's queue segment boundary mask
to PAGE_SIZE - 1.
There is the same issue for the block-device-backed target too.
If no one objects, I'd like to take the approach of adjusting the
segment boundary mask.
Thanks,
Ming
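For context, a segment boundary mask works because the block layer only
merges two ranges into one segment when they fall within the same
boundary window, so a mask of PAGE_SIZE - 1 prevents any segment from
crossing a page. A userspace sketch of that merge condition (the helper
name and the check's exact form are illustrative, not the kernel's
internals):

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SIZE 4096ul

/* Two contiguous byte ranges may share one segment only if they sit
 * inside the same boundary window: (end1 | mask) == (start2 | mask). */
static bool seg_mergeable(unsigned long addr1, unsigned long len1,
			  unsigned long addr2, unsigned long mask)
{
	return addr1 + len1 == addr2 &&
	       ((addr1 + len1 - 1) | mask) == (addr2 | mask);
}
```

With mask = PAGE_SIZE - 1, two adjacent full pages stay separate
segments, so every sg entry the target sees fits within a single page.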
* Re: [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-26 2:16 ` Ming Lei
@ 2019-03-26 7:32 ` Christoph Hellwig
-1 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2019-03-26 7:32 UTC (permalink / raw)
To: Ming Lei
Cc: Sagi Grimberg, Christoph Hellwig, linux-nvme, Yi Zhang, stable,
Chaitanya Kulkarni
On Tue, Mar 26, 2019 at 10:16:28AM +0800, Ming Lei wrote:
> Also there is the same issue for block device backed target.
>
> If no one objects, I'd like to take the approach of adjusting segment
> boundary mask.
Just go for the merge of your two patches. That fixes the issue and
actually makes the code simpler.
* [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec()
2019-03-25 10:07 ` Ming Lei
@ 2019-03-27 14:00 ` Sasha Levin
-1 siblings, 0 replies; 16+ messages in thread
From: Sasha Levin @ 2019-03-27 14:00 UTC (permalink / raw)
Hi,
[This is an automated email]
This commit has been processed because it contains a "Fixes:" tag,
fixing commit: .
The bot has tested the following trees: v5.0.4, v4.19.31, v4.14.108, v4.9.165, v4.4.177, v3.18.137.
v5.0.4: Build OK!
v4.19.31: Failed to apply! Possible dependencies:
50a909db36f2 ("nvmet: use IOCB_NOWAIT for file-ns buffered I/O")
v4.14.108: Failed to apply! Possible dependencies:
1b72b71facce ("nvmet: fix file discard return status")
50a909db36f2 ("nvmet: use IOCB_NOWAIT for file-ns buffered I/O")
543c09c89fdc ("nvmet: Fix nvmet_execute_write_zeroes sector count")
55eb942eda2c ("nvmet: add buffered I/O support for file backed ns")
618cff4285dc ("nvmet: remove duplicate NULL initialization for req->ns")
7756f72ccd43 ("nvmet: Change return code of discard command if not supported")
9c891c139894 ("nvmet: check fileio lba range access boundaries")
d5eff33ee6f8 ("nvmet: add simple file backed ns support")
e454d122e228 ("nvmet: kill nvmet_inline_bio_init")
e929f06d9eaa ("nvmet: constify struct nvmet_fabrics_ops")
ea435e1b9392 ("block: add a poll_fn callback to struct request_queue")
v4.9.165: Failed to apply! Possible dependencies:
297e3d854784 ("blk-throttle: make throtl_slice tunable")
4151dd9a58c6 ("nvmet: Fixed avoided printing nvmet: twice in error logs.")
4e4cbee93d56 ("block: switch bios to blk_status_t")
778f067c185c ("nvme: add semicolon in nvme_command setting")
87760e5eef35 ("block: hook up writeback throttling")
986994a27587 ("nvme: Use CNS as 8-bit field and avoid endianness conversion")
9e234eeafbe1 ("blk-throttle: add a simple idle detection")
cf43e6be865a ("block: add scalable completion tracking of requests")
d5eff33ee6f8 ("nvmet: add simple file backed ns support")
e806402130c9 ("block: split out request-only flags into a new namespace")
fbbaf700e7b1 ("block: trace completion of all bios.")
v4.4.177: Failed to apply! Possible dependencies:
1c63dc66580d ("nvme: split a new struct nvme_ctrl out of struct nvme_dev")
21d34711e1b5 ("nvme: split command submission helpers out of pci.c")
2d79c7dc8fe5 ("admin-cmd: Added smart-log command support.")
3a85a5de29ea ("nvme-loop: add a NVMe loopback host driver")
4151dd9a58c6 ("nvmet: Fixed avoided printing nvmet: twice in error logs.")
540c801c65eb ("NVMe: Implement namespace list scanning")
6f3b0e8bcf3c ("blk-mq: add a flags parameter to blk_mq_alloc_request")
986994a27587 ("nvme: Use CNS as 8-bit field and avoid endianness conversion")
a07b4970f464 ("nvmet: add a generic NVMe target")
d5eff33ee6f8 ("nvmet: add simple file backed ns support")
fa60682677c5 ("nvme: use symbolic constants for CNS values")
v3.18.137: Failed to apply! Possible dependencies:
062261be4e39 ("nvme: Replace rcu_assign_pointer() with RCU_INIT_POINTER()")
1d0906246095 ("NVMe: Mismatched host/device page size support")
302c6727e5eb ("NVMe: Fix filesystem sync deadlock on removal")
57dacad5f228 ("nvme: move to a new drivers/nvme/host directory")
5905535610fc ("NVMe: Correctly handle IOCTL_SUBMIT_IO when cpus > online queues")
5940c8578fe7 ("NVMe: Clear QUEUE_FLAG_STACKABLE")
6fccf9383b28 ("NVMe: Async event request")
7963e521811e ("NVMe: Passthrough IOCTL for IO commands")
7be50e93fbc2 ("NVMe: Fix nvmeq waitqueue entry initialization")
9dbbfab7d541 ("NVMe: Do not over allocate for discard requests")
a07b4970f464 ("nvmet: add a generic NVMe target")
a4aea5623d4a ("NVMe: Convert to blk-mq")
a96d4f5c2da6 ("NVMe: Reference count pci device")
aee4b9bd4505 ("nvmem: Add to MAINTAINERS for nvmem framework")
b4ff9c8ddb6f ("NVMe: Translate NVMe status to errno")
d5eff33ee6f8 ("nvmet: add simple file backed ns support")
dece45855a8b ("NFC: nxp-nci: Add support for NXP NCI chips")
f435c2825b4c ("NVMe: Call nvme_free_queue directly")
How should we proceed with this patch?
--
Thanks,
Sasha
end of thread, other threads:[~2019-03-27 14:00 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-25 10:07 [PATCH 0/2] nvme: target: build bvec from sg directly Ming Lei
2019-03-25 10:07 ` [PATCH 1/2] nvme: target: fix nvmet_file_init_bvec() Ming Lei
2019-03-25 10:52   ` Christoph Hellwig
2019-03-26  1:39     ` Ming Lei
2019-03-26  2:03       ` Sagi Grimberg
2019-03-26  2:16         ` Ming Lei
2019-03-26  7:32           ` Christoph Hellwig
2019-03-27 14:00   ` Sasha Levin
2019-03-25 10:07 ` [PATCH 2/2] nvme: target: build bvec from sg directly Ming Lei
2019-03-25 10:55   ` Christoph Hellwig