linux-block.vger.kernel.org archive mirror
* [RFC 0/2] io_uring: examine request result only after completion
@ 2019-10-24  9:18 Bijan Mottahedeh
  2019-10-24  9:18 ` [RFC 1/2] io_uring: create io_queue_async() function Bijan Mottahedeh
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-24  9:18 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

Running an fio test consistently crashes the kernel with the trace included
below.  The root cause seems to be the code in __io_submit_sqe() that
checks the result of a request for -EAGAIN in polled mode, without
ensuring first that the request has completed:

	if (ctx->flags & IORING_SETUP_IOPOLL) {
		if (req->result == -EAGAIN)
			return -EAGAIN;

The request will be immediately resubmitted in io_sq_wq_submit_work(),
potentially before the first submission has completed.  This creates
a race where the original completion may set REQ_F_IOPOLL_COMPLETED in
a freed submission request, overwriting the poisoned bits and causing the
panic below.

	do {
		ret = __io_submit_sqe(ctx, req, s, false);
		/*
		 * We can get EAGAIN for polled IO even though
		 * we're forcing a sync submission from here,
		 * since we can't wait for request slots on the
		 * block side.
		 */
		if (ret != -EAGAIN)
			break;
		cond_resched();
	} while (1);

The suggested fix is to move a submitted request to the poll list
unconditionally in polled mode.  The request can then be retried if
necessary once the original submission has indeed completed.
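
As a rough sketch of the idea (simplified here; the actual change is in
patch 2/2 below), the IOPOLL branch in __io_submit_sqe() stops inspecting
req->result and instead always queues the request for polling, deferring
the -EAGAIN handling to io_iopoll_complete() once the request has truly
completed:

	if (ctx->flags & IORING_SETUP_IOPOLL) {
		/* don't look at req->result yet; save the submit state
		 * so the request can be resubmitted later if needed */
		memcpy(&req->submit, s, sizeof(*s));
		io_iopoll_req_issued(req);
	}

	/* later, in io_iopoll_complete(), after the request has completed */
	if (req->result == -EAGAIN) {
		/* reset the request and requeue it for async submission */
	}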

This bug raises a question, however: REQ_F_IOPOLL_COMPLETED is set in
io_complete_rw_iopoll() from interrupt context, yet NVMe polled queues
are not supposed to generate interrupts, so the reason for this apparent
inconsistency is not clear.

fio job
-------
[global]
filename=/dev/nvme0n1
rw=randread
bs=4k
size=4G
direct=1
time_based=1
runtime=60
randrepeat=1
gtod_reduce=1

fio test
--------
fio nvme.fio --readonly --ioengine=io_uring --iodepth 1024 --fixedbufs --hipri --numjobs=8

panic trace
-----------
[  450.395076] BUG io_kiocb (Not tainted): Poison overwritten
[  450.537797] -----------------------------------------------------------------------------
[  450.537799] INFO: 0x00000000cb333e0b-0x00000000cb333e0b. First byte 0x7b instead of 0x6b
[  450.656496] RIP: 0010:blkdev_bio_end_io+0x71/0xe0
[  450.772066] INFO: Allocated in io_submit_sqe+0x84/0x3d0 age=555 cpu=9 pid=3665
[  450.772070]  __slab_alloc+0x40/0x62
[  450.868914] Code: 75 3c 0f b6 43 32 4c 8b 2b 84 c0 75 0a 48 8b 73 08 49 01 75 08 eb 0b 0f b6 f8 e8 aa 9c 0e 00 48 63 f0 48 8b 03 31 d2 4c 89 ef <ff> 50 10 f6 43 14 01 74 32 48 8d 7b 18 e8 0d 56 0e 00 eb 27 48 8b
[  450.925197]  kmem_cache_alloc+0xa3/0x260
[  450.925198]  io_submit_sqe+0x84/0x3d0
[  451.011642] RSP: 0018:ffffc90006908e28 EFLAGS: 00010046
[  451.053353]  io_ring_submit+0xd5/0x150
[  451.053355]  __x64_sys_io_uring_enter+0x14e/0x290

Bijan Mottahedeh (2):
  io_uring: create io_queue_async() function
  io_uring: examine request result only after completion

 fs/io_uring.c | 122 ++++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 89 insertions(+), 33 deletions(-)

-- 
1.8.3.1



* [RFC 1/2] io_uring: create io_queue_async() function
  2019-10-24  9:18 [RFC 0/2] io_uring: examine request result only after completion Bijan Mottahedeh
@ 2019-10-24  9:18 ` Bijan Mottahedeh
  2019-10-24  9:18 ` [RFC 2/2] io_uring: examine request result only after completion Bijan Mottahedeh
  2019-10-24 17:09 ` [RFC 0/2] " Jens Axboe
  2 siblings, 0 replies; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-24  9:18 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

This patch pulls out the code from __io_queue_sqe() and creates a new
io_queue_async() function.  It doesn't change runtime behavior, but it's
a bit cleaner and will be used in future patches.

Signed-off-by: <bijan.mottahedeh@oracle.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 fs/io_uring.c | 59 +++++++++++++++++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 24 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5415fcc..acb213c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -362,6 +362,8 @@ struct io_submit_state {
 static void io_cqring_fill_event(struct io_ring_ctx *ctx, u64 ki_user_data,
 				 long res);
 static void __io_free_req(struct io_kiocb *req);
+static int io_queue_async(struct io_ring_ctx *ctx, struct io_kiocb *req,
+			  struct sqe_submit *s);
 
 static struct kmem_cache *req_cachep;
 
@@ -2437,6 +2439,35 @@ static int io_req_set_file(struct io_ring_ctx *ctx, const struct sqe_submit *s,
 	return 0;
 }
 
+static int io_queue_async(struct io_ring_ctx *ctx, struct io_kiocb *req,
+			  struct sqe_submit *s)
+{
+	struct io_uring_sqe *sqe_copy;
+	struct async_list *list;
+
+	/* async context always use a copy of the sqe */
+	sqe_copy = kmemdup(s->sqe, sizeof(*sqe_copy), GFP_KERNEL);
+	if (!sqe_copy)
+		return -ENOMEM;
+
+	s->sqe = sqe_copy;
+
+	memcpy(&req->submit, s, sizeof(*s));
+	list = io_async_list_from_sqe(ctx, s->sqe);
+	if (!io_add_to_prev_work(list, req)) {
+		if (list)
+			atomic_inc(&list->cnt);
+		INIT_WORK(&req->work, io_sq_wq_submit_work);
+		io_queue_async_work(ctx, req);
+	}
+
+	/*
+	 * Queued up for async execution, worker will release
+	 * submit reference when the iocb is actually submitted.
+	 */
+	return 0;
+}
+
 static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			struct sqe_submit *s, bool force_nonblock)
 {
@@ -2448,30 +2479,10 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
 	 * doesn't support non-blocking read/write attempts
 	 */
-	if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
-	    (req->flags & REQ_F_MUST_PUNT))) {
-		struct io_uring_sqe *sqe_copy;
-
-		sqe_copy = kmemdup(s->sqe, sizeof(*sqe_copy), GFP_KERNEL);
-		if (sqe_copy) {
-			struct async_list *list;
-
-			s->sqe = sqe_copy;
-			memcpy(&req->submit, s, sizeof(*s));
-			list = io_async_list_from_sqe(ctx, s->sqe);
-			if (!io_add_to_prev_work(list, req)) {
-				if (list)
-					atomic_inc(&list->cnt);
-				INIT_WORK(&req->work, io_sq_wq_submit_work);
-				io_queue_async_work(ctx, req);
-			}
-
-			/*
-			 * Queued up for async execution, worker will release
-			 * submit reference when the iocb is actually submitted.
-			 */
-			return 0;
-		}
+	if (ret == -EAGAIN &&
+	    (!(req->flags & REQ_F_NOWAIT) || (req->flags & REQ_F_MUST_PUNT)) &&
+	    !io_queue_async(ctx, req, s)) {
+		return 0;
 	}
 
 	/* drop submission reference */
-- 
1.8.3.1



* [RFC 2/2] io_uring: examine request result only after completion
  2019-10-24  9:18 [RFC 0/2] io_uring: examine request result only after completion Bijan Mottahedeh
  2019-10-24  9:18 ` [RFC 1/2] io_uring: create io_queue_async() function Bijan Mottahedeh
@ 2019-10-24  9:18 ` Bijan Mottahedeh
  2019-10-24 17:09 ` [RFC 0/2] " Jens Axboe
  2 siblings, 0 replies; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-24  9:18 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

__io_submit_sqe() checks the result of a request for -EAGAIN in
polled mode, without ensuring first that the request has completed.
The request will be immediately resubmitted in io_sq_wq_submit_work(),
potentially before the first submission has completed.  This creates
a race where the original completion may set REQ_F_IOPOLL_COMPLETED in
a freed submission request.

Move a submitted request to the poll list unconditionally in polled mode.
The request can then be retried if necessary once the original submission
has indeed completed.

Signed-off-by: <bijan.mottahedeh@oracle.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 fs/io_uring.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 54 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index acb213c..9e2eef5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -328,6 +328,7 @@ struct io_kiocb {
 #define REQ_F_TIMEOUT		1024	/* timeout request */
 #define REQ_F_ISREG		2048	/* regular file */
 #define REQ_F_MUST_PUNT		4096	/* must be punted even for NONBLOCK */
+#define REQ_F_SQE_ALLOC		8192	/* dynamically allocated sqe */
 	u64			user_data;
 	u32			result;
 	u32			sequence;
@@ -657,6 +658,16 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
 static void io_free_req_many(struct io_ring_ctx *ctx, void **reqs, int *nr)
 {
 	if (*nr) {
+		int i;
+		struct io_kiocb *req;
+
+		for (i = 0; i < *nr; i++) {
+			req = reqs[i];
+			if (req->flags & REQ_F_SQE_ALLOC) {
+				kfree(req->submit.sqe);
+				req->submit.sqe = NULL;
+			}
+		}
 		kmem_cache_free_bulk(req_cachep, *nr, reqs);
 		percpu_ref_put_many(&ctx->refs, *nr);
 		*nr = 0;
@@ -668,6 +679,10 @@ static void __io_free_req(struct io_kiocb *req)
 	if (req->file && !(req->flags & REQ_F_FIXED_FILE))
 		fput(req->file);
 	percpu_ref_put(&req->ctx->refs);
+	if (req->flags & REQ_F_SQE_ALLOC) {
+		kfree(req->submit.sqe);
+		req->submit.sqe = NULL;
+	}
 	kmem_cache_free(req_cachep, req);
 }
 
@@ -789,6 +804,29 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		req = list_first_entry(done, struct io_kiocb, list);
 		list_del(&req->list);
 
+		/*
+		 * If a retry is needed, reset the request and queue
+		 * it for async submission.
+		 */
+		if (req->result == -EAGAIN) {
+			int ret;
+
+			refcount_set(&req->refs, 2);
+			req->flags &= ~REQ_F_IOPOLL_COMPLETED;
+			req->result = 0;
+
+			ret = io_queue_async(ctx, req, &req->submit);
+			if (!ret)
+				continue;
+
+			/*
+			 * The async submission failed.  Mark the
+			 * request as complete.
+			 */
+			refcount_set(&req->refs, 1);
+			req->flags |= REQ_F_IOPOLL_COMPLETED;
+		}
+
 		io_cqring_fill_event(ctx, req->user_data, req->result);
 		(*nr_events)++;
 
@@ -835,9 +873,14 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		 * Move completed entries to our local list. If we find a
 		 * request that requires polling, break out and complete
 		 * the done list first, if we have entries there.
+		 * Make sure the completed entries have been properly
+		 * reference counted before moving them.  This accounts
+		 * for a race where a request can complete before its
+		 * reference count is decremented after the submission.
 		 */
 		if (req->flags & REQ_F_IOPOLL_COMPLETED) {
-			list_move_tail(&req->list, &done);
+			if (refcount_read(&req->refs) < 2)
+				list_move_tail(&req->list, &done);
 			continue;
 		}
 		if (!list_empty(&done))
@@ -1007,8 +1050,7 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
 	if ((req->flags & REQ_F_LINK) && res != req->result)
 		req->flags |= REQ_F_FAIL_LINK;
 	req->result = res;
-	if (res != -EAGAIN)
-		req->flags |= REQ_F_IOPOLL_COMPLETED;
+	req->flags |= REQ_F_IOPOLL_COMPLETED;
 }
 
 /*
@@ -2116,6 +2158,7 @@ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
 
 	memcpy(sqe_copy, sqe, sizeof(*sqe_copy));
 	req->submit.sqe = sqe_copy;
+	req->flags |= REQ_F_SQE_ALLOC;
 
 	INIT_WORK(&req->work, io_sq_wq_submit_work);
 	trace_io_uring_defer(ctx, req, false);
@@ -2189,9 +2232,12 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		return ret;
 
 	if (ctx->flags & IORING_SETUP_IOPOLL) {
-		if (req->result == -EAGAIN)
-			return -EAGAIN;
-
+		/*
+		 * Add the request to the poll list unconditionally.
+		 * The request may be resubmitted in case of EAGAIN
+		 * during completion processing.
+		 */
+		memcpy(&req->submit, s, sizeof(*s));
 		/* workqueue context doesn't hold uring_lock, grab it now */
 		if (s->in_async)
 			mutex_lock(&ctx->uring_lock);
@@ -2284,9 +2330,6 @@ static void io_sq_wq_submit_work(struct work_struct *work)
 			io_put_req(req, NULL);
 		}
 
-		/* async context always use a copy of the sqe */
-		kfree(sqe);
-
 		/* if a dependent link is ready, do that as the next one */
 		if (!ret && nxt) {
 			req = nxt;
@@ -2451,6 +2494,7 @@ static int io_queue_async(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		return -ENOMEM;
 
 	s->sqe = sqe_copy;
+	req->flags |= REQ_F_SQE_ALLOC;
 
 	memcpy(&req->submit, s, sizeof(*s));
 	list = io_async_list_from_sqe(ctx, s->sqe);
@@ -2607,6 +2651,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
 		}
 
 		s->sqe = sqe_copy;
+		req->flags |= REQ_F_SQE_ALLOC;
 		memcpy(&req->submit, s, sizeof(*s));
 		trace_io_uring_link(ctx, req, prev);
 		list_add_tail(&req->list, &prev->link_list);
-- 
1.8.3.1



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-24  9:18 [RFC 0/2] io_uring: examine request result only after completion Bijan Mottahedeh
  2019-10-24  9:18 ` [RFC 1/2] io_uring: create io_queue_async() function Bijan Mottahedeh
  2019-10-24  9:18 ` [RFC 2/2] io_uring: examine request result only after completion Bijan Mottahedeh
@ 2019-10-24 17:09 ` Jens Axboe
  2019-10-24 19:18   ` Bijan Mottahedeh
  2 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-24 17:09 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
> Running an fio test consistenly crashes the kernel with the trace included
> below.  The root cause seems to be the code in __io_submit_sqe() that
> checks the result of a request for -EAGAIN in polled mode, without
> ensuring first that the request has completed:
> 
> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
> 		if (req->result == -EAGAIN)
> 			return -EAGAIN;

I'm a little confused, because we should be holding the submission
reference to the request still at this point. So how is it going away?
I must be missing something...

> The request will be immediately resubmitted in io_sq_wq_submit_work(),
> potentially before the the fisrt submission has completed.  This creates
> a race where the original completion may set REQ_F_IOPOLL_COMPLETED in
> a freed submission request, overwriting the poisoned bits, casusing the
> panic below.
> 
> 	do {
> 		ret = __io_submit_sqe(ctx, req, s, false);
> 		/*
> 		 * We can get EAGAIN for polled IO even though
> 		 * we're forcing a sync submission from here,
> 		 * since we can't wait for request slots on the
> 		 * block side.
> 		 */
> 		if (ret != -EAGAIN)
> 			break;
> 		cond_resched();
> 	} while (1);
> 
> The suggested fix is to move a submitted request to the poll list
> unconditionally in polled mode.  The request can then be retried if
> necessary once the original submission has indeed completed.
>
> This bug raises an issue however since REQ_F_IOPOLL_COMPLETED is set
> in io_complete_rw_iopoll() from interrupt context.  NVMe polled queues
> however are not supposed to generate interrupts so it is not clear what
> is the reason for this apparent inconsitency.

It's because you're not running with poll queues for NVMe, hence you're
throwing a lot of performance away. Load nvme with poll_queues=X (or boot
with nvme.poll_queues=X, if built in) to have a set of separate queues
for polling. These don't have IRQs enabled, and it'll work much faster
for you.

-- 
Jens Axboe



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-24 17:09 ` [RFC 0/2] " Jens Axboe
@ 2019-10-24 19:18   ` Bijan Mottahedeh
  2019-10-24 22:31     ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-24 19:18 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/24/19 10:09 AM, Jens Axboe wrote:
> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>> Running an fio test consistenly crashes the kernel with the trace included
>> below.  The root cause seems to be the code in __io_submit_sqe() that
>> checks the result of a request for -EAGAIN in polled mode, without
>> ensuring first that the request has completed:
>>
>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>> 		if (req->result == -EAGAIN)
>> 			return -EAGAIN;
> I'm a little confused, because we should be holding the submission
> reference to the request still at this point. So how is it going away?
> I must be missing something...

I don't think the submission reference is going away...

I *think* the problem has to do with the fact that
io_complete_rw_iopoll(), which sets REQ_F_IOPOLL_COMPLETED, is being
called from interrupt context in my configuration, and so there is a
potential race between updating the request there and checking it in
__io_submit_sqe().

My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the 
code snippet above:

	if (req->result == -EAGAIN) {
		poll for REQ_F_IOPOLL_COMPLETED
		return -EAGAIN;
	}

and that got rid of the problem.

>
>> The request will be immediately resubmitted in io_sq_wq_submit_work(),
>> potentially before the the fisrt submission has completed.  This creates
>> a race where the original completion may set REQ_F_IOPOLL_COMPLETED in
>> a freed submission request, overwriting the poisoned bits, casusing the
>> panic below.
>>
>> 	do {
>> 		ret = __io_submit_sqe(ctx, req, s, false);
>> 		/*
>> 		 * We can get EAGAIN for polled IO even though
>> 		 * we're forcing a sync submission from here,
>> 		 * since we can't wait for request slots on the
>> 		 * block side.
>> 		 */
>> 		if (ret != -EAGAIN)
>> 			break;
>> 		cond_resched();
>> 	} while (1);
>>
>> The suggested fix is to move a submitted request to the poll list
>> unconditionally in polled mode.  The request can then be retried if
>> necessary once the original submission has indeed completed.
>>
>> This bug raises an issue however since REQ_F_IOPOLL_COMPLETED is set
>> in io_complete_rw_iopoll() from interrupt context.  NVMe polled queues
>> however are not supposed to generate interrupts so it is not clear what
>> is the reason for this apparent inconsitency.
> It's because you're not running with poll queues for NVMe, hence you're
> throwing a lot of performance away. Load nvme with poll_queues=X (or boot
> with nvme.poll_queues=X, if built in) to have a set of separate queues
> for polling. These don't have IRQs enabled, and it'll work much faster
> for you.
>
That's what I did in fact.  I booted with nvme.poll_queues=36 (I figured 
1 per core but I'm not sure what is a reasonable number).

I also checked that /sys/block/<nvme>/queue/io_poll = 1.

What's really odd is that the irq/sec numbers from mpstat and perf show
equivalent values with/without polling (with/without the fio "hipri" option),
even though I can see from perf top that we are in fact polling in one
case.  I don't know if I'm missing a step or something is off in my config.

Thanks.

--bijan



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-24 19:18   ` Bijan Mottahedeh
@ 2019-10-24 22:31     ` Jens Axboe
       [not found]       ` <fa82e9fc-caf7-a94a-ebff-536413e9ecce@oracle.com>
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-24 22:31 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
> 
> On 10/24/19 10:09 AM, Jens Axboe wrote:
>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>> Running an fio test consistenly crashes the kernel with the trace included
>>> below.  The root cause seems to be the code in __io_submit_sqe() that
>>> checks the result of a request for -EAGAIN in polled mode, without
>>> ensuring first that the request has completed:
>>>
>>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>>> 		if (req->result == -EAGAIN)
>>> 			return -EAGAIN;
>> I'm a little confused, because we should be holding the submission
>> reference to the request still at this point. So how is it going away?
>> I must be missing something...
> 
> I don't think the submission reference is going away...
> 
> I *think* the problem has to do with the fact that
> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
> called from interrupt context in my configuration and so there is a
> potential race between updating the request there and checking it in
> __io_submit_sqe().
> 
> My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the
> code snippet above:
> 
>       if (req->result == --EAGAIN) {
> 
>           poll for REQ_F_IOPOLL_COMPLETED
> 
>           return -EAGAIN;
> 
> }
> 
> and that got rid of the problem.

But that will not work at all for a proper poll setup, where you don't
trigger any IRQs... It only happens to work for this case because you're
still triggering interrupts. But even in that case, it's not a real
solution, but I don't think that's the argument here ;-)

I see what the race is now, it's specific to IRQ driven polling. We
really should just disallow that, to be honest, it doesn't make any
sense. But let me think about if we can do a reasonable solution to this
that doesn't involve adding overhead for a proper setup.

>>> The request will be immediately resubmitted in io_sq_wq_submit_work(),
>>> potentially before the the fisrt submission has completed.  This creates
>>> a race where the original completion may set REQ_F_IOPOLL_COMPLETED in
>>> a freed submission request, overwriting the poisoned bits, casusing the
>>> panic below.
>>>
>>> 	do {
>>> 		ret = __io_submit_sqe(ctx, req, s, false);
>>> 		/*
>>> 		 * We can get EAGAIN for polled IO even though
>>> 		 * we're forcing a sync submission from here,
>>> 		 * since we can't wait for request slots on the
>>> 		 * block side.
>>> 		 */
>>> 		if (ret != -EAGAIN)
>>> 			break;
>>> 		cond_resched();
>>> 	} while (1);
>>>
>>> The suggested fix is to move a submitted request to the poll list
>>> unconditionally in polled mode.  The request can then be retried if
>>> necessary once the original submission has indeed completed.
>>>
>>> This bug raises an issue however since REQ_F_IOPOLL_COMPLETED is set
>>> in io_complete_rw_iopoll() from interrupt context.  NVMe polled queues
>>> however are not supposed to generate interrupts so it is not clear what
>>> is the reason for this apparent inconsitency.
>> It's because you're not running with poll queues for NVMe, hence you're
>> throwing a lot of performance away. Load nvme with poll_queues=X (or boot
>> with nvme.poll_queues=X, if built in) to have a set of separate queues
>> for polling. These don't have IRQs enabled, and it'll work much faster
>> for you.
>>
> That's what I did in fact.  I booted with nvme.poll_queues=36 (I figured
> 1 per core but I'm not sure what is a reasonable number).
> 
> I also checked that /sys/block/<nvme>/queue/io_poll = 1.
> 
> What's really odd is that the irq/sec numbers from mpstat and perf show
> equivalent values with/without polling (with/without fio "hipri" option)
> even though I can see from perf top that we are in fact polling in one
> case. I don't if I missing a step or something is off in my config.

Doesn't sound right. io_poll just says if polling is enabled, not if
it's IRQ driven or not. Try this, assuming nvme0n1 is the device in
question:

# cd /sys/kernel/debug/block/nvme0n1
# grep . hctx*/type
hctx0/type:default
hctx1/type:read
hctx2/type:read
hctx3/type:read
hctx4/type:poll
hctx5/type:poll
hctx6/type:poll
hctx7/type:poll

This is just a vm I have, hence just the 8 queues. But this shows a
proper mapping setup - 1 default queue, 3 read, and 4 poll queues.

Is nvme a module on your box? Or is it built-in?

-- 
Jens Axboe



* Re: [RFC 0/2] io_uring: examine request result only after completion
       [not found]       ` <fa82e9fc-caf7-a94a-ebff-536413e9ecce@oracle.com>
@ 2019-10-25 14:07         ` Jens Axboe
  2019-10-25 14:18           ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-25 14:07 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
> 
> On 10/24/19 3:31 PM, Jens Axboe wrote:
>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>> Running an fio test consistenly crashes the kernel with the trace included
>>>>> below.  The root cause seems to be the code in __io_submit_sqe() that
>>>>> checks the result of a request for -EAGAIN in polled mode, without
>>>>> ensuring first that the request has completed:
>>>>>
>>>>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>> 		if (req->result == -EAGAIN)
>>>>> 			return -EAGAIN;
>>>> I'm a little confused, because we should be holding the submission
>>>> reference to the request still at this point. So how is it going away?
>>>> I must be missing something...
>>> I don't think the submission reference is going away...
>>>
>>> I *think* the problem has to do with the fact that
>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>> called from interrupt context in my configuration and so there is a
>>> potential race between updating the request there and checking it in
>>> __io_submit_sqe().
>>>
>>> My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the
>>> code snippet above:
>>>
>>>        if (req->result == --EAGAIN) {
>>>
>>>            poll for REQ_F_IOPOLL_COMPLETED
>>>
>>>            return -EAGAIN;
>>>
>>> }
>>>
>>> and that got rid of the problem.
>> But that will not work at all for a proper poll setup, where you don't
>> trigger any IRQs... It only happens to work for this case because you're
>> still triggering interrupts. But even in that case, it's not a real
>> solution, but I don't think that's the argument here ;-)
> 
> Sure.
> 
> I'm just curious though as how it would break the poll case because
> io_complete_rw_iopoll() would still be called though through polling,
> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
> should be able to reliably check req->result.

It'd break the poll case because the task doing the submission is
generally also the one that finds and reaps completion. Hence if you
block that task just polling on that completion bit, you are preventing
that very task from going and reaping completions. The condition would
never become true, and you are now looping forever.

> The same poll test seemed to run ok with nvme interrupts not being
> triggered. Anyway, no argument that it's not needed!

A few reasons why it would make progress:

- You eventually trigger a timeout on the nvme side, as blk-mq finds the
  request hasn't been completed by an IRQ. But that's a 30 second ordeal
  before that event occurs.

- There were still interrupts enabled.

- You have two threads, one doing submission and one doing completions.
  Maybe using SQPOLL? If that's the case, then yes, it'd still work as
  you have separate threads for submission and completion.

For the "generic" case of just using one thread and IRQs disabled, it'd
deadlock.

>> I see what the race is now, it's specific to IRQ driven polling. We
>> really should just disallow that, to be honest, it doesn't make any
>> sense. But let me think about if we can do a reasonable solution to this
>> that doesn't involve adding overhead for a proper setup.
> 
> It's a nonsensical config in a way and so disallowing it would make
> the most sense.

Definitely. The nvme driver should not set .poll() if it doesn't have
non-irq poll queues. Something like this:


diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 869f462e6b6e..ac1b0a9387a4 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1594,6 +1594,16 @@ static const struct blk_mq_ops nvme_mq_ops = {
 	.poll		= nvme_poll,
 };
 
+static const struct blk_mq_ops nvme_mq_nopoll_ops = {
+	.queue_rq	= nvme_queue_rq,
+	.complete	= nvme_pci_complete_rq,
+	.commit_rqs	= nvme_commit_rqs,
+	.init_hctx	= nvme_init_hctx,
+	.init_request	= nvme_init_request,
+	.map_queues	= nvme_pci_map_queues,
+	.timeout	= nvme_timeout,
+};
+
 static void nvme_dev_remove_admin(struct nvme_dev *dev)
 {
 	if (dev->ctrl.admin_q && !blk_queue_dying(dev->ctrl.admin_q)) {
@@ -2269,11 +2279,14 @@ static void nvme_dev_add(struct nvme_dev *dev)
 	int ret;
 
 	if (!dev->ctrl.tagset) {
-		dev->tagset.ops = &nvme_mq_ops;
 		dev->tagset.nr_hw_queues = dev->online_queues - 1;
 		dev->tagset.nr_maps = 2; /* default + read */
-		if (dev->io_queues[HCTX_TYPE_POLL])
+		if (dev->io_queues[HCTX_TYPE_POLL]) {
+			dev->tagset.ops = &nvme_mq_ops;
 			dev->tagset.nr_maps++;
+		} else {
+			dev->tagset.ops = &nvme_mq_nopoll_ops;
+		}
 		dev->tagset.timeout = NVME_IO_TIMEOUT;
 		dev->tagset.numa_node = dev_to_node(dev->dev);
 		dev->tagset.queue_depth =

-- 
Jens Axboe



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-25 14:07         ` Jens Axboe
@ 2019-10-25 14:18           ` Jens Axboe
  2019-10-25 14:21             ` Jens Axboe
  2019-10-25 14:42             ` Bijan Mottahedeh
  0 siblings, 2 replies; 25+ messages in thread
From: Jens Axboe @ 2019-10-25 14:18 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/25/19 8:07 AM, Jens Axboe wrote:
> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>
>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>> Running an fio test consistenly crashes the kernel with the trace included
>>>>>> below.  The root cause seems to be the code in __io_submit_sqe() that
>>>>>> checks the result of a request for -EAGAIN in polled mode, without
>>>>>> ensuring first that the request has completed:
>>>>>>
>>>>>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>> 		if (req->result == -EAGAIN)
>>>>>> 			return -EAGAIN;
>>>>> I'm a little confused, because we should be holding the submission
>>>>> reference to the request still at this point. So how is it going away?
>>>>> I must be missing something...
>>>> I don't think the submission reference is going away...
>>>>
>>>> I *think* the problem has to do with the fact that
>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>> called from interrupt context in my configuration and so there is a
>>>> potential race between updating the request there and checking it in
>>>> __io_submit_sqe().
>>>>
>>>> My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the
>>>> code snippet above:
>>>>
>>>>         if (req->result == --EAGAIN) {
>>>>
>>>>             poll for REQ_F_IOPOLL_COMPLETED
>>>>
>>>>             return -EAGAIN;
>>>>
>>>> }
>>>>
>>>> and that got rid of the problem.
>>> But that will not work at all for a proper poll setup, where you don't
>>> trigger any IRQs... It only happens to work for this case because you're
>>> still triggering interrupts. But even in that case, it's not a real
>>> solution, but I don't think that's the argument here ;-)
>>
>> Sure.
>>
>> I'm just curious though as how it would break the poll case because
>> io_complete_rw_iopoll() would still be called though through polling,
>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>> should be able to reliably check req->result.
> 
> It'd break the poll case because the task doing the submission is
> generally also the one that finds and reaps completion. Hence if you
> block that task just polling on that completion bit, you are preventing
> that very task from going and reaping completions. The condition would
> never become true, and you are now looping forever.
> 
>> The same poll test seemed to run ok with nvme interrupts not being
>> triggered. Anyway, no argument that it's not needed!
> 
> A few reasons why it would make progress:
> 
> - You eventually trigger a timeout on the nvme side, as blk-mq finds the
>    request hasn't been completed by an IRQ. But that's a 30 second ordeal
>    before that event occurs.
> 
> - There was still interrupts enabled.
> 
> - You have two threads, one doing submission and one doing completions.
>    Maybe using SQPOLL? If that's the case, then yes, it'd still work as
>    you have separate threads for submission and completion.
> 
> For the "generic" case of just using one thread and IRQs disabled, it'd
> deadlock.
> 
>>> I see what the race is now, it's specific to IRQ driven polling. We
>>> really should just disallow that, to be honest, it doesn't make any
>>> sense. But let me think about if we can do a reasonable solution to this
>>> that doesn't involve adding overhead for a proper setup.
>>
>> It's a nonsensical config in a way and so disallowing it would make
>> the most sense.
> 
> Definitely. The nvme driver should not set .poll() if it doesn't have
> non-irq poll queues. Something like this:

Actually, we already disable polling if we don't have specific poll
queues:

        if (set->nr_maps > HCTX_TYPE_POLL &&
            set->map[HCTX_TYPE_POLL].nr_queues)
                blk_queue_flag_set(QUEUE_FLAG_POLL, q);

Did you see any timeouts in your tests? I wonder if the use-after-free
triggered when the timeout found the request while you had the busy-spin
logic we discussed previously.

-- 
Jens Axboe



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-25 14:18           ` Jens Axboe
@ 2019-10-25 14:21             ` Jens Axboe
  2019-10-29 19:17               ` Bijan Mottahedeh
  2019-10-25 14:42             ` Bijan Mottahedeh
  1 sibling, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/25/19 8:18 AM, Jens Axboe wrote:
> On 10/25/19 8:07 AM, Jens Axboe wrote:
>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>
>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>> Running an fio test consistenly crashes the kernel with the trace included
>>>>>>> below.  The root cause seems to be the code in __io_submit_sqe() that
>>>>>>> checks the result of a request for -EAGAIN in polled mode, without
>>>>>>> ensuring first that the request has completed:
>>>>>>>
>>>>>>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>> 		if (req->result == -EAGAIN)
>>>>>>> 			return -EAGAIN;
>>>>>> I'm a little confused, because we should be holding the submission
>>>>>> reference to the request still at this point. So how is it going away?
>>>>>> I must be missing something...
>>>>> I don't think the submission reference is going away...
>>>>>
>>>>> I *think* the problem has to do with the fact that
>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>> called from interrupt context in my configuration and so there is a
>>>>> potential race between updating the request there and checking it in
>>>>> __io_submit_sqe().
>>>>>
>>>>> My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the
>>>>> code snippet above:
>>>>>
>>>>>          if (req->result == --EAGAIN) {
>>>>>
>>>>>              poll for REQ_F_IOPOLL_COMPLETED
>>>>>
>>>>>              return -EAGAIN;
>>>>>
>>>>> }
>>>>>
>>>>> and that got rid of the problem.
>>>> But that will not work at all for a proper poll setup, where you don't
>>>> trigger any IRQs... It only happens to work for this case because you're
>>>> still triggering interrupts. But even in that case, it's not a real
>>>> solution, but I don't think that's the argument here ;-)
>>>
>>> Sure.
>>>
>>> I'm just curious though as how it would break the poll case because
>>> io_complete_rw_iopoll() would still be called though through polling,
>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>> should be able to reliably check req->result.
>>
>> It'd break the poll case because the task doing the submission is
>> generally also the one that finds and reaps completion. Hence if you
>> block that task just polling on that completion bit, you are preventing
>> that very task from going and reaping completions. The condition would
>> never become true, and you are now looping forever.
>>
>>> The same poll test seemed to run ok with nvme interrupts not being
>>> triggered. Anyway, no argument that it's not needed!
>>
>> A few reasons why it would make progress:
>>
>> - You eventually trigger a timeout on the nvme side, as blk-mq finds the
>>     request hasn't been completed by an IRQ. But that's a 30 second ordeal
>>     before that event occurs.
>>
>> - There was still interrupts enabled.
>>
>> - You have two threads, one doing submission and one doing completions.
>>     Maybe using SQPOLL? If that's the case, then yes, it'd still work as
>>     you have separate threads for submission and completion.
>>
>> For the "generic" case of just using one thread and IRQs disabled, it'd
>> deadlock.
>>
>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>> really should just disallow that, to be honest, it doesn't make any
>>>> sense. But let me think about if we can do a reasonable solution to this
>>>> that doesn't involve adding overhead for a proper setup.
>>>
>>> It's a nonsensical config in a way and so disallowing it would make
>>> the most sense.
>>
>> Definitely. The nvme driver should not set .poll() if it doesn't have
>> non-irq poll queues. Something like this:
> 
> Actually, we already disable polling if we don't have specific poll
> queues:
> 
>          if (set->nr_maps > HCTX_TYPE_POLL &&
>              set->map[HCTX_TYPE_POLL].nr_queues)
>                  blk_queue_flag_set(QUEUE_FLAG_POLL, q);
> 
> Did you see any timeouts in your tests? I wonder if the use-after-free
> triggered when the timeout found the request while you had the busy-spin
> logic we discussed previously.

Ah, but we still have fops->iopoll() set for that case. So we just won't
poll for it, it'll get completed by IRQ. So I do think we need to handle
this case in io_uring. I'll get back to you.

-- 
Jens Axboe



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-25 14:18           ` Jens Axboe
  2019-10-25 14:21             ` Jens Axboe
@ 2019-10-25 14:42             ` Bijan Mottahedeh
  1 sibling, 0 replies; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-25 14:42 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/25/19 7:18 AM, Jens Axboe wrote:
> On 10/25/19 8:07 AM, Jens Axboe wrote:
>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>> Running an fio test consistenly crashes the kernel with the trace included
>>>>>>> below.  The root cause seems to be the code in __io_submit_sqe() that
>>>>>>> checks the result of a request for -EAGAIN in polled mode, without
>>>>>>> ensuring first that the request has completed:
>>>>>>>
>>>>>>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>> 		if (req->result == -EAGAIN)
>>>>>>> 			return -EAGAIN;
>>>>>> I'm a little confused, because we should be holding the submission
>>>>>> reference to the request still at this point. So how is it going away?
>>>>>> I must be missing something...
>>>>> I don't think the submission reference is going away...
>>>>>
>>>>> I *think* the problem has to do with the fact that
>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>> called from interrupt context in my configuration and so there is a
>>>>> potential race between updating the request there and checking it in
>>>>> __io_submit_sqe().
>>>>>
>>>>> My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the
>>>>> code snippet above:
>>>>>
>>>>>          if (req->result == --EAGAIN) {
>>>>>
>>>>>              poll for REQ_F_IOPOLL_COMPLETED
>>>>>
>>>>>              return -EAGAIN;
>>>>>
>>>>> }
>>>>>
>>>>> and that got rid of the problem.
>>>> But that will not work at all for a proper poll setup, where you don't
>>>> trigger any IRQs... It only happens to work for this case because you're
>>>> still triggering interrupts. But even in that case, it's not a real
>>>> solution, but I don't think that's the argument here ;-)
>>> Sure.
>>>
>>> I'm just curious though as how it would break the poll case because
>>> io_complete_rw_iopoll() would still be called though through polling,
>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>> should be able to reliably check req->result.
>> It'd break the poll case because the task doing the submission is
>> generally also the one that finds and reaps completion. Hence if you
>> block that task just polling on that completion bit, you are preventing
>> that very task from going and reaping completions. The condition would
>> never become true, and you are now looping forever.
>>
>>> The same poll test seemed to run ok with nvme interrupts not being
>>> triggered. Anyway, no argument that it's not needed!
>> A few reasons why it would make progress:
>>
>> - You eventually trigger a timeout on the nvme side, as blk-mq finds the
>>     request hasn't been completed by an IRQ. But that's a 30 second ordeal
>>     before that event occurs.
>>
>> - There was still interrupts enabled.
>>
>> - You have two threads, one doing submission and one doing completions.
>>     Maybe using SQPOLL? If that's the case, then yes, it'd still work as
>>     you have separate threads for submission and completion.
>>
>> For the "generic" case of just using one thread and IRQs disabled, it'd
>> deadlock.
>>
>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>> really should just disallow that, to be honest, it doesn't make any
>>>> sense. But let me think about if we can do a reasonable solution to this
>>>> that doesn't involve adding overhead for a proper setup.
>>> It's a nonsensical config in a way and so disallowing it would make
>>> the most sense.
>> Definitely. The nvme driver should not set .poll() if it doesn't have
>> non-irq poll queues. Something like this:
> Actually, we already disable polling if we don't have specific poll
> queues:
>
>          if (set->nr_maps > HCTX_TYPE_POLL &&
>              set->map[HCTX_TYPE_POLL].nr_queues)
>                  blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>
> Did you see any timeouts in your tests? I wonder if the use-after-free
> triggered when the timeout found the request while you had the busy-spin
> logic we discussed previously.
>
I didn't notice any timeouts.  The error triggered without me making any 
changes.  The busy-spin workaround I mentioned before actually got rid 
of the error.


--bijan



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-25 14:21             ` Jens Axboe
@ 2019-10-29 19:17               ` Bijan Mottahedeh
  2019-10-29 19:23                 ` Bijan Mottahedeh
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-29 19:17 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/25/19 7:21 AM, Jens Axboe wrote:
> On 10/25/19 8:18 AM, Jens Axboe wrote:
>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>> Running an fio test consistenly crashes the kernel with the trace included
>>>>>>>> below.  The root cause seems to be the code in __io_submit_sqe() that
>>>>>>>> checks the result of a request for -EAGAIN in polled mode, without
>>>>>>>> ensuring first that the request has completed:
>>>>>>>>
>>>>>>>> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>> 		if (req->result == -EAGAIN)
>>>>>>>> 			return -EAGAIN;
>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>> reference to the request still at this point. So how is it going away?
>>>>>>> I must be missing something...
>>>>>> I don't think the submission reference is going away...
>>>>>>
>>>>>> I *think* the problem has to do with the fact that
>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>> called from interrupt context in my configuration and so there is a
>>>>>> potential race between updating the request there and checking it in
>>>>>> __io_submit_sqe().
>>>>>>
>>>>>> My first workaround was to simply poll for REQ_F_IOPOLL_COMPLETED in the
>>>>>> code snippet above:
>>>>>>
>>>>>>           if (req->result == --EAGAIN) {
>>>>>>
>>>>>>               poll for REQ_F_IOPOLL_COMPLETED
>>>>>>
>>>>>>               return -EAGAIN;
>>>>>>
>>>>>> }
>>>>>>
>>>>>> and that got rid of the problem.
>>>>> But that will not work at all for a proper poll setup, where you don't
>>>>> trigger any IRQs... It only happens to work for this case because you're
>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>> solution, but I don't think that's the argument here ;-)
>>>> Sure.
>>>>
>>>> I'm just curious though as how it would break the poll case because
>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>> should be able to reliably check req->result.
>>> It'd break the poll case because the task doing the submission is
>>> generally also the one that finds and reaps completion. Hence if you
>>> block that task just polling on that completion bit, you are preventing
>>> that very task from going and reaping completions. The condition would
>>> never become true, and you are now looping forever.
>>>
>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>> triggered. Anyway, no argument that it's not needed!
>>> A few reasons why it would make progress:
>>>
>>> - You eventually trigger a timeout on the nvme side, as blk-mq finds the
>>>      request hasn't been completed by an IRQ. But that's a 30 second ordeal
>>>      before that event occurs.
>>>
>>> - There was still interrupts enabled.
>>>
>>> - You have two threads, one doing submission and one doing completions.
>>>      Maybe using SQPOLL? If that's the case, then yes, it'd still work as
>>>      you have separate threads for submission and completion.
>>>
>>> For the "generic" case of just using one thread and IRQs disabled, it'd
>>> deadlock.
>>>
>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>> sense. But let me think about if we can do a reasonable solution to this
>>>>> that doesn't involve adding overhead for a proper setup.
>>>> It's a nonsensical config in a way and so disallowing it would make
>>>> the most sense.
>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>> non-irq poll queues. Something like this:
>> Actually, we already disable polling if we don't have specific poll
>> queues:
>>
>>           if (set->nr_maps > HCTX_TYPE_POLL &&
>>               set->map[HCTX_TYPE_POLL].nr_queues)
>>                   blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>
>> Did you see any timeouts in your tests? I wonder if the use-after-free
>> triggered when the timeout found the request while you had the busy-spin
>> logic we discussed previously.
> Ah, but we still have fops->iopoll() set for that case. So we just won't
> poll for it, it'll get completed by IRQ. So I do think we need to handle
> this case in io_uring. I'll get back to you.
>

I ran the same test on linux-next-20191029 in polled mode and got the
same use-after-free panic:

- I booted with nvme.poll_queues set and verified that all queues except
the default queue were of type poll

- I added three assertions to verify the following:

     - nvme_timeout() is not called

     - io_complete_rw_iopoll() is not called from interrupt context

     - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set

Is it possible that the race is there also in polled mode since a 
request submitted by one thread could conceivably be polled for and 
completed by a different thread, e.g. in 
io_uring_enter()->io_iopoll_check()?

--bijan




* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:17               ` Bijan Mottahedeh
@ 2019-10-29 19:23                 ` Bijan Mottahedeh
  2019-10-29 19:27                   ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-29 19:23 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>
> On 10/25/19 7:21 AM, Jens Axboe wrote:
>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>> Running an fio test consistenly crashes the kernel with the 
>>>>>>>>> trace included
>>>>>>>>> below.  The root cause seems to be the code in 
>>>>>>>>> __io_submit_sqe() that
>>>>>>>>> checks the result of a request for -EAGAIN in polled mode, 
>>>>>>>>> without
>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>
>>>>>>>>>     if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>         if (req->result == -EAGAIN)
>>>>>>>>>             return -EAGAIN;
>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>> reference to the request still at this point. So how is it 
>>>>>>>> going away?
>>>>>>>> I must be missing something...
>>>>>>> I don't think the submission reference is going away...
>>>>>>>
>>>>>>> I *think* the problem has to do with the fact that
>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>> potential race between updating the request there and checking 
>>>>>>> it in
>>>>>>> __io_submit_sqe().
>>>>>>>
>>>>>>> My first workaround was to simply poll for 
>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>> code snippet above:
>>>>>>>
>>>>>>>           if (req->result == --EAGAIN) {
>>>>>>>
>>>>>>>               poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>
>>>>>>>               return -EAGAIN;
>>>>>>>
>>>>>>> }
>>>>>>>
>>>>>>> and that got rid of the problem.
>>>>>> But that will not work at all for a proper poll setup, where you 
>>>>>> don't
>>>>>> trigger any IRQs... It only happens to work for this case because 
>>>>>> you're
>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>> solution, but I don't think that's the argument here ;-)
>>>>> Sure.
>>>>>
>>>>> I'm just curious though as how it would break the poll case because
>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>> should be able to reliably check req->result.
>>>> It'd break the poll case because the task doing the submission is
>>>> generally also the one that finds and reaps completion. Hence if you
>>>> block that task just polling on that completion bit, you are 
>>>> preventing
>>>> that very task from going and reaping completions. The condition would
>>>> never become true, and you are now looping forever.
>>>>
>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>> triggered. Anyway, no argument that it's not needed!
>>>> A few reasons why it would make progress:
>>>>
>>>> - You eventually trigger a timeout on the nvme side, as blk-mq 
>>>> finds the
>>>>      request hasn't been completed by an IRQ. But that's a 30 
>>>> second ordeal
>>>>      before that event occurs.
>>>>
>>>> - There was still interrupts enabled.
>>>>
>>>> - You have two threads, one doing submission and one doing 
>>>> completions.
>>>>      Maybe using SQPOLL? If that's the case, then yes, it'd still 
>>>> work as
>>>>      you have separate threads for submission and completion.
>>>>
>>>> For the "generic" case of just using one thread and IRQs disabled, 
>>>> it'd
>>>> deadlock.
>>>>
>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>> sense. But let me think about if we can do a reasonable solution 
>>>>>> to this
>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>> the most sense.
>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>> non-irq poll queues. Something like this:
>>> Actually, we already disable polling if we don't have specific poll
>>> queues:
>>>
>>>           if (set->nr_maps > HCTX_TYPE_POLL &&
>>>               set->map[HCTX_TYPE_POLL].nr_queues)
>>>                   blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>
>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>> triggered when the timeout found the request while you had the 
>>> busy-spin
>>> logic we discussed previously.
>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>> this case in io_uring. I'll get back to you.
>>
>
> I ran the same test on linux-next-20191029 in polled mode and got the 
> same free-after-user panic:
>
> - I booted with nvme.poll_queues set and verified that all queues 
> except default where of type poll
>
> - I added three assertions to verify the following:
>
>     - nvme_timeout() is not called
>
>     - io_complete_rw_iopoll() is not called from interrupt context
>
>     - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>
> Is it possible that the race is there also in polled mode since a 
> request submitted by one thread could conceivably be polled for and 
> completed by a different thread, e.g. in 
> io_uring_enter()->io_iopoll_check()?
>
> --bijan
>
>
I also tested my RFC again with 1 thread and with queue depths of 1 to 
1024 in multiples of 8 and didn't see any hangs.

Just to be clear, the busy-spin logic discussed before was only a
workaround and not in the RFC.

--bijan



* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:23                 ` Bijan Mottahedeh
@ 2019-10-29 19:27                   ` Jens Axboe
  2019-10-29 19:31                     ` Bijan Mottahedeh
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-29 19:27 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
> 
> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>
>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>> trace included
>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>> without
>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>
>>>>>>>>>>      if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>          if (req->result == -EAGAIN)
>>>>>>>>>>              return -EAGAIN;
>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>> going away?
>>>>>>>>> I must be missing something...
>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>
>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>> potential race between updating the request there and checking
>>>>>>>> it in
>>>>>>>> __io_submit_sqe().
>>>>>>>>
>>>>>>>> My first workaround was to simply poll for
>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>> code snippet above:
>>>>>>>>
>>>>>>>>            if (req->result == --EAGAIN) {
>>>>>>>>
>>>>>>>>                poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>
>>>>>>>>                return -EAGAIN;
>>>>>>>>
>>>>>>>> }
>>>>>>>>
>>>>>>>> and that got rid of the problem.
>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>> don't
>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>> you're
>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>> Sure.
>>>>>>
>>>>>> I'm just curious though as how it would break the poll case because
>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>> should be able to reliably check req->result.
>>>>> It'd break the poll case because the task doing the submission is
>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>> block that task just polling on that completion bit, you are
>>>>> preventing
>>>>> that very task from going and reaping completions. The condition would
>>>>> never become true, and you are now looping forever.
>>>>>
>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>> A few reasons why it would make progress:
>>>>>
>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>> finds the
>>>>>       request hasn't been completed by an IRQ. But that's a 30
>>>>> second ordeal
>>>>>       before that event occurs.
>>>>>
>>>>> - There was still interrupts enabled.
>>>>>
>>>>> - You have two threads, one doing submission and one doing
>>>>> completions.
>>>>>       Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>> work as
>>>>>       you have separate threads for submission and completion.
>>>>>
>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>> it'd
>>>>> deadlock.
>>>>>
>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>> to this
>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>> the most sense.
>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>> non-irq poll queues. Something like this:
>>>> Actually, we already disable polling if we don't have specific poll
>>>> queues:
>>>>
>>>>            if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>                set->map[HCTX_TYPE_POLL].nr_queues)
>>>>                    blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>
>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>> triggered when the timeout found the request while you had the
>>>> busy-spin
>>>> logic we discussed previously.
>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>> this case in io_uring. I'll get back to you.
>>>
>>
>> I ran the same test on linux-next-20191029 in polled mode and got the
>> same free-after-user panic:
>>
>> - I booted with nvme.poll_queues set and verified that all queues
>> except default where of type poll
>>
>> - I added three assertions to verify the following:
>>
>>      - nvme_timeout() is not called
>>
>>      - io_complete_rw_iopoll() is not called from interrupt context
>>
>>      - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>
>> Is it possible that the race is there also in polled mode since a
>> request submitted by one thread could conceivably be polled for and
>> completed by a different thread, e.g. in
>> io_uring_enter()->io_iopoll_check()?
>>
>> --bijan
>>
>>
> I also tested my RFC again with 1 thread and with queue depths of 1 to
> 1024 in multiples of 8 and didn't see any hangs.
> 
> Just to be clear, the busy-spin logic discussed before was only a
> workaround an not in the RFC.

What is your exact test case?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:27                   ` Jens Axboe
@ 2019-10-29 19:31                     ` Bijan Mottahedeh
  2019-10-29 19:33                       ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-29 19:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/29/19 12:27 PM, Jens Axboe wrote:
> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>> trace included
>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>> without
>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>
>>>>>>>>>>>       if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>           if (req->result == -EAGAIN)
>>>>>>>>>>>               return -EAGAIN;
>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>> going away?
>>>>>>>>>> I must be missing something...
>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>
>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>> it in
>>>>>>>>> __io_submit_sqe().
>>>>>>>>>
>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>> code snippet above:
>>>>>>>>>
>>>>>>>>>             if (req->result == --EAGAIN) {
>>>>>>>>>
>>>>>>>>>                 poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>
>>>>>>>>>                 return -EAGAIN;
>>>>>>>>>
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> and that got rid of the problem.
>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>> don't
>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>> you're
>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>> Sure.
>>>>>>>
>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>> should be able to reliably check req->result.
>>>>>> It'd break the poll case because the task doing the submission is
>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>> block that task just polling on that completion bit, you are
>>>>>> preventing
>>>>>> that very task from going and reaping completions. The condition would
>>>>>> never become true, and you are now looping forever.
>>>>>>
>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>> A few reasons why it would make progress:
>>>>>>
>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>> finds the
>>>>>>        request hasn't been completed by an IRQ. But that's a 30
>>>>>> second ordeal
>>>>>>        before that event occurs.
>>>>>>
>>>>>> - There was still interrupts enabled.
>>>>>>
>>>>>> - You have two threads, one doing submission and one doing
>>>>>> completions.
>>>>>>        Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>> work as
>>>>>>        you have separate threads for submission and completion.
>>>>>>
>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>> it'd
>>>>>> deadlock.
>>>>>>
>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>> to this
>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>> the most sense.
>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>> non-irq poll queues. Something like this:
>>>>> Actually, we already disable polling if we don't have specific poll
>>>>> queues:
>>>>>
>>>>>             if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>                 set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>                     blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>
>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>> triggered when the timeout found the request while you had the
>>>>> busy-spin
>>>>> logic we discussed previously.
>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>> this case in io_uring. I'll get back to you.
>>>>
>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>> same free-after-user panic:
>>>
>>> - I booted with nvme.poll_queues set and verified that all queues
>>> except default where of type poll
>>>
>>> - I added three assertions to verify the following:
>>>
>>>       - nvme_timeout() is not called
>>>
>>>       - io_complete_rw_iopoll() is not called from interrupt context
>>>
>>>       - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>
>>> Is it possible that the race is there also in polled mode since a
>>> request submitted by one thread could conceivably be polled for and
>>> completed by a different thread, e.g. in
>>> io_uring_enter()->io_iopoll_check()?
>>>
>>> --bijan
>>>
>>>
>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>> 1024 in multiples of 8 and didn't see any hangs.
>>
>> Just to be clear, the busy-spin logic discussed before was only a
>> workaround an not in the RFC.
> What is your exact test case?
>
See original cover letter.  I can reproduce the failure with numjobs between 8 and 32.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:31                     ` Bijan Mottahedeh
@ 2019-10-29 19:33                       ` Jens Axboe
  2019-10-29 19:40                         ` Bijan Mottahedeh
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-29 19:33 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/29/19 1:31 PM, Bijan Mottahedeh wrote:
> 
> On 10/29/19 12:27 PM, Jens Axboe wrote:
>> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>>> trace included
>>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>>> without
>>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>>
>>>>>>>>>>>>        if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>>            if (req->result == -EAGAIN)
>>>>>>>>>>>>                return -EAGAIN;
>>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>>> going away?
>>>>>>>>>>> I must be missing something...
>>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>>
>>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>>> it in
>>>>>>>>>> __io_submit_sqe().
>>>>>>>>>>
>>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>>> code snippet above:
>>>>>>>>>>
>>>>>>>>>>              if (req->result == --EAGAIN) {
>>>>>>>>>>
>>>>>>>>>>                  poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>>
>>>>>>>>>>                  return -EAGAIN;
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> and that got rid of the problem.
>>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>>> don't
>>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>>> you're
>>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>>> Sure.
>>>>>>>>
>>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>>> should be able to reliably check req->result.
>>>>>>> It'd break the poll case because the task doing the submission is
>>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>>> block that task just polling on that completion bit, you are
>>>>>>> preventing
>>>>>>> that very task from going and reaping completions. The condition would
>>>>>>> never become true, and you are now looping forever.
>>>>>>>
>>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>>> A few reasons why it would make progress:
>>>>>>>
>>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>>> finds the
>>>>>>>         request hasn't been completed by an IRQ. But that's a 30
>>>>>>> second ordeal
>>>>>>>         before that event occurs.
>>>>>>>
>>>>>>> - There was still interrupts enabled.
>>>>>>>
>>>>>>> - You have two threads, one doing submission and one doing
>>>>>>> completions.
>>>>>>>         Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>>> work as
>>>>>>>         you have separate threads for submission and completion.
>>>>>>>
>>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>>> it'd
>>>>>>> deadlock.
>>>>>>>
>>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>>> to this
>>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>>> the most sense.
>>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>>> non-irq poll queues. Something like this:
>>>>>> Actually, we already disable polling if we don't have specific poll
>>>>>> queues:
>>>>>>
>>>>>>              if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>>                  set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>>                      blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>>
>>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>>> triggered when the timeout found the request while you had the
>>>>>> busy-spin
>>>>>> logic we discussed previously.
>>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>>> this case in io_uring. I'll get back to you.
>>>>>
>>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>>> same free-after-user panic:
>>>>
>>>> - I booted with nvme.poll_queues set and verified that all queues
>>>> except default where of type poll
>>>>
>>>> - I added three assertions to verify the following:
>>>>
>>>>        - nvme_timeout() is not called
>>>>
>>>>        - io_complete_rw_iopoll() is not called from interrupt context
>>>>
>>>>        - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>>
>>>> Is it possible that the race is there also in polled mode since a
>>>> request submitted by one thread could conceivably be polled for and
>>>> completed by a different thread, e.g. in
>>>> io_uring_enter()->io_iopoll_check()?
>>>>
>>>> --bijan
>>>>
>>>>
>>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>>> 1024 in multiples of 8 and didn't see any hangs.
>>>
>>> Just to be clear, the busy-spin logic discussed before was only a
>>> workaround an not in the RFC.
>> What is your exact test case?
>>
> See original cover letter.  I can reproduce the failure with numjobs
> between 8 and 32.

And how many poll queues are you using?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:33                       ` Jens Axboe
@ 2019-10-29 19:40                         ` Bijan Mottahedeh
  2019-10-29 19:46                           ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-29 19:40 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/29/19 12:33 PM, Jens Axboe wrote:
> On 10/29/19 1:31 PM, Bijan Mottahedeh wrote:
>> On 10/29/19 12:27 PM, Jens Axboe wrote:
>>> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>>>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>>>> trace included
>>>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>>>> without
>>>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>>>
>>>>>>>>>>>>>         if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>>>             if (req->result == -EAGAIN)
>>>>>>>>>>>>>                 return -EAGAIN;
>>>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>>>> going away?
>>>>>>>>>>>> I must be missing something...
>>>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>>>
>>>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>>>> it in
>>>>>>>>>>> __io_submit_sqe().
>>>>>>>>>>>
>>>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>>>> code snippet above:
>>>>>>>>>>>
>>>>>>>>>>>               if (req->result == --EAGAIN) {
>>>>>>>>>>>
>>>>>>>>>>>                   poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>>>
>>>>>>>>>>>                   return -EAGAIN;
>>>>>>>>>>>
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> and that got rid of the problem.
>>>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>>>> don't
>>>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>>>> you're
>>>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>>>> Sure.
>>>>>>>>>
>>>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>>>> should be able to reliably check req->result.
>>>>>>>> It'd break the poll case because the task doing the submission is
>>>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>>>> block that task just polling on that completion bit, you are
>>>>>>>> preventing
>>>>>>>> that very task from going and reaping completions. The condition would
>>>>>>>> never become true, and you are now looping forever.
>>>>>>>>
>>>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>>>> A few reasons why it would make progress:
>>>>>>>>
>>>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>>>> finds the
>>>>>>>>          request hasn't been completed by an IRQ. But that's a 30
>>>>>>>> second ordeal
>>>>>>>>          before that event occurs.
>>>>>>>>
>>>>>>>> - There was still interrupts enabled.
>>>>>>>>
>>>>>>>> - You have two threads, one doing submission and one doing
>>>>>>>> completions.
>>>>>>>>          Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>>>> work as
>>>>>>>>          you have separate threads for submission and completion.
>>>>>>>>
>>>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>>>> it'd
>>>>>>>> deadlock.
>>>>>>>>
>>>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>>>> to this
>>>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>>>> the most sense.
>>>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>>>> non-irq poll queues. Something like this:
>>>>>>> Actually, we already disable polling if we don't have specific poll
>>>>>>> queues:
>>>>>>>
>>>>>>>               if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>>>                   set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>>>                       blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>>>
>>>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>>>> triggered when the timeout found the request while you had the
>>>>>>> busy-spin
>>>>>>> logic we discussed previously.
>>>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>>>> this case in io_uring. I'll get back to you.
>>>>>>
>>>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>>>> same free-after-user panic:
>>>>>
>>>>> - I booted with nvme.poll_queues set and verified that all queues
>>>>> except default where of type poll
>>>>>
>>>>> - I added three assertions to verify the following:
>>>>>
>>>>>         - nvme_timeout() is not called
>>>>>
>>>>>         - io_complete_rw_iopoll() is not called from interrupt context
>>>>>
>>>>>         - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>>>
>>>>> Is it possible that the race is there also in polled mode since a
>>>>> request submitted by one thread could conceivably be polled for and
>>>>> completed by a different thread, e.g. in
>>>>> io_uring_enter()->io_iopoll_check()?
>>>>>
>>>>> --bijan
>>>>>
>>>>>
>>>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>>>> 1024 in multiples of 8 and didn't see any hangs.
>>>>
>>>> Just to be clear, the busy-spin logic discussed before was only a
>>>> workaround an not in the RFC.
>>> What is your exact test case?
>>>
>> See original cover letter.  I can reproduce the failure with numjobs
>> between 8 and 32.
> And how many poll queues are you using?
>
30

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:40                         ` Bijan Mottahedeh
@ 2019-10-29 19:46                           ` Jens Axboe
  2019-10-29 19:51                             ` Bijan Mottahedeh
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-29 19:46 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/29/19 1:40 PM, Bijan Mottahedeh wrote:
> 
> On 10/29/19 12:33 PM, Jens Axboe wrote:
>> On 10/29/19 1:31 PM, Bijan Mottahedeh wrote:
>>> On 10/29/19 12:27 PM, Jens Axboe wrote:
>>>> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>>>>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>>>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>>>>> trace included
>>>>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>>>>> without
>>>>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>          if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>>>>              if (req->result == -EAGAIN)
>>>>>>>>>>>>>>                  return -EAGAIN;
>>>>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>>>>> going away?
>>>>>>>>>>>>> I must be missing something...
>>>>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>>>>
>>>>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>>>>> it in
>>>>>>>>>>>> __io_submit_sqe().
>>>>>>>>>>>>
>>>>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>>>>> code snippet above:
>>>>>>>>>>>>
>>>>>>>>>>>>                if (req->result == --EAGAIN) {
>>>>>>>>>>>>
>>>>>>>>>>>>                    poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>>>>
>>>>>>>>>>>>                    return -EAGAIN;
>>>>>>>>>>>>
>>>>>>>>>>>> }
>>>>>>>>>>>>
>>>>>>>>>>>> and that got rid of the problem.
>>>>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>>>>> don't
>>>>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>>>>> you're
>>>>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>>>>> Sure.
>>>>>>>>>>
>>>>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>>>>> should be able to reliably check req->result.
>>>>>>>>> It'd break the poll case because the task doing the submission is
>>>>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>>>>> block that task just polling on that completion bit, you are
>>>>>>>>> preventing
>>>>>>>>> that very task from going and reaping completions. The condition would
>>>>>>>>> never become true, and you are now looping forever.
>>>>>>>>>
>>>>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>>>>> A few reasons why it would make progress:
>>>>>>>>>
>>>>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>>>>> finds the
>>>>>>>>>           request hasn't been completed by an IRQ. But that's a 30
>>>>>>>>> second ordeal
>>>>>>>>>           before that event occurs.
>>>>>>>>>
>>>>>>>>> - There was still interrupts enabled.
>>>>>>>>>
>>>>>>>>> - You have two threads, one doing submission and one doing
>>>>>>>>> completions.
>>>>>>>>>           Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>>>>> work as
>>>>>>>>>           you have separate threads for submission and completion.
>>>>>>>>>
>>>>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>>>>> it'd
>>>>>>>>> deadlock.
>>>>>>>>>
>>>>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>>>>> to this
>>>>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>>>>> the most sense.
>>>>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>>>>> non-irq poll queues. Something like this:
>>>>>>>> Actually, we already disable polling if we don't have specific poll
>>>>>>>> queues:
>>>>>>>>
>>>>>>>>                if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>>>>                    set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>>>>                        blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>>>>
>>>>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>>>>> triggered when the timeout found the request while you had the
>>>>>>>> busy-spin
>>>>>>>> logic we discussed previously.
>>>>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>>>>> this case in io_uring. I'll get back to you.
>>>>>>>
>>>>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>>>>> same free-after-user panic:
>>>>>>
>>>>>> - I booted with nvme.poll_queues set and verified that all queues
>>>>>> except default where of type poll
>>>>>>
>>>>>> - I added three assertions to verify the following:
>>>>>>
>>>>>>          - nvme_timeout() is not called
>>>>>>
>>>>>>          - io_complete_rw_iopoll() is not called from interrupt context
>>>>>>
>>>>>>          - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>>>>
>>>>>> Is it possible that the race is there also in polled mode since a
>>>>>> request submitted by one thread could conceivably be polled for and
>>>>>> completed by a different thread, e.g. in
>>>>>> io_uring_enter()->io_iopoll_check()?
>>>>>>
>>>>>> --bijan
>>>>>>
>>>>>>
>>>>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>>>>> 1024 in multiples of 8 and didn't see any hangs.
>>>>>
>>>>> Just to be clear, the busy-spin logic discussed before was only a
>>>>> workaround an not in the RFC.
>>>> What is your exact test case?
>>>>
>>> See original cover letter.  I can reproduce the failure with numjobs
>>> between 8 and 32.
>> And how many poll queues are you using?
>>
> 30

And how many threads/cores in the box? Trying to get a sense for how
many CPUs share a single poll queue, if any.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:46                           ` Jens Axboe
@ 2019-10-29 19:51                             ` Bijan Mottahedeh
  2019-10-29 19:52                               ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-29 19:51 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/29/19 12:46 PM, Jens Axboe wrote:
> On 10/29/19 1:40 PM, Bijan Mottahedeh wrote:
>> On 10/29/19 12:33 PM, Jens Axboe wrote:
>>> On 10/29/19 1:31 PM, Bijan Mottahedeh wrote:
>>>> On 10/29/19 12:27 PM, Jens Axboe wrote:
>>>>> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>>>>>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>>>>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>>>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>>>>>> trace included
>>>>>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>>>>>> without
>>>>>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>           if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>>>>>               if (req->result == -EAGAIN)
>>>>>>>>>>>>>>>                   return -EAGAIN;
>>>>>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>>>>>> going away?
>>>>>>>>>>>>>> I must be missing something...
>>>>>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>>>>>
>>>>>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>>>>>> it in
>>>>>>>>>>>>> __io_submit_sqe().
>>>>>>>>>>>>>
>>>>>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>>>>>> code snippet above:
>>>>>>>>>>>>>
>>>>>>>>>>>>>                 if (req->result == --EAGAIN) {
>>>>>>>>>>>>>
>>>>>>>>>>>>>                     poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>>>>>
>>>>>>>>>>>>>                     return -EAGAIN;
>>>>>>>>>>>>>
>>>>>>>>>>>>> }
>>>>>>>>>>>>>
>>>>>>>>>>>>> and that got rid of the problem.
>>>>>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>>>>>> don't
>>>>>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>>>>>> you're
>>>>>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>>>>>> Sure.
>>>>>>>>>>>
>>>>>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>>>>>> should be able to reliably check req->result.
>>>>>>>>>> It'd break the poll case because the task doing the submission is
>>>>>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>>>>>> block that task just polling on that completion bit, you are
>>>>>>>>>> preventing
>>>>>>>>>> that very task from going and reaping completions. The condition would
>>>>>>>>>> never become true, and you are now looping forever.
>>>>>>>>>>
>>>>>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>>>>>> A few reasons why it would make progress:
>>>>>>>>>>
>>>>>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>>>>>> finds the
>>>>>>>>>>            request hasn't been completed by an IRQ. But that's a 30
>>>>>>>>>> second ordeal
>>>>>>>>>>            before that event occurs.
>>>>>>>>>>
>>>>>>>>>> - There was still interrupts enabled.
>>>>>>>>>>
>>>>>>>>>> - You have two threads, one doing submission and one doing
>>>>>>>>>> completions.
>>>>>>>>>>            Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>>>>>> work as
>>>>>>>>>>            you have separate threads for submission and completion.
>>>>>>>>>>
>>>>>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>>>>>> it'd
>>>>>>>>>> deadlock.
>>>>>>>>>>
>>>>>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>>>>>> to this
>>>>>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>>>>>> the most sense.
>>>>>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>>>>>> non-irq poll queues. Something like this:
>>>>>>>>> Actually, we already disable polling if we don't have specific poll
>>>>>>>>> queues:
>>>>>>>>>
>>>>>>>>>                 if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>>>>>                     set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>>>>>                         blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>>>>>
>>>>>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>>>>>> triggered when the timeout found the request while you had the
>>>>>>>>> busy-spin
>>>>>>>>> logic we discussed previously.
>>>>>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>>>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>>>>>> this case in io_uring. I'll get back to you.
>>>>>>>>
>>>>>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>>>>>> same free-after-user panic:
>>>>>>>
>>>>>>> - I booted with nvme.poll_queues set and verified that all queues
>>>>>>> except default where of type poll
>>>>>>>
>>>>>>> - I added three assertions to verify the following:
>>>>>>>
>>>>>>>           - nvme_timeout() is not called
>>>>>>>
>>>>>>>           - io_complete_rw_iopoll() is not called from interrupt context
>>>>>>>
>>>>>>>           - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>>>>>
>>>>>>> Is it possible that the race is there also in polled mode since a
>>>>>>> request submitted by one thread could conceivably be polled for and
>>>>>>> completed by a different thread, e.g. in
>>>>>>> io_uring_enter()->io_iopoll_check()?
>>>>>>>
>>>>>>> --bijan
>>>>>>>
>>>>>>>
>>>>>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>>>>>> 1024 in multiples of 8 and didn't see any hangs.
>>>>>>
>>>>>> Just to be clear, the busy-spin logic discussed before was only a
>>>>>> workaround an not in the RFC.
>>>>> What is your exact test case?
>>>>>
>>>> See original cover letter.  I can reproduce the failure with numjobs
>>>> between 8 and 32.
>>> And how many poll queues are you using?
>>>
>> 30
> And how many threads/cores in the box? Trying to get a sense for how
> many CPUs share a single poll queue, if any.
>
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:51                             ` Bijan Mottahedeh
@ 2019-10-29 19:52                               ` Jens Axboe
  2019-10-30  1:02                                 ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-29 19:52 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/29/19 1:51 PM, Bijan Mottahedeh wrote:
> 
> On 10/29/19 12:46 PM, Jens Axboe wrote:
>> On 10/29/19 1:40 PM, Bijan Mottahedeh wrote:
>>> On 10/29/19 12:33 PM, Jens Axboe wrote:
>>>> On 10/29/19 1:31 PM, Bijan Mottahedeh wrote:
>>>>> On 10/29/19 12:27 PM, Jens Axboe wrote:
>>>>>> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>>>>>>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>>>>>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>>>>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>>>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>>>>>>> trace included
>>>>>>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>>>>>>> without
>>>>>>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>            if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>>>>>>                if (req->result == -EAGAIN)
>>>>>>>>>>>>>>>>                    return -EAGAIN;
>>>>>>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>>>>>>> going away?
>>>>>>>>>>>>>>> I must be missing something...
>>>>>>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>>>>>>> it in
>>>>>>>>>>>>>> __io_submit_sqe().
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>>>>>>> code snippet above:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>                  if (req->result == --EAGAIN) {
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>                      poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>                      return -EAGAIN;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> and that got rid of the problem.
>>>>>>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>>>>>>> don't
>>>>>>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>>>>>>> you're
>>>>>>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>>>>>>> Sure.
>>>>>>>>>>>>
>>>>>>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>>>>>>> should be able to reliably check req->result.
>>>>>>>>>>> It'd break the poll case because the task doing the submission is
>>>>>>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>>>>>>> block that task just polling on that completion bit, you are
>>>>>>>>>>> preventing
>>>>>>>>>>> that very task from going and reaping completions. The condition would
>>>>>>>>>>> never become true, and you are now looping forever.
>>>>>>>>>>>
>>>>>>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>>>>>>> A few reasons why it would make progress:
>>>>>>>>>>>
>>>>>>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>>>>>>> finds the
>>>>>>>>>>>             request hasn't been completed by an IRQ. But that's a 30
>>>>>>>>>>> second ordeal
>>>>>>>>>>>             before that event occurs.
>>>>>>>>>>>
>>>>>>>>>>> - There was still interrupts enabled.
>>>>>>>>>>>
>>>>>>>>>>> - You have two threads, one doing submission and one doing
>>>>>>>>>>> completions.
>>>>>>>>>>>             Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>>>>>>> work as
>>>>>>>>>>>             you have separate threads for submission and completion.
>>>>>>>>>>>
>>>>>>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>>>>>>> it'd
>>>>>>>>>>> deadlock.
>>>>>>>>>>>
>>>>>>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>>>>>>> to this
>>>>>>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>>>>>>> the most sense.
>>>>>>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>>>>>>> non-irq poll queues. Something like this:
>>>>>>>>>> Actually, we already disable polling if we don't have specific poll
>>>>>>>>>> queues:
>>>>>>>>>>
>>>>>>>>>>                  if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>>>>>>                      set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>>>>>>                          blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>>>>>>
>>>>>>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>>>>>>> triggered when the timeout found the request while you had the
>>>>>>>>>> busy-spin
>>>>>>>>>> logic we discussed previously.
>>>>>>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>>>>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>>>>>>> this case in io_uring. I'll get back to you.
>>>>>>>>>
>>>>>>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>>>>>>> same free-after-user panic:
>>>>>>>>
>>>>>>>> - I booted with nvme.poll_queues set and verified that all queues
>>>>>>>> except default where of type poll
>>>>>>>>
>>>>>>>> - I added three assertions to verify the following:
>>>>>>>>
>>>>>>>>            - nvme_timeout() is not called
>>>>>>>>
>>>>>>>>            - io_complete_rw_iopoll() is not called from interrupt context
>>>>>>>>
>>>>>>>>            - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>>>>>>
>>>>>>>> Is it possible that the race is there also in polled mode since a
>>>>>>>> request submitted by one thread could conceivably be polled for and
>>>>>>>> completed by a different thread, e.g. in
>>>>>>>> io_uring_enter()->io_iopoll_check()?
>>>>>>>>
>>>>>>>> --bijan
>>>>>>>>
>>>>>>>>
>>>>>>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>>>>>>> 1024 in multiples of 8 and didn't see any hangs.
>>>>>>>
>>>>>>> Just to be clear, the busy-spin logic discussed before was only a
>>>>>>> workaround an not in the RFC.
>>>>>> What is your exact test case?
>>>>>>
>>>>> See original cover letter.  I can reproduce the failure with numjobs
>>>>> between 8 and 32.
>>>> And how many poll queues are you using?
>>>>
>>> 30
>> And how many threads/cores in the box? Trying to get a sense for how
>> many CPUs share a single poll queue, if any.
>>
> Thread(s) per core:    2
> Core(s) per socket:    8
> Socket(s):             2

OK, so 2*8*2 == 32 threads, hence some threads will share a poll queue.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-29 19:52                               ` Jens Axboe
@ 2019-10-30  1:02                                 ` Jens Axboe
  2019-10-30 14:02                                   ` Bijan Mottahedeh
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-30  1:02 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/29/19 1:52 PM, Jens Axboe wrote:
> On 10/29/19 1:51 PM, Bijan Mottahedeh wrote:
>>
>> On 10/29/19 12:46 PM, Jens Axboe wrote:
>>> On 10/29/19 1:40 PM, Bijan Mottahedeh wrote:
>>>> On 10/29/19 12:33 PM, Jens Axboe wrote:
>>>>> On 10/29/19 1:31 PM, Bijan Mottahedeh wrote:
>>>>>> On 10/29/19 12:27 PM, Jens Axboe wrote:
>>>>>>> On 10/29/19 1:23 PM, Bijan Mottahedeh wrote:
>>>>>>>> On 10/29/19 12:17 PM, Bijan Mottahedeh wrote:
>>>>>>>>> On 10/25/19 7:21 AM, Jens Axboe wrote:
>>>>>>>>>> On 10/25/19 8:18 AM, Jens Axboe wrote:
>>>>>>>>>>> On 10/25/19 8:07 AM, Jens Axboe wrote:
>>>>>>>>>>>> On 10/25/19 7:46 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>> On 10/24/19 3:31 PM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 10/24/19 1:18 PM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>>>> On 10/24/19 10:09 AM, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 10/24/19 3:18 AM, Bijan Mottahedeh wrote:
>>>>>>>>>>>>>>>>> Running an fio test consistenly crashes the kernel with the
>>>>>>>>>>>>>>>>> trace included
>>>>>>>>>>>>>>>>> below.  The root cause seems to be the code in
>>>>>>>>>>>>>>>>> __io_submit_sqe() that
>>>>>>>>>>>>>>>>> checks the result of a request for -EAGAIN in polled mode,
>>>>>>>>>>>>>>>>> without
>>>>>>>>>>>>>>>>> ensuring first that the request has completed:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>             if (ctx->flags & IORING_SETUP_IOPOLL) {
>>>>>>>>>>>>>>>>>                 if (req->result == -EAGAIN)
>>>>>>>>>>>>>>>>>                     return -EAGAIN;
>>>>>>>>>>>>>>>> I'm a little confused, because we should be holding the submission
>>>>>>>>>>>>>>>> reference to the request still at this point. So how is it
>>>>>>>>>>>>>>>> going away?
>>>>>>>>>>>>>>>> I must be missing something...
>>>>>>>>>>>>>>> I don't think the submission reference is going away...
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I *think* the problem has to do with the fact that
>>>>>>>>>>>>>>> io_complete_rw_iopoll() which sets REQ_F_IOPOLL_COMPLETED is being
>>>>>>>>>>>>>>> called from interrupt context in my configuration and so there is a
>>>>>>>>>>>>>>> potential race between updating the request there and checking
>>>>>>>>>>>>>>> it in
>>>>>>>>>>>>>>> __io_submit_sqe().
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> My first workaround was to simply poll for
>>>>>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED in the
>>>>>>>>>>>>>>> code snippet above:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>                   if (req->result == --EAGAIN) {
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>                       poll for REQ_F_IOPOLL_COMPLETED
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>                       return -EAGAIN;
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> and that got rid of the problem.
>>>>>>>>>>>>>> But that will not work at all for a proper poll setup, where you
>>>>>>>>>>>>>> don't
>>>>>>>>>>>>>> trigger any IRQs... It only happens to work for this case because
>>>>>>>>>>>>>> you're
>>>>>>>>>>>>>> still triggering interrupts. But even in that case, it's not a real
>>>>>>>>>>>>>> solution, but I don't think that's the argument here ;-)
>>>>>>>>>>>>> Sure.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm just curious though as how it would break the poll case because
>>>>>>>>>>>>> io_complete_rw_iopoll() would still be called though through polling,
>>>>>>>>>>>>> REQ_F_IOPOLL_COMPLETED would be set, and so io_iopoll_complete()
>>>>>>>>>>>>> should be able to reliably check req->result.
>>>>>>>>>>>> It'd break the poll case because the task doing the submission is
>>>>>>>>>>>> generally also the one that finds and reaps completion. Hence if you
>>>>>>>>>>>> block that task just polling on that completion bit, you are
>>>>>>>>>>>> preventing
>>>>>>>>>>>> that very task from going and reaping completions. The condition would
>>>>>>>>>>>> never become true, and you are now looping forever.
>>>>>>>>>>>>
>>>>>>>>>>>>> The same poll test seemed to run ok with nvme interrupts not being
>>>>>>>>>>>>> triggered. Anyway, no argument that it's not needed!
>>>>>>>>>>>> A few reasons why it would make progress:
>>>>>>>>>>>>
>>>>>>>>>>>> - You eventually trigger a timeout on the nvme side, as blk-mq
>>>>>>>>>>>> finds the
>>>>>>>>>>>>              request hasn't been completed by an IRQ. But that's a 30
>>>>>>>>>>>> second ordeal
>>>>>>>>>>>>              before that event occurs.
>>>>>>>>>>>>
>>>>>>>>>>>> - There was still interrupts enabled.
>>>>>>>>>>>>
>>>>>>>>>>>> - You have two threads, one doing submission and one doing
>>>>>>>>>>>> completions.
>>>>>>>>>>>>              Maybe using SQPOLL? If that's the case, then yes, it'd still
>>>>>>>>>>>> work as
>>>>>>>>>>>>              you have separate threads for submission and completion.
>>>>>>>>>>>>
>>>>>>>>>>>> For the "generic" case of just using one thread and IRQs disabled,
>>>>>>>>>>>> it'd
>>>>>>>>>>>> deadlock.
>>>>>>>>>>>>
>>>>>>>>>>>>>> I see what the race is now, it's specific to IRQ driven polling. We
>>>>>>>>>>>>>> really should just disallow that, to be honest, it doesn't make any
>>>>>>>>>>>>>> sense. But let me think about if we can do a reasonable solution
>>>>>>>>>>>>>> to this
>>>>>>>>>>>>>> that doesn't involve adding overhead for a proper setup.
>>>>>>>>>>>>> It's a nonsensical config in a way and so disallowing it would make
>>>>>>>>>>>>> the most sense.
>>>>>>>>>>>> Definitely. The nvme driver should not set .poll() if it doesn't have
>>>>>>>>>>>> non-irq poll queues. Something like this:
>>>>>>>>>>> Actually, we already disable polling if we don't have specific poll
>>>>>>>>>>> queues:
>>>>>>>>>>>
>>>>>>>>>>>                   if (set->nr_maps > HCTX_TYPE_POLL &&
>>>>>>>>>>>                       set->map[HCTX_TYPE_POLL].nr_queues)
>>>>>>>>>>>                           blk_queue_flag_set(QUEUE_FLAG_POLL, q);
>>>>>>>>>>>
>>>>>>>>>>> Did you see any timeouts in your tests? I wonder if the use-after-free
>>>>>>>>>>> triggered when the timeout found the request while you had the
>>>>>>>>>>> busy-spin
>>>>>>>>>>> logic we discussed previously.
>>>>>>>>>> Ah, but we still have fops->iopoll() set for that case. So we just won't
>>>>>>>>>> poll for it, it'll get completed by IRQ. So I do think we need to handle
>>>>>>>>>> this case in io_uring. I'll get back to you.
>>>>>>>>>>
>>>>>>>>> I ran the same test on linux-next-20191029 in polled mode and got the
>>>>>>>>> same free-after-user panic:
>>>>>>>>>
>>>>>>>>> - I booted with nvme.poll_queues set and verified that all queues
>>>>>>>>> except default where of type poll
>>>>>>>>>
>>>>>>>>> - I added three assertions to verify the following:
>>>>>>>>>
>>>>>>>>>             - nvme_timeout() is not called
>>>>>>>>>
>>>>>>>>>             - io_complete_rw_iopoll() is not called from interrupt context
>>>>>>>>>
>>>>>>>>>             - io_sq_offload_start() is not called with IORING_SETUP_SQPOLL set
>>>>>>>>>
>>>>>>>>> Is it possible that the race is there also in polled mode since a
>>>>>>>>> request submitted by one thread could conceivably be polled for and
>>>>>>>>> completed by a different thread, e.g. in
>>>>>>>>> io_uring_enter()->io_iopoll_check()?
>>>>>>>>>
>>>>>>>>> --bijan
>>>>>>>>>
>>>>>>>>>
>>>>>>>> I also tested my RFC again with 1 thread and with queue depths of 1 to
>>>>>>>> 1024 in multiples of 8 and didn't see any hangs.
>>>>>>>>
>>>>>>>> Just to be clear, the busy-spin logic discussed before was only a
>>>>>>>> workaround an not in the RFC.
>>>>>>> What is your exact test case?
>>>>>>>
>>>>>> See original cover letter.  I can reproduce the failure with numjobs
>>>>>> between 8 and 32.
>>>>> And how many poll queues are you using?
>>>>>
>>>> 30
>>> And how many threads/cores in the box? Trying to get a sense for how
>>> many CPUs share a single poll queue, if any.
>>>
>> Thread(s) per core:    2
>> Core(s) per socket:    8
>> Socket(s):             2
> 
> OK, so 2*8*2 == 32 threads, hence some threads will share a poll queue.

OK, so I still don't quite see where the issue is. Your setup has more
than one CPU per poll queue, and I can reproduce the issue quite easily
here with a similar setup. Below are some things that are given:

1) If we fail to submit the IO, io_complete_rw_iopoll() is ultimately
   invoked _from_ the submission path. This means that the result is
   readily available by the time we go and check:

   if (req->result == -EAGAIN)

   in __io_submit_sqe(). This is a submission time failure, not
   something that should be happening from a completion path after the
   IO has been submitted successfully.

2) If we succeed in submitting the request, given that we have other
   tasks polling, the request can complete any time. It can very well be
   complete by the time we call io_iopoll_req_issued(), and this is
   perfectly fine. We know the request isn't going away, as we're
   holding a reference to it. kiocb has the same lifetime, as it's
   embedded in the io_kiocb request. Note that this isn't the same
   io_ring_ctx at all, some other task with its own io_ring_ctx just
   happens to find our request when doing polling on the same queue
   itself.

We would definitely get in trouble if we submitted the request
successfully, but returned -EAGAIN because we thought we didn't.
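
For reference, the iopoll completion hook that fills in ->result is tiny.
Roughly, trimmed from the current fs/io_uring.c (details may differ a bit):

	static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
	{
		struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);

		/* a submission-time failure only records the result... */
		req->result = res;
		/* ...while a real completion also marks the request done for the poll loop */
		if (res != -EAGAIN)
			req->flags |= REQ_F_IOPOLL_COMPLETED;
	}

So for a submission-time failure this runs inline off the submit attempt
with res == -EAGAIN and only ->result is touched; a successful completion
also sets REQ_F_IOPOLL_COMPLETED.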

In my testing, what I seem to see is double completions on the block
layer side, and double issues. I can't quite get that to match up with
anything...

I'll keep digging, hopefully I'll get some deeper understanding of what
exactly the issue is shortly. I was hoping I'd get that by writing my
thoughts in this email, but alas that didn't happen yet.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-30  1:02                                 ` Jens Axboe
@ 2019-10-30 14:02                                   ` Bijan Mottahedeh
  2019-10-30 14:18                                     ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-30 14:02 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

> OK, so I still don't quite see where the issue is. Your setup has more
> than one CPU per poll queue, and I can reproduce the issue quite easily
> here with a similar setup.

That's probably why I couldn't reproduce this in a vm.  This time I set 
up one poll queue in an 8 cpu vm and reproduced it.

> Below are some things that are given:
>
> 1) If we fail to submit the IO, io_complete_rw_iopoll() is ultimately
>     invoked _from_ the submission path. This means that the result is
>     readily available by the time we go and check:
>
>     if (req->result == -EAGAIN)
>
>     in __io_submit_sqe().

Is that always true?

Let's say the operation was __io_submit_sqe()->io_read()

By "failing to submit the io", do you mean that 
io_read()->call_read_iter() returned success or failure?  Are you saying 
that req->result was set from kiocb_done() or later in the block layer?

Anyway I assume that io_read() would have to return success since 
otherwise __io_submit_sqe() would immediately return and not check 
req->result:

         if (ret)
                 return ret;

So if io_read() did return success,  are we guaranteed that setting 
req->result = -EAGAIN would always happen before the check?

Also, is it possible that we can be preempted in __io_submit_sqe() after 
the call to io_read() but before the -EAGAIN check?

> This is a submission time failure, not
>     something that should be happening from a completion path after the
>     IO has been submitted successfully.
>
> 2) If we succeed in submitting the request, given that we have other
>     tasks polling, the request can complete any time. It can very well be
>     complete by the time we call io_iopoll_req_issued(), and this is
>     perfectly fine. We know the request isn't going away, as we're
>     holding a reference to it. kiocb has the same lifetime, as it's
>     embedded in the io_kiocb request. Note that this isn't the same
>     io_ring_ctx at all, some other task with its own io_ring_ctx just
>     happens to find our request when doing polling on the same queue
>     itself.

Ah yes, it's a different io_ring_ctx, different poll list etc. For my 
own clarity, I assume all contexts are mapping the same actual sq/cq rings?

>
> We would definitely get in trouble if we submitted the request
> successfully, but returned -EAGAIN because we thought we didn't.
>
> In my testing, what I seem to see is double completions on the block
> layer side, and double issues. I can't quite get that to match up with
> anything...
>
> I'll keep digging, hopefully I'll get some deeper understanding of what
> exactly the issue is shortly. I was hoping I'd get that by writing my
> thoughts in this email, but alas that didn't happen yet.
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-30 14:02                                   ` Bijan Mottahedeh
@ 2019-10-30 14:18                                     ` Jens Axboe
  2019-10-30 17:32                                       ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-30 14:18 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/30/19 8:02 AM, Bijan Mottahedeh wrote:
>> OK, so I still don't quite see where the issue is. Your setup has more
>> than one CPU per poll queue, and I can reproduce the issue quite easily
>> here with a similar setup.
> 
> That's probably why I couldn't reproduce this in a vm.  This time I set
> up one poll queue in an 8 cpu vm and reproduced it.

You definitely do need poll queue sharing to hit this.

>> Below are some things that are given:
>>
>> 1) If we fail to submit the IO, io_complete_rw_iopoll() is ultimately
>>      invoked _from_ the submission path. This means that the result is
>>      readily available by the time we go and check:
>>
>>      if (req->result == -EAGAIN)
>>
>>      in __io_submit_sqe().
> 
> Is that always true?
> 
> Let's say the operation was __io_submit_sqe()->io_read()
> 
> By "failing to submit the io", do you mean that
> io_read()->call_read_iter() returned success or failure?  Are you saying
> that req->result was set from kiocb_done() or later in the block layer?

By "failed to submit" I mean that req->result == -EAGAIN. We set that in
io_complete_rw_iopoll(), which is called off bio_wouldblock_error() in
the block layer. This is different from an ret != 0 return, which would
have been some sort of other failure we encountered, failing to submit
the request. For that error, we just end the request. For the
bio_wouldblock_error(), we need to retry submission.
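
(For reference, bio_wouldblock_error() is just a small helper in
include/linux/bio.h, roughly:

	static inline void bio_wouldblock_error(struct bio *bio)
	{
		bio->bi_status = BLK_STS_AGAIN;
		bio_endio(bio);
	}

so the -EAGAIN travels back through the normal bio end_io path, inline
from the submission attempt, and ends up in ->ki_complete().)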

> Anyway I assume that io_read() would have to return success since
> otherwise __io_submit_sqe() would immediately return and not check
> req->result:
> 
>           if (ret)
>                   return ret;

Right, if ret != 0, then we have a fatal error for that request.
->result will not have been set at all for that case.

> So if io_read() did return success,  are we guaranteed that setting
> req->result = -EAGAIN would always happen before the check?

Always, since it happens inline from when you attempt to issue the read.

> Also, is it possible that we can be preempted in __io_submit_sqe() after
> the call to io_read() but before the -EAGAIN check?

Sure, we're not disabling preemption. But that shouldn't have any
bearing on this case.
> 
>> This is a submission time failure, not
>>      something that should be happening from a completion path after the
>>      IO has been submitted successfully.
>>
>> 2) If we succeed in submitting the request, given that we have other
>>      tasks polling, the request can complete any time. It can very well be
>>      complete by the time we call io_iopoll_req_issued(), and this is
>>      perfectly fine. We know the request isn't going away, as we're
>>      holding a reference to it. kiocb has the same lifetime, as it's
>>      embedded in the io_kiocb request. Note that this isn't the same
>>      io_ring_ctx at all, some other task with its own io_ring_ctx just
>>      happens to find our request when doing polling on the same queue
>>      itself.
> 
> Ah yes, it's a different io_ring_ctx, different poll list etc. For my

Exactly

> own clarity, I assume all contexts are mapping the same actual sq/cq
> rings?

The ring/context isn't mapped to a particular queue; a specific IO is
mapped to a specific queue at the time it's being submitted. That
depends on the IO type and where that task happens to be running at the
time the IO is being submitted.
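
To make that concrete, the hctx selection at submit time looks roughly
like this (paraphrasing blk_mq_map_queue() from block/blk-mq.h; the exact
code varies by kernel version):

	static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
							      unsigned int flags,
							      struct blk_mq_ctx *ctx)
	{
		enum hctx_type type = HCTX_TYPE_DEFAULT;

		if (flags & REQ_HIPRI)				/* polled IO -> poll queues */
			type = HCTX_TYPE_POLL;
		else if ((flags & REQ_OP_MASK) == REQ_OP_READ)	/* optional read queues */
			type = HCTX_TYPE_READ;

		return ctx->hctxs[type];	/* ctx is the per-cpu software context */
	}

With more submitting CPUs than poll queues, tasks on different CPUs can
map to the same nvme poll queue, which is how one task's polling can find
and complete a request that another task submitted.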

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-30 14:18                                     ` Jens Axboe
@ 2019-10-30 17:32                                       ` Jens Axboe
  2019-10-30 19:21                                         ` Bijan Mottahedeh
  0 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2019-10-30 17:32 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/30/19 8:18 AM, Jens Axboe wrote:
> On 10/30/19 8:02 AM, Bijan Mottahedeh wrote:
>>> OK, so I still don't quite see where the issue is. Your setup has more
>>> than one CPU per poll queue, and I can reproduce the issue quite easily
>>> here with a similar setup.
>>
>> That's probably why I couldn't reproduce this in a vm.  This time I set
>> up one poll queue in an 8 cpu vm and reproduced it.
> 
> You definitely do need poll queue sharing to hit this.
> 
>>> Below are some things that are given:
>>>
>>> 1) If we fail to submit the IO, io_complete_rw_iopoll() is ultimately
>>>       invoked _from_ the submission path. This means that the result is
>>>       readily available by the time we go and check:
>>>
>>>       if (req->result == -EAGAIN)
>>>
>>>       in __io_submit_sqe().
>>
>> Is that always true?
>>
>> Let's say the operation was __io_submit_sqe()->io_read()
>>
>> By "failing to submit the io", do you mean that
>> io_read()->call_read_iter() returned success or failure?  Are you saying
>> that req->result was set from kiocb_done() or later in the block layer?
> 
> By "failed to submit" I mean that req->result == -EAGAIN. We set that in
> io_complete_rw_iopoll(), which is called off bio_wouldblock_error() in
> the block layer. This is different from an ret != 0 return, which would
> have been some sort of other failure we encountered, failing to submit
> the request. For that error, we just end the request. For the
> bio_wouldblock_error(), we need to retry submission.
> 
>> Anyway I assume that io_read() would have to return success since
>> otherwise __io_submit_sqe() would immediately return and not check
>> req->result:
>>
>>            if (ret)
>>                    return ret;
> 
> Right, if ret != 0, then we have a fatal error for that request.
> ->result will not have been set at all for that case.
> 
>> So if io_read() did return success,  are we guaranteed that setting
>> req->result = -EAGAIN would always happen before the check?
> 
> Always, since it happens inline from when you attempt to issue the read.
> 
>> Also, is it possible that we can be preempted in __io_submit_sqe() after
>> the call to io_read() but before the -EAGAIN check?
> 
> Sure, we're not disabling preemption. But that shouldn't have any
> bearing on this case.
>>
>>> This is a submission time failure, not
>>>       something that should be happening from a completion path after the
>>>       IO has been submitted successfully.
>>>
>>> 2) If we succeed in submitting the request, given that we have other
>>>       tasks polling, the request can complete any time. It can very well be
>>>       complete by the time we call io_iopoll_req_issued(), and this is
>>>       perfectly fine. We know the request isn't going away, as we're
>>>       holding a reference to it. kiocb has the same lifetime, as it's
>>>       embedded in the io_kiocb request. Note that this isn't the same
>>>       io_ring_ctx at all, some other task with its own io_ring_ctx just
>>>       happens to find our request when doing polling on the same queue
>>>       itself.
>>
>> Ah yes, it's a different io_ring_ctx, different poll list etc. For my
> 
> Exactly
> 
>> own clarity, I assume all contexts are mapping the same actual sq/cq
>> rings?
> 
> The ring/context isn't mapped to a particular ring, a specific IO is
> mapped to a specific queue at the time it's being submitted. That
> depends on the IO type and where that task happens to be running at the
> time the IO is being submitted.

This might just do it, except it looks like there's a bug in sbitmap
where we don't always flush deferred clears. I'll look into that now.


diff --git a/fs/io_uring.c b/fs/io_uring.c
index af1937d66aee..f3ca2ee44dbd 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1212,6 +1212,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
 
 		kiocb->ki_flags |= IOCB_HIPRI;
 		kiocb->ki_complete = io_complete_rw_iopoll;
+		req->result = 0;
 	} else {
 		if (kiocb->ki_flags & IOCB_HIPRI)
 			return -EINVAL;

-- 
Jens Axboe


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-30 17:32                                       ` Jens Axboe
@ 2019-10-30 19:21                                         ` Bijan Mottahedeh
  2019-10-30 19:26                                           ` Jens Axboe
  0 siblings, 1 reply; 25+ messages in thread
From: Bijan Mottahedeh @ 2019-10-30 19:21 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block


On 10/30/19 10:32 AM, Jens Axboe wrote:
> On 10/30/19 8:18 AM, Jens Axboe wrote:
>> On 10/30/19 8:02 AM, Bijan Mottahedeh wrote:
>>>> OK, so I still don't quite see where the issue is. Your setup has more
>>>> than one CPU per poll queue, and I can reproduce the issue quite easily
>>>> here with a similar setup.
>>> That's probably why I couldn't reproduce this in a vm.  This time I set
>>> up one poll queue in an 8 cpu vm and reproduced it.
>> You definitely do need poll queue sharing to hit this.
>>
>>>> Below are some things that are given:
>>>>
>>>> 1) If we fail to submit the IO, io_complete_rw_iopoll() is ultimately
>>>>        invoked _from_ the submission path. This means that the result is
>>>>        readily available by the time we go and check:
>>>>
>>>>        if (req->result == -EAGAIN)
>>>>
>>>>        in __io_submit_sqe().
>>> Is that always true?
>>>
>>> Let's say the operation was __io_submit_sqe()->io_read()
>>>
>>> By "failing to submit the io", do you mean that
>>> io_read()->call_read_iter() returned success or failure?  Are you saying
>>> that req->result was set from kiocb_done() or later in the block layer?
>> By "failed to submit" I mean that req->result == -EAGAIN. We set that in
>> io_complete_rw_iopoll(), which is called off bio_wouldblock_error() in
>> the block layer. This is different from an ret != 0 return, which would
>> have been some sort of other failure we encountered, failing to submit
>> the request. For that error, we just end the request. For the
>> bio_wouldblock_error(), we need to retry submission.
>>
>>> Anyway I assume that io_read() would have to return success since
>>> otherwise __io_submit_sqe() would immediately return and not check
>>> req->result:
>>>
>>>             if (ret)
>>>                     return ret;
>> Right, if ret != 0, then we have a fatal error for that request.
>> ->result will not have been set at all for that case.
>>
>>> So if io_read() did return success,  are we guaranteed that setting
>>> req->result = -EAGAIN would always happen before the check?
>> Always, since it happens inline from when you attempt to issue the read.
>>
>>> Also, is it possible that we can be preempted in __io_submit_sqe() after
>>> the call to io_read() but before the -EAGAIN check?
>> Sure, we're not disabling preemption. But that shouldn't have any
>> bearing on this case.
>>>> This is a submission time failure, not
>>>>        something that should be happening from a completion path after the
>>>>        IO has been submitted successfully.
>>>>
>>>> 2) If we succeed in submitting the request, given that we have other
>>>>        tasks polling, the request can complete any time. It can very well be
>>>>        complete by the time we call io_iopoll_req_issued(), and this is
>>>>        perfectly fine. We know the request isn't going away, as we're
>>>>        holding a reference to it. kiocb has the same lifetime, as it's
>>>>        embedded in the io_kiocb request. Note that this isn't the same
>>>>        io_ring_ctx at all, some other task with its own io_ring_ctx just
>>>>        happens to find our request when doing polling on the same queue
>>>>        itself.
>>> Ah yes, it's a different io_ring_ctx, different poll list etc. For my
>> Exactly
>>
>>> own clarity, I assume all contexts are mapping the same actual sq/cq
>>> rings?
>> The ring/context isn't mapped to a particular ring, a specific IO is
>> mapped to a specific queue at the time it's being submitted. That
>> depends on the IO type and where that task happens to be running at the
>> time the IO is being submitted.
> This might just do it, except it looks like there's a bug in sbitmap
> where we don't always flush deferred clears. I'll look into that now.
>
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index af1937d66aee..f3ca2ee44dbd 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1212,6 +1212,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
>   
>   		kiocb->ki_flags |= IOCB_HIPRI;
>   		kiocb->ki_complete = io_complete_rw_iopoll;
> +		req->result = 0;
>   	} else {
>   		if (kiocb->ki_flags & IOCB_HIPRI)
>   			return -EINVAL;
>
I checked out a few configs and didn't hit the error.  However, with 16 
poll queues and 32 fio jobs, I eventually started getting a bunch of 
nvme timeout/abort messages and the system seemed to be stuck in a poll 
loop.  Killing the test didn't work and the system was hung:

...

[  407.978329] nvme nvme0: I/O 679 QID 30 timeout, aborting
[  408.085318] nvme nvme0: I/O 680 QID 30 timeout, aborting
[  408.164145] nvme nvme0: I/O 682 QID 30 timeout, aborting
[  408.238102] nvme nvme0: I/O 683 QID 30 timeout, aborting
[  412.970712] nvme nvme0: Abort status: 0x0r=239k IOPS][eta 01m:30s]
[  413.018696] nvme nvme0: Abort status: 0x0
[  413.066691] nvme nvme0: Abort status: 0x0
[  413.114684] nvme nvme0: Abort status: 0x0
[  438.035324] nvme nvme0: I/O 674 QID 30 timeout, reset controllers]
[  444.287637] nvme nvme0: 15/0/16 default/read/poll queues
[  637.073111] INFO: task kworker/u66:0:55 blocked for more than 122 
seconds.

...

fio: terminating on signal 2
Jobs: 32 (f=32): [r(32)][0.3%][eta 02d:01h:28m:36s]



Couldn't ssh to the machine and started getting hung task timeout errors.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RFC 0/2] io_uring: examine request result only after completion
  2019-10-30 19:21                                         ` Bijan Mottahedeh
@ 2019-10-30 19:26                                           ` Jens Axboe
  0 siblings, 0 replies; 25+ messages in thread
From: Jens Axboe @ 2019-10-30 19:26 UTC (permalink / raw)
  To: Bijan Mottahedeh; +Cc: linux-block

On 10/30/19 1:21 PM, Bijan Mottahedeh wrote:
> 
> On 10/30/19 10:32 AM, Jens Axboe wrote:
>> On 10/30/19 8:18 AM, Jens Axboe wrote:
>>> On 10/30/19 8:02 AM, Bijan Mottahedeh wrote:
>>>>> OK, so I still don't quite see where the issue is. Your setup has more
>>>>> than one CPU per poll queue, and I can reproduce the issue quite easily
>>>>> here with a similar setup.
>>>> That's probably why I couldn't reproduce this in a vm.  This time I set
>>>> up one poll queue in an 8 cpu vm and reproduced it.
>>> You definitely do need poll queue sharing to hit this.
>>>
>>>>> Below are some things that are given:
>>>>>
>>>>> 1) If we fail to submit the IO, io_complete_rw_iopoll() is ultimately
>>>>>         invoked _from_ the submission path. This means that the result is
>>>>>         readily available by the time we go and check:
>>>>>
>>>>>         if (req->result == -EAGAIN)
>>>>>
>>>>>         in __io_submit_sqe().
>>>> Is that always true?
>>>>
>>>> Let's say the operation was __io_submit_sqe()->io_read()
>>>>
>>>> By "failing to submit the io", do you mean that
>>>> io_read()->call_read_iter() returned success or failure?  Are you saying
>>>> that req->result was set from kiocb_done() or later in the block layer?
>>> By "failed to submit" I mean that req->result == -EAGAIN. We set that in
>>> io_complete_rw_iopoll(), which is called off bio_wouldblock_error() in
>>> the block layer. This is different from an ret != 0 return, which would
>>> have been some sort of other failure we encountered, failing to submit
>>> the request. For that error, we just end the request. For the
>>> bio_wouldblock_error(), we need to retry submission.
>>>
>>>> Anyway I assume that io_read() would have to return success since
>>>> otherwise __io_submit_sqe() would immediately return and not check
>>>> req->result:
>>>>
>>>>              if (ret)
>>>>                      return ret;
>>> Right, if ret != 0, then we have a fatal error for that request.
>>> ->result will not have been set at all for that case.
>>>
>>>> So if io_read() did return success,  are we guaranteed that setting
>>>> req->result = -EAGAIN would always happen before the check?
>>> Always, since it happens inline from when you attempt to issue the read.
>>>
>>>> Also, is it possible that we can be preempted in __io_submit_sqe() after
>>>> the call to io_read() but before the -EAGAIN check?
>>> Sure, we're not disabling preemption. But that shouldn't have any
>>> bearing on this case.
>>>>> This is a submission time failure, not
>>>>>         something that should be happening from a completion path after the
>>>>>         IO has been submitted successfully.
>>>>>
>>>>> 2) If we succeed in submitting the request, given that we have other
>>>>>         tasks polling, the request can complete any time. It can very well be
>>>>>         complete by the time we call io_iopoll_req_issued(), and this is
>>>>>         perfectly fine. We know the request isn't going away, as we're
>>>>>         holding a reference to it. kiocb has the same lifetime, as it's
>>>>>         embedded in the io_kiocb request. Note that this isn't the same
>>>>>         io_ring_ctx at all, some other task with its own io_ring_ctx just
>>>>>         happens to find our request when doing polling on the same queue
>>>>>         itself.
>>>> Ah yes, it's a different io_ring_ctx, different poll list etc. For my
>>> Exactly
>>>
>>>> own clarity, I assume all contexts are mapping the same actual sq/cq
>>>> rings?
>>> The ring/context isn't mapped to a particular ring, a specific IO is
>>> mapped to a specific queue at the time it's being submitted. That
>>> depends on the IO type and where that task happens to be running at the
>>> time the IO is being submitted.
>> This might just do it, except it looks like there's a bug in sbitmap
>> where we don't always flush deferred clears. I'll look into that now.
>>
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index af1937d66aee..f3ca2ee44dbd 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -1212,6 +1212,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
>>    
>>    		kiocb->ki_flags |= IOCB_HIPRI;
>>    		kiocb->ki_complete = io_complete_rw_iopoll;
>> +		req->result = 0;
>>    	} else {
>>    		if (kiocb->ki_flags & IOCB_HIPRI)
>>    			return -EINVAL;
>>
> I checked out a few configs and didn't hit the error.  However, with 16
> poll queues and 32 fio jobs, I eventually started getting a bunch of
> nvme timeout/abort messages and the system seemed to be stuck in a poll
> loop.  Killing the test didn't work and the system was hung:
> 
> ...
> 
> [  407.978329] nvme nvme0: I/O 679 QID 30 timeout, aborting
> [  408.085318] nvme nvme0: I/O 680 QID 30 timeout, aborting
> [  408.164145] nvme nvme0: I/O 682 QID 30 timeout, aborting
> [  408.238102] nvme nvme0: I/O 683 QID 30 timeout, aborting
> [  412.970712] nvme nvme0: Abort status: 0x0r=239k IOPS][eta 01m:30s]
> [  413.018696] nvme nvme0: Abort status: 0x0
> [  413.066691] nvme nvme0: Abort status: 0x0
> [  413.114684] nvme nvme0: Abort status: 0x0
> [  438.035324] nvme nvme0: I/O 674 QID 30 timeout, reset controllers]
> [  444.287637] nvme nvme0: 15/0/16 default/read/poll queues
> [  637.073111] INFO: task kworker/u66:0:55 blocked for more than 122
> seconds.
> 
> ...
> 
> fio: terminating on signal 2
> Jobs: 32 (f=32): [r(32)][0.3%][eta 02d:01h:28m:36s]

Right, there's some funkiness going on on the nvme side with shared
poll queues, I'm looking into it right now. The patch I sent only
takes care of the "submit after -EAGAIN was already received" case
that you reported.


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2019-10-30 19:26 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-24  9:18 [RFC 0/2] io_uring: examine request result only after completion Bijan Mottahedeh
2019-10-24  9:18 ` [RFC 1/2] io_uring: create io_queue_async() function Bijan Mottahedeh
2019-10-24  9:18 ` [RFC 2/2] io_uring: examine request result only after completion Bijan Mottahedeh
2019-10-24 17:09 ` [RFC 0/2] " Jens Axboe
2019-10-24 19:18   ` Bijan Mottahedeh
2019-10-24 22:31     ` Jens Axboe
     [not found]       ` <fa82e9fc-caf7-a94a-ebff-536413e9ecce@oracle.com>
2019-10-25 14:07         ` Jens Axboe
2019-10-25 14:18           ` Jens Axboe
2019-10-25 14:21             ` Jens Axboe
2019-10-29 19:17               ` Bijan Mottahedeh
2019-10-29 19:23                 ` Bijan Mottahedeh
2019-10-29 19:27                   ` Jens Axboe
2019-10-29 19:31                     ` Bijan Mottahedeh
2019-10-29 19:33                       ` Jens Axboe
2019-10-29 19:40                         ` Bijan Mottahedeh
2019-10-29 19:46                           ` Jens Axboe
2019-10-29 19:51                             ` Bijan Mottahedeh
2019-10-29 19:52                               ` Jens Axboe
2019-10-30  1:02                                 ` Jens Axboe
2019-10-30 14:02                                   ` Bijan Mottahedeh
2019-10-30 14:18                                     ` Jens Axboe
2019-10-30 17:32                                       ` Jens Axboe
2019-10-30 19:21                                         ` Bijan Mottahedeh
2019-10-30 19:26                                           ` Jens Axboe
2019-10-25 14:42             ` Bijan Mottahedeh
