io-uring.vger.kernel.org archive mirror
* [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Xiaoguang Wang @ 2020-12-14 15:49 UTC
  To: io-uring; +Cc: axboe, asml.silence, joseph.qi

io_iopoll_complete() does not hold completion_lock to complete polled
io, so in io_wq_submit_work() we cannot call io_req_complete() directly
to complete polled io; otherwise there may be concurrent access to the
cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
("io_uring: always let io_iopoll_complete() complete polled io") fixed
this issue, but Pavel reported that with IOPOLL, requests other than rw
can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.

Given that io_iopoll_complete() is always called under uring_lock, for
polled io we can also take uring_lock here to fix this issue.

Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
---
 fs/io_uring.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f53356ced5ab..eab3d2b7d232 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6354,19 +6354,24 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
 	}
 
 	if (ret) {
+		bool iopoll_enabled = req->ctx->flags & IORING_SETUP_IOPOLL;
+
 		/*
-		 * io_iopoll_complete() does not hold completion_lock to complete
-		 * polled io, so here for polled io, just mark it done and still let
-		 * io_iopoll_complete() complete it.
+		 * io_iopoll_complete() does not hold completion_lock to complete polled
+		 * io, so here for polled io, we cannot call io_req_complete() directly,
+		 * otherwise there may be concurrent access to cqring, defer_list, etc,
+		 * which is not safe. Given that io_iopoll_complete() is always called
+		 * under uring_lock, for polled io we also take uring_lock here to
+		 * complete it.
 		 */
-		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
-			struct kiocb *kiocb = &req->rw.kiocb;
+		if (iopoll_enabled)
+			mutex_lock(&req->ctx->uring_lock);
 
-			kiocb_done(kiocb, ret, NULL);
-		} else {
-			req_set_fail_links(req);
-			io_req_complete(req, ret);
-		}
+		req_set_fail_links(req);
+		io_req_complete(req, ret);
+
+		if (iopoll_enabled)
+			mutex_unlock(&req->ctx->uring_lock);
 	}
 
 	return io_steal_work(req);
-- 
2.17.2



* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Pavel Begunkov @ 2020-12-14 17:48 UTC
  To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi

On 14/12/2020 15:49, Xiaoguang Wang wrote:
> io_iopoll_complete() does not hold completion_lock to complete polled
> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
> to complete polled io; otherwise there may be concurrent access to the
> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
> this issue, but Pavel reported that with IOPOLL, requests other than rw
> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
> 
> Given that io_iopoll_complete() is always called under uring_lock, for
> polled io we can also take uring_lock here to fix this issue.

One thing I don't like is that io_wq_submit_work() won't be able to
publish an event while someone is polling via io_uring_enter(ENTER_GETEVENTS),
because both take the lock. The problem is when the poller waits for an
event that is currently in io-wq (i.e. io_wq_submit_work()). The polling
loop will eventually exit, so that's not a deadlock, but the added
latency, etc. would be huge.
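
To illustrate (a simplified sketch, not the actual fs/io_uring.c code;
the helper names here are made up), both sides serialize on the same
mutex:

	/* poller side: io_uring_enter(IORING_ENTER_GETEVENTS) sketch */
	static int iopoll_wait_sketch(struct io_ring_ctx *ctx, long min)
	{
		int ret = 0;

		mutex_lock(&ctx->uring_lock);	/* held across the whole loop */
		while (!ret && !have_cqes(ctx, min))
			ret = reap_completions(ctx);	/* spins reaping CQEs */
		mutex_unlock(&ctx->uring_lock);
		return ret;
	}

	/* io-wq side with this patch: a failed polled request must wait
	 * for the poller above to drop uring_lock before it can complete */
	static void io_wq_fail_req_sketch(struct io_kiocb *req, long ret)
	{
		mutex_lock(&req->ctx->uring_lock);
		req_set_fail_links(req);
		io_req_complete(req, ret);
		mutex_unlock(&req->ctx->uring_lock);
	}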

> 
> Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
> ---
>  fs/io_uring.c | 25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index f53356ced5ab..eab3d2b7d232 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6354,19 +6354,24 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
>  	}
>  
>  	if (ret) {
> +		bool iopoll_enabled = req->ctx->flags & IORING_SETUP_IOPOLL;
> +
>  		/*
> -		 * io_iopoll_complete() does not hold completion_lock to complete
> -		 * polled io, so here for polled io, just mark it done and still let
> -		 * io_iopoll_complete() complete it.
> +		 * io_iopoll_complete() does not hold completion_lock to complete polled
> +		 * io, so here for polled io, we cannot call io_req_complete() directly,
> +		 * otherwise there may be concurrent access to cqring, defer_list, etc,
> +		 * which is not safe. Given that io_iopoll_complete() is always called
> +		 * under uring_lock, for polled io we also take uring_lock here to
> +		 * complete it.
>  		 */
> -		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
> -			struct kiocb *kiocb = &req->rw.kiocb;
> +		if (iopoll_enabled)
> +			mutex_lock(&req->ctx->uring_lock);
>  
> -			kiocb_done(kiocb, ret, NULL);
> -		} else {
> -			req_set_fail_links(req);
> -			io_req_complete(req, ret);
> -		}
> +		req_set_fail_links(req);
> +		io_req_complete(req, ret);
> +
> +		if (iopoll_enabled)
> +			mutex_unlock(&req->ctx->uring_lock);
>  	}
>  
>  	return io_steal_work(req);
> 

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Xiaoguang Wang @ 2020-12-15  2:28 UTC
  To: Pavel Begunkov, io-uring; +Cc: axboe, joseph.qi

hi,

> On 14/12/2020 15:49, Xiaoguang Wang wrote:
>> io_iopoll_complete() does not hold completion_lock to complete polled
>> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
>> to complete polled io; otherwise there may be concurrent access to the
>> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>> this issue, but Pavel reported that with IOPOLL, requests other than rw
>> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
>> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
>>
>> Given that io_iopoll_complete() is always called under uring_lock, for
>> polled io we can also take uring_lock here to fix this issue.
> 
> One thing I don't like is that io_wq_submit_work() won't be able to
> publish an event while someone is polling via io_uring_enter(ENTER_GETEVENTS),
> because both take the lock. The problem is when the poller waits for an
> event that is currently in io-wq (i.e. io_wq_submit_work()). The polling
> loop will eventually exit, so that's not a deadlock, but the added
> latency, etc. would be huge.
In this patch, we only hold uring_lock for polled io in the error path,
so I think it normally may not be an issue, and it seems the critical
section is not that big, so it also may not result in huge latency.
I also noticed that the current code already takes uring_lock in the
io_wq_submit_work() call chain:
==> io_wq_submit_work()
====> io_issue_sqe()
======> io_provide_buffers()
========> io_ring_submit_lock(ctx, !force_nonblock);
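
For reference, that helper is roughly the following (a sketch, modulo
the exact kernel version):

	static void io_ring_submit_lock(struct io_ring_ctx *ctx, bool needs_lock)
	{
		/* needs_lock is true in the blocking io-wq path */
		if (needs_lock)
			mutex_lock(&ctx->uring_lock);
	}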

Regards,
Xiaoguang Wang
> 
>>
>> Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
>> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
>> ---
>>   fs/io_uring.c | 25 +++++++++++++++----------
>>   1 file changed, 15 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index f53356ced5ab..eab3d2b7d232 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -6354,19 +6354,24 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
>>   	}
>>   
>>   	if (ret) {
>> +		bool iopoll_enabled = req->ctx->flags & IORING_SETUP_IOPOLL;
>> +
>>   		/*
>> -		 * io_iopoll_complete() does not hold completion_lock to complete
>> -		 * polled io, so here for polled io, just mark it done and still let
>> -		 * io_iopoll_complete() complete it.
>> +		 * io_iopoll_complete() does not hold completion_lock to complete polled
>> +		 * io, so here for polled io, we cannot call io_req_complete() directly,
>> +		 * otherwise there may be concurrent access to cqring, defer_list, etc,
>> +		 * which is not safe. Given that io_iopoll_complete() is always called
>> +		 * under uring_lock, for polled io we also take uring_lock here to
>> +		 * complete it.
>>   		 */
>> -		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
>> -			struct kiocb *kiocb = &req->rw.kiocb;
>> +		if (iopoll_enabled)
>> +			mutex_lock(&req->ctx->uring_lock);
>>   
>> -			kiocb_done(kiocb, ret, NULL);
>> -		} else {
>> -			req_set_fail_links(req);
>> -			io_req_complete(req, ret);
>> -		}
>> +		req_set_fail_links(req);
>> +		io_req_complete(req, ret);
>> +
>> +		if (iopoll_enabled)
>> +			mutex_unlock(&req->ctx->uring_lock);
>>   	}
>>   
>>   	return io_steal_work(req);
>>
> 


* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Pavel Begunkov @ 2020-12-15 11:08 UTC
  To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi

On 15/12/2020 02:28, Xiaoguang Wang wrote:
> hi,
> 
>> On 14/12/2020 15:49, Xiaoguang Wang wrote:
>>> io_iopoll_complete() does not hold completion_lock to complete polled
>>> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
>>> to complete polled io; otherwise there may be concurrent access to the
>>> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>>> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>>> this issue, but Pavel reported that with IOPOLL, requests other than rw
>>> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
>>> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
>>>
>>> Given that io_iopoll_complete() is always called under uring_lock, for
>>> polled io we can also take uring_lock here to fix this issue.
>>
>> One thing I don't like is that io_wq_submit_work() won't be able to
>> publish an event while someone is polling via io_uring_enter(ENTER_GETEVENTS),
>> because both take the lock. The problem is when the poller waits for an
>> event that is currently in io-wq (i.e. io_wq_submit_work()). The polling
>> loop will eventually exit, so that's not a deadlock, but the added
>> latency, etc. would be huge.
> In this patch, we only hold uring_lock for polled io in the error path,
> so I think it normally may not be an issue, and it seems the critical
> section is not that big, so it also may not result in huge latency.

To give a bit of context, it breaks out of io_iopoll_check()'s loop when
need_resched() is set. Anyway, fair enough; this probably should be dealt
with separately.
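
Something like this, i.e. a heavily simplified sketch of
io_iopoll_check(), not the exact code:

	mutex_lock(&ctx->uring_lock);
	do {
		/* enough completions already reaped? */
		if (io_cqring_events(ctx) >= min)
			break;
		ret = io_iopoll_getevents(ctx, &nr_events, min);
		if (ret <= 0)
			break;
		/* need_resched() terminates the loop, and only then is
		 * uring_lock released for others (e.g. io-wq) to take */
	} while (!need_resched());
	mutex_unlock(&ctx->uring_lock);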

> I also noticed that the current code already takes uring_lock in the
> io_wq_submit_work() call chain:
> ==> io_wq_submit_work()
> ====> io_issue_sqe()
> ======> io_provide_buffers()
> ========> io_ring_submit_lock(ctx, !force_nonblock);

Yep, that's not good either, though I care more about rw.

> 
> Regards,
> Xiaoguang Wang
>>
>>>
>>> Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
>>> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
>>> ---
>>>   fs/io_uring.c | 25 +++++++++++++++----------
>>>   1 file changed, 15 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index f53356ced5ab..eab3d2b7d232 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -6354,19 +6354,24 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
>>>  	}
>>>  
>>>  	if (ret) {
>>> +		bool iopoll_enabled = req->ctx->flags & IORING_SETUP_IOPOLL;
>>> +
>>>  		/*
>>> -		 * io_iopoll_complete() does not hold completion_lock to complete
>>> -		 * polled io, so here for polled io, just mark it done and still let
>>> -		 * io_iopoll_complete() complete it.
>>> +		 * io_iopoll_complete() does not hold completion_lock to complete polled
>>> +		 * io, so here for polled io, we cannot call io_req_complete() directly,
>>> +		 * otherwise there may be concurrent access to cqring, defer_list, etc,
>>> +		 * which is not safe. Given that io_iopoll_complete() is always called
>>> +		 * under uring_lock, for polled io we also take uring_lock here to
>>> +		 * complete it.
>>>  		 */
>>> -		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
>>> -			struct kiocb *kiocb = &req->rw.kiocb;
>>> +		if (iopoll_enabled)
>>> +			mutex_lock(&req->ctx->uring_lock);
>>>  
>>> -			kiocb_done(kiocb, ret, NULL);
>>> -		} else {
>>> -			req_set_fail_links(req);
>>> -			io_req_complete(req, ret);
>>> -		}
>>> +		req_set_fail_links(req);
>>> +		io_req_complete(req, ret);
>>> +
>>> +		if (iopoll_enabled)
>>> +			mutex_unlock(&req->ctx->uring_lock);
>>>  	}
>>>  
>>>  	return io_steal_work(req);
>>>
>>

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Pavel Begunkov @ 2020-12-20 19:34 UTC
  To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi

On 14/12/2020 15:49, Xiaoguang Wang wrote:
> io_iopoll_complete() does not hold completion_lock to complete polled
> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
> to complete polled io; otherwise there may be concurrent access to the
> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
> this issue, but Pavel reported that with IOPOLL, requests other than rw
> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
> 
> Given that io_iopoll_complete() is always called under uring_lock, for
> polled io we can also take uring_lock here to fix this issue.

This returns it to the state it was in before the fix, plus mutex locking
for IOPOLL, and that's much better than having it half-broken as it is now.

Cc: <stable@vger.kernel.org> # 5.5+
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>

> 
> Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
> ---
>  fs/io_uring.c | 25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index f53356ced5ab..eab3d2b7d232 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6354,19 +6354,24 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
>  	}
>  
>  	if (ret) {
> +		bool iopoll_enabled = req->ctx->flags & IORING_SETUP_IOPOLL;
> +
>  		/*
> -		 * io_iopoll_complete() does not hold completion_lock to complete
> -		 * polled io, so here for polled io, just mark it done and still let
> -		 * io_iopoll_complete() complete it.
> +		 * io_iopoll_complete() does not hold completion_lock to complete polled
> +		 * io, so here for polled io, we cannot call io_req_complete() directly,
> +		 * otherwise there may be concurrent access to cqring, defer_list, etc,
> +		 * which is not safe. Given that io_iopoll_complete() is always called
> +		 * under uring_lock, for polled io we also take uring_lock here to
> +		 * complete it.
>  		 */
> -		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
> -			struct kiocb *kiocb = &req->rw.kiocb;
> +		if (iopoll_enabled)
> +			mutex_lock(&req->ctx->uring_lock);
>  
> -			kiocb_done(kiocb, ret, NULL);
> -		} else {
> -			req_set_fail_links(req);
> -			io_req_complete(req, ret);
> -		}
> +		req_set_fail_links(req);
> +		io_req_complete(req, ret);
> +
> +		if (iopoll_enabled)
> +			mutex_unlock(&req->ctx->uring_lock);
>  	}
>  
>  	return io_steal_work(req);
> 

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Pavel Begunkov @ 2020-12-20 19:36 UTC
  To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi

On 20/12/2020 19:34, Pavel Begunkov wrote:
> On 14/12/2020 15:49, Xiaoguang Wang wrote:
>> io_iopoll_complete() does not hold completion_lock to complete polled
>> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
>> to complete polled io; otherwise there may be concurrent access to the
>> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>> this issue, but Pavel reported that with IOPOLL, requests other than rw
>> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
>> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
>>
>> Given that io_iopoll_complete() is always called under uring_lock, for
>> polled io we can also take uring_lock here to fix this issue.
> 
> This returns it to the state it was in before the fix, plus mutex locking
> for IOPOLL, and that's much better than having it half-broken as it is now.

btw, comments are over 80, but that's minor.

> 
> Cc: <stable@vger.kernel.org> # 5.5+
> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
> 
>>
>> Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
>> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
>> ---
>>  fs/io_uring.c | 25 +++++++++++++++----------
>>  1 file changed, 15 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index f53356ced5ab..eab3d2b7d232 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -6354,19 +6354,24 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
>>  	}
>>  
>>  	if (ret) {
>> +		bool iopoll_enabled = req->ctx->flags & IORING_SETUP_IOPOLL;
>> +
>>  		/*
>> -		 * io_iopoll_complete() does not hold completion_lock to complete
>> -		 * polled io, so here for polled io, just mark it done and still let
>> -		 * io_iopoll_complete() complete it.
>> +		 * io_iopoll_complete() does not hold completion_lock to complete polled
>> +		 * io, so here for polled io, we can not call io_req_complete() directly,
>> +		 * otherwise there maybe concurrent access to cqring, defer_list, etc,
>> +		 * which is not safe. Given that io_iopoll_complete() is always called
>> +		 * under uring_lock, so here for polled io, we also get uring_lock to
>> +		 * complete it.
>>  		 */
>> -		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
>> -			struct kiocb *kiocb = &req->rw.kiocb;
>> +		if (iopoll_enabled)
>> +			mutex_lock(&req->ctx->uring_lock);
>>  
>> -			kiocb_done(kiocb, ret, NULL);
>> -		} else {
>> -			req_set_fail_links(req);
>> -			io_req_complete(req, ret);
>> -		}
>> +		req_set_fail_links(req);
>> +		io_req_complete(req, ret);
>> +
>> +		if (iopoll_enabled)
>> +			mutex_unlock(&req->ctx->uring_lock);
>>  	}
>>  
>>  	return io_steal_work(req);
>>
> 

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Jens Axboe @ 2020-12-22 23:41 UTC
  To: Pavel Begunkov, Xiaoguang Wang, io-uring; +Cc: joseph.qi

On 12/20/20 12:36 PM, Pavel Begunkov wrote:
> On 20/12/2020 19:34, Pavel Begunkov wrote:
>> On 14/12/2020 15:49, Xiaoguang Wang wrote:
>>> io_iopoll_complete() does not hold completion_lock to complete polled
>>> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
>>> to complete polled io; otherwise there may be concurrent access to the
>>> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>>> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>>> this issue, but Pavel reported that with IOPOLL, requests other than rw
>>> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
>>> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
>>>
>>> Given that io_iopoll_complete() is always called under uring_lock, for
>>> polled io we can also take uring_lock here to fix this issue.
>>
>> This returns it to the state it was in before the fix, plus mutex locking
>> for IOPOLL, and that's much better than having it half-broken as it is now.
> 
> btw, comments are over 80, but that's minor.

I fixed that up, but I don't particularly like how 'req' is used after
calling complete. How about the below variant - same as before, just
using the ctx instead to determine if we need to lock it or not.


commit 253b60e7d8adcb980be91f77e64968a58d836b5e
Author: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Date:   Mon Dec 14 23:49:41 2020 +0800

    io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
    
    io_iopoll_complete() does not hold completion_lock to complete polled io,
    so in io_wq_submit_work() we cannot call io_req_complete() directly to
    complete polled io; otherwise there may be concurrent access to the
    cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
    ("io_uring: always let io_iopoll_complete() complete polled io") fixed
    this issue, but Pavel reported that with IOPOLL, requests other than rw
    can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
    IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.

    Given that io_iopoll_complete() is always called under uring_lock, for
    polled io we can also take uring_lock here to fix this issue.
    
    Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
    Cc: <stable@vger.kernel.org> # 5.5+
    Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
    Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
    [axboe: don't deref 'req' after completing it]
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b27f61e3e0d6..0a8cf3fad955 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6332,19 +6332,28 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
 	}
 
 	if (ret) {
+		struct io_ring_ctx *lock_ctx = NULL;
+
+		if (req->ctx->flags & IORING_SETUP_IOPOLL)
+			lock_ctx = req->ctx;
+
 		/*
-		 * io_iopoll_complete() does not hold completion_lock to complete
-		 * polled io, so here for polled io, just mark it done and still let
-		 * io_iopoll_complete() complete it.
+		 * io_iopoll_complete() does not hold completion_lock to
+		 * complete polled io, so here for polled io, we cannot call
+		 * io_req_complete() directly, otherwise there may be concurrent
+		 * access to cqring, defer_list, etc, which is not safe. Given
+		 * that io_iopoll_complete() is always called under uring_lock,
+		 * for polled io we also take uring_lock here to complete
+		 * it.
 		 */
-		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
-			struct kiocb *kiocb = &req->rw.kiocb;
+		if (lock_ctx)
+			mutex_lock(&lock_ctx->uring_lock);
 
-			kiocb_done(kiocb, ret, NULL);
-		} else {
-			req_set_fail_links(req);
-			io_req_complete(req, ret);
-		}
+		req_set_fail_links(req);
+		io_req_complete(req, ret);
+
+		if (lock_ctx)
+			mutex_unlock(&lock_ctx->uring_lock);
 	}
 
 	return io_steal_work(req);

-- 
Jens Axboe



* Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
From: Xiaoguang Wang @ 2020-12-23  2:12 UTC
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: joseph.qi

hi,

> On 12/20/20 12:36 PM, Pavel Begunkov wrote:
>> On 20/12/2020 19:34, Pavel Begunkov wrote:
>>> On 14/12/2020 15:49, Xiaoguang Wang wrote:
>>>> io_iopoll_complete() does not hold completion_lock to complete polled
>>>> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
>>>> to complete polled io; otherwise there may be concurrent access to the
>>>> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>>>> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>>>> this issue, but Pavel reported that with IOPOLL, requests other than rw
>>>> can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
>>>> IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
>>>>
>>>> Given that io_iopoll_complete() is always called under uring_lock, for
>>>> polled io we can also take uring_lock here to fix this issue.
>>>
>>> This returns it to the state it was in before the fix, plus mutex locking
>>> for IOPOLL, and that's much better than having it half-broken as it is now.
>>
>> btw, comments are over 80, but that's minor.
> 
> I fixed that up, but I don't particularly like how 'req' is used after
> calling complete. How about the below variant - same as before, just
> using the ctx instead to determine if we need to lock it or not.
It looks better, thanks.

Regards,
Xiaoguang Wang
> 
> 
> commit 253b60e7d8adcb980be91f77e64968a58d836b5e
> Author: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
> Date:   Mon Dec 14 23:49:41 2020 +0800
> 
>      io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
>      
>      io_iopoll_complete() does not hold completion_lock to complete polled io,
>      so in io_wq_submit_work() we cannot call io_req_complete() directly to
>      complete polled io; otherwise there may be concurrent access to the
>      cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>      ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>      this issue, but Pavel reported that with IOPOLL, requests other than rw
>      can do buffer reg/unreg (IORING_OP_PROVIDE_BUFFERS or
>      IORING_OP_REMOVE_BUFFERS), so that fix is not sufficient.
>      
>      Given that io_iopoll_complete() is always called under uring_lock, for
>      polled io we can also take uring_lock here to fix this issue.
>      
>      Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
>      Cc: <stable@vger.kernel.org> # 5.5+
>      Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
>      Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
>      [axboe: don't deref 'req' after completing it]
>      Signed-off-by: Jens Axboe <axboe@kernel.dk>
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index b27f61e3e0d6..0a8cf3fad955 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6332,19 +6332,28 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
>   	}
>   
>   	if (ret) {
> +		struct io_ring_ctx *lock_ctx = NULL;
> +
> +		if (req->ctx->flags & IORING_SETUP_IOPOLL)
> +			lock_ctx = req->ctx;
> +
>   		/*
> -		 * io_iopoll_complete() does not hold completion_lock to complete
> -		 * polled io, so here for polled io, just mark it done and still let
> -		 * io_iopoll_complete() complete it.
> +		 * io_iopoll_complete() does not hold completion_lock to
> +		 * complete polled io, so here for polled io, we cannot call
> +		 * io_req_complete() directly, otherwise there may be concurrent
> +		 * access to cqring, defer_list, etc, which is not safe. Given
> +		 * that io_iopoll_complete() is always called under uring_lock,
> +		 * for polled io we also take uring_lock here to complete
> +		 * it.
>   		 */
> -		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
> -			struct kiocb *kiocb = &req->rw.kiocb;
> +		if (lock_ctx)
> +			mutex_lock(&lock_ctx->uring_lock);
>   
> -			kiocb_done(kiocb, ret, NULL);
> -		} else {
> -			req_set_fail_links(req);
> -			io_req_complete(req, ret);
> -		}
> +		req_set_fail_links(req);
> +		io_req_complete(req, ret);
> +
> +		if (lock_ctx)
> +			mutex_unlock(&lock_ctx->uring_lock);
>   	}
>   
>   	return io_steal_work(req);
> 

