* [PATCH] io_uring: fix duplicated increase of cached_cq_overflow
@ 2019-11-15  9:37 Bob Liu
  2019-11-15  9:49 ` Pavel Begunkov
  0 siblings, 1 reply; 6+ messages in thread
From: Bob Liu @ 2019-11-15  9:37 UTC (permalink / raw)
  To: axboe; +Cc: io-uring, Bob Liu

cached_cq_overflow is already increased in
io_cqring_overflow_flush().

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 55f8b1d..eb23451 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -701,7 +701,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
 		WRITE_ONCE(cqe->flags, 0);
 	} else if (ctx->cq_overflow_flushed) {
 		WRITE_ONCE(ctx->rings->cq_overflow,
-				atomic_inc_return(&ctx->cached_cq_overflow));
+				atomic_read(&ctx->cached_cq_overflow));
 	} else {
 		refcount_inc(&req->refs);
 		req->result = res;
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] io_uring: fix duplicated increase of cached_cq_overflow
  2019-11-15  9:37 [PATCH] io_uring: fix duplicated increase of cached_cq_overflow Bob Liu
@ 2019-11-15  9:49 ` Pavel Begunkov
  2019-11-15 12:17   ` Bob Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Pavel Begunkov @ 2019-11-15  9:49 UTC (permalink / raw)
  To: Bob Liu, axboe; +Cc: io-uring

On 11/15/2019 12:37 PM, Bob Liu wrote:
> cached_cq_overflow already be increased in function
> io_cqring_overflow_flush().
> 
> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> ---
>  fs/io_uring.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 55f8b1d..eb23451 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -701,7 +701,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
>  		WRITE_ONCE(cqe->flags, 0);
>  	} else if (ctx->cq_overflow_flushed) {
>  		WRITE_ONCE(ctx->rings->cq_overflow,
> -				atomic_inc_return(&ctx->cached_cq_overflow));
> +				atomic_read(&ctx->cached_cq_overflow));

Not really. The request won't get into io_cqring_overflow_flush() if
this branch is executed. See, it's enqueued for overflow in the "else"
branch right below.

i.e. list_add_tail(&req->list, &ctx->cq_overflow_list);

>  	} else {
>  		refcount_inc(&req->refs);
>  		req->result = res;
> 

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix duplicated increase of cached_cq_overflow
  2019-11-15  9:49 ` Pavel Begunkov
@ 2019-11-15 12:17   ` Bob Liu
  2019-11-15 12:41     ` Pavel Begunkov
  0 siblings, 1 reply; 6+ messages in thread
From: Bob Liu @ 2019-11-15 12:17 UTC (permalink / raw)
  To: Pavel Begunkov, axboe; +Cc: io-uring

On 11/15/19 5:49 PM, Pavel Begunkov wrote:
> On 11/15/2019 12:37 PM, Bob Liu wrote:
>> cached_cq_overflow already be increased in function
>> io_cqring_overflow_flush().
>>
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  fs/io_uring.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 55f8b1d..eb23451 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -701,7 +701,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
>>  		WRITE_ONCE(cqe->flags, 0);
>>  	} else if (ctx->cq_overflow_flushed) {
>>  		WRITE_ONCE(ctx->rings->cq_overflow,
>> -				atomic_inc_return(&ctx->cached_cq_overflow));
>> +				atomic_read(&ctx->cached_cq_overflow));
> 
> Not really. It won't get into io_cqring_overflow_flush() if this branch
> is executed. 

io_cqring_overflow_flush(force=true) must have been called when this branch is executed,
since io_cqring_overflow_flush() is the only place that can set 'ctx->cq_overflow_flushed' to true.

And 'ctx->cached_cq_overflow' may already be increased in io_cqring_overflow_flush() if force is true and cqe==NULL.

static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
{
	...
        if (force)
                ctx->cq_overflow_flushed = true;
                
        while (!list_empty(&ctx->cq_overflow_list)) {
                cqe = io_get_cqring(ctx);
                if (!cqe && !force)
                        break;
                        
                req = list_first_entry(&ctx->cq_overflow_list, struct io_kiocb,
                                                list);
                list_move(&req->list, &list);   
                if (cqe) {
			...
                } else {
                        WRITE_ONCE(ctx->rings->cq_overflow,
                                atomic_inc_return(&ctx->cached_cq_overflow));
			  ^^^^^^^^^^^
			  ctx->cached_cq_overflow is increased if 'force=true' and 'cqe==NULL'.


Did I miss anything?

> See, it's enqueued for overflow in "else" right below.
> 
> i.e. list_add_tail(&req->list, &ctx->cq_overflow_list);
> 
>>  	} else {
>>  		refcount_inc(&req->refs);
>>  		req->result = res;
>>



* Re: [PATCH] io_uring: fix duplicated increase of cached_cq_overflow
  2019-11-15 12:17   ` Bob Liu
@ 2019-11-15 12:41     ` Pavel Begunkov
  2019-11-15 13:10       ` Bob Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Pavel Begunkov @ 2019-11-15 12:41 UTC (permalink / raw)
  To: Bob Liu, axboe; +Cc: io-uring

On 11/15/2019 3:17 PM, Bob Liu wrote:
> On 11/15/19 5:49 PM, Pavel Begunkov wrote:
>> On 11/15/2019 12:37 PM, Bob Liu wrote:
>>> cached_cq_overflow already be increased in function
>>> io_cqring_overflow_flush().
>>>
>>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>>> ---
>>>  fs/io_uring.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index 55f8b1d..eb23451 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -701,7 +701,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
>>>  		WRITE_ONCE(cqe->flags, 0);
>>>  	} else if (ctx->cq_overflow_flushed) {
>>>  		WRITE_ONCE(ctx->rings->cq_overflow,
>>> -				atomic_inc_return(&ctx->cached_cq_overflow));
>>> +				atomic_read(&ctx->cached_cq_overflow));
>>
>> Not really. It won't get into io_cqring_overflow_flush() if this branch
>> is executed. 
> 
> io_cqring_overflow_flush(force=true) must have been called when this branch is executed,
> since io_cqring_overflow_flush() is the only place can set 'ctx->cq_overflow_flushed' to true.
> 
Yes, it should have been called, but it sets this flag for the future
users of io_cqring_fill_event(), so any _new_ requests in
io_cqring_fill_event() will overflow instead of being added to
@overflow_list.

Either a request completes/overflows in io_cqring_fill_event(),
or it would be added to @overflow_list to be processed by
io_cqring_overflow_flush().
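
To illustrate this invariant, here is a minimal user-space sketch (the
names `ring_model` and `fill_event` are made up for illustration; this
is not the kernel's code). Each request takes exactly one of the three
paths of io_cqring_fill_event(), so the increment here and the one in
io_cqring_overflow_flush() can never both fire for the same request:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model of the CQ ring state, not the real structures. */
struct ring_model {
	int free_cqes;                  /* CQE slots io_get_cqring() could hand out */
	bool cq_overflow_flushed;       /* set once a flush has been forced */
	atomic_uint cached_cq_overflow; /* kernel-side overflow counter */
	int overflow_list_len;          /* requests parked on cq_overflow_list */
};

enum fill_result { FILLED, OVERFLOWED, DEFERRED };

/* Models the branch structure of io_cqring_fill_event(): a request
 * either gets a CQE, is counted as overflowed right away (because a
 * forced flush already happened), or is deferred to cq_overflow_list
 * so that a later flush will count it instead. */
static enum fill_result fill_event(struct ring_model *r)
{
	if (r->free_cqes > 0) {
		r->free_cqes--;         /* post the completion normally */
		return FILLED;
	} else if (r->cq_overflow_flushed) {
		atomic_fetch_add(&r->cached_cq_overflow, 1);
		return OVERFLOWED;      /* the only increment this request sees */
	} else {
		r->overflow_list_len++; /* the flush will increment for it later */
		return DEFERRED;
	}
}
```

Running requests through a full ring shows that a given request is
counted either here or in the deferred flush, never in both places.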


> And 'ctx->cached_cq_overflow' may already be increased in io_cqring_overflow_flush() if force is true and cqe==NULL.
> 
> static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
> {
> 	...
>         if (force)
>                 ctx->cq_overflow_flushed = true;
>                 
>         while (!list_empty(&ctx->cq_overflow_list)) {
>                 cqe = io_get_cqring(ctx);
>                 if (!cqe && !force)
>                         break;
>                         
>                 req = list_first_entry(&ctx->cq_overflow_list, struct io_kiocb,
>                                                 list);
>                 list_move(&req->list, &list);   
>                 if (cqe) {
> 			...
>                 } else {
>                         WRITE_ONCE(ctx->rings->cq_overflow,
>                                 atomic_inc_return(&ctx->cached_cq_overflow));
> 			  ^^^^^^^^^^^
> 			  ctx->cached_cq_overflow is increased if 'force=true' and 'ceq==NULL'.
> 
> 
> Did I miss anything?
> 
>> See, it's enqueued for overflow in "else" right below.
>>
>> i.e. list_add_tail(&req->list, &ctx->cq_overflow_list);
>>
>>>  	} else {
>>>  		refcount_inc(&req->refs);
>>>  		req->result = res;
>>>
> 


* Re: [PATCH] io_uring: fix duplicated increase of cached_cq_overflow
  2019-11-15 12:41     ` Pavel Begunkov
@ 2019-11-15 13:10       ` Bob Liu
  2019-11-15 16:35         ` Pavel Begunkov
  0 siblings, 1 reply; 6+ messages in thread
From: Bob Liu @ 2019-11-15 13:10 UTC (permalink / raw)
  To: Pavel Begunkov, axboe; +Cc: io-uring

On 11/15/19 8:41 PM, Pavel Begunkov wrote:
> On 11/15/2019 3:17 PM, Bob Liu wrote:
>> On 11/15/19 5:49 PM, Pavel Begunkov wrote:
>>> On 11/15/2019 12:37 PM, Bob Liu wrote:
>>>> cached_cq_overflow already be increased in function
>>>> io_cqring_overflow_flush().
>>>>
>>>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>>>> ---
>>>>  fs/io_uring.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>>> index 55f8b1d..eb23451 100644
>>>> --- a/fs/io_uring.c
>>>> +++ b/fs/io_uring.c
>>>> @@ -701,7 +701,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
>>>>  		WRITE_ONCE(cqe->flags, 0);
>>>>  	} else if (ctx->cq_overflow_flushed) {
>>>>  		WRITE_ONCE(ctx->rings->cq_overflow,
>>>> -				atomic_inc_return(&ctx->cached_cq_overflow));
>>>> +				atomic_read(&ctx->cached_cq_overflow));
>>>
>>> Not really. It won't get into io_cqring_overflow_flush() if this branch
>>> is executed. 
>>
>> io_cqring_overflow_flush(force=true) must have been called when this branch is executed,
>> since io_cqring_overflow_flush() is the only place can set 'ctx->cq_overflow_flushed' to true.
>>
> Yes, it should have been called, but it sets this flag for the future
> users of io_cqring_fill_event(), so any _new_ requests in
> io_cqring_fill_event() will overflow instead of being added to
> @overflow_list.
> 

Oh, I see. Thanks for the kind explanation.

> Either a request completes/overflows in io_cqring_fill_event(),
> or it would be added to @overflow_list to be processed by
> io_cqring_overflow_flush().
> 
> 
>> And 'ctx->cached_cq_overflow' may already be increased in io_cqring_overflow_flush() if force is true and cqe==NULL.
>>
>> static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
>> {
>> 	...
>>         if (force)
>>                 ctx->cq_overflow_flushed = true;
>>                 
>>         while (!list_empty(&ctx->cq_overflow_list)) {
>>                 cqe = io_get_cqring(ctx);
>>                 if (!cqe && !force)
>>                         break;
>>                         
>>                 req = list_first_entry(&ctx->cq_overflow_list, struct io_kiocb,
>>                                                 list);
>>                 list_move(&req->list, &list);   
>>                 if (cqe) {
>> 			...
>>                 } else {
>>                         WRITE_ONCE(ctx->rings->cq_overflow,
>>                                 atomic_inc_return(&ctx->cached_cq_overflow));
>> 			  ^^^^^^^^^^^
>> 			  ctx->cached_cq_overflow is increased if 'force=true' and 'ceq==NULL'.
>>
>>
>> Did I miss anything?
>>
>>> See, it's enqueued for overflow in "else" right below.
>>>
>>> i.e. list_add_tail(&req->list, &ctx->cq_overflow_list);
>>>
>>>>  	} else {
>>>>  		refcount_inc(&req->refs);
>>>>  		req->result = res;
>>>>
>>



* Re: [PATCH] io_uring: fix duplicated increase of cached_cq_overflow
  2019-11-15 13:10       ` Bob Liu
@ 2019-11-15 16:35         ` Pavel Begunkov
  0 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2019-11-15 16:35 UTC (permalink / raw)
  To: Bob Liu, axboe; +Cc: io-uring



On 15/11/2019 16:10, Bob Liu wrote:
>>> io_cqring_overflow_flush(force=true) must have been called when this branch is executed,
>>> since io_cqring_overflow_flush() is the only place can set 'ctx->cq_overflow_flushed' to true.
>>>
>> Yes, it should have been called, but it sets this flag for the future
>> users of io_cqring_fill_event(), so any _new_ requests in
>> io_cqring_fill_event() will overflow instead of being added to
>> @overflow_list.
>>
> 
> Oh, I see..Thanks for the kindly explanation.
> 
Sure, no problem

-- 
Pavel Begunkov




end of thread, other threads:[~2019-11-15 16:36 UTC | newest]

Thread overview: 6+ messages
2019-11-15  9:37 [PATCH] io_uring: fix duplicated increase of cached_cq_overflow Bob Liu
2019-11-15  9:49 ` Pavel Begunkov
2019-11-15 12:17   ` Bob Liu
2019-11-15 12:41     ` Pavel Begunkov
2019-11-15 13:10       ` Bob Liu
2019-11-15 16:35         ` Pavel Begunkov
