* [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock
@ 2020-09-15  9:11 Pradeep P V K
  2020-09-15 10:09 ` Ming Lei
  0 siblings, 1 reply; 5+ messages in thread
From: Pradeep P V K @ 2020-09-15  9:11 UTC (permalink / raw)
  To: axboe, ming.lei; +Cc: linux-block, stummala, sayalil, Pradeep P V K

The below crash is observed when the lock member of bfq data is
accessed after it has been freed (use-after-free).

context#1			context#2
				process_one_work()
kthread()			blk_mq_run_work_fn()
worker_thread()			 ->__blk_mq_run_hw_queue()
process_one_work()		  ->blk_mq_sched_dispatch_requests()
__blk_release_queue()		    ->blk_mq_do_dispatch_sched()
->__elevator_exit()
  ->blk_mq_exit_sched()
    ->exit_sched()
      ->kfree()
      					->bfq_dispatch_request()
					  ->spin_unlock_irq(&bfqd->lock)

This happens because the kblockd delayed work may get scheduled
around blk_release_queue() and then access already-freed members of
bfq_data.

240.212359:   <2> Unable to handle kernel paging request at
virtual address ffffffee2e33ad70
...
240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
...
Call trace:
240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
240.212963:   <2>  process_one_work+0x280/0x460
240.212973:   <2>  worker_thread+0x27c/0x4dc
240.212986:   <2>  kthread+0x160/0x170
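
For illustration, here is the same pattern in a minimal, self-contained
module sketch (hypothetical demo_* names only; this is not the actual
block-layer code):

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct demo_data {
	spinlock_t lock;		/* plays the role of bfqd->lock */
	struct delayed_work run_work;	/* plays the role of hctx->run_work */
};

static struct demo_data *demo;

static void demo_work_fn(struct work_struct *work)
{
	struct demo_data *d = container_of(to_delayed_work(work),
					   struct demo_data, run_work);

	spin_lock_irq(&d->lock);	/* UAF if teardown already freed d */
	/* ... dispatch ... */
	spin_unlock_irq(&d->lock);
}

static int __init demo_init(void)
{
	demo = kzalloc(sizeof(*demo), GFP_KERNEL);
	if (!demo)
		return -ENOMEM;
	spin_lock_init(&demo->lock);
	INIT_DELAYED_WORK(&demo->run_work, demo_work_fn);
	schedule_delayed_work(&demo->run_work, HZ);
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * The fix: make sure the work is neither pending nor running
	 * before freeing the object it dereferences. Dropping this
	 * call reproduces the use-after-free shown above.
	 */
	cancel_delayed_work_sync(&demo->run_work);
	kfree(demo);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");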

Fix this by cancelling the delayed work, if any, before the elevator exits.

Changes since V1:
- Moved the logic into the blk_cleanup_queue() path as per Ming's comments.

Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
---
 block/blk-mq.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4abb714..890fded 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
 			break;
 		blk_mq_debugfs_unregister_hctx(hctx);
 		blk_mq_exit_hctx(q, set, hctx, i);
+		cancel_delayed_work_sync(&hctx->run_work);
 	}
 }
 
-- 
2.7.4



* Re: [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock
  2020-09-15  9:11 [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock Pradeep P V K
@ 2020-09-15 10:09 ` Ming Lei
  2020-09-15 12:20   ` ppvk
  0 siblings, 1 reply; 5+ messages in thread
From: Ming Lei @ 2020-09-15 10:09 UTC (permalink / raw)
  To: Pradeep P V K; +Cc: axboe, linux-block, stummala, sayalil

On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
> The below crash is observed when the lock member of bfq data is
> accessed after it has been freed (use-after-free).
> 
> context#1			context#2
> 				process_one_work()
> kthread()			blk_mq_run_work_fn()
> worker_thread()			 ->__blk_mq_run_hw_queue()
> process_one_work()		  ->blk_mq_sched_dispatch_requests()
> __blk_release_queue()		    ->blk_mq_do_dispatch_sched()

I just noticed that __blk_release_queue() was killed in the v5.9 cycle.

> ->__elevator_exit()
>   ->blk_mq_exit_sched()
>     ->exit_sched()
>       ->kfree()
>       					->bfq_dispatch_request()
> 					  ->spin_unlock_irq(&bfqd->lock)

Actually, I am not sure the above race is easy to trigger on a recent
kernel, because we do call cancel_delayed_work_sync() in
blk_mq_hw_sysfs_release(), which is usually called before
__elevator_exit() from blk_exit_queue()/blk_release_queue().

So can you share your kernel version in which the issue is reproduced?
And can you reproduce this issue on v5.8 or v5.9-rc5?

> 
> This happens because the kblockd delayed work may get scheduled
> around blk_release_queue() and then access already-freed members of
> bfq_data.
> 
> 240.212359:   <2> Unable to handle kernel paging request at
> virtual address ffffffee2e33ad70
> ...
> 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
> 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
> 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
> 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
> ...
> Call trace:
> 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
> 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
> 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
> 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
> 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
> 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
> 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
> 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
> 240.212963:   <2>  process_one_work+0x280/0x460
> 240.212973:   <2>  worker_thread+0x27c/0x4dc
> 240.212986:   <2>  kthread+0x160/0x170
> 
> Fix this by cancelling the delayed work, if any, before the elevator exits.
> 
> Changes since V1:
> - Moved the logic into the blk_cleanup_queue() path as per Ming's comments.
> 
> Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
> ---
>  block/blk-mq.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 4abb714..890fded 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
>  			break;
>  		blk_mq_debugfs_unregister_hctx(hctx);
>  		blk_mq_exit_hctx(q, set, hctx, i);
> +		cancel_delayed_work_sync(&hctx->run_work);
>  	}
>  }

It would be better to move cancel_delayed_work_sync() into
blk_mq_exit_hctx(), right before adding the hctx to the unused list.
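
For example, something like this (a paraphrased sketch against ~v5.4
blk-mq.c; the elided body and the unused_hctx_list handling are from
memory and may not match every tree exactly):

static void blk_mq_exit_hctx(struct request_queue *q,
		struct blk_mq_tag_set *set,
		struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
{
	/*
	 * ... idle the tags, call ->exit_request()/->exit_hctx() and
	 * unregister the cpuhp callbacks, as the function already does ...
	 */

	/* proposed: ensure run_work is neither pending nor running */
	cancel_delayed_work_sync(&hctx->run_work);

	spin_lock(&q->unused_hctx_lock);
	list_add(&hctx->hctx_list, &q->unused_hctx_list);
	spin_unlock(&q->unused_hctx_lock);
}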


Thanks,
Ming



* Re: [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock
  2020-09-15 10:09 ` Ming Lei
@ 2020-09-15 12:20   ` ppvk
  2020-09-15 12:41     ` Ming Lei
  0 siblings, 1 reply; 5+ messages in thread
From: ppvk @ 2020-09-15 12:20 UTC (permalink / raw)
  To: Ming Lei; +Cc: axboe, linux-block, stummala, sayalil

On 2020-09-15 15:39, Ming Lei wrote:
> On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
>> The below crash is observed when the lock member of bfq data is
>> accessed after it has been freed (use-after-free).
>> 
>> context#1			context#2
>> 				process_one_work()
>> kthread()			blk_mq_run_work_fn()
>> worker_thread()			 ->__blk_mq_run_hw_queue()
>> process_one_work()		  ->blk_mq_sched_dispatch_requests()
>> __blk_release_queue()		    ->blk_mq_do_dispatch_sched()
> 
> I just noticed that __blk_release_queue() was killed in the v5.9 cycle.
> 
Yes, on v5.9 blk_release_queue() is called directly via q->kobj when
the request_queue refcount drops to zero, whereas on older kernel
versions (< 5.9) blk_release_queue() schedules a work item to invoke
__blk_release_queue().
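
Roughly like this (paraphrased from v5.4 block/blk-sysfs.c; details may
differ between stable trees):

static void blk_release_queue(struct kobject *kobj)
{
	struct request_queue *q =
		container_of(kobj, struct request_queue, kobj);

	INIT_WORK(&q->release_work, __blk_release_queue);
	schedule_work(&q->release_work);
}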

>> ->__elevator_exit()
>>   ->blk_mq_exit_sched()
>>     ->exit_sched()
>>       ->kfree()
>>       					->bfq_dispatch_request()
>> 					  ->spin_unlock_irq(&bfqd->lock)
> 
> Actually, I am not sure the above race is easy to trigger on a recent
> kernel, because we do call cancel_delayed_work_sync() in
> blk_mq_hw_sysfs_release(), which is usually called before
> __elevator_exit() from blk_exit_queue()/blk_release_queue().
> 
blk_mq_hw_sysfs_release() is called from blk_mq_release(), i.e. via
kobject_put(&hctx->kobj), which happens after __elevator_exit().

__elevator_exit() is called from blk_exit_queue(), which runs prior to
blk_mq_release().
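
So the v5.4 teardown order is roughly (paraphrased sketch; intermediate
steps elided):

static void __blk_release_queue(struct work_struct *work)
{
	struct request_queue *q =
		container_of(work, struct request_queue, release_work);

	/* ... stats and requeue-work teardown ... */

	blk_exit_queue(q);	/* -> __elevator_exit() -> kfree(bfqd) */

	/* ... */

	blk_mq_release(q);	/* -> kobject_put(&hctx->kobj)
				 * -> blk_mq_hw_sysfs_release()
				 * -> cancel_delayed_work_sync(&hctx->run_work)
				 * i.e. the cancel runs only after bfqd
				 * has already been freed
				 */
}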

> So can you share your kernel version in which the issue is reproduced?
> And can you reproduce this issue on v5.8 or v5.9-rc5?
> 
This issue is seen on v5.4 stable, and it is very easy to reproduce
there. Sorry, I don't have a setup with v5.8 or the latest kernel, but
I can help get it tested on v5.4. From the issue's perspective, both
the v5.4 kernel and the latest kernels call blk_mq_release() ->
blk_mq_hw_sysfs_release() after __elevator_exit(), so I think it won't
matter much here.

>> 
>> This happens because the kblockd delayed work may get scheduled
>> around blk_release_queue() and then access already-freed members of
>> bfq_data.
>> 
>> 240.212359:   <2> Unable to handle kernel paging request at
>> virtual address ffffffee2e33ad70
>> ...
>> 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
>> 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
>> 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
>> 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
>> ...
>> Call trace:
>> 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
>> 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
>> 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
>> 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
>> 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
>> 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
>> 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
>> 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
>> 240.212963:   <2>  process_one_work+0x280/0x460
>> 240.212973:   <2>  worker_thread+0x27c/0x4dc
>> 240.212986:   <2>  kthread+0x160/0x170
>> 
>> Fix this by cancelling the delayed work, if any, before the elevator exits.
>> 
>> Changes since V1:
>> - Moved the logic into the blk_cleanup_queue() path as per Ming's comments.
>> 
>> Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
>> ---
>>  block/blk-mq.c | 1 +
>>  1 file changed, 1 insertion(+)
>> 
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index 4abb714..890fded 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct 
>> request_queue *q,
>>  			break;
>>  		blk_mq_debugfs_unregister_hctx(hctx);
>>  		blk_mq_exit_hctx(q, set, hctx, i);
>> +		cancel_delayed_work_sync(&hctx->run_work);
>>  	}
>>  }
> 
> It would be better to move cancel_delayed_work_sync() into
> blk_mq_exit_hctx(), right before adding the hctx to the unused list.
> 
Sure, I will do it in my next patch series.
> 
> Thanks,
> Ming

Thanks and Regards,
Pradeep


* Re: [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock
  2020-09-15 12:20   ` ppvk
@ 2020-09-15 12:41     ` Ming Lei
  2020-09-15 13:59       ` ppvk
  0 siblings, 1 reply; 5+ messages in thread
From: Ming Lei @ 2020-09-15 12:41 UTC (permalink / raw)
  To: ppvk; +Cc: axboe, linux-block, stummala, sayalil

On Tue, Sep 15, 2020 at 05:50:33PM +0530, ppvk@codeaurora.org wrote:
> On 2020-09-15 15:39, Ming Lei wrote:
> > On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
> > > The below crash is observed when the lock member of bfq data is
> > > accessed after it has been freed (use-after-free).
> > > 
> > > context#1			context#2
> > > 				process_one_work()
> > > kthread()			blk_mq_run_work_fn()
> > > worker_thread()			 ->__blk_mq_run_hw_queue()
> > > process_one_work()		  ->blk_mq_sched_dispatch_requests()
> > > __blk_release_queue()		    ->blk_mq_do_dispatch_sched()
> > 
> > I just noticed that __blk_release_queue() was killed in the v5.9 cycle.
> > 
> Yes, on v5.9 blk_release_queue() is called directly via q->kobj when
> the request_queue refcount drops to zero, whereas on older kernel
> versions (< 5.9) blk_release_queue() schedules a work item to invoke
> __blk_release_queue().
> 
> > > ->__elevator_exit()
> > >   ->blk_mq_exit_sched()
> > >     ->exit_sched()
> > >       ->kfree()
> > >       					->bfq_dispatch_request()
> > > 					  ->spin_unlock_irq(&bfqd->lock)
> > 
> > Actually, I am not sure the above race is easy to trigger on a
> > recent kernel, because we do call cancel_delayed_work_sync() in
> > blk_mq_hw_sysfs_release(), which is usually called before
> > __elevator_exit() from blk_exit_queue()/blk_release_queue().
> > 
> blk_mq_hw_sysfs_release() is called from blk_mq_release(), i.e. via
> kobject_put(&hctx->kobj), which happens after __elevator_exit().
> 
> __elevator_exit() is called from blk_exit_queue(), which runs prior to
> blk_mq_release().
> 
> > So can you share your kernel version in which the issue is reproduced?
> > And can you reproduce this issue on v5.8 or v5.9-rc5?
> > 
> This issue is seen on v5.4 stable, and it is very easy to reproduce
> there. Sorry, I don't have a setup with v5.8 or the latest kernel, but
> I can help get it tested on v5.4. From the issue's perspective, both
> the v5.4 kernel and the latest kernels call blk_mq_release() ->
> blk_mq_hw_sysfs_release() after __elevator_exit(), so I think it won't
> matter much here.
> 
> > > 
> > > This happens because the kblockd delayed work may get scheduled
> > > around blk_release_queue() and then access already-freed members
> > > of bfq_data.
> > > 
> > > 240.212359:   <2> Unable to handle kernel paging request at
> > > virtual address ffffffee2e33ad70
> > > ...
> > > 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
> > > 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
> > > 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
> > > 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
> > > ...
> > > Call trace:
> > > 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
> > > 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
> > > 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
> > > 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
> > > 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
> > > 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
> > > 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
> > > 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
> > > 240.212963:   <2>  process_one_work+0x280/0x460
> > > 240.212973:   <2>  worker_thread+0x27c/0x4dc
> > > 240.212986:   <2>  kthread+0x160/0x170
> > > 
> > > Fix this by cancelling the delayed work, if any, before the elevator exits.
> > > 
> > > Changes since V1:
> > > - Moved the logic into the blk_cleanup_queue() path as per Ming's comments.
> > > 
> > > Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
> > > ---
> > >  block/blk-mq.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > > 
> > > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > > index 4abb714..890fded 100644
> > > --- a/block/blk-mq.c
> > > +++ b/block/blk-mq.c
> > > @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct
> > > request_queue *q,
> > >  			break;
> > >  		blk_mq_debugfs_unregister_hctx(hctx);
> > >  		blk_mq_exit_hctx(q, set, hctx, i);
> > > +		cancel_delayed_work_sync(&hctx->run_work);
> > >  	}
> > >  }
> > 
> > It would be better to move cancel_delayed_work_sync() into
> > blk_mq_exit_hctx(), right before adding the hctx to the unused list.
> > 
> Sure, I will do it in my next patch series.

Thinking about it further, it looks like your first post is right,
because the hw queue won't be run once the refcount of the queue drops
to zero.

Sorry for the noise.

thanks,
Ming



* Re: [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock
  2020-09-15 12:41     ` Ming Lei
@ 2020-09-15 13:59       ` ppvk
  0 siblings, 0 replies; 5+ messages in thread
From: ppvk @ 2020-09-15 13:59 UTC (permalink / raw)
  To: Ming Lei; +Cc: axboe, linux-block, stummala, sayalil

On 2020-09-15 18:11, Ming Lei wrote:
> On Tue, Sep 15, 2020 at 05:50:33PM +0530, ppvk@codeaurora.org wrote:
>> On 2020-09-15 15:39, Ming Lei wrote:
>> > On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
>> > > The below crash is observed when the lock member of bfq data is
>> > > accessed after it has been freed (use-after-free).
>> > >
>> > > context#1			context#2
>> > > 				process_one_work()
>> > > kthread()			blk_mq_run_work_fn()
>> > > worker_thread()			 ->__blk_mq_run_hw_queue()
>> > > process_one_work()		  ->blk_mq_sched_dispatch_requests()
>> > > __blk_release_queue()		    ->blk_mq_do_dispatch_sched()
>> >
>> > I just noticed that __blk_release_queue() was killed in the v5.9 cycle.
>> >
>> Yes, on v5.9 blk_release_queue() is called directly via q->kobj when
>> the request_queue refcount drops to zero, whereas on older kernel
>> versions (< 5.9) blk_release_queue() schedules a work item to invoke
>> __blk_release_queue().
>> 
>> > > ->__elevator_exit()
>> > >   ->blk_mq_exit_sched()
>> > >     ->exit_sched()
>> > >       ->kfree()
>> > >       					->bfq_dispatch_request()
>> > > 					  ->spin_unlock_irq(&bfqd->lock)
>> >
>> > Actually, I am not sure the above race is easy to trigger on a
>> > recent kernel, because we do call cancel_delayed_work_sync() in
>> > blk_mq_hw_sysfs_release(), which is usually called before
>> > __elevator_exit() from blk_exit_queue()/blk_release_queue().
>> >
>> blk_mq_hw_sysfs_release() is called from blk_mq_release(), i.e. via
>> kobject_put(&hctx->kobj), which happens after __elevator_exit().
>> 
>> __elevator_exit() is called from blk_exit_queue(), which runs prior
>> to blk_mq_release().
>> 
>> > So can you share your kernel version in which the issue is reproduced?
>> > And can you reproduce this issue on v5.8 or v5.9-rc5?
>> >
>> This issue is seen on v5.4 stable, and it is very easy to reproduce
>> there. Sorry, I don't have a setup with v5.8 or the latest kernel,
>> but I can help get it tested on v5.4. From the issue's perspective,
>> both the v5.4 kernel and the latest kernels call blk_mq_release() ->
>> blk_mq_hw_sysfs_release() after __elevator_exit(), so I think it
>> won't matter much here.
>> 
>> > >
>> > > This happens because the kblockd delayed work may get scheduled
>> > > around blk_release_queue() and then access already-freed members
>> > > of bfq_data.
>> > >
>> > > 240.212359:   <2> Unable to handle kernel paging request at
>> > > virtual address ffffffee2e33ad70
>> > > ...
>> > > 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
>> > > 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
>> > > 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
>> > > 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
>> > > ...
>> > > Call trace:
>> > > 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
>> > > 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
>> > > 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
>> > > 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
>> > > 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
>> > > 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
>> > > 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
>> > > 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
>> > > 240.212963:   <2>  process_one_work+0x280/0x460
>> > > 240.212973:   <2>  worker_thread+0x27c/0x4dc
>> > > 240.212986:   <2>  kthread+0x160/0x170
>> > >
>> > > Fix this by cancelling the delayed work, if any, before the elevator exits.
>> > >
>> > > Changes since V1:
>> > > - Moved the logic into the blk_cleanup_queue() path as per Ming's comments.
>> > >
>> > > Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
>> > > ---
>> > >  block/blk-mq.c | 1 +
>> > >  1 file changed, 1 insertion(+)
>> > >
>> > > diff --git a/block/blk-mq.c b/block/blk-mq.c
>> > > index 4abb714..890fded 100644
>> > > --- a/block/blk-mq.c
>> > > +++ b/block/blk-mq.c
>> > > @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct
>> > > request_queue *q,
>> > >  			break;
>> > >  		blk_mq_debugfs_unregister_hctx(hctx);
>> > >  		blk_mq_exit_hctx(q, set, hctx, i);
>> > > +		cancel_delayed_work_sync(&hctx->run_work);
>> > >  	}
>> > >  }
>> >
>> > It would be better to move cancel_delayed_work_sync() into
>> > blk_mq_exit_hctx(), right before adding the hctx to the unused list.
>> >
>> Sure, I will do it in my next patch series.
> 
> Thinking about it further, it looks like your first post is right,
> because the hw queue won't be run once the refcount of the queue
> drops to zero.
> 
OK. Could you please add your Reviewed-by tag to my V1 patch?
> Sorry for the noise.
> 
Never mind, it's OK.

> thanks,
> Ming

Thanks and Regards,
Pradeep



end of thread

Thread overview: 5 messages
2020-09-15  9:11 [PATCH V2] block: Fix use-after-free issue while accessing ioscheduler lock Pradeep P V K
2020-09-15 10:09 ` Ming Lei
2020-09-15 12:20   ` ppvk
2020-09-15 12:41     ` Ming Lei
2020-09-15 13:59       ` ppvk
