linux-kernel.vger.kernel.org archive mirror
* [PATCH-next] block: fix null-deref in percpu_ref_put
@ 2022-12-06  9:09 Zhong Jinghua
  2022-12-07  1:05 ` Dennis Zhou
  0 siblings, 1 reply; 5+ messages in thread
From: Zhong Jinghua @ 2022-12-06  9:09 UTC (permalink / raw)
  To: dennis, tj, cl; +Cc: linux-mm, linux-kernel, zhongjinghua, yi.zhang, yukuai3

A problem was found in stable 5.10, and its root cause is described below.

For request_queue's q_usage_counter, blk_cleanup_queue() uses
"wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter))"
to wait for q_usage_counter to reach zero. However, if q_usage_counter
drops to zero quickly, percpu_ref_exit() will run and ref->data will be
freed, and another process may then trigger a null-deref, as shown
below:

	CPU0                             CPU1
blk_mq_destroy_queue
 blk_freeze_queue
  blk_mq_freeze_queue_wait
				scsi_end_request
				 percpu_ref_get
				 ...
				 percpu_ref_put
				  atomic_long_sub_and_test
 blk_put_queue
  kobject_put
   kref_put
    blk_release_queue
     percpu_ref_exit
      ref->data -> NULL
   				   ref->data->release(ref) -> null-deref

As suggested by Ming Lei, fix it by reading the release method before
the reference count is decremented to zero.

Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Zhong Jinghua <zhongjinghua@huawei.com>
---
 include/linux/percpu-refcount.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index d73a1c08c3e3..11e717c95acb 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -331,8 +331,11 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
-	else if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
-		ref->data->release(ref);
+	else {
+		percpu_ref_func_t *release = ref->data->release;
+		if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
+			release(ref);
+	}
 
 	rcu_read_unlock();
 }
-- 
2.31.1



* Re: [PATCH-next] block: fix null-deref in percpu_ref_put
  2022-12-06  9:09 [PATCH-next] block: fix null-deref in percpu_ref_put Zhong Jinghua
@ 2022-12-07  1:05 ` Dennis Zhou
  2022-12-07 13:10   ` Yu Kuai
  2022-12-08  8:55   ` Ming Lei
  0 siblings, 2 replies; 5+ messages in thread
From: Dennis Zhou @ 2022-12-07  1:05 UTC (permalink / raw)
  To: Zhong Jinghua; +Cc: tj, cl, linux-mm, linux-kernel, yi.zhang, yukuai3

Hello,

On Tue, Dec 06, 2022 at 05:09:39PM +0800, Zhong Jinghua wrote:
> A problem was found in stable 5.10, and its root cause is described below.
> 
> For request_queue's q_usage_counter, blk_cleanup_queue() uses
> "wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter))"
> to wait for q_usage_counter to reach zero. However, if q_usage_counter
> drops to zero quickly, percpu_ref_exit() will run and ref->data will be
> freed, and another process may then trigger a null-deref, as shown
> below:
> 
> 	CPU0                             CPU1
> blk_mq_destroy_queue
>  blk_freeze_queue
>   blk_mq_freeze_queue_wait
> 				scsi_end_request
> 				 percpu_ref_get
> 				 ...
> 				 percpu_ref_put
> 				  atomic_long_sub_and_test
>  blk_put_queue
>   kobject_put
>    kref_put
>     blk_release_queue
>      percpu_ref_exit
>       ref->data -> NULL
>    				   ref->data->release(ref) -> null-deref
> 

I remember thinking about this a while ago. I don't think this fix works
as nicely as it may seem. Please correct me if I'm wrong.

q->q_usage_counter has the oddity that the lifetime of the percpu_ref
object isn't managed by the release function. The freeing is handled by
a separate path where it depends on the percpu_ref hitting 0. So here we
have 2 concurrent paths racing to run with 1 destroying the object. We
probably need blk_release_queue() to wait on percpu_ref's release
finishing, not starting.

I think the above works in this specific case because there is a
call_rcu() in blk_release_queue(). If there wasn't a call_rcu(),
then by the same logic we could delay ref->data->release(ref) further
and that could potentially lead to a use after free.

Ideally, I think fixing the race in q->q_usage_counter's pattern is
better than masking it here as I think we're being saved by the
call_rcu() call further down the object release path.

Thanks,
Dennis

> As suggested by Ming Lei, fix it by reading the release method before
> the reference count is decremented to zero.
> 
> Suggested-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Zhong Jinghua <zhongjinghua@huawei.com>
> ---
>  include/linux/percpu-refcount.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
> index d73a1c08c3e3..11e717c95acb 100644
> --- a/include/linux/percpu-refcount.h
> +++ b/include/linux/percpu-refcount.h
> @@ -331,8 +331,11 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
>  
>  	if (__ref_is_percpu(ref, &percpu_count))
>  		this_cpu_sub(*percpu_count, nr);
> -	else if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
> -		ref->data->release(ref);
> +	else {
> +		percpu_ref_func_t *release = ref->data->release;
> +		if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
> +			release(ref);
> +	}
>  
>  	rcu_read_unlock();
>  }
> -- 
> 2.31.1
> 


* Re: [PATCH-next] block: fix null-deref in percpu_ref_put
  2022-12-07  1:05 ` Dennis Zhou
@ 2022-12-07 13:10   ` Yu Kuai
  2022-12-08  8:55   ` Ming Lei
  1 sibling, 0 replies; 5+ messages in thread
From: Yu Kuai @ 2022-12-07 13:10 UTC (permalink / raw)
  To: Dennis Zhou, Zhong Jinghua, Ming Lei
  Cc: tj, cl, linux-mm, linux-kernel, yi.zhang, yukuai (C)

Hi,

On 2022/12/07 9:05, Dennis Zhou wrote:
> Hello,
> 
> On Tue, Dec 06, 2022 at 05:09:39PM +0800, Zhong Jinghua wrote:
>> A problem was found in stable 5.10, and its root cause is described below.
>>
>> For request_queue's q_usage_counter, blk_cleanup_queue() uses
>> "wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter))"
>> to wait for q_usage_counter to reach zero. However, if q_usage_counter
>> drops to zero quickly, percpu_ref_exit() will run and ref->data will be
>> freed, and another process may then trigger a null-deref, as shown
>> below:
>>
>> 	CPU0                             CPU1
>> blk_mq_destroy_queue
>>   blk_freeze_queue
>>    blk_mq_freeze_queue_wait
>> 				scsi_end_request
>> 				 percpu_ref_get
>> 				 ...
>> 				 percpu_ref_put
>> 				  atomic_long_sub_and_test
>>   blk_put_queue
>>    kobject_put
>>     kref_put
>>      blk_release_queue
>>       percpu_ref_exit
>>        ref->data -> NULL
>>     				   ref->data->release(ref) -> null-deref
>>
> 
> I remember thinking about this a while ago. I don't think this fix works
> as nicely as it may seem. Please correct me if I'm wrong.
> 
> q->q_usage_counter has the oddity that the lifetime of the percpu_ref
> object isn't managed by the release function. The freeing is handled by
> a separate path where it depends on the percpu_ref hitting 0. So here we
> have 2 concurrent paths racing to run with 1 destroying the object. We
> probably need blk_release_queue() to wait on percpu_ref's release
> finishing, not starting.
> 
> I think the above works in this specific case because there is a
> call_rcu() in blk_release_queue(). If there wasn't a call_rcu(),
> then by the same logic we could delay ref->data->release(ref) further
> and that could potentially lead to a use after free.
> 
> Ideally, I think fixing the race in q->q_usage_counter's pattern is
> better than masking it here as I think we're being saved by the
> call_rcu() call further down the object release path.

Agree.

BTW, Wensheng used to send a patch to fix this in block layer:

https://www.spinics.net/lists/kernel/msg4615696.html.

Thanks,
Kuai



* Re: [PATCH-next] block: fix null-deref in percpu_ref_put
  2022-12-07  1:05 ` Dennis Zhou
  2022-12-07 13:10   ` Yu Kuai
@ 2022-12-08  8:55   ` Ming Lei
  1 sibling, 0 replies; 5+ messages in thread
From: Ming Lei @ 2022-12-08  8:55 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: Zhong Jinghua, tj, cl, linux-mm, linux-kernel, yi.zhang, yukuai3,
	Ming Lei

On Wed, Dec 7, 2022 at 9:08 AM Dennis Zhou <dennis@kernel.org> wrote:
>
> Hello,
>
> On Tue, Dec 06, 2022 at 05:09:39PM +0800, Zhong Jinghua wrote:
> > A problem was found in stable 5.10, and its root cause is described below.
> >
> > For request_queue's q_usage_counter, blk_cleanup_queue() uses
> > "wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter))"
> > to wait for q_usage_counter to reach zero. However, if q_usage_counter
> > drops to zero quickly, percpu_ref_exit() will run and ref->data will be
> > freed, and another process may then trigger a null-deref, as shown
> > below:
> >
> >       CPU0                             CPU1
> > blk_mq_destroy_queue
> >  blk_freeze_queue
> >   blk_mq_freeze_queue_wait
> >                               scsi_end_request
> >                                percpu_ref_get
> >                                ...
> >                                percpu_ref_put
> >                                 atomic_long_sub_and_test
> >  blk_put_queue
> >   kobject_put
> >    kref_put
> >     blk_release_queue
> >      percpu_ref_exit
> >       ref->data -> NULL
> >                                  ref->data->release(ref) -> null-deref
> >
>
> I remember thinking about this a while ago. I don't think this fix works
> as nicely as it may seem. Please correct me if I'm wrong.
>
> q->q_usage_counter has the oddity that the lifetime of the percpu_ref
> object isn't managed by the release function. The freeing is handled by
> a separate path where it depends on the percpu_ref hitting 0. So here we
> have 2 concurrent paths racing to run with 1 destroying the object. We
> probably need blk_release_queue() to wait on percpu_ref's release
> finishing, not starting.
>
> I think the above works in this specific case because there is a
> call_rcu() in blk_release_queue(). If there wasn't a call_rcu(),
> then by the same logic we could delay ref->data->release(ref) further
> and that could potentially lead to a use after free.
>
> Ideally, I think fixing the race in q->q_usage_counter's pattern is
> better than masking it here as I think we're being saved by the
> call_rcu() call further down the object release path.

The problem is actually in percpu_ref_is_zero(), which can return true
before ->release() is called, so any wait_event(percpu_ref_is_zero())
pattern carries this risk.

It may not be easy to fix the issue cleanly in the block layer, but it
can be solved simply in percpu-refcount by adding a ->release_lock
(spinlock) to the counter so that percpu_ref_exit() can drain
atomic_long_sub_and_test() & ->release(). Or simply reuse
percpu_ref_switch_lock.

Thanks,
Ming



* [PATCH-next] block: fix null-deref in percpu_ref_put
@ 2022-12-06  9:02 Zhong Jinghua
  0 siblings, 0 replies; 5+ messages in thread
From: Zhong Jinghua @ 2022-12-06  9:02 UTC (permalink / raw)
  To: richard, ext-adrian.hunter, Artem.Bityutskiy
  Cc: linux-mtd, linux-kernel, chengzhihao1, yukuai3

A problem was found in stable 5.10, and its root cause is described below.

For request_queue's q_usage_counter, blk_cleanup_queue() uses
"wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter))"
to wait for q_usage_counter to reach zero. However, if q_usage_counter
drops to zero quickly, percpu_ref_exit() will run and ref->data will be
freed, and another process may then trigger a null-deref, as shown
below:

	CPU0                             CPU1
blk_mq_destroy_queue
 blk_freeze_queue
  blk_mq_freeze_queue_wait
				scsi_end_request
				 percpu_ref_get
				 ...
				 percpu_ref_put
				  atomic_long_sub_and_test
 blk_put_queue
  kobject_put
   kref_put
    blk_release_queue
     percpu_ref_exit
      ref->data -> NULL
   				   ref->data->release(ref) -> null-deref

As suggested by Ming Lei, fix it by reading the release method before
the reference count is decremented to zero.

Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Zhong Jinghua <zhongjinghua@huawei.com>
---
 include/linux/percpu-refcount.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index d73a1c08c3e3..11e717c95acb 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -331,8 +331,11 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
-	else if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
-		ref->data->release(ref);
+	else {
+		percpu_ref_func_t *release = ref->data->release;
+		if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
+			release(ref);
+	}
 
 	rcu_read_unlock();
 }
-- 
2.31.1



