* [PATCH 1/3] ceph: fix potential mdsc use-after-free crash
@ 2020-07-01  5:52 xiubli
  2020-07-01 11:17 ` Jeff Layton
  0 siblings, 1 reply; 5+ messages in thread
From: xiubli @ 2020-07-01  5:52 UTC (permalink / raw)
  To: jlayton; +Cc: idryomov, zyan, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

Make sure the delayed work has stopped before releasing the resources.

cancel_delayed_work_sync() only guarantees that the work finishes
executing if it is already on the ->worklist. If the work re-arms
itself while it is running, it can be left requeued after
cancel_delayed_work_sync() returns. If we then release the resources
before that requeued work runs again, we hit a use-after-free bug.

URL: https://tracker.ceph.com/issues/46293
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/mds_client.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index d5e523c..9a09d12 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -4330,6 +4330,9 @@ static void delayed_work(struct work_struct *work)
 
 	dout("mdsc delayed_work\n");
 
+	if (mdsc->stopping)
+		return;
+
 	mutex_lock(&mdsc->mutex);
 	renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
 	renew_caps = time_after_eq(jiffies, HZ*renew_interval +
@@ -4689,7 +4692,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
 static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
 {
 	dout("stop\n");
-	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
+	/*
+	 * Make sure the delayed work stopped before releasing
+	 * the resources.
+	 *
+	 * Because the cancel_delayed_work_sync() will only
+	 * guarantee that the work finishes executing. But the
+	 * delayed work will re-arm itself again after that.
+	 */
+	flush_delayed_work(&mdsc->delayed_work);
+
 	if (mdsc->mdsmap)
 		ceph_mdsmap_destroy(mdsc->mdsmap);
 	kfree(mdsc->sessions);
-- 
1.8.3.1
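
For readers less familiar with the workqueue pattern this patch depends on,
here is a minimal, hypothetical sketch (not the ceph code; names such as
demo_client and demo_delayed_work are made up for illustration) of a delayed
work item that re-arms itself, a stopping flag checked at the top of the
handler, and flush_delayed_work() in the teardown path, mirroring the
approach taken in the patch above.

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_client {
	struct delayed_work delayed_work;
	bool stopping;
	int *resource;		/* stands in for the mdsc resources */
};

static struct demo_client *client;

static void demo_delayed_work(struct work_struct *work)
{
	struct demo_client *c =
		container_of(work, struct demo_client, delayed_work.work);

	/* Bail out instead of re-arming once teardown has started. */
	if (c->stopping)
		return;

	pr_info("demo: resource = %d\n", *c->resource);

	/* Re-arm: this is what makes a plain cancel racy at teardown. */
	schedule_delayed_work(&c->delayed_work, 5 * HZ);
}

static int __init demo_init(void)
{
	client = kzalloc(sizeof(*client), GFP_KERNEL);
	if (!client)
		return -ENOMEM;

	client->resource = kzalloc(sizeof(*client->resource), GFP_KERNEL);
	if (!client->resource) {
		kfree(client);
		return -ENOMEM;
	}

	INIT_DELAYED_WORK(&client->delayed_work, demo_delayed_work);
	schedule_delayed_work(&client->delayed_work, 5 * HZ);
	return 0;
}

static void __exit demo_exit(void)
{
	/* Stop future re-arms, then wait for any pending/running instance. */
	client->stopping = true;
	flush_delayed_work(&client->delayed_work);

	/* Only now is it safe to free what the work handler dereferences. */
	kfree(client->resource);
	kfree(client);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Self-rearming delayed work teardown sketch");

The ordering matters: the flag is set before the flush, so a run that is
still queued sees ->stopping and returns without re-arming. That parallels
the stopping check and the flush added by the hunks above (in the real ceph
code, mdsc->stopping is set earlier in the unmount path rather than in
ceph_mdsc_stop() itself).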


* Re: [PATCH 1/3] ceph: fix potential mdsc use-after-free crash
  2020-07-01  5:52 [PATCH 1/3] ceph: fix potential mdsc use-after-free crash xiubli
@ 2020-07-01 11:17 ` Jeff Layton
  2020-07-01 11:25   ` Xiubo Li
  0 siblings, 1 reply; 5+ messages in thread
From: Jeff Layton @ 2020-07-01 11:17 UTC (permalink / raw)
  To: xiubli; +Cc: idryomov, zyan, pdonnell, ceph-devel

On Wed, 2020-07-01 at 01:52 -0400, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> Make sure the delayed work has stopped before releasing the resources.
> 
> cancel_delayed_work_sync() only guarantees that the work finishes
> executing if it is already on the ->worklist. If the work re-arms
> itself while it is running, it can be left requeued after
> cancel_delayed_work_sync() returns. If we then release the resources
> before that requeued work runs again, we hit a use-after-free bug.
> 
> URL: https://tracker.ceph.com/issues/46293
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/mds_client.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index d5e523c..9a09d12 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -4330,6 +4330,9 @@ static void delayed_work(struct work_struct *work)
>  
>  	dout("mdsc delayed_work\n");
>  
> +	if (mdsc->stopping)
> +		return;
> +
>  	mutex_lock(&mdsc->mutex);
>  	renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
>  	renew_caps = time_after_eq(jiffies, HZ*renew_interval +
> @@ -4689,7 +4692,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
>  static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
>  {
>  	dout("stop\n");
> -	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
> +	/*
> +	 * Make sure the delayed work stopped before releasing
> +	 * the resources.
> +	 *
> +	 * Because the cancel_delayed_work_sync() will only
> +	 * guarantee that the work finishes executing. But the
> +	 * delayed work will re-arm itself again after that.
> +	 */
> +	flush_delayed_work(&mdsc->delayed_work);
> +
>  	if (mdsc->mdsmap)
>  		ceph_mdsmap_destroy(mdsc->mdsmap);
>  	kfree(mdsc->sessions);

This patch looks fine, but the subject says [PATCH 1/3]. Were there
others in this series that didn't make it to the list for some reason?

Thanks,
-- 
Jeff Layton <jlayton@kernel.org>


* Re: [PATCH 1/3] ceph: fix potential mdsc use-after-free crash
  2020-07-01 11:17 ` Jeff Layton
@ 2020-07-01 11:25   ` Xiubo Li
  2020-07-01 11:55     ` Jeff Layton
  0 siblings, 1 reply; 5+ messages in thread
From: Xiubo Li @ 2020-07-01 11:25 UTC (permalink / raw)
  To: Jeff Layton; +Cc: idryomov, zyan, pdonnell, ceph-devel

On 2020/7/1 19:17, Jeff Layton wrote:
> On Wed, 2020-07-01 at 01:52 -0400, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> Make sure the delayed work has stopped before releasing the resources.
>>
>> cancel_delayed_work_sync() only guarantees that the work finishes
>> executing if it is already on the ->worklist. If the work re-arms
>> itself while it is running, it can be left requeued after
>> cancel_delayed_work_sync() returns. If we then release the resources
>> before that requeued work runs again, we hit a use-after-free bug.
>>
>> URL: https://tracker.ceph.com/issues/46293
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/mds_client.c | 14 +++++++++++++-
>>   1 file changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>> index d5e523c..9a09d12 100644
>> --- a/fs/ceph/mds_client.c
>> +++ b/fs/ceph/mds_client.c
>> @@ -4330,6 +4330,9 @@ static void delayed_work(struct work_struct *work)
>>   
>>   	dout("mdsc delayed_work\n");
>>   
>> +	if (mdsc->stopping)
>> +		return;
>> +
>>   	mutex_lock(&mdsc->mutex);
>>   	renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
>>   	renew_caps = time_after_eq(jiffies, HZ*renew_interval +
>> @@ -4689,7 +4692,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
>>   static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
>>   {
>>   	dout("stop\n");
>> -	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
>> +	/*
>> +	 * Make sure the delayed work stopped before releasing
>> +	 * the resources.
>> +	 *
>> +	 * Because the cancel_delayed_work_sync() will only
>> +	 * guarantee that the work finishes executing. But the
>> +	 * delayed work will re-arm itself again after that.
>> +	 */
>> +	flush_delayed_work(&mdsc->delayed_work);
>> +
>>   	if (mdsc->mdsmap)
>>   		ceph_mdsmap_destroy(mdsc->mdsmap);
>>   	kfree(mdsc->sessions);
> This patch looks fine, but the subject says [PATCH 1/3]. Were there
> others in this series that didn't make it to the list for some reason?

Sorry for the confusion.

I generated this patch together with the metrics series and forgot to
fix the subject before sending it out.

BRs

Xiubo

> Thanks,


* Re: [PATCH 1/3] ceph: fix potential mdsc use-after-free crash
  2020-07-01 11:25   ` Xiubo Li
@ 2020-07-01 11:55     ` Jeff Layton
  2020-07-01 11:57       ` Xiubo Li
  0 siblings, 1 reply; 5+ messages in thread
From: Jeff Layton @ 2020-07-01 11:55 UTC (permalink / raw)
  To: Xiubo Li; +Cc: idryomov, zyan, pdonnell, ceph-devel

On Wed, 2020-07-01 at 19:25 +0800, Xiubo Li wrote:
> On 2020/7/1 19:17, Jeff Layton wrote:
> > On Wed, 2020-07-01 at 01:52 -0400, xiubli@redhat.com wrote:
> > > From: Xiubo Li <xiubli@redhat.com>
> > > 
> > > Make sure the delayed work has stopped before releasing the resources.
> > >
> > > cancel_delayed_work_sync() only guarantees that the work finishes
> > > executing if it is already on the ->worklist. If the work re-arms
> > > itself while it is running, it can be left requeued after
> > > cancel_delayed_work_sync() returns. If we then release the resources
> > > before that requeued work runs again, we hit a use-after-free bug.
> > > 
> > > URL: https://tracker.ceph.com/issues/46293
> > > Signed-off-by: Xiubo Li <xiubli@redhat.com>
> > > ---
> > >   fs/ceph/mds_client.c | 14 +++++++++++++-
> > >   1 file changed, 13 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > > index d5e523c..9a09d12 100644
> > > --- a/fs/ceph/mds_client.c
> > > +++ b/fs/ceph/mds_client.c
> > > @@ -4330,6 +4330,9 @@ static void delayed_work(struct work_struct *work)
> > >   
> > >   	dout("mdsc delayed_work\n");
> > >   
> > > +	if (mdsc->stopping)
> > > +		return;
> > > +
> > >   	mutex_lock(&mdsc->mutex);
> > >   	renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
> > >   	renew_caps = time_after_eq(jiffies, HZ*renew_interval +
> > > @@ -4689,7 +4692,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
> > >   static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
> > >   {
> > >   	dout("stop\n");
> > > -	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
> > > +	/*
> > > +	 * Make sure the delayed work stopped before releasing
> > > +	 * the resources.
> > > +	 *
> > > +	 * Because the cancel_delayed_work_sync() will only
> > > +	 * guarantee that the work finishes executing. But the
> > > +	 * delayed work will re-arm itself again after that.
> > > +	 */
> > > +	flush_delayed_work(&mdsc->delayed_work);
> > > +
> > >   	if (mdsc->mdsmap)
> > >   		ceph_mdsmap_destroy(mdsc->mdsmap);
> > >   	kfree(mdsc->sessions);
> > This patch looks fine, but the subject says [PATCH 1/3]. Were there
> > others in this series that didn't make it to the list for some reason?
> 
> Sorry for the confusion.
> 
> I generated this patch together with the metrics series and forgot to
> fix the subject before sending it out.
> 
> 

No worries, just making sure before I merged it. I've merged the patch
into the testing branch.

Thanks!
-- 
Jeff Layton <jlayton@kernel.org>


* Re: [PATCH 1/3] ceph: fix potential mdsc use-after-free crash
  2020-07-01 11:55     ` Jeff Layton
@ 2020-07-01 11:57       ` Xiubo Li
  0 siblings, 0 replies; 5+ messages in thread
From: Xiubo Li @ 2020-07-01 11:57 UTC (permalink / raw)
  To: Jeff Layton; +Cc: idryomov, zyan, pdonnell, ceph-devel

On 2020/7/1 19:55, Jeff Layton wrote:
> On Wed, 2020-07-01 at 19:25 +0800, Xiubo Li wrote:
>> On 2020/7/1 19:17, Jeff Layton wrote:
>>> On Wed, 2020-07-01 at 01:52 -0400, xiubli@redhat.com wrote:
>>>> From: Xiubo Li <xiubli@redhat.com>
>>>>
>>>> Make sure the delayed work has stopped before releasing the resources.
>>>>
>>>> cancel_delayed_work_sync() only guarantees that the work finishes
>>>> executing if it is already on the ->worklist. If the work re-arms
>>>> itself while it is running, it can be left requeued after
>>>> cancel_delayed_work_sync() returns. If we then release the resources
>>>> before that requeued work runs again, we hit a use-after-free bug.
>>>>
>>>> URL: https://tracker.ceph.com/issues/46293
>>>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>>>> ---
>>>>    fs/ceph/mds_client.c | 14 +++++++++++++-
>>>>    1 file changed, 13 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>>>> index d5e523c..9a09d12 100644
>>>> --- a/fs/ceph/mds_client.c
>>>> +++ b/fs/ceph/mds_client.c
>>>> @@ -4330,6 +4330,9 @@ static void delayed_work(struct work_struct *work)
>>>>    
>>>>    	dout("mdsc delayed_work\n");
>>>>    
>>>> +	if (mdsc->stopping)
>>>> +		return;
>>>> +
>>>>    	mutex_lock(&mdsc->mutex);
>>>>    	renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
>>>>    	renew_caps = time_after_eq(jiffies, HZ*renew_interval +
>>>> @@ -4689,7 +4692,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
>>>>    static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
>>>>    {
>>>>    	dout("stop\n");
>>>> -	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
>>>> +	/*
>>>> +	 * Make sure the delayed work stopped before releasing
>>>> +	 * the resources.
>>>> +	 *
>>>> +	 * Because the cancel_delayed_work_sync() will only
>>>> +	 * guarantee that the work finishes executing. But the
>>>> +	 * delayed work will re-arm itself again after that.
>>>> +	 */
>>>> +	flush_delayed_work(&mdsc->delayed_work);
>>>> +
>>>>    	if (mdsc->mdsmap)
>>>>    		ceph_mdsmap_destroy(mdsc->mdsmap);
>>>>    	kfree(mdsc->sessions);
>>> This patch looks fine, but the subject says [PATCH 1/3]. Were there
>>> others in this series that didn't make it to the list for some reason?
>> Sorry for the confusion.
>>
>> I generated this patch together with the metrics series and forgot to
>> fix the subject before sending it out.
>>
>>
> No worries, just making sure before I merged it. I've merged the patch
> into the testing branch.

Thanks Jeff.

BRs


>
> Thanks!


end of thread (newest: 2020-07-01 11:57 UTC)

Thread overview: 5 messages
2020-07-01  5:52 [PATCH 1/3] ceph: fix potential mdsc use-after-free crash xiubli
2020-07-01 11:17 ` Jeff Layton
2020-07-01 11:25   ` Xiubo Li
2020-07-01 11:55     ` Jeff Layton
2020-07-01 11:57       ` Xiubo Li
