* [PATCH v2] ceph: ensure we take snap_empty_lock atomically with snaprealm refcount change
From: Jeff Layton @ 2021-08-04 15:55 UTC
  To: ceph-devel; +Cc: idryomov, lhenriques, stable, Sage Weil, Mark Nelson

There is a race in ceph_put_snap_realm. The change to the nref and the
spinlock acquisition are not done atomically, so you could decrement nref,
and before you take the spinlock, the nref is incremented again. At that
point, you end up putting it on the empty list when it shouldn't be
there. Eventually __cleanup_empty_realms runs and frees it when it's
still in-use.

Fix this by protecting the 1->0 transition with atomic_dec_and_lock, and
just drop the spinlock if we can get the rwsem.

Because these objects can also undergo a 0->1 refcount transition, we
must protect that change as well with the spinlock. Increment locklessly
unless the value is at 0, in which case we take the spinlock, increment
and then take it off the empty list if it did the 0->1 transition.

With these changes, I'm removing the dout() messages from these
functions, as well as in __put_snap_realm. They've always been racy, and
it's better to not print values that may be misleading.

Cc: stable@vger.kernel.org
Cc: Sage Weil <sage@redhat.com>
Reported-by: Mark Nelson <mnelson@redhat.com>
URL: https://tracker.ceph.com/issues/46419
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/ceph/snap.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

v2: No functional changes, but I cleaned up the comments a bit and
    added another in __put_snap_realm.
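
To make the window concrete, here is the old put path annotated with
where the race bites (context reconstructed from the pre-patch
function; comments added for illustration only):

	/* old ceph_put_snap_realm() */
	if (!atomic_dec_and_test(&realm->nref))		/* nref 1 -> 0 */
		return;
	/*
	 * <-- window: another task, holding snap_rwsem, can call
	 *     ceph_get_snap_realm() here and bump nref back to 1
	 */
	if (down_write_trylock(&mdsc->snap_rwsem)) {
		__destroy_snap_realm(mdsc, realm);
		up_write(&mdsc->snap_rwsem);
	} else {
		spin_lock(&mdsc->snap_empty_lock);
		/* realm lands on the empty list even though it is in use */
		list_add(&realm->empty_item, &mdsc->snap_empty);
		spin_unlock(&mdsc->snap_empty_lock);
	}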

diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
index 9dbc92cfda38..158c11e96fb7 100644
--- a/fs/ceph/snap.c
+++ b/fs/ceph/snap.c
@@ -67,19 +67,19 @@ void ceph_get_snap_realm(struct ceph_mds_client *mdsc,
 {
 	lockdep_assert_held(&mdsc->snap_rwsem);
 
-	dout("get_realm %p %d -> %d\n", realm,
-	     atomic_read(&realm->nref), atomic_read(&realm->nref)+1);
 	/*
-	 * since we _only_ increment realm refs or empty the empty
-	 * list with snap_rwsem held, adjusting the empty list here is
-	 * safe.  we do need to protect against concurrent empty list
-	 * additions, however.
+	 * The 0->1 and 1->0 transitions must take the snap_empty_lock
+	 * atomically with the refcount change. Go ahead and bump the
+	 * nref here, unless it's 0, in which case we take the spinlock
+	 * and then do the increment and remove it from the list.
 	 */
-	if (atomic_inc_return(&realm->nref) == 1) {
-		spin_lock(&mdsc->snap_empty_lock);
+	if (atomic_add_unless(&realm->nref, 1, 0))
+		return;
+
+	spin_lock(&mdsc->snap_empty_lock);
+	if (atomic_inc_return(&realm->nref) == 1)
 		list_del_init(&realm->empty_item);
-		spin_unlock(&mdsc->snap_empty_lock);
-	}
+	spin_unlock(&mdsc->snap_empty_lock);
 }
 
 static void __insert_snap_realm(struct rb_root *root,
@@ -208,28 +208,28 @@ static void __put_snap_realm(struct ceph_mds_client *mdsc,
 {
 	lockdep_assert_held_write(&mdsc->snap_rwsem);
 
-	dout("__put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
-	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
+	/*
+	 * We do not require the snap_empty_lock here, as any caller that
+	 * increments the value must hold the snap_rwsem.
+	 */
 	if (atomic_dec_and_test(&realm->nref))
 		__destroy_snap_realm(mdsc, realm);
 }
 
 /*
- * caller needn't hold any locks
+ * See comments in ceph_get_snap_realm. Caller needn't hold any locks.
  */
 void ceph_put_snap_realm(struct ceph_mds_client *mdsc,
 			 struct ceph_snap_realm *realm)
 {
-	dout("put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
-	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
-	if (!atomic_dec_and_test(&realm->nref))
+	if (!atomic_dec_and_lock(&realm->nref, &mdsc->snap_empty_lock))
 		return;
 
 	if (down_write_trylock(&mdsc->snap_rwsem)) {
+		spin_unlock(&mdsc->snap_empty_lock);
 		__destroy_snap_realm(mdsc, realm);
 		up_write(&mdsc->snap_rwsem);
 	} else {
-		spin_lock(&mdsc->snap_empty_lock);
 		list_add(&realm->empty_item, &mdsc->snap_empty);
 		spin_unlock(&mdsc->snap_empty_lock);
 	}
-- 
2.31.1



* Re: [PATCH v2] ceph: ensure we take snap_empty_lock atomically with snaprealm refcount change
From: Luis Henriques @ 2021-08-04 16:26 UTC
  To: Jeff Layton; +Cc: ceph-devel, idryomov, stable, Sage Weil, Mark Nelson

Jeff Layton <jlayton@kernel.org> writes:

> There is a race in ceph_put_snap_realm. The change to the nref and the
> spinlock acquisition are not done atomically, so you could decrement nref,
> and before you take the spinlock, the nref is incremented again. At that
> point, you end up putting it on the empty list when it shouldn't be
> there. Eventually __cleanup_empty_realms runs and frees it when it's
> still in-use.
>
> Fix this by protecting the 1->0 transition with atomic_dec_and_lock, and
> just drop the spinlock if we can get the rwsem.
>
> Because these objects can also undergo a 0->1 refcount transition, we
> must protect that change as well with the spinlock. Increment locklessly
> unless the value is at 0, in which case we take the spinlock, increment
> and then take it off the empty list if it did the 0->1 transition.
>
> With these changes, I'm removing the dout() messages from these
> functions, as well as in __put_snap_realm. They've always been racy, and
> it's better to not print values that may be misleading.
>
> Cc: stable@vger.kernel.org
> Cc: Sage Weil <sage@redhat.com>
> Reported-by: Mark Nelson <mnelson@redhat.com>
> URL: https://tracker.ceph.com/issues/46419
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>  fs/ceph/snap.c | 34 +++++++++++++++++-----------------
>  1 file changed, 17 insertions(+), 17 deletions(-)
>
> v2: No functional changes, but I cleaned up the comments a bit and
>     added another in __put_snap_realm.
>
> diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
> index 9dbc92cfda38..158c11e96fb7 100644
> --- a/fs/ceph/snap.c
> +++ b/fs/ceph/snap.c
> @@ -67,19 +67,19 @@ void ceph_get_snap_realm(struct ceph_mds_client *mdsc,
>  {
>  	lockdep_assert_held(&mdsc->snap_rwsem);
>  
> -	dout("get_realm %p %d -> %d\n", realm,
> -	     atomic_read(&realm->nref), atomic_read(&realm->nref)+1);
>  	/*
> -	 * since we _only_ increment realm refs or empty the empty
> -	 * list with snap_rwsem held, adjusting the empty list here is
> -	 * safe.  we do need to protect against concurrent empty list
> -	 * additions, however.
> +	 * The 0->1 and 1->0 transitions must take the snap_empty_lock
> +	 * atomically with the refcount change. Go ahead and bump the
> +	 * nref here, unless it's 0, in which case we take the spinlock
> +	 * and then do the increment and remove it from the list.
>  	 */
> -	if (atomic_inc_return(&realm->nref) == 1) {
> -		spin_lock(&mdsc->snap_empty_lock);
> +	if (atomic_add_unless(&realm->nref, 1, 0))

Here you could probably use atomic_inc_not_zero() instead.  But other
than that it looks good.  Thanks a lot for solving yet another locking
puzzle!
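
For reference, the generic fallback in the kernel's atomic headers
makes the two spellings the same operation, so this would be purely a
readability change; roughly:

	static inline bool atomic_inc_not_zero(atomic_t *v)
	{
		return atomic_add_unless(v, 1, 0);
	}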

Reviewed-by: Luis Henriques <lhenriques@suse.de>

Cheers,
-- 
Luis

> +		return;
> +
> +	spin_lock(&mdsc->snap_empty_lock);
> +	if (atomic_inc_return(&realm->nref) == 1)
>  		list_del_init(&realm->empty_item);
> -		spin_unlock(&mdsc->snap_empty_lock);
> -	}
> +	spin_unlock(&mdsc->snap_empty_lock);
>  }
>  
>  static void __insert_snap_realm(struct rb_root *root,
> @@ -208,28 +208,28 @@ static void __put_snap_realm(struct ceph_mds_client *mdsc,
>  {
>  	lockdep_assert_held_write(&mdsc->snap_rwsem);
>  
> -	dout("__put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
> -	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
> +	/*
> +	 * We do not require the snap_empty_lock here, as any caller that
> +	 * increments the value must hold the snap_rwsem.
> +	 */
>  	if (atomic_dec_and_test(&realm->nref))
>  		__destroy_snap_realm(mdsc, realm);
>  }
>  
>  /*
> - * caller needn't hold any locks
> + * See comments in ceph_get_snap_realm. Caller needn't hold any locks.
>   */
>  void ceph_put_snap_realm(struct ceph_mds_client *mdsc,
>  			 struct ceph_snap_realm *realm)
>  {
> -	dout("put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
> -	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
> -	if (!atomic_dec_and_test(&realm->nref))
> +	if (!atomic_dec_and_lock(&realm->nref, &mdsc->snap_empty_lock))
>  		return;
>  
>  	if (down_write_trylock(&mdsc->snap_rwsem)) {
> +		spin_unlock(&mdsc->snap_empty_lock);
>  		__destroy_snap_realm(mdsc, realm);
>  		up_write(&mdsc->snap_rwsem);
>  	} else {
> -		spin_lock(&mdsc->snap_empty_lock);
>  		list_add(&realm->empty_item, &mdsc->snap_empty);
>  		spin_unlock(&mdsc->snap_empty_lock);
>  	}
> -- 
>
> 2.31.1
>



* Re: [PATCH v2] ceph: ensure we take snap_empty_lock atomically with snaprealm refcount change
From: Jeff Layton @ 2021-08-04 16:32 UTC
  To: Luis Henriques; +Cc: ceph-devel, idryomov, stable, Sage Weil, Mark Nelson

On Wed, 2021-08-04 at 17:26 +0100, Luis Henriques wrote:
> Jeff Layton <jlayton@kernel.org> writes:
> 
> > There is a race in ceph_put_snap_realm. The change to the nref and the
> > spinlock acquisition are not done atomically, so you could decrement nref,
> > and before you take the spinlock, the nref is incremented again. At that
> > point, you end up putting it on the empty list when it shouldn't be
> > there. Eventually __cleanup_empty_realms runs and frees it when it's
> > still in-use.
> > 
> > Fix this by protecting the 1->0 transition with atomic_dec_and_lock, and
> > just drop the spinlock if we can get the rwsem.
> > 
> > Because these objects can also undergo a 0->1 refcount transition, we
> > must protect that change as well with the spinlock. Increment locklessly
> > unless the value is at 0, in which case we take the spinlock, increment
> > and then take it off the empty list if it did the 0->1 transition.
> > 
> > With these changes, I'm removing the dout() messages from these
> > functions, as well as in __put_snap_realm. They've always been racy, and
> > it's better to not print values that may be misleading.
> > 
> > Cc: stable@vger.kernel.org
> > Cc: Sage Weil <sage@redhat.com>
> > Reported-by: Mark Nelson <mnelson@redhat.com>
> > URL: https://tracker.ceph.com/issues/46419
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> >  fs/ceph/snap.c | 34 +++++++++++++++++-----------------
> >  1 file changed, 17 insertions(+), 17 deletions(-)
> > 
> > v2: No functional changes, but I cleaned up the comments a bit and
> >     added another in __put_snap_realm.
> > 
> > diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
> > index 9dbc92cfda38..158c11e96fb7 100644
> > --- a/fs/ceph/snap.c
> > +++ b/fs/ceph/snap.c
> > @@ -67,19 +67,19 @@ void ceph_get_snap_realm(struct ceph_mds_client *mdsc,
> >  {
> >  	lockdep_assert_held(&mdsc->snap_rwsem);
> >  
> > -	dout("get_realm %p %d -> %d\n", realm,
> > -	     atomic_read(&realm->nref), atomic_read(&realm->nref)+1);
> >  	/*
> > -	 * since we _only_ increment realm refs or empty the empty
> > -	 * list with snap_rwsem held, adjusting the empty list here is
> > -	 * safe.  we do need to protect against concurrent empty list
> > -	 * additions, however.
> > +	 * The 0->1 and 1->0 transitions must take the snap_empty_lock
> > +	 * atomically with the refcount change. Go ahead and bump the
> > +	 * nref here, unless it's 0, in which case we take the spinlock
> > +	 * and then do the increment and remove it from the list.
> >  	 */
> > -	if (atomic_inc_return(&realm->nref) == 1) {
> > -		spin_lock(&mdsc->snap_empty_lock);
> > +	if (atomic_add_unless(&realm->nref, 1, 0))
> 
> Here you could probably use atomic_inc_not_zero() instead.  But other
> than that it looks good.  Thanks a lot for solving yet another locking
> puzzle!
> 
> Reviewed-by: Luis Henriques <lhenriques@suse.de>
> 
> Cheers,

Good point! That is a little clearer. I'll incorporate that change and
merge it.
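
With that change folded in, the fast path presumably ends up reading:

	if (atomic_inc_not_zero(&realm->nref))
		return;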

Thanks,
-- 
Jeff Layton <jlayton@kernel.org>


