From: Elena Reshetova <elena.reshetova@intel.com>
To: peterz@infradead.org
Cc: linux-kernel@vger.kernel.org, gregkh@linuxfoundation.org,
keescook@chromium.org, tglx@linutronix.de, mingo@redhat.com,
ishkamiel@gmail.com, Elena Reshetova <elena.reshetova@intel.com>
Subject: [PATCH] refcount: provide same memory ordering guarantees as in atomic_t
Date: Mon, 23 Oct 2017 14:09:44 +0300
Message-ID: <1508756984-18980-1-git-send-email-elena.reshetova@intel.com>

Currently the arch-independent implementation of refcount_t in
lib/refcount.c provides weaker memory ordering guarantees
than its analogous atomic_t implementations.
While this should not be a problem for most actual
uses of refcounters, it is easier for everyone to understand
(and less error-prone for future users) if we provide exactly
the same memory ordering guarantees as atomics.
If speed is a concern, then either a more efficient arch-
dependent refcount_t implementation should be used, or, if there
are enough users in the future, we might need to provide both
strict and relaxed refcount_t APIs.
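The difference the patch removes can be illustrated with a standalone
C11 sketch of the refcount_add_not_zero() loop (this is an analogy
using <stdatomic.h>, not the kernel's atomic API; the function name
here is hypothetical):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <limits.h>

/* Hypothetical sketch of refcount_add_not_zero() semantics.
 * The pre-patch code used a relaxed CAS; the patch switches to the
 * fully ordered form, matching what plain atomic_t operations give. */
static bool sketch_add_not_zero(atomic_uint *refs, unsigned int i)
{
	unsigned int val = atomic_load_explicit(refs, memory_order_relaxed);
	unsigned int new;

	do {
		if (!val)
			return false;   /* refcount already hit zero */
		new = val + i;
		if (new < val)
			new = UINT_MAX; /* saturate rather than wrap */
		/* Relaxed variant (old behaviour) would be:
		 *   atomic_compare_exchange_weak_explicit(refs, &val, new,
		 *       memory_order_relaxed, memory_order_relaxed);
		 * Fully ordered variant (this patch), seq_cst on success: */
	} while (!atomic_compare_exchange_weak(refs, &val, new));

	return true;
}
```

With the relaxed variant, only a control dependency orders later stores
against the increment; with the fully ordered variant the CAS itself
provides the ordering, so callers need no extra reasoning beyond what
they already apply to atomic_t.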
Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
---
lib/refcount.c | 71 +++++-----------------------------------------------------
1 file changed, 6 insertions(+), 65 deletions(-)
diff --git a/lib/refcount.c b/lib/refcount.c
index 5d0582a..cc6946e 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -8,29 +8,7 @@
* there. This avoids wrapping the counter and causing 'spurious'
* use-after-free issues.
*
- * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
- * and provide only what is strictly required for refcounts.
- *
- * The increments are fully relaxed; these will not provide ordering. The
- * rationale is that whatever is used to obtain the object we're increasing the
- * reference count on will provide the ordering. For locked data structures,
- * its the lock acquire, for RCU/lockless data structures its the dependent
- * load.
- *
- * Do note that inc_not_zero() provides a control dependency which will order
- * future stores against the inc, this ensures we'll never modify the object
- * if we did not in fact acquire a reference.
- *
- * The decrements will provide release order, such that all the prior loads and
- * stores will be issued before, it also provides a control dependency, which
- * will order us against the subsequent free().
- *
- * The control dependency is against the load of the cmpxchg (ll/sc) that
- * succeeded. This means the stores aren't fully ordered, but this is fine
- * because the 1->0 transition indicates no concurrency.
- *
- * Note that the allocator is responsible for ordering things between free()
- * and alloc().
+ * Memory ordering rules are exactly the same as with regular atomic_t functions
*
*/
@@ -46,10 +24,6 @@
*
* Will saturate at UINT_MAX and WARN.
*
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
* Use of this function is not recommended for the normal reference counting
* use case in which references are taken and released one at a time. In these
* cases, refcount_inc(), or one of its variants, should instead be used to
@@ -72,7 +46,7 @@ bool refcount_add_not_zero(unsigned int i, refcount_t *r)
if (new < val)
new = UINT_MAX;
- } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
+ } while (!atomic_try_cmpxchg(&r->refs, &val, new));
WARN_ONCE(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n");
@@ -87,10 +61,6 @@ EXPORT_SYMBOL(refcount_add_not_zero);
*
* Similar to atomic_add(), but will saturate at UINT_MAX and WARN.
*
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
* Use of this function is not recommended for the normal reference counting
* use case in which references are taken and released one at a time. In these
* cases, refcount_inc(), or one of its variants, should instead be used to
@@ -108,10 +78,6 @@ EXPORT_SYMBOL(refcount_add);
*
* Similar to atomic_inc_not_zero(), but will saturate at UINT_MAX and WARN.
*
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
* Return: true if the increment was successful, false otherwise
*/
bool refcount_inc_not_zero(refcount_t *r)
@@ -127,7 +93,7 @@ bool refcount_inc_not_zero(refcount_t *r)
if (unlikely(!new))
return true;
- } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
+ } while (!atomic_try_cmpxchg(&r->refs, &val, new));
WARN_ONCE(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n");
@@ -141,9 +107,6 @@ EXPORT_SYMBOL(refcount_inc_not_zero);
*
* Similar to atomic_inc(), but will saturate at UINT_MAX and WARN.
*
- * Provides no memory ordering, it is assumed the caller already has a
- * reference on the object.
- *
* Will WARN if the refcount is 0, as this represents a possible use-after-free
* condition.
*/
@@ -162,10 +125,6 @@ EXPORT_SYMBOL(refcount_inc);
* ultimately leak on underflow and will fail to decrement when saturated
* at UINT_MAX.
*
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
- *
* Use of this function is not recommended for the normal reference counting
* use case in which references are taken and released one at a time. In these
* cases, refcount_dec(), or one of its variants, should instead be used to
@@ -187,7 +146,7 @@ bool refcount_sub_and_test(unsigned int i, refcount_t *r)
return false;
}
- } while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
+ } while (!atomic_try_cmpxchg(&r->refs, &val, new));
return !new;
}
@@ -200,10 +159,6 @@ EXPORT_SYMBOL(refcount_sub_and_test);
* Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
* decrement when saturated at UINT_MAX.
*
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
- *
* Return: true if the resulting refcount is 0, false otherwise
*/
bool refcount_dec_and_test(refcount_t *r)
@@ -218,9 +173,6 @@ EXPORT_SYMBOL(refcount_dec_and_test);
*
* Similar to atomic_dec(), it will WARN on underflow and fail to decrement
* when saturated at UINT_MAX.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before.
*/
void refcount_dec(refcount_t *r)
{
@@ -236,9 +188,6 @@ EXPORT_SYMBOL(refcount_dec);
* No atomic_t counterpart, it attempts a 1 -> 0 transition and returns the
* success thereof.
*
- * Like all decrement operations, it provides release memory order and provides
- * a control dependency.
- *
* It can be used like a try-delete operator; this explicit case is provided
* and not cmpxchg in generic, because that would allow implementing unsafe
* operations.
@@ -249,7 +198,7 @@ bool refcount_dec_if_one(refcount_t *r)
{
int val = 1;
- return atomic_try_cmpxchg_release(&r->refs, &val, 0);
+ return atomic_try_cmpxchg(&r->refs, &val, 0);
}
EXPORT_SYMBOL(refcount_dec_if_one);
@@ -281,7 +230,7 @@ bool refcount_dec_not_one(refcount_t *r)
return true;
}
- } while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
+ } while (!atomic_try_cmpxchg(&r->refs, &val, new));
return true;
}
@@ -296,10 +245,6 @@ EXPORT_SYMBOL(refcount_dec_not_one);
* Similar to atomic_dec_and_mutex_lock(), it will WARN on underflow and fail
* to decrement when saturated at UINT_MAX.
*
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
- *
* Return: true and hold mutex if able to decrement refcount to 0, false
* otherwise
*/
@@ -327,10 +272,6 @@ EXPORT_SYMBOL(refcount_dec_and_mutex_lock);
* Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
* decrement when saturated at UINT_MAX.
*
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
- *
* Return: true and hold spinlock if able to decrement refcount to 0, false
* otherwise
*/
--
2.7.4
Thread overview: 32+ messages
2017-10-23 11:09 Elena Reshetova [this message]
2017-10-23 13:12 ` [PATCH] refcount: provide same memory ordering guarantees as in atomic_t Peter Zijlstra
2017-10-27 6:49 ` Reshetova, Elena
2017-10-27 13:56 ` Peter Zijlstra
2017-11-02 11:04 ` Reshetova, Elena
2017-11-02 13:57 ` Peter Zijlstra
2017-11-02 15:40 ` Alan Stern
2017-11-02 16:02 ` Peter Zijlstra
2017-11-02 16:45 ` Peter Zijlstra
2017-11-02 17:08 ` Alan Stern
2017-11-02 17:16 ` Will Deacon
2017-11-02 17:26 ` Peter Zijlstra
2017-11-02 20:21 ` Alan Stern
2017-11-15 18:05 ` Will Deacon
2017-11-15 19:15 ` Alan Stern
2017-11-15 20:03 ` Peter Zijlstra
2017-11-15 20:22 ` Alan Stern
2017-11-16 8:46 ` Peter Zijlstra
2017-11-15 21:01 ` Andrea Parri
2017-11-16 8:58 ` Peter Zijlstra
2017-11-16 10:00 ` Andrea Parri
2017-11-02 17:45 ` Andrea Parri
2017-11-02 20:28 ` Alan Stern
2017-11-03 11:55 ` Reshetova, Elena
2017-11-13 9:09 ` Reshetova, Elena
2017-11-13 13:19 ` Paul E. McKenney
2017-11-13 16:01 ` Reshetova, Elena
2017-11-13 16:26 ` Paul E. McKenney
2017-11-14 11:23 ` Reshetova, Elena
2017-11-14 17:24 ` Paul E. McKenney
2017-11-16 13:44 ` Michal Hocko
2017-11-16 15:29 ` Paul E. McKenney