From: Elena Reshetova <elena.reshetova@intel.com>
To: kernel-hardening@lists.openwall.com
Cc: keescook@chromium.org, Hans Liljestrand <ishkamiel@gmail.com>,
	David Windsor <dwindsor@gmail.com>,
	Elena Reshetova <elena.reshetova@intel.com>
Subject: [kernel-hardening] [RFC PATCH 02/13] percpu-refcount: leave atomic counter unprotected
Date: Mon,  3 Oct 2016 09:41:15 +0300	[thread overview]
Message-ID: <1475476886-26232-3-git-send-email-elena.reshetova@intel.com> (raw)
In-Reply-To: <1475476886-26232-1-git-send-email-elena.reshetova@intel.com>

From: Hans Liljestrand <ishkamiel@gmail.com>

This is a temporary solution, and a deviation from the PaX/Grsecurity
implementation, where the counter in question is protected against
overflows. Protecting it, however, requires decreasing the
PERCPU_COUNT_BIAS used in lib/percpu-refcount.c. Such a change
effectively cuts the safe counter range in half, and it still allows the
counter to prematurely reach zero without warning, which is exactly what
the bias aims to prevent.
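
As a rough sketch of the arithmetic behind the "half the range" point above
(assuming 64-bit longs and the current PERCPU_COUNT_BIAS definition in
lib/percpu-refcount.c; PERCPU_COUNT_BIAS_PROTECTED is a hypothetical name
used here only for illustration, not something defined by this series):

	/* current mainline definition (lib/percpu-refcount.c) */
	#define PERCPU_COUNT_BIAS		(1LU << (BITS_PER_LONG - 1))	/* 2^63 on 64-bit */

	/* hypothetical reduced bias a non-wrapping, trapping counter would need */
	#define PERCPU_COUNT_BIAS_PROTECTED	(1LU << (BITS_PER_LONG - 2))	/* 2^62 on 64-bit */

A protected counter must never cross LONG_MAX (~2^63 - 1), so the bias has
to drop to roughly 2^62. That leaves only about 2^62 of headroom for real
references instead of about 2^63, and during the percpu-to-atomic switch
window roughly 2^62 net puts could still drag the counter to zero without
being flagged, since reaching zero is a legal refcount value rather than a
signed overflow.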

Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: David Windsor <dwindsor@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
---
 include/linux/percpu-refcount.h | 18 ++++++++++++++----
 lib/percpu-refcount.c           | 12 ++++++------
 2 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 1c7eec0..7c6a482 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -81,7 +81,17 @@ enum {
 };
 
 struct percpu_ref {
-	atomic_long_t		count;
+	/*
+	 * This is a temporary solution.
+	 *
+	 * The count should technically not be allowed to wrap, but due to the
+	 * way the counter is used (in lib/percpu-refcount.c) together with the
+	 * PERCPU_COUNT_BIAS, it needs to wrap. This leaves the counter open
+	 * to over/underflows. A non-wrapping atomic, together with a bias
+	 * decrease, would cut the safe range in half and still offer only
+	 * partial protection.
+	 */
+	atomic_long_wrap_t	count;
 	/*
 	 * The low bit of the pointer indicates whether the ref is in percpu
 	 * mode; if set, then get/put will manipulate the atomic_t.
@@ -174,7 +184,7 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_add(*percpu_count, nr);
 	else
-		atomic_long_add(nr, &ref->count);
+		atomic_long_add_wrap(nr, &ref->count);
 
 	rcu_read_unlock_sched();
 }
@@ -272,7 +282,7 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
-	else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
+	else if (unlikely(atomic_long_sub_and_test_wrap(nr, &ref->count)))
 		ref->release(ref);
 
 	rcu_read_unlock_sched();
@@ -320,7 +330,7 @@ static inline bool percpu_ref_is_zero(struct percpu_ref *ref)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		return false;
-	return !atomic_long_read(&ref->count);
+	return !atomic_long_read_wrap(&ref->count);
 }
 
 #endif
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9ac959e..2849e06 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -80,7 +80,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
 	else
 		start_count++;
 
-	atomic_long_set(&ref->count, start_count);
+	atomic_long_set_wrap(&ref->count, start_count);
 
 	ref->release = release;
 	ref->confirm_switch = NULL;
@@ -134,7 +134,7 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 		count += *per_cpu_ptr(percpu_count, cpu);
 
 	pr_debug("global %ld percpu %ld",
-		 atomic_long_read(&ref->count), (long)count);
+		 atomic_long_read_wrap(&ref->count), (long)count);
 
 	/*
 	 * It's crucial that we sum the percpu counters _before_ adding the sum
@@ -148,11 +148,11 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 	 * reaching 0 before we add the percpu counts. But doing it at the same
 	 * time is equivalent and saves us atomic operations:
 	 */
-	atomic_long_add((long)count - PERCPU_COUNT_BIAS, &ref->count);
+	atomic_long_add_wrap((long)count - PERCPU_COUNT_BIAS, &ref->count);
 
-	WARN_ONCE(atomic_long_read(&ref->count) <= 0,
+	WARN_ONCE(atomic_long_read_wrap(&ref->count) <= 0,
 		  "percpu ref (%pf) <= 0 (%ld) after switching to atomic",
-		  ref->release, atomic_long_read(&ref->count));
+		  ref->release, atomic_long_read_wrap(&ref->count));
 
 	/* @ref is viewed as dead on all CPUs, send out switch confirmation */
 	percpu_ref_call_confirm_rcu(rcu);
@@ -194,7 +194,7 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 	if (!(ref->percpu_count_ptr & __PERCPU_REF_ATOMIC))
 		return;
 
-	atomic_long_add(PERCPU_COUNT_BIAS, &ref->count);
+	atomic_long_add_wrap(PERCPU_COUNT_BIAS, &ref->count);
 
 	/*
 	 * Restore per-cpu operation.  smp_store_release() is paired with
-- 
2.7.4
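
For context, a purely illustrative, non-atomic sketch of the behavioural
difference this patch opts in and out of. The real atomic_long_add() /
atomic_long_add_wrap() primitives come from patch 01/13 of this series and
the per-arch code; the helpers below are simplified stand-ins that only show
the overflow-check logic, not the atomicity or the actual reporting path:

#include <limits.h>
#include <stdio.h>

/* old behaviour, kept by the *_wrap variants: silent two's-complement wrap */
static void sketch_add_wrap(long *v, long i)
{
	*v = (long)((unsigned long)*v + (unsigned long)i);
}

/* hardened behaviour: detect signed overflow instead of wrapping */
static void sketch_add_hardened(long *v, long i)
{
	long new;

	if (__builtin_add_overflow(*v, i, &new)) {
		/* the real code would WARN/trap here */
		fprintf(stderr, "refcount overflow detected, refusing to wrap\n");
		return;
	}
	*v = new;
}

int main(void)
{
	long a = LONG_MAX, b = LONG_MAX;

	sketch_add_wrap(&a, 1);		/* wraps to LONG_MIN on the usual targets, no warning */
	sketch_add_hardened(&b, 1);	/* reports the overflow, leaves b unchanged */
	printf("wrap: %ld  hardened: %ld\n", a, b);
	return 0;
}

percpu-ref deliberately stays on the wrapping flavour here because its bias
scheme depends on moving through the full signed range.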

Thread overview: 51+ messages
2016-10-03  6:41 [kernel-hardening] [RFC PATCH 00/13] HARDENING_ATOMIC feature Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 01/13] Add architecture independent hardened atomic base Elena Reshetova
2016-10-03 21:10   ` [kernel-hardening] " Kees Cook
2016-10-03 21:26     ` David Windsor
2016-10-03 21:38       ` Kees Cook
2016-10-04  7:05         ` [kernel-hardening] " Reshetova, Elena
2016-10-05 15:37           ` [kernel-hardening] " Dave Hansen
2016-10-04  7:07         ` [kernel-hardening] " Reshetova, Elena
2016-10-04  6:54       ` Reshetova, Elena
2016-10-04  7:23       ` Reshetova, Elena
2016-10-12  8:26     ` [kernel-hardening] " AKASHI Takahiro
2016-10-12 17:25       ` Reshetova, Elena
2016-10-12 22:50         ` Kees Cook
2016-10-13 14:31           ` Hans Liljestrand
2016-10-03  6:41 ` Elena Reshetova [this message]
2016-10-03 21:12   ` [kernel-hardening] Re: [RFC PATCH 02/13] percpu-refcount: leave atomic counter unprotected Kees Cook
2016-10-04  6:24     ` [kernel-hardening] " Reshetova, Elena
2016-10-04 13:06       ` [kernel-hardening] " Hans Liljestrand
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 03/13] kernel: identify wrapping atomic usage Elena Reshetova
2016-10-03 21:13   ` [kernel-hardening] " Kees Cook
2016-10-04  6:28     ` [kernel-hardening] " Reshetova, Elena
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 04/13] mm: " Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 05/13] fs: " Elena Reshetova
2016-10-03 21:57   ` Jann Horn
2016-10-03 22:21     ` Kees Cook
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 06/13] net: " Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 07/13] net: atm: " Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 08/13] security: " Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 09/13] drivers: identify wrapping atomic usage (part 1/2) Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 10/13] drivers: identify wrapping atomic usage (part 2/2) Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 11/13] x86: identify wrapping atomic usage Elena Reshetova
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 12/13] x86: x86 implementation for HARDENED_ATOMIC Elena Reshetova
2016-10-03  9:47   ` Jann Horn
2016-10-04  7:15     ` Reshetova, Elena
2016-10-04 12:46       ` Jann Horn
2016-10-03 19:27   ` Dave Hansen
2016-10-03 22:49     ` David Windsor
2016-10-04 12:41     ` Jann Horn
2016-10-04 18:51       ` Kees Cook
2016-10-04 19:48         ` Jann Horn
2016-10-05 15:39       ` Dave Hansen
2016-10-05 16:18         ` Jann Horn
2016-10-05 16:32           ` Dave Hansen
2016-10-03 21:29   ` [kernel-hardening] " Kees Cook
2016-10-03  6:41 ` [kernel-hardening] [RFC PATCH 13/13] lkdtm: add tests for atomic over-/underflow Elena Reshetova
2016-10-03 21:35   ` [kernel-hardening] " Kees Cook
2016-10-04  6:27     ` [kernel-hardening] " Reshetova, Elena
2016-10-04  6:40       ` [kernel-hardening] " Hans Liljestrand
2016-10-03  8:14 ` [kernel-hardening] [RFC PATCH 00/13] HARDENING_ATOMIC feature AKASHI Takahiro
2016-10-03  8:13   ` Reshetova, Elena
2016-10-03 21:01 ` [kernel-hardening] " Kees Cook
