From: Nadav Amit <namit@vmware.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>,
x86@kernel.org, linux-kernel@vger.kernel.org,
Dave Hansen <dave.hansen@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Nadav Amit <namit@vmware.com>
Subject: [RFC 5/7] percpu: Assume preemption is disabled on per_cpu_ptr()
Date: Thu, 18 Jul 2019 10:41:08 -0700 [thread overview]
Message-ID: <20190718174110.4635-6-namit@vmware.com> (raw)
In-Reply-To: <20190718174110.4635-1-namit@vmware.com>
When this_cpu_ptr() is used, the caller should have preemption disabled,
as otherwise the pointer is meaningless: the task may migrate to another
CPU at any time. A caller that deliberately wants such an "unstable"
pointer should use raw_cpu_ptr() instead.
Add an assertion to check that preemption is indeed disabled, and
distinguish between the two cases to allow further per-arch
optimizations.
Signed-off-by: Nadav Amit <namit@vmware.com>
---
include/asm-generic/percpu.h | 12 ++++++++++++
include/linux/percpu-defs.h | 33 ++++++++++++++++++++++++++++++++-
2 files changed, 44 insertions(+), 1 deletion(-)
diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
index c2de013b2cf4..7853605f4210 100644
--- a/include/asm-generic/percpu.h
+++ b/include/asm-generic/percpu.h
@@ -36,6 +36,14 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
#define my_cpu_offset __my_cpu_offset
#endif
+/*
+ * Determine the offset of the currently active processor when preemption is
+ * disabled. Can be overridden by arch code.
+ */
+#ifndef __raw_my_cpu_offset
+#define __raw_my_cpu_offset __my_cpu_offset
+#endif
+
/*
* Arch may define arch_raw_cpu_ptr() to provide more efficient address
* translations for raw_cpu_ptr().
@@ -44,6 +52,10 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
#define arch_raw_cpu_ptr(ptr) SHIFT_PERCPU_PTR(ptr, __my_cpu_offset)
#endif
+#ifndef arch_raw_cpu_ptr_preemptable
+#define arch_raw_cpu_ptr_preemptable(ptr) SHIFT_PERCPU_PTR(ptr, __raw_my_cpu_offset)
+#endif
+
#ifdef CONFIG_HAVE_SETUP_PER_CPU_AREA
extern void setup_per_cpu_areas(void);
#endif
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index a6fabd865211..13afca8a37e7 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -237,20 +237,51 @@ do { \
SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu))); \
})
+#ifndef arch_raw_cpu_ptr_preemption_disabled
+#define arch_raw_cpu_ptr_preemption_disabled(ptr) \
+ arch_raw_cpu_ptr(ptr)
+#endif
+
+#define raw_cpu_ptr_preemption_disabled(ptr) \
+({ \
+ __verify_pcpu_ptr(ptr); \
+ arch_raw_cpu_ptr_preemption_disabled(ptr); \
+})
+
+/*
+ * If preemption is enabled, raw_cpu_ptr() needs to read the pointer
+ * atomically. If preemption is disabled, raw_cpu_ptr_preemption_disabled()
+ * can be used instead, which is potentially more efficient. Similarly, the
+ * preemption-disabled version can be used if the kernel is non-preemptible
+ * or if only voluntary preemption is used.
+ */
+#ifdef CONFIG_PREEMPT
+
#define raw_cpu_ptr(ptr) \
({ \
__verify_pcpu_ptr(ptr); \
arch_raw_cpu_ptr(ptr); \
})
+#else
+
+#define raw_cpu_ptr(ptr) raw_cpu_ptr_preemption_disabled(ptr)
+
+#endif
+
#ifdef CONFIG_DEBUG_PREEMPT
+/*
+ * Unlike other this_cpu_* operations, this_cpu_ptr() requires that preemption
+ * be disabled. In contrast, raw_cpu_ptr() does not.
+ */
#define this_cpu_ptr(ptr) \
({ \
+ __this_cpu_preempt_check("ptr"); \
__verify_pcpu_ptr(ptr); \
SHIFT_PERCPU_PTR(ptr, my_cpu_offset); \
})
#else
-#define this_cpu_ptr(ptr) raw_cpu_ptr(ptr)
+#define this_cpu_ptr(ptr) raw_cpu_ptr_preemption_disabled(ptr)
#endif
#else /* CONFIG_SMP */
--
2.17.1