linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] x86: Add a __copy_from_user_nmi
@ 2015-10-19 22:54 Andi Kleen
  2015-10-19 22:54 ` [PATCH 2/2] x86, perf: Optimize stack walk user accesses Andi Kleen
  0 siblings, 1 reply; 6+ messages in thread
From: Andi Kleen @ 2015-10-19 22:54 UTC (permalink / raw)
  To: x86; +Cc: peterz, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add an inlined __ variant of copy_from_user_nmi(). The inlined variant
allows the caller to:

- batch the access_ok() check for multiple accesses
- avoid a pagefault_disable()/enable() pair on every access when the
  caller's context already guarantees that page faults are disabled
- get all the copy_*_user() optimizations for small, constant-sized
  transfers

It is just a define for __copy_from_user_inatomic().
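
For example, an NMI-context user stack walker could use it along these
lines (an illustrative sketch only: copy_stack_frames() and struct
stack_frame are hypothetical names made up for this example, not part
of this patch):

	/* Hypothetical user frame layout, for illustration only. */
	struct stack_frame {
		const void __user *next_fp;
		unsigned long ret_addr;
	};

	/*
	 * Runs in NMI context, so no pagefault_disable()/enable()
	 * pair is needed around each access.
	 */
	static unsigned long copy_stack_frames(const void __user *fp,
					       unsigned long *buf,
					       unsigned long max)
	{
		struct stack_frame frame;
		unsigned long n = 0;

		while (n < max) {
			/* The caller does the access_ok() check ... */
			if (!access_ok(VERIFY_READ, fp, sizeof(frame)))
				break;
			/*
			 * ... and the small, constant-sized copy gets
			 * the inlined copy_*_user() fast path. Returns
			 * the number of bytes left uncopied, so nonzero
			 * means the access failed.
			 */
			if (__copy_from_user_nmi(&frame, fp, sizeof(frame)))
				break;
			buf[n++] = frame.ret_addr;
			fp = frame.next_fp;
		}
		return n;
	}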

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/asm/uaccess.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index a8df874..1d0766c 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -745,5 +745,14 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 #undef __copy_from_user_overflow
 #undef __copy_to_user_overflow
 
+/*
+ * We rely on the nested NMI work to allow atomic faults from the NMI path; the
+ * nested NMI paths are careful to preserve CR2.
+ *
+ * Callers must either use pagefault_disable()/enable() or run in
+ * interrupt context, and must also do an access_ok() check.
+ */
+#define __copy_from_user_nmi __copy_from_user_inatomic
+
 #endif /* _ASM_X86_UACCESS_H */
 
-- 
2.4.3



Thread overview: 6+ messages
2015-10-19 22:54 [PATCH 1/2] x86: Add a __copy_from_user_nmi Andi Kleen
2015-10-19 22:54 ` [PATCH 2/2] x86, perf: Optimize stack walk user accesses Andi Kleen
2015-10-20 11:03   ` Peter Zijlstra
2015-10-20 17:32     ` Andi Kleen
2015-10-22 22:07 [PATCH 1/2] x86: Add a __copy_from_user_nmi Andi Kleen
2015-10-22 22:07 ` [PATCH 2/2] x86, perf: Optimize stack walk user accesses Andi Kleen
2015-11-16 23:23 [PATCH 1/2] x86: Add a __copy_from_user_nmi Andi Kleen
2015-11-16 23:23 ` [PATCH 2/2] x86, perf: Optimize stack walk user accesses Andi Kleen
