From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755074AbbDJJWB (ORCPT );
	Fri, 10 Apr 2015 05:22:01 -0400
Received: from mail-wi0-f181.google.com ([209.85.212.181]:37211 "EHLO
	mail-wi0-f181.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1754111AbbDJJV5 (ORCPT );
	Fri, 10 Apr 2015 05:21:57 -0400
Date: Fri, 10 Apr 2015 11:21:52 +0200
From: Ingo Molnar
To: "Paul E. McKenney"
Cc: Linus Torvalds , Jason Low , Peter Zijlstra ,
	Davidlohr Bueso , Tim Chen ,
	Aswin Chandramouleeswaran , LKML
Subject: [PATCH] uaccess: Add __copy_from_kernel_inatomic() primitive
Message-ID: <20150410092152.GA21332@gmail.com>
References: <20150409053725.GB13871@gmail.com>
 <1428561611.3506.78.camel@j-VirtualBox>
 <20150409075311.GA4645@gmail.com>
 <20150409175652.GI6464@linux.vnet.ibm.com>
 <20150409183926.GM6464@linux.vnet.ibm.com>
 <20150410090051.GA28549@gmail.com>
 <20150410091252.GA27630@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150410091252.GA27630@gmail.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

* Ingo Molnar wrote:

> Yeah, so what I missed here are those nops: placeholders for the
> STAC/CLAC instructions on x86... and this is what Linus mentioned
> about the clac() overhead.
>
> But this could be solved I think: by adding a
> copy_from_kernel_inatomic() primitive which simply leaves out the
> STAC/CLAC sequence: as these are always guaranteed to be kernel
> addresses, the SMAP fault should not be generated.

So the first step would be to introduce a generic
__copy_from_kernel_inatomic() primitive as attached below.

The next patch will implement an efficient __copy_from_kernel_inatomic()
for x86.
Thanks,

	Ingo

==================================>
From 89b2ac882933947513c0aabd38e6b6c5a203c337 Mon Sep 17 00:00:00 2001
From: Ingo Molnar
Date: Fri, 10 Apr 2015 11:19:23 +0200
Subject: [PATCH] uaccess: Add __copy_from_kernel_inatomic() primitive

Most architectures can just reuse __copy_from_user_inatomic() to copy
possibly-faulting data from known-valid kernel addresses.

Not-Yet-Signed-off-by: Ingo Molnar
---
 include/linux/uaccess.h | 12 ++++++++++++
 kernel/locking/mutex.c  |  3 ++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ecd3319dac33..885eea43b69f 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -107,4 +107,16 @@ extern long __probe_kernel_read(void *dst, const void *src, size_t size);
 extern long notrace probe_kernel_write(void *dst, const void *src, size_t size);
 extern long notrace __probe_kernel_write(void *dst, const void *src, size_t size);
 
+/*
+ * Generic wrapper, most architectures can just use __copy_from_user_inatomic()
+ * to implement __copy_from_kernel_inatomic():
+ */
+#ifndef ARCH_HAS_COPY_FROM_KERNEL_INATOMIC
+static __must_check __always_inline int
+__copy_from_kernel_inatomic(void *dst, const void __user *src, unsigned size)
+{
+	return __copy_from_user_inatomic(dst, src, size);
+}
+#endif
+
 #endif /* __LINUX_UACCESS_H__ */
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index fcc7db45d62e..a4f74cda9fc4 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include <linux/uaccess.h>
 
 /*
  * In the DEBUG case we are using the "NULL fastpath" for mutexes,
@@ -251,7 +252,7 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 	 * NOTE2: We ignore failed copies, as the next iteration will clean
 	 * up after us. This saves an extra branch in the common case.
 	 */
-	ret = __copy_from_user_inatomic(&on_cpu, &owner->on_cpu, sizeof(on_cpu));
+	ret = __copy_from_kernel_inatomic(&on_cpu, &owner->on_cpu, sizeof(on_cpu));
 
 	if (!on_cpu || need_resched())
 		return false;