Date: Wed, 28 Jun 2017 17:24:24 +0200 (CEST)
From: Thomas Gleixner
To: Mark Rutland
Cc: Sebastian Andrzej Siewior, Andrey Ryabinin, Ingo Molnar, Dmitry Vyukov,
    Peter Zijlstra, Will Deacon, "H. Peter Anvin", kasan-dev, "x86@kernel.org",
    LKML, Andrew Morton, "linux-mm@kvack.org", Linus Torvalds
Subject: Re: [PATCH] locking/atomics: don't alias ____ptr
In-Reply-To: <20170628141420.GK5981@leverpostej>
References: <85d51d3551b676ba1fc40e8fbddd2eadd056d8dd.1498140838.git.dvyukov@google.com>
 <20170628100246.7nsvhblgi3xjbc4m@breakpoint.cc>
 <1c1cbbfb-8e34-dd33-0e73-bbb2a758e962@virtuozzo.com>
 <20170628121246.qnk2csgzbgpqrmw3@linutronix.de>
 <20170628141420.GK5981@leverpostej>

On Wed, 28 Jun 2017, Mark Rutland wrote:
> On Wed, Jun 28, 2017 at 03:54:42PM +0200, Thomas Gleixner wrote:
> > > static inline unsigned long cmpxchg_varsize(void *ptr, unsigned long old,
> > >                                             unsigned long new, int size)
> > > {
> > > 	switch (size) {
> > > 	case 1:
> > > 	case 2:
> > > 	case 4:
> > > 		break;
> > > 	case 8:
> > > 		if (sizeof(unsigned long) == 8)
> > > 			break;
> > > 	default:
> > > 		BUILD_BUG_ON(1);
> > > 	}
> > > 	kasan_check(ptr, size);
> > > 	return arch_cmpxchg(ptr, old, new);
> > > }
>
> This'll need to re-cast things before the call to arch_cmpxchg(), and we
> can move the check above the switch, as in [2].
Sure, but I'd rather see that changed to:

1) Create arch_cmpxchg8/16/32/64() inlines first

2) Add that varsize wrapper:

static inline unsigned long cmpxchg_varsize(void *ptr, unsigned long old,
					    unsigned long new, int size)
{
	switch (size) {
	case 1:
		kasan_check_write(ptr, size);
		return arch_cmpxchg8((u8 *)ptr, (u8)old, (u8)new);
	case 2:
		kasan_check_write(ptr, size);
		return arch_cmpxchg16((u16 *)ptr, (u16)old, (u16)new);
	case 4:
		kasan_check_write(ptr, size);
		return arch_cmpxchg32((u32 *)ptr, (u32)old, (u32)new);
	case 8:
		if (sizeof(unsigned long) == 8) {
			kasan_check_write(ptr, size);
			return arch_cmpxchg64((u64 *)ptr, (u64)old, (u64)new);
		}
	default:
		BUILD_BUG();
	}
}

#define cmpxchg(ptr, o, n)						\
({									\
	((__typeof__(*(ptr)))cmpxchg_varsize((ptr), (unsigned long)(o),	\
			     (unsigned long)(n), sizeof(*(ptr))));	\
})

Which allows us to create:

static inline u8 cmpxchg8(u8 *ptr, u8 old, u8 new)
{
	kasan_check_write(ptr, sizeof(old));
	return arch_cmpxchg8(ptr, old, new);
}

and friends as well, and later migrate the existing users away from that
untyped macro mess.

And instead of adding #include <asm-generic/atomic-instrumented.h> to the
architecture code, we rather do:

# mv arch/xxx/include/asm/atomic.h arch/xxx/include/asm/arch_atomic.h
# echo '#include <asm-generic/atomic.h>' >arch/xxx/include/asm/atomic.h
# mv include/asm-generic/atomic.h include/asm-generic/atomic_up.h

and create a new include/asm-generic/atomic.h:

#ifndef __ASM_GENERIC_ATOMIC_H
#define __ASM_GENERIC_ATOMIC_H

#ifdef CONFIG_ATOMIC_INSTRUMENTED_H
#include <asm-generic/atomic-instrumented.h>
#else
#include <asm/arch_atomic.h>
#endif

#endif

Thanks,

	tglx
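[Editor's note: a minimal user-space sketch of the size-dispatch pattern
discussed above, compilable with GCC/Clang. The typed arch_cmpxchg8/32
helpers are modeled here with the __atomic compare-exchange builtins; the
kasan_check_write() instrumentation is omitted and the kernel's BUILD_BUG()
is replaced by a runtime assert, so this only illustrates the cast-per-size
dispatch and the typed return, not the kernel's actual primitives.]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the typed arch primitives; a real cmpxchg returns the
 * previous value, which the builtin leaves in 'old' on failure. */
static inline uint8_t arch_cmpxchg8(uint8_t *p, uint8_t old, uint8_t new)
{
	__atomic_compare_exchange_n(p, &old, new, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return old;
}

static inline uint32_t arch_cmpxchg32(uint32_t *p, uint32_t old, uint32_t new)
{
	__atomic_compare_exchange_n(p, &old, new, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return old;
}

/* Dispatch on operand size, casting to the matching typed helper. */
static inline unsigned long cmpxchg_varsize(void *ptr, unsigned long old,
					    unsigned long new, int size)
{
	switch (size) {
	case 1:
		return arch_cmpxchg8(ptr, (uint8_t)old, (uint8_t)new);
	case 4:
		return arch_cmpxchg32(ptr, (uint32_t)old, (uint32_t)new);
	}
	assert(0 && "unsupported size");	/* kernel: BUILD_BUG() */
	return 0;
}

/* The untyped front-end macro: picks the size from the pointee type and
 * casts the result back, as in the proposal above. */
#define cmpxchg(ptr, o, n)						\
	((__typeof__(*(ptr)))cmpxchg_varsize((ptr), (unsigned long)(o),	\
					     (unsigned long)(n),	\
					     sizeof(*(ptr))))
```

Usage: `cmpxchg(&v, 5, 9)` returns the previous value of `v`, and stores 9
only if that previous value was 5; a second `cmpxchg(&v, 5, 1)` then fails
and leaves `v` untouched, returning 9.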