From: Michael Ellerman
To: Michal Suchanek, linuxppc-dev@lists.ozlabs.org
Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras, Alexander Viro,
    Nicholas Piggin, Christophe Leroy, Breno Leitao, Arnd Bergmann,
    Heiko Carstens, Greg Kroah-Hartman, Firoz Khan, Thomas Gleixner,
    Joel Stanley, Hari Bathini, Michael Neuling, Andrew Donnellan,
    Russell Currey, Diana Craciun, "Eric W. Biederman", David Hildenbrand,
    Allison Randal, Andrew Morton, Madhavan Srinivasan,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v7 3/6] powerpc/perf: consolidate read_user_stack_32
Date: Mon, 02 Sep 2019 13:53:21 +1000
Message-ID: <87a7bntkum.fsf@mpe.ellerman.id.au>

Michal Suchanek writes:
> There are two almost identical copies for 32bit and 64bit.
>
> The function is used only in 32bit code which will be split out in next
> patch so consolidate to one function.
>
> Signed-off-by: Michal Suchanek
> Reviewed-by: Christophe Leroy
> ---
> new patch in v6
> ---
>  arch/powerpc/perf/callchain.c | 25 +++++++++----------------
>  1 file changed, 9 insertions(+), 16 deletions(-)
>
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index c84bbd4298a0..b7cdcce20280 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -165,22 +165,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
>          return read_user_stack_slow(ptr, ret, 8);
>  }
>
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> -{
> -        if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -            ((unsigned long)ptr & 3))
> -                return -EFAULT;
> -
> -        pagefault_disable();
> -        if (!__get_user_inatomic(*ret, ptr)) {
> -                pagefault_enable();
> -                return 0;
> -        }
> -        pagefault_enable();
> -
> -        return read_user_stack_slow(ptr, ret, 4);
> -}
> -
>  static inline int valid_user_sp(unsigned long sp, int is_64)
>  {
>          if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
> @@ -295,6 +279,12 @@ static inline int current_is_64bit(void)
>  }
>
>  #else /* CONFIG_PPC64 */
> +static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> +{
> +        return 0;
> +}
> +#endif /* CONFIG_PPC64 */

Ending the PPC64 else case here, and then restarting it below with an
ifndef means we end up with two parts of the file that define 32-bit
code, with a common chunk in the middle, which I dislike.

I'd rather you add the empty read_user_stack_slow() in the existing
#else section and then move read_user_stack_32() below the whole
ifdef PPC64/else/endif section (rough sketch at the end of this mail).

Is there some reason that doesn't work?

cheers

> @@ -313,9 +303,12 @@ static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
>          rc = __get_user_inatomic(*ret, ptr);
>          pagefault_enable();
>
> +        if (IS_ENABLED(CONFIG_PPC64) && rc)
> +                return read_user_stack_slow(ptr, ret, 4);
>          return rc;
>  }
>
> +#ifndef CONFIG_PPC64
>  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>                                            struct pt_regs *regs)
>  {
> --
> 2.22.0
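
The sketch mentioned above. It's untested and most of the surrounding code
is elided; it only shows the #ifdef layout I mean, with read_user_stack_32()
taken as-is from your patch and placed after the whole block:

#ifdef CONFIG_PPC64
static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
{
        /* existing 64-bit slow path, unchanged */
}

/* ... rest of the existing 64-bit-only code ... */

#else /* CONFIG_PPC64 */

static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
{
        return 0;
}

/* ... rest of the existing 32-bit-only code ... */

#endif /* CONFIG_PPC64 */

/* Common to 32-bit and 64-bit, below the whole ifdef/else/endif block */
static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
{
        int rc;

        if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
            ((unsigned long)ptr & 3))
                return -EFAULT;

        pagefault_disable();
        rc = __get_user_inatomic(*ret, ptr);
        pagefault_enable();

        if (IS_ENABLED(CONFIG_PPC64) && rc)
                return read_user_stack_slow(ptr, ret, 4);

        return rc;
}

That keeps all the 32-bit-only code in a single #else section and makes it
obvious that read_user_stack_32() is shared by both configs.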