From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
    Michael Ellerman, Peter Zijlstra, Ingo Molnar,
    Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
    Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
    "David S. Miller", Rob Herring, Greg Kroah-Hartman,
    Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
    Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
    Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
    Thiago Jung Bauermann, Sebastian Andrzej Siewior,
    Valentin Schneider, Jordan Niethe, Michael Neuling,
    Gustavo Luiz Duarte, Allison Randal, "Eric W. Biederman",
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v12 4/8] powerpc/perf: consolidate valid_user_sp -> invalid_user_sp
Date: Fri, 20 Mar 2020 11:20:15 +0100
X-Mailer: git-send-email 2.23.0
References: <20200225173541.1549955-1-npiggin@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Merge the 32bit and 64bit version.

Halve the check constants on 32bit. Use STACK_TOP since it is defined.

Passing is_64 is now redundant since is_32bit_task() is used to
determine which callchain variant should be used. Use STACK_TOP and
is_32bit_task() directly.

This removes a page from the valid 32bit area on 64bit:
 #define TASK_SIZE_USER32 (0x0000000100000000UL - (1 * PAGE_SIZE))
 #define STACK_TOP_USER32 TASK_SIZE_USER32

Change return value to bool. It is inverted by users anyway. Change to
invalid_user_sp to avoid inverting the return value twice.
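For illustration only (not part of the patch), a minimal userspace sketch of
the consolidated check; is_32bit_task() and STACK_TOP are replaced by
hypothetical stand-ins (stub_is_32bit_task, STUB_STACK_TOP) so the snippet
builds outside the kernel:

	/* Standalone sketch, not kernel code: stub_is_32bit_task and
	 * STUB_STACK_TOP stand in for is_32bit_task() and STACK_TOP. */
	#include <stdbool.h>
	#include <stdio.h>

	static bool stub_is_32bit_task;
	#define STUB_STACK_TOP 0x0000800000000000UL

	static inline bool invalid_user_sp(unsigned long sp)
	{
		/* 32-bit tasks: 4-byte alignment, 16-byte slack below the top;
		 * 64-bit tasks: 8-byte alignment, 32-byte slack. */
		unsigned long mask = stub_is_32bit_task ? 3 : 7;
		unsigned long top = STUB_STACK_TOP - (stub_is_32bit_task ? 16 : 32);

		return (!sp || (sp & mask) || (sp > top));
	}

	int main(void)
	{
		stub_is_32bit_task = false;
		printf("64-bit, 8-byte aligned sp -> %d\n",
		       invalid_user_sp(0x7ffffff00000UL));	/* 0: valid */
		printf("64-bit, only 4-byte aligned  -> %d\n",
		       invalid_user_sp(0x7ffffff00004UL));	/* 1: invalid */

		stub_is_32bit_task = true;
		printf("32-bit, 4-byte aligned sp -> %d\n",
		       invalid_user_sp(0xbffff004UL));		/* 0: valid */
		return 0;
	}

Precalculating mask and top keeps the check a single return expression and
lets callers test invalidity directly instead of negating a valid_user_sp()
result at every call site.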
Signed-off-by: Michal Suchanek
---
v8: new patch
v11: simplify by using is_32bit_task()
v12: - simplify by precalculating subexpressions
     - change return value to bool
     - remove double inversion
---
 arch/powerpc/perf/callchain.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index c9a78c6e4361..001d0473a61f 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -102,6 +102,14 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	}
 }
 
+static inline bool invalid_user_sp(unsigned long sp)
+{
+	unsigned long mask = is_32bit_task() ? 3 : 7;
+	unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32);
+
+	return (!sp || (sp & mask) || (sp > top));
+}
+
 #ifdef CONFIG_PPC64
 /*
  * On 64-bit we don't want to invoke hash_page on user addresses from
@@ -161,13 +169,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
-		return 0;
-	return 1;
-}
-
 /*
  * 64-bit user processes use the same stack frame for RT and non-RT signals.
  */
@@ -226,7 +227,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned long __user *) sp;
 
-		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+		if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
 			return;
@@ -275,13 +276,6 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry
 {
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > TASK_SIZE - 32)
-		return 0;
-	return 1;
-}
-
 #define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
 #define sigcontext32		sigcontext
 #define mcontext32		mcontext
@@ -423,7 +417,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned int __user *) (unsigned long) sp;
 
-		if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp))
+		if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
			return;
 		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
 			return;
-- 
2.23.0