From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 23 Jul 2019 10:57:48 +0100
From: Mark Rutland
To: Steven Price
Cc: linux-mm@kvack.org, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
 Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar, James Morse,
 Jérôme Glisse, Peter Zijlstra, Thomas Gleixner, Will Deacon,
 x86@kernel.org, "H. Peter Anvin", linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, "Liang, Kan", Andrew Morton
Subject: Re: [PATCH v9 19/21] mm: Add generic ptdump
Message-ID: <20190723095747.GB8085@lakrids.cambridge.arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>
 <20190722154210.42799-20-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-20-steven.price@arm.com>
User-Agent: Mutt/1.11.1+11 (2f07cb52) (2018-12-01)

On Mon, Jul 22, 2019 at 04:42:08PM +0100, Steven Price wrote:
> Add a generic version of page table dumping that architectures can
> opt-in to
>
> Signed-off-by: Steven Price

[...]

> +#ifdef CONFIG_KASAN
> +/*
> + * This is an optimization for KASAN=y case. Since all kasan page tables
> + * eventually point to the kasan_early_shadow_page we could call note_page()
> + * right away without walking through lower level page tables. This saves
> + * us dozens of seconds (minutes for 5-level config) while checking for
> + * W+X mapping or reading kernel_page_tables debugfs file.
> + */
> +static inline bool kasan_page_table(struct ptdump_state *st, void *pt,
> +                                    unsigned long addr)
> +{
> +    if (__pa(pt) == __pa(kasan_early_shadow_pmd) ||
> +#ifdef CONFIG_X86
> +        (pgtable_l5_enabled() &&
> +         __pa(pt) == __pa(kasan_early_shadow_p4d)) ||
> +#endif
> +        __pa(pt) == __pa(kasan_early_shadow_pud)) {
> +        st->note_page(st, addr, 5, pte_val(kasan_early_shadow_pte[0]));
> +        return true;
> +    }
> +    return false;

Have you tried this with CONFIG_DEBUG_VIRTUAL?

The kasan_early_shadow_pmd is a kernel object rather than a linear map
object, so you should use __pa_symbol for that.
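As a purely illustrative, untested sketch of just that change (everything
else kept as in the patch above), the comparisons would become:

static inline bool kasan_page_table(struct ptdump_state *st, void *pt,
                                    unsigned long addr)
{
    /*
     * The kasan_early_shadow_* tables are kernel image symbols, not
     * linear map addresses, so translate them with __pa_symbol() to
     * keep CONFIG_DEBUG_VIRTUAL happy.
     */
    if (__pa(pt) == __pa_symbol(kasan_early_shadow_pmd) ||
#ifdef CONFIG_X86
        (pgtable_l5_enabled() &&
         __pa(pt) == __pa_symbol(kasan_early_shadow_p4d)) ||
#endif
        __pa(pt) == __pa_symbol(kasan_early_shadow_pud)) {
        st->note_page(st, addr, 5, pte_val(kasan_early_shadow_pte[0]));
        return true;
    }
    return false;
}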
It's a bit horrid to have to test multiple levels in one function; can't
we check the relevant level inline in each of the test_p?d funcs?

They're optional anyway, so they only need to be defined for
CONFIG_KASAN.
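Roughly like the below (untested; note_kasan_page_table() is just a
strawman name here, and the pud/p4d variants would follow the same
pattern):

#ifdef CONFIG_KASAN
static int note_kasan_page_table(struct mm_walk *walk, unsigned long addr)
{
    struct ptdump_state *st = walk->private;

    /* Everything from here down is the shared KASAN zero shadow. */
    st->note_page(st, addr, 5, pte_val(kasan_early_shadow_pte[0]));
    return 1;
}

static int ptdump_test_pmd(unsigned long addr, unsigned long next,
                           pmd_t *pmd, struct mm_walk *walk)
{
    /* Only the pmd-level shadow table needs checking at this level. */
    if (__pa(pmd) == __pa_symbol(kasan_early_shadow_pmd))
        return note_kasan_page_table(walk, addr);
    return 0;
}
#endif

That would also confine the CONFIG_X86 / pgtable_l5_enabled() special
case to the p4d variant, and for !KASAN the .test_p?d callbacks could
presumably just be left unset.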
Thanks,
Mark.

> +}
> +#else
> +static inline bool kasan_page_table(struct ptdump_state *st, void *pt,
> +                                    unsigned long addr)
> +{
> +    return false;
> +}
> +#endif
> +
> +static int ptdump_test_p4d(unsigned long addr, unsigned long next,
> +                           p4d_t *p4d, struct mm_walk *walk)
> +{
> +    struct ptdump_state *st = walk->private;
> +
> +    if (kasan_page_table(st, p4d, addr))
> +        return 1;
> +    return 0;
> +}
> +static int ptdump_test_pud(unsigned long addr, unsigned long next,
> +                           pud_t *pud, struct mm_walk *walk)
> +{
> +    struct ptdump_state *st = walk->private;
> +
> +    if (kasan_page_table(st, pud, addr))
> +        return 1;
> +    return 0;
> +}
> +
> +static int ptdump_test_pmd(unsigned long addr, unsigned long next,
> +                           pmd_t *pmd, struct mm_walk *walk)
> +{
> +    struct ptdump_state *st = walk->private;
> +
> +    if (kasan_page_table(st, pmd, addr))
> +        return 1;
> +    return 0;
> +}
> +
> +static int ptdump_hole(unsigned long addr, unsigned long next,
> +                       struct mm_walk *walk)
> +{
> +    struct ptdump_state *st = walk->private;
> +
> +    st->note_page(st, addr, -1, 0);
> +
> +    return 0;
> +}
> +
> +void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm)
> +{
> +    struct mm_walk walk = {
> +        .mm = mm,
> +        .pgd_entry = ptdump_pgd_entry,
> +        .p4d_entry = ptdump_p4d_entry,
> +        .pud_entry = ptdump_pud_entry,
> +        .pmd_entry = ptdump_pmd_entry,
> +        .pte_entry = ptdump_pte_entry,
> +        .test_p4d = ptdump_test_p4d,
> +        .test_pud = ptdump_test_pud,
> +        .test_pmd = ptdump_test_pmd,
> +        .pte_hole = ptdump_hole,
> +        .private = st
> +    };
> +    const struct ptdump_range *range = st->range;
> +
> +    down_read(&mm->mmap_sem);
> +    while (range->start != range->end) {
> +        walk_page_range(range->start, range->end, &walk);
> +        range++;
> +    }
> +    up_read(&mm->mmap_sem);
> +
> +    /* Flush out the last page */
> +    st->note_page(st, 0, 0, 0);
> +}
> --
> 2.20.1
>
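(For reference, a rough, untested sketch of how an arch back-end might sit
on top of ptdump_walk_pgd() as quoted above. The note_page() signature is
inferred from the call sites, and every my_* name below is invented purely
for illustration.)

#include <linux/mm.h>
#include <linux/ptdump.h>   /* header added by this series (assumed) */
#include <linux/seq_file.h>

struct my_ptdump_info {
    struct ptdump_state ptdump;   /* embedded so note_page() can recover us */
    struct seq_file *seq;
};

static void my_note_page(struct ptdump_state *pt_st, unsigned long addr,
                         int level, unsigned long val)
{
    struct my_ptdump_info *info =
        container_of(pt_st, struct my_ptdump_info, ptdump);

    /* A real back-end would coalesce runs with identical attributes. */
    seq_printf(info->seq, "0x%016lx: level %d entry %016lx\n",
               addr, level, val);
}

static const struct ptdump_range my_ranges[] = {
    { PAGE_OFFSET, ~0UL },    /* kernel VA range to dump */
    { 0, 0 }                  /* start == end terminates the walk */
};

static void my_dump_kernel_page_tables(struct seq_file *seq)
{
    struct my_ptdump_info info = {
        .ptdump = {
            .note_page = my_note_page,
            .range = my_ranges,
        },
        .seq = seq,
    };

    ptdump_walk_pgd(&info.ptdump, &init_mm);
}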