From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id CC2E9C433F5
	for ; Sat, 7 May 2022 17:55:58 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1386543AbiEGR7o (ORCPT );
	Sat, 7 May 2022 13:59:44 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40536 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S237053AbiEGR7n (ORCPT );
	Sat, 7 May 2022 13:59:43 -0400
Received: from dfw.source.kernel.org (dfw.source.kernel.org
	[IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix)
	with ESMTPS id 0C7A113E32 for ; Sat, 7 May 2022 10:55:56 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested) by dfw.source.kernel.org (Postfix)
	with ESMTPS id 99FC7611A9 for ; Sat, 7 May 2022 17:55:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC323C385A6;
	Sat, 7 May 2022 17:55:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1651946155;
	bh=6Qnl+LC2cM49a9dtEjMZ9Tiy5oJmI27wwdzcbJdM3/E=;
	h=Date:To:From:Subject:From;
	b=UdPyyPkkNJQxOMlDMs/ma1TXgXHFkclsQ2D0gURPm17HeXSn73YBPYiRTH0h5WlGs
	 +y8pwbmK9c7/goWYrkEErJc9p/3qQvQLZlMjKh7mAd3r9LXIrH+NuJRXhDuIxHuxdR
	 TvUoTA8+phpHYvnUglYAW7Zq/Kfq5gfq0OYhsM/8=
Date: Sat, 07 May 2022 10:55:54 -0700
To: mm-commits@vger.kernel.org, will@kernel.org, wangkefeng.wang@huawei.com,
	paul.walmsley@sifive.com, pasha.tatashin@soleen.com, palmer@dabbelt.com,
	mingo@redhat.com, hpa@zytor.com, dave.hansen@linux.intel.com,
	catalin.marinas@arm.com, bp@alien8.de, anshuman.khandual@arm.com,
	tongtiangen@huawei.com, akpm@linux-foundation.org
From: Andrew Morton
Subject: +
 mm-page_table_check-using-pxd_size-instead-of-pxd_page_size.patch added
 to mm-unstable branch
Message-Id: <20220507175554.DC323C385A6@smtp.kernel.org>
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
List-ID: 
X-Mailing-List: mm-commits@vger.kernel.org


The patch titled
     Subject: mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
has been added to the -mm mm-unstable branch.  Its filename is
     mm-page_table_check-using-pxd_size-instead-of-pxd_page_size.patch

This patch should soon appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Tong Tiangen
Subject: mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE

Patch series "mm: page_table_check: add support on arm64 and riscv", v7.

Page table check performs extra verifications at the time when new pages
become accessible from userspace, by getting their page table entries
(PTEs, PMDs, etc.) added into the table.  It is supported on x86[1].

This patchset makes some simple changes to make it easier to support new
architectures, and then enables the feature on arm64 and riscv.

[1] https://lore.kernel.org/lkml/20211123214814.3756047-1-pasha.tatashin@soleen.com/


This patch (of 6):

Compared with PxD_PAGE_SIZE, which is defined and used only on x86,
PxD_SIZE is more common in each architecture.  Therefore, it is more
reasonable to use PxD_SIZE instead of PxD_PAGE_SIZE in
page_table_check.c.
At the same time, it is easier to support page table check in other
architectures.  The substitution has no functional impact on x86.

Link: https://lkml.kernel.org/r/20220507110114.4128854-1-tongtiangen@huawei.com
Link: https://lkml.kernel.org/r/20220507110114.4128854-2-tongtiangen@huawei.com
Signed-off-by: Tong Tiangen
Suggested-by: Anshuman Khandual
Acked-by: Pasha Tatashin
Reviewed-by: Anshuman Khandual
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Kefeng Wang
Signed-off-by: Andrew Morton
---

 mm/page_table_check.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/mm/page_table_check.c~mm-page_table_check-using-pxd_size-instead-of-pxd_page_size
+++ a/mm/page_table_check.c
@@ -177,7 +177,7 @@ void __page_table_check_pmd_clear(struct
 
 	if (pmd_user_accessible_page(pmd)) {
 		page_table_check_clear(mm, addr, pmd_pfn(pmd),
-				       PMD_PAGE_SIZE >> PAGE_SHIFT);
+				       PMD_SIZE >> PAGE_SHIFT);
 	}
 }
 EXPORT_SYMBOL(__page_table_check_pmd_clear);
@@ -190,7 +190,7 @@ void __page_table_check_pud_clear(struct
 
 	if (pud_user_accessible_page(pud)) {
 		page_table_check_clear(mm, addr, pud_pfn(pud),
-				       PUD_PAGE_SIZE >> PAGE_SHIFT);
+				       PUD_SIZE >> PAGE_SHIFT);
 	}
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);
@@ -219,7 +219,7 @@ void __page_table_check_pmd_set(struct m
 	__page_table_check_pmd_clear(mm, addr, *pmdp);
 	if (pmd_user_accessible_page(pmd)) {
 		page_table_check_set(mm, addr, pmd_pfn(pmd),
-				     PMD_PAGE_SIZE >> PAGE_SHIFT,
+				     PMD_SIZE >> PAGE_SHIFT,
 				     pmd_write(pmd));
 	}
 }
@@ -234,7 +234,7 @@ void __page_table_check_pud_set(struct m
 	__page_table_check_pud_clear(mm, addr, *pudp);
 	if (pud_user_accessible_page(pud)) {
 		page_table_check_set(mm, addr, pud_pfn(pud),
-				     PUD_PAGE_SIZE >> PAGE_SHIFT,
+				     PUD_SIZE >> PAGE_SHIFT,
 				     pud_write(pud));
 	}
 }
_

Patches currently in -mm which might be from tongtiangen@huawei.com are

mm-page_table_check-using-pxd_size-instead-of-pxd_page_size.patch
mm-page_table_check-add-hooks-to-public-helpers.patch
mm-remove-__have_arch_ptep_clear-in-pgtableh.patch
riscv-mm-enable-arch_supports_page_table_check.patch