From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anshuman Khandual
Subject: Re: [RFC V2 1/2] arm64/mm: Change THP helpers per generic memory semantics
To: Catalin Marinas
Cc: mark.rutland@arm.com, Suzuki Poulose, Marc Zyngier, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, ziy@nvidia.com, will@kernel.org, linux-arm-kernel@lists.infradead.org
References: <1592226918-26378-1-git-send-email-anshuman.khandual@arm.com>
 <1592226918-26378-2-git-send-email-anshuman.khandual@arm.com>
 <20200702121135.GD22241@gaia>
In-Reply-To: <20200702121135.GD22241@gaia>
Message-ID: <48fd53ad-03a8-eb76-46a2-b65bd75a28d6@arm.com>
Date: Mon, 6 Jul 2020 09:27:04 +0530
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
On 07/02/2020 05:41 PM, Catalin Marinas wrote:
> Hi Anshuman,

Hi Catalin,

>
> On Mon, Jun 15, 2020 at 06:45:17PM +0530, Anshuman Khandual wrote:
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -353,15 +353,92 @@ static inline int pmd_protnone(pmd_t pmd)
>>  }
>>  #endif
>>
>> +#define pmd_table(pmd)	((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_TABLE)
>> +#define pmd_sect(pmd)	((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT)
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>  /*
>> - * THP definitions.
>> + * PMD Level Encoding (THP Enabled)
>> + *
>> + * 0b00 - Not valid	Not present	NA
>> + * 0b10 - Not valid	Present		Huge (Splitting)
>> + * 0b01 - Valid		Present		Huge (Mapped)
>> + * 0b11 - Valid		Present		Table (Mapped)
>>  */
>
> I wonder whether it would be easier to read if we add a dedicated
> PMD_SPLITTING bit, only when bit 0 is cleared. This bit can be high (say
> 59), it doesn't really matter as the entry is not valid.

We could represent (PMD[1:0] = 0b10) as PMD_SPLITTING just for better
readability. But if possible, IMHO it is more efficient and less fragile
to represent a PMD state with the HW-defined PTE attribute bit positions
(including the SW-usable ones) than with the reserved bits. An earlier
proposal used PTE_SPECIAL (bit 56) instead; using PMD_TABLE_BIT saves
bit 56 for later.

Thinking about it again, would these unused higher bits [59..63] not
create any problem later? For example, while enabling THP swapping
without split via ARCH_WANTS_THP_SWAP, or something else that might
require these higher bits. I am not sure, just speculating. But do you
see any particular problem with PMD_TABLE_BIT?
>
> The only doubt I have is that pmd_mkinvalid() is used in other contexts
> when it's not necessarily splitting a pmd (search for the
> pmdp_invalidate() calls). So maybe a better name like PMD_PRESENT with a
> comment that pmd_to_page() is valid (i.e. no migration or swap entry).
> Feel free to suggest a better name.

Does PMD_INVALID_PRESENT sound better?

>
>> +static inline pmd_t pmd_mksplitting(pmd_t pmd)
>> +{
>> +	unsigned long val = pmd_val(pmd);
>>
>> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -#define pmd_trans_huge(pmd)	(pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
>> +	return __pmd((val & ~PMD_TYPE_MASK) | PMD_TABLE_BIT);
>> +}
>> +
>> +static inline pmd_t pmd_clrsplitting(pmd_t pmd)
>> +{
>> +	unsigned long val = pmd_val(pmd);
>> +
>> +	return __pmd((val & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
>> +}
>> +
>> +static inline bool pmd_splitting(pmd_t pmd)
>> +{
>> +	unsigned long val = pmd_val(pmd);
>> +
>> +	if ((val & PMD_TYPE_MASK) == PMD_TABLE_BIT)
>> +		return true;
>> +	return false;
>> +}
>> +
>> +static inline bool pmd_mapped(pmd_t pmd)
>> +{
>> +	return pmd_sect(pmd);
>> +}
>> +
>> +static inline pmd_t pmd_mkinvalid(pmd_t pmd)
>> +{
>> +	/*
>> +	 * Invalidation should not have been invoked on
>> +	 * a PMD table entry. Just warn here otherwise.
>> +	 */
>> +	WARN_ON(pmd_table(pmd));
>> +	return pmd_mksplitting(pmd);
>> +}
>
> And here we wouldn't need to worry about table checks.

This is just a temporary sanity check validating the assumption that
pmdp_invalidate() is never called on a PMD table entry. It can be
dropped later on if required.

>> +static inline int pmd_present(pmd_t pmd);
>> +
>> +static inline int pmd_trans_huge(pmd_t pmd)
>> +{
>> +	if (!pmd_present(pmd))
>> +		return 0;
>> +
>> +	if (!pmd_val(pmd))
>> +		return 0;
>> +
>> +	if (pmd_mapped(pmd))
>> +		return 1;
>> +
>> +	if (pmd_splitting(pmd))
>> +		return 1;
>> +	return 0;
>
> Doesn't your new pmd_present() already check for splitting?
I think I actually meant pte_present() here instead, my bad.

> checking for bit 0 and the new PMD_PRESENT. That would be similar to
> what we do with PTE_PROT_NONE. Actually, you could use the same bit for
> both.

IIUC PROT_NONE is supported at the PMD level as well. Hence, with the
valid bit cleared, there is a chance of misinterpretation between
pmd_protnone() and pmd_splitting() if the same bit (PTE_PROT_NONE) is
used for both.

>
>> +}
>> +
>> +void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>> +		pmd_t *pmdp, pmd_t pmd);
>>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>
>> -#define pmd_present(pmd)	pte_present(pmd_pte(pmd))
>> +static inline int pmd_present(pmd_t pmd)
>> +{
>> +	pte_t pte = pmd_pte(pmd);
>> +
>> +	if (pte_present(pte))
>> +		return 1;
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +	if (pmd_splitting(pmd))
>> +		return 1;
>> +#endif
>> +	return 0;
>> +}
>
> [...]
>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 990929c8837e..337519031115 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -22,6 +22,8 @@
>>  #include
>>  #include
>>  #include
>> +#include
>> +#include
>>
>>  #include
>>  #include
>> @@ -1483,3 +1485,21 @@ static int __init prevent_bootmem_remove_init(void)
>>  }
>>  device_initcall(prevent_bootmem_remove_init);
>>  #endif
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>> +		pmd_t *pmdp, pmd_t pmd)
>> +{
>> +	/*
>> +	 * PMD migration entries need to retain the splitting PMD
>> +	 * representation created with pmdp_invalidate(). But any
>> +	 * non-migration entry which just might have been invalidated
>> +	 * previously still needs to be a normal huge page. Hence
>> +	 * selectively clear splitting entries.
>> +	 */
>> +	if (!is_migration_entry(pmd_to_swp_entry(pmd)))
>> +		pmd = pmd_clrsplitting(pmd);
>> +
>> +	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
>> +}
>> +#endif
>
> So a pmdp_invalidate() returns the old pmd.
Do we ever need to rebuild a
> pmd based on the actual bits in the new invalidated pmdp? Wondering how
> the table bit ends up here that we need to pmd_clrsplitting().

Yes, a pmd is always rebuilt via set_pmd_at() with the old value
returned from an earlier pmdp_invalidate(), possibly modified by
standard page table entry transformations. Basically, it is not created
afresh from the pfn and VMA flags. Some examples:

1. dax_entry_mkclean (fs/dax.c)

	pmd = pmdp_invalidate(vma, address, pmdp);
	pmd = pmd_wrprotect(pmd);
	pmd = pmd_mkclean(pmd);
	set_pmd_at(vma->vm_mm, address, pmdp, pmd);

2. clear_soft_dirty_pmd (fs/proc/task_mmu.c)

	old = pmdp_invalidate(vma, addr, pmdp);
	if (pmd_dirty(old))
		pmd = pmd_mkdirty(pmd);
	if (pmd_young(old))
		pmd = pmd_mkyoung(pmd);
	pmd = pmd_wrprotect(pmd);
	pmd = pmd_clear_soft_dirty(pmd);
	set_pmd_at(vma->vm_mm, addr, pmdp, pmd);

3. madvise_free_huge_pmd (mm/huge_memory.c)

	orig_pmd = *pmd;
	....
	pmdp_invalidate(vma, addr, pmd);
	orig_pmd = pmd_mkold(orig_pmd);
	orig_pmd = pmd_mkclean(orig_pmd);
	set_pmd_at(mm, addr, pmd, orig_pmd);

4. page_mkclean_one (mm/rmap.c)

	entry = pmdp_invalidate(vma, address, pmd);
	entry = pmd_wrprotect(entry);
	entry = pmd_mkclean(entry);
	set_pmd_at(vma->vm_mm, address, pmd, entry);

Any additional bit set in the PMD via pmdp_invalidate() needs to be
cleared off in set_pmd_at(), unless it is a migration entry.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel