From: "Thomas Hellström (VMware)" <thomas_os@shipmail.org>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: pv-drivers@vmware.com, linux-graphics-maintainer@vmware.com,
	"Thomas Hellstrom" <thellstrom@vmware.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Michal Hocko" <mhocko@suse.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Christian König" <christian.koenig@amd.com>
Subject: [PATCH 1/8] mm: Introduce vma_is_special_huge
Date: Tue, 3 Dec 2019 14:22:32 +0100
Message-ID: <20191203132239.5910-2-thomas_os@shipmail.org>
In-Reply-To: <20191203132239.5910-1-thomas_os@shipmail.org>

From: Thomas Hellstrom <thellstrom@vmware.com>

For VM_PFNMAP and VM_MIXEDMAP vmas that want to support transhuge pages
and transhuge page table entries, introduce vma_is_special_huge() that
takes the same codepaths as vma_is_dax().

The use of "special" follows the definition in memory.c, vm_normal_page():
"Special" mappings do not wish to be associated with a "struct page"
(either it doesn't exist, or it exists but they don't want to touch it)

For PAGE_SIZE pages, "special" is determined per page table entry to be
able to deal with COW pages. But since we don't have huge COW pages, we
can classify a vma as either "special huge" or "normal huge".

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 include/linux/mm.h | 6 ++++++
 mm/huge_memory.c   | 6 +++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0133542d69c9..886a1f899887 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2822,6 +2822,12 @@ extern long copy_huge_page_from_user(struct page *dst_page,
 				const void __user *usr_src,
 				unsigned int pages_per_huge_page,
 				bool allow_pagefault);
+static inline bool vma_is_special_huge(struct vm_area_struct *vma)
+{
+	return vma_is_dax(vma) || (vma->vm_file &&
+				   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
+}
+
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */

 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 41a0fbddc96b..f8d24fc3f4df 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1789,7 +1789,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
 						tlb->fullmm);
 	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
-	if (vma_is_dax(vma)) {
+	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
@@ -2053,7 +2053,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 */
 	pudp_huge_get_and_clear_full(tlb->mm, addr, pud, tlb->fullmm);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
-	if (vma_is_dax(vma)) {
+	if (vma_is_special_huge(vma)) {
 		spin_unlock(ptl);
 		/* No zero page support yet */
 	} else {
@@ -2162,7 +2162,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	if (arch_needs_pgtable_deposit())
 		zap_deposited_table(mm, pmd);
-	if (vma_is_dax(vma))
+	if (vma_is_special_huge(vma))
 		return;
 	page = pmd_page(_pmd);
 	if (!PageDirty(page) && pmd_dirty(_pmd))
--
2.21.0
Thread overview (46+ messages, cross-posted copies deduplicated):

2019-12-03 13:22 [PATCH 0/8] Huge page-table entries for TTM — Thomas Hellström (VMware)
2019-12-03 13:22 ` [PATCH 1/8] mm: Introduce vma_is_special_huge — Thomas Hellström (VMware) [this message]
2020-03-01  4:04   ` Andrew Morton
2019-12-03 13:22 ` [PATCH 2/8] mm: Split huge pages on write-notify or COW — Thomas Hellström (VMware)
2020-03-01  4:04   ` Andrew Morton
2019-12-03 13:22 ` [PATCH 3/8] mm: Add vmf_insert_pfn_xxx_prot() for huge page-table entries — Thomas Hellström (VMware)
2019-12-03 13:22 ` [PATCH 4/8] drm/ttm, drm/vmwgfx: Support huge TTM pagefaults — Thomas Hellström (VMware)
2019-12-03 13:22 ` [PATCH 5/8] drm/vmwgfx: Support huge page faults — Thomas Hellström (VMware)
2019-12-03 13:22 ` [PATCH 6/8] drm: Add a drm_get_unmapped_area() helper — Thomas Hellström (VMware)
2019-12-04 11:11   ` Christian König
2019-12-04 11:36     ` Thomas Hellström (VMware)
2019-12-04 12:08       ` Christian König
2019-12-04 12:32         ` Thomas Hellström (VMware)
2019-12-04 14:40           ` Christian König
2019-12-04 15:36             ` Thomas Hellström (VMware)
2019-12-03 13:22 ` [PATCH 7/8] drm/ttm: Introduce a huge page aligning TTM range manager — Thomas Hellström (VMware)
2019-12-04 11:13   ` Christian König
2019-12-04 11:45     ` Thomas Hellström (VMware)
2019-12-04 12:16       ` Christian König
2019-12-04 13:18         ` Thomas Hellström (VMware)
2019-12-04 14:02           ` Christian König
2019-12-03 13:22 ` [PATCH 8/8] drm/vmwgfx: Hook up the helpers to align buffer objects — Thomas Hellström (VMware)
2020-03-01  4:04 ` [PATCH 0/8] Huge page-table entries for TTM — Andrew Morton