From: Matthew Wilcox <matthew.r.wilcox@intel.com>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Matthew Wilcox <willy@linux.intel.com>
Subject: [PATCH 06/10] mm: Add vmf_insert_pfn_pmd()
Date: Fri, 10 Jul 2015 16:29:21 -0400
Message-ID: <1436560165-8943-7-git-send-email-matthew.r.wilcox@intel.com>
In-Reply-To: <1436560165-8943-1-git-send-email-matthew.r.wilcox@intel.com>

From: Matthew Wilcox <willy@linux.intel.com>

Similar to vm_insert_pfn(), but for PMDs rather than PTEs.  The 'vmf_'
prefix instead of 'vm_' prefix is intended to indicate that it returns
a VMF_ value rather than an errno (which would only have to be converted
into a VMF_ value anyway).

Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
---
 include/linux/huge_mm.h |  2 ++
 mm/huge_memory.c        | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 70587ea..f9b612f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -33,6 +33,8 @@ extern int move_huge_pmd(struct vm_area_struct *vma,
 extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
			unsigned long addr, pgprot_t newprot,
			int prot_numa);
+int vmf_insert_pfn_pmd(struct vm_area_struct *, unsigned long addr, pmd_t *,
+			unsigned long pfn, bool write);
 
 enum transparent_hugepage_flag {
	TRANSPARENT_HUGEPAGE_FLAG,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index db3180f..26d0fc1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -837,6 +837,49 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
	return 0;
 }
 
+static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+		pmd_t *pmd, unsigned long pfn, pgprot_t prot, bool write)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t entry;
+	spinlock_t *ptl;
+
+	ptl = pmd_lock(mm, pmd);
+	if (pmd_none(*pmd)) {
+		entry = pmd_mkhuge(pfn_pmd(pfn, prot));
+		if (write) {
+			entry = pmd_mkyoung(pmd_mkdirty(entry));
+			entry = maybe_pmd_mkwrite(entry, vma);
+		}
+		set_pmd_at(mm, addr, pmd, entry);
+		update_mmu_cache_pmd(vma, addr, pmd);
+	}
+	spin_unlock(ptl);
+	return VM_FAULT_NOPAGE;
+}
+
+int vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+			pmd_t *pmd, unsigned long pfn, bool write)
+{
+	pgprot_t pgprot = vma->vm_page_prot;
+	/*
+	 * If we had pmd_special, we could avoid all these restrictions,
+	 * but we need to be consistent with PTEs and architectures that
+	 * can't support a 'special' bit.
+	 */
+	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
+	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
+						(VM_PFNMAP|VM_MIXEDMAP));
+	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
+	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
+
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return VM_FAULT_SIGBUS;
+	if (track_pfn_insert(vma, &pgprot, pfn))
+		return VM_FAULT_SIGBUS;
+	return insert_pfn_pmd(vma, addr, pmd, pfn, pgprot, write);
+}
+
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
		  struct vm_area_struct *vma)
-- 
2.1.4
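To make the 'vmf_' naming rationale above concrete, here is a minimal userspace sketch. The constant values and the errno_to_vm_fault() helper are illustrative assumptions, not the kernel's actual definitions; the point is the conversion step that a vm_-style (errno-returning) API forces on every fault handler, and that a vmf_-style API eliminates.

```c
#include <assert.h>
#include <errno.h>

/*
 * Illustrative stand-ins for the kernel's VM_FAULT_ codes (hypothetical
 * values for this sketch; do not rely on them matching the kernel).
 */
#define VM_FAULT_OOM    0x0001
#define VM_FAULT_SIGBUS 0x0002
#define VM_FAULT_NOPAGE 0x0100

/*
 * The translation a caller of a vm_-style insert function must perform:
 * map the returned errno onto a VM_FAULT_ code before returning from
 * the fault handler.
 */
static int errno_to_vm_fault(int err)
{
	switch (err) {
	case 0:
		/* Mapping installed directly; no struct page to hand back. */
		return VM_FAULT_NOPAGE;
	case -ENOMEM:
		return VM_FAULT_OOM;
	default:
		return VM_FAULT_SIGBUS;
	}
}
```

Because vmf_insert_pfn_pmd() already returns VM_FAULT_NOPAGE or VM_FAULT_SIGBUS itself, a huge-page fault handler can propagate its return value unchanged rather than routing it through a conversion like the one above.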