* [PATCH v2 0/8] Misc MM patches
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

These are all things that I noticed while working on the THP patches,
but all stand alone as useful MM cleanups. Andrew, please consider
applying.

There's two extra patches in here compared to v1; I separated out the
patch renaming the arguments to find_subpage() from the patch converting
it to use hpage_nr_pages(), and I added a patch to check PageTail() in
hpage_nr_pages() for the !THP case. Patch 5 (Remove
CONFIG_TRANSPARENT_HUGE_PAGECACHE) adds the missing hunk that Alex Duyck
pointed out. Patch 8 (pagecache_get_page documentation) also renames the
'offset' parameter to 'index'.

Original series here:
https://lore.kernel.org/linux-mm/20200303041125.19358-1-willy@infradead.org/

Matthew Wilcox (Oracle) (8):
  mm: Use vm_fault error code directly
  mm: Rename arguments to find_subpage
  mm: Check PageTail in hpage_nr_pages even when !THP
  mm: Optimise find_subpage for !THP
  mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io
  mm: Unexport find_get_entry
  mm: Rewrite pagecache_get_page documentation

 include/linux/huge_mm.h  |  6 ++++-
 include/linux/pagemap.h  | 15 +++++++-----
 include/linux/shmem_fs.h | 10 +-------
 mm/Kconfig               |  6 +----
 mm/filemap.c             | 52 +++++++++++++++++++---------------------
 mm/huge_memory.c         |  2 +-
 mm/khugepaged.c          | 12 ++++------
 mm/memory.c              |  5 ++--
 mm/page-writeback.c      |  2 +-
 mm/rmap.c                |  2 +-
 mm/shmem.c               | 36 ++++++++++++++--------------
 11 files changed, 67 insertions(+), 81 deletions(-)

-- 
2.25.1
* [PATCH v2 1/8] mm: Use vm_fault error code directly
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov, Christoph Hellwig

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use VM_FAULT_OOM instead of indirecting through vmf_error(-ENOMEM).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 mm/filemap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1784478270e1..1beb7716276b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2491,7 +2491,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (!page) {
 		if (fpin)
 			goto out_retry;
-		return vmf_error(-ENOMEM);
+		return VM_FAULT_OOM;
 	}
 }
-- 
2.25.1
* [PATCH v2 2/8] mm: Rename arguments to find_subpage
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This isn't just a random struct page, it's known to be a head page,
and calling it head makes the function better self-documenting.
The pgoff_t is less confusing if it's named index instead of offset.
Also add a couple of comments to explain why we're doing various things.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b4ea3a5d00e5..68db0ed8535d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -333,14 +333,19 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
 					mapping_gfp_mask(mapping));
 }
 
-static inline struct page *find_subpage(struct page *page, pgoff_t offset)
+/*
+ * Given the page we found in the page cache, return the page corresponding
+ * to this index in the file
+ */
+static inline struct page *find_subpage(struct page *head, pgoff_t index)
 {
-	if (PageHuge(page))
-		return page;
+	/* HugeTLBfs wants the head page regardless */
+	if (PageHuge(head))
+		return head;
 
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG_ON_PAGE(PageTail(head), head);
 
-	return page + (offset & (compound_nr(page) - 1));
+	return head + (index & (compound_nr(head) - 1));
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
-- 
2.25.1
* Re: [PATCH v2 2/8] mm: Rename arguments to find_subpage
From: Christoph Hellwig @ 2020-03-19 10:15 UTC
To: Matthew Wilcox; +Cc: linux-mm

On Wed, Mar 18, 2020 at 07:02:47AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> This isn't just a random struct page, it's known to be a head page,
> and calling it head makes the function better self-documenting.
> The pgoff_t is less confusing if it's named index instead of offset.
> Also add a couple of comments to explain why we're doing various things.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 2/8] mm: Rename arguments to find_subpage
From: Kirill A. Shutemov @ 2020-03-19 14:31 UTC
To: Matthew Wilcox; +Cc: linux-mm

On Wed, Mar 18, 2020 at 07:02:47AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> This isn't just a random struct page, it's known to be a head page,
> and calling it head makes the function better self-documenting.
> The pgoff_t is less confusing if it's named index instead of offset.
> Also add a couple of comments to explain why we're doing various things.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov
* Re: [PATCH v2 2/8] mm: Rename arguments to find_subpage
From: Pankaj Gupta @ 2020-03-19 15:59 UTC
To: Matthew Wilcox; +Cc: linux-mm

> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> This isn't just a random struct page, it's known to be a head page,
> and calling it head makes the function better self-documenting.
> The pgoff_t is less confusing if it's named index instead of offset.
> Also add a couple of comments to explain why we're doing various things.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/pagemap.h | 15 ++++++++++-----
>  1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index b4ea3a5d00e5..68db0ed8535d 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -333,14 +333,19 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
>  					mapping_gfp_mask(mapping));
>  }
>
> -static inline struct page *find_subpage(struct page *page, pgoff_t offset)
> +/*
> + * Given the page we found in the page cache, return the page corresponding
> + * to this index in the file
> + */
> +static inline struct page *find_subpage(struct page *head, pgoff_t index)
>  {
> -	if (PageHuge(page))
> -		return page;
> +	/* HugeTLBfs wants the head page regardless */
> +	if (PageHuge(head))
> +		return head;
>
> -	VM_BUG_ON_PAGE(PageTail(page), page);
> +	VM_BUG_ON_PAGE(PageTail(head), head);
>
> -	return page + (offset & (compound_nr(page) - 1));
> +	return head + (index & (compound_nr(head) - 1));
>  }
>
>  struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
> --
> 2.25.1

Makes function much easier to understand. Thanks!

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
* [PATCH v2 3/8] mm: Check PageTail in hpage_nr_pages even when !THP
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

It's even more important to check that we don't have a tail page when
calling hpage_nr_pages() when THP are disabled.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5aca3d1bdb32..a155a7929b34 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -289,7 +289,11 @@ static inline struct list_head *page_deferred_list(struct page *page)
 #define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; })
 #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
 
-#define hpage_nr_pages(x) 1
+static inline int hpage_nr_pages(struct page *page)
+{
+	VM_BUG_ON_PAGE(PageTail(page), page);
+	return 1;
+}
 
 static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
-- 
2.25.1
* Re: [PATCH v2 3/8] mm: Check PageTail in hpage_nr_pages even when !THP
From: Christoph Hellwig @ 2020-03-19 10:18 UTC
To: Matthew Wilcox; +Cc: linux-mm

On Wed, Mar 18, 2020 at 07:02:48AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> It's even more important to check that we don't have a tail page when
> calling hpage_nr_pages() when THP are disabled.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 3/8] mm: Check PageTail in hpage_nr_pages even when !THP
From: Kirill A. Shutemov @ 2020-03-19 14:33 UTC
To: Matthew Wilcox; +Cc: linux-mm

On Wed, Mar 18, 2020 at 07:02:48AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> It's even more important to check that we don't have a tail page when
> calling hpage_nr_pages() when THP are disabled.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov
* [PATCH v2 4/8] mm: Optimise find_subpage for !THP
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

If THP is disabled, find_subpage() can become a no-op by using
hpage_nr_pages() instead of compound_nr(). hpage_nr_pages() embeds
a check for PageTail, so we can drop the check here.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/pagemap.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 68db0ed8535d..fcb4887e65e8 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -343,9 +343,7 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	if (PageHuge(head))
 		return head;
 
-	VM_BUG_ON_PAGE(PageTail(head), head);
-
-	return head + (index & (compound_nr(head) - 1));
+	return head + (index & (hpage_nr_pages(head) - 1));
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
-- 
2.25.1
* Re: [PATCH v2 4/8] mm: Optimise find_subpage for !THP
From: Christoph Hellwig @ 2020-03-19 10:18 UTC
To: Matthew Wilcox; +Cc: linux-mm, Kirill A . Shutemov

On Wed, Mar 18, 2020 at 07:02:49AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> If THP is disabled, find_subpage() can become a no-op by using
> hpage_nr_pages() instead of compound_nr(). hpage_nr_pages() embeds
> a check for PageTail, so we can drop the check here.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
* [PATCH v2 5/8] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE 2020-03-18 14:02 [PATCH v2 0/8] Misc MM patches Matthew Wilcox ` (3 preceding siblings ...) 2020-03-18 14:02 ` [PATCH v2 4/8] mm: Optimise find_subpage for !THP Matthew Wilcox @ 2020-03-18 14:02 ` Matthew Wilcox 2020-03-19 14:35 ` Kirill A. Shutemov 2020-03-18 14:02 ` [PATCH v2 6/8] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io Matthew Wilcox ` (2 subsequent siblings) 7 siblings, 1 reply; 17+ messages in thread From: Matthew Wilcox @ 2020-03-18 14:02 UTC (permalink / raw) To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov, Aneesh Kumar K . V From: "Matthew Wilcox (Oracle)" <willy@infradead.org> Commit e496cf3d7821 ("thp: introduce CONFIG_TRANSPARENT_HUGE_PAGECACHE") notes that it should be reverted when the PowerPC problem was fixed. The commit fixing the PowerPC problem (953c66c2b22a) did not revert the commit; instead setting CONFIG_TRANSPARENT_HUGE_PAGECACHE to the same as CONFIG_TRANSPARENT_HUGEPAGE. Checking with Kirill and Aneesh, this was an oversight, so remove the Kconfig symbol and undo the work of commit e496cf3d7821. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> --- include/linux/shmem_fs.h | 10 +--------- mm/Kconfig | 6 +----- mm/huge_memory.c | 2 +- mm/khugepaged.c | 12 ++++-------- mm/memory.c | 5 ++--- mm/rmap.c | 2 +- mm/shmem.c | 36 ++++++++++++++++++------------------ 7 files changed, 28 insertions(+), 45 deletions(-) diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h index d56fefef8905..7a35a6901221 100644 --- a/include/linux/shmem_fs.h +++ b/include/linux/shmem_fs.h @@ -78,6 +78,7 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end); extern int shmem_unuse(unsigned int type, bool frontswap, unsigned long *fs_pages_to_unuse); +extern bool shmem_huge_enabled(struct vm_area_struct *vma); extern unsigned long shmem_swap_usage(struct vm_area_struct *vma); extern unsigned long shmem_partial_swap_usage(struct address_space *mapping, pgoff_t start, pgoff_t end); @@ -114,15 +115,6 @@ static inline bool shmem_file(struct file *file) extern bool shmem_charge(struct inode *inode, long pages); extern void shmem_uncharge(struct inode *inode, long pages); -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE -extern bool shmem_huge_enabled(struct vm_area_struct *vma); -#else -static inline bool shmem_huge_enabled(struct vm_area_struct *vma) -{ - return false; -} -#endif - #ifdef CONFIG_SHMEM extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd, struct vm_area_struct *dst_vma, diff --git a/mm/Kconfig b/mm/Kconfig index ab80933be65f..211a70e8d5cf 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -420,10 +420,6 @@ config THP_SWAP For selection by architectures with reasonable THP sizes. 
-config TRANSPARENT_HUGE_PAGECACHE - def_bool y - depends on TRANSPARENT_HUGEPAGE - # # UP and nommu archs use km based percpu allocator # @@ -714,7 +710,7 @@ config GUP_GET_PTE_LOW_HIGH config READ_ONLY_THP_FOR_FS bool "Read-only THP for filesystems (EXPERIMENTAL)" - depends on TRANSPARENT_HUGE_PAGECACHE && SHMEM + depends on TRANSPARENT_HUGEPAGE && SHMEM help Allow khugepaged to put read-only file-backed pages in THP. diff --git a/mm/huge_memory.c b/mm/huge_memory.c index b08b199f9a11..e88cce651705 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -326,7 +326,7 @@ static struct attribute *hugepage_attr[] = { &defrag_attr.attr, &use_zero_page_attr.attr, &hpage_pmd_size_attr.attr, -#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE) +#ifdef CONFIG_SHMEM &shmem_enabled_attr.attr, #endif #ifdef CONFIG_DEBUG_VM diff --git a/mm/khugepaged.c b/mm/khugepaged.c index b679908743cb..cc80b0f2c5f8 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -416,8 +416,6 @@ static bool hugepage_vma_check(struct vm_area_struct *vma, (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file && (vm_flags & VM_DENYWRITE))) { - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) - return false; return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff, HPAGE_PMD_NR); } @@ -1260,7 +1258,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot) } } -#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE) +#ifdef CONFIG_SHMEM /* * Notify khugepaged that given addr of the mm is pte-mapped THP. Then * khugepaged should try to collapse the page table. 
@@ -1975,6 +1973,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, if (khugepaged_scan.address < hstart) khugepaged_scan.address = hstart; VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK); + if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma)) + goto skip; while (khugepaged_scan.address < hend) { int ret; @@ -1986,14 +1986,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, khugepaged_scan.address + HPAGE_PMD_SIZE > hend); if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) { - struct file *file; + struct file *file = get_file(vma->vm_file); pgoff_t pgoff = linear_page_index(vma, khugepaged_scan.address); - if (shmem_file(vma->vm_file) - && !shmem_huge_enabled(vma)) - goto skip; - file = get_file(vma->vm_file); up_read(&mm->mmap_sem); ret = 1; khugepaged_scan_file(mm, file, pgoff, hpage); diff --git a/mm/memory.c b/mm/memory.c index 0bccc622e482..6ab0b03ea9bd 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3354,7 +3354,7 @@ static vm_fault_t pte_alloc_one_map(struct vm_fault *vmf) return 0; } -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE static void deposit_prealloc_pte(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; @@ -3456,8 +3456,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, pte_t entry; vm_fault_t ret; - if (pmd_none(*vmf->pmd) && PageTransCompound(page) && - IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) { + if (pmd_none(*vmf->pmd) && PageTransCompound(page)) { /* THP on COW? 
*/ VM_BUG_ON_PAGE(memcg, page); diff --git a/mm/rmap.c b/mm/rmap.c index b3e381919835..af48024c6baf 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -940,7 +940,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma, set_pte_at(vma->vm_mm, address, pte, entry); ret = 1; } else { -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE pmd_t *pmd = pvmw.pmd; pmd_t entry; diff --git a/mm/shmem.c b/mm/shmem.c index c8f7540ef048..056cec644c17 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -410,7 +410,7 @@ static bool shmem_confirm_swap(struct address_space *mapping, #define SHMEM_HUGE_DENY (-1) #define SHMEM_HUGE_FORCE (-2) -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE /* ifdef here to avoid bloating shmem.o when not necessary */ static int shmem_huge __read_mostly; @@ -580,7 +580,7 @@ static long shmem_unused_huge_count(struct super_block *sb, struct shmem_sb_info *sbinfo = SHMEM_SB(sb); return READ_ONCE(sbinfo->shrinklist_len); } -#else /* !CONFIG_TRANSPARENT_HUGE_PAGECACHE */ +#else /* !CONFIG_TRANSPARENT_HUGEPAGE */ #define shmem_huge SHMEM_HUGE_DENY @@ -589,11 +589,11 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, { return 0; } -#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE */ +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo) { - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) && shmem_huge != SHMEM_HUGE_DENY) return true; @@ -1059,7 +1059,7 @@ static int shmem_setattr(struct dentry *dentry, struct iattr *attr) * Part of the huge page can be beyond i_size: subject * to shrink under memory pressure. 
*/ - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) { + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) { spin_lock(&sbinfo->shrinklist_lock); /* * _careful to defend against unlocked access to @@ -1472,7 +1472,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp, pgoff_t hindex; struct page *page; - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) + if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) return NULL; hindex = round_down(index, HPAGE_PMD_NR); @@ -1511,7 +1511,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp, int nr; int err = -ENOSPC; - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) + if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) huge = false; nr = huge ? HPAGE_PMD_NR : 1; @@ -2089,7 +2089,7 @@ unsigned long shmem_get_unmapped_area(struct file *file, get_area = current->mm->get_unmapped_area; addr = get_area(file, uaddr, len, pgoff, flags); - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) + if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) return addr; if (IS_ERR_VALUE(addr)) return addr; @@ -2228,7 +2228,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma) file_accessed(file); vma->vm_ops = &shmem_vm_ops; - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) < (vma->vm_end & HPAGE_PMD_MASK)) { khugepaged_enter(vma, vma->vm_flags); @@ -3457,7 +3457,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param) case Opt_huge: ctx->huge = result.uint_32; if (ctx->huge != SHMEM_HUGE_NEVER && - !(IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && + !(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && has_transparent_hugepage())) goto unsupported_parameter; ctx->seen |= SHMEM_SEEN_HUGE; @@ -3603,7 +3603,7 @@ static int shmem_show_options(struct seq_file *seq, struct dentry *root) if (!gid_eq(sbinfo->gid, GLOBAL_ROOT_GID)) seq_printf(seq, ",gid=%u", from_kgid_munged(&init_user_ns, sbinfo->gid)); -#ifdef 
CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE /* Rightly or wrongly, show huge mount option unmasked by shmem_huge */ if (sbinfo->huge) seq_printf(seq, ",huge=%s", shmem_format_huge(sbinfo->huge)); @@ -3848,7 +3848,7 @@ static const struct super_operations shmem_ops = { .evict_inode = shmem_evict_inode, .drop_inode = generic_delete_inode, .put_super = shmem_put_super, -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE .nr_cached_objects = shmem_unused_huge_count, .free_cached_objects = shmem_unused_huge_scan, #endif @@ -3910,7 +3910,7 @@ int __init shmem_init(void) goto out1; } -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY) SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge; else @@ -3926,7 +3926,7 @@ int __init shmem_init(void) return error; } -#if defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && defined(CONFIG_SYSFS) +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS) static ssize_t shmem_enabled_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { @@ -3978,9 +3978,9 @@ static ssize_t shmem_enabled_store(struct kobject *kobj, struct kobj_attribute shmem_enabled_attr = __ATTR(shmem_enabled, 0644, shmem_enabled_show, shmem_enabled_store); -#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE && CONFIG_SYSFS */ +#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */ -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE +#ifdef CONFIG_TRANSPARENT_HUGEPAGE bool shmem_huge_enabled(struct vm_area_struct *vma) { struct inode *inode = file_inode(vma->vm_file); @@ -4015,7 +4015,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma) return false; } } -#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE */ +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ #else /* !CONFIG_SHMEM */ @@ -4184,7 +4184,7 @@ int shmem_zero_setup(struct vm_area_struct *vma) vma->vm_file = file; vma->vm_ops = &shmem_vm_ops; - if 
(IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) < (vma->vm_end & HPAGE_PMD_MASK)) { khugepaged_enter(vma, vma->vm_flags); -- 2.25.1 ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v2 5/8] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
From: Kirill A. Shutemov @ 2020-03-19 14:35 UTC
To: Matthew Wilcox; +Cc: linux-mm, Kirill A . Shutemov, Aneesh Kumar K . V

On Wed, Mar 18, 2020 at 07:02:50AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> Commit e496cf3d7821 ("thp: introduce CONFIG_TRANSPARENT_HUGE_PAGECACHE")
> notes that it should be reverted when the PowerPC problem was fixed.
> The commit fixing the PowerPC problem (953c66c2b22a) did not revert
> the commit; instead setting CONFIG_TRANSPARENT_HUGE_PAGECACHE to the
> same as CONFIG_TRANSPARENT_HUGEPAGE. Checking with Kirill and Aneesh,
> this was an oversight, so remove the Kconfig symbol and undo the work
> of commit e496cf3d7821.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov
* [PATCH v2 6/8] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Christoph Hellwig, Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Dumping the page information in this circumstance helps for debugging.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/page-writeback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2caf780a42e7..9173c25cf8e6 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2655,7 +2655,7 @@ int clear_page_dirty_for_io(struct page *page)
 	struct address_space *mapping = page_mapping(page);
 	int ret = 0;
 
-	BUG_ON(!PageLocked(page));
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	if (mapping && mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;
-- 
2.25.1
* [PATCH v2 7/8] mm: Unexport find_get_entry
From: Matthew Wilcox @ 2020-03-18 14:02 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Christoph Hellwig, Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

No in-tree users (proc, madvise, memcg, mincore) can be built as
a module.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/filemap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1beb7716276b..83ce9ce0bee1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1536,7 +1536,6 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 	return page;
 }
-EXPORT_SYMBOL(find_get_entry);
 
 /**
  * find_lock_entry - locate, pin and lock a page cache entry
-- 
2.25.1
* [PATCH v2 8/8] mm: Rewrite pagecache_get_page documentation
  2020-03-18 14:02 [PATCH v2 0/8] Misc MM patches Matthew Wilcox
                   ` (6 preceding siblings ...)
  2020-03-18 14:02 ` [PATCH v2 7/8] mm: Unexport find_get_entry Matthew Wilcox
@ 2020-03-18 14:02 ` Matthew Wilcox
  2020-03-19 10:19 ` Christoph Hellwig
  7 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2020-03-18 14:02 UTC (permalink / raw)
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

 - These were never called PCG flags; they've been called FGP flags since
   their introduction in 2014.
 - The FGP_FOR_MMAP flag was misleadingly documented as if it was an
   alternative to FGP_CREAT instead of an option to it.
 - Rename the 'offset' parameter to 'index'.
 - Capitalisation, formatting, rewording.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/filemap.c | 49 +++++++++++++++++++++++--------------------------
 1 file changed, 23 insertions(+), 26 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 83ce9ce0bee1..666c5a3a82ae 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1574,42 +1574,39 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
 EXPORT_SYMBOL(find_lock_entry);
 
 /**
- * pagecache_get_page - find and get a page reference
- * @mapping: the address_space to search
- * @offset: the page index
- * @fgp_flags: PCG flags
- * @gfp_mask: gfp mask to use for the page cache data page allocation
- *
- * Looks up the page cache slot at @mapping & @offset.
+ * pagecache_get_page - Find and get a reference to a page.
+ * @mapping: The address_space to search.
+ * @index: The page index.
+ * @fgp_flags: %FGP flags modify how the page is returned.
+ * @gfp_mask: Memory allocation flags to use if %FGP_CREAT is specified.
  *
- * PCG flags modify how the page is returned.
+ * Looks up the page cache entry at @mapping & @index.
  *
- * @fgp_flags can be:
+ * @fgp_flags can be zero or more of these flags:
  *
- * - FGP_ACCESSED: the page will be marked accessed
- * - FGP_LOCK: Page is return locked
- * - FGP_CREAT: If page is not present then a new page is allocated using
- *   @gfp_mask and added to the page cache and the VM's LRU
- *   list. The page is returned locked and with an increased
- *   refcount.
- * - FGP_FOR_MMAP: Similar to FGP_CREAT, only we want to allow the caller to do
- *   its own locking dance if the page is already in cache, or unlock the page
- *   before returning if we had to add the page to pagecache.
+ * * %FGP_ACCESSED - The page will be marked accessed.
+ * * %FGP_LOCK - The page is returned locked.
+ * * %FGP_CREAT - If no page is present then a new page is allocated using
+ *   @gfp_mask and added to the page cache and the VM's LRU list.
+ *   The page is returned locked and with an increased refcount.
+ * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
+ *   page is already in cache.  If the page was allocated, unlock it before
+ *   returning so the caller can do the same dance.
  *
- * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
- * if the GFP flags specified for FGP_CREAT are atomic.
+ * If %FGP_LOCK or %FGP_CREAT are specified then the function may sleep even
+ * if the %GFP flags specified for %FGP_CREAT are atomic.
  *
  * If there is a page cache page, it is returned with an increased refcount.
  *
- * Return: the found page or %NULL otherwise.
+ * Return: The found page or %NULL otherwise.
  */
-struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
-		int fgp_flags, gfp_t gfp_mask)
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
+		int fgp_flags, gfp_t gfp_mask)
 {
 	struct page *page;
 
 repeat:
-	page = find_get_entry(mapping, offset);
+	page = find_get_entry(mapping, index);
 	if (xa_is_value(page))
 		page = NULL;
 	if (!page)
@@ -1631,7 +1628,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 			put_page(page);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(page->index != offset, page);
+		VM_BUG_ON_PAGE(page->index != index, page);
 	}
 
 	if (fgp_flags & FGP_ACCESSED)
@@ -1656,7 +1653,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 	if (fgp_flags & FGP_ACCESSED)
 		__SetPageReferenced(page);
 
-	err = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
+	err = add_to_page_cache_lru(page, mapping, index, gfp_mask);
 	if (unlikely(err)) {
 		put_page(page);
 		page = NULL;
-- 
2.25.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread
* Re: [PATCH v2 8/8] mm: Rewrite pagecache_get_page documentation
  2020-03-18 14:02 ` [PATCH v2 8/8] mm: Rewrite pagecache_get_page documentation Matthew Wilcox
@ 2020-03-19 10:19 ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-19 10:19 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-mm, Kirill A . Shutemov

On Wed, Mar 18, 2020 at 07:02:53AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
>  - These were never called PCG flags; they've been called FGP flags since
>    their introduction in 2014.
>  - The FGP_FOR_MMAP flag was misleadingly documented as if it was an
>    alternative to FGP_CREAT instead of an option to it.
>  - Rename the 'offset' parameter to 'index'.
>  - Capitalisation, formatting, rewording.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 17+ messages in thread
end of thread, other threads:[~2020-03-19 15:59 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-18 14:02 [PATCH v2 0/8] Misc MM patches Matthew Wilcox
2020-03-18 14:02 ` [PATCH v2 1/8] mm: Use vm_fault error code directly Matthew Wilcox
2020-03-18 14:02 ` [PATCH v2 2/8] mm: Rename arguments to find_subpage Matthew Wilcox
2020-03-19 10:15   ` Christoph Hellwig
2020-03-19 14:31   ` Kirill A. Shutemov
2020-03-19 15:59     ` Pankaj Gupta
2020-03-18 14:02 ` [PATCH v2 3/8] mm: Check PageTail in hpage_nr_pages even when !THP Matthew Wilcox
2020-03-19 10:18   ` Christoph Hellwig
2020-03-19 14:33   ` Kirill A. Shutemov
2020-03-18 14:02 ` [PATCH v2 4/8] mm: Optimise find_subpage for !THP Matthew Wilcox
2020-03-19 10:18   ` Christoph Hellwig
2020-03-18 14:02 ` [PATCH v2 5/8] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE Matthew Wilcox
2020-03-19 14:35   ` Kirill A. Shutemov
2020-03-18 14:02 ` [PATCH v2 6/8] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io Matthew Wilcox
2020-03-18 14:02 ` [PATCH v2 7/8] mm: Unexport find_get_entry Matthew Wilcox
2020-03-18 14:02 ` [PATCH v2 8/8] mm: Rewrite pagecache_get_page documentation Matthew Wilcox
2020-03-19 10:19   ` Christoph Hellwig