* [PATCH 0/7] THP prep patches
@ 2020-06-29 15:19 Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 1/7] mm: Store compound_nr as well as compound_order Matthew Wilcox (Oracle)
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

These are some generic cleanups and improvements, which I would like
merged into mmotm soon.  The first one should be a performance improvement
for all users of compound pages, and the others are aimed at getting
code to compile away when CONFIG_TRANSPARENT_HUGEPAGE is disabled (i.e. on
small systems).  The new helpers are also better documented and less
confusing than the current mixture of compound, hpage and thp prefixes.
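
A minimal before/after sketch of a typical call site (modelled on the
nvdimm conversions in patch 4; the function names here are invented for
illustration):

/* Before: open-coded, and easy to misread as hugetlbfs-related. */
static unsigned long rw_len_old(struct page *page)
{
        return hpage_nr_pages(page) * PAGE_SIZE;
}

/* After: one self-describing helper; folds to PAGE_SIZE when THP is off. */
static unsigned long rw_len_new(struct page *page)
{
        return thp_size(page);
}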

Matthew Wilcox (Oracle) (7):
  mm: Store compound_nr as well as compound_order
  mm: Move page-flags include to top of file
  mm: Add thp_order
  mm: Add thp_size
  mm: Replace hpage_nr_pages with thp_nr_pages
  mm: Add thp_head
  mm: Introduce offset_in_thp

 drivers/nvdimm/btt.c      |  4 +--
 drivers/nvdimm/pmem.c     |  6 ++--
 include/linux/huge_mm.h   | 58 ++++++++++++++++++++++++++++++++++++---
 include/linux/mm.h        | 12 ++++----
 include/linux/mm_inline.h |  6 ++--
 include/linux/mm_types.h  |  1 +
 include/linux/pagemap.h   |  6 ++--
 mm/compaction.c           |  2 +-
 mm/filemap.c              |  2 +-
 mm/gup.c                  |  2 +-
 mm/hugetlb.c              |  2 +-
 mm/internal.h             |  4 +--
 mm/memcontrol.c           | 10 +++----
 mm/memory_hotplug.c       |  7 ++---
 mm/mempolicy.c            |  2 +-
 mm/migrate.c              | 16 +++++------
 mm/mlock.c                |  9 +++---
 mm/page_alloc.c           |  5 ++--
 mm/page_io.c              |  4 +--
 mm/page_vma_mapped.c      |  6 ++--
 mm/rmap.c                 |  8 +++---
 mm/swap.c                 | 16 +++++------
 mm/swap_state.c           |  6 ++--
 mm/swapfile.c             |  2 +-
 mm/vmscan.c               |  6 ++--
 mm/workingset.c           |  6 ++--
 26 files changed, 127 insertions(+), 81 deletions(-)

-- 
2.27.0



* [PATCH 1/7] mm: Store compound_nr as well as compound_order
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 16:22   ` Ira Weiny
  2020-07-06 10:29   ` Kirill A. Shutemov
  2020-06-29 15:19 ` [PATCH 2/7] mm: Move page-flags include to top of file Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

This removes a few instructions from functions which need to know how many
pages are in a compound page.  The storage used is either page->mapping
on 64-bit or page->index on 32-bit.  Both of these are fine to overlay
on tail pages.
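
The reordering at the end of prep_compound_page() is easy to miss, so here
is a condensed, annotated view of the resulting function (the hunks below
are authoritative; only the comments are added):

void prep_compound_page(struct page *page, unsigned int order)
{
        int i;
        int nr_pages = 1 << order;

        __SetPageHead(page);
        for (i = 1; i < nr_pages; i++) {
                struct page *p = page + i;

                /* For i == 1, this store writes page[1].mapping ... */
                p->mapping = TAIL_MAPPING;
                set_compound_head(p, page);
        }

        /*
         * ... which is presumably why these calls move below the loop:
         * set_compound_order() now also stores page[1].compound_nr, which
         * overlays page->mapping on 64-bit and must not be clobbered by
         * the TAIL_MAPPING store above.
         */
        set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
        set_compound_order(page, order);
        atomic_set(compound_mapcount_ptr(page), -1);
        if (hpage_pincount_available(page))
                atomic_set(compound_pincount_ptr(page), 0);
}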

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h       | 5 ++++-
 include/linux/mm_types.h | 1 +
 mm/page_alloc.c          | 5 +++--
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc7b87310c10..af0305ad090f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -911,12 +911,15 @@ static inline int compound_pincount(struct page *page)
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
+	page[1].compound_nr = 1U << order;
 }
 
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
-	return 1UL << compound_order(page);
+	if (!PageHead(page))
+		return 1;
+	return page[1].compound_nr;
 }
 
 /* Returns the number of bytes in this potentially compound page. */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 64ede5f150dc..561ed987ab44 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -134,6 +134,7 @@ struct page {
 			unsigned char compound_dtor;
 			unsigned char compound_order;
 			atomic_t compound_mapcount;
+			unsigned int compound_nr; /* 1 << compound_order */
 		};
 		struct {	/* Second tail page of compound page */
 			unsigned long _compound_pad_1;	/* compound_head */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 48eb0f1410d4..c7beb5f13193 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -673,8 +673,6 @@ void prep_compound_page(struct page *page, unsigned int order)
 	int i;
 	int nr_pages = 1 << order;
 
-	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
-	set_compound_order(page, order);
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++) {
 		struct page *p = page + i;
@@ -682,6 +680,9 @@ void prep_compound_page(struct page *page, unsigned int order)
 		p->mapping = TAIL_MAPPING;
 		set_compound_head(p, page);
 	}
+
+	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+	set_compound_order(page, order);
 	atomic_set(compound_mapcount_ptr(page), -1);
 	if (hpage_pincount_available(page))
 		atomic_set(compound_pincount_ptr(page), 0);
-- 
2.27.0



* [PATCH 2/7] mm: Move page-flags include to top of file
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 1/7] mm: Store compound_nr as well as compound_order Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 3/7] mm: Add thp_order Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

Give up on the notion that we can remove page-flags.h from mm.h.
There are currently 14 inline functions which use a PageFoo function.
Also, two of the files directly included by mm.h include page-flags.h
themselves, and there are probably more indirect inclusions.  So just
include it at the top like any other header file.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index af0305ad090f..6c29b663135f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -24,6 +24,7 @@
 #include <linux/resource.h>
 #include <linux/page_ext.h>
 #include <linux/err.h>
+#include <linux/page-flags.h>
 #include <linux/page_ref.h>
 #include <linux/memremap.h>
 #include <linux/overflow.h>
@@ -667,11 +668,6 @@ int vma_is_stack_for_current(struct vm_area_struct *vma);
 struct mmu_gather;
 struct inode;
 
-/*
- * FIXME: take this include out, include page-flags.h in
- * files which need it (119 of them)
- */
-#include <linux/page-flags.h>
 #include <linux/huge_mm.h>
 
 /*
-- 
2.27.0



* [PATCH 3/7] mm: Add thp_order
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 1/7] mm: Store compound_nr as well as compound_order Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 2/7] mm: Move page-flags include to top of file Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 4/7] mm: Add thp_size Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

This function returns the order of a transparent huge page.  It
compiles to 0 if CONFIG_TRANSPARENT_HUGEPAGE is disabled.
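
The two hunks below add the real definition (inside the
CONFIG_TRANSPARENT_HUGEPAGE section) and the !THP stub; stitched together
the shape is (a sketch only, the diff is authoritative):

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline unsigned int thp_order(struct page *page)
{
        VM_BUG_ON_PGFLAGS(PageTail(page), page);
        if (PageHead(page))
                return HPAGE_PMD_ORDER;
        return 0;
}
#else
static inline unsigned int thp_order(struct page *page)
{
        VM_BUG_ON_PGFLAGS(PageTail(page), page);
        return 0;       /* constant, so THP-only code in callers folds away */
}
#endif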

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71f20776b06c..dd19720a8bc2 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -265,6 +265,19 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 	else
 		return NULL;
 }
+
+/**
+ * thp_order - Order of a transparent huge page.
+ * @page: Head page of a transparent huge page.
+ */
+static inline unsigned int thp_order(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
+	if (PageHead(page))
+		return HPAGE_PMD_ORDER;
+	return 0;
+}
+
 static inline int hpage_nr_pages(struct page *page)
 {
 	if (unlikely(PageTransHuge(page)))
@@ -324,6 +337,12 @@ static inline struct list_head *page_deferred_list(struct page *page)
 #define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; })
 #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
 
+static inline unsigned int thp_order(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
+	return 0;
+}
+
 static inline int hpage_nr_pages(struct page *page)
 {
 	VM_BUG_ON_PAGE(PageTail(page), page);
-- 
2.27.0



* [PATCH 4/7] mm: Add thp_size
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2020-06-29 15:19 ` [PATCH 3/7] mm: Add thp_order Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

This function returns the number of bytes in a THP.  It is like
page_size(), but compiles to just PAGE_SIZE if CONFIG_TRANSPARENT_HUGEPAGE
is disabled.
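
Most of the conversions below are direct substitutions; the vma_address()
and page_mapped_in_vma() hunks also fold the arithmetic, which is
equivalent because, with n = hpage_nr_pages(page),

        PAGE_SIZE * (n - 1) == n * PAGE_SIZE - PAGE_SIZE
                            == thp_size(page) - PAGE_SIZE

so both forms give the start of the last sub-page of the mapping.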

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 drivers/nvdimm/btt.c    |  4 +---
 drivers/nvdimm/pmem.c   |  6 ++----
 include/linux/huge_mm.h | 11 +++++++++++
 mm/internal.h           |  2 +-
 mm/page_io.c            |  2 +-
 mm/page_vma_mapped.c    |  4 ++--
 6 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 48e9d169b6f9..92f25b9e1483 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1490,10 +1490,8 @@ static int btt_rw_page(struct block_device *bdev, sector_t sector,
 {
 	struct btt *btt = bdev->bd_disk->private_data;
 	int rc;
-	unsigned int len;
 
-	len = hpage_nr_pages(page) * PAGE_SIZE;
-	rc = btt_do_bvec(btt, NULL, page, len, 0, op, sector);
+	rc = btt_do_bvec(btt, NULL, page, thp_size(page), 0, op, sector);
 	if (rc == 0)
 		page_endio(page, op_is_write(op), 0);
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index d25e66fd942d..d5e86ae144e3 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -238,11 +238,9 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
 	blk_status_t rc;
 
 	if (op_is_write(op))
-		rc = pmem_do_write(pmem, page, 0, sector,
-				   hpage_nr_pages(page) * PAGE_SIZE);
+		rc = pmem_do_write(pmem, page, 0, sector, thp_size(page));
 	else
-		rc = pmem_do_read(pmem, page, 0, sector,
-				   hpage_nr_pages(page) * PAGE_SIZE);
+		rc = pmem_do_read(pmem, page, 0, sector, thp_size(page));
 	/*
 	 * The ->rw_page interface is subtle and tricky.  The core
 	 * retries on any error, so we can only invoke page_endio() in
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index dd19720a8bc2..0ec3b5a73d38 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -469,4 +469,15 @@ static inline bool thp_migration_supported(void)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+/**
+ * thp_size - Size of a transparent huge page.
+ * @page: Head page of a transparent huge page.
+ *
+ * Return: Number of bytes in this page.
+ */
+static inline unsigned long thp_size(struct page *page)
+{
+	return PAGE_SIZE << thp_order(page);
+}
+
 #endif /* _LINUX_HUGE_MM_H */
diff --git a/mm/internal.h b/mm/internal.h
index 9886db20d94f..de9f1d0ba5fc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -395,7 +395,7 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 	unsigned long start, end;
 
 	start = __vma_address(page, vma);
-	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
+	end = start + thp_size(page) - PAGE_SIZE;
 
 	/* page should be within @vma mapping range */
 	VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma);
diff --git a/mm/page_io.c b/mm/page_io.c
index e8726f3e3820..888000d1a8cc 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -40,7 +40,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
 		bio->bi_iter.bi_sector <<= PAGE_SHIFT - 9;
 		bio->bi_end_io = end_io;
 
-		bio_add_page(bio, page, PAGE_SIZE * hpage_nr_pages(page), 0);
+		bio_add_page(bio, page, thp_size(page), 0);
 	}
 	return bio;
 }
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 719c35246cfa..e65629c056e8 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -227,7 +227,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			if (pvmw->address >= pvmw->vma->vm_end ||
 			    pvmw->address >=
 					__vma_address(pvmw->page, pvmw->vma) +
-					hpage_nr_pages(pvmw->page) * PAGE_SIZE)
+					thp_size(pvmw->page))
 				return not_found(pvmw);
 			/* Did we cross page table boundary? */
 			if (pvmw->address % PMD_SIZE == 0) {
@@ -268,7 +268,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 	unsigned long start, end;
 
 	start = __vma_address(page, vma);
-	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
+	end = start + thp_size(page) - PAGE_SIZE;
 
 	if (unlikely(end < vma->vm_start || start >= vma->vm_end))
 		return 0;
-- 
2.27.0



* [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2020-06-29 15:19 ` [PATCH 4/7] mm: Add thp_size Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 17:40   ` Mike Kravetz
  2020-06-29 15:19 ` [PATCH 6/7] mm: Add thp_head Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

The thp prefix is more frequently used than hpage and we should
be consistent between the various functions.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h   | 13 +++++++++----
 include/linux/mm_inline.h |  6 +++---
 include/linux/pagemap.h   |  6 +++---
 mm/compaction.c           |  2 +-
 mm/filemap.c              |  2 +-
 mm/gup.c                  |  2 +-
 mm/hugetlb.c              |  2 +-
 mm/internal.h             |  2 +-
 mm/memcontrol.c           | 10 +++++-----
 mm/memory_hotplug.c       |  7 +++----
 mm/mempolicy.c            |  2 +-
 mm/migrate.c              | 16 ++++++++--------
 mm/mlock.c                |  9 ++++-----
 mm/page_io.c              |  2 +-
 mm/page_vma_mapped.c      |  2 +-
 mm/rmap.c                 |  8 ++++----
 mm/swap.c                 | 16 ++++++++--------
 mm/swap_state.c           |  6 +++---
 mm/swapfile.c             |  2 +-
 mm/vmscan.c               |  6 +++---
 mm/workingset.c           |  6 +++---
 21 files changed, 65 insertions(+), 62 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 0ec3b5a73d38..dcdfd21763a3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -278,9 +278,14 @@ static inline unsigned int thp_order(struct page *page)
 	return 0;
 }
 
-static inline int hpage_nr_pages(struct page *page)
+/**
+ * thp_nr_pages - The number of regular pages in this huge page.
+ * @page: The head page of a huge page.
+ */
+static inline int thp_nr_pages(struct page *page)
 {
-	if (unlikely(PageTransHuge(page)))
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
+	if (PageHead(page))
 		return HPAGE_PMD_NR;
 	return 1;
 }
@@ -343,9 +348,9 @@ static inline unsigned int thp_order(struct page *page)
 	return 0;
 }
 
-static inline int hpage_nr_pages(struct page *page)
+static inline int thp_nr_pages(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
 	return 1;
 }
 
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 219bef41d87c..8fc71e9d7bb0 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -48,14 +48,14 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 static __always_inline void add_page_to_lru_list(struct page *page,
 				struct lruvec *lruvec, enum lru_list lru)
 {
-	update_lru_size(lruvec, lru, page_zonenum(page), hpage_nr_pages(page));
+	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }
 
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
 				struct lruvec *lruvec, enum lru_list lru)
 {
-	update_lru_size(lruvec, lru, page_zonenum(page), hpage_nr_pages(page));
+	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add_tail(&page->lru, &lruvec->lists[lru]);
 }
 
@@ -63,7 +63,7 @@ static __always_inline void del_page_from_lru_list(struct page *page,
 				struct lruvec *lruvec, enum lru_list lru)
 {
 	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -hpage_nr_pages(page));
+	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
 }
 
 /**
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index cf2468da68e9..484a36185bb5 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -381,7 +381,7 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	if (PageHuge(head))
 		return head;
 
-	return head + (index & (hpage_nr_pages(head) - 1));
+	return head + (index & (thp_nr_pages(head) - 1));
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
@@ -730,7 +730,7 @@ static inline struct page *readahead_page(struct readahead_control *rac)
 
 	page = xa_load(&rac->mapping->i_pages, rac->_index);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	rac->_batch_count = hpage_nr_pages(page);
+	rac->_batch_count = thp_nr_pages(page);
 
 	return page;
 }
@@ -753,7 +753,7 @@ static inline unsigned int __readahead_batch(struct readahead_control *rac,
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		VM_BUG_ON_PAGE(PageTail(page), page);
 		array[i++] = page;
-		rac->_batch_count += hpage_nr_pages(page);
+		rac->_batch_count += thp_nr_pages(page);
 
 		/*
 		 * The page cache isn't using multi-index entries yet,
diff --git a/mm/compaction.c b/mm/compaction.c
index 86375605faa9..014eaea4c56a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -991,7 +991,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		mod_node_page_state(page_pgdat(page),
 				NR_ISOLATED_ANON + page_is_file_lru(page),
-				hpage_nr_pages(page));
+				thp_nr_pages(page));
 
 isolate_success:
 		list_add(&page->lru, &cc->migratepages);
diff --git a/mm/filemap.c b/mm/filemap.c
index f0ae9a6308cb..80ce3658b147 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -197,7 +197,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
 	if (PageHuge(page))
 		return;
 
-	nr = hpage_nr_pages(page);
+	nr = thp_nr_pages(page);
 
 	__mod_lruvec_page_state(page, NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page)) {
diff --git a/mm/gup.c b/mm/gup.c
index 6f47697f8fb0..5daadae475ea 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1703,7 +1703,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 					mod_node_page_state(page_pgdat(head),
 							    NR_ISOLATED_ANON +
 							    page_is_file_lru(head),
-							    hpage_nr_pages(head));
+							    thp_nr_pages(head));
 				}
 			}
 		}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 57ece74e3aae..6bb07bc655f7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1593,7 +1593,7 @@ static struct address_space *_get_hugetlb_page_mapping(struct page *hpage)
 
 	/* Use first found vma */
 	pgoff_start = page_to_pgoff(hpage);
-	pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
+	pgoff_end = pgoff_start + thp_nr_pages(hpage) - 1;
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 					pgoff_start, pgoff_end) {
 		struct vm_area_struct *vma = avc->vma;
diff --git a/mm/internal.h b/mm/internal.h
index de9f1d0ba5fc..ac3c79408045 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -368,7 +368,7 @@ extern void clear_page_mlock(struct page *page);
 static inline void mlock_migrate_page(struct page *newpage, struct page *page)
 {
 	if (TestClearPageMlocked(page)) {
-		int nr_pages = hpage_nr_pages(page);
+		int nr_pages = thp_nr_pages(page);
 
 		/* Holding pmd lock, no change in irq context: __mod is safe */
 		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 19622328e4b5..5136bcae93f4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5365,7 +5365,7 @@ static int mem_cgroup_move_account(struct page *page,
 {
 	struct lruvec *from_vec, *to_vec;
 	struct pglist_data *pgdat;
-	unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1;
+	unsigned int nr_pages = compound ? thp_nr_pages(page) : 1;
 	int ret;
 
 	VM_BUG_ON(from == to);
@@ -6461,7 +6461,7 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
  */
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
-	unsigned int nr_pages = hpage_nr_pages(page);
+	unsigned int nr_pages = thp_nr_pages(page);
 	struct mem_cgroup *memcg = NULL;
 	int ret = 0;
 
@@ -6692,7 +6692,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 		return;
 
 	/* Force-charge the new page. The old one will be freed soon */
-	nr_pages = hpage_nr_pages(newpage);
+	nr_pages = thp_nr_pages(newpage);
 
 	page_counter_charge(&memcg->memory, nr_pages);
 	if (do_memsw_account())
@@ -6905,7 +6905,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	 * ancestor for the swap instead and transfer the memory+swap charge.
 	 */
 	swap_memcg = mem_cgroup_id_get_online(memcg);
-	nr_entries = hpage_nr_pages(page);
+	nr_entries = thp_nr_pages(page);
 	/* Get references for the tail pages, too */
 	if (nr_entries > 1)
 		mem_cgroup_id_get_many(swap_memcg, nr_entries - 1);
@@ -6950,7 +6950,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
  */
 int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 {
-	unsigned int nr_pages = hpage_nr_pages(page);
+	unsigned int nr_pages = thp_nr_pages(page);
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index da374cd3d45b..4a7ab9de1529 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1280,7 +1280,7 @@ static int
 do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
-	struct page *page;
+	struct page *page, *head;
 	int ret = 0;
 	LIST_HEAD(source);
 
@@ -1288,15 +1288,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		if (!pfn_valid(pfn))
 			continue;
 		page = pfn_to_page(pfn);
+		head = compound_head(page);
 
 		if (PageHuge(page)) {
-			struct page *head = compound_head(page);
 			pfn = page_to_pfn(head) + compound_nr(head) - 1;
 			isolate_huge_page(head, &source);
 			continue;
 		} else if (PageTransHuge(page))
-			pfn = page_to_pfn(compound_head(page))
-				+ hpage_nr_pages(page) - 1;
+			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
 
 		/*
 		 * HWPoison pages have elevated reference counts so the migration would
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 381320671677..d2b11c291e78 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1049,7 +1049,7 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 			list_add_tail(&head->lru, pagelist);
 			mod_node_page_state(page_pgdat(head),
 				NR_ISOLATED_ANON + page_is_file_lru(head),
-				hpage_nr_pages(head));
+				thp_nr_pages(head));
 		} else if (flags & MPOL_MF_STRICT) {
 			/*
 			 * Non-movable page may reach here.  And, there may be
diff --git a/mm/migrate.c b/mm/migrate.c
index f37729673558..9d0c6a853c1c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -193,7 +193,7 @@ void putback_movable_pages(struct list_head *l)
 			put_page(page);
 		} else {
 			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -hpage_nr_pages(page));
+					page_is_file_lru(page), -thp_nr_pages(page));
 			putback_lru_page(page);
 		}
 	}
@@ -386,7 +386,7 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
 	 */
 	expected_count += is_device_private_page(page);
 	if (mapping)
-		expected_count += hpage_nr_pages(page) + page_has_private(page);
+		expected_count += thp_nr_pages(page) + page_has_private(page);
 
 	return expected_count;
 }
@@ -441,7 +441,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	newpage->index = page->index;
 	newpage->mapping = page->mapping;
-	page_ref_add(newpage, hpage_nr_pages(page)); /* add cache reference */
+	page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
 	if (PageSwapBacked(page)) {
 		__SetPageSwapBacked(newpage);
 		if (PageSwapCache(page)) {
@@ -474,7 +474,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 * to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	page_ref_unfreeze(page, expected_count - hpage_nr_pages(page));
+	page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
@@ -591,7 +591,7 @@ static void copy_huge_page(struct page *dst, struct page *src)
 	} else {
 		/* thp page */
 		BUG_ON(!PageTransHuge(src));
-		nr_pages = hpage_nr_pages(src);
+		nr_pages = thp_nr_pages(src);
 	}
 
 	for (i = 0; i < nr_pages; i++) {
@@ -1224,7 +1224,7 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		 */
 		if (likely(!__PageMovable(page)))
 			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -hpage_nr_pages(page));
+					page_is_file_lru(page), -thp_nr_pages(page));
 	}
 
 	/*
@@ -1598,7 +1598,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		list_add_tail(&head->lru, pagelist);
 		mod_node_page_state(page_pgdat(head),
 			NR_ISOLATED_ANON + page_is_file_lru(head),
-			hpage_nr_pages(head));
+			thp_nr_pages(head));
 	}
 out_putpage:
 	/*
@@ -1962,7 +1962,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 
 	page_lru = page_is_file_lru(page);
 	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_lru,
-				hpage_nr_pages(page));
+				thp_nr_pages(page));
 
 	/*
 	 * Isolating the page has taken another reference, so the
diff --git a/mm/mlock.c b/mm/mlock.c
index f8736136fad7..93ca2bf30b4f 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -61,8 +61,7 @@ void clear_page_mlock(struct page *page)
 	if (!TestClearPageMlocked(page))
 		return;
 
-	mod_zone_page_state(page_zone(page), NR_MLOCK,
-			    -hpage_nr_pages(page));
+	mod_zone_page_state(page_zone(page), NR_MLOCK, -thp_nr_pages(page));
 	count_vm_event(UNEVICTABLE_PGCLEARED);
 	/*
 	 * The previous TestClearPageMlocked() corresponds to the smp_mb()
@@ -95,7 +94,7 @@ void mlock_vma_page(struct page *page)
 
 	if (!TestSetPageMlocked(page)) {
 		mod_zone_page_state(page_zone(page), NR_MLOCK,
-				    hpage_nr_pages(page));
+				    thp_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 		if (!isolate_lru_page(page))
 			putback_lru_page(page);
@@ -192,7 +191,7 @@ unsigned int munlock_vma_page(struct page *page)
 	/*
 	 * Serialize with any parallel __split_huge_page_refcount() which
 	 * might otherwise copy PageMlocked to part of the tail pages before
-	 * we clear it in the head page. It also stabilizes hpage_nr_pages().
+	 * we clear it in the head page. It also stabilizes thp_nr_pages().
 	 */
 	spin_lock_irq(&pgdat->lru_lock);
 
@@ -202,7 +201,7 @@ unsigned int munlock_vma_page(struct page *page)
 		goto unlock_out;
 	}
 
-	nr_pages = hpage_nr_pages(page);
+	nr_pages = thp_nr_pages(page);
 	__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
 
 	if (__munlock_isolate_lru_page(page, true)) {
diff --git a/mm/page_io.c b/mm/page_io.c
index 888000d1a8cc..77170b7e6f04 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -274,7 +274,7 @@ static inline void count_swpout_vm_event(struct page *page)
 	if (unlikely(PageTransHuge(page)))
 		count_vm_event(THP_SWPOUT);
 #endif
-	count_vm_events(PSWPOUT, hpage_nr_pages(page));
+	count_vm_events(PSWPOUT, thp_nr_pages(page));
 }
 
 int __swap_writepage(struct page *page, struct writeback_control *wbc,
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e65629c056e8..5e77b269c330 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -61,7 +61,7 @@ static inline bool pfn_is_match(struct page *page, unsigned long pfn)
 		return page_pfn == pfn;
 
 	/* THP can be referenced by any subpage */
-	return pfn >= page_pfn && pfn - page_pfn < hpage_nr_pages(page);
+	return pfn >= page_pfn && pfn - page_pfn < thp_nr_pages(page);
 }
 
 /**
diff --git a/mm/rmap.c b/mm/rmap.c
index 5fe2dedce1fc..c56fab5826c1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1130,7 +1130,7 @@ void do_page_add_anon_rmap(struct page *page,
 	}
 
 	if (first) {
-		int nr = compound ? hpage_nr_pages(page) : 1;
+		int nr = compound ? thp_nr_pages(page) : 1;
 		/*
 		 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
 		 * these counters are not modified in interrupt context, and
@@ -1169,7 +1169,7 @@ void do_page_add_anon_rmap(struct page *page,
 void page_add_new_anon_rmap(struct page *page,
 	struct vm_area_struct *vma, unsigned long address, bool compound)
 {
-	int nr = compound ? hpage_nr_pages(page) : 1;
+	int nr = compound ? thp_nr_pages(page) : 1;
 
 	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
 	__SetPageSwapBacked(page);
@@ -1860,7 +1860,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		return;
 
 	pgoff_start = page_to_pgoff(page);
-	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
+	pgoff_end = pgoff_start + thp_nr_pages(page) - 1;
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 			pgoff_start, pgoff_end) {
 		struct vm_area_struct *vma = avc->vma;
@@ -1913,7 +1913,7 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		return;
 
 	pgoff_start = page_to_pgoff(page);
-	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
+	pgoff_end = pgoff_start + thp_nr_pages(page) - 1;
 	if (!locked)
 		i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap,
diff --git a/mm/swap.c b/mm/swap.c
index a82efc33411f..5fb3c36bbdad 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -241,7 +241,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
-		(*pgmoved) += hpage_nr_pages(page);
+		(*pgmoved) += thp_nr_pages(page);
 	}
 }
 
@@ -312,7 +312,7 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
 void lru_note_cost_page(struct page *page)
 {
 	lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)),
-		      page_is_file_lru(page), hpage_nr_pages(page));
+		      page_is_file_lru(page), thp_nr_pages(page));
 }
 
 static void __activate_page(struct page *page, struct lruvec *lruvec,
@@ -320,7 +320,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
-		int nr_pages = hpage_nr_pages(page);
+		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
@@ -499,7 +499,7 @@ void lru_cache_add_active_or_unevictable(struct page *page,
 		 * lock is held(spinlock), which implies preemption disabled.
 		 */
 		__mod_zone_page_state(page_zone(page), NR_MLOCK,
-				    hpage_nr_pages(page));
+				    thp_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 	}
 	lru_cache_add(page);
@@ -531,7 +531,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 {
 	int lru;
 	bool active;
-	int nr_pages = hpage_nr_pages(page);
+	int nr_pages = thp_nr_pages(page);
 
 	if (!PageLRU(page))
 		return;
@@ -579,7 +579,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
-		int nr_pages = hpage_nr_pages(page);
+		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
@@ -598,7 +598,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
-		int nr_pages = hpage_nr_pages(page);
+		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec,
 				       LRU_INACTIVE_ANON + active);
@@ -971,7 +971,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
-	int nr_pages = hpage_nr_pages(page);
+	int nr_pages = thp_nr_pages(page);
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 05889e8e3c97..1983be226b1c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -115,7 +115,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = hpage_nr_pages(page);
+	unsigned long i, nr = thp_nr_pages(page);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
@@ -157,7 +157,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
 {
 	struct address_space *address_space = swap_address_space(entry);
-	int i, nr = hpage_nr_pages(page);
+	int i, nr = thp_nr_pages(page);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE(xas, &address_space->i_pages, idx);
 
@@ -250,7 +250,7 @@ void delete_from_swap_cache(struct page *page)
 	xa_unlock_irq(&address_space->i_pages);
 
 	put_swap_page(page, entry);
-	page_ref_sub(page, hpage_nr_pages(page));
+	page_ref_sub(page, thp_nr_pages(page));
 }
 
 /* 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 987276c557d1..142095774e55 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1368,7 +1368,7 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 	unsigned char *map;
 	unsigned int i, free_entries = 0;
 	unsigned char val;
-	int size = swap_entry_size(hpage_nr_pages(page));
+	int size = swap_entry_size(thp_nr_pages(page));
 
 	si = _swap_info_get(entry);
 	if (!si)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749d239c62b2..6325003e2f16 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1360,7 +1360,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			case PAGE_ACTIVATE:
 				goto activate_locked;
 			case PAGE_SUCCESS:
-				stat->nr_pageout += hpage_nr_pages(page);
+				stat->nr_pageout += thp_nr_pages(page);
 
 				if (PageWriteback(page))
 					goto keep;
@@ -1868,7 +1868,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		SetPageLRU(page);
 		lru = page_lru(page);
 
-		nr_pages = hpage_nr_pages(page);
+		nr_pages = thp_nr_pages(page);
 		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
 		list_move(&page->lru, &lruvec->lists[lru]);
 
@@ -2070,7 +2070,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			 * so we ignore them here.
 			 */
 			if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
-				nr_rotated += hpage_nr_pages(page);
+				nr_rotated += thp_nr_pages(page);
 				list_add(&page->lru, &l_active);
 				continue;
 			}
diff --git a/mm/workingset.c b/mm/workingset.c
index 50b7937bab32..fdeabea54e77 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -262,7 +262,7 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
-	workingset_age_nonresident(lruvec, hpage_nr_pages(page));
+	workingset_age_nonresident(lruvec, thp_nr_pages(page));
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_id(lruvec_memcg(lruvec));
 	eviction = atomic_long_read(&lruvec->nonresident_age);
@@ -365,7 +365,7 @@ void workingset_refault(struct page *page, void *shadow)
 		goto out;
 
 	SetPageActive(page);
-	workingset_age_nonresident(lruvec, hpage_nr_pages(page));
+	workingset_age_nonresident(lruvec, thp_nr_pages(page));
 	inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE);
 
 	/* Page was active prior to eviction */
@@ -402,7 +402,7 @@ void workingset_activation(struct page *page)
 	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
 	lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-	workingset_age_nonresident(lruvec, hpage_nr_pages(page));
+	workingset_age_nonresident(lruvec, thp_nr_pages(page));
 out:
 	rcu_read_unlock();
 }
-- 
2.27.0



* [PATCH 6/7] mm: Add thp_head
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
                   ` (4 preceding siblings ...)
  2020-06-29 15:19 ` [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 15:19 ` [PATCH 7/7] mm: Introduce offset_in_thp Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

This is like compound_head() but compiles away when
CONFIG_TRANSPARENT_HUGEPAGE is not enabled.
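
A hypothetical caller, purely to illustrate how this combines with
thp_size() from earlier in the series (the function name is invented):

/* Total bytes covered by the (possibly huge) page containing @page. */
static inline unsigned long page_span_bytes(struct page *page)
{
        return thp_size(thp_head(page));
}

With CONFIG_TRANSPARENT_HUGEPAGE=n, thp_head() is the identity and
thp_size() is PAGE_SIZE, so this compiles down to a constant.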

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index dcdfd21763a3..bd13e9ac3437 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -266,6 +266,15 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 		return NULL;
 }
 
+/**
+ * thp_head - Head page of a transparent huge page.
+ * @page: Any page (tail, head or regular) found in the page cache.
+ */
+static inline struct page *thp_head(struct page *page)
+{
+	return compound_head(page);
+}
+
 /**
  * thp_order - Order of a transparent huge page.
  * @page: Head page of a transparent huge page.
@@ -342,6 +351,12 @@ static inline struct list_head *page_deferred_list(struct page *page)
 #define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; })
 #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
 
+static inline struct page *thp_head(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
+	return page;
+}
+
 static inline unsigned int thp_order(struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(page), page);
-- 
2.27.0



* [PATCH 7/7] mm: Introduce offset_in_thp
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
                   ` (5 preceding siblings ...)
  2020-06-29 15:19 ` [PATCH 6/7] mm: Add thp_head Matthew Wilcox (Oracle)
@ 2020-06-29 15:19 ` Matthew Wilcox (Oracle)
  2020-06-29 17:11 ` [PATCH 0/7] THP prep patches William Kucharski
  2020-06-29 18:13 ` Zi Yan
  8 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-06-29 15:19 UTC (permalink / raw)
  To: linux-fsdevel, linux-mm, Andrew Morton
  Cc: Matthew Wilcox (Oracle), David Hildenbrand

Mirroring offset_in_page(), this gives you the offset within this
particular page, no matter what size page it is.  It optimises down
to offset_in_page() if CONFIG_TRANSPARENT_HUGEPAGE is not set.
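
A hypothetical use, mirroring how offset_in_page() is typically used (the
helper name is invented; @page is assumed to be the head page whose range
covers @pos):

/* Which PAGE_SIZE sub-page of @page does file position @pos land in? */
static inline struct page *subpage_for_pos(struct page *page, loff_t pos)
{
        return page + offset_in_thp(page, pos) / PAGE_SIZE;
}

When CONFIG_TRANSPARENT_HUGEPAGE is not set, offset_in_thp() reduces to
offset_in_page(), the division yields 0 and this returns @page unchanged.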

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6c29b663135f..3fc7e8121216 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1583,6 +1583,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
 extern void pagefault_out_of_memory(void);
 
 #define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)
+#define offset_in_thp(page, p)	((unsigned long)(p) & (thp_size(page) - 1))
 
 /*
  * Flags passed to show_mem() and show_free_areas() to suppress output in
-- 
2.27.0



* Re: [PATCH 1/7] mm: Store compound_nr as well as compound_order
  2020-06-29 15:19 ` [PATCH 1/7] mm: Store compound_nr as well as compound_order Matthew Wilcox (Oracle)
@ 2020-06-29 16:22   ` Ira Weiny
  2020-06-29 16:24     ` Matthew Wilcox
  2020-07-06 10:29   ` Kirill A. Shutemov
  1 sibling, 1 reply; 18+ messages in thread
From: Ira Weiny @ 2020-06-29 16:22 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-mm, Andrew Morton

On Mon, Jun 29, 2020 at 04:19:53PM +0100, Matthew Wilcox wrote:
> This removes a few instructions from functions which need to know how many
> pages are in a compound page.  The storage used is either page->mapping
> on 64-bit or page->index on 32-bit.  Both of these are fine to overlay
> on tail pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/mm.h       | 5 ++++-
>  include/linux/mm_types.h | 1 +
>  mm/page_alloc.c          | 5 +++--
>  3 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index dc7b87310c10..af0305ad090f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -911,12 +911,15 @@ static inline int compound_pincount(struct page *page)
>  static inline void set_compound_order(struct page *page, unsigned int order)
>  {
>  	page[1].compound_order = order;
> +	page[1].compound_nr = 1U << order;
                              ^^^
			      1UL?

Ira

>  }
>  
>  /* Returns the number of pages in this potentially compound page. */
>  static inline unsigned long compound_nr(struct page *page)
>  {
> -	return 1UL << compound_order(page);
> +	if (!PageHead(page))
> +		return 1;
> +	return page[1].compound_nr;
>  }
>  
>  /* Returns the number of bytes in this potentially compound page. */
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 64ede5f150dc..561ed987ab44 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -134,6 +134,7 @@ struct page {
>  			unsigned char compound_dtor;
>  			unsigned char compound_order;
>  			atomic_t compound_mapcount;
> +			unsigned int compound_nr; /* 1 << compound_order */
>  		};
>  		struct {	/* Second tail page of compound page */
>  			unsigned long _compound_pad_1;	/* compound_head */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 48eb0f1410d4..c7beb5f13193 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -673,8 +673,6 @@ void prep_compound_page(struct page *page, unsigned int order)
>  	int i;
>  	int nr_pages = 1 << order;
>  
> -	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
> -	set_compound_order(page, order);
>  	__SetPageHead(page);
>  	for (i = 1; i < nr_pages; i++) {
>  		struct page *p = page + i;
> @@ -682,6 +680,9 @@ void prep_compound_page(struct page *page, unsigned int order)
>  		p->mapping = TAIL_MAPPING;
>  		set_compound_head(p, page);
>  	}
> +
> +	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
> +	set_compound_order(page, order);
>  	atomic_set(compound_mapcount_ptr(page), -1);
>  	if (hpage_pincount_available(page))
>  		atomic_set(compound_pincount_ptr(page), 0);
> -- 
> 2.27.0
> 
> 


* Re: [PATCH 1/7] mm: Store compound_nr as well as compound_order
  2020-06-29 16:22   ` Ira Weiny
@ 2020-06-29 16:24     ` Matthew Wilcox
  0 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox @ 2020-06-29 16:24 UTC (permalink / raw)
  To: Ira Weiny; +Cc: linux-fsdevel, linux-mm, Andrew Morton

On Mon, Jun 29, 2020 at 09:22:27AM -0700, Ira Weiny wrote:
> On Mon, Jun 29, 2020 at 04:19:53PM +0100, Matthew Wilcox wrote:
> >  static inline void set_compound_order(struct page *page, unsigned int order)
> >  {
> >  	page[1].compound_order = order;
> > +	page[1].compound_nr = 1U << order;
>                               ^^^
> 			      1UL?
> 
> Ira
> 
> > +++ b/include/linux/mm_types.h
> > @@ -134,6 +134,7 @@ struct page {
> >  			unsigned char compound_dtor;
> >  			unsigned char compound_order;
> >  			atomic_t compound_mapcount;
> > +			unsigned int compound_nr; /* 1 << compound_order */

                        ^^^^^^^^^^^^

No



* Re: [PATCH 0/7] THP prep patches
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
                   ` (6 preceding siblings ...)
  2020-06-29 15:19 ` [PATCH 7/7] mm: Introduce offset_in_thp Matthew Wilcox (Oracle)
@ 2020-06-29 17:11 ` William Kucharski
  2020-06-29 18:13 ` Zi Yan
  8 siblings, 0 replies; 18+ messages in thread
From: William Kucharski @ 2020-06-29 17:11 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-mm, Andrew Morton

Very nice cleanup and improvement in readability.

For the series:

Reviewed-by: William Kucharski <william.kucharski@oracle.com>


> On Jun 29, 2020, at 9:19 AM, Matthew Wilcox (Oracle) <willy@infradead.org> wrote:
> 
> These are some generic cleanups and improvements, which I would like
> merged into mmotm soon.  The first one should be a performance improvement
> for all users of compound pages, and the others are aimed at getting
> code to compile away when CONFIG_TRANSPARENT_HUGEPAGE is disabled (ie
> small systems).  Also better documented / less confusing than the current
> prefix mixture of compound, hpage and thp.
> 
> Matthew Wilcox (Oracle) (7):
>  mm: Store compound_nr as well as compound_order
>  mm: Move page-flags include to top of file
>  mm: Add thp_order
>  mm: Add thp_size
>  mm: Replace hpage_nr_pages with thp_nr_pages
>  mm: Add thp_head
>  mm: Introduce offset_in_thp
> 
> drivers/nvdimm/btt.c      |  4 +--
> drivers/nvdimm/pmem.c     |  6 ++--
> include/linux/huge_mm.h   | 58 ++++++++++++++++++++++++++++++++++++---
> include/linux/mm.h        | 12 ++++----
> include/linux/mm_inline.h |  6 ++--
> include/linux/mm_types.h  |  1 +
> include/linux/pagemap.h   |  6 ++--
> mm/compaction.c           |  2 +-
> mm/filemap.c              |  2 +-
> mm/gup.c                  |  2 +-
> mm/hugetlb.c              |  2 +-
> mm/internal.h             |  4 +--
> mm/memcontrol.c           | 10 +++----
> mm/memory_hotplug.c       |  7 ++---
> mm/mempolicy.c            |  2 +-
> mm/migrate.c              | 16 +++++------
> mm/mlock.c                |  9 +++---
> mm/page_alloc.c           |  5 ++--
> mm/page_io.c              |  4 +--
> mm/page_vma_mapped.c      |  6 ++--
> mm/rmap.c                 |  8 +++---
> mm/swap.c                 | 16 +++++------
> mm/swap_state.c           |  6 ++--
> mm/swapfile.c             |  2 +-
> mm/vmscan.c               |  6 ++--
> mm/workingset.c           |  6 ++--
> 26 files changed, 127 insertions(+), 81 deletions(-)
> 
> -- 
> 2.27.0
> 
> 



* Re: [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages
  2020-06-29 15:19 ` [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages Matthew Wilcox (Oracle)
@ 2020-06-29 17:40   ` Mike Kravetz
  2020-06-29 18:14     ` Matthew Wilcox
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Kravetz @ 2020-06-29 17:40 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), linux-fsdevel, linux-mm, Andrew Morton

On 6/29/20 8:19 AM, Matthew Wilcox (Oracle) wrote:
> The thp prefix is more frequently used than hpage and we should
> be consistent between the various functions.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
...
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 57ece74e3aae..6bb07bc655f7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1593,7 +1593,7 @@ static struct address_space *_get_hugetlb_page_mapping(struct page *hpage)
>  
>  	/* Use first found vma */
>  	pgoff_start = page_to_pgoff(hpage);
> -	pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
> +	pgoff_end = pgoff_start + thp_nr_pages(hpage) - 1;
>  	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
>  					pgoff_start, pgoff_end) {
>  		struct vm_area_struct *vma = avc->vma;

Naming consistency is a good idea!

I was wondering why hugetlb code would be calling a 'thp related' routine.
The reason is that hpage_nr_pages was incorrectly added (by me) to get the
number of pages in a hugetlb page.  If the name of the routine was thp_nr_pages
as proposed, I would not have made this mistake.

I will provide a patch to change the above hpage_nr_pages call to
pages_per_huge_page(page_hstate()).
-- 
Mike Kravetz


* Re: [PATCH 0/7] THP prep patches
  2020-06-29 15:19 [PATCH 0/7] THP prep patches Matthew Wilcox (Oracle)
                   ` (7 preceding siblings ...)
  2020-06-29 17:11 ` [PATCH 0/7] THP prep patches William Kucharski
@ 2020-06-29 18:13 ` Zi Yan
  8 siblings, 0 replies; 18+ messages in thread
From: Zi Yan @ 2020-06-29 18:13 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, Andrew Morton


On 29 Jun 2020, at 11:19, Matthew Wilcox (Oracle) wrote:

> These are some generic cleanups and improvements, which I would like
> merged into mmotm soon.  The first one should be a performance improvement
> for all users of compound pages, and the others are aimed at getting
> code to compile away when CONFIG_TRANSPARENT_HUGEPAGE is disabled (ie
> small systems).  Also better documented / less confusing than the current
> prefix mixture of compound, hpage and thp.
>
> Matthew Wilcox (Oracle) (7):
>   mm: Store compound_nr as well as compound_order
>   mm: Move page-flags include to top of file
>   mm: Add thp_order
>   mm: Add thp_size
>   mm: Replace hpage_nr_pages with thp_nr_pages
>   mm: Add thp_head
>   mm: Introduce offset_in_thp
>

The whole series looks good to me. Thank you for the patches.

Reviewed-by: Zi Yan <ziy@nvidia.com>

—
Best Regards,
Yan Zi



* Re: [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages
  2020-06-29 17:40   ` Mike Kravetz
@ 2020-06-29 18:14     ` Matthew Wilcox
  2020-06-29 18:32       ` Matthew Wilcox
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2020-06-29 18:14 UTC (permalink / raw)
  To: Mike Kravetz; +Cc: linux-fsdevel, linux-mm, Andrew Morton

On Mon, Jun 29, 2020 at 10:40:12AM -0700, Mike Kravetz wrote:
> > +++ b/mm/hugetlb.c
> > @@ -1593,7 +1593,7 @@ static struct address_space *_get_hugetlb_page_mapping(struct page *hpage)
> >  
> >  	/* Use first found vma */
> >  	pgoff_start = page_to_pgoff(hpage);
> > -	pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
> > +	pgoff_end = pgoff_start + thp_nr_pages(hpage) - 1;
> >  	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
> >  					pgoff_start, pgoff_end) {
> >  		struct vm_area_struct *vma = avc->vma;
> 
> Naming consistency is a good idea!
> 
> I was wondering why hugetlb code would be calling a 'thp related' routine.
> The reason is that hpage_nr_pages was incorrectly added (by me) to get the
> number of pages in a hugetlb page.  If the name of the routine was thp_nr_pages
> as proposed, I would not have made this mistake.
> 
> I will provide a patch to change the above hpage_nr_pages call to
> pages_per_huge_page(page_hstate()).

Thank you!  Clearly I wasn't thinking about this patch and just did a
mindless search-and-replace!  I should evaluate the other places where
I did this and see if any of them are wrong too.


* Re: [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages
  2020-06-29 18:14     ` Matthew Wilcox
@ 2020-06-29 18:32       ` Matthew Wilcox
  2020-07-01  1:33         ` Andrew Morton
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2020-06-29 18:32 UTC (permalink / raw)
  To: Mike Kravetz; +Cc: linux-fsdevel, linux-mm, Andrew Morton

On Mon, Jun 29, 2020 at 07:14:40PM +0100, Matthew Wilcox wrote:
> Thank you!  Clearly I wasn't thinking about this patch and just did a
> mindless search-and-replace!  I should evaluate the other places where
> I did this and see if any of them are wrong too.

add_page_to_lru_list() and friends -- safe.  hugetlbfs doesn't use the
  LRU lists.
find_subpage() -- safe.  hugetlbfs has already returned by this point.
readahead_page() and friends -- safe.  hugetlbfs doesn't readahead.
isolate_migratepages_block() -- probably safe.  I don't think hugetlbfs
  pages are migratable, and it calls del_page_from_lru_list(), so I
  infer hugetlbfs doesn't reach this point.
unaccount_page_cache_page() -- safe.  hugetlbfs has already returned by
  this point.
check_and_migrate_cma_pages() -- CMA pages aren't hugetlbfs pages.
mlock_migrate_page() -- not used for hugetlbfs.
mem_cgroup_move_account() mem_cgroup_charge() mem_cgroup_migrate()
mem_cgroup_swapout() mem_cgroup_try_charge_swap() -- I don't think
  memory cgroups control hugetlbfs pages.
do_migrate_range() -- explicitly not in the hugetlb arm of this if
  statement
migrate_page_add() -- Assumes LRU
putback_movable_pages() -- Also LRU
expected_page_refs() migrate_page_move_mapping() copy_huge_page()
unmap_and_move() add_page_for_migration() numamigrate_isolate_page()
  -- more page migration
mlock.c: This is all related to being on the LRU list
page_io.c: We don't swap out hugetlbfs pages
pfn_is_match() -- Already returned from this function for hugetlbfs pages
do_page_add_anon_rmap() page_add_new_anon_rmap() rmap_walk_anon()
  -- hugetlbfs pages aren't anon.
rmap_walk_file() -- This one I'm unsure about.  There's explicit
  support for hugetlbfs elsewhere in the file, and maybe this is never
  called for hugetlb pages.  Help?
swap.c, swap_state.c, swapfile.c: No swap for hugetlbfs pages.
vmscan.c: hugetlbfs pages not on the LRUs
workingset.c: hugetlbfs pages not on the LRUs

So I think you found the only bug of this type, although I'm a little
unsure about the rmap_walk_file().


* Re: [PATCH 5/7] mm: Replace hpage_nr_pages with thp_nr_pages
  2020-06-29 18:32       ` Matthew Wilcox
@ 2020-07-01  1:33         ` Andrew Morton
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Morton @ 2020-07-01  1:33 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Mike Kravetz, linux-fsdevel, linux-mm

On Mon, 29 Jun 2020 19:32:28 +0100 Matthew Wilcox <willy@infradead.org> wrote:

> So I think you found the only bug of this type, although I'm a little
> unsure about the rmap_walk_file().

So...  I'll assume that this part of this patch ("mm: Replace
hpage_nr_pages with thp_nr_pages") is to be dropped.



* Re: [PATCH 1/7] mm: Store compound_nr as well as compound_order
  2020-06-29 15:19 ` [PATCH 1/7] mm: Store compound_nr as well as compound_order Matthew Wilcox (Oracle)
  2020-06-29 16:22   ` Ira Weiny
@ 2020-07-06 10:29   ` Kirill A. Shutemov
  2020-08-11 22:53     ` Matthew Wilcox
  1 sibling, 1 reply; 18+ messages in thread
From: Kirill A. Shutemov @ 2020-07-06 10:29 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-mm, Andrew Morton

On Mon, Jun 29, 2020 at 04:19:53PM +0100, Matthew Wilcox (Oracle) wrote:
> This removes a few instructions from functions which need to know how many
> pages are in a compound page.  The storage used is either page->mapping
> on 64-bit or page->index on 32-bit.  Both of these are fine to overlay
> on tail pages.

I'm not a fan of redundant data in struct page, even if it lives in the
less busy tail pages. We tend to find more uses for the space over time.

Any numbers on what it gives for a typical kernel? Is it really worth it?

-- 
 Kirill A. Shutemov


* Re: [PATCH 1/7] mm: Store compound_nr as well as compound_order
  2020-07-06 10:29   ` Kirill A. Shutemov
@ 2020-08-11 22:53     ` Matthew Wilcox
  0 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox @ 2020-08-11 22:53 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: linux-fsdevel, linux-mm, Andrew Morton

On Mon, Jul 06, 2020 at 01:29:25PM +0300, Kirill A. Shutemov wrote:
> On Mon, Jun 29, 2020 at 04:19:53PM +0100, Matthew Wilcox (Oracle) wrote:
> > This removes a few instructions from functions which need to know how many
> > pages are in a compound page.  The storage used is either page->mapping
> > on 64-bit or page->index on 32-bit.  Both of these are fine to overlay
> > on tail pages.
> 
> I'm not a fan of redundant data in struct page, even if it's less busy
> tail page. We tend to find more use of the space over time.
> 
> Any numbers on what it gives for typical kernel? Does it really worth it?

Oops, I overlooked this email.  Sorry.  Thanks to Andrew for the reminder.

I haven't analysed the performance win for this.  The assembly is
two instructions (11 bytes) shorter:

before:
    206c:       a9 00 00 01 00          test   $0x10000,%eax
    2071:       0f 84 af 02 00 00       je     2326 <shmem_add_to_page_cache.isra.0+0x3b6>
    2077:       41 0f b6 4c 24 51       movzbl 0x51(%r12),%ecx
    207d:       41 bd 01 00 00 00       mov    $0x1,%r13d
    2083:       49 8b 44 24 08          mov    0x8(%r12),%rax
    2088:       49 d3 e5                shl    %cl,%r13
    208b:       a8 01                   test   $0x1,%al

after:
    2691:       a9 00 00 01 00          test   $0x10000,%eax
    2696:       0f 84 95 01 00 00       je     2831 <shmem_add_to_page_cache.isra.0+0x291>
    269c:       49 8b 47 08             mov    0x8(%r15),%rax
    26a0:       45 8b 77 58             mov    0x58(%r15),%r14d
    26a4:       a8 01                   test   $0x1,%al

(there are other changes in these files, so the addresses aren't
meaningful).

If we need the space, we can always revert this patch.  It's all hidden
behind the macro anyway.


