* [PATCH v2 0/3] Make working with compound pages easier
@ 2019-07-21 10:46 Matthew Wilcox
  2019-07-21 10:46 ` [PATCH v2 1/3] mm: Introduce page_size() Matthew Wilcox
                   ` (3 more replies)
  0 siblings, 4 replies; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-21 10:46 UTC
  To: Andrew Morton, linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

These three patches add three helpers (page_size(), page_shift() and
compound_nr()) and convert the callers that currently open-code those
calculations to use them.

v2:
 - Add page_shift() and compound_nr()
 - Remove unsigned long cast from PAGE_SIZE as it is already unsigned long
 - Update to current Linus tree

Matthew Wilcox (Oracle) (3):
  mm: Introduce page_size()
  mm: Introduce page_shift()
  mm: Introduce compound_nr()

 arch/arm/include/asm/xen/page-coherent.h      |  3 +--
 arch/arm/mm/flush.c                           |  7 +++----
 arch/arm64/include/asm/xen/page-coherent.h    |  3 +--
 arch/arm64/mm/flush.c                         |  3 +--
 arch/ia64/mm/init.c                           |  2 +-
 arch/powerpc/mm/book3s64/iommu_api.c          |  7 ++-----
 arch/powerpc/mm/hugetlbpage.c                 |  2 +-
 drivers/crypto/chelsio/chtls/chtls_io.c       |  5 ++---
 drivers/staging/android/ion/ion_system_heap.c |  4 ++--
 drivers/target/tcm_fc/tfc_io.c                |  3 +--
 drivers/vfio/vfio_iommu_spapr_tce.c           |  2 +-
 fs/io_uring.c                                 |  2 +-
 fs/proc/task_mmu.c                            |  2 +-
 include/linux/hugetlb.h                       |  2 +-
 include/linux/mm.h                            | 18 ++++++++++++++++++
 lib/iov_iter.c                                |  2 +-
 mm/compaction.c                               |  2 +-
 mm/filemap.c                                  |  2 +-
 mm/gup.c                                      |  2 +-
 mm/hugetlb_cgroup.c                           |  2 +-
 mm/kasan/common.c                             | 10 ++++------
 mm/memcontrol.c                               |  4 ++--
 mm/memory_hotplug.c                           |  4 ++--
 mm/migrate.c                                  |  2 +-
 mm/nommu.c                                    |  2 +-
 mm/page_alloc.c                               |  2 +-
 mm/page_vma_mapped.c                          |  3 +--
 mm/rmap.c                                     |  9 +++------
 mm/shmem.c                                    |  8 ++++----
 mm/slob.c                                     |  2 +-
 mm/slub.c                                     | 18 +++++++++---------
 mm/swap_state.c                               |  2 +-
 mm/util.c                                     |  2 +-
 mm/vmscan.c                                   |  4 ++--
 net/xdp/xsk.c                                 |  2 +-
 35 files changed, 76 insertions(+), 73 deletions(-)

-- 
2.20.1
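
For reference, all three helpers reduce to simple arithmetic on
compound_order().  A minimal userspace model of that arithmetic (an
illustration only: the 4KiB base page and the *_of names are assumptions
here; the real helpers take a struct page and are added to
include/linux/mm.h by the patches below):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Bytes in a compound page of the given order. */
static unsigned long page_size_of(unsigned int order)
{
	return PAGE_SIZE << order;
}

/* log2 of page_size_of(): bits needed for the number of bytes. */
static unsigned int page_shift_of(unsigned int order)
{
	return PAGE_SHIFT + order;
}

/* Number of base pages in a compound page of the given order. */
static unsigned long compound_nr_of(unsigned int order)
{
	return 1UL << order;
}

int main(void)
{
	unsigned int order = 9;		/* a 2MB huge page on x86-64 */

	printf("size %lu shift %u nr %lu\n", page_size_of(order),
	       page_shift_of(order), compound_nr_of(order));
	return 0;
}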



* [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-21 10:46 [PATCH v2 0/3] Make working with compound pages easier Matthew Wilcox
@ 2019-07-21 10:46 ` Matthew Wilcox
  2019-07-23  0:43   ` Ira Weiny
  2019-07-21 10:46 ` [PATCH v2 2/3] mm: Introduce page_shift() Matthew Wilcox
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-21 10:46 UTC
  To: Andrew Morton, linux-mm; +Cc: Matthew Wilcox, Michal Hocko

From: Matthew Wilcox (Oracle) <willy@infradead.org>

It's unnecessarily hard to find out the size of a potentially huge page.
Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 arch/arm/mm/flush.c                           |  3 +--
 arch/arm64/mm/flush.c                         |  3 +--
 arch/ia64/mm/init.c                           |  2 +-
 drivers/crypto/chelsio/chtls/chtls_io.c       |  5 ++---
 drivers/staging/android/ion/ion_system_heap.c |  4 ++--
 drivers/target/tcm_fc/tfc_io.c                |  3 +--
 fs/io_uring.c                                 |  2 +-
 include/linux/hugetlb.h                       |  2 +-
 include/linux/mm.h                            |  6 ++++++
 lib/iov_iter.c                                |  2 +-
 mm/kasan/common.c                             |  8 +++-----
 mm/nommu.c                                    |  2 +-
 mm/page_vma_mapped.c                          |  3 +--
 mm/rmap.c                                     |  6 ++----
 mm/slob.c                                     |  2 +-
 mm/slub.c                                     | 18 +++++++++---------
 net/xdp/xsk.c                                 |  2 +-
 17 files changed, 35 insertions(+), 38 deletions(-)

diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 6ecbda87ee46..4c7ebe094a83 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -204,8 +204,7 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 	 * coherent with the kernels mapping.
 	 */
 	if (!PageHighMem(page)) {
-		size_t page_size = PAGE_SIZE << compound_order(page);
-		__cpuc_flush_dcache_area(page_address(page), page_size);
+		__cpuc_flush_dcache_area(page_address(page), page_size(page));
 	} else {
 		unsigned long i;
 		if (cache_is_vipt_nonaliasing()) {
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index dc19300309d2..ac485163a4a7 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -56,8 +56,7 @@ void __sync_icache_dcache(pte_t pte)
 	struct page *page = pte_page(pte);
 
 	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		sync_icache_aliases(page_address(page),
-				    PAGE_SIZE << compound_order(page));
+		sync_icache_aliases(page_address(page), page_size(page));
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index aae75fd7b810..e97e24816bd4 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -63,7 +63,7 @@ __ia64_sync_icache_dcache (pte_t pte)
 	if (test_bit(PG_arch_1, &page->flags))
 		return;				/* i-cache is already coherent with d-cache */
 
-	flush_icache_range(addr, addr + (PAGE_SIZE << compound_order(page)));
+	flush_icache_range(addr, addr + page_size(page));
 	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
 }
 
diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
index 551bca6fef24..925be5942895 100644
--- a/drivers/crypto/chelsio/chtls/chtls_io.c
+++ b/drivers/crypto/chelsio/chtls/chtls_io.c
@@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 			bool merge;
 
 			if (page)
-				pg_size <<= compound_order(page);
+				pg_size = page_size(page);
 			if (off < pg_size &&
 			    skb_can_coalesce(skb, i, page, off)) {
 				merge = 1;
@@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 							   __GFP_NORETRY,
 							   order);
 					if (page)
-						pg_size <<=
-							compound_order(page);
+						pg_size <<= order;
 				}
 				if (!page) {
 					page = alloc_page(gfp);
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index aa8d8425be25..b83a1d16bd89 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -120,7 +120,7 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 		if (!page)
 			goto free_pages;
 		list_add_tail(&page->lru, &pages);
-		size_remaining -= PAGE_SIZE << compound_order(page);
+		size_remaining -= page_size(page);
 		max_order = compound_order(page);
 		i++;
 	}
@@ -133,7 +133,7 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 
 	sg = table->sgl;
 	list_for_each_entry_safe(page, tmp_page, &pages, lru) {
-		sg_set_page(sg, page, PAGE_SIZE << compound_order(page), 0);
+		sg_set_page(sg, page, page_size(page), 0);
 		sg = sg_next(sg);
 		list_del(&page->lru);
 	}
diff --git a/drivers/target/tcm_fc/tfc_io.c b/drivers/target/tcm_fc/tfc_io.c
index a254792d882c..1354a157e9af 100644
--- a/drivers/target/tcm_fc/tfc_io.c
+++ b/drivers/target/tcm_fc/tfc_io.c
@@ -136,8 +136,7 @@ int ft_queue_data_in(struct se_cmd *se_cmd)
 					   page, off_in_page, tlen);
 			fr_len(fp) += tlen;
 			fp_skb(fp)->data_len += tlen;
-			fp_skb(fp)->truesize +=
-					PAGE_SIZE << compound_order(page);
+			fp_skb(fp)->truesize += page_size(page);
 		} else {
 			BUG_ON(!page);
 			from = kmap_atomic(page + (mem_off >> PAGE_SHIFT));
diff --git a/fs/io_uring.c b/fs/io_uring.c
index e2a66e12fbc6..c55d8b411d2a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3084,7 +3084,7 @@ static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	}
 
 	page = virt_to_head_page(ptr);
-	if (sz > (PAGE_SIZE << compound_order(page)))
+	if (sz > page_size(page))
 		return -EINVAL;
 
 	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index edfca4278319..53fc34f930d0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -454,7 +454,7 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 static inline struct hstate *page_hstate(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return size_to_hstate(PAGE_SIZE << compound_order(page));
+	return size_to_hstate(page_size(page));
 }
 
 static inline unsigned hstate_index_to_shift(unsigned index)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..899dfcf7c23d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -805,6 +805,12 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 	page[1].compound_order = order;
 }
 
+/* Returns the number of bytes in this potentially compound page. */
+static inline unsigned long page_size(struct page *page)
+{
+	return PAGE_SIZE << compound_order(page);
+}
+
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f1e0569b4539..639d5e7014c1 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -878,7 +878,7 @@ static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
 	head = compound_head(page);
 	v += (page - head) << PAGE_SHIFT;
 
-	if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
+	if (likely(n <= v && v <= (page_size(head))))
 		return true;
 	WARN_ON(1);
 	return false;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..a929a3b9444d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -321,8 +321,7 @@ void kasan_poison_slab(struct page *page)
 
 	for (i = 0; i < (1 << compound_order(page)); i++)
 		page_kasan_tag_reset(page + i);
-	kasan_poison_shadow(page_address(page),
-			PAGE_SIZE << compound_order(page),
+	kasan_poison_shadow(page_address(page), page_size(page),
 			KASAN_KMALLOC_REDZONE);
 }
 
@@ -518,7 +517,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
 	page = virt_to_page(ptr);
 	redzone_start = round_up((unsigned long)(ptr + size),
 				KASAN_SHADOW_SCALE_SIZE);
-	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+	redzone_end = (unsigned long)ptr + page_size(page);
 
 	kasan_unpoison_shadow(ptr, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
@@ -554,8 +553,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
-		kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
-				KASAN_FREE_PAGE);
+		kasan_poison_shadow(ptr, page_size(page), KASAN_FREE_PAGE);
 	} else {
 		__kasan_slab_free(page->slab_cache, ptr, ip, false);
 	}
diff --git a/mm/nommu.c b/mm/nommu.c
index fed1b6e9c89b..99b7ec318824 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -108,7 +108,7 @@ unsigned int kobjsize(const void *objp)
 	 * The ksize() function is only guaranteed to work for pointers
 	 * returned by kmalloc(). So handle arbitrary pointers here.
 	 */
-	return PAGE_SIZE << compound_order(page);
+	return page_size(page);
 }
 
 /**
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 11df03e71288..eff4b4520c8d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -153,8 +153,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 
 	if (unlikely(PageHuge(pvmw->page))) {
 		/* when pud is not present, pte will be NULL */
-		pvmw->pte = huge_pte_offset(mm, pvmw->address,
-					    PAGE_SIZE << compound_order(page));
+		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
 		if (!pvmw->pte)
 			return false;
 
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d..09ce05c481fc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -898,8 +898,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				min(vma->vm_end, address +
-				    (PAGE_SIZE << compound_order(page))));
+				min(vma->vm_end, address + page_size(page)));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1374,8 +1373,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address,
-				min(vma->vm_end, address +
-				    (PAGE_SIZE << compound_order(page))));
+				min(vma->vm_end, address + page_size(page)));
 	if (PageHuge(page)) {
 		/*
 		 * If sharing is possible, start and end will be adjusted
diff --git a/mm/slob.c b/mm/slob.c
index 7f421d0ca9ab..cf377beab962 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -539,7 +539,7 @@ size_t __ksize(const void *block)
 
 	sp = virt_to_page(block);
 	if (unlikely(!PageSlab(sp)))
-		return PAGE_SIZE << compound_order(sp);
+		return page_size(sp);
 
 	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 	m = (unsigned int *)(block - align);
diff --git a/mm/slub.c b/mm/slub.c
index e6c030e47364..1e8e20a99660 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -829,7 +829,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 		return 1;
 
 	start = page_address(page);
-	length = PAGE_SIZE << compound_order(page);
+	length = page_size(page);
 	end = start + length;
 	remainder = length % s->size;
 	if (!remainder)
@@ -1074,13 +1074,14 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
-static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+static
+void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr)
 {
 	if (!(s->flags & SLAB_POISON))
 		return;
 
 	metadata_access_enable();
-	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	memset(addr, POISON_INUSE, page_size(page));
 	metadata_access_disable();
 }
 
@@ -1340,8 +1341,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
-static inline void setup_page_debug(struct kmem_cache *s,
-			void *addr, int order) {}
+static inline
+void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1635,7 +1636,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	struct kmem_cache_order_objects oo = s->oo;
 	gfp_t alloc_gfp;
 	void *start, *p, *next;
-	int idx, order;
+	int idx;
 	bool shuffle;
 
 	flags &= gfp_allowed_mask;
@@ -1669,7 +1670,6 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	page->objects = oo_objects(oo);
 
-	order = compound_order(page);
 	page->slab_cache = s;
 	__SetPageSlab(page);
 	if (page_is_pfmemalloc(page))
@@ -1679,7 +1679,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	start = page_address(page);
 
-	setup_page_debug(s, start, order);
+	setup_page_debug(s, page, start);
 
 	shuffle = shuffle_freelist(s, page);
 
@@ -3926,7 +3926,7 @@ size_t __ksize(const void *object)
 
 	if (unlikely(!PageSlab(page))) {
 		WARN_ON(!PageCompound(page));
-		return PAGE_SIZE << compound_order(page);
+		return page_size(page);
 	}
 
 	return slab_ksize(page->slab_cache);
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 59b57d708697..44bfb76fbad9 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -739,7 +739,7 @@ static int xsk_mmap(struct file *file, struct socket *sock,
 	/* Matches the smp_wmb() in xsk_init_queue */
 	smp_rmb();
 	qpg = virt_to_head_page(q->ring);
-	if (size > (PAGE_SIZE << compound_order(qpg)))
+	if (size > page_size(qpg))
 		return -EINVAL;
 
 	pfn = virt_to_phys(q->ring) >> PAGE_SHIFT;
-- 
2.20.1
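
The page_copy_sane() conversion in lib/iov_iter.c above substitutes
page_size() into an existing bounds check.  A userspace model of that
check, to make the arithmetic concrete (copy_sane() and the example
values are hypothetical; the kernel function works on a struct page):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/*
 * An n-byte access at 'offset' into the sub-page at index 'tail' of an
 * order-'order' compound page must end inside that compound page.
 */
static bool copy_sane(unsigned int order, unsigned long tail,
		      size_t offset, size_t n)
{
	size_t v = n + offset + (tail << PAGE_SHIFT);

	return n <= v && v <= (PAGE_SIZE << order);
}

int main(void)
{
	/* A page-sized copy at the start of the last sub-page fits... */
	printf("%d\n", copy_sane(2, 3, 0, PAGE_SIZE));	/* 1 */
	/* ...but starting one byte further in, it no longer does. */
	printf("%d\n", copy_sane(2, 3, 1, PAGE_SIZE));	/* 0 */
	return 0;
}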



* [PATCH v2 2/3] mm: Introduce page_shift()
  2019-07-21 10:46 [PATCH v2 0/3] Make working with compound pages easier Matthew Wilcox
  2019-07-21 10:46 ` [PATCH v2 1/3] mm: Introduce page_size() Matthew Wilcox
@ 2019-07-21 10:46 ` Matthew Wilcox
  2019-07-23  0:44   ` Ira Weiny
  2019-07-24 10:40   ` kbuild test robot
  2019-07-21 10:46 ` [PATCH v2 3/3] mm: Introduce compound_nr() Matthew Wilcox
  2019-07-23 12:55 ` [PATCH v2 0/3] Make working with compound pages easier Kirill A. Shutemov
  3 siblings, 2 replies; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-21 10:46 UTC (permalink / raw)
  To: Andrew Morton, linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Replace PAGE_SHIFT + compound_order(page) with the new page_shift()
function.  Minor improvements in readability.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/powerpc/mm/book3s64/iommu_api.c | 7 ++-----
 drivers/vfio/vfio_iommu_spapr_tce.c  | 2 +-
 include/linux/mm.h                   | 6 ++++++
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index b056cae3388b..56cc84520577 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -129,11 +129,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 		 * Allow to use larger than 64k IOMMU pages. Only do that
 		 * if we are backed by hugetlb.
 		 */
-		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page)) {
-			struct page *head = compound_head(page);
-
-			pageshift = compound_order(head) + PAGE_SHIFT;
-		}
+		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page))
+			pageshift = page_shift(compound_head(page));
 		mem->pageshift = min(mem->pageshift, pageshift);
 		/*
 		 * We don't need struct page reference any more, switch
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 8ce9ad21129f..1883fd2901b2 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -190,7 +190,7 @@ static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
 	 * a page we just found. Otherwise the hardware can get access to
 	 * a bigger memory chunk that it should.
 	 */
-	return (PAGE_SHIFT + compound_order(compound_head(page))) >= page_shift;
+	return page_shift(compound_head(page)) >= page_shift;
 }
 
 static inline bool tce_groups_attached(struct tce_container *container)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 899dfcf7c23d..64762559885f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -811,6 +811,12 @@ static inline unsigned long page_size(struct page *page)
 	return PAGE_SIZE << compound_order(page);
 }
 
+/* Returns the number of bits needed for the number of bytes in a page */
+static inline unsigned int page_shift(struct page *page)
+{
+	return PAGE_SHIFT + compound_order(page);
+}
+
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU
-- 
2.20.1
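
One hazard with a helper named page_shift(): some callers already use
page_shift as a local identifier, and in the tce_page_is_contained()
hunk above the new call sits in a function that also compares against a
local named page_shift.  In C the local shadows the function within that
scope, so such a call site cannot compile as-is; presumably this is what
the kbuild test robot report in this thread points at.  A minimal
userspace demonstration of the shadowing (names and values are
hypothetical):

#include <stdio.h>

static unsigned int page_shift(unsigned int order)
{
	return 12 + order;	/* model of PAGE_SHIFT + order */
}

static int contained(unsigned int order, unsigned int page_shift)
{
	/*
	 * Here 'page_shift' names the unsigned int parameter, so
	 * 'return page_shift(order) >= page_shift;' would fail with
	 * "called object is not a function".
	 */
	return (12 + order) >= page_shift;
}

int main(void)
{
	printf("%u %d\n", page_shift(9), contained(9, 16));
	return 0;
}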



* [PATCH v2 3/3] mm: Introduce compound_nr()
  2019-07-21 10:46 [PATCH v2 0/3] Make working with compound pages easier Matthew Wilcox
  2019-07-21 10:46 ` [PATCH v2 1/3] mm: Introduce page_size() Matthew Wilcox
  2019-07-21 10:46 ` [PATCH v2 2/3] mm: Introduce page_shift() Matthew Wilcox
@ 2019-07-21 10:46 ` Matthew Wilcox
  2019-07-23  0:46   ` Ira Weiny
  2019-07-23 12:55 ` [PATCH v2 0/3] Make working with compound pages easier Kirill A. Shutemov
  3 siblings, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-21 10:46 UTC
  To: Andrew Morton, linux-mm; +Cc: Matthew Wilcox

From: Matthew Wilcox (Oracle) <willy@infradead.org>

Replace 1 << compound_order(page) with compound_nr(page).  Minor
improvements in readability.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/arm/include/asm/xen/page-coherent.h   | 3 +--
 arch/arm/mm/flush.c                        | 4 ++--
 arch/arm64/include/asm/xen/page-coherent.h | 3 +--
 arch/powerpc/mm/hugetlbpage.c              | 2 +-
 fs/proc/task_mmu.c                         | 2 +-
 include/linux/mm.h                         | 6 ++++++
 mm/compaction.c                            | 2 +-
 mm/filemap.c                               | 2 +-
 mm/gup.c                                   | 2 +-
 mm/hugetlb_cgroup.c                        | 2 +-
 mm/kasan/common.c                          | 2 +-
 mm/memcontrol.c                            | 4 ++--
 mm/memory_hotplug.c                        | 4 ++--
 mm/migrate.c                               | 2 +-
 mm/page_alloc.c                            | 2 +-
 mm/rmap.c                                  | 3 +--
 mm/shmem.c                                 | 8 ++++----
 mm/swap_state.c                            | 2 +-
 mm/util.c                                  | 2 +-
 mm/vmscan.c                                | 4 ++--
 20 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/arch/arm/include/asm/xen/page-coherent.h b/arch/arm/include/asm/xen/page-coherent.h
index 2c403e7c782d..ea39cb724ffa 100644
--- a/arch/arm/include/asm/xen/page-coherent.h
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -31,8 +31,7 @@ static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
 {
 	unsigned long page_pfn = page_to_xen_pfn(page);
 	unsigned long dev_pfn = XEN_PFN_DOWN(dev_addr);
-	unsigned long compound_pages =
-		(1<<compound_order(page)) * XEN_PFN_PER_PAGE;
+	unsigned long compound_pages = compound_nr(page) * XEN_PFN_PER_PAGE;
 	bool local = (page_pfn <= dev_pfn) &&
 		(dev_pfn - page_pfn < compound_pages);
 
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 4c7ebe094a83..6d89db7895d1 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -208,13 +208,13 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 	} else {
 		unsigned long i;
 		if (cache_is_vipt_nonaliasing()) {
-			for (i = 0; i < (1 << compound_order(page)); i++) {
+			for (i = 0; i < compound_nr(page); i++) {
 				void *addr = kmap_atomic(page + i);
 				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
 				kunmap_atomic(addr);
 			}
 		} else {
-			for (i = 0; i < (1 << compound_order(page)); i++) {
+			for (i = 0; i < compound_nr(page); i++) {
 				void *addr = kmap_high_get(page + i);
 				if (addr) {
 					__cpuc_flush_dcache_area(addr, PAGE_SIZE);
diff --git a/arch/arm64/include/asm/xen/page-coherent.h b/arch/arm64/include/asm/xen/page-coherent.h
index d88e56b90b93..b600a8ef3349 100644
--- a/arch/arm64/include/asm/xen/page-coherent.h
+++ b/arch/arm64/include/asm/xen/page-coherent.h
@@ -45,8 +45,7 @@ static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
 {
 	unsigned long page_pfn = page_to_xen_pfn(page);
 	unsigned long dev_pfn = XEN_PFN_DOWN(dev_addr);
-	unsigned long compound_pages =
-		(1<<compound_order(page)) * XEN_PFN_PER_PAGE;
+	unsigned long compound_pages = compound_nr(page) * XEN_PFN_PER_PAGE;
 	bool local = (page_pfn <= dev_pfn) &&
 		(dev_pfn - page_pfn < compound_pages);
 
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index a8953f108808..73d4873fc7f8 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -667,7 +667,7 @@ void flush_dcache_icache_hugepage(struct page *page)
 
 	BUG_ON(!PageCompound(page));
 
-	for (i = 0; i < (1UL << compound_order(page)); i++) {
+	for (i = 0; i < compound_nr(page); i++) {
 		if (!PageHighMem(page)) {
 			__flush_dcache_icache(page_address(page+i));
 		} else {
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 731642e0f5a0..a9f2deb8ab79 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -461,7 +461,7 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
 static void smaps_account(struct mem_size_stats *mss, struct page *page,
 		bool compound, bool young, bool dirty, bool locked)
 {
-	int i, nr = compound ? 1 << compound_order(page) : 1;
+	int i, nr = compound ? compound_nr(page) : 1;
 	unsigned long size = nr * PAGE_SIZE;
 
 	/*
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 64762559885f..726d7f046b49 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -805,6 +805,12 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 	page[1].compound_order = order;
 }
 
+/* Returns the number of pages in this potentially compound page. */
+static inline unsigned long compound_nr(struct page *page)
+{
+	return 1UL << compound_order(page);
+}
+
 /* Returns the number of bytes in this potentially compound page. */
 static inline unsigned long page_size(struct page *page)
 {
diff --git a/mm/compaction.c b/mm/compaction.c
index 9e1b9acb116b..78d42e2dbc64 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -967,7 +967,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * is safe to read and it's 0 for tail pages.
 			 */
 			if (unlikely(PageCompound(page))) {
-				low_pfn += (1UL << compound_order(page)) - 1;
+				low_pfn += compound_nr(page) - 1;
 				goto isolate_fail;
 			}
 		}
diff --git a/mm/filemap.c b/mm/filemap.c
index d0cf700bf201..f00f53ad383f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -126,7 +126,7 @@ static void page_cache_delete(struct address_space *mapping,
 	/* hugetlb pages are represented by a single entry in the xarray */
 	if (!PageHuge(page)) {
 		xas_set_order(&xas, page->index, compound_order(page));
-		nr = 1U << compound_order(page);
+		nr = compound_nr(page);
 	}
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..84a36d80dd2e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1460,7 +1460,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * gup may start from a tail page. Advance step by the left
 		 * part.
 		 */
-		step = (1 << compound_order(head)) - (pages[i] - head);
+		step = compound_nr(head) - (pages[i] - head);
 		/*
 		 * If we get a page from the CMA zone, since we are going to
 		 * be pinning these entries, we might as well move them out
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 68c2f2f3c05b..f1930fa0b445 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -139,7 +139,7 @@ static void hugetlb_cgroup_move_parent(int idx, struct hugetlb_cgroup *h_cg,
 	if (!page_hcg || page_hcg != h_cg)
 		goto out;
 
-	nr_pages = 1 << compound_order(page);
+	nr_pages = compound_nr(page);
 	if (!parent) {
 		parent = root_h_cgroup;
 		/* root has no limit */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a929a3b9444d..895dc5e2b3d5 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -319,7 +319,7 @@ void kasan_poison_slab(struct page *page)
 {
 	unsigned long i;
 
-	for (i = 0; i < (1 << compound_order(page)); i++)
+	for (i = 0; i < compound_nr(page); i++)
 		page_kasan_tag_reset(page + i);
 	kasan_poison_shadow(page_address(page), page_size(page),
 			KASAN_KMALLOC_REDZONE);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cdbb7a84cb6e..b5c4c618d087 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6257,7 +6257,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		unsigned int nr_pages = 1;
 
 		if (PageTransHuge(page)) {
-			nr_pages <<= compound_order(page);
+			nr_pages = compound_nr(page);
 			ug->nr_huge += nr_pages;
 		}
 		if (PageAnon(page))
@@ -6269,7 +6269,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		}
 		ug->pgpgout++;
 	} else {
-		ug->nr_kmem += 1 << compound_order(page);
+		ug->nr_kmem += compound_nr(page);
 		__ClearPageKmemcg(page);
 	}
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2a9bbddb0e55..bb2ab9f58f8c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1311,7 +1311,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 		head = compound_head(page);
 		if (page_huge_active(head))
 			return pfn;
-		skip = (1 << compound_order(head)) - (page - head);
+		skip = compound_nr(head) - (page - head);
 		pfn += skip - 1;
 	}
 	return 0;
@@ -1349,7 +1349,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 
 		if (PageHuge(page)) {
 			struct page *head = compound_head(page);
-			pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
+			pfn = page_to_pfn(head) + compound_nr(head) - 1;
 			isolate_huge_page(head, &source);
 			continue;
 		} else if (PageTransHuge(page))
diff --git a/mm/migrate.c b/mm/migrate.c
index 8992741f10aa..702115a9cf11 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1889,7 +1889,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
+	if (!migrate_balanced_pgdat(pgdat, compound_nr(page)))
 		return 0;
 
 	if (isolate_lru_page(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 272c6de1bf4e..d3bb601c461b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8207,7 +8207,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 			if (!hugepage_migration_supported(page_hstate(head)))
 				goto unmovable;
 
-			skip_pages = (1 << compound_order(head)) - (page - head);
+			skip_pages = compound_nr(head) - (page - head);
 			iter += skip_pages - 1;
 			continue;
 		}
diff --git a/mm/rmap.c b/mm/rmap.c
index 09ce05c481fc..05e41f097b1d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1514,8 +1514,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (PageHuge(page)) {
-				int nr = 1 << compound_order(page);
-				hugetlb_count_sub(nr, mm);
+				hugetlb_count_sub(compound_nr(page), mm);
 				set_huge_swap_pte_at(mm, address,
 						     pvmw.pte, pteval,
 						     vma_mmu_pagesize(vma));
diff --git a/mm/shmem.c b/mm/shmem.c
index 626d8c74b973..fccb34aca6ea 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -594,7 +594,7 @@ static int shmem_add_to_page_cache(struct page *page,
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
 	unsigned long i = 0;
-	unsigned long nr = 1UL << compound_order(page);
+	unsigned long nr = compound_nr(page);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -1869,7 +1869,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	lru_cache_add_anon(page);
 
 	spin_lock_irq(&info->lock);
-	info->alloced += 1 << compound_order(page);
+	info->alloced += compound_nr(page);
 	inode->i_blocks += BLOCKS_PER_PAGE << compound_order(page);
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
@@ -1910,7 +1910,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		struct page *head = compound_head(page);
 		int i;
 
-		for (i = 0; i < (1 << compound_order(head)); i++) {
+		for (i = 0; i < compound_nr(head); i++) {
 			clear_highpage(head + i);
 			flush_dcache_page(head + i);
 		}
@@ -1937,7 +1937,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	 * Error recovery.
 	 */
 unacct:
-	shmem_inode_unacct_blocks(inode, 1 << compound_order(page));
+	shmem_inode_unacct_blocks(inode, compound_nr(page));
 
 	if (PageTransHuge(page)) {
 		unlock_page(page);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8368621a0fc7..f844af5f09ba 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -116,7 +116,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = 1UL << compound_order(page);
+	unsigned long i, nr = compound_nr(page);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
diff --git a/mm/util.c b/mm/util.c
index e6351a80f248..bab284d69c8c 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -521,7 +521,7 @@ bool page_mapped(struct page *page)
 		return true;
 	if (PageHuge(page))
 		return false;
-	for (i = 0; i < (1 << compound_order(page)); i++) {
+	for (i = 0; i < compound_nr(page); i++) {
 		if (atomic_read(&page[i]._mapcount) >= 0)
 			return true;
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 44df66a98f2a..bb69bd2d9c78 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1145,7 +1145,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 
 		VM_BUG_ON_PAGE(PageActive(page), page);
 
-		nr_pages = 1 << compound_order(page);
+		nr_pages = compound_nr(page);
 
 		/* Account the number of base pages even though THP */
 		sc->nr_scanned += nr_pages;
@@ -1701,7 +1701,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 
-		nr_pages = 1 << compound_order(page);
+		nr_pages = compound_nr(page);
 		total_scan += nr_pages;
 
 		if (page_zonenum(page) > sc->reclaim_idx) {
-- 
2.20.1
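
Several hunks above (scan_movable_pages(), has_unmovable_pages() and
check_and_migrate_cma_pages()) share one idiom: starting from a page
somewhere inside a compound page, skip ahead to just past its end.  A
userspace sketch of that arithmetic (the pfn values are made up for
illustration):

#include <stdio.h>

/* Base pages in a compound page of the given order. */
static unsigned long compound_nr_of(unsigned int order)
{
	return 1UL << order;
}

int main(void)
{
	unsigned long head_pfn = 512;	/* first pfn of an order-9 page */
	unsigned long pfn = 700;	/* current position inside it */

	/* Mirrors: skip = compound_nr(head) - (page - head); */
	unsigned long skip = compound_nr_of(9) - (pfn - head_pfn);

	printf("next pfn past the compound page: %lu\n", pfn + skip);
	return 0;
}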



* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-21 10:46 ` [PATCH v2 1/3] mm: Introduce page_size() Matthew Wilcox
@ 2019-07-23  0:43   ` Ira Weiny
  2019-07-23 16:02     ` Matthew Wilcox
  0 siblings, 1 reply; 20+ messages in thread
From: Ira Weiny @ 2019-07-23  0:43 UTC
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm, Michal Hocko

On Sun, Jul 21, 2019 at 03:46:10AM -0700, Matthew Wilcox wrote:
> From: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> It's unnecessarily hard to find out the size of a potentially huge page.
> Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> [...]
> 
> diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> index 551bca6fef24..925be5942895 100644
> --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> @@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  			bool merge;
>  
>  			if (page)
> -				pg_size <<= compound_order(page);
> +				pg_size = page_size(page);
>  			if (off < pg_size &&
>  			    skb_can_coalesce(skb, i, page, off)) {
>  				merge = 1;
> @@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  							   __GFP_NORETRY,
>  							   order);
>  					if (page)
> -						pg_size <<=
> -							compound_order(page);
> +						pg_size <<= order;

Looking at the code, I see pg_size should be PAGE_SIZE right before this,
so why not just use the new call and remove the initial assignment?

Regardless, it should be fine.
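
Concretely, the suggested version would be something like (untested):

	if (page)
		pg_size = page_size(page);

with the earlier pg_size = PAGE_SIZE initialisation dropped.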

Reviewed-by: Ira Weiny <ira.weiny@intel.com>


* Re: [PATCH v2 2/3] mm: Introduce page_shift()
  2019-07-21 10:46 ` [PATCH v2 2/3] mm: Introduce page_shift() Matthew Wilcox
@ 2019-07-23  0:44   ` Ira Weiny
  2019-07-24 10:40   ` kbuild test robot
  1 sibling, 0 replies; 20+ messages in thread
From: Ira Weiny @ 2019-07-23  0:44 UTC
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm

On Sun, Jul 21, 2019 at 03:46:11AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Replace PAGE_SHIFT + compound_order(page) with the new page_shift()
> function.  Minor improvements in readability.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Reviewed-by: Ira Weiny <ira.weiny@intel.com>


* Re: [PATCH v2 3/3] mm: Introduce compound_nr()
  2019-07-21 10:46 ` [PATCH v2 3/3] mm: Introduce compound_nr() Matthew Wilcox
@ 2019-07-23  0:46   ` Ira Weiny
  0 siblings, 0 replies; 20+ messages in thread
From: Ira Weiny @ 2019-07-23  0:46 UTC
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm

On Sun, Jul 21, 2019 at 03:46:12AM -0700, Matthew Wilcox wrote:
> From: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> Replace 1 << compound_order(page) with compound_nr(page).  Minor
> improvements in readability.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Reviewed-by: Ira Weiny <ira.weiny@intel.com>

>  		if (PageTransHuge(page)) {
> -			nr_pages <<= compound_order(page);
> +			nr_pages = compound_nr(page);
>  			ug->nr_huge += nr_pages;
>  		}
>  		if (PageAnon(page))
> @@ -6269,7 +6269,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  		}
>  		ug->pgpgout++;
>  	} else {
> -		ug->nr_kmem += 1 << compound_order(page);
> +		ug->nr_kmem += compound_nr(page);
>  		__ClearPageKmemcg(page);
>  	}
>  
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 2a9bbddb0e55..bb2ab9f58f8c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1311,7 +1311,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  		head = compound_head(page);
>  		if (page_huge_active(head))
>  			return pfn;
> -		skip = (1 << compound_order(head)) - (page - head);
> +		skip = compound_nr(head) - (page - head);
>  		pfn += skip - 1;
>  	}
>  	return 0;
> @@ -1349,7 +1349,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  
>  		if (PageHuge(page)) {
>  			struct page *head = compound_head(page);
> -			pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
> +			pfn = page_to_pfn(head) + compound_nr(head) - 1;
>  			isolate_huge_page(head, &source);
>  			continue;
>  		} else if (PageTransHuge(page))
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 8992741f10aa..702115a9cf11 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1889,7 +1889,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
> +	if (!migrate_balanced_pgdat(pgdat, compound_nr(page)))
>  		return 0;
>  
>  	if (isolate_lru_page(page))
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 272c6de1bf4e..d3bb601c461b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8207,7 +8207,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  			if (!hugepage_migration_supported(page_hstate(head)))
>  				goto unmovable;
>  
> -			skip_pages = (1 << compound_order(head)) - (page - head);
> +			skip_pages = compound_nr(head) - (page - head);
>  			iter += skip_pages - 1;
>  			continue;
>  		}
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 09ce05c481fc..05e41f097b1d 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1514,8 +1514,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  		if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
>  			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
>  			if (PageHuge(page)) {
> -				int nr = 1 << compound_order(page);
> -				hugetlb_count_sub(nr, mm);
> +				hugetlb_count_sub(compound_nr(page), mm);
>  				set_huge_swap_pte_at(mm, address,
>  						     pvmw.pte, pteval,
>  						     vma_mmu_pagesize(vma));
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 626d8c74b973..fccb34aca6ea 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -594,7 +594,7 @@ static int shmem_add_to_page_cache(struct page *page,
>  {
>  	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
>  	unsigned long i = 0;
> -	unsigned long nr = 1UL << compound_order(page);
> +	unsigned long nr = compound_nr(page);
>  
>  	VM_BUG_ON_PAGE(PageTail(page), page);
>  	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
> @@ -1869,7 +1869,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	lru_cache_add_anon(page);
>  
>  	spin_lock_irq(&info->lock);
> -	info->alloced += 1 << compound_order(page);
> +	info->alloced += compound_nr(page);
>  	inode->i_blocks += BLOCKS_PER_PAGE << compound_order(page);
>  	shmem_recalc_inode(inode);
>  	spin_unlock_irq(&info->lock);
> @@ -1910,7 +1910,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  		struct page *head = compound_head(page);
>  		int i;
>  
> -		for (i = 0; i < (1 << compound_order(head)); i++) {
> +		for (i = 0; i < compound_nr(head); i++) {
>  			clear_highpage(head + i);
>  			flush_dcache_page(head + i);
>  		}
> @@ -1937,7 +1937,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	 * Error recovery.
>  	 */
>  unacct:
> -	shmem_inode_unacct_blocks(inode, 1 << compound_order(page));
> +	shmem_inode_unacct_blocks(inode, compound_nr(page));
>  
>  	if (PageTransHuge(page)) {
>  		unlock_page(page);
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 8368621a0fc7..f844af5f09ba 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -116,7 +116,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
>  	struct address_space *address_space = swap_address_space(entry);
>  	pgoff_t idx = swp_offset(entry);
>  	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
> -	unsigned long i, nr = 1UL << compound_order(page);
> +	unsigned long i, nr = compound_nr(page);
>  
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
>  	VM_BUG_ON_PAGE(PageSwapCache(page), page);
> diff --git a/mm/util.c b/mm/util.c
> index e6351a80f248..bab284d69c8c 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -521,7 +521,7 @@ bool page_mapped(struct page *page)
>  		return true;
>  	if (PageHuge(page))
>  		return false;
> -	for (i = 0; i < (1 << compound_order(page)); i++) {
> +	for (i = 0; i < compound_nr(page); i++) {
>  		if (atomic_read(&page[i]._mapcount) >= 0)
>  			return true;
>  	}
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 44df66a98f2a..bb69bd2d9c78 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1145,7 +1145,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  
>  		VM_BUG_ON_PAGE(PageActive(page), page);
>  
> -		nr_pages = 1 << compound_order(page);
> +		nr_pages = compound_nr(page);
>  
>  		/* Account the number of base pages even though THP */
>  		sc->nr_scanned += nr_pages;
> @@ -1701,7 +1701,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  
>  		VM_BUG_ON_PAGE(!PageLRU(page), page);
>  
> -		nr_pages = 1 << compound_order(page);
> +		nr_pages = compound_nr(page);
>  		total_scan += nr_pages;
>  
>  		if (page_zonenum(page) > sc->reclaim_idx) {
> -- 
> 2.20.1
> 
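
For illustration, the helper's intended usage pattern in miniature; this
is a sketch only, and the function name is made up for the example:

/* Walk every base page of a potentially compound page. */
static void flush_all_subpages(struct page *page)
{
	unsigned long i;

	/* compound_nr(page) == 1UL << compound_order(page) */
	for (i = 0; i < compound_nr(page); i++)
		flush_dcache_page(page + i);
}

This is the same shape as the open-coded "1 << compound_order(page)"
loop bounds replaced throughout the diff above.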


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 0/3] Make working with compound pages easier
  2019-07-21 10:46 [PATCH v2 0/3] Make working with compound pages easier Matthew Wilcox
                   ` (2 preceding siblings ...)
  2019-07-21 10:46 ` [PATCH v2 3/3] mm: Introduce compound_nr() Matthew Wilcox
@ 2019-07-23 12:55 ` Kirill A. Shutemov
  3 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2019-07-23 12:55 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm

On Sun, Jul 21, 2019 at 03:46:09AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> These three patches add three helpers and convert the appropriate
> places to use them.

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-23  0:43   ` Ira Weiny
@ 2019-07-23 16:02     ` Matthew Wilcox
  2019-07-23 17:58       ` Ira Weiny
  2019-09-20 23:28       ` Andrew Morton
  0 siblings, 2 replies; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-23 16:02 UTC (permalink / raw)
  To: Ira Weiny; +Cc: Andrew Morton, linux-mm, Atul Gupta, linux-crypto

On Mon, Jul 22, 2019 at 05:43:07PM -0700, Ira Weiny wrote:
> > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > index 551bca6fef24..925be5942895 100644
> > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > @@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> >  			bool merge;
> >  
> >  			if (page)
> > -				pg_size <<= compound_order(page);
> > +				pg_size = page_size(page);
> >  			if (off < pg_size &&
> >  			    skb_can_coalesce(skb, i, page, off)) {
> >  				merge = 1;
> > @@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> >  							   __GFP_NORETRY,
> >  							   order);
> >  					if (page)
> > -						pg_size <<=
> > -							compound_order(page);
> > +						pg_size <<= order;
> 
> Looking at the code I see pg_size should be PAGE_SIZE right before this so why
> not just use the new call and remove the initial assignment?

This driver is really convoluted.  I wasn't certain I wouldn't break it
in some horrid way.  I made larger changes to it originally, then they
touched this part of the driver and I had to rework the patch to apply
on top of their changes.  So I did something more minimal.

This, on top of what's in Andrew's tree, would be my guess, but I don't
have the hardware.

diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
index 925be5942895..d4eb0fcd04c7 100644
--- a/drivers/crypto/chelsio/chtls/chtls_io.c
+++ b/drivers/crypto/chelsio/chtls/chtls_io.c
@@ -1073,7 +1073,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 		} else {
 			int i = skb_shinfo(skb)->nr_frags;
 			struct page *page = TCP_PAGE(sk);
-			int pg_size = PAGE_SIZE;
+			unsigned int pg_size = 0;
 			int off = TCP_OFF(sk);
 			bool merge;
 
@@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 			if (page && off == pg_size) {
 				put_page(page);
 				TCP_PAGE(sk) = page = NULL;
-				pg_size = PAGE_SIZE;
+				pg_size = 0;
 			}
 
 			if (!page) {
@@ -1104,15 +1104,13 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 							   __GFP_NOWARN |
 							   __GFP_NORETRY,
 							   order);
-					if (page)
-						pg_size <<= order;
 				}
 				if (!page) {
 					page = alloc_page(gfp);
-					pg_size = PAGE_SIZE;
 				}
 				if (!page)
 					goto wait_for_memory;
+				pg_size = page_size(page);
 				off = 0;
 			}
 copy:


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-23 16:02     ` Matthew Wilcox
@ 2019-07-23 17:58       ` Ira Weiny
  2019-07-23 18:14         ` Matthew Wilcox
  2019-09-20 23:28       ` Andrew Morton
  1 sibling, 1 reply; 20+ messages in thread
From: Ira Weiny @ 2019-07-23 17:58 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm, Atul Gupta, linux-crypto

On Tue, Jul 23, 2019 at 09:02:48AM -0700, Matthew Wilcox wrote:
> On Mon, Jul 22, 2019 at 05:43:07PM -0700, Ira Weiny wrote:
> > > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > index 551bca6fef24..925be5942895 100644
> > > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > @@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  			bool merge;
> > >  
> > >  			if (page)
> > > -				pg_size <<= compound_order(page);
> > > +				pg_size = page_size(page);
> > >  			if (off < pg_size &&
> > >  			    skb_can_coalesce(skb, i, page, off)) {
> > >  				merge = 1;
> > > @@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  							   __GFP_NORETRY,
> > >  							   order);
> > >  					if (page)
> > > -						pg_size <<=
> > > -							compound_order(page);
> > > +						pg_size <<= order;
> > 
> > Looking at the code I see pg_size should be PAGE_SIZE right before this so why
> > not just use the new call and remove the initial assignment?
> 
> This driver is really convoluted.

Agreed...

>
> I wasn't certain I wouldn't break it
> in some horrid way.  I made larger changes to it originally, then they
> touched this part of the driver and I had to rework the patch to apply
> on top of their changes.  So I did something more minimal.
> 
> This, on top of what's in Andrew's tree, would be my guess, but I don't
> have the hardware.
> 
> diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> index 925be5942895..d4eb0fcd04c7 100644
> --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> @@ -1073,7 +1073,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  		} else {
>  			int i = skb_shinfo(skb)->nr_frags;
>  			struct page *page = TCP_PAGE(sk);
> -			int pg_size = PAGE_SIZE;
> +			unsigned int pg_size = 0;
>  			int off = TCP_OFF(sk);
>  			bool merge;
>  
> @@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  			if (page && off == pg_size) {
>  				put_page(page);
>  				TCP_PAGE(sk) = page = NULL;
> -				pg_size = PAGE_SIZE;
> +				pg_size = 0;

Yea...  I was not sure about this one at first...  :-/

>  			}
>  
>  			if (!page) {
> @@ -1104,15 +1104,13 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  							   __GFP_NOWARN |
>  							   __GFP_NORETRY,
>  							   order);
> -					if (page)
> -						pg_size <<= order;
>  				}
>  				if (!page) {
>  					page = alloc_page(gfp);
> -					pg_size = PAGE_SIZE;
>  				}
>  				if (!page)
>  					goto wait_for_memory;

Side note: why 2 checks for !page?

Reviewed-by: Ira Weiny <ira.weiny@intel.com>

> +				pg_size = page_size(page);
>  				off = 0;
>  			}
>  copy:


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-23 17:58       ` Ira Weiny
@ 2019-07-23 18:14         ` Matthew Wilcox
  2019-07-23 20:44           ` Ira Weiny
  0 siblings, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-23 18:14 UTC (permalink / raw)
  To: Ira Weiny; +Cc: Andrew Morton, linux-mm, Atul Gupta, linux-crypto

On Tue, Jul 23, 2019 at 10:58:38AM -0700, Ira Weiny wrote:
> > @@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> >  			if (page && off == pg_size) {
> >  				put_page(page);
> >  				TCP_PAGE(sk) = page = NULL;
> > -				pg_size = PAGE_SIZE;
> > +				pg_size = 0;
> 
> Yea...  I was not sure about this one at first...  :-/

I'm not sure we actually need to change pg_size here, but it seemed
appropriate to set it back to 0.

> >  							   __GFP_NORETRY,
> >  							   order);
> > -					if (page)
> > -						pg_size <<= order;
> >  				}
> >  				if (!page) {
> >  					page = alloc_page(gfp);
> > -					pg_size = PAGE_SIZE;
> >  				}
> >  				if (!page)
> >  					goto wait_for_memory;
> 
> Side note: why 2 checks for !page?

Because page is assigned to after the first check ...
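
For anyone following along, that flow reduces to the skeleton below (a
sketch; the chtls bookkeeping around it is omitted and the gfp flags
are abbreviated):

	if (!page) {
		if (order)	/* opportunistic high-order attempt */
			page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN |
					   __GFP_NORETRY, order);
		if (!page)	/* the attempt above may have failed */
			page = alloc_page(gfp);
		if (!page)	/* the order-0 fallback failed too */
			goto wait_for_memory;
		pg_size = page_size(page);
		off = 0;
	}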


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-23 18:14         ` Matthew Wilcox
@ 2019-07-23 20:44           ` Ira Weiny
  2019-07-23 21:03             ` Matthew Wilcox
  0 siblings, 1 reply; 20+ messages in thread
From: Ira Weiny @ 2019-07-23 20:44 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm, Atul Gupta, linux-crypto

On Tue, Jul 23, 2019 at 11:14:13AM -0700, Matthew Wilcox wrote:
> On Tue, Jul 23, 2019 at 10:58:38AM -0700, Ira Weiny wrote:
> > > @@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  			if (page && off == pg_size) {
> > >  				put_page(page);
> > >  				TCP_PAGE(sk) = page = NULL;
> > > -				pg_size = PAGE_SIZE;
> > > +				pg_size = 0;
> > 
> > Yea...  I was not sure about this one at first...  :-/
> 
> I'm not sure we actually need to change pg_size here, but it seemed
> appropriate to set it back to 0.
> 
> > >  							   __GFP_NORETRY,
> > >  							   order);
> > > -					if (page)
> > > -						pg_size <<= order;
> > >  				}
> > >  				if (!page) {
> > >  					page = alloc_page(gfp);
> > > -					pg_size = PAGE_SIZE;
> > >  				}
> > >  				if (!page)
> > >  					goto wait_for_memory;
> > 
> > Side note: why 2 checks for !page?
> 
> Because page is assigned to after the first check ...

Ah yea duh!  Sorry it is a bit hard to follow.

Ira


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-23 20:44           ` Ira Weiny
@ 2019-07-23 21:03             ` Matthew Wilcox
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Wilcox @ 2019-07-23 21:03 UTC (permalink / raw)
  To: Ira Weiny; +Cc: Andrew Morton, linux-mm, Atul Gupta, linux-crypto

On Tue, Jul 23, 2019 at 01:44:16PM -0700, Ira Weiny wrote:
> > > Side note: why 2 checks for !page?
> > 
> > Because page is assigned to after the first check ...
> 
> Ah yea duh!  Sorry it is a bit hard to follow.

This is one of those users who really wants the VM to fall back
automatically to any page order block it has on hand.  We talked about
it a bit in the MM track this year; not sure whether you were in the
room for it.
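
Something like the following hypothetical helper is what such users
really want; this is purely illustrative, no such API exists in the
tree today:

/*
 * Hypothetical: try progressively smaller orders until the VM can
 * satisfy the request, falling back to a single page at the end.
 */
static struct page *alloc_pages_best_effort(gfp_t gfp, unsigned int max_order)
{
	unsigned int order;

	for (order = max_order; order > 0; order--) {
		struct page *page = alloc_pages(gfp | __GFP_COMP |
				__GFP_NOWARN | __GFP_NORETRY, order);
		if (page)
			return page;
	}
	return alloc_page(gfp);
}

The caller would then size its bookkeeping with page_size() on whatever
comes back instead of tracking the order it asked for.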


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/3] mm: Introduce page_shift()
  2019-07-21 10:46 ` [PATCH v2 2/3] mm: Introduce page_shift() Matthew Wilcox
  2019-07-23  0:44   ` Ira Weiny
@ 2019-07-24 10:40   ` kbuild test robot
  2019-07-25  0:30     ` Andrew Morton
  1 sibling, 1 reply; 20+ messages in thread
From: kbuild test robot @ 2019-07-24 10:40 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: kbuild-all, Andrew Morton, linux-mm, Matthew Wilcox (Oracle)

[-- Attachment #1: Type: text/plain, Size: 1802 bytes --]

Hi Matthew,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.3-rc1 next-20190724]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox/Make-working-with-compound-pages-easier/20190722-030555
config: powerpc64-allyesconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=powerpc64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

Note: the linux-review/Matthew-Wilcox/Make-working-with-compound-pages-easier/20190722-030555 HEAD e1bb8b04ba8cf861b2610b0ae646ee49cb069568 builds fine.
      It only hurts bisectibility.

All error/warnings (new ones prefixed by >>):

   drivers/vfio/vfio_iommu_spapr_tce.c: In function 'tce_page_is_contained':
>> drivers/vfio/vfio_iommu_spapr_tce.c:193:9: error: called object 'page_shift' is not a function or function pointer
     return page_shift(compound_head(page)) >= page_shift;
            ^~~~~~~~~~
   drivers/vfio/vfio_iommu_spapr_tce.c:179:16: note: declared here
      unsigned int page_shift)
                   ^~~~~~~~~~
>> drivers/vfio/vfio_iommu_spapr_tce.c:194:1: warning: control reaches end of non-void function [-Wreturn-type]
    }
    ^

vim +/page_shift +193 drivers/vfio/vfio_iommu_spapr_tce.c

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 62427 bytes --]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/3] mm: Introduce page_shift()
  2019-07-24 10:40   ` kbuild test robot
@ 2019-07-25  0:30     ` Andrew Morton
  2019-07-25 20:30       ` Ira Weiny
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2019-07-25  0:30 UTC (permalink / raw)
  To: kbuild test robot; +Cc: Matthew Wilcox, kbuild-all, linux-mm

On Wed, 24 Jul 2019 18:40:25 +0800 kbuild test robot <lkp@intel.com> wrote:

> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on linus/master]
> [cannot apply to v5.3-rc1 next-20190724]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox/Make-working-with-compound-pages-easier/20190722-030555
> config: powerpc64-allyesconfig (attached as .config)
> compiler: powerpc64-linux-gcc (GCC) 7.4.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         GCC_VERSION=7.4.0 make.cross ARCH=powerpc64 
> 
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
> 
> Note: the linux-review/Matthew-Wilcox/Make-working-with-compound-pages-easier/20190722-030555 HEAD e1bb8b04ba8cf861b2610b0ae646ee49cb069568 builds fine.
>       It only hurts bisectibility.
> 
> All error/warnings (new ones prefixed by >>):
> 
>    drivers/vfio/vfio_iommu_spapr_tce.c: In function 'tce_page_is_contained':
> >> drivers/vfio/vfio_iommu_spapr_tce.c:193:9: error: called object 'page_shift' is not a function or function pointer
>      return page_shift(compound_head(page)) >= page_shift;
>             ^~~~~~~~~~
>    drivers/vfio/vfio_iommu_spapr_tce.c:179:16: note: declared here
>       unsigned int page_shift)
>                    ^~~~~~~~~~

This?

--- a/drivers/vfio/vfio_iommu_spapr_tce.c~mm-introduce-page_shift-fix
+++ a/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -176,13 +176,13 @@ put_exit:
 }
 
 static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
-		unsigned int page_shift)
+		unsigned int it_page_shift)
 {
 	struct page *page;
 	unsigned long size = 0;
 
-	if (mm_iommu_is_devmem(mm, hpa, page_shift, &size))
-		return size == (1UL << page_shift);
+	if (mm_iommu_is_devmem(mm, hpa, it_page_shift, &size))
+		return size == (1UL << it_page_shift);
 
 	page = pfn_to_page(hpa >> PAGE_SHIFT);
 	/*
@@ -190,7 +190,7 @@ static bool tce_page_is_contained(struct
 	 * a page we just found. Otherwise the hardware can get access to
 	 * a bigger memory chunk that it should.
 	 */
-	return page_shift(compound_head(page)) >= page_shift;
+	return page_shift(compound_head(page)) >= it_page_shift;
 }
 
 static inline bool tce_groups_attached(struct tce_container *container)
_


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/3] mm: Introduce page_shift()
  2019-07-25  0:30     ` Andrew Morton
@ 2019-07-25 20:30       ` Ira Weiny
  2019-09-23 20:30         ` Matthew Wilcox
  0 siblings, 1 reply; 20+ messages in thread
From: Ira Weiny @ 2019-07-25 20:30 UTC (permalink / raw)
  To: Andrew Morton; +Cc: kbuild test robot, Matthew Wilcox, kbuild-all, linux-mm

On Wed, Jul 24, 2019 at 05:30:55PM -0700, Andrew Morton wrote:
> On Wed, 24 Jul 2019 18:40:25 +0800 kbuild test robot <lkp@intel.com> wrote:
> 
> > Thank you for the patch! Yet something to improve:
> > 
> > [auto build test ERROR on linus/master]
> > [cannot apply to v5.3-rc1 next-20190724]
> > [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> > 
> > url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox/Make-working-with-compound-pages-easier/20190722-030555
> > config: powerpc64-allyesconfig (attached as .config)
> > compiler: powerpc64-linux-gcc (GCC) 7.4.0
> > reproduce:
> >         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> >         chmod +x ~/bin/make.cross
> >         # save the attached .config to linux build tree
> >         GCC_VERSION=7.4.0 make.cross ARCH=powerpc64 
> > 
> > If you fix the issue, kindly add following tag
> > Reported-by: kbuild test robot <lkp@intel.com>
> > 
> > Note: the linux-review/Matthew-Wilcox/Make-working-with-compound-pages-easier/20190722-030555 HEAD e1bb8b04ba8cf861b2610b0ae646ee49cb069568 builds fine.
> >       It only hurts bisectibility.
> > 
> > All error/warnings (new ones prefixed by >>):
> > 
> >    drivers/vfio/vfio_iommu_spapr_tce.c: In function 'tce_page_is_contained':
> > >> drivers/vfio/vfio_iommu_spapr_tce.c:193:9: error: called object 'page_shift' is not a function or function pointer
> >      return page_shift(compound_head(page)) >= page_shift;
> >             ^~~~~~~~~~
> >    drivers/vfio/vfio_iommu_spapr_tce.c:179:16: note: declared here
> >       unsigned int page_shift)
> >                    ^~~~~~~~~~
> 
> This?

Looks reasonable to me.  But checking around, it does seem like "page_shift"
is used as a parameter or variable in quite a few other places.

Is this something to be concerned with?

Reviewed-by: Ira Weiny <ira.weiny@intel.com>

> 
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c~mm-introduce-page_shift-fix
> +++ a/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -176,13 +176,13 @@ put_exit:
>  }
>  
>  static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
> -		unsigned int page_shift)
> +		unsigned int it_page_shift)
>  {
>  	struct page *page;
>  	unsigned long size = 0;
>  
> -	if (mm_iommu_is_devmem(mm, hpa, page_shift, &size))
> -		return size == (1UL << page_shift);
> +	if (mm_iommu_is_devmem(mm, hpa, it_page_shift, &size))
> +		return size == (1UL << it_page_shift);
>  
>  	page = pfn_to_page(hpa >> PAGE_SHIFT);
>  	/*
> @@ -190,7 +190,7 @@ static bool tce_page_is_contained(struct
>  	 * a page we just found. Otherwise the hardware can get access to
>  	 * a bigger memory chunk that it should.
>  	 */
> -	return page_shift(compound_head(page)) >= page_shift;
> +	return page_shift(compound_head(page)) >= it_page_shift;
>  }
>  
>  static inline bool tce_groups_attached(struct tce_container *container)
> _
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-07-23 16:02     ` Matthew Wilcox
  2019-07-23 17:58       ` Ira Weiny
@ 2019-09-20 23:28       ` Andrew Morton
  2019-09-21  1:09         ` Matthew Wilcox
  1 sibling, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2019-09-20 23:28 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Ira Weiny, linux-mm, Atul Gupta, linux-crypto

On Tue, 23 Jul 2019 09:02:48 -0700 Matthew Wilcox <willy@infradead.org> wrote:

> On Mon, Jul 22, 2019 at 05:43:07PM -0700, Ira Weiny wrote:
> > > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > index 551bca6fef24..925be5942895 100644
> > > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > @@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  			bool merge;
> > >  
> > >  			if (page)
> > > -				pg_size <<= compound_order(page);
> > > +				pg_size = page_size(page);
> > >  			if (off < pg_size &&
> > >  			    skb_can_coalesce(skb, i, page, off)) {
> > >  				merge = 1;
> > > @@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  							   __GFP_NORETRY,
> > >  							   order);
> > >  					if (page)
> > > -						pg_size <<=
> > > -							compound_order(page);
> > > +						pg_size <<= order;
> > 
> > Looking at the code I see pg_size should be PAGE_SIZE right before this so why
> > not just use the new call and remove the initial assignment?
> 
> This driver is really convoluted.  I wasn't certain I wouldn't break it
> in some horrid way.  I made larger changes to it originally, then they
> touched this part of the driver and I had to rework the patch to apply
> on top of their changes.  So I did something more minimal.
> 
> This, on top of what's in Andrew's tree, would be my guess, but I don't
> have the hardware.
> 
> diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> index 925be5942895..d4eb0fcd04c7 100644
> --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> @@ -1073,7 +1073,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  		} else {
>  			int i = skb_shinfo(skb)->nr_frags;
>  			struct page *page = TCP_PAGE(sk);
> -			int pg_size = PAGE_SIZE;
> +			unsigned int pg_size = 0;
>  			int off = TCP_OFF(sk);
>  			bool merge;
>  
> @@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  			if (page && off == pg_size) {
>  				put_page(page);
>  				TCP_PAGE(sk) = page = NULL;
> -				pg_size = PAGE_SIZE;
> +				pg_size = 0;
>  			}
>  
>  			if (!page) {
> @@ -1104,15 +1104,13 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>  							   __GFP_NOWARN |
>  							   __GFP_NORETRY,
>  							   order);
> -					if (page)
> -						pg_size <<= order;
>  				}
>  				if (!page) {
>  					page = alloc_page(gfp);
> -					pg_size = PAGE_SIZE;
>  				}
>  				if (!page)
>  					goto wait_for_memory;
> +				pg_size = page_size(page);
>  				off = 0;
>  			}

I didn't do anything with this.  I assume the original patch (which has
been in -next since July 22) is good and the above is merely a cleanup?




^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 1/3] mm: Introduce page_size()
  2019-09-20 23:28       ` Andrew Morton
@ 2019-09-21  1:09         ` Matthew Wilcox
  2019-09-22  2:13           ` Weiny, Ira
  0 siblings, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2019-09-21  1:09 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Ira Weiny, linux-mm, Atul Gupta, linux-crypto

On Fri, Sep 20, 2019 at 04:28:48PM -0700, Andrew Morton wrote:
> On Tue, 23 Jul 2019 09:02:48 -0700 Matthew Wilcox <willy@infradead.org> wrote:
> 
> > On Mon, Jul 22, 2019 at 05:43:07PM -0700, Ira Weiny wrote:
> > > > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > > index 551bca6fef24..925be5942895 100644
> > > > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > > > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > > @@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > > >  			bool merge;
> > > >  
> > > >  			if (page)
> > > > -				pg_size <<= compound_order(page);
> > > > +				pg_size = page_size(page);
> > > >  			if (off < pg_size &&
> > > >  			    skb_can_coalesce(skb, i, page, off)) {
> > > >  				merge = 1;
> > > > @@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > > >  							   __GFP_NORETRY,
> > > >  							   order);
> > > >  					if (page)
> > > > -						pg_size <<=
> > > > -							compound_order(page);
> > > > +						pg_size <<= order;
> > > 
> > > Looking at the code I see pg_size should be PAGE_SIZE right before this so why
> > > not just use the new call and remove the initial assignment?
> > 
> > This driver is really convoluted.  I wasn't certain I wouldn't break it
> > in some horrid way.  I made larger changes to it originally, then they
> > touched this part of the driver and I had to rework the patch to apply
> > on top of their changes.  So I did something more minimal.
> > 
> > This, on top of what's in Andrew's tree, would be my guess, but I don't
> > have the hardware.
> > 
> > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > index 925be5942895..d4eb0fcd04c7 100644
> > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > @@ -1073,7 +1073,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> >  		} else {
> >  			int i = skb_shinfo(skb)->nr_frags;
> >  			struct page *page = TCP_PAGE(sk);
> > -			int pg_size = PAGE_SIZE;
> > +			unsigned int pg_size = 0;
> >  			int off = TCP_OFF(sk);
> >  			bool merge;
> >  
> > @@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> >  			if (page && off == pg_size) {
> >  				put_page(page);
> >  				TCP_PAGE(sk) = page = NULL;
> > -				pg_size = PAGE_SIZE;
> > +				pg_size = 0;
> >  			}
> >  
> >  			if (!page) {
> > @@ -1104,15 +1104,13 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> >  							   __GFP_NOWARN |
> >  							   __GFP_NORETRY,
> >  							   order);
> > -					if (page)
> > -						pg_size <<= order;
> >  				}
> >  				if (!page) {
> >  					page = alloc_page(gfp);
> > -					pg_size = PAGE_SIZE;
> >  				}
> >  				if (!page)
> >  					goto wait_for_memory;
> > +				pg_size = page_size(page);
> >  				off = 0;
> >  			}
> 
> I didn't do anything with this.  I assume the original patch (which has
> been in -next since July 22) is good and the above is merely a cleanup?

Yes, just a cleanup.  Since Atul didn't offer an opinion, I assume
he doesn't care.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: [PATCH v2 1/3] mm: Introduce page_size()
  2019-09-21  1:09         ` Matthew Wilcox
@ 2019-09-22  2:13           ` Weiny, Ira
  0 siblings, 0 replies; 20+ messages in thread
From: Weiny, Ira @ 2019-09-22  2:13 UTC (permalink / raw)
  To: Matthew Wilcox, Andrew Morton; +Cc: linux-mm, Atul Gupta, linux-crypto

> On Fri, Sep 20, 2019 at 04:28:48PM -0700, Andrew Morton wrote:
> > On Tue, 23 Jul 2019 09:02:48 -0700 Matthew Wilcox <willy@infradead.org> wrote:
> >
> > > On Mon, Jul 22, 2019 at 05:43:07PM -0700, Ira Weiny wrote:
> > > > > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > > > index 551bca6fef24..925be5942895 100644
> > > > > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > > > > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > > > @@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > > > >  			bool merge;
> > > > >
> > > > >  			if (page)
> > > > > -				pg_size <<= compound_order(page);
> > > > > +				pg_size = page_size(page);
> > > > >  			if (off < pg_size &&
> > > > >  			    skb_can_coalesce(skb, i, page, off)) {
> > > > >  				merge = 1;
> > > > > @@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > > > >  							   __GFP_NORETRY,
> > > > >  							   order);
> > > > >  					if (page)
> > > > > -						pg_size <<=
> > > > > -							compound_order(page);
> > > > > +						pg_size <<= order;
> > > >
> > > > Looking at the code I see pg_size should be PAGE_SIZE right before
> > > > this so why not just use the new call and remove the initial assignment?
> > >
> > > This driver is really convoluted.  I wasn't certain I wouldn't break
> > > it in some horrid way.  I made larger changes to it originally, then
> > > they touched this part of the driver and I had to rework the patch
> > > to apply on top of their changes.  So I did something more minimal.
> > >
> > > This, on top of what's in Andrew's tree, would be my guess, but I
> > > don't have the hardware.
> > >
> > > diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > index 925be5942895..d4eb0fcd04c7 100644
> > > --- a/drivers/crypto/chelsio/chtls/chtls_io.c
> > > +++ b/drivers/crypto/chelsio/chtls/chtls_io.c
> > > @@ -1073,7 +1073,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  		} else {
> > >  			int i = skb_shinfo(skb)->nr_frags;
> > >  			struct page *page = TCP_PAGE(sk);
> > > -			int pg_size = PAGE_SIZE;
> > > +			unsigned int pg_size = 0;
> > >  			int off = TCP_OFF(sk);
> > >  			bool merge;
> > >
> > > @@ -1092,7 +1092,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  			if (page && off == pg_size) {
> > >  				put_page(page);
> > >  				TCP_PAGE(sk) = page = NULL;
> > > -				pg_size = PAGE_SIZE;
> > > +				pg_size = 0;
> > >  			}
> > >
> > >  			if (!page) {
> > > @@ -1104,15 +1104,13 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> > >  							   __GFP_NOWARN |
> > >  							   __GFP_NORETRY,
> > >  							   order);
> > > -					if (page)
> > > -						pg_size <<= order;
> > >  				}
> > >  				if (!page) {
> > >  					page = alloc_page(gfp);
> > > -					pg_size = PAGE_SIZE;
> > >  				}
> > >  				if (!page)
> > >  					goto wait_for_memory;
> > > +				pg_size = page_size(page);
> > >  				off = 0;
> > >  			}
> >
> > I didn't do anything with this.  I assume the original patch (which
> > has been in -next since July 22) is good and the above is merely a cleanup?
> 
> Yes, just a cleanup.  Since Atul didn't offer an opinion, I assume he doesn't
> care.

Agreed, I think what went in is fine.

Ira



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/3] mm: Introduce page_shift()
  2019-07-25 20:30       ` Ira Weiny
@ 2019-09-23 20:30         ` Matthew Wilcox
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Wilcox @ 2019-09-23 20:30 UTC (permalink / raw)
  To: Ira Weiny; +Cc: Andrew Morton, kbuild test robot, kbuild-all, linux-mm

On Thu, Jul 25, 2019 at 01:30:12PM -0700, Ira Weiny wrote:
> On Wed, Jul 24, 2019 at 05:30:55PM -0700, Andrew Morton wrote:
> > On Wed, 24 Jul 2019 18:40:25 +0800 kbuild test robot <lkp@intel.com> wrote:
> > >    drivers/vfio/vfio_iommu_spapr_tce.c: In function 'tce_page_is_contained':
> > > >> drivers/vfio/vfio_iommu_spapr_tce.c:193:9: error: called object 'page_shift' is not a function or function pointer
> > >      return page_shift(compound_head(page)) >= page_shift;
> > >             ^~~~~~~~~~
> > >    drivers/vfio/vfio_iommu_spapr_tce.c:179:16: note: declared here
> > >       unsigned int page_shift)
> > >                    ^~~~~~~~~~
> > 
> > This?
> 
> Looks reasonable to me.  But checking around it does seem like "page_shift" is
> used as a parameter or variable in quite a few other places.
> 
> Is this something to be concerned with?

Sorry, I missed this earlier.

It's not currently a warning because we don't enable -Wshadow.
For functions which have a parameter or variable named page_shift, the
local definition overrides the global function.  They continue to work
"as expected".  The only reason this particular function has an issue
is that it also wants to call the function page_shift().  The compiler
resolves the symbol 'page_shift' to be the parameter, and says "This is
an int, not a function, you're clearly mistaken".

We don't need to mass-rename all the local variables called page_shift,
unless we want to enable -Wshadow.  Which I would actually like to do,
but I don't have time.
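
A minimal illustration of the shadowing rule described above (toy code,
not taken from the kernel):

unsigned int page_shift(void);		/* file-scope function */

int contained(unsigned int page_shift)	/* parameter shadows the function */
{
	/*
	 * Within this body, 'page_shift' resolves to the parameter,
	 * so the call below is rejected:
	 *   error: called object 'page_shift' is not a function
	 */
	return page_shift() >= page_shift;
}

Renaming the parameter, as the fix for tce_page_is_contained() does,
makes the function reachable again.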


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread

Thread overview: 20+ messages
2019-07-21 10:46 [PATCH v2 0/3] Make working with compound pages easier Matthew Wilcox
2019-07-21 10:46 ` [PATCH v2 1/3] mm: Introduce page_size() Matthew Wilcox
2019-07-23  0:43   ` Ira Weiny
2019-07-23 16:02     ` Matthew Wilcox
2019-07-23 17:58       ` Ira Weiny
2019-07-23 18:14         ` Matthew Wilcox
2019-07-23 20:44           ` Ira Weiny
2019-07-23 21:03             ` Matthew Wilcox
2019-09-20 23:28       ` Andrew Morton
2019-09-21  1:09         ` Matthew Wilcox
2019-09-22  2:13           ` Weiny, Ira
2019-07-21 10:46 ` [PATCH v2 2/3] mm: Introduce page_shift() Matthew Wilcox
2019-07-23  0:44   ` Ira Weiny
2019-07-24 10:40   ` kbuild test robot
2019-07-25  0:30     ` Andrew Morton
2019-07-25 20:30       ` Ira Weiny
2019-09-23 20:30         ` Matthew Wilcox
2019-07-21 10:46 ` [PATCH v2 3/3] mm: Introduce compound_nr() Matthew Wilcox
2019-07-23  0:46   ` Ira Weiny
2019-07-23 12:55 ` [PATCH v2 0/3] Make working with compound pages easier Kirill A. Shutemov
