linux-mm.kvack.org archive mirror
* [PATCH v2 00/15] Remove 'order' argument from many mm functions
@ 2019-05-10 13:50 Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 01/15] mm: Remove gfp_flags argument from rmqueue_pcplist Matthew Wilcox
                   ` (16 more replies)
  0 siblings, 17 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This is a little more serious attempt than v1, since nobody seems opposed
to the concept of using GFP flags to pass the order around.  I've split
it up a bit better, and I've reversed the arguments of __alloc_pages_node
to match the order of the arguments to other functions in the same family.
alloc_pages_node() needs the same treatment, but there are about 70 callers,
so I'm going to skip it for now.

This is against current -mm.  I'm seeing a text saving of 482 bytes from
a tinyconfig vmlinux (1003785 reduced to 1003303).  There are more
savings to be had by combining the order and the GFP flags, for
example in the scan_control data structure.

I think there are also cognitive savings to be had from eliminating
some of the function variants which exist solely to take an 'order'.
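
To make the encoding concrete, here is a small stand-alone model of the
scheme added in patch 2 below; the __GFP_BITS_SHIFT value of 23 (lockdep
disabled) and the flag bits 0xc0 are only illustrative stand-ins:

#include <assert.h>

typedef unsigned int gfp_t;		/* models the kernel's gfp_t */

#define __GFP_BITS_SHIFT 23
#define __GFP_ORDER(order)	((gfp_t)((order) << __GFP_BITS_SHIFT))
#define gfp_order(gfp)		(((unsigned int)(gfp)) >> __GFP_BITS_SHIFT)

int main(void)
{
	gfp_t gfp = 0xc0 | __GFP_ORDER(9);	/* 0xc0 stands in for GFP flag bits */

	assert(gfp_order(gfp) == 9);		/* the order round-trips */
	assert((gfp & ((1u << __GFP_BITS_SHIFT) - 1)) == 0xc0);	/* flags intact */
	return 0;
}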

Matthew Wilcox (Oracle) (15):
  mm: Remove gfp_flags argument from rmqueue_pcplist
  mm: Pass order to __alloc_pages_nodemask in GFP flags
  mm: Pass order to __alloc_pages in GFP flags
  mm: Pass order to alloc_page_interleave in GFP flags
  mm: Pass order to alloc_pages_current in GFP flags
  mm: Pass order to alloc_pages_vma in GFP flags
  mm: Pass order to __alloc_pages_node in GFP flags
  mm: Pass order to __get_free_page in GFP flags
  mm: Pass order to prep_new_page in GFP flags
  mm: Pass order to rmqueue in GFP flags
  mm: Pass order to get_page_from_freelist in GFP flags
  mm: Pass order to __alloc_pages_cpuset_fallback in GFP flags
  mm: Pass order to prepare_alloc_pages in GFP flags
  mm: Pass order to try_to_free_pages in GFP flags
  mm: Pass order to node_reclaim() in GFP flags

 arch/ia64/kernel/uncached.c       |  6 +-
 arch/ia64/sn/pci/pci_dma.c        |  4 +-
 arch/powerpc/platforms/cell/ras.c |  5 +-
 arch/x86/events/intel/ds.c        |  4 +-
 arch/x86/kvm/vmx/vmx.c            |  4 +-
 drivers/misc/sgi-xp/xpc_uv.c      |  5 +-
 include/linux/gfp.h               | 59 +++++++++++--------
 include/linux/migrate.h           |  2 +-
 include/linux/swap.h              |  2 +-
 include/trace/events/vmscan.h     | 28 ++++-----
 kernel/profile.c                  |  2 +-
 mm/filemap.c                      |  2 +-
 mm/gup.c                          |  4 +-
 mm/hugetlb.c                      |  5 +-
 mm/internal.h                     |  5 +-
 mm/khugepaged.c                   |  2 +-
 mm/mempolicy.c                    | 34 +++++------
 mm/migrate.c                      |  9 ++-
 mm/page_alloc.c                   | 98 +++++++++++++++----------------
 mm/shmem.c                        |  5 +-
 mm/slab.c                         |  3 +-
 mm/slob.c                         |  2 +-
 mm/slub.c                         |  2 +-
 mm/vmscan.c                       | 26 ++++----
 24 files changed, 157 insertions(+), 161 deletions(-)

-- 
2.20.1



* [PATCH v2 01/15] mm: Remove gfp_flags argument from rmqueue_pcplist
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 02/15] mm: Pass order to __alloc_pages_nodemask in GFP flags Matthew Wilcox
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

The gfp_flags argument to rmqueue_pcplist() is unused, so remove it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1f99db76b1ff..57373327712e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3161,8 +3161,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 
 /* Lock and remove page from the per-cpu list */
 static struct page *rmqueue_pcplist(struct zone *preferred_zone,
-			struct zone *zone, gfp_t gfp_flags,
-			int migratetype, unsigned int alloc_flags)
+			struct zone *zone, int migratetype,
+			unsigned int alloc_flags)
 {
 	struct per_cpu_pages *pcp;
 	struct list_head *list;
@@ -3194,8 +3194,8 @@ struct page *rmqueue(struct zone *preferred_zone,
 	struct page *page;
 
 	if (likely(order == 0)) {
-		page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
-					migratetype, alloc_flags);
+		page = rmqueue_pcplist(preferred_zone, zone, migratetype,
+				alloc_flags);
 		goto out;
 	}
 
-- 
2.20.1



* [PATCH v2 02/15] mm: Pass order to __alloc_pages_nodemask in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 01/15] mm: Remove gfp_flags argument from rmqueue_pcplist Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 03/15] mm: Pass order to __alloc_pages " Matthew Wilcox
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Save marshalling an extra argument in all the callers at the expense of
using five bits of the GFP flags.  We still have three GFP bits remaining
after doing this (and we can release one more by reallocating NORETRY,
RETRY_MAYFAIL and NOFAIL).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
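
As an illustration of the new calling convention (a hypothetical call site,
not taken from this patch; the node id and the order value 4 are
placeholders):

	/* before: the order travels as a separate argument */
	page = __alloc_pages_nodemask(GFP_KERNEL | __GFP_COMP, 4, nid, NULL);

	/* after: the order is folded into the GFP mask */
	page = __alloc_pages_nodemask(GFP_KERNEL | __GFP_COMP | __GFP_ORDER(4),
				      nid, NULL);
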
 include/linux/gfp.h     | 18 +++++++++++++++---
 include/linux/migrate.h |  2 +-
 mm/hugetlb.c            |  5 +++--
 mm/mempolicy.c          |  5 +++--
 mm/page_alloc.c         |  4 ++--
 5 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index fb07b503dc45..c466b08df0ec 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -219,6 +219,18 @@ struct vm_area_struct;
 /* Room for N __GFP_FOO bits */
 #define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
+#define __GFP_ORDER(order) ((__force gfp_t)((order) << __GFP_BITS_SHIFT))
+#define __GFP_PMD	__GFP_ORDER(PMD_SHIFT - PAGE_SHIFT)
+#define __GFP_PUD	__GFP_ORDER(PUD_SHIFT - PAGE_SHIFT)
+
+/*
+ * Extract the order from a GFP bitmask.
+ * Must be the top bits to avoid an AND operation.  Don't let
+ * __GFP_BITS_SHIFT get over 27, or we won't be able to encode orders
+ * above 15 (some architectures allow configuring MAX_ORDER up to 64,
+ * but I doubt larger than 31 are ever used).
+ */
+#define gfp_order(gfp)	(((__force unsigned int)(gfp)) >> __GFP_BITS_SHIFT)
 
 /**
  * DOC: Useful GFP flag combinations
@@ -464,13 +476,13 @@ static inline void arch_alloc_page(struct page *page, int order) { }
 #endif
 
 struct page *
-__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
-							nodemask_t *nodemask);
+__alloc_pages_nodemask(gfp_t gfp_mask, int preferred_nid, nodemask_t *nodemask);
 
 static inline struct page *
 __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
 {
-	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
+	return __alloc_pages_nodemask(gfp_mask | __GFP_ORDER(order),
+			preferred_nid, NULL);
 }
 
 /*
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index e13d9bf2f9a5..ba4385144cc9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -50,7 +50,7 @@ static inline struct page *new_page_nodemask(struct page *page,
 	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
 		gfp_mask |= __GFP_HIGHMEM;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order,
+	new_page = __alloc_pages_nodemask(gfp_mask | __GFP_ORDER(order),
 				preferred_nid, nodemask);
 
 	if (new_page && PageTransHuge(new_page))
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bf58cee30f65..c8ee747ca437 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1409,10 +1409,11 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
 	int order = huge_page_order(h);
 	struct page *page;
 
-	gfp_mask |= __GFP_COMP|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
+	gfp_mask |= __GFP_COMP | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
+			__GFP_ORDER(order);
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
-	page = __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
+	page = __alloc_pages_nodemask(gfp_mask, nid, nmask);
 	if (page)
 		__count_vm_event(HTLB_BUDDY_PGALLOC);
 	else
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2219e747df49..310ad69effdd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2093,7 +2093,8 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
+	page = __alloc_pages_nodemask(gfp | __GFP_ORDER(order), preferred_nid,
+			nmask);
 	mpol_cond_put(pol);
 out:
 	return page;
@@ -2129,7 +2130,7 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	if (pol->mode == MPOL_INTERLEAVE)
 		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
 	else
-		page = __alloc_pages_nodemask(gfp, order,
+		page = __alloc_pages_nodemask(gfp | __GFP_ORDER(order),
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 57373327712e..6e968ab91660 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4622,11 +4622,11 @@ static inline void finalise_ac(gfp_t gfp_mask, struct alloc_context *ac)
  * This is the 'heart' of the zoned buddy allocator.
  */
 struct page *
-__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
-							nodemask_t *nodemask)
+__alloc_pages_nodemask(gfp_t gfp_mask, int preferred_nid, nodemask_t *nodemask)
 {
 	struct page *page;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
+	unsigned int order = gfp_order(gfp_mask);
 	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
 	struct alloc_context ac = { };
 
-- 
2.20.1



* [PATCH v2 03/15] mm: Pass order to __alloc_pages in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 01/15] mm: Remove gfp_flags argument from rmqueue_pcplist Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 02/15] mm: Pass order to __alloc_pages_nodemask in GFP flags Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 04/15] mm: Pass order to alloc_page_interleave " Matthew Wilcox
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/gfp.h | 8 +++-----
 mm/mempolicy.c      | 2 +-
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c466b08df0ec..9ddc7703ea81 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -478,11 +478,9 @@ static inline void arch_alloc_page(struct page *page, int order) { }
 struct page *
 __alloc_pages_nodemask(gfp_t gfp_mask, int preferred_nid, nodemask_t *nodemask);
 
-static inline struct page *
-__alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
+static inline struct page *__alloc_pages(gfp_t gfp, int preferred_nid)
 {
-	return __alloc_pages_nodemask(gfp_mask | __GFP_ORDER(order),
-			preferred_nid, NULL);
+	return __alloc_pages_nodemask(gfp, preferred_nid, NULL);
 }
 
 /*
@@ -495,7 +493,7 @@ __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
 
-	return __alloc_pages(gfp_mask, order, nid);
+	return __alloc_pages(gfp_mask | __GFP_ORDER(order), nid);
 }
 
 /*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 310ad69effdd..0a22f106edb2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2011,7 +2011,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
 	struct page *page;
 
-	page = __alloc_pages(gfp, order, nid);
+	page = __alloc_pages(gfp | __GFP_ORDER(order), nid);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;
-- 
2.20.1



* [PATCH v2 04/15] mm: Pass order to alloc_page_interleave in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (2 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 03/15] mm: Pass order to __alloc_pages " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 05/15] mm: Pass order to alloc_pages_current " Matthew Wilcox
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/mempolicy.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0a22f106edb2..8d5375cdd928 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2006,12 +2006,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 
 /* Allocate a page in interleaved policy.
    Own path because it needs to do special accounting. */
-static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
-					unsigned nid)
+static struct page *alloc_page_interleave(gfp_t gfp, unsigned nid)
 {
 	struct page *page;
 
-	page = __alloc_pages(gfp | __GFP_ORDER(order), nid);
+	page = __alloc_pages(gfp, nid);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;
@@ -2062,7 +2061,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 
 		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
 		mpol_cond_put(pol);
-		page = alloc_page_interleave(gfp, order, nid);
+		page = alloc_page_interleave(gfp | __GFP_ORDER(order), nid);
 		goto out;
 	}
 
@@ -2128,7 +2127,8 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
+		page = alloc_page_interleave(gfp | __GFP_ORDER(order),
+				interleave_nodes(pol));
 	else
 		page = __alloc_pages_nodemask(gfp | __GFP_ORDER(order),
 				policy_node(gfp, pol, numa_node_id()),
-- 
2.20.1



* [PATCH v2 05/15] mm: Pass order to alloc_pages_current in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (3 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 04/15] mm: Pass order to alloc_page_interleave " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 06/15] mm: Pass order to alloc_pages_vma " Matthew Wilcox
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/gfp.h |  4 ++--
 mm/mempolicy.c      | 10 ++++------
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 9ddc7703ea81..94ba8a6172e4 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -511,12 +511,12 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 }
 
 #ifdef CONFIG_NUMA
-extern struct page *alloc_pages_current(gfp_t gfp_mask, unsigned order);
+extern struct page *alloc_pages_current(gfp_t gfp_mask);
 
 static inline struct page *
 alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
-	return alloc_pages_current(gfp_mask, order);
+	return alloc_pages_current(gfp_mask | __GFP_ORDER(order));
 }
 extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
 			struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8d5375cdd928..eec0b9c21962 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2108,13 +2108,12 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
  *      	%GFP_HIGHMEM highmem allocation,
  *      	%GFP_FS     don't call back into a file system.
  *      	%GFP_ATOMIC don't sleep.
- *	@order: Power of two of allocation size in pages. 0 is a single page.
  *
  *	Allocate a page from the kernel page pool.  When not in
- *	interrupt context and apply the current process NUMA policy.
+ *	interrupt context, apply the current process NUMA policy.
  *	Returns NULL when no page can be allocated.
  */
-struct page *alloc_pages_current(gfp_t gfp, unsigned order)
+struct page *alloc_pages_current(gfp_t gfp)
 {
 	struct mempolicy *pol = &default_policy;
 	struct page *page;
@@ -2127,10 +2126,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_page_interleave(gfp | __GFP_ORDER(order),
-				interleave_nodes(pol));
+		page = alloc_page_interleave(gfp, interleave_nodes(pol));
 	else
-		page = __alloc_pages_nodemask(gfp | __GFP_ORDER(order),
+		page = __alloc_pages_nodemask(gfp,
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
 
-- 
2.20.1



* [PATCH v2 06/15] mm: Pass order to alloc_pages_vma in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (4 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 05/15] mm: Pass order to alloc_pages_current " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 07/15] mm: Pass order to __alloc_pages_node " Matthew Wilcox
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/gfp.h | 20 ++++++++++----------
 mm/mempolicy.c      | 15 +++++++--------
 mm/shmem.c          |  5 +++--
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 94ba8a6172e4..6133f77abc91 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -518,24 +518,24 @@ alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
 	return alloc_pages_current(gfp_mask | __GFP_ORDER(order));
 }
-extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
-			struct vm_area_struct *vma, unsigned long addr,
-			int node, bool hugepage);
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages_vma(gfp_mask, order, vma, addr, numa_node_id(), true)
+extern struct page *alloc_pages_vma(gfp_t gfp, struct vm_area_struct *vma,
+		unsigned long addr, int node, bool hugepage);
+#define alloc_hugepage_vma(gfp, vma, addr, order) \
+	alloc_pages_vma(gfp | __GFP_ORDER(order), vma, addr, numa_node_id(), \
+			true)
 #else
 #define alloc_pages(gfp_mask, order) \
-		alloc_pages_node(numa_node_id(), gfp_mask, order)
-#define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
-	alloc_pages(gfp_mask, order)
+	alloc_pages_node(numa_node_id(), gfp_mask, order)
+#define alloc_pages_vma(gfp, vma, addr, node, false) \
+	alloc_pages(gfp, 0)
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
 	alloc_pages(gfp_mask, order)
 #endif
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
 #define alloc_page_vma(gfp_mask, vma, addr)			\
-	alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false)
+	alloc_pages_vma(gfp_mask, vma, addr, numa_node_id(), false)
 #define alloc_page_vma_node(gfp_mask, vma, addr, node)		\
-	alloc_pages_vma(gfp_mask, 0, vma, addr, node, false)
+	alloc_pages_vma(gfp_mask, vma, addr, node, false)
 
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eec0b9c21962..e81d4a94878b 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2032,7 +2032,6 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned nid)
  *      %GFP_FS      allocation should not call back into a file system.
  *      %GFP_ATOMIC  don't sleep.
  *
- *	@order:Order of the GFP allocation.
  * 	@vma:  Pointer to VMA or NULL if not available.
  *	@addr: Virtual Address of the allocation. Must be inside the VMA.
  *	@node: Which node to prefer for allocation (modulo policy).
@@ -2046,8 +2045,8 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned nid)
  *	NULL when no page can be allocated.
  */
 struct page *
-alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
-		unsigned long addr, int node, bool hugepage)
+alloc_pages_vma(gfp_t gfp, struct vm_area_struct *vma, unsigned long addr,
+		int node, bool hugepage)
 {
 	struct mempolicy *pol;
 	struct page *page;
@@ -2059,9 +2058,10 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	if (pol->mode == MPOL_INTERLEAVE) {
 		unsigned nid;
 
-		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
+		nid = interleave_nid(pol, vma, addr,
+				PAGE_SHIFT + gfp_order(gfp));
 		mpol_cond_put(pol);
-		page = alloc_page_interleave(gfp | __GFP_ORDER(order), nid);
+		page = alloc_page_interleave(gfp, nid);
 		goto out;
 	}
 
@@ -2085,15 +2085,14 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		if (!nmask || node_isset(hpage_node, *nmask)) {
 			mpol_cond_put(pol);
 			page = __alloc_pages_node(hpage_node,
-						gfp | __GFP_THISNODE, order);
+						gfp | __GFP_THISNODE, 0);
 			goto out;
 		}
 	}
 
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages_nodemask(gfp | __GFP_ORDER(order), preferred_nid,
-			nmask);
+	page = __alloc_pages_nodemask(gfp, preferred_nid, nmask);
 	mpol_cond_put(pol);
 out:
 	return page;
diff --git a/mm/shmem.c b/mm/shmem.c
index 1bb3b8dc8bb2..fdbab5dbf1fd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1463,8 +1463,9 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 		return NULL;
 
 	shmem_pseudo_vma_init(&pvma, info, hindex);
-	page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
-			HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(), true);
+	page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY |
+					__GFP_NOWARN | __GFP_PMD,
+			&pvma, 0, numa_node_id(), true);
 	shmem_pseudo_vma_destroy(&pvma);
 	if (page)
 		prep_transhuge_page(page);
-- 
2.20.1



* [PATCH v2 07/15] mm: Pass order to __alloc_pages_node in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (5 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 06/15] mm: Pass order to alloc_pages_vma " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 08/15] mm: Pass order to __get_free_page " Matthew Wilcox
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.
Also switch the order of node and gfp to match the other memory
allocation APIs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
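
A hypothetical caller (placeholder node id and order) showing both the
reversed argument order and the order now riding in the mask:

	/* before: node first, gfp and order as separate arguments */
	page = __alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, 3);

	/* after: gfp (carrying the order) first, node second */
	page = __alloc_pages_node(GFP_KERNEL | __GFP_THISNODE | __GFP_ORDER(3),
				  nid);
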
 arch/ia64/kernel/uncached.c       | 6 +++---
 arch/ia64/sn/pci/pci_dma.c        | 4 ++--
 arch/powerpc/platforms/cell/ras.c | 5 ++---
 arch/x86/events/intel/ds.c        | 4 ++--
 arch/x86/kvm/vmx/vmx.c            | 4 ++--
 drivers/misc/sgi-xp/xpc_uv.c      | 5 ++---
 include/linux/gfp.h               | 9 ++++-----
 kernel/profile.c                  | 2 +-
 mm/filemap.c                      | 2 +-
 mm/gup.c                          | 4 ++--
 mm/khugepaged.c                   | 2 +-
 mm/mempolicy.c                    | 8 ++++----
 mm/migrate.c                      | 9 ++++-----
 mm/slab.c                         | 3 ++-
 mm/slob.c                         | 2 +-
 mm/slub.c                         | 2 +-
 16 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/arch/ia64/kernel/uncached.c b/arch/ia64/kernel/uncached.c
index 583f7ff6b589..2e53b7311777 100644
--- a/arch/ia64/kernel/uncached.c
+++ b/arch/ia64/kernel/uncached.c
@@ -98,9 +98,9 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
 
 	/* attempt to allocate a granule's worth of cached memory pages */
 
-	page = __alloc_pages_node(nid,
-				GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE,
-				IA64_GRANULE_SHIFT-PAGE_SHIFT);
+	page = __alloc_pages_node(GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+				__GFP_ORDER(IA64_GRANULE_SHIFT-PAGE_SHIFT),
+				nid);
 	if (!page) {
 		mutex_unlock(&uc_pool->add_chunk_mutex);
 		return -1;
diff --git a/arch/ia64/sn/pci/pci_dma.c b/arch/ia64/sn/pci/pci_dma.c
index b7d42e4edc1f..77e24145189c 100644
--- a/arch/ia64/sn/pci/pci_dma.c
+++ b/arch/ia64/sn/pci/pci_dma.c
@@ -92,8 +92,8 @@ static void *sn_dma_alloc_coherent(struct device *dev, size_t size,
 	 */
 	node = pcibus_to_node(pdev->bus);
 	if (likely(node >=0)) {
-		struct page *p = __alloc_pages_node(node,
-						flags, get_order(size));
+		struct page *p = __alloc_pages_node(flags |
+					__GFP_ORDER(get_order(size)), node);
 
 		if (likely(p))
 			cpuaddr = page_address(p);
diff --git a/arch/powerpc/platforms/cell/ras.c b/arch/powerpc/platforms/cell/ras.c
index 2f704afe9af3..8d2dcb07bacd 100644
--- a/arch/powerpc/platforms/cell/ras.c
+++ b/arch/powerpc/platforms/cell/ras.c
@@ -123,9 +123,8 @@ static int __init cbe_ptcal_enable_on_node(int nid, int order)
 
 	area->nid = nid;
 	area->order = order;
-	area->pages = __alloc_pages_node(area->nid,
-						GFP_KERNEL|__GFP_THISNODE,
-						area->order);
+	area->pages = __alloc_pages_node(GFP_KERNEL | __GFP_THISNODE |
+						__GFP_ORDER(area->order), nid);
 
 	if (!area->pages) {
 		printk(KERN_WARNING "%s: no page on node %d\n",
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 7a9f5dac5abe..2de66bd6fac5 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -315,13 +315,13 @@ static void ds_clear_cea(void *cea, size_t size)
 	preempt_enable();
 }
 
-static void *dsalloc_pages(size_t size, gfp_t flags, int cpu)
+static void *dsalloc_pages(size_t size, gfp_t gfp, int cpu)
 {
 	unsigned int order = get_order(size);
 	int node = cpu_to_node(cpu);
 	struct page *page;
 
-	page = __alloc_pages_node(node, flags | __GFP_ZERO, order);
+	page = __alloc_pages_node(gfp | __GFP_ZERO | __GFP_ORDER(order), node);
 	return page ? page_address(page) : NULL;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cbf66e23a1a6..b643057486ff 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2379,13 +2379,13 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	return 0;
 }
 
-struct vmcs *alloc_vmcs_cpu(bool shadow, int cpu, gfp_t flags)
+struct vmcs *alloc_vmcs_cpu(bool shadow, int cpu, gfp_t gfp)
 {
 	int node = cpu_to_node(cpu);
 	struct page *pages;
 	struct vmcs *vmcs;
 
-	pages = __alloc_pages_node(node, flags, vmcs_config.order);
+	pages = __alloc_pages_node(gfp | __GFP_ORDER(vmcs_config.order), node);
 	if (!pages)
 		return NULL;
 	vmcs = page_address(pages);
diff --git a/drivers/misc/sgi-xp/xpc_uv.c b/drivers/misc/sgi-xp/xpc_uv.c
index 0c6de97dd347..ed6c4f42ce8c 100644
--- a/drivers/misc/sgi-xp/xpc_uv.c
+++ b/drivers/misc/sgi-xp/xpc_uv.c
@@ -240,9 +240,8 @@ xpc_create_gru_mq_uv(unsigned int mq_size, int cpu, char *irq_name,
 	mq->mmr_blade = uv_cpu_to_blade_id(cpu);
 
 	nid = cpu_to_node(cpu);
-	page = __alloc_pages_node(nid,
-				      GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE,
-				      pg_order);
+	page = __alloc_pages_node(GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+					__GFP_ORDER(pg_order), nid);
 	if (page == NULL) {
 		dev_err(xpc_part, "xpc_create_gru_mq_uv() failed to alloc %d "
 			"bytes of memory on nid=%d for GRU mq\n", mq_size, nid);
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 6133f77abc91..faf3586419ce 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -487,13 +487,12 @@ static inline struct page *__alloc_pages(gfp_t gfp, int preferred_nid)
  * Allocate pages, preferring the node given as nid. The node must be valid and
  * online. For more general interface, see alloc_pages_node().
  */
-static inline struct page *
-__alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
+static inline struct page *__alloc_pages_node(gfp_t gfp, int nid)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
-	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
+	VM_WARN_ON((gfp & __GFP_THISNODE) && !node_online(nid));
 
-	return __alloc_pages(gfp_mask | __GFP_ORDER(order), nid);
+	return __alloc_pages(gfp, nid);
 }
 
 /*
@@ -507,7 +506,7 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_node(nid, gfp_mask, order);
+	return __alloc_pages_node(gfp_mask | __GFP_ORDER(order), nid);
 }
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/profile.c b/kernel/profile.c
index 9c08a2c7cb1d..1453ac0b1c21 100644
--- a/kernel/profile.c
+++ b/kernel/profile.c
@@ -359,7 +359,7 @@ static int profile_prepare_cpu(unsigned int cpu)
 		if (per_cpu(cpu_profile_hits, cpu)[i])
 			continue;
 
-		page = __alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
+		page = __alloc_pages_node(GFP_KERNEL | __GFP_ZERO, node);
 		if (!page) {
 			profile_dead_cpu(cpu);
 			return -ENOMEM;
diff --git a/mm/filemap.c b/mm/filemap.c
index 3ad18fa56057..9a4d0b6e5fc3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -945,7 +945,7 @@ struct page *__page_cache_alloc(gfp_t gfp)
 		do {
 			cpuset_mems_cookie = read_mems_allowed_begin();
 			n = cpuset_mem_spread_node();
-			page = __alloc_pages_node(n, gfp, 0);
+			page = __alloc_pages_node(gfp, n);
 		} while (!page && read_mems_allowed_retry(cpuset_mems_cookie));
 
 		return page;
diff --git a/mm/gup.c b/mm/gup.c
index 2c08248d4fa2..8427ff9d42e4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1316,14 +1316,14 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 		 * CMA area again.
 		 */
 		thp_gfpmask &= ~__GFP_MOVABLE;
-		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
+		thp = __alloc_pages_node(thp_gfpmask | __GFP_PMD, nid);
 		if (!thp)
 			return NULL;
 		prep_transhuge_page(thp);
 		return thp;
 	}
 
-	return __alloc_pages_node(nid, gfp_mask, 0);
+	return __alloc_pages_node(gfp_mask, nid);
 }
 
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a335f7c1fac4..2f643ee74edc 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -770,7 +770,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
 {
 	VM_BUG_ON_PAGE(*hpage, *hpage);
 
-	*hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
+	*hpage = __alloc_pages_node(gfp | __GFP_PMD, node);
 	if (unlikely(!*hpage)) {
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
 		*hpage = ERR_PTR(-ENOMEM);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e81d4a94878b..a2006e5e0f67 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -974,8 +974,8 @@ struct page *alloc_new_node_page(struct page *page, unsigned long node)
 		prep_transhuge_page(thp);
 		return thp;
 	} else
-		return __alloc_pages_node(node, GFP_HIGHUSER_MOVABLE |
-						    __GFP_THISNODE, 0);
+		return __alloc_pages_node(GFP_HIGHUSER_MOVABLE |
+						    __GFP_THISNODE, node);
 }
 
 /*
@@ -2084,8 +2084,8 @@ alloc_pages_vma(gfp_t gfp, struct vm_area_struct *vma, unsigned long addr,
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
 			mpol_cond_put(pol);
-			page = __alloc_pages_node(hpage_node,
-						gfp | __GFP_THISNODE, 0);
+			page = __alloc_pages_node(gfp | __GFP_THISNODE,
+					hpage_node);
 			goto out;
 		}
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index f2ecc2855a12..01466e82a387 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1880,11 +1880,10 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
 	int nid = (int) data;
 	struct page *newpage;
 
-	newpage = __alloc_pages_node(nid,
-					 (GFP_HIGHUSER_MOVABLE |
-					  __GFP_THISNODE | __GFP_NOMEMALLOC |
-					  __GFP_NORETRY | __GFP_NOWARN) &
-					 ~__GFP_RECLAIM, 0);
+	newpage = __alloc_pages_node((GFP_HIGHUSER_MOVABLE | __GFP_THISNODE |
+					__GFP_NOMEMALLOC | __GFP_NORETRY |
+					__GFP_NOWARN) & ~__GFP_RECLAIM,
+			nid);
 
 	return newpage;
 }
diff --git a/mm/slab.c b/mm/slab.c
index 2915d912e89a..63c3a8a0d796 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1393,7 +1393,8 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 
 	flags |= cachep->allocflags;
 
-	page = __alloc_pages_node(nodeid, flags, cachep->gfporder);
+	page = __alloc_pages_node(flags | __GFP_ORDER(cachep->gfporder),
+				nodeid);
 	if (!page) {
 		slab_out_of_memory(cachep, flags, nodeid);
 		return NULL;
diff --git a/mm/slob.c b/mm/slob.c
index 84aefd9b91ee..510f0941d032 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -194,7 +194,7 @@ static void *slob_new_pages(gfp_t gfp, int order, int node)
 
 #ifdef CONFIG_NUMA
 	if (node != NUMA_NO_NODE)
-		page = __alloc_pages_node(node, gfp, order);
+		page = __alloc_pages_node(gfp | __GFP_ORDER(order), node);
 	else
 #endif
 		page = alloc_pages(gfp, order);
diff --git a/mm/slub.c b/mm/slub.c
index e6ce13c54cb0..51453216a1ed 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1488,7 +1488,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	if (node == NUMA_NO_NODE)
 		page = alloc_pages(flags, order);
 	else
-		page = __alloc_pages_node(node, flags, order);
+		page = __alloc_pages_node(flags | __GFP_ORDER(order), node);
 
 	if (page && memcg_charge_slab(page, flags, order, s)) {
 		__free_pages(page, order);
-- 
2.20.1



* [PATCH v2 08/15] mm: Pass order to __get_free_page in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (6 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 07/15] mm: Pass order to __alloc_pages_node " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 09/15] mm: Pass order to prep_new_page " Matthew Wilcox
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Switch __get_free_page() to be the out-of-line implementation and make
__get_free_pages() a wrapper that passes the order to __get_free_page()
in the GFP flags.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
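
Illustrative expansion (GFP_ATOMIC and order 3 are arbitrary examples):

	/* the old multi-page entry point is now a macro... */
	addr = __get_free_pages(GFP_ATOMIC, 3);

	/* ...which expands to a call to the real, exported implementation */
	addr = __get_free_page(GFP_ATOMIC | __GFP_ORDER(3));
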
 include/linux/gfp.h | 6 +++---
 mm/page_alloc.c     | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index faf3586419ce..dac282ac1158 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -536,15 +536,15 @@ extern struct page *alloc_pages_vma(gfp_t gfp, struct vm_area_struct *vma,
 #define alloc_page_vma_node(gfp_mask, vma, addr, node)		\
 	alloc_pages_vma(gfp_mask, vma, addr, node, false)
 
-extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
+extern unsigned long __get_free_page(gfp_t gfp_mask);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
 
 void *alloc_pages_exact(size_t size, gfp_t gfp_mask);
 void free_pages_exact(void *virt, size_t size);
 void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
 
-#define __get_free_page(gfp_mask) \
-		__get_free_pages((gfp_mask), 0)
+#define __get_free_pages(gfp_mask, order) \
+		__get_free_page(gfp_mask | __GFP_ORDER(order))
 
 #define __get_dma_pages(gfp_mask, order) \
 		__get_free_pages((gfp_mask) | GFP_DMA, (order))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e968ab91660..eefe3c81c383 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4693,16 +4693,16 @@ EXPORT_SYMBOL(__alloc_pages_nodemask);
  * address cannot represent highmem pages. Use alloc_pages and then kmap if
  * you need to access high mem.
  */
-unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order)
+unsigned long __get_free_page(gfp_t gfp_mask)
 {
 	struct page *page;
 
-	page = alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order);
+	page = alloc_page(gfp_mask & ~__GFP_HIGHMEM);
 	if (!page)
 		return 0;
 	return (unsigned long) page_address(page);
 }
-EXPORT_SYMBOL(__get_free_pages);
+EXPORT_SYMBOL(__get_free_page);
 
 unsigned long get_zeroed_page(gfp_t gfp_mask)
 {
-- 
2.20.1



* [PATCH v2 09/15] mm: Pass order to prep_new_page in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (7 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 08/15] mm: Pass order to __get_free_page " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 10/15] mm: Pass order to rmqueue " Matthew Wilcox
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eefe3c81c383..91d8bafa7945 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2071,10 +2071,11 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_owner(page, order, gfp_flags);
 }
 
-static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-							unsigned int alloc_flags)
+static void prep_new_page(struct page *page, gfp_t gfp_flags,
+						unsigned int alloc_flags)
 {
 	int i;
+	unsigned int order = gfp_order(gfp_flags);
 
 	post_alloc_hook(page, order, gfp_flags);
 
@@ -3615,7 +3616,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
 				gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
-			prep_new_page(page, order, gfp_mask, alloc_flags);
+			prep_new_page(page, gfp_mask, alloc_flags);
 
 			/*
 			 * If this is a high-order atomic allocation then check
@@ -3840,7 +3841,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 
 	/* Prep a captured page if available */
 	if (page)
-		prep_new_page(page, order, gfp_mask, alloc_flags);
+		prep_new_page(page, gfp_mask, alloc_flags);
 
 	/* Try get a page from the freelist if available */
 	if (!page)
-- 
2.20.1



* [PATCH v2 10/15] mm: Pass order to rmqueue in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (8 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 09/15] mm: Pass order to prep_new_page " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 11/15] mm: Pass order to get_page_from_freelist " Matthew Wilcox
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 91d8bafa7945..6cff996289be 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3186,11 +3186,10 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
  * Allocate a page from the given zone. Use pcplists for order-0 allocations.
  */
 static inline
-struct page *rmqueue(struct zone *preferred_zone,
-			struct zone *zone, unsigned int order,
-			gfp_t gfp_flags, unsigned int alloc_flags,
-			int migratetype)
+struct page *rmqueue(struct zone *preferred_zone, struct zone *zone,
+		gfp_t gfp_flags, unsigned int alloc_flags, int migratetype)
 {
+	unsigned int order = gfp_order(gfp_flags);
 	unsigned long flags;
 	struct page *page;
 
@@ -3613,7 +3612,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		}
 
 try_this_zone:
-		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
+		page = rmqueue(ac->preferred_zoneref->zone, zone,
 				gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
 			prep_new_page(page, gfp_mask, alloc_flags);
-- 
2.20.1



* [PATCH v2 11/15] mm: Pass order to get_page_from_freelist in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (9 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 10/15] mm: Pass order to rmqueue " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 12/15] mm: Pass order to __alloc_pages_cpuset_fallback " Matthew Wilcox
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6cff996289be..38211bc541a7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3500,13 +3500,14 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
  * a page.
  */
 static struct page *
-get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
-						const struct alloc_context *ac)
+get_page_from_freelist(gfp_t gfp_mask, int alloc_flags,
+			const struct alloc_context *ac)
 {
 	struct zoneref *z;
 	struct zone *zone;
 	struct pglist_data *last_pgdat_dirty_limit = NULL;
 	bool no_fallback;
+	unsigned int order = gfp_order(gfp_mask);
 
 retry:
 	/*
@@ -3702,15 +3703,13 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 {
 	struct page *page;
 
-	page = get_page_from_freelist(gfp_mask, order,
-			alloc_flags|ALLOC_CPUSET, ac);
+	page = get_page_from_freelist(gfp_mask, alloc_flags|ALLOC_CPUSET, ac);
 	/*
 	 * fallback to ignore cpuset restriction if our nodes
 	 * are depleted
 	 */
 	if (!page)
-		page = get_page_from_freelist(gfp_mask, order,
-				alloc_flags, ac);
+		page = get_page_from_freelist(gfp_mask, alloc_flags, ac);
 
 	return page;
 }
@@ -3748,7 +3747,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	 * allocation which will never fail due to oom_lock already held.
 	 */
 	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
-				      ~__GFP_DIRECT_RECLAIM, order,
+				      ~__GFP_DIRECT_RECLAIM,
 				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
 	if (page)
 		goto out;
@@ -3844,7 +3843,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 
 	/* Try get a page from the freelist if available */
 	if (!page)
-		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
+		page = get_page_from_freelist(gfp_mask, alloc_flags, ac);
 
 	if (page) {
 		struct zone *zone = page_zone(page);
@@ -4071,7 +4070,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 		return NULL;
 
 retry:
-	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
+	page = get_page_from_freelist(gfp_mask, alloc_flags, ac);
 
 	/*
 	 * If an allocation failed after direct reclaim, it could be because
@@ -4376,7 +4375,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * The adjusted alloc_flags might result in immediate success, so try
 	 * that first
 	 */
-	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
+	page = get_page_from_freelist(gfp_mask, alloc_flags, ac);
 	if (page)
 		goto got_pg;
 
@@ -4446,7 +4445,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	}
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
-	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
+	page = get_page_from_freelist(gfp_mask, alloc_flags, ac);
 	if (page)
 		goto got_pg;
 
@@ -4653,7 +4652,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, int preferred_nid, nodemask_t *nodemask)
 	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);
 
 	/* First allocation attempt */
-	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
+	page = get_page_from_freelist(alloc_mask, alloc_flags, &ac);
 	if (likely(page))
 		goto out;
 
-- 
2.20.1



* [PATCH v2 12/15] mm: Pass order to __alloc_pages_cpuset_fallback in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (10 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 11/15] mm: Pass order to get_page_from_freelist " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 13/15] mm: Pass order to prepare_alloc_pages " Matthew Wilcox
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Matches the change to the __alloc_pages_nodemask API.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 38211bc541a7..d4ac38780e44 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3697,8 +3697,7 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 }
 
 static inline struct page *
-__alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
-			      unsigned int alloc_flags,
+__alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int alloc_flags,
 			      const struct alloc_context *ac)
 {
 	struct page *page;
@@ -3794,7 +3793,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		 * reserves
 		 */
 		if (gfp_mask & __GFP_NOFAIL)
-			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
+			page = __alloc_pages_cpuset_fallback(gfp_mask,
 					ALLOC_NO_WATERMARKS, ac);
 	}
 out:
@@ -4556,7 +4555,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * could deplete whole memory reserves which would just make
 		 * the situation worse
 		 */
-		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
+		page = __alloc_pages_cpuset_fallback(gfp_mask, ALLOC_HARDER, ac);
 		if (page)
 			goto got_pg;
 
-- 
2.20.1



* [PATCH v2 13/15] mm: Pass order to prepare_alloc_pages in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (11 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 12/15] mm: Pass order to __alloc_pages_cpuset_fallback " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 13:50 ` [PATCH v2 14/15] mm: Pass order to try_to_free_pages " Matthew Wilcox
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Also pass the order to should_fail_alloc_page() in the GFP flags;
prepare_alloc_pages() only used its order argument to pass it on to
should_fail_alloc_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
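
Condensed from the hunks below, the only place prepare_alloc_pages() used
its order argument:

	/* before: order forwarded as a separate parameter */
	if (should_fail_alloc_page(gfp_mask, order))
		return false;

	/* after: the order already rides in gfp_mask */
	if (should_fail_alloc_page(gfp_mask))
		return false;
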
 mm/page_alloc.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d4ac38780e44..d457dfa8a0ac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3262,8 +3262,9 @@ static int __init setup_fail_page_alloc(char *str)
 }
 __setup("fail_page_alloc=", setup_fail_page_alloc);
 
-static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+static bool __should_fail_alloc_page(gfp_t gfp_mask)
 {
+	unsigned int order = gfp_order(gfp_mask);
 	if (order < fail_page_alloc.min_order)
 		return false;
 	if (gfp_mask & __GFP_NOFAIL)
@@ -3302,16 +3303,16 @@ late_initcall(fail_page_alloc_debugfs);
 
 #else /* CONFIG_FAIL_PAGE_ALLOC */
 
-static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+static inline bool __should_fail_alloc_page(gfp_t gfp_mask)
 {
 	return false;
 }
 
 #endif /* CONFIG_FAIL_PAGE_ALLOC */
 
-static noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+static noinline bool should_fail_alloc_page(gfp_t gfp_mask)
 {
-	return __should_fail_alloc_page(gfp_mask, order);
+	return __should_fail_alloc_page(gfp_mask);
 }
 ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
 
@@ -4569,7 +4570,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	return page;
 }
 
-static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
+static inline bool prepare_alloc_pages(gfp_t gfp_mask,
 		int preferred_nid, nodemask_t *nodemask,
 		struct alloc_context *ac, gfp_t *alloc_mask,
 		unsigned int *alloc_flags)
@@ -4592,7 +4593,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
 
-	if (should_fail_alloc_page(gfp_mask, order))
+	if (should_fail_alloc_page(gfp_mask))
 		return false;
 
 	if (IS_ENABLED(CONFIG_CMA) && ac->migratetype == MIGRATE_MOVABLE)
@@ -4639,7 +4640,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, int preferred_nid, nodemask_t *nodemask)
 
 	gfp_mask &= gfp_allowed_mask;
 	alloc_mask = gfp_mask;
-	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
+	if (!prepare_alloc_pages(gfp_mask, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
 
 	finalise_ac(gfp_mask, &ac);
-- 
2.20.1



* [PATCH v2 14/15] mm: Pass order to try_to_free_pages in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (12 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 13/15] mm: Pass order to prepare_alloc_pages " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 23:26   ` Ira Weiny
  2019-05-10 13:50 ` [PATCH v2 15/15] mm: Pass order to node_reclaim() " Matthew Wilcox
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Also remove the order argument from __perform_reclaim() and
__alloc_pages_direct_reclaim(), which only passed the argument down.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
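
Condensed from the hunks below: scan_control and the reclaim tracepoints
now recompute the order from the mask instead of taking it separately:

	/* before */
	.order = order,
	trace_mm_vmscan_direct_reclaim_begin(order, sc.gfp_mask);

	/* after */
	.order = gfp_order(gfp_mask),
	trace_mm_vmscan_direct_reclaim_begin(sc.gfp_mask);
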
 include/linux/swap.h          |  2 +-
 include/trace/events/vmscan.h | 20 +++++++++-----------
 mm/page_alloc.c               | 15 ++++++---------
 mm/vmscan.c                   | 13 ++++++-------
 4 files changed, 22 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4bfb5c4ac108..029737fec38b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -348,7 +348,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 
 /* linux/mm/vmscan.c */
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
-extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
+extern unsigned long try_to_free_pages(struct zonelist *zonelist,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index a5ab2973e8dc..a6b1b20333b4 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -100,45 +100,43 @@ TRACE_EVENT(mm_vmscan_wakeup_kswapd,
 
 DECLARE_EVENT_CLASS(mm_vmscan_direct_reclaim_begin_template,
 
-	TP_PROTO(int order, gfp_t gfp_flags),
+	TP_PROTO(gfp_t gfp_flags),
 
-	TP_ARGS(order, gfp_flags),
+	TP_ARGS(gfp_flags),
 
 	TP_STRUCT__entry(
-		__field(	int,	order		)
 		__field(	gfp_t,	gfp_flags	)
 	),
 
 	TP_fast_assign(
-		__entry->order		= order;
 		__entry->gfp_flags	= gfp_flags;
 	),
 
 	TP_printk("order=%d gfp_flags=%s",
-		__entry->order,
+		gfp_order(__entry->gfp_flags),
 		show_gfp_flags(__entry->gfp_flags))
 );
 
 DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_direct_reclaim_begin,
 
-	TP_PROTO(int order, gfp_t gfp_flags),
+	TP_PROTO(gfp_t gfp_flags),
 
-	TP_ARGS(order, gfp_flags)
+	TP_ARGS(gfp_flags)
 );
 
 #ifdef CONFIG_MEMCG
 DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_reclaim_begin,
 
-	TP_PROTO(int order, gfp_t gfp_flags),
+	TP_PROTO(gfp_t gfp_flags),
 
-	TP_ARGS(order, gfp_flags)
+	TP_ARGS(gfp_flags)
 );
 
 DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_softlimit_reclaim_begin,
 
-	TP_PROTO(int order, gfp_t gfp_flags),
+	TP_PROTO(gfp_t gfp_flags),
 
-	TP_ARGS(order, gfp_flags)
+	TP_ARGS(gfp_flags)
 );
 #endif /* CONFIG_MEMCG */
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d457dfa8a0ac..29daaf4ae4fb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4024,9 +4024,7 @@ EXPORT_SYMBOL_GPL(fs_reclaim_release);
 #endif
 
 /* Perform direct synchronous page reclaim */
-static int
-__perform_reclaim(gfp_t gfp_mask, unsigned int order,
-					const struct alloc_context *ac)
+static int __perform_reclaim(gfp_t gfp_mask, const struct alloc_context *ac)
 {
 	struct reclaim_state reclaim_state;
 	int progress;
@@ -4043,8 +4041,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 	reclaim_state.reclaimed_slab = 0;
 	current->reclaim_state = &reclaim_state;
 
-	progress = try_to_free_pages(ac->zonelist, order, gfp_mask,
-								ac->nodemask);
+	progress = try_to_free_pages(ac->zonelist, gfp_mask, ac->nodemask);
 
 	current->reclaim_state = NULL;
 	memalloc_noreclaim_restore(noreclaim_flag);
@@ -4058,14 +4055,14 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 
 /* The really slow allocator path where we enter direct reclaim */
 static inline struct page *
-__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
+__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int alloc_flags,
+		const struct alloc_context *ac,
 		unsigned long *did_some_progress)
 {
 	struct page *page = NULL;
 	bool drained = false;
 
-	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
+	*did_some_progress = __perform_reclaim(gfp_mask, ac);
 	if (unlikely(!(*did_some_progress)))
 		return NULL;
 
@@ -4458,7 +4455,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto nopage;
 
 	/* Try direct reclaim and then allocating */
-	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
+	page = __alloc_pages_direct_reclaim(gfp_mask, alloc_flags, ac,
 							&did_some_progress);
 	if (page)
 		goto got_pg;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d9c3e873eca6..e4d4d9c1d7a9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3182,15 +3182,15 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
 	return false;
 }
 
-unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
-				gfp_t gfp_mask, nodemask_t *nodemask)
+unsigned long try_to_free_pages(struct zonelist *zonelist, gfp_t gfp_mask,
+		nodemask_t *nodemask)
 {
 	unsigned long nr_reclaimed;
 	struct scan_control sc = {
 		.nr_to_reclaim = SWAP_CLUSTER_MAX,
 		.gfp_mask = current_gfp_context(gfp_mask),
 		.reclaim_idx = gfp_zone(gfp_mask),
-		.order = order,
+		.order = gfp_order(gfp_mask),
 		.nodemask = nodemask,
 		.priority = DEF_PRIORITY,
 		.may_writepage = !laptop_mode,
@@ -3215,7 +3215,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 	if (throttle_direct_reclaim(sc.gfp_mask, zonelist, nodemask))
 		return 1;
 
-	trace_mm_vmscan_direct_reclaim_begin(order, sc.gfp_mask);
+	trace_mm_vmscan_direct_reclaim_begin(sc.gfp_mask);
 
 	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
 
@@ -3244,8 +3244,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
 
-	trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.order,
-						      sc.gfp_mask);
+	trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.gfp_mask);
 
 	/*
 	 * NOTE: Although we can get the priority field, using it
@@ -3294,7 +3293,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 
 	zonelist = &NODE_DATA(nid)->node_zonelists[ZONELIST_FALLBACK];
 
-	trace_mm_vmscan_memcg_reclaim_begin(0, sc.gfp_mask);
+	trace_mm_vmscan_memcg_reclaim_begin(sc.gfp_mask);
 
 	psi_memstall_enter(&pflags);
 	noreclaim_flag = memalloc_noreclaim_save();
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread
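
The gfp_order() helper used in the hunks above comes from an earlier patch
in this series; the include/linux/gfp.h side is not quoted in this part of
the thread.  A minimal sketch of the idea, assuming the order is stored in
the otherwise unused bits above __GFP_BITS_SHIFT (the __GFP_ORDER() name
and the exact layout below are assumptions for illustration, not the
posted code):

	/* Assumed encoding: carry the order in the high bits of gfp_t. */
	#define __GFP_ORDER(order)	((__force gfp_t)((order) << __GFP_BITS_SHIFT))

	static inline unsigned int gfp_order(gfp_t gfp)
	{
		/* A plain mask such as GFP_KERNEL decodes to order 0. */
		return (__force unsigned int)gfp >> __GFP_BITS_SHIFT;
	}

With an encoding along these lines, existing order-0 callers pass their
GFP mask unchanged, while higher-order callers OR in __GFP_ORDER(n)
instead of threading a separate argument down the call chain.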

* [PATCH v2 15/15] mm: Pass order to node_reclaim() in GFP flags
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (13 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 14/15] mm: Pass order to try_to_free_pages " Matthew Wilcox
@ 2019-05-10 13:50 ` Matthew Wilcox
  2019-05-10 23:30 ` [PATCH v2 00/15] Remove 'order' argument from many mm functions Ira Weiny
  2019-05-13 10:51 ` Michal Hocko
  16 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-10 13:50 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/trace/events/vmscan.h |  8 +++-----
 mm/internal.h                 |  5 ++---
 mm/page_alloc.c               |  2 +-
 mm/vmscan.c                   | 13 ++++++-------
 4 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index a6b1b20333b4..2714d9ef54e6 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -460,25 +460,23 @@ TRACE_EVENT(mm_vmscan_inactive_list_is_low,
 
 TRACE_EVENT(mm_vmscan_node_reclaim_begin,
 
-	TP_PROTO(int nid, int order, gfp_t gfp_flags),
+	TP_PROTO(int nid, gfp_t gfp_flags),
 
-	TP_ARGS(nid, order, gfp_flags),
+	TP_ARGS(nid, gfp_flags),
 
 	TP_STRUCT__entry(
 		__field(int, nid)
-		__field(int, order)
 		__field(gfp_t, gfp_flags)
 	),
 
 	TP_fast_assign(
 		__entry->nid = nid;
-		__entry->order = order;
 		__entry->gfp_flags = gfp_flags;
 	),
 
 	TP_printk("nid=%d order=%d gfp_flags=%s",
 		__entry->nid,
-		__entry->order,
+		gfp_order(__entry->gfp_flags),
 		show_gfp_flags(__entry->gfp_flags))
 );
 
diff --git a/mm/internal.h b/mm/internal.h
index 9eeaf2b95166..353cefdc3f34 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -457,10 +457,9 @@ static inline void mminit_validate_memmodel_limits(unsigned long *start_pfn,
 #define NODE_RECLAIM_SUCCESS	1
 
 #ifdef CONFIG_NUMA
-extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int);
+extern int node_reclaim(struct pglist_data *, gfp_t);
 #else
-static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
-				unsigned int order)
+static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask)
 {
 	return NODE_RECLAIM_NOSCAN;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29daaf4ae4fb..5365ee2e8c0b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3595,7 +3595,7 @@ get_page_from_freelist(gfp_t gfp_mask, int alloc_flags,
 			    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
 				continue;
 
-			ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
+			ret = node_reclaim(zone->zone_pgdat, gfp_mask);
 			switch (ret) {
 			case NODE_RECLAIM_NOSCAN:
 				/* did not scan */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e4d4d9c1d7a9..b7f141de9814 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4124,17 +4124,17 @@ static unsigned long node_pagecache_reclaimable(struct pglist_data *pgdat)
 /*
  * Try to free up some pages from this node through reclaim.
  */
-static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
+static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask)
 {
 	/* Minimum pages needed in order to stay on node */
-	const unsigned long nr_pages = 1 << order;
+	const unsigned long nr_pages = 1UL << gfp_order(gfp_mask);
 	struct task_struct *p = current;
 	struct reclaim_state reclaim_state;
 	unsigned int noreclaim_flag;
 	struct scan_control sc = {
 		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
 		.gfp_mask = current_gfp_context(gfp_mask),
-		.order = order,
+		.order = gfp_order(gfp_mask),
 		.priority = NODE_RECLAIM_PRIORITY,
 		.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
 		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
@@ -4142,8 +4142,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 		.reclaim_idx = gfp_zone(gfp_mask),
 	};
 
-	trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, order,
-					   sc.gfp_mask);
+	trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, sc.gfp_mask);
 
 	cond_resched();
 	fs_reclaim_acquire(sc.gfp_mask);
@@ -4177,7 +4176,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 	return sc.nr_reclaimed >= nr_pages;
 }
 
-int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
+int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask)
 {
 	int ret;
 
@@ -4213,7 +4212,7 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
 	if (test_and_set_bit(PGDAT_RECLAIM_LOCKED, &pgdat->flags))
 		return NODE_RECLAIM_NOSCAN;
 
-	ret = __node_reclaim(pgdat, gfp_mask, order);
+	ret = __node_reclaim(pgdat, gfp_mask);
 	clear_bit(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
 
 	if (!ret)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 14/15] mm: Pass order to try_to_free_pages in GFP flags
  2019-05-10 13:50 ` [PATCH v2 14/15] mm: Pass order to try_to_free_pages " Matthew Wilcox
@ 2019-05-10 23:26   ` Ira Weiny
  0 siblings, 0 replies; 21+ messages in thread
From: Ira Weiny @ 2019-05-10 23:26 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm

On Fri, May 10, 2019 at 06:50:37AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Also remove the order argument from __perform_reclaim() and
> __alloc_pages_direct_reclaim() which only passed the argument down.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/swap.h          |  2 +-
>  include/trace/events/vmscan.h | 20 +++++++++-----------
>  mm/page_alloc.c               | 15 ++++++---------
>  mm/vmscan.c                   | 13 ++++++-------
>  4 files changed, 22 insertions(+), 28 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 4bfb5c4ac108..029737fec38b 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -348,7 +348,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
>  
>  /* linux/mm/vmscan.c */
>  extern unsigned long zone_reclaimable_pages(struct zone *zone);
> -extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> +extern unsigned long try_to_free_pages(struct zonelist *zonelist,
>  					gfp_t gfp_mask, nodemask_t *mask);
>  extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
>  extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
> index a5ab2973e8dc..a6b1b20333b4 100644
> --- a/include/trace/events/vmscan.h
> +++ b/include/trace/events/vmscan.h
> @@ -100,45 +100,43 @@ TRACE_EVENT(mm_vmscan_wakeup_kswapd,
>  
>  DECLARE_EVENT_CLASS(mm_vmscan_direct_reclaim_begin_template,
>  
> -	TP_PROTO(int order, gfp_t gfp_flags),
> +	TP_PROTO(gfp_t gfp_flags),
>  
> -	TP_ARGS(order, gfp_flags),
> +	TP_ARGS(gfp_flags),
>  
>  	TP_STRUCT__entry(
> -		__field(	int,	order		)
>  		__field(	gfp_t,	gfp_flags	)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->order		= order;
>  		__entry->gfp_flags	= gfp_flags;
>  	),
>  
>  	TP_printk("order=%d gfp_flags=%s",
> -		__entry->order,
> +		gfp_order(__entry->gfp_flags),
>  		show_gfp_flags(__entry->gfp_flags))
>  );
>  
>  DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_direct_reclaim_begin,
>  
> -	TP_PROTO(int order, gfp_t gfp_flags),
> +	TP_PROTO(gfp_t gfp_flags),
>  
> -	TP_ARGS(order, gfp_flags)
> +	TP_ARGS(gfp_flags)
>  );
>  
>  #ifdef CONFIG_MEMCG
>  DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_reclaim_begin,
>  
> -	TP_PROTO(int order, gfp_t gfp_flags),
> +	TP_PROTO(gfp_t gfp_flags),
>  
> -	TP_ARGS(order, gfp_flags)
> +	TP_ARGS(gfp_flags)
>  );
>  
>  DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_softlimit_reclaim_begin,
>  
> -	TP_PROTO(int order, gfp_t gfp_flags),
> +	TP_PROTO(gfp_t gfp_flags),
>  
> -	TP_ARGS(order, gfp_flags)
> +	TP_ARGS(gfp_flags)
>  );
>  #endif /* CONFIG_MEMCG */
>  
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d457dfa8a0ac..29daaf4ae4fb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4024,9 +4024,7 @@ EXPORT_SYMBOL_GPL(fs_reclaim_release);
>  #endif
>  
>  /* Perform direct synchronous page reclaim */
> -static int
> -__perform_reclaim(gfp_t gfp_mask, unsigned int order,
> -					const struct alloc_context *ac)
> +static int __perform_reclaim(gfp_t gfp_mask, const struct alloc_context *ac)
>  {
>  	struct reclaim_state reclaim_state;
>  	int progress;
> @@ -4043,8 +4041,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  	reclaim_state.reclaimed_slab = 0;
>  	current->reclaim_state = &reclaim_state;
>  
> -	progress = try_to_free_pages(ac->zonelist, order, gfp_mask,
> -								ac->nodemask);
> +	progress = try_to_free_pages(ac->zonelist, gfp_mask, ac->nodemask);
>  
>  	current->reclaim_state = NULL;
>  	memalloc_noreclaim_restore(noreclaim_flag);
> @@ -4058,14 +4055,14 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  
>  /* The really slow allocator path where we enter direct reclaim */
>  static inline struct page *
> -__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> +__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int alloc_flags,
> +		const struct alloc_context *ac,
>  		unsigned long *did_some_progress)
>  {
>  	struct page *page = NULL;
>  	bool drained = false;
>  
> -	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
> +	*did_some_progress = __perform_reclaim(gfp_mask, ac);
>  	if (unlikely(!(*did_some_progress)))
>  		return NULL;
>  
> @@ -4458,7 +4455,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		goto nopage;
>  
>  	/* Try direct reclaim and then allocating */
> -	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
> +	page = __alloc_pages_direct_reclaim(gfp_mask, alloc_flags, ac,
>  							&did_some_progress);
>  	if (page)
>  		goto got_pg;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d9c3e873eca6..e4d4d9c1d7a9 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3182,15 +3182,15 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
>  	return false;
>  }
>  
> -unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> -				gfp_t gfp_mask, nodemask_t *nodemask)
> +unsigned long try_to_free_pages(struct zonelist *zonelist, gfp_t gfp_mask,
> +		nodemask_t *nodemask)
>  {
>  	unsigned long nr_reclaimed;
>  	struct scan_control sc = {
>  		.nr_to_reclaim = SWAP_CLUSTER_MAX,
>  		.gfp_mask = current_gfp_context(gfp_mask),
>  		.reclaim_idx = gfp_zone(gfp_mask),
> -		.order = order,
> +		.order = gfp_order(gfp_mask),

NIT: Could we remove order from scan_control?

Ira



^ permalink raw reply	[flat|nested] 21+ messages in thread
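
Ira's NIT above asks whether scan_control still needs its own order field
now that the order travels in the gfp mask.  A hypothetical sketch of that
direction (not a patch from this thread; the helper name is made up for
illustration):

	/*
	 * scan_control already holds the gfp mask, so consumers of
	 * sc->order could derive the order on demand instead.
	 */
	static inline unsigned int sc_order(const struct scan_control *sc)
	{
		return gfp_order(sc->gfp_mask);
	}

Each sc->order user in mm/vmscan.c would then call sc_order(sc), or
gfp_order(sc->gfp_mask) directly, and the field itself could be removed.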

* Re: [PATCH v2 00/15] Remove 'order' argument from many mm functions
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (14 preceding siblings ...)
  2019-05-10 13:50 ` [PATCH v2 15/15] mm: Pass order to node_reclaim() " Matthew Wilcox
@ 2019-05-10 23:30 ` Ira Weiny
  2019-05-13 10:51 ` Michal Hocko
  16 siblings, 0 replies; 21+ messages in thread
From: Ira Weiny @ 2019-05-10 23:30 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm

On Fri, May 10, 2019 at 06:50:23AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> This is a little more serious attempt than v1, since nobody seems opposed
> to the concept of using GFP flags to pass the order around.  I've split
> it up a bit better, and I've reversed the arguments of __alloc_pages_node
> to match the order of the arguments to other functions in the same family.
> alloc_pages_node() needs the same treatment, but there's about 70 callers,
> so I'm going to skip it for now.
> 
> This is against current -mm.  I'm seeing a text saving of 482 bytes from
> a tinyconfig vmlinux (1003785 reduced to 1003303).  There are more
> savings to be had by combining together order and the gfp flags, for
> example in the scan_control data structure.
> 
> I think there are also cognitive savings to be had from eliminating
> some of the function variants which exist solely to take an 'order'.
> 
> Matthew Wilcox (Oracle) (15):

For the series:

Reviewed-by: Ira Weiny <ira.weiny@intel.com>



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 00/15] Remove 'order' argument from many mm functions
  2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
                   ` (15 preceding siblings ...)
  2019-05-10 23:30 ` [PATCH v2 00/15] Remove 'order' argument from many mm functions Ira Weiny
@ 2019-05-13 10:51 ` Michal Hocko
  2019-05-13 11:21   ` Matthew Wilcox
  16 siblings, 1 reply; 21+ messages in thread
From: Michal Hocko @ 2019-05-13 10:51 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm

On Fri 10-05-19 06:50:23, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> This is a little more serious attempt than v1, since nobody seems opposed
> to the concept of using GFP flags to pass the order around.  I've split
> it up a bit better, and I've reversed the arguments of __alloc_pages_node
> to match the order of the arguments to other functions in the same family.
> alloc_pages_node() needs the same treatment, but there's about 70 callers,
> so I'm going to skip it for now.
> 
> This is against current -mm.  I'm seeing a text saving of 482 bytes from
> a tinyconfig vmlinux (1003785 reduced to 1003303).  There are more
> savings to be had by combining together order and the gfp flags, for
> example in the scan_control data structure.

So what is the primary objective here? Reduce the code size? Reduce
register pressure? Please tell us more about why changing the core
allocator API and making it more subtle is worth it.


-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 00/15] Remove 'order' argument from many mm functions
  2019-05-13 10:51 ` Michal Hocko
@ 2019-05-13 11:21   ` Matthew Wilcox
  2019-05-13 11:42     ` Michal Hocko
  0 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2019-05-13 11:21 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm

On Mon, May 13, 2019 at 12:51:38PM +0200, Michal Hocko wrote:
> On Fri 10-05-19 06:50:23, Matthew Wilcox wrote:
> > This is a little more serious attempt than v1, since nobody seems opposed
> > to the concept of using GFP flags to pass the order around.  I've split
> > it up a bit better, and I've reversed the arguments of __alloc_pages_node
> > to match the order of the arguments to other functions in the same family.
> > alloc_pages_node() needs the same treatment, but there's about 70 callers,
> > so I'm going to skip it for now.
> > 
> > This is against current -mm.  I'm seeing a text saving of 482 bytes from
> > a tinyconfig vmlinux (1003785 reduced to 1003303).  There are more
> > savings to be had by combining together order and the gfp flags, for
> > example in the scan_control data structure.
> 
> So what is the primary objective here? Reduce the code size? Reduce
> register pressure? Please tell us more about why changing the core
> allocator API and making it more subtle is worth it.

The primary objective here is to avoid adding an 'order' parameter to
pagecache_get_page().  I don't think it makes the API more subtle; I see
it as just as fundamental to the allocation API as any of the other GFP
flags.
It's a change, to be sure, but I think it's a worthwhile one.


^ permalink raw reply	[flat|nested] 21+ messages in thread
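
To make the pagecache_get_page() point concrete: that function already
takes a gfp_t (mapping, offset, fgp_flags, gfp_mask), so a caller wanting
a larger page-cache allocation could express the size through the existing
four arguments rather than a new fifth one.  A hypothetical caller, using
the assumed __GFP_ORDER() helper sketched earlier on this page (teaching
the page cache internals to actually honour the order would be follow-up
work beyond this series):

	/* Hypothetical illustration only; not code from this thread. */
	static struct page *grab_pmd_sized_page(struct address_space *mapping,
						pgoff_t offset)
	{
		return pagecache_get_page(mapping, offset, FGP_LOCK | FGP_CREAT,
				mapping_gfp_mask(mapping) |
				__GFP_ORDER(HPAGE_PMD_ORDER));
	}

The prototype itself stays as it is, which is the point: no new 'order'
parameter has to be threaded through the page cache API.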

* Re: [PATCH v2 00/15] Remove 'order' argument from many mm functions
  2019-05-13 11:21   ` Matthew Wilcox
@ 2019-05-13 11:42     ` Michal Hocko
  0 siblings, 0 replies; 21+ messages in thread
From: Michal Hocko @ 2019-05-13 11:42 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm

On Mon 13-05-19 04:21:07, Matthew Wilcox wrote:
> On Mon, May 13, 2019 at 12:51:38PM +0200, Michal Hocko wrote:
> > On Fri 10-05-19 06:50:23, Matthew Wilcox wrote:
> > > This is a little more serious attempt than v1, since nobody seems opposed
> > > to the concept of using GFP flags to pass the order around.  I've split
> > > it up a bit better, and I've reversed the arguments of __alloc_pages_node
> > > to match the order of the arguments to other functions in the same family.
> > > alloc_pages_node() needs the same treatment, but there's about 70 callers,
> > > so I'm going to skip it for now.
> > > 
> > > This is against current -mm.  I'm seeing a text saving of 482 bytes from
> > > a tinyconfig vmlinux (1003785 reduced to 1003303).  There are more
> > > savings to be had by combining together order and the gfp flags, for
> > > example in the scan_control data structure.
> > 
> > So what is the primary objective here? Reduce the code size? Reduce
> > register pressure? Please tell us more about why changing the core
> > allocator API and making it more subtle is worth it.
> 
> The primary objective here is to avoid adding an 'order' parameter to
> pagecache_get_page().

It would be great to state that explicitly in the changelog, because
then there is a clear goal that we can weigh the change against.

> I don't think it makes the API more subtle; I see
> it as just as fundamental to the allocation API as any of the other GFP
> flags.

Well, that really depends on how you look at it. Size, allocation
restrictions and NUMA placement can be viewed as orthogonal attributes
of the allocation. On the other hand, the vast majority of callers only
care about order-0 requests, and that is where you get the most out of
the change, so it makes some sense to me as well. I can imagine that
this can optimize some code paths nicely.

That being said, I am not really opposing this change; I would just
appreciate it if you gave us the full picture of where the motivation
comes from.

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2019-05-13 11:42 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-10 13:50 [PATCH v2 00/15] Remove 'order' argument from many mm functions Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 01/15] mm: Remove gfp_flags argument from rmqueue_pcplist Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 02/15] mm: Pass order to __alloc_pages_nodemask in GFP flags Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 03/15] mm: Pass order to __alloc_pages " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 04/15] mm: Pass order to alloc_page_interleave " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 05/15] mm: Pass order to alloc_pages_current " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 06/15] mm: Pass order to alloc_pages_vma " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 07/15] mm: Pass order to __alloc_pages_node " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 08/15] mm: Pass order to __get_free_page " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 09/15] mm: Pass order to prep_new_page " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 10/15] mm: Pass order to rmqueue " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 11/15] mm: Pass order to get_page_from_freelist " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 12/15] mm: Pass order to __alloc_pages_cpuset_fallback " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 13/15] mm: Pass order to prepare_alloc_pages " Matthew Wilcox
2019-05-10 13:50 ` [PATCH v2 14/15] mm: Pass order to try_to_free_pages " Matthew Wilcox
2019-05-10 23:26   ` Ira Weiny
2019-05-10 13:50 ` [PATCH v2 15/15] mm: Pass order to node_reclaim() " Matthew Wilcox
2019-05-10 23:30 ` [PATCH v2 00/15] Remove 'order' argument from many mm functions Ira Weiny
2019-05-13 10:51 ` Michal Hocko
2019-05-13 11:21   ` Matthew Wilcox
2019-05-13 11:42     ` Michal Hocko
