* [PATCH v2 0/6] Rationalise __alloc_pages wrappers
@ 2021-02-15 21:01 Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:01 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Mike Rapoport
I was poking around the __alloc_pages variants trying to understand
why they each exist, and couldn't really find a good justification for
keeping __alloc_pages and __alloc_pages_nodemask as separate functions.
That led to getting rid of alloc_pages_current() and then I noticed the
documentation was bad, and then I noticed it wasn't included at all ...
anyway, this is all cleanups & doc fixes.
v2:
- Added acks from Vlastimil (patches 2-4) & Michal (patches 3-4)
- Added patches 1, 5 and 6
Matthew Wilcox (Oracle) (6):
mm/page_alloc: Rename alloc_mask to alloc_gfp
mm/page_alloc: Rename gfp_mask to gfp
mm/page_alloc: Combine __alloc_pages and __alloc_pages_nodemask
mm/mempolicy: Rename alloc_pages_current to alloc_pages
mm/mempolicy: Rewrite alloc_pages documentation
mm/mempolicy: Fix mpol_misplaced kernel-doc
Documentation/admin-guide/mm/transhuge.rst | 2 +-
Documentation/core-api/mm-api.rst | 1 +
include/linux/gfp.h | 21 +++--------
mm/hugetlb.c | 2 +-
mm/internal.h | 4 +--
mm/mempolicy.c | 42 ++++++++++------------
mm/migrate.c | 2 +-
mm/page_alloc.c | 34 +++++++++---------
8 files changed, 46 insertions(+), 62 deletions(-)
--
2.29.2
* [PATCH v2 1/6] mm/page_alloc: Rename alloc_mask to alloc_gfp
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:01 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Mike Rapoport
We have two masks involved -- the nodemask and the gfp mask -- so
alloc_mask is an unclear name.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/page_alloc.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b55c9c95364..8100f9d123a8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4916,7 +4916,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
int preferred_nid, nodemask_t *nodemask,
- struct alloc_context *ac, gfp_t *alloc_mask,
+ struct alloc_context *ac, gfp_t *alloc_gfp,
unsigned int *alloc_flags)
{
ac->highest_zoneidx = gfp_zone(gfp_mask);
@@ -4925,7 +4925,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
ac->migratetype = gfp_migratetype(gfp_mask);
if (cpusets_enabled()) {
- *alloc_mask |= __GFP_HARDWALL;
+ *alloc_gfp |= __GFP_HARDWALL;
/*
* When we are in the interrupt context, it is irrelevant
* to the current task context. It means that any node ok.
@@ -4969,7 +4969,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
{
struct page *page;
unsigned int alloc_flags = ALLOC_WMARK_LOW;
- gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
+ gfp_t alloc_gfp; /* The gfp_t that was actually used for allocation */
struct alloc_context ac = { };
/*
@@ -4982,8 +4982,9 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
}
gfp_mask &= gfp_allowed_mask;
- alloc_mask = gfp_mask;
- if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
+ alloc_gfp = gfp_mask;
+ if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac,
+ &alloc_gfp, &alloc_flags))
return NULL;
/*
@@ -4993,7 +4994,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);
/* First allocation attempt */
- page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
+ page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
if (likely(page))
goto out;
@@ -5003,7 +5004,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
* from a particular context which has been marked by
* memalloc_no{fs,io}_{save,restore}.
*/
- alloc_mask = current_gfp_context(gfp_mask);
+ alloc_gfp = current_gfp_context(gfp_mask);
ac.spread_dirty_pages = false;
/*
@@ -5012,7 +5013,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
*/
ac.nodemask = nodemask;
- page = __alloc_pages_slowpath(alloc_mask, order, &ac);
+ page = __alloc_pages_slowpath(alloc_gfp, order, &ac);
out:
if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
@@ -5021,7 +5022,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
page = NULL;
}
- trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
+ trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
return page;
}
--
2.29.2
* [PATCH v2 2/6] mm/page_alloc: Rename gfp_mask to gfp
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:01 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), linux-mm, Mike Rapoport, Vlastimil Babka
Shorten some overly-long lines by renaming this identifier.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/page_alloc.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8100f9d123a8..9e4841ede0f4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4964,7 +4964,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
* This is the 'heart' of the zoned buddy allocator.
*/
struct page *
-__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
+__alloc_pages_nodemask(gfp_t gfp, unsigned int order, int preferred_nid,
nodemask_t *nodemask)
{
struct page *page;
@@ -4977,13 +4977,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
* so bail out early if the request is out of bound.
*/
if (unlikely(order >= MAX_ORDER)) {
- WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
+ WARN_ON_ONCE(!(gfp & __GFP_NOWARN));
return NULL;
}
- gfp_mask &= gfp_allowed_mask;
- alloc_gfp = gfp_mask;
- if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac,
+ gfp &= gfp_allowed_mask;
+ alloc_gfp = gfp;
+ if (!prepare_alloc_pages(gfp, order, preferred_nid, nodemask, &ac,
&alloc_gfp, &alloc_flags))
return NULL;
@@ -4991,7 +4991,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
* Forbid the first pass from falling back to types that fragment
* memory until all local zones are considered.
*/
- alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);
+ alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp);
/* First allocation attempt */
page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
@@ -5004,7 +5004,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
* from a particular context which has been marked by
* memalloc_no{fs,io}_{save,restore}.
*/
- alloc_gfp = current_gfp_context(gfp_mask);
+ alloc_gfp = current_gfp_context(gfp);
ac.spread_dirty_pages = false;
/*
@@ -5016,8 +5016,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
page = __alloc_pages_slowpath(alloc_gfp, order, &ac);
out:
- if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
- unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
+ if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT) && page &&
+ unlikely(__memcg_kmem_charge_page(page, gfp, order) != 0)) {
__free_pages(page, order);
page = NULL;
}
--
2.29.2
* [PATCH v2 3/6] mm/page_alloc: Combine __alloc_pages and __alloc_pages_nodemask
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:02 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle),
linux-mm, Mike Rapoport, Vlastimil Babka, Michal Hocko
There are only two callers of __alloc_pages(), so prune the thicket of
alloc_page variants by combining the two functions. Current
callers of __alloc_pages() simply add an extra 'NULL' parameter and
current callers of __alloc_pages_nodemask() call __alloc_pages() instead.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
---
Documentation/admin-guide/mm/transhuge.rst | 2 +-
include/linux/gfp.h | 13 +++----------
mm/hugetlb.c | 2 +-
mm/internal.h | 4 ++--
mm/mempolicy.c | 6 +++---
mm/migrate.c | 2 +-
mm/page_alloc.c | 5 ++---
7 files changed, 13 insertions(+), 21 deletions(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 3b8a336511a4..c9c37f16eef8 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -402,7 +402,7 @@ compact_fail
but failed.
It is possible to establish how long the stalls were using the function
-tracer to record how long was spent in __alloc_pages_nodemask and
+tracer to record how long was spent in __alloc_pages() and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index ecd1b5d27936..012dc86923b3 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -512,15 +512,8 @@ static inline int arch_make_page_accessible(struct page *page)
}
#endif
-struct page *
-__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
- nodemask_t *nodemask);
-
-static inline struct page *
-__alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
-{
- return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
-}
+struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
+ nodemask_t *nodemask);
/*
* Allocate pages, preferring the node given as nid. The node must be valid and
@@ -532,7 +525,7 @@ __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
- return __alloc_pages(gfp_mask, order, nid);
+ return __alloc_pages(gfp_mask, order, nid, NULL);
}
/*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b6992297aa16..05ac34febc71 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1589,7 +1589,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
gfp_mask |= __GFP_RETRY_MAYFAIL;
if (nid == NUMA_NO_NODE)
nid = numa_mem_id();
- page = __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
+ page = __alloc_pages(gfp_mask, order, nid, nmask);
if (page)
__count_vm_event(HTLB_BUDDY_PGALLOC);
else
diff --git a/mm/internal.h b/mm/internal.h
index 9902648f2206..0c593c142175 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -126,10 +126,10 @@ extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
* family of functions.
*
* nodemask, migratetype and highest_zoneidx are initialized only once in
- * __alloc_pages_nodemask() and then never change.
+ * __alloc_pages() and then never change.
*
* zonelist, preferred_zone and highest_zoneidx are set first in
- * __alloc_pages_nodemask() for the fast path, and might be later changed
+ * __alloc_pages() for the fast path, and might be later changed
* in __alloc_pages_slowpath(). All other functions pass the whole structure
* by a const pointer.
*/
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ab51132547b8..5f0d20298736 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2140,7 +2140,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
{
struct page *page;
- page = __alloc_pages(gfp, order, nid);
+ page = __alloc_pages(gfp, order, nid, NULL);
/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
if (!static_branch_likely(&vm_numa_stat_key))
return page;
@@ -2237,7 +2237,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
nmask = policy_nodemask(gfp, pol);
preferred_nid = policy_node(gfp, pol, node);
- page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
+ page = __alloc_pages(gfp, order, preferred_nid, nmask);
mpol_cond_put(pol);
out:
return page;
@@ -2274,7 +2274,7 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
if (pol->mode == MPOL_INTERLEAVE)
page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
else
- page = __alloc_pages_nodemask(gfp, order,
+ page = __alloc_pages(gfp, order,
policy_node(gfp, pol, numa_node_id()),
policy_nodemask(gfp, pol));
diff --git a/mm/migrate.c b/mm/migrate.c
index 62b81d5257aa..47df0df8f21a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1617,7 +1617,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
gfp_mask |= __GFP_HIGHMEM;
- new_page = __alloc_pages_nodemask(gfp_mask, order, nid, mtc->nmask);
+ new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
if (new_page && PageTransHuge(new_page))
prep_transhuge_page(new_page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9e4841ede0f4..b917afdfcd69 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4963,8 +4963,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
/*
* This is the 'heart' of the zoned buddy allocator.
*/
-struct page *
-__alloc_pages_nodemask(gfp_t gfp, unsigned int order, int preferred_nid,
+struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
nodemask_t *nodemask)
{
struct page *page;
@@ -5026,7 +5025,7 @@ __alloc_pages_nodemask(gfp_t gfp, unsigned int order, int preferred_nid,
return page;
}
-EXPORT_SYMBOL(__alloc_pages_nodemask);
+EXPORT_SYMBOL(__alloc_pages);
/*
* Common helper functions. Never use with __GFP_HIGHMEM because the returned
--
2.29.2
* [PATCH v2 4/6] mm/mempolicy: Rename alloc_pages_current to alloc_pages
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:02 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle),
linux-mm, Mike Rapoport, Vlastimil Babka, Michal Hocko
When CONFIG_NUMA is enabled, alloc_pages() is a wrapper around
alloc_pages_current(). This is pointless; just implement alloc_pages()
directly.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
---
include/linux/gfp.h | 8 +-------
mm/mempolicy.c | 6 +++---
2 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 012dc86923b3..f61e8f4b6a1e 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -543,13 +543,7 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
}
#ifdef CONFIG_NUMA
-extern struct page *alloc_pages_current(gfp_t gfp_mask, unsigned order);
-
-static inline struct page *
-alloc_pages(gfp_t gfp_mask, unsigned int order)
-{
- return alloc_pages_current(gfp_mask, order);
-}
+struct page *alloc_pages(gfp_t gfp, unsigned int order);
extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
struct vm_area_struct *vma, unsigned long addr,
int node, bool hugepage);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 5f0d20298736..c71532b7e3f8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2245,7 +2245,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
EXPORT_SYMBOL(alloc_pages_vma);
/**
- * alloc_pages_current - Allocate pages.
+ * alloc_pages - Allocate pages.
*
* @gfp:
* %GFP_USER user allocation,
@@ -2259,7 +2259,7 @@ EXPORT_SYMBOL(alloc_pages_vma);
* interrupt context and apply the current process NUMA policy.
* Returns NULL when no page can be allocated.
*/
-struct page *alloc_pages_current(gfp_t gfp, unsigned order)
+struct page *alloc_pages(gfp_t gfp, unsigned order)
{
struct mempolicy *pol = &default_policy;
struct page *page;
@@ -2280,7 +2280,7 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
return page;
}
-EXPORT_SYMBOL(alloc_pages_current);
+EXPORT_SYMBOL(alloc_pages);
int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
{
--
2.29.2
* [PATCH v2 5/6] mm/mempolicy: Rewrite alloc_pages documentation
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:02 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Mike Rapoport
Document alloc_pages() for both NUMA and non-NUMA cases as kernel-doc
doesn't care.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/mempolicy.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index c71532b7e3f8..96c98ce16727 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2245,19 +2245,18 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
EXPORT_SYMBOL(alloc_pages_vma);
/**
- * alloc_pages - Allocate pages.
+ * alloc_pages - Allocate pages.
+ * @gfp: GFP flags.
+ * @order: Power of two of number of pages to allocate.
*
- * @gfp:
- * %GFP_USER user allocation,
- * %GFP_KERNEL kernel allocation,
- * %GFP_HIGHMEM highmem allocation,
- * %GFP_FS don't call back into a file system.
- * %GFP_ATOMIC don't sleep.
- * @order: Power of two of allocation size in pages. 0 is a single page.
+ * Allocate 1 << @order contiguous pages. The physical address of the
+ * first page is naturally aligned (eg an order-3 allocation will be aligned
+ * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
+ * process is honoured when in process context.
*
- * Allocate a page from the kernel page pool. When not in
- * interrupt context and apply the current process NUMA policy.
- * Returns NULL when no page can be allocated.
+ * Context: Can be called from any context, providing the appropriate GFP
+ * flags are used.
+ * Return: NULL when no page can be allocated.
*/
struct page *alloc_pages(gfp_t gfp, unsigned order)
{
--
2.29.2
* [PATCH v2 6/6] mm/mempolicy: Fix mpol_misplaced kernel-doc
From: Matthew Wilcox (Oracle) @ 2021-02-15 21:02 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Mike Rapoport
Sphinx interprets the Return section as a list and complains about it.
Turn it into a sentence and move it to the end of the kernel-doc to
fit the kernel-doc style.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
Documentation/core-api/mm-api.rst | 1 +
mm/mempolicy.c | 11 ++++-------
2 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst
index 201b5423303b..874ae1250258 100644
--- a/Documentation/core-api/mm-api.rst
+++ b/Documentation/core-api/mm-api.rst
@@ -92,3 +92,4 @@ More Memory Management Functions
:export:
.. kernel-doc:: mm/page_alloc.c
+.. kernel-doc:: mm/mempolicy.c
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 96c98ce16727..577f59c8f327 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2456,14 +2456,11 @@ static void sp_free(struct sp_node *n)
* @addr: virtual address where page mapped
*
* Lookup current policy node id for vma,addr and "compare to" page's
- * node id.
- *
- * Returns:
- * -1 - not misplaced, page is in the right node
- * node - node id where the page should be
- *
- * Policy determination "mimics" alloc_page_vma().
+ * node id. Policy determination "mimics" alloc_page_vma().
* Called from fault path where we know the vma and faulting address.
+ *
+ * Return: -1 if the page is in a node that is valid for this policy, or a
+ * suitable node ID to allocate a replacement page from.
*/
int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long addr)
{
--
2.29.2
* Re: [PATCH v2 5/6] mm/mempolicy: Rewrite alloc_pages documentation
From: Mike Rapoport @ 2021-02-15 21:47 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm
On Mon, Feb 15, 2021 at 09:02:02PM +0000, Matthew Wilcox (Oracle) wrote:
> Document alloc_pages() for both NUMA and non-NUMA cases as kernel-doc
> doesn't care.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
One nit below, otherwise
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
> mm/mempolicy.c | 21 ++++++++++-----------
> 1 file changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index c71532b7e3f8..96c98ce16727 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2245,19 +2245,18 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
> EXPORT_SYMBOL(alloc_pages_vma);
>
> /**
> - * alloc_pages - Allocate pages.
> + * alloc_pages - Allocate pages.
> + * @gfp: GFP flags.
> + * @order: Power of two of number of pages to allocate.
> *
> - * @gfp:
> - * %GFP_USER user allocation,
> - * %GFP_KERNEL kernel allocation,
> - * %GFP_HIGHMEM highmem allocation,
> - * %GFP_FS don't call back into a file system.
> - * %GFP_ATOMIC don't sleep.
> - * @order: Power of two of allocation size in pages. 0 is a single page.
> + * Allocate 1 << @order contiguous pages. The physical address of the
> + * first page is naturally aligned (eg an order-3 allocation will be aligned
> + * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
> + * process is honoured when in process context.
> *
> - * Allocate a page from the kernel page pool. When not in
> - * interrupt context and apply the current process NUMA policy.
> - * Returns NULL when no page can be allocated.
> + * Context: Can be called from any context, providing the appropriate GFP
> + * flags are used.
> + * Return: NULL when no page can be allocated.
Don't you want to mention the return value on success? It is quite obvious,
but still.
> */
> struct page *alloc_pages(gfp_t gfp, unsigned order)
> {
> --
> 2.29.2
>
--
Sincerely yours,
Mike.
* Re: [PATCH v2 6/6] mm/mempolicy: Fix mpol_misplaced kernel-doc
From: Mike Rapoport @ 2021-02-15 21:51 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm
On Mon, Feb 15, 2021 at 09:02:03PM +0000, Matthew Wilcox (Oracle) wrote:
> Sphinx interprets the Return section as a list and complains about it.
> Turn it into a sentence and move it to the end of the kernel-doc to
> fit the kernel-doc style.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
> Documentation/core-api/mm-api.rst | 1 +
> mm/mempolicy.c | 11 ++++-------
> 2 files changed, 5 insertions(+), 7 deletions(-)
...
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 96c98ce16727..577f59c8f327 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2456,14 +2456,11 @@ static void sp_free(struct sp_node *n)
> * @addr: virtual address where page mapped
> *
> * Lookup current policy node id for vma,addr and "compare to" page's
> - * node id.
> - *
> - * Returns:
> - * -1 - not misplaced, page is in the right node
> - * node - node id where the page should be
> - *
> - * Policy determination "mimics" alloc_page_vma().
> + * node id. Policy determination "mimics" alloc_page_vma().
> * Called from fault path where we know the vma and faulting address.
> + *
> + * Return: -1 if the page is in a node that is valid for this policy, or a
> + * suitable node ID to allocate a replacement page from.
I think it's possible to use lists in the Return: descriptions with the
right combination of spaces, asterisks and dashes, but this description is
way better anyway.
--
Sincerely yours,
Mike.
* Re: [PATCH v2 5/6] mm/mempolicy: Rewrite alloc_pages documentation
From: Matthew Wilcox @ 2021-02-15 22:32 UTC (permalink / raw)
To: Mike Rapoport; +Cc: Andrew Morton, linux-mm
On Mon, Feb 15, 2021 at 11:47:40PM +0200, Mike Rapoport wrote:
> > /**
> > + * alloc_pages - Allocate pages.
> > + * @gfp: GFP flags.
> > + * @order: Power of two of number of pages to allocate.
> > *
> > + * Allocate 1 << @order contiguous pages. The physical address of the
> > + * first page is naturally aligned (eg an order-3 allocation will be aligned
> > + * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
> > + * process is honoured when in process context.
> > *
> > + * Context: Can be called from any context, providing the appropriate GFP
> > + * flags are used.
> > + * Return: NULL when no page can be allocated.
>
> Don't you want to mention the return value on success? It is quite obvious,
> but still.
Sure. Preferred wording?
+ * Return: The page on success or NULL if allocation fails.
* Re: [PATCH v2 5/6] mm/mempolicy: Rewrite alloc_pages documentation
From: Mike Rapoport @ 2021-02-16 5:37 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm
On Mon, Feb 15, 2021 at 10:32:57PM +0000, Matthew Wilcox wrote:
> On Mon, Feb 15, 2021 at 11:47:40PM +0200, Mike Rapoport wrote:
> > > /**
> > > + * alloc_pages - Allocate pages.
> > > + * @gfp: GFP flags.
> > > + * @order: Power of two of number of pages to allocate.
> > > *
> > > + * Allocate 1 << @order contiguous pages. The physical address of the
> > > + * first page is naturally aligned (eg an order-3 allocation will be aligned
> > > + * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
> > > + * process is honoured when in process context.
> > > *
> > > + * Context: Can be called from any context, providing the appropriate GFP
> > > + * flags are used.
> > > + * Return: NULL when no page can be allocated.
> >
> > Don't you want to mention the return value on success? It is quite obvious,
> > but still.
>
> Sure. Preferred wording?
>
> + * Return: The page on success or NULL if allocation fails.
Works for me :)
--
Sincerely yours,
Mike.
* Re: [PATCH v2 5/6] mm/mempolicy: Rewrite alloc_pages documentation
From: Michal Hocko @ 2021-02-16 8:22 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm, Mike Rapoport
On Mon 15-02-21 21:02:02, Matthew Wilcox wrote:
> Document alloc_pages() for both NUMA and non-NUMA cases as kernel-doc
> doesn't care.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/mempolicy.c | 21 ++++++++++-----------
> 1 file changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index c71532b7e3f8..96c98ce16727 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2245,19 +2245,18 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
> EXPORT_SYMBOL(alloc_pages_vma);
>
> /**
> - * alloc_pages - Allocate pages.
> + * alloc_pages - Allocate pages.
> + * @gfp: GFP flags.
> + * @order: Power of two of number of pages to allocate.
> *
> - * @gfp:
> - * %GFP_USER user allocation,
> - * %GFP_KERNEL kernel allocation,
> - * %GFP_HIGHMEM highmem allocation,
> - * %GFP_FS don't call back into a file system.
> - * %GFP_ATOMIC don't sleep.
> - * @order: Power of two of allocation size in pages. 0 is a single page.
> + * Allocate 1 << @order contiguous pages. The physical address of the
> + * first page is naturally aligned (eg an order-3 allocation will be aligned
> + * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
> + * process is honoured when in process context.
> *
> - * Allocate a page from the kernel page pool. When not in
> - * interrupt context and apply the current process NUMA policy.
> - * Returns NULL when no page can be allocated.
> + * Context: Can be called from any context, providing the appropriate GFP
> + * flags are used.
> + * Return: NULL when no page can be allocated.
> */
> struct page *alloc_pages(gfp_t gfp, unsigned order)
> {
> --
> 2.29.2
>
--
Michal Hocko
SUSE Labs