* [PATCH v2] mm/page_alloc: speeding up the iteration of max_order
@ 2020-12-04 12:56 Muchun Song
From: Muchun Song @ 2020-12-04 12:56 UTC (permalink / raw)
To: akpm, vbabka; +Cc: linux-mm, linux-kernel, Muchun Song
When we free a page whose order is greater than pageblock_order but
close to MAX_ORDER, we waste CPU cycles incrementing max_order toward
MAX_ORDER one step at a time and re-checking the pageblock migratetype
of that page on every pass, especially when MAX_ORDER is much larger
than pageblock_order.
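To see where the cycles go, here is a user-space sketch of the two
schemes (all names and the MAX_ORDER/pageblock_order values are
illustrative stand-ins, not the kernel code). It counts how many times
the isolation-check branch is taken when freeing a page whose buddy is
never free:

```c
#include <assert.h>

#define MAX_ORDER       20  /* illustrative: a config where MAX_ORDER >> pageblock_order */
#define PAGEBLOCK_ORDER  9

/* Old scheme: max_order starts at pageblock_order + 1 and is bumped by
 * one per pass, re-entering the isolation-check branch every time. */
static int checks_old(unsigned int order)
{
	unsigned int max_order = PAGEBLOCK_ORDER + 1;
	int checks = 0;

	for (;;) {
		if (order < max_order - 1)
			break;		/* merge attempt; buddy not free, done */
		if (max_order < MAX_ORDER) {
			checks++;	/* isolation-check branch taken */
			max_order++;	/* old code: max_order++ */
			continue;	/* goto continue_merging */
		}
		break;			/* done_merging */
	}
	return checks;
}

/* New scheme: after a single check, max_order jumps straight to order + 1. */
static int checks_new(unsigned int order)
{
	unsigned int max_order = PAGEBLOCK_ORDER;	/* min(MAX_ORDER - 1, ...) */
	int checks = 0;

	for (;;) {
		if (order < max_order)
			break;
		if (order < MAX_ORDER - 1) {
			checks++;
			max_order = order + 1;	/* new code: skip ahead in one step */
			continue;
		}
		break;
	}
	return checks;
}
```

Under these values, freeing an order-19 page takes ten passes through
the check branch in the old scheme and none in the new one; any order
between pageblock_order and MAX_ORDER - 2 takes exactly one pass in the
new scheme.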
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
Changes in v2:
- Rework the code suggested by Vlastimil. Thanks.
mm/page_alloc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f91df593bf71..56e603eea1dd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1002,7 +1002,7 @@ static inline void __free_one_page(struct page *page,
struct page *buddy;
bool to_tail;
- max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
+ max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
VM_BUG_ON(!zone_is_initialized(zone));
VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
@@ -1015,7 +1015,7 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(bad_range(zone, page), page);
continue_merging:
- while (order < max_order - 1) {
+ while (order < max_order) {
if (compaction_capture(capc, page, order, migratetype)) {
__mod_zone_freepage_state(zone, -(1 << order),
migratetype);
@@ -1041,7 +1041,7 @@ static inline void __free_one_page(struct page *page,
pfn = combined_pfn;
order++;
}
- if (max_order < MAX_ORDER) {
+ if (order < MAX_ORDER - 1) {
/* If we are here, it means order is >= pageblock_order.
* We want to prevent merge between freepages on isolate
* pageblock and normal pageblock. Without this, pageblock
@@ -1062,7 +1062,7 @@ static inline void __free_one_page(struct page *page,
is_migrate_isolate(buddy_mt)))
goto done_merging;
}
- max_order++;
+ max_order = order + 1;
goto continue_merging;
}
--
2.11.0
* Re: [PATCH v2] mm/page_alloc: speeding up the iteration of max_order
From: Vlastimil Babka @ 2020-12-04 15:28 UTC (permalink / raw)
To: Muchun Song, akpm; +Cc: linux-mm, linux-kernel
On 12/4/20 1:56 PM, Muchun Song wrote:
> When we free a page whose order is greater than pageblock_order but
> close to MAX_ORDER, we waste CPU cycles incrementing max_order toward
> MAX_ORDER one step at a time and re-checking the pageblock migratetype
> of that page on every pass, especially when MAX_ORDER is much larger
> than pageblock_order.
I would add:
We also should not be checking the migratetype of the buddy when
"order == MAX_ORDER - 1", as the buddy pfn may be invalid, so adjust the
condition. With the new check, we don't need the max_order check anymore,
so we replace it.
Also adjust the max_order initialization so that it is lower by one than
previously, which hopefully makes the code clearer.
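The invalid-buddy point can be illustrated with the same XOR trick the
kernel uses to locate a buddy (a user-space sketch; the MAX_ORDER value
and pfn numbers are illustrative):

```c
#include <assert.h>

#define MAX_ORDER 11	/* illustrative: a common default */

/* Buddy location, as in the kernel's __find_buddy_pfn(): the buddy of a
 * block at `pfn` with size 2^order lives at pfn XOR 2^order. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}
```

At order == MAX_ORDER - 1 the buddy of pfn 0 is pfn 1024, a whole
max-order block away; if the zone ends before that pfn, reading the
buddy's pageblock migratetype would touch an invalid pfn, which is why
the new condition only performs the check for orders up to
MAX_ORDER - 2.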
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated and other pageblocks")
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Thanks!
> ---
> Changes in v2:
> - Rework the code suggested by Vlastimil. Thanks.
>
> mm/page_alloc.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f91df593bf71..56e603eea1dd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1002,7 +1002,7 @@ static inline void __free_one_page(struct page *page,
> struct page *buddy;
> bool to_tail;
>
> - max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> + max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
>
> VM_BUG_ON(!zone_is_initialized(zone));
> VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> @@ -1015,7 +1015,7 @@ static inline void __free_one_page(struct page *page,
> VM_BUG_ON_PAGE(bad_range(zone, page), page);
>
> continue_merging:
> - while (order < max_order - 1) {
> + while (order < max_order) {
> if (compaction_capture(capc, page, order, migratetype)) {
> __mod_zone_freepage_state(zone, -(1 << order),
> migratetype);
> @@ -1041,7 +1041,7 @@ static inline void __free_one_page(struct page *page,
> pfn = combined_pfn;
> order++;
> }
> - if (max_order < MAX_ORDER) {
> + if (order < MAX_ORDER - 1) {
> /* If we are here, it means order is >= pageblock_order.
> * We want to prevent merge between freepages on isolate
> * pageblock and normal pageblock. Without this, pageblock
> @@ -1062,7 +1062,7 @@ static inline void __free_one_page(struct page *page,
> is_migrate_isolate(buddy_mt)))
> goto done_merging;
> }
> - max_order++;
> + max_order = order + 1;
> goto continue_merging;
> }
>
>
* Re: [External] Re: [PATCH v2] mm/page_alloc: speeding up the iteration of max_order
From: Muchun Song @ 2020-12-04 15:45 UTC (permalink / raw)
To: Vlastimil Babka; +Cc: Andrew Morton, Linux Memory Management List, LKML
On Fri, Dec 4, 2020 at 11:28 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 12/4/20 1:56 PM, Muchun Song wrote:
> > When we free a page whose order is greater than pageblock_order but
> > close to MAX_ORDER, we waste CPU cycles incrementing max_order toward
> > MAX_ORDER one step at a time and re-checking the pageblock migratetype
> > of that page on every pass, especially when MAX_ORDER is much larger
> > than pageblock_order.
>
> I would add:
>
> We also should not be checking the migratetype of the buddy when
> "order == MAX_ORDER - 1", as the buddy pfn may be invalid, so adjust the
> condition. With the new check, we don't need the max_order check anymore,
> so we replace it.
>
> Also adjust the max_order initialization so that it is lower by one than
> previously, which hopefully makes the code clearer.
Got it. Thanks.
>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated and other pageblocks")
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> Thanks!
>
> > ---
> > Changes in v2:
> > - Rework the code suggested by Vlastimil. Thanks.
> >
> > mm/page_alloc.c | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index f91df593bf71..56e603eea1dd 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1002,7 +1002,7 @@ static inline void __free_one_page(struct page *page,
> > struct page *buddy;
> > bool to_tail;
> >
> > - max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> > + max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
> >
> > VM_BUG_ON(!zone_is_initialized(zone));
> > VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> > @@ -1015,7 +1015,7 @@ static inline void __free_one_page(struct page *page,
> > VM_BUG_ON_PAGE(bad_range(zone, page), page);
> >
> > continue_merging:
> > - while (order < max_order - 1) {
> > + while (order < max_order) {
> > if (compaction_capture(capc, page, order, migratetype)) {
> > __mod_zone_freepage_state(zone, -(1 << order),
> > migratetype);
> > @@ -1041,7 +1041,7 @@ static inline void __free_one_page(struct page *page,
> > pfn = combined_pfn;
> > order++;
> > }
> > - if (max_order < MAX_ORDER) {
> > + if (order < MAX_ORDER - 1) {
> > /* If we are here, it means order is >= pageblock_order.
> > * We want to prevent merge between freepages on isolate
> > * pageblock and normal pageblock. Without this, pageblock
> > @@ -1062,7 +1062,7 @@ static inline void __free_one_page(struct page *page,
> > is_migrate_isolate(buddy_mt)))
> > goto done_merging;
> > }
> > - max_order++;
> > + max_order = order + 1;
> > goto continue_merging;
> > }
> >
> >
>
--
Yours,
Muchun