* [PATCH] mm/page_alloc: cache the result of node_dirty_ok()
@ 2022-04-30 1:10 Wonhyuk Yang
From: Wonhyuk Yang @ 2022-04-30 1:10 UTC (permalink / raw)
To: Mel Gorman, Andrew Morton
Cc: Ohhoon Kwon, JaeSang Yoo, Wonhyuk Yang, Jiyoup Kim,
Donghyeok Kim, linux-mm, linux-kernel
To spread dirty pages, nodes are checked to see whether they have
reached the dirty limit using the expensive node_dirty_ok(). To
reduce the number of calls to node_dirty_ok(), the last node that
hit the dirty limit is cached.

Instead of caching only the node, caching both the node and its
node_dirty_ok() result reduces the number of calls further.
Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
---
mm/page_alloc.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..aba62cf31a0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4068,7 +4068,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
{
struct zoneref *z;
struct zone *zone;
- struct pglist_data *last_pgdat_dirty_limit = NULL;
+ struct pglist_data *last_pgdat = NULL;
+ bool last_pgdat_dirty_limit = false;
bool no_fallback;
retry:
@@ -4107,13 +4108,13 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
* dirty-throttling and the flusher threads.
*/
if (ac->spread_dirty_pages) {
- if (last_pgdat_dirty_limit == zone->zone_pgdat)
- continue;
+ if (last_pgdat != zone->zone_pgdat) {
+ last_pgdat = zone->zone_pgdat;
+ last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
+ }
- if (!node_dirty_ok(zone->zone_pgdat)) {
- last_pgdat_dirty_limit = zone->zone_pgdat;
+ if (!last_pgdat_dirty_limit)
continue;
- }
}
if (no_fallback && nr_online_nodes > 1 &&
--
2.30.2
* Re: [PATCH] mm/page_alloc: cache the result of node_dirty_ok()
From: Andrew Morton @ 2022-04-30 18:38 UTC (permalink / raw)
To: Wonhyuk Yang
Cc: Mel Gorman, Ohhoon Kwon, JaeSang Yoo, Jiyoup Kim, Donghyeok Kim,
linux-mm, linux-kernel
On Sat, 30 Apr 2022 10:10:32 +0900 Wonhyuk Yang <vvghjk1234@gmail.com> wrote:
> To spread dirty pages, nodes are checked to see whether they have
> reached the dirty limit using the expensive node_dirty_ok(). To
> reduce the number of calls to node_dirty_ok(), the last node that
> hit the dirty limit is cached.
>
> Instead of caching only the node, caching both the node and its
> node_dirty_ok() result reduces the number of calls further.
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4068,7 +4068,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> {
> struct zoneref *z;
> struct zone *zone;
> - struct pglist_data *last_pgdat_dirty_limit = NULL;
> + struct pglist_data *last_pgdat = NULL;
> + bool last_pgdat_dirty_limit = false;
> bool no_fallback;
>
> retry:
> @@ -4107,13 +4108,13 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> * dirty-throttling and the flusher threads.
> */
> if (ac->spread_dirty_pages) {
> - if (last_pgdat_dirty_limit == zone->zone_pgdat)
> - continue;
> + if (last_pgdat != zone->zone_pgdat) {
> + last_pgdat = zone->zone_pgdat;
> + last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
> + }
>
> - if (!node_dirty_ok(zone->zone_pgdat)) {
> - last_pgdat_dirty_limit = zone->zone_pgdat;
> + if (!last_pgdat_dirty_limit)
> continue;
> - }
> }
>
> if (no_fallback && nr_online_nodes > 1 &&
Looks reasonable to me. Hopefully Mel and Johannes can review.
I think last_pgdat_dirty_limit isn't a great name. It records the
dirty_ok state of last_pgdat. So why not call it last_pgdat_dirty_ok?
--- a/mm/page_alloc.c~mm-page_alloc-cache-the-result-of-node_dirty_ok-fix
+++ a/mm/page_alloc.c
@@ -4022,7 +4022,7 @@ get_page_from_freelist(gfp_t gfp_mask, u
struct zoneref *z;
struct zone *zone;
struct pglist_data *last_pgdat = NULL;
- bool last_pgdat_dirty_limit = false;
+ bool last_pgdat_dirty_ok = false;
bool no_fallback;
retry:
@@ -4063,10 +4063,10 @@ retry:
if (ac->spread_dirty_pages) {
if (last_pgdat != zone->zone_pgdat) {
last_pgdat = zone->zone_pgdat;
- last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
+ last_pgdat_dirty_ok = node_dirty_ok(zone->zone_pgdat);
}
- if (!last_pgdat_dirty_limit)
+ if (!last_pgdat_dirty_ok)
continue;
}
_
* Re: [PATCH] mm/page_alloc: cache the result of node_dirty_ok()
From: Mel Gorman @ 2022-05-02 9:53 UTC (permalink / raw)
To: Wonhyuk Yang
Cc: Andrew Morton, Ohhoon Kwon, JaeSang Yoo, Jiyoup Kim,
Donghyeok Kim, linux-mm, linux-kernel
On Sat, Apr 30, 2022 at 10:10:32AM +0900, Wonhyuk Yang wrote:
> To spread dirty pages, nodes are checked to see whether they have
> reached the dirty limit using the expensive node_dirty_ok(). To
> reduce the number of calls to node_dirty_ok(), the last node that
> hit the dirty limit is cached.
>
> Instead of caching only the node, caching both the node and its
> node_dirty_ok() result reduces the number of calls further.
>
> Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
Acked-by: Mel Gorman <mgorman@suse.de>
I agree with Andrew that last_pgdat_dirty_ok is a better name. The old
name was also bad but seeing as the area is being changed, fixing the
name is harmless.
--
Mel Gorman
SUSE Labs
* Re: [PATCH] mm/page_alloc: cache the result of node_dirty_ok()
From: Johannes Weiner @ 2022-05-03 16:19 UTC (permalink / raw)
To: Wonhyuk Yang
Cc: Mel Gorman, Andrew Morton, Ohhoon Kwon, JaeSang Yoo, Jiyoup Kim,
Donghyeok Kim, linux-mm, linux-kernel
On Sat, Apr 30, 2022 at 10:10:32AM +0900, Wonhyuk Yang wrote:
> To spread dirty pages, nodes are checked to see whether they have
> reached the dirty limit using the expensive node_dirty_ok(). To
> reduce the number of calls to node_dirty_ok(), the last node that
> hit the dirty limit is cached.
>
> Instead of caching only the node, caching both the node and its
> node_dirty_ok() result reduces the number of calls further.
>
> Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Looks good to me. I like Andrew's naming fixlet as well.