LKML Archive
From: Ding Hui <>
Subject: [PATCH v3] mm/page_alloc: fix counting of free pages after take off from buddy
Date: Wed, 26 May 2021 15:52:47 +0800
Message-ID: <>

Recently we found that a lot of MemFree is left in /proc/meminfo
after soft offlining a lot of pages, which is not correct.

Before Oscar reworked soft offline for free pages [1], soft-offlined
free pages were left in buddy with the HWPoison flag set, and
NR_FREE_PAGES was not updated immediately. So the difference
between NR_FREE_PAGES and the real number of available free pages
could also be large at the beginning.

However, as the workload runs and we catch a HWPoison page in any
allocation path later, we remove it from buddy, update NR_FREE_PAGES
at the same time, and retry, so NR_FREE_PAGES gets closer and closer
to the real number of available free pages.
(regardless of unpoison_memory())

Now, for soft-offlined free pages, after a successful
take_page_off_buddy() call, the page no longer belongs to the buddy
allocator and will never be used again, but we missed updating
NR_FREE_PAGES in this situation, and there is no later chance for it
to be corrected.

Do the update in take_page_off_buddy() like rmqueue() does, but avoid
double counting if someone has already called set_migratetype_isolate()
on the page.

[1]: commit 06be6ff3d2ec ("mm,hwpoison: rework soft offline for free pages")

Suggested-by: Naoya Horiguchi <>
Signed-off-by: Ding Hui <>
---
Changes since v2:
- as Naoya Horiguchi suggested, do the update only when
  is_migrate_isolate(migratetype) is false
- updated patch description

Changes since v1:
- use __mod_zone_freepage_state instead of __mod_zone_page_state

 mm/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aaa1655cf682..d1f5de1c1283 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9158,6 +9158,8 @@ bool take_page_off_buddy(struct page *page)
 			del_page_from_free_list(page_head, zone, page_order);
 			break_down_buddy_pages(zone, page_head, page, 0,
 						page_order, migratetype);
+			if (!is_migrate_isolate(migratetype))
+				__mod_zone_freepage_state(zone, -1, migratetype);
 			ret = true;


Thread overview: 5+ messages
2021-05-26  7:52 Ding Hui [this message]
2021-05-26  7:58 ` David Hildenbrand
2021-05-26 10:43   ` Oscar Salvador
2021-05-26 10:42 ` Oscar Salvador
2021-05-27  0:34 ` HORIGUCHI NAOYA(堀口 直也)

