From: Zi Yan <zi.yan@sent.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, Yang Shi <shy828301@gmail.com>,
	Yu Zhao <yuzhao@google.com>, linux-mm@kvack.org
Cc: Zi Yan <ziy@nvidia.com>, Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH 1/5] mm: memcg: make memcg huge page split support any order split.
Date: Mon, 20 Mar 2023 20:48:25 -0400
Message-ID: <20230321004829.2012847-2-zi.yan@sent.com> (raw)
In-Reply-To: <20230321004829.2012847-1-zi.yan@sent.com>

From: Zi Yan <ziy@nvidia.com>

split_page_memcg() sets the memcg information on the pages resulting
from a split. Add a new parameter, new_nr, which tells it the number of
subpages in each resulting page; it is always 1 for now. This prepares
for upcoming changes that will support splitting a huge page to any
lower order.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/memcontrol.h | 5 +++--
 mm/huge_memory.c           | 2 +-
 mm/memcontrol.c            | 8 ++++----
 mm/page_alloc.c            | 2 +-
 4 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index aa69ea98e2d8..ee1021129142 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1151,7 +1151,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }

-void split_page_memcg(struct page *head, unsigned int nr);
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_nr);

 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
@@ -1588,7 +1588,8 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
 {
 }

-static inline void split_page_memcg(struct page *head, unsigned int nr)
+static inline void split_page_memcg(struct page *head, unsigned int nr,
+				    unsigned int new_nr)
 {
 }

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 81a5689806af..30e3e300c42e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2515,7 +2515,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int i;

 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, nr);
+	split_page_memcg(head, nr, 1);

 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 681e7528a714..8e505201baf0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3414,7 +3414,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, unsigned int nr)
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_nr)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
@@ -3423,13 +3423,13 @@ void split_page_memcg(struct page *head, unsigned int nr)
 	if (mem_cgroup_disabled() || !memcg)
 		return;

-	for (i = 1; i < nr; i++)
+	for (i = new_nr; i < nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;

 	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+		obj_cgroup_get_many(__folio_objcg(folio), nr / new_nr - 1);
 	else
-		css_get_many(&memcg->css, nr - 1);
+		css_get_many(&memcg->css, nr / new_nr - 1);
 }

 #ifdef CONFIG_SWAP
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2e72fdbdd8db..59c2b6696698 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3531,7 +3531,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, 1 << order, 1);
 }
EXPORT_SYMBOL_GPL(split_page);
--
2.39.2
Thread overview: 20+ messages (duplicate cross-posted copies omitted)

2023-03-21  0:48 [PATCH 0/5] Split a folio to any lower order folios — Zi Yan
2023-03-21  0:48 ` [PATCH 1/5] mm: memcg: make memcg huge page split support any order split — Zi Yan [this message]
2023-03-21  0:48 ` [PATCH 2/5] mm: page_owner: add support for splitting to any order in split page_owner — Zi Yan
2023-03-24 15:17   ` Michal Koutný
2023-03-24 15:22     ` Zi Yan
2023-03-21  0:48 ` [PATCH 3/5] mm: thp: split huge page to any lower order pages — Zi Yan
2023-03-22  7:55   ` Ryan Roberts
2023-03-22 14:27     ` Zi Yan
2023-03-22 14:48       ` Ryan Roberts
2023-03-21  0:48 ` [PATCH 4/5] mm: truncate: split huge page cache page to a non-zero order if possible — Zi Yan
2023-03-21  0:48 ` [PATCH 5/5] mm: huge_memory: enable debugfs to split huge pages to any order — Zi Yan