* [PATCH 2/2] mm: fix numa stats for thp migration
From: Shakeel Butt @ 2020-12-27 18:13 UTC
To: Muchun Song, Naoya Horiguchi, Andrew Morton
Cc: Kirill A. Shutemov, Johannes Weiner, Roman Gushchin, cgroups,
linux-mm, linux-kernel, Shakeel Butt, stable
Currently the kernel does not correctly update the NUMA stats for
NR_FILE_PAGES and NR_SHMEM on THP migration: the counters are adjusted
by one page rather than by the number of pages in the THP. Fix that.
For NR_FILE_DIRTY and NR_ZONE_WRITE_PENDING there is no need to handle
THP migration yet, as the kernel does not support write access to file
THPs, but to be more future proof this patch adds THP support for
those stats as well.
Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Cc: <stable@vger.kernel.org>
---
mm/migrate.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 613794f6a433..ade163c6ecdf 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
struct zone *oldzone, *newzone;
int dirty;
int expected_count = expected_page_refs(mapping, page) + extra_count;
+ int nr = thp_nr_pages(page);
if (!mapping) {
/* Anonymous page without mapping */
@@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
*/
newpage->index = page->index;
newpage->mapping = page->mapping;
- page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
+ page_ref_add(newpage, nr); /* add cache reference */
if (PageSwapBacked(page)) {
__SetPageSwapBacked(newpage);
if (PageSwapCache(page)) {
@@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
if (PageTransHuge(page)) {
int i;
- for (i = 1; i < HPAGE_PMD_NR; i++) {
+ for (i = 1; i < nr; i++) {
xas_next(&xas);
xas_store(&xas, newpage);
}
@@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
* to one less reference.
* We know this isn't the last reference.
*/
- page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
+ page_ref_unfreeze(page, expected_count - nr);
xas_unlock(&xas);
/* Leave irq disabled to prevent preemption while updating stats */
@@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
- __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
- __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
+ __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
+ __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
if (PageSwapBacked(page) && !PageSwapCache(page)) {
- __dec_lruvec_state(old_lruvec, NR_SHMEM);
- __inc_lruvec_state(new_lruvec, NR_SHMEM);
+ __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
+ __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
}
if (dirty && mapping_can_writeback(mapping)) {
- __dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
- __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
- __inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
- __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
+ __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
+ __mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);
+ __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
+ __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
}
}
local_irq_enable();
--
2.29.2.729.g45daf8777d-goog
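For context, a minimal sketch of what thp_nr_pages() resolved to in
kernels of this era (simplified from include/linux/huge_mm.h; the real
definition is guarded by CONFIG_TRANSPARENT_HUGEPAGE):

	static inline int thp_nr_pages(struct page *page)
	{
		/* Must not be called on a tail page. */
		VM_BUG_ON_PGFLAGS(PageTail(page), page);
		if (PageHead(page))
			return HPAGE_PMD_NR; /* 512 on x86-64 with 4K pages */
		return 1; /* order-0 (non-THP) page */
	}

Since nr is 1 for order-0 pages, converting __inc_lruvec_state() and
__dec_lruvec_state() to __mod_lruvec_state(..., nr) and
__mod_lruvec_state(..., -nr) preserves behavior in the non-THP case
while fixing the THP case, and using nr as the loop bound is
equivalent to HPAGE_PMD_NR inside the PageTransHuge() branch.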
* Re: [PATCH 2/2] mm: fix numa stats for thp migration
From: Shakeel Butt @ 2020-12-27 18:16 UTC
To: Muchun Song, Naoya Horiguchi, Andrew Morton
Cc: Kirill A. Shutemov, Johannes Weiner, Roman Gushchin, Cgroups,
Linux MM, LKML, stable
On Sun, Dec 27, 2020 at 10:14 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> Currently the kernel does not correctly update the NUMA stats for
> NR_FILE_PAGES and NR_SHMEM on THP migration: the counters are adjusted
> by one page rather than by the number of pages in the THP. Fix that.
> For NR_FILE_DIRTY and NR_ZONE_WRITE_PENDING there is no need to handle
> THP migration yet, as the kernel does not support write access to file
> THPs, but to be more future proof this patch adds THP support for
> those stats as well.
>
> Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> Cc: <stable@vger.kernel.org>
> ---
> mm/migrate.c | 23 ++++++++++++-----------
> 1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 613794f6a433..ade163c6ecdf 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> struct zone *oldzone, *newzone;
> int dirty;
> int expected_count = expected_page_refs(mapping, page) + extra_count;
> + int nr = thp_nr_pages(page);
>
> if (!mapping) {
> /* Anonymous page without mapping */
> @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> */
> newpage->index = page->index;
> newpage->mapping = page->mapping;
> - page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
> + page_ref_add(newpage, nr); /* add cache reference */
> if (PageSwapBacked(page)) {
> __SetPageSwapBacked(newpage);
> if (PageSwapCache(page)) {
> @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> if (PageTransHuge(page)) {
> int i;
>
> - for (i = 1; i < HPAGE_PMD_NR; i++) {
> + for (i = 1; i < nr; i++) {
> xas_next(&xas);
> xas_store(&xas, newpage);
> }
> @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> * to one less reference.
> * We know this isn't the last reference.
> */
> - page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
> + page_ref_unfreeze(page, expected_count - nr);
>
> xas_unlock(&xas);
> /* Leave irq disabled to prevent preemption while updating stats */
> @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
> old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
>
> - __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
> - __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
> + __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
> + __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
> if (PageSwapBacked(page) && !PageSwapCache(page)) {
> - __dec_lruvec_state(old_lruvec, NR_SHMEM);
> - __inc_lruvec_state(new_lruvec, NR_SHMEM);
> + __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> + __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
> }
> if (dirty && mapping_can_writeback(mapping)) {
> - __dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
> - __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
> - __inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
> - __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
> + __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
> + __mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);
This should be __mod_zone_page_state(). I fixed it locally but sent
the older patch by mistake.
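For reference, a sketch of that hunk line with the typo fixed (same
arguments, corrected function name):

	__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);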
> + __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
> + __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
> }
> }
> local_irq_enable();
> --
> 2.29.2.729.g45daf8777d-goog
>
* Re: [PATCH 2/2] mm: fix numa stats for thp migration
@ 2020-12-27 18:16 ` Shakeel Butt
0 siblings, 0 replies; 19+ messages in thread
From: Shakeel Butt @ 2020-12-27 18:16 UTC (permalink / raw)
To: Muchun Song, Naoya Horiguchi, Andrew Morton
Cc: Kirill A . Shutemov, Johannes Weiner, Roman Gushchin, Cgroups,
Linux MM, LKML, stable
On Sun, Dec 27, 2020 at 10:14 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> Currently the kernel is not correctly updating the numa stats for
> NR_FILE_PAGES and NR_SHMEM on THP migration. Fix that. For NR_FILE_DIRTY
> and NR_ZONE_WRITE_PENDING, although at the moment there is no need to
> handle THP migration as kernel still does not have write support for
> file THP but to be more future proof, this patch adds the THP support
> for those stats as well.
>
> Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> Cc: <stable@vger.kernel.org>
> ---
> mm/migrate.c | 23 ++++++++++++-----------
> 1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 613794f6a433..ade163c6ecdf 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> struct zone *oldzone, *newzone;
> int dirty;
> int expected_count = expected_page_refs(mapping, page) + extra_count;
> + int nr = thp_nr_pages(page);
>
> if (!mapping) {
> /* Anonymous page without mapping */
> @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> */
> newpage->index = page->index;
> newpage->mapping = page->mapping;
> - page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
> + page_ref_add(newpage, nr); /* add cache reference */
> if (PageSwapBacked(page)) {
> __SetPageSwapBacked(newpage);
> if (PageSwapCache(page)) {
> @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> if (PageTransHuge(page)) {
> int i;
>
> - for (i = 1; i < HPAGE_PMD_NR; i++) {
> + for (i = 1; i < nr; i++) {
> xas_next(&xas);
> xas_store(&xas, newpage);
> }
> @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> * to one less reference.
> * We know this isn't the last reference.
> */
> - page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
> + page_ref_unfreeze(page, expected_count - nr);
>
> xas_unlock(&xas);
> /* Leave irq disabled to prevent preemption while updating stats */
> @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
> old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
>
> - __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
> - __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
> + __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
> + __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
> if (PageSwapBacked(page) && !PageSwapCache(page)) {
> - __dec_lruvec_state(old_lruvec, NR_SHMEM);
> - __inc_lruvec_state(new_lruvec, NR_SHMEM);
> + __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> + __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
> }
> if (dirty && mapping_can_writeback(mapping)) {
> - __dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
> - __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
> - __inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
> - __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
> + __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
> + __mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);
This should be __mod_zone_page_state(). I fixed locally but sent the
older patch by mistake.
> + __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
> + __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
> }
> }
> local_irq_enable();
> --
> 2.29.2.729.g45daf8777d-goog
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/2] mm: fix numa stats for thp migration
From: Yang Shi @ 2020-12-28 17:31 UTC
To: Shakeel Butt
Cc: Muchun Song, Naoya Horiguchi, Andrew Morton, Kirill A. Shutemov,
Johannes Weiner, Roman Gushchin, Cgroups, Linux MM, LKML, stable
On Sun, Dec 27, 2020 at 10:16 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Sun, Dec 27, 2020 at 10:14 AM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > Currently the kernel does not correctly update the NUMA stats for
> > NR_FILE_PAGES and NR_SHMEM on THP migration: the counters are adjusted
> > by one page rather than by the number of pages in the THP. Fix that.
> > For NR_FILE_DIRTY and NR_ZONE_WRITE_PENDING there is no need to handle
> > THP migration yet, as the kernel does not support write access to file
> > THPs, but to be more future proof this patch adds THP support for
> > those stats as well.
> >
> > Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > Cc: <stable@vger.kernel.org>
> > ---
> > mm/migrate.c | 23 ++++++++++++-----------
> > 1 file changed, 12 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 613794f6a433..ade163c6ecdf 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> > struct zone *oldzone, *newzone;
> > int dirty;
> > int expected_count = expected_page_refs(mapping, page) + extra_count;
> > + int nr = thp_nr_pages(page);
> >
> > if (!mapping) {
> > /* Anonymous page without mapping */
> > @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> > */
> > newpage->index = page->index;
> > newpage->mapping = page->mapping;
> > - page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
> > + page_ref_add(newpage, nr); /* add cache reference */
> > if (PageSwapBacked(page)) {
> > __SetPageSwapBacked(newpage);
> > if (PageSwapCache(page)) {
> > @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> > if (PageTransHuge(page)) {
> > int i;
> >
> > - for (i = 1; i < HPAGE_PMD_NR; i++) {
> > + for (i = 1; i < nr; i++) {
> > xas_next(&xas);
> > xas_store(&xas, newpage);
> > }
> > @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> > * to one less reference.
> > * We know this isn't the last reference.
> > */
> > - page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
> > + page_ref_unfreeze(page, expected_count - nr);
> >
> > xas_unlock(&xas);
> > /* Leave irq disabled to prevent preemption while updating stats */
> > @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
> > old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> > new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
> >
> > - __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
> > - __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
> > + __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
> > + __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
> > if (PageSwapBacked(page) && !PageSwapCache(page)) {
> > - __dec_lruvec_state(old_lruvec, NR_SHMEM);
> > - __inc_lruvec_state(new_lruvec, NR_SHMEM);
> > + __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> > + __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
> > }
> > if (dirty && mapping_can_writeback(mapping)) {
> > - __dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
> > - __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
> > - __inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
> > - __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
> > + __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
> > + __mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);
>
> This should be __mod_zone_page_state(). I fixed locally but sent the
> older patch by mistake.
Acked-by: Yang Shi <shy828301@gmail.com>
>
> > + __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
> > + __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
> > }
> > }
> > local_irq_enable();
> > --
> > 2.29.2.729.g45daf8777d-goog
> >
>
* Re: [PATCH 2/2] mm: fix numa stats for thp migration
From: Roman Gushchin @ 2020-12-28 19:44 UTC
To: Shakeel Butt
Cc: Muchun Song, Naoya Horiguchi, Andrew Morton, Kirill A. Shutemov,
Johannes Weiner, cgroups, linux-mm, linux-kernel, stable
On Sun, Dec 27, 2020 at 10:13:10AM -0800, Shakeel Butt wrote:
> Currently the kernel does not correctly update the NUMA stats for
> NR_FILE_PAGES and NR_SHMEM on THP migration: the counters are adjusted
> by one page rather than by the number of pages in the THP. Fix that.
> For NR_FILE_DIRTY and NR_ZONE_WRITE_PENDING there is no need to handle
> THP migration yet, as the kernel does not support write access to file
> THPs, but to be more future proof this patch adds THP support for
> those stats as well.
>
> Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> Cc: <stable@vger.kernel.org>
With the typo ("__mod_zone_page_tate" -> "__mod_zone_page_state") fixed:
Reviewed-by: Roman Gushchin <guro@fb.com>
Thanks!