* [PATCH 1/3] mm: don't call activate_page() on new ksm pages
@ 2020-08-12  4:04 Yu Zhao
  2020-08-13  5:19 ` Yang Shi
  0 siblings, 1 reply; 4+ messages in thread
From: Yu Zhao @ 2020-08-12  4:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Duyck, Huang Ying, David Hildenbrand, Michal Hocko,
	Yang Shi, Qian Cai, Mel Gorman, Nicholas Piggin,
	Jérôme Glisse, Hugh Dickins, linux-mm, linux-kernel,
	Yu Zhao

lru_cache_add_active_or_unevictable() already adds new ksm pages to
active lru. Calling activate_page() isn't really necessary in this
case.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/swapfile.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..cf115ea26a20 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
 	if (page == swapcache) {
 		page_add_anon_rmap(page, vma, addr, false);
+		/*
+		 * Move the page to the active list so it is not
+		 * immediately swapped out again after swapon.
+		 */
+		activate_page(page);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
 		lru_cache_add_active_or_unevictable(page, vma);
 	}
 	swap_free(entry);
-	/*
-	 * Move the page to the active list so it is not
-	 * immediately swapped out again after swapon.
-	 */
-	activate_page(page);
 out:
 	pte_unmap_unlock(pte, ptl);
 	if (page != swapcache) {
-- 
2.28.0.236.gb10cc79966-goog
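
For context, a simplified sketch of lru_cache_add_active_or_unevictable()
as it looked around v5.8 (abridged and approximate, not the exact mm/swap.c
source): a new anonymous or ksm page in a VMA that is not mlocked gets
PG_active set before it is added to the LRU, which is why the extra
activate_page() in the ksm branch is redundant.

	/* Abridged, approximate sketch of mm/swap.c around v5.8 */
	void lru_cache_add_active_or_unevictable(struct page *page,
						 struct vm_area_struct *vma)
	{
		VM_BUG_ON_PAGE(PageLRU(page), page);

		if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
			/* Not mlocked: the new page starts on the active LRU. */
			SetPageActive(page);
		else if (!TestSetPageMlocked(page))
			/* mlocked VMA: account the page as unevictable instead. */
			count_vm_event(UNEVICTABLE_PGMLOCKED);

		lru_cache_add(page);
	}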




* Re: [PATCH 1/3] mm: don't call activate_page() on new ksm pages
  2020-08-12  4:04 [PATCH 1/3] mm: don't call activate_page() on new ksm pages Yu Zhao
@ 2020-08-13  5:19 ` Yang Shi
  2020-08-13  7:34   ` Yu Zhao
  0 siblings, 1 reply; 4+ messages in thread
From: Yang Shi @ 2020-08-13  5:19 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alexander Duyck, Huang Ying, David Hildenbrand,
	Michal Hocko, Yang Shi, Qian Cai, Mel Gorman, Nicholas Piggin,
	Jérôme Glisse, Hugh Dickins, Linux MM,
	Linux Kernel Mailing List, Joonsoo Kim

On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@google.com> wrote:
>
> lru_cache_add_active_or_unevictable() already adds new ksm pages to
> active lru. Calling activate_page() isn't really necessary in this
> case.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  mm/swapfile.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6c26916e95fd..cf115ea26a20 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>                    pte_mkold(mk_pte(page, vma->vm_page_prot)));
>         if (page == swapcache) {
>                 page_add_anon_rmap(page, vma, addr, false);
> +               /*
> +                * Move the page to the active list so it is not
> +                * immediately swapped out again after swapon.
> +                */
> +               activate_page(page);

Actually I think we could just remove this activate_page() call now that
Joonsoo's anonymous page workingset series has been merged. The active bit
will be taken care of by workingset_refault().

>         } else { /* ksm created a completely new copy */
>                 page_add_new_anon_rmap(page, vma, addr, false);
>                 lru_cache_add_active_or_unevictable(page, vma);

And it looks like the latest Linus tree has already changed this to
lru_cache_add_inactive_or_unevictable() in commit b518154e59
("mm/vmscan: protect the workingset on anonymous LRU").

Added Joonsoo to the loop.

>         }
>         swap_free(entry);
> -       /*
> -        * Move the page to the active list so it is not
> -        * immediately swapped out again after swapon.
> -        */
> -       activate_page(page);
>  out:
>         pte_unmap_unlock(pte, ptl);
>         if (page != swapcache) {
> --
> 2.28.0.236.gb10cc79966-goog
>
>
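
For reference, a rough sketch (an assumption of how the result could look,
not taken from any tree) of the unuse_pte() hunk on top of b518154e59 once
the activate_page() call is dropped entirely, relying on
workingset_refault() to restore the active bit on refault:

	/*
	 * Rough sketch only: unuse_pte() swapcache branch assuming a tree
	 * that already uses lru_cache_add_inactive_or_unevictable() and
	 * leaves activation decisions to workingset_refault().
	 */
	if (page == swapcache) {
		page_add_anon_rmap(page, vma, addr, false);
	} else { /* ksm created a completely new copy */
		page_add_new_anon_rmap(page, vma, addr, false);
		lru_cache_add_inactive_or_unevictable(page, vma);
	}
	swap_free(entry);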



* Re: [PATCH 1/3] mm: don't call activate_page() on new ksm pages
  2020-08-13  5:19 ` Yang Shi
@ 2020-08-13  7:34   ` Yu Zhao
  2020-08-17 20:48     ` Yang Shi
  0 siblings, 1 reply; 4+ messages in thread
From: Yu Zhao @ 2020-08-13  7:34 UTC (permalink / raw)
  To: Yang Shi
  Cc: Andrew Morton, Alexander Duyck, Huang Ying, David Hildenbrand,
	Michal Hocko, Yang Shi, Qian Cai, Mel Gorman, Nicholas Piggin,
	Jérôme Glisse, Hugh Dickins, Linux MM,
	Linux Kernel Mailing List, Joonsoo Kim

On Wed, Aug 12, 2020 at 10:19:24PM -0700, Yang Shi wrote:
> On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@google.com> wrote:
> >
> > lru_cache_add_active_or_unevictable() already adds new ksm pages to
> > active lru. Calling activate_page() isn't really necessary in this
> > case.
> >
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  mm/swapfile.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 6c26916e95fd..cf115ea26a20 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> >                    pte_mkold(mk_pte(page, vma->vm_page_prot)));
> >         if (page == swapcache) {
> >                 page_add_anon_rmap(page, vma, addr, false);
> > +               /*
> > +                * Move the page to the active list so it is not
> > +                * immediately swapped out again after swapon.
> > +                */
> > +               activate_page(page);
> 
> Actually I think we could just remove this activate_page() call now that
> Joonsoo's anonymous page workingset series has been merged. The active bit
> will be taken care of by workingset_refault().
> 
> >         } else { /* ksm created a completely new copy */
> >                 page_add_new_anon_rmap(page, vma, addr, false);
> >                 lru_cache_add_active_or_unevictable(page, vma);
> 
> And it looks like the latest Linus tree has already changed this to
> lru_cache_add_inactive_or_unevictable() in commit b518154e59
> ("mm/vmscan: protect the workingset on anonymous LRU").

Oops, apparently my tree is out of date. I'll work on a new version
that removes the superfluous activate_page(). Meanwhile, can you
please take a look at the rest of this series and let me know if
there is anything else that we might want to change? Thank you.



* Re: [PATCH 1/3] mm: don't call activate_page() on new ksm pages
  2020-08-13  7:34   ` Yu Zhao
@ 2020-08-17 20:48     ` Yang Shi
  0 siblings, 0 replies; 4+ messages in thread
From: Yang Shi @ 2020-08-17 20:48 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alexander Duyck, Huang Ying, David Hildenbrand,
	Michal Hocko, Yang Shi, Qian Cai, Mel Gorman, Nicholas Piggin,
	Jérôme Glisse, Hugh Dickins, Linux MM,
	Linux Kernel Mailing List, Joonsoo Kim

On Thu, Aug 13, 2020 at 12:34 AM Yu Zhao <yuzhao@google.com> wrote:
>
> On Wed, Aug 12, 2020 at 10:19:24PM -0700, Yang Shi wrote:
> > On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@google.com> wrote:
> > >
> > > lru_cache_add_active_or_unevictable() already adds new ksm pages to
> > > active lru. Calling activate_page() isn't really necessary in this
> > > case.
> > >
> > > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > > ---
> > >  mm/swapfile.c | 10 +++++-----
> > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > > index 6c26916e95fd..cf115ea26a20 100644
> > > --- a/mm/swapfile.c
> > > +++ b/mm/swapfile.c
> > > @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> > >                    pte_mkold(mk_pte(page, vma->vm_page_prot)));
> > >         if (page == swapcache) {
> > >                 page_add_anon_rmap(page, vma, addr, false);
> > > +               /*
> > > +                * Move the page to the active list so it is not
> > > +                * immediately swapped out again after swapon.
> > > +                */
> > > +               activate_page(page);
> >
> > Actually I think we could just remove this activate_page() call now that
> > Joonsoo's anonymous page workingset series has been merged. The active bit
> > will be taken care of by workingset_refault().
> >
> > >         } else { /* ksm created a completely new copy */
> > >                 page_add_new_anon_rmap(page, vma, addr, false);
> > >                 lru_cache_add_active_or_unevictable(page, vma);
> >
> > And it looks like the latest Linus tree has already changed this to
> > lru_cache_add_inactive_or_unevictable() in commit b518154e59
> > ("mm/vmscan: protect the workingset on anonymous LRU").
>
> Oops, apparently my tree is out of date. I'll work on a new version
> that removes the superfluous activate_page(). Meanwhile, can you
> please take a look at the rest of this series and let me know if
> there is anything else that we might want to change? Thank you.

I took a look at those two patches. For #2 I didn't spot anything
wrong, but I may have missed something. For #3, TBH I don't think the
justification is strong enough: you just moved clearing the PG_waiters
bit to allocation time, and someone could argue that may hurt
allocation latency.


