From: Alex Shi <alex.shi@linux.alibaba.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Tejun Heo <tj@kernel.org>, Hugh Dickins <hughd@google.com>,
	Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
	Daniel Jordan <daniel.m.jordan@oracle.com>,
	Yang Shi <yang.shi@linux.alibaba.com>,
	Matthew Wilcox <willy@infradead.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	kbuild test robot <lkp@intel.com>, linux-mm <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	cgroups@vger.kernel.org, Shakeel Butt <shakeelb@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>
Subject: Re: [PATCH v16 16/22] mm/mlock: reorder isolation sequence during munlock
Date: Sun, 19 Jul 2020 11:55:45 +0800	[thread overview]
Message-ID: <6e37ee32-c6c5-fcc5-3cad-74f7ae41fb67@linux.alibaba.com> (raw)
In-Reply-To: <CAKgT0Udcry01samXT54RkurNqFKnVmv-686ZFHF+iw4b+12T_A@mail.gmail.com>



On 2020/7/18 4:30 AM, Alexander Duyck wrote:
> On Fri, Jul 10, 2020 at 5:59 PM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>>
>> This patch reorders the isolation steps during munlock, moves the lru
>> lock to guard each page, and unfolds the __munlock_isolate_lru_page()
>> func, in preparation for the lru lock change.
>>
>> __split_huge_page_refcount doesn't exist any more, but we still have to
>> guard PageMlocked and PageLRU for the tail pages in __split_huge_page_tail.
>>
>> [lkp@intel.com: found a sleeping function bug ... at mm/rmap.c]
>> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>  mm/mlock.c | 93 ++++++++++++++++++++++++++++++++++----------------------------
>>  1 file changed, 51 insertions(+), 42 deletions(-)
>>
>> diff --git a/mm/mlock.c b/mm/mlock.c
>> index 228ba5a8e0a5..0bdde88b4438 100644
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -103,25 +103,6 @@ void mlock_vma_page(struct page *page)
>>  }
>>
>>  /*
>> - * Isolate a page from LRU with optional get_page() pin.
>> - * Assumes lru_lock already held and page already pinned.
>> - */
>> -static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
>> -{
>> -       if (TestClearPageLRU(page)) {
>> -               struct lruvec *lruvec;
>> -
>> -               lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
>> -               if (getpage)
>> -                       get_page(page);
>> -               del_page_from_lru_list(page, lruvec, page_lru(page));
>> -               return true;
>> -       }
>> -
>> -       return false;
>> -}
>> -
>> -/*
>>   * Finish munlock after successful page isolation
>>   *
>>   * Page must be locked. This is a wrapper for try_to_munlock()
>> @@ -181,6 +162,7 @@ static void __munlock_isolation_failed(struct page *page)
>>  unsigned int munlock_vma_page(struct page *page)
>>  {
>>         int nr_pages;
>> +       bool clearlru = false;
>>         pg_data_t *pgdat = page_pgdat(page);
>>
>>         /* For try_to_munlock() and to serialize with page migration */
>> @@ -189,32 +171,42 @@ unsigned int munlock_vma_page(struct page *page)
>>         VM_BUG_ON_PAGE(PageTail(page), page);
>>
>>         /*
>> -        * Serialize with any parallel __split_huge_page_refcount() which
>> +        * Serialize split tail pages in __split_huge_page_tail() which
>>          * might otherwise copy PageMlocked to part of the tail pages before
>>          * we clear it in the head page. It also stabilizes hpage_nr_pages().
>>          */
>> +       get_page(page);
> 
> I don't think this get_page() call needs to be up here. It could be
> left down before we delete the page from the LRU list as it is really
> needed to take a reference on the page before we call
> __munlock_isolated_page(), or at least that is the way it looks to me.
> By doing that you can avoid a bunch of cleanup in these exception
> cases.

Uh, it seems unlikely that !page->_refcount happens and we then get into
release_pages(); if so, get_page() could indeed be moved down.
Thanks
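
For reference, moving it down as you suggest would make the isolation
branch look roughly like this (an untested sketch of the patched
munlock_vma_page(), not the actual change):

	if (clearlru) {
		struct lruvec *lruvec;

		/* take the extra pin only once isolation has succeeded */
		get_page(page);
		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
		del_page_from_lru_list(page, lruvec, page_lru(page));
		spin_unlock_irq(&pgdat->lru_lock);
		__munlock_isolated_page(page);
	} else {
		spin_unlock_irq(&pgdat->lru_lock);
		/* no extra pin was taken, so no put_page() needed here */
		__munlock_isolation_failed(page);
	}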

> 
>> +       clearlru = TestClearPageLRU(page);
> 
> I'm not sure I fully understand the reason for moving this here. By
> clearing this flag before you clear Mlocked does this give you some
> sort of extra protection? I don't see how since Mlocked doesn't
> necessarily imply the page is on LRU.
> 

The comments above give the reason for the lru_lock usage:
>> +        * Serialize split tail pages in __split_huge_page_tail() which
>>          * might otherwise copy PageMlocked to part of the tail pages before
>>          * we clear it in the head page. It also stabilizes hpage_nr_pages().

Looking into __split_huge_page_tail(), there is a tiny gap between the
tail page getting PG_mlocked and it being added onto the lru list.
The TestClearPageLRU() can block memcg changes of the page by stopping
isolate_lru_page().
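
Roughly, that window is here (heavily simplified from
__split_huge_page_tail() in mm/huge_memory.c, for illustration only):

	static void __split_huge_page_tail(struct page *head, int tail,
			struct lruvec *lruvec, struct list_head *list)
	{
		struct page *page_tail = head + tail;

		/* the tail page inherits flags, PG_mlocked among them */
		page_tail->flags |= (head->flags &
				((1L << PG_mlocked) /* | ... more flags */));

		/* <-- gap: PG_mlocked is already visible on the tail page,
		 * but the tail page is not on any lru list yet */

		lru_add_page_tail(head, page_tail, lruvec, list);
	}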


>>         spin_lock_irq(&pgdat->lru_lock);
>>
>>         if (!TestClearPageMlocked(page)) {
>> -               /* Potentially, PTE-mapped THP: do not skip the rest PTEs */
>> -               nr_pages = 1;
>> -               goto unlock_out;
>> +               if (clearlru)
>> +                       SetPageLRU(page);
>> +               /*
>> +                * Potentially, PTE-mapped THP: do not skip the rest PTEs
>> +                * Reuse lock as memory barrier for release_pages racing.
>> +                */
>> +               spin_unlock_irq(&pgdat->lru_lock);
>> +               put_page(page);
>> +               return 0;
>>         }
>>
>>         nr_pages = hpage_nr_pages(page);
>>         __mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
>>
>> -       if (__munlock_isolate_lru_page(page, true)) {
>> +       if (clearlru) {
>> +               struct lruvec *lruvec;
>> +
> 
> You could just place the get_page() call here.
> 
>> +               lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
>> +               del_page_from_lru_list(page, lruvec, page_lru(page));
>>                 spin_unlock_irq(&pgdat->lru_lock);
>>                 __munlock_isolated_page(page);
>> -               goto out;
>> +       } else {
>> +               spin_unlock_irq(&pgdat->lru_lock);
>> +               put_page(page);
>> +               __munlock_isolation_failed(page);
> 
> If you move the get_page() as I suggested above there wouldn't be a
> need for the put_page(). It then becomes possible to simplify the code
> a bit by merging the unlock paths and doing an if/else with the
> __munlock functions like so:
> if (clearlru) {
>     ...
>     del_page_from_lru..
> }
> 
> spin_unlock_irq()
> 
> if (clearlru)
>     __munlock_isolated_page();
> else
>     __munlock_isolation_failed();
> 
>>         }
>> -       __munlock_isolation_failed(page);
>> -
>> -unlock_out:
>> -       spin_unlock_irq(&pgdat->lru_lock);
>>
>> -out:
>>         return nr_pages - 1;
>>  }
>>
>> @@ -297,34 +289,51 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>>         pagevec_init(&pvec_putback);
>>
>>         /* Phase 1: page isolation */
>> -       spin_lock_irq(&zone->zone_pgdat->lru_lock);
>>         for (i = 0; i < nr; i++) {
>>                 struct page *page = pvec->pages[i];
>> +               struct lruvec *lruvec;
>> +               bool clearlru;
>>
>> -               if (TestClearPageMlocked(page)) {
>> -                       /*
>> -                        * We already have pin from follow_page_mask()
>> -                        * so we can spare the get_page() here.
>> -                        */
>> -                       if (__munlock_isolate_lru_page(page, false))
>> -                               continue;
>> -                       else
>> -                               __munlock_isolation_failed(page);
>> -               } else {
>> +               clearlru = TestClearPageLRU(page);
>> +               spin_lock_irq(&zone->zone_pgdat->lru_lock);
> 
> I still don't see what you are gaining by moving the bit test up to
> this point. Seems like it would be better left below with the lock
> just being used to prevent a possible race while you are pulling the
> page out of the LRU list.
> 

The same reason as the comments above mentioned: the
__split_huge_page_tail() issue.

>> +
>> +               if (!TestClearPageMlocked(page)) {
>>                         delta_munlocked++;
>> +                       if (clearlru)
>> +                               SetPageLRU(page);
>> +                       goto putback;
>> +               }
>> +
>> +               if (!clearlru) {
>> +                       __munlock_isolation_failed(page);
>> +                       goto putback;
>>                 }
> 
> With the other function you were processing this outside of the lock,
> here you are doing it inside. It would probably make more sense here
> to follow similar logic and take care of the del_page_from_lru_list
> if clearlru is set, unlock, and then if clearlru is set continue, else
> track the isolation failure. That way you can avoid having to use as
> many jump labels.
> 
>>                 /*
>> +                * Isolate this page.
>> +                * We already have pin from follow_page_mask()
>> +                * so we can spare the get_page() here.
>> +                */
>> +               lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
>> +               del_page_from_lru_list(page, lruvec, page_lru(page));
>> +               spin_unlock_irq(&zone->zone_pgdat->lru_lock);
>> +               continue;
>> +
>> +               /*
>>                  * We won't be munlocking this page in the next phase
>>                  * but we still need to release the follow_page_mask()
>>                  * pin. We cannot do it under lru_lock however. If it's
>>                  * the last pin, __page_cache_release() would deadlock.
>>                  */
>> +putback:
>> +               spin_unlock_irq(&zone->zone_pgdat->lru_lock);
>>                 pagevec_add(&pvec_putback, pvec->pages[i]);
>>                 pvec->pages[i] = NULL;
>>         }
>> +       /* temporarily disable irq, will be removed later */
>> +       local_irq_disable();
>>         __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
>> -       spin_unlock_irq(&zone->zone_pgdat->lru_lock);
>> +       local_irq_enable();
>>
>>         /* Now we can release pins of pages that we are not munlocking */
>>         pagevec_release(&pvec_putback);
>> --
>> 1.8.3.1
>>
>>
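
As a footnote, the phase 1 rework you describe above would read roughly
like this sketch (untested, just transcribing your outline):

	for (i = 0; i < nr; i++) {
		struct page *page = pvec->pages[i];
		struct lruvec *lruvec;
		bool clearlru, mlocked;

		clearlru = TestClearPageLRU(page);
		spin_lock_irq(&zone->zone_pgdat->lru_lock);
		mlocked = TestClearPageMlocked(page);
		if (!mlocked) {
			delta_munlocked++;
			if (clearlru)
				SetPageLRU(page);
		} else if (clearlru) {
			/* pin from follow_page_mask() spares a get_page() */
			lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
			del_page_from_lru_list(page, lruvec, page_lru(page));
		}
		spin_unlock_irq(&zone->zone_pgdat->lru_lock);

		if (mlocked && clearlru)
			continue;	/* isolated; munlock in phase 2 */
		if (mlocked)		/* Mlocked but not on LRU */
			__munlock_isolation_failed(page);
		pagevec_add(&pvec_putback, pvec->pages[i]);
		pvec->pages[i] = NULL;
	}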
