From: Alex Shi <alex.shi@linux.alibaba.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Tejun Heo <tj@kernel.org>, Hugh Dickins <hughd@google.com>,
Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
Daniel Jordan <daniel.m.jordan@oracle.com>,
Yang Shi <yang.shi@linux.alibaba.com>,
Matthew Wilcox <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
kbuild test robot <lkp@intel.com>, linux-mm <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
cgroups@vger.kernel.org, Shakeel Butt <shakeelb@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Wei Yang <richard.weiyang@gmail.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>
Subject: Re: [PATCH v16 16/22] mm/mlock: reorder isolation sequence during munlock
Date: Tue, 21 Jul 2020 17:26:34 +0800 [thread overview]
Message-ID: <7a931661-e096-29ee-d97d-8bf96ba6c972@linux.alibaba.com> (raw)
In-Reply-To: <CAKgT0Ue2i96gL=Tqx_wFmsBj_b1cnM1KQHh8b+oYr5iRg0Tcpw@mail.gmail.com>
On 2020/7/21 2:51 AM, Alexander Duyck wrote:
>> Look into __split_huge_page_tail(): there is a tiny gap between a tail
>> page getting PG_mlocked and it being added to the lru list.
>> TestClearPageLRU could block memcg changes of the page by stopping
>> isolate_lru_page.
> I get that there is a gap between the two in __split_huge_page_tail.
> My concern is more the fact that you are pulling the bit testing
> outside of the locked region when I don't think it needs to be. The
> lock is being taken unconditionally, so why pull the testing out when
> you could just do it inside the lock anyway? My worry is that you
> might be addressing __split_huge_page_tail but in the process you
> might be introducing a new race with something like
> __pagevec_lru_add_fn.
Yes, the page may be interfered with by clear_page_mlock() and added to the
wrong lru list.
>
> If I am not mistaken the Mlocked flag can still be cleared regardless
> of if the LRU bit is set or not. So you can still clear the LRU bit
> before you pull the page out of the list, but it can be done after
> clearing the Mlocked flag instead of before you have even taken the
> LRU lock. In that way it would function more similar to how you
> handled pagevec_lru_move_fn() as all this function is really doing is
> moving the pages out of the unevictable list into one of the other LRU
> lists anyway since the Mlocked flag was cleared.
>
Without the lru bit as a guard, the page may be moved between memcgs.
Luckily, lock_page() stops mem_cgroup_move_account(), which backs off
with an EBUSY result. The whole new change would look like the following;
I will test it and resend.
Thanks!
Alex
@@ -182,7 +179,7 @@ static void __munlock_isolation_failed(struct page *page)
unsigned int munlock_vma_page(struct page *page)
{
int nr_pages;
- pg_data_t *pgdat = page_pgdat(page);
+ struct lruvec *lruvec;
/* For try_to_munlock() and to serialize with page migration */
BUG_ON(!PageLocked(page));
@@ -190,11 +187,11 @@ unsigned int munlock_vma_page(struct page *page)
VM_BUG_ON_PAGE(PageTail(page), page);
/*
- * Serialize with any parallel __split_huge_page_refcount() which
+ * Serialize split tail pages in __split_huge_page_tail() which
* might otherwise copy PageMlocked to part of the tail pages before
* we clear it in the head page. It also stabilizes hpage_nr_pages().
*/
- spin_lock_irq(&pgdat->lru_lock);
+ lruvec = lock_page_lruvec_irq(page);
if (!TestClearPageMlocked(page)) {
/* Potentially, PTE-mapped THP: do not skip the rest PTEs */
@@ -205,15 +202,15 @@ unsigned int munlock_vma_page(struct page *page)
nr_pages = hpage_nr_pages(page);
__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
- if (__munlock_isolate_lru_page(page, true)) {
- spin_unlock_irq(&pgdat->lru_lock);
+ if (__munlock_isolate_lru_page(page, lruvec, true)) {
+ unlock_page_lruvec_irq(lruvec);
__munlock_isolated_page(page);
goto out;
}
__munlock_isolation_failed(page);
unlock_out:
- spin_unlock_irq(&pgdat->lru_lock);
+ unlock_page_lruvec_irq(lruvec);
out:
return nr_pages - 1;
@@ -293,23 +290,27 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
int nr = pagevec_count(pvec);
int delta_munlocked = -nr;
struct pagevec pvec_putback;
+ struct lruvec *lruvec = NULL;
int pgrescued = 0;
pagevec_init(&pvec_putback);
/* Phase 1: page isolation */
- spin_lock_irq(&zone->zone_pgdat->lru_lock);
for (i = 0; i < nr; i++) {
struct page *page = pvec->pages[i];
+ /* block memcg change in mem_cgroup_move_account */
+ lock_page(page);
+ lruvec = relock_page_lruvec_irq(page, lruvec);
if (TestClearPageMlocked(page)) {
/*
* We already have pin from follow_page_mask()
* so we can spare the get_page() here.
*/
- if (__munlock_isolate_lru_page(page, false))
+ if (__munlock_isolate_lru_page(page, lruvec, false)) {
+ unlock_page(page);
continue;
- else
+ } else
__munlock_isolation_failed(page);
} else {
delta_munlocked++;
@@ -321,11 +322,14 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
* pin. We cannot do it under lru_lock however. If it's
* the last pin, __page_cache_release() would deadlock.
*/
+ unlock_page(page);
pagevec_add(&pvec_putback, pvec->pages[i]);
pvec->pages[i] = NULL;
}
- __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
- spin_unlock_irq(&zone->zone_pgdat->lru_lock);
+ if (lruvec) {
+ __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
+ unlock_page_lruvec_irq(lruvec);
+ }
/* Now we can release pins of pages that we are not munlocking */
pagevec_release(&pvec_putback);