From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Tejun Heo <tj@kernel.org>
Cc: Alex Shi <alex.shi@linux.alibaba.com>,
Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
Andrey Ryabinin <aryabinin@virtuozzo.com>,
swkhack <swkhack@gmail.com>,
"Potyra, Stefan" <Stefan.Potyra@elektrobit.com>
Subject: [PATCH 06/14] lru/mlock: using per lruvec lock in munlock
Date: Tue, 20 Aug 2019 17:48:29 +0800
Message-ID: <1566294517-86418-7-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1566294517-86418-1-git-send-email-alex.shi@linux.alibaba.com>
This patch takes a separate, per-lruvec lock for each of the pages in
__munlock_pagevec().
It also passes the lruvec into __munlock_isolate_lru_page() as a
parameter, to avoid repeating the lruvec lookup there.
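
For illustration, the locking pattern this patch moves to looks roughly
like the sketch below. This is a sketch, not the literal kernel code:
mem_cgroup_page_lruvec() and sync_lruvec_pgdat() are the helpers
introduced earlier in this series, and munlock_one_page() is a
hypothetical wrapper that exists only for this example.

/*
 * Sketch of the per-page pattern used in __munlock_pagevec() after
 * this patch. Each page resolves its own lruvec, takes only that
 * lruvec's lock, and hands the lruvec down so that
 * __munlock_isolate_lru_page() does not look it up again.
 */
static void munlock_one_page(struct page *page)
{
	pg_data_t *pgdat = page_pgdat(page);
	/* resolve the memcg-specific lruvec for this page */
	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);

	/* lock this lruvec only, not the node-wide lru_lock */
	spin_lock_irq(&lruvec->lru_lock);
	sync_lruvec_pgdat(lruvec, pgdat);

	if (TestClearPageMlocked(page)) {
		/* lruvec is passed in: no repeated lookup under the lock */
		if (__munlock_isolate_lru_page(page, lruvec, false))
			__mod_zone_page_state(page_zone(page), NR_MLOCK, -1);
	}

	spin_unlock_irq(&lruvec->lru_lock);
}

One consequence visible in the diff: since no single lock covers the whole
loop any more, the NR_MLOCK accounting moves from one batched
delta_munlocked update per pagevec to one update per munlocked page.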
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: swkhack <swkhack@gmail.com>
Cc: "Potyra, Stefan" <Stefan.Potyra@elektrobit.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
mm/mlock.c | 35 +++++++++++++++++++----------------
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index 1279684bada0..9915968d490a 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -106,12 +106,10 @@ void mlock_vma_page(struct page *page)
* Isolate a page from LRU with optional get_page() pin.
* Assumes lru_lock already held and page already pinned.
*/
-static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
+static bool __munlock_isolate_lru_page(struct page *page,
+ struct lruvec *lruvec, bool getpage)
{
if (PageLRU(page)) {
- struct lruvec *lruvec;
-
- lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
if (getpage)
get_page(page);
ClearPageLRU(page);
@@ -183,6 +181,9 @@ unsigned int munlock_vma_page(struct page *page)
{
int nr_pages;
pg_data_t *pgdat = page_pgdat(page);
+ struct lruvec *lruvec;
+
+ lruvec = mem_cgroup_page_lruvec(page, pgdat);
/* For try_to_munlock() and to serialize with page migration */
BUG_ON(!PageLocked(page));
@@ -194,7 +195,8 @@ unsigned int munlock_vma_page(struct page *page)
* might otherwise copy PageMlocked to part of the tail pages before
* we clear it in the head page. It also stabilizes hpage_nr_pages().
*/
- spin_lock_irq(&pgdat->lruvec.lru_lock);
+ spin_lock_irq(&lruvec->lru_lock);
+ sync_lruvec_pgdat(lruvec, pgdat);
if (!TestClearPageMlocked(page)) {
/* Potentially, PTE-mapped THP: do not skip the rest PTEs */
@@ -205,15 +207,15 @@ unsigned int munlock_vma_page(struct page *page)
nr_pages = hpage_nr_pages(page);
__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
- if (__munlock_isolate_lru_page(page, true)) {
- spin_unlock_irq(&pgdat->lruvec.lru_lock);
+ if (__munlock_isolate_lru_page(page, lruvec, true)) {
+ spin_unlock_irq(&lruvec->lru_lock);
__munlock_isolated_page(page);
goto out;
}
__munlock_isolation_failed(page);
unlock_out:
- spin_unlock_irq(&pgdat->lruvec.lru_lock);
+ spin_unlock_irq(&lruvec->lru_lock);
out:
return nr_pages - 1;
@@ -291,28 +293,30 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
{
int i;
int nr = pagevec_count(pvec);
- int delta_munlocked = -nr;
struct pagevec pvec_putback;
int pgrescued = 0;
pagevec_init(&pvec_putback);
/* Phase 1: page isolation */
- spin_lock_irq(&zone->zone_pgdat->lruvec.lru_lock);
for (i = 0; i < nr; i++) {
struct page *page = pvec->pages[i];
+ pg_data_t *pgdat = page_pgdat(page);
+ struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
+ spin_lock_irq(&lruvec->lru_lock);
+ sync_lruvec_pgdat(lruvec, pgdat);
if (TestClearPageMlocked(page)) {
/*
* We already have pin from follow_page_mask()
* so we can spare the get_page() here.
*/
- if (__munlock_isolate_lru_page(page, false))
+ if (__munlock_isolate_lru_page(page, lruvec, false)) {
+ __mod_zone_page_state(zone, NR_MLOCK, -1);
+ spin_unlock_irq(&lruvec->lru_lock);
continue;
- else
+ } else
__munlock_isolation_failed(page);
- } else {
- delta_munlocked++;
}
/*
@@ -323,9 +327,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
*/
pagevec_add(&pvec_putback, pvec->pages[i]);
pvec->pages[i] = NULL;
+ spin_unlock_irq(&lruvec->lru_lock);
}
- __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
- spin_unlock_irq(&zone->zone_pgdat->lruvec.lru_lock);
/* Now we can release pins of pages that we are not munlocking */
pagevec_release(&pvec_putback);
--
1.8.3.1
Thread overview: 34+ messages
2019-08-20 9:48 [PATCH 00/14] per memcg lru_lock Alex Shi
2019-08-20 9:48 ` [PATCH 01/14] mm/lru: move pgdat lru_lock into lruvec Alex Shi
2019-08-20 13:40 ` Matthew Wilcox
2019-08-20 14:11 ` Alex Shi
2019-08-20 9:48 ` [PATCH 02/14] lru/memcg: move the lruvec->pgdat sync out lru_lock Alex Shi
2019-08-20 9:48 ` [PATCH 03/14] lru/memcg: using per lruvec lock in un/lock_page_lru Alex Shi
2019-08-26 8:30 ` Konstantin Khlebnikov
2019-08-26 14:16 ` Alex Shi
2019-08-20 9:48 ` [PATCH 04/14] lru/compaction: use per lruvec lock in isolate_migratepages_block Alex Shi
2019-08-20 9:48 ` [PATCH 05/14] lru/huge_page: use per lruvec lock in __split_huge_page Alex Shi
2019-08-20 9:48 ` Alex Shi [this message]
2019-08-20 9:48 ` [PATCH 07/14] lru/swap: using per lruvec lock in page_cache_release Alex Shi
2019-08-20 9:48 ` [PATCH 08/14] lru/swap: uer lruvec lock in activate_page Alex Shi
2019-08-20 9:48 ` [PATCH 09/14] lru/swap: uer per lruvec lock in pagevec_lru_move_fn Alex Shi
2019-08-20 9:48 ` [PATCH 10/14] lru/swap: use per lruvec lock in release_pages Alex Shi
2019-08-20 9:48 ` [PATCH 11/14] lru/vmscan: using per lruvec lock in lists shrinking Alex Shi
2019-08-20 9:48 ` [PATCH 12/14] lru/vmscan: use pre lruvec lock in check_move_unevictable_pages Alex Shi
2019-08-20 9:48 ` [PATCH 13/14] lru/vmscan: using per lruvec lru_lock in get_scan_count Alex Shi
2019-08-20 9:48 ` [PATCH 14/14] mm/lru: fix the comments of lru_lock Alex Shi
2019-08-20 14:00 ` Matthew Wilcox
2019-08-20 14:21 ` Alex Shi
2019-08-20 10:45 ` [PATCH 00/14] per memcg lru_lock Michal Hocko
2019-08-20 16:48 ` Shakeel Butt
2019-08-20 18:24 ` Hugh Dickins
2019-08-21 1:21 ` Alex Shi
2019-08-21 2:00 ` Alex Shi
2019-08-24 1:59 ` Hugh Dickins
2019-08-26 14:35 ` Alex Shi
2019-08-21 18:00 ` Daniel Jordan
2019-08-22 11:56 ` Alex Shi
2019-08-22 15:20 ` Daniel Jordan
2019-08-26 8:39 ` Konstantin Khlebnikov
2019-08-26 14:22 ` Alex Shi
2019-08-26 14:25 ` Alex Shi