From: Hugh Dickins <hughd@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Alistair Popple <apopple@nvidia.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Rik van Riel <riel@surriel.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Yu Zhao <yuzhao@google.com>, Greg Thelen <gthelen@google.com>,
	Shakeel Butt <shakeelb@google.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 06/13] mm/munlock: maintain page->mlock_count while unevictable
Date: Sun, 13 Feb 2022 22:28:14 -0800 (PST)
Message-ID: <db241e60-864e-b238-d83a-f822f87aa6df@google.com>
In-Reply-To: <0e2dbdf3-f831-abbb-5ca8-02c8d1ab1a01@suse.cz>

On Fri, 11 Feb 2022, Vlastimil Babka wrote:
> On 2/6/22 22:40, Hugh Dickins wrote:
> > @@ -72,19 +91,40 @@ void mlock_page(struct page *page)
> >   */
> >  void munlock_page(struct page *page)
> >  {
> > +	struct lruvec *lruvec;
> > +	int nr_pages = thp_nr_pages(page);
> > +
> >  	VM_BUG_ON_PAGE(PageTail(page), page);
> >  
> > +	lock_page_memcg(page);
> 
> Hm this (and unlock_page_memcg() below) didn't catch my attention until I
> see patch 10/13 removes it again. It also AFAICS wasn't present in the code
> removed by patch 1. Am I missing something or it wasn't necessary to add it
> in the first place?

Something is needed to stabilize page->memcg (whoops, I'm way out of date:
folio->memcg_data) before trying to get the lock on the relevant lruvec.

In commit_charge(), Johannes gives us a choice between four tools:
	 * - the page lock
	 * - LRU isolation
	 * - lock_page_memcg()
	 * - exclusive reference
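
To be concrete, the shape of the pattern relied on here (my sketch,
simplified: the mlock_count and vmstat handling from the hunk quoted
below is omitted) is roughly:

	lock_page_memcg(page);	/* pin the page's memcg binding */
	lruvec = folio_lruvec_lock_irq(page_folio(page));
	/* ... adjust Mlocked / mlock_count / LRU placement ... */
	unlock_page_lruvec_irq(lruvec);
	unlock_page_memcg(page);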

The original code was using TestClearPageLRU inside isolate_lru_page()
to do it (also happened to have the page lock, but one tool is enough).

But I chose to use lock_page_memcg() at this stage, because we want to
do the TestClearPageMlocked part of the business even when !PageLRU;
and I don't entirely love the TestClearPageLRU method, since one will
fail if two try it concurrently.
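
For contrast, the TestClearPageLRU way of stabilizing it looks roughly
like this (again my sketch, simplified from how isolate_lru_page() and
pagevec_lru_move_fn() use it); if two racers arrive together, only one
sees PageLRU set, and the other has to skip the work or fall back:

	if (TestClearPageLRU(page)) {
		/* off the LRU: page cannot change memcg under us */
		lruvec = folio_lruvec_lock_irq(page_folio(page));
		/* ... move or delete the page on its LRU list ... */
		SetPageLRU(page);
		unlock_page_lruvec_irq(lruvec);
	}
	/* else: someone else owns the LRU isolation; nothing done here */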

Later, when doing the pagevec implementation, it seemed to become
more natural to use the TestClearPageLRU method - because that's how
pagevec_lru_move_fn() does it, or did I have a stronger reason for
making a different choice at that stage?  Maybe: I might have been
trying to keep the different functions as similar as possible.

But really we have too many ways to do that same thing, and my
choices may have been arbitrary, according to mood.  (When Alex Shi
popularized the TestClearPageLRU method, I did devise a patch which
used the lock_page_memcg() method throughout instead; but it was not
obviously better, so I didn't waste anyone else's time with it.)

I'm afraid that looking here again has led me to wonder whether
__munlock_page() in the final (10/13 pagevec) implementation is correct
to be using the non-atomic __ operations in its !isolated case.  But I'll
have to come back and think about that another time, must push forward
tonight.

Hugh

> 
> > +	lruvec = folio_lruvec_lock_irq(page_folio(page));
> > +	if (PageLRU(page) && PageUnevictable(page)) {
> > +		/* Then mlock_count is maintained, but might undercount */
> > +		if (page->mlock_count)
> > +			page->mlock_count--;
> > +		if (page->mlock_count)
> > +			goto out;
> > +	}
> > +	/* else assume that was the last mlock: reclaim will fix it if not */
> > +
> >  	if (TestClearPageMlocked(page)) {
> > -		int nr_pages = thp_nr_pages(page);
> > -
> > -		mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> > -		if (!isolate_lru_page(page)) {
> > -			putback_lru_page(page);
> > -			count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
> > -		} else if (PageUnevictable(page)) {
> > -			count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
> > -		}
> > +		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> > +		if (PageLRU(page) || !PageUnevictable(page))
> > +			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
> > +		else
> > +			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
> > +	}
> > +
> > +	/* page_evictable() has to be checked *after* clearing Mlocked */
> > +	if (PageLRU(page) && PageUnevictable(page) && page_evictable(page)) {
> > +		del_page_from_lru_list(page, lruvec);
> > +		ClearPageUnevictable(page);
> > +		add_page_to_lru_list(page, lruvec);
> > +		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
> >  	}
> > +out:
> > +	unlock_page_lruvec_irq(lruvec);
> > +	unlock_page_memcg(page);
> >  }
> >  
> >  /*

Thread overview: 49+ messages
2022-02-06 21:27 [PATCH 00/13] mm/munlock: rework of mlock+munlock page handling Hugh Dickins
2022-02-06 21:30 ` [PATCH 01/13] mm/munlock: delete page_mlock() and all its works Hugh Dickins
2022-02-09 18:31   ` Vlastimil Babka
2022-02-09 22:28     ` Hugh Dickins
2022-02-10  9:52       ` Vlastimil Babka
2022-02-14  6:52         ` Hugh Dickins
2022-02-14  6:59           ` [PATCH v2 " Hugh Dickins
2022-02-14 10:07             ` Vlastimil Babka
2022-02-06 21:32 ` [PATCH 02/13] mm/munlock: delete FOLL_MLOCK and FOLL_POPULATE Hugh Dickins
2022-02-10 11:35   ` Vlastimil Babka
2022-02-06 21:34 ` [PATCH 03/13] mm/munlock: delete munlock_vma_pages_all(), allow oomreap Hugh Dickins
2022-02-10 15:30   ` Vlastimil Babka
2022-02-06 21:36 ` [PATCH 04/13] mm/munlock: rmap call mlock_vma_page() munlock_vma_page() Hugh Dickins
2022-02-11 10:29   ` Vlastimil Babka
2022-02-14  7:05     ` [PATCH v2 " Hugh Dickins
2022-02-06 21:38 ` [PATCH 05/13] mm/munlock: replace clear_page_mlock() by final clearance Hugh Dickins
2022-02-11 11:42   ` Vlastimil Babka
2022-02-06 21:38 ` [PATCH 00/13] mm/munlock: rework of mlock+munlock page handling Matthew Wilcox
2022-02-07 18:20   ` Hugh Dickins
2022-02-06 21:40 ` [PATCH 06/13] mm/munlock: maintain page->mlock_count while unevictable Hugh Dickins
2022-02-11 12:27   ` Vlastimil Babka
2022-02-14  5:42     ` Hugh Dickins
2022-02-11 18:07   ` Vlastimil Babka
2022-02-14  6:28     ` Hugh Dickins [this message]
2022-02-06 21:42 ` [PATCH 07/13] mm/munlock: mlock_pte_range() when mlocking or munlocking Hugh Dickins
2022-02-11 16:45   ` Vlastimil Babka
2022-02-14  6:32     ` Hugh Dickins
2022-02-14  7:09       ` [PATCH v2 " Hugh Dickins
2022-02-06 21:43 ` [PATCH 08/13] mm/migrate: __unmap_and_move() push good newpage to LRU Hugh Dickins
2022-02-11 17:19   ` Vlastimil Babka
2022-02-06 21:45 ` [PATCH 09/13] mm/munlock: delete smp_mb() from __pagevec_lru_add_fn() Hugh Dickins
2022-02-11 17:43   ` Vlastimil Babka
2022-02-06 21:47 ` [PATCH 10/13] mm/munlock: mlock_page() munlock_page() batch by pagevec Hugh Dickins
2022-02-09  8:15   ` Geert Uytterhoeven
2022-02-09 15:45     ` Hugh Dickins
2022-02-11 18:26   ` Vlastimil Babka
2022-02-14  7:15     ` [PATCH v2 " Hugh Dickins
2022-02-06 21:49 ` [PATCH 11/13] mm/munlock: page migration needs mlock pagevec drained Hugh Dickins
2022-02-11 18:49   ` Vlastimil Babka
2022-02-14  5:34     ` Hugh Dickins
2022-02-14  7:17       ` [PATCH v2 " Hugh Dickins
2022-02-06 21:51 ` [PATCH 12/13] mm/thp: collapse_file() do try_to_unmap(TTU_BATCH_FLUSH) Hugh Dickins
2022-02-06 21:53 ` [PATCH 13/13] mm/thp: shrink_page_list() avoid splitting VM_LOCKED THP Hugh Dickins
     [not found] ` <20220207033518.2602-1-hdanton@sina.com>
2022-02-07 18:46   ` [PATCH 07/13] mm/munlock: mlock_pte_range() when mlocking or munlocking Hugh Dickins
2022-02-09 15:35 ` [PATCH 00/13] mm/munlock: rework of mlock+munlock page handling Michal Hocko
2022-02-09 16:21   ` Hugh Dickins
2022-02-09 21:01     ` Michal Hocko
2022-02-09 22:59       ` Hugh Dickins
2022-02-10  7:49         ` Michal Hocko
