From: Hugh Dickins <hughd@google.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>,
	Shakeel Butt <shakeelb@google.com>,
	Stephen Rothwell <sfr@canb.auug.org.au>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: remove lock_page_memcg() from rmap
Date: Wed, 23 Nov 2022 22:03:00 -0800 (PST)
Message-ID: <16dd09c-bb6c-6058-2b3-7559b5aefe9@google.com>
In-Reply-To: <20221123181838.1373440-1-hannes@cmpxchg.org>

On Wed, 23 Nov 2022, Johannes Weiner wrote:

> rmap changes (mapping and unmapping) of a page currently take
> lock_page_memcg() to serialize 1) update of the mapcount and the
> cgroup mapped counter with 2) cgroup moving the page and updating the
> old cgroup and the new cgroup counters based on page_mapped().
> 
> Before b2052564e66d ("mm: memcontrol: continue cache reclaim from
> offlined groups"), we used to reassign all pages that could be found
> on a cgroup's LRU list on deletion - something that rmap didn't
> naturally serialize against. Since that commit, however, the only
> pages that get moved are those mapped into page tables of a task
> that's being migrated. In that case, the pte lock is always held (and
> we know the page is mapped), which keeps rmap changes at bay already.
> 
> The additional lock_page_memcg() by rmap is redundant. Remove it.
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Thank you, I love it: but with sorrow and shame, NAK to this version.

I was gearing up to rush in the crash fix at the bottom, when testing
showed that the new VM_WARN_ON_ONCE(!folio_mapped(folio)) actually hits.

So I've asked Stephen to drop this mm-unstable commit from -next for
tonight, while we think about what more is needed.

I was disbelieving when I saw the warning, and couldn't understand it at
all. But a look at get_mctgt_type() shatters my illusion: it doesn't just
return a page for pte_present(ptent), it goes off looking up swap
cache and page cache; plus I've no idea whether an MC_TARGET_DEVICE
page would appear as folio_mapped() or not.
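
To spell out what I mean, the relevant shape of get_mctgt_type() is
roughly this (paraphrased from memory, not the exact code, and with the
device-private handling omitted):

	static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
			unsigned long addr, pte_t ptent, union mc_target *target)
	{
		struct page *page = NULL;
		swp_entry_t ent = { .val = 0 };

		if (pte_present(ptent))
			page = mc_handle_present_pte(vma, addr, ptent);
		else if (is_swap_pte(ptent))
			/* swap cache lookup: page need not be mapped anywhere */
			page = mc_handle_swap_pte(vma, ptent, &ent);
		else if (pte_none(ptent))
			/* page cache lookup: page need not be mapped anywhere */
			page = mc_handle_file_pte(vma, addr, ptent);
		...
	}

So mem_cgroup_move_account() can quite legitimately be handed a page
which is not mapped anywhere at all.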

Does that mean that we just have to reinstate the folio_mapped() checks
in mm/memcontrol.c, i.e. revert all the mm/memcontrol.c changes from the
commit?  Or does it invalidate the whole project to remove
lock_page_memcg() from mm/rmap.c?
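
If it's just the former, I suppose the minimal fix is to put the
folio_mapped() guards back into mem_cgroup_move_account(), while still
leaving lock_page_memcg() out of mm/rmap.c - something like this
(untested, just to show the shape):

	if (folio_test_anon(folio)) {
		if (folio_mapped(folio)) {
			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
			...
		}
	} else {
		...
		if (folio_mapped(folio)) {
			__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
			__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
		}
	}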

Too disappointed to think about it more tonight :-(
Hugh


> ---
>  mm/memcontrol.c | 35 ++++++++++++++++++++---------------
>  mm/rmap.c       | 12 ------------
>  2 files changed, 20 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 23750cec0036..52b86ca7a78e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5676,7 +5676,10 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
>   * @from: mem_cgroup which the page is moved from.
>   * @to:	mem_cgroup which the page is moved to. @from != @to.
>   *
> - * The caller must make sure the page is not on LRU (isolate_page() is useful.)
> + * This function acquires folio_lock() and folio_lock_memcg(). The
> + * caller must exclude all other possible ways of accessing
> + * page->memcg, such as LRU isolation (to lock out isolation) and
> + * having the page mapped and pte-locked (to lock out rmap).
>   *
>   * This function doesn't do "charge" to new cgroup and doesn't do "uncharge"
>   * from old cgroup.
> @@ -5696,6 +5699,13 @@ static int mem_cgroup_move_account(struct page *page,
>  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
>  	VM_BUG_ON(compound && !folio_test_large(folio));
>  
> +	/*
> +	 * We're only moving pages mapped into the moving process's
> +	 * page tables. The caller's pte lock prevents rmap from
> +	 * removing the NR_x_MAPPED state while we transfer it.
> +	 */
> +	VM_WARN_ON_ONCE(!folio_mapped(folio));
> +
>  	/*
>  	 * Prevent mem_cgroup_migrate() from looking at
>  	 * page's memory cgroup of its source page while we change it.
> @@ -5715,30 +5725,25 @@ static int mem_cgroup_move_account(struct page *page,
>  	folio_memcg_lock(folio);
>  
>  	if (folio_test_anon(folio)) {
> -		if (folio_mapped(folio)) {
> -			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
> -			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
> -			if (folio_test_transhuge(folio)) {
> -				__mod_lruvec_state(from_vec, NR_ANON_THPS,
> -						   -nr_pages);
> -				__mod_lruvec_state(to_vec, NR_ANON_THPS,
> -						   nr_pages);
> -			}
> +		__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
> +		__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
> +
> +		if (folio_test_transhuge(folio)) {
> +			__mod_lruvec_state(from_vec, NR_ANON_THPS, -nr_pages);
> +			__mod_lruvec_state(to_vec, NR_ANON_THPS, nr_pages);
>  		}
>  	} else {
>  		__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
>  		__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
>  
> +		__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
> +		__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
> +
>  		if (folio_test_swapbacked(folio)) {
>  			__mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages);
>  			__mod_lruvec_state(to_vec, NR_SHMEM, nr_pages);
>  		}
>  
> -		if (folio_mapped(folio)) {
> -			__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
> -			__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
> -		}
> -
>  		if (folio_test_dirty(folio)) {
>  			struct address_space *mapping = folio_mapping(folio);
>  
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 459dc1c44d8a..11a4894158db 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1222,9 +1222,6 @@ void page_add_anon_rmap(struct page *page,
>  	bool compound = flags & RMAP_COMPOUND;
>  	bool first = true;
>  
> -	if (unlikely(PageKsm(page)))
> -		lock_page_memcg(page);
> -
>  	/* Is page being mapped by PTE? Is this its first map to be added? */
>  	if (likely(!compound)) {
>  		first = atomic_inc_and_test(&page->_mapcount);
> @@ -1254,9 +1251,6 @@ void page_add_anon_rmap(struct page *page,
>  	if (nr)
>  		__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
>  
> -	if (unlikely(PageKsm(page)))
> -		unlock_page_memcg(page);
> -
>  	/* address might be in next vma when migration races vma_adjust */
>  	else if (first)
>  		__page_set_anon_rmap(page, vma, address,
> @@ -1321,7 +1315,6 @@ void page_add_file_rmap(struct page *page,
>  	bool first;
>  
>  	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
> -	lock_page_memcg(page);
>  
>  	/* Is page being mapped by PTE? Is this its first map to be added? */
>  	if (likely(!compound)) {
> @@ -1349,7 +1342,6 @@ void page_add_file_rmap(struct page *page,
>  			NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
>  	if (nr)
>  		__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
> -	unlock_page_memcg(page);
>  
>  	mlock_vma_page(page, vma, compound);
>  }
> @@ -1378,8 +1370,6 @@ void page_remove_rmap(struct page *page,
>  		return;
>  	}
>  
> -	lock_page_memcg(page);
> -
>  	/* Is page being unmapped by PTE? Is this its last map to be removed? */
>  	if (likely(!compound)) {
>  		last = atomic_add_negative(-1, &page->_mapcount);
> @@ -1427,8 +1417,6 @@ void page_remove_rmap(struct page *page,
>  	 * and remember that it's only reliable while mapped.
>  	 */
>  
> -	unlock_page_memcg(page);
> -
>  	munlock_vma_page(page, vma, compound);
>  }
>  
> -- 
> 2.38.1

[PATCH] mm: remove lock_page_memcg() from rmap - fix

Blame me for the hidden "else", which now does the wrong thing, leaving
the page's anon_vma unset, then hitting VM_BUG_ON before do_swap_page()'s
set_pte_at().
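
Putting the removed lines from the original hunk back together, the
pre-patch code read:

	if (unlikely(PageKsm(page)))
		unlock_page_memcg(page);

	/* address might be in next vma when migration races vma_adjust */
	else if (first)
		__page_set_anon_rmap(page, vma, address,
				     !!(flags & RMAP_EXCLUSIVE));
	else
		__page_check_anon_rmap(page, vma, address);

Deleting the PageKsm() branch left that "else if" quietly attached to the
preceding "if (nr)": so exactly when a page had just been mapped (nr
nonzero), __page_set_anon_rmap() was skipped.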

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/rmap.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 11a4894158db..5a8d27fdc644 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1251,13 +1251,14 @@ void page_add_anon_rmap(struct page *page,
 	if (nr)
 		__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
 
-	/* address might be in next vma when migration races vma_adjust */
-	else if (first)
-		__page_set_anon_rmap(page, vma, address,
-				     !!(flags & RMAP_EXCLUSIVE));
-	else
-		__page_check_anon_rmap(page, vma, address);
-
+	if (!PageKsm(page)) {
+		/* address may be in next vma if migration races vma_adjust */
+		if (first)
+			__page_set_anon_rmap(page, vma, address,
+					     !!(flags & RMAP_EXCLUSIVE));
+		else
+			__page_check_anon_rmap(page, vma, address);
+	}
 	mlock_vma_page(page, vma, compound);
 }
 
-- 
2.35.3

Thread overview: 15+ messages
2022-11-23 18:18 [PATCH] mm: remove lock_page_memcg() from rmap Johannes Weiner
2022-11-23 18:34 ` Shakeel Butt
2022-11-24  6:03 ` Hugh Dickins [this message]
2022-11-28 16:59   ` Johannes Weiner
2022-11-29 19:08     ` Johannes Weiner
2022-11-29 19:23       ` Linus Torvalds
2022-11-29 19:42       ` Shakeel Butt
2022-11-30  7:33       ` Hugh Dickins
2022-11-30 16:42         ` Shakeel Butt
2022-11-30 17:36           ` Hugh Dickins
2022-11-30 22:30             ` Johannes Weiner
2022-12-01  0:13               ` Hugh Dickins
2022-12-01 15:52                 ` Johannes Weiner
2022-12-01 19:28                   ` Hugh Dickins
2022-11-30 12:50       ` Michal Hocko
