linux-kernel.vger.kernel.org archive mirror
From: Johannes Weiner <hannes@cmpxchg.org>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
	khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
	yang.shi@linux.alibaba.com, willy@infradead.org,
	shakeelb@google.com, "Michal Hocko" <mhocko@kernel.org>,
	"Vladimir Davydov" <vdavydov.dev@gmail.com>,
	"Roman Gushchin" <guro@fb.com>,
	"Chris Down" <chris@chrisdown.name>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Vlastimil Babka" <vbabka@suse.cz>, "Qian Cai" <cai@lca.pw>,
	"Andrey Ryabinin" <aryabinin@virtuozzo.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Andrea Arcangeli" <aarcange@redhat.com>,
	"David Rientjes" <rientjes@google.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	swkhack <swkhack@gmail.com>,
	"Potyra, Stefan" <Stefan.Potyra@elektrobit.com>,
	"Mike Rapoport" <rppt@linux.vnet.ibm.com>,
	"Stephen Rothwell" <sfr@canb.auug.org.au>,
	"Colin Ian King" <colin.king@canonical.com>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Mauro Carvalho Chehab" <mchehab+samsung@kernel.org>,
	"Peng Fan" <peng.fan@nxp.com>,
	"Nikolay Borisov" <nborisov@suse.com>,
	"Ira Weiny" <ira.weiny@intel.com>,
	"Kirill Tkhai" <ktkhai@virtuozzo.com>,
	"Yafang Shao" <laoar.shao@gmail.com>
Subject: Re: [PATCH v4 3/9] mm/lru: replace pgdat lru_lock with lruvec lock
Date: Thu, 21 Nov 2019 17:06:13 -0500	[thread overview]
Message-ID: <20191121220613.GB487872@cmpxchg.org> (raw)
In-Reply-To: <bcf6a952-5b92-50ad-cfc1-f4d9f8f63172@linux.alibaba.com>

On Wed, Nov 20, 2019 at 07:41:44PM +0800, Alex Shi wrote:
> On 2019/11/20 at 00:04, Johannes Weiner wrote:
> >> @@ -1246,6 +1245,46 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
> >>  	return lruvec;
> >>  }
> >>  
> >> +struct lruvec *lock_page_lruvec_irq(struct page *page,
> >> +					struct pglist_data *pgdat)
> >> +{
> >> +	struct lruvec *lruvec;
> >> +
> >> +again:
> >> +	rcu_read_lock();
> >> +	lruvec = mem_cgroup_page_lruvec(page, pgdat);
> >> +	spin_lock_irq(&lruvec->lru_lock);
> >> +	rcu_read_unlock();
> > The spinlock doesn't prevent the lruvec from being freed.
> > 
> > You deleted the rules from the mem_cgroup_page_lruvec() documentation,
> > but they still apply: if the page is already !PageLRU() by the time
> > you get here, it could get reclaimed or migrated to another cgroup,
> > and that can free the memcg/lruvec. Merely having the lru_lock held
> > does not prevent this.
> 
> 
> Forgive my ignorance, but I still don't understand the details of the
> unsafe lruvec here. From my limited view, spin_lock_irq() (which embeds
> a preempt_disable()) blocks RCU grace periods and thus keeps all memcgs
> alive until preemption is re-enabled at unlock time. Is that right?
> If so, even if page->mem_cgroup is migrated to another cgroup, both the
> new and the old cgroup should still be alive here.

You are right about the freeing part, I missed this. And I should have
read this email here before sending out my "fix" to the current code;
thankfully Hugh reiterated my mistake on that thread. My apologies.

But I still don't understand how the moving part is safe. You look up
the lruvec optimistically, lock it, then verify the lookup. What keeps
page->mem_cgroup from changing after you verified it?

lock_page_lruvec():				mem_cgroup_move_account():
again:
rcu_read_lock()
lruvec = page->mem_cgroup->lruvec
						isolate_lru_page()
spin_lock_irq(&lruvec->lru_lock)
rcu_read_unlock()
if page->mem_cgroup->lruvec != lruvec:
  spin_unlock_irq(&lruvec->lru_lock)
  goto again;
						page->mem_cgroup = new cgroup
						putback_lru_page() // new lruvec
						  SetPageLRU()
return lruvec; // old lruvec

The caller assumes page belongs to the returned lruvec and will then
change the page's lru state with a mismatched page and lruvec.

If we could restrict lock_page_lruvec() to working only on PageLRU
pages, we could fix the problem with memory barriers. But this won't
work for split_huge_page(), which is AFAICT the only user that needs
to freeze the lru state of a page that could be isolated elsewhere.

So AFAICS the only option is to lock out mem_cgroup_move_account()
entirely when the lru_lock is held. Which I guess should be fine.


Thread overview: 29+ messages
2019-11-19 12:23 [PATCH v4 0/9] per lruvec lru_lock for memcg Alex Shi
2019-11-19 12:23 ` [PATCH v4 1/9] mm/swap: fix uninitialized compiler warning Alex Shi
2019-11-19 15:41   ` Johannes Weiner
2019-11-20 11:42     ` Alex Shi
2019-11-19 12:23 ` [PATCH v4 2/9] mm/huge_memory: " Alex Shi
2019-11-19 15:42   ` Johannes Weiner
2019-11-19 12:23 ` [PATCH v4 3/9] mm/lru: replace pgdat lru_lock with lruvec lock Alex Shi
2019-11-19 15:57   ` Matthew Wilcox
2019-11-19 16:44     ` Johannes Weiner
2019-11-20 11:50       ` Alex Shi
2019-11-19 16:04   ` Johannes Weiner
2019-11-20 11:41     ` Alex Shi
2019-11-21 22:06       ` Johannes Weiner [this message]
2019-11-22  2:36         ` Alex Shi
2019-11-22 16:16           ` Johannes Weiner
2019-11-23  0:58             ` Hugh Dickins
2019-11-24 15:19             ` Alex Shi
2019-11-25  9:26             ` Alex Shi
2019-11-25 17:27               ` Shakeel Butt
2019-11-19 16:49   ` Shakeel Butt
2019-11-19 12:23 ` [PATCH v4 4/9] mm/mlock: only change the lru_lock iff page's lruvec is different Alex Shi
2019-11-19 12:23 ` [PATCH v4 5/9] mm/swap: " Alex Shi
2019-11-19 12:23 ` [PATCH v4 6/9] mm/vmscan: " Alex Shi
2019-11-19 12:23 ` [PATCH v4 7/9] mm/pgdat: remove pgdat lru_lock Alex Shi
2019-11-19 12:23 ` [PATCH v4 8/9] mm/lru: likely enhancement Alex Shi
2019-11-19 12:23 ` [PATCH v4 9/9] mm/lru: revise the comments of lru_lock Alex Shi
2019-11-19 16:19   ` Johannes Weiner
2019-11-20 11:48     ` Alex Shi
2019-11-24 15:49 ` [PATCH v4 0/9] per lruvec lru_lock for memcg Konstantin Khlebnikov
