From: Matthew Wilcox <willy@infradead.org>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
	daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com,
	shakeelb@google.com, hannes@cmpxchg.org,
	Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [PATCH v7 02/10] mm/memcg: fold lru_lock in lock_page_lru
Date: Mon, 13 Jan 2020 08:34:56 -0800
Message-ID: <20200113163456.GA332@bombadil.infradead.org>
In-Reply-To: <a095d80d-8e34-c84f-e4be-085a5aae1929@linux.alibaba.com>

On Mon, Jan 13, 2020 at 08:47:25PM +0800, Alex Shi wrote:
> On 2020/1/13 at 5:55 PM, Konstantin Khlebnikov wrote:
> >>> That's wrong. Here PageLRU must be checked again under lru_lock.
> >> Hi, Konstantin,
> >>
> >> To keep the logic the same, we can take the lock and then release it for !PageLRU,
> >> but I still can't figure out the problem scenario. Could you give more hints?
> > 
> > That's a trivial race: the page could be isolated from the LRU between
> > 
> > if (PageLRU(page))
> > and
> > spin_lock_irq(&pgdat->lru_lock);
> 
> Yes, it could be a problem. I guess the following change could be helpful;
> I will update it in the new version.

> +       if (lrucare) {
> +               lruvec = lock_page_lruvec_irq(page);
> +               if (likely(PageLRU(page))) {
> +                       ClearPageLRU(page);
> +                       del_page_from_lru_list(page, lruvec, page_lru(page));
> +               } else {
> +                       unlock_page_lruvec_irq(lruvec);
> +                       lruvec = NULL;
> +               }
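
To make the race concrete, here is a minimal sketch contrasting the check-then-lock pattern Konstantin objects to with the recheck-under-lock pattern in the diff above. It is illustrative only: the helper names lru_del_racy()/lru_del_safe() are made up for this example, and lock_page_lruvec_irq()/unlock_page_lruvec_irq() are assumed to behave as in the diff, i.e. to look up the page's lruvec and take/release its IRQ-disabling lru_lock.

	/* Racy: another CPU can isolate the page from the LRU between the
	 * unlocked PageLRU() check and taking lru_lock, so the deletion
	 * below may act on a page that is no longer on the list.
	 */
	static void lru_del_racy(struct page *page, struct pglist_data *pgdat,
				 struct lruvec *lruvec)
	{
		if (PageLRU(page)) {
			/* window: a racing thread may isolate the page here */
			spin_lock_irq(&pgdat->lru_lock);
			del_page_from_lru_list(page, lruvec, page_lru(page));
			spin_unlock_irq(&pgdat->lru_lock);
		}
	}

	/* Recheck under the lock, as in the diff above: take the lruvec
	 * lock first, then test PageLRU() again, so isolation paths that
	 * also honour this lru_lock cannot race with the deletion.
	 */
	static struct lruvec *lru_del_safe(struct page *page)
	{
		struct lruvec *lruvec = lock_page_lruvec_irq(page);

		if (PageLRU(page)) {
			ClearPageLRU(page);
			del_page_from_lru_list(page, lruvec, page_lru(page));
		} else {
			unlock_page_lruvec_irq(lruvec);
			lruvec = NULL;
		}
		return lruvec;
	}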

What about a race that is harder to hit: the page is on LRU list A when you
look up the lruvec, but it has been removed and added to LRU list B by the
time you get the lock?  At that point you are holding a lock on the wrong
LRU list.  I think you need to check not just that the page is still
PageLRU, but also that it is still on the same LRU list.
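
One way to picture the extra check being asked for, as a hedged sketch only: it assumes the per-lruvec lru_lock this series introduces and a mem_cgroup_page_lruvec()-style lookup from page to lruvec; the helper name and the retry loop are illustrative, not the series' actual code, and details such as pinning the page's memcg binding (e.g. with RCU) are omitted for brevity.

	/* Illustrative only: lock the lruvec the page currently belongs to,
	 * then verify the page has not been moved to a different lruvec
	 * while we waited for the lock; if it has, the lock we hold is the
	 * wrong one, so drop it and retry.
	 */
	static struct lruvec *lock_page_lruvec_checked(struct page *page)
	{
		struct lruvec *lruvec;

		for (;;) {
			lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
			spin_lock_irq(&lruvec->lru_lock);
			if (lruvec == mem_cgroup_page_lruvec(page, page_pgdat(page)))
				return lruvec;	/* still the right lock */
			spin_unlock_irq(&lruvec->lru_lock);
		}
	}

Even with a stable lruvec lock, the PageLRU() recheck from the diff above is still needed before touching the list.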

Thread overview: 44+ messages
2019-12-25  9:04 [PATCH v7 00/10] per lruvec lru_lock for memcg Alex Shi
2019-12-25  9:04 ` [PATCH v7 01/10] mm/vmscan: remove unnecessary lruvec adding Alex Shi
2020-01-10  8:39   ` Konstantin Khlebnikov
2020-01-13  7:21     ` Alex Shi
2019-12-25  9:04 ` [PATCH v7 02/10] mm/memcg: fold lru_lock in lock_page_lru Alex Shi
2020-01-10  8:49   ` Konstantin Khlebnikov
2020-01-13  9:45     ` Alex Shi
2020-01-13  9:55       ` Konstantin Khlebnikov
2020-01-13 12:47         ` Alex Shi
2020-01-13 16:34           ` Matthew Wilcox [this message]
2020-01-14  9:20             ` Alex Shi
2019-12-25  9:04 ` [PATCH v7 03/10] mm/lru: replace pgdat lru_lock with lruvec lock Alex Shi
2020-01-13 15:41   ` Daniel Jordan
2020-01-14  6:33     ` Alex Shi
2019-12-25  9:04 ` [PATCH v7 04/10] mm/lru: introduce the relock_page_lruvec function Alex Shi
2019-12-25  9:04 ` [PATCH v7 05/10] mm/mlock: optimize munlock_pagevec by relocking Alex Shi
2019-12-25  9:04 ` [PATCH v7 06/10] mm/swap: only change the lru_lock iff page's lruvec is different Alex Shi
2019-12-25  9:04 ` [PATCH v7 07/10] mm/pgdat: remove pgdat lru_lock Alex Shi
2019-12-25  9:04 ` [PATCH v7 08/10] mm/lru: revise the comments of lru_lock Alex Shi
2019-12-25  9:04 ` [PATCH v7 09/10] mm/lru: add debug checking for page memcg moving Alex Shi
2019-12-25  9:04 ` [PATCH v7 10/10] mm/memcg: add debug checking in lock_page_memcg Alex Shi
2019-12-31 23:05 ` [PATCH v7 00/10] per lruvec lru_lock for memcg Andrew Morton
2020-01-02 10:21   ` Alex Shi
2020-01-10  2:01     ` Alex Shi
2020-01-13  8:48       ` Hugh Dickins
2020-01-13 12:45         ` Alex Shi
2020-01-13 20:20           ` Hugh Dickins
2020-01-14  9:14             ` Alex Shi
2020-01-14  9:29               ` Alex Shi
