Date: Mon, 21 Sep 2020 16:49:38 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com
Subject: Re: [PATCH v18 17/32] mm/compaction: do page isolation first in compaction
In-Reply-To: <1598273705-69124-18-git-send-email-alex.shi@linux.alibaba.com>
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
 <1598273705-69124-18-git-send-email-alex.shi@linux.alibaba.com>

On Mon, 24 Aug 2020, Alex Shi wrote:

> Currently, compaction takes the lru_lock and then does page isolation,
> which works fine with pgdat->lru_lock, since any page isolation would
> compete for that lock. If we want to change to a memcg lru_lock, we
> have to isolate the page before taking the lru_lock, so that isolation
> blocks the page's memcg from changing, which in turn relies on page
> isolation. Then we can safely use a per-memcg lru_lock later.
>
> The new page isolation uses the previously introduced
> TestClearPageLRU() + pgdat lru locking, which will be changed to the
> memcg lru lock later.
>
> Hugh Dickins fixed the following bugs in an early version of this
> patch:
>
> Fix lots of crashes under compaction load: isolate_migratepages_block()
> must clean up appropriately when rejecting a page, setting PageLRU
> again if it had been cleared; and a put_page() after
> get_page_unless_zero() cannot safely be done while holding
> locked_lruvec - it may turn out to be the final put_page(), which will
> take an lruvec lock when PageLRU.
>
> And move __isolate_lru_page_prepare back after get_page_unless_zero to
> make trylock_page() safe: trylock_page() is not safe to use at this
> time: its setting PG_locked can race with the page being freed or
> allocated ("Bad page"), and can also erase flags being set by one of
> those "sole owners" of a freshly allocated page who use non-atomic
> __SetPageFlag().
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>

Okay, whatever. I was about to say

Acked-by: Hugh Dickins <hughd@google.com>

With my signed-off-by there, someone will ask if it should say
"From: Hugh ..." at the top: no, it should not; this is Alex's patch,
but I proposed some fixes to it, as you already acknowledged.
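To spell out the ordering that the commit message describes, here is a
toy userspace model of it (every name here - toy_page, toy_isolate and
friends - is invented for the sketch; the atomics only stand in for the
real page refcount and PG_lru): take the reference first, then
atomically test-and-clear the "on LRU" bit, and only the winner of that
test-and-clear goes on to take the lru_lock; a loser backs out with a
put.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
        atomic_int refcount;    /* stands in for page->_refcount */
        atomic_bool on_lru;     /* stands in for PG_lru */
};

/* like get_page_unless_zero(): take a reference only if one exists */
static bool toy_get_unless_zero(struct toy_page *p)
{
        int ref = atomic_load(&p->refcount);

        while (ref > 0)
                if (atomic_compare_exchange_weak(&p->refcount, &ref, ref + 1))
                        return true;
        return false;           /* page is being freed elsewhere */
}

/* like TestClearPageLRU(): at most one caller ever sees true */
static bool toy_test_clear_lru(struct toy_page *p)
{
        return atomic_exchange(&p->on_lru, false);
}

static bool toy_isolate(struct toy_page *p)
{
        if (!toy_get_unless_zero(p))
                return false;   /* freeing: must not touch the LRU flag */
        if (!toy_test_clear_lru(p)) {
                atomic_fetch_sub(&p->refcount, 1);      /* put_page() */
                return false;   /* lost the race to another isolator */
        }
        /* only now would the (eventually per-memcg) lru_lock be taken */
        return true;
}

int main(void)
{
        struct toy_page page = { 1, true };

        printf("first isolate:  %d\n", toy_isolate(&page));    /* 1 */
        printf("second isolate: %d\n", toy_isolate(&page));    /* 0 */
        return 0;
}

The point of the ordering: TestClearPageLRU() becomes the isolation
point, so two isolation paths can no longer both get past a plain
PageLRU() check before either of them takes the lock.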
A couple of comments below on the mm/vmscan.c part of it.

> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/swap.h |  2 +-
>  mm/compaction.c      | 42 +++++++++++++++++++++++++++++++++---------
>  mm/vmscan.c          | 46 ++++++++++++++++++++++++++--------------------
>  3 files changed, 60 insertions(+), 30 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 43e6b3458f58..550fdfdc3506 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -357,7 +357,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
>  extern unsigned long zone_reclaimable_pages(struct zone *zone);
>  extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>                                         gfp_t gfp_mask, nodemask_t *mask);
> -extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
> +extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
>  extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>                                                   unsigned long nr_pages,
>                                                   gfp_t gfp_mask,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 4e2c66869041..253382d99969 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -887,6 +887,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  		if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
>  			if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
>  				low_pfn = end_pfn;
> +				page = NULL;
>  				goto isolate_abort;
>  			}
>  			valid_page = page;
> @@ -968,6 +969,21 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  		if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
>  			goto isolate_fail;
>
> +		/*
> +		 * Be careful not to clear PageLRU until after we're
> +		 * sure the page is not being freed elsewhere -- the
> +		 * page release code relies on it.
> +		 */
> +		if (unlikely(!get_page_unless_zero(page)))
> +			goto isolate_fail;
> +
> +		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
> +			goto isolate_fail_put;
> +
> +		/* Try isolate the page */
> +		if (!TestClearPageLRU(page))
> +			goto isolate_fail_put;
> +
>  		/* If we already hold the lock, we can skip some rechecking */
>  		if (!locked) {
>  			locked = compact_lock_irqsave(&pgdat->lru_lock,
> @@ -980,10 +996,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  				goto isolate_abort;
>  			}
>
> -			/* Recheck PageLRU and PageCompound under lock */
> -			if (!PageLRU(page))
> -				goto isolate_fail;
> -
>  			/*
>  			 * Page become compound since the non-locked check,
>  			 * and it's on LRU. It can only be a THP so the order
> @@ -991,16 +1003,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  			 */
>  			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
>  				low_pfn += compound_nr(page) - 1;
> -				goto isolate_fail;
> +				SetPageLRU(page);
> +				goto isolate_fail_put;
>  			}
>  		}
>
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> -		/* Try isolate the page */
> -		if (__isolate_lru_page(page, isolate_mode) != 0)
> -			goto isolate_fail;
> -
>  		/* The whole page is taken off the LRU; skip the tail pages. */
>  		if (PageCompound(page))
>  			low_pfn += compound_nr(page) - 1;
> @@ -1029,6 +1038,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  		}
>
>  		continue;
> +
> +isolate_fail_put:
> +		/* Avoid potential deadlock in freeing page under lru_lock */
> +		if (locked) {
> +			spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> +			locked = false;
> +		}
> +		put_page(page);
> +
>  isolate_fail:
>  		if (!skip_on_failure)
>  			continue;
> @@ -1065,9 +1083,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  	if (unlikely(low_pfn > end_pfn))
>  		low_pfn = end_pfn;
>
> +	page = NULL;
> +
>  isolate_abort:
>  	if (locked)
>  		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> +	if (page) {
> +		SetPageLRU(page);
> +		put_page(page);
> +	}
>
>  	/*
>  	 * Updated the cached scanner pfn once the pageblock has been scanned
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1b3e0eeaad64..48b50695f883 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1538,20 +1538,20 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>   *
>   * returns 0 on success, -ve errno on failure.
>   */
> -int __isolate_lru_page(struct page *page, isolate_mode_t mode)
> +int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  {
>  	int ret = -EINVAL;
>
> -	/* Only take pages on the LRU. */
> -	if (!PageLRU(page))
> -		return ret;
> -
>  	/* Compaction should not handle unevictable pages but CMA can do so */
>  	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
>  		return ret;
>
>  	ret = -EBUSY;
>
> +	/* Only take pages on the LRU. */
> +	if (!PageLRU(page))
> +		return ret;
> +

So here you do deal with that BUG() issue. But I'd prefer you to leave
it as I suggested in 16/32: just start with "int ret = -EBUSY;" and
don't rearrange the checks here at all. I say that partly because the
!PageLRU check is very important (when called for compaction), and the
easier it is to find (at the very start), the less anxious I get!

>  	/*
>  	 * To minimise LRU disruption, the caller can indicate that it only
>  	 * wants to isolate pages it will be able to operate on without
> @@ -1592,20 +1592,9 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
>  	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
>  		return ret;
>
> -	if (likely(get_page_unless_zero(page))) {
> -		/*
> -		 * Be careful not to clear PageLRU until after we're
> -		 * sure the page is not being freed elsewhere -- the
> -		 * page release code relies on it.
> -		 */
> -		ClearPageLRU(page);
> -		ret = 0;
> -	}
> -
> -	return ret;
> +	return 0;
>  }
>
> -
>  /*
>   * Update LRU sizes after isolating pages. The LRU size updates must
>   * be complete before mem_cgroup_update_lru_size due to a sanity check.
> @@ -1685,17 +1674,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		 * only when the page is being freed somewhere else.
>  		 */
>  		scan += nr_pages;
> -		switch (__isolate_lru_page(page, mode)) {
> +		switch (__isolate_lru_page_prepare(page, mode)) {
>  		case 0:
> +			/*
> +			 * Be careful not to clear PageLRU until after we're
> +			 * sure the page is not being freed elsewhere -- the
> +			 * page release code relies on it.
> +			 */
> +			if (unlikely(!get_page_unless_zero(page)))
> +				goto busy;
> +
> +			if (!TestClearPageLRU(page)) {
> +				/*
> +				 * This page may in other isolation path,
> +				 * but we still hold lru_lock.
> +				 */
> +				put_page(page);
> +				goto busy;
> +			}
> +
>  			nr_taken += nr_pages;
>  			nr_zone_taken[page_zonenum(page)] += nr_pages;
>  			list_move(&page->lru, dst);
>  			break;
> -
> +busy:
>  		case -EBUSY:

It's a long time since I read a C manual: I had to try that out in a
little test program, and it does seem to do the right thing. Maybe I'm
just very ignorant, and everybody else finds that natural; but I'd feel
more comfortable with the busy label on the line after the
"case -EBUSY:" - wouldn't you? You could, of course, change that
"case -EBUSY" to "default", and delete the "default: BUG();" that
follows: whatever you prefer.
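For anyone who wants to repeat the experiment, a minimal standalone
program of the same shape (not the one I ran, just a sketch of it;
EBUSY is defined locally so the snippet compiles on its own):

#include <stdio.h>

#define EBUSY 16                /* local stand-in, errno.h not included */

static const char *classify(int ret, int got_lru)
{
        switch (ret) {
        case 0:
                if (!got_lru)   /* stands in for TestClearPageLRU() failing */
                        goto busy;
                return "taken";
busy:
        case -EBUSY:            /* the label above and this case label both
                                   attach to the same statement */
                return "busy";
        default:
                return "bug";
        }
}

int main(void)
{
        printf("%s\n", classify(0, 1));      /* taken */
        printf("%s\n", classify(0, 0));      /* busy, via the goto */
        printf("%s\n", classify(-EBUSY, 0)); /* busy, via the case label */
        return 0;
}

So the construct is legal C: a plain label and a case label may stack
on one statement, which is what lets case 0 jump into the -EBUSY
handling.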
>  			/* else it is being freed elsewhere */
>  			list_move(&page->lru, src);
> -			continue;
> +			break;

Aha. Yes, I like that change, and I'm not going to throw a tantrum
accusing you of sneaking in unrelated changes etc. You made me look
back at the history: it was "continue" from back in the days of lumpy
reclaim, when there was stuff after the switch statement which needed
to be skipped in the -EBUSY case. "break" looks more natural to me now.

>
>  		default:
>  			BUG();
> --
> 1.8.3.1
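P.S. Since the continue-versus-break history came up: a tiny standalone
illustration (mine, not from the patch) of why the two spellings only
became interchangeable once lumpy reclaim's code after the switch was
gone. Inside a switch within a loop, "continue" starts the next loop
iteration at once, while "break" merely leaves the switch and falls
through to whatever still follows it in the loop body.

#include <stdio.h>

int main(void)
{
        for (int i = 0; i < 3; i++) {
                switch (i) {
                case 0:
                        continue;       /* next iteration: skips the code below */
                default:
                        break;          /* leaves the switch only */
                }
                /* the kind of post-switch work lumpy reclaim used to do */
                printf("after switch: i=%d\n", i);      /* prints for i=1, i=2 */
        }
        return 0;
}

With nothing left after the switch, as in the patched
isolate_lru_pages(), the two behave identically, and "break" reads
better.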