From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Duyck
Date: Thu, 6 Aug 2020 11:38:50 -0700
Subject: Re: [PATCH v17 14/21] mm/compaction: do page isolation first in compaction
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
 Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
 linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
 "Kirill A. Shutemov", Rong Chen
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
 <1595681998-19193-15-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1595681998-19193-15-git-send-email-alex.shi@linux.alibaba.com>
Content-Type: text/plain; charset="UTF-8"
List-ID: <linux-kernel.vger.kernel.org>

On Sat, Jul 25, 2020 at 6:00 AM Alex Shi wrote:
>
> Currently, compaction would get the lru_lock and then do page isolation,
> which works fine with pgdat->lru_lock, since any page isolation would
> compete for the lru_lock. If we want to change to a memcg lru_lock, we
> have to isolate the page before getting the lru_lock; thus isolation
> would block the page's memcg change, which relies on page isolation too.
> Then we could safely use the per-memcg lru_lock later.
>
> The new page isolation uses the previously introduced TestClearPageLRU() +
> pgdat lru locking, which will be changed to the memcg lru lock later.
>
> Hugh Dickins fixed the following bugs in this patch's
> early version:
>
> Fix lots of crashes under compaction load: isolate_migratepages_block()
> must clean up appropriately when rejecting a page, setting PageLRU again
> if it had been cleared; and a put_page() after get_page_unless_zero()
> cannot safely be done while holding locked_lruvec - it may turn out to
> be the final put_page(), which will take an lruvec lock when PageLRU.
> And move __isolate_lru_page_prepare back after get_page_unless_zero to
> make trylock_page() safe:
> trylock_page() is not safe to use at this time: its setting PG_locked
> can race with the page being freed or allocated ("Bad page"), and can
> also erase flags being set by one of those "sole owners" of a freshly
> allocated page who use non-atomic __SetPageFlag().
>
> Suggested-by: Johannes Weiner
> Signed-off-by: Hugh Dickins
> Signed-off-by: Alex Shi
> Cc: Andrew Morton
> Cc: Matthew Wilcox
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/swap.h |  2 +-
>  mm/compaction.c      | 42 +++++++++++++++++++++++++++++++++---------
>  mm/vmscan.c          | 46 ++++++++++++++++++++++++++--------------------
>  3 files changed, 60 insertions(+), 30 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 2c29399b29a0..6d23d3beeff7 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -358,7 +358,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
>  extern unsigned long zone_reclaimable_pages(struct zone *zone);
>  extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>                                         gfp_t gfp_mask, nodemask_t *mask);
> -extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
> +extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
>  extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>                                                    unsigned long nr_pages,
>                                                    gfp_t gfp_mask,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index f14780fc296a..2da2933fe56b 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -869,6 +869,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
>                 if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
>                         if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
>                                 low_pfn = end_pfn;
> +                               page = NULL;
>                                 goto isolate_abort;
>                         }
>                         valid_page = page;
> @@ -950,6 +951,21 @@ static bool too_many_isolated(pg_data_t *pgdat)
>                 if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
>                         goto isolate_fail;
>
> +               /*
> +                * Be careful not to clear PageLRU until after we're
> +                * sure the page is not being freed elsewhere -- the
> +                * page release code relies on it.
> +                */
> +               if (unlikely(!get_page_unless_zero(page)))
> +                       goto isolate_fail;
> +
> +               if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
> +                       goto isolate_fail_put;
> +
> +               /* Try isolate the page */
> +               if (!TestClearPageLRU(page))
> +                       goto isolate_fail_put;
> +
>                 /* If we already hold the lock, we can skip some rechecking */
>                 if (!locked) {
>                         locked = compact_lock_irqsave(&pgdat->lru_lock,
> @@ -962,10 +978,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
>                                 goto isolate_abort;
>                         }
>
> -                       /* Recheck PageLRU and PageCompound under lock */
> -                       if (!PageLRU(page))
> -                               goto isolate_fail;
> -
>                         /*
>                          * Page become compound since the non-locked check,
>                          * and it's on LRU. It can only be a THP so the order
> @@ -973,16 +985,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
>                          */
>                         if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
>                                 low_pfn += compound_nr(page) - 1;
> -                               goto isolate_fail;
> +                               SetPageLRU(page);
> +                               goto isolate_fail_put;
>                         }
>                 }
>
>                 lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> -               /* Try isolate the page */
> -               if (__isolate_lru_page(page, isolate_mode) != 0)
> -                       goto isolate_fail;
> -
>                 /* The whole page is taken off the LRU; skip the tail pages. */
>                 if (PageCompound(page))
>                         low_pfn += compound_nr(page) - 1;
> @@ -1011,6 +1020,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
>                 }
>
>                 continue;
> +
> +isolate_fail_put:
> +               /* Avoid potential deadlock in freeing page under lru_lock */
> +               if (locked) {
> +                       spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> +                       locked = false;
> +               }
> +               put_page(page);
> +
> isolate_fail:
>                 if (!skip_on_failure)
>                         continue;
> @@ -1047,9 +1065,15 @@ static bool too_many_isolated(pg_data_t *pgdat)
>         if (unlikely(low_pfn > end_pfn))
>                 low_pfn = end_pfn;
>
> +       page = NULL;
> +
> isolate_abort:
>         if (locked)
>                 spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> +       if (page) {
> +               SetPageLRU(page);
> +               put_page(page);
> +       }
>
>         /*
>          * Updated the cached scanner pfn once the pageblock has been scanned

We should probably be calling SetPageLRU before we release the lru lock
instead of after. It might make sense to just call it before we get here,
similar to how you did in the isolate_fail_put case a few lines later.
Otherwise this seems to violate the rules you had set up earlier where we
were only going to be setting the LRU bit while holding the LRU lock.