From: Muchun Song <songmuchun@bytedance.com>
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
	akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
	bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
	smuchun@gmail.com, zhengqi.arch@bytedance.com,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
Date: Wed, 16 Feb 2022 19:51:22 +0800
Message-Id: <20220216115132.52602-3-songmuchun@bytedance.com>
In-Reply-To: <20220216115132.52602-1-songmuchun@bytedance.com>
References: <20220216115132.52602-1-songmuchun@bytedance.com>

If we reuse the objcg APIs to charge LRU pages, folio_memcg() can
change while the LRU pages are being reparented. In that case we need
to acquire the new lruvec lock:

	lruvec = folio_lruvec(folio);
	// The page is reparented here.
	compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
	// We acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter,
so it cannot detect this change. If it took the folio as a parameter
instead, it could use folio_memcg() to detect whether the memcg has
changed and the lruvec lock must be reacquired. So
compact_lock_irqsave() is not suitable here. Similar to
folio_lruvec_lock_irqsave(), introduce
compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in the
compaction routine.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
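(A note not destined for the commit log: the reason the helper takes a
folio rather than a bare spinlock is so that, once the LRU pages are
reparented via the objcg APIs later in this series, the lock helper can
recheck folio_memcg() under the lock and retry. A minimal sketch of
that retry pattern follows; the helper name lock_folio_lruvec_irqsave()
is illustrative only, and the recheck itself is not added by this
patch.)

	static struct lruvec *
	lock_folio_lruvec_irqsave(struct folio *folio, unsigned long *flags)
	{
		struct lruvec *lruvec;

		rcu_read_lock();
	retry:
		/* May return a stale lruvec if the folio is reparented. */
		lruvec = folio_lruvec(folio);
		spin_lock_irqsave(&lruvec->lru_lock, *flags);
		/* Recheck under the lock: reparenting changes folio_memcg(). */
		if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
			spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
			goto retry;
		}
		/* Reparenting takes lru_lock, so the lruvec is stable here. */
		rcu_read_unlock();

		return lruvec;
	}

compact_folio_lruvec_lock_irqsave() below gives compaction the same
folio-taking shape, so the same recheck can slot into it later.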
 mm/compaction.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index b4e94cda3019..58d0e91cde49 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+				  struct compact_control *cc)
+{
+	struct lruvec *lruvec;
+
+	lruvec = folio_lruvec(folio);
+
+	/* Track if the lock is contended in async mode */
+	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+			goto out;
+
+		cc->contended = true;
+	}
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
+out:
+	lruvec_memcg_debug(lruvec, folio);
+
+	return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -843,6 +866,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		struct folio *folio;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1028,18 +1052,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!TestClearPageLRU(page))
 			goto isolate_fail_put;
 
-		lruvec = folio_lruvec(page_folio(page));
+		folio = page_folio(page);
+		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
 				unlock_page_lruvec_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page_folio(page));
-
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
 				skip_updated = true;
-- 
2.11.0