From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 09/26] shrinkers: use defer_work for GFP_NOFS sensitive shrinkers
Date: Wed, 9 Oct 2019 14:21:07 +1100
Message-Id: <20191009032124.10541-10-david@fromorbit.com>
In-Reply-To: <20191009032124.10541-1-david@fromorbit.com>
References: <20191009032124.10541-1-david@fromorbit.com>

From: Dave Chinner

For shrinkers that currently avoid scanning when called under GFP_NOFS
contexts, convert them to use the new ->defer_work
flag rather than checking and returning errors during scans.

This makes it very clear that these shrinkers are not doing any work
because of the context limitations, not because there is no work that
can be done.

Signed-off-by: Dave Chinner
---
 drivers/staging/android/ashmem.c |  8 ++++----
 fs/gfs2/glock.c                  |  5 +++--
 fs/gfs2/quota.c                  |  6 +++---
 fs/nfs/dir.c                     |  6 +++---
 fs/super.c                       |  6 +++---
 fs/xfs/xfs_qm.c                  | 11 ++++++++---
 net/sunrpc/auth.c                |  5 ++---
 7 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index 74d497d39c5a..0b80149f0ac5 100644
--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -438,10 +438,6 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
 	unsigned long freed = 0;
 
-	/* We might recurse into filesystem code, so bail out if necessary */
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
-
 	if (!mutex_trylock(&ashmem_mutex))
 		return -1;
 
@@ -478,6 +474,10 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 static unsigned long
 ashmem_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 {
+	/* We might recurse into filesystem code, so bail out if necessary */
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	/*
 	 * note that lru_count is count of pages on the lru, not a count of
 	 * objects on the list. This means the scan function needs to return the
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 0290a22ebccf..a25161b93f96 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1614,14 +1614,15 @@ static long gfs2_scan_glock_lru(int nr)
 static unsigned long gfs2_glock_shrink_scan(struct shrinker *shrink,
 					    struct shrink_control *sc)
 {
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
 	return gfs2_scan_glock_lru(sc->nr_to_scan);
 }
 
 static unsigned long gfs2_glock_shrink_count(struct shrinker *shrink,
 					     struct shrink_control *sc)
 {
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	return vfs_pressure_ratio(atomic_read(&lru_count));
 }
 
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 7c016a082aa6..661189b42c31 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -166,9 +166,6 @@ static unsigned long gfs2_qd_shrink_scan(struct shrinker *shrink,
 	LIST_HEAD(dispose);
 	unsigned long freed;
 
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
-
 	freed = list_lru_shrink_walk(&gfs2_qd_lru, sc,
 				     gfs2_qd_isolate, &dispose);
 
@@ -180,6 +177,9 @@ static unsigned long gfs2_qd_shrink_scan(struct shrinker *shrink,
 static unsigned long gfs2_qd_shrink_count(struct shrinker *shrink,
 					  struct shrink_control *sc)
 {
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	return vfs_pressure_ratio(list_lru_shrink_count(&gfs2_qd_lru, sc));
 }
 
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index e180033e35cf..fd4a70479790 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -2211,10 +2211,7 @@ unsigned long
 nfs_access_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
 	int nr_to_scan = sc->nr_to_scan;
-	gfp_t gfp_mask = sc->gfp_mask;
 
-	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
-		return SHRINK_STOP;
 	return nfs_do_access_cache_scan(nr_to_scan);
 }
 
@@ -2222,6 +2219,9 @@ nfs_access_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
 unsigned long
 nfs_access_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 {
+	if ((sc->gfp_mask & GFP_KERNEL) != GFP_KERNEL)
+		sc->defer_work = true;
+
 	return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries));
 }
 
diff --git a/fs/super.c b/fs/super.c
index f627b7c53d2b..d6a93d7fe05f 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -74,9 +74,6 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
 	 * Deadlock avoidance. We may hold various FS locks, and we don't want
 	 * to recurse into the FS that called us in clear_inode() and friends..
 	 */
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
-
 	if (!trylock_super(sb))
 		return SHRINK_STOP;
 
@@ -141,6 +138,9 @@ static unsigned long super_cache_count(struct shrinker *shrink,
 		return 0;
 	smp_rmb();
 
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	if (sb->s_op && sb->s_op->nr_cached_objects)
 		total_objects = sb->s_op->nr_cached_objects(sb, sc);
 
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index ecd8ce152ab1..aa03f2448145 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -502,9 +502,6 @@ xfs_qm_shrink_scan(
 	unsigned long		freed;
 	int			error;
 
-	if ((sc->gfp_mask & (__GFP_FS|__GFP_DIRECT_RECLAIM)) != (__GFP_FS|__GFP_DIRECT_RECLAIM))
-		return 0;
-
 	INIT_LIST_HEAD(&isol.buffers);
 	INIT_LIST_HEAD(&isol.dispose);
 
@@ -534,6 +531,14 @@ xfs_qm_shrink_count(
 	struct xfs_quotainfo	*qi = container_of(shrink,
 					struct xfs_quotainfo, qi_shrinker);
 
+	/*
+	 * __GFP_DIRECT_RECLAIM is used here to avoid blocking kswapd
+	 */
+	if ((sc->gfp_mask & (__GFP_FS|__GFP_DIRECT_RECLAIM)) !=
+			(__GFP_FS|__GFP_DIRECT_RECLAIM)) {
+		sc->defer_work = true;
+	}
+
 	return list_lru_shrink_count(&qi->qi_lru, sc);
 }
 
diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
index cdb05b48de44..7d11a7034fee 100644
--- a/net/sunrpc/auth.c
+++ b/net/sunrpc/auth.c
@@ -527,9 +527,6 @@ static unsigned long
 rpcauth_cache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 
 {
-	if ((sc->gfp_mask & GFP_KERNEL) != GFP_KERNEL)
-		return SHRINK_STOP;
-
 	/* nothing left, don't come back */
 	if (list_empty(&cred_unused))
 		return SHRINK_STOP;
@@ -541,6 +538,8 @@ static unsigned long
 rpcauth_cache_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 
 {
+	if ((sc->gfp_mask & GFP_KERNEL) != GFP_KERNEL)
+		sc->defer_work = true;
+
 	return number_cred_unused * sysctl_vfs_cache_pressure / 100;
 }
 
-- 
2.23.0.rc1
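
For anyone skimming the conversions above, the shape they all share is
sketched below. This is an illustrative fragment only, not part of the
patch: the example_* names are made up for the sketch, while
sc->defer_work is the shrink_control field introduced earlier in this
series and vfs_pressure_ratio() is the existing helper the VFS
shrinkers already use.

/* Illustrative sketch only; the example_* identifiers are hypothetical. */
static atomic_long_t example_nr_cached;			/* stand-in object count */
static unsigned long example_prune(unsigned long nr);	/* stand-in prune helper */

static unsigned long
example_cache_count(struct shrinker *shrink, struct shrink_control *sc)
{
	/*
	 * Scanning may recurse into filesystem code, so in a GFP_NOFS
	 * context flag the work as deferred instead of hiding it.
	 */
	if (!(sc->gfp_mask & __GFP_FS))
		sc->defer_work = true;

	return vfs_pressure_ratio(atomic_long_read(&example_nr_cached));
}

static unsigned long
example_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
{
	/* No GFP_NOFS bail-out here; context handling lives in ->count_objects. */
	return example_prune(sc->nr_to_scan);
}

The count side keeps reporting the real amount of reclaimable work while
telling reclaim that the scan cannot run in this context, which is the
distinction the commit message is after: no work done because of the
context, not because there is nothing to do.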