References: <20201214223722.232537-1-shy828301@gmail.com>
 <20201214223722.232537-7-shy828301@gmail.com>
 <20201215024637.GM3913616@dread.disaster.area>
In-Reply-To: <20201215024637.GM3913616@dread.disaster.area>
From: Yang Shi <shy828301@gmail.com>
Date: Tue, 15 Dec 2020 14:27:18 -0800
Subject: Re: [v2 PATCH 6/9] mm: vmscan: use per memcg nr_deferred of shrinker
To: Dave Chinner <david@fromorbit.com>
Cc: Roman Gushchin, Kirill Tkhai, Shakeel Butt, Johannes Weiner,
 Michal Hocko, Andrew Morton, Linux MM, Linux FS-devel Mailing List,
 Linux Kernel Mailing List

On Mon, Dec 14, 2020 at 6:46 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Mon, Dec 14, 2020 at 02:37:19PM -0800, Yang Shi wrote:
> > Use per memcg's nr_deferred for memcg aware shrinkers. The shrinker's
> > nr_deferred will be used in the following cases:
> >     1. Non memcg aware shrinkers
> >     2. !CONFIG_MEMCG
> >     3. memcg is disabled by boot parameter
> >
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
>
> Lots of lines way over 80 columns.

I thought that limit had been lifted to 100 columns.

> > ---
> >  mm/vmscan.c | 94 ++++++++++++++++++++++++++++++++++++++++++++++-------
> >  1 file changed, 83 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bf34167dd67e..bce8cf44eca2 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -203,6 +203,12 @@ DECLARE_RWSEM(shrinker_rwsem);
> >  static DEFINE_IDR(shrinker_idr);
> >  static int shrinker_nr_max;
> >
> > +static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
> > +{
> > +	return (shrinker->flags & SHRINKER_MEMCG_AWARE) &&
> > +		!mem_cgroup_disabled();
> > +}
>
> Why do we care if mem_cgroup_disabled() is disabled here? The return
> of this function is then && sc->memcg, so if memcgs are disabled,
> sc->memcg will never be set and this mem_cgroup_disabled() check is
> completely redundant, right?

Yes, correct. I missed this point.
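[Editor's note: a minimal user-space model of the point Dave is making, not the real kernel code. The names mirror the patch, but the structs, the `memcg_disabled` flag, and both helpers are simplified stand-ins.]

```c
#include <assert.h>
#include <stddef.h>

#define SHRINKER_MEMCG_AWARE 0x1

struct shrink_control { int nid; void *memcg; };
struct shrinker { unsigned int flags; };

static int memcg_disabled; /* stands in for mem_cgroup_disabled() */

/* As posted: tests mem_cgroup_disabled() even though the result is
 * later ANDed with sc->memcg. */
static int per_memcg_deferred(struct shrinker *s, struct shrink_control *sc)
{
	return (s->flags & SHRINKER_MEMCG_AWARE) && !memcg_disabled &&
		sc->memcg != NULL;
}

/* Dave's point: when memcgs are disabled, sc->memcg is never set, so
 * dropping the mem_cgroup_disabled() test cannot change the result. */
static int per_memcg_deferred_simplified(struct shrinker *s,
					 struct shrink_control *sc)
{
	return (s->flags & SHRINKER_MEMCG_AWARE) && sc->memcg != NULL;
}
```

Given the invariant that a disabled memcg subsystem never sets sc->memcg, the two predicates agree on every reachable input, which is why the extra check is redundant.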
> > +
> >  static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> >  {
> >  	int id, ret = -ENOMEM;
> > @@ -271,7 +277,58 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> >  #endif
> >  	return false;
> >  }
> > +
> > +static inline long count_nr_deferred(struct shrinker *shrinker,
> > +				     struct shrink_control *sc)
> > +{
> > +	bool per_memcg_deferred = is_deferred_memcg_aware(shrinker) && sc->memcg;
> > +	struct memcg_shrinker_deferred *deferred;
> > +	struct mem_cgroup *memcg = sc->memcg;
> > +	int nid = sc->nid;
> > +	int id = shrinker->id;
> > +	long nr;
> > +
> > +	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +		nid = 0;
> > +
> > +	if (per_memcg_deferred) {
> > +		deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
> > +						     true);
> > +		nr = atomic_long_xchg(&deferred->nr_deferred[id], 0);
> > +	} else
> > +		nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
> > +
> > +	return nr;
> > +}
> > +
> > +static inline long set_nr_deferred(long nr, struct shrinker *shrinker,
> > +				   struct shrink_control *sc)
> > +{
> > +	bool per_memcg_deferred = is_deferred_memcg_aware(shrinker) && sc->memcg;
> > +	struct memcg_shrinker_deferred *deferred;
> > +	struct mem_cgroup *memcg = sc->memcg;
> > +	int nid = sc->nid;
> > +	int id = shrinker->id;
>
> Oh, that's a nasty trap. Nobody knows if you mean "id" or "nid" in
> any of the code, and a single-letter typo results in a bug.

Sure, will come up with more descriptive names. Maybe "nid" and
"shrinker_id"?
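[Editor's note: the xchg/add_return pairing above implements a claim-and-return protocol for deferred work. Below is a minimal single-slot user-space model using C11 atomics; the `_model` helpers and `deferred_slot` are illustrative stand-ins, while the kernel keeps one atomic_long_t per node (and, with this patch, per memcg).]

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_long deferred_slot;

static long count_nr_deferred_model(void)
{
	/* atomic_long_xchg(&...nr_deferred[...], 0) in the patch: claim
	 * the whole backlog atomically, so two concurrent reclaimers
	 * cannot claim the same deferred work twice. */
	return atomic_exchange(&deferred_slot, 0);
}

static long set_nr_deferred_model(long nr)
{
	/* atomic_long_add_return(nr, &...nr_deferred[...]) in the
	 * patch: return the unscanned remainder to the slot and report
	 * the new total. */
	return atomic_fetch_add(&deferred_slot, nr) + nr;
}
```

For example, a reclaimer that claims a backlog of 10 but only scans 4 objects puts 6 back, and the next reader claims exactly those 6.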
> > +	long new_nr;
> > +
> > +	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +		nid = 0;
> > +
> > +	if (per_memcg_deferred) {
> > +		deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
> > +						     true);
> > +		new_nr = atomic_long_add_return(nr, &deferred->nr_deferred[id]);
> > +	} else
> > +		new_nr = atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
> > +
> > +	return new_nr;
> > +}
> >  #else
> > +static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
> > +{
> > +	return false;
> > +}
> > +
> >  static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> >  {
> >  	return 0;
> > @@ -290,6 +347,29 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> >  {
> >  	return true;
> >  }
> > +
> > +static inline long count_nr_deferred(struct shrinker *shrinker,
> > +				     struct shrink_control *sc)
> > +{
> > +	int nid = sc->nid;
> > +
> > +	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +		nid = 0;
> > +
> > +	return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
> > +}
> > +
> > +static inline long set_nr_deferred(long nr, struct shrinker *shrinker,
> > +				   struct shrink_control *sc)
> > +{
> > +	int nid = sc->nid;
> > +
> > +	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +		nid = 0;
> > +
> > +	return atomic_long_add_return(nr,
> > +				      &shrinker->nr_deferred[nid]);
> > +}
> >  #endif
>
> This is pretty ... verbose. It doesn't need to be this complex at
> all, and you shouldn't be duplicating code in multiple places. There
> is also no need for any of these to be "inline" functions. The
> compiler will do that for static functions automatically if it makes
> sense.
>
> Ok, so you only do the memcg nr_deferred thing if NUMA_AWARE &&
> sc->memcg is true, so....
>
> static long shrink_slab_set_nr_deferred_memcg(...)
> {
> 	int nid = sc->nid;
>
> 	deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
> 					     true);
> 	return atomic_long_add_return(nr, &deferred->nr_deferred[id]);
> }
>
> static long shrink_slab_set_nr_deferred(...)
> {
> 	int nid = sc->nid;
>
> 	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> 		nid = 0;
> 	else if (sc->memcg)
> 		return shrink_slab_set_nr_deferred_memcg(...., nid);
>
> 	return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
> }
>
> And now there's no duplicated code.

Thanks for the suggestion. Will incorporate it in v3.

> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
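[Editor's note: a compilable user-space sketch of the deduplicated shape Dave suggests: one entry point picks the node, then dispatches to a memcg helper only when a memcg is present. The structs, the MAX_NODES bound, and the use of C11 atomics in place of atomic_long_t are all simplified stand-ins, not kernel code.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define MAX_NODES 2
#define SHRINKER_NUMA_AWARE 0x2

struct memcg_shrinker_deferred { atomic_long nr_deferred[MAX_NODES]; };
struct mem_cgroup { struct memcg_shrinker_deferred deferred; };
struct shrink_control { int nid; struct mem_cgroup *memcg; };
struct shrinker { unsigned int flags; atomic_long nr_deferred[MAX_NODES]; };

/* Memcg path, only reached from the dispatcher below. */
static long set_nr_deferred_memcg(long nr, struct shrink_control *sc, int nid)
{
	struct memcg_shrinker_deferred *d = &sc->memcg->deferred;

	return atomic_fetch_add(&d->nr_deferred[nid], nr) + nr;
}

/* Single entry point: resolve the node once, dispatch once, and the
 * add_return on the global array appears exactly one time. */
static long set_nr_deferred(long nr, struct shrinker *shrinker,
			    struct shrink_control *sc)
{
	int nid = sc->nid;

	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
		nid = 0;
	else if (sc->memcg)
		return set_nr_deferred_memcg(nr, sc, nid);

	return atomic_fetch_add(&shrinker->nr_deferred[nid], nr) + nr;
}
```

With this shape, global and per-memcg accounting land in separate slots: a call without a memcg touches the shrinker's own array, a call with one touches the memcg's, and neither path duplicates the node-resolution logic.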