Date: Tue, 15 Dec 2020 15:45:16 +0100
From: Johannes Weiner
To: Dave Chinner
Cc: Yang Shi, guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
    mhocko@suse.com, akpm@linux-foundation.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH 5/9] mm: memcontrol: add per memcg shrinker nr_deferred
Message-ID: <20201215144516.GE379720@cmpxchg.org>
In-Reply-To: <20201215022233.GL3913616@dread.disaster.area>
References: <20201214223722.232537-1-shy828301@gmail.com>
 <20201214223722.232537-6-shy828301@gmail.com>
 <20201215022233.GL3913616@dread.disaster.area>

On Tue, Dec 15, 2020 at 01:22:33PM +1100, Dave Chinner wrote:
> On Mon, Dec 14, 2020 at 02:37:18PM -0800, Yang Shi wrote:
> > Currently the number of deferred objects is per shrinker, but some
> > slabs, for example the vfs inode/dentry caches, are per memcg. This
> > results in poor isolation among memcgs.
> >
> > Deferred objects are typically generated by __GFP_NOFS allocations:
> > one memcg with excessive __GFP_NOFS allocations may blow up the
> > deferred count, and other innocent memcgs then suffer from
> > over-shrinking, excessive reclaim latency, etc.
> >
> > For example, say two workloads run in memcgA and memcgB respectively,
> > and the workload in B is vfs-heavy. If the workload in A generates
> > excessive deferred objects, B's vfs caches may be hit heavily (half
> > of the caches dropped) by B's limit reclaim or by global reclaim.
> >
> > We observed this in our production environment, which was running a
> > vfs-heavy workload, as shown in the tracing log below:
> >
> > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > cache items 246404277 delta 31345 total_scan 123202138
> > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > last shrinker return val 123186855
> >
> > The vfs cache to page cache ratio was 10:1 on this machine, and half
> > of the caches were dropped. This also caused a significant amount of
> > page cache to be dropped due to inode eviction.
> >
> > Making nr_deferred per memcg for memcg-aware shrinkers solves the
> > unfairness and brings better isolation.
> >
> > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > shrinker's own nr_deferred is used. Non-memcg-aware shrinkers use
> > the shrinker's nr_deferred all the time.
> >
> > Signed-off-by: Yang Shi
> > ---
> >  include/linux/memcontrol.h |   9 +++
> >  mm/memcontrol.c            | 110 ++++++++++++++++++++++++++++++++++++-
> >  mm/vmscan.c                |   4 ++
> >  3 files changed, 120 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 922a7f600465..1b343b268359 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -92,6 +92,13 @@ struct lruvec_stat {
> >  	long count[NR_VM_NODE_STAT_ITEMS];
> >  };
> >  
> > +
> > +/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
> > +struct memcg_shrinker_deferred {
> > +	struct rcu_head rcu;
> > +	atomic_long_t nr_deferred[];
> > +};
>
> So you're effectively copy and pasting the memcg_shrinker_map
> infrastructure and doubling the number of allocations/frees required
> to set up/tear down a memcg?
> Why not add it to the struct memcg_shrinker_map like this:
>
> struct memcg_shrinker_map {
> 	struct rcu_head rcu;
> 	unsigned long *map;
> 	atomic_long_t *nr_deferred;
> };
>
> And when you dynamically allocate the structure, set the map and
> nr_deferred pointers to the correct offset in the allocated range.
>
> Then this patch really only changes the size of the chunk being
> allocated, sets up the pointers and copies the relevant data from
> the old structure to the new.

Fully agreed. In the longer term, it may be nice to expand this
further and make it the generalized intersection between cgroup,
node and shrinkers. There is a large overlap with list_lru, for
example - data of identical scope and lifetime, but duplicative
callbacks and management. If we folded list_lru_memcg into the
above data structure, we could also generalize and reuse the
existing callbacks.
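
To make the single-allocation idea concrete, the allocation side
could look something like the sketch below. Untested and purely
illustrative - the helper name, the nr_shrinkers parameter and the
sizing convention are mine, not the existing mm/memcontrol.c code:

	static struct memcg_shrinker_map *alloc_shrinker_map(int nr_shrinkers,
							     gfp_t gfp)
	{
		size_t map_size = DIV_ROUND_UP(nr_shrinkers, BITS_PER_LONG) *
				  sizeof(unsigned long);
		size_t defer_size = nr_shrinkers * sizeof(atomic_long_t);
		struct memcg_shrinker_map *new;

		/* One chunk: the struct itself, the bitmap, then the counts */
		new = kvzalloc(sizeof(*new) + map_size + defer_size, gfp);
		if (!new)
			return NULL;

		/* Point both members at their offsets inside the chunk */
		new->map = (unsigned long *)(new + 1);
		new->nr_deferred = (atomic_long_t *)((char *)new->map + map_size);

		return new;
	}

Growing the arrays when shrinker ids expand then stays a single
allocation plus two memcpy()s (bitmap and deferred counts) from the
old chunk to the new one, instead of two separate alloc/copy/free
passes.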
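And if we went further and folded list_lru_memcg in as well, the
shared structure might look something like this - name and layout
purely hypothetical at this point:

	/*
	 * Per-(memcg, node) shrinker state: registration bitmap,
	 * deferred scan counts and LRU lists, with one allocation
	 * and one RCU lifetime instead of three.
	 */
	struct shrinker_info {
		struct rcu_head rcu;
		unsigned long *map;		/* memcg-aware shrinker bitmap */
		atomic_long_t *nr_deferred;	/* per-shrinker deferred counts */
		struct list_lru_one *lru;	/* folded-in list_lru_memcg lists */
	};

That would give one allocation, one RCU lifetime and one resize path
for all three, instead of each maintaining its own.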