From: Yang Shi
Date: Tue, 9 Feb 2021 17:57:35 -0800
Subject: Re: [v7 PATCH 08/12] mm: vmscan: add per memcg shrinker nr_deferred
To: Roman Gushchin
Cc: Kirill Tkhai, Vlastimil Babka, Shakeel Butt, Dave Chinner,
 Johannes Weiner, Michal Hocko, Andrew Morton, Linux MM,
 Linux FS-devel Mailing List, Linux Kernel Mailing List
In-Reply-To: <20210210014018.GR524633@carbon.DHCP.thefacebook.com>
References: <20210209174646.1310591-1-shy828301@gmail.com>
 <20210209174646.1310591-9-shy828301@gmail.com>
 <20210210011020.GL524633@carbon.DHCP.thefacebook.com>
 <20210210014018.GR524633@carbon.DHCP.thefacebook.com>
On Tue, Feb 9, 2021 at 5:40 PM Roman Gushchin wrote:
>
> On Tue, Feb 09, 2021 at 05:25:16PM -0800, Yang Shi wrote:
> > On Tue, Feb 9, 2021 at 5:10 PM Roman Gushchin wrote:
> > >
> > > On Tue, Feb 09, 2021 at 09:46:42AM -0800, Yang Shi wrote:
> > > > Currently the number of deferred objects is per shrinker, but some
> > > > slabs, for example the vfs inode/dentry caches, are per memcg; this
> > > > results in poor isolation among memcgs.
> > > >
> > > > Deferred objects are typically generated by __GFP_NOFS allocations.
> > > > One memcg with excessive __GFP_NOFS allocations may blow up the
> > > > deferred count, and other innocent memcgs may then suffer from
> > > > over-shrinking, excessive reclaim latency, etc.
> > > >
> > > > For example, two workloads run in memcgA and memcgB respectively,
> > > > and the workload in B is vfs heavy. If the workload in A generates
> > > > excessive deferred objects, B's vfs cache may be hit heavily (half
> > > > of the caches dropped) by B's limit reclaim or by global reclaim.
> > > >
> > > > We observed this in our production environment, which was running a
> > > > vfs heavy workload, as shown in the tracing log below:
> > > >
> > > > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start:
> > > >   super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1
> > > >   objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO
> > > >   pgs_scanned 1 lru_pgs 15721 cache items 246404277 delta 31345
> > > >   total_scan 123202138
> > > > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end:
> > > >   super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1
> > > >   unused scan count 3641681686040 new scan count 3641798379189
> > > >   total_scan 602 last shrinker return val 123186855
> > > >
> > > > The vfs cache to page cache ratio was 10:1 on this machine, and half
> > > > of the caches were dropped. This also caused a significant amount of
> > > > page cache to be dropped due to inode eviction.
> > > >
> > > > Making nr_deferred per memcg for memcg aware shrinkers solves the
> > > > unfairness and brings better isolation.
> > > >
> > > > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > > > shrinker's own nr_deferred is used. Non memcg aware shrinkers use
> > > > the shrinker's nr_deferred all the time.
> > > >
> > > > Signed-off-by: Yang Shi
> > > > ---
> > > >  include/linux/memcontrol.h |  7 +++---
> > > >  mm/vmscan.c                | 49 +++++++++++++++++++++++++-------------
> > > >  2 files changed, 37 insertions(+), 19 deletions(-)
> > > >
> > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > > index 4c9253896e25..c457fc7bc631 100644
> > > > --- a/include/linux/memcontrol.h
> > > > +++ b/include/linux/memcontrol.h
> > > > @@ -93,12 +93,13 @@ struct lruvec_stat {
> > > >  };
> > > >
> > > >  /*
> > > > - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
> > > > - * which have elements charged to this memcg.
> > > > + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
> > > > + * shrinkers, which have elements charged to this memcg.
> > > >   */
> > > >  struct shrinker_info {
> > > >  	struct rcu_head rcu;
> > > > -	unsigned long map[];
> > > > +	atomic_long_t *nr_deferred;
> > > > +	unsigned long *map;
> > > >  };
> > > >
> > > >  /*
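[The memcontrol.h change above turns shrinker_info into a header followed
by two arrays carved out of a single allocation: the per-shrinker
nr_deferred counters first, then the id bitmap. Below is a minimal
userspace sketch of that layout arithmetic, not the kernel code: the
rcu_head is omitted, atomic_long_t is stood in by plain long, and the
helper names merely mirror the patch's macros. It compiles with gcc,
which allows void-pointer arithmetic as the kernel does.]

#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define round_up(n, d) (DIV_ROUND_UP(n, d) * (d))

/* Mirrors NR_MAX_TO_SHR_MAP_SIZE: bitmap bytes covering nr_max ids. */
static size_t shr_map_size(int nr_max)
{
	return DIV_ROUND_UP(nr_max, BITS_PER_LONG) * sizeof(unsigned long);
}

/* Mirrors NR_MAX_TO_SHR_DEF_SIZE: one counter per id, padded to a
 * BITS_PER_LONG batch so both arrays grow in the same steps. */
static size_t shr_def_size(int nr_max)
{
	return round_up(nr_max, BITS_PER_LONG) * sizeof(long);
}

struct shrinker_info {
	long *nr_deferred;	/* stand-in for atomic_long_t * */
	unsigned long *map;
};

int main(void)
{
	int nr_max = 3;
	size_t d_size = shr_def_size(nr_max);
	size_t m_size = shr_map_size(nr_max);

	/* One allocation: struct header, then counters, then bitmap. */
	struct shrinker_info *info = calloc(1, sizeof(*info) + d_size + m_size);

	info->nr_deferred = (long *)(info + 1);
	info->map = (void *)info->nr_deferred + d_size;

	/* On 64-bit: d_size = 512 (64 counters), m_size = 8 (one long). */
	printf("d_size=%zu m_size=%zu\n", d_size, m_size);
	free(info);
	return 0;
}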
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index a047980536cf..d4b030a0b2a9 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -187,9 +187,13 @@ static DECLARE_RWSEM(shrinker_rwsem);
> > > >  #ifdef CONFIG_MEMCG
> > > >  static int shrinker_nr_max;
> > > >
> > > > +/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
> > > >  #define NR_MAX_TO_SHR_MAP_SIZE(nr_max) \
> > > >  	(DIV_ROUND_UP(nr_max, BITS_PER_LONG) * sizeof(unsigned long))
> > > >
> > > > +#define NR_MAX_TO_SHR_DEF_SIZE(nr_max) \
> > > > +	(round_up(nr_max, BITS_PER_LONG) * sizeof(atomic_long_t))
> > > > +
> > > >  static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
> > > >  						     int nid)
> > > >  {
> > > > @@ -203,10 +207,12 @@ static void free_shrinker_info_rcu(struct rcu_head *head)
> > > >  }
> > > >
> > > >  static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > > > -				    int size, int old_size)
> > > > +				    int m_size, int d_size,
> > > > +				    int old_m_size, int old_d_size)
> > > >  {
> > > >  	struct shrinker_info *new, *old;
> > > >  	int nid;
> > > > +	int size = m_size + d_size;
> > > >
> > > >  	for_each_node(nid) {
> > > >  		old = shrinker_info_protected(memcg, nid);
> > > > @@ -218,9 +224,15 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > > >  		if (!new)
> > > >  			return -ENOMEM;
> > > >
> > > > -		/* Set all old bits, clear all new bits */
> > > > -		memset(new->map, (int)0xff, old_size);
> > > > -		memset((void *)new->map + old_size, 0, size - old_size);
> > > > +		new->nr_deferred = (atomic_long_t *)(new + 1);
> > > > +		new->map = (void *)new->nr_deferred + d_size;
> > > > +
> > > > +		/* map: set all old bits, clear all new bits */
> > > > +		memset(new->map, (int)0xff, old_m_size);
> > > > +		memset((void *)new->map + old_m_size, 0, m_size - old_m_size);
> > > > +		/* nr_deferred: copy old values, clear all new values */
> > > > +		memcpy(new->nr_deferred, old->nr_deferred, old_d_size);
> > > > +		memset((void *)new->nr_deferred + old_d_size, 0, d_size - old_d_size);
> > > >
> > > >  		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
> > > >  		call_rcu(&old->rcu, free_shrinker_info_rcu);
> > > > @@ -235,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
> > > >  	struct shrinker_info *info;
> > > >  	int nid;
> > > >
> > > > -	if (mem_cgroup_is_root(memcg))
> > > > -		return;
> > > > -
> > > >  	for_each_node(nid) {
> > > >  		pn = mem_cgroup_nodeinfo(memcg, nid);
> > > >  		info = shrinker_info_protected(memcg, nid);
> > > > @@ -250,12 +259,13 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> > > >  {
> > > >  	struct shrinker_info *info;
> > > >  	int nid, size, ret = 0;
> > > > -
> > > > -	if (mem_cgroup_is_root(memcg))
> > > > -		return 0;
> > > > +	int m_size, d_size = 0;
> > > >
> > > >  	down_write(&shrinker_rwsem);
> > > > -	size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	m_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	d_size = NR_MAX_TO_SHR_DEF_SIZE(shrinker_nr_max);
> > > > +	size = m_size + d_size;
> > > > +
> > > >  	for_each_node(nid) {
> > > >  		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
> > > >  		if (!info) {
> > > > @@ -263,6 +273,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> > > >  			ret = -ENOMEM;
> > > >  			break;
> > > >  		}
> > > > +		info->nr_deferred = (atomic_long_t *)(info + 1);
> > > > +		info->map = (void *)info->nr_deferred + d_size;
> > > >  		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
> > > >  	}
> > > >  	up_write(&shrinker_rwsem);
> > > > @@ -274,10 +286,16 @@ static int expand_shrinker_info(int new_id)
> > > >  {
> > > >  	int size, old_size, ret = 0;
> > > >  	int new_nr_max = new_id + 1;
> > > > +	int m_size, d_size = 0;
> > > > +	int old_m_size, old_d_size = 0;
> > > >  	struct mem_cgroup *memcg;
> > > >
> > > > -	size = NR_MAX_TO_SHR_MAP_SIZE(new_nr_max);
> > > > -	old_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	m_size = NR_MAX_TO_SHR_MAP_SIZE(new_nr_max);
> > > > +	d_size = NR_MAX_TO_SHR_DEF_SIZE(new_nr_max);
> > > > +	size = m_size + d_size;
> > > > +	old_m_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	old_d_size = NR_MAX_TO_SHR_DEF_SIZE(shrinker_nr_max);
> > > > +	old_size = old_m_size + old_d_size;
> > > >  	if (size <= old_size)
> > > >  		goto out;
> > >
> > > It looks correct, but a bit bulky. Can we check that the new maximum
> > > number of elements is larger than the old one here?
> >
> > It doesn't seem so to me. For example, if shrinker_nr_max is 1 and a new
> > shrinker is registered, new_nr_max becomes 2, but the new size is
> > actually equal to the old size.
>
> I see.
>
> > We should be able to do:
> >
> >     if (round_up(new_nr_max, BITS_PER_LONG) <=
> >         round_up(shrinker_nr_max, BITS_PER_LONG))
> >
> > Does that seem better?
>
> Yes, I think so.
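[To make the early-exit idea concrete: both size macros round up to the
same BITS_PER_LONG granularity, so the byte sizes can only change when
the rounded-up id count changes. A small standalone check follows;
BITS_PER_LONG is assumed to be 64 and need_expand is an illustrative
name, not a function from the patch.]

#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG 64
#define round_up(n, d) ((((n) + (d) - 1) / (d)) * (d))

/* The agreed test: expansion is needed only when the new id count
 * crosses into a new BITS_PER_LONG batch. */
static bool need_expand(int new_nr_max, int shrinker_nr_max)
{
	return round_up(new_nr_max, BITS_PER_LONG) >
	       round_up(shrinker_nr_max, BITS_PER_LONG);
}

int main(void)
{
	/* shrinker_nr_max = 1, a new shrinker makes new_nr_max = 2:
	 * both round up to 64, sizes are unchanged -> no expansion. */
	printf("%d\n", need_expand(2, 1));	/* prints 0 */

	/* Crossing a batch boundary: 65 rounds up to 128. */
	printf("%d\n", need_expand(65, 64));	/* prints 1 */
	return 0;
}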
> > > > @@ -286,9 +304,8 @@ static int expand_shrinker_info(int new_id)
> > > >
> > > >  	memcg = mem_cgroup_iter(NULL, NULL, NULL);
> > > >  	do {
> > > > -		if (mem_cgroup_is_root(memcg))
> > > > -			continue;
> > > > -		ret = expand_one_shrinker_info(memcg, size, old_size);
> > > > +		ret = expand_one_shrinker_info(memcg, m_size, d_size,
> > > > +					       old_m_size, old_d_size);
> > >
> > > Pass the old and the new numbers to expand_one_shrinker_info() and
> > > have all the size manipulation there?
> >
> > With the above proposal we could move the size manipulation to right
> > before the memcg iteration, so we save some cycles when we don't have
> > to expand at all.
>
> I mostly dislike passing 4 arguments to expand_one_shrinker_info():
> old_m_size, old_d_size, etc. But you're right, there is no good reason
> to calculate them for each cgroup if we can do it once. Can you, please,
> rename the arguments to map_size and defer_size (or something more
> obvious than m and d, to your taste)?

Yes, sure. map_size/defer_size and old_map_size/old_defer_size seem good
to me as well.

> Thanks!
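[Putting the two agreed changes together — the rounded-up early exit and
the map_size/defer_size naming — the expansion path might end up looking
roughly like the kernel-style sketch below. This is a hypothetical
reconstruction of a possible next revision, not the actual v8 patch; the
iterator error handling mirrors the pre-existing mem_cgroup_iter /
mem_cgroup_iter_break pattern in vmscan.c.]

/* Sketch only (hypothetical, untested). */
static inline bool need_expand(int nr_max)
{
	return round_up(nr_max, BITS_PER_LONG) >
	       round_up(shrinker_nr_max, BITS_PER_LONG);
}

static int expand_shrinker_info(int new_id)
{
	int ret = 0;
	int new_nr_max = new_id + 1;
	int map_size, defer_size = 0;
	int old_map_size, old_defer_size = 0;
	struct mem_cgroup *memcg;

	if (!need_expand(new_nr_max))
		goto out;

	/* Compute the sizes once, not per cgroup. */
	map_size = NR_MAX_TO_SHR_MAP_SIZE(new_nr_max);
	defer_size = NR_MAX_TO_SHR_DEF_SIZE(new_nr_max);
	old_map_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
	old_defer_size = NR_MAX_TO_SHR_DEF_SIZE(shrinker_nr_max);

	memcg = mem_cgroup_iter(NULL, NULL, NULL);
	do {
		ret = expand_one_shrinker_info(memcg, map_size, defer_size,
					       old_map_size, old_defer_size);
		if (ret) {
			mem_cgroup_iter_break(NULL, memcg);
			goto out;
		}
	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
out:
	/* Assumed placement: record the new maximum only on success. */
	if (!ret)
		shrinker_nr_max = new_nr_max;

	return ret;
}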