From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-18.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED,
	DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,
	INCLUDES_CR_TRAILER,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,
	USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 1DDDBC64E8A
	for ; Wed, 2 Dec 2020 15:02:13 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id B5E8E221EB
	for ; Wed, 2 Dec 2020 15:02:12 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1727669AbgLBPB5 (ORCPT );
	Wed, 2 Dec 2020 10:01:57 -0500
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47598 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1726620AbgLBPB4 (ORCPT );
	Wed, 2 Dec 2020 10:01:56 -0500
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
	[IPv6:2a00:1450:4864:20::235])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C73BC0613D6
	for ; Wed, 2 Dec 2020 07:01:10 -0800 (PST)
Received: by mail-lj1-x235.google.com with SMTP id y10so4085804ljc.7
	for ; Wed, 02 Dec 2020 07:01:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025;
	h=mime-version:references:in-reply-to:from:date:message-id:subject:to:cc;
	bh=+AY9trJkRf9cxwS80EFbb5/8Q9oqK1NT5wAK/bAfyuA=;
	b=hHA2aVTI1aAvsxPGY7CXBZa7EJEfdZXG9AAhz0Nk13+0/vviSbyGTsSFlI83PX2W/A
	 9EsuDVKyc8xiQWKiDtM8DET2BvFmo6Wy8cxFwVp0JYcKHURDgVCKP/heXChSAG3UpKG4
	 xXnGshxq35QveZnKgL0IIrTyxR/N4NMxQBgK91eo1dEXELFUxibZtmlsiWYa04ArX7Bp
	 lp+Dtk4KOj8SMtS114AM94kpYaaJ6rD3CoD8YAdNRASkUmfXneFJmB5sLRlBysAZ8Ac2
	 4daGWY4A54FU0CWuRVRGoaSAUPKVHqQnKzVSqUEy3kklbK839ZH2GgbD/eoc0bD/iDs5
	 o1PQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net;
	s=20161025;
	h=x-gm-message-state:mime-version:references:in-reply-to:from:date
	 :message-id:subject:to:cc;
	bh=+AY9trJkRf9cxwS80EFbb5/8Q9oqK1NT5wAK/bAfyuA=;
	b=TV+OnH26e/dMW352Nna2SjvqH+Esr9SFGUebOIKRaTvS+B9TX0JD4noiaPcQ4vAKdG
	 XtXbRnbr/YJXp/ph2Z+VaOw+BhCoaJW9AfIL8WtoP+tg/msHI1zIXZps31Lx/gUZ3nnf
	 Ab49N2w+LtAiJqewwCvUiaFbUKhLmsX1S+hP11iXwV3hIvC0f3BRBB2BMyqM2mdlkJph
	 hoHySsGm0EcKCdP6nK66LexnUsQyHShNyhdgNtIRF6VLWY+dBnj6AGPWYP+ts4ExzKsN
	 wD45vnJGkXHuicQx6MA7pH8LqQQ6oBRHxSJ3vWIM+cu5qG0Iawi/Xj10IM7aSEsD1YQJ
	 fLZw==
X-Gm-Message-State: AOAM5320KPrsuKTIn5xrjBu4qju+A5ZkPb+snk+eijA/rVi2kAeAXiIN
	6PsxZN3Z4b//UtokzCWqy7A8au1KK6Oba/RiJErROg==
X-Google-Smtp-Source: ABdhPJxCyzHAm5pT+CZ4HMd6KgPW+J/acp0DUaYmvYc1AvRHOWzot9DSea2SXFKuHuYOpuRzWYMWE0gc2tQT+lJIsCA=
X-Received: by 2002:a2e:9746:: with SMTP id f6mr1273915ljj.270.1606921267017;
	Wed, 02 Dec 2020 07:01:07 -0800 (PST)
MIME-Version: 1.0
References: <20201201212553.52164-1-shy828301@gmail.com>
In-Reply-To: <20201201212553.52164-1-shy828301@gmail.com>
From: Shakeel Butt
Date: Wed, 2 Dec 2020 07:00:55 -0800
Message-ID:
Subject: Re: [v3 PATCH] mm: list_lru: set shrinker map bit when child nr_items is not zero
To: Yang Shi
Cc: Roman Gushchin , Kirill Tkhai , Vladimir Davydov ,
	Andrew Morton , Linux MM , LKML , stable@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Dec 1, 2020 at 1:25 PM Yang Shi wrote:
>
> When investigating a slab cache bloat problem, a significant amount of
> negative dentry cache was seen, but confusingly it was neither shrunk
> by the reclaimer (the host was under very tight memory pressure) nor by
> dropping caches. The vmcore shows over 14M negative dentry objects on
> the lru, but tracing shows they were not even scanned. Further
> investigation shows the memcg's vfs shrinker_map bit is not set.
> So the reclaimer and cache dropping simply skip calling the vfs
> shrinker, and we have to reboot the hosts to get the memory back.
>
> I didn't manage to come up with a reproducer in a test environment, and
> the problem can't be reproduced after rebooting. But code inspection
> suggests there is a race between clearing the shrinker map bit and
> reparenting. The hypothesis is elaborated below.
>
> The memcg hierarchy on our production environment looks like:
>
>                 root
>                /    \
>           system    user
>
> The main workloads run under user slice's children, and memcgs there
> are created and removed frequently. So reparenting happens very often
> under user slice, but no task runs under user slice directly.
>
> With the frequent reparenting and tight memory pressure, the below
> hypothetical race condition may happen:
>
>     CPU A                              CPU B
> reparent
>     dst->nr_items == 0
>                                    shrinker:
>                                        total_objects == 0
>     add src->nr_items to dst
>     set_bit
>     return SHRINK_EMPTY
>                                        clear_bit
> child memcg offline
>     replace child's kmemcg_id with
>     parent's (in memcg_offline_kmem())
>                                    list_lru_del() between shrinker runs
>                                        see parent's kmemcg_id
>                                        dec dst->nr_items
> reparent again
>     dst->nr_items may go negative
>     due to concurrent list_lru_del()
>
> The second run of the shrinker:
>     read nr_items without any
>     synchronization, so it may
>     see an intermediate negative
>     nr_items, then total_objects
>     may coincidentally return 0
>
>     keep the bit cleared
>     dst->nr_items != 0
>     skip set_bit
>     add src->nr_items to dst
>
> After this point dst->nr_items may never reach zero again, so
> reparenting will not set the shrinker_map bit anymore. And since no
> task runs under user slice directly, no new object will be added to its
> lru to set the shrinker map bit either. That bit stays cleared forever.
>
> How does list_lru_del() race with reparenting?
> It is because reparenting replaces children's kmemcg_id with the
> parent's without protection from nlru->lock, so list_lru_del() may see
> the parent's kmemcg_id while actually deleting items from the child's
> lru, but dec'ing the parent's nr_items. Hence the parent's nr_items may
> go negative, as commit 2788cf0c401c268b4819c5407493a8769b7007aa
> ("memcg: reparent list_lrus and free kmemcg_id on css offline") says.
>
> Since it is impossible that dst->nr_items goes negative and
> src->nr_items goes zero at the same time, it seems we could set the
> shrinker map bit iff src->nr_items != 0. We could synchronize
> list_lru_count_one() and reparenting with nlru->lock, but checking
> src->nr_items in reparenting is the simplest fix and avoids lock
> contention.
>
> Fixes: fae91d6d8be5 ("mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance")
> Suggested-by: Roman Gushchin
> Reviewed-by: Roman Gushchin
> Cc: Vladimir Davydov
> Cc: Kirill Tkhai
> Cc: Shakeel Butt
> Cc: v4.19+
> Signed-off-by: Yang Shi

Reviewed-by: Shakeel Butt