Subject: Re: [PATCH v2 2/3] mm: Force update of mem cgroup soft limit tree on usage excess
To: Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, Dave Hansen, Ying Huang, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
References: <06f1f92f1f7d4e57c4e20c97f435252c16c60a27.1613584277.git.tim.c.chen@linux.intel.com> <884d7559-e118-3773-351d-84c02642ca96@linux.intel.com>
From: Tim Chen
Message-ID: <1d8d4ec6-e97d-b7e5-695e-f189404a80fd@linux.intel.com>
Date: Mon, 22 Feb 2021 11:23:45 -0800

On 2/22/21 11:09 AM, Michal Hocko wrote:
> On Mon 22-02-21 09:41:00, Tim Chen wrote:
>>
>>
>> On 2/22/21 12:40 AM, Michal Hocko wrote:
>>> On Fri 19-02-21 10:59:05, Tim Chen wrote:
>> occurrence.
>>>>>
>>>>> Soft limit is evaluated every THRESHOLDS_EVENTS_TARGET * SOFTLIMIT_EVENTS_TARGET.
>>>>> If all events correspond with a newly charged memory and the last event
>>>>> was just about the soft limit boundary then we should be bound by 128k
>>>>> pages (512M and much more if this were huge pages) which is a lot!
>>>>> I hadn't realized this was that much. Now I see the problem. This would
>>>>> be useful information for the changelog.
>>>>>
>>>>> Your fix is focusing on the over-the-limit boundary, which will solve the
>>>>> problem, but wouldn't that lead to updates happening too often in a
>>>>> pathological situation when a memcg would get reclaimed immediately?
>>>>
>>>> Not really immediately. The memcg that has the most soft limit excess will
>>>> be chosen for page reclaim, which is the way it should be.
>>>> It is less likely that a memcg that just exceeded
>>>> the soft limit becomes the worst offender immediately.
>>>
>>> Well, this all depends on when the soft limit reclaim triggers, in
>>> other words how often you see global memory reclaim. If we have a
>>> memcg with a sufficient excess then this will work mostly fine. I was more
>>> worried about a case when you have memcgs just slightly over the limit
>>> and the global memory pressure is a regular event. You can easily end up
>>> bouncing memcgs off and on the tree in a rapid fashion.
>>>
>>
>> If you are concerned about such a case, we can add an excess threshold,
>> say 4 MB (or 1024 4K pages), before we trigger a forced update. Do you think
>> that will cover this concern?
>
> Yes, some sort of rate limiting should help. My understanding has been
> that this is the main purpose of the event counting threshold. The code
> as we have it doesn't seem to work properly, so there are two ways: either
> tune the existing threshold or replace it with something else. Having both
> a forced update and a non-functional threshold is not a great outcome.
>

The event counting threshold's purpose is to limit the *rate* of event
updates. The side effect is that we will miss some updates because of the
sampling nature. However, we can't afford to let the sampling get too
frequent, or the overhead will be too high.

The forced update makes sure that the update needed to put the memcg on the
tree actually happens, so we don't allow the memcg to escape page reclaim
even if updates happen relatively rarely. Lowering the threshold does not
guarantee that the update happens, which can result in a runaway memcg.

The forced update and the threshold serve different purposes. In my opinion
the forced update is necessary. We can keep the update overhead low with a
low update rate and yet make sure that a needed update happens. I think both
are necessary for proper function. The overhead from the forced update is
negligible, and I don't see a downside to adding it.

Tim
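
P.S. To make the idea concrete, here is a rough sketch of the kind of check
I have in mind for mm/memcontrol.c. It is illustrative only, not the actual
patch: FORCED_UPDATE_MIN_EXCESS is just a placeholder name for the ~4 MB
(1024 4K-page) excess threshold mentioned above, and the way it would hook
into memcg_check_events()/mem_cgroup_update_tree() is simplified compared
to the real code.

/* Placeholder: ~4 MB of soft limit excess, counted in 4K pages. */
#define FORCED_UPDATE_MIN_EXCESS	1024UL

/*
 * Decide whether the soft limit tree should be updated for this memcg.
 * A small excess keeps relying on the existing event-count sampling;
 * a large excess forces the update so the memcg cannot stay off the
 * tree (and escape soft limit reclaim) just because the sampling
 * happened to miss it.
 */
static bool soft_limit_update_needed(struct mem_cgroup *memcg)
{
	unsigned long excess = soft_limit_excess(memcg);	/* in pages */

	/* Not over the soft limit: nothing to do. */
	if (!excess)
		return false;

	/* Small excess: leave it to the normal event-count sampling. */
	if (excess < FORCED_UPDATE_MIN_EXCESS)
		return mem_cgroup_event_ratelimit(memcg,
						  MEM_CGROUP_TARGET_SOFTLIMIT);

	/* Over the excess threshold: force the update. */
	return true;
}

memcg_check_events() would then call mem_cgroup_update_tree() whenever this
returns true, which keeps the common case rate limited while still
guaranteeing an update once the excess is large enough to matter.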