Subject: Re: [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates
From: Tim Chen
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko, Vladimir Davydov, Dave Hansen, Ying Huang,
    linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Tue, 9 Feb 2021 14:34:07 -0800
Message-ID: <3445ebcd-bc69-ec6e-8995-c95753b5c4a7@linux.intel.com>
References: <3b6e4e9aa8b3ee1466269baf23ed82d90a8f791c.1612902157.git.tim.c.chen@linux.intel.com>

On 2/9/21 2:22 PM, Johannes Weiner wrote:
> Hello Tim,
>
> On Tue, Feb 09, 2021 at 12:29:47PM -0800, Tim Chen wrote:
>> @@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>> 	 * exclusive access to the page.
>> 	 */
>>
>> -	if (ug->memcg != page_memcg(page)) {
>> +	if (ug->memcg != page_memcg(page) ||
>> +	    /* uncharge batch update soft limit tree on a node basis */
>> +	    (ug->dummy_page && ug->nid != page_to_nid(page))) {
>
> The fix makes sense to me.
>
> However, unconditionally breaking up the batch by node can
> unnecessarily regress workloads in cgroups that do not have a soft
> limit configured, and cgroup2 which doesn't have soft limits at
> all. Consider an interleaving allocation policy for example.
>
> Can you please further gate on memcg->soft_limit != PAGE_COUNTER_MAX,
> or at least on !cgroup_subsys_on_dfl(memory_cgrp_subsys)?
>

Sure. Will fix this.
Tim
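
[Editor's note: a rough, uncompiled sketch of how the batch-break test in uncharge_page() might look with the gating Johannes suggests folded in. The helper name uncharge_batch_break() is hypothetical and not from the thread; the sketch assumes the nid and dummy_page fields that the posted patch adds to struct uncharge_gather, plus the kernel's existing page_memcg(), page_to_nid(), cgroup_subsys_on_dfl(), memcg->soft_limit and PAGE_COUNTER_MAX definitions in mm/memcontrol.c.]

/*
 * Hypothetical helper (sketch only): decide whether the current
 * uncharge batch has to be flushed before gathering @page.
 */
static bool uncharge_batch_break(struct uncharge_gather *ug, struct page *page)
{
	/* A different memcg always ends the current batch. */
	if (ug->memcg != page_memcg(page))
		return true;

	/* cgroup2 (default hierarchy) has no soft limit reclaim. */
	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
		return false;

	/* The soft limit tree only matters when a soft limit is set. */
	if (ug->memcg->soft_limit == PAGE_COUNTER_MAX)
		return false;

	/* The soft limit tree is per node, so break the batch on node changes. */
	return ug->dummy_page && ug->nid != page_to_nid(page);
}

[Whether to gate on the soft limit value, on the hierarchy mode, or on both is exactly the open question raised in the reply above; the sketch shows both gates only to make the intent concrete.]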