From: Tim Chen
To: Michal Hocko, Shakeel Butt
Cc: Yang Shi, Johannes Weiner, Andrew Morton, Dave Hansen, Ying Huang, Dan Williams, David Rientjes, Linux MM, Cgroups, LKML
Subject: Re: [RFC PATCH v1 00/11] Manage the top tier memory in a tiered memory
Date: Thu, 15 Apr 2021 15:31:46 -0700
Message-ID: <4a864946-a316-3d9c-8780-64c6281276d1@linux.intel.com>

On 4/9/21 12:24 AM, Michal Hocko wrote:
> On Thu 08-04-21 13:29:08, Shakeel Butt wrote:
>> On Thu, Apr 8, 2021 at 11:01 AM Yang Shi wrote:
> [...]
>>> The low priority jobs should be able to be restricted by cpuset, for
>>> example, just keep them on second tier memory nodes. Then all the
>>> above problems are gone.
>
> Yes, if the aim is to isolate some users from certain numa node then
> cpuset is a good fit but as Shakeel says this is very likely not what
> this work is aiming for.
>
>> Yes that's an extreme way to overcome the issue but we can do less
>> extreme by just (hard) limiting the top tier usage of low priority
>> jobs.
>
> Per numa node high/hard limit would help with a more fine grained control.
> The configuration would be tricky though. All low priority memcgs would
> have to be carefully configured to leave enough for your important
> processes. That includes also memory which is not accounted to any
> memcg.
> The behavior of those limits would be quite tricky for OOM situations
> as well due to a lack of NUMA aware oom killer.
>

Another downside of putting limits on individual NUMA nodes is that it
limits flexibility.  For example, two memory nodes may be similar enough
in performance that you really only care about a cgroup not using more
than a threshold of their combined capacity.  But when you put a hard
limit on each NUMA node, you are tied down to a fixed allocation
partition per node.  Perhaps there are some kernel resources that are
pre-allocated primarily from one node.  A cgroup may bump into the limit
on that node and fail the allocation, even when it has a lot of slack
on the other node.  This makes getting the configuration right trickier.
(Sketches contrasting the two approaches follow at the end of this
mail.)

There are currently some differences of opinion on whether grouping
memory nodes into tiers, and putting a per-cgroup limit on their usage,
is desirable.  Many people want the management constraint placed on
individual NUMA nodes for each cgroup, instead of at the tier level.

I would appreciate feedback from folks who have insight into how such a
NUMA-based control interface would work, so that we can at least agree
here in order to move forward.

Tim
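
P.S. For reference, the cpuset-based restriction Yang mentioned is
already expressible with the existing cgroup v2 interface.  A minimal
sketch, assuming nodes 0-1 are top-tier DRAM and nodes 2-3 are
second-tier (e.g. PMEM); the cgroup name "lowprio" and $JOB_PID are
placeholders:

	# enable the cpuset controller for children of the root cgroup
	echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control
	# create a cgroup for the low priority job
	mkdir /sys/fs/cgroup/lowprio
	# allow memory allocations from the second tier nodes only
	echo 2-3 > /sys/fs/cgroup/lowprio/cpuset.mems
	# move the job into the cgroup
	echo $JOB_PID > /sys/fs/cgroup/lowprio/cgroup.procs

This is the "extreme" isolation Shakeel refers to: the job can never
fall back to the DRAM nodes even when the PMEM nodes are full, which
is why a usage limit rather than a hard binding is being discussed.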
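
P.P.S. Neither of the limit interfaces under discussion exists
upstream, so the following is purely a strawman to make the flexibility
concern above concrete; all file names here are invented for
illustration.  Assume nodes 0 and 1 are both top-tier DRAM nodes of
similar performance (one per socket):

	# hypothetical per-node hard limits: 2G from each top tier node
	echo "N0=2G N1=2G" > /sys/fs/cgroup/lowprio/memory.max.numa

versus a tier-level limit, where nodes 0 and 1 are grouped into one
tier and only their combined usage is capped:

	# hypothetical tier-level limit: 4G combined across nodes 0 and 1
	echo 4G > /sys/fs/cgroup/lowprio/memory.toptier.max

With the per-node form, an allocation that must come from node 0 (say,
node-local kernel metadata) fails once the 2G budget on that node is
exhausted, even if node 1 still has plenty of slack; the tier-level
form avoids that partitioning at the cost of coarser control.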