From: Yang Shi
Date: Mon, 21 Jun 2021 13:42:52 -0700
Subject: Re: [LSF/MM TOPIC] Tiered memory accounting and management
To: Shakeel Butt
Cc: Tim Chen, lsf-pc@lists.linux-foundation.org, Linux MM, Michal Hocko, Dan Williams, Dave Hansen, David Rientjes, Wei Xu, Greg Thelen

On Thu, Jun 17, 2021 at 11:49 AM Shakeel Butt wrote:
>
> Thanks Yang for the CC.
>
> On Tue, Jun 15, 2021 at 5:17 PM Yang Shi wrote:
> >
> > On Mon, Jun 14, 2021 at 2:51 PM Tim Chen wrote:
> > >
> > > From: Tim Chen
> > >
> > > Tiered memory accounting and management
> > > ------------------------------------------------------------
> > > Traditionally, all RAM is DRAM. Some DRAM might be closer/faster
> > > than others, but a byte of media has about the same cost whether
> > > it is close or far. But with new memory tiers such as
> > > High-Bandwidth Memory or Persistent Memory, there is a choice
> > > between fast/expensive and slow/cheap. The current memory cgroups
> > > still live in the old model: there is only one set of limits, and
> > > it implies that all memory has the same cost. We would like to
> > > extend memory cgroups to comprehend different memory tiers and
> > > give users a way to choose a mix between fast/expensive and
> > > slow/cheap.
> > >
> > > To manage such memory, we will need to account memory usage and
> > > impose limits for each kind of memory.
> > >
> > > A couple of approaches to partitioning memory between the cgroups
> > > have been discussed previously and are listed below. We would
> > > like to use the LSF/MM session to come to a consensus on the
> > > approach to take.
> > >
> > > 1. Per NUMA node limit and accounting for each cgroup.
> > > We can assign higher limits on better performing memory nodes for
> > > higher priority cgroups.
> > >
> > > There are some loose ends here that warrant further discussion:
> > > (1) A user friendly interface for such limits. Would a
> > > proportional weight for the cgroup that translates to an actual
> > > absolute limit be more suitable?
> > > (2) Memory mis-configurations can occur more easily, as the admin
> > > has a much larger number of limits spread among the cgroups to
> > > manage. Over-restrictive limits can leave memory under-utilized
> > > and wasted, and hurt performance.
> > > (3) OOM behavior when a cgroup hits its limit.
> > >
>
> This (numa based limits) is something I was pushing for, but after
> discussing it internally with userspace controller devs I have to
> back off from this position.
>
> The main feedback I got was that setting one memory limit is already
> complicated, and having to set/adjust this many limits would be
> horrifying.

Yes, that is also what I heard.
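Just to make the scale of the problem concrete, below is a rough and
purely hypothetical sketch (the "memory.node<N>.max" file does not
exist in any kernel, and the cgroup names and limit values are made
up) of what configuring per-NUMA-node limits would look like: one knob
per node per cgroup, so the number of limits the admin has to pick and
keep tuned grows as nodes x cgroups.

/*
 * Hypothetical per-NUMA-node limit interface, modeled on the existing
 * memory.max.  Only meant to show how quickly the knobs multiply.
 */
#include <stdio.h>

#define NR_NODES 4	/* say, 2 DRAM nodes + 2 PMEM nodes */

static int set_node_limit(const char *cgroup, int node,
			  unsigned long long bytes)
{
	char path[256];
	FILE *f;

	/* hypothetical file name, does not exist today */
	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/%s/memory.node%d.max", cgroup, node);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%llu\n", bytes);
	return fclose(f);
}

int main(void)
{
	const char *cgroups[] = { "high-prio", "batch", "background" };

	/* nodes x cgroups separate limits, all chosen by the admin */
	for (size_t c = 0; c < sizeof(cgroups) / sizeof(cgroups[0]); c++)
		for (int n = 0; n < NR_NODES; n++)
			set_node_limit(cgroups[c], n, 1ULL << 30 /* arbitrary */);

	return 0;
}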
> > > 2. Per memory tier limit and accounting for each cgroup.
> > > We can assign higher limits on memory in the better performing
> > > memory tiers for higher priority cgroups. I previously
> > > prototyped a soft limit based implementation to demonstrate the
> > > tiered limit idea.
> > >
> > > There are also a number of issues here:
> > > (1) The advantage is that we have fewer limits to deal with,
> > > simplifying configuration. However, doubts have been raised by a
> > > number of people about whether we can really properly classify
> > > the NUMA nodes into memory tiers. There could still be
> > > significant performance differences between NUMA nodes even for
> > > the same kind of memory. We will also not have the fine-grained
> > > control and flexibility that comes with a per NUMA node limit.
> > > (2) Will a memory hierarchy defined by the promotion/demotion
> > > relationships between memory nodes be a viable approach for
> > > defining memory tiers?
> > >
> > > These issues related to the management of systems with multiple
> > > kinds of memory can be ironed out in this session.
> >
> > Thanks for suggesting this topic. I'm interested and would like to
> > attend.
> >
> > Other than the above points, I'm wondering whether we should
> > discuss "Migrate Pages in lieu of discard" as well. Dave Hansen is
> > driving the development and I have been involved in the early
> > development and review, but it seems there are still some open
> > questions according to the latest review feedback.
> >
> > Some other folks may be interested in this topic too, so I CC'ed
> > them on the thread.
> >
>
> At the moment, personally, I am more inclined towards a passive
> approach to the memcg accounting of memory tiers. By that I mean,
> let's start by providing a 'usage' interface and get more
> production/real-world data to motivate the 'limit' interfaces. (One
> minor reason is that defining the 'limit' interface will force us to
> decide how tiers are defined, i.e. a NUMA node, a set of NUMA nodes,
> or something else.)
>
> IMHO we should focus more on the "aging" of the application memory
> and the "migration/balance" between the tiers. I don't think the
> memory reclaim infrastructure is the right place for these operations
> (it ignores unevictable pages and does not have accurate ages). What
> we need is

Why are unevictable pages a problem? I don't get why you have to
demote unevictable pages. If you do care which nodes the memory will
be mlock'ed on, don't you have to move the memory to the target nodes
before mlocking it?

> proactive, continuous aging and balancing. We need something like
> Multi-gen LRU, DAMON, or page idle tracking (with additions) for
> aging, and a new mechanism for balancing which takes ages into
> account.

I agree a better balance could be reached by more accurate aging. That
is a more general problem, not specific to tiered memory.

> To give a more concrete example: let's say we have a system with two
> memory tiers and multiple low and high priority jobs. For high
> priority jobs, set the allocation try list from high to low tier, and
> for low priority jobs the reverse of that (I am not sure if we can do
> that out of the box with today's kernel). In the background we
> migrate

AFAICT we can't. With the current APIs you can only bind to a set of
nodes, and the fallback order is fixed.
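To illustrate what I mean, here is a minimal sketch with the current
mempolicy API (error handling trimmed, the node numbers are made up):
we can restrict allocations to a set of nodes, or prefer one node, but
the fallback order among or after those nodes is decided by the
kernel's node distance, not by us.

/*
 * Sketch only: bind this task's allocations to nodes 0 and 2 (say,
 * the "fast" tier).  There is no way to tell the kernel "try node 2
 * first, then node 0, then node 1"; with MPOL_BIND we simply never
 * fall back to nodes outside the mask.  Build with -lnuma.
 */
#include <numaif.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	unsigned long nodemask = (1UL << 0) | (1UL << 2);

	if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)))
		return EXIT_FAILURE;

	/* subsequent allocations by this task honor the policy */
	char *p = malloc(64UL << 20);
	if (!p)
		return EXIT_FAILURE;
	memset(p, 0, 64UL << 20);	/* fault the pages in */
	free(p);
	return EXIT_SUCCESS;
}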
> cold memory down the tiers and hot memory in the reverse direction.
>
> In this background mechanism we can enforce all the different
> limiting policies, like Yang's original high and low tier percentages
> or something like "X% of the accesses of high priority jobs should be
> served from the high tier". Basically I am saying that until we find,
> from production data, that this background mechanism is not strong
> enough to enforce passive limits, we should delay the decision on
> limit interfaces.
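For the migration half of that background mechanism, a very rough
userspace sketch of the building block we already have today is
move_pages(2). The hot/cold classification is completely hand-waved
here (assume it comes from page idle tracking, DAMON, etc.), and the
slow-tier node id is made up.

/*
 * Demote a page the aging mechanism flagged as cold to an assumed
 * slow-tier node.  Promotion is the same call with a fast-tier node.
 * Build with -lnuma.
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SLOW_NODE 2	/* assumed PMEM / slow-tier node id */

int main(void)
{
	long page_sz = sysconf(_SC_PAGESIZE);

	/* stand-in for a page that idle tracking reported as cold */
	char *buf = aligned_alloc(page_sz, page_sz);
	if (!buf)
		return EXIT_FAILURE;
	buf[0] = 1;			/* fault it in */

	void *pages[1]  = { buf };
	int   nodes[1]  = { SLOW_NODE };
	int   status[1] = { 0 };

	/*
	 * pid 0 means the calling process; MPOL_MF_MOVE only moves
	 * pages mapped exclusively by this process.
	 */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0) {
		perror("move_pages");
		return EXIT_FAILURE;
	}
	printf("page is now on node %d (negative means not moved)\n",
	       status[0]);

	free(buf);
	return EXIT_SUCCESS;
}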