From: Wei Xu <weixugc@google.com>
Date: Mon, 2 May 2022 23:36:29 -0700
Subject: Re: RFC: Memory Tiering Kernel Interfaces
To: Dan Williams
Cc: Yang Shi, Andrew Morton, Dave Hansen, Huang Ying, Linux MM, Greg Thelen, "Aneesh Kumar K.V", Jagdish Gediya, Linux Kernel Mailing List, Alistair Popple, Davidlohr Bueso, Michal Hocko, Baolin Wang, Brice Goglin, Feng Tang, Jonathan Cameron
On Sun, May 1, 2022 at 11:35 AM Dan Williams wrote:
>
> On Fri, Apr 29, 2022 at 8:59 PM Yang Shi wrote:
> >
> > Hi Wei,
> >
> > Thanks for the nice writing. Please see the below inline comments.
> >
> > On Fri, Apr 29, 2022 at 7:10 PM Wei Xu wrote:
> > >
> > > The current kernel has the basic memory tiering support: Inactive
> > > pages on a higher tier NUMA node can be migrated (demoted) to a lower
> > > tier NUMA node to make room for new allocations on the higher tier
> > > NUMA node. Frequently accessed pages on a lower tier NUMA node can be
> > > migrated (promoted) to a higher tier NUMA node to improve the
> > > performance.
> > >
> > > A tiering relationship between NUMA nodes in the form of demotion path
> > > is created during the kernel initialization and updated when a NUMA
> > > node is hot-added or hot-removed. The current implementation puts all
> > > nodes with CPU into the top tier, and then builds the tiering hierarchy
> > > tier-by-tier by establishing the per-node demotion targets based on
> > > the distances between nodes.
> > >
> > > The current memory tiering interface needs to be improved to address
> > > several important use cases:
> > >
> > > * The current tiering initialization code always initializes
> > >   each memory-only NUMA node into a lower tier. But a memory-only
> > >   NUMA node may have a high performance memory device (e.g. a DRAM
> > >   device attached via CXL.mem or a DRAM-backed memory-only node on
> > >   a virtual machine) and should be put into the top tier.
> > >
> > > * The current tiering hierarchy always puts CPU nodes into the top
> > >   tier. But on a system with HBM (e.g. GPU memory) devices, these
> > >   memory-only HBM NUMA nodes should be in the top tier, and DRAM nodes
> > >   with CPUs are better to be placed into the next lower tier.
> > >
> > > * Also because the current tiering hierarchy always puts CPU nodes
> > >   into the top tier, when a CPU is hot-added (or hot-removed) and
> > >   triggers a memory node from CPU-less into a CPU node (or vice
> > >   versa), the memory tiering hierarchy gets changed, even though no
> > >   memory node is added or removed. This can make the tiering
> > >   hierarchy much less stable.
> >
> > I'd prefer the firmware builds up the tiers topology then passes it to
> > the kernel so that the kernel knows what nodes are in what tiers. No
> > matter what nodes are hot-removed/hot-added they always stay in their
> > tiers defined by the firmware. I think this is important information,
> > like NUMA distances. NUMA distance alone can't satisfy all the use
> > cases IMHO.
>
> Just want to note here that the platform firmware can only describe
> the tiers of static memory present at boot. CXL hotplug breaks this
> model and the kernel is left to dynamically determine the device's
> performance characteristics and the performance of the topology to
> reach that device. Now, the platform firmware does set expectations
> for the performance class of different memory ranges, but there is no
> way to know in advance the performance of devices that will be asked
> to be physically or logically added to the memory configuration. That
> said, it's probably still too early to define ABI for those
> exceptional cases where the kernel needs to make a policy decision
> about a device that does not fit into the firmware's performance
> expectations, but just note that there are limits to the description
> that platform firmware can provide.
>
> I agree that NUMA distance alone is inadequate and the kernel needs to
> make better use of data like ACPI HMAT to determine the default
> tiering order.

Very useful clarification. It should be fine for the kernel to
dynamically determine the memory tier of each node.
I expect that it can also be fine even if a node gets attached to a
different memory device and needs to be assigned to a different tier
after another round of hot-remove/hot-add.

What can be problematic is that a hot-added node not only changes its
own tier, but also causes other existing nodes to change their tiers.
This can mess up any tier-based memory accounting.

One approach to address this is to:

- have tiers be well-defined and stable, e.g. HBM is always in tier-0,
  direct-attached DRAM and high-performance CXL.mem devices are always
  in tier-1, slower CXL.mem devices are always in tier-2, and PMEM is
  always in tier-3. The tier definition is based on the device
  performance, something similar to the class rating of storage
  devices (e.g. SD cards).

- allow tiers to be absent from the system, e.g. a machine may have
  only tier-1 and tier-3, but neither tier-0 nor tier-2.

- allow demotion not only to the immediate next lower tier, but to all
  lower tiers. The actual selection of demotion order follows the
  allocation fallback order. This allows tier-1 to demote directly to
  tier-3 without requiring the presence of tier-2.

This approach can ensure that the tiers of existing nodes are stable
and permit the tier of a hot-plugged node to be determined dynamically.
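To make the idea concrete, here is a minimal sketch (plain Python, not
kernel code) of the scheme above. The device-class names and the
tier assignments are illustrative assumptions, not an ABI proposal:
tiers are fixed per device class, any tier may be absent, and a node's
demotion candidates are simply all lower tiers that are present, in
order, so absent tiers are skipped.

```python
# Assumed device-class -> tier mapping (illustrative only; mirrors the
# example in the text: HBM=0, DRAM/fast CXL.mem=1, slow CXL.mem=2, PMEM=3).
DEVICE_TIER = {
    "HBM": 0,
    "DRAM": 1,
    "CXL_FAST": 1,
    "CXL_SLOW": 2,
    "PMEM": 3,
}

def demotion_order(node_device, present_devices):
    """Return the tiers a node may demote to, nearest lower tier first.

    Absent tiers are simply skipped, so a tier-1 node can demote
    straight to tier-3 when no tier-2 device exists, following the
    allocation fallback order described above.
    """
    tier = DEVICE_TIER[node_device]
    present_tiers = sorted({DEVICE_TIER[d] for d in present_devices})
    return [t for t in present_tiers if t > tier]

# A machine with only tier-1 (DRAM) and tier-3 (PMEM):
print(demotion_order("DRAM", ["DRAM", "PMEM"]))              # [3]
# Hot-adding a slower CXL.mem device (tier-2) leaves the tiers of the
# existing nodes unchanged and only extends the demotion order:
print(demotion_order("DRAM", ["DRAM", "CXL_SLOW", "PMEM"]))  # [2, 3]
print(demotion_order("PMEM", ["DRAM", "CXL_SLOW", "PMEM"]))  # []
```

The key property is visible in the second call: the hot-add changes
what DRAM can demote to, but never which tier DRAM itself is in, so
any tier-based accounting stays stable.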