From: Wei Xu
Date: Fri, 29 Apr 2022 19:10:45 -0700
Subject: RFC: Memory Tiering Kernel Interfaces
To: Andrew Morton, Dave Hansen, Huang Ying, Dan Williams, Yang Shi,
    Linux MM, Greg Thelen, "Aneesh Kumar K.V", Jagdish Gediya,
    Linux Kernel Mailing List, Alistair Popple, Davidlohr Bueso,
    Michal Hocko, Baolin Wang, Brice Goglin, Feng Tang,
    Jonathan.Cameron@huawei.com
The current kernel has basic memory tiering support: Inactive pages
on a higher tier NUMA node can be migrated (demoted) to a lower tier
NUMA node to make room for new allocations on the higher tier NUMA
node.  Frequently accessed pages on a lower tier NUMA node can be
migrated (promoted) to a higher tier NUMA node to improve
performance.

A tiering relationship between NUMA nodes in the form of a demotion
path is created during kernel initialization and updated when a NUMA
node is hot-added or hot-removed.  The current implementation puts
all nodes with CPUs into the top tier and then builds the tiering
hierarchy tier-by-tier by establishing the per-node demotion targets
based on the distances between nodes.

The current memory tiering interface needs to be improved to address
several important use cases:

* The current tiering initialization code always puts each
  memory-only NUMA node into a lower tier.  But a memory-only NUMA
  node may have a high-performance memory device (e.g. a DRAM device
  attached via CXL.mem, or a DRAM-backed memory-only node on a
  virtual machine) and should be put into the top tier.

* The current tiering hierarchy always puts CPU nodes into the top
  tier.  But on a system with HBM (e.g. GPU memory) devices, the
  memory-only HBM NUMA nodes should be in the top tier, and the DRAM
  nodes with CPUs are better placed in the next lower tier.

* Also because the current tiering hierarchy always puts CPU nodes
  into the top tier, when a CPU is hot-added (or hot-removed) and
  turns a memory node from a CPU-less node into a CPU node (or vice
  versa), the memory tiering hierarchy gets changed, even though no
  memory node is added or removed.  This can make the tiering
  hierarchy much less stable.

* A higher tier node can only be demoted to selected nodes on the
  next lower tier, not to any other node in the next lower tier.
  This strict, hard-coded demotion order does not work in all use
  cases (e.g. some use cases may want to allow cross-socket demotion
  to another node in the same demotion tier as a fallback when the
  preferred demotion node is out of space), and has resulted in a
  feature request for an interface to override the system-wide,
  per-node demotion order from userspace.

* There are no interfaces for userspace to learn about the memory
  tiering hierarchy in order to optimize its memory allocations.

I'd like to propose revised memory tiering kernel interfaces based on
the discussions in these threads:

- https://lore.kernel.org/lkml/20220425201728.5kzm4seu7rep7ndr@offworld/T/
- https://lore.kernel.org/linux-mm/20220426114300.00003ad8@Huawei.com/t/


Sysfs Interfaces
================

* /sys/devices/system/node/memory_tiers

  Format: node list (one tier per line, in the tier order)

  When read, list memory nodes by tiers.

  When written (one tier per line), take the user-provided node-tier
  assignment as the new tiering hierarchy and rebuild the per-node
  demotion order.  It is allowed to override only the top tiers, in
  which case the kernel will establish the lower tiers automatically.
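To make the intended read/write semantics concrete, below is a
minimal sketch of what such a sysfs attribute could look like.  It is
only an illustration, not a patch: it assumes the memory_tiers[]
array described under "Kernel Representation" below, and
rebuild_demotion_order() is a hypothetical helper; locking, input
validation and clearing of stale tiers are omitted.

  static ssize_t memory_tiers_show(struct device *dev,
                                   struct device_attribute *attr, char *buf)
  {
          int tier, len = 0;

          /* One node list per line, top tier first. */
          for (tier = 0; tier < MAX_TIERS; tier++) {
                  if (nodes_empty(memory_tiers[tier]))
                          break;
                  len += sysfs_emit_at(buf, len, "%*pbl\n",
                                       nodemask_pr_args(&memory_tiers[tier]));
          }
          return len;
  }

  static ssize_t memory_tiers_store(struct device *dev,
                                    struct device_attribute *attr,
                                    const char *buf, size_t count)
  {
          char *lines, *pos, *line;
          int tier = 0, err = 0;

          lines = kstrndup(buf, count, GFP_KERNEL);
          if (!lines)
                  return -ENOMEM;

          /* Each input line is a node list for one tier, top tier first. */
          pos = lines;
          while ((line = strsep(&pos, "\n")) && tier < MAX_TIERS) {
                  if (!*line)
                          continue;
                  err = nodelist_parse(line, memory_tiers[tier]);
                  if (err)
                          break;
                  tier++;
          }
          kfree(lines);
          if (err)
                  return err;

          /* Hypothetical helper: rebuild node_demotion[] from the new tiers. */
          rebuild_demotion_order();
          return count;
  }
  static DEVICE_ATTR_RW(memory_tiers);

With an attribute along these lines, writing e.g. "0-1\n2-3\n" would
place nodes 0-1 in the top tier and nodes 2-3 in the next tier, which
matches Example 1 below.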
Kernel Representation
=====================

* nodemask_t node_states[N_TOPTIER_MEMORY]

  Stores all top-tier memory nodes.

* nodemask_t memory_tiers[MAX_TIERS]

  Stores memory nodes by tiers.

* struct demotion_nodes node_demotion[]

  where:

      struct demotion_nodes {
              nodemask_t preferred;
              nodemask_t allowed;
      };

  For a node N:

  node_demotion[N].preferred lists all preferred demotion targets;

  node_demotion[N].allowed lists all allowed demotion targets
  (initialized to all the nodes in the same demotion tier).


Tiering Hierarchy Initialization
================================

By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
A device driver can remove its memory nodes from the top tier, e.g. a
dax driver can remove PMEM nodes from the top tier.

The kernel builds the memory tiering hierarchy and per-node demotion
order tier-by-tier starting from N_TOPTIER_MEMORY.  For a node N, the
best-distance nodes in the next lower tier are assigned to
node_demotion[N].preferred and all the nodes in the next lower tier
are assigned to node_demotion[N].allowed.  node_demotion[N].preferred
can be empty if no preferred demotion node is available for node N.

If userspace overrides the tiers via the memory_tiers sysfs
interface, the kernel then only rebuilds the per-node demotion order
accordingly.

The memory tiering hierarchy is rebuilt upon hot-add or hot-remove of
a memory node, but is NOT rebuilt upon hot-add or hot-remove of a CPU
node.


Memory Allocation for Demotion
==============================

When allocating a new demotion target page, both a preferred node and
the allowed nodemask are provided to the allocation function.  The
default kernel allocation fallback order is used to allocate the page
from the specified node and nodemask.

The mempolicy of the cpuset, vma and owner task of the source page
can be set to refine the demotion nodemask, e.g. to prevent demotion
or to select a particular allowed node as the demotion target.
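As an illustration of the above, here is a minimal sketch of how a
demotion target page could be allocated with a preferred node plus an
allowed nodemask.  The function name and GFP flags are assumptions
and it reuses the node_demotion[] array from the Kernel
Representation section; it is not the actual demotion code.

  static struct page *alloc_demotion_target_page(int src_nid)
  {
          struct demotion_nodes *nd = &node_demotion[src_nid];
          int preferred_nid = first_node(nd->preferred);
          gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_NOWARN | __GFP_NOMEMALLOC;

          /* No preferred target: start from any allowed node instead. */
          if (preferred_nid == MAX_NUMNODES)
                  preferred_nid = first_node(nd->allowed);
          if (preferred_nid == MAX_NUMNODES)
                  return NULL;    /* nowhere to demote to */

          /*
           * The preferred node selects the starting zonelist; the allowed
           * nodemask limits which nodes the default fallback order may use.
           */
          return __alloc_pages(gfp, 0, preferred_nid, &nd->allowed);
  }

Passing the allowed nodemask instead of forcing __GFP_THISNODE is
what lets demotion fall back to another allowed node (e.g. a
cross-socket node in the same tier) when the preferred demotion node
is out of space.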
Examples
========

* Example 1:

  Node 0 & 1 are DRAM nodes, node 2 & 3 are PMEM nodes.

  Node 0 has node 2 as the preferred demotion target and can also
  fall back to node 3 for demotion.

  Node 1 has node 3 as the preferred demotion target and can also
  fall back to node 2 for demotion.

  Set mempolicy to prevent cross-socket demotion and memory access,
  e.g. cpuset.mems=0,2

  node distances:
  node   0    1    2    3
     0  10   20   30   40
     1  20   10   40   30
     2  30   40   10   40
     3  40   30   40   10

  /sys/devices/system/node/memory_tiers
  0-1
  2-3

  N_TOPTIER_MEMORY: 0-1

  node_demotion[]:
    0: [2], [2-3]
    1: [3], [2-3]
    2: [], []
    3: [], []

* Example 2:

  Node 0 & 1 are DRAM nodes.  Node 2 is a PMEM node and closer to
  node 0.

  Node 0 has node 2 as the preferred and only demotion target.

  Node 1 has no preferred demotion target, but can still demote to
  node 2.

  Set mempolicy to prevent cross-socket demotion and memory access,
  e.g. cpuset.mems=0,2

  node distances:
  node   0    1    2
     0  10   20   30
     1  20   10   40
     2  30   40   10

  /sys/devices/system/node/memory_tiers
  0-1
  2

  N_TOPTIER_MEMORY: 0-1

  node_demotion[]:
    0: [2], [2]
    1: [], [2]
    2: [], []

* Example 3:

  Node 0 & 1 are DRAM nodes.  Node 2 is a PMEM node and has the same
  distance to node 0 & 1.

  Node 0 has node 2 as the preferred and only demotion target.

  Node 1 has node 2 as the preferred and only demotion target.

  node distances:
  node   0    1    2
     0  10   20   30
     1  20   10   30
     2  30   30   10

  /sys/devices/system/node/memory_tiers
  0-1
  2

  N_TOPTIER_MEMORY: 0-1

  node_demotion[]:
    0: [2], [2]
    1: [2], [2]
    2: [], []

* Example 4:

  Node 0 & 1 are DRAM nodes, node 2 is a memory-only DRAM node.

  All nodes are in the top tier.

  node distances:
  node   0    1    2
     0  10   20   30
     1  20   10   30
     2  30   30   10

  /sys/devices/system/node/memory_tiers
  0-2

  N_TOPTIER_MEMORY: 0-2

  node_demotion[]:
    0: [], []
    1: [], []
    2: [], []

* Example 5:

  Node 0 is a DRAM node with CPUs.  Node 1 is an HBM node.  Node 2 is
  a PMEM node.

  With userspace override, node 1 is the top tier and has node 0 as
  the preferred and only demotion target.

  Node 0 is in the second tier, tier 1, and has node 2 as the
  preferred and only demotion target.

  Node 2 is in the lowest tier, tier 2, and has no demotion targets.

  node distances:
  node   0    1    2
     0  10   21   30
     1  21   10   40
     2  30   40   10

  /sys/devices/system/node/memory_tiers (userspace override)
  1
  0
  2

  N_TOPTIER_MEMORY: 1

  node_demotion[]:
    0: [2], [2]
    1: [0], [0]
    2: [], []

--
Wei