From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yosry Ahmed
Date: Tue, 31 Jan 2023 11:49:22 -0800
Subject: Re: [RFC PATCH 00/19] mm: Introduce a cgroup to limit the amount of locked and pinned memory
To: Alistair Popple
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 jgg@nvidia.com, jhubbard@nvidia.com, tjmercier@google.com, hannes@cmpxchg.org,
 surenb@google.com, mkoutny@suse.com, daniel@ffwll.ch
In-Reply-To: <87r0vblzf3.fsf@nvidia.com>
References: <874js7zf38.fsf@nvidia.com> <87r0vblzf3.fsf@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jan 31, 2023 at 3:24 AM Alistair Popple wrote:
>
>
> Yosry Ahmed writes:
>
> > On Mon, Jan 30, 2023 at 5:07 PM Alistair Popple wrote:
> >>
> >>
> >> Yosry Ahmed writes:
> >>
> >> > On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple wrote:
> >> >>
> >> >> Having large amounts of unmovable or unreclaimable memory in a system
> >> >> can lead to system instability due to increasing the likelihood of
> >> >> encountering out-of-memory conditions. Therefore it is desirable to
> >> >> limit the amount of memory users can lock or pin.
> >> >>
> >> >> From userspace such limits can be enforced by setting
> >> >> RLIMIT_MEMLOCK. However there is no standard method that drivers and
> >> >> other in-kernel users can use to check and enforce this limit.
> >> >>
> >> >> This has led to a large number of inconsistencies in how limits are
> >> >> enforced. For example some drivers will use mm->locked_vm while others
> >> >> will use mm->pinned_vm or user->locked_vm. It is therefore possible to
> >> >> have up to three times RLIMIT_MEMLOCK pinned.
> >> >>
> >> >> Having pinned memory limited per-task also makes it easy for users to
> >> >> exceed the limit. For example, for drivers that pin memory with
> >> >> pin_user_pages(), it tends to remain pinned after fork. To deal with
> >> >> this and other issues this series introduces a cgroup for tracking and
> >> >> limiting the number of pages pinned or locked by tasks in the group.
> >> >>
> >> >> However the existing behaviour with regard to the rlimit needs to be
> >> >> maintained. Therefore the lesser of the two limits is
> >> >> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
> >> >> but this bypass is not allowed for the cgroup.
> >> >>
> >> >> The first part of this series converts existing drivers which
> >> >> open-code the use of locked_vm/pinned_vm over to a common interface
> >> >> which manages the refcounts of the associated task/mm/user
> >> >> structs. This ensures accounting of pages is consistent and makes it
> >> >> easier to add charging of the cgroup.
> >> >>
> >> >> The second part of the series adds the cgroup and converts core mm
> >> >> code such as mlock over to charging the cgroup before finally
> >> >> introducing some selftests.
> >> >
> >> >
> >> > I didn't go through the entire series, so apologies if this was
> >> > mentioned somewhere, but do you mind elaborating on why this is added
> >> > as a separate cgroup controller rather than an extension of the memory
> >> > cgroup controller?
> >>
> >> One of my early prototypes actually did add this to the memcg
> >> controller. However pinned pages fall under their own limit, and we
> >> wanted to always account pages to the cgroup of the task using the
> >> driver rather than, say, folio_memcg(). So adding it to memcg didn't seem
> >> to have much benefit as we didn't end up using any of the infrastructure
> >> provided by memcg. Hence I thought it was clearer to just add it as its
> >> own controller.
> >
> > To clarify, you account and limit pinned memory based on the cgroup of
> > the process pinning the pages, not based on the cgroup that the pages
> > are actually charged to? Is my understanding correct?
>
> That's correct.

Interesting.

>
> > IOW, you limit the amount of memory that processes in a cgroup can
> > pin, not the amount of memory charged to a cgroup that can be pinned?
>
> Right, that's a good clarification which I might steal and add to the
> cover letter.

Feel free to :) Please also clarify this in the code/docs. Glancing
through the patches I was asking myself multiple times why this is not
"memory.pinned.[current/max]" or similar.
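
For what it's worth, this is how I would summarise the semantics we just
clarified -- a rough sketch only, where task_pins_cgroup() and
pins_cgroup_try_charge() are names I made up for illustration rather than
the actual API in this series:

    /*
     * This series: the pin is charged to the cgroup of the task doing
     * the pinning, looked up from "current" at the time of pinning.
     */
    struct pins_cgroup *pins = task_pins_cgroup(current);   /* hypothetical */

    if (!pins_cgroup_try_charge(pins, npages))               /* hypothetical */
            return -ENOMEM;

    /*
     * A memcg extension would more naturally charge the cgroup the pages
     * are already accounted to, i.e. key off folio_memcg(folio) rather
     * than off current -- which is not what this series does.
     */

Spelling that difference out next to the interface files would avoid the
confusion.
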
>
> >>
> >> - Alistair
> >>
> >> >>
> >> >>
> >> >> As I don't have access to systems with all the various devices I
> >> >> haven't been able to test all driver changes. Any help there would be
> >> >> appreciated.
> >> >>
> >> >> Alistair Popple (19):
> >> >>   mm: Introduce vm_account
> >> >>   drivers/vhost: Convert to use vm_account
> >> >>   drivers/vdpa: Convert vdpa to use the new vm_structure
> >> >>   infiniband/umem: Convert to use vm_account
> >> >>   RMDA/siw: Convert to use vm_account
> >> >>   RDMA/usnic: convert to use vm_account
> >> >>   vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
> >> >>   vfio/spapr_tce: Convert accounting to pinned_vm
> >> >>   io_uring: convert to use vm_account
> >> >>   net: skb: Switch to using vm_account
> >> >>   xdp: convert to use vm_account
> >> >>   kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
> >> >>   fpga: dfl: afu: convert to use vm_account
> >> >>   mm: Introduce a cgroup for pinned memory
> >> >>   mm/util: Extend vm_account to charge pages against the pin cgroup
> >> >>   mm/util: Refactor account_locked_vm
> >> >>   mm: Convert mmap and mlock to use account_locked_vm
> >> >>   mm/mmap: Charge locked memory to pins cgroup
> >> >>   selftests/vm: Add pins-cgroup selftest for mlock/mmap
> >> >>
> >> >>  MAINTAINERS | 8 +-
> >> >>  arch/powerpc/kvm/book3s_64_vio.c | 10 +-
> >> >>  arch/powerpc/mm/book3s64/iommu_api.c | 29 +--
> >> >>  drivers/fpga/dfl-afu-dma-region.c | 11 +-
> >> >>  drivers/fpga/dfl-afu.h | 1 +-
> >> >>  drivers/infiniband/core/umem.c | 16 +-
> >> >>  drivers/infiniband/core/umem_odp.c | 6 +-
> >> >>  drivers/infiniband/hw/usnic/usnic_uiom.c | 13 +-
> >> >>  drivers/infiniband/hw/usnic/usnic_uiom.h | 1 +-
> >> >>  drivers/infiniband/sw/siw/siw.h | 2 +-
> >> >>  drivers/infiniband/sw/siw/siw_mem.c | 20 +--
> >> >>  drivers/infiniband/sw/siw/siw_verbs.c | 15 +-
> >> >>  drivers/vdpa/vdpa_user/vduse_dev.c | 20 +--
> >> >>  drivers/vfio/vfio_iommu_spapr_tce.c | 15 +-
> >> >>  drivers/vfio/vfio_iommu_type1.c | 59 +----
> >> >>  drivers/vhost/vdpa.c | 9 +-
> >> >>  drivers/vhost/vhost.c | 2 +-
> >> >>  drivers/vhost/vhost.h | 1 +-
> >> >>  include/linux/cgroup.h | 20 ++-
> >> >>  include/linux/cgroup_subsys.h | 4 +-
> >> >>  include/linux/io_uring_types.h | 3 +-
> >> >>  include/linux/kvm_host.h | 1 +-
> >> >>  include/linux/mm.h | 5 +-
> >> >>  include/linux/mm_types.h | 88 ++++++++-
> >> >>  include/linux/skbuff.h | 6 +-
> >> >>  include/net/sock.h | 2 +-
> >> >>  include/net/xdp_sock.h | 2 +-
> >> >>  include/rdma/ib_umem.h | 1 +-
> >> >>  io_uring/io_uring.c | 20 +--
> >> >>  io_uring/notif.c | 4 +-
> >> >>  io_uring/notif.h | 10 +-
> >> >>  io_uring/rsrc.c | 38 +---
> >> >>  io_uring/rsrc.h | 9 +-
> >> >>  mm/Kconfig | 11 +-
> >> >>  mm/Makefile | 1 +-
> >> >>  mm/internal.h | 2 +-
> >> >>  mm/mlock.c | 76 +------
> >> >>  mm/mmap.c | 76 +++----
> >> >>  mm/mremap.c | 54 +++--
> >> >>  mm/pins_cgroup.c | 273 ++++++++++++++++++++++++-
> >> >>  mm/secretmem.c | 6 +-
> >> >>  mm/util.c | 196 +++++++++++++++--
> >> >>  net/core/skbuff.c | 47 +---
> >> >>  net/rds/message.c | 9 +-
> >> >>  net/xdp/xdp_umem.c | 38 +--
> >> >>  tools/testing/selftests/vm/Makefile | 1 +-
> >> >>  tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
> >> >>  virt/kvm/kvm_main.c | 3 +-
> >> >>  48 files changed, 1114 insertions(+), 401 deletions(-)
> >> >>  create mode 100644 mm/pins_cgroup.c
> >> >>  create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
> >> >>
> >> >> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
> >> >> --
> >> >> git-series 0.9.1
> >> >>
> >> >
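
P.S. Regarding the "lesser of the two limits" behaviour described in the
cover letter above, I am reading it as roughly the following -- again only
a sketch with the same made-up helper names, not the actual code in the
series:

    unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

    /* The rlimit still applies, unless the caller has CAP_IPC_LOCK ... */
    if (!capable(CAP_IPC_LOCK) && pinned + npages > lock_limit)
            return -ENOMEM;

    /* ... while the cgroup limit applies even with CAP_IPC_LOCK. */
    if (!pins_cgroup_try_charge(task_pins_cgroup(current), npages))
            return -ENOMEM;

If that reading is right, it would be worth stating explicitly in the
cgroup documentation as well.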