From: Mina Almasry <almasrymina@google.com>
Date: Mon, 18 Oct 2021 05:51:18 -0700
Subject: Re: [RFC Proposal] Deterministic memcg charging for shared memory
To: Roman Gushchin, Shakeel Butt, Greg Thelen, Michal Hocko,
	Johannes Weiner, Hugh Dickins, Tejun Heo, Linux-MM,
	"open list:FILESYSTEMS (VFS and infrastructure)",
	cgroups@vger.kernel.org, riel@surriel.com

On Wed, Oct 13, 2021 at 12:23 PM Mina Almasry wrote:
>
> Below is a proposal for deterministic charging of shared memory.
> Please take a look and let me know if there are any major concerns:
>

Friendly ping on the proposal below. If there are any issues you see
that I can address in the v1 I send for review, I would love to know.
And if the proposal seems fine as is, I would also love to know. To
make the proposed semantics concrete, I have appended two small
userspace sketches after the quoted proposal.

Thanks!
Mina

> Problem:
> Currently, shared memory is charged to the memcg of the allocating
> process. This makes the memory usage of processes accessing shared
> memory unpredictable, since whichever process accesses the memory
> first gets charged. We have a number of use cases where our
> userspace would like deterministic charging of shared memory:
>
> 1. System services allocating memory for client jobs:
> We have services (namely a network access service[1]) that provide
> functionality for clients running on the machine and allocate memory
> to carry out these services. The memory usage of these services
> depends on the number of jobs running on the machine and the nature
> of the requests made to the service, which makes the memory usage of
> these services hard to predict and thus hard to limit via
> memory.max. These system services would like a way to allocate
> memory and instruct the kernel to charge this memory to the client’s
> memcg.
>
> 2. Shared filesystem between subtasks of a large job:
> Our infrastructure has large meta jobs, such as Kubernetes pods,
> which spawn multiple subtasks that share a tmpfs mount. These jobs
> and their subtasks use that tmpfs mount for purposes such as data
> sharing or persisting data across subtask restarts. In Kubernetes
> terminology, the meta job is similar to a pod and the subtasks are
> the containers under the pod. We want the shared memory to be
> deterministically charged to the pod and independent of the lifetime
> of the containers under it.
>
> 3. Shared libraries and language runtimes shared between independent
> jobs:
> We’d like to optimize memory usage on the machine by sharing the
> libraries and language runtimes of many of the processes running on
> our machines in separate memcgs. This has a side effect: one job may
> be unlucky enough to be the first to access many of the libraries
> and may get oom-killed as all the cached files get charged to it.
>
> Design:
> My rough proposal to solve this problem is to simply add a
> ‘memcg=/path/to/memcg’ mount option for filesystems (namely tmpfs),
> directing all the memory of the file system to be ‘remote charged’
> to the cgroup provided by that memcg= option.
>
> Caveats:
> 1. One complication to address is the behavior when the target memcg
> hits its memory.max limit because of remote charging. In this case
> the oom-killer will be invoked, but it may not find anything to kill
> in the target memcg being charged. In this case, I propose simply
> failing the remote charge, which will cause the process executing
> the remote charge to get an ENOMEM. This will be documented behavior
> of remote charging.
> 2. I would like to provide an initial implementation that adds this
> support for tmpfs, while leaving the implementation generic enough
> for myself or others to extend to more filesystems where they find
> the feature useful.
> 3. I would like to implement this for both cgroups v2 _and_ cgroups
> v1, as we still have cgroup v1 users. If this is unacceptable, I can
> provide the v2 implementation only and maintain a local patch for
> the v1 support.
>
> If this proposal sounds good in principle, I have an experimental
> implementation that I can make ready for review. Please let me know
> of any concerns you may have. Thank you very much in advance!
> Mina Almasry
>
> [1] https://research.google/pubs/pub48630/
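
To illustrate the intended usage, here is a minimal userspace sketch
of how a pod manager could set up such a mount. It uses the memcg=
option exactly as proposed above (so it will not work on any current
kernel), and the cgroup path and mount point are made up for the
example:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /*
         * Mount a tmpfs whose pages are all remote charged to the
         * pod's memcg, rather than to whichever container happens to
         * touch a page first. "memcg=" is the proposed option; the
         * cgroup path and mount point are illustrative only.
         */
        if (mount("none", "/mnt/pod-shared", "tmpfs", 0,
                  "size=1G,memcg=/sys/fs/cgroup/pods/pod-1")) {
                perror("mount");
                return 1;
        }

        /*
         * Containers in the pod can now share files under
         * /mnt/pod-shared; every page is charged to pod-1's memcg,
         * independent of which container faults it in.
         */
        return 0;
}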
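
And a sketch of what caveat 1 means for a process writing into such a
mount: if the target memcg is at its memory.max and the oom-killer
finds nothing to kill there, the remote charge fails and the syscall
performing the charge returns ENOMEM. The writer below is
hypothetical; the file path matches the made-up mount above:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        int fd;

        fd = open("/mnt/pod-shared/scratch", O_CREAT | O_WRONLY, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        memset(buf, 0, sizeof(buf));

        /*
         * Under the proposed semantics, allocating this page charges
         * the memcg named in the memcg= mount option. If that memcg
         * is at memory.max and nothing in it can be oom-killed, the
         * charge fails and the write returns -1 with errno ENOMEM.
         */
        if (write(fd, buf, sizeof(buf)) < 0 && errno == ENOMEM)
                fprintf(stderr, "remote charge failed: target memcg "
                        "is at its limit\n");

        close(fd);
        return 0;
}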