From: Khazhy Kumykov <khazhy@google.com>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Gabriel Krisman Bertazi <krisman@collabora.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	kernel@collabora.com, Linux MM <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Theodore Tso <tytso@mit.edu>
Subject: Re: [PATCH v3 0/3] shmem: Allow userspace monitoring of tmpfs for lack of space.
Date: Thu, 21 Apr 2022 16:19:41 -0700
Message-ID: <CACGdZY+KqPKaW3jM2SN4MA8_SUHSRiA2Dt43Q7NbK7BO2t_FVw@mail.gmail.com>
In-Reply-To: <CAOQ4uxhjvwwEQo+u=TD-CJ0xwZ7A1NjkA5GRFOzqG7m1dN1E2Q@mail.gmail.com>

On Wed, Apr 20, 2022 at 10:34 PM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Tue, Apr 19, 2022 at 6:29 PM Gabriel Krisman Bertazi
> <krisman@collabora.com> wrote:
> >
> > Andrew Morton <akpm@linux-foundation.org> writes:
> >
> > Hi Andrew,
> >
> > > On Mon, 18 Apr 2022 17:37:10 -0400 Gabriel Krisman Bertazi <krisman@collabora.com> wrote:
> > >
> > >> When provisioning containerized applications, multiple very small tmpfs
> > >
> > > "files"?
> >
> > Actually, filesystems.  In cloud environments, we have several small
> > tmpfs instances associated with containerized tasks.
> >
> > >> are used, for which one cannot always predict the proper file system
> > >> size ahead of time.  We want to be able to reliably monitor filesystems
> > >> for ENOSPC errors, without depending on the application being executed
> > >> reporting the ENOSPC after a failure.
> > >
> > > Well that sucks.  We need a kernel-side workaround for applications
> > > that fail to check and report storage errors?
> > >
> > > We could do this for every syscall in the kernel.  What's special about
> > > tmpfs in this regard?
> > >
> > > Please provide additional justification and usage examples for such an
> > > extraordinary thing.
> >
> > A cloud provider deploying containerized applications might not
> > control the application, so patching userspace wouldn't be a
> > solution.  More importantly - and this is why it is shmem-specific -
> > they want to differentiate between a user getting ENOSPC due to an
> > insufficiently provisioned fs size and getting it due to running out
> > of memory in a container, both of which return ENOSPC to the process.
> >
>
> Isn't there already a per-memcg OOM handler that could be used by the
> orchestrator to detect the latter?
>
> > A system administrator can then use this feature to monitor a fleet of
> > containerized applications in a uniform way, detect provisioning issues
> > with different causes, and correct the deployment.
> >
> > I originally submitted this as a new fanotify event, but given the
> > specificity of shmem, Amir suggested the interface I'm implementing
> > here.  We originally raised this discussion here:
> >
> > https://lore.kernel.org/linux-mm/CACGdZYLLCqzS4VLUHvzYG=rX3SEJaG7Vbs8_Wb_iUVSvXsqkxA@mail.gmail.com/
> >
>
> To put things in context, the points I was trying to make in this
> discussion are:
>
> 1. Why isn't monitoring with statfs() a sufficient solution?  More
>     specifically, the shared disk space provisioning problem does not
>     sound very tmpfs-specific to me; it is a well-known issue for
>     thin-provisioned storage in environments with shared resources such
>     as the ones you describe.

I think this solves a different problem: to my understanding, statfs
polling is useful for determining whether a long-lived, slowly growing
FS is approaching its limits.  The tmpfs instances here are generally
short-lived, and may intentionally run close to their limits (e.g. if
they "know" exactly how much they need, and don't expect to write any
more than that).  In this case, the limits are there to guard against
runaway usage (and to assist with scheduling), so "monitor and
increase limits periodically" isn't appropriate.

This feature is meant just to make it easier to distinguish between
"tmpfs write failed due to OOM" and "tmpfs write failed because you
exceeded tmpfs' max size" (what makes tmpfs "special" is that, for
good reason, it returns ENOSPC to the user in both situations).  For a
small task, a user could easily go from 0% usage to full, or to OOM,
rather quickly, so statfs polling would likely miss the event.  When
the task fails, the orchestrator can easily (and reliably) look at
this statistic to determine whether the user exceeded the tmpfs limit.
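
As a sketch of that post-mortem check - with the caveat that both the
instance name and the attribute name below are hypothetical
placeholders for whatever this series ends up exposing under
/sys/fs/tmpfs/:

#include <stdio.h>

int main(void)
{
        unsigned long limited = 0;
        /* hypothetical instance id and attribute name */
        FILE *f = fopen("/sys/fs/tmpfs/tmpfs-123/limited", "r");

        if (!f)
                return 1;
        if (fscanf(f, "%lu", &limited) == 1 && limited > 0)
                printf("task exceeded its tmpfs size limit "
                       "(%lu failed allocations)\n", limited);
        fclose(f);
        return 0;
}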

(I do see the parallel here to thin-provisioned storage - "exceeded
your individual budget" vs. "the underlying overcommitted system ran
out of bytes".)

> 2. OTOH, exporting internal fs stats via /sys/fs for debugging, health
>     monitoring, or whatever seems legit to me and is widely practiced by
>     other filesystems, so exposing those tmpfs stats as this patch set
>     is doing seems fine to me.
>
> Another point worth considering in favor of /sys/fs/tmpfs -
> since tmpfs is FS_USERNS_MOUNT, the ability of a sysadmin to monitor
> all tmpfs mounts in the system and their usage is limited.
>
> Therefore, having a central way to enumerate all tmpfs instances in
> the system, as there is for blockdev fs instances and fuse fs
> instances, does not sound like a terrible idea in general.
>
> > > Whatever that action is, I see no user-facing documentation which
> > > guides the user in how to take advantage of this?
> >
> > I can follow up with a new version with documentation, if we agree this
> > feature makes sense.
> >
>
> Given the time of year and the participants involved, shall we
> continue this discussion at LSFMM?
>
> I am not sure this even requires a shared FS/MM session, but I don't
> mind trying to allocate a shared FS/MM slot if Andrew and the MM folks
> are interested in taking part in the discussion.
>
> As long as memcg is able to report OOM to the orchestrator, the
> problem does not sound very tmpfs-specific to me.
>
> As Ted explained, cloud providers (for some reason) charge by disk
> size and not by disk usage, so even for non-tmpfs filesystems, growing
> the fs online on demand could prove to be a rewarding practice for
> cloud applications.
>
> Thanks,
> Amir.

