From: Jan Kara <jack@suse.cz>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Jan Kara <jack@suse.cz>,
	Matthew Bobrowski <mbobrowski@mbobrowski.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux API <linux-api@vger.kernel.org>,
	Christian Brauner <christian.brauner@ubuntu.com>
Subject: Re: [PATCH v2 0/2] unprivileged fanotify listener
Date: Thu, 25 Mar 2021 14:49:24 +0100	[thread overview]
Message-ID: <20210325134924.GA13673@quack2.suse.cz> (raw)
In-Reply-To: <CAOQ4uxgjM8qC-Kre9ahMQzzhsOFtCXu4Vzd2HYUsSOstgf9Jyw@mail.gmail.com>

On Wed 24-03-21 17:50:52, Amir Goldstein wrote:
> > > So I have implemented this idea on the fanotify_userns branch and the cost
> > > per "filtered" sb mark is quite low - it's a pretty cheap check in
> > > send_to_group(). But still, if an unbounded number of users can add to the
> > > sb mark list, it is not going to end well.
> >
> > Thinking out loud: So what is the cost going to be for the side generating
> > events? Ideally it would be O(number of fanotify groups receiving the event).
> > We cannot get better than that, and if the constants aren't too big I think
> > this is acceptable overhead. Sure, this can mean total work of (number of
> > users) * (max number of subtree marks per user) for queueing a notification
> > event, but I don't think it is practical for a DoS attack and I also don't
> > think that in practice users will be watching overlapping subtrees that
> > much.
> >
> 
> Why overlapping?
> My concern is not so much from DoS attacks.
> My concern is more from innocent users adding unacceptable
> accumulated overhead.
> 
> Think of a filesystem mounted at /home/ with 100K directories at
> /home/user$N, where every user gets its own idmapped mount from
> systemd-homed and may or may not choose to run a listener to
> get events generated under its own home dir (which is an idmapped
> mount). Even if we limit users to one sb mark each, we can still end up
> with a list of 100K marks on the sb.

I see, but then you'd have to have 100K users using the same filesystem on
the server at the same time, which doesn't look likely to me. I'd presume
the home dir is watched only if the user is actually running something on
that machine... So I'm not sure how realistic this example is. But yes,
maybe we need a more efficient algorithm for selecting which subtree
watch is actually relevant for an event.

> For this reason I think we need to limit the number of marks per sb.
> The simple way is a global config like max_queued_events, but I think
> we can do better than that.

Adding a global limit on the number of sb marks is OK, but I'd still like the
system to scale reasonably with, say, tens to hundreds of watches...

> > The question is whether we can get that fast. Probably not because that
> > would have to attach subtree watches to directory inodes or otherwise
> > filter out unrelated fanotify groups in O(1). But the complexity of
> > O(number of groups receiving events + depth of dir where event is happening)
> > is probably achievable - we'd walk the tree up and have roots of watched
> > subtrees marked. What do you think?
> 
> I am for that. I already posted a POC along those lines [1].
> I was just not sure how to limit the potential accumulated overhead.
> 
> [1] https://github.com/amir73il/linux/commits/fanotify_subtree_mark

Yes, but AFAICT your solution was O((number of subtree marks on sb) * depth
of dir) while I'm speaking about O(number of *groups* on sb + depth of
dir). That is significantly less, and achieving such complexity will require
a more careful setup (like placing special marks in the lists of watches for
directories that are roots of watched subtrees and checking those lists when
walking up the tree).
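
To make the kind of walk I have in mind a bit more concrete, here is a rough
sketch (not a real implementation - inode_has_subtree_root_mark() is a
made-up helper standing in for whatever per-directory lookup the real code
would do, and rename / mount-crossing races are ignored):

static bool event_in_watched_subtree(struct dentry *dentry)
{
	struct dentry *d = dget(dentry);
	bool found = false;

	/* Walk from the event's directory up towards the root of the sb. */
	while (!IS_ROOT(d)) {
		struct dentry *parent;

		/* Made-up helper: is some group's watched subtree rooted here? */
		if (inode_has_subtree_root_mark(d_inode(d))) {
			found = true;
			break;
		}
		parent = dget_parent(d);
		dput(d);
		d = parent;
	}
	dput(d);
	return found;
}

The walk itself is O(depth of dir); only at directories that are marked as
subtree roots would we then look at the (hopefully short) list of interested
groups, which is where the O(number of groups on sb) part comes from.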

> > Also there is a somewhat related question about the semantics of subtree
> > watches in the presence of mounts - do subtree watches "see through" mount
> > points? Probably not, but then with bind mounts this can sometimes be
> > inconvenient / confusing - e.g. if I have /tmp bind-mounted to /var/tmp and
> > I'm watching the subtree of /var, I would not get events for what's in
> > /var/tmp... Which is logical if you spell it out like this, but applications
> > often don't care what the mount hierarchy looks like, they just care about
> > the locally visible directory structure.
> 
> Those are hard questions.  I think the userns/mountns developers needed
> to address them a while ago, and I think there are some helpers for
> checking the visibility of paths.
> 
> > > <hand waving>
> > > I think what we need here (thinking out loud) is to account the sb marks
> > > to the user that mounted the filesystem or to the user mapped to admin via
> > > the idmapped mount, maybe to both(?), probably using a separate ucount
> > > entry (e.g. max_fanotify_filesystem_marks).
> >
> > I'm somewhat lost here. Are these two users different? We have /home/foo
> > which is a mounted filesystem. AFAIU it will be mounted in a special user
> > namespace for user 'foo' - let's call it 'foo-ns'. /home/foo has an idmapping
> > attached, so system [ug]ids are non-trivially mapped to on-disk [ug]ids. Now
> > we have a user - let's call it 'foo-usr' - that has enough capabilities
> > (whatever they are) in 'foo-ns' to place fanotify subtree marks in
> > /home/foo. So these marks are naturally accounted towards 'foo-usr'. To
> > whom else would you like to account these marks, and why?
> >
> 
> I would like the system admin to be able to limit /home to 100 sb marks
> (filtered or not), because that number impacts the send_to_group() iteration.

OK, so per-sb limitation of sb mark number...

> I would also like systemd to be able to grant a smaller quota of filtered
> sb marks per user when creating and mapping the idmapped mounts
> at /home/foo$N.

... and a ucount to go with it?

> I *think* we can achieve that by accounting the sb marks to uid 0
> (who mounted /home) in a ucounts entry "fanotify_sb_marks".

But a superblock can be mounted in multiple places, in multiple user
namespaces, potentially by different users (think of nested containers). So
if we want a per-sb limit on sb marks, I don't think accounting those per
user will really achieve that.
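
Just so we are talking about the same mechanism, here is roughly what I
understand you are proposing (only a sketch - FANOTIFY_SB_MARKS_MAX,
s_fsnotify_sb_marks and UCOUNT_FANOTIFY_SB_MARKS do not exist, they stand in
for a hypothetical per-sb cap, per-sb counter and ucounts entry):

static int fanotify_charge_sb_mark(struct super_block *sb,
				   struct user_namespace *mnt_userns)
{
	struct ucounts *ucounts;

	/* Hypothetical per-sb counter with an admin-configurable cap. */
	if (atomic_inc_return(&sb->s_fsnotify_sb_marks) > FANOTIFY_SB_MARKS_MAX) {
		atomic_dec(&sb->s_fsnotify_sb_marks);
		return -ENOSPC;
	}

	/* Charge the owner of the (idmapped) mount via a hypothetical ucounts entry. */
	ucounts = inc_ucount(mnt_userns, make_kuid(mnt_userns, 0),
			     UCOUNT_FANOTIFY_SB_MARKS);
	if (!ucounts) {
		atomic_dec(&sb->s_fsnotify_sb_marks);
		return -EMFILE;
	}
	return 0;
}

But as I wrote above, once the same sb is visible in several user namespaces
it is not clear which mnt_userns / uid the charge should go to, so the
per-user part does not obviously give us the per-sb guarantee.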

> If /home were an FS_USERNS_MOUNT filesystem mounted inside
> some userns, then all its sb marks would be accounted to uid 0 of
> that userns.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


Thread overview: 33+ messages
2021-03-04 11:29 [PATCH v2 0/2] unprivileged fanotify listener Amir Goldstein
2021-03-04 11:29 ` [PATCH v2 1/2] fanotify: configurable limits via sysfs Amir Goldstein
2021-03-04 11:29 ` [PATCH v2 2/2] fanotify: support limited functionality for unprivileged users Amir Goldstein
2021-03-16 15:55 ` [PATCH v2 0/2] unprivileged fanotify listener Jan Kara
2021-03-17 11:01   ` Amir Goldstein
2021-03-17 11:42     ` Jan Kara
2021-03-17 12:19       ` Amir Goldstein
2021-03-17 17:45         ` Christian Brauner
2021-03-17 19:14           ` Amir Goldstein
2021-03-18 14:31             ` Christian Brauner
2021-03-18 16:48               ` Amir Goldstein
2021-03-19 13:40                 ` Christian Brauner
2021-03-19 14:21                   ` Amir Goldstein
2021-03-20 12:57                     ` Amir Goldstein
2021-03-22 12:44                       ` Amir Goldstein
2021-03-22 16:28                         ` Christian Brauner
2021-03-22 17:22                           ` Amir Goldstein
2021-03-24 13:57                         ` Amir Goldstein
2021-03-24 14:32                           ` Christian Brauner
2021-03-24 15:05                             ` Amir Goldstein
2021-03-24 16:28                               ` Christian Brauner
2021-03-24 17:07                                 ` Amir Goldstein
2021-03-25 11:12                                   ` Christian Brauner
2021-03-25 15:31                                     ` Amir Goldstein
2021-03-28 14:58                                       ` Amir Goldstein
2021-03-18 15:44         ` Jan Kara
2021-03-18 17:07           ` Amir Goldstein
2021-03-18 18:40             ` Christian Brauner
2021-03-22 18:38             ` Amir Goldstein
2021-03-24 11:48               ` Jan Kara
2021-03-24 15:50                 ` Amir Goldstein
2021-03-25 13:49                   ` Jan Kara [this message]
2021-03-25 15:05                     ` Amir Goldstein
