From: Tim Hockin <thockin@hockin.org>
To: Serge Hallyn <serge.hallyn@ubuntu.com>
Cc: Mike Galbraith <bitbucket@online.de>, Tejun Heo <tj@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Containers <containers@lists.linux-foundation.org>,
	Kay Sievers <kay.sievers@vrfy.org>,
	lpoetter <lpoetter@redhat.com>,
	workman-devel <workman-devel@redhat.com>,
	jpoimboe <jpoimboe@redhat.com>,
	"dhaval.giani" <dhaval.giani@gmail.com>,
	Cgroups <cgroups@vger.kernel.org>,
	vrigo@google.com
Subject: Re: cgroup access daemon
Date: Thu, 27 Jun 2013 13:27:59 -0700
Message-ID: <CAAAKZwvXYBznJ5uiWaEV7fNCHbciwLa6U+E5RUJkz1ioMu=cRg@mail.gmail.com>
In-Reply-To: <20130627181108.GA26334@sergelap>

On Thu, Jun 27, 2013 at 11:11 AM, Serge Hallyn <serge.hallyn@ubuntu.com> wrote:
> Quoting Tim Hockin (thockin@hockin.org):
>
>> For our use case this is a huge problem.  We have people who access
>> cgroup files in fairly tight loops, polling for information.  We
>> have literally hundreds of jobs running at sub-second frequencies -
>> plumbing all of that through a daemon is going to be a disaster.
>> Either your daemon becomes a bottleneck, or we have to build something
>> far more scalable than you really want to.  Not to mention the
>> inefficiency of inserting a layer.
>
> Currently you can trivially create a container which has the
> container's cgroups bind-mounted to the expected places
> (/sys/fs/cgroup/$controller) by uncommenting two lines in the
> configuration file, and handle cgroups through cgroupfs there.
> (This is what the management agent wants to be an alternative
> for)  The main deficiency there is that /proc/self/cgroup is
> not filtered, so it will show /lxc/c1 for init's cgroup, while
> the host's /sys/fs/cgroup/devices/lxc/c1/c1.real will be what
> is seen under the container's /sys/fs/cgroup/devices (for
> instance).  Not ideal.

I'm really saying that if your daemon is to provide a replacement for
cgroupfs direct access, it needs to be designed to be scalable.  If
we're going to get away from bind mounting cgroupfs into user
namespaces, then let's try to solve ALL the problems.
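
To make the access pattern concrete, here is a minimal sketch of the
kind of polling loop I mean (the cgroup path and interval here are
hypothetical, but the shape is real): each sample is a single pread()
against cgroupfs, with no daemon round-trip in the way.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[64];
            /* hypothetical job cgroup; any per-cgroup stat file works */
            int fd = open("/sys/fs/cgroup/memory/job1/memory.usage_in_bytes",
                          O_RDONLY);

            if (fd < 0)
                    return 1;
            for (;;) {
                    ssize_t n = pread(fd, buf, sizeof(buf) - 1, 0);

                    if (n <= 0)
                            break;
                    buf[n] = '\0';
                    printf("usage: %s", buf);
                    usleep(100 * 1000);     /* sub-second sampling */
            }
            close(fd);
            return 0;
    }

Any daemon that replaces this has to beat, or at least approach, the
cost of that one pread() per sample, multiplied by hundreds of jobs.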

>> We also need the ability to set up eventfds for users or to let them
>> poll() on the socket from this daemon.
>
> So you'd want to be able to request updates when any cgroup value
> is changed, right?

Not necessarily ANY, but that's the logical endpoint of this facet of
the API.

> That's currently not in my very limited set of commands, but I can
> certainly add it, and yes it would be a simple unix sock so you can
> set up eventfd, select/poll, etc.

Assuming the protocol is basically a pass-through to basic filesystem
ops, it should be pretty easy.  You just need to add it to your
protocol.
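
On the client side I'd picture something like the sketch below.  The
socket path and the "WATCH" verb are invented for illustration - the
point is just that a plain unix socket gives us poll()/select/eventfd
integration for free.

    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            struct pollfd pfd;
            char buf[256];
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);

            if (fd < 0)
                    return 1;
            /* hypothetical agent socket path */
            strncpy(sa.sun_path, "/run/cgroupd.sock",
                    sizeof(sa.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
                    return 1;

            /* hypothetical wire protocol: subscribe to one file */
            dprintf(fd, "WATCH memory/job1/memory.usage_in_bytes\n");

            pfd.fd = fd;
            pfd.events = POLLIN;
            while (poll(&pfd, 1, -1) > 0) {
                    ssize_t n = read(fd, buf, sizeof(buf) - 1);

                    if (n <= 0)
                            break;
                    buf[n] = '\0';
                    printf("changed: %s", buf);
            }
            close(fd);
            return 0;
    }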

But it brings up another point - access control.  How do you decide
which files a child agent should have access to?  Does that ever
change based on the child's configuration? In our world, the answer is
almost certainly yes.

>> >> > So then the idea would be that userspace (like libvirt and lxc) would
>> >> > talk over /dev/cgroup to its manager.  Userspace inside a container
>> >> > (which can't actually mount cgroups itself) would talk to its own
>> >> > manager which is talking over a passed-in socket to the host manager,
>> >> > which in turn runs natively (uses cgroupfs, and nests "create /c1" under
>> >> > the requestor's cgroup).
>> >>
>> >> How do you handle updates of this agent?  Suppose I have hundreds of
>> >> running containers, and I want to release a new version of the cgroupd
>> >> ?
>> >
>> > This may change (which is part of what I want to investigate with some
>> > POC), but right now I'm not building any controller-aware smarts into it.  I
>> > think that's what you're asking about?  The agent doesn't do "slices"
>> > etc.  This may turn out to be insufficient, we'll see.
>>
>> No, what I am asking is a release-engineering problem.  Suppose we
>> need to roll out a new version of this daemon (some new feature or a
>> bug or something).  We have hundreds of these "child" agents running
>> in the job containers.
>
> When I say "container" I mean an lxc container, with its own isolated
> rootfs and mntns.  I'm not sure what your "containers" are, but if
> they're not that, then they shouldn't need to run a child agent.  They
> can just talk over the host cgroup agent's socket.

If they talk over the host agent's socket, where is the access control
and restriction done?  Who decides how deep I can nest groups?  Who
says which files I may access?  Who stops me from modifying someone
else's container?

Our containers are somewhat thinner and more managed than LXC, but not
that much.  If we're running a system agent in a user container, we
need to manage that software.  We can't just start up a version and
leave it running until the user decides to upgrade - we force
upgrades.

>> How do I bring down all these children, and then bring them back up on
>> a new version in a way that does not disrupt user jobs (much)?
>>
>> Similarly, what happens when one of these child agents crashes?  Does
>> someone restart it?  Do user jobs just stop working?
>
> An upstart^W$init_system job will restart it...

What happens when the main agent crashes?  All those children on UNIX
sockets need to reconnect, I guess.  This means your UNIX socket needs
to be a named socket, not just a socketpair(), making your auth model
more complicated.
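
And once it's a named socket, anyone who can reach the path can
connect, so the agent has to authenticate every connection itself.
SO_PEERCRED is the obvious mechanism - I'm guessing at your design
here, but I'd expect something like this per accept()ed connection:

    #define _GNU_SOURCE             /* for struct ucred */
    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: decide whether a freshly accept()ed client may talk to
     * us at all, and which cgroup subtree it may touch. */
    static int authorize(int client_fd)
    {
            struct ucred cred;
            socklen_t len = sizeof(cred);

            if (getsockopt(client_fd, SOL_SOCKET, SO_PEERCRED,
                           &cred, &len) < 0)
                    return -1;
            /* Mapping pid/uid/gid to an allowed cgroup subtree is the
             * hard, so-far-unspecified part of the design. */
            fprintf(stderr, "peer pid=%d uid=%u gid=%u\n",
                    (int)cred.pid, (unsigned)cred.uid,
                    (unsigned)cred.gid);
            return 0;
    }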

What happens when the main agent hangs?  Is someone health-checking
it?  How about all the child daemons?

I guess my main point is that this SOUNDS like a simple project, but
if you just do the simple obvious things, it will be woefully
inadequate for anything but simple use-cases.  If we get forced into
such a model (and there are some good reasons to do it, even
disregarding all the other chatter), we'd rather use the same thing
that the upstream world uses, and not re-invent the whole thing
ourselves.

Do you have a design spec, or a requirements list, or even a prototype
that we can look at?

Tim
