From: Mike Galbraith <umgwanakikbuti@gmail.com>
To: Tejun Heo <tj@kernel.org>
Cc: torvalds@linux-foundation.org, akpm@linux-foundation.org,
	a.p.zijlstra@chello.nl, mingo@redhat.com, lizefan@huawei.com,
	hannes@cmpxchg.org, pjt@google.com, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-api@vger.kernel.org,
	kernel-team@fb.com
Subject: Re: [PATCHSET RFC cgroup/for-4.6] cgroup, sched: implement resource group and PRIO_RGRP
Date: Sun, 13 Mar 2016 18:40:35 +0100	[thread overview]
Message-ID: <1457890835.3859.72.camel@gmail.com> (raw)
In-Reply-To: <20160313150012.GB13405@htj.duckdns.org>

On Sun, 2016-03-13 at 11:00 -0400, Tejun Heo wrote:
> Hello, Mike.
> 
> On Sat, Mar 12, 2016 at 07:26:59AM +0100, Mike Galbraith wrote:
> > Hrm.  You're showing that per-thread groups can coexist just fine,
> > which is good given that need and usage exist today out in the wild.  Why
> > do such groups have to be invisible with a unique interface though?
> 
> I tried to explain these in the aforementioned RFD document.  I'll give
> a brief example here.
> 
> Let's say there is an application which wants to manage resource
> distributions across its multiple threadpools in a hierarchical way.
> With the cgroupfs interface as the only system-wide interface, it has
> to coordinate with whoever or whatever is managing that interface.
> Maybe it can get a subtree delegated to it, maybe it has to ask the
> system thing to create and place threads there, maybe it can just
> expose the pids and let the system management do its thing (what if
> the threads in the pools are dynamic though?).  There is no reliable,
> universal way of doing this.  Each such application has to be ready
> to specifically coordinate with the specific system management in use.

The last thing I ever want to see on my boxen is random applications
either doing their own thing with my cgroups management interface, or
conspiring with "the system thing" behind my back to do things that I
did not specifically ask them to do.

"The system thing" started doing its own thing behind my back, and
oddly enough, its tentacles started falling off.  By golly, its eyes
seem to have fallen out as well.

That's what happens when control freak meets control freak: one of them
ends up in pieces.  There can be only one, and that one is me, the
administrator.  Applications don't coordinate spit; if I put on my
administrator hat and stuff 'em in a box, they better stay stuffed.

> This is the kernel failing to provide proper abstraction and isolation
> between different layers.  The "raw" feature is there, but it's unsafe
> to use and thus can't be used widely.

Good, management of my boxen is my turf.  The raw feature works fine
today, and I'd like to see it keep on working tomorrow.  If tools
written for administration diddle, that's fine IFF I'm the guy on the
other end of the tool.  All other manipulators of my management
interfaces can go.. fish.
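
For reference, the raw interface in question is nothing exotic.  Below
is a minimal sketch of what an admin-side tool does to park one thread
in a cgroup v1 cpu controller group; the group path and the tool name
are examples of mine, not anything from the patchset:

/*
 * Sketch only: move one thread into a cgroup-v1 cpu controller group.
 * Assumes the administrator already created the group.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *path = "/sys/fs/cgroup/cpu/box/tasks";	/* example path */
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: move-thread <tid>\n");
		return 1;
	}
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	/* v1 "tasks" accepts individual thread IDs, which is exactly what
	 * makes thread-granular cpu control possible today */
	fprintf(f, "%ld\n", strtol(argv[1], NULL, 10));
	return fclose(f) ? 1 : 0;
}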

> > Given the core has to deal with them whether they're visible or not,
> > and given they exist to fulfill a need, it seems they should be
> > first-class citizens, not some Quasimodo-like creature sneaking into the
> > cathedral via a back door and slinking about in the shadows.
> 
> In terms of programmability and accessibility for individual
> applications, making group resource management available through a
> straightforward and incremental extension of existing mechanisms is
> *way* more of a first-class citizen.  It is two seamless extensions to
> clone(2) and setpriority(2), making hierarchical resource management
> generally available to applications.

To me, that sounds like chaos.
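
If I'm reading the series right, the whole proposal boils down to
something like the sketch below.  PRIO_RGRP exists only in this
unmerged RFC, so the constant's value and the TID here are purely
illustrative placeholders of mine, not any kernel ABI:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/types.h>

#ifndef PRIO_RGRP
#define PRIO_RGRP 3	/* hypothetical value, for illustration only */
#endif

int main(void)
{
	/* example TID of a worker that the series would have placed in its
	 * own rgroup at clone(2) time via the new clone flag it adds */
	id_t worker_tid = 12345;

	/* per the proposal, the rgroup's weight is then tuned like a nice
	 * value; on any merged kernel this simply fails with EINVAL */
	if (setpriority(PRIO_RGRP, worker_tid, -5) == -1)
		fprintf(stderr, "setpriority: %s\n", strerror(errno));
	return 0;
}

Chaos, because every threadpool library then gets to have opinions
about weights on my box without ever touching an interface I can see.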

> There can be use cases where building a cpu resource hierarchy which
> is completely alien to how the rest of the system is organized is
> useful.  For those cases, the only thing which can be done is building
> a separate hierarchy for the cpu controller, and that capability isn't
> going anywhere.

As long as administrators can use the system interface to aggregate
what they see fit, I'm happy.  The scheduler schedules threads, ergo
the cpu controller must aggregate threads.  There is no process.

	-Mike
