From: Tejun Heo <tj@kernel.org>
To: Vikas Shivappa <vikas.shivappa@intel.com>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>,
	linux-kernel@vger.kernel.org, x86@kernel.org, hpa@zytor.com,
	tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
	Matt Fleming <matt.fleming@intel.com>,
	"Auld, Will" <will.auld@intel.com>,
	"Williamson, Glenn P" <glenn.p.williamson@intel.com>,
	"Juvva, Kanaka D" <kanaka.d.juvva@intel.com>
Subject: Re: [PATCH 5/9] x86/intel_rdt: Add new cgroup and Class of service management
Date: Wed, 5 Aug 2015 11:46:27 -0400	[thread overview]
Message-ID: <20150805154627.GL17598@mtj.duckdns.org> (raw)
In-Reply-To: <alpine.DEB.2.10.1508041209360.921@vshiva-Udesk>

Hello,

On Tue, Aug 04, 2015 at 07:21:52PM -0700, Vikas Shivappa wrote:
> >I get that this would be an easier "bolt-on" solution but isn't a good
> >solution by itself in the long term.  As I wrote multiple times
> >before, this is a really bad programmable interface.  Unless you're
> >sure that this doesn't have to be programmable for threads of an
> >individual application,
> 
> Yes, this doesn't have to be a programmable interface for threads. It may
> not be a good idea to let the threads decide the cache allocation by
> themselves using this direct interface. We are transferring the
> decision-maker responsibility to the system administrator.

I'm having a hard time believing that.  There definitely are use cases
where cachelines are thrashed among service threads.  Are you
proclaiming that those cases aren't gonna be supported?

> - This interface, like you said, can easily bolt on: basically an
> easy-to-use interface without worrying about the architectural details.

But it's rife with architectural details.  What I meant by bolt-on was
that this is a shortcut way of introducing this feature without
actually worrying about how it will be used by applications, and
that's not a good thing.  We need to be worrying about that.

> - But it still does the job. The root user can allocate exclusive or
> overlapping cache lines to threads or groups of threads.
> - No major roadblocks for usage, as we can make the allocations like
> mentioned above and still keep the hierarchy etc. and use it when needed.
> - An important factor is that it can easily co-exist with other
> interfaces like #2 and #3 for the same. So I do not see a reason why we
> should not use this.
> This is not meant to be a programmable interface; however, it does not
> prevent co-existence.

I'm not saying they are mutually exclusive, but that we're going
overboard in this direction when a programmable interface should be the
priority.  This mostly happened naturally for other resources because
cgroups was introduced later, but I think there's a general rule to
follow there.

> - If the root user has to set the affinity of the threads for which he
> is allocating cache, he can do so using other cgroups like cpuset, or
> set the masks separately using taskset. This would let him configure the
> cache allocation on a socket.

Well, root can do whatever it wants with a programmable interface too.
The way things are designed, even containment isn't an issue: assign
an ID to all processes by default and change the allocation on that.

> >this is a pretty bad interface by itself.
> >
> >>There is already a lot of such usage among different enterprise users at
> >>Intel/google/cisco etc who have been testing the patches posted to lkml and
> >>academically there is plenty of usage as well.
> >
> >I mean, that's the tool you gave them.  Of course they'd be using it
> >but I suspect most of them would do fine with a programmable interface
> >too.  Again, please think of cpu affinity.
> 
> Any methodology to support the feature may need an arbitrator/agent to
> decide the allocation.
> 
> 1. Let the root user or system administrator be the one who decides the
> allocation based on the current usage. We assume this to be someone with
> administrative privileges. He could use the cgroup interface to perform
> the task. One way to do the cpu affinity is by mounting the cpuset and
> rdt cgroups together.

If you factor in threads of a process, the above model is
fundamentally flawed.  How would root or any external entity find out
which threads are to be allocated what?  Each application would
constantly have to tell an external agent about what its intentions
are.  This might seem to work in a limited feature-testing setup where
you know everything about who's doing what, but it is in no way a
widely deployable solution.  This pretty much degenerates into #3 you
listed below.

> 2. The kernel automatically assigning the cache based on the priority
> of the apps etc. This is something which could be designed to co-exist
> with #1 above, much like how the cpuset cgroup co-exists with the kernel
> assigning cpus to tasks. (the task could have a cache capacity mask just
> like the cpu affinity mask)

I don't think CAT would be applicable in this manner.  Best-effort
allocation is what the CPU is doing by default already.  I'm highly
doubtful something like CAT would be used automatically on generic
systems.  It requires fairly specific coordination, after all.

> 3. A user-programmable interface, where say a resource management
> program x (and hence apps) could link a library which supports cache
> alloc/monitoring etc. and then try to control and monitor the resources.
> The arbitrator could just be the resource management interface itself,
> or the kernel could decide.
>
> If users use this programmable interface, we need to make sure the apps
> cannot just allocate resources without some interfacing agent (in which
> case they could interface with #2?).
> 
> Do you think there are any issues for the user programmable interface to
> co-exist with the cgroup interface ?

Isn't that a weird question to ask when there's no reason to rush to a
full-on cgroup controller?  We can start with something simpler, more
specific and easier for applications to program against.  If the
hardware details make it difficult to design a properly abstracted
interface around them, make it a char device node, for example, and let
userland worry about how to control access to it.  If you stick to
something like that, exposing most of the hardware details verbatim is
fine.  People know they're dealing with something very specific with
those types of interfaces.

Thanks.

-- 
tejun

