From: Tejun Heo <tj@kernel.org>
To: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>, lizefan@huawei.com,
	hannes@cmpxchg.org, mingo@redhat.com, pjt@google.com,
	luto@amacapital.net, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com,
	lvenanci@redhat.com, Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCHSET for-4.11] cgroup: implement cgroup v2 thread mode
Date: Mon, 13 Mar 2017 15:26:21 -0400
Message-ID: <20170313192621.GD15709@htj.duckdns.org>
In-Reply-To: <1486964707.5912.93.camel@gmx.de>

Hello, Mike. Sorry about the long delay.

On Mon, Feb 13, 2017 at 06:45:07AM +0100, Mike Galbraith wrote:
> > > So, as long as the depth stays reasonable (single digit or lower),
> > > what we try to do is keeping tree traversal operations aggregated or
> > > located on slow paths. There still are places that this overhead
> > > shows up (e.g. the block controllers aren't too optimized) but it
> > > isn't particularly difficult to make a handful of layers not matter at
> > > all.
> >
> > A handful of cpu bean counting layers stings considerably.

Hmm... yeah, I was trying to think of ways to avoid the full scheduling
overhead at each layer (the scheduler does a lot per layer of
scheduling), but I don't think it's possible to circumvent that without
introducing a whole lot of scheduling artifacts.

In a lot of workloads, the added overhead from several layers of CPU
controllers doesn't seem to get in the way too much (most threads do
something other than scheduling, after all).  The only major issue
we're seeing in the fleet is the cgroup iteration in the idle
rebalancing code pushing up scheduling latency too much, but that's a
different issue.

Anyways, I understand that there are cases where people would want to
avoid any extra layers.  I'll continue in the reply to PeterZ's
message.
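[Editorial sketch: the per-layer cost described above comes from the scheduler having to repeat its pick logic once per cgroup level. A toy model of that descent (not kernel code; `Node`, `pick_next`, and the vruntime values are all made up for illustration):

```python
# Toy model of hierarchical fair scheduling: picking the next task
# descends one runqueue per cgroup layer, so the cost of a pick
# grows with hierarchy depth.  Purely illustrative, not kernel code.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.vruntime = 0.0       # stand-in for CFS virtual runtime
        self.children = children or []

def pick_next(root):
    """Walk from the root to a leaf, choosing the min-vruntime child
    at every level -- one selection pass per layer of hierarchy."""
    node, levels = root, 0
    while node.children:
        node = min(node.children, key=lambda c: c.vruntime)
        levels += 1
    return node, levels

# A 3-layer hierarchy: root -> group -> subgroup -> task.
task = Node("task")
root = Node("root", [Node("group", [Node("subgroup", [task])])])
picked, depth = pick_next(root)   # depth tracks the layers traversed
```

Each added cgroup layer adds one more selection pass to every pick, which is the overhead being discussed.]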
> BTW, that overhead is also why merging cpu/cpuacct is not really as
> wonderful as it may seem on paper.  If you only want to account, you
> may not have anything to gain from group scheduling (in fact it may
> wreck performance), but you'll pay for it.

There's another reason why we'd want accounting separate: weight-based
controllers (currently cpu and io) can't be enabled without affecting
scheduling behavior.  However, accounting is different from the CPU
controllers in that all the heavy parts of the operations can be
shifted to the readers (we just need to do per-cpu updates from the
hot paths), so we might as well publish those stats by default on the
v2 hierarchy.  We couldn't do the same in v1 because the number of
hierarchies was not limited.

Thanks.

-- 
tejun
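[Editorial sketch: the v2 stat files being discussed (e.g. cpu.stat) use the flat-keyed "key value" one-pair-per-line format. A minimal reader, assuming that format; the sample numbers are made up:

```python
def parse_cpu_stat(text):
    """Parse the flat-keyed format used by cgroup v2 stat files
    such as cpu.stat: one "key value" pair per line."""
    stats = {}
    for line in text.strip().splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

# Example contents in the shape of /sys/fs/cgroup/<group>/cpu.stat
# (values are fabricated for illustration).
sample = """\
usage_usec 1000000
user_usec 600000
system_usec 400000
"""
stats = parse_cpu_stat(sample)
```

Because updates are cheap per-cpu counters and aggregation happens only when a reader opens the file, exposing these stats by default doesn't perturb scheduling the way enabling a weight-based controller does.]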