Date: Mon, 13 Mar 2017 15:26:21 -0400
From: Tejun Heo
To: Mike Galbraith
Cc: Peter Zijlstra, lizefan@huawei.com, hannes@cmpxchg.org,
    mingo@redhat.com, pjt@google.com, luto@amacapital.net,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com, lvenanci@redhat.com, Linus Torvalds,
    Andrew Morton
Subject: Re: [PATCHSET for-4.11] cgroup: implement cgroup v2 thread mode
Message-ID: <20170313192621.GD15709@htj.duckdns.org>
In-Reply-To: <1486964707.5912.93.camel@gmx.de>

Hello, Mike. Sorry about the long delay.

On Mon, Feb 13, 2017 at 06:45:07AM +0100, Mike Galbraith wrote:
> > > So, as long as the depth stays reasonable (single digit or lower),
> > > what we try to do is keeping tree traversal operations aggregated or
> > > located on slow paths.  There still are places that this overhead
> > > shows up (e.g. the block controllers aren't too optimized) but it
> > > isn't particularly difficult to make a handful of layers not matter
> > > at all.
> >
> > A handful of cpu bean counting layers stings considerably.

Hmm... yeah, I was trying to think of ways to avoid the full scheduling
overhead at each layer (the scheduler does a lot per layer of
scheduling) but I don't think it's possible to circumvent that without
introducing a whole lot of scheduling artifacts.

In a lot of workloads, the added overhead from several layers of CPU
controllers doesn't seem to get in the way too much (most threads do
something other than scheduling, after all).  The only major issue
we're seeing in the fleet is the cgroup iteration in the idle
rebalancing code pushing up scheduling latency too much, but that's a
different issue.

Anyways, I understand that there are cases where people would want to
avoid any extra layers.  I'll continue on that in the reply to PeterZ's
message.

> BTW, that overhead is also why merging cpu/cpuacct is not really as
> wonderful as it may seem on paper.  If you only want to account, you
> may not have anything to gain from group scheduling (in fact it may
> wreck performance), but you'll pay for it.

There's another reason why we would want accounting kept separate: the
weight-based controllers, currently cpu and io, can't be enabled
without affecting scheduling behavior.  Accounting is different in
that all the heavy parts of the operation can be shifted to the
readers (we just need to do per-cpu updates from the hot paths), so we
might as well publish those stats by default on the v2 hierarchy.
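To illustrate the split, here is a toy userspace sketch of the pattern
(not the actual kernel code; the structure and function names are made
up): the hot path only bumps its own CPU's slot, and the reader pays
the aggregation cost when the stat is actually read.

/*
 * Toy model of per-cpu accounting (userspace sketch, not kernel code):
 * writers touch only their own CPU's counter, so the hot path stays
 * cheap; the reader walks all CPUs and sums them up.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS	8			/* stand-in for the real CPU count */

struct cgroup_stat {
	uint64_t usage[NR_CPUS];	/* one counter per CPU */
};

/* hot path: runs on @cpu, no locks, no shared cachelines */
static void charge_usage(struct cgroup_stat *st, int cpu, uint64_t delta)
{
	st->usage[cpu] += delta;
}

/* slow path: a reader (e.g. a stat file read) aggregates across CPUs */
static uint64_t read_usage(const struct cgroup_stat *st)
{
	uint64_t sum = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += st->usage[cpu];
	return sum;
}

int main(void)
{
	struct cgroup_stat st = { 0 };

	charge_usage(&st, 0, 1000);	/* work charged while running on CPU 0 */
	charge_usage(&st, 3, 250);	/* work charged while running on CPU 3 */
	printf("total usage: %llu\n", (unsigned long long)read_usage(&st));
	return 0;
}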
We couldn't publish these stats by default in v1 because the number of
hierarchies was not limited.

Thanks.

-- 
tejun