From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCHSET for-4.11] cgroup: implement cgroup v2 thread mode
From: Mike Galbraith
To: Tejun Heo, Peter Zijlstra
Cc: lizefan@huawei.com, hannes@cmpxchg.org, mingo@redhat.com, pjt@google.com,
	luto@amacapital.net, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, lvenanci@redhat.com, Linus Torvalds, Andrew Morton
Date: Mon, 13 Feb 2017 06:45:07 +0100
Message-ID: <1486964707.5912.93.camel@gmx.de>
In-Reply-To: <1486882799.24462.25.camel@gmx.de>
References: <20170202200632.13992-1-tj@kernel.org>
	 <20170203202048.GD6515@twins.programming.kicks-ass.net>
	 <20170203205955.GA9886@mtj.duckdns.org>
	 <20170206124943.GJ6515@twins.programming.kicks-ass.net>
	 <20170208230819.GD25826@htj.duckdns.org>
	 <20170209102909.GC6515@twins.programming.kicks-ass.net>
	 <20170210154508.GA16097@mtj.duckdns.org>
	 <20170210175145.GJ6515@twins.programming.kicks-ass.net>
	 <20170212050544.GJ29323@mtj.duckdns.org>
	 <1486882799.24462.25.camel@gmx.de>
On Sun, 2017-02-12 at 07:59 +0100, Mike Galbraith wrote:
> On Sun, 2017-02-12 at 14:05 +0900, Tejun Heo wrote:
> > > I think cgroup tree depth is a more significant issue; because of
> > > the hierarchy we often do tree walks (up-to-root or down-to-task).
> > > 
> > > So creating elaborate trees is something I try not to do.
> > 
> > So, as long as the depth stays reasonable (single digit or lower),
> > what we try to do is keep tree traversal operations aggregated or
> > located on slow paths.  There still are places where this overhead
> > shows up (e.g. the block controllers aren't too optimized), but it
> > isn't particularly difficult to make a handful of layers not matter
> > at all.
> 
> A handful of cpu bean counting layers stings considerably.

BTW, that overhead is also why merging cpu/cpuacct is not really as
wonderful as it may seem on paper.  If you only want to account, you
may not have anything to gain from group scheduling (in fact it may
wreck performance), but you'll pay for it.
> homer:/abuild # pipe-test 1
> 2.010057 usecs/loop -- avg 2.010057 995.0 KHz
> 2.006630 usecs/loop -- avg 2.009714 995.2 KHz
> 2.127118 usecs/loop -- avg 2.021455 989.4 KHz
> 2.256244 usecs/loop -- avg 2.044934 978.0 KHz
> 1.993693 usecs/loop -- avg 2.039810 980.5 KHz
> ^C
> homer:/abuild # cgexec -g cpu:hurt pipe-test 1
> 2.771641 usecs/loop -- avg 2.771641 721.6 KHz
> 2.432333 usecs/loop -- avg 2.737710 730.5 KHz
> 2.750493 usecs/loop -- avg 2.738988 730.2 KHz
> 2.663203 usecs/loop -- avg 2.731410 732.2 KHz
> 2.762564 usecs/loop -- avg 2.734525 731.4 KHz
> ^C
> homer:/abuild # cgexec -g cpu:hurt/pain pipe-test 1
> 2.967201 usecs/loop -- avg 2.967201 674.0 KHz
> 3.049012 usecs/loop -- avg 2.975382 672.2 KHz
> 3.031226 usecs/loop -- avg 2.980966 670.9 KHz
> 2.954259 usecs/loop -- avg 2.978296 671.5 KHz
> 2.933432 usecs/loop -- avg 2.973809 672.5 KHz
> ^C
> ...
> homer:/abuild # cgexec -g cpu:hurt/pain/ouch/moan/groan pipe-test 1
> 4.417044 usecs/loop -- avg 4.417044 452.8 KHz
> 4.494913 usecs/loop -- avg 4.424831 452.0 KHz
> 4.253861 usecs/loop -- avg 4.407734 453.7 KHz
> 4.378059 usecs/loop -- avg 4.404766 454.1 KHz
> 4.179895 usecs/loop -- avg 4.382279 456.4 KHz