Date: Mon, 24 Aug 2015 18:49:36 -0400
From: Tejun Heo
To: Paul Turner
Cc: Austin S Hemmelgarn, Peter Zijlstra, Ingo Molnar, Johannes Weiner,
	lizefan@huawei.com, cgroups, LKML, kernel-team, Linus Torvalds,
	Andrew Morton
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy
Message-ID: <20150824224936.GO28944@mtj.duckdns.org>
References: <20150822182916.GE20768@mtj.duckdns.org>
	<55DB3C76.5010009@gmail.com>
	<20150824170427.GA27262@mtj.duckdns.org>
	<20150824210223.GH28944@mtj.duckdns.org>
	<20150824211707.GJ28944@mtj.duckdns.org>
	<20150824214000.GL28944@mtj.duckdns.org>

Hello,

On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
> > Hmm... I was hoping for an actual configurations and usage scenarios.
> > Preferably something people can set up and play with.
>
> This is much easier to set up and play with synthetically.  Just
> create the 10 threads and 100 threads above then experiment with
> configurations designed at guaranteeing the set of 100 threads
> relatively uniform throughput regardless of how many are active.  I
> don't think trying to run a VM stack adds anything except complexity
> of reproduction here.

Well, but that loses most of the details and why such use cases matter
to begin with.  We can imagine up stuff to induce an arbitrary set of
requirements.

> > I take that the
> > CPU intensive helper threads are usually IO workers?  Is the scenario
> > where the VM is set up with a lot of IO devices and different ones may
> > consume large amount of CPU cycles at any given point?
>
> Yes, generally speaking there are a few major classes of IO (flash,
> disk, network) that a guest may invoke.  Each of these backends is
> separate and chooses its own threading.

Hmmm... if that's the case, would limiting iops on those IO devices
(or classes of them) work?  qemu already implements an IO limit
mechanism after all.

Anyways, the point here is that threads of the same process competing
isn't a new problem.  There are many ways to make those threads play
nice, as the application itself often has to be involved anyway,
especially for something like qemu, which is heavily involved in
provisioning resources.  cgroups can be a nice brute-force add-on
which lets sysadmins do wild things, but it's inherently hacky and
incomplete for coordinating threads.  For example, what is it gonna do
if qemu cloned vcpus and IO helpers dynamically off of the same parent
thread?  It requires the application's cooperation anyway, but at the
same time is painful for those applications to actually interact with.

Thanks.

-- 
tejun
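
In case anyone wants to play with the synthetic setup Paul describes,
here is a rough sketch.  It assumes a cgroup2 hierarchy mounted at
/sys/fs/cgroup with the cpu controller available and needs to run as
root; the group names, weights and counts are only illustrative, and it
uses separate busy-loop processes where the discussion above is really
about placing individual threads.

#!/usr/bin/env python3
# Two groups of CPU hogs competing under different cpu.weight settings,
# loosely modelling the 10 "helper" vs 100 "vcpu" scenario above.
# Assumes a cgroup2 mount at /sys/fs/cgroup with the cpu controller
# available; run as root.  All names and numbers are illustrative.
import os
import multiprocessing

CGROOT = "/sys/fs/cgroup"

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

def busy_loop():
    while True:
        pass

def setup_group(name, weight, nprocs):
    path = os.path.join(CGROOT, name)
    os.makedirs(path, exist_ok=True)
    # Proportional CPU weight of this group relative to its siblings.
    write(os.path.join(path, "cpu.weight"), str(weight))
    procs = []
    for _ in range(nprocs):
        p = multiprocessing.Process(target=busy_loop)
        p.start()
        # Move the new hog into the group.
        write(os.path.join(path, "cgroup.procs"), str(p.pid))
        procs.append(p)
    return procs

if __name__ == "__main__":
    # Make the cpu controller available to child groups.
    write(os.path.join(CGROOT, "cgroup.subtree_control"), "+cpu")
    # 10 "helpers" vs 100 "vcpus"; vary the weights and the number of
    # active helpers and watch how uniform the per-hog throughput in
    # "vcpus" stays (e.g. via top or /sys/fs/cgroup/vcpus/cpu.stat).
    helpers = setup_group("helpers", 100, 10)
    vcpus = setup_group("vcpus", 1000, 100)
    # Keep running until interrupted.
    for p in helpers + vcpus:
        p.join()

Placing qemu's actual vcpu and IO helper threads individually, rather
than whole processes as above, would need the v1-style per-thread
interface or the application's own cooperation, which is exactly the
gap being argued over in this thread.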