From: "Michael Rapoport" <rapoport@il.ibm.com>
To: Tejun Heo
Cc: Bandan Das, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, mst@redhat.com, jiangshanlai@gmail.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues
Date: Sun, 3 Apr 2016 13:43:45 +0300
Message-Id: <201604031043.u33Ahq3e016842@d06av11.portsmouth.uk.ibm.com>
In-Reply-To: <20160331171435.GD24661@htj.duckdns.org>
References: <1458339291-4093-1-git-send-email-bsd@redhat.com> <201603210758.u2L7wiY9003907@d06av07.portsmouth.uk.ibm.com> <20160330170419.GG7822@mtj.duckdns.org> <201603310617.u2V6HIkt008006@d06av12.portsmouth.uk.ibm.com> <20160331171435.GD24661@htj.duckdns.org>
Hi Tejun,

> Tejun Heo wrote on 03/31/2016 08:14:35 PM:
>
> Hello, Michael.
>
> On Thu, Mar 31, 2016 at 08:17:13AM +0200, Michael Rapoport wrote:
> > > There really shouldn't be any difference when using unbound
> > > workqueues. workqueue becomes a convenience thing which manages
> > > worker pools and there shouldn't be any difference between workqueue
> > > workers and kthreads in terms of behavior.
> >
> > I agree that there really shouldn't be any performance difference, but
> > the tests I've run show otherwise. I have no idea why, and I haven't
> > had time to investigate it yet.
>
> I'd be happy to help digging into what's going on. If kvm wants full
> control over the worker thread, kvm can use workqueue as a pure
> threadpool. Schedule a work item to grab a worker thread with the
> matching attributes and keep using it as if it were a kthread. While
> that wouldn't be able to take advantage of work item flushing and so
> on, it'd still be a simpler way to manage worker threads, and the
> extra stuff like cgroup membership handling doesn't have to be
> duplicated.

My concern is that we would trade performance for simpler management of
worker threads. Of the three models I've tested (the current vhost model,
a workqueue-based one [1], and one based on shared threads [2]), the
workqueue-based one gave the worst performance results :(

> > > > opportunity for optimization, at least for some workloads...
> > >
> > > What sort of optimizations are we talking about?
> >
> > Well, if we take Elvis [1] as the theoretical base, there could be a
> > benefit to doing I/O scheduling inside vhost.
>
> Yeah, if that actually is beneficial, take full control of the
> kworker thread.
>
> Thanks.

[1] http://thread.gmane.org/gmane.linux.network/286858
[2] http://thread.gmane.org/gmane.linux.kernel.cgroups/13808

> --
> tejun
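P.S. For concreteness, the pure-threadpool usage described above could be
sketched roughly as below. This is only an illustrative, kernel-style sketch,
not actual vhost code: all names (vhost_dev_sketch, vhost_attach_fn, etc.)
are made up, and the vhost work items themselves are elided. The idea is
that a single work item is queued once on an unbound workqueue and its
function never returns until teardown, so the kworker that picks it up is
effectively captured and used like a dedicated kthread, while pool
attributes (and, with cgroup-aware workqueues, cgroup membership) come
from the workqueue machinery for free.

```c
#include <linux/workqueue.h>
#include <linux/llist.h>
#include <linux/wait.h>

struct vhost_dev_sketch {
	struct work_struct attach_work;	/* queued once to capture a kworker */
	struct llist_head work_list;	/* pending device work */
	wait_queue_head_t wait;
	bool stop;
};

static void vhost_attach_fn(struct work_struct *work)
{
	struct vhost_dev_sketch *dev =
		container_of(work, struct vhost_dev_sketch, attach_work);

	/* The kworker stays in this loop, acting as a dedicated thread. */
	while (!READ_ONCE(dev->stop)) {
		struct llist_node *node;

		wait_event_interruptible(dev->wait,
				READ_ONCE(dev->stop) ||
				!llist_empty(&dev->work_list));

		node = llist_del_all(&dev->work_list);
		/* ... run each queued work item from 'node' ... */
	}
}

static void vhost_queue_sketch(struct vhost_dev_sketch *dev,
			       struct llist_node *node)
{
	llist_add(node, &dev->work_list);
	wake_up(&dev->wait);
}

static void vhost_start_sketch(struct vhost_dev_sketch *dev)
{
	init_llist_head(&dev->work_list);
	init_waitqueue_head(&dev->wait);
	dev->stop = false;

	/*
	 * The work item inherits the unbound pool's attributes. Note that
	 * flushing this workqueue is no longer meaningful, since the work
	 * function does not return until 'stop' is set.
	 */
	INIT_WORK(&dev->attach_work, vhost_attach_fn);
	queue_work(system_unbound_wq, &dev->attach_work);
}
```

Whether this closes the performance gap I measured is exactly what would
need testing; the sketch only shows the mechanics of the approach.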