From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <1516730480.6999.2.camel@gmx.de>
Subject: Re: [RFC PATCH 0/4] softirq: Per vector threading v3
From: Mike Galbraith
To: Linus Torvalds, Paolo Abeni
Cc: David Miller, Frederic Weisbecker, Linux Kernel Mailing List,
 Sasha Levin, Peter Zijlstra, Mauro Carvalho Chehab,
 Hannes Frederic Sowa, Paul McKenney, Wanpeng Li, Dmitry Safonov,
 Thomas Gleixner, Andrew Morton, Radu Rendec, Ingo Molnar,
 Stanislaw Gruszka, Rik van Riel, Eric Dumazet
Date: Tue, 23 Jan 2018 19:01:20 +0100
In-Reply-To:
References: <1516376774-24076-1-git-send-email-frederic@kernel.org>
 <1516702432.2554.37.camel@redhat.com>
 <20180123.112201.1263563609292212852.davem@davemloft.net>
 <1516726652.2554.58.camel@redhat.com>
Content-Type: text/plain; charset="ISO-8859-15"
X-Mailer: Evolution 3.20.5
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 2018-01-23 at 09:42 -0800, Linus Torvalds wrote:
> On Tue, Jan 23, 2018 at 8:57 AM, Paolo Abeni wrote:
> >
> >> Or is it that the workqueue execution is simply not yielding for some
> >> reason?
> >
> > It's like that.
> >
> > I spent little time on it, so I haven't many data points. I'll try to
> > investigate the scenario later this week.
>
> Hmm. workqueues seem to use cond_resched_rcu_qs(), which does a
> cond_resched() (and an RCU quiescent note).
>
> But I wonder if the test triggers the "let's run lots of workqueue
> threads", and then the single-threaded user space just gets blown out
> of the water by many kernel threads.  Each thread gets its own "fair"
> amount of CPU, but..

If folks aren't careful with workqueues, they can be a generic
starvation problem.  Like the below in the here and now.

fs/nfs: Add a resched point to nfs_commit_release_pages()

During heavy NFS write, kworkers can do very large amounts of work
without scheduling (82ms traced).  Add a resched point.

Signed-off-by: Mike Galbraith
Suggested-by: Trond Myklebust
---
 fs/nfs/write.c |    1 +
 1 file changed, 1 insertion(+)

--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -1837,6 +1837,7 @@ static void nfs_commit_release_pages(str
 		set_bit(NFS_CONTEXT_RESEND_WRITES, &req->wb_context->flags);
 next:
 		nfs_unlock_and_release_request(req);
+		cond_resched();
 	}
 	nfss = NFS_SERVER(data->inode);
 	if (atomic_long_read(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH)