From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 25 May 2017 08:18:09 -0700
From: Jessica Yu
To: Dmitry Torokhov
Cc: "Luis R. Rodriguez", shuah@kernel.org, rusty@rustcorp.com.au,
    ebiederm@xmission.com, acme@redhat.com, corbet@lwn.net,
    martin.wilck@suse.com, mmarek@suse.com, pmladek@suse.com, hare@suse.com,
    rwright@hpe.com, jeffm@suse.com, DSterba@suse.com, fdmanana@suse.com,
    neilb@suse.com, linux@roeck-us.net, rgoldwyn@suse.com,
    subashab@codeaurora.org, xypron.glpk@gmx.de, keescook@chromium.org,
    atomlin@redhat.com, mbenes@suse.cz, paulmck@linux.vnet.ibm.com,
    dan.j.williams@intel.com, jpoimboe@redhat.com, davem@davemloft.net,
    mingo@redhat.com, akpm@linux-foundation.org, torvalds@linux-foundation.org,
    gregkh@linuxfoundation.org, linux-kselftest@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/6] kmod: preempt on kmod_umh_threads_get()
Message-ID: <20170525151809.5nrvr32xlgbhi5su@jeyu>
References: <20170519032444.18416-1-mcgrof@kernel.org>
    <20170519032444.18416-6-mcgrof@kernel.org>
    <20170519222712.GI19281@dtor-ws>
    <20170525001452.GS8951@wotan.suse.de>
    <20170525004537.GA14955@dtor-ws>
    <20170525010017.GT8951@wotan.suse.de>
    <20170525022738.GA30589@dtor-ws>
In-Reply-To: <20170525022738.GA30589@dtor-ws>
X-Mailing-List: linux-kernel@vger.kernel.org

+++ Dmitry Torokhov [24/05/17 19:27 -0700]:
>On Thu, May 25, 2017 at 03:00:17AM +0200, Luis R. Rodriguez wrote:
>> On Wed, May 24, 2017 at 05:45:37PM -0700, Dmitry Torokhov wrote:
>> > On Thu, May 25, 2017 at 02:14:52AM +0200, Luis R. Rodriguez wrote:
>> > > On Fri, May 19, 2017 at 03:27:12PM -0700, Dmitry Torokhov wrote:
>> > > > On Thu, May 18, 2017 at 08:24:43PM -0700, Luis R. Rodriguez wrote:
>> > > > > In theory it is possible multiple concurrent threads will try to
>> > > > > kmod_umh_threads_get() and as such atomic_inc(&kmod_concurrent) at
>> > > > > the same time, therefore enabling a small time during which we've
>> > > > > bumped kmod_concurrent but have not really enabled work. By using
>> > > > > preemption we mitigate this a bit.
>> > > > >
>> > > > > Preemption is not needed when we kmod_umh_threads_put().
>> > > > >
>> > > > > Signed-off-by: Luis R. Rodriguez
>> > > > > ---
>> > > > >  kernel/kmod.c | 24 ++++++++++++++++++++++--
>> > > > >  1 file changed, 22 insertions(+), 2 deletions(-)
>> > > > >
>> > > > > diff --git a/kernel/kmod.c b/kernel/kmod.c
>> > > > > index 563600fc9bb1..7ea11dbc7564 100644
>> > > > > --- a/kernel/kmod.c
>> > > > > +++ b/kernel/kmod.c
>> > > > > @@ -113,15 +113,35 @@ static int call_modprobe(char *module_name, int wait)
>> > > > >
>> > > > >  static int kmod_umh_threads_get(void)
>> > > > >  {
>> > > > > +	int ret = 0;
>> > > > > +
>> > > > > +	/*
>> > > > > +	 * Disabling preemption makes sure that we are not rescheduled here
>> > > > > +	 *
>> > > > > +	 * Also preemption helps kmod_concurrent is not increased by mistake
>> > > > > +	 * for too long given in theory two concurrent threads could race on
>> > > > > +	 * atomic_inc() before we atomic_read() -- we know that's possible
>> > > > > +	 * and but we don't care, this is not used for object accounting and
>> > > > > +	 * is just a subjective threshold. The alternative is a lock.
>> > > > > +	 */
>> > > > > +	preempt_disable();
>> > > > >  	atomic_inc(&kmod_concurrent);
>> > > > >  	if (atomic_read(&kmod_concurrent) <= max_modprobes)
>> > > >
>> > > > That is very "fancy" way to basically say:
>> > > >
>> > > > 	if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
>> > >
>> > > Do you mean to combine the atomic_inc() and atomic_read() in one as you noted
>> > > (as that is not a change in this patch), *or* that using a memory barrier here
>> > > with atomic_inc_return() should suffice to address the same and avoid an
>> > > explicit preemption enable / disable?
>> >
>> > I am saying that atomic_inc_return() will avoid the situation where you have
>> > more than one thread incrementing the counter and believing that they
>> > are [not] allowed to start modprobe.
>> >
>> > I have no idea why you think preempt_disable() would help here. It only
>> > ensures that the current thread will not be preempted between the point
>> > where you update the counter and where you check the result. It does not
>> > stop interrupts, nor does it affect other threads that might be updating
>> > the same counter.
>>
>> The preemption was inspired by __module_get() and try_module_get(), was that
>> rather silly?
>
>As far as I can see preempt_disable() was needed in __module_get() when
>modules used per-cpu refcounts: you did not want to move away from the CPU
>while manipulating the refcount.
>
>Now that modules use simple atomics for refcounting I think these
>preempt_disable() and preempt_enable() calls can be removed.

Yup, preempt_disable/enable was originally used for percpu module
refcounting. AFAIK they are artifacts left over from commit e1783a240f4
("use this_cpu_xx to dynamically allocate counters"); commit 2f35c41f58a
("Replace module_ref with atomic_t refcnt") subsequently removed the need
for them.

Jessica
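
For illustration, a minimal sketch of the atomic_inc_return() approach Dmitry
describes above. The names kmod_concurrent, max_modprobes and
kmod_umh_threads_get() are taken from the quoted patch; the initial values,
the error code and the surrounding structure are assumptions made for the
sketch, not the actual kernel/kmod.c code:

	#include <linux/atomic.h>
	#include <linux/errno.h>

	/* Placeholder values; the real limit handling lives in kernel/kmod.c. */
	static atomic_t kmod_concurrent = ATOMIC_INIT(0);
	static int max_modprobes = 50;

	static int kmod_umh_threads_get(void)
	{
		/*
		 * atomic_inc_return() increments and returns the new value as a
		 * single atomic operation, so each caller observes a count that
		 * reflects exactly its own increment.  With the separate
		 * atomic_inc()/atomic_read() pair, a racing caller could read a
		 * value inflated by another thread's increment and draw the wrong
		 * conclusion about the limit; preempt_disable() does not close
		 * that window, since it only keeps the current thread on its CPU
		 * and has no effect on the other callers.
		 */
		if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
			return 0;

		/* Over the limit: undo our increment and report failure. */
		atomic_dec(&kmod_concurrent);
		return -EBUSY;	/* placeholder error code */
	}

With this shape, a matching kmod_umh_threads_put() would presumably reduce to a
plain atomic_dec(&kmod_concurrent), with no preemption handling on either side.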