From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751508AbdFZJzi (ORCPT );
	Mon, 26 Jun 2017 05:55:38 -0400
Received: from mx2.suse.de ([195.135.220.15]:49321 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1751416AbdFZJzc (ORCPT );
	Mon, 26 Jun 2017 05:55:32 -0400
Date: Mon, 26 Jun 2017 11:55:26 +0200
From: Petr Mladek
To: "Luis R. Rodriguez"
Cc: akpm@linux-foundation.org, jeyu@redhat.com, shuah@kernel.org,
	rusty@rustcorp.com.au, ebiederm@xmission.com, dmitry.torokhov@gmail.com,
	acme@redhat.com, corbet@lwn.net, josh@joshtriplett.org,
	martin.wilck@suse.com, mmarek@suse.com, hare@suse.com, rwright@hpe.com,
	jeffm@suse.com, DSterba@suse.com, fdmanana@suse.com, neilb@suse.com,
	linux@roeck-us.net, rgoldwyn@suse.com, subashab@codeaurora.org,
	xypron.glpk@gmx.de, keescook@chromium.org, atomlin@redhat.com,
	mbenes@suse.cz, paulmck@linux.vnet.ibm.com, dan.j.williams@intel.com,
	jpoimboe@redhat.com, davem@davemloft.net, mingo@redhat.com,
	alan@linux.intel.com, tytso@mit.edu, gregkh@linuxfoundation.org,
	torvalds@linux-foundation.org, linux-kselftest@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 4/4] kmod: throttle kmod thread limit
Message-ID: <20170626095526.GG1538@pathway.suse.cz>
References: <20170526001630.19203-1-mcgrof@kernel.org>
	<20170526211228.27764-1-mcgrof@kernel.org>
	<20170526211228.27764-5-mcgrof@kernel.org>
	<20170622151936.GE1538@pathway.suse.cz>
	<20170623161619.GL21846@wotan.suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170623161619.GL21846@wotan.suse.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: linux-kernel.vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri 2017-06-23 18:16:19, Luis R. Rodriguez wrote:
> On Thu, Jun 22, 2017 at 05:19:36PM +0200, Petr Mladek wrote:
> > On Fri 2017-05-26 14:12:28, Luis R. Rodriguez wrote:
> > > --- a/kernel/kmod.c
> > > +++ b/kernel/kmod.c
> > > @@ -163,14 +163,11 @@ int __request_module(bool wait, const char *fmt, ...)
> > >  		return ret;
> > >  
> > >  	if (atomic_dec_if_positive(&kmod_concurrent_max) < 0) {
> > > -		/* We may be blaming an innocent here, but unlikely */
> > > -		if (kmod_loop_msg < 5) {
> > > -			printk(KERN_ERR
> > > -			       "request_module: runaway loop modprobe %s\n",
> > > -			       module_name);
> > > -			kmod_loop_msg++;
> > > -		}
> > > -		return -ENOMEM;
> > > +		pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s\n, throttling...",
> > > +				    atomic_read(&kmod_concurrent_max),
> > > +				    50, module_name);
> >
> > It is weird to pass the constant '50' via %s.
>
> The 50 was passed with %u, so I take it you meant it is odd to use a
> parameter for it.

Yeah, I meant %u and not %s.

> > Also a #define should be used to keep it in sync with the
> > kmod_concurrent_max initialization.
>
> OK.

> > > +		wait_event_interruptible(kmod_wq,
> > > +			atomic_dec_if_positive(&kmod_concurrent_max) >= 0);
> > >  	}
> > >  
> > >  	trace_module_request(module_name, wait, _RET_IP_);
> > > @@ -178,6 +175,7 @@ int __request_module(bool wait, const char *fmt, ...)
> > >  	ret = call_modprobe(module_name, wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
> > >  
> > >  	atomic_inc(&kmod_concurrent_max);
> > > +	wake_up_all(&kmod_wq);
> >
> > Does it make sense to wake up all waiters when we released the resource
> > only for one? IMHO, a simple wake_up() should be here.
>
> Then we should wake_up() also on failure, otherwise we have the potential
> to not wake some in a proper time.

I think that we must wake_up() always when we increment
kmod_concurrent_max. If the value was negative, the increment will allow
exactly one process to pass atomic_dec_if_positive(&kmod_concurrent_max) >= 0.
If the value is positive, there must have been other wake_up() calls or
there is no waiter.
IMHO, this works because the kmod_concurrent_max handling is atomic and
race-less now. Also (s)wait_event_interruptible() is safe and does not
let a process go to sleep when the resource is available.

Anyway, it is great that you have double checked this.

Best Regards,
Petr