Date: Wed, 28 Jun 2017 06:45:14 -0700
From: Davidlohr Bueso
To: "Luis R. Rodriguez"
Cc: Linus Torvalds, Thomas Gleixner, Peter Zijlstra, Daniel Wagner <wagi@monom.org>, Boqun Feng, Marcelo Tosatti, Paul Gortmaker, Ming Lei, "Li, Yi", "AKASHI, Takahiro", Jakub Kicinski, Greg Kroah-Hartman, "Paul E. McKenney", Linux Kernel Mailing List, "Eric W. Biederman", Petr Mladek, Andrew Lutomirski, Kees Cook, David Howells, Alan Cox, "Theodore Ts'o", oss-drivers@netronome.com
Subject: Re: [PATCH] firmware: wake all waiters
Message-ID: <20170628134514.GA2644@linux-80c1.suse>
References: <20170623233702.20564-1-jakub.kicinski@netronome.com> <20170626233030.GI21846@wotan.suse.de> <20170627001534.GK21846@wotan.suse.de>
In-Reply-To: <20170627001534.GK21846@wotan.suse.de>

On Tue, 27 Jun 2017, Luis R. Rodriguez wrote:

>diff --git a/include/linux/swait.h b/include/linux/swait.h
>index 4a4e180d0a35..14fcf23cece4 100644
>--- a/include/linux/swait.h
>+++ b/include/linux/swait.h
>@@ -29,7 +29,10 @@
> *
> * As a side effect of this; the data structures are slimmer.
> *
>- * One would recommend using this wait queue where possible.

So I think this was added due to the smaller footprint and fewer cycles
that swait has compared to the traditional (bulkier) waitqueues.

While probably not worth it, I guess we could offer super-simple
waitqueues (sswait? :-) which do not have the rt caveats and use a
regular spinlock. The wakeup_all() call would not drop the lock upon
every wakeup, as we are stripping down the waitqueue not for
determinism but for lower overhead. To mitigate the longer lock hold
times this implies, we might also want to use wake_q to keep q->lock
hold times short (rough sketches of both ideas below). But I don't
think it's worth yet another wait interface.

Alternatively, it crossed my mind that we could also have a
wakeup_all() for the regular waitqueues, but I'd have to audit all the
current users to make sure we could actually do this.

Thanks,
Davidlohr
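
P.S.: to make the "super-simple" idea a bit more concrete, here is a
rough, untested sketch. All the sswait_* names are made up purely for
illustration; the point is only that the wakeup side takes a regular
(non-raw) spinlock and, unlike swake_up_all(), never drops it between
wakeups:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/sched.h>

struct sswait_queue_head {
	spinlock_t		lock;	/* regular spinlock, not raw */
	struct list_head	task_list;
};

struct sswait_queue {
	struct task_struct	*task;
	struct list_head	task_list;
};

static void sswait_wake_all(struct sswait_queue_head *q)
{
	struct sswait_queue *curr, *next;

	spin_lock(&q->lock);
	list_for_each_entry_safe(curr, next, &q->task_list, task_list) {
		/*
		 * Unlink before waking so a woken waiter observing an
		 * empty entry can return and free its on-stack
		 * sswait_queue without us touching it again; q->lock is
		 * held across the entire walk.
		 */
		list_del_init(&curr->task_list);
		wake_up_process(curr->task);
	}
	spin_unlock(&q->lock);
}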
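
And, keeping the same made-up naming, the wake_q mitigation would then
only queue the tasks under q->lock and issue the actual wakeups after
dropping it, using the existing wake_q machinery from
<linux/sched/wake_q.h>:

#include <linux/sched/wake_q.h>

static void sswait_wake_all_wake_q(struct sswait_queue_head *q)
{
	struct sswait_queue *curr, *next;
	DEFINE_WAKE_Q(wake_q);

	spin_lock(&q->lock);
	list_for_each_entry_safe(curr, next, &q->task_list, task_list) {
		/*
		 * Only queue the task here; the wake_up_process() calls
		 * happen in wake_up_q() after q->lock is dropped, which
		 * keeps the hold time short even with many waiters.
		 * wake_q_add() takes a task reference, so the task
		 * stays valid until wake_up_q() runs.
		 */
		wake_q_add(&wake_q, curr->task);
		list_del_init(&curr->task_list);
	}
	spin_unlock(&q->lock);

	wake_up_q(&wake_q);
}

The tradeoff is the extra wake_q bookkeeping per wake-all call, but the
wakeups themselves no longer serialize against new waiters contending
on q->lock.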