From: ebiederm@xmission.com (Eric W. Biederman)
Date: Wed, 16 Aug 2017 18:22:57 -0500
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
To: Linus Torvalds
Cc: Andi Kleen, Tim Chen, Peter Zijlstra, Ingo Molnar, Kan Liang,
 Andrew Morton, Johannes Weiner, Jan Kara, linux-mm,
 Linux Kernel Mailing List

Linus Torvalds writes:

> On Tue, Aug 15, 2017 at 3:57 PM, Linus Torvalds wrote:
>>
>> Oh, and the page wait-queue really needs that key argument too, which
>> is another thing that swait queue code got rid of in the name of
>> simplicity.
>
> Actually, it gets worse.
>
> Because the page wait queues are hashed, it's not an all-or-nothing
> thing even for the non-exclusive cases, and it's not a "wake up first
> entry" for the exclusive case. Both have to be conditional on the wait
> entry actually matching the page and bit in question.
>
> So no way to use swait, or any of the lockless queuing code in general
> (so we can't do some clever private wait-list using llist.h either).
>
> End result: it looks like you fairly fundamentally do need to use a
> lock over the whole list traversal (like the standard wait-queues),
> and then add a cursor entry like Tim's patch if dropping the lock in
> the middle.
>
> Anyway, looking at the old code, we *used* to limit the page wait hash
> table to 4k entries, and we used to have one hash table per memory
> zone.
>
> The per-zone thing didn't work at all for the generic bit-waitqueues,
> because of how people used them on virtual addresses on the stack.
>
> But it *could* work for the page waitqueues, which are now a totally
> separate entity, and is obviously always physically addressed (since
> the indexing is by "struct page" pointer), and doesn't have that
> issue.
>
> So I guess we could re-introduce the notion of per-zone page waitqueue
> hash tables. It was disgusting to allocate and free though (and hooked
> into the memory hotplug code).
>
> So I'd still hope that we can instead just have one larger hash table,
> and that is sufficient for the problem.

If increasing the hash table size fixes the problem, I am wondering
whether rhashtables might be the proper solution. They start out
small and then grow as needed.

Eric
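
For context, the hashed, key-matched wakeup Linus describes above
corresponds to code along the following lines, paraphrased from the
mm/filemap.c page waitqueue of that era (around v4.13); take it as a
sketch of the mechanism rather than the exact contents of any tree:

#include <linux/kernel.h>
#include <linux/wait.h>
#include <linux/hash.h>
#include <linux/mm.h>

/*
 * All pages share a small fixed table of wait queue heads; a page is
 * mapped to a queue by hashing its struct page pointer. This is the
 * table whose size the thread is talking about growing.
 */
#define PAGE_WAIT_TABLE_BITS 8
#define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;

static wait_queue_head_t *page_waitqueue(struct page *page)
{
	return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
}

/*
 * Because queues are shared between unrelated pages, every waiter
 * records which page and bit it is actually waiting for ...
 */
struct wait_page_key {
	struct page *page;
	int bit_nr;
	int page_match;
};

struct wait_page_queue {
	struct page *page;
	int bit_nr;
	wait_queue_entry_t wait;
};

/*
 * ... and the wake function filters on that key. This is why the
 * wakeup cannot be a simple "wake the first entry / wake them all",
 * and why swait or llist-style lockless queues do not fit here.
 */
static int wake_page_function(wait_queue_entry_t *wait, unsigned mode,
			      int sync, void *arg)
{
	struct wait_page_key *key = arg;
	struct wait_page_queue *wait_page
		= container_of(wait, struct wait_page_queue, wait);

	if (wait_page->page != key->page)
		return 0;
	key->page_match = 1;

	if (wait_page->bit_nr != key->bit_nr)
		return 0;
	/* Stop if the bit was set again before this waiter ran. */
	if (test_bit(key->bit_nr, &key->page->flags))
		return 0;

	return autoremove_wake_function(wait, mode, sync, key);
}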
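
Eric's rhashtable idea, in rough outline: instead of a fixed array of
shared queue heads, keep one resizable table of per-page entries that
the rhashtable code grows and shrinks on demand. A minimal sketch,
assuming a hypothetical page_wq_entry type keyed by the struct page
pointer (the names below are invented for illustration; a real
conversion would also have to solve allocation and teardown of entries
as waiters come and go):

#include <linux/stddef.h>
#include <linux/rhashtable.h>
#include <linux/wait.h>
#include <linux/mm.h>

struct page_wq_entry {
	struct page *page;		/* lookup key */
	wait_queue_head_t waiters;	/* only waiters for this one page */
	struct rhash_head node;		/* rhashtable linkage */
};

static const struct rhashtable_params page_wq_params = {
	.key_len	     = sizeof(struct page *),
	.key_offset	     = offsetof(struct page_wq_entry, page),
	.head_offset	     = offsetof(struct page_wq_entry, node),
	.automatic_shrinking = true,	/* shrink back when waiters drain */
};

static struct rhashtable page_wq_table;

static int __init page_wq_init(void)
{
	/* Starts small; the rhashtable code resizes as the load grows. */
	return rhashtable_init(&page_wq_table, &page_wq_params);
}

static struct page_wq_entry *page_wq_lookup(struct page *page)
{
	/* The key is the pointer value itself, so pass its address. */
	return rhashtable_lookup_fast(&page_wq_table, &page, page_wq_params);
}

The trade-off this sketch implies: per-page queue heads keep each wake
list short by construction, since only waiters for that one page ever
share a queue, which attacks the long-walk problem differently from
simply enlarging the fixed table; the cost is managing entry lifetimes,
which the fixed table avoids entirely.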