From: Linus Torvalds
Date: Tue, 20 Dec 2016 09:31:18 -0800
Subject: Re: [RFC][PATCH] make global bitlock waitqueues per-node
To: Dave Hansen
Cc: Bob Peterson, Linux Kernel Mailing List, Steven Whitehouse,
	Andrew Lutomirski, Andreas Gruenbacher, Peter Zijlstra,
	Mel Gorman, linux-mm
In-Reply-To: <156a5b34-ad3b-d0aa-83c9-109b366c1bdf@linux.intel.com>
References: <20161219225826.F8CB356F@viggo.jf.intel.com>
	<156a5b34-ad3b-d0aa-83c9-109b366c1bdf@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Dec 19, 2016 at 4:20 PM, Dave Hansen wrote:
> On 12/19/2016 03:07 PM, Linus Torvalds wrote:
>> +wait_queue_head_t *bit_waitqueue(void *word, int bit)
>> +{
>> +	const int __maybe_unused nid = page_to_nid(virt_to_page(word));
>> +
>> +	return __bit_waitqueue(word, bit, nid);
>>
>> No can do. Part of the problem with the old code was that it did that
>> virt_to_page() crud. That doesn't work with the virtually mapped stack.
>
> Ahhh, got it.
>
> So, what did you have in mind? Just redirect bit_waitqueue() to the
> "first_online_node" waitqueues?

That was my initial thought, but now that I'm back home and looking at
the code, I realize:

 - we never merged the PageWaiters patch. I thought we already did,
   because I didn't think there was any confusion left, but that was
   clearly just in my dreams.

   I was surprised that you'd see the cache ping-pong with the per-page
   contention bit, but thought that maybe your benchmark was some kind
   of broken "fault in the same page over and over again on multiple
   nodes" thing. It was simpler than that - you simply don't have the
   per-page contention bit at all.

   And quite frankly, I still suspect that just doing the per-page
   contention bit will solve everything, and that we don't want the
   NUMA-spreading bit_waitqueue() at all.

 - but if I'm wrong, and you can still see NUMA issues even with the
   per-page contention bit in place, we should treat "bit_waitqueue()"
   separately from the page waitqueue and just give it its own
   (non-node) array.

I'll go back and try to see why the page flag contention patch didn't
get applied.

                   Linus
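
A minimal sketch of the "per-page contention bit" idea referred to
above, assuming a PG_waiters page flag of the kind the unmerged
PageWaiters patch would add; the function name, the flag, and the
simplified wake protocol here are illustrative, not the actual patch:

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/mm_types.h>
#include <linux/page-flags.h>
#include <linux/wait.h>

/*
 * The uncontended unlock never touches the hashed waitqueue cacheline,
 * so there is nothing to ping-pong between nodes unless a sleeper has
 * actually set the per-page "waiters" bit.
 */
void sketch_unlock_page(struct page *page)
{
	/* Drop the lock bit with release semantics. */
	clear_bit_unlock(PG_locked, &page->flags);

	/*
	 * Full barrier so the PG_waiters test cannot be hoisted above the
	 * unlock; pairs with a barrier in the sleeper, which sets
	 * PG_waiters and then re-checks PG_locked before sleeping.
	 */
	smp_mb__after_atomic();

	if (test_bit(PG_waiters, &page->flags))
		wake_up_bit(&page->flags, PG_locked);
}

(The sleeper side, and when PG_waiters gets cleared again, is left out
of the sketch; the point is only that the common unlock path stays
waitqueue-free.)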
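
For the fallback case in the second bullet, a sketch of what a single
global, non-node-aware bit_waitqueue() hash table could look like.
Table size and names are illustrative; since it needs no
virt_to_page(), it also works for bits living in vmalloc space such as
the virtually mapped stack:

#include <linux/cache.h>
#include <linux/hash.h>
#include <linux/init.h>
#include <linux/wait.h>

#define BIT_WAIT_TABLE_BITS	8
#define BIT_WAIT_TABLE_SIZE	(1 << BIT_WAIT_TABLE_BITS)

/* One flat, hashed table of waitqueues shared by every node. */
static wait_queue_head_t bit_wait_table[BIT_WAIT_TABLE_SIZE] __cacheline_aligned;

wait_queue_head_t *sketch_bit_waitqueue(void *word, int bit)
{
	/* Fold the word address and the bit number into one hash key. */
	const unsigned long val = (unsigned long)word << 1 | bit;

	return bit_wait_table + hash_long(val, BIT_WAIT_TABLE_BITS);
}

static int __init sketch_bit_wait_init(void)
{
	int i;

	for (i = 0; i < BIT_WAIT_TABLE_SIZE; i++)
		init_waitqueue_head(bit_wait_table + i);
	return 0;
}
early_initcall(sketch_bit_wait_init);

This keeps the page waitqueues free to grow node awareness (or the
contention bit) without dragging every other wait_on_bit() user along.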