From: Linus Torvalds
Date: Tue, 27 Dec 2016 10:58:59 -0800
Subject: Re: [PATCH 2/2] mm: add PageWaiters indicating tasks are waiting for a page bit
To: Nicholas Piggin
Cc: Dave Hansen, Bob Peterson, Linux Kernel Mailing List, Steven Whitehouse,
 Andrew Lutomirski, Andreas Gruenbacher, Peter Zijlstra, linux-mm, Mel Gorman
In-Reply-To: <20161227211946.3770b6ce@roar.ozlabs.ibm.com>
References: <20161225030030.23219-1-npiggin@gmail.com>
 <20161225030030.23219-3-npiggin@gmail.com>
 <20161226111654.76ab0957@roar.ozlabs.ibm.com>
 <20161227211946.3770b6ce@roar.ozlabs.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Dec 27, 2016 at 3:19 AM, Nicholas Piggin wrote:
>
> Attached is part of a patch I've been mulling over for a while. I
> expect you to hate it, and it does not solve this problem for x86,
> but I like being able to propagate values from atomic ops back
> to the compiler. Of course, volatile then can't be used either which
> is another spanner...

Yeah, that patch is disgusting, and doesn't even help x86. It also
depends on the compiler doing the right thing in ways that are not
obviously true.

I'd much rather just add the "xyz_return()" primitives for and/or,
the way we already have atomic_add_return() and friends.

In fact, we could probably play games with bit numbering, and
actually use the atomic ops we already have. For example, if the lock
bit was the top bit, we could unlock by doing "atomic_add_return()"
with that bit, and look at the remaining bits that way.

That would actually work really well on x86, since there we have
"xadd", but we do *not* have "set/clear bit and return old word". We
could make a special case for just the page lock bit, make it bit #7,
and use

        movb $128,%al
        lock xaddb %al,flags

and then test the bits in %al.

And all the RISC architectures would be ok with that too, because
they can just use ll/sc and test the bits with that. So for them,
adding a "atomic_long_and_return()" would be very natural in the
general case.

Hmm?

The other alternative is to keep the lock bit as bit #0, and just
make the contention bit be the high bit. Then, on x86, you can do

        lock andb $0xfe,flags
        js contention

which might be even better. Again, it would be a very special
operation just for unlock. Something like

        bit_clear_and_branch_if_negative_byte(mem, label);

and again, it would be trivial to do on most architectures.

Let me try to write a patch or two for testing.

               Linus
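
For reference, a minimal userspace model of the first scheme above (page
lock bit at bit #7, unlock via a byte-wide xadd), written against the
GCC/Clang __atomic builtins. The bit layout, the flag names and the
wake-up printout are made up for the sketch; this is not the kernel's
page-flags code, just an illustration of why the returned old value is
enough to check for waiters.

#include <stdio.h>

/*
 * Sketch only: bit #7 is the page lock bit, bit #0 stands in for the
 * "tasks are waiting" bit.  Both names are invented for this example.
 */
#define PG_LOCKED_BIT	0x80
#define PG_WAITERS_BIT	0x01

static unsigned char page_flags = PG_LOCKED_BIT | PG_WAITERS_BIT;

static void unlock_page_xadd(void)
{
	/*
	 * Adding 0x80 to a byte whose top bit is set clears that bit
	 * (the carry falls off the end), which is what the
	 * "movb $128,%al; lock xaddb %al,flags" sequence does on x86.
	 * The returned old value still carries the remaining bits, so
	 * the waiter check needs no second atomic operation.
	 */
	unsigned char old = __atomic_fetch_add(&page_flags, PG_LOCKED_BIT,
					       __ATOMIC_RELEASE);
	if (old & PG_WAITERS_BIT)
		printf("waiters present, would wake them\n");
}

int main(void)
{
	unlock_page_xadd();
	printf("flags after unlock: %#x\n", page_flags);
	return 0;
}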
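
And a similar model of the second scheme (lock bit stays at bit #0, the
contention bit moves to the sign bit of a byte). Again the names are
invented for illustration. Note that this portable __atomic_fetch_and()
version returns the whole old byte, so a compiler will typically emit a
cmpxchg loop for it; the point of a dedicated primitive like the
bit_clear_and_branch_if_negative_byte() proposed above is that x86 could
instead use a plain "lock andb $0xfe,flags" and branch on the sign flag.

#include <stdbool.h>
#include <stdio.h>

/* Sketch only: bit #0 is the lock bit, bit #7 the contention bit. */
#define PG_LOCKED_BIT	0x01
#define PG_WAITERS_BIT	0x80

static unsigned char page_flags = PG_LOCKED_BIT | PG_WAITERS_BIT;

/*
 * Atomically clear the lock bit and report whether the byte was
 * "negative", i.e. whether the contention bit was set.
 */
static bool clear_lock_and_test_negative_byte(unsigned char *flags)
{
	unsigned char old = __atomic_fetch_and(flags,
					       (unsigned char)~PG_LOCKED_BIT,
					       __ATOMIC_RELEASE);
	return old & PG_WAITERS_BIT;
}

int main(void)
{
	if (clear_lock_and_test_negative_byte(&page_flags))
		printf("contention: would wake the waiters\n");
	printf("flags after unlock: %#x\n", page_flags);
	return 0;
}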