Date: Fri, 29 Apr 2022 07:58:14 -0300
From: Wander Lairson Costa
To: "Kirill A. Shutemov"
Cc: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
    Joerg Roedel, Ard Biesheuvel, Andi Kleen, Kuppuswamy Sathyanarayanan,
    David Rientjes, Vlastimil Babka, Tom Lendacky, Thomas Gleixner,
    Peter Zijlstra, Paolo Bonzini, Ingo Molnar, Varad Gautam,
    Dario Faggioli, Dave Hansen, Brijesh Singh, Mike Rapoport,
    David Hildenbrand, x86@kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCHv5 04/12] x86/boot: Add infrastructure required for
 unaccepted memory support
References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com>
 <20220425033934.68551-5-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220425033934.68551-5-kirill.shutemov@linux.intel.com>

On Mon, Apr 25, 2022 at 06:39:26AM +0300, Kirill A. Shutemov wrote:

[snip]

> +static __always_inline void __set_bit(long nr, volatile unsigned long *addr)

Can't we update the existing set_bit function?

> +{
> +	asm volatile(__ASM_SIZE(bts) " %1,%0" : : "m" (*(volatile long *) addr),

Why do we need the cast here?
> +		     "Ir" (nr) : "memory");

Shouldn't we add "cc" to the clobber list? (A sketch of what I mean is
at the end of this mail.)

> +}
> +
> +static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
> +{
> +	asm volatile(__ASM_SIZE(btr) " %1,%0" : : "m" (*(volatile long *) addr),
> +		     "Ir" (nr) : "memory");
> +}

The same comments as for __set_bit apply here (except that there is no
clear_bit function to update).

[snip]

> +
> +static __always_inline unsigned long swab(const unsigned long y)
> +{
> +#if __BITS_PER_LONG == 64
> +	return __builtin_bswap32(y);
> +#else /* __BITS_PER_LONG == 32 */
> +	return __builtin_bswap64(y);

Suppose y = 0x11223344UL. The compiler will zero-extend it to a 64-bit
value, yielding 0x0000000011223344ULL; __builtin_bswap64() then returns
0x4433221100000000ULL, and the return value is truncated back to 32
bits, so swab will always return 0, won't it? (See the test program at
the end of this mail.)

> +#endif
> +}
> +
> +unsigned long _find_next_bit(const unsigned long *addr1,
> +			     const unsigned long *addr2, unsigned long nbits,

The name addr2 seems a bit misleading; it appears to act as some kind
of mask. Is that right?

> +			     unsigned long start, unsigned long invert, unsigned long le)
> +{
> +	unsigned long tmp, mask;
> +
> +	if (unlikely(start >= nbits))
> +		return nbits;
> +
> +	tmp = addr1[start / BITS_PER_LONG];
> +	if (addr2)
> +		tmp &= addr2[start / BITS_PER_LONG];
> +	tmp ^= invert;
> +
> +	/* Handle 1st word. */
> +	mask = BITMAP_FIRST_WORD_MASK(start);
> +	if (le)
> +		mask = swab(mask);
> +
> +	tmp &= mask;
> +
> +	start = round_down(start, BITS_PER_LONG);
> +
> +	while (!tmp) {
> +		start += BITS_PER_LONG;
> +		if (start >= nbits)
> +			return nbits;
> +
> +		tmp = addr1[start / BITS_PER_LONG];
> +		if (addr2)
> +			tmp &= addr2[start / BITS_PER_LONG];
> +		tmp ^= invert;
> +	}

Wouldn't it be better to divide start by BITS_PER_LONG at the beginning
of the function and multiply it back by BITS_PER_LONG only where
necessary, saving the division operations in the while loop? (See the
sketch at the end of this mail.)

[snip]
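For the __set_bit comments above, this is roughly what I had in mind:
an untested sketch, with the cast dropped (addr already has a suitable
type) and the flags clobber spelled out, although IIUC GCC on x86
treats the condition codes as clobbered by any asm statement anyway:

static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
{
	/* bts sets CF, hence the explicit "cc"; no cast needed on addr. */
	asm volatile(__ASM_SIZE(bts) " %1,%0"
		     : : "m" (*addr), "Ir" (nr)
		     : "memory", "cc");
}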
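The swab truncation is easy to reproduce in userspace. A minimal test
program, with uint32_t standing in for unsigned long on an ILP32
target (the function name is mine, just for illustration):

#include <stdio.h>
#include <stdint.h>

/*
 * Mimics the __BITS_PER_LONG == 32 branch: the 32-bit argument is
 * zero-extended to 64 bits, byte-swapped, and then truncated back to
 * 32 bits on return, losing every set bit.
 */
static uint32_t swab_32bit_branch(uint32_t y)
{
	return __builtin_bswap64(y);
}

int main(void)
{
	/* Prints 0x0 rather than the expected 0x44332211. */
	printf("0x%x\n", swab_32bit_branch(0x11223344U));
	return 0;
}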
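And for _find_next_bit, something along these lines is what I mean: a
rough, untested sketch that keeps a word index instead of a bit index,
so the loop body does no division at all. I am assuming the function
ends with the usual swab()/__ffs() tail from lib/find_bit.c, which was
snipped above:

unsigned long _find_next_bit(const unsigned long *addr1,
			     const unsigned long *addr2, unsigned long nbits,
			     unsigned long start, unsigned long invert,
			     unsigned long le)
{
	unsigned long tmp, mask;
	unsigned long word = start / BITS_PER_LONG;	/* the only division */

	if (unlikely(start >= nbits))
		return nbits;

	tmp = addr1[word];
	if (addr2)
		tmp &= addr2[word];
	tmp ^= invert;

	/* Handle 1st word. */
	mask = BITMAP_FIRST_WORD_MASK(start);
	if (le)
		mask = swab(mask);

	tmp &= mask;

	while (!tmp) {
		word++;
		if (word * BITS_PER_LONG >= nbits)
			return nbits;

		tmp = addr1[word];
		if (addr2)
			tmp &= addr2[word];
		tmp ^= invert;
	}

	if (le)
		tmp = swab(tmp);

	return min(word * BITS_PER_LONG + __ffs(tmp), nbits);
}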