Date: Wed, 8 May 2019 11:47:12 +0300
From: Andy Shevchenko
To: Yury Norov
Cc: Andrew Morton, Rasmus Villemoes, Dmitry Torokhov, "David S. Miller",
    Stephen Rothwell, Amritha Nambiar, Willem de Bruijn, Kees Cook,
    Matthew Wilcox, "Tobin C. Harding", Will Deacon, Miklos Szeredi,
    Vineet Gupta, Chris Wilson, Arnaldo Carvalho de Melo,
    linux-kernel@vger.kernel.org, Yury Norov, Jens Axboe, Steffen Klassert
Subject: Re: [PATCH 2/7] bitops: more BITS_TO_* macros
Message-ID: <20190508084712.GA9224@smile.fi.intel.com>
References: <20190501010636.30595-1-ynorov@marvell.com>
 <20190501010636.30595-3-ynorov@marvell.com>
In-Reply-To: <20190501010636.30595-3-ynorov@marvell.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Apr 30, 2019 at 06:06:31PM -0700, Yury Norov wrote:
> Introduce BITS_TO_U64, BITS_TO_U32 and BITS_TO_BYTES as they are handy
> in the following patches (BITS_TO_U32 specifically). Reimplement tools/
> version of the macros according to the kernel implementation.
> 
> Also fix indentation for BITS_PER_TYPE definition.
> 
> Reviewed-by: Andy Shevchenko
> Signed-off-by: Yury Norov
> ---
>  include/linux/bitops.h       | 5 ++++-
>  tools/include/linux/bitops.h | 9 +++++----
>  2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index cf074bce3eb3..e61c4e614264 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -4,8 +4,11 @@
>  #include
>  #include
>  
> -#define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE)
> +#define BITS_PER_TYPE(type)	(sizeof(type) * BITS_PER_BYTE)
>  #define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(long))
> +#define BITS_TO_U64(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u64))
> +#define BITS_TO_U32(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u32))
> +#define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(char))
>  
>  extern unsigned int __sw_hweight8(unsigned int w);
>  extern unsigned int __sw_hweight16(unsigned int w);
> diff --git a/tools/include/linux/bitops.h b/tools/include/linux/bitops.h
> index 0b0ef3abc966..a8ba37a50d08 100644
> --- a/tools/include/linux/bitops.h
> +++ b/tools/include/linux/bitops.h
> @@ -13,10 +13,11 @@
>  #include
>  #include
>  
> -#define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
> -#define BITS_TO_U64(nr)		DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u64))
> -#define BITS_TO_U32(nr)		DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u32))
> -#define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_BYTE)
> +#define BITS_PER_TYPE(type)	(sizeof(type) * BITS_PER_BYTE)
> +#define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(long))
> +#define BITS_TO_U64(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u64))
> +#define BITS_TO_U32(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u32))
> +#define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(char))
>  
>  extern unsigned int __sw_hweight8(unsigned int w);
>  extern unsigned int __sw_hweight16(unsigned int w);
> -- 
> 2.17.1
> 

-- 
With Best Regards,
Andy Shevchenko
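
For reference, a minimal user-space sketch of the rounding behaviour the
new helpers provide. DIV_ROUND_UP, BITS_PER_BYTE and the u32/u64 typedefs
are re-declared here purely for illustration (in the kernel they come
from the usual headers); this is not the patched header itself.

/*
 * Stand-alone sketch: shows how the BITS_TO_* helpers round a bit
 * count up to whole storage units via DIV_ROUND_UP.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;
typedef uint64_t u64;

#define BITS_PER_BYTE		8
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

#define BITS_PER_TYPE(type)	(sizeof(type) * BITS_PER_BYTE)
#define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(long))
#define BITS_TO_U64(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u64))
#define BITS_TO_U32(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u32))
#define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(char))

int main(void)
{
	/* A 100-bit bitmap needs 2 u64 words, 4 u32 words and 13 bytes. */
	printf("BITS_TO_U64(100)   = %zu\n", (size_t)BITS_TO_U64(100));
	printf("BITS_TO_U32(100)   = %zu\n", (size_t)BITS_TO_U32(100));
	printf("BITS_TO_BYTES(100) = %zu\n", (size_t)BITS_TO_BYTES(100));
	return 0;
}

Built with plain cc and run, this prints 2, 4 and 13, i.e. the number of
u64 words, u32 words and bytes needed to hold a 100-bit bitmap.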