* [PATCH] Bitmap: Optimize division operations to shift operations
@ 2020-04-15  7:27 Wang Qing
  2020-04-15  7:47 ` Joe Perches
  2020-04-15  8:56 ` Peter Zijlstra
  0 siblings, 2 replies; 3+ messages in thread
From: Wang Qing @ 2020-04-15  7:27 UTC (permalink / raw)
  To: Andrew Morton, Dennis Zhou, Wolfram Sang, David Sterba,
	Josef Bacik, Stefano Brivio, Thomas Gleixner,
	William Breathitt Gray, Randy Dunlap, Andy Shevchenko,
	Yury Norov, Wang Qing, linux-kernel
  Cc: opensource.kernel

On some processors, the '/' operator calls into the compiler's
division library, which is inefficient. The bitmap code is
performance sensitive, so we can replace the division with a shift.

Signed-off-by: Wang Qing <wangqing@vivo.com>
---
 include/linux/bitmap.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 99058eb..85ff982 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -337,7 +337,7 @@ static inline int bitmap_equal(const unsigned long *src1,
 		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
 	if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
 	    IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
-		return !memcmp(src1, src2, nbits / 8);
+		return !memcmp(src1, src2, nbits >> 3);
 	return __bitmap_equal(src1, src2, nbits);
 }
 
@@ -411,7 +411,7 @@ static __always_inline void bitmap_set(unsigned long *map, unsigned int start,
 		 IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
 		 __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
 		 IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
-		memset((char *)map + start / 8, 0xff, nbits / 8);
+		memset((char *)map + (start >> 3), 0xff, nbits >> 3);
 	else
 		__bitmap_set(map, start, nbits);
 }
@@ -425,7 +425,7 @@ static __always_inline void bitmap_clear(unsigned long *map, unsigned int start,
 		 IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
 		 __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
 		 IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
-		memset((char *)map + start / 8, 0, nbits / 8);
+		memset((char *)map + (start >> 3), 0, nbits >> 3);
 	else
 		__bitmap_clear(map, start, nbits);
 }
-- 
2.7.4
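
For illustration, a minimal standalone sketch of the two forms the
patch swaps (the helper names here are made up, not from bitmap.h;
in the kernel code the operand nbits is an unsigned int):

#include <assert.h>

/* Both helpers compute the number of whole bytes that nbits bits
 * occupy.  With an unsigned operand, nbits / 8 and nbits >> 3 are
 * equivalent by definition, and an optimizing compiler lowers both
 * to the same shift instruction.
 */
static unsigned int bytes_div(unsigned int nbits)
{
	return nbits / 8;
}

static unsigned int bytes_shift(unsigned int nbits)
{
	return nbits >> 3;
}

int main(void)
{
	unsigned int nbits;

	for (nbits = 0; nbits < 1024; nbits++)
		assert(bytes_div(nbits) == bytes_shift(nbits));
	return 0;
}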



* Re: [PATCH] Bitmap: Optimize division operations to shift operations
  2020-04-15  7:27 [PATCH] Bitmap: Optimize division operations to shift operations Wang Qing
@ 2020-04-15  7:47 ` Joe Perches
  2020-04-15  8:56 ` Peter Zijlstra
  1 sibling, 0 replies; 3+ messages in thread
From: Joe Perches @ 2020-04-15  7:47 UTC (permalink / raw)
  To: Wang Qing, Andrew Morton, Dennis Zhou, Wolfram Sang,
	David Sterba, Josef Bacik, Stefano Brivio, Thomas Gleixner,
	William Breathitt Gray, Randy Dunlap, Andy Shevchenko,
	Yury Norov, linux-kernel
  Cc: opensource.kernel

On Wed, 2020-04-15 at 15:27 +0800, Wang Qing wrote:
> On some processors, the '/' operator calls into the compiler's
> division library, which is inefficient. The bitmap code is
> performance sensitive, so we can replace the division with a shift.

Seems more like bad compilers than a useful code change,
unless you can specify which compilers and which processors.

> diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
[]
> @@ -337,7 +337,7 @@ static inline int bitmap_equal(const unsigned long *src1,
>  		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
>  	if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
>  	    IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
> -		return !memcmp(src1, src2, nbits / 8);
> +		return !memcmp(src1, src2, nbits >> 3);
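
One way to check this claim concretely is to compare the code the
compiler actually generates for the two forms; a minimal sketch
(the file and function names are arbitrary):

/* div_vs_shift.c
 *
 * Build with:  gcc -O2 -S div_vs_shift.c   (or clang -O2 -S)
 * and inspect div_vs_shift.s: with an unsigned operand, both
 * functions compile to the same single shift, so the patch does
 * not change the generated code on a reasonable compiler.
 */
unsigned long div_form(unsigned long nbits)
{
	return nbits / 8;
}

unsigned long shift_form(unsigned long nbits)
{
	return nbits >> 3;
}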




* Re: [PATCH] Bitmap: Optimize division operations to shift operations
  2020-04-15  7:27 [PATCH] Bitmap: Optimize division operations to shift operations Wang Qing
  2020-04-15  7:47 ` Joe Perches
@ 2020-04-15  8:56 ` Peter Zijlstra
  1 sibling, 0 replies; 3+ messages in thread
From: Peter Zijlstra @ 2020-04-15  8:56 UTC (permalink / raw)
  To: Wang Qing
  Cc: Andrew Morton, Dennis Zhou, Wolfram Sang, David Sterba,
	Josef Bacik, Stefano Brivio, Thomas Gleixner,
	William Breathitt Gray, Randy Dunlap, Andy Shevchenko,
	Yury Norov, linux-kernel, opensource.kernel

On Wed, Apr 15, 2020 at 03:27:40PM +0800, Wang Qing wrote:
> On some processors, the '/' operator calls into the compiler's
> division library, which is inefficient. The bitmap code is
> performance sensitive, so we can replace the division with a shift.
> 
> Signed-off-by: Wang Qing <wangqing@vivo.com>
> ---
>  include/linux/bitmap.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
> index 99058eb..85ff982 100644
> --- a/include/linux/bitmap.h
> +++ b/include/linux/bitmap.h
> @@ -337,7 +337,7 @@ static inline int bitmap_equal(const unsigned long *src1,
>  		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
>  	if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
>  	    IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
> -		return !memcmp(src1, src2, nbits / 8);
> +		return !memcmp(src1, src2, nbits >> 3);
>  	return __bitmap_equal(src1, src2, nbits);
>  }

If your compiler gets this wrong, set it on fire and scatter its remains.
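
For completeness, the one case where '/' and '>>' genuinely differ
is a signed operand, which is the only situation where division by 8
costs extra instructions; nbits in bitmap.h is unsigned, so there is
nothing for the patch to win. A sketch (results in the comments
assume the usual arithmetic right shift):

/* C division truncates toward zero; an arithmetic right shift
 * rounds toward negative infinity.  The two diverge only for
 * negative signed values (signed right shift is implementation-
 * defined in C, but arithmetic on mainstream compilers).
 */
int signed_div(int x)   { return x / 8;  }   /* signed_div(-9)   == -1 */
int signed_shift(int x) { return x >> 3; }   /* signed_shift(-9) == -2 */

/* With unsigned operands the two are identical, so the compiler
 * emits the same shift for both and the patch is a no-op.
 */
unsigned unsigned_div(unsigned x)   { return x / 8;  }
unsigned unsigned_shift(unsigned x) { return x >> 3; }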

