* [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
@ 2022-12-07 11:23 Dan Carpenter
2022-12-07 12:19 ` Vladimir Oltean
2022-12-09 8:21 ` Uladzislau Koshchanka
0 siblings, 2 replies; 17+ messages in thread
From: Dan Carpenter @ 2022-12-07 11:23 UTC (permalink / raw)
To: Vladimir Oltean; +Cc: David S. Miller, netdev, kernel-janitors
The bit_reverse() function is clearly supposed to be able to handle
64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
are not enough to handle more than 32 bits.
Fixes: 554aae35007e ("lib: Add support for generic packing operations")
Signed-off-by: Dan Carpenter <error27@gmail.com>
---
lib/packing.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/lib/packing.c b/lib/packing.c
index 9a72f4bbf0e2..9d7418052f5a 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -32,12 +32,11 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
static u64 bit_reverse(u64 val, unsigned int width)
{
u64 new_val = 0;
- unsigned int bit;
unsigned int i;
for (i = 0; i < width; i++) {
- bit = (val & (1 << i)) != 0;
- new_val |= (bit << (width - i - 1));
+ if (val & BIT_ULL(1))
+ new_val |= BIT_ULL(width - i - 1);
}
return new_val;
}
--
2.35.1
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 11:23 [PATCH net] lib: packing: fix shift wrapping in bit_reverse() Dan Carpenter
@ 2022-12-07 12:19 ` Vladimir Oltean
2022-12-07 12:21 ` Dan Carpenter
2022-12-09 8:21 ` Uladzislau Koshchanka
1 sibling, 1 reply; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-07 12:19 UTC (permalink / raw)
To: Dan Carpenter; +Cc: David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 02:23:28PM +0300, Dan Carpenter wrote:
> The bit_reverse() function is clearly supposed to be able to handle
> 64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
> are not enough to handle more than 32 bits.
>
> Fixes: 554aae35007e ("lib: Add support for generic packing operations")
> Signed-off-by: Dan Carpenter <error27@gmail.com>
> ---
> lib/packing.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/lib/packing.c b/lib/packing.c
> index 9a72f4bbf0e2..9d7418052f5a 100644
> --- a/lib/packing.c
> +++ b/lib/packing.c
> @@ -32,12 +32,11 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
> static u64 bit_reverse(u64 val, unsigned int width)
> {
> u64 new_val = 0;
> - unsigned int bit;
> unsigned int i;
>
> for (i = 0; i < width; i++) {
> - bit = (val & (1 << i)) != 0;
> - new_val |= (bit << (width - i - 1));
> + if (val & BIT_ULL(1))
hmm, why 1 and not i?
> + new_val |= BIT_ULL(width - i - 1);
> }
> return new_val;
> }
> --
> 2.35.1
>
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 12:19 ` Vladimir Oltean
@ 2022-12-07 12:21 ` Dan Carpenter
2022-12-07 12:22 ` Vladimir Oltean
2022-12-07 19:41 ` David Laight
0 siblings, 2 replies; 17+ messages in thread
From: Dan Carpenter @ 2022-12-07 12:21 UTC (permalink / raw)
To: Vladimir Oltean; +Cc: David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 02:19:36PM +0200, Vladimir Oltean wrote:
> On Wed, Dec 07, 2022 at 02:23:28PM +0300, Dan Carpenter wrote:
> > The bit_reverse() function is clearly supposed to be able to handle
> > 64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
> > are not enough to handle more than 32 bits.
> >
> > Fixes: 554aae35007e ("lib: Add support for generic packing operations")
> > Signed-off-by: Dan Carpenter <error27@gmail.com>
> > ---
> > lib/packing.c | 5 ++---
> > 1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/lib/packing.c b/lib/packing.c
> > index 9a72f4bbf0e2..9d7418052f5a 100644
> > --- a/lib/packing.c
> > +++ b/lib/packing.c
> > @@ -32,12 +32,11 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
> > static u64 bit_reverse(u64 val, unsigned int width)
> > {
> > u64 new_val = 0;
> > - unsigned int bit;
> > unsigned int i;
> >
> > for (i = 0; i < width; i++) {
> > - bit = (val & (1 << i)) != 0;
> > - new_val |= (bit << (width - i - 1));
> > + if (val & BIT_ULL(1))
>
> hmm, why 1 and not i?
Because I'm a moron. Let me resend.
regards,
dan carpenter
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 12:21 ` Dan Carpenter
@ 2022-12-07 12:22 ` Vladimir Oltean
2022-12-07 12:51 ` Dan Carpenter
2022-12-07 13:02 ` Vladimir Oltean
2022-12-07 19:41 ` David Laight
1 sibling, 2 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-07 12:22 UTC (permalink / raw)
To: Dan Carpenter; +Cc: David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 03:21:04PM +0300, Dan Carpenter wrote:
> On Wed, Dec 07, 2022 at 02:19:36PM +0200, Vladimir Oltean wrote:
> > On Wed, Dec 07, 2022 at 02:23:28PM +0300, Dan Carpenter wrote:
> > > The bit_reverse() function is clearly supposed to be able to handle
> > > 64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
> > > are not enough to handle more than 32 bits.
> > >
> > > Fixes: 554aae35007e ("lib: Add support for generic packing operations")
> > > Signed-off-by: Dan Carpenter <error27@gmail.com>
> > > ---
> > > lib/packing.c | 5 ++---
> > > 1 file changed, 2 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/lib/packing.c b/lib/packing.c
> > > index 9a72f4bbf0e2..9d7418052f5a 100644
> > > --- a/lib/packing.c
> > > +++ b/lib/packing.c
> > > @@ -32,12 +32,11 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
> > > static u64 bit_reverse(u64 val, unsigned int width)
> > > {
> > > u64 new_val = 0;
> > > - unsigned int bit;
> > > unsigned int i;
> > >
> > > for (i = 0; i < width; i++) {
> > > - bit = (val & (1 << i)) != 0;
> > > - new_val |= (bit << (width - i - 1));
> > > + if (val & BIT_ULL(1))
> >
> > hmm, why 1 and not i?
>
> Because I'm a moron. Let me resend.
Wait a second, I deliberately wrote the code without conditionals.
Let me look at the code disassembly before and after the patch and see
what they look like.
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 12:22 ` Vladimir Oltean
@ 2022-12-07 12:51 ` Dan Carpenter
2022-12-07 13:06 ` Vladimir Oltean
2022-12-07 13:02 ` Vladimir Oltean
1 sibling, 1 reply; 17+ messages in thread
From: Dan Carpenter @ 2022-12-07 12:51 UTC (permalink / raw)
To: Vladimir Oltean; +Cc: David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 02:22:54PM +0200, Vladimir Oltean wrote:
> On Wed, Dec 07, 2022 at 03:21:04PM +0300, Dan Carpenter wrote:
> > On Wed, Dec 07, 2022 at 02:19:36PM +0200, Vladimir Oltean wrote:
> > > On Wed, Dec 07, 2022 at 02:23:28PM +0300, Dan Carpenter wrote:
> > > > The bit_reverse() function is clearly supposed to be able to handle
> > > > 64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
> > > > are not enough to handle more than 32 bits.
> > > >
> > > > Fixes: 554aae35007e ("lib: Add support for generic packing operations")
> > > > Signed-off-by: Dan Carpenter <error27@gmail.com>
> > > > ---
> > > > lib/packing.c | 5 ++---
> > > > 1 file changed, 2 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/lib/packing.c b/lib/packing.c
> > > > index 9a72f4bbf0e2..9d7418052f5a 100644
> > > > --- a/lib/packing.c
> > > > +++ b/lib/packing.c
> > > > @@ -32,12 +32,11 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
> > > > static u64 bit_reverse(u64 val, unsigned int width)
> > > > {
> > > > u64 new_val = 0;
> > > > - unsigned int bit;
> > > > unsigned int i;
> > > >
> > > > for (i = 0; i < width; i++) {
> > > > - bit = (val & (1 << i)) != 0;
> > > > - new_val |= (bit << (width - i - 1));
> > > > + if (val & BIT_ULL(1))
> > >
> > > hmm, why 1 and not i?
> >
> > Because I'm a moron. Let me resend.
>
> Wait a second, I deliberately wrote the code without conditionals.
> Let me look at the code disassembly before and after the patch and see
> what they look like.
My crappy benchmark says that the if statement is faster. 22 vs 26
seconds.
regards,
dan carpenter
#include <stdio.h>
#include <limits.h>
#include <stdbool.h>
#include <string.h>
#define BIT(n) (1 << (n))
#define BIT_ULL(n) (1ULL << (n))
#define u64 unsigned long long
#define u32 unsigned int
#define u16 unsigned short
#define u8 unsigned char
static u64 bit_reverse1(u64 val, unsigned int width)
{
u64 new_val = 0;
unsigned int i;
for (i = 0; i < width; i++) {
if (val & BIT_ULL(i))
new_val |= BIT_ULL(width - i - 1);
}
return new_val;
}
static u64 bit_reverse2(u64 val, unsigned int width)
{
u64 new_val = 0;
u64 bit;
unsigned int i;
for (i = 0; i < width; i++) {
bit = (val & BIT_ULL(i)) != 0;
new_val |= (bit << (width - i - 1));
}
return new_val;
}
int main(void)
{
unsigned long long val;
for (val = ULLONG_MAX - INT_MAX; val; val++)
bit_reverse1(val, 2); /* swapped to bit_reverse2() for the second timing */
return 0;
}
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 12:22 ` Vladimir Oltean
2022-12-07 12:51 ` Dan Carpenter
@ 2022-12-07 13:02 ` Vladimir Oltean
1 sibling, 0 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-07 13:02 UTC (permalink / raw)
To: Dan Carpenter; +Cc: David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 02:22:54PM +0200, Vladimir Oltean wrote:
> Wait a second, I deliberately wrote the code without conditionals.
> Let me look at the code disassembly before and after the patch and see
> what they look like.
Before (make lib/packing.lst on arm64):
for (i = 0; i < width; i++) {
1c: 310004ec adds w12, w7, #0x1
20: 540001c0 b.eq 58 <adjust_for_msb_right_quirk+0x58> // b.none
24: 52800004 mov w4, #0x0 // #0
bit = (val & (1 << i)) != 0;
28: 5280002b mov w11, #0x1 // #1
2c: d503201f nop
30: 1ac42166 lsl w6, w11, w4
new_val |= (bit << (width - i - 1));
34: 4b0400e9 sub w9, w7, w4
bit = (val & (1 << i)) != 0;
38: 93407cc6 sxtw x6, w6 /* We don't want sign extension */
3c: ea0a00df tst x6, x10
40: 1a9f07e6 cset w6, ne // ne = any
for (i = 0; i < width; i++) {
44: 6b07009f cmp w4, w7
48: 11000484 add w4, w4, #0x1
new_val |= (bit << (width - i - 1));
4c: 1ac920c6 lsl w6, w6, w9
50: aa060108 orr x8, x8, x6
for (i = 0; i < width; i++) {
54: 54fffee1 b.ne 30 <adjust_for_msb_right_quirk+0x30> // b.any
Then this modified code:
static u64 bit_reverse(u64 val, unsigned int width)
{
u64 new_val = 0;
unsigned int i;
for (i = 0; i < width; i++)
if (val & BIT_ULL(i))
new_val |= BIT_ULL(width - i - 1);
return new_val;
}
compiles to this:
for (i = 0; i < width; i++) {
1c: 310004eb adds w11, w7, #0x1
20: 540001c0 b.eq 58 <adjust_for_msb_right_quirk+0x58> // b.none
24: 52800005 mov w5, #0x0 // #0
new_val |= BIT_ULL(width - i - 1);
28: d280002a mov x10, #0x1 // #1
2c: 14000002 b 34 <adjust_for_msb_right_quirk+0x34> /* this is an unconditional jump to "sub w4, w7, w5", skipping the assignment to w5 */
30: 2a0803e5 mov w5, w8 /* this is only hit by the backwards jump b.ne 30 at the end */
34: 4b0500e4 sub w4, w7, w5
if (val & BIT_ULL(i))
38: 9ac52528 lsr x8, x9, x5
new_val |= BIT_ULL(width - i - 1);
3c: f240011f tst x8, #0x1
for (i = 0; i < width; i++) {
40: 110004a8 add w8, w5, #0x1
new_val |= BIT_ULL(width - i - 1);
44: 9ac42144 lsl x4, x10, x4
48: aa0400c4 orr x4, x6, x4
4c: 9a861086 csel x6, x4, x6, ne // ne = any
for (i = 0; i < width; i++) {
50: 6b0700bf cmp w5, w7
54: 54fffee1 b.ne 30 <adjust_for_msb_right_quirk+0x30> // b.any
which indeed has no branch in the main loop (between 0x30 and 0x54),
because as I explained, the "b 34" doesn't count - it's not in the loop.
Additionally, this rewritten code which is considerably more obscure in C:
static u64 bit_reverse(u64 val, unsigned int width)
{
u64 new_val = 0;
unsigned int i;
for (i = 0; i < width; i++)
new_val |= !!(val & BIT_ULL(i)) ?
BIT_ULL(width - i - 1) : 0;
return new_val;
}
compiles to the exact same assembly code as the variant with "if":
for (i = 0; i < width; i++)
1c: 310004eb adds w11, w7, #0x1
20: 540001c0 b.eq 58 <adjust_for_msb_right_quirk+0x58> // b.none
24: 52800005 mov w5, #0x0 // #0
new_val |= !!(val & BIT_ULL(i)) ?
28: d280002a mov x10, #0x1 // #1
2c: 14000002 b 34 <adjust_for_msb_right_quirk+0x34>
30: 2a0803e5 mov w5, w8
34: 4b0500e4 sub w4, w7, w5
38: 9ac52528 lsr x8, x9, x5
3c: f240011f tst x8, #0x1
for (i = 0; i < width; i++)
40: 110004a8 add w8, w5, #0x1
new_val |= !!(val & BIT_ULL(i)) ?
44: 9ac42144 lsl x4, x10, x4
48: aa0400c4 orr x4, x6, x4
4c: 9a861086 csel x6, x4, x6, ne // ne = any
for (i = 0; i < width; i++)
50: 6b0700bf cmp w5, w7
54: 54fffee1 b.ne 30 <adjust_for_msb_right_quirk+0x30> // b.any
So this is good with the "if".
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 12:51 ` Dan Carpenter
@ 2022-12-07 13:06 ` Vladimir Oltean
0 siblings, 0 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-07 13:06 UTC (permalink / raw)
To: Dan Carpenter; +Cc: David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 03:51:08PM +0300, Dan Carpenter wrote:
> My crappy benchmark says that the if statement is faster. 22 vs 26
> seconds.
Sure that's not just noise in the measurements? :)
* RE: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 12:21 ` Dan Carpenter
2022-12-07 12:22 ` Vladimir Oltean
@ 2022-12-07 19:41 ` David Laight
2022-12-08 16:58 ` Vladimir Oltean
1 sibling, 1 reply; 17+ messages in thread
From: David Laight @ 2022-12-07 19:41 UTC (permalink / raw)
To: 'Dan Carpenter', Vladimir Oltean
Cc: David S. Miller, netdev, kernel-janitors
From: Dan Carpenter
> Sent: 07 December 2022 12:21
....
> > > - new_val |= (bit << (width - i - 1));
> > > + if (val & BIT_ULL(1))
> >
> > hmm, why 1 and not i?
>
> Because I'm a moron. Let me resend.
Since we're not writing FORTRAN-IV, why not use a variable
name that is harder to confuse with 1?
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 19:41 ` David Laight
@ 2022-12-08 16:58 ` Vladimir Oltean
0 siblings, 0 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-08 16:58 UTC (permalink / raw)
To: David Laight
Cc: 'Dan Carpenter', David S. Miller, netdev, kernel-janitors
On Wed, Dec 07, 2022 at 07:41:28PM +0000, David Laight wrote:
> From: Dan Carpenter
> > Sent: 07 December 2022 12:21
> ....
> > > > - new_val |= (bit << (width - i - 1));
> > > > + if (val & BIT_ULL(1))
> > >
> > > hmm, why 1 and not i?
> >
> > Because I'm a moron. Let me resend.
>
> Since we're not writing FORTRAN-IV why not use a variable
> name that is harder to confuse with 1?
How about 'k'?
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-07 11:23 [PATCH net] lib: packing: fix shift wrapping in bit_reverse() Dan Carpenter
2022-12-07 12:19 ` Vladimir Oltean
@ 2022-12-09 8:21 ` Uladzislau Koshchanka
2022-12-09 14:30 ` Vladimir Oltean
1 sibling, 1 reply; 17+ messages in thread
From: Uladzislau Koshchanka @ 2022-12-09 8:21 UTC (permalink / raw)
To: Dan Carpenter; +Cc: netdev, kernel-janitors, olteanv
On Wed, 7 Dec 2022 at 14:30, Dan Carpenter <error27@gmail.com> wrote:
>
> The bit_reverse() function is clearly supposed to be able to handle
> 64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
> are not enough to handle more than 32 bits.
It seems from the surrounding code that this function is only called
for widths of up to a byte (but correct me if I'm wrong). There are
fast implementations of bit-reverse in include/linux/bitrev.h. It's
better to just remove this function entirely and call bitrev8, which
is just a precalc-table lookup. While at it, also sort includes.
Signed-off-by: Uladzislau Koshchanka <koshchanka@gmail.com>
lib/packing.c | 20 ++++----------------
1 file changed, 4 insertions(+), 16 deletions(-)
diff --git a/lib/packing.c b/lib/packing.c
index 9a72f4bbf0e2..47ea47c1198a 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -2,10 +2,11 @@
/* Copyright 2016-2018 NXP
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
-#include <linux/packing.h>
-#include <linux/module.h>
#include <linux/bitops.h>
+#include <linux/bitrev.h>
#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/packing.h>
#include <linux/types.h>
static int get_le_offset(int offset)
@@ -29,19 +30,6 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
return word_index * 4 + offset;
}
-static u64 bit_reverse(u64 val, unsigned int width)
-{
- u64 new_val = 0;
- unsigned int bit;
- unsigned int i;
-
- for (i = 0; i < width; i++) {
- bit = (val & (1 << i)) != 0;
- new_val |= (bit << (width - i - 1));
- }
- return new_val;
-}
-
static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
int *box_end_bit, u8 *box_mask)
{
@@ -49,7 +37,7 @@ static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
int new_box_start_bit, new_box_end_bit;
*to_write >>= *box_end_bit;
- *to_write = bit_reverse(*to_write, box_bit_width);
+ *to_write = bitrev8(*to_write) >> (8 - box_bit_width);
*to_write <<= *box_end_bit;
new_box_end_bit = box_bit_width - *box_start_bit - 1;
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-09 8:21 ` Uladzislau Koshchanka
@ 2022-12-09 14:30 ` Vladimir Oltean
2022-12-09 21:01 ` Uladzislau Koshchanka
0 siblings, 1 reply; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-09 14:30 UTC (permalink / raw)
To: Uladzislau Koshchanka; +Cc: Dan Carpenter, netdev, kernel-janitors
Hi Uladzislau,
On Fri, Dec 09, 2022 at 11:21:21AM +0300, Uladzislau Koshchanka wrote:
> On Wed, 7 Dec 2022 at 14:30, Dan Carpenter <error27@gmail.com> wrote:
> >
> > The bit_reverse() function is clearly supposed to be able to handle
> > 64 bit values, but the types for "(1 << i)" and "bit << (width - i - 1)"
> > are not enough to handle more than 32 bits.
>
> It seems from the surrounding code that this function is only called
> for widths of up to a byte.
This observation is correct. I was too lazy to look up and remember
whether this is the case, but the comment says it quite clearly:
/* Bit indices into the currently accessed 8-bit box */
int box_start_bit, box_end_bit, box_addr;
> There are fast implementations of bit-reverse in include/linux/bitrev.h.
> It's better to just remove this function entirely and call bitrev8,
> which is just a precalc-table lookup. While at it, also sort includes.
The problem I see with bitrev8 is that the byte_rev_table[] can
seemingly be built as a module (the BITREVERSE Kconfig knob is tristate,
and btw your patch doesn't make PACKING select BITREVERSE). But PACKING
is bool. IIRC, I got comments during review that it's not worth making
packing a module, but I may remember wrong.
> @@ -49,7 +37,7 @@ static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
> int new_box_start_bit, new_box_end_bit;
>
> *to_write >>= *box_end_bit;
> - *to_write = bit_reverse(*to_write, box_bit_width);
> + *to_write = bitrev8(*to_write) >> (8 - box_bit_width);
> *to_write <<= *box_end_bit;
>
> new_box_end_bit = box_bit_width - *box_start_bit - 1;
Anyway, the patch works in principle. I know this because I wrote the
following patch to check:
From 17099a86291713d2bcf8137473daea5f390a2ef4 Mon Sep 17 00:00:00 2001
From: Vladimir Oltean <vladimir.oltean@nxp.com>
Date: Fri, 9 Dec 2022 16:23:35 +0200
Subject: [PATCH] lib: packing: add boot-time selftests
In case people want to make changes to the packing() implementation but
they aren't sure it's going to keep working, provide 16 boot-time calls
to packing() which exercise all combinations of quirks plus PACK |
UNPACK.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
lib/Kconfig | 9 +++
lib/packing.c | 186 ++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 195 insertions(+)
diff --git a/lib/Kconfig b/lib/Kconfig
index 9bbf8a4b2108..54b8deaf44fc 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -39,6 +39,15 @@ config PACKING
When in doubt, say N.
+config PACKING_SELFTESTS
+ bool "Selftests for packing library"
+ depends on PACKING
+ help
+ Boot-time selftests to make sure that the packing and unpacking
+ functions work properly.
+
+ When in doubt, say N.
+
config BITREVERSE
tristate
diff --git a/lib/packing.c b/lib/packing.c
index 9a72f4bbf0e2..aff70853b0c4 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -210,5 +210,191 @@ int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen,
}
EXPORT_SYMBOL(packing);
+#if IS_ENABLED(CONFIG_PACKING_SELFTESTS)
+
+#define PBUF_LEN 16
+
+/* These selftests pack and unpack a magic 64-bit value (0xcafedeadbeefcafe) at
+ * a fixed logical offset (32) within an otherwise zero array of 128 bits
+ * (16 bytes). They test all possible bit layouts of the 128 bit buffer.
+ */
+static bool test_pack(u8 expected_pbuf[PBUF_LEN], u8 quirks)
+{
+ u64 uval = 0xcafedeadbeefcafe;
+ u8 pbuf[PBUF_LEN];
+ int err, i;
+
+ memset(pbuf, 0, PBUF_LEN);
+ err = packing(pbuf, &uval, 95, 32, PBUF_LEN, PACK, quirks);
+ if (err) {
+ pr_err("packing() returned %pe\n", ERR_PTR(err));
+ return false;
+ }
+
+ for (i = 0; i < PBUF_LEN; i++) {
+ if (pbuf[i] != expected_pbuf[i]) {
+ print_hex_dump(KERN_ERR, "pbuf: ", DUMP_PREFIX_NONE,
+ 16, 1, pbuf, PBUF_LEN, false);
+ print_hex_dump(KERN_ERR, "expected: ", DUMP_PREFIX_NONE,
+ 16, 1, expected_pbuf, PBUF_LEN, false);
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static bool test_unpack(u8 pbuf[PBUF_LEN], u8 quirks)
+{
+ u64 uval, expected_uval = 0xcafedeadbeefcafe;
+ int err;
+
+ err = packing(pbuf, &uval, 95, 32, PBUF_LEN, UNPACK, quirks);
+ if (err) {
+ pr_err("packing() returned %pe\n", ERR_PTR(err));
+ return false;
+ }
+
+ if (uval != expected_uval) {
+ pr_err("uval: 0x%llx expected 0x%llx\n", uval, expected_uval);
+ return false;
+ }
+
+ return true;
+}
+
+static void test_no_quirks(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0xca, 0xfe, 0xde, 0xad,
+ 0xbe, 0xef, 0xca, 0xfe, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, 0);
+ pr_info("packing with no quirks: %s\n", ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, 0);
+ pr_info("unpacking with no quirks: %s\n", ret ? "OK" : "FAIL");
+}
+
+static void test_msb_right(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0x53, 0x7f, 0x7b, 0xb5,
+ 0x7d, 0xf7, 0x53, 0x7f, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("packing with QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("unpacking with QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+}
+
+static void test_le(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0xad, 0xde, 0xfe, 0xca,
+ 0xfe, 0xca, 0xef, 0xbe, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_LITTLE_ENDIAN);
+ pr_info("packing with QUIRK_LITTLE_ENDIAN: %s\n", ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_LITTLE_ENDIAN);
+ pr_info("unpacking with QUIRK_LITTLE_ENDIAN: %s\n",
+ ret ? "OK" : "FAIL");
+}
+
+static void test_le_msb_right(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0xb5, 0x7b, 0x7f, 0x53,
+ 0x7f, 0x53, 0xf7, 0x7d, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_LITTLE_ENDIAN | QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("packing with QUIRK_LITTLE_ENDIAN | QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_LITTLE_ENDIAN | QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("unpacking with QUIRK_LITTLE_ENDIAN | QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+}
+
+static void test_lsw32_first(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0xbe, 0xef, 0xca, 0xfe,
+ 0xca, 0xfe, 0xde, 0xad, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_LSW32_IS_FIRST);
+ pr_info("packing with QUIRK_LSW32_IS_FIRST: %s\n", ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_LSW32_IS_FIRST);
+ pr_info("unpacking with QUIRK_LSW32_IS_FIRST: %s\n", ret ? "OK" : "FAIL");
+}
+
+static void test_lsw32_first_msb_right(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0x7d, 0xf7, 0x53, 0x7f,
+ 0x53, 0x7f, 0x7b, 0xb5, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_LSW32_IS_FIRST | QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("packing with QUIRK_LSW32_IS_FIRST | QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_LSW32_IS_FIRST | QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("unpacking with QUIRK_LSW32_IS_FIRST | QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+}
+
+static void test_lsw32_first_le(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0xfe, 0xca, 0xef, 0xbe,
+ 0xad, 0xde, 0xfe, 0xca, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN);
+ pr_info("packing with QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN: %s\n",
+ ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN);
+ pr_info("unpacking with QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN: %s\n",
+ ret ? "OK" : "FAIL");
+}
+
+static void test_lsw32_first_le_msb_right(void)
+{
+ u8 pbuf[PBUF_LEN] = {0x00, 0x00, 0x00, 0x00, 0x7f, 0x53, 0xf7, 0x7d,
+ 0xb5, 0x7b, 0x7f, 0x53, 0x00, 0x00, 0x00, 0x00};
+ bool ret;
+
+ ret = test_pack(pbuf, QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN |
+ QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("packing with QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN | QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+
+ ret = test_unpack(pbuf, QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN |
+ QUIRK_MSB_ON_THE_RIGHT);
+ pr_info("unpacking with QUIRK_LSW32_IS_FIRST | QUIRK_LITTLE_ENDIAN | QUIRK_MSB_ON_THE_RIGHT: %s\n",
+ ret ? "OK" : "FAIL");
+}
+
+static int __init packing_init(void)
+{
+ test_no_quirks();
+ test_msb_right();
+ test_le();
+ test_le_msb_right();
+ test_lsw32_first();
+ test_lsw32_first_msb_right();
+ test_lsw32_first_le();
+ test_lsw32_first_le_msb_right();
+
+ return 0;
+}
+module_init(packing_init);
+#endif
+
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Generic bitfield packing and unpacking");
--
2.34.1
I've been meaning to do this for a while, but I'm not sure what the
best way to integrate such a thing is. Does anyone have any ideas?
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-09 14:30 ` Vladimir Oltean
@ 2022-12-09 21:01 ` Uladzislau Koshchanka
2022-12-09 22:06 ` Vladimir Oltean
0 siblings, 1 reply; 17+ messages in thread
From: Uladzislau Koshchanka @ 2022-12-09 21:01 UTC (permalink / raw)
To: Vladimir Oltean; +Cc: Dan Carpenter, netdev, kernel-janitors
Hi Vladimir,
> The problem I see with bitrev8 is that the byte_rev_table[] can
> seemingly be built as a module (the BITREVERSE Kconfig knob is tristate,
> and btw your patch doesn't make PACKING select BITREVERSE). But PACKING
> is bool. IIRC, I got comments during review that it's not worth making
> packing a module, but I may remember wrong.
Do you really think it's a problem? I personally would just select
BITREVERSE, with or without making PACKING tristate. BITREVERSE is
already selected by CRC32, which defaults to y, so adding a select
doesn't change the default. I can't think of a practical reason to
avoid linking against 256 bytes here.
In any case, it just doesn't look right to have multiple bit-reverse
implementations only because of Kconfig relations.
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-09 21:01 ` Uladzislau Koshchanka
@ 2022-12-09 22:06 ` Vladimir Oltean
2022-12-09 22:07 ` Vladimir Oltean
2022-12-10 0:44 ` [PATCH] lib: packing: replace bit_reverse() with bitrev8() Uladzislau Koshchanka
0 siblings, 2 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-09 22:06 UTC (permalink / raw)
To: Uladzislau Koshchanka; +Cc: Dan Carpenter, netdev, kernel-janitors
On Sat, Dec 10, 2022 at 12:01:28AM +0300, Uladzislau Koshchanka wrote:
> Hi Vladimir,
>
> > The problem I see with bitrev8 is that the byte_rev_table[] can
> > seemingly be built as a module (the BITREVERSE Kconfig knob is tristate,
> > and btw your patch doesn't make PACKING select BITREVERSE). But PACKING
> > is bool. IIRC, I got comments during review that it's not worth making
> > packing a module, but I may remember wrong.
>
> Do you really think it's a problem? I personally would just select
> BITREVERSE with/without making PACKING tristate. BITREVERSE is already
> selected by CRC32 which defaults to y, so just adding a select isn't a
> change in the default. Can't think of a practical point in avoiding
> linking against 256 bytes here.
>
> In any case, it just doesn't look right to have multiple bit-reverse
> implementations only because of Kconfig relations.
Ok, let's use BITREVERSE then. Could you submit your patch formally please?
* Re: [PATCH net] lib: packing: fix shift wrapping in bit_reverse()
2022-12-09 22:06 ` Vladimir Oltean
@ 2022-12-09 22:07 ` Vladimir Oltean
2022-12-10 0:44 ` [PATCH] lib: packing: replace bit_reverse() with bitrev8() Uladzislau Koshchanka
1 sibling, 0 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-09 22:07 UTC (permalink / raw)
To: Uladzislau Koshchanka; +Cc: Dan Carpenter, netdev, kernel-janitors
On Sat, Dec 10, 2022 at 12:06:51AM +0200, Vladimir Oltean wrote:
> On Sat, Dec 10, 2022 at 12:01:28AM +0300, Uladzislau Koshchanka wrote:
> > Hi Vladimir,
> >
> > > The problem I see with bitrev8 is that the byte_rev_table[] can
> > > seemingly be built as a module (the BITREVERSE Kconfig knob is tristate,
> > > and btw your patch doesn't make PACKING select BITREVERSE). But PACKING
> > > is bool. IIRC, I got comments during review that it's not worth making
> > > packing a module, but I may remember wrong.
> >
> > Do you really think it's a problem? I personally would just select
> > BITREVERSE with/without making PACKING tristate. BITREVERSE is already
> > selected by CRC32 which defaults to y, so just adding a select isn't a
> > change in the default. Can't think of a practical point in avoiding
> > linking against 256 bytes here.
> >
> > In any case, it just doesn't look right to have multiple bit-reverse
> > implementations only because of Kconfig relations.
>
> Ok, let's use BITREVERSE then. Could you submit your patch formally please?
Also, none of that 'while at it, do XYZ unrelated stuff'. One patch per
logical change please.
* [PATCH] lib: packing: replace bit_reverse() with bitrev8()
2022-12-09 22:06 ` Vladimir Oltean
2022-12-09 22:07 ` Vladimir Oltean
@ 2022-12-10 0:44 ` Uladzislau Koshchanka
2022-12-12 23:04 ` Vladimir Oltean
2022-12-12 23:30 ` patchwork-bot+netdevbpf
1 sibling, 2 replies; 17+ messages in thread
From: Uladzislau Koshchanka @ 2022-12-10 0:44 UTC (permalink / raw)
To: olteanv; +Cc: netdev, kernel-janitors, Uladzislau Koshchanka
Remove the bit_reverse() function. Instead, use bitrev8() from
linux/bitrev.h plus a bitshift. This reduces code duplication.
Signed-off-by: Uladzislau Koshchanka <koshchanka@gmail.com>
---
lib/Kconfig | 1 +
lib/packing.c | 16 ++--------------
2 files changed, 3 insertions(+), 14 deletions(-)
diff --git a/lib/Kconfig b/lib/Kconfig
index 9bbf8a4b2108..cc969ef58a2a 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -24,6 +24,7 @@ config LINEAR_RANGES
config PACKING
bool "Generic bitfield packing and unpacking"
+ select BITREVERSE
default n
help
This option provides the packing() helper function, which permits
diff --git a/lib/packing.c b/lib/packing.c
index 9a72f4bbf0e2..a96169237ae6 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -7,6 +7,7 @@
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/types.h>
+#include <linux/bitrev.h>
static int get_le_offset(int offset)
{
@@ -29,19 +30,6 @@ static int get_reverse_lsw32_offset(int offset, size_t len)
return word_index * 4 + offset;
}
-static u64 bit_reverse(u64 val, unsigned int width)
-{
- u64 new_val = 0;
- unsigned int bit;
- unsigned int i;
-
- for (i = 0; i < width; i++) {
- bit = (val & (1 << i)) != 0;
- new_val |= (bit << (width - i - 1));
- }
- return new_val;
-}
-
static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
int *box_end_bit, u8 *box_mask)
{
@@ -49,7 +37,7 @@ static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
int new_box_start_bit, new_box_end_bit;
*to_write >>= *box_end_bit;
- *to_write = bit_reverse(*to_write, box_bit_width);
+ *to_write = bitrev8(*to_write) >> (8 - box_bit_width);
*to_write <<= *box_end_bit;
new_box_end_bit = box_bit_width - *box_start_bit - 1;
--
2.34.1
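The deleted bit_reverse() loop is where the original shift wrap lived: with a plain int constant, both "(1 << i)" and "bit << (width - i - 1)" are 32-bit operations, so any width above 32 shifts past the type and invokes undefined behaviour. A userspace sketch (function names are mine, not kernel code) contrasting a correctly-typed 64-bit loop with the bitrev8-style replacement this patch uses for widths of at most 8:

```c
#include <assert.h>
#include <stdint.h>

/* Byte reversal written out with shifts/masks, standing in for bitrev8(). */
static uint8_t bitrev8_sketch(uint8_t b)
{
	b = (uint8_t)((b & 0xF0) >> 4 | (b & 0x0F) << 4); /* swap nibbles */
	b = (uint8_t)((b & 0xCC) >> 2 | (b & 0x33) << 2); /* swap bit pairs */
	b = (uint8_t)((b & 0xAA) >> 1 | (b & 0x55) << 1); /* swap adjacent bits */
	return b;
}

/* The loop from the removed bit_reverse(), with 1ULL instead of a plain
 * int 1 so the shifts are 64-bit and widths up to 64 work.
 */
static uint64_t bit_reverse_fixed(uint64_t val, unsigned int width)
{
	uint64_t new_val = 0;
	unsigned int i;

	for (i = 0; i < width; i++)
		if (val & (1ULL << i))
			new_val |= 1ULL << (width - i - 1);
	return new_val;
}
```

For the box_bit_width values adjust_for_msb_right_quirk() deals with (at most 8), "bitrev8_sketch(v) >> (8 - width)" and the loop agree, which is why the table-backed helper can replace the open-coded version.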
* Re: [PATCH] lib: packing: replace bit_reverse() with bitrev8()
2022-12-10 0:44 ` [PATCH] lib: packing: replace bit_reverse() with bitrev8() Uladzislau Koshchanka
@ 2022-12-12 23:04 ` Vladimir Oltean
2022-12-12 23:30 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 17+ messages in thread
From: Vladimir Oltean @ 2022-12-12 23:04 UTC (permalink / raw)
To: Uladzislau Koshchanka; +Cc: netdev, kernel-janitors
On Sat, 10 Dec 2022 at 02:47, Uladzislau Koshchanka
<koshchanka@gmail.com> wrote:
>
> Remove bit_reverse() function. Instead use bitrev8() from linux/bitrev.h +
> bitshift. Reduces code-repetition.
>
> Signed-off-by: Uladzislau Koshchanka <koshchanka@gmail.com>
> ---
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
* Re: [PATCH] lib: packing: replace bit_reverse() with bitrev8()
2022-12-10 0:44 ` [PATCH] lib: packing: replace bit_reverse() with bitrev8() Uladzislau Koshchanka
2022-12-12 23:04 ` Vladimir Oltean
@ 2022-12-12 23:30 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 17+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-12-12 23:30 UTC (permalink / raw)
To: Uladzislau Koshchanka; +Cc: olteanv, netdev, kernel-janitors
Hello:
This patch was applied to netdev/net-next.git (master)
by Jakub Kicinski <kuba@kernel.org>:
On Sat, 10 Dec 2022 03:44:23 +0300 you wrote:
> Remove bit_reverse() function. Instead use bitrev8() from linux/bitrev.h +
> bitshift. Reduces code-repetition.
>
> Signed-off-by: Uladzislau Koshchanka <koshchanka@gmail.com>
> ---
> lib/Kconfig | 1 +
> lib/packing.c | 16 ++--------------
> 2 files changed, 3 insertions(+), 14 deletions(-)
Here is the summary with links:
- lib: packing: replace bit_reverse() with bitrev8()
https://git.kernel.org/netdev/net-next/c/1280d4b76f34
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Thread overview: 17+ messages
2022-12-07 11:23 [PATCH net] lib: packing: fix shift wrapping in bit_reverse() Dan Carpenter
2022-12-07 12:19 ` Vladimir Oltean
2022-12-07 12:21 ` Dan Carpenter
2022-12-07 12:22 ` Vladimir Oltean
2022-12-07 12:51 ` Dan Carpenter
2022-12-07 13:06 ` Vladimir Oltean
2022-12-07 13:02 ` Vladimir Oltean
2022-12-07 19:41 ` David Laight
2022-12-08 16:58 ` Vladimir Oltean
2022-12-09 8:21 ` Uladzislau Koshchanka
2022-12-09 14:30 ` Vladimir Oltean
2022-12-09 21:01 ` Uladzislau Koshchanka
2022-12-09 22:06 ` Vladimir Oltean
2022-12-09 22:07 ` Vladimir Oltean
2022-12-10 0:44 ` [PATCH] lib: packing: replace bit_reverse() with bitrev8() Uladzislau Koshchanka
2022-12-12 23:04 ` Vladimir Oltean
2022-12-12 23:30 ` patchwork-bot+netdevbpf