* [PATCH v5 0/6]  Implement byteswap and update references
@ 2022-05-23 14:50 Lin Liu
  2022-05-23 14:50 ` [PATCH v5 2/6] crypto/vmac: Simplify code with byteswap Lin Liu
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Lin Liu @ 2022-05-23 14:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Lin Liu, Andrew Cooper, Daniel De Graaf, Daniel P. Smith,
	George Dunlap, Ian Jackson, Jan Beulich, Julien Grall,
	Bertrand Marquis, Konrad Rzeszutek Wilk, Roger Pau Monné,
	Ross Lagerwall, Stefano Stabellini, Volodymyr Babchuk, Wei Liu


Lin Liu (6):
  xen: implement byteswap
  crypto/vmac: Simplify code with byteswap
  arm64/find_next_bit: Remove ext2_swab()
  xen: Switch to byteswap
  tools: Use new byteswap helper
  byteorder: Remove byteorder

 .../libs/guest/xg_dom_decompress_unsafe_xz.c  |   5 +
 .../guest/xg_dom_decompress_unsafe_zstd.c     |   3 +-
 xen/arch/arm/arm64/lib/find_next_bit.c        |  36 +---
 xen/arch/arm/include/asm/byteorder.h          |   6 +-
 xen/arch/x86/include/asm/byteorder.h          |  34 +---
 xen/common/device_tree.c                      |  44 ++---
 xen/common/libelf/libelf-private.h            |   6 +-
 xen/common/xz/private.h                       |   2 +-
 xen/crypto/vmac.c                             |  76 +-------
 xen/include/xen/byteorder.h                   |  56 ++++++
 xen/include/xen/byteorder/big_endian.h        | 102 ----------
 xen/include/xen/byteorder/generic.h           |  68 -------
 xen/include/xen/byteorder/little_endian.h     | 102 ----------
 xen/include/xen/byteorder/swab.h              | 183 ------------------
 xen/include/xen/byteswap.h                    |  52 +++++
 xen/include/xen/compiler.h                    |  20 ++
 xen/include/xen/unaligned.h                   |  12 +-
 17 files changed, 184 insertions(+), 623 deletions(-)
 create mode 100644 xen/include/xen/byteorder.h
 delete mode 100644 xen/include/xen/byteorder/big_endian.h
 delete mode 100644 xen/include/xen/byteorder/generic.h
 delete mode 100644 xen/include/xen/byteorder/little_endian.h
 delete mode 100644 xen/include/xen/byteorder/swab.h
 create mode 100644 xen/include/xen/byteswap.h
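Patch 1 of the series ("xen: implement byteswap") is not shown in this excerpt. As a rough sketch only, generic byteswap helpers of the kind the series introduces can be layered on compiler builtins (hypothetical shape, assuming GCC/Clang; the actual xen/byteswap.h may differ):

```c
#include <limits.h>
#include <stdint.h>

/* Byte-reverse fixed-width integers via GCC/Clang builtins. */
static inline uint16_t bswap16(uint16_t x) { return __builtin_bswap16(x); }
static inline uint32_t bswap32(uint32_t x) { return __builtin_bswap32(x); }
static inline uint64_t bswap64(uint64_t x) { return __builtin_bswap64(x); }

/* "unsigned long" helper, picking the right width for the platform. */
static inline unsigned long bswap_ul(unsigned long x)
{
#if ULONG_MAX > 0xffffffffUL
    return (unsigned long)bswap64(x);
#else
    return (unsigned long)bswap32(x);
#endif
}
```

An unsigned-long variant like bswap_ul is what lets patch 3 drop the ad-hoc ext2_swab() helpers.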

-- 
2.27.0



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 2/6] crypto/vmac: Simplify code with byteswap
  2022-05-23 14:50 [PATCH v5 0/6] Implement byteswap and update references Lin Liu
@ 2022-05-23 14:50 ` Lin Liu
  2022-05-23 14:50 ` [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab() Lin Liu
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Lin Liu @ 2022-05-23 14:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Lin Liu, Jan Beulich, Andrew Cooper, George Dunlap, Julien Grall,
	Bertrand Marquis, Stefano Stabellini, Wei Liu

This file has its own byte-swapping implementation. Clean up the
code by using xen/byteswap.h instead.

No functional change.
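To illustrate why this is not a functional change: the mask-and-shift fallback being deleted from vmac.c computes the same full byte reversal as a bswap64 helper. A minimal reproduction of the removed fallback (names are illustrative, not the helpers this series adds):

```c
#include <stdint.h>

/* The open-coded 32-bit fallback removed from vmac.c, for comparison. */
static uint32_t mask_bswap32(uint32_t x)
{
    return ((x & 0xff000000u) >> 24) | ((x & 0x00ff0000u) >> 8) |
           ((x & 0x0000ff00u) << 8)  | ((x & 0x000000ffu) << 24);
}

/* The removed 64-bit fallback: swap the halves, byte-reversing each. */
static uint64_t mask_bswap64(uint64_t x)
{
    return ((uint64_t)mask_bswap32((uint32_t)x) << 32) |
           mask_bswap32((uint32_t)(x >> 32));
}
```

Both this fallback and the per-architecture asm variants reduce to the same byte reversal, so replacing GET_REVERSED_64(p) with bswap64(*(uint64_t *)(p)) preserves behaviour.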

Signed-off-by: Lin Liu <lin.liu@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
---
 xen/crypto/vmac.c | 76 ++---------------------------------------------
 1 file changed, 3 insertions(+), 73 deletions(-)

diff --git a/xen/crypto/vmac.c b/xen/crypto/vmac.c
index 294dd16a52..acb4e015f5 100644
--- a/xen/crypto/vmac.c
+++ b/xen/crypto/vmac.c
@@ -8,6 +8,7 @@
 
 /* start for Xen */
 #include <xen/init.h>
+#include <xen/byteswap.h>
 #include <xen/types.h>
 #include <xen/lib.h>
 #include <crypto/vmac.h>
@@ -50,7 +51,6 @@ const uint64_t mpoly = UINT64_C(0x1fffffff1fffffff);  /* Poly key mask     */
  * MUL64: 64x64->128-bit multiplication
  * PMUL64: assumes top bits cleared on inputs
  * ADD128: 128x128->128-bit addition
- * GET_REVERSED_64: load and byte-reverse 64-bit word  
  * ----------------------------------------------------------------------- */
 
 /* ----------------------------------------------------------------------- */
@@ -68,22 +68,6 @@ const uint64_t mpoly = UINT64_C(0x1fffffff1fffffff);  /* Poly key mask     */
 
 #define PMUL64 MUL64
 
-#define GET_REVERSED_64(p)                                                \
-    ({uint64_t x;                                                         \
-     asm ("bswapq %0" : "=r" (x) : "0"(*(uint64_t *)(p))); x;})
-
-/* ----------------------------------------------------------------------- */
-#elif (__GNUC__ && __i386__)
-/* ----------------------------------------------------------------------- */
-
-#define GET_REVERSED_64(p)                                                \
-    ({ uint64_t x;                                                        \
-    uint32_t *tp = (uint32_t *)(p);                                       \
-    asm  ("bswap %%edx\n\t"                                               \
-          "bswap %%eax"                                                   \
-    : "=A"(x)                                                             \
-    : "a"(tp[1]), "d"(tp[0]));                                            \
-    x; })
 
 /* ----------------------------------------------------------------------- */
 #elif (__GNUC__ && __ppc64__)
@@ -103,37 +87,6 @@ const uint64_t mpoly = UINT64_C(0x1fffffff1fffffff);  /* Poly key mask     */
 
 #define PMUL64 MUL64
 
-#define GET_REVERSED_64(p)                                                \
-    ({ uint32_t hi, lo, *_p = (uint32_t *)(p);                            \
-       asm volatile ("lwbrx %0, %1, %2" : "=r"(lo) : "b%"(0), "r"(_p) );  \
-       asm volatile ("lwbrx %0, %1, %2" : "=r"(hi) : "b%"(4), "r"(_p) );  \
-       ((uint64_t)hi << 32) | (uint64_t)lo; } )
-
-/* ----------------------------------------------------------------------- */
-#elif (__GNUC__ && (__ppc__ || __PPC__))
-/* ----------------------------------------------------------------------- */
-
-#define GET_REVERSED_64(p)                                                \
-    ({ uint32_t hi, lo, *_p = (uint32_t *)(p);                            \
-       asm volatile ("lwbrx %0, %1, %2" : "=r"(lo) : "b%"(0), "r"(_p) );  \
-       asm volatile ("lwbrx %0, %1, %2" : "=r"(hi) : "b%"(4), "r"(_p) );  \
-       ((uint64_t)hi << 32) | (uint64_t)lo; } )
-
-/* ----------------------------------------------------------------------- */
-#elif (__GNUC__ && (__ARMEL__ || __ARM__))
-/* ----------------------------------------------------------------------- */
-
-#define bswap32(v)                                                        \
-({ uint32_t tmp,out;                                                      \
-    asm volatile(                                                         \
-        "eor    %1, %2, %2, ror #16\n"                                    \
-        "bic    %1, %1, #0x00ff0000\n"                                    \
-        "mov    %0, %2, ror #8\n"                                         \
-        "eor    %0, %0, %1, lsr #8"                                       \
-    : "=r" (out), "=&r" (tmp)                                             \
-    : "r" (v));                                                           \
-    out;})
-
 /* ----------------------------------------------------------------------- */
 #elif _MSC_VER
 /* ----------------------------------------------------------------------- */
@@ -154,11 +107,6 @@ const uint64_t mpoly = UINT64_C(0x1fffffff1fffffff);  /* Poly key mask     */
         (rh) += (ih) + ((rl) < (_il));                               \
     }
 
-#if _MSC_VER >= 1300
-#define GET_REVERSED_64(p) _byteswap_uint64(*(uint64_t *)(p))
-#pragma intrinsic(_byteswap_uint64)
-#endif
-
 #if _MSC_VER >= 1400 && \
     (!defined(__INTEL_COMPILER) || __INTEL_COMPILER >= 1000)
 #define MUL32(i1,i2)    (__emulu((uint32_t)(i1),(uint32_t)(i2)))
@@ -219,24 +167,6 @@ const uint64_t mpoly = UINT64_C(0x1fffffff1fffffff);  /* Poly key mask     */
     }
 #endif
 
-#ifndef GET_REVERSED_64
-#ifndef bswap64
-#ifndef bswap32
-#define bswap32(x)                                                        \
-  ({ uint32_t bsx = (x);                                                  \
-      ((((bsx) & 0xff000000u) >> 24) | (((bsx) & 0x00ff0000u) >>  8) |    \
-       (((bsx) & 0x0000ff00u) <<  8) | (((bsx) & 0x000000ffu) << 24)); })
-#endif
-#define bswap64(x)                                                        \
-     ({ union { uint64_t ll; uint32_t l[2]; } w, r;                       \
-         w.ll = (x);                                                      \
-         r.l[0] = bswap32 (w.l[1]);                                       \
-         r.l[1] = bswap32 (w.l[0]);                                       \
-         r.ll; })
-#endif
-#define GET_REVERSED_64(p) bswap64(*(uint64_t *)(p)) 
-#endif
-
 /* ----------------------------------------------------------------------- */
 
 #if (VMAC_PREFER_BIG_ENDIAN)
@@ -247,9 +177,9 @@ const uint64_t mpoly = UINT64_C(0x1fffffff1fffffff);  /* Poly key mask     */
 
 #if (VMAC_ARCH_BIG_ENDIAN)
 #  define get64BE(ptr) (*(uint64_t *)(ptr))
-#  define get64LE(ptr) GET_REVERSED_64(ptr)
+#  define get64LE(ptr) bswap64(*(uint64_t *)(ptr))
 #else /* assume little-endian */
-#  define get64BE(ptr) GET_REVERSED_64(ptr)
+#  define get64BE(ptr) bswap64(*(uint64_t *)(ptr))
 #  define get64LE(ptr) (*(uint64_t *)(ptr))
 #endif
 
-- 
2.27.0




* [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab()
  2022-05-23 14:50 [PATCH v5 0/6] Implement byteswap and update references Lin Liu
  2022-05-23 14:50 ` [PATCH v5 2/6] crypto/vmac: Simplify code with byteswap Lin Liu
@ 2022-05-23 14:50 ` Lin Liu
  2022-05-23 14:53   ` Julien Grall
  2022-05-23 14:50 ` [PATCH v5 4/6] xen: Switch to byteswap Lin Liu
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Lin Liu @ 2022-05-23 14:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Lin Liu, Julien Grall, Andrew Cooper, Stefano Stabellini,
	Julien Grall, Bertrand Marquis, Volodymyr Babchuk

ext2 has nothing to do with this logic.  Clean up the code by using
xen/byteswap.h, which now provides an unsigned long helper.

No functional change.
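For context on what the swap is doing here: on a big-endian CPU, a little-endian bitmap word must be byte-reversed before scanning for set bits. A small sketch (assuming a 64-bit unsigned long, as on arm64; bswap_ul is modelled with a builtin here, not the series' actual implementation):

```c
#include <stdint.h>

/* Hypothetical stand-in for the bswap_ul helper this patch switches to. */
static unsigned long bswap_ul(unsigned long x)
{
    return (unsigned long)__builtin_bswap64((uint64_t)x);
}

/* On a big-endian host, bit 0 of the little-endian view sits in the
 * most significant byte of the loaded word; swapping restores the LE
 * bit numbering before scanning (analogous to __ffs() in the patch). */
static unsigned long first_set_bit_le(unsigned long be_word)
{
    return (unsigned long)__builtin_ctzl(bswap_ul(be_word));
}
```

Since ext2_swabp(p) was just a byte swap of *p, the replacement tmp = bswap_ul(*p) is equivalent.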

Signed-off-by: Lin Liu <lin.liu@citrix.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
---
 xen/arch/arm/arm64/lib/find_next_bit.c | 36 +++++---------------------
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/arm64/lib/find_next_bit.c b/xen/arch/arm/arm64/lib/find_next_bit.c
index 8ebf8bfe97..e3b3720ff4 100644
--- a/xen/arch/arm/arm64/lib/find_next_bit.c
+++ b/xen/arch/arm/arm64/lib/find_next_bit.c
@@ -161,30 +161,6 @@ EXPORT_SYMBOL(find_first_zero_bit);
 
 #ifdef __BIG_ENDIAN
 
-/* include/linux/byteorder does not support "unsigned long" type */
-static inline unsigned long ext2_swabp(const unsigned long * x)
-{
-#if BITS_PER_LONG == 64
-	return (unsigned long) __swab64p((u64 *) x);
-#elif BITS_PER_LONG == 32
-	return (unsigned long) __swab32p((u32 *) x);
-#else
-#error BITS_PER_LONG not defined
-#endif
-}
-
-/* include/linux/byteorder doesn't support "unsigned long" type */
-static inline unsigned long ext2_swab(const unsigned long y)
-{
-#if BITS_PER_LONG == 64
-	return (unsigned long) __swab64((u64) y);
-#elif BITS_PER_LONG == 32
-	return (unsigned long) __swab32((u32) y);
-#else
-#error BITS_PER_LONG not defined
-#endif
-}
-
 #ifndef find_next_zero_bit_le
 unsigned long find_next_zero_bit_le(const void *addr, unsigned
 		long size, unsigned long offset)
@@ -199,7 +175,7 @@ unsigned long find_next_zero_bit_le(const void *addr, unsigned
 	size -= result;
 	offset &= (BITS_PER_LONG - 1UL);
 	if (offset) {
-		tmp = ext2_swabp(p++);
+		tmp = bswap_ul(*p++);
 		tmp |= (~0UL >> (BITS_PER_LONG - offset));
 		if (size < BITS_PER_LONG)
 			goto found_first;
@@ -217,7 +193,7 @@ unsigned long find_next_zero_bit_le(const void *addr, unsigned
 	}
 	if (!size)
 		return result;
-	tmp = ext2_swabp(p);
+	tmp = bswap_ul(*p);
 found_first:
 	tmp |= ~0UL << size;
 	if (tmp == ~0UL)	/* Are any bits zero? */
@@ -226,7 +202,7 @@ found_middle:
 	return result + ffz(tmp);
 
 found_middle_swap:
-	return result + ffz(ext2_swab(tmp));
+	return result + ffz(bswap_ul(tmp));
 }
 EXPORT_SYMBOL(find_next_zero_bit_le);
 #endif
@@ -245,7 +221,7 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 	size -= result;
 	offset &= (BITS_PER_LONG - 1UL);
 	if (offset) {
-		tmp = ext2_swabp(p++);
+		tmp = bswap_ul(*p++);
 		tmp &= (~0UL << offset);
 		if (size < BITS_PER_LONG)
 			goto found_first;
@@ -264,7 +240,7 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 	}
 	if (!size)
 		return result;
-	tmp = ext2_swabp(p);
+	tmp = bswap_ul(*p);
 found_first:
 	tmp &= (~0UL >> (BITS_PER_LONG - size));
 	if (tmp == 0UL)		/* Are any bits set? */
@@ -273,7 +249,7 @@ found_middle:
 	return result + __ffs(tmp);
 
 found_middle_swap:
-	return result + __ffs(ext2_swab(tmp));
+	return result + __ffs(bswap_ul(tmp));
 }
 EXPORT_SYMBOL(find_next_bit_le);
 #endif
-- 
2.27.0




* [PATCH v5 4/6] xen: Switch to byteswap
  2022-05-23 14:50 [PATCH v5 0/6] Implement byteswap and update references Lin Liu
  2022-05-23 14:50 ` [PATCH v5 2/6] crypto/vmac: Simplify code with byteswap Lin Liu
  2022-05-23 14:50 ` [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab() Lin Liu
@ 2022-05-23 14:50 ` Lin Liu
  2022-05-23 14:56   ` Julien Grall
  2022-05-23 14:50 ` [PATCH v5 5/6] tools: Use new byteswap helper Lin Liu
  2022-05-23 14:50 ` [PATCH v5 6/6] byteorder: Remove byteorder Lin Liu
  4 siblings, 1 reply; 16+ messages in thread
From: Lin Liu @ 2022-05-23 14:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Lin Liu, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu

Update the code to use the byteswap helpers to swap bytes.
be*_to_cpup(p) is shorthand for be*_to_cpu(*p); update callers to
use the latter explicitly.

No functional change.
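The pattern being applied throughout: spell out the dereference so only the value-based conversion primitive remains. A sketch of the equivalence (assuming a little-endian host, where be32_to_cpu is a byte swap; on a big-endian host it is the identity):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative be32_to_cpu for a little-endian host. */
static uint32_t be32_to_cpu(uint32_t x)
{
    return __builtin_bswap32(x);
}

/* be32_to_cpup(p) was merely be32_to_cpu(*p); writing the dereference
 * at the call site leaves one conversion primitive, as in the
 * device-tree cell reads this patch touches. */
static uint32_t read_be32_cell(const void *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v));
    return be32_to_cpu(v);
}
```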

Signed-off-by: Lin Liu <lin.liu@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>
Changes in v5:
- Add git message to explain be*to_cpu helper

Changes in v4:
- Revert the __force in type casting

Changes in v3:
- Update xen/common/device_tree.c to use be32_to_cpu
- Keep const in type cast in unaligned.h
---

 xen/common/device_tree.c           | 44 +++++++++++++++---------------
 xen/common/libelf/libelf-private.h |  6 ++--
 xen/common/xz/private.h            |  2 +-
 xen/include/xen/unaligned.h        | 12 ++++----
 4 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 4aae281e89..70d3be3be6 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -171,7 +171,7 @@ bool_t dt_property_read_u32(const struct dt_device_node *np,
     if ( !val || len < sizeof(*out_value) )
         return 0;
 
-    *out_value = be32_to_cpup(val);
+    *out_value = be32_to_cpu(*val);
 
     return 1;
 }
@@ -264,7 +264,7 @@ int dt_property_read_variable_u32_array(const struct dt_device_node *np,
 
     count = sz;
     while ( count-- )
-        *out_values++ = be32_to_cpup(val++);
+        *out_values++ = be32_to_cpu(*val++);
 
     return sz;
 }
@@ -490,7 +490,7 @@ static int __dt_n_addr_cells(const struct dt_device_node *np, bool_t parent)
 
         ip = dt_get_property(np, "#address-cells", NULL);
         if ( ip )
-            return be32_to_cpup(ip);
+            return be32_to_cpu(*ip);
     } while ( np->parent );
     /* No #address-cells property for the root node */
     return DT_ROOT_NODE_ADDR_CELLS_DEFAULT;
@@ -507,7 +507,7 @@ int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
 
         ip = dt_get_property(np, "#size-cells", NULL);
         if ( ip )
-            return be32_to_cpup(ip);
+            return be32_to_cpu(*ip);
     } while ( np->parent );
     /* No #address-cells property for the root node */
     return DT_ROOT_NODE_SIZE_CELLS_DEFAULT;
@@ -660,7 +660,7 @@ static void dt_bus_pci_count_cells(const struct dt_device_node *np,
 static unsigned int dt_bus_pci_get_flags(const __be32 *addr)
 {
     unsigned int flags = 0;
-    u32 w = be32_to_cpup(addr);
+    u32 w = be32_to_cpu(*addr);
 
     switch((w >> 24) & 0x03) {
     case 0x01:
@@ -1077,7 +1077,7 @@ dt_irq_find_parent(const struct dt_device_node *child)
         if ( parp == NULL )
             p = dt_get_parent(child);
         else
-            p = dt_find_node_by_phandle(be32_to_cpup(parp));
+            p = dt_find_node_by_phandle(be32_to_cpu(*parp));
         child = p;
     } while ( p && dt_get_property(p, "#interrupt-cells", NULL) == NULL );
 
@@ -1110,7 +1110,7 @@ unsigned int dt_number_of_irq(const struct dt_device_node *device)
     intlen /= sizeof(*intspec);
 
     dt_dprintk(" using 'interrupts' property\n");
-    dt_dprintk(" intspec=%d intlen=%d\n", be32_to_cpup(intspec), intlen);
+    dt_dprintk(" intspec=%d intlen=%d\n", be32_to_cpu(*intspec), intlen);
 
     /* Look for the interrupt parent. */
     p = dt_irq_find_parent(device);
@@ -1241,7 +1241,7 @@ int dt_for_each_irq_map(const struct dt_device_node *dev,
         imaplen -= addrsize + intsize;
 
         /* Get the interrupt parent */
-        ipar = dt_find_node_by_phandle(be32_to_cpup(imap));
+        ipar = dt_find_node_by_phandle(be32_to_cpu(*imap));
         imap++;
         --imaplen;
 
@@ -1358,8 +1358,8 @@ static int dt_irq_map_raw(const struct dt_device_node *parent,
     int match, i;
 
     dt_dprintk("dt_irq_map_raw: par=%s,intspec=[0x%08x 0x%08x...],ointsize=%d\n",
-               parent->full_name, be32_to_cpup(intspec),
-               be32_to_cpup(intspec + 1), ointsize);
+               parent->full_name, be32_to_cpu(*intspec),
+               be32_to_cpu(*(intspec+1)), ointsize);
 
     ipar = parent;
 
@@ -1471,7 +1471,7 @@ static int dt_irq_map_raw(const struct dt_device_node *parent,
             dt_dprintk(" -> match=%d (imaplen=%d)\n", match, imaplen);
 
             /* Get the interrupt parent */
-            newpar = dt_find_node_by_phandle(be32_to_cpup(imap));
+            newpar = dt_find_node_by_phandle(be32_to_cpu(*imap));
             imap++;
             --imaplen;
 
@@ -1565,7 +1565,7 @@ int dt_device_get_raw_irq(const struct dt_device_node *device,
     intlen /= sizeof(*intspec);
 
     dt_dprintk(" using 'interrupts' property\n");
-    dt_dprintk(" intspec=%d intlen=%d\n", be32_to_cpup(intspec), intlen);
+    dt_dprintk(" intspec=%d intlen=%d\n", be32_to_cpu(*intspec), intlen);
 
     /* Look for the interrupt parent. */
     p = dt_irq_find_parent(device);
@@ -1676,7 +1676,7 @@ static int __dt_parse_phandle_with_args(const struct dt_device_node *np,
          * If phandle is 0, then it is an empty entry with no
          * arguments.  Skip forward to the next entry.
          * */
-        phandle = be32_to_cpup(list++);
+        phandle = be32_to_cpu(*list++);
         if ( phandle )
         {
             /*
@@ -1745,7 +1745,7 @@ static int __dt_parse_phandle_with_args(const struct dt_device_node *np,
                 out_args->np = node;
                 out_args->args_count = count;
                 for ( i = 0; i < count; i++ )
-                    out_args->args[i] = be32_to_cpup(list++);
+                    out_args->args[i] = be32_to_cpu(*list++);
             }
 
             /* Found it! return success */
@@ -1826,7 +1826,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
     int has_name = 0;
     int new_format = 0;
 
-    tag = be32_to_cpup((__be32 *)(*p));
+    tag = be32_to_cpu(*(__be32 *)(*p));
     if ( tag != FDT_BEGIN_NODE )
     {
         printk(XENLOG_WARNING "Weird tag at start of node: %x\n", tag);
@@ -1919,7 +1919,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
         u32 sz, noff;
         const char *pname;
 
-        tag = be32_to_cpup((__be32 *)(*p));
+        tag = be32_to_cpu(*(__be32 *)(*p));
         if ( tag == FDT_NOP )
         {
             *p += 4;
@@ -1928,8 +1928,8 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
         if ( tag != FDT_PROP )
             break;
         *p += 4;
-        sz = be32_to_cpup((__be32 *)(*p));
-        noff = be32_to_cpup((__be32 *)((*p) + 4));
+        sz = be32_to_cpu(*(__be32 *)(*p));
+        noff = be32_to_cpu(*(__be32 *)((*p) + 4));
         *p += 8;
         if ( fdt_version(fdt) < 0x10 )
             *p = ROUNDUP(*p, sz >= 8 ? 8 : 4);
@@ -1956,13 +1956,13 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
                  (strcmp(pname, "linux,phandle") == 0) )
             {
                 if ( np->phandle == 0 )
-                    np->phandle = be32_to_cpup((__be32*)*p);
+                    np->phandle = be32_to_cpu(*(__be32*)*p);
             }
             /* And we process the "ibm,phandle" property
              * used in pSeries dynamic device tree
              * stuff */
             if ( strcmp(pname, "ibm,phandle") == 0 )
-                np->phandle = be32_to_cpup((__be32 *)*p);
+                np->phandle = be32_to_cpu(*(__be32 *)*p);
             pp->name = pname;
             pp->length = sz;
             pp->value = (void *)*p;
@@ -2034,7 +2034,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
             *p += 4;
         else
             mem = unflatten_dt_node(fdt, mem, p, np, allnextpp, fpsize);
-        tag = be32_to_cpup((__be32 *)(*p));
+        tag = be32_to_cpu(*(__be32 *)(*p));
     }
     if ( tag != FDT_END_NODE )
     {
@@ -2086,7 +2086,7 @@ static void __init __unflatten_device_tree(const void *fdt,
     /* Second pass, do actual unflattening */
     start = ((unsigned long)fdt) + fdt_off_dt_struct(fdt);
     unflatten_dt_node(fdt, mem, &start, NULL, &allnextp, 0);
-    if ( be32_to_cpup((__be32 *)start) != FDT_END )
+    if ( be32_to_cpu(*(__be32 *)start) != FDT_END )
         printk(XENLOG_WARNING "Weird tag at end of tree: %08x\n",
                   *((u32 *)start));
     if ( be32_to_cpu(((__be32 *)mem)[size / 4]) != 0xdeadbeef )
diff --git a/xen/common/libelf/libelf-private.h b/xen/common/libelf/libelf-private.h
index 47db679966..6062598fb8 100644
--- a/xen/common/libelf/libelf-private.h
+++ b/xen/common/libelf/libelf-private.h
@@ -31,9 +31,9 @@
    printk(fmt, ## args )
 
 #define strtoull(str, end, base) simple_strtoull(str, end, base)
-#define bswap_16(x) swab16(x)
-#define bswap_32(x) swab32(x)
-#define bswap_64(x) swab64(x)
+#define bswap_16(x) bswap16(x)
+#define bswap_32(x) bswap32(x)
+#define bswap_64(x) bswap64(x)
 
 #else /* !__XEN__ */
 
diff --git a/xen/common/xz/private.h b/xen/common/xz/private.h
index 511343fcc2..97131fa714 100644
--- a/xen/common/xz/private.h
+++ b/xen/common/xz/private.h
@@ -28,7 +28,7 @@ static inline void put_unaligned_le32(u32 val, void *p)
 
 #endif
 
-#define get_le32(p) le32_to_cpup((const uint32_t *)(p))
+#define get_le32(p) le32_to_cpu(*(const uint32_t *)(p))
 
 #define false 0
 #define true 1
diff --git a/xen/include/xen/unaligned.h b/xen/include/xen/unaligned.h
index 0a2b16d05d..56807bd157 100644
--- a/xen/include/xen/unaligned.h
+++ b/xen/include/xen/unaligned.h
@@ -20,7 +20,7 @@
 
 static inline uint16_t get_unaligned_be16(const void *p)
 {
-	return be16_to_cpup(p);
+	return be16_to_cpu(*(const uint16_t *)p);
 }
 
 static inline void put_unaligned_be16(uint16_t val, void *p)
@@ -30,7 +30,7 @@ static inline void put_unaligned_be16(uint16_t val, void *p)
 
 static inline uint32_t get_unaligned_be32(const void *p)
 {
-	return be32_to_cpup(p);
+	return be32_to_cpu(*(const uint32_t *)p);
 }
 
 static inline void put_unaligned_be32(uint32_t val, void *p)
@@ -40,7 +40,7 @@ static inline void put_unaligned_be32(uint32_t val, void *p)
 
 static inline uint64_t get_unaligned_be64(const void *p)
 {
-	return be64_to_cpup(p);
+	return be64_to_cpu(*(const uint64_t *)p);
 }
 
 static inline void put_unaligned_be64(uint64_t val, void *p)
@@ -50,7 +50,7 @@ static inline void put_unaligned_be64(uint64_t val, void *p)
 
 static inline uint16_t get_unaligned_le16(const void *p)
 {
-	return le16_to_cpup(p);
+	return le16_to_cpu(*(const uint16_t *)p);
 }
 
 static inline void put_unaligned_le16(uint16_t val, void *p)
@@ -60,7 +60,7 @@ static inline void put_unaligned_le16(uint16_t val, void *p)
 
 static inline uint32_t get_unaligned_le32(const void *p)
 {
-	return le32_to_cpup(p);
+	return le32_to_cpu(*(const uint32_t *)p);
 }
 
 static inline void put_unaligned_le32(uint32_t val, void *p)
@@ -70,7 +70,7 @@ static inline void put_unaligned_le32(uint32_t val, void *p)
 
 static inline uint64_t get_unaligned_le64(const void *p)
 {
-	return le64_to_cpup(p);
+	return le64_to_cpu(*(const uint64_t *)p);
 }
 
 static inline void put_unaligned_le64(uint64_t val, void *p)
-- 
2.27.0




* [PATCH v5 5/6] tools: Use new byteswap helper
  2022-05-23 14:50 [PATCH v5 0/6] Implement byteswap and update references Lin Liu
                   ` (2 preceding siblings ...)
  2022-05-23 14:50 ` [PATCH v5 4/6] xen: Switch to byteswap Lin Liu
@ 2022-05-23 14:50 ` Lin Liu
  2022-05-23 14:50 ` [PATCH v5 6/6] byteorder: Remove byteorder Lin Liu
  4 siblings, 0 replies; 16+ messages in thread
From: Lin Liu @ 2022-05-23 14:50 UTC (permalink / raw)
  To: xen-devel; +Cc: Lin Liu, Wei Liu, Anthony PERARD, Juergen Gross

Include the new header to use the new byteswap helpers.

No functional change.

Signed-off-by: Lin Liu <lin.liu@citrix.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/libs/guest/xg_dom_decompress_unsafe_xz.c   | 5 +++++
 tools/libs/guest/xg_dom_decompress_unsafe_zstd.c | 3 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_dom_decompress_unsafe_xz.c b/tools/libs/guest/xg_dom_decompress_unsafe_xz.c
index fc48198741..493427d517 100644
--- a/tools/libs/guest/xg_dom_decompress_unsafe_xz.c
+++ b/tools/libs/guest/xg_dom_decompress_unsafe_xz.c
@@ -34,6 +34,11 @@ static inline u32 le32_to_cpup(const u32 *p)
 	return cpu_to_le32(*p);
 }
 
+static inline u32 le32_to_cpu(u32 val)
+{
+   return le32_to_cpup((const u32 *)&val);
+}
+
 #define __force
 #define always_inline
 
diff --git a/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c b/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c
index 01eafaaaa6..b06f2e767f 100644
--- a/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c
+++ b/tools/libs/guest/xg_dom_decompress_unsafe_zstd.c
@@ -31,7 +31,8 @@ typedef uint64_t __be64;
 
 #define __BYTEORDER_HAS_U64__
 #define __TYPES_H__ /* xen/types.h guard */
-#include "../../xen/include/xen/byteorder/little_endian.h"
+#define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__
+#include "../../xen/include/xen/byteorder.h"
 #define __ASM_UNALIGNED_H__ /* asm/unaligned.h guard */
 #include "../../xen/include/xen/unaligned.h"
 #include "../../xen/include/xen/xxhash.h"
-- 
2.27.0




* [PATCH v5 6/6] byteorder: Remove byteorder
  2022-05-23 14:50 [PATCH v5 0/6] Implement byteswap and update references Lin Liu
                   ` (3 preceding siblings ...)
  2022-05-23 14:50 ` [PATCH v5 5/6] tools: Use new byteswap helper Lin Liu
@ 2022-05-23 14:50 ` Lin Liu
  2022-05-24  2:48   ` Jiamei Xie
  4 siblings, 1 reply; 16+ messages in thread
From: Lin Liu @ 2022-05-23 14:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Lin Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall,
	Bertrand Marquis, Stefano Stabellini, Wei Liu

include/xen/byteswap.h has simplified the interface, so remove the
old byteorder headers.

No functional change.
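The headers removed below defined a large family of macros (__cpu_to_le32, __le32_to_cpup, __cpu_to_be32s, ...). With the swap helpers, endian conversions reduce to a handful of expressions; a sketch of the layering (assuming a little-endian build; a big-endian build inverts the pairing, and these names are illustrative, not the exact xen/byteorder.h contents):

```c
#include <stdint.h>

static inline uint32_t bswap32(uint32_t x) { return __builtin_bswap32(x); }

/* On a little-endian CPU, LE conversions are identities and BE
 * conversions are byte swaps. */
#define cpu_to_le32(x) ((uint32_t)(x))
#define le32_to_cpu(x) ((uint32_t)(x))
#define cpu_to_be32(x) bswap32((uint32_t)(x))
#define be32_to_cpu(x) bswap32((uint32_t)(x))
```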

Signed-off-by: Lin Liu <lin.liu@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
---
 xen/include/xen/byteorder/big_endian.h    | 102 ------------
 xen/include/xen/byteorder/generic.h       |  68 --------
 xen/include/xen/byteorder/little_endian.h | 102 ------------
 xen/include/xen/byteorder/swab.h          | 183 ----------------------
 4 files changed, 455 deletions(-)
 delete mode 100644 xen/include/xen/byteorder/big_endian.h
 delete mode 100644 xen/include/xen/byteorder/generic.h
 delete mode 100644 xen/include/xen/byteorder/little_endian.h
 delete mode 100644 xen/include/xen/byteorder/swab.h

diff --git a/xen/include/xen/byteorder/big_endian.h b/xen/include/xen/byteorder/big_endian.h
deleted file mode 100644
index 40eb80a390..0000000000
--- a/xen/include/xen/byteorder/big_endian.h
+++ /dev/null
@@ -1,102 +0,0 @@
-#ifndef __XEN_BYTEORDER_BIG_ENDIAN_H__
-#define __XEN_BYTEORDER_BIG_ENDIAN_H__
-
-#ifndef __BIG_ENDIAN
-#define __BIG_ENDIAN 4321
-#endif
-#ifndef __BIG_ENDIAN_BITFIELD
-#define __BIG_ENDIAN_BITFIELD
-#endif
-
-#include <xen/types.h>
-#include <xen/byteorder/swab.h>
-
-#define __constant_cpu_to_le64(x) ((__force __le64)___constant_swab64((x)))
-#define __constant_le64_to_cpu(x) ___constant_swab64((__force __u64)(__le64)(x))
-#define __constant_cpu_to_le32(x) ((__force __le32)___constant_swab32((x)))
-#define __constant_le32_to_cpu(x) ___constant_swab32((__force __u32)(__le32)(x))
-#define __constant_cpu_to_le16(x) ((__force __le16)___constant_swab16((x)))
-#define __constant_le16_to_cpu(x) ___constant_swab16((__force __u16)(__le16)(x))
-#define __constant_cpu_to_be64(x) ((__force __be64)(__u64)(x))
-#define __constant_be64_to_cpu(x) ((__force __u64)(__be64)(x))
-#define __constant_cpu_to_be32(x) ((__force __be32)(__u32)(x))
-#define __constant_be32_to_cpu(x) ((__force __u32)(__be32)(x))
-#define __constant_cpu_to_be16(x) ((__force __be16)(__u16)(x))
-#define __constant_be16_to_cpu(x) ((__force __u16)(__be16)(x))
-#define __cpu_to_le64(x) ((__force __le64)__swab64((x)))
-#define __le64_to_cpu(x) __swab64((__force __u64)(__le64)(x))
-#define __cpu_to_le32(x) ((__force __le32)__swab32((x)))
-#define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
-#define __cpu_to_le16(x) ((__force __le16)__swab16((x)))
-#define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
-#define __cpu_to_be64(x) ((__force __be64)(__u64)(x))
-#define __be64_to_cpu(x) ((__force __u64)(__be64)(x))
-#define __cpu_to_be32(x) ((__force __be32)(__u32)(x))
-#define __be32_to_cpu(x) ((__force __u32)(__be32)(x))
-#define __cpu_to_be16(x) ((__force __be16)(__u16)(x))
-#define __be16_to_cpu(x) ((__force __u16)(__be16)(x))
-
-static inline __le64 __cpu_to_le64p(const __u64 *p)
-{
-    return (__force __le64)__swab64p(p);
-}
-static inline __u64 __le64_to_cpup(const __le64 *p)
-{
-    return __swab64p((__u64 *)p);
-}
-static inline __le32 __cpu_to_le32p(const __u32 *p)
-{
-    return (__force __le32)__swab32p(p);
-}
-static inline __u32 __le32_to_cpup(const __le32 *p)
-{
-    return __swab32p((__u32 *)p);
-}
-static inline __le16 __cpu_to_le16p(const __u16 *p)
-{
-    return (__force __le16)__swab16p(p);
-}
-static inline __u16 __le16_to_cpup(const __le16 *p)
-{
-    return __swab16p((__u16 *)p);
-}
-static inline __be64 __cpu_to_be64p(const __u64 *p)
-{
-    return (__force __be64)*p;
-}
-static inline __u64 __be64_to_cpup(const __be64 *p)
-{
-    return (__force __u64)*p;
-}
-static inline __be32 __cpu_to_be32p(const __u32 *p)
-{
-    return (__force __be32)*p;
-}
-static inline __u32 __be32_to_cpup(const __be32 *p)
-{
-    return (__force __u32)*p;
-}
-static inline __be16 __cpu_to_be16p(const __u16 *p)
-{
-    return (__force __be16)*p;
-}
-static inline __u16 __be16_to_cpup(const __be16 *p)
-{
-    return (__force __u16)*p;
-}
-#define __cpu_to_le64s(x) __swab64s((x))
-#define __le64_to_cpus(x) __swab64s((x))
-#define __cpu_to_le32s(x) __swab32s((x))
-#define __le32_to_cpus(x) __swab32s((x))
-#define __cpu_to_le16s(x) __swab16s((x))
-#define __le16_to_cpus(x) __swab16s((x))
-#define __cpu_to_be64s(x) do {} while (0)
-#define __be64_to_cpus(x) do {} while (0)
-#define __cpu_to_be32s(x) do {} while (0)
-#define __be32_to_cpus(x) do {} while (0)
-#define __cpu_to_be16s(x) do {} while (0)
-#define __be16_to_cpus(x) do {} while (0)
-
-#include <xen/byteorder/generic.h>
-
-#endif /* __XEN_BYTEORDER_BIG_ENDIAN_H__ */
diff --git a/xen/include/xen/byteorder/generic.h b/xen/include/xen/byteorder/generic.h
deleted file mode 100644
index 8a0006b755..0000000000
--- a/xen/include/xen/byteorder/generic.h
+++ /dev/null
@@ -1,68 +0,0 @@
-#ifndef __XEN_BYTEORDER_GENERIC_H__
-#define __XEN_BYTEORDER_GENERIC_H__
-
-/*
- * Generic Byte-reordering support
- *
- * The "... p" macros, like le64_to_cpup, can be used with pointers
- * to unaligned data, but there will be a performance penalty on 
- * some architectures.  Use get_unaligned for unaligned data.
- *
- * The following macros are to be defined by <asm/byteorder.h>:
- *
- * Conversion of XX-bit integers (16- 32- or 64-)
- * between native CPU format and little/big endian format
- * 64-bit stuff only defined for proper architectures
- *     cpu_to_[bl]eXX(__uXX x)
- *     [bl]eXX_to_cpu(__uXX x)
- *
- * The same, but takes a pointer to the value to convert
- *     cpu_to_[bl]eXXp(__uXX x)
- *     [bl]eXX_to_cpup(__uXX x)
- *
- * The same, but change in situ
- *     cpu_to_[bl]eXXs(__uXX x)
- *     [bl]eXX_to_cpus(__uXX x)
- *
- * See asm-foo/byteorder.h for examples of how to provide
- * architecture-optimized versions
- */
-
-#define cpu_to_le64 __cpu_to_le64
-#define le64_to_cpu __le64_to_cpu
-#define cpu_to_le32 __cpu_to_le32
-#define le32_to_cpu __le32_to_cpu
-#define cpu_to_le16 __cpu_to_le16
-#define le16_to_cpu __le16_to_cpu
-#define cpu_to_be64 __cpu_to_be64
-#define be64_to_cpu __be64_to_cpu
-#define cpu_to_be32 __cpu_to_be32
-#define be32_to_cpu __be32_to_cpu
-#define cpu_to_be16 __cpu_to_be16
-#define be16_to_cpu __be16_to_cpu
-#define cpu_to_le64p __cpu_to_le64p
-#define le64_to_cpup __le64_to_cpup
-#define cpu_to_le32p __cpu_to_le32p
-#define le32_to_cpup __le32_to_cpup
-#define cpu_to_le16p __cpu_to_le16p
-#define le16_to_cpup __le16_to_cpup
-#define cpu_to_be64p __cpu_to_be64p
-#define be64_to_cpup __be64_to_cpup
-#define cpu_to_be32p __cpu_to_be32p
-#define be32_to_cpup __be32_to_cpup
-#define cpu_to_be16p __cpu_to_be16p
-#define be16_to_cpup __be16_to_cpup
-#define cpu_to_le64s __cpu_to_le64s
-#define le64_to_cpus __le64_to_cpus
-#define cpu_to_le32s __cpu_to_le32s
-#define le32_to_cpus __le32_to_cpus
-#define cpu_to_le16s __cpu_to_le16s
-#define le16_to_cpus __le16_to_cpus
-#define cpu_to_be64s __cpu_to_be64s
-#define be64_to_cpus __be64_to_cpus
-#define cpu_to_be32s __cpu_to_be32s
-#define be32_to_cpus __be32_to_cpus
-#define cpu_to_be16s __cpu_to_be16s
-#define be16_to_cpus __be16_to_cpus
-
-#endif /* __XEN_BYTEORDER_GENERIC_H__ */
diff --git a/xen/include/xen/byteorder/little_endian.h b/xen/include/xen/byteorder/little_endian.h
deleted file mode 100644
index 4955632793..0000000000
--- a/xen/include/xen/byteorder/little_endian.h
+++ /dev/null
@@ -1,102 +0,0 @@
-#ifndef __XEN_BYTEORDER_LITTLE_ENDIAN_H__
-#define __XEN_BYTEORDER_LITTLE_ENDIAN_H__
-
-#ifndef __LITTLE_ENDIAN
-#define __LITTLE_ENDIAN 1234
-#endif
-#ifndef __LITTLE_ENDIAN_BITFIELD
-#define __LITTLE_ENDIAN_BITFIELD
-#endif
-
-#include <xen/types.h>
-#include <xen/byteorder/swab.h>
-
-#define __constant_cpu_to_le64(x) ((__force __le64)(__u64)(x))
-#define __constant_le64_to_cpu(x) ((__force __u64)(__le64)(x))
-#define __constant_cpu_to_le32(x) ((__force __le32)(__u32)(x))
-#define __constant_le32_to_cpu(x) ((__force __u32)(__le32)(x))
-#define __constant_cpu_to_le16(x) ((__force __le16)(__u16)(x))
-#define __constant_le16_to_cpu(x) ((__force __u16)(__le16)(x))
-#define __constant_cpu_to_be64(x) ((__force __be64)___constant_swab64((x)))
-#define __constant_be64_to_cpu(x) ___constant_swab64((__force __u64)(__be64)(x))
-#define __constant_cpu_to_be32(x) ((__force __be32)___constant_swab32((x)))
-#define __constant_be32_to_cpu(x) ___constant_swab32((__force __u32)(__be32)(x))
-#define __constant_cpu_to_be16(x) ((__force __be16)___constant_swab16((x)))
-#define __constant_be16_to_cpu(x) ___constant_swab16((__force __u16)(__be16)(x))
-#define __cpu_to_le64(x) ((__force __le64)(__u64)(x))
-#define __le64_to_cpu(x) ((__force __u64)(__le64)(x))
-#define __cpu_to_le32(x) ((__force __le32)(__u32)(x))
-#define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
-#define __cpu_to_le16(x) ((__force __le16)(__u16)(x))
-#define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
-#define __cpu_to_be64(x) ((__force __be64)__swab64((x)))
-#define __be64_to_cpu(x) __swab64((__force __u64)(__be64)(x))
-#define __cpu_to_be32(x) ((__force __be32)__swab32((x)))
-#define __be32_to_cpu(x) __swab32((__force __u32)(__be32)(x))
-#define __cpu_to_be16(x) ((__force __be16)__swab16((x)))
-#define __be16_to_cpu(x) __swab16((__force __u16)(__be16)(x))
-
-static inline __le64 __cpu_to_le64p(const __u64 *p)
-{
-    return (__force __le64)*p;
-}
-static inline __u64 __le64_to_cpup(const __le64 *p)
-{
-    return (__force __u64)*p;
-}
-static inline __le32 __cpu_to_le32p(const __u32 *p)
-{
-    return (__force __le32)*p;
-}
-static inline __u32 __le32_to_cpup(const __le32 *p)
-{
-    return (__force __u32)*p;
-}
-static inline __le16 __cpu_to_le16p(const __u16 *p)
-{
-    return (__force __le16)*p;
-}
-static inline __u16 __le16_to_cpup(const __le16 *p)
-{
-    return (__force __u16)*p;
-}
-static inline __be64 __cpu_to_be64p(const __u64 *p)
-{
-    return (__force __be64)__swab64p(p);
-}
-static inline __u64 __be64_to_cpup(const __be64 *p)
-{
-    return __swab64p((__u64 *)p);
-}
-static inline __be32 __cpu_to_be32p(const __u32 *p)
-{
-    return (__force __be32)__swab32p(p);
-}
-static inline __u32 __be32_to_cpup(const __be32 *p)
-{
-    return __swab32p((__u32 *)p);
-}
-static inline __be16 __cpu_to_be16p(const __u16 *p)
-{
-    return (__force __be16)__swab16p(p);
-}
-static inline __u16 __be16_to_cpup(const __be16 *p)
-{
-    return __swab16p((__u16 *)p);
-}
-#define __cpu_to_le64s(x) do {} while (0)
-#define __le64_to_cpus(x) do {} while (0)
-#define __cpu_to_le32s(x) do {} while (0)
-#define __le32_to_cpus(x) do {} while (0)
-#define __cpu_to_le16s(x) do {} while (0)
-#define __le16_to_cpus(x) do {} while (0)
-#define __cpu_to_be64s(x) __swab64s((x))
-#define __be64_to_cpus(x) __swab64s((x))
-#define __cpu_to_be32s(x) __swab32s((x))
-#define __be32_to_cpus(x) __swab32s((x))
-#define __cpu_to_be16s(x) __swab16s((x))
-#define __be16_to_cpus(x) __swab16s((x))
-
-#include <xen/byteorder/generic.h>
-
-#endif /* __XEN_BYTEORDER_LITTLE_ENDIAN_H__ */
diff --git a/xen/include/xen/byteorder/swab.h b/xen/include/xen/byteorder/swab.h
deleted file mode 100644
index b7e30f0503..0000000000
--- a/xen/include/xen/byteorder/swab.h
+++ /dev/null
@@ -1,183 +0,0 @@
-#ifndef __XEN_BYTEORDER_SWAB_H__
-#define __XEN_BYTEORDER_SWAB_H__
-
-/*
- * Byte-swapping, independently from CPU endianness
- *     swabXX[ps]?(foo)
- *
- * Francois-Rene Rideau <fare@tunes.org> 19971205
- *    separated swab functions from cpu_to_XX,
- *    to clean up support for bizarre-endian architectures.
- */
-
-/* casts are necessary for constants, because we never know how for sure
- * how U/UL/ULL map to __u16, __u32, __u64. At least not in a portable way.
- */
-#define ___swab16(x)                                    \
-({                                                      \
-    __u16 __x = (x);                                    \
-    ((__u16)(                                           \
-        (((__u16)(__x) & (__u16)0x00ffU) << 8) |        \
-        (((__u16)(__x) & (__u16)0xff00U) >> 8) ));      \
-})
-
-#define ___swab32(x)                                            \
-({                                                              \
-    __u32 __x = (x);                                            \
-    ((__u32)(                                                   \
-        (((__u32)(__x) & (__u32)0x000000ffUL) << 24) |          \
-        (((__u32)(__x) & (__u32)0x0000ff00UL) <<  8) |          \
-        (((__u32)(__x) & (__u32)0x00ff0000UL) >>  8) |          \
-        (((__u32)(__x) & (__u32)0xff000000UL) >> 24) ));        \
-})
-
-#define ___swab64(x)                                                       \
-({                                                                         \
-    __u64 __x = (x);                                                       \
-    ((__u64)(                                                              \
-        (__u64)(((__u64)(__x) & (__u64)0x00000000000000ffULL) << 56) |     \
-        (__u64)(((__u64)(__x) & (__u64)0x000000000000ff00ULL) << 40) |     \
-        (__u64)(((__u64)(__x) & (__u64)0x0000000000ff0000ULL) << 24) |     \
-        (__u64)(((__u64)(__x) & (__u64)0x00000000ff000000ULL) <<  8) |     \
-            (__u64)(((__u64)(__x) & (__u64)0x000000ff00000000ULL) >>  8) | \
-        (__u64)(((__u64)(__x) & (__u64)0x0000ff0000000000ULL) >> 24) |     \
-        (__u64)(((__u64)(__x) & (__u64)0x00ff000000000000ULL) >> 40) |     \
-        (__u64)(((__u64)(__x) & (__u64)0xff00000000000000ULL) >> 56) ));   \
-})
-
-#define ___constant_swab16(x)                   \
-    ((__u16)(                                   \
-        (((__u16)(x) & (__u16)0x00ffU) << 8) |  \
-        (((__u16)(x) & (__u16)0xff00U) >> 8) ))
-#define ___constant_swab32(x)                           \
-    ((__u32)(                                           \
-        (((__u32)(x) & (__u32)0x000000ffUL) << 24) |    \
-        (((__u32)(x) & (__u32)0x0000ff00UL) <<  8) |    \
-        (((__u32)(x) & (__u32)0x00ff0000UL) >>  8) |    \
-        (((__u32)(x) & (__u32)0xff000000UL) >> 24) ))
-#define ___constant_swab64(x)                                            \
-    ((__u64)(                                                            \
-        (__u64)(((__u64)(x) & (__u64)0x00000000000000ffULL) << 56) |     \
-        (__u64)(((__u64)(x) & (__u64)0x000000000000ff00ULL) << 40) |     \
-        (__u64)(((__u64)(x) & (__u64)0x0000000000ff0000ULL) << 24) |     \
-        (__u64)(((__u64)(x) & (__u64)0x00000000ff000000ULL) <<  8) |     \
-            (__u64)(((__u64)(x) & (__u64)0x000000ff00000000ULL) >>  8) | \
-        (__u64)(((__u64)(x) & (__u64)0x0000ff0000000000ULL) >> 24) |     \
-        (__u64)(((__u64)(x) & (__u64)0x00ff000000000000ULL) >> 40) |     \
-        (__u64)(((__u64)(x) & (__u64)0xff00000000000000ULL) >> 56) ))
-
-/*
- * provide defaults when no architecture-specific optimization is detected
- */
-#ifndef __arch__swab16
-#  define __arch__swab16(x) ({ __u16 __tmp = (x) ; ___swab16(__tmp); })
-#endif
-#ifndef __arch__swab32
-#  define __arch__swab32(x) ({ __u32 __tmp = (x) ; ___swab32(__tmp); })
-#endif
-#ifndef __arch__swab64
-#  define __arch__swab64(x) ({ __u64 __tmp = (x) ; ___swab64(__tmp); })
-#endif
-
-#ifndef __arch__swab16p
-#  define __arch__swab16p(x) __arch__swab16(*(x))
-#endif
-#ifndef __arch__swab32p
-#  define __arch__swab32p(x) __arch__swab32(*(x))
-#endif
-#ifndef __arch__swab64p
-#  define __arch__swab64p(x) __arch__swab64(*(x))
-#endif
-
-#ifndef __arch__swab16s
-#  define __arch__swab16s(x) do { *(x) = __arch__swab16p((x)); } while (0)
-#endif
-#ifndef __arch__swab32s
-#  define __arch__swab32s(x) do { *(x) = __arch__swab32p((x)); } while (0)
-#endif
-#ifndef __arch__swab64s
-#  define __arch__swab64s(x) do { *(x) = __arch__swab64p((x)); } while (0)
-#endif
-
-
-/*
- * Allow constant folding
- */
-#if defined(__GNUC__) && defined(__OPTIMIZE__)
-#  define __swab16(x) \
-(__builtin_constant_p((__u16)(x)) ? \
- ___swab16((x)) : \
- __fswab16((x)))
-#  define __swab32(x) \
-(__builtin_constant_p((__u32)(x)) ? \
- ___swab32((x)) : \
- __fswab32((x)))
-#  define __swab64(x) \
-(__builtin_constant_p((__u64)(x)) ? \
- ___swab64((x)) : \
- __fswab64((x)))
-#else
-#  define __swab16(x) __fswab16(x)
-#  define __swab32(x) __fswab32(x)
-#  define __swab64(x) __fswab64(x)
-#endif /* OPTIMIZE */
-
-
-static inline __attribute_const__ __u16 __fswab16(__u16 x)
-{
-    return __arch__swab16(x);
-}
-static inline __u16 __swab16p(const __u16 *x)
-{
-    return __arch__swab16p(x);
-}
-static inline void __swab16s(__u16 *addr)
-{
-    __arch__swab16s(addr);
-}
-
-static inline __attribute_const__ __u32 __fswab32(__u32 x)
-{
-    return __arch__swab32(x);
-}
-static inline __u32 __swab32p(const __u32 *x)
-{
-    return __arch__swab32p(x);
-}
-static inline void __swab32s(__u32 *addr)
-{
-    __arch__swab32s(addr);
-}
-
-#ifdef __BYTEORDER_HAS_U64__
-static inline __attribute_const__ __u64 __fswab64(__u64 x)
-{
-#  ifdef __SWAB_64_THRU_32__
-    __u32 h = x >> 32;
-        __u32 l = x & ((1ULL<<32)-1);
-        return (((__u64)__swab32(l)) << 32) | ((__u64)(__swab32(h)));
-#  else
-    return __arch__swab64(x);
-#  endif
-}
-static inline __u64 __swab64p(const __u64 *x)
-{
-    return __arch__swab64p(x);
-}
-static inline void __swab64s(__u64 *addr)
-{
-    __arch__swab64s(addr);
-}
-#endif /* __BYTEORDER_HAS_U64__ */
-
-#define swab16 __swab16
-#define swab32 __swab32
-#define swab64 __swab64
-#define swab16p __swab16p
-#define swab32p __swab32p
-#define swab64p __swab64p
-#define swab16s __swab16s
-#define swab32s __swab32s
-#define swab64s __swab64s
-
-#endif /* __XEN_BYTEORDER_SWAB_H__ */
-- 
2.27.0
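For readers following the series: the open-coded swab.h deleted above is what the new xen/include/xen/byteswap.h supersedes. As a rough sketch only (the function names here are hypothetical, not the series' actual code), the replacement approach wraps compiler builtins instead of mask-and-shift macros:

```c
#include <stdint.h>

/* Hypothetical sketch of a builtin-based byteswap header.  Modern
 * GCC/Clang provide __builtin_bswap{16,32,64}(), which emit a single
 * bswap/rev instruction and still constant-fold compile-time values --
 * covering what the old __builtin_constant_p() selection between
 * ___swabXX() and __fswabXX() achieved by hand. */
static inline uint16_t bswap16(uint16_t x) { return __builtin_bswap16(x); }
static inline uint32_t bswap32(uint32_t x) { return __builtin_bswap32(x); }
static inline uint64_t bswap64(uint64_t x) { return __builtin_bswap64(x); }
```

Beyond brevity, letting the compiler see a builtin (rather than a statement-expression macro) is what enables the single-instruction lowering on both x86 and Arm.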



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab()
  2022-05-23 14:50 ` [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab() Lin Liu
@ 2022-05-23 14:53   ` Julien Grall
  2022-05-24  1:35     ` 回复: " Lin Liu (刘林)
  0 siblings, 1 reply; 16+ messages in thread
From: Julien Grall @ 2022-05-23 14:53 UTC (permalink / raw)
  To: Lin Liu, xen-devel
  Cc: Julien Grall, Andrew Cooper, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi,

On 23/05/2022 15:50, Lin Liu wrote:
> ext2 has nothing to do with this logic.

You have again not addressed my comment. If you don't understand my 
comment then please ask.

Cheers,

-- 
Julien Grall



* Re: [PATCH v5 4/6] xen: Switch to byteswap
  2022-05-23 14:50 ` [PATCH v5 4/6] xen: Switch to byteswap Lin Liu
@ 2022-05-23 14:56   ` Julien Grall
  2022-05-23 15:38     ` Andrew Cooper
  0 siblings, 1 reply; 16+ messages in thread
From: Julien Grall @ 2022-05-23 14:56 UTC (permalink / raw)
  To: Lin Liu, xen-devel
  Cc: Stefano Stabellini, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu

Hi,

On 23/05/2022 15:50, Lin Liu wrote:
> Update to use byteswap to swap bytes.
> be*_to_cpup(p) is short for be*_to_cpu(*p); update to use the latter
> explicitly.

But why? I really don't have a suggestion on the comment because I 
disagree (and AFAICT Jan as well) with the approach.

In any case, I think it would be helpful if you participate in the 
discussion rather than sending a new version quickly. This would make 
sure you don't spend time on resending with unfinished discussion.

Cheers,

-- 
Julien Grall



* Re: [PATCH v5 4/6] xen: Switch to byteswap
  2022-05-23 14:56   ` Julien Grall
@ 2022-05-23 15:38     ` Andrew Cooper
  2022-05-23 16:05       ` Julien Grall
  2022-05-23 16:14       ` Jan Beulich
  0 siblings, 2 replies; 16+ messages in thread
From: Andrew Cooper @ 2022-05-23 15:38 UTC (permalink / raw)
  To: Julien Grall, Lin Liu (刘林), xen-devel
  Cc: Stefano Stabellini, Bertrand Marquis, George Dunlap, Jan Beulich,
	Wei Liu

On 23/05/2022 15:56, Julien Grall wrote:
> Hi,
>
> On 23/05/2022 15:50, Lin Liu wrote:
>> Update to use byteswap to swap bytes.
>> be*_to_cpup(p) is short for be*_to_cpu(*p); update to use the latter
>> explicitly.
>
> But why?

Because deleting code obfuscation constructs *is* the point of the cleanup.

> I really don't have a suggestion on the comment because I disagree
> (and AFAICT Jan as well) with the approach.

Dropping the obfuscation has uncovered pre-existing bugs in the
hypervisor.  The series stands on its own merit.

While I can't help if you like it or not, it really does bring an
improvement to code quality and legibility.

If you have no technical objections, and no suggestions for how to do it
differently while retaining the quality and legibility improvements,
then "I don't like it" doesn't block it going in.

I specifically do like this change, because it does improve the codebase.

~Andrew


* Re: [PATCH v5 4/6] xen: Switch to byteswap
  2022-05-23 15:38     ` Andrew Cooper
@ 2022-05-23 16:05       ` Julien Grall
  2022-05-24  2:42         ` Lin Liu (刘林)
  2022-05-23 16:14       ` Jan Beulich
  1 sibling, 1 reply; 16+ messages in thread
From: Julien Grall @ 2022-05-23 16:05 UTC (permalink / raw)
  To: Andrew Cooper, Lin Liu (刘林), xen-devel
  Cc: Stefano Stabellini, Bertrand Marquis, George Dunlap, Jan Beulich,
	Wei Liu

Hi Andrew,

On 23/05/2022 16:38, Andrew Cooper wrote:
> On 23/05/2022 15:56, Julien Grall wrote:
>> Hi,
>>
>> On 23/05/2022 15:50, Lin Liu wrote:
>>> Update to use byteswap to swap bytes.
>>> be*_to_cpup(p) is short for be*_to_cpu(*p); update to use the latter
>>> explicitly.
>>
>> But why?
> 
> Because deleting code obfuscation constructs *is* the point of the cleanup.
> 
>> I really don't have a suggestion on the comment because I disagree
>> (and AFAICT Jan as well) with the approach.
> 
> Dropping the obfuscation has uncovered pre-existing bugs in the
> hypervisor.  The series stands on its own merit.

I am guessing you mean that we don't handle unaligned access? If so, yes 
I agree this helped with that.

> 
> While I can't help if you like it or not, it really does bring an
> improvement to code quality and legibility.
> 
> If you have no technical objections, and no suggestions for how to do it
> differently while retaining the quality and legibility improvements,
> then "I don't like it" doesn't block it going in.

And you don't like the existing code :). I am willing to compromise, but 
for that I need to understand why the existing code is technically not 
correct.

So far, all the arguments you provided in v3 were either a matter of 
taste or IMHO bogus.

Your taste is neither better nor worse than mine, at which point we need 
someone else to break the tie.

If I am not mistaken, Jan is also objecting to the proposal. At which 
point, we are 2 vs 1.

So there are three choices here:
   1) You find two other maintainers (including one Arm maintainer) to 
agree with you
   2) You provide arguments that will sway one of us to your side
   3) We keep be32_to_cpu*() (they are simple wrappers and I am willing to 
write the code).

Cheers,

-- 
Julien Grall



* Re: [PATCH v5 4/6] xen: Switch to byteswap
  2022-05-23 15:38     ` Andrew Cooper
  2022-05-23 16:05       ` Julien Grall
@ 2022-05-23 16:14       ` Jan Beulich
  1 sibling, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2022-05-23 16:14 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Bertrand Marquis, George Dunlap, Wei Liu,
	Julien Grall, Lin Liu (刘林),
	xen-devel

On 23.05.2022 17:38, Andrew Cooper wrote:
> On 23/05/2022 15:56, Julien Grall wrote:
>> On 23/05/2022 15:50, Lin Liu wrote:
>>> Update to use byteswap to swap bytes.
>>> be*_to_cpup(p) is short for be*_to_cpu(*p); update to use the latter
>>> explicitly.
>>
>> But why?
> 
> Because deleting code obfuscation constructs *is* the point of the cleanup.

It's obfuscation only as long as it is not implemented correctly, i.e.
dealing with unaligned data. Then "be*_to_cpup(p) is short for
be*_to_cpu(*p)" no longer applies.

Jan
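
Jan's unaligned-data argument can be illustrated with a sketch (illustrative code only, not from the series; the function names are hypothetical). A pointer-taking helper implemented as a plain dereference bakes in an alignment assumption, whereas a byte-by-byte implementation is safe on any pointer and endian-independent, so it genuinely does more than the value-taking form:

```c
#include <stdint.h>

/* Naive pointer variant: the dereference itself may fault on a
 * strict-alignment architecture if p is not 4-byte aligned, and the
 * swap assumes a little-endian host. */
static inline uint32_t be32_to_cpup_naive(const uint32_t *p)
{
    return __builtin_bswap32(*p);
}

/* Alignment- and endian-independent variant: assemble the value from
 * individual bytes.  This is what a correct pointer-taking helper
 * would have to do to be more than a wrapper. */
static inline uint32_t get_be32(const void *p)
{
    const unsigned char *b = p;

    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] <<  8) |  (uint32_t)b[3];
}
```

This is the distinction behind the get_unaligned() remark in the removed generic.h comment: only the second form is valid on arbitrary byte streams such as device-tree blobs.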




* 回复: [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab()
  2022-05-23 14:53   ` Julien Grall
@ 2022-05-24  1:35     ` Lin Liu (刘林)
  2022-05-25  7:53       ` Julien Grall
  0 siblings, 1 reply; 16+ messages in thread
From: Lin Liu (刘林) @ 2022-05-24  1:35 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Julien Grall, Andrew Cooper, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk


> Hi,

> On 23/05/2022 15:50, Lin Liu wrote:
>> ext2 has nothing to do with this logic.

> You have again not addressed my comment. If you don't understand my comment then please ask.

> Cheers,

> --
> Julien Grall

Sorry, I missed this one as I saw this patch had already got some tags. I suppose your comment requires a commit message update;
I will update it if a newer version is required.

Cheers,
Lin


* Re: [PATCH v5 4/6] xen: Switch to byteswap
  2022-05-23 16:05       ` Julien Grall
@ 2022-05-24  2:42         ` Lin Liu (刘林)
  0 siblings, 0 replies; 16+ messages in thread
From: Lin Liu (刘林) @ 2022-05-24  2:42 UTC (permalink / raw)
  To: Julien Grall, Andrew Cooper, xen-devel
  Cc: Stefano Stabellini, Bertrand Marquis, George Dunlap, Jan Beulich,
	Wei Liu


>Hi Andrew,
>
>On 23/05/2022 16:38, Andrew Cooper wrote:
>> On 23/05/2022 15:56, Julien Grall wrote:
>>> Hi,
>>>
>>> On 23/05/2022 15:50, Lin Liu wrote:
>>>> Update to use byteswap to swap bytes.
>>>> be*_to_cpup(p) is short for be*_to_cpu(*p); update to use the latter
>>>> explicitly.
>>>
>>> But why?
>>
>> Because deleting code obfuscation constructs *is* the point of the cleanup.
>>
>>> I really don't have a suggestion on the comment because I disagree
>>> (and AFAICT Jan as well) with the approach.
>>
>> Dropping the obfuscation has uncovered pre-existing bugs in the
>> hypervisor.  The series stands on its own merit.
>
>I am guessing you mean that we don't handle unaligned access? If so, yes
>I agree this helped with that.
>
>>
>> While I can't help if you like it or not, it really does bring an
>> improvement to code quality and legibility.
>>
>> If you have no technical objections, and no suggestions for how to do it
>> differently while retaining the quality and legibility improvements,
>> then "I don't like it" doesn't block it going in.
>
>And you don't like the existing code :). I am willing to compromise, but
>for that I need to understand why the existing code is technically not
>correct.
>
>So far, all the arguments you provided in v3 were either a matter of
>taste or IMHO bogus.
>
>Your taste is neither better nor worse than mine, at which point we need
>someone else to break the tie.
>
>If I am not mistaken, Jan is also objecting to the proposal. At which
>point, we are 2 vs 1.
>
>So there are three choices here:
>   1) You find two other maintainers (including one Arm maintainer) to
>agree with you
>   2) You provide arguments that will sway one of us to your side
>   3) We keep be32_to_cpu*() (they are simple wrappers and I am willing to
>write the code).

Personally, I agree with Andrew Cooper that we should remove the be*_to_cpup helpers, as the current
implementation is just a wrapper, like

#ifndef __arch__swab16p
#  define __arch__swab16p(x) __arch__swab16(*(x))
#endif

With be*_to_cpup removed, the interface stays simple and clear, and callers
dereference the pointer explicitly.
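
The wrapper relationship described here can be sketched like this (illustrative only — simplified stand-ins for the Xen helpers, assuming a little-endian host so the big-endian conversion is a byte swap):

```c
#include <stdint.h>

/* Value-taking conversion: on a little-endian host, big-endian to CPU
 * order is a byte swap (on a big-endian host it would be a no-op). */
static inline uint32_t be32_to_cpu(uint32_t x)
{
    return __builtin_bswap32(x);
}

/* The pointer-taking helper under discussion is nothing more than a
 * dereference in front of the value-taking conversion. */
static inline uint32_t be32_to_cpup(const uint32_t *p)
{
    return be32_to_cpu(*p);
}
```

With the wrapper gone, every call site spells the dereference itself: be32_to_cpup(p) becomes be32_to_cpu(*p), which is exactly the mechanical conversion the patch performs.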

I am very happy to see the three choices, and I hope we can reach an agreement on this soon.

Cheers,
---
Lin



* RE: [PATCH v5 6/6] byteorder: Remove byteorder
  2022-05-23 14:50 ` [PATCH v5 6/6] byteorder: Remove byteorder Lin Liu
@ 2022-05-24  2:48   ` Jiamei Xie
  2022-05-24  3:07     ` Lin Liu (刘林)
  0 siblings, 1 reply; 16+ messages in thread
From: Jiamei Xie @ 2022-05-24  2:48 UTC (permalink / raw)
  To: Lin Liu, xen-devel
  Cc: Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall,
	Bertrand Marquis, Stefano Stabellini, Wei Liu

Hi Lin,
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Lin
> Liu
> Sent: 2022年5月23日 22:51
> To: xen-devel@lists.xenproject.org
> Cc: Lin Liu <lin.liu@citrix.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Bertrand
> Marquis <Bertrand.Marquis@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: [PATCH v5 6/6] byteorder: Remove byteorder
> 
> include/xen/byteswap.h has simplify the interface, just clean
> the old interface
There is a typo: s/simplify/simplified/.

Best wishes
Jiamei Xie
> 
> No functional change
> 
> Signed-off-by: Lin Liu <lin.liu@citrix.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Bertrand Marquis <bertrand.marquis@arm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>  xen/include/xen/byteorder/big_endian.h    | 102 ------------
>  xen/include/xen/byteorder/generic.h       |  68 --------
>  xen/include/xen/byteorder/little_endian.h | 102 ------------
>  xen/include/xen/byteorder/swab.h          | 183 ----------------------
>  4 files changed, 455 deletions(-)
>  delete mode 100644 xen/include/xen/byteorder/big_endian.h
>  delete mode 100644 xen/include/xen/byteorder/generic.h
>  delete mode 100644 xen/include/xen/byteorder/little_endian.h
>  delete mode 100644 xen/include/xen/byteorder/swab.h
> 
> diff --git a/xen/include/xen/byteorder/big_endian.h
> b/xen/include/xen/byteorder/big_endian.h
> deleted file mode 100644
> index 40eb80a390..0000000000
> --- a/xen/include/xen/byteorder/big_endian.h
> +++ /dev/null
> @@ -1,102 +0,0 @@
> -#ifndef __XEN_BYTEORDER_BIG_ENDIAN_H__
> -#define __XEN_BYTEORDER_BIG_ENDIAN_H__
> -
> -#ifndef __BIG_ENDIAN
> -#define __BIG_ENDIAN 4321
> -#endif
> -#ifndef __BIG_ENDIAN_BITFIELD
> -#define __BIG_ENDIAN_BITFIELD
> -#endif
> -
> -#include <xen/types.h>
> -#include <xen/byteorder/swab.h>
> -
> -#define __constant_cpu_to_le64(x) ((__force
> __le64)___constant_swab64((x)))
> -#define __constant_le64_to_cpu(x) ___constant_swab64((__force
> __u64)(__le64)(x))
> -#define __constant_cpu_to_le32(x) ((__force
> __le32)___constant_swab32((x)))
> -#define __constant_le32_to_cpu(x) ___constant_swab32((__force
> __u32)(__le32)(x))
> -#define __constant_cpu_to_le16(x) ((__force
> __le16)___constant_swab16((x)))
> -#define __constant_le16_to_cpu(x) ___constant_swab16((__force
> __u16)(__le16)(x))
> -#define __constant_cpu_to_be64(x) ((__force __be64)(__u64)(x))
> -#define __constant_be64_to_cpu(x) ((__force __u64)(__be64)(x))
> -#define __constant_cpu_to_be32(x) ((__force __be32)(__u32)(x))
> -#define __constant_be32_to_cpu(x) ((__force __u32)(__be32)(x))
> -#define __constant_cpu_to_be16(x) ((__force __be16)(__u16)(x))
> -#define __constant_be16_to_cpu(x) ((__force __u16)(__be16)(x))
> -#define __cpu_to_le64(x) ((__force __le64)__swab64((x)))
> -#define __le64_to_cpu(x) __swab64((__force __u64)(__le64)(x))
> -#define __cpu_to_le32(x) ((__force __le32)__swab32((x)))
> -#define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
> -#define __cpu_to_le16(x) ((__force __le16)__swab16((x)))
> -#define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
> -#define __cpu_to_be64(x) ((__force __be64)(__u64)(x))
> -#define __be64_to_cpu(x) ((__force __u64)(__be64)(x))
> -#define __cpu_to_be32(x) ((__force __be32)(__u32)(x))
> -#define __be32_to_cpu(x) ((__force __u32)(__be32)(x))
> -#define __cpu_to_be16(x) ((__force __be16)(__u16)(x))
> -#define __be16_to_cpu(x) ((__force __u16)(__be16)(x))
> -
> -static inline __le64 __cpu_to_le64p(const __u64 *p)
> -{
> -    return (__force __le64)__swab64p(p);
> -}
> -static inline __u64 __le64_to_cpup(const __le64 *p)
> -{
> -    return __swab64p((__u64 *)p);
> -}
> -static inline __le32 __cpu_to_le32p(const __u32 *p)
> -{
> -    return (__force __le32)__swab32p(p);
> -}
> -static inline __u32 __le32_to_cpup(const __le32 *p)
> -{
> -    return __swab32p((__u32 *)p);
> -}
> -static inline __le16 __cpu_to_le16p(const __u16 *p)
> -{
> -    return (__force __le16)__swab16p(p);
> -}
> -static inline __u16 __le16_to_cpup(const __le16 *p)
> -{
> -    return __swab16p((__u16 *)p);
> -}
> -static inline __be64 __cpu_to_be64p(const __u64 *p)
> -{
> -    return (__force __be64)*p;
> -}
> -static inline __u64 __be64_to_cpup(const __be64 *p)
> -{
> -    return (__force __u64)*p;
> -}
> -static inline __be32 __cpu_to_be32p(const __u32 *p)
> -{
> -    return (__force __be32)*p;
> -}
> -static inline __u32 __be32_to_cpup(const __be32 *p)
> -{
> -    return (__force __u32)*p;
> -}
> -static inline __be16 __cpu_to_be16p(const __u16 *p)
> -{
> -    return (__force __be16)*p;
> -}
> -static inline __u16 __be16_to_cpup(const __be16 *p)
> -{
> -    return (__force __u16)*p;
> -}
> -#define __cpu_to_le64s(x) __swab64s((x))
> -#define __le64_to_cpus(x) __swab64s((x))
> -#define __cpu_to_le32s(x) __swab32s((x))
> -#define __le32_to_cpus(x) __swab32s((x))
> -#define __cpu_to_le16s(x) __swab16s((x))
> -#define __le16_to_cpus(x) __swab16s((x))
> -#define __cpu_to_be64s(x) do {} while (0)
> -#define __be64_to_cpus(x) do {} while (0)
> -#define __cpu_to_be32s(x) do {} while (0)
> -#define __be32_to_cpus(x) do {} while (0)
> -#define __cpu_to_be16s(x) do {} while (0)
> -#define __be16_to_cpus(x) do {} while (0)
> -
> -#include <xen/byteorder/generic.h>
> -
> -#endif /* __XEN_BYTEORDER_BIG_ENDIAN_H__ */
> diff --git a/xen/include/xen/byteorder/generic.h
> b/xen/include/xen/byteorder/generic.h
> deleted file mode 100644
> index 8a0006b755..0000000000
> --- a/xen/include/xen/byteorder/generic.h
> +++ /dev/null
> @@ -1,68 +0,0 @@
> -#ifndef __XEN_BYTEORDER_GENERIC_H__
> -#define __XEN_BYTEORDER_GENERIC_H__
> -
> -/*
> - * Generic Byte-reordering support
> - *
> - * The "... p" macros, like le64_to_cpup, can be used with pointers
> - * to unaligned data, but there will be a performance penalty on
> - * some architectures.  Use get_unaligned for unaligned data.
> - *
> - * The following macros are to be defined by <asm/byteorder.h>:
> - *
> - * Conversion of XX-bit integers (16- 32- or 64-)
> - * between native CPU format and little/big endian format
> - * 64-bit stuff only defined for proper architectures
> - *     cpu_to_[bl]eXX(__uXX x)
> - *     [bl]eXX_to_cpu(__uXX x)
> - *
> - * The same, but takes a pointer to the value to convert
> - *     cpu_to_[bl]eXXp(__uXX x)
> - *     [bl]eXX_to_cpup(__uXX x)
> - *
> - * The same, but change in situ
> - *     cpu_to_[bl]eXXs(__uXX x)
> - *     [bl]eXX_to_cpus(__uXX x)
> - *
> - * See asm-foo/byteorder.h for examples of how to provide
> - * architecture-optimized versions
> - */
> -
> -#define cpu_to_le64 __cpu_to_le64
> -#define le64_to_cpu __le64_to_cpu
> -#define cpu_to_le32 __cpu_to_le32
> -#define le32_to_cpu __le32_to_cpu
> -#define cpu_to_le16 __cpu_to_le16
> -#define le16_to_cpu __le16_to_cpu
> -#define cpu_to_be64 __cpu_to_be64
> -#define be64_to_cpu __be64_to_cpu
> -#define cpu_to_be32 __cpu_to_be32
> -#define be32_to_cpu __be32_to_cpu
> -#define cpu_to_be16 __cpu_to_be16
> -#define be16_to_cpu __be16_to_cpu
> -#define cpu_to_le64p __cpu_to_le64p
> -#define le64_to_cpup __le64_to_cpup
> -#define cpu_to_le32p __cpu_to_le32p
> -#define le32_to_cpup __le32_to_cpup
> -#define cpu_to_le16p __cpu_to_le16p
> -#define le16_to_cpup __le16_to_cpup
> -#define cpu_to_be64p __cpu_to_be64p
> -#define be64_to_cpup __be64_to_cpup
> -#define cpu_to_be32p __cpu_to_be32p
> -#define be32_to_cpup __be32_to_cpup
> -#define cpu_to_be16p __cpu_to_be16p
> -#define be16_to_cpup __be16_to_cpup
> -#define cpu_to_le64s __cpu_to_le64s
> -#define le64_to_cpus __le64_to_cpus
> -#define cpu_to_le32s __cpu_to_le32s
> -#define le32_to_cpus __le32_to_cpus
> -#define cpu_to_le16s __cpu_to_le16s
> -#define le16_to_cpus __le16_to_cpus
> -#define cpu_to_be64s __cpu_to_be64s
> -#define be64_to_cpus __be64_to_cpus
> -#define cpu_to_be32s __cpu_to_be32s
> -#define be32_to_cpus __be32_to_cpus
> -#define cpu_to_be16s __cpu_to_be16s
> -#define be16_to_cpus __be16_to_cpus
> -
> -#endif /* __XEN_BYTEORDER_GENERIC_H__ */
> diff --git a/xen/include/xen/byteorder/little_endian.h
> b/xen/include/xen/byteorder/little_endian.h
> deleted file mode 100644
> index 4955632793..0000000000
> --- a/xen/include/xen/byteorder/little_endian.h
> +++ /dev/null
> @@ -1,102 +0,0 @@
> -#ifndef __XEN_BYTEORDER_LITTLE_ENDIAN_H__
> -#define __XEN_BYTEORDER_LITTLE_ENDIAN_H__
> -
> -#ifndef __LITTLE_ENDIAN
> -#define __LITTLE_ENDIAN 1234
> -#endif
> -#ifndef __LITTLE_ENDIAN_BITFIELD
> -#define __LITTLE_ENDIAN_BITFIELD
> -#endif
> -
> -#include <xen/types.h>
> -#include <xen/byteorder/swab.h>
> -
> -#define __constant_cpu_to_le64(x) ((__force __le64)(__u64)(x))
> -#define __constant_le64_to_cpu(x) ((__force __u64)(__le64)(x))
> -#define __constant_cpu_to_le32(x) ((__force __le32)(__u32)(x))
> -#define __constant_le32_to_cpu(x) ((__force __u32)(__le32)(x))
> -#define __constant_cpu_to_le16(x) ((__force __le16)(__u16)(x))
> -#define __constant_le16_to_cpu(x) ((__force __u16)(__le16)(x))
> -#define __constant_cpu_to_be64(x) ((__force
> __be64)___constant_swab64((x)))
> -#define __constant_be64_to_cpu(x) ___constant_swab64((__force
> __u64)(__be64)(x))
> -#define __constant_cpu_to_be32(x) ((__force
> __be32)___constant_swab32((x)))
> -#define __constant_be32_to_cpu(x) ___constant_swab32((__force
> __u32)(__be32)(x))
> -#define __constant_cpu_to_be16(x) ((__force
> __be16)___constant_swab16((x)))
> -#define __constant_be16_to_cpu(x) ___constant_swab16((__force
> __u16)(__be16)(x))
> -#define __cpu_to_le64(x) ((__force __le64)(__u64)(x))
> -#define __le64_to_cpu(x) ((__force __u64)(__le64)(x))
> -#define __cpu_to_le32(x) ((__force __le32)(__u32)(x))
> -#define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
> -#define __cpu_to_le16(x) ((__force __le16)(__u16)(x))
> -#define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
> -#define __cpu_to_be64(x) ((__force __be64)__swab64((x)))
> -#define __be64_to_cpu(x) __swab64((__force __u64)(__be64)(x))
> -#define __cpu_to_be32(x) ((__force __be32)__swab32((x)))
> -#define __be32_to_cpu(x) __swab32((__force __u32)(__be32)(x))
> -#define __cpu_to_be16(x) ((__force __be16)__swab16((x)))
> -#define __be16_to_cpu(x) __swab16((__force __u16)(__be16)(x))
> -
> -static inline __le64 __cpu_to_le64p(const __u64 *p)
> -{
> -    return (__force __le64)*p;
> -}
> -static inline __u64 __le64_to_cpup(const __le64 *p)
> -{
> -    return (__force __u64)*p;
> -}
> -static inline __le32 __cpu_to_le32p(const __u32 *p)
> -{
> -    return (__force __le32)*p;
> -}
> -static inline __u32 __le32_to_cpup(const __le32 *p)
> -{
> -    return (__force __u32)*p;
> -}
> -static inline __le16 __cpu_to_le16p(const __u16 *p)
> -{
> -    return (__force __le16)*p;
> -}
> -static inline __u16 __le16_to_cpup(const __le16 *p)
> -{
> -    return (__force __u16)*p;
> -}
> -static inline __be64 __cpu_to_be64p(const __u64 *p)
> -{
> -    return (__force __be64)__swab64p(p);
> -}
> -static inline __u64 __be64_to_cpup(const __be64 *p)
> -{
> -    return __swab64p((__u64 *)p);
> -}
> -static inline __be32 __cpu_to_be32p(const __u32 *p)
> -{
> -    return (__force __be32)__swab32p(p);
> -}
> -static inline __u32 __be32_to_cpup(const __be32 *p)
> -{
> -    return __swab32p((__u32 *)p);
> -}
> -static inline __be16 __cpu_to_be16p(const __u16 *p)
> -{
> -    return (__force __be16)__swab16p(p);
> -}
> -static inline __u16 __be16_to_cpup(const __be16 *p)
> -{
> -    return __swab16p((__u16 *)p);
> -}
> -#define __cpu_to_le64s(x) do {} while (0)
> -#define __le64_to_cpus(x) do {} while (0)
> -#define __cpu_to_le32s(x) do {} while (0)
> -#define __le32_to_cpus(x) do {} while (0)
> -#define __cpu_to_le16s(x) do {} while (0)
> -#define __le16_to_cpus(x) do {} while (0)
> -#define __cpu_to_be64s(x) __swab64s((x))
> -#define __be64_to_cpus(x) __swab64s((x))
> -#define __cpu_to_be32s(x) __swab32s((x))
> -#define __be32_to_cpus(x) __swab32s((x))
> -#define __cpu_to_be16s(x) __swab16s((x))
> -#define __be16_to_cpus(x) __swab16s((x))
> -
> -#include <xen/byteorder/generic.h>
> -
> -#endif /* __XEN_BYTEORDER_LITTLE_ENDIAN_H__ */
> diff --git a/xen/include/xen/byteorder/swab.h
> b/xen/include/xen/byteorder/swab.h
> deleted file mode 100644
> index b7e30f0503..0000000000
> --- a/xen/include/xen/byteorder/swab.h
> +++ /dev/null
> @@ -1,183 +0,0 @@
> -#ifndef __XEN_BYTEORDER_SWAB_H__
> -#define __XEN_BYTEORDER_SWAB_H__
> -
> -/*
> - * Byte-swapping, independently from CPU endianness
> - *     swabXX[ps]?(foo)
> - *
> - * Francois-Rene Rideau <fare@tunes.org> 19971205
> - *    separated swab functions from cpu_to_XX,
> - *    to clean up support for bizarre-endian architectures.
> - */
> -
> -/* casts are necessary for constants, because we never know how for sure
> - * how U/UL/ULL map to __u16, __u32, __u64. At least not in a portable way.
> - */
> -#define ___swab16(x)                                    \
> -({                                                      \
> -    __u16 __x = (x);                                    \
> -    ((__u16)(                                           \
> -        (((__u16)(__x) & (__u16)0x00ffU) << 8) |        \
> -        (((__u16)(__x) & (__u16)0xff00U) >> 8) ));      \
> -})
> -
> -#define ___swab32(x)                                            \
> -({                                                              \
> -    __u32 __x = (x);                                            \
> -    ((__u32)(                                                   \
> -        (((__u32)(__x) & (__u32)0x000000ffUL) << 24) |          \
> -        (((__u32)(__x) & (__u32)0x0000ff00UL) <<  8) |          \
> -        (((__u32)(__x) & (__u32)0x00ff0000UL) >>  8) |          \
> -        (((__u32)(__x) & (__u32)0xff000000UL) >> 24) ));        \
> -})
> -
> -#define ___swab64(x)                                                       \
> -({                                                                         \
> -    __u64 __x = (x);                                                       \
> -    ((__u64)(                                                              \
> -        (__u64)(((__u64)(__x) & (__u64)0x00000000000000ffULL) << 56) |     \
> -        (__u64)(((__u64)(__x) & (__u64)0x000000000000ff00ULL) << 40) |     \
> -        (__u64)(((__u64)(__x) & (__u64)0x0000000000ff0000ULL) << 24) |     \
> -        (__u64)(((__u64)(__x) & (__u64)0x00000000ff000000ULL) <<  8) |     \
> -            (__u64)(((__u64)(__x) & (__u64)0x000000ff00000000ULL) >>  8) | \
> -        (__u64)(((__u64)(__x) & (__u64)0x0000ff0000000000ULL) >> 24) |     \
> -        (__u64)(((__u64)(__x) & (__u64)0x00ff000000000000ULL) >> 40) |     \
> -        (__u64)(((__u64)(__x) & (__u64)0xff00000000000000ULL) >> 56) ));   \
> -})
> -
> -#define ___constant_swab16(x)                   \
> -    ((__u16)(                                   \
> -        (((__u16)(x) & (__u16)0x00ffU) << 8) |  \
> -        (((__u16)(x) & (__u16)0xff00U) >> 8) ))
> -#define ___constant_swab32(x)                           \
> -    ((__u32)(                                           \
> -        (((__u32)(x) & (__u32)0x000000ffUL) << 24) |    \
> -        (((__u32)(x) & (__u32)0x0000ff00UL) <<  8) |    \
> -        (((__u32)(x) & (__u32)0x00ff0000UL) >>  8) |    \
> -        (((__u32)(x) & (__u32)0xff000000UL) >> 24) ))
> -#define ___constant_swab64(x)                                            \
> -    ((__u64)(                                                            \
> -        (__u64)(((__u64)(x) & (__u64)0x00000000000000ffULL) << 56) |     \
> -        (__u64)(((__u64)(x) & (__u64)0x000000000000ff00ULL) << 40) |     \
> -        (__u64)(((__u64)(x) & (__u64)0x0000000000ff0000ULL) << 24) |     \
> -        (__u64)(((__u64)(x) & (__u64)0x00000000ff000000ULL) <<  8) |     \
> -            (__u64)(((__u64)(x) & (__u64)0x000000ff00000000ULL) >>  8) | \
> -        (__u64)(((__u64)(x) & (__u64)0x0000ff0000000000ULL) >> 24) |     \
> -        (__u64)(((__u64)(x) & (__u64)0x00ff000000000000ULL) >> 40) |     \
> -        (__u64)(((__u64)(x) & (__u64)0xff00000000000000ULL) >> 56) ))
> -
> -/*
> - * provide defaults when no architecture-specific optimization is detected
> - */
> -#ifndef __arch__swab16
> -#  define __arch__swab16(x) ({ __u16 __tmp = (x) ; ___swab16(__tmp); })
> -#endif
> -#ifndef __arch__swab32
> -#  define __arch__swab32(x) ({ __u32 __tmp = (x) ; ___swab32(__tmp); })
> -#endif
> -#ifndef __arch__swab64
> -#  define __arch__swab64(x) ({ __u64 __tmp = (x) ; ___swab64(__tmp); })
> -#endif
> -
> -#ifndef __arch__swab16p
> -#  define __arch__swab16p(x) __arch__swab16(*(x))
> -#endif
> -#ifndef __arch__swab32p
> -#  define __arch__swab32p(x) __arch__swab32(*(x))
> -#endif
> -#ifndef __arch__swab64p
> -#  define __arch__swab64p(x) __arch__swab64(*(x))
> -#endif
> -
> -#ifndef __arch__swab16s
> -#  define __arch__swab16s(x) do { *(x) = __arch__swab16p((x)); } while (0)
> -#endif
> -#ifndef __arch__swab32s
> -#  define __arch__swab32s(x) do { *(x) = __arch__swab32p((x)); } while (0)
> -#endif
> -#ifndef __arch__swab64s
> -#  define __arch__swab64s(x) do { *(x) = __arch__swab64p((x)); } while (0)
> -#endif
> -
> -
> -/*
> - * Allow constant folding
> - */
> -#if defined(__GNUC__) && defined(__OPTIMIZE__)
> -#  define __swab16(x) \
> -(__builtin_constant_p((__u16)(x)) ? \
> - ___swab16((x)) : \
> - __fswab16((x)))
> -#  define __swab32(x) \
> -(__builtin_constant_p((__u32)(x)) ? \
> - ___swab32((x)) : \
> - __fswab32((x)))
> -#  define __swab64(x) \
> -(__builtin_constant_p((__u64)(x)) ? \
> - ___swab64((x)) : \
> - __fswab64((x)))
> -#else
> -#  define __swab16(x) __fswab16(x)
> -#  define __swab32(x) __fswab32(x)
> -#  define __swab64(x) __fswab64(x)
> -#endif /* OPTIMIZE */
> -
> -
> -static inline __attribute_const__ __u16 __fswab16(__u16 x)
> -{
> -    return __arch__swab16(x);
> -}
> -static inline __u16 __swab16p(const __u16 *x)
> -{
> -    return __arch__swab16p(x);
> -}
> -static inline void __swab16s(__u16 *addr)
> -{
> -    __arch__swab16s(addr);
> -}
> -
> -static inline __attribute_const__ __u32 __fswab32(__u32 x)
> -{
> -    return __arch__swab32(x);
> -}
> -static inline __u32 __swab32p(const __u32 *x)
> -{
> -    return __arch__swab32p(x);
> -}
> -static inline void __swab32s(__u32 *addr)
> -{
> -    __arch__swab32s(addr);
> -}
> -
> -#ifdef __BYTEORDER_HAS_U64__
> -static inline __attribute_const__ __u64 __fswab64(__u64 x)
> -{
> -#  ifdef __SWAB_64_THRU_32__
> -    __u32 h = x >> 32;
> -        __u32 l = x & ((1ULL<<32)-1);
> -        return (((__u64)__swab32(l)) << 32) | ((__u64)(__swab32(h)));
> -#  else
> -    return __arch__swab64(x);
> -#  endif
> -}
> -static inline __u64 __swab64p(const __u64 *x)
> -{
> -    return __arch__swab64p(x);
> -}
> -static inline void __swab64s(__u64 *addr)
> -{
> -    __arch__swab64s(addr);
> -}
> -#endif /* __BYTEORDER_HAS_U64__ */
> -
> -#define swab16 __swab16
> -#define swab32 __swab32
> -#define swab64 __swab64
> -#define swab16p __swab16p
> -#define swab32p __swab32p
> -#define swab64p __swab64p
> -#define swab16s __swab16s
> -#define swab32s __swab32s
> -#define swab64s __swab64s
> -
> -#endif /* __XEN_BYTEORDER_SWAB_H__ */
> --
> 2.27.0
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 6/6] byteorder: Remove byteorder
  2022-05-24  2:48   ` Jiamei Xie
@ 2022-05-24  3:07     ` Lin Liu (刘林)
  0 siblings, 0 replies; 16+ messages in thread
From: Lin Liu (刘林) @ 2022-05-24  3:07 UTC (permalink / raw)
  To: Jiamei Xie, xen-devel
  Cc: Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall,
	Bertrand Marquis, Stefano Stabellini, Wei Liu


>> Subject: [PATCH v5 6/6] byteorder: Remove byteorder
>>
>> include/xen/byteswap.h has simplify the interface, just clean
>> the old interface
>There is a typo: s/simplify/simplified/.

Thanks for pointing this out, will update in next patch

Cheers,
---
Lin



* Re: 回复: [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab()
  2022-05-24  1:35     ` 回复: " Lin Liu (刘林)
@ 2022-05-25  7:53       ` Julien Grall
  0 siblings, 0 replies; 16+ messages in thread
From: Julien Grall @ 2022-05-25  7:53 UTC (permalink / raw)
  To: Lin Liu (刘林), xen-devel
  Cc: Julien Grall, Andrew Cooper, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi,

On 24/05/2022 02:35, Lin Liu (刘林) wrote:
> 
>>> Hi,
> 
>>> On 23/05/2022 15:50, Lin Liu wrote:
>>> ext2 has nothing to do with this logic.
> 
>> You have again not addressed my comment. If you don't understand my comment then please ask.
> 
>> Cheers,
> 
>> --
>> Julien Grall
> 
> Sorry, I missed this one as I saw this patch had already got some tags.

For smaller changes, we tend to provide the Reviewed-by/Acked-by at the 
same time. The comments may then be addressed on commit (if the 
committer is happy with that) or you could handle them on a respin.

>  I suppose your comment requires a commit message update,

Yes. I would like the first sentence to be dropped or reworked.

> Will update it if a newer version is required.

Thanks! I would have committed now but AFAIU it depends on previous patches.

Cheers,

-- 
Julien Grall



end of thread, other threads:[~2022-05-25  7:53 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-23 14:50 [PATCH v5 0/6] Implement byteswap and update references Lin Liu
2022-05-23 14:50 ` [PATCH v5 2/6] crypto/vmac: Simplify code with byteswap Lin Liu
2022-05-23 14:50 ` [PATCH v5 3/6] arm64/find_next_bit: Remove ext2_swab() Lin Liu
2022-05-23 14:53   ` Julien Grall
2022-05-24  1:35     ` 回复: " Lin Liu (刘林)
2022-05-25  7:53       ` Julien Grall
2022-05-23 14:50 ` [PATCH v5 4/6] xen: Switch to byteswap Lin Liu
2022-05-23 14:56   ` Julien Grall
2022-05-23 15:38     ` Andrew Cooper
2022-05-23 16:05       ` Julien Grall
2022-05-24  2:42         ` Lin Liu (刘林)
2022-05-23 16:14       ` Jan Beulich
2022-05-23 14:50 ` [PATCH v5 5/6] tools: Use new byteswap helper Lin Liu
2022-05-23 14:50 ` [PATCH v5 6/6] byteorder: Remove byteorder Lin Liu
2022-05-24  2:48   ` Jiamei Xie
2022-05-24  3:07     ` Lin Liu (刘林)
