From mboxrd@z Thu Jan  1 00:00:00 1970
From: Arnd Bergmann <arnd@kernel.org>
To: linux-arch@vger.kernel.org
Cc: Linus Torvalds, Vineet Gupta, Arnd Bergmann, Russell King,
	Nathan Chancellor, Nick Desaulniers,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	clang-built-linux@googlegroups.com
Subject: [RFC 06/12] asm-generic: unaligned: remove byteshift helpers
Date: Sat,  8 May 2021 00:07:51 +0200
Message-Id: <20210507220813.365382-7-arnd@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210507220813.365382-1-arnd@kernel.org>
References: <20210507220813.365382-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@kernel.org>

In theory, compilers should be able to work this out themselves so we
can use a simpler version based on the swab() helpers.

I have verified that this works on all supported compiler versions
(gcc-4.9 and up, clang-10 and up). Looking at the object code produced
by gcc-11, I found that the impact is mostly a change in inlining
decisions that leads to slightly larger code. In other cases, this
version produces explicit byte swaps in place of separate byte
accesses, or comparisons against pre-swapped constants.

While the source code is clearly simpler, I have not seen an indication
of the new version actually producing better code on Arm, so maybe we
want to skip this one after all. From what I can tell, gcc recognizes
the byteswap pattern in the byteshift.h header and can turn it into
explicit instructions, but it does not turn a __builtin_bswap32() back
into individual byte accesses when that would result in better output,
e.g. when storing a byte-reversed constant.
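To make the comparison concrete, the two styles reduce to something
like the following standalone C sketch (the function names are made up
for illustration; memcpy() stands in for the packed-struct access in
le_struct.h/be_struct.h, and __builtin_bswap32() stands in for swab32()
on a little-endian host):

	#include <stdint.h>
	#include <string.h>

	/* byteshift style: endianness spelled out as individual byte accesses */
	static inline uint32_t get_be32_byteshift(const uint8_t *p)
	{
		return (uint32_t)p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
	}

	/* swab style: one native unaligned load followed by an explicit swap */
	static inline uint32_t get_be32_swab(const void *p)
	{
		uint32_t v;

		memcpy(&v, p, sizeof(v));	/* models __get_unaligned_cpu32() */
		return __builtin_bswap32(v);	/* models swab32() on little-endian */
	}

Recent gcc and clang recognize the shift/or pattern in the first
version as a byte-reversed load, so the two normally compile to the
same instructions; the differences described above only show up around
inlining decisions and constant handling.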
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Arnd Bergmann <arnd@kernel.org>
---

I've included this patch in the series because Linus asked about
removing the byteshift version, but after trying it out, I'd prefer
to drop this and use the byteshift version for the generic code as
well.
---
 arch/arm/include/asm/unaligned.h       |  2 -
 include/asm-generic/unaligned.h        |  2 -
 include/linux/unaligned/be_byteshift.h | 71 --------------------------
 include/linux/unaligned/be_struct.h    | 30 +++++++++++
 include/linux/unaligned/le_byteshift.h | 71 --------------------------
 include/linux/unaligned/le_struct.h    | 30 +++++++++++
 6 files changed, 60 insertions(+), 146 deletions(-)
 delete mode 100644 include/linux/unaligned/be_byteshift.h
 delete mode 100644 include/linux/unaligned/le_byteshift.h

diff --git a/arch/arm/include/asm/unaligned.h b/arch/arm/include/asm/unaligned.h
index ab905ffcf193..3c5248fb4cdc 100644
--- a/arch/arm/include/asm/unaligned.h
+++ b/arch/arm/include/asm/unaligned.h
@@ -10,13 +10,11 @@
 
 #if defined(__LITTLE_ENDIAN)
 # include <linux/unaligned/le_struct.h>
-# include <linux/unaligned/be_byteshift.h>
 # include <linux/unaligned/generic.h>
 # define get_unaligned	__get_unaligned_le
 # define put_unaligned	__put_unaligned_le
 #elif defined(__BIG_ENDIAN)
 # include <linux/unaligned/be_struct.h>
-# include <linux/unaligned/le_byteshift.h>
 # include <linux/unaligned/generic.h>
 # define get_unaligned	__get_unaligned_be
 # define put_unaligned	__put_unaligned_be
diff --git a/include/asm-generic/unaligned.h b/include/asm-generic/unaligned.h
index 374c940e9be1..d79df721ae60 100644
--- a/include/asm-generic/unaligned.h
+++ b/include/asm-generic/unaligned.h
@@ -16,7 +16,6 @@
 #if defined(__LITTLE_ENDIAN)
 # ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 #  include <linux/unaligned/le_struct.h>
-#  include <linux/unaligned/be_byteshift.h>
 # endif
 # include <linux/unaligned/generic.h>
 # define get_unaligned	__get_unaligned_le
@@ -24,7 +23,6 @@
 #elif defined(__BIG_ENDIAN)
 # ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 #  include <linux/unaligned/be_struct.h>
-#  include <linux/unaligned/le_byteshift.h>
 # endif
 # include <linux/unaligned/generic.h>
 # define get_unaligned	__get_unaligned_be
diff --git a/include/linux/unaligned/be_byteshift.h b/include/linux/unaligned/be_byteshift.h
deleted file mode 100644
index c43ff5918c8a..000000000000
--- a/include/linux/unaligned/be_byteshift.h
+++ /dev/null
@@ -1,71 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_UNALIGNED_BE_BYTESHIFT_H
-#define _LINUX_UNALIGNED_BE_BYTESHIFT_H
-
-#include <linux/types.h>
-
-static inline u16 __get_unaligned_be16(const u8 *p)
-{
-	return p[0] << 8 | p[1];
-}
-
-static inline u32 __get_unaligned_be32(const u8 *p)
-{
-	return p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
-}
-
-static inline u64 __get_unaligned_be64(const u8 *p)
-{
-	return (u64)__get_unaligned_be32(p) << 32 |
-	       __get_unaligned_be32(p + 4);
-}
-
-static inline void __put_unaligned_be16(u16 val, u8 *p)
-{
-	*p++ = val >> 8;
-	*p++ = val;
-}
-
-static inline void __put_unaligned_be32(u32 val, u8 *p)
-{
-	__put_unaligned_be16(val >> 16, p);
-	__put_unaligned_be16(val, p + 2);
-}
-
-static inline void __put_unaligned_be64(u64 val, u8 *p)
-{
-	__put_unaligned_be32(val >> 32, p);
-	__put_unaligned_be32(val, p + 4);
-}
-
-static inline u16 get_unaligned_be16(const void *p)
-{
-	return __get_unaligned_be16(p);
-}
-
-static inline u32 get_unaligned_be32(const void *p)
-{
-	return __get_unaligned_be32(p);
-}
-
-static inline u64 get_unaligned_be64(const void *p)
-{
-	return __get_unaligned_be64(p);
-}
-
-static inline void put_unaligned_be16(u16 val, void *p)
-{
-	__put_unaligned_be16(val, p);
-}
-
-static inline void put_unaligned_be32(u32 val, void *p)
-{
-	__put_unaligned_be32(val, p);
-}
-
-static inline void put_unaligned_be64(u64 val, void *p)
-{
-	__put_unaligned_be64(val, p);
-}
-
-#endif /* _LINUX_UNALIGNED_BE_BYTESHIFT_H */
diff --git a/include/linux/unaligned/be_struct.h b/include/linux/unaligned/be_struct.h
index 15ea503a13fc..76d9fe297c33 100644
--- a/include/linux/unaligned/be_struct.h
+++ b/include/linux/unaligned/be_struct.h
@@ -34,4 +34,34 @@ static inline void put_unaligned_be64(u64 val, void *p)
 	__put_unaligned_cpu64(val, p);
 }
 
+static inline u16 get_unaligned_le16(const void *p)
+{
+	return swab16(__get_unaligned_cpu16((const u8 *)p));
+}
+
+static inline u32 get_unaligned_le32(const void *p)
+{
+	return swab32(__get_unaligned_cpu32((const u8 *)p));
+}
+
+static inline u64 get_unaligned_le64(const void *p)
+{
+	return swab64(__get_unaligned_cpu64((const u8 *)p));
+}
+
+static inline void put_unaligned_le16(u16 val, void *p)
+{
+	__put_unaligned_cpu16(swab16(val), p);
+}
+
+static inline void put_unaligned_le32(u32 val, void *p)
+{
+	__put_unaligned_cpu32(swab32(val), p);
+}
+
+static inline void put_unaligned_le64(u64 val, void *p)
+{
+	__put_unaligned_cpu64(swab64(val), p);
+}
+
 #endif /* _LINUX_UNALIGNED_BE_STRUCT_H */
diff --git a/include/linux/unaligned/le_byteshift.h b/include/linux/unaligned/le_byteshift.h
deleted file mode 100644
index 2248dcb0df76..000000000000
--- a/include/linux/unaligned/le_byteshift.h
+++ /dev/null
@@ -1,71 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_UNALIGNED_LE_BYTESHIFT_H
-#define _LINUX_UNALIGNED_LE_BYTESHIFT_H
-
-#include <linux/types.h>
-
-static inline u16 __get_unaligned_le16(const u8 *p)
-{
-	return p[0] | p[1] << 8;
-}
-
-static inline u32 __get_unaligned_le32(const u8 *p)
-{
-	return p[0] | p[1] << 8 | p[2] << 16 | p[3] << 24;
-}
-
-static inline u64 __get_unaligned_le64(const u8 *p)
-{
-	return (u64)__get_unaligned_le32(p + 4) << 32 |
-	       __get_unaligned_le32(p);
-}
-
-static inline void __put_unaligned_le16(u16 val, u8 *p)
-{
-	*p++ = val;
-	*p++ = val >> 8;
-}
-
-static inline void __put_unaligned_le32(u32 val, u8 *p)
-{
-	__put_unaligned_le16(val >> 16, p + 2);
-	__put_unaligned_le16(val, p);
-}
-
-static inline void __put_unaligned_le64(u64 val, u8 *p)
-{
-	__put_unaligned_le32(val >> 32, p + 4);
-	__put_unaligned_le32(val, p);
-}
-
-static inline u16 get_unaligned_le16(const void *p)
-{
-	return __get_unaligned_le16(p);
-}
-
-static inline u32 get_unaligned_le32(const void *p)
-{
-	return __get_unaligned_le32(p);
-}
-
-static inline u64 get_unaligned_le64(const void *p)
-{
-	return __get_unaligned_le64(p);
-}
-
-static inline void put_unaligned_le16(u16 val, void *p)
-{
-	__put_unaligned_le16(val, p);
-}
-
-static inline void put_unaligned_le32(u32 val, void *p)
-{
-	__put_unaligned_le32(val, p);
-}
-
-static inline void put_unaligned_le64(u64 val, void *p)
-{
-	__put_unaligned_le64(val, p);
-}
-
-#endif /* _LINUX_UNALIGNED_LE_BYTESHIFT_H */
diff --git a/include/linux/unaligned/le_struct.h b/include/linux/unaligned/le_struct.h
index 9977987883a6..22f90a4afaa5 100644
--- a/include/linux/unaligned/le_struct.h
+++ b/include/linux/unaligned/le_struct.h
@@ -34,4 +34,34 @@ static inline void put_unaligned_le64(u64 val, void *p)
 	__put_unaligned_cpu64(val, p);
 }
 
+static inline u16 get_unaligned_be16(const void *p)
+{
+	return swab16(__get_unaligned_cpu16((const u8 *)p));
+}
+
+static inline u32 get_unaligned_be32(const void *p)
+{
+	return swab32(__get_unaligned_cpu32((const u8 *)p));
+}
+
+static inline u64 get_unaligned_be64(const void *p)
+{
+	return swab64(__get_unaligned_cpu64((const u8 *)p));
+}
+
+static inline void put_unaligned_be16(u16 val, void *p)
+{
+	__put_unaligned_cpu16(swab16(val), p);
+}
+
+static inline void put_unaligned_be32(u32 val, void *p)
+{
+	__put_unaligned_cpu32(swab32(val), p);
+}
+
+static inline void put_unaligned_be64(u64 val, void *p)
+{
+	__put_unaligned_cpu64(swab64(val), p);
+}
+
 #endif /* _LINUX_UNALIGNED_LE_STRUCT_H */
-- 
2.29.2
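A quick way to sanity-check the equivalence the patch relies on is a
small standalone userspace program (helper names invented for the
test; memcpy() models __get_unaligned_cpu32() and __builtin_bswap32()
models swab32()):

	#include <assert.h>
	#include <stdint.h>
	#include <string.h>

	/* old byteshift-style accessor */
	static uint32_t get_le32_byteshift(const uint8_t *p)
	{
		return p[0] | p[1] << 8 | (uint32_t)p[2] << 16 |
		       (uint32_t)p[3] << 24;
	}

	/* new struct+swab-style accessor: native load, swap only on
	 * big-endian hosts, where the LE data is byte-reversed */
	static uint32_t get_le32_swab(const void *p)
	{
		uint32_t v;

		memcpy(&v, p, sizeof(v));	/* __get_unaligned_cpu32() */
	#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
		v = __builtin_bswap32(v);	/* swab32() */
	#endif
		return v;
	}

	int main(void)
	{
		/* deliberately misaligned: the value starts at offset 1 */
		uint8_t buf[8] = { 0, 0xef, 0xbe, 0xad, 0xde, 0, 0, 0 };

		assert(get_le32_byteshift(buf + 1) == 0xdeadbeefu);
		assert(get_le32_swab(buf + 1) == 0xdeadbeefu);
		return 0;
	}

Both accessors must return the same value on hosts of either
endianness; what differs is only the object code the compiler picks
for each form.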