From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 06 Apr 2022 23:46:04 +0000
To: Arnd Bergmann
From: Alexander Lobakin
Cc: Bart Van Assche, Jens Axboe, Keith Busch, Chaitanya Kulkarni,
 "Martin K. Petersen", linux-arch@vger.kernel.org,
 linux-kernel@vger.kernel.org, Alexander Lobakin
Reply-To: Alexander Lobakin
Subject: [PATCH] asm-generic: fix __get_unaligned_be48() on 32 bit platforms
Message-ID: <20220406233909.529613-1-alobakin@pm.me>
X-Mailing-List: linux-kernel@vger.kernel.org

While testing the new macros for working with 48 bit containers,
I faced a weird problem:

32 + 16: 0x2ef6e8da 0x79e60000
48: 0xffffe8da + 0x79e60000

All the bits starting from the 32nd were being set to 1 in 9 out of
10 cases. The debug output:

p[0]: 0x00002e0000000000
p[1]: 0x00002ef600000000
p[2]: 0xffffffffe8000000
p[3]: 0xffffffffe8da0000
p[4]: 0xffffffffe8da7900
p[5]: 0xffffffffe8da79e6

showed that the value becomes garbage after the third OR, i.e. on
`p[2] << 24`.
When the 31st bit is 1 and there's no explicit cast to an unsigned
type, the term is treated as a signed int and gets sign-extended on
the OR, so `e8000000` becomes `ffffffffe8000000` and corrupts the
result.
Cast @p[2] to u64 as well to avoid this.
Now:

32 + 16: 0x7ef6a490 0xddc10000
48: 0x7ef6a490 + 0xddc10000

p[0]: 0x00007e0000000000
p[1]: 0x00007ef600000000
p[2]: 0x00007ef6a4000000
p[3]: 0x00007ef6a4900000
p[4]: 0x00007ef6a490dd00
p[5]: 0x00007ef6a490ddc1

Fixes: c2ea5fcf53d5 ("asm-generic: introduce be48 unaligned accessors")
Signed-off-by: Alexander Lobakin
---
 include/asm-generic/unaligned.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-generic/unaligned.h b/include/asm-generic/unaligned.h
index 8fc637379899..df30f11b4a46 100644
--- a/include/asm-generic/unaligned.h
+++ b/include/asm-generic/unaligned.h
@@ -143,7 +143,7 @@ static inline void put_unaligned_be48(const u64 val, void *p)
 
 static inline u64 __get_unaligned_be48(const u8 *p)
 {
-	return (u64)p[0] << 40 | (u64)p[1] << 32 | p[2] << 24 |
+	return (u64)p[0] << 40 | (u64)p[1] << 32 | (u64)p[2] << 24 |
 		p[3] << 16 | p[4] << 8 | p[5];
 }
 
-- 
2.35.1