Date: Mon, 17 Oct 2022 17:12:16 -0700
From: Eric Biggers
To: Nathan Huckleberry
Cc: Bruno Goncalves, Herbert Xu, "David S. Miller", Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin", Ard Biesheuvel, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned
References: <20221017222620.715153-1-nhuck@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Oct 17, 2022 at 04:38:25PM -0700, Nathan Huckleberry wrote:
> On Mon, Oct 17, 2022 at 4:02 PM Eric Biggers wrote:
> >
> > On Mon, Oct 17, 2022 at 03:26:20PM -0700, Nathan Huckleberry wrote:
> > > The key_powers array is not guaranteed to be 16-byte aligned, so using
> > > movaps to operate on key_powers is not allowed.
> > >
> > > Switch movaps to movups.
> > >
> > > Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> > > Reported-by: Bruno Goncalves
> > > Signed-off-by: Nathan Huckleberry
> > > ---
> > >  arch/x86/crypto/polyval-clmulni_asm.S | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
> > > index a6ebe4e7dd2b..32b98cb53ddf 100644
> > > --- a/arch/x86/crypto/polyval-clmulni_asm.S
> > > +++ b/arch/x86/crypto/polyval-clmulni_asm.S
> > > @@ -234,7 +234,7 @@
> > >
> > >  	movups	(MSG), %xmm0
> > >  	pxor	SUM, %xmm0
> > > -	movaps	(KEY_POWERS), %xmm1
> > > +	movups	(KEY_POWERS), %xmm1
> > >  	schoolbook1_noload
> > >  	dec	BLOCKS_LEFT
> > >  	addq	$16, MSG
> >
> > I thought that crypto_tfm::__crt_ctx is guaranteed to be 16-byte aligned,
> > and that the x86 AES code relies on that property.
> >
> > But now I see that actually the x86 AES code manually aligns the context.
> > See aes_ctx() in arch/x86/crypto/aesni-intel_glue.c.
> >
> > Did you consider doing the same for polyval?
>
> I'll submit a v2 aligning the tfm_ctx.  I think that makes more sense
> than working on unaligned keys.
>
> Is there a need to do the same changes on arm64?  The keys are also
> unaligned there.
>

arm64 defines ARCH_DMA_MINALIGN to 128, so I don't think the same issue
applies there.  Also the instructions used don't assume aligned addresses.

- Eric
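
For context, the aes_ctx() approach discussed above boils down to over-allocating
the tfm context and rounding the context pointer up to the required alignment
before use.  Below is a minimal, standalone C sketch of that pattern; the names
(POLYVAL_ALIGN, struct polyval_ctx, polyval_aligned_ctx) are illustrative
assumptions for this sketch, not taken from aesni-intel_glue.c or from the
eventual v2 patch.

/*
 * Standalone sketch of the "manually align the context" pattern:
 * allocate extra space, then round the pointer up to the alignment
 * that the SIMD code (e.g. movaps) requires.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POLYVAL_ALIGN 16	/* movaps needs 16-byte aligned operands */

struct polyval_ctx {
	uint8_t key_powers[8][16];	/* must end up 16-byte aligned */
};

/*
 * Round an arbitrarily aligned context pointer up to POLYVAL_ALIGN,
 * mirroring what ALIGN()/PTR_ALIGN() do in the kernel.
 */
static struct polyval_ctx *polyval_aligned_ctx(void *raw_ctx)
{
	uintptr_t addr = (uintptr_t)raw_ctx;

	addr = (addr + POLYVAL_ALIGN - 1) & ~(uintptr_t)(POLYVAL_ALIGN - 1);
	return (struct polyval_ctx *)addr;
}

int main(void)
{
	/*
	 * Pad the allocation by POLYVAL_ALIGN - 1 bytes so the aligned
	 * context still fits in the worst case; in the kernel the same
	 * padding would be reflected in cra_ctxsize.
	 */
	void *raw = malloc(sizeof(struct polyval_ctx) + POLYVAL_ALIGN - 1);
	struct polyval_ctx *ctx;

	if (!raw)
		return 1;

	ctx = polyval_aligned_ctx(raw);
	memset(ctx, 0, sizeof(*ctx));
	printf("raw=%p aligned=%p (mod 16 = %zu)\n",
	       raw, (void *)ctx, (size_t)((uintptr_t)ctx % POLYVAL_ALIGN));

	free(raw);
	return 0;
}

The trade-off versus the movaps->movups change in this patch is that keeping the
key powers 16-byte aligned preserves the aligned-load guarantee for the assembly,
at the cost of a small amount of padding and a pointer-rounding helper in the glue
code.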