From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Eric Biggers,
    Ard Biesheuvel, Herbert Xu
Subject: [PATCH 5.1 035/128] crypto: arm64/gcm-aes-ce - fix no-NEON fallback code
Date: Mon, 20 May 2019 14:13:42 +0200
Message-Id: <20190520115252.038410139@linuxfoundation.org>
In-Reply-To: <20190520115249.449077487@linuxfoundation.org>
References: <20190520115249.449077487@linuxfoundation.org>

From: Eric Biggers

commit 580e295178402d14bbf598a5702f8e01fc59dbaa upstream.

The arm64 gcm-aes-ce algorithm is failing the extra crypto self-tests
following my patches to test the !may_use_simd() code paths, which
previously were untested.  The problem is that in the !may_use_simd()
case, an odd number of AES blocks can be processed within each step of
the skcipher_walk.  However, the skcipher_walk is being done with a
"stride" of 2 blocks and is advanced by an even number of blocks after
each step.  This causes the encryption to produce the wrong ciphertext
and authentication tag, and causes the decryption to incorrectly fail.

Fix it by only processing an even number of blocks per step.
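To illustrate the arithmetic, the following is a minimal standalone
sketch (not part of the kernel patch; AES_BLOCK_SIZE and the example
nbytes value are only illustrative) showing how the old per-step block
count can come out odd while the fixed one is always a multiple of two,
matching the 2-block stride the walk is advanced by:

/*
 * Standalone sketch of the block counting described above; compiles
 * with any C compiler, independent of the kernel sources.
 */
#include <stdio.h>

#define AES_BLOCK_SIZE 16

int main(void)
{
	/* e.g. a walk step hands back 3 full blocks plus a partial block */
	unsigned int nbytes = 3 * AES_BLOCK_SIZE + 5;

	/*
	 * Old computation: can be odd (3 here), so more blocks are
	 * processed than the 2-block-stride walk is advanced by.
	 */
	int buggy_blocks = nbytes / AES_BLOCK_SIZE;

	/*
	 * Fixed computation: rounded down to an even count (2 here);
	 * the leftover data is handled in a later step or the tail path.
	 */
	int fixed_blocks = nbytes / (2 * AES_BLOCK_SIZE) * 2;

	printf("buggy: %d blocks, fixed: %d blocks\n",
	       buggy_blocks, fixed_blocks);
	return 0;
}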
Fixes: c2b24c36e0a3 ("crypto: arm64/aes-gcm-ce - fix scatterwalk API violation")
Fixes: 71e52c278c54 ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Eric Biggers
Reviewed-by: Ard Biesheuvel
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm64/crypto/ghash-ce-glue.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -473,9 +473,11 @@ static int gcm_encrypt(struct aead_reque
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
 		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
+			const int blocks =
+				walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
 			u8 *dst = walk.dst.virt.addr;
 			u8 *src = walk.src.virt.addr;
+			int remaining = blocks;
 
 			do {
 				__aes_arm64_encrypt(ctx->aes_key.key_enc,
@@ -485,9 +487,9 @@ static int gcm_encrypt(struct aead_reque
 
 				dst += AES_BLOCK_SIZE;
 				src += AES_BLOCK_SIZE;
-			} while (--blocks > 0);
+			} while (--remaining > 0);
 
-			ghash_do_update(walk.nbytes / AES_BLOCK_SIZE, dg,
+			ghash_do_update(blocks, dg,
 					walk.dst.virt.addr, &ctx->ghash_key,
 					NULL, pmull_ghash_update_p64);
 
@@ -609,7 +611,7 @@ static int gcm_decrypt(struct aead_reque
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
 		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
+			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
 			u8 *dst = walk.dst.virt.addr;
 			u8 *src = walk.src.virt.addr;