From: Herbert Xu
To: Tianjia Zhang
Cc: "David S. Miller", Catalin Marinas, Will Deacon, linux-crypto@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Ard Biesheuvel
Date: Mon, 30 Jan 2023 16:15:33 +0800
Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
References: <20230118141928.48136-1-tianjia.zhang@linux.alibaba.com>

On Mon, Jan 30, 2023 at 03:34:42PM +0800, Tianjia Zhang wrote:
>
> I printed the walk->nbytes of each iteration of the walker, it is not
> always a multiple of chunksize except at the end when the algorithm test
> manager is turned on.

Sorry, I was mistaken.  We only guarantee that a minimum of chunksize
bytes is given to you until the very end, not that it is exactly a
multiple of chunksize.

While you still need to compute tail, you could get rid of the else-if
check, as walk->nbytes - tail cannot be zero (we must provide you with
at least one chunk before the end):

	if (walk->nbytes == walk->total) {
		tail = 0;
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes, ghash,
				       ctx->ghash_table,
				       (const u8 *)&lengths);
	} else {
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes - tail, ghash,
				       ctx->ghash_table, NULL);
	}

In fact, we could rewrite it like this:

	unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
	unsigned int nbytes = walk->nbytes - tail;
	const u8 *src = walk->src.virt.addr;
	u8 *dst = walk->dst.virt.addr;
	u8 *lp = NULL;

	if (walk->nbytes == walk->total) {
		nbytes = walk->nbytes;
		tail = 0;
		lp = (u8 *)&lengths;
	}

	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
			       nbytes, ghash, ctx->ghash_table, lp);

The second part of that loop could also be rewritten as:

		kernel_neon_end();

		err = skcipher_walk_done(walk, tail);
		if (!walk->nbytes)
			return err;

		kernel_neon_begin();
	} while (1);

Actually, I think there is a serious bug here.  If you're doing an
empty message, you must not call skcipher_walk_done, as that may then
free random uninitialised stack memory.

Did you copy this code from somewhere else?  If so, wherever you got it
from needs to be fixed too.

The loop should look like this:

	if (!walk->nbytes) {
		/* iv may be unaligned as the walker didn't run at all. */
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL, iv,
				       0, ghash, ctx->ghash_table,
				       (u8 *)&lengths);
		kernel_neon_end();
		return 0;
	}

	do {
		...
	}
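
For what it's worth, putting those fragments together, I'd expect the whole
walk loop to end up looking roughly like the sketch below.  This is
completely untested and only meant to show the shape of the thing: the
helper name gcm_crypt_sketch and its parameter list are made up here, and
ctx, ghash, iv, lengths and walk are whatever your driver already sets up
(it relies on the driver's existing types and includes):

	/* Illustrative only: name and signature are assumed, not taken
	 * from the driver. */
	static int gcm_crypt_sketch(struct sm4_gcm_ctx *ctx,
				    struct skcipher_walk *walk,
				    u8 ghash[], u8 iv[], be128 *lengths)
	{
		int err;

		kernel_neon_begin();

		if (!walk->nbytes) {
			/*
			 * Empty message: the walker never ran, so
			 * skcipher_walk_done() must not be called.
			 * Just fold in the lengths block.
			 */
			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL,
					       iv, 0, ghash, ctx->ghash_table,
					       (u8 *)lengths);
			kernel_neon_end();
			return 0;
		}

		do {
			unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
			unsigned int nbytes = walk->nbytes - tail;
			const u8 *src = walk->src.virt.addr;
			u8 *dst = walk->dst.virt.addr;
			u8 *lp = NULL;

			if (walk->nbytes == walk->total) {
				/* Final chunk: take everything, including
				 * any partial block, and feed in lengths. */
				nbytes = walk->nbytes;
				tail = 0;
				lp = (u8 *)lengths;
			}

			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src,
					       iv, nbytes, ghash,
					       ctx->ghash_table, lp);

			kernel_neon_end();

			err = skcipher_walk_done(walk, tail);
			if (!walk->nbytes)
				return err;

			kernel_neon_begin();
		} while (1);
	}

The important property is that skcipher_walk_done is only ever called when
the walker actually ran, and that the NEON context is dropped around it
since it may need to sleep while mapping the next chunk.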

Thanks,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt