Date: Fri, 20 Jan 2023 18:21:35 +0800
From: Herbert Xu
To: Linus Walleij
Cc: "David S. Miller", Rob Herring, Krzysztof Kozlowski, Maxime Coquelin,
    Alexandre Torgue, Lionel Debieve, linux-crypto@vger.kernel.org,
    devicetree@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 5/6] crypto: stm32/hash: Support Ux500 hash
References: <20221227-ux500-stm32-hash-v2-0-bc443bc44ca4@linaro.org>
            <20221227-ux500-stm32-hash-v2-5-bc443bc44ca4@linaro.org>
In-Reply-To: <20221227-ux500-stm32-hash-v2-5-bc443bc44ca4@linaro.org>
List-ID: linux-crypto@vger.kernel.org

On Tue, Jan 10, 2023 at 08:19:16PM +0100, Linus Walleij wrote:
>
> +static void stm32_hash_emptymsg_fallback(struct ahash_request *req)
> +{
> +	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
> +	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(ahash);
> +	struct stm32_hash_request_ctx *rctx = ahash_request_ctx(req);
> +	struct stm32_hash_dev *hdev = rctx->hdev;
> +	struct crypto_shash *xtfm;
> +	struct shash_desc *sdesc;
> +	size_t len;
> +	int ret;
> +
> +	dev_dbg(hdev->dev, "use fallback message size 0 key size %d\n",
> +		ctx->keylen);
> +	xtfm = crypto_alloc_shash(crypto_ahash_alg_name(ahash),
> +				  0, CRYPTO_ALG_NEED_FALLBACK);
> +	if (IS_ERR(xtfm)) {
> +		dev_err(hdev->dev, "failed to allocate synchronous fallback\n");
> +		return;
> +	}
> +
> +	len = sizeof(*sdesc) + crypto_shash_descsize(xtfm);
> +	sdesc = kmalloc(len, GFP_KERNEL);
> +	if (!sdesc)
> +		goto err_hashkey_sdesc;
> +	sdesc->tfm = xtfm;
> +
> +	if (ctx->keylen) {
> +		ret = crypto_shash_setkey(xtfm, ctx->key, ctx->keylen);
> +		if (ret) {
> +			dev_err(hdev->dev, "failed to set key ret=%d\n", ret);
> +			goto err_hashkey;
> +		}
> +	}
> +
> +	ret = crypto_shash_init(sdesc);
> +	if (ret) {
> +		dev_err(hdev->dev, "shash init error ret=%d\n", ret);
> +		goto err_hashkey;
> +	}
> +
> +	ret = crypto_shash_finup(sdesc, NULL, 0, rctx->digest);
> +	if (ret)
> +		dev_err(hdev->dev, "shash finup error\n");
> +err_hashkey:
> +	kfree(sdesc);
> +err_hashkey_sdesc:
> +	crypto_free_shash(xtfm);
> +}

Calling crypto_alloc_shash is not allowed in this context.  For
example, we might have been called down from the block layer due
to swapping.  Even if you intermediate this with kernel threads,
it still doesn't change the nature of the deadlock.

So if you need a fallback for zero-length messages, just allocate
it unconditionally in the init_tfm function.
Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
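[For context, a rough sketch of what the suggested restructuring could look like: the synchronous fallback shash is allocated once in the transform's init callback, where sleeping GFP_KERNEL allocations are permitted, so the request path never allocates. This is only an illustration; the context field name (xtfm) and the callback names below are assumptions, not the driver's actual code.]

```c
/*
 * Sketch: allocate the zero-length-message fallback unconditionally
 * at tfm init time rather than in the request path.  init_tfm runs
 * in process context before the tfm can be reached from memory
 * reclaim, so crypto_alloc_shash() and GFP_KERNEL are safe here.
 */
static int stm32_hash_init_fallback(struct crypto_tfm *tfm)
{
	struct stm32_hash_ctx *ctx = crypto_tfm_ctx(tfm);
	const char *name = crypto_tfm_alg_name(tfm);

	ctx->xtfm = crypto_alloc_shash(name, 0, CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(ctx->xtfm)) {
		int err = PTR_ERR(ctx->xtfm);

		ctx->xtfm = NULL;
		return err;
	}
	return 0;
}

static void stm32_hash_exit_fallback(struct crypto_tfm *tfm)
{
	struct stm32_hash_ctx *ctx = crypto_tfm_ctx(tfm);

	/* crypto_free_shash() tolerates the tfm having been set up. */
	crypto_free_shash(ctx->xtfm);
}
```

With the fallback held in the ctx, stm32_hash_emptymsg_fallback() would only build the shash_desc (which could also be preallocated alongside it) and run init/finup, with no allocation on the request path.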