From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 5 May 2022 22:41:00 -0700
From: Eric Biggers
To: Nathan Huckleberry
Cc: linux-crypto@vger.kernel.org, linux-fscrypt@vger.kernel.org,
	Herbert Xu, "David S. Miller", linux-arm-kernel@lists.infradead.org,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel
Subject: Re: [PATCH v6 6/9] crypto: arm64/aes-xctr: Improve readability of
	XCTR and CTR modes
References: <20220504001823.2483834-1-nhuck@google.com>
	<20220504001823.2483834-7-nhuck@google.com>
In-Reply-To: <20220504001823.2483834-7-nhuck@google.com>
X-Mailing-List: linux-crypto@vger.kernel.org

On Wed, May 04, 2022 at 12:18:20AM +0000, Nathan Huckleberry wrote:
> Added some clarifying comments, changed the register allocations to make
> the code clearer, and added register aliases.
>
> Signed-off-by: Nathan Huckleberry

I was a bit surprised to see this after the xctr support patch rather than
before.  Doing the cleanup first would make adding and reviewing the xctr
support easier.  But it's not a big deal; if you already tested it this way
you can just leave it as-is if you want.

A few minor comments below.

> +	/*
> +	 * Set up the counter values in v0-v4.
> +	 *
> +	 * If we are encrypting less than MAX_STRIDE blocks, the tail block
> +	 * handling code expects the last keystream block to be in v4.  For
> +	 * example: if encrypting two blocks with MAX_STRIDE=5, then v3 and v4
> +	 * should have the next two counter blocks.
> +	 */

The first two mentions of v4 should actually be v{MAX_STRIDE-1}, as it is
actually v4 for MAX_STRIDE==5 and v3 for MAX_STRIDE==4.
> @@ -355,16 +383,16 @@ AES_FUNC_END(aes_cbc_cts_decrypt)
> 	mov	v3.16b, vctr.16b
> ST5(	mov	v4.16b, vctr.16b	)
> 	.if \xctr
> -	sub	x6, x11, #MAX_STRIDE - 1
> -	sub	x7, x11, #MAX_STRIDE - 2
> -	sub	x8, x11, #MAX_STRIDE - 3
> -	sub	x9, x11, #MAX_STRIDE - 4
> -ST5(	sub	x10, x11, #MAX_STRIDE - 5	)
> -	eor	x6, x6, x12
> -	eor	x7, x7, x12
> -	eor	x8, x8, x12
> -	eor	x9, x9, x12
> -	eor	x10, x10, x12
> +	sub	x6, CTR, #MAX_STRIDE - 1
> +	sub	x7, CTR, #MAX_STRIDE - 2
> +	sub	x8, CTR, #MAX_STRIDE - 3
> +	sub	x9, CTR, #MAX_STRIDE - 4
> +ST5(	sub	x10, CTR, #MAX_STRIDE - 5	)
> +	eor	x6, x6, IV_PART
> +	eor	x7, x7, IV_PART
> +	eor	x8, x8, IV_PART
> +	eor	x9, x9, IV_PART
> +	eor	x10, x10, IV_PART

The eor into x10 should be enclosed by ST5(), since it's dead code otherwise.

> +	/*
> +	 * If there are at least MAX_STRIDE blocks left, XOR the plaintext with
> +	 * keystream and store.  Otherwise jump to tail handling.
> +	 */

Technically this could be XOR-ing with either the plaintext or the ciphertext.
Maybe write "data" instead.

> .Lctrtail1x\xctr:
> -	sub	x7, x6, #16
> -	csel	x6, x6, x7, eq
> -	add	x1, x1, x6
> -	add	x0, x0, x6
> -	ld1	{v5.16b}, [x1]
> -	ld1	{v6.16b}, [x0]
> +	/*
> +	 * Handle <= 16 bytes of plaintext
> +	 */
> +	sub	x8, x7, #16
> +	csel	x7, x7, x8, eq
> +	add	IN, IN, x7
> +	add	OUT, OUT, x7
> +	ld1	{v5.16b}, [IN]
> +	ld1	{v6.16b}, [OUT]
> ST5(	mov	v3.16b, v4.16b	)
> 	encrypt_block	v3, w3, x2, x8, w7

w3 and x2 should be ROUNDS_W and KEY, respectively.

This code also has the very unusual property that it reads and writes before
the beginning of the buffers given.  Specifically, for bytes < 16, it accesses
the 16 bytes beginning at &in[bytes - 16] and &dst[bytes - 16].  Mentioning
this explicitly would be very helpful, particularly in the function comments
for aes_ctr_encrypt() and aes_xctr_encrypt(), and maybe in the C code, so that
anyone calling these functions has this in mind.
Anyway, with the above addressed feel free to add:

Reviewed-by: Eric Biggers

- Eric