From: Nathan Huckleberry <nhuck@google.com>
To: Eric Biggers <ebiggers@kernel.org>
Cc: linux-crypto@vger.kernel.org,
Herbert Xu <herbert@gondor.apana.org.au>,
"David S. Miller" <davem@davemloft.net>,
linux-arm-kernel@lists.infradead.org,
Paul Crowley <paulcrowley@google.com>,
Sami Tolvanen <samitolvanen@google.com>,
Ard Biesheuvel <ardb@kernel.org>
Subject: Re: [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
Date: Mon, 4 Apr 2022 20:55:54 -0500
Message-ID: <CAJkfWY7gU7ZfziTmUbRKkWH4yLW6mQ3MrWZ6-UQW=EfBG0FtXg@mail.gmail.com>
In-Reply-To: <YjvLZkjW2ts8qDfr@sol.localdomain>
On Wed, Mar 23, 2022 at 8:37 PM Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Tue, Mar 15, 2022 at 11:00:34PM +0000, Nathan Huckleberry wrote:
> > Add hardware accelerated version of POLYVAL for ARM64 CPUs with
> > Crypto Extension support.
>
> Nit: It's "Crypto Extensions", not "Crypto Extension".
>
> > +config CRYPTO_POLYVAL_ARM64_CE
> > + tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
> > + depends on KERNEL_MODE_NEON
> > + select CRYPTO_CRYPTD
> > + select CRYPTO_HASH
> > + select CRYPTO_POLYVAL
>
> CRYPTO_POLYVAL selects CRYPTO_HASH already, so there's no need to select it
> here.
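Will drop it; the entry then reduces to:

```kconfig
config CRYPTO_POLYVAL_ARM64_CE
	tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
	depends on KERNEL_MODE_NEON
	select CRYPTO_CRYPTD
	select CRYPTO_POLYVAL
```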
>
> > +/*
> > + * Perform polynomial evaluation as specified by POLYVAL. This computes:
> > + * h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
> > + * where n=nblocks, h is the hash key, and m_i are the message blocks.
> > + *
> > + * x0 - pointer to message blocks
> > + * x1 - pointer to precomputed key powers h^8 ... h^1
> > + * x2 - number of blocks to hash
> > + * x3 - pointer to accumulator
> > + *
> > + * void pmull_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
> > + * size_t nblocks, u8 *accumulator);
> > + */
> > +SYM_FUNC_START(pmull_polyval_update)
> > + adr TMP, .Lgstar
> > + ld1 {GSTAR.2d}, [TMP]
> > + ld1 {SUM.16b}, [x3]
> > + ands PARTIAL_LEFT, BLOCKS_LEFT, #7
> > + beq .LskipPartial
> > + partial_stride
> > +.LskipPartial:
> > + subs BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> > + blt .LstrideLoopExit
> > + ld1 {KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [x1], #64
> > + ld1 {KEY4.16b, KEY3.16b, KEY2.16b, KEY1.16b}, [x1], #64
> > + full_stride 0
> > + subs BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> > + blt .LstrideLoopExitReduce
> > +.LstrideLoop:
> > + full_stride 1
> > + subs BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> > + bge .LstrideLoop
> > +.LstrideLoopExitReduce:
> > + montgomery_reduction
> > + mov SUM.16b, PH.16b
> > +.LstrideLoopExit:
> > + st1 {SUM.16b}, [x3]
> > + ret
> > +SYM_FUNC_END(pmull_polyval_update)
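(For anyone following along: here's a from-scratch Python model of what this update routine computes, written from the RFC 8452 definitions rather than from this assembly. All names below are mine, not the kernel's.)

```python
# Reference model of a POLYVAL update (RFC 8452) in pure Python.
# Field: GF(2^128) mod x^128 + x^127 + x^126 + x^121 + 1, little-endian blocks.

P = (1 << 128) | (1 << 127) | (1 << 126) | (1 << 121) | 1

def gf_mul(a, b):
    """Multiply two field elements: carryless multiply, then reduce mod P."""
    r = 0
    for i in range(128):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(254, 127, -1):  # fold bits 254..128 back below 128
        if (r >> i) & 1:
            r ^= P << (i - 128)
    return r

def gf_inv(a):
    """a^(2^128 - 2) by square-and-multiply (Fermat inverse)."""
    result, e = 1, (1 << 128) - 2
    while e:
        if e & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        e >>= 1
    return result

X128 = P ^ (1 << 128)   # x^128 reduced mod P
INV128 = gf_inv(X128)   # x^-128

def dot(a, b):
    """POLYVAL's product a * b * x^-128 -- the Montgomery-style reduction
    that montgomery_reduction performs in the assembly."""
    return gf_mul(gf_mul(a, b), INV128)

def le(block):
    """POLYVAL maps 16-byte blocks to field elements little-endian."""
    return int.from_bytes(block, "little")

def polyval_update(blocks, h, accumulator):
    """Horner form of h^n*acc + h^n*m_0 + ... + h^1*m_{n-1}."""
    acc, hh = le(accumulator), le(h)
    for m in blocks:
        acc = dot(acc ^ le(m), hh)
    return acc.to_bytes(16, "little")
```

If I've kept the bit and byte conventions straight, running this with a zero accumulator and a single block matches the POLYVAL(H, X_1) test vector in RFC 8452, which is a decent sanity check for the ordering.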
>
> Is there a reason why partial_stride is done first in the arm64 implementation,
> but last in the x86 implementation? It would be nice if the implementations
> worked the same way. Probably last would be better? What is the advantage of
> doing it first?
It was so I could return early without loading the key powers into
registers, since they're only needed when there's at least one full
stride. I've rewritten it to work the same way as the x86
implementation.
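To make the stride structure concrete, here's a rough Python model of one 8-block full stride against the precomputed key powers (names are mine; note dot() here reduces every product, whereas the assembly defers the Montgomery reduction to the end of the stride -- the two are equivalent):

```python
# Model of one "full stride": 8 message blocks folded in against the
# precomputed powers h^8..h^1 that x1 points at. Field helpers as in the
# RFC 8452 reference model.

P = (1 << 128) | (1 << 127) | (1 << 126) | (1 << 121) | 1

def gf_mul(a, b):
    # carryless multiply, then reduce mod P
    r = 0
    for i in range(128):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(254, 127, -1):
        if (r >> i) & 1:
            r ^= P << (i - 128)
    return r

def gf_inv(a):
    # a^(2^128 - 2) by square-and-multiply
    result, e = 1, (1 << 128) - 2
    while e:
        if e & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        e >>= 1
    return result

INV128 = gf_inv(P ^ (1 << 128))  # x^-128

def dot(a, b):
    return gf_mul(gf_mul(a, b), INV128)

STRIDE = 8

def key_powers(h):
    """[h^8, ..., h^1] under dot(), like the table KEY8..KEY1 is loaded from."""
    pows = [h]
    for _ in range(STRIDE - 1):
        pows.append(dot(pows[-1], h))
    return pows[::-1]

def full_stride(acc, chunk, pows):
    """(acc + m_0)*h^8 + m_1*h^7 + ... + m_7*h^1 in one pass over 8 blocks.
    The assembly accumulates the unreduced products and reduces once;
    reducing each term, as here, gives the same result."""
    s = dot(acc ^ chunk[0], pows[0])
    for i in range(1, STRIDE):
        s ^= dot(chunk[i], pows[i])
    return s
```

Eight plain one-block Horner steps (acc = dot(acc ^ m, h) per block) land on the same value, which is the equivalence the strided loop relies on.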
>
> Besides that, many of the comments I made on the x86 implementation apply to the
> arm64 implementation too.
>
> - Eric