From: Robin Murphy <robin.murphy@arm.com>
To: David Laight <David.Laight@ACULAB.COM>,
'Will Deacon' <will.deacon@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"ilias.apalodimas@linaro.org" <ilias.apalodimas@linaro.org>,
Zhangshaokun <zhangshaokun@hisilicon.com>,
"huanglingyan \(A\)" <huanglingyan2@huawei.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"steve.capper@arm.com" <steve.capper@arm.com>
Subject: Re: [PATCH] arm64: do_csum: implement accelerated scalar version
Date: Wed, 15 May 2019 13:39:47 +0100
Message-ID: <083f8222-971c-0d8e-4650-0d88b193e316@arm.com>
In-Reply-To: <9f72aecd99e74c1a939df6562ed9c18c@AcuMS.aculab.com>

On 15/05/2019 12:13, David Laight wrote:
> From: Robin Murphy
>> Sent: 15 May 2019 11:58
>> To: David Laight; 'Will Deacon'
>> Cc: Zhangshaokun; Ard Biesheuvel; linux-arm-kernel@lists.infradead.org; netdev@vger.kernel.org;
>> ilias.apalodimas@linaro.org; huanglingyan (A); steve.capper@arm.com
>> Subject: Re: [PATCH] arm64: do_csum: implement accelerated scalar version
>>
>> On 15/05/2019 11:15, David Laight wrote:
>>> ...
>>>>> ptr = (u64 *)(buff - offset);
>>>>> shift = offset * 8;
>>>>>
>>>>> /*
>>>>> * Head: zero out any excess leading bytes. Shifting back by the same
>>>>> * amount should be at least as fast as any other way of handling the
>>>>> * odd/even alignment, and means we can ignore it until the very end.
>>>>> */
>>>>> data = *ptr++;
>>>>> #ifdef __LITTLE_ENDIAN
>>>>> data = (data >> shift) << shift;
>>>>> #else
>>>>> data = (data << shift) >> shift;
>>>>> #endif
>>>
>>> I suspect that
>>> #ifdef __LITTLE_ENDIAN
>>> data &= ~0ull << shift;
>>> #else
>>> data &= ~0ull >> shift;
>>> #endif
>>> is likely to be better.
>>
>> Out of interest, better in which respects? For the A64 ISA at least,
>> that would take 3 instructions plus an additional scratch register, e.g.:
>>
>> MOV x2, #~0
>> LSL x2, x2, x1
>> AND x0, x0, x1
[That should have been "AND x0, x1, x2", obviously...]
>>
>> (alternatively "AND x0, x0, x1 LSL x2" to save 4 bytes of code, but that
>> will typically take as many cycles if not more than just pipelining the
>> two 'simple' ALU instructions)
>>
>> Whereas the original is just two shift instructions in-place.
>>
>> LSR x0, x0, x1
>> LSL x0, x0, x1
>>
>> If the operation were repeated, the constant generation could certainly
>> be amortised over multiple subsequent ANDs for a net win, but that isn't
>> the case here.
>
> On a superscalar processor you reduce the register dependency
> chain by one instruction.
> The original code is pretty much a single dependency chain so
> you are likely to be able to generate the mask 'for free'.
Gotcha, although 'free' still means additional I$ and register rename
footprint, vs. (typically) just 1 extra cycle to forward an ALU result.
It's an interesting consideration, but in our case there are almost
certainly far more little in-order cores out in the wild than big OoO
ones, and the double-shift will always be objectively better for those.
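
For anyone following along, here is a minimal standalone C sketch of the
two head-masking variants being compared (little-endian case only; the
function names are illustrative, not from the patch):

	#include <stdint.h>

	/* Variant 1: double shift, as in the quoted patch code.
	 * Two dependent shift instructions, no scratch register needed. */
	static inline uint64_t mask_head_shifts(uint64_t data, unsigned int shift)
	{
		return (data >> shift) << shift;
	}

	/* Variant 2: AND with a generated mask, as suggested above.
	 * One extra instruction and scratch register to build the mask,
	 * but the mask generation sits off the main dependency chain. */
	static inline uint64_t mask_head_and(uint64_t data, unsigned int shift)
	{
		return data & (~0ull << shift);
	}

Both give identical results for the shift values that can arise here
(offset is 0-7 bytes, so shift is 0-56); the trade-off is purely the
dependency structure and code size discussed above.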
Thanks,
Robin.