From: David Laight <David.Laight@ACULAB.COM>
To: 'Borislav Petkov' <bp@alien8.de>,
	Noah Goldstein <goldstein.w.n@gmail.com>,
	Linus Torvalds <torvalds@linux-foundation.org>
Cc: "x86@kernel.org" <x86@kernel.org>,
	"edumazet@google.com" <edumazet@google.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"hpa@zytor.com" <hpa@zytor.com>,
	lkml <linux-kernel@vger.kernel.org>
Subject: RE: x86/csum: Remove unnecessary odd handling
Date: Thu, 29 Jun 2023 14:04:30 +0000	[thread overview]
Message-ID: <a6fce3b915e04125b15aa33317ce07ff@AcuMS.aculab.com> (raw)
In-Reply-To: <20230628091241.GAZJv5ie0xVGvnMKIM@fat_crate.local>

From: Borislav Petkov
> Sent: 28 June 2023 10:13
> 
> + Linus who's been poking at this yesterday.
> 
> + lkml. Please always CC lkml when sending patches.
> 
> On Tue, Jun 27, 2023 at 09:06:57PM -0500, Noah Goldstein wrote:
> > The special case for odd-aligned buffers is unnecessary and mostly
> > just adds overhead. Aligned buffers are the expectation, and even for
> > unaligned buffers the only case that is helped is a buffer that is
> > 1 byte off word alignment, which is ~1/7 of the cases. Overall it seems
> > highly unlikely to be worth the extra branch.
> >
> > It was left in the previous perf improvement patch because I was
> > erroneously comparing the exact output of `csum_partial(...)`, but
> > really we only need `csum_fold(csum_partial(...))` to match, so it's
> > safe to remove.

I'm sure I've suggested this before.
The 'odd' check was needed by an earlier implementation.

Misaligned buffers are (just about) measurably slower.
But it is pretty much noise, and the extra code in the
aligned case will cost more.

It is pretty much impossible to find out what the cpu is doing,
but if you do misaligned accesses to a PCIe target you can
(with suitable hardware) look at the generated TLPs.

What that shows is misaligned transfers being done in 8-byte
chunks and being split into two TLPs if they cross a 64-byte
(probably cache line) boundary.

It is likely that the same happens for cached accesses.
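The per-line cost of that splitting is bounded: of the eight 8-byte
reads that cover a 64-byte line, at most one straddles a line
boundary, whatever the misalignment. A throwaway C check of that
claim (illustration only, nothing from the kernel tree):

```c
#include <assert.h>

/* Count how many 8-byte reads, starting at 'offset' into a buffer,
 * cross a 64-byte boundary over one line's worth of data. */
static int crossings(unsigned offset)
{
	int n = 0;

	for (unsigned a = offset; a < offset + 64; a += 8)
		if (a / 64 != (a + 7) / 64)	/* first and last byte in different lines? */
			n++;
	return n;
}
```

So a misaligned buffer costs at most one extra split per cache line
of data.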

Given that the cpu can do two memory reads each clock,
it isn't surprising that the checksum loop (which doesn't
even manage a read every clock) is slower by less than
one clock per cache line.
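That loop is, in essence, a 64-bit ones-complement sum. A stand-alone
C sketch (mine, not the kernel's asm; it assumes a little-endian
machine and restricts the length to a multiple of 8 to keep it short):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One 8-byte read and one add-with-carry per iteration: even at one
 * iteration per clock this loads only 8 bytes/clock, half of what a
 * cpu that can issue two reads per clock could stream. */
static uint16_t csum64(const void *buf, size_t len)
{
	const unsigned char *p = buf;
	uint64_t sum = 0;

	assert(len % 8 == 0);		/* sketch only: no tail handling */
	for (; len >= 8; len -= 8, p += 8) {
		uint64_t v;

		memcpy(&v, p, 8);	/* misaligned-safe read */
		sum += v;
		if (sum < v)		/* end-around carry */
			sum++;
	}
	/* Fold the 64-bit ones-complement sum down to 16 bits. */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Reference: sum the same bytes as 16-bit words. */
static uint16_t csum16(const void *buf, size_t len)
{
	const unsigned char *p = buf;
	uint32_t sum = 0;

	for (; len >= 2; len -= 2, p += 2) {
		uint16_t v;

		memcpy(&v, p, 2);
		sum += v;
	}
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}
```

On a little-endian machine both fold to the same 16-bit value, which
is why the accumulation width is free to be whatever the cpu streams
fastest.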

Someone might also want to use the 'arc' C version of csum_fold()
on pretty much every architecture [1].
It is:
	return (~sum - ror32(sum, 16)) >> 16;
significantly better than the x86 asm (even on more recent
cpus that don't take 2 clocks for an 'adc').

[1] arm can do a bit better because of the barrel shifter.
    sparc is slower because it has a carry flag but no rotate.
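The 'arc' expression and the traditional two-step fold can be checked
against each other in plain C (ror32() reimplemented here, and the
function names are mine, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Rotate right, as the kernel's ror32() helper does. */
static inline uint32_t ror32(uint32_t word, unsigned int shift)
{
	return (word >> (shift & 31)) | (word << ((32 - shift) & 31));
}

/* Traditional fold: add the halves, absorb the carry, complement. */
static inline uint16_t csum_fold_classic(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/* The 'arc' version: the low half of the subtraction adds the two
 * 16-bit halves (one complemented), and its borrow performs the
 * end-around carry into the top half, which then holds the
 * complemented fold. */
static inline uint16_t csum_fold_arc(uint32_t sum)
{
	return (~sum - ror32(sum, 16)) >> 16;
}
```

Three dependent ALU ops and no carry-flag dependency, which is why it
beats the adc-based x86 sequence.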

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

Thread overview: 33+ messages
     [not found] <20230628020657.957880-1-goldstein.w.n@gmail.com>
2023-06-28  9:12 ` x86/csum: Remove unnecessary odd handling Borislav Petkov
2023-06-28 15:32   ` Noah Goldstein
2023-06-28 17:44     ` Linus Torvalds
2023-06-28 18:34       ` Noah Goldstein
2023-06-28 20:02         ` Linus Torvalds
2023-06-29 14:04   ` David Laight [this message]
2023-06-29 14:27   ` David Laight
2023-09-01 22:21 ` Noah Goldstein
2023-09-06 13:49   ` David Laight
2023-09-06 14:38   ` David Laight
2023-09-20 19:20     ` Noah Goldstein
2023-09-20 19:23 ` Noah Goldstein
2023-09-23  3:24   ` kernel test robot
2023-09-23 14:05     ` Noah Goldstein
2023-09-23 21:13       ` David Laight
2023-09-24 14:35         ` Noah Goldstein
2023-12-23 22:18           ` Noah Goldstein
2024-01-04 23:28             ` Noah Goldstein
2024-01-04 23:34               ` Dave Hansen
2024-01-04 23:36               ` Linus Torvalds
2024-01-05  0:33                 ` Linus Torvalds
2024-01-05 10:41                   ` David Laight
2024-01-05 16:12                     ` David Laight
2024-01-05 18:05                     ` Linus Torvalds
2024-01-05 23:52                       ` David Laight
2024-01-06  0:18                         ` Linus Torvalds
2024-01-06 10:26                           ` Eric Dumazet
2024-01-06 19:32                             ` Linus Torvalds
2024-01-07 12:11                             ` David Laight
2024-01-06 22:08                       ` David Laight
2024-01-07  1:09                         ` H. Peter Anvin
2024-01-07 11:44                           ` David Laight
2023-09-24 14:35 ` Noah Goldstein
