On Fri, May 27, 2022 at 11:15 AM Jason A. Donenfeld wrote:
>
> Hi Ingo,
>
> On 5/27/22, Ingo Molnar wrote:
> >
> > * Jason A. Donenfeld wrote:
> >
> >> On Mon, May 23, 2022 at 10:03:45AM -0600, Jens Axboe wrote:
> >> > clear_user()
> >> >    32     ~96MB/sec
> >> >    64     195MB/sec
> >> >   128     386MB/sec
> >> >    1k     2.7GB/sec
> >> >    4k     7.8GB/sec
> >> >   16k    14.8GB/sec
> >> >
> >> > copy_from_zero_page()
> >> >    32     ~96MB/sec
> >> >    64     193MB/sec
> >> >   128     383MB/sec
> >> >    1k     2.9GB/sec
> >> >    4k     9.8GB/sec
> >> >   16k    21.8GB/sec
> >>
> >> Just FYI, on x86, Samuel Neves proposed some nice clear_user()
> >> performance improvements that were forgotten about:
> >>
> >> https://lore.kernel.org/lkml/20210523180423.108087-1-sneves@dei.uc.pt/
> >> https://lore.kernel.org/lkml/Yk9yBcj78mpXOOLL@zx2c4.com/
> >>
> >> Hoping somebody picks this up at some point...
> >
> > Those ~2x speedup numbers are indeed looking very nice:
> >
> >  | After this patch, on a Skylake CPU, these are the
> >  | before/after figures:
> >  |
> >  |   $ dd if=/dev/zero of=/dev/null bs=1024k status=progress
> >  |   94402248704 bytes (94 GB, 88 GiB) copied, 6 s, 15.7 GB/s
> >  |
> >  |   $ dd if=/dev/zero of=/dev/null bs=1024k status=progress
> >  |   446476320768 bytes (446 GB, 416 GiB) copied, 15 s, 29.8 GB/s
> >
> > Patch fell through the cracks & it doesn't apply anymore:
> >
> >   checking file arch/x86/lib/usercopy_64.c
> >   Hunk #2 FAILED at 17.
> >   1 out of 2 hunks FAILED
> >
> > Would be nice to re-send it.
>
> I don't think Samuel is going to do that at this point, so I think
> it's probably best if you do it.

For what it's worth, these are the benchmarks I did at the time, comparing
the various code paths (generic loop, rep stosd + stosb, rep stosd + final
byte loop, rep stosb) for sizes from 0 to 4096 bytes (and then up to 32k in
larger increments) on the following chips:

- Atom C2550 - Avoton
- Xeon E3-1240 v3 - Haswell
- EPYC 7401P - Zen 1
- EPYC 7402P - Zen 2
- Xeon D-1537 - Broadwell
- Core i7-6700HQ - Skylake
- Core i7-3770 - Ivy Bridge
- Xeon Gold 5120 - Skylake-SP

The fields are: bytes, cycles for the generic loop, cycles for rep stosd +
stosb, cycles for rep stosd + a final byte loop, and cycles for rep stosb.
There are numbers for both an aligned (to a cache line) and an unaligned
(cache line + 1) destination buffer, as alignment seems to be relevant for
rep stos performance.

Make of that what you will; as Jason said, I'm not particularly interested
in reviving this.

Samuel.

> Jason
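
[Editor's note: for readers who want to see what those four variants look
like in practice, below is a minimal userspace sketch. It is neither the
kernel patch nor the benchmark harness that produced the numbers above;
the function names and the small self-check in main() are purely
illustrative, and it assumes an x86-64 GCC/Clang toolchain.]

/*
 * Illustrative userspace sketch of the four clearing strategies compared
 * above: a generic byte loop, rep stosd followed by rep stosb for the
 * tail, rep stosd followed by a byte loop for the tail, and plain
 * rep stosb.  x86-64 only, GCC/Clang extended inline asm.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Generic C byte loop, the simplest fallback. */
static void zero_loop(void *dst, size_t len)
{
	unsigned char *p = dst;

	while (len--)
		*p++ = 0;
}

/* rep stosd (4-byte stores) for the bulk, rep stosb for the tail. */
static void zero_stosd_stosb(void *dst, size_t len)
{
	size_t dwords = len >> 2;
	size_t tail = len & 3;

	asm volatile("rep stosl\n\t"		/* AT&T spelling of stosd */
		     "mov %[tail], %%rcx\n\t"
		     "rep stosb"
		     : "+D" (dst), "+c" (dwords)
		     : "a" (0), [tail] "r" (tail)
		     : "memory");
}

/* rep stosd for the bulk, plain C byte loop for the tail. */
static void zero_stosd_tail_loop(void *dst, size_t len)
{
	size_t dwords = len >> 2;

	asm volatile("rep stosl"		/* rdi ends up past the dwords */
		     : "+D" (dst), "+c" (dwords)
		     : "a" (0)
		     : "memory");
	zero_loop(dst, len & 3);
}

/* A single rep stosb over the whole length (what ERMS-capable CPUs like). */
static void zero_stosb(void *dst, size_t len)
{
	asm volatile("rep stosb"
		     : "+D" (dst), "+c" (len)
		     : "a" (0)
		     : "memory");
}

/* Tiny self-check: clear 4096 bytes at a cacheline-aligned and a +1 offset. */
int main(void)
{
	static unsigned char buf[4096 + 64] __attribute__((aligned(64)));
	void (*const fns[])(void *, size_t) = {
		zero_loop, zero_stosd_stosb, zero_stosd_tail_loop, zero_stosb,
	};

	for (size_t f = 0; f < sizeof(fns) / sizeof(fns[0]); f++) {
		for (size_t off = 0; off < 2; off++) {
			memset(buf, 0xff, sizeof(buf));
			fns[f](buf + off, 4096);
			for (size_t i = 0; i < 4096; i++) {
				if (buf[off + i] != 0) {
					printf("variant %zu, offset %zu: FAIL\n", f, off);
					return 1;
				}
			}
		}
	}
	puts("all variants cleared the buffer");
	return 0;
}

[The +1 offset in the self-check mirrors the unaligned (cache line + 1)
destination case Samuel mentions, which is where rep stos implementations
tend to diverge the most.]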