From: Mikulas Patocka <mpatocka@redhat.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Peter Zijlstra <peterz@infradead.org>, X86 ML <x86@kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	device-mapper development <dm-devel@redhat.com>
Subject: Re: [PATCH] memcpy_flushcache: use cache flushing for larger lengths
Date: Thu, 16 Apr 2020 04:24:20 -0400 (EDT)
Message-ID: <alpine.LRH.2.02.2004160411460.7833@file01.intranet.prod.int.rdu2.redhat.com>
In-Reply-To: <alpine.LRH.2.02.2004090612320.27517@file01.intranet.prod.int.rdu2.redhat.com>



On Thu, 9 Apr 2020, Mikulas Patocka wrote:

> With dm-writecache on emulated pmem (with the memmap argument), we get
> 
> With the original kernel:
> 8508 - 11378
> real    0m4.960s
> user    0m0.638s
> sys     0m4.312s
> 
> With dm-writecache hacked to use cached writes + clflushopt:
> 8505 - 11378
> real    0m4.151s
> user    0m0.560s
> sys     0m3.582s

I did some multithreaded tests: 
http://people.redhat.com/~mpatocka/testcases/pmem/microbenchmarks/pmem-multithreaded.txt

It turns out that for single-threaded access, write+clwb performs
better, while for multithreaded access, non-temporal stores perform
better.

threads test                                    throughput
1       sequential write-nt 8 bytes             1.3 GB/s
2       sequential write-nt 8 bytes             2.5 GB/s
3       sequential write-nt 8 bytes             2.8 GB/s
4       sequential write-nt 8 bytes             2.8 GB/s
5       sequential write-nt 8 bytes             2.5 GB/s

1       sequential write 8 bytes + clwb         1.6 GB/s
2       sequential write 8 bytes + clwb         2.4 GB/s
3       sequential write 8 bytes + clwb         1.7 GB/s
4       sequential write 8 bytes + clwb         1.2 GB/s
5       sequential write 8 bytes + clwb         0.8 GB/s

With one thread, write-nt 8 bytes reaches 1.3 GB/s while write 8 bytes
+ clwb reaches 1.6 GB/s; with multiple threads, write-nt has the better
throughput.
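
For reference, here is a rough userspace sketch of the two inner loops
being compared (this is not the benchmark code from the URL above;
copy_nt and copy_clwb are made-up names, and it assumes a 64-byte-aligned
destination; build with gcc -O2 -mclwb on x86-64):

#include <immintrin.h>
#include <stddef.h>

/* variant 1: 8-byte non-temporal stores (movnti); the data bypasses
   the cache and goes through write-combining buffers, so a single
   sfence at the end orders the whole copy */
static void copy_nt(long long *dst, const long long *src, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		_mm_stream_si64(&dst[i], src[i]);
	_mm_sfence();
}

/* variant 2: ordinary cached stores, then clwb on each cache line;
   unlike clflush/clflushopt, clwb may keep the line in the cache,
   which helps if the same line is touched again soon */
static void copy_clwb(long long *dst, const long long *src, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		dst[i] = src[i];
	for (i = 0; i < n; i += 64 / sizeof(long long))
		_mm_clwb(&dst[i]);
	_mm_sfence();
}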

The dm-writecache target is single-threaded (all the copying is done
while holding the writecache lock), so it benefits from clwb.

Should memcpy_flushcache be changed to write+clwb? Or are there some 
multithreaded users of memcpy_flushcache that would be hurt by this 
change?
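
For illustration, the write+clwb variant I have in mind would look
roughly like this (a sketch only, not an actual patch;
memcpy_flushcache_clwb is a made-up name, and it ignores the
unaligned head/tail handling a real implementation would need):

#include <linux/string.h>
#include <asm/processor.h>	/* boot_cpu_data */
#include <asm/special_insns.h>	/* clwb() */
#include <asm/barrier.h>	/* wmb() */

/* cached copy + clwb per cache line; assumes dst is cache-line
   aligned and size is a multiple of the cache line size */
static void memcpy_flushcache_clwb(void *dst, const void *src, size_t size)
{
	size_t off;
	size_t line = boot_cpu_data.x86_clflush_size;

	memcpy(dst, src, size);		/* ordinary cached stores */
	for (off = 0; off < size; off += line)
		clwb(dst + off);
	wmb();	/* sfence: order the flushes before later stores */
}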

Mikulas

Thread overview: 21+ messages
2020-04-07 15:01 [PATCH] memcpy_flushcache: use cache flushing for larger lengths Mikulas Patocka
2020-04-07 16:09 ` Andy Lutomirski
2020-04-07 16:33   ` Mikulas Patocka
2020-04-07 17:52 ` Dan Williams
2020-04-08 18:54   ` Mikulas Patocka
2020-04-08 19:29     ` Dan Williams
2020-04-09 14:36       ` Mikulas Patocka
2020-04-16  8:24         ` Mikulas Patocka [this message]
2020-04-16 18:28           ` Dan Williams
2020-04-17 12:47             ` [PATCH] x86: introduce memcpy_flushcache_clflushopt Mikulas Patocka
2020-04-17 17:57               ` Dan Williams
2020-04-17 20:45                 ` Thomas Gleixner
2020-04-20 13:47                   ` [PATCH v2] x86: introduce memcpy_flushcache_single Mikulas Patocka
2020-04-21 18:43                     ` Dan Williams
2020-04-18 13:27               ` [PATCH] x86: introduce memcpy_flushcache_clflushopt David Laight
2020-04-18 15:21                 ` Mikulas Patocka
2020-04-19 17:48                   ` David Laight
2020-04-20  4:49                     ` Dan Williams
  -- strict thread matches above, loose matches on Subject: below --
2020-03-29 20:28 [PATCH] memcpy_flushcache: use cache flushing for larger lengths Mikulas Patocka