nvdimm.lists.linux.dev archive mirror
* Re: [dm-devel] [PATCH] dm-writecache
       [not found] <alpine.LRH.2.02.1803080824210.2819@file01.intranet.prod.int.rdu2.redhat.com>
@ 2018-03-08 14:51 ` Christoph Hellwig
  2018-03-08 17:08   ` Dan Williams
  2018-03-09  3:26   ` [dm-devel] [PATCH] dm-writecache Mikulas Patocka
  0 siblings, 2 replies; 13+ messages in thread
From: Christoph Hellwig @ 2018-03-08 14:51 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Mike Snitzer, dm-devel, Alasdair G. Kergon, linux-nvdimm

> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6/drivers/md/dm-writecache.c	2018-03-08 14:23:31.059999000 +0100
> @@ -0,0 +1,2417 @@
> +#include <linux/device-mapper.h>

missing copyright statement, or, for the new-fashioned, an SPDX statement.

> +#define WRITEBACK_FUA			true

no business having this around.

> +#ifndef bio_set_dev
> +#define	bio_set_dev(bio, dev)	((bio)->bi_bdev = (dev))
> +#endif
> +#ifndef timer_setup
> +#define timer_setup(t, c, f)	setup_timer(t, c, (unsigned long)(t))
> +#endif

no business in mainline.

> +/*
> + * On X86, non-temporal stores are more efficient than cache flushing.
> + * On ARM64, cache flushing is more efficient.
> + */
> +#if defined(CONFIG_X86_64)
> +#define NT_STORE(dest, src)				\
> +do {							\
> +	typeof(src) val = (src);			\
> +	memcpy_flushcache(&(dest), &val, sizeof(src));	\
> +} while (0)
> +#define COMMIT_FLUSHED()	wmb()
> +#else
> +#define NT_STORE(dest, src)	WRITE_ONCE(dest, src)
> +#define FLUSH_RANGE		dax_flush
> +#define COMMIT_FLUSHED()	do { } while (0)
> +#endif

Please use proper APIs for this, this has no business in a driver.

And that's it for now.  This is clearly not submission ready, and I
should get back to my backlog of other things.

* Re: [dm-devel] [PATCH] dm-writecache
  2018-03-08 14:51 ` [dm-devel] [PATCH] dm-writecache Christoph Hellwig
@ 2018-03-08 17:08   ` Dan Williams
  2018-03-12  7:48     ` Christoph Hellwig
  2018-05-18 15:44     ` dm-writecache Mike Snitzer
  2018-03-09  3:26   ` [dm-devel] [PATCH] dm-writecache Mikulas Patocka
  1 sibling, 2 replies; 13+ messages in thread
From: Dan Williams @ 2018-03-08 17:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Mike Snitzer, Mikulas Patocka, device-mapper development,
	Alasdair G. Kergon, linux-nvdimm

On Thu, Mar 8, 2018 at 6:51 AM, Christoph Hellwig <hch@infradead.org> wrote:
>> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
>> +++ linux-2.6/drivers/md/dm-writecache.c      2018-03-08 14:23:31.059999000 +0100
>> @@ -0,0 +1,2417 @@
>> +#include <linux/device-mapper.h>
>
> missing copyright statement, or, for the new-fashioned, an SPDX statement.
>
>> +#define WRITEBACK_FUA                        true
>
> no business having this around.
>
>> +#ifndef bio_set_dev
>> +#define      bio_set_dev(bio, dev)   ((bio)->bi_bdev = (dev))
>> +#endif
>> +#ifndef timer_setup
>> +#define timer_setup(t, c, f) setup_timer(t, c, (unsigned long)(t))
>> +#endif
>
> no business in mainline.
>
>> +/*
>> + * On X86, non-temporal stores are more efficient than cache flushing.
>> + * On ARM64, cache flushing is more efficient.
>> + */
>> +#if defined(CONFIG_X86_64)
>> +#define NT_STORE(dest, src)                          \
>> +do {                                                 \
>> +     typeof(src) val = (src);                        \
>> +     memcpy_flushcache(&(dest), &val, sizeof(src));  \
>> +} while (0)
>> +#define COMMIT_FLUSHED()     wmb()
>> +#else
>> +#define NT_STORE(dest, src)  WRITE_ONCE(dest, src)
>> +#define FLUSH_RANGE          dax_flush
>> +#define COMMIT_FLUSHED()     do { } while (0)
>> +#endif
>
> Please use proper APIs for this, this has no business in a driver.

I had the same feedback, and Mikulas sent this useful enhancement to
the memcpy_flushcache API:

    https://patchwork.kernel.org/patch/10217655/

...it's in my queue to either push through -tip or add it to the next
libnvdimm pull request for 4.17-rc1.

* Re: [dm-devel] [PATCH] dm-writecache
  2018-03-08 14:51 ` [dm-devel] [PATCH] dm-writecache Christoph Hellwig
  2018-03-08 17:08   ` Dan Williams
@ 2018-03-09  3:26   ` Mikulas Patocka
  2018-03-12  7:50     ` Christoph Hellwig
  1 sibling, 1 reply; 13+ messages in thread
From: Mikulas Patocka @ 2018-03-09  3:26 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Mike Snitzer, dm-devel, Alasdair G. Kergon, linux-nvdimm



On Thu, 8 Mar 2018, Christoph Hellwig wrote:

> > --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> > +++ linux-2.6/drivers/md/dm-writecache.c	2018-03-08 14:23:31.059999000 +0100
> > @@ -0,0 +1,2417 @@
> > +#include <linux/device-mapper.h>
> 
> missing copyright statement, or, for the new-fashioned, an SPDX statement.
> 
> > +#define WRITEBACK_FUA			true
> 
> no business having this around.

It's the default setting of the flag wc->writeback_fua (it can be changed 
with target parameters). The flag selects whether the target uses FUA 
requests when doing writeback or whether it uses non-FUA requests and 
FLUSH afterwards. For some block devices, FUA is faster; for others, 
non-FUA+FLUSH is faster.

What's wrong with this? Why can't default settings be #defined at the 
beginning of a file?
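
To make this concrete, here is a rough sketch of the pattern being defended: 
the compile-time default lives at the top of the file and the constructor can 
override it from a target parameter. The constructor name, the "fua"/"nofua" 
argument names and the struct layout below are made up for illustration and 
are not the actual dm-writecache code.

#include <linux/string.h>
#include <linux/types.h>

#define WRITEBACK_FUA	true	/* compile-time default, overridable below */

struct dm_writecache_sketch {
	bool writeback_fua;	/* FUA writeback vs. non-FUA + FLUSH */
};

/* hypothetical constructor fragment, for illustration only */
static int writecache_ctr_sketch(struct dm_writecache_sketch *wc,
				 unsigned int argc, char **argv)
{
	wc->writeback_fua = WRITEBACK_FUA;	/* the #define supplies the default */

	while (argc--) {
		const char *arg = *argv++;

		if (!strcmp(arg, "fua"))	/* assumed parameter names */
			wc->writeback_fua = true;
		else if (!strcmp(arg, "nofua"))
			wc->writeback_fua = false;
	}
	return 0;
}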

> > +#ifndef bio_set_dev
> > +#define	bio_set_dev(bio, dev)	((bio)->bi_bdev = (dev))
> > +#endif
> > +#ifndef timer_setup
> > +#define timer_setup(t, c, f)	setup_timer(t, c, (unsigned long)(t))
> > +#endif
> 
> no business in mainline.

People removed dax support for ramdisk in 4.15.

If I need to test it on a non-x86 architecture, I need ramdisk as a fake dax 
device - and that only works up to 4.14. These defines are for 4.14 
compatibility.

> > +/*
> > + * On X86, non-temporal stores are more efficient than cache flushing.
> > + * On ARM64, cache flushing is more efficient.
> > + */
> > +#if defined(CONFIG_X86_64)
> > +#define NT_STORE(dest, src)				\
> > +do {							\
> > +	typeof(src) val = (src);			\
> > +	memcpy_flushcache(&(dest), &val, sizeof(src));	\
> > +} while (0)
> > +#define COMMIT_FLUSHED()	wmb()
> > +#else
> > +#define NT_STORE(dest, src)	WRITE_ONCE(dest, src)
> > +#define FLUSH_RANGE		dax_flush
> > +#define COMMIT_FLUSHED()	do { } while (0)
> > +#endif
> 
> Please use proper APIs for this, this has no business in a driver.
> 
> And that's it for now.  This is clearly not submission ready, and I
> should get back to my backlog of other things.

Why is memcpy_flushcache and dax_flush "improper"? What should I use 
instead of them?

Mikulas

* Re: [dm-devel] [PATCH] dm-writecache
  2018-03-08 17:08   ` Dan Williams
@ 2018-03-12  7:48     ` Christoph Hellwig
  2018-03-12 12:15       ` Mikulas Patocka
  2018-05-18 15:44     ` dm-writecache Mike Snitzer
  1 sibling, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2018-03-12  7:48 UTC (permalink / raw)
  To: Dan Williams
  Cc: Mike Snitzer, linux-nvdimm, Christoph Hellwig,
	device-mapper development, Mikulas Patocka, Alasdair G. Kergon

On Thu, Mar 08, 2018 at 09:08:32AM -0800, Dan Williams wrote:
> I had the same feedback, and Mikulas sent this useful enhancement to
> the memcpy_flushcache API:
> 
>     https://patchwork.kernel.org/patch/10217655/
> 
> ...it's in my queue to either push through -tip or add it to the next
> libnvdimm pull request for 4.17-rc1.

So let's rebase this submission on top of that.

* Re: [dm-devel] [PATCH] dm-writecache
  2018-03-09  3:26   ` [dm-devel] [PATCH] dm-writecache Mikulas Patocka
@ 2018-03-12  7:50     ` Christoph Hellwig
  2018-03-12 12:12       ` Mikulas Patocka
  0 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2018-03-12  7:50 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Christoph Hellwig, Mike Snitzer, dm-devel, Alasdair G. Kergon,
	linux-nvdimm

On Thu, Mar 08, 2018 at 10:26:17PM -0500, Mikulas Patocka wrote:
> > no business having this around.
> 
> It's the default setting of the flag wc->writeback_fua (it can be changed 
> with target parameters). The flag selects whether the target uses FUA 
> requests when doing writeback or whether it uses non-FUA requests and 
> FLUSH afterwards. For some block devices, FUA is faster; for others, 
> non-FUA+FLUSH is faster.

So just use true as the default flag, adding a name for it in addition
to the field it is assigned to makes no sense at all.

> > > +#ifndef bio_set_dev
> > > +#define	bio_set_dev(bio, dev)	((bio)->bi_bdev = (dev))
> > > +#endif
> > > +#ifndef timer_setup
> > > +#define timer_setup(t, c, f)	setup_timer(t, c, (unsigned long)(t))
> > > +#endif
> > 
> > no business in mainline.
> 
> People removed dax support for ramdisk in 4.15.
> 
> If I need to test it on a non-x86 architecture, I need ramdisk as a fake dax 
> device - and that only works up to 4.14. These defines are for 4.14 
> compatibility.

So add them when you backport, or use the existing automated backport
frameworks.  But do not add dead code to an upstream submission.

> > > +#if defined(CONFIG_X86_64)
> > > +#define NT_STORE(dest, src)				\
> > > +do {							\
> > > +	typeof(src) val = (src);			\
> > > +	memcpy_flushcache(&(dest), &val, sizeof(src));	\
> > > +} while (0)
> > > +#define COMMIT_FLUSHED()	wmb()
> > > +#else
> > > +#define NT_STORE(dest, src)	WRITE_ONCE(dest, src)
> > > +#define FLUSH_RANGE		dax_flush
> > > +#define COMMIT_FLUSHED()	do { } while (0)
> > > +#endif
> > 
> > Please use proper APIs for this, this has no business in a driver.
> > 
> > And that's it for now.  This is clearly not submission ready, and I
> should get back to my backlog of other things.
> 
> Why is memcpy_flushcache and dax_flush "improper"? What should I use 
> instead of them?

They are proper and should be used directly instead of through your
hacky macros.

* Re: [dm-devel] [PATCH] dm-writecache
  2018-03-12  7:50     ` Christoph Hellwig
@ 2018-03-12 12:12       ` Mikulas Patocka
  0 siblings, 0 replies; 13+ messages in thread
From: Mikulas Patocka @ 2018-03-12 12:12 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Mike Snitzer, dm-devel, Alasdair G. Kergon, linux-nvdimm



On Mon, 12 Mar 2018, Christoph Hellwig wrote:

> On Thu, Mar 08, 2018 at 10:26:17PM -0500, Mikulas Patocka wrote:
> > > no business having this around.
> > 
> > It's the default setting of the flag wc->writeback_fua (it can be changed 
> > with target parameters). The flag selects whether the target uses FUA 
> > requests when doing writeback or whether it uses non-FUA requests and 
> > FLUSH afterwards. For some block devices, FUA is faster; for others, 
> > non-FUA+FLUSH is faster.
> 
> So just use true as the default flag, adding a name for it in addition
> to the field it is assigned to makes no sense at all.

It makes sense, because all the default values are in one place at the top 
of the file and not scattered throughout the codebase.

> > > > +#ifndef bio_set_dev
> > > > +#define	bio_set_dev(bio, dev)	((bio)->bi_bdev = (dev))
> > > > +#endif
> > > > +#ifndef timer_setup
> > > > +#define timer_setup(t, c, f)	setup_timer(t, c, (unsigned long)(t))
> > > > +#endif
> > > 
> > > no business in mainline.
> > 
> > People removed dax support for ramdisk in 4.15.
> > 
> > If I need to test it on a non-x86 architecture, I need ramdisk as a fake dax 
> > device - and that only works up to 4.14. These defines are for 4.14 
> > compatibility.
> 
> So add them when you backport, or use the existing automated backport
> frameworks.  But do not add dead code to an upstream submission.

I don't intend to backport this driver to stable kernel branches. But I 
can move the file between different machines and test it - it is just a 
convenience for me, so that I don't have to patch the file when moving it 
around. It helps me and it doesn't harm anyone else, so what's the problem 
with it?

> > > > +#if defined(CONFIG_X86_64)
> > > > +#define NT_STORE(dest, src)				\
> > > > +do {							\
> > > > +	typeof(src) val = (src);			\
> > > > +	memcpy_flushcache(&(dest), &val, sizeof(src));	\
> > > > +} while (0)
> > > > +#define COMMIT_FLUSHED()	wmb()
> > > > +#else
> > > > +#define NT_STORE(dest, src)	WRITE_ONCE(dest, src)
> > > > +#define FLUSH_RANGE		dax_flush
> > > > +#define COMMIT_FLUSHED()	do { } while (0)
> > > > +#endif
> > > 
> > > Please use proper APIs for this, this has no business in a driver.
> > > 
> > > And that's it for now.  This is clearly not submission ready, and I
> > > should get back to my backlog of other things.
> > 
> > Why is memcpy_flushcache and dax_flush "improper"? What should I use 
> > instead of them?
> 
> They are proper and should be used directly instead of through your
> hacky macros.

On x86-64, memcpy_flushcache is faster than dax_flush.
On ARM64, dax_flush is faster than memcpy_flushcache.

So what should I do? I need to differentiate them based on architecture.

Do you argue that instead of one "#if defined(CONFIG_X86_64)" at the top 
of the file we should have many more "#if defined(CONFIG_X86_64)" lines all 
over the file - just because you don't like #defines?

Currently, we can change one line of source code to switch between these 
two functions and benchmark which one performs better on a particular 
processor. Once these macros are deleted, the switch will not be possible.
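
To illustrate the point, here is a hedged sketch of what a metadata update 
site looks like when the arch difference is hidden behind the macros. The 
struct and function names below are invented for the example; the entry 
layout is not the driver's real on-media format.

#include <linux/types.h>

/* hypothetical persistent-memory metadata entry, for illustration only */
struct wc_entry_sketch {
	u64 original_sector;
	u64 seq_count;
};

static void write_entry_sketch(struct wc_entry_sketch *e, u64 sector, u64 seq)
{
	/* the x86-64 vs. other-arch choice is entirely inside NT_STORE */
	NT_STORE(e->original_sector, sector);
	NT_STORE(e->seq_count, seq);
	/*
	 * On the non-x86 branch the caller would additionally run
	 * FLUSH_RANGE (dax_flush) over the touched bytes before committing;
	 * on x86-64 the non-temporal stores already bypass the cache.
	 */
	COMMIT_FLUSHED();
}

With the macros deleted, every call site like this would need its own 
"#if defined(CONFIG_X86_64)".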

Mikulas

* Re: [dm-devel] [PATCH] dm-writecache
  2018-03-12  7:48     ` Christoph Hellwig
@ 2018-03-12 12:15       ` Mikulas Patocka
  0 siblings, 0 replies; 13+ messages in thread
From: Mikulas Patocka @ 2018-03-12 12:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alasdair G. Kergon, Mike Snitzer, device-mapper development,
	linux-nvdimm



On Mon, 12 Mar 2018, Christoph Hellwig wrote:

> On Thu, Mar 08, 2018 at 09:08:32AM -0800, Dan Williams wrote:
> > I had the same feedback, and Mikulas sent this useful enhancement to
> > the memcpy_flushcache API:
> > 
> >     https://patchwork.kernel.org/patch/10217655/
> > 
> > ...it's in my queue to either push through -tip or add it to the next
> > libnvdimm pull request for 4.17-rc1.
> 
> So let's rebase this submission on top of that.

I already did, and the patch that you criticized is based on top of 
that.

I've found out that memcpy_flushcache performs better on x86 and dax_flush 
performs better on ARM64, so the code has two flushing strategies that are 
switched with a preprocessor condition.

Mikulas

* Re: dm-writecache
  2018-03-08 17:08   ` Dan Williams
  2018-03-12  7:48     ` Christoph Hellwig
@ 2018-05-18 15:44     ` Mike Snitzer
  2018-05-18 15:54       ` dm-writecache Dan Williams
  1 sibling, 1 reply; 13+ messages in thread
From: Mike Snitzer @ 2018-05-18 15:44 UTC (permalink / raw)
  To: Dan Williams
  Cc: Christoph Hellwig, device-mapper development, Mikulas Patocka,
	Alasdair G. Kergon, linux-nvdimm

On Thu, Mar 08 2018 at 12:08pm -0500,
Dan Williams <dan.j.williams@intel.com> wrote:

> Mikulas sent this useful enhancement to the memcpy_flushcache API:
> 
>     https://patchwork.kernel.org/patch/10217655/
> 
> ...it's in my queue to either push through -tip or add it to the next
> libnvdimm pull request for 4.17-rc1.

Hi Dan,

Seems this never actually went upstream.  I've staged it in
linux-dm.git's "for-next" for the time being:
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.18&id=a7e96990b5ff6206fefdc5bfe74396bb880f7e48

But do you intend to pick it up for 4.18 inclusion?  If so I'll drop
it.. would just hate for it to get dropped on the floor by getting lost
in the shuffle between trees.

Please advise, thanks!
Mike

* Re: dm-writecache
  2018-05-18 15:44     ` dm-writecache Mike Snitzer
@ 2018-05-18 15:54       ` Dan Williams
  2018-05-18 20:12         ` dm-writecache Mikulas Patocka
  0 siblings, 1 reply; 13+ messages in thread
From: Dan Williams @ 2018-05-18 15:54 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Christoph Hellwig, device-mapper development, Mikulas Patocka,
	Alasdair G. Kergon, linux-nvdimm

On Fri, May 18, 2018 at 8:44 AM, Mike Snitzer <snitzer@redhat.com> wrote:
> On Thu, Mar 08 2018 at 12:08pm -0500,
> Dan Williams <dan.j.williams@intel.com> wrote:
>
>> Mikulas sent this useful enhancement to the memcpy_flushcache API:
>>
>>     https://patchwork.kernel.org/patch/10217655/
>>
>> ...it's in my queue to either push through -tip or add it to the next
>> libnvdimm pull request for 4.17-rc1.
>
> Hi Dan,
>
> Seems this never actually went upstream.  I've staged it in
> linux-dm.git's "for-next" for the time being:
> https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.18&id=a7e96990b5ff6206fefdc5bfe74396bb880f7e48
>
> But do you intend to pick it up for 4.18 inclusion?  If so I'll drop
> it.. would just hate for it to get dropped on the floor by getting lost
> in the shuffle between trees.
>
> Please advise, thanks!
> Mike

Thanks for picking it up! I was hoping to resend it to get acks from
x86 folks, and then yes it fell through the cracks in my patch
tracking.

Now that I look at it again I don't think we need this hunk:

void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
		size_t len)
{
	char *from = kmap_atomic(page);
-	memcpy_flushcache(to, from + offset, len);
+	__memcpy_flushcache(to, from + offset, len);
	kunmap_atomic(from);
}

...and I wonder what the benefit is of the 16-byte case? I would
assume the bulk of the benefit is limited to the 4 and 8 byte copy
cases.

Mikulas please resend with those comments addressed and include Ingo and Thomas.

* Re: dm-writecache
  2018-05-18 15:54       ` dm-writecache Dan Williams
@ 2018-05-18 20:12         ` Mikulas Patocka
  2018-05-18 20:14           ` dm-writecache Dan Williams
  0 siblings, 1 reply; 13+ messages in thread
From: Mikulas Patocka @ 2018-05-18 20:12 UTC (permalink / raw)
  To: Dan Williams
  Cc: Christoph Hellwig, device-mapper development, Alasdair G. Kergon,
	Mike Snitzer, linux-nvdimm



On Fri, 18 May 2018, Dan Williams wrote:

> On Fri, May 18, 2018 at 8:44 AM, Mike Snitzer <snitzer@redhat.com> wrote:
> > On Thu, Mar 08 2018 at 12:08pm -0500,
> > Dan Williams <dan.j.williams@intel.com> wrote:
> >
> >> Mikulas sent this useful enhancement to the memcpy_flushcache API:
> >>
> >>     https://patchwork.kernel.org/patch/10217655/
> >>
> >> ...it's in my queue to either push through -tip or add it to the next
> >> libnvdimm pull request for 4.17-rc1.
> >
> > Hi Dan,
> >
> > Seems this never actually went upstream.  I've staged it in
> > linux-dm.git's "for-next" for the time being:
> > https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.18&id=a7e96990b5ff6206fefdc5bfe74396bb880f7e48
> >
> > But do you intend to pick it up for 4.18 inclusion?  If so I'll drop
> > it.. would just hate for it to get dropped on the floor by getting lost
> > in the shuffle between trees.
> >
> > Please advise, thanks!
> > Mike
> 
> Thanks for picking it up! I was hoping to resend it to get acks from
> x86 folks, and then yes it fell through the cracks in my patch
> tracking.
> 
> Now that I look at it again I don't think we need this hunk:
> 
> void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
> size_t len)
> {
> char *from = kmap_atomic(page);
> - memcpy_flushcache(to, from + offset, len);
> + __memcpy_flushcache(to, from + offset, len);
> kunmap_atomic(from);
> }

Yes - this is not needed.

> ...and I wonder what the benefit is of the 16-byte case? I would
> assume the bulk of the benefit is limited to the 4 and 8 byte copy
> cases.

dm-writecache uses 16-byte writes frequently, so it is needed for that.

If we split a 16-byte write into two 8-byte writes, it would degrade 
performance for architectures where memcpy_flushcache needs to flush the 
cache.
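
For reference, a rough sketch of the kind of constant-size special-casing 
under discussion; this illustrates the approach rather than reproducing the 
exact patch at patchwork.kernel.org/patch/10217655, and the fallback name 
__memcpy_flushcache is taken from the hunk quoted earlier in the thread.

#include <linux/types.h>
#include <linux/compiler.h>

static __always_inline void memcpy_flushcache_sketch(void *dst,
		const void *src, size_t cnt)
{
	if (__builtin_constant_p(cnt)) {
		switch (cnt) {
		case 4:
			asm ("movntil %1, %0"
				: "=m" (*(u32 *)dst) : "r" (*(const u32 *)src));
			return;
		case 8:
			asm ("movntiq %1, %0"
				: "=m" (*(u64 *)dst) : "r" (*(const u64 *)src));
			return;
		case 16:
			/* keep the 16-byte entry update as one fast path
			   instead of splitting it into two 8-byte calls */
			asm ("movntiq %1, %0"
				: "=m" (*(u64 *)dst) : "r" (*(const u64 *)src));
			asm ("movntiq %1, %0"
				: "=m" (*(u64 *)((char *)dst + 8))
				: "r" (*(const u64 *)((const char *)src + 8)));
			return;
		}
	}
	__memcpy_flushcache(dst, src, cnt);	/* out-of-line fallback */
}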

> Mikulas please resend with those comments addressed and include Ingo and 
> Thomas.

Mikulas

* Re: dm-writecache
  2018-05-18 20:12         ` dm-writecache Mikulas Patocka
@ 2018-05-18 20:14           ` Dan Williams
  2018-05-18 22:00             ` dm-writecache Mikulas Patocka
  0 siblings, 1 reply; 13+ messages in thread
From: Dan Williams @ 2018-05-18 20:14 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Christoph Hellwig, device-mapper development, Alasdair G. Kergon,
	Mike Snitzer, linux-nvdimm

On Fri, May 18, 2018 at 1:12 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Fri, 18 May 2018, Dan Williams wrote:
>
>> On Fri, May 18, 2018 at 8:44 AM, Mike Snitzer <snitzer@redhat.com> wrote:
>> > On Thu, Mar 08 2018 at 12:08pm -0500,
>> > Dan Williams <dan.j.williams@intel.com> wrote:
>> >
>> >> Mikulas sent this useful enhancement to the memcpy_flushcache API:
>> >>
>> >>     https://patchwork.kernel.org/patch/10217655/
>> >>
>> >> ...it's in my queue to either push through -tip or add it to the next
>> >> libnvdimm pull request for 4.17-rc1.
>> >
>> > Hi Dan,
>> >
>> > Seems this never actually went upstream.  I've staged it in
>> > linux-dm.git's "for-next" for the time being:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.18&id=a7e96990b5ff6206fefdc5bfe74396bb880f7e48
>> >
>> > But do you intend to pick it up for 4.18 inclusion?  If so I'll drop
>> > it.. would just hate for it to get dropped on the floor by getting lost
>> > in the shuffle between trees.
>> >
>> > Please advise, thanks!
>> > Mike
>>
>> Thanks for picking it up! I was hoping to resend it to get acks from
>> x86 folks, and then yes it fell through the cracks in my patch
>> tracking.
>>
>> Now that I look at it again I don't think we need this hunk:
>>
>> void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
>> size_t len)
>> {
>> char *from = kmap_atomic(page);
>> - memcpy_flushcache(to, from + offset, len);
>> + __memcpy_flushcache(to, from + offset, len);
>> kunmap_atomic(from);
>> }
>
> Yes - this is not needed.
>
>> ...and I wonder what the benefit is of the 16-byte case? I would
>> assume the bulk of the benefit is limited to the 4 and 8 byte copy
>> cases.
>
> dm-writecache uses 16-byte writes frequently, so it is needed for that.
>
> If we split 16-byte write to two 8-byte writes, it would degrade
> performance for architectures where memcpy_flushcache needs to flush the
> cache.

My question was how measurable the benefit of special-casing 16-byte
transfers is. I know Ingo is going to ask this question, so it would
speed things along if this patch included performance benefit numbers
for each special case in the changelog.

* Re: dm-writecache
  2018-05-18 20:14           ` dm-writecache Dan Williams
@ 2018-05-18 22:00             ` Mikulas Patocka
  2018-05-18 22:10               ` dm-writecache Dan Williams
  0 siblings, 1 reply; 13+ messages in thread
From: Mikulas Patocka @ 2018-05-18 22:00 UTC (permalink / raw)
  To: Dan Williams
  Cc: Christoph Hellwig, device-mapper development, Alasdair G. Kergon,
	Mike Snitzer, linux-nvdimm



On Fri, 18 May 2018, Dan Williams wrote:

> >> ...and I wonder what the benefit is of the 16-byte case? I would
> >> assume the bulk of the benefit is limited to the 4 and 8 byte copy
> >> cases.
> >
> > dm-writecache uses 16-byte writes frequently, so it is needed for that.
> >
> > If we split a 16-byte write into two 8-byte writes, it would degrade
> > performance for architectures where memcpy_flushcache needs to flush the
> > cache.
> 
> My question was how measurable it is to special case 16-byte
> transfers? I know Ingo is going to ask this question, so it would
> speed things along if this patch included performance benefit numbers
> for each special case in the changelog.

I tested it some time ago - and the movnti instruction has 2% better 
throughput than the existing memcpy_flushcache function.

dm-writecache does one 16-byte write for every sector written and one 8-byte 
write for every sector clean-up. So, the overhead is measurable.

Mikulas

* Re: dm-writecache
  2018-05-18 22:00             ` dm-writecache Mikulas Patocka
@ 2018-05-18 22:10               ` Dan Williams
  0 siblings, 0 replies; 13+ messages in thread
From: Dan Williams @ 2018-05-18 22:10 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Christoph Hellwig, device-mapper development, Alasdair G. Kergon,
	Mike Snitzer, linux-nvdimm

On Fri, May 18, 2018 at 3:00 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Fri, 18 May 2018, Dan Williams wrote:
>
>> >> ...and I wonder what the benefit is of the 16-byte case? I would
>> >> assume the bulk of the benefit is limited to the 4 and 8 byte copy
>> >> cases.
>> >
>> > dm-writecache uses 16-byte writes frequently, so it is needed for that.
>> >
>> > If we split a 16-byte write into two 8-byte writes, it would degrade
>> > performance for architectures where memcpy_flushcache needs to flush the
>> > cache.
>>
>> My question was how measurable it is to special case 16-byte
>> transfers? I know Ingo is going to ask this question, so it would
>> speed things along if this patch included performance benefit numbers
>> for each special case in the changelog.
>
> I tested it some times ago - and the movnti instruction has 2% better
> throughput than the existing memcpy_flushcache function.
>
> It is doing one 16-byte write for every sector written and one 8-byte
> write for every sector clean-up. So, the overhead is measurable.

Awesome, include those measured numbers in the changelog for the next
spin of the patch.

end of thread

Thread overview: 13+ messages
     [not found] <alpine.LRH.2.02.1803080824210.2819@file01.intranet.prod.int.rdu2.redhat.com>
2018-03-08 14:51 ` [dm-devel] [PATCH] dm-writecache Christoph Hellwig
2018-03-08 17:08   ` Dan Williams
2018-03-12  7:48     ` Christoph Hellwig
2018-03-12 12:15       ` Mikulas Patocka
2018-05-18 15:44     ` dm-writecache Mike Snitzer
2018-05-18 15:54       ` dm-writecache Dan Williams
2018-05-18 20:12         ` dm-writecache Mikulas Patocka
2018-05-18 20:14           ` dm-writecache Dan Williams
2018-05-18 22:00             ` dm-writecache Mikulas Patocka
2018-05-18 22:10               ` dm-writecache Dan Williams
2018-03-09  3:26   ` [dm-devel] [PATCH] dm-writecache Mikulas Patocka
2018-03-12  7:50     ` Christoph Hellwig
2018-03-12 12:12       ` Mikulas Patocka
