From: "Kani, Toshimitsu" <toshi.kani@hpe.com>
To: "ross.zwisler@linux.intel.com" <ross.zwisler@linux.intel.com>,
	"jack@suse.cz" <jack@suse.cz>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"lsf-pc@lists.linux-foundation.org"
	<lsf-pc@lists.linux-foundation.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] Future direction of DAX
Date: Wed, 18 Jan 2017 00:03:08 +0000	[thread overview]
Message-ID: <1484701124.2029.9.camel@hpe.com> (raw)
In-Reply-To: <20170117155910.GU2517@quack2.suse.cz>

On Tue, 2017-01-17 at 16:59 +0100, Jan Kara wrote:
> On Fri 13-01-17 17:20:08, Ross Zwisler wrote:
 :
> > - If I recall correctly, at one point Dave Chinner suggested that we
> > change DAX so that I/O would use cached stores instead of the
> > non-temporal stores that it currently uses.  We would then track
> > pages that were written to by DAX in the radix tree so that they
> > would be flushed later during fsync/msync.  Does this sound like a
> > win?  Also, assuming that we can find a solution for platforms where
> > the processor cache is part of the ADR safe zone (above topic) this
> > would be a clear improvement, moving us from using non-temporal
> > stores to faster cached stores with no downside.
> 
> I guess this needs measurements. But it is worth a try.

Brian Boylston did some measurements on this before:
http://oss.sgi.com/archives/xfs/2016-08/msg00239.html

I updated his test program to skip pmem_persist() for the cached copy
case.

                        dst = dstbase;
+ #if 0
                        /* see note above */
                        if (mode == 'c')
                                pmem_persist(dst, dstsz);
+ #endif
                }
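For context, the measured loop has roughly the shape below. This is a hypothetical reconstruction, not the real memcpyperf source: the function name copy_loop and the macro CPYSZ are mine, and plain memcpy stands in for both paths so the sketch runs anywhere, with comments marking where the non-temporal pmem copy and pmem_persist() would sit on real pmem.

```c
/* Sketch of a memcpyperf-style copy loop (hypothetical reconstruction).
 * Mode 'c' is the cached-copy path, mode 'n' the non-temporal path. */
#include <stddef.h>
#include <string.h>

#define CPYSZ 16384  /* 16 KiB per copy, as in the sample runs above */

/* Copy `iters` chunks of CPYSZ bytes from src, walking dst through a
 * dstsz-byte region and wrapping back to dstbase when it would run off
 * the end (mirroring the "dst = dstbase" reset in the quoted diff). */
static void copy_loop(char mode, unsigned char *dstbase, size_t dstsz,
                      const unsigned char *src, long iters)
{
    unsigned char *dst = dstbase;

    for (long i = 0; i < iters; i++) {
        if (mode == 'n') {
            /* real benchmark: non-temporal copy to pmem,
             * e.g. pmem_memcpy_persist(dst, src, CPYSZ); */
            memcpy(dst, src, CPYSZ);
        } else {
            /* real benchmark: ordinary cached copy; the trailing
             * pmem_persist(dst, dstsz) was #if 0'd out above */
            memcpy(dst, src, CPYSZ);
        }
        dst += CPYSZ;
        if ((size_t)(dst + CPYSZ - dstbase) > dstsz)
            dst = dstbase;  /* wrap to the start of the region */
    }
}
```

With pmem_persist() skipped, the cached path measures only the store cost, so the comparison below isolates cached vs. non-temporal stores.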

Here are sample runs:

$ numactl -N0 time -p ./memcpyperf c /mnt/pmem0/file 1000000
INFO: dst 0x7f1d00000000 src 0x601200 dstsz 2756509696 cpysz 16384
real 3.28
user 3.27
sys 0.00

$ numactl -N0 time -p ./memcpyperf n /mnt/pmem0/file 1000000
INFO: dst 0x7f6080000000 src 0x601200 dstsz 2756509696 cpysz 16384
real 1.01
user 1.01
sys 0.00

$ numactl -N1 time -p ./memcpyperf c /mnt/pmem0/file 1000000
INFO: dst 0x7fe900000000 src 0x601200 dstsz 2756509696 cpysz 16384
real 4.06
user 4.06
sys 0.00

$ numactl -N1 time -p ./memcpyperf n /mnt/pmem0/file 1000000
INFO: dst 0x7f7640000000 src 0x601200 dstsz 2756509696 cpysz 16384
real 1.27
user 1.27
sys 0.00

In this simple test, the non-temporal copy is still faster than the
cached copy (roughly 3x, on both the local and the remote NUMA node).

Thanks,
-Toshi
