From: Dan Williams <dan.j.williams@intel.com>
To: linux-nvdimm@lists.01.org
Cc: Jan Kara <jack@suse.cz>, Matthew Wilcox <mawilcox@microsoft.com>,
	x86@kernel.org, dm-devel@redhat.com,
	linux-kernel@vger.kernel.org, viro@zeniv.linux.org.uk,
	linux-fsdevel@vger.kernel.org, hch@lst.de
Subject: [PATCH v3 14/14] libnvdimm, pmem: disable dax flushing when pmem is fronting a volatile region
Date: Fri, 09 Jun 2017 13:25:01 -0700	[thread overview]
Message-ID: <149703990158.20620.16257541455791191783.stgit@dwillia2-desk3.amr.corp.intel.com> (raw)
In-Reply-To: <149703982465.20620.14881139332926778446.stgit@dwillia2-desk3.amr.corp.intel.com>

The pmem driver attaches to both persistent and volatile memory ranges
advertised by the ACPI NFIT. When the region is volatile, it is redundant
to spend cycles flushing caches at fsync(). Check whether the hosting
region is volatile and do not set QUEUE_FLAG_WC if it is.
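
To make the effect concrete, here is a sketch (not part of this patch) of
the caller-side gating that patch 13 in this series introduces: with
QUEUE_FLAG_WC left clear for a volatile region, the filesystem-dax
writeback path can skip cache management entirely. The bdev / dax_dev
variable names here are illustrative:

	/* only pay for cache flushes when a volatile write cache exists */
	if (test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags))
		dax_flush(dax_dev, pgoff, kaddr, size);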

Cc: Jan Kara <jack@suse.cz>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/pmem.c        |   13 ++++++++-----
 drivers/nvdimm/region_devs.c |    6 ++++++
 include/linux/libnvdimm.h    |    1 +
 3 files changed, 15 insertions(+), 5 deletions(-)
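
For reference, the net setup logic in pmem_attach_disk() after this patch
reads as follows, condensed from the hunks below:

	fua = nvdimm_has_flush(nd_region);
	if (!IS_ENABLED(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) || fua < 0) {
		dev_warn(dev, "unable to guarantee persistence of writes\n");
		fua = 0;
	}
	wbc = nvdimm_has_cache(nd_region);
	...
	/* a volatile region advertises neither a write cache nor FUA */
	blk_queue_write_cache(q, wbc, fua);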

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 06f6c27ec1e9..5cac9fb39db8 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -279,10 +279,10 @@ static int pmem_attach_disk(struct device *dev,
 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
 	struct nd_region *nd_region = to_nd_region(dev->parent);
 	struct vmem_altmap __altmap, *altmap = NULL;
+	int nid = dev_to_node(dev), fua, wbc;
 	struct resource *res = &nsio->res;
 	struct nd_pfn *nd_pfn = NULL;
 	struct dax_device *dax_dev;
-	int nid = dev_to_node(dev);
 	struct nd_pfn_sb *pfn_sb;
 	struct pmem_device *pmem;
 	struct resource pfn_res;
@@ -308,9 +308,12 @@ static int pmem_attach_disk(struct device *dev,
 	dev_set_drvdata(dev, pmem);
 	pmem->phys_addr = res->start;
 	pmem->size = resource_size(res);
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE)
-			|| nvdimm_has_flush(nd_region) < 0)
-		dev_warn(dev, "unable to guarantee persistence of writes\n");
+	fua = nvdimm_has_flush(nd_region);
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) || fua < 0) {
+		dev_warn(dev, "unable to guarantee persistence of writes\n");
+		fua = 0;
+	}
+	wbc = nvdimm_has_cache(nd_region);
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -354,7 +357,7 @@ static int pmem_attach_disk(struct device *dev,
 		return PTR_ERR(addr);
 	pmem->virt_addr = addr;
 
-	blk_queue_write_cache(q, true, true);
+	blk_queue_write_cache(q, wbc, fua);
 	blk_queue_make_request(q, pmem_make_request);
 	blk_queue_physical_block_size(q, PAGE_SIZE);
 	blk_queue_max_hw_sectors(q, UINT_MAX);
diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index 53a64a16aba4..0c3b089b280a 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -1060,6 +1060,12 @@ int nvdimm_has_flush(struct nd_region *nd_region)
 }
 EXPORT_SYMBOL_GPL(nvdimm_has_flush);
 
+int nvdimm_has_cache(struct nd_region *nd_region)
+{
+	return is_nd_pmem(&nd_region->dev);
+}
+EXPORT_SYMBOL_GPL(nvdimm_has_cache);
+
 void __exit nd_region_devs_exit(void)
 {
 	ida_destroy(&region_ida);
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index b2f659bd661d..a8ee1d0afd70 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -165,4 +165,5 @@ void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
 u64 nd_fletcher64(void *addr, size_t len, bool le);
 void nvdimm_flush(struct nd_region *nd_region);
 int nvdimm_has_flush(struct nd_region *nd_region);
+int nvdimm_has_cache(struct nd_region *nd_region);
 #endif /* __LIBNVDIMM_H__ */
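
Note, not part of the patch: nvdimm_has_cache() works because volatile
ranges register with a different device type than pmem regions, so
is_nd_pmem() returns false for them. A simplified sketch of that existing
region_devs.c helper, assuming the current device-type layout:

	bool is_nd_pmem(struct device *dev)
	{
		return dev ? dev->type == &nd_pmem_device_type : false;
	}

A volatile region therefore gets its gendisk registered without
QUEUE_FLAG_WC, and the dax_flush() call is skipped for it.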

Thread overview: 117+ messages

2017-06-09 20:23 [PATCH v3 00/14] pmem: stop abusing __copy_user_nocache(), and other reworks Dan Williams
2017-06-09 20:23 ` [PATCH v3 01/14] x86, uaccess: introduce copy_from_iter_flushcache for pmem / cache-bypass operations Dan Williams
2017-06-18  8:28   ` Christoph Hellwig
2017-06-19  2:02     ` Dan Williams
2017-06-09 20:23 ` [PATCH v3 02/14] dm: add ->copy_from_iter() dax operation support Dan Williams
2017-06-15  0:46   ` Kani, Toshimitsu
2017-06-15  1:21     ` Kani, Toshimitsu
2017-06-18  8:37   ` Christoph Hellwig
2017-06-19  2:04     ` Dan Williams
2017-06-09 20:24 ` [PATCH v3 03/14] filesystem-dax: convert to dax_copy_from_iter() Dan Williams
2017-06-14 10:58   ` Jan Kara
2017-06-09 20:24 ` [PATCH v3 04/14] dax, pmem: introduce an optional 'flush' dax_operation Dan Williams
2017-06-14 10:57   ` Jan Kara
2017-06-09 20:24 ` [PATCH v3 05/14] dm: add ->flush() dax operation support Dan Williams
2017-06-15  1:44   ` Kani, Toshimitsu
2017-06-09 20:24 ` [PATCH v3 06/14] filesystem-dax: convert to dax_flush() Dan Williams
2017-06-14 10:56   ` Jan Kara
2017-06-09 20:24 ` [PATCH v3 07/14] x86, dax: replace clear_pmem() with open coded memset + dax_ops->flush Dan Williams
2017-06-14 10:55   ` Jan Kara
2017-06-09 20:24 ` [PATCH v3 08/14] x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm Dan Williams
2017-06-12  0:29   ` [PATCH v4 " Dan Williams
2017-06-14 10:54   ` [PATCH v3 " Jan Kara
2017-06-14 16:49     ` Dan Williams
2017-06-15  8:11       ` Jan Kara
2017-06-18  8:40   ` Christoph Hellwig
2017-06-19  2:06     ` Dan Williams
2017-06-09 20:24 ` [PATCH v3 09/14] x86, libnvdimm, pmem: move arch_invalidate_pmem() " Dan Williams
2017-06-14 10:49   ` Jan Kara
2017-06-09 20:24 ` [PATCH v3 10/14] pmem: remove global pmem api Dan Williams
2017-06-14 10:48   ` Jan Kara
2017-06-09 20:24 ` [PATCH v3 11/14] libnvdimm, pmem: fix persistence warning Dan Williams
2017-06-09 20:24 ` [PATCH v3 12/14] libnvdimm, nfit: enable support for volatile ranges Dan Williams
2017-06-09 20:24 ` [PATCH v3 13/14] filesystem-dax: gate calls to dax_flush() on QUEUE_FLAG_WC Dan Williams
2017-06-14 10:46   ` Jan Kara
2017-06-14 16:49     ` Dan Williams
2017-06-14 23:11   ` [PATCH v4 13/14] libnvdimm, pmem: gate cache management on QUEUE_FLAG_WC in pmem_dax_flush() Dan Williams
2017-06-15  8:09     ` Jan Kara
2017-06-18  8:45   ` [PATCH v3 13/14] filesystem-dax: gate calls to dax_flush() on QUEUE_FLAG_WC Christoph Hellwig
2017-06-19  2:07     ` Dan Williams
2017-06-09 20:25 ` [PATCH v3 14/14] libnvdimm, pmem: disable dax flushing when pmem is fronting a volatile region Dan Williams [this message]
2017-06-09 23:21   ` Dan Williams
2017-06-10 17:54   ` [PATCH v4 " Dan Williams