From: Vivek Goyal <vgoyal@redhat.com>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org
Cc: miklos@szeredi.hu, dgilbert@redhat.com, virtio-fs@redhat.com,
	stefanha@redhat.com
Subject: [PATCH 16/19] dax: Create a range version of dax_layout_busy_page()
Date: Wed, 21 Aug 2019 13:57:17 -0400	[thread overview]
Message-ID: <20190821175720.25901-17-vgoyal@redhat.com> (raw)
In-Reply-To: <20190821175720.25901-1-vgoyal@redhat.com>

While reclaiming a dax range, we do not want to unmap the whole file.
Instead, we want to make sure pages in a certain range do not have
references taken on them. Hence, create a version of the function that
allows a range to be passed in.

Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 fs/dax.c            | 66 ++++++++++++++++++++++++++++++++-------------
 include/linux/dax.h |  6 +++++
 2 files changed, 54 insertions(+), 18 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 60620a37030c..435f5b67e828 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -557,27 +557,20 @@ static void *grab_mapping_entry(struct xa_state *xas,
 	return xa_mk_internal(VM_FAULT_FALLBACK);
 }
 
-/**
- * dax_layout_busy_page - find first pinned page in @mapping
- * @mapping: address space to scan for a page with ref count > 1
- *
- * DAX requires ZONE_DEVICE mapped pages. These pages are never
- * 'onlined' to the page allocator so they are considered idle when
- * page->count == 1. A filesystem uses this interface to determine if
- * any page in the mapping is busy, i.e. for DMA, or other
- * get_user_pages() usages.
- *
- * It is expected that the filesystem is holding locks to block the
- * establishment of new mappings in this address_space. I.e. it expects
- * to be able to run unmap_mapping_range() and subsequently not race
- * mapping_mapped() becoming true.
+/*
+ * Partial pages are included. If end is 0, pages in the range from start
+ * to the end of the file are included.
  */
-struct page *dax_layout_busy_page(struct address_space *mapping)
+struct page *dax_layout_busy_page_range(struct address_space *mapping,
+					loff_t start, loff_t end)
 {
-	XA_STATE(xas, &mapping->i_pages, 0);
 	void *entry;
 	unsigned int scanned = 0;
 	struct page *page = NULL;
+	pgoff_t start_idx = start >> PAGE_SHIFT;
+	pgoff_t end_idx = end >> PAGE_SHIFT;
+	XA_STATE(xas, &mapping->i_pages, start_idx);
+	loff_t len, lstart = round_down(start, PAGE_SIZE);
 
 	/*
 	 * In the 'limited' case get_user_pages() for dax is disabled.
@@ -588,6 +581,22 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 	if (!dax_mapping(mapping) || !mapping_mapped(mapping))
 		return NULL;
 
+	/* If end == 0, cover all pages from start till the end of file */
+	if (!end) {
+		end_idx = ULONG_MAX;
+		len = 0;
+	} else {
+		/*
+		 * len is calculated from lstart, and not start. This is
+		 * due to the behavior of unmap_mapping_range(). Say start
+		 * is 4094 and end is 4096; then we want to unmap two
+		 * pages, idx 0 and idx 1. But if len is calculated from
+		 * start (len = 3), unmap_mapping_range() will unmap only
+		 * the page at idx 0. Calculating len from the rounded-down
+		 * lstart avoids this problem.
+		 */
+		len = end - lstart + 1;
+	}
+
 	/*
 	 * If we race get_user_pages_fast() here either we'll see the
 	 * elevated page count in the iteration and wait, or
@@ -600,10 +609,10 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 	 * guaranteed to either see new references or prevent new
 	 * references from being established.
 	 */
-	unmap_mapping_range(mapping, 0, 0, 0);
+	unmap_mapping_range(mapping, start, len, 0);
 
 	xas_lock_irq(&xas);
-	xas_for_each(&xas, entry, ULONG_MAX) {
+	xas_for_each(&xas, entry, end_idx) {
 		if (WARN_ON_ONCE(!xa_is_value(entry)))
 			continue;
 		if (unlikely(dax_is_locked(entry)))
@@ -624,6 +633,27 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 	xas_unlock_irq(&xas);
 	return page;
 }
+EXPORT_SYMBOL_GPL(dax_layout_busy_page_range);
+
+/**
+ * dax_layout_busy_page - find first pinned page in @mapping
+ * @mapping: address space to scan for a page with ref count > 1
+ *
+ * DAX requires ZONE_DEVICE mapped pages. These pages are never
+ * 'onlined' to the page allocator so they are considered idle when
+ * page->count == 1. A filesystem uses this interface to determine if
+ * any page in the mapping is busy, i.e. for DMA, or other
+ * get_user_pages() usages.
+ *
+ * It is expected that the filesystem is holding locks to block the
+ * establishment of new mappings in this address_space. I.e. it expects
+ * to be able to run unmap_mapping_range() and subsequently not race
+ * mapping_mapped() becoming true.
+ */
+struct page *dax_layout_busy_page(struct address_space *mapping)
+{
+	return dax_layout_busy_page_range(mapping, 0, 0);
+}
 EXPORT_SYMBOL_GPL(dax_layout_busy_page);
 
 static int __dax_invalidate_entry(struct address_space *mapping,
diff --git a/include/linux/dax.h b/include/linux/dax.h
index e7f40108f2c9..3ef6686c080b 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -145,6 +145,7 @@ int dax_writeback_mapping_range(struct address_space *mapping,
 		struct writeback_control *wbc);
 
 struct page *dax_layout_busy_page(struct address_space *mapping);
+struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end);
 dax_entry_t dax_lock_page(struct page *page);
 void dax_unlock_page(struct page *page, dax_entry_t cookie);
 #else
@@ -180,6 +181,11 @@ static inline struct page *dax_layout_busy_page(struct address_space *mapping)
 	return NULL;
 }
 
+static inline struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end)
+{
+	return NULL;
+}
+
 static inline int dax_writeback_mapping_range(struct address_space *mapping,
 		struct block_device *bdev, struct dax_device *dax_dev,
 		struct writeback_control *wbc)
-- 
2.20.1

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
