From: Jan Kara <jack@suse.cz>
To: linux-fsdevel@vger.kernel.org
Cc: linux-ext4@vger.kernel.org, linux-mm@kvack.org,
	Ross Zwisler <ross.zwisler@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	linux-nvdimm@lists.01.org, Matthew Wilcox <willy@linux.intel.com>,
	Jan Kara <jack@suse.cz>
Subject: [PATCH 12/18] dax: Remove redundant inode size checks
Date: Mon, 18 Apr 2016 23:35:35 +0200
Message-ID: <1461015341-20153-13-git-send-email-jack@suse.cz>
In-Reply-To: <1461015341-20153-1-git-send-email-jack@suse.cz>

Callers of DAX fault handlers must make sure these calls cannot race
with truncate. Thus it is enough to check the inode size when entering
the function, and we do not have to recheck it later in the handler.
Note that the inode size itself can still be decreased while the fault
handler runs, but filesystem locking prevents any changes to the radix
tree or block mapping information resulting from the truncate, and that
is what we really care about.
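
For illustration (not part of this diff): the page-aligned EOF check
that is kept once at fault entry looks roughly like the sketch below.
The helper name is made up for the example; the computation matches the
checks removed in the hunks below.

	/*
	 * Hypothetical helper sketching the single i_size check done at
	 * fault entry; with callers serializing against truncate, the
	 * handler body no longer needs to recheck it.
	 */
	static bool dax_pgoff_beyond_eof(struct inode *inode, pgoff_t pgoff)
	{
		/* Round i_size up to whole pages and compare page offsets. */
		pgoff_t size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;

		return pgoff >= size;
	}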

Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 60 +-----------------------------------------------------------
 1 file changed, 1 insertion(+), 59 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 42bf65b4e752..d7addfab2094 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -305,20 +305,11 @@ EXPORT_SYMBOL_GPL(dax_do_io);
 static int dax_load_hole(struct address_space *mapping, struct page *page,
 							struct vm_fault *vmf)
 {
-	unsigned long size;
-	struct inode *inode = mapping->host;
 	if (!page)
 		page = find_or_create_page(mapping, vmf->pgoff,
 						GFP_KERNEL | __GFP_ZERO);
 	if (!page)
 		return VM_FAULT_OOM;
-	/* Recheck i_size under page lock to avoid truncate race */
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (vmf->pgoff >= size) {
-		unlock_page(page);
-		put_page(page);
-		return VM_FAULT_SIGBUS;
-	}
 
 	vmf->page = page;
 	return VM_FAULT_LOCKED;
@@ -549,24 +540,10 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 		.sector = to_sector(bh, inode),
 		.size = bh->b_size,
 	};
-	pgoff_t size;
 	int error;
 
 	i_mmap_lock_read(mapping);
 
-	/*
-	 * Check truncate didn't happen while we were allocating a block.
-	 * If it did, this block may or may not be still allocated to the
-	 * file.  We can't tell the filesystem to free it because we can't
-	 * take i_mutex here.  In the worst case, the file still has blocks
-	 * allocated past the end of the file.
-	 */
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (unlikely(vmf->pgoff >= size)) {
-		error = -EIO;
-		goto out;
-	}
-
 	if (dax_map_atomic(bdev, &dax) < 0) {
 		error = PTR_ERR(dax.addr);
 		goto out;
@@ -632,15 +609,6 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 			put_page(page);
 			goto repeat;
 		}
-		size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-		if (unlikely(vmf->pgoff >= size)) {
-			/*
-			 * We have a struct page covering a hole in the file
-			 * from a read fault and we've raced with a truncate
-			 */
-			error = -EIO;
-			goto unlock_page;
-		}
 	}
 
 	error = get_block(inode, block, &bh, 0);
@@ -673,17 +641,8 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		if (error)
 			goto unlock_page;
 		vmf->page = page;
-		if (!page) {
+		if (!page)
 			i_mmap_lock_read(mapping);
-			/* Check we didn't race with truncate */
-			size = (i_size_read(inode) + PAGE_SIZE - 1) >>
-								PAGE_SHIFT;
-			if (vmf->pgoff >= size) {
-				i_mmap_unlock_read(mapping);
-				error = -EIO;
-				goto out;
-			}
-		}
 		return VM_FAULT_LOCKED;
 	}
 
@@ -861,23 +820,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 
 	i_mmap_lock_read(mapping);
 
-	/*
-	 * If a truncate happened while we were allocating blocks, we may
-	 * leave blocks allocated to the file that are beyond EOF.  We can't
-	 * take i_mutex here, so just leave them hanging; they'll be freed
-	 * when the file is deleted.
-	 */
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (pgoff >= size) {
-		result = VM_FAULT_SIGBUS;
-		goto out;
-	}
-	if ((pgoff | PG_PMD_COLOUR) >= size) {
-		dax_pmd_dbg(&bh, address,
-				"offset + huge page size > file size");
-		goto fallback;
-	}
-
 	if (!write && !buffer_mapped(&bh)) {
 		spinlock_t *ptl;
 		pmd_t entry;
-- 
2.6.6

