From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: Matthew Wilcox <mawilcox@microsoft.com>, Jan Kara <jack@suse.cz>,
	Jeff Layton <jlayton@redhat.com>,
	Lukas Czerner <lczerner@redhat.com>,
	Ross Zwisler <ross.zwisler@linux.intel.com>,
	Christoph Hellwig <hch@lst.de>,
	Goldwyn Rodrigues <rgoldwyn@suse.com>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>,
	linux-nilfs@vger.kernel.org, Jaegeuk Kim <jaegeuk@kernel.org>,
	Chao Yu <yuchao0@huawei.com>,
	linux-f2fs-devel@lists.sourceforge.net,
	Oleg Drokin <oleg.drokin@intel.com>,
	Andreas Dilger <andreas.dilger@intel.com>,
	James Simmons <jsimmons@infradead.org>,
	Mike Kravetz <mike.kravetz@oracle.com>
Subject: [PATCH v10 54/62] dax: Rename some functions
Date: Thu, 29 Mar 2018 20:42:37 -0700	[thread overview]
Message-ID: <20180330034245.10462-55-willy@infradead.org>
In-Reply-To: <20180330034245.10462-1-willy@infradead.org>

From: Matthew Wilcox <mawilcox@microsoft.com>

Remove mentions of 'radix' and 'radix tree'.  Simplify some names by
dropping the word 'mapping'.

Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
---
 fs/dax.c | 81 +++++++++++++++++++++++++++++++---------------------------------
 1 file changed, 39 insertions(+), 42 deletions(-)
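
A standalone sketch (not part of the patch) of the encoding these renamed
helpers manipulate, for reviewers who want to see the bit layout in
isolation. It assumes DAX_SHIFT is 4 (as in fs/dax.c of this era) and stubs
xa_mk_value()/xa_to_value() the way the XArray defines them, with bit 0
tagging a value entry; compile and run with any C compiler:

/* Userspace sketch of the DAX entry encoding; not kernel code.
 * Assumes DAX_SHIFT == 4 and the flag bits shown in fs/dax.c.
 */
#include <assert.h>
#include <stdio.h>

#define DAX_SHIFT	(4)
#define DAX_ENTRY_LOCK	(1UL << 0)
#define DAX_PMD		(1UL << 1)
#define DAX_ZERO_PAGE	(1UL << 2)
#define DAX_EMPTY	(1UL << 3)

/* xa_mk_value()/xa_to_value() tag a pointer as a value entry by
 * setting bit 0, so only the remaining bits carry the payload. */
static void *xa_mk_value(unsigned long v) { return (void *)((v << 1) | 1); }
static unsigned long xa_to_value(const void *e) { return (unsigned long)e >> 1; }

static unsigned long dax_to_pfn(void *entry)
{
	return xa_to_value(entry) >> DAX_SHIFT;
}

static void *dax_mk_locked(unsigned long pfn, unsigned long flags)
{
	return xa_mk_value(flags | (pfn << DAX_SHIFT) | DAX_ENTRY_LOCK);
}

int main(void)
{
	void *entry = dax_mk_locked(0x1234, DAX_PMD);

	/* The pfn and flags round-trip through the value entry. */
	assert(dax_to_pfn(entry) == 0x1234);
	assert(xa_to_value(entry) & DAX_PMD);
	assert(xa_to_value(entry) & DAX_ENTRY_LOCK);
	printf("entry %p -> pfn %#lx\n", entry, dax_to_pfn(entry));
	return 0;
}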

diff --git a/fs/dax.c b/fs/dax.c
index 3bd9f624c1f8..ce4ad6a99e78 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -74,18 +74,18 @@ fs_initcall(init_dax_wait_table);
 #define DAX_ZERO_PAGE	(1UL << 2)
 #define DAX_EMPTY	(1UL << 3)
 
-static unsigned long dax_radix_pfn(void *entry)
+static unsigned long dax_to_pfn(void *entry)
 {
 	return xa_to_value(entry) >> DAX_SHIFT;
 }
 
-static void *dax_radix_locked_entry(unsigned long pfn, unsigned long flags)
+static void *dax_mk_locked(unsigned long pfn, unsigned long flags)
 {
 	return xa_mk_value(flags | ((unsigned long)pfn << DAX_SHIFT) |
 			DAX_ENTRY_LOCK);
 }
 
-static unsigned int dax_radix_order(void *entry)
+static unsigned int dax_entry_order(void *entry)
 {
 	if (xa_to_value(entry) & DAX_PMD)
 		return PMD_SHIFT - PAGE_SHIFT;
@@ -113,7 +113,7 @@ static int dax_is_empty_entry(void *entry)
 }
 
 /*
- * DAX radix tree locking
+ * DAX page cache entry locking
  */
 struct exceptional_entry_key {
 	struct address_space *mapping;
@@ -217,7 +217,7 @@ static inline void *unlock_slot(struct address_space *mapping, void **slot)
 }
 
 /*
- * Lookup entry in radix tree, wait for it to become unlocked if it is
+ * Lookup entry in page cache, wait for it to become unlocked if it is
  * a DAX entry and return it. The caller must call
  * put_unlocked_mapping_entry() when he decided not to lock the entry or
  * put_locked_mapping_entry() when he locked the entry and now wants to
@@ -280,7 +280,7 @@ static void put_locked_mapping_entry(struct address_space *mapping,
 }
 
 /*
- * Called when we are done with radix tree entry we looked up via
+ * Called when we are done with page cache entry we looked up via
  * get_unlocked_mapping_entry() and which we didn't lock in the end.
  */
 static void put_unlocked_mapping_entry(struct address_space *mapping,
@@ -289,7 +289,7 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
 	if (!entry)
 		return;
 
-	/* We have to wake up next waiter for the radix tree entry lock */
+	/* We have to wake up next waiter for the page cache entry lock */
 	dax_wake_mapping_entry_waiter(mapping, index, entry, false);
 }
 
@@ -306,7 +306,7 @@ static unsigned long dax_entry_size(void *entry)
 }
 
 #define for_each_entry_pfn(entry, pfn, end_pfn) \
-	for (pfn = dax_radix_pfn(entry), \
+	for (pfn = dax_to_pfn(entry), \
 			end_pfn = pfn + dax_entry_size(entry) / PAGE_SIZE; \
 			pfn < end_pfn; \
 			pfn++)
@@ -357,9 +357,9 @@ static struct page *dax_busy_page(void *entry)
 }
 
 /*
- * Find radix tree entry at given index. If it is a DAX entry, return it
- * with the radix tree entry locked. If the radix tree doesn't contain the
- * given index, create an empty entry for the index and return with it locked.
+ * Find page cache entry at given index. If it is a DAX entry, return it
+ * with the entry locked. If the page cache doesn't contain an entry at
+ * that index, add a locked empty entry.
  *
  * When requesting an entry with size DAX_PMD, grab_mapping_entry() will
  * either return that locked entry or will return an error.  This error will
@@ -468,10 +468,10 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 					true);
 		}
 
-		entry = dax_radix_locked_entry(0, size_flag | DAX_EMPTY);
+		entry = dax_mk_locked(0, size_flag | DAX_EMPTY);
 
 		err = __radix_tree_insert(&mapping->i_pages, index,
-				dax_radix_order(entry), entry);
+				dax_entry_order(entry), entry);
 		radix_tree_preload_end();
 		if (err) {
 			xa_unlock_irq(&mapping->i_pages);
@@ -575,7 +575,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 }
 EXPORT_SYMBOL_GPL(dax_layout_busy_page);
 
-static int __dax_invalidate_mapping_entry(struct address_space *mapping,
+static int __dax_invalidate_entry(struct address_space *mapping,
 					  pgoff_t index, bool trunc)
 {
 	int ret = 0;
@@ -605,12 +605,12 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
  */
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
 {
-	int ret = __dax_invalidate_mapping_entry(mapping, index, true);
+	int ret = __dax_invalidate_entry(mapping, index, true);
 
 	/*
 	 * This gets called from truncate / punch_hole path. As such, the caller
 	 * must hold locks protecting against concurrent modifications of the
-	 * radix tree (usually fs-private i_mmap_sem for writing). Since the
+	 * page cache (usually fs-private i_mmap_sem for writing). Since the
 	 * caller has seen a DAX entry for this index, we better find it
 	 * at that index as well...
 	 */
@@ -624,7 +624,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
 int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 				      pgoff_t index)
 {
-	return __dax_invalidate_mapping_entry(mapping, index, false);
+	return __dax_invalidate_entry(mapping, index, false);
 }
 
 static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
@@ -661,10 +661,9 @@ static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
  * already in the tree, we will skip the insertion and just dirty the PMD as
  * appropriate.
  */
-static void *dax_insert_mapping_entry(struct address_space *mapping,
-				      struct vm_fault *vmf,
-				      void *entry, pfn_t pfn_t,
-				      unsigned long flags, bool dirty)
+static void *dax_insert_entry(struct address_space *mapping,
+				struct vm_fault *vmf, void *entry, pfn_t pfn_t,
+				unsigned long flags, bool dirty)
 {
 	struct radix_tree_root *pages = &mapping->i_pages;
 	unsigned long pfn = pfn_t_to_pfn(pfn_t);
@@ -684,7 +683,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 	}
 
 	xa_lock_irq(pages);
-	new_entry = dax_radix_locked_entry(pfn, flags);
+	new_entry = dax_mk_locked(pfn, flags);
 	if (dax_entry_size(entry) != dax_entry_size(new_entry)) {
 		dax_disassociate_entry(entry, mapping, false);
 		dax_associate_entry(new_entry, mapping);
@@ -692,9 +691,9 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 
 	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
 		/*
-		 * Only swap our new entry into the radix tree if the current
+		 * Only swap our new entry into the page cache if the current
 		 * entry is a zero page or an empty entry.  If a normal PTE or
-		 * PMD entry is already in the tree, we leave it alone.  This
+		 * PMD entry is already in the cache, we leave it alone.  This
 		 * means that if we are trying to insert a PTE and the
 		 * existing entry is a PMD, we will just leave the PMD in the
 		 * tree and dirty it if necessary.
@@ -717,8 +716,8 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 	return entry;
 }
 
-static inline unsigned long
-pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
+static inline
+unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
 {
 	unsigned long address;
 
@@ -728,8 +727,8 @@ pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
 }
 
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_mapping_entry_mkclean(struct address_space *mapping,
-				      pgoff_t index, unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
+		unsigned long pfn)
 {
 	struct vm_area_struct *vma;
 	pte_t pte, *ptep = NULL;
@@ -825,7 +824,7 @@ static int dax_writeback_one(struct dax_device *dax_dev,
 	 * compare pfns as we must not bail out due to difference in lockbit
 	 * or entry type.
 	 */
-	if (dax_radix_pfn(entry2) != dax_radix_pfn(entry))
+	if (dax_to_pfn(entry2) != dax_to_pfn(entry))
 		goto put_unlocked;
 	if (WARN_ON_ONCE(dax_is_empty_entry(entry) ||
 				dax_is_zero_entry(entry))) {
@@ -855,10 +854,10 @@ static int dax_writeback_one(struct dax_device *dax_dev,
 	 * This allows us to flush for PMD_SIZE and not have to worry about
 	 * partial PMD writebacks.
 	 */
-	pfn = dax_radix_pfn(entry);
-	size = PAGE_SIZE << dax_radix_order(entry);
+	pfn = dax_to_pfn(entry);
+	size = PAGE_SIZE << dax_entry_order(entry);
 
-	dax_mapping_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, index, pfn);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
@@ -996,8 +995,7 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
 	int ret = VM_FAULT_NOPAGE;
 	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
 
-	dax_insert_mapping_entry(mapping, vmf, entry, pfn,
-			DAX_ZERO_PAGE, false);
+	dax_insert_entry(mapping, vmf, entry, pfn, DAX_ZERO_PAGE, false);
 
 	vm_insert_mixed(vmf->vma, vaddr, pfn);
 	trace_dax_load_hole(inode, vmf, ret);
@@ -1307,7 +1305,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 		if (error < 0)
 			goto error_finish_iomap;
 
-		entry = dax_insert_mapping_entry(mapping, vmf, entry, pfn,
+		entry = dax_insert_entry(mapping, vmf, entry, pfn,
 						 0, write && !sync);
 
 		/*
@@ -1390,7 +1388,7 @@ static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
 		goto fallback;
 
 	pfn = page_to_pfn_t(zero_page);
-	ret = dax_insert_mapping_entry(mapping, vmf, entry, pfn,
+	ret = dax_insert_entry(mapping, vmf, entry, pfn,
 			DAX_PMD | DAX_ZERO_PAGE, false);
 
 	ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
@@ -1443,7 +1441,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * Make sure that the faulting address's PMD offset (color) matches
 	 * the PMD offset from the start of the file.  This is necessary so
 	 * that a PMD range in the page table overlaps exactly with a PMD
-	 * range in the radix tree.
+	 * range in the page cache.
 	 */
 	if ((vmf->pgoff & PG_PMD_COLOUR) !=
 	    ((vmf->address >> PAGE_SHIFT) & PG_PMD_COLOUR))
@@ -1511,7 +1509,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 		if (error < 0)
 			goto finish_iomap;
 
-		entry = dax_insert_mapping_entry(mapping, vmf, entry, pfn,
+		entry = dax_insert_entry(mapping, vmf, entry, pfn,
 						DAX_PMD, write && !sync);
 
 		/*
@@ -1604,15 +1602,14 @@ int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
 }
 EXPORT_SYMBOL_GPL(dax_iomap_fault);
 
-/**
+/*
  * dax_insert_pfn_mkwrite - insert PTE or PMD entry into page tables
  * @vmf: The description of the fault
  * @pe_size: Size of entry to be inserted
  * @pfn: PFN to insert
  *
- * This function inserts writeable PTE or PMD entry into page tables for mmaped
- * DAX file.  It takes care of marking corresponding radix tree entry as dirty
- * as well.
+ * This function inserts a writeable PTE or PMD entry into the page tables
+ * for an mmaped DAX file.  It also marks the page cache entry as dirty.
  */
 static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
 				  enum page_entry_size pe_size,
-- 
2.16.2
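
A companion note on the renamed dax_entry_order(): the order encodes how many
pfns an entry spans, which is what for_each_entry_pfn() and the
"size = PAGE_SIZE << dax_entry_order(entry)" computation in dax_writeback_one()
rely on. A hedged userspace sketch, operating on an already-decoded value and
assuming x86-64 constants (PAGE_SHIFT 12, PMD_SHIFT 21):

/* Sketch of how dax_entry_order() drives flush sizes; assumes
 * x86-64 shift constants, not kernel code. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define DAX_PMD		(1UL << 1)

static unsigned int dax_entry_order(unsigned long value)
{
	if (value & DAX_PMD)
		return PMD_SHIFT - PAGE_SHIFT;	/* 9: a PMD spans 512 pages */
	return 0;				/* a PTE entry spans one page */
}

int main(void)
{
	unsigned long pte_entry = 0, pmd_entry = DAX_PMD;

	/* Mirrors dax_writeback_one(): 4KiB for a PTE, 2MiB for a PMD. */
	printf("PTE flush size: %lu\n", PAGE_SIZE << dax_entry_order(pte_entry));
	printf("PMD flush size: %lu\n", PAGE_SIZE << dax_entry_order(pmd_entry));
	return 0;
}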

Thread overview: 74+ messages
2018-03-30  3:41 [PATCH v10 00/62] Convert page cache to XArray Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 01/62] page cache: Use xa_lock Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 02/62] xarray: Replace exceptional entries Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 03/62] xarray: Change definition of sibling entries Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 04/62] xarray: Add definition of struct xarray Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 05/62] xarray: Define struct xa_node Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 06/62] xarray: Add documentation Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 07/62] xarray: Add xa_load Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 08/62] xarray: Add xa_get_tag, xa_set_tag and xa_clear_tag Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 09/62] xarray: Add xa_store Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 10/62] xarray: Add xa_cmpxchg and xa_insert Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 11/62] xarray: Add xa_for_each Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 12/62] xarray: Add xa_extract Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 13/62] xarray: Add xa_destroy Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 14/62] xarray: Add xas_next and xas_prev Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 15/62] xarray: Add xas_create_range Matthew Wilcox
2018-03-30  3:41 ` [PATCH v10 16/62] xarray: Add MAINTAINERS entry Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 17/62] page cache: Rearrange address_space Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 18/62] page cache: Convert hole search to XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 19/62] page cache: Add and replace pages using the XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 20/62] page cache: Convert page deletion to XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 21/62] page cache: Convert page cache lookups " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 22/62] page cache: Convert delete_batch " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 23/62] page cache: Remove stray radix comment Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 24/62] page cache: Convert filemap_range_has_page to XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 25/62] mm: Convert page-writeback " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 26/62] mm: Convert workingset " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 27/62] mm: Convert truncate " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 28/62] mm: Convert add_to_swap_cache " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 29/62] mm: Convert delete_from_swap_cache " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 30/62] mm: Convert __do_page_cache_readahead " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 31/62] mm: Convert page migration " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 32/62] mm: Convert huge_memory " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 33/62] mm: Convert collapse_shmem " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 34/62] mm: Convert khugepaged_scan_shmem " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 35/62] pagevec: Use xa_tag_t Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 36/62] shmem: Convert replace to XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 37/62] shmem: Convert shmem_confirm_swap " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 38/62] shmem: Convert find_swap_entry " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 39/62] shmem: Convert shmem_add_to_page_cache " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 40/62] shmem: Convert shmem_alloc_hugepage " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 41/62] shmem: Convert shmem_free_swap " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 42/62] shmem: Convert shmem_partial_swap_usage " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 43/62] memfd: Convert shmem_tag_pins " Matthew Wilcox
2018-03-31  0:05   ` Mike Kravetz
2018-03-31  2:11     ` Matthew Wilcox
2018-04-03 20:51       ` Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 44/62] memfd: Convert shmem_wait_for_pins " Matthew Wilcox
2018-03-31  0:07   ` Mike Kravetz
2018-03-30  3:42 ` [PATCH v10 45/62] shmem: Comment fixups Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 46/62] btrfs: Convert page cache to XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 47/62] fs: Convert buffer " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 48/62] fs: Convert writeback " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 49/62] nilfs2: Convert " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 50/62] f2fs: " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 51/62] lustre: " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 52/62] dax: Fix use of zero page Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 53/62] dax: dax_insert_mapping_entry always succeeds Matthew Wilcox
2018-03-30  3:42 ` Matthew Wilcox [this message]
2018-03-30  3:42 ` [PATCH v10 55/62] dax: Hash on XArray instead of mapping Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 56/62] dax: Convert dax_insert_pfn_mkwrite to XArray Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 57/62] dax: Convert dax_layout_busy_page " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 58/62] dax: Convert __dax_invalidate_entry " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 59/62] dax: Convert dax writeback " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 60/62] dax: Convert page fault handlers " Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 61/62] page cache: Finish XArray conversion Matthew Wilcox
2018-03-30  3:42 ` [PATCH v10 62/62] radix tree: Remove unused functions Matthew Wilcox
2018-04-04 16:35 ` [PATCH v10 00/62] Convert page cache to XArray Mike Kravetz
2018-04-05  3:52   ` Matthew Wilcox
2018-04-05 18:33     ` Mike Kravetz
2018-04-09 21:18 ` Goldwyn Rodrigues
2018-04-14 19:50   ` Matthew Wilcox
2018-04-14 19:58     ` Matthew Wilcox
2018-04-17 21:49       ` Goldwyn Rodrigues
