From: Prathu Baronia <prathu.baronia@oneplus.com>
To: Will Deacon <will@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	catalin.marinas@arm.com, alexander.duyck@gmail.com,
	chintan.pandya@oneplus.com, mhocko@suse.com,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	gregkh@linuxfoundation.com, gthelen@google.com, jack@suse.cz,
	ken.lin@oneplus.com, gasine.xu@oneplus.com, ying.huang@intel.com,
	mark.rutland@arm.com
Subject: Re: [PATCH v2] mm: Optimized hugepage zeroing & copying from user
Date: Fri, 1 May 2020 14:28:55 +0530	[thread overview]
Message-ID: <20200501085855.c5dzk5hfrdzunqdl@oneplus.com> (raw)
In-Reply-To: <20200422143841.ozuow4jkltzymvgs@oneplus.com>

Platform and setup conditions:
Qualcomm's SM8150 platform under controlled conditions (i.e. only CPU0 and CPU6
online and set to their maximum frequency, and the DDR governor set to
performance).
---------------------------------------------------------------------------

---------------------------------------------------------------------------
Summary:
	We observed a ~61% improvement in the execution time of clearing a hugepage
	on arm64 when we increase the granularity, i.e. the chunk size handed to
	each clearing call, from 4KB to 64KB.
---------------------------------------------------------------------------

For the base build:

clear_huge_page() ftrace profile
--------------------------------
- CPU0:
	- Samples: 95
	- Mean: 242.099 us
	- Std dev: 45.0096 us
- CPU6:
	- Samples: 61
	- Mean: 258.372 us
	- Std dev: 22.0754 us

With patches [PATCH {1,2,3}/4] (provided at the end), which simply revert the
forward-reverse traversal code, we observed:

clear_huge_page() ftrace profile
--------------------------------
- CPU0:
	- Samples: 77
	- Mean: 234.568 us
	- Std dev: 6.52 us
- CPU6:
	- Samples: 81
	- Mean: 259.437 us
	- Std dev: 19.25 us

We were expecting a bit of an improvement on arm64 because of our hypothesis
that reverse traversal is considerably slower there, but after Will Deacon's
test code showed similar timings for forward and reverse traversals we dug a
bit deeper into this.

I found that in the case of arm64 a page is cleared using a special clear_page.S
assembly routine instead of an explicit call to memset. With the below patch we
bypassed the assembly routine and observed an improvement in the execution time
of clear_huge_page on CPU0.

 diff --git a/include/linux/highmem.h b/include/linux/highmem.h
 index ea5cdbd8c2c3..a0a97a95aee8 100644
 --- a/include/linux/highmem.h
 +++ b/include/linux/highmem.h
 @@ -158,7 +158,7 @@ do {
 \
 static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
 {
        void *addr = kmap_atomic(page);
 -      clear_user_page(addr, vaddr, page);
 +      memset(addr, 0x0, PAGE_SIZE);
        kunmap_atomic(addr);
 }
 #endif

For reference I will call the above patch v-exp.
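
For context, on arm64 clear_user_page() ultimately reaches the dedicated
clear_page routine in arch/arm64/lib/clear_page.S, which zeroes block-sized
regions with DC ZVA, so v-exp effectively swaps that path for a plain memset().
For anyone who wants to poke at the two primitives from userspace, below is a
rough sketch of the kind of comparison I mean. It is only an approximation under
my own assumptions (the file name, buffer size and iteration count are made up,
and it measures EL0 behaviour, where libc's memset may itself use DC ZVA, rather
than the kernel path), so treat the numbers as illustrative only.

/*
 * zva_vs_memset.c - hedged userspace sketch, not part of the patches:
 * compare plain memset() against an explicit DC ZVA zeroing loop over a
 * 2MB buffer on arm64.  Build natively:  gcc -O2 zva_vs_memset.c
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SZ  (2UL * 1024 * 1024)     /* one 2MB hugepage worth of data */
#define ITERS   200

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Zero [buf, buf + len) one DC ZVA block at a time. */
static void zero_with_dc_zva(char *buf, size_t len, size_t blk)
{
        size_t off;

        for (off = 0; off < len; off += blk)
                asm volatile("dc zva, %0" : : "r" (buf + off) : "memory");
}

int main(void)
{
        uint64_t dczid, t0;
        size_t blk;
        int i;
        char *buf = aligned_alloc(4096, BUF_SZ);

        /* DCZID_EL0: bits [3:0] = log2(block size in words), bit 4 = prohibited */
        asm volatile("mrs %0, dczid_el0" : "=r" (dczid));
        blk = 4UL << (dczid & 0xf);
        if (!buf || (dczid & (1UL << 4))) {
                fprintf(stderr, "no buffer, or DC ZVA not usable at EL0\n");
                return 1;
        }

        memset(buf, 1, BUF_SZ);         /* fault the pages in up front */

        t0 = now_ns();
        for (i = 0; i < ITERS; i++) {
                memset(buf, 0, BUF_SZ);
                asm volatile("" : : "r" (buf) : "memory");  /* keep the stores */
        }
        printf("memset : %lu ns per 2MB\n",
               (unsigned long)((now_ns() - t0) / ITERS));

        t0 = now_ns();
        for (i = 0; i < ITERS; i++)
                zero_with_dc_zva(buf, BUF_SZ, blk);
        printf("dc zva : %lu ns per 2MB (block = %zu bytes)\n",
               (unsigned long)((now_ns() - t0) / ITERS), blk);

        free(buf);
        return 0;
}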

When v-exp is applied on the base build, we observed:

clear_huge_page() ftrace profile
--------------------------------
- CPU0:
	- Samples: 71
	- Mean: 124.657 us
	- Std dev: 0.494165 us
- CPU6:
	- Samples: 60
	- Mean: 252.153 us
	- Std dev: 17.7591 us

When v-exp is applied on the base build along with [PATCH {1,2,3}/4], we observed:

clear_huge_page() ftrace profile
--------------------------------
- CPU0:
	- Samples: 103
	- Mean: 127.034 us
	- Std dev: 25.0312 us
- CPU6:
	- Samples: 80
	- Mean: 250.88 us
	- Std dev: 18.85 us

These timings for CPU0 made sense, since according to Will Deacon's test code
clearing 5GB took ~330ms on our platform. So for 2MB the time should be about:
330 ms * 2/(5*1024) = 0.12890625 ms ~ 129 us.

But the above timings seemed somewhat unjustified for CPU6.

As suggested earlier by Ying Huang, we varied the chunk size used for zeroing
the hugepage. With all the below patches applied, i.e. [PATCH {1,2,3,4}/4], we
observe a ~61% improvement in the execution time for CPU6 and a slight
improvement for CPU0 when the chunk size is increased from 4KB to 64KB.

================
Results
================

Chunk size |  CPU0                         |  CPU6
           | Samples  Mean     Std dev     | Samples  Mean     Std dev
-----------+-------------------------------+------------------------------
   4KB     |   103    127.034  25.0312     |    80    250.88   18.85
   8KB     |    68    120.627   2.12056    |    68    241.897  24.8305
  16KB     |    75    121.632  44.83       |    75    166.851  10.6432
  32KB     |    68    114.552   0.302009   |    68    120.836   6.52462
  64KB     |    70    113.639   0.228294   |    70     97.96    5.33412

(All means and standard deviations are in us.)
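
For anyone without the board, below is a rough userspace approximation of the
same sweep: a plain memset() over a 2MB buffer in progressively larger chunks.
It is only a sketch under my own assumptions (file name, buffer handling and
iteration count are made up) and it exercises the memset/cache behaviour but not
kmap_atomic() or the rest of the kernel path, so expect it to reproduce the
trend rather than the absolute numbers above.

/*
 * chunk_sweep.c - hedged userspace sketch: sweep the chunk size used to zero
 * a 2MB buffer.  Approximation of PATCH 4/4's loop, not the kernel code.
 * Build:  gcc -O2 chunk_sweep.c -o chunk_sweep
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define HPAGE_SZ  (2UL * 1024 * 1024)   /* 2MB "hugepage" worth of data */
#define PAGE_SZ   4096UL
#define ITERS     200

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
        /* chunk_factor values 1,2,4,8,16 correspond to 4KB..64KB chunks */
        static const unsigned long factors[] = { 1, 2, 4, 8, 16 };
        char *buf = aligned_alloc(PAGE_SZ, HPAGE_SZ);
        size_t f;

        if (!buf)
                return 1;
        memset(buf, 1, HPAGE_SZ);       /* fault everything in up front */

        for (f = 0; f < sizeof(factors) / sizeof(factors[0]); f++) {
                unsigned long chunk = factors[f] * PAGE_SZ;
                uint64_t t0 = now_ns();
                unsigned long off;
                int it;

                for (it = 0; it < ITERS; it++) {
                        for (off = 0; off < HPAGE_SZ; off += chunk)
                                memset(buf + off, 0, chunk);
                        /* keep the compiler from dropping the stores */
                        asm volatile("" : : "r" (buf) : "memory");
                }
                printf("chunk %3luKB : %lu us per 2MB\n", chunk / 1024,
                       (unsigned long)((now_ns() - t0) / ITERS / 1000));
        }
        free(buf);
        return 0;
}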

=============================================================================

From 2c0e618953316bd532cd3c0545e168e930515d42 Mon Sep 17 00:00:00 2001
From: Prathu Baronia <prathu.baronia@oneplus.com>
Date: Tue, 21 Apr 2020 19:01:01 +0530
Subject: [PATCH 1/4] Revert "mm, clear_huge_page: move order algorithm into a
 separate function"

This reverts commit c6ddfb6c58903262d2d77042c41dba58cf775d88.
---
 mm/memory.c | 90 ++++++++++++++++++++---------------------------------
 1 file changed, 34 insertions(+), 56 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e8bfdf0d9d1d..e83fa2d01e1a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4655,93 +4655,71 @@ EXPORT_SYMBOL(__might_fault);
 #endif

 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
-/*
- * Process all subpages of the specified huge page with the specified
- * operation.  The target subpage will be processed last to keep its
- * cache lines hot.
- */
-static inline void process_huge_page(
-	unsigned long addr_hint, unsigned int pages_per_huge_page,
-	void (*process_subpage)(unsigned long addr, int idx, void *arg),
-	void *arg)
+static void clear_gigantic_page(struct page *page,
+				unsigned long addr,
+				unsigned int pages_per_huge_page)
+{
+	int i;
+	struct page *p = page;
+
+	might_sleep();
+	for (i = 0; i < pages_per_huge_page;
+	     i++, p = mem_map_next(p, page, i)) {
+		cond_resched();
+		clear_user_highpage(p, addr + i * PAGE_SIZE);
+	}
+}
+void clear_huge_page(struct page *page,
+		     unsigned long addr_hint, unsigned int pages_per_huge_page)
 {
 	int i, n, base, l;
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);

-	/* Process target subpage last to keep its cache lines hot */
+	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
+		clear_gigantic_page(page, addr, pages_per_huge_page);
+		return;
+	}
+
+	/* Clear sub-page to access last to keep its cache lines hot */
 	might_sleep();
 	n = (addr_hint - addr) / PAGE_SIZE;
 	if (2 * n <= pages_per_huge_page) {
-		/* If target subpage in first half of huge page */
+		/* If sub-page to access in first half of huge page */
 		base = 0;
 		l = n;
-		/* Process subpages at the end of huge page */
+		/* Clear sub-pages at the end of huge page */
 		for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
 			cond_resched();
-			process_subpage(addr + i * PAGE_SIZE, i, arg);
+			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
 		}
 	} else {
-		/* If target subpage in second half of huge page */
+		/* If sub-page to access in second half of huge page */
 		base = pages_per_huge_page - 2 * (pages_per_huge_page - n);
 		l = pages_per_huge_page - n;
-		/* Process subpages at the begin of huge page */
+		/* Clear sub-pages at the begin of huge page */
 		for (i = 0; i < base; i++) {
 			cond_resched();
-			process_subpage(addr + i * PAGE_SIZE, i, arg);
+			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
 		}
 	}
 	/*
-	 * Process remaining subpages in left-right-left-right pattern
-	 * towards the target subpage
+	 * Clear remaining sub-pages in left-right-left-right pattern
+	 * towards the sub-page to access
 	 */
 	for (i = 0; i < l; i++) {
 		int left_idx = base + i;
 		int right_idx = base + 2 * l - 1 - i;

 		cond_resched();
-		process_subpage(addr + left_idx * PAGE_SIZE, left_idx, arg);
+		clear_user_highpage(page + left_idx,
+				    addr + left_idx * PAGE_SIZE);
 		cond_resched();
-		process_subpage(addr + right_idx * PAGE_SIZE, right_idx, arg);
+		clear_user_highpage(page + right_idx,
+				    addr + right_idx * PAGE_SIZE);
 	}
 }

-static void clear_gigantic_page(struct page *page,
-				unsigned long addr,
-				unsigned int pages_per_huge_page)
-{
-	int i;
-	struct page *p = page;
-
-	might_sleep();
-	for (i = 0; i < pages_per_huge_page;
-	     i++, p = mem_map_next(p, page, i)) {
-		cond_resched();
-		clear_user_highpage(p, addr + i * PAGE_SIZE);
-	}
-}
-
-static void clear_subpage(unsigned long addr, int idx, void *arg)
-{
-	struct page *page = arg;
-
-	clear_user_highpage(page + idx, addr);
-}
-
-void clear_huge_page(struct page *page,
-		     unsigned long addr_hint, unsigned int pages_per_huge_page)
-{
-	unsigned long addr = addr_hint &
-		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
-
-	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
-		clear_gigantic_page(page, addr, pages_per_huge_page);
-		return;
-	}
-
-	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
-}
-
 static void copy_user_gigantic_page(struct page *dst, struct page *src,
 				    unsigned long addr,
 				    struct vm_area_struct *vma,
--
2.17.1


From ac357d010ba25b910c5830dcb8958497ea4b5bfa Mon Sep 17 00:00:00 2001
From: Prathu Baronia <prathu.baronia@oneplus.com>
Date: Tue, 21 Apr 2020 19:03:24 +0530
Subject: [PATCH 2/4] Revert "mm, huge page: copy target sub-page last when
 copy huge page"

This reverts commit c9f4cd71383576a916e7fca99c490fc92a289f5a.
---
 include/linux/mm.h |  3 +--
 mm/huge_memory.c   |  3 +--
 mm/memory.c        | 30 +++++++-----------------------
 3 files changed, 9 insertions(+), 27 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c54fb96cb1e6..243754937b0f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2860,8 +2860,7 @@ extern void clear_huge_page(struct page *page,
 			    unsigned long addr_hint,
 			    unsigned int pages_per_huge_page);
 extern void copy_user_huge_page(struct page *dst, struct page *src,
-				unsigned long addr_hint,
-				struct vm_area_struct *vma,
+				unsigned long addr, struct vm_area_struct *vma,
 				unsigned int pages_per_huge_page);
 extern long copy_huge_page_from_user(struct page *dst_page,
 				const void __user *usr_src,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 24ad53b4dfc0..844764f11e41 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1415,8 +1415,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 	if (!page)
 		clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR);
 	else
-		copy_user_huge_page(new_page, page, vmf->address,
-				    vma, HPAGE_PMD_NR);
+		copy_user_huge_page(new_page, page, haddr, vma, HPAGE_PMD_NR);
 	__SetPageUptodate(new_page);

 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
diff --git a/mm/memory.c b/mm/memory.c
index e83fa2d01e1a..495880c3e945 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4739,31 +4739,11 @@ static void copy_user_gigantic_page(struct page *dst, struct page *src,
 	}
 }

-struct copy_subpage_arg {
-	struct page *dst;
-	struct page *src;
-	struct vm_area_struct *vma;
-};
-
-static void copy_subpage(unsigned long addr, int idx, void *arg)
-{
-	struct copy_subpage_arg *copy_arg = arg;
-
-	copy_user_highpage(copy_arg->dst + idx, copy_arg->src + idx,
-			   addr, copy_arg->vma);
-}
-
 void copy_user_huge_page(struct page *dst, struct page *src,
-			 unsigned long addr_hint, struct vm_area_struct *vma,
+			 unsigned long addr, struct vm_area_struct *vma,
 			 unsigned int pages_per_huge_page)
 {
-	unsigned long addr = addr_hint &
-		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
-	struct copy_subpage_arg arg = {
-		.dst = dst,
-		.src = src,
-		.vma = vma,
-	};
+	int i;

 	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
 		copy_user_gigantic_page(dst, src, addr, vma,
@@ -4771,7 +4751,11 @@ void copy_user_huge_page(struct page *dst, struct page *src,
 		return;
 	}

-	process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
+	might_sleep();
+	for (i = 0; i < pages_per_huge_page; i++) {
+		cond_resched();
+		copy_user_highpage(dst + i, src + i, addr + i*PAGE_SIZE, vma);
+	}
 }

 long copy_huge_page_from_user(struct page *dst_page,
--
2.17.1


From 14f0266254e4a2ea55f1cf33323205c77bec1a12 Mon Sep 17 00:00:00 2001
From: Prathu Baronia <prathu.baronia@oneplus.com>
Date: Tue, 21 Apr 2020 19:06:30 +0530
Subject: [PATCH 3/4] Revert "mm: hugetlb: clear target sub-page last when
 clearing huge page"

This reverts commit c79b57e462b5d2f47afa5f175cf1828f16e18612.
---
 include/linux/mm.h |  2 +-
 mm/huge_memory.c   |  4 ++--
 mm/memory.c        | 42 ++++--------------------------------------
 3 files changed, 7 insertions(+), 41 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 243754937b0f..0b77926c3705 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2857,7 +2857,7 @@ enum mf_action_page_type {

 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
 extern void clear_huge_page(struct page *page,
-			    unsigned long addr_hint,
+			    unsigned long addr,
 			    unsigned int pages_per_huge_page);
 extern void copy_user_huge_page(struct page *dst, struct page *src,
 				unsigned long addr, struct vm_area_struct *vma,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 844764f11e41..a2131ee72a37 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -606,7 +606,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		goto release;
 	}

-	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+	clear_huge_page(page, haddr, HPAGE_PMD_NR);
 	/*
 	 * The memory barrier inside __SetPageUptodate makes sure that
 	 * clear_huge_page writes become visible before the set_pmd_at()
@@ -1413,7 +1413,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 	count_memcg_events(memcg, THP_FAULT_ALLOC, 1);

 	if (!page)
-		clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR);
+		clear_huge_page(new_page, haddr, HPAGE_PMD_NR);
 	else
 		copy_user_huge_page(new_page, page, haddr, vma, HPAGE_PMD_NR);
 	__SetPageUptodate(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index 495880c3e945..a297569d7308 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4670,53 +4670,19 @@ static void clear_gigantic_page(struct page *page,
 	}
 }
 void clear_huge_page(struct page *page,
-		     unsigned long addr_hint, unsigned int pages_per_huge_page)
+		     unsigned long addr, unsigned int pages_per_huge_page)
 {
-	int i, n, base, l;
-	unsigned long addr = addr_hint &
-		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
+	int i;

 	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
 		clear_gigantic_page(page, addr, pages_per_huge_page);
 		return;
 	}

-	/* Clear sub-page to access last to keep its cache lines hot */
 	might_sleep();
-	n = (addr_hint - addr) / PAGE_SIZE;
-	if (2 * n <= pages_per_huge_page) {
-		/* If sub-page to access in first half of huge page */
-		base = 0;
-		l = n;
-		/* Clear sub-pages at the end of huge page */
-		for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
-			cond_resched();
-			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
-		}
-	} else {
-		/* If sub-page to access in second half of huge page */
-		base = pages_per_huge_page - 2 * (pages_per_huge_page - n);
-		l = pages_per_huge_page - n;
-		/* Clear sub-pages at the begin of huge page */
-		for (i = 0; i < base; i++) {
-			cond_resched();
-			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
-		}
-	}
-	/*
-	 * Clear remaining sub-pages in left-right-left-right pattern
-	 * towards the sub-page to access
-	 */
-	for (i = 0; i < l; i++) {
-		int left_idx = base + i;
-		int right_idx = base + 2 * l - 1 - i;
-
-		cond_resched();
-		clear_user_highpage(page + left_idx,
-				    addr + left_idx * PAGE_SIZE);
+	for (i = 0; i < pages_per_huge_page; i++) {
 		cond_resched();
-		clear_user_highpage(page + right_idx,
-				    addr + right_idx * PAGE_SIZE);
+		clear_user_highpage(page + i, addr + i * PAGE_SIZE);
 	}
 }

--
2.17.1


From 1540b3a6b8732b03a480905b564611a14d09824a Mon Sep 17 00:00:00 2001
From: Prathu Baronia <prathu.baronia@oneplus.com>
Date: Fri, 1 May 2020 11:50:58 +0530
Subject: [PATCH 4/4] Varied chunk size for clearing out a huge page.

Signed-off-by: Prathu Baronia <prathu.baronia@oneplus.com>
---
 mm/memory.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index a297569d7308..b6f5a0cf4d40 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4673,6 +4673,9 @@ void clear_huge_page(struct page *page,
 		     unsigned long addr, unsigned int pages_per_huge_page)
 {
 	int i;
+	void * addr_chunk;
+	int chunk_factor = 1;
+	int chunk_size = PAGE_SIZE*chunk_factor;

 	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
 		clear_gigantic_page(page, addr, pages_per_huge_page);
@@ -4680,9 +4683,11 @@ void clear_huge_page(struct page *page,
 	}

 	might_sleep();
-	for (i = 0; i < pages_per_huge_page; i++) {
+	for (i = 0; i < pages_per_huge_page; i += chunk_factor) {
 		cond_resched();
-		clear_user_highpage(page + i, addr + i * PAGE_SIZE);
+		addr_chunk = kmap_atomic(page + i);
+		memset(addr_chunk, 0x0, chunk_size);
+		kunmap_atomic(addr_chunk);
 	}
 }

--
2.17.1
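
A note on [PATCH 4/4] as posted: it leaves chunk_factor at 1, i.e. 4KB chunks;
the larger chunk sizes in the results correspond to a larger factor (64KB is
chunk_factor = 16 with 4KB pages). For clarity, this is how I read the loop in
the 64KB configuration - the same hunk as above with the factor substituted; the
comment about kmap_atomic() is my own observation, not part of the patch:

        int chunk_factor = 16;          /* 16 * 4KB = 64KB per memset() */
        int chunk_size = PAGE_SIZE * chunk_factor;
        ...
        for (i = 0; i < pages_per_huge_page; i += chunk_factor) {
                cond_resched();
                /*
                 * kmap_atomic() maps only the first 4KB subpage of the chunk.
                 * The 64KB memset() below still works on arm64 because there
                 * is no highmem there (kmap_atomic() boils down to
                 * page_address()) and the subpages of a hugepage are
                 * contiguous in the linear map, but it would not be safe on a
                 * CONFIG_HIGHMEM kernel.
                 */
                addr_chunk = kmap_atomic(page + i);
                memset(addr_chunk, 0x0, chunk_size);
                kunmap_atomic(addr_chunk);
        }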

-- 
Prathu Baronia
OnePlus RnD

