From: Dave Hansen <dave.hansen@linux.intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>, weixugc@google.com, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, osalvador@suse.de
Subject: [PATCH 05/10] mm/migrate: demote pages during reclaim
Date: Thu, 01 Apr 2021 11:32:25 -0700
Message-ID: <20210401183225.2EDC224F@viggo.jf.intel.com>
In-Reply-To: <20210401183216.443C4443@viggo.jf.intel.com>

From: Dave Hansen <dave.hansen@linux.intel.com>

This is mostly derived from a patch from Yang Shi:

https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/

Add code to the reclaim path (shrink_page_list()) to "demote" data
to another NUMA node instead of discarding the data.  This always
avoids the cost of I/O needed to read the page back in and sometimes
avoids the writeout cost when the page is dirty.

A second pass through shrink_page_list() will be made if any demotions
fail.  This essentially falls back to normal reclaim behavior in the
case that demotions fail.  Previous versions of this patch may have
simply failed to reclaim pages which were eligible for demotion but
were unable to be demoted in practice.

Note: This just adds the start of infrastructure for migration.  It is
actually disabled next to the FIXME in migrate_demote_page_ok().
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: osalvador <osalvador@suse.de>

--

changes from 20210122:
 * move from GFP_HIGHUSER -> GFP_HIGHUSER_MOVABLE (Ying)

changes from 202010:
 * add MR_NUMA_MISPLACED to trace MIGRATE_REASON define
 * make migrate_demote_page_ok() static, remove 'sc' arg until
   later patch
 * remove unnecessary alloc_demote_page() hugetlb warning
 * Simplify alloc_demote_page() gfp mask.  Depend on __GFP_NORETRY
   to make it lightweight instead of fancier stuff like leaving out
   __GFP_IO/FS.
 * Allocate migration page with alloc_migration_target() instead
   of allocating directly.

changes from 20200730:
 * Add another pass through shrink_page_list() when demotion fails.

changes from 20210302:
 * Use __GFP_THISNODE and revise the comment explaining the GFP
   mask construction

---

 b/include/linux/migrate.h        |    9 ++++
 b/include/trace/events/migrate.h |    3 -
 b/mm/vmscan.c                    |   82 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 93 insertions(+), 1 deletion(-)

diff -puN include/linux/migrate.h~demote-with-migrate_pages include/linux/migrate.h
--- a/include/linux/migrate.h~demote-with-migrate_pages	2021-03-31 15:17:15.842000251 -0700
+++ b/include/linux/migrate.h	2021-03-31 15:17:15.853000251 -0700
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_DEMOTION,
 	MR_TYPES
 };
@@ -196,6 +197,14 @@ struct migrate_vma {
 int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
+int next_demotion_node(int node);
+
+#else /* CONFIG_MIGRATION disabled: */
+
+static inline int next_demotion_node(int node)
+{
+	return NUMA_NO_NODE;
+}

 #endif /* CONFIG_MIGRATION */

diff -puN include/trace/events/migrate.h~demote-with-migrate_pages include/trace/events/migrate.h
--- a/include/trace/events/migrate.h~demote-with-migrate_pages	2021-03-31 15:17:15.846000251 -0700
+++ b/include/trace/events/migrate.h	2021-03-31 15:17:15.853000251 -0700
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_DEMOTION,	"demotion")

 /*
  * First define the enums in the above macros to be exported to userspace

diff -puN mm/vmscan.c~demote-with-migrate_pages mm/vmscan.c
--- a/mm/vmscan.c~demote-with-migrate_pages	2021-03-31 15:17:15.848000251 -0700
+++ b/mm/vmscan.c	2021-03-31 15:17:15.856000251 -0700
@@ -41,6 +41,7 @@
 #include <linux/kthread.h>
 #include <linux/freezer.h>
 #include <linux/memcontrol.h>
+#include <linux/migrate.h>
 #include <linux/delayacct.h>
 #include <linux/sysctl.h>
 #include <linux/oom.h>
@@ -1035,6 +1036,23 @@ static enum page_references page_check_r
 	return PAGEREF_RECLAIM;
 }

+static bool migrate_demote_page_ok(struct page *page)
+{
+	int next_nid = next_demotion_node(page_to_nid(page));
+
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(PageHuge(page), page);
+	VM_BUG_ON_PAGE(PageLRU(page), page);
+
+	if (next_nid == NUMA_NO_NODE)
+		return false;
+	if (PageTransHuge(page) && !thp_migration_supported())
+		return false;
+
+	/* FIXME: actually enable this later in the series */
+	return false;
+}
+
 /* Check if a page is dirty or under writeback */
 static void page_check_dirty_writeback(struct page *page,
 				       bool *dirty, bool *writeback)
@@ -1065,6 +1083,46 @@ static void page_check_dirty_writeback(s
 	mapping->a_ops->is_dirty_writeback(page, dirty, writeback);
 }

+static struct page *alloc_demote_page(struct page *page, unsigned long node)
+{
+	struct migration_target_control mtc = {
+		/*
+		 * Allocate from 'node', or fail quickly and quietly.
+		 * When this happens, 'page' will likely just be discarded
+		 * instead of migrated.
+		 */
+		.gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_NORETRY |
+			    __GFP_THISNODE | __GFP_NOWARN,
+		.nid = node
+	};
+
+	return alloc_migration_target(page, (unsigned long)&mtc);
+}
+
+/*
+ * Take pages on @demote_list and attempt to demote them to
+ * another node.  Pages which are not demoted are left on
+ * @demote_pages.
+ */
+static unsigned int demote_page_list(struct list_head *demote_pages,
+				     struct pglist_data *pgdat,
+				     struct scan_control *sc)
+{
+	int target_nid = next_demotion_node(pgdat->node_id);
+	unsigned int nr_succeeded = 0;
+	int err;
+
+	if (list_empty(demote_pages))
+		return 0;
+
+	/* Demotion ignores all cpuset and mempolicy settings */
+	err = migrate_pages(demote_pages, alloc_demote_page, NULL,
+			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
+			    &nr_succeeded);
+
+	return nr_succeeded;
+}
+
 /*
  * shrink_page_list() returns the number of reclaimed pages
  */
@@ -1076,12 +1134,15 @@ static unsigned int shrink_page_list(str
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
+	LIST_HEAD(demote_pages);
 	unsigned int nr_reclaimed = 0;
 	unsigned int pgactivate = 0;
+	bool do_demote_pass = true;

 	memset(stat, 0, sizeof(*stat));
 	cond_resched();

+retry:
 	while (!list_empty(page_list)) {
 		struct address_space *mapping;
 		struct page *page;
@@ -1231,6 +1292,16 @@ static unsigned int shrink_page_list(str
 		}

 		/*
+		 * Before reclaiming the page, try to relocate
+		 * its contents to another node.
+		 */
+		if (do_demote_pass && migrate_demote_page_ok(page)) {
+			list_add(&page->lru, &demote_pages);
+			unlock_page(page);
+			continue;
+		}
+
+		/*
 		 * Anonymous process memory has backing store?
 		 * Try to allocate it some swap space here.
 		 * Lazyfree page could be freed directly
@@ -1480,6 +1551,17 @@ keep:
 		list_add(&page->lru, &ret_pages);
 		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
 	}
+	/* 'page_list' is always empty here */
+
+	/* Migrate pages selected for demotion */
+	nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
+	/* Pages that could not be demoted are still in @demote_pages */
+	if (!list_empty(&demote_pages)) {
+		/* Pages which failed to demote go back on @page_list for retry: */
+		list_splice_init(&demote_pages, page_list);
+		do_demote_pass = false;
+		goto retry;
+	}

 	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
_