From: "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>
To: Greg KH <gregkh@linuxfoundation.org>,
	Jonathan Cameron <jonathan.cameron@huawei.com>
Cc: "tim.c.chen@linux.intel.com" <tim.c.chen@linux.intel.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"will@kernel.org" <will@kernel.org>,
	"rjw@rjwysocki.net" <rjw@rjwysocki.net>,
	"vincent.guittot@linaro.org" <vincent.guittot@linaro.org>,
	"bp@alien8.de" <bp@alien8.de>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"lenb@kernel.org" <lenb@kernel.org>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"dietmar.eggemann@arm.com" <dietmar.eggemann@arm.com>,
	"rostedt@goodmis.org" <rostedt@goodmis.org>,
	"bsegall@google.com" <bsegall@google.com>,
	"mgorman@suse.de" <mgorman@suse.de>,
	"msys.mizuma@gmail.com" <msys.mizuma@gmail.com>,
	"valentin.schneider@arm.com" <valentin.schneider@arm.com>,
	"juri.lelli@redhat.com" <juri.lelli@redhat.com>,
	"mark.rutland@arm.com" <mark.rutland@arm.com>,
	"sudeep.holla@arm.com" <sudeep.holla@arm.com>,
	"aubrey.li@linux.intel.com" <aubrey.li@linux.intel.com>,
	"linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>,
	"xuwei (O)" <xuwei5@huawei.com>,
	"Zengtao (B)" <prime.zeng@hisilicon.com>,
	"guodong.xu@linaro.org" <guodong.xu@linaro.org>,
	yangyicong <yangyicong@huawei.com>,
	"Liguozhu (Kenneth)" <liguozhu@hisilicon.com>,
	"linuxarm@openeuler.org" <linuxarm@openeuler.org>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"tiantao (H)" <tiantao6@hisilicon.com>
Subject: RE: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
Date: Tue, 20 Apr 2021 03:30:53 +0000
Message-ID: <57660c1863194dd58e441fce9d5d1580@hisilicon.com>
In-Reply-To: <YFR2kwakbcGiI37w@kroah.com>

> -----Original Message-----
> From: Greg KH [mailto:gregkh@linuxfoundation.org]
> Sent: Friday, March 19, 2021 11:02 PM
> To: Jonathan Cameron <jonathan.cameron@huawei.com>
> Cc: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> a die
>
> On Fri, Mar 19, 2021 at 09:36:16AM +0000, Jonathan Cameron wrote:
> > On Fri, 19 Mar 2021 06:57:08 +0000
> > "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com> wrote:
> >
> > > > -----Original Message-----
> > > > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > > > Sent: Friday, March 19, 2021 7:35 PM
> > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > > > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > > > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > > > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > > > valentin.schneider@arm.com; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > > > juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> > > > aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> > > > linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> > > > xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> > > > guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>;
> > > > Liguozhu (Kenneth) <liguozhu@hisilicon.com>; linuxarm@openeuler.org;
> > > > hpa@zytor.com; msys.mizuma@gmail.com; mgorman@suse.de
> > > > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs
> > > > within a die
> > > >
> > > > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> > > > > diff --git a/Documentation/admin-guide/cputopology.rst
> > > > > b/Documentation/admin-guide/cputopology.rst
> > > > > index b90dafc..f9d3745 100644
> > > > > --- a/Documentation/admin-guide/cputopology.rst
> > > > > +++ b/Documentation/admin-guide/cputopology.rst
> > > > > @@ -24,6 +24,12 @@ core_id:
> > > > >  	identifier (rather than the kernel's).  The actual value is
> > > > >  	architecture and platform dependent.
> > > > >
> > > > > +cluster_id:
> > > > > +
> > > > > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > > > > +	identifier (rather than the kernel's).  The actual value is
> > > > > +	architecture and platform dependent.
> > > > > +
> > > > >  book_id:
> > > > >
> > > > >  	the book ID of cpuX. Typically it is the hardware platform's
> > > > > @@ -56,6 +62,14 @@ package_cpus_list:
> > > > >  	human-readable list of CPUs sharing the same physical_package_id.
> > > > >  	(deprecated name: "core_siblings_list")
> > > > >
> > > > > +cluster_cpus:
> > > > > +
> > > > > +	internal kernel map of CPUs within the same cluster.
> > > > > +
> > > > > +cluster_cpus_list:
> > > > > +
> > > > > +	human-readable list of CPUs within the same cluster.
> > > > > +
> > > > >  die_cpus:
> > > > >
> > > > >  	internal kernel map of CPUs within the same die.
> > > >
> > > > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > > > file which can be correctly parsed and shown to userspace?
> > >
> > > Well. Those ABIs have been there for much a long time. It is like:
> > >
> > > [root@ceph1 topology]# ls
> > > core_id  core_siblings  core_siblings_list  physical_package_id
> > > thread_siblings  thread_siblings_list
> > > [root@ceph1 topology]# pwd
> > > /sys/devices/system/cpu/cpu100/topology
> > > [root@ceph1 topology]# cat core_siblings_list
> > > 64-127
> > > [root@ceph1 topology]#
> > >
> > > > Any chance you can fix that up here as well?
> > >
> > > Yes. we will send a separate patch to address this, which won't
> > > be in this patchset. This patchset will base on that one.
> > >
> > > > Also note that "list" is not something that goes in sysfs, sysfs is "one
> > > > value per file", and a list is not "one value".  How do you prevent
> > > > overflowing the buffer of the sysfs file if you have a "list"?
> > >
> > > At a glance, the list is using "-" rather than a real list
> > > [root@ceph1 topology]# cat core_siblings_list
> > > 64-127
> > >
> > > Anyway, I will take a look if it has any chance to overflow.
> >
> > It could in theory be alternate CPUs as comma separated list.
> > So it's would get interesting around 500-1000 cpus (guessing).
> >
> > Hopefully no one has that crazy a cpu numbering scheme but it's possible
> > (note that cluster is fine for this, but I guess it might eventually
> > happen for core-siblings list (cpus within a package).
> >
> > Shouldn't crash or anything like that but might terminate early.
>
> We have a broken sysfs api already for listing LED numbers that has had
> to be worked around in the past, please do not create a new one with
> that same problem, we should learn from them :)

Another place where I see a CPU list is in the NUMA topology:
/sys/devices/system/node/nodeX/cpulist. But that code has a BUILD_BUG_ON
to guard the page buffer:

static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
{
	ssize_t n;
	cpumask_var_t mask;
	struct node *node_dev = to_node(dev);

	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
	BUILD_BUG_ON((NR_CPUS / 32 * 9) > (PAGE_SIZE - 1));

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return 0;

	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
	n = cpumap_print_to_pagebuf(list, buf, mask);
	free_cpumask_var(mask);

	return n;
}

For the lists in the CPU topology, I haven't seen a similar guard, though
I believe we need one. Or am I missing something?

> thanks,
>
> greg k-h

Thanks
Barry