From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: syzbot <syzbot+e58112d71f77113ddb7b@syzkaller.appspotmail.com>,
	aarcange@redhat.com, akpm@linux-foundation.org, christian@brauner.io,
	davem@davemloft.net, ebiederm@xmission.com, elena.reshetova@intel.com,
	guro@fb.com, hch@infradead.org, james.bottomley@hansenpartnership.com,
	jglisse@redhat.com, keescook@chromium.org, ldv@altlinux.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-parisc@vger.kernel.org, luto@amacapital.net,
	mhocko@suse.com, mingo@kernel.org, namit@vmware.com,
	peterz@infradead.org, syzkaller-bugs@googlegroups.com,
	viro@zeniv.linux.org.uk, wad@chromium.org
Subject: Re: WARNING in __mmdrop
Date: Tue, 23 Jul 2019 16:42:19 +0800
Message-ID: <e2e01a05-63d8-4388-2bcd-b2be3c865486@redhat.com>
In-Reply-To: <20190723032800-mutt-send-email-mst@kernel.org>

On 2019/7/23 3:56 PM, Michael S. Tsirkin wrote:
> On Tue, Jul 23, 2019 at 01:48:52PM +0800, Jason Wang wrote:
>> On 2019/7/23 1:02 PM, Michael S. Tsirkin wrote:
>>> On Tue, Jul 23, 2019 at 11:55:28AM +0800, Jason Wang wrote:
>>>> On 2019/7/22 4:02 PM, Michael S. Tsirkin wrote:
>>>>> On Mon, Jul 22, 2019 at 01:21:59PM +0800, Jason Wang wrote:
>>>>>> On 2019/7/21 6:02 PM, Michael S. Tsirkin wrote:
>>>>>>> On Sat, Jul 20, 2019 at 03:08:00AM -0700, syzbot wrote:
>>>>>>>> syzbot has bisected this bug to:
>>>>>>>>
>>>>>>>> commit 7f466032dc9e5a61217f22ea34b2df932786bbfc
>>>>>>>> Author: Jason Wang <jasowang@redhat.com>
>>>>>>>> Date:   Fri May 24 08:12:18 2019 +0000
>>>>>>>>
>>>>>>>>     vhost: access vq metadata through kernel virtual address
>>>>>>>>
>>>>>>>> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=149a8a20600000
>>>>>>>> start commit:   6d21a41b Add linux-next specific files for 20190718
>>>>>>>> git tree:       linux-next
>>>>>>>> final crash:    https://syzkaller.appspot.com/x/report.txt?x=169a8a20600000
>>>>>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=129a8a20600000
>>>>>>>> kernel config:  https://syzkaller.appspot.com/x/.config?x=3430a151e1452331
>>>>>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=e58112d71f77113ddb7b
>>>>>>>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10139e68600000
>>>>>>>>
>>>>>>>> Reported-by: syzbot+e58112d71f77113ddb7b@syzkaller.appspotmail.com
>>>>>>>> Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtual address")
>>>>>>>>
>>>>>>>> For information about bisection process see: https://goo.gl/tpsmEJ#bisection
>>>>>>> OK I poked at this for a bit, I see several things that
>>>>>>> we need to fix, though I'm not yet sure it's the reason for
>>>>>>> the failures:
>>>>>>>
>>>>>>> 1. mmu_notifier_register shouldn't be called from vhost_vring_set_num_addr
>>>>>>>    That's just a bad hack,
>>>>>> This is used to avoid holding the lock when checking whether the addresses
>>>>>> overlap. Otherwise we would need to take the spinlock for each invalidation
>>>>>> request, even if the VA range is of no interest to us. That would be very
>>>>>> slow, e.g. during guest boot.
>>>>> KVM seems to do exactly that.
>>>>> I tried and the guest does not seem to boot any slower.
>>>>> Do you observe any slowdown?
>>>> Yes, I do.
>>>>
>>>>
>>>>> Now I took a hard look at the uaddr hackery; it really makes
>>>>> me nervous. So I think for this release we want something
>>>>> safe, and optimizations on top. As an alternative, revert the
>>>>> optimization and try again for the next merge window.
>>>> Will post a series of fixes, let me know if you're ok with that.
>>>>
>>>> Thanks
>>> I'd prefer you to take a hard look at the patch I posted,
>>> which makes the code cleaner,
>>
>> I did. But it looks to me like a series of only about 60 lines of code can
>> fix all the issues we found without reverting the uaddr optimization.
> Another thing I like about the patch I posted is that
> it removes 60 lines of code, instead of adding more :)
> Mostly because of unifying everything into
> a single cleanup function and using kfree_rcu.

Yes.

> So how about this: do exactly what you propose, but as a 2-patch series:
> start with the slow safe patch, and then add the uaddr optimizations
> on top. We can then more easily reason about whether they are safe.

If you insist, I can do this.

> Basically you are saying this:
> - notifiers are only needed to invalidate maps
> - we make sure any uaddr change invalidates maps anyway
> - thus it's ok not to have notifiers since we do
>   not have maps
>
> All this looks ok, but the question is why do we
> bother unregistering them. And the answer seems to
> be that this is so we can start with a balanced
> counter: otherwise we can be between _start and
> _end calls.

Yes: since there can be multiple concurrent invalidation requests, we
need to count them to make sure we don't pin the wrong pages.

> I also wonder about ordering. kvm has this:
>
> 	/*
> 	 * Used to check for invalidations in progress, of the pfn that is
> 	 * returned by pfn_to_pfn_prot below.
> 	 */
> 	mmu_seq = kvm->mmu_notifier_seq;
> 	/*
> 	 * Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
> 	 * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
> 	 * risk the page we get a reference to getting unmapped before we have a
> 	 * chance to grab the mmu_lock without mmu_notifier_retry() noticing.
> 	 *
> 	 * This smp_rmb() pairs with the effective smp_wmb() of the combination
> 	 * of the pte_unmap_unlock() after the PTE is zapped, and the
> 	 * spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
> 	 * mmu_notifier_seq is incremented.
> 	 */
> 	smp_rmb();
>
> does this apply to us? Can't we use a seqlock instead so we do
> not need to worry?

I'm not familiar with the KVM MMU internals, but we do everything under
the mmu_lock.

Thanks