From: Devesh Sharma <Devesh.Sharma-iH1Dq9VlAzfQT0dZR+AlfA@public.gmane.org>
To: Chuck Lever <chuck.lever-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
Cc: Linux NFS Mailing List
	<linux-nfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	"linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org"
	<linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	Trond Myklebust
	<trond.myklebust-7I+n7zu2hftEKMMhf/gKZA@public.gmane.org>
Subject: RE: [PATCH V1] NFS-RDMA: fix qp pointer validation checks
Date: Tue, 15 Apr 2014 18:25:47 +0000	[thread overview]
Message-ID: <EE7902D3F51F404C82415C4803930ACD3FDEE11F@CMEXMB1.ad.emulex.com> (raw)
In-Reply-To: <C689AB91-46F6-4E96-A673-0DE76FE54CC4-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>



> -----Original Message-----
> From: Chuck Lever [mailto:chuck.lever-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org]
> Sent: Tuesday, April 15, 2014 6:10 AM
> To: Devesh Sharma
> Cc: Linux NFS Mailing List; linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Trond Myklebust
> Subject: Re: [PATCH V1] NFS-RDMA: fix qp pointer validation checks
> 
> 
> On Apr 14, 2014, at 6:46 PM, Devesh Sharma <devesh.sharma-laKkSmNT4hbQT0dZR+AlfA@public.gmane.org>
> wrote:
> 
> > Hi Chuck
> >
> >> -----Original Message-----
> >> From: Chuck Lever [mailto:chuck.lever-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org]
> >> Sent: Tuesday, April 15, 2014 2:24 AM
> >> To: Devesh Sharma
> >> Cc: Linux NFS Mailing List; linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Trond
> >> Myklebust
> >> Subject: Re: [PATCH V1] NFS-RDMA: fix qp pointer validation checks
> >>
> >> Hi Devesh-
> >>
> >>
> >> On Apr 13, 2014, at 12:01 AM, Chuck Lever <chuck.lever-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
> wrote:
> >>
> >>>
> >>> On Apr 11, 2014, at 7:51 PM, Devesh Sharma
> >> <Devesh.Sharma-iH1Dq9VlAzfQT0dZR+AlfA@public.gmane.org> wrote:
> >>>
> >>>> Hi Chuck,
> >>>> Yes, that is the case. Following is the trace I got.
> >>>>
> >>>> <4>RPC:   355 setting alarm for 60000 ms
> >>>> <4>RPC:   355 sync task going to sleep
> >>>> <4>RPC:       xprt_rdma_connect_worker: reconnect
> >>>> <4>RPC:       rpcrdma_ep_disconnect: rdma_disconnect -1
> >>>> <4>RPC:       rpcrdma_ep_connect: rpcrdma_ep_disconnect status -1
> >>>> <3>ocrdma_mbx_create_qp(0) rq_err
> >>>> <3>ocrdma_mbx_create_qp(0) sq_err
> >>>> <3>ocrdma_create_qp(0) error=-1
> >>>> <4>RPC:       rpcrdma_ep_connect: rdma_create_qp failed -1
> >>>> <4>RPC:   355 __rpc_wake_up_task (now 4296956756)
> >>>> <4>RPC:   355 disabling timer
> >>>> <4>RPC:   355 removed from queue ffff880454578258 "xprt_pending"
> >>>> <4>RPC:       __rpc_wake_up_task done
> >>>> <4>RPC:       xprt_rdma_connect_worker: exit
> >>>> <4>RPC:   355 sync task resuming
> >>>> <4>RPC:   355 xprt_connect_status: error 1 connecting to server
> >> 192.168.1.1
> >>>
> >>> xprtrdma's connect worker is returning "1" instead of a negative errno.
> >>> That's the bug that triggers this chain of events.
> >>
> >> rdma_create_qp() has returned -EPERM. There's very little xprtrdma
> >> can do if the provider won't even create a QP. That seems like a rare
> >> and fatal problem.
> >>
> >> For the moment, I'm inclined to think that a panic is correct
> >> behavior, since there are outstanding registered memory regions that
> >> cannot be cleaned up without a QP (see below).
> > Well, I think the system should still remain alive.
> 
> Sure, in the long run. I'm not suggesting we leave it this way.
Okay, agreed.
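On the status bug you pointed out earlier (the connect worker handing "1" up instead of a negative errno), a tiny user-space sketch of the fix I have in mind looks like this. The function name is hypothetical, not the actual xprtrdma code:

```c
#include <errno.h>

/* Hypothetical sketch: a connect worker must report a negative errno,
 * never a raw positive provider status.  Otherwise xprt_connect_status()
 * sees an unknown value, turns it into -EIO, and kills every waiting
 * RPC task, as in the trace above. */
static int normalize_connect_status(int rc)
{
	if (rc > 0)
		return -EIO;	/* unknown positive status: map to an errno */
	return rc;		/* 0 (success) and negative errnos pass through */
}
```

So rdma_create_qp() failing with -EPERM would be passed through unchanged, while a stray "1" is normalized before the RPC layer ever sees it.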
> 
> > This will definitely cause a memory leak. But QP create failure does not
> mean the system should also crash.
> 
> It's more than leaked memory.  A permanent QP creation failure can leave
> pages in the page cache registered and pinned, as I understand it.
Yes, true.
> 
> > I think for the time being it is worthwhile to put NULL pointer checks in place
> to prevent the system from crashing.
> 
> Common practice in the Linux kernel is to avoid unnecessary NULL checks.
> Work-around fixes are typically rejected, and not with a happy face either.
> 
> Once the connection tear-down code is fixed, it should be clear where NULL
> checks need to go.
Okay.
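Just so we agree on the shape of the eventual check: the guard I would expect at the post path is something like the sketch below. The names are stubs for illustration, not the real xprtrdma structures (the actual code keeps the QP in ia->ri_id->qp and would call ib_post_send()):

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical guard: refuse to post a LOCAL_INV (or any) work request
 * unless the QP exists and the connection is in the ready-to-send
 * state, instead of dereferencing a NULL qp pointer and panicking. */
static int stub_post_local_inv(void *qp, int connected)
{
	if (qp == NULL || !connected)
		return -ENOTCONN;	/* caller must defer or clean up */
	return 0;			/* real code would ib_post_send() here */
}
```

Once the tear-down ordering is fixed, a single check like this at the post boundary should be enough; scattering NULL checks through the callers would not be.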
> 
> >>
> >>> RPC tasks waiting for the reconnect are awoken.
> >>> xprt_connect_status() doesn't recognize a tk_status of "1", so it
> >>> turns it into -EIO, and kills each waiting RPC task.
> >>
> >>>> <4>RPC:       wake_up_next(ffff880454578190 "xprt_sending")
> >>>> <4>RPC:   355 call_connect_status (status -5)
> >>>> <4>RPC:   355 return 0, status -5
> >>>> <4>RPC:   355 release task
> >>>> <4>RPC:       wake_up_next(ffff880454578190 "xprt_sending")
> >>>> <4>RPC:       xprt_rdma_free: called on 0x(null)
> >>>
> >>> And as part of exiting, the RPC task has to free its buffer.
> >>>
> >>> Not exactly sure why req->rl_nchunks is not zero for an NFSv4 GETATTR.
> >>> This is why rpcrdma_deregister_external() is invoked here.
> >>>
> >>> Eventually this gets around to attempting to post a LOCAL_INV WR
> >>> with
> >>> ->qp set to NULL, and the panic below occurs.
> >>
> >> This is a somewhat different problem.
> >>
> >> Not only do we need to have a good ->qp here, but it has to be
> >> connected and in the ready-to-send state before LOCAL_INV work
> >> requests can be posted.
> >>
> >> The implication of this is that if a server disconnects (server crash
> >> or network partition), the client is stuck waiting for the server to
> >> come back before it can deregister memory and retire outstanding RPC
> requests.
> > This is a real problem to solve. In the existing state of the xprtrdma
> > code, even a server reboot will cause the client to crash.
> 
> I don't see how that can happen if the HCA/provider manages to create a
> fresh QP successfully and then rdma_connect() succeeds.
Okay, yes, since QP creation will still succeed.
> 
> A soft timeout or a ^C while the server is rebooting might be a problem.
> 
> >>
> >> This is bad for ^C or soft timeouts or umount ... when the server is
> >> unavailable.
> >>
> >> So I feel we need better clean-up when the client cannot reconnect.
> > Unregister the old FRMRs with the help of the new QP? That works as long as
> the new QP is created with the same PD, since an FRMR is bound to the PD and not to the QP.
> >> Probably deregistering RPC chunk MR's before finally tearing down the
> >> old QP is what is necessary.
> >
> > We need a scheme that handles memory registrations separately from
> connection establishment, and does the book-keeping of which regions are
> registered and which are not.
> > Once the new connection is back, either start using the old memory regions as they are,
> or invalidate the old ones and re-register on the new QP.
> > What is the existing scheme xprtrdma follows? Is it the same?
> 
> This is what is going on now.  Clearly, when managing its own memory
> resources, the client should never depend on the server ever coming back.
> 
> The proposal is to deregister _before_ the old QP is torn down, using
> ib_dereg_mr() in the connect worker process. All RPC requests on that
> connection should be sleeping waiting for the reconnect to complete.
> 
> If chunks are created and marshaled during xprt_transmit(), the waiting RPC
> requests should simply re-register when they are ready to be sent again.
> 
Okay, I will try to make this change and test it; it may take me a week to understand the code and roll out V3.
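To make sure I have the proposed ordering right before I start, here is a minimal model of it (stub counters standing in for real ib_dereg_mr()/rdma_destroy_qp() calls, not the actual verbs API): deregister every chunk MR while the old QP still exists, and only then tear the QP down; the waiting RPCs re-register at transmit time.

```c
#include <errno.h>

/* Minimal model of the tear-down ordering proposed above. */
struct model {
	int nmrs;		/* registered chunk MRs still outstanding */
	int qp_alive;		/* old QP still exists */
	int deregs_after_qp;	/* deregs attempted after QP destruction */
};

static void model_dereg_all(struct model *m)
{
	if (!m->qp_alive)
		m->deregs_after_qp += m->nmrs;	/* wrong order: MRs stranded */
	m->nmrs = 0;		/* stands in for ib_dereg_mr() per chunk MR */
}

static void model_destroy_qp(struct model *m)
{
	m->qp_alive = 0;	/* stands in for rdma_destroy_qp() */
}

/* Connect-worker sketch: deregister _before_ the old QP goes away. */
static int model_run(int nmrs)
{
	struct model m = { nmrs, 1, 0 };

	model_dereg_all(&m);	/* first: MRs released while QP exists */
	model_destroy_qp(&m);	/* then: tear down the old QP */
	return m.deregs_after_qp ? -EIO : 0;
}
```

If that matches your intent, V3 will move the deregistration into the connect worker ahead of the QP destruction, and leave re-registration to the marshaling path in xprt_transmit().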

> > I think it is possible to create an FRMR on qp->qp_num = x while
> > invalidating on qp->qp_num = y, as long as qpx.pd == qpy.pd.
> 
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
> 
> 
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

