From: Himanshu Madhani <himanshu.madhani@oracle.com>
To: James Smart <jsmart2021@gmail.com>, linux-nvme@lists.infradead.org
Cc: martin.petersen@oracle.com
Subject: Re: [PATCH 04/29] nvme-fc nvmet_fc nvme_fcloop: adapt code to changed names in api header
Date: Thu, 26 Mar 2020 11:26:13 -0500	[thread overview]
Message-ID: <a97e3a31-b090-9e62-843e-c28a0eb31177@oracle.com> (raw)
In-Reply-To: <20200205183753.25959-5-jsmart2021@gmail.com>

On 2/5/2020 12:37 PM, James Smart wrote:
> deal with the following naming changes in the header:
>    nvmefc_tgt_ls_req -> nvmefc_ls_rsp
>    nvmefc_tgt_ls_req.nvmet_fc_private -> nvmefc_ls_rsp.nvme_fc_private
> 
> Change the nvmet_fc_rcv_ls_req() calling sequence to add the hosthandle argument.
> 
> Add stubs for new interfaces:
> host/fc.c: nvme_fc_rcv_ls_req()
> target/fc.c: nvmet_fc_invalidate_host()
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>   drivers/nvme/host/fc.c       | 35 ++++++++++++++++++++
>   drivers/nvme/target/fc.c     | 77 ++++++++++++++++++++++++++++++++------------
>   drivers/nvme/target/fcloop.c | 20 ++++++------
>   3 files changed, 102 insertions(+), 30 deletions(-)
> 
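The rename is purely mechanical, so the structure layout and usage are
unchanged for an LLDD. For anyone following along, a minimal sketch of
what an LLDD-side embedding looks like after this patch (the lldd_*
names are hypothetical, not from this series):

	struct lldd_ls_ctx {
		/* was: struct nvmefc_tgt_ls_req tgt_ls_req; the private
		 * pointer moves too: nvmet_fc_private -> nvme_fc_private */
		struct nvmefc_ls_rsp	ls_rsp;
		void			*lldd_state;	/* LLDD-side context */
	};

fcloop's own conversion further down follows exactly this pattern.
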
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 5a70ac395d53..f8f79cd88769 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -1465,6 +1465,41 @@ nvme_fc_xmt_disconnect_assoc(struct nvme_fc_ctrl *ctrl)
>   		kfree(lsop);
>   }
>   
> +/**
> + * nvme_fc_rcv_ls_req - transport entry point called by an LLDD
> + *                       upon the reception of an NVME LS request.
> + *
> + * The nvme-fc layer will copy payload to an internal structure for
> + * processing.  As such, upon completion of the routine, the LLDD may
> + * immediately free/reuse the LS request buffer passed in the call.
> + *
> + * If this routine returns error, the LLDD should abort the exchange.
> + *
> + * @portptr:    pointer to the (registered) remote port that the LS
> + *              was received from. The remoteport is associated with
> + *              a specific localport.
> + * @lsrsp:      pointer to an nvmefc_ls_rsp response structure to be
> + *              used to reference the exchange corresponding to the LS
> + *              when issuing an ls response.
> + * @lsreqbuf:   pointer to the buffer containing the LS Request
> + * @lsreqbuf_len: length, in bytes, of the received LS request
> + */
> +int
> +nvme_fc_rcv_ls_req(struct nvme_fc_remote_port *portptr,
> +			struct nvmefc_ls_rsp *lsrsp,
> +			void *lsreqbuf, u32 lsreqbuf_len)
> +{
> +	struct nvme_fc_rport *rport = remoteport_to_rport(portptr);
> +	struct nvme_fc_lport *lport = rport->lport;
> +
> +	/* validate there's a routine to transmit a response */
> +	if (!lport->ops->xmt_ls_rsp)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(nvme_fc_rcv_ls_req);
> +
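The kernel-doc above is explicit that nvme-fc copies the payload, so the
LLDD may reuse its buffer immediately, and on error it should abort the
exchange. A minimal sketch of an LLDD receive path wired to this entry
point, assuming hypothetical lldd_* helpers:

	/* Hypothetical LLDD unsolicited-LS handler; lldd_* names are
	 * illustrative only. buf may be reused once this call returns,
	 * since nvme-fc copies the payload internally. */
	static void lldd_recv_nvme_ls(struct lldd_rport_priv *rp,
				      void *buf, u32 len)
	{
		struct lldd_ls_ctx *ctx = lldd_alloc_ls_ctx(rp);

		if (nvme_fc_rcv_ls_req(rp->remoteport, &ctx->ls_rsp,
				       buf, len))
			lldd_abort_exchange(rp, ctx);	/* per the API note */
	}
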
>   
>   /* *********************** NVME Ctrl Routines **************************** */
>   
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index a8ceb7721640..aac7869a70bb 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -28,7 +28,7 @@ struct nvmet_fc_tgtport;
>   struct nvmet_fc_tgt_assoc;
>   
>   struct nvmet_fc_ls_iod {
> -	struct nvmefc_tgt_ls_req	*lsreq;
> +	struct nvmefc_ls_rsp		*lsrsp;
>   	struct nvmefc_tgt_fcp_req	*fcpreq;	/* only if RS */
>   
>   	struct list_head		ls_list;	/* tgtport->ls_list */
> @@ -1146,6 +1146,42 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
>   	spin_unlock_irqrestore(&tgtport->lock, flags);
>   }
>   
> +/**
> + * nvmet_fc_invalidate_host - transport entry point called by an LLDD
> + *                       to remove references to a hosthandle for LS's.
> + *
> + * The nvmet-fc layer ensures that any references to the hosthandle
> + * on the targetport are forgotten (set to NULL).  The LLDD will
> + * typically call this when a login with a remote host port has been
> + * lost, thus LS's for the remote host port are no longer possible.
> + *
> + * If an LS request is outstanding to the targetport/hosthandle (or
> + * issued concurrently with the call to invalidate the host), the
> + * LLDD is responsible for terminating/aborting the LS and completing
> + * the LS request. It is recommended that these terminations/aborts
> + * occur after calling nvmet_fc_invalidate_host() to avoid additional
> + * retries by the nvmet-fc transport. The nvmet-fc transport may
> + * continue to reference the host handle while it cleans up outstanding
> + * NVME associations. The nvmet-fc transport will call the
> + * ops->host_release() callback to notify the LLDD that all references
> + * are complete and the related host handle can be recovered.
> + * Note: if there are no references, the callback may be called before
> + * the invalidate host call returns.
> + *
> + * @target_port: pointer to the (registered) target port that a prior
> + *              LS was received on and which supplied the transport the
> + *              hosthandle.
> + * @hosthandle: the handle (pointer) that represents the host port
> + *              that no longer has connectivity and that LS's should
> + *              no longer be directed to.
> + */
> +void
> +nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
> +			void *hosthandle)
> +{
> +}
> +EXPORT_SYMBOL_GPL(nvmet_fc_invalidate_host);
> +
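The ordering guidance in the comment above deserves a concrete example,
since getting it wrong means extra transport retries. A sketch of an
LLDD connectivity-loss path, again with hypothetical lldd_* names:

	/* Invalidate first, then terminate outstanding LS's, as the
	 * kernel-doc recommends; the handle itself is only recovered in
	 * the ->host_release() callback once all references are gone. */
	static void lldd_host_connectivity_lost(struct lldd_rport_priv *rp)
	{
		nvmet_fc_invalidate_host(rp->targetport, rp->hosthandle);
		lldd_terminate_ls_exchanges(rp);
	}

	static void lldd_host_release(void *hosthandle)
	{
		lldd_free_hosthandle(hosthandle);	/* now safe */
	}
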
>   /*
>    * nvmet layer has called to terminate an association
>    */
> @@ -1371,7 +1407,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Create Association LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				FCNVME_RJT_RC_LOGIC,
>   				FCNVME_RJT_EXP_NONE, 0);
> @@ -1384,7 +1420,7 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
>   
>   	/* format a response */
>   
> -	iod->lsreq->rsplen = sizeof(*acc);
> +	iod->lsrsp->rsplen = sizeof(*acc);
>   
>   	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(
> @@ -1462,7 +1498,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Create Connection LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				(ret == VERR_NO_ASSOC) ?
>   					FCNVME_RJT_RC_INV_ASSOC :
> @@ -1477,7 +1513,7 @@ nvmet_fc_ls_create_connection(struct nvmet_fc_tgtport *tgtport,
>   
>   	/* format a response */
>   
> -	iod->lsreq->rsplen = sizeof(*acc);
> +	iod->lsrsp->rsplen = sizeof(*acc);
>   
>   	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(sizeof(struct fcnvme_ls_cr_conn_acc)),
> @@ -1542,7 +1578,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   		dev_err(tgtport->dev,
>   			"Disconnect LS failed: %s\n",
>   			validation_errors[ret]);
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(acc,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(acc,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, rqst->w0.ls_cmd,
>   				(ret == VERR_NO_ASSOC) ?
>   					FCNVME_RJT_RC_INV_ASSOC :
> @@ -1555,7 +1591,7 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
>   
>   	/* format a response */
>   
> -	iod->lsreq->rsplen = sizeof(*acc);
> +	iod->lsrsp->rsplen = sizeof(*acc);
>   
>   	nvmet_fc_format_rsp_hdr(acc, FCNVME_LS_ACC,
>   			fcnvme_lsdesc_len(
> @@ -1577,9 +1613,9 @@ static void nvmet_fc_fcp_nvme_cmd_done(struct nvmet_req *nvme_req);
>   static const struct nvmet_fabrics_ops nvmet_fc_tgt_fcp_ops;
>   
>   static void
> -nvmet_fc_xmt_ls_rsp_done(struct nvmefc_tgt_ls_req *lsreq)
> +nvmet_fc_xmt_ls_rsp_done(struct nvmefc_ls_rsp *lsrsp)
>   {
> -	struct nvmet_fc_ls_iod *iod = lsreq->nvmet_fc_private;
> +	struct nvmet_fc_ls_iod *iod = lsrsp->nvme_fc_private;
>   	struct nvmet_fc_tgtport *tgtport = iod->tgtport;
>   
>   	fc_dma_sync_single_for_cpu(tgtport->dev, iod->rspdma,
> @@ -1597,9 +1633,9 @@ nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
>   	fc_dma_sync_single_for_device(tgtport->dev, iod->rspdma,
>   				  NVME_FC_MAX_LS_BUFFER_SIZE, DMA_TO_DEVICE);
>   
> -	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsreq);
> +	ret = tgtport->ops->xmt_ls_rsp(&tgtport->fc_target_port, iod->lsrsp);
>   	if (ret)
> -		nvmet_fc_xmt_ls_rsp_done(iod->lsreq);
> +		nvmet_fc_xmt_ls_rsp_done(iod->lsrsp);
>   }
>   
>   /*
> @@ -1612,12 +1648,12 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
>   	struct fcnvme_ls_rqst_w0 *w0 =
>   			(struct fcnvme_ls_rqst_w0 *)iod->rqstbuf;
>   
> -	iod->lsreq->nvmet_fc_private = iod;
> -	iod->lsreq->rspbuf = iod->rspbuf;
> -	iod->lsreq->rspdma = iod->rspdma;
> -	iod->lsreq->done = nvmet_fc_xmt_ls_rsp_done;
> +	iod->lsrsp->nvme_fc_private = iod;
> +	iod->lsrsp->rspbuf = iod->rspbuf;
> +	iod->lsrsp->rspdma = iod->rspdma;
> +	iod->lsrsp->done = nvmet_fc_xmt_ls_rsp_done;
>   	/* Be preventative. handlers will later set to valid length */
> -	iod->lsreq->rsplen = 0;
> +	iod->lsrsp->rsplen = 0;
>   
>   	iod->assoc = NULL;
>   
> @@ -1640,7 +1676,7 @@ nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
>   		nvmet_fc_ls_disconnect(tgtport, iod);
>   		break;
>   	default:
> -		iod->lsreq->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
> +		iod->lsrsp->rsplen = nvmet_fc_format_rjt(iod->rspbuf,
>   				NVME_FC_MAX_LS_BUFFER_SIZE, w0->ls_cmd,
>   				FCNVME_RJT_RC_INVAL, FCNVME_RJT_EXP_NONE, 0);
>   	}
> @@ -1674,14 +1710,15 @@ nvmet_fc_handle_ls_rqst_work(struct work_struct *work)
>    *
>    * @target_port: pointer to the (registered) target port the LS was
>    *              received on.
> - * @lsreq:      pointer to a lsreq request structure to be used to reference
> + * @lsrsp:      pointer to an lsrsp structure to be used to reference
>    *              the exchange corresponding to the LS.
>    * @lsreqbuf:   pointer to the buffer containing the LS Request
>    * @lsreqbuf_len: length, in bytes, of the received LS request
>    */
>   int
>   nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
> -			struct nvmefc_tgt_ls_req *lsreq,
> +			void *hosthandle,
> +			struct nvmefc_ls_rsp *lsrsp,
>   			void *lsreqbuf, u32 lsreqbuf_len)
>   {
>   	struct nvmet_fc_tgtport *tgtport = targetport_to_tgtport(target_port);
> @@ -1699,7 +1736,7 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
>   		return -ENOENT;
>   	}
>   
> -	iod->lsreq = lsreq;
> +	iod->lsrsp = lsrsp;
>   	iod->fcpreq = NULL;
>   	memcpy(iod->rqstbuf, lsreqbuf, lsreqbuf_len);
>   	iod->rqstdatalen = lsreqbuf_len;
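For completeness, the target-side call site now looks like this from an
LLDD's perspective (lldd_* names hypothetical); note that fcloop below
simply passes NULL for the new hosthandle argument at this point in the
series:

	/* hosthandle is an opaque LLDD value identifying the remote host
	 * port; nvmet-fc hands it back when it needs to send
	 * target-initiated LS's toward that host. */
	static void lldd_tgt_recv_nvme_ls(struct lldd_tport_priv *tp,
					  void *hosthandle,
					  void *buf, u32 len)
	{
		struct lldd_ls_ctx *ctx = lldd_alloc_ls_ctx(tp);

		if (nvmet_fc_rcv_ls_req(tp->targetport, hosthandle,
					&ctx->ls_rsp, buf, len))
			lldd_free_ls_ctx(ctx);	/* transport did not take it */
	}
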
> diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
> index 1c50af6219f3..130932a5db0c 100644
> --- a/drivers/nvme/target/fcloop.c
> +++ b/drivers/nvme/target/fcloop.c
> @@ -227,7 +227,7 @@ struct fcloop_lsreq {
>   	struct fcloop_tport		*tport;
>   	struct nvmefc_ls_req		*lsreq;
>   	struct work_struct		work;
> -	struct nvmefc_tgt_ls_req	tgt_ls_req;
> +	struct nvmefc_ls_rsp		ls_rsp;
>   	int				status;
>   };
>   
> @@ -265,9 +265,9 @@ struct fcloop_ini_fcpreq {
>   };
>   
>   static inline struct fcloop_lsreq *
> -tgt_ls_req_to_lsreq(struct nvmefc_tgt_ls_req *tgt_lsreq)
> +ls_rsp_to_lsreq(struct nvmefc_ls_rsp *lsrsp)
>   {
> -	return container_of(tgt_lsreq, struct fcloop_lsreq, tgt_ls_req);
> +	return container_of(lsrsp, struct fcloop_lsreq, ls_rsp);
>   }
>   
>   static inline struct fcloop_fcpreq *
> @@ -330,7 +330,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>   
>   	tls_req->status = 0;
>   	tls_req->tport = rport->targetport->private;
> -	ret = nvmet_fc_rcv_ls_req(rport->targetport, &tls_req->tgt_ls_req,
> +	ret = nvmet_fc_rcv_ls_req(rport->targetport, NULL, &tls_req->ls_rsp,
>   				 lsreq->rqstaddr, lsreq->rqstlen);
>   
>   	return ret;
> @@ -338,15 +338,15 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
>   
>   static int
>   fcloop_xmt_ls_rsp(struct nvmet_fc_target_port *tport,
> -			struct nvmefc_tgt_ls_req *tgt_lsreq)
> +			struct nvmefc_ls_rsp *lsrsp)
>   {
> -	struct fcloop_lsreq *tls_req = tgt_ls_req_to_lsreq(tgt_lsreq);
> +	struct fcloop_lsreq *tls_req = ls_rsp_to_lsreq(lsrsp);
>   	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
>   
> -	memcpy(lsreq->rspaddr, tgt_lsreq->rspbuf,
> -		((lsreq->rsplen < tgt_lsreq->rsplen) ?
> -				lsreq->rsplen : tgt_lsreq->rsplen));
> -	tgt_lsreq->done(tgt_lsreq);
> +	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
> +		((lsreq->rsplen < lsrsp->rsplen) ?
> +				lsreq->rsplen : lsrsp->rsplen));
> +	lsrsp->done(lsrsp);
>   
>   	schedule_work(&tls_req->work);
>   
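One small nit while here: the open-coded minimum could use min_t(), e.g.:

	memcpy(lsreq->rspaddr, lsrsp->rspbuf,
	       min_t(u32, lsreq->rsplen, lsrsp->rsplen));

Not worth a respin on its own.
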
> 


Looks Good.

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

