Linux-NVME Archive on lore.kernel.org
* [RFC PATCH 0/3] Passthru Execute Request Interface
@ 2019-10-25 20:25 Logan Gunthorpe
  2019-10-25 20:25 ` [RFC PATCH 1/3] nvme: Move nvme_passthru_[start|end]() calls to common code Logan Gunthorpe
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-25 20:25 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Sagi Grimberg, Chaitanya Kulkarni, Stephen Bates, Keith Busch,
	Max Gurtovoy, Logan Gunthorpe, Christoph Hellwig

Hi,

This is just an RFC meant to get some early feedback on the core
interface for executing passthru commands that will be needed
in the upcoming nvmet passthru patchset.

The first patch moves the calls to nvme_passthru_[start|end]() into
a common helper such that all passthru requests will call them.

The second patch does a bit of code reorganization for the third patch.

The third patch proposes a new nvme_execute_passthru_rq_nowait() interface
for the nvmet passthru code. For commands that have no effects, it is
simply equivalent to blk_execute_rq_nowait(). For commands that
have effects, it pushes the command submission to a work queue. This
requires adding a work struct to nvme_request.

The code that will use this interface can be seen in the WIP passthru
patch[1]. It helps clean things up considerably from the last submission
of the patch.

Thanks,

Logan

[1] https://github.com/sbates130272/linux-p2pmem/commit/a468e458795e6df6483ad8c98635536d6da31064

--

Logan Gunthorpe (3):
  nvme: Move nvme_passthru_[start|end]() calls to common code
  nvme: Create helper function to obtain command effects
  nvme: Introduce nvme_execute_passthru_rq_nowait()

 drivers/nvme/host/core.c | 228 ++++++++++++++++++++++++---------------
 drivers/nvme/host/nvme.h |   7 ++
 2 files changed, 147 insertions(+), 88 deletions(-)

--
2.20.1

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* [RFC PATCH 1/3] nvme: Move nvme_passthru_[start|end]() calls to common code
  2019-10-25 20:25 [RFC PATCH 0/3] Passthru Execute Request Interface Logan Gunthorpe
@ 2019-10-25 20:25 ` Logan Gunthorpe
  2019-10-25 20:25 ` [RFC PATCH 2/3] nvme: Create helper function to obtain command effects Logan Gunthorpe
  2019-10-25 20:25 ` [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait() Logan Gunthorpe
  2 siblings, 0 replies; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-25 20:25 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Sagi Grimberg, Chaitanya Kulkarni, Stephen Bates, Keith Busch,
	Max Gurtovoy, Logan Gunthorpe, Christoph Hellwig

Introduce a new nvme_execute_passthru_rq() helper which calls
nvme_passthru_[start|end]() around blk_execute_rq(). This ensures
all passthru calls (including nvme_submit_io()) will be
wrapped appropriately.

nvme_execute_passthru_rq() will also be useful for the nvmet
passthru code.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/host/core.c | 183 ++++++++++++++++++++-------------------
 1 file changed, 95 insertions(+), 88 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fa7ba09dca77..c2bde988d1aa 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -886,6 +886,100 @@ static void *nvme_add_user_metadata(struct bio *bio, void __user *ubuf,
 	return ERR_PTR(ret);
 }
 
+static u32 nvme_known_admin_effects(u8 opcode)
+{
+	switch (opcode) {
+	case nvme_admin_format_nvm:
+		return NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC |
+					NVME_CMD_EFFECTS_CSE_MASK;
+	case nvme_admin_sanitize_nvm:
+		return NVME_CMD_EFFECTS_CSE_MASK;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+			       u8 opcode)
+{
+	u32 effects = 0;
+
+	if (ns) {
+		if (ctrl->effects)
+			effects = le32_to_cpu(ctrl->effects->iocs[opcode]);
+		if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))
+			dev_warn(ctrl->device,
+				 "IO command:%02x has unhandled effects:%08x\n",
+				 opcode, effects);
+		return 0;
+	}
+
+	if (ctrl->effects)
+		effects = le32_to_cpu(ctrl->effects->acs[opcode]);
+	effects |= nvme_known_admin_effects(opcode);
+
+	/*
+	 * For simplicity, IO to all namespaces is quiesced even if the command
+	 * effects say only one namespace is affected.
+	 */
+	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
+		mutex_lock(&ctrl->scan_lock);
+		mutex_lock(&ctrl->subsys->lock);
+		nvme_mpath_start_freeze(ctrl->subsys);
+		nvme_mpath_wait_freeze(ctrl->subsys);
+		nvme_start_freeze(ctrl);
+		nvme_wait_freeze(ctrl);
+	}
+	return effects;
+}
+
+static void nvme_update_formats(struct nvme_ctrl *ctrl)
+{
+	struct nvme_ns *ns;
+
+	down_read(&ctrl->namespaces_rwsem);
+	list_for_each_entry(ns, &ctrl->namespaces, list)
+		if (ns->disk && nvme_revalidate_disk(ns->disk))
+			nvme_set_queue_dying(ns);
+	up_read(&ctrl->namespaces_rwsem);
+}
+
+static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
+{
+	/*
+	 * Revalidate LBA changes prior to unfreezing. This is necessary to
+	 * prevent memory corruption if a logical block size was changed by
+	 * this command.
+	 */
+	if (effects & NVME_CMD_EFFECTS_LBCC)
+		nvme_update_formats(ctrl);
+	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
+		nvme_unfreeze(ctrl);
+		nvme_mpath_unfreeze(ctrl->subsys);
+		mutex_unlock(&ctrl->subsys->lock);
+		nvme_remove_invalid_namespaces(ctrl, NVME_NSID_ALL);
+		mutex_unlock(&ctrl->scan_lock);
+	}
+	if (effects & NVME_CMD_EFFECTS_CCC)
+		nvme_init_identify(ctrl);
+	if (effects & (NVME_CMD_EFFECTS_NIC | NVME_CMD_EFFECTS_NCC))
+		nvme_queue_scan(ctrl);
+}
+
+static void nvme_execute_passthru_rq(struct request *rq)
+{
+	struct nvme_command *cmd = nvme_req(rq)->cmd;
+	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
+	struct nvme_ns *ns = rq->q->queuedata;
+	struct gendisk *disk = ns ? ns->disk : NULL;
+	u32 effects;
+
+	effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
+	blk_execute_rq(rq->q, disk, rq, 0);
+	nvme_passthru_end(ctrl, effects);
+}
+
 static int nvme_submit_user_cmd(struct request_queue *q,
 		struct nvme_command *cmd, void __user *ubuffer,
 		unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
@@ -924,7 +1018,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 		}
 	}
 
-	blk_execute_rq(req->q, disk, req, 0);
+	nvme_execute_passthru_rq(req);
 	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
 		ret = -EINTR;
 	else
@@ -1288,94 +1382,12 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
 			metadata, meta_len, lower_32_bits(io.slba), NULL, 0);
 }
 
-static u32 nvme_known_admin_effects(u8 opcode)
-{
-	switch (opcode) {
-	case nvme_admin_format_nvm:
-		return NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC |
-					NVME_CMD_EFFECTS_CSE_MASK;
-	case nvme_admin_sanitize_nvm:
-		return NVME_CMD_EFFECTS_CSE_MASK;
-	default:
-		break;
-	}
-	return 0;
-}
-
-static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
-								u8 opcode)
-{
-	u32 effects = 0;
-
-	if (ns) {
-		if (ctrl->effects)
-			effects = le32_to_cpu(ctrl->effects->iocs[opcode]);
-		if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))
-			dev_warn(ctrl->device,
-				 "IO command:%02x has unhandled effects:%08x\n",
-				 opcode, effects);
-		return 0;
-	}
-
-	if (ctrl->effects)
-		effects = le32_to_cpu(ctrl->effects->acs[opcode]);
-	effects |= nvme_known_admin_effects(opcode);
-
-	/*
-	 * For simplicity, IO to all namespaces is quiesced even if the command
-	 * effects say only one namespace is affected.
-	 */
-	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
-		mutex_lock(&ctrl->scan_lock);
-		mutex_lock(&ctrl->subsys->lock);
-		nvme_mpath_start_freeze(ctrl->subsys);
-		nvme_mpath_wait_freeze(ctrl->subsys);
-		nvme_start_freeze(ctrl);
-		nvme_wait_freeze(ctrl);
-	}
-	return effects;
-}
-
-static void nvme_update_formats(struct nvme_ctrl *ctrl)
-{
-	struct nvme_ns *ns;
-
-	down_read(&ctrl->namespaces_rwsem);
-	list_for_each_entry(ns, &ctrl->namespaces, list)
-		if (ns->disk && nvme_revalidate_disk(ns->disk))
-			nvme_set_queue_dying(ns);
-	up_read(&ctrl->namespaces_rwsem);
-}
-
-static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
-{
-	/*
-	 * Revalidate LBA changes prior to unfreezing. This is necessary to
-	 * prevent memory corruption if a logical block size was changed by
-	 * this command.
-	 */
-	if (effects & NVME_CMD_EFFECTS_LBCC)
-		nvme_update_formats(ctrl);
-	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
-		nvme_unfreeze(ctrl);
-		nvme_mpath_unfreeze(ctrl->subsys);
-		mutex_unlock(&ctrl->subsys->lock);
-		nvme_remove_invalid_namespaces(ctrl, NVME_NSID_ALL);
-		mutex_unlock(&ctrl->scan_lock);
-	}
-	if (effects & NVME_CMD_EFFECTS_CCC)
-		nvme_init_identify(ctrl);
-	if (effects & (NVME_CMD_EFFECTS_NIC | NVME_CMD_EFFECTS_NCC))
-		nvme_queue_scan(ctrl);
-}
-
 static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 			struct nvme_passthru_cmd __user *ucmd)
 {
 	struct nvme_passthru_cmd cmd;
 	struct nvme_command c;
 	unsigned timeout = 0;
-	u32 effects;
 	u64 result;
 	int status;
 
@@ -1402,12 +1414,10 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (cmd.timeout_ms)
 		timeout = msecs_to_jiffies(cmd.timeout_ms);
 
-	effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
 	status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
 			(void __user *)(uintptr_t)cmd.addr, cmd.data_len,
 			(void __user *)(uintptr_t)cmd.metadata,
 			cmd.metadata_len, 0, &result, timeout);
-	nvme_passthru_end(ctrl, effects);
 
 	if (status >= 0) {
 		if (put_user(result, &ucmd->result))
@@ -1423,7 +1433,6 @@ static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	struct nvme_passthru_cmd64 cmd;
 	struct nvme_command c;
 	unsigned timeout = 0;
-	u32 effects;
 	int status;
 
 	if (!capable(CAP_SYS_ADMIN))
@@ -1449,12 +1458,10 @@ static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (cmd.timeout_ms)
 		timeout = msecs_to_jiffies(cmd.timeout_ms);
 
-	effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
 	status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
 			(void __user *)(uintptr_t)cmd.addr, cmd.data_len,
 			(void __user *)(uintptr_t)cmd.metadata, cmd.metadata_len,
 			0, &cmd.result, timeout);
-	nvme_passthru_end(ctrl, effects);
 
 	if (status >= 0) {
 		if (put_user(cmd.result, &ucmd->result))
-- 
2.20.1



* [RFC PATCH 2/3] nvme: Create helper function to obtain command effects
  2019-10-25 20:25 [RFC PATCH 0/3] Passthru Execute Request Interface Logan Gunthorpe
  2019-10-25 20:25 ` [RFC PATCH 1/3] nvme: Move nvme_passthru_[start|end]() calls to common code Logan Gunthorpe
@ 2019-10-25 20:25 ` Logan Gunthorpe
  2019-10-27 15:05   ` Christoph Hellwig
  2019-10-25 20:25 ` [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait() Logan Gunthorpe
  2 siblings, 1 reply; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-25 20:25 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Sagi Grimberg, Chaitanya Kulkarni, Stephen Bates, Keith Busch,
	Max Gurtovoy, Logan Gunthorpe, Christoph Hellwig

Separate the code that obtains command effects from the code that
starts a passthru request, and open-code nvme_known_admin_effects()
in the new helper.

The new helper function will be necessary for the nvmet passthru
code to determine whether it needs to switch out of interrupt context
to handle the effects.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/host/core.c | 41 +++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c2bde988d1aa..2b4f0ea55f8d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -886,22 +886,8 @@ static void *nvme_add_user_metadata(struct bio *bio, void __user *ubuf,
 	return ERR_PTR(ret);
 }
 
-static u32 nvme_known_admin_effects(u8 opcode)
-{
-	switch (opcode) {
-	case nvme_admin_format_nvm:
-		return NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC |
-					NVME_CMD_EFFECTS_CSE_MASK;
-	case nvme_admin_sanitize_nvm:
-		return NVME_CMD_EFFECTS_CSE_MASK;
-	default:
-		break;
-	}
-	return 0;
-}
-
-static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
-			       u8 opcode)
+static u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+				u8 opcode)
 {
 	u32 effects = 0;
 
@@ -917,7 +903,24 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 
 	if (ctrl->effects)
 		effects = le32_to_cpu(ctrl->effects->acs[opcode]);
-	effects |= nvme_known_admin_effects(opcode);
+
+	switch (opcode) {
+	case nvme_admin_format_nvm:
+		effects |= NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC |
+					NVME_CMD_EFFECTS_CSE_MASK;
+		break;
+	case nvme_admin_sanitize_nvm:
+		effects |= NVME_CMD_EFFECTS_CSE_MASK;
+		break;
+	default:
+		break;
+	}
+
+	return effects;
+}
+
+static void nvme_passthru_start(struct nvme_ctrl *ctrl, u32 effects)
+{
 
 	/*
 	 * For simplicity, IO to all namespaces is quiesced even if the command
@@ -931,7 +934,6 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 		nvme_start_freeze(ctrl);
 		nvme_wait_freeze(ctrl);
 	}
-	return effects;
 }
 
 static void nvme_update_formats(struct nvme_ctrl *ctrl)
@@ -975,7 +977,8 @@ static void nvme_execute_passthru_rq(struct request *rq)
 	struct gendisk *disk = ns ? ns->disk : NULL;
 	u32 effects;
 
-	effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
+	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
+	nvme_passthru_start(ctrl, effects);
 	blk_execute_rq(rq->q, disk, rq, 0);
 	nvme_passthru_end(ctrl, effects);
 }
-- 
2.20.1



* [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-25 20:25 [RFC PATCH 0/3] Passthru Execute Request Interface Logan Gunthorpe
  2019-10-25 20:25 ` [RFC PATCH 1/3] nvme: Move nvme_passthru_[start|end]() calls to common code Logan Gunthorpe
  2019-10-25 20:25 ` [RFC PATCH 2/3] nvme: Create helper function to obtain command effects Logan Gunthorpe
@ 2019-10-25 20:25 ` Logan Gunthorpe
  2019-10-25 20:41   ` Sagi Grimberg
  2019-10-27 15:09   ` Christoph Hellwig
  2 siblings, 2 replies; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-25 20:25 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Sagi Grimberg, Chaitanya Kulkarni, Stephen Bates, Keith Busch,
	Max Gurtovoy, Logan Gunthorpe, Christoph Hellwig

This function is similar to nvme_execute_passthru_rq() but does
not wait and will call a callback when the request is complete.

The new function can also be called in interrupt context, so if there
are side effects, the request will be executed in a work queue to
avoid sleeping.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/host/core.c | 42 ++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/host/nvme.h |  7 +++++++
 2 files changed, 49 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2b4f0ea55f8d..6d3cade0e63d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4211,6 +4211,48 @@ void nvme_sync_queues(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_sync_queues);
 
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+static void nvme_execute_passthru_rq_work(struct work_struct *w)
+{
+	struct nvme_request *req = container_of(w, struct nvme_request, work);
+	struct request *rq = blk_mq_rq_from_pdu(req);
+	rq_end_io_fn *done = rq->end_io;
+	void *end_io_data = rq->end_io_data;
+
+	nvme_execute_passthru_rq(rq);
+
+	if (done) {
+		rq->end_io_data = end_io_data;
+		done(rq, 0);
+	}
+}
+
+void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
+{
+	struct nvme_command *cmd = nvme_req(rq)->cmd;
+	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
+	struct nvme_ns *ns = rq->q->queuedata;
+	struct gendisk *disk = ns ? ns->disk : NULL;
+	u32 effects;
+
+	/*
+	 * This function may be called in interrupt context, so we cannot sleep
+	 * but nvme_passthru_[start|end]() may sleep so we need to execute
+	 * the command in a work queue.
+	 */
+	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
+	if (effects) {
+		rq->end_io = done;
+		INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
+		queue_work(nvme_wq, &nvme_req(rq)->work);
+		return;
+	}
+
+	blk_execute_rq_nowait(rq->q, disk, rq, 0, done);
+}
+EXPORT_SYMBOL_GPL(nvme_execute_passthru_rq_nowait);
+#endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
 /*
  * Check we didn't inadvertently grow the command structure sizes:
  */
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 22e8401352c2..9523779de662 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -128,6 +128,9 @@ struct nvme_request {
 	u8			flags;
 	u16			status;
 	struct nvme_ctrl	*ctrl;
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+	struct work_struct	work;
+#endif
 };
 
 /*
@@ -652,4 +655,8 @@ static inline struct nvme_ns *nvme_get_ns_from_dev(struct device *dev)
 	return dev_to_disk(dev)->private_data;
 }
 
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done);
+#endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
 #endif /* _NVME_H */
-- 
2.20.1



* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-25 20:25 ` [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait() Logan Gunthorpe
@ 2019-10-25 20:41   ` Sagi Grimberg
  2019-10-25 21:12     ` Logan Gunthorpe
  2019-10-27 15:09   ` Christoph Hellwig
  1 sibling, 1 reply; 12+ messages in thread
From: Sagi Grimberg @ 2019-10-25 20:41 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme
  Cc: Keith Busch, Max Gurtovoy, Christoph Hellwig, Chaitanya Kulkarni,
	Stephen Bates

> +#ifdef CONFIG_NVME_TARGET_PASSTHRU
> +static void nvme_execute_passthru_rq_work(struct work_struct *w)
> +{
> +	struct nvme_request *req = container_of(w, struct nvme_request, work);
> +	struct request *rq = blk_mq_rq_from_pdu(req);
> +	rq_end_io_fn *done = rq->end_io;
> +	void *end_io_data = rq->end_io_data;

Why is end_io_data stored to a local variable here? Where is it set?

> +
> +	nvme_execute_passthru_rq(rq);
> +
> +	if (done) {
> +		rq->end_io_data = end_io_data;
> +		done(rq, 0);
> +	}
> +}
> +
> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
> +{
> +	struct nvme_command *cmd = nvme_req(rq)->cmd;
> +	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
> +	struct nvme_ns *ns = rq->q->queuedata;
> +	struct gendisk *disk = ns ? ns->disk : NULL;
> +	u32 effects;
> +
> +	/*
> +	 * This function may be called in interrupt context, so we cannot sleep
> +	 * but nvme_passthru_[start|end]() may sleep so we need to execute
> +	 * the command in a work queue.
> +	 */
> +	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
> +	if (effects) {
> +		rq->end_io = done;
> +		INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
> +		queue_work(nvme_wq, &nvme_req(rq)->work);

This work will need to be flushed when in nvme_stop_ctrl. That is
assuming that it will fail-fast and not hang (which it should given
that its a passthru command that is allocated via nvme_alloc_request).


* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-25 20:41   ` Sagi Grimberg
@ 2019-10-25 21:12     ` Logan Gunthorpe
  2019-10-25 21:40       ` Sagi Grimberg
  0 siblings, 1 reply; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-25 21:12 UTC (permalink / raw)
  To: Sagi Grimberg, linux-kernel, linux-nvme
  Cc: Keith Busch, Max Gurtovoy, Christoph Hellwig, Chaitanya Kulkarni,
	Stephen Bates



On 2019-10-25 2:41 p.m., Sagi Grimberg wrote:
>> +#ifdef CONFIG_NVME_TARGET_PASSTHRU
>> +static void nvme_execute_passthru_rq_work(struct work_struct *w)
>> +{
>> +    struct nvme_request *req = container_of(w, struct nvme_request,
>> work);
>> +    struct request *rq = blk_mq_rq_from_pdu(req);
>> +    rq_end_io_fn *done = rq->end_io;
>> +    void *end_io_data = rq->end_io_data;
> 
> Why is end_io_data stored to a local variable here? where is it set?

blk_execute_rq() (which is called by nvme_execute_passthru_rq()) will
overwrite rq->end_io and rq->end_io_data. We store them here so we can
call the callback appropriately after the request completes. They would
be set by the caller in the same way as if they were calling
blk_execute_rq_nowait().

>> +
>> +    nvme_execute_passthru_rq(rq);
>> +
>> +    if (done) {
>> +        rq->end_io_data = end_io_data;
>> +        done(rq, 0);
>> +    }
>> +}
>> +
>> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn
>> *done)
>> +{
>> +    struct nvme_command *cmd = nvme_req(rq)->cmd;
>> +    struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>> +    struct nvme_ns *ns = rq->q->queuedata;
>> +    struct gendisk *disk = ns ? ns->disk : NULL;
>> +    u32 effects;
>> +
>> +    /*
>> +     * This function may be called in interrupt context, so we cannot
>> sleep
>> +     * but nvme_passthru_[start|end]() may sleep so we need to execute
>> +     * the command in a work queue.
>> +     */
>> +    effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
>> +    if (effects) {
>> +        rq->end_io = done;
>> +        INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
>> +        queue_work(nvme_wq, &nvme_req(rq)->work);
> 
> This work will need to be flushed when in nvme_stop_ctrl. That is
> assuming that it will fail-fast and not hang (which it should given
> that its a passthru command that is allocated via nvme_alloc_request).

Hmm, that's going to be a bit tricky. Seeing as the work_struct
potentially belongs to a number of different requests, we can't just
flush the individual work items. I think we'd have to create a
workqueue per ctrl and flush that. Any objections to that?

Logan


* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-25 21:12     ` Logan Gunthorpe
@ 2019-10-25 21:40       ` Sagi Grimberg
  2019-10-25 21:55         ` Logan Gunthorpe
  0 siblings, 1 reply; 12+ messages in thread
From: Sagi Grimberg @ 2019-10-25 21:40 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme
  Cc: Keith Busch, Max Gurtovoy, Christoph Hellwig, Chaitanya Kulkarni,
	Stephen Bates


>>> +#ifdef CONFIG_NVME_TARGET_PASSTHRU
>>> +static void nvme_execute_passthru_rq_work(struct work_struct *w)
>>> +{
>>> +    struct nvme_request *req = container_of(w, struct nvme_request,
>>> work);
>>> +    struct request *rq = blk_mq_rq_from_pdu(req);
>>> +    rq_end_io_fn *done = rq->end_io;
>>> +    void *end_io_data = rq->end_io_data;
>>
>> Why is end_io_data stored to a local variable here? where is it set?
> 
> blk_execute_rq() (which is called by nvme_execute_rq()) will overwrite
> rq->endio and rq->end_io_data. We store them here so we can call the
> callback appropriately after the request completes. It would be set by
> the caller in the same way they set it if they were calling
> blk_execute_rq_nowait().

I see..

>>> +
>>> +    nvme_execute_passthru_rq(rq);
>>> +
>>> +    if (done) {
>>> +        rq->end_io_data = end_io_data;
>>> +        done(rq, 0);
>>> +    }
>>> +}
>>> +
>>> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn
>>> *done)
>>> +{
>>> +    struct nvme_command *cmd = nvme_req(rq)->cmd;
>>> +    struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>>> +    struct nvme_ns *ns = rq->q->queuedata;
>>> +    struct gendisk *disk = ns ? ns->disk : NULL;
>>> +    u32 effects;
>>> +
>>> +    /*
>>> +     * This function may be called in interrupt context, so we cannot
>>> sleep
>>> +     * but nvme_passthru_[start|end]() may sleep so we need to execute
>>> +     * the command in a work queue.
>>> +     */
>>> +    effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
>>> +    if (effects) {
>>> +        rq->end_io = done;
>>> +        INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
>>> +        queue_work(nvme_wq, &nvme_req(rq)->work);
>>
>> This work will need to be flushed when in nvme_stop_ctrl. That is
>> assuming that it will fail-fast and not hang (which it should given
>> that its a passthru command that is allocated via nvme_alloc_request).
> 
> Hmm, that's going to be a bit tricky. Seeing the work_struct belongs
> potentially to a number of different requests, we can't just flush the
> individual work items. I think we'd have to create a work-queue per ctrl
> and flush that. Any objections to that?

I'd object to that overhead...

How about marking the request if the workqueue path is taken and
in nvme_stop_ctrl you add a blk_mq_tagset_busy_iter and cancel
it in the callback?

Something like:
--
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fa7ba09dca77..13dbbec5497d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3955,12 +3955,33 @@ void nvme_complete_async_event(struct nvme_ctrl 
*ctrl, __le16 status,
  }
  EXPORT_SYMBOL_GPL(nvme_complete_async_event);

+static bool nvme_flush_async_passthru_request(struct request *rq,
+               void *data, bool reserved)
+{
+       if (!(nvme_req(rq)->flags & NVME_REQ_ASYNC_PASSTHRU))
+               return true;
+
+       dev_dbg_ratelimited(((struct nvme_ctrl *) data)->device,
+                               "Cancelling passthru I/O %d", rq->tag);
+       flush_work(&nvme_req(rq)->work);
+       return true;
+}
+
+static void nvme_flush_async_passthru_requests(struct nvme_ctrl *ctrl)
+{
+       blk_mq_tagset_busy_iter(ctrl->tagset,
+               nvme_flush_async_passthru_request, ctrl);
+       blk_mq_tagset_busy_iter(ctrl->admin_tagset,
+               nvme_flush_async_passthru_request, ctrl);
+}
+
  void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
  {
         nvme_mpath_stop(ctrl);
         nvme_stop_keep_alive(ctrl);
         flush_work(&ctrl->async_event_work);
         cancel_work_sync(&ctrl->fw_act_work);
+       nvme_flush_async_passthru_requests(ctrl);
  }
  EXPORT_SYMBOL_GPL(nvme_stop_ctrl);
--


* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-25 21:40       ` Sagi Grimberg
@ 2019-10-25 21:55         ` Logan Gunthorpe
  0 siblings, 0 replies; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-25 21:55 UTC (permalink / raw)
  To: Sagi Grimberg, linux-kernel, linux-nvme
  Cc: Keith Busch, Max Gurtovoy, Christoph Hellwig, Chaitanya Kulkarni,
	Stephen Bates



On 2019-10-25 3:40 p.m., Sagi Grimberg wrote:
>> Hmm, that's going to be a bit tricky. Seeing the work_struct belongs
>> potentially to a number of different requests, we can't just flush the
>> individual work items. I think we'd have to create a work-queue per ctrl
>> and flush that. Any objections to that?
> 
> I'd object to that overhead...
> 
> How about marking the request if the workqueue path is taken and
> in nvme_stop_ctrl you add a blk_mq_tagset_busy_iter and cancel
> it in the callback?

Oh, cool. That looks great, I'll do that. Thanks!

Logan

> Something like:
> -- 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index fa7ba09dca77..13dbbec5497d 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3955,12 +3955,33 @@ void nvme_complete_async_event(struct nvme_ctrl
> *ctrl, __le16 status,
>  }
>  EXPORT_SYMBOL_GPL(nvme_complete_async_event);
> 
> +static bool nvme_flush_async_passthru_request(struct request *rq,
> +               void *data, bool reserved)
> +{
> +       if (!(nvme_req(rq)->flags & NVME_REQ_ASYNC_PASSTHRU))
> +               return true;
> +
> +       dev_dbg_ratelimited(((struct nvme_ctrl *) data)->device,
> +                               "Cancelling passthru I/O %d", rq->tag);
> +       flush_work(&nvme_req(rq)->work);
> +       return true;
> +}
> +
> +static void nvme_flush_async_passthru_requests(struct nvme_ctrl *ctrl)
> +{
> +       blk_mq_tagset_busy_iter(ctrl->tagset,
> +               nvme_flush_async_passthru_request, ctrl);
> +       blk_mq_tagset_busy_iter(ctrl->admin_tagset,
> +               nvme_flush_async_passthru_request, ctrl);
> +}
> +
>  void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
>  {
>         nvme_mpath_stop(ctrl);
>         nvme_stop_keep_alive(ctrl);
>         flush_work(&ctrl->async_event_work);
>         cancel_work_sync(&ctrl->fw_act_work);
> +       nvme_flush_async_passthru_requests(ctrl);
>  }
>  EXPORT_SYMBOL_GPL(nvme_stop_ctrl);
> -- 


* Re: [RFC PATCH 2/3] nvme: Create helper function to obtain command effects
  2019-10-25 20:25 ` [RFC PATCH 2/3] nvme: Create helper function to obtain command effects Logan Gunthorpe
@ 2019-10-27 15:05   ` Christoph Hellwig
  0 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2019-10-27 15:05 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Sagi Grimberg, Chaitanya Kulkarni, linux-kernel, linux-nvme,
	Stephen Bates, Keith Busch, Max Gurtovoy, Christoph Hellwig

The changes in this and patch 1 look fine.  But wouldn't the changes be
a little simpler to understand if you moved this patch before the
other one instead of touching the same code multiple times?


* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-25 20:25 ` [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait() Logan Gunthorpe
  2019-10-25 20:41   ` Sagi Grimberg
@ 2019-10-27 15:09   ` Christoph Hellwig
  2019-10-28 16:58     ` Logan Gunthorpe
  1 sibling, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2019-10-27 15:09 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Sagi Grimberg, Chaitanya Kulkarni, linux-kernel, linux-nvme,
	Stephen Bates, Keith Busch, Max Gurtovoy, Christoph Hellwig

On Fri, Oct 25, 2019 at 02:25:35PM -0600, Logan Gunthorpe wrote:
> This function is similar to nvme_execute_passthru_rq() but does
> not wait and will call a callback when the request is complete.
> 
> The new function can also be called in interrupt context, so if there
> are side effects, the request will be executed in a work queue to
> avoid sleeping.

Why would you ever call it from interrupt context?  All the target
submission handlers should run in process context.

> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
> +{
> +	struct nvme_command *cmd = nvme_req(rq)->cmd;
> +	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
> +	struct nvme_ns *ns = rq->q->queuedata;
> +	struct gendisk *disk = ns ? ns->disk : NULL;
> +	u32 effects;
> +
> +	/*
> +	 * This function may be called in interrupt context, so we cannot sleep
> +	 * but nvme_passthru_[start|end]() may sleep so we need to execute
> +	 * the command in a work queue.
> +	 */
> +	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
> +	if (effects) {
> +		rq->end_io = done;
> +		INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
> +		queue_work(nvme_wq, &nvme_req(rq)->work);

But independent of the target code - I'd much rather leave this to the
caller.  Just call nvme_command_effects in the target code, then if
there are not side effects use blk_execute_rq_nowait directly, else
schedule a workqueue in the target code and call
nvme_execute_passthru_rq from it.


* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-27 15:09   ` Christoph Hellwig
@ 2019-10-28 16:58     ` Logan Gunthorpe
  2019-10-28 21:04       ` Sagi Grimberg
  0 siblings, 1 reply; 12+ messages in thread
From: Logan Gunthorpe @ 2019-10-28 16:58 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, Chaitanya Kulkarni, linux-kernel, linux-nvme,
	Stephen Bates, Keith Busch, Max Gurtovoy



On 2019-10-27 9:09 a.m., Christoph Hellwig wrote:
> On Fri, Oct 25, 2019 at 02:25:35PM -0600, Logan Gunthorpe wrote:
>> This function is similar to nvme_execute_passthru_rq() but does
>> not wait and will call a callback when the request is complete.
>>
>> The new function can also be called in interrupt context, so if there
>> are side effects, the request will be executed in a work queue to
>> avoid sleeping.
> 
> Why would you ever call it from interrupt context?  All the target
> submission handlers should run in process context.

Oh, I misunderstood this a bit and worded that incorrectly. The intent
is to avoid having to call nvme_passthru_end() in the completion handler,
which can run in interrupt context.

>> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
>> +{
>> +	struct nvme_command *cmd = nvme_req(rq)->cmd;
>> +	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>> +	struct nvme_ns *ns = rq->q->queuedata;
>> +	struct gendisk *disk = ns ? ns->disk : NULL;
>> +	u32 effects;
>> +
>> +	/*
>> +	 * This function may be called in interrupt context, so we cannot sleep
>> +	 * but nvme_passthru_[start|end]() may sleep so we need to execute
>> +	 * the command in a work queue.
>> +	 */
>> +	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
>> +	if (effects) {
>> +		rq->end_io = done;
>> +		INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
>> +		queue_work(nvme_wq, &nvme_req(rq)->work);
> 
> But independent of the target code - I'd much rather leave this to the
> caller.  Just call nvme_command_effects in the target code, then if
> there are not side effects use blk_execute_rq_nowait directly, else
> schedule a workqueue in the target code and call
> nvme_execute_passthru_rq from it.

Ok, that seems sensible. Except it conflicts a bit with Sagi's feedback:
presumably we need to cancel the work items during nvme_stop_ctrl() and
that's going to be rather difficult to do from the caller. Are we saying
this is unnecessary? It's not clear to me if passthru_start/end is going
to be affected by nvme_stop_ctrl() which I believe is the main concern.

Logan



* Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
  2019-10-28 16:58     ` Logan Gunthorpe
@ 2019-10-28 21:04       ` Sagi Grimberg
  0 siblings, 0 replies; 12+ messages in thread
From: Sagi Grimberg @ 2019-10-28 21:04 UTC (permalink / raw)
  To: Logan Gunthorpe, Christoph Hellwig
  Cc: Chaitanya Kulkarni, linux-kernel, linux-nvme, Stephen Bates,
	Keith Busch, Max Gurtovoy


>>> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
>>> +{
>>> +	struct nvme_command *cmd = nvme_req(rq)->cmd;
>>> +	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>>> +	struct nvme_ns *ns = rq->q->queuedata;
>>> +	struct gendisk *disk = ns ? ns->disk : NULL;
>>> +	u32 effects;
>>> +
>>> +	/*
>>> +	 * This function may be called in interrupt context, so we cannot sleep
>>> +	 * but nvme_passthru_[start|end]() may sleep so we need to execute
>>> +	 * the command in a work queue.
>>> +	 */
>>> +	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
>>> +	if (effects) {
>>> +		rq->end_io = done;
>>> +		INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
>>> +		queue_work(nvme_wq, &nvme_req(rq)->work);
>>
>> But independent of the target code - I'd much rather leave this to the
>> caller.  Just call nvme_command_effects in the target code, then if
>> there are not side effects use blk_execute_rq_nowait directly, else
>> schedule a workqueue in the target code and call
>> nvme_execute_passthru_rq from it.
> 
> Ok, that seems sensible. Except it conflicts a bit with Sagi's feedback:
> presumably we need to cancel the work items during nvme_stop_ctrl() and
> that's going to be rather difficult to do from the caller. Are we saying
> this is unnecessary? It's not clear to me if passthru_start/end is going
> to be affected by nvme_stop_ctrl() which I believe is the main concern.

Actually, thinking about it again, I don't think we need it... These are
just I/Os sent to the device. The reset sequence will simply iterate
all the I/Os and fail the busy ones, and those that execute after
it will block on a frozen queue, just like any other I/O. So I don't
think we need to cancel them. And if this logic sits in the caller it's
even clearer that this is the case.

However, it'd be good to test live controller resets to make sure
we are not missing anything...



Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-25 20:25 [RFC PATCH 0/3] Passthru Execute Request Interface Logan Gunthorpe
2019-10-25 20:25 ` [RFC PATCH 1/3] nvme: Move nvme_passthru_[start|end]() calls to common code Logan Gunthorpe
2019-10-25 20:25 ` [RFC PATCH 2/3] nvme: Create helper function to obtain command effects Logan Gunthorpe
2019-10-27 15:05   ` Christoph Hellwig
2019-10-25 20:25 ` [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait() Logan Gunthorpe
2019-10-25 20:41   ` Sagi Grimberg
2019-10-25 21:12     ` Logan Gunthorpe
2019-10-25 21:40       ` Sagi Grimberg
2019-10-25 21:55         ` Logan Gunthorpe
2019-10-27 15:09   ` Christoph Hellwig
2019-10-28 16:58     ` Logan Gunthorpe
2019-10-28 21:04       ` Sagi Grimberg
