linux-nvme.lists.infradead.org archive mirror
* [PATCH V2 0/2] nvme-loop: use xarray for ctrl and port list
@ 2020-09-30  4:55 Chaitanya Kulkarni
  2020-09-30  4:55 ` [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking Chaitanya Kulkarni
  2020-09-30  4:55 ` [PATCH V2 2/2] nvme-loop: use xarray for loop port tracking Chaitanya Kulkarni
  0 siblings, 2 replies; 6+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-30  4:55 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, Chaitanya Kulkarni, sagi

Hi,

Currently, ctrl and port tracking is done using a list and a global lock
each. Since these locks are only used to protect list management, we can
safely get rid of them and use an XArray instead.

This patch series replaces the ctrl and port lists and their respective
locks with XArrays.
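
As a sketch of the conversion pattern (illustrative only; the identifiers
mirror the driver, but this is not the patch code itself):

```c
/* Before: a list protected by a dedicated mutex. */
static LIST_HEAD(nvme_loop_ctrl_list);
static DEFINE_MUTEX(nvme_loop_ctrl_mutex);

/* After: one XArray keyed by an integer ID (cntlid / portid).  The
 * XArray serialises modifications with its own internal xa_lock, so
 * the external mutex goes away. */
static DEFINE_XARRAY(nvme_loop_ctrls);

static int example_track(struct nvme_loop_ctrl *ctrl)
{
	/* xa_insert() returns -EBUSY if the index is already in use. */
	return xa_insert(&nvme_loop_ctrls, ctrl->ctrl.cntlid, ctrl,
			 GFP_KERNEL);
}

static void example_untrack(struct nvme_loop_ctrl *ctrl)
{
	xa_erase(&nvme_loop_ctrls, ctrl->ctrl.cntlid);
}
```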

I've not tested this exhaustively, but all the blktests test cases for
nvme-loop pass [1].

Regards,
Chaitanya

* Changes from V1 :-
1. Rebase and retest on latest nvme-5.10.
2. Generated this series on top of
   http://lists.infradead.org/pipermail/linux-nvme/2020-September/020023.html

Chaitanya Kulkarni (2):
  nvme-loop: use xarray for loop ctrl tracking
  nvme-loop: use xarray for loop port tracking

 drivers/nvme/target/loop.c | 69 ++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 37 deletions(-)

[1] NVMeOF nvme-loop blktest results :-
root@vm blktests (master) # ./check tests/nvme/???
nvme/002 (create many subsystems and test discovery)         [passed]
    runtime  24.180s  ...  24.181s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.130s  ...  10.128s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  2.876s  ...  2.880s
nvme/005 (reset local loopback target)                       [passed]
    runtime  3.467s  ...  3.459s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.104s  ...  0.098s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.060s  ...  0.056s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  2.843s  ...  2.873s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  2.815s  ...  2.804s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  26.820s  ...  32.029s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  302.723s  ...  268.770s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  42.882s  ...  41.107s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  142.143s  ...  128.990s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  23.945s  ...  23.363s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  22.146s  ...  19.836s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
    runtime  15.456s  ...  15.097s
nvme/017 (create/delete many file-ns and test discovery)     [passed]
    runtime  15.267s  ...  15.184s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  2.806s  ...  2.842s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  2.841s  ...  2.852s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  2.834s  ...  2.873s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  2.850s  ...  2.828s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  3.415s  ...  3.422s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  2.870s  ...  2.853s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  2.827s  ...  2.826s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  2.862s  ...  2.812s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  2.836s  ...  2.809s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  2.877s  ...  2.878s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  2.835s  ...  2.822s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  3.157s  ...  3.163s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.352s  ...  0.349s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  17.561s  ...  17.649s

-- 
2.22.1


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking
  2020-09-30  4:55 [PATCH V2 0/2] nvme-loop: use xarray for ctrl and port list Chaitanya Kulkarni
@ 2020-09-30  4:55 ` Chaitanya Kulkarni
  2020-10-05 12:46   ` Christoph Hellwig
  2020-09-30  4:55 ` [PATCH V2 2/2] nvme-loop: use xarray for loop port tracking Chaitanya Kulkarni
  1 sibling, 1 reply; 6+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-30  4:55 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, Chaitanya Kulkarni, sagi

In nvme-loop, ctrls are tracked with nvme_loop_ctrl_list. This requires
extra locking just for list operations.

The XArray data structure provides a clear API which handles locking
implicitly, so we can get rid of the explicit locking and the list loops.

Replace the nvme-loop ctrl list and its lock nvme_loop_ctrl_mutex with
the nvme_loop_ctrls XArray.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/loop.c | 46 ++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 24 deletions(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index f6d81239be21..2f2e16fb9f29 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -31,7 +31,6 @@ struct nvme_loop_ctrl {
 
 	struct blk_mq_tag_set	admin_tag_set;
 
-	struct list_head	list;
 	struct blk_mq_tag_set	tag_set;
 	struct nvme_loop_iod	async_event_iod;
 	struct nvme_ctrl	ctrl;
@@ -58,8 +57,7 @@ struct nvme_loop_queue {
 static LIST_HEAD(nvme_loop_ports);
 static DEFINE_MUTEX(nvme_loop_ports_mutex);
 
-static LIST_HEAD(nvme_loop_ctrl_list);
-static DEFINE_MUTEX(nvme_loop_ctrl_mutex);
+static DEFINE_XARRAY(nvme_loop_ctrls);
 
 static void nvme_loop_queue_response(struct nvmet_req *nvme_req);
 static void nvme_loop_delete_ctrl(struct nvmet_ctrl *ctrl);
@@ -262,12 +260,10 @@ static void nvme_loop_free_ctrl(struct nvme_ctrl *nctrl)
 {
 	struct nvme_loop_ctrl *ctrl = to_loop_ctrl(nctrl);
 
-	if (list_empty(&ctrl->list))
+	if (!xa_load(&nvme_loop_ctrls, nctrl->cntlid))
 		goto free_ctrl;
 
-	mutex_lock(&nvme_loop_ctrl_mutex);
-	list_del(&ctrl->list);
-	mutex_unlock(&nvme_loop_ctrl_mutex);
+	xa_erase(&nvme_loop_ctrls, nctrl->cntlid);
 
 	if (nctrl->tagset) {
 		blk_cleanup_queue(ctrl->ctrl.connect_q);
@@ -430,14 +426,10 @@ static void nvme_loop_delete_ctrl_host(struct nvme_ctrl *ctrl)
 
 static void nvme_loop_delete_ctrl(struct nvmet_ctrl *nctrl)
 {
-	struct nvme_loop_ctrl *ctrl;
+	struct nvme_loop_ctrl *ctrl = xa_load(&nvme_loop_ctrls, nctrl->cntlid);
 
-	mutex_lock(&nvme_loop_ctrl_mutex);
-	list_for_each_entry(ctrl, &nvme_loop_ctrl_list, list) {
-		if (ctrl->ctrl.cntlid == nctrl->cntlid)
-			nvme_delete_ctrl(&ctrl->ctrl);
-	}
-	mutex_unlock(&nvme_loop_ctrl_mutex);
+	if (ctrl)
+		nvme_delete_ctrl(&ctrl->ctrl);
 }
 
 static void nvme_loop_reset_ctrl_work(struct work_struct *work)
@@ -572,7 +564,6 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 	if (!ctrl)
 		return ERR_PTR(-ENOMEM);
 	ctrl->ctrl.opts = opts;
-	INIT_LIST_HEAD(&ctrl->list);
 
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_loop_reset_ctrl_work);
 
@@ -599,6 +590,12 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 	if (ret)
 		goto out_free_queues;
 
+	/* unusual place to update xarray, makes unwind code simple */
+	ret = xa_insert(&nvme_loop_ctrls, ctrl->ctrl.cntlid, &ctrl,
+			GFP_KERNEL);
+	if (ret)
+		goto out_remove_ctrl;
+
 	if (opts->queue_size > ctrl->ctrl.maxcmd) {
 		/* warn if maxcmd is lower than queue_size */
 		dev_warn(ctrl->ctrl.device,
@@ -621,16 +618,14 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE))
 		WARN_ON_ONCE(1);
 
-	mutex_lock(&nvme_loop_ctrl_mutex);
-	list_add_tail(&ctrl->list, &nvme_loop_ctrl_list);
-	mutex_unlock(&nvme_loop_ctrl_mutex);
-
 	nvme_start_ctrl(&ctrl->ctrl);
 
 	return &ctrl->ctrl;
 
 out_remove_admin_queue:
 	nvme_loop_destroy_admin_queue(ctrl);
+out_remove_ctrl:
+	xa_erase(&nvme_loop_ctrls, ctrl->ctrl.cntlid);
 out_free_queues:
 	kfree(ctrl->queues);
 out_uninit_ctrl:
@@ -678,7 +673,7 @@ static struct nvmf_transport_ops nvme_loop_transport = {
 	.name		= "loop",
 	.module		= THIS_MODULE,
 	.create_ctrl	= nvme_loop_create_ctrl,
-	.allowed_opts	= NVMF_OPT_TRADDR,
+	.allowed_opts	= NVMF_OPT_TRADDR | NVMF_OPT_CTRL_LOSS_TMO,
 };
 
 static int __init nvme_loop_init_module(void)
@@ -693,20 +688,23 @@ static int __init nvme_loop_init_module(void)
 	if (ret)
 		nvmet_unregister_transport(&nvme_loop_ops);
 
+	xa_init(&nvme_loop_ctrls);
+
 	return ret;
 }
 
 static void __exit nvme_loop_cleanup_module(void)
 {
-	struct nvme_loop_ctrl *ctrl, *next;
+	struct nvme_loop_ctrl *ctrl;
+	unsigned long idx;
 
 	nvmf_unregister_transport(&nvme_loop_transport);
 	nvmet_unregister_transport(&nvme_loop_ops);
 
-	mutex_lock(&nvme_loop_ctrl_mutex);
-	list_for_each_entry_safe(ctrl, next, &nvme_loop_ctrl_list, list)
+	xa_for_each(&nvme_loop_ctrls, idx, ctrl)
 		nvme_delete_ctrl(&ctrl->ctrl);
-	mutex_unlock(&nvme_loop_ctrl_mutex);
+
+	xa_destroy(&nvme_loop_ctrls);
 
 	flush_workqueue(nvme_delete_wq);
 }
-- 
2.22.1




* [PATCH V2 2/2] nvme-loop: use xarray for loop port tracking
  2020-09-30  4:55 [PATCH V2 0/2] nvme-loop: use xarray for ctrl and port list Chaitanya Kulkarni
  2020-09-30  4:55 ` [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking Chaitanya Kulkarni
@ 2020-09-30  4:55 ` Chaitanya Kulkarni
  2020-10-05 12:48   ` Christoph Hellwig
  1 sibling, 1 reply; 6+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-30  4:55 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, Chaitanya Kulkarni, sagi

In nvme-loop, ports are tracked with the nvme_loop_ports list. This
requires extra locking just for list operations.

The XArray data structure provides a clear API which handles locking
implicitly, so we can get rid of the explicit locking and the list loops.

Replace the nvme-loop ports list and its lock nvme_loop_ports_mutex with
the nvme_loop_ports XArray.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/loop.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 2f2e16fb9f29..873afe0654e9 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -54,9 +54,7 @@ struct nvme_loop_queue {
 	unsigned long		flags;
 };
 
-static LIST_HEAD(nvme_loop_ports);
-static DEFINE_MUTEX(nvme_loop_ports_mutex);
-
+static DEFINE_XARRAY(nvme_loop_ports);
 static DEFINE_XARRAY(nvme_loop_ctrls);
 
 static void nvme_loop_queue_response(struct nvmet_req *nvme_req);
@@ -540,9 +538,10 @@ static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
 static struct nvmet_port *nvme_loop_find_port(struct nvme_ctrl *ctrl)
 {
 	struct nvmet_port *p, *found = NULL;
+	unsigned long idx;
 
-	mutex_lock(&nvme_loop_ports_mutex);
-	list_for_each_entry(p, &nvme_loop_ports, entry) {
+	rcu_read_lock();
+	xa_for_each(&nvme_loop_ports, idx, p) {
 		/* if no transport address is specified use the first port */
 		if ((ctrl->opts->mask & NVMF_OPT_TRADDR) &&
 		    strcmp(ctrl->opts->traddr, p->disc_addr.traddr))
@@ -550,7 +549,7 @@ static struct nvmet_port *nvme_loop_find_port(struct nvme_ctrl *ctrl)
 		found = p;
 		break;
 	}
-	mutex_unlock(&nvme_loop_ports_mutex);
+	rcu_read_unlock();
 	return found;
 }
 
@@ -639,17 +638,13 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 
 static int nvme_loop_add_port(struct nvmet_port *port)
 {
-	mutex_lock(&nvme_loop_ports_mutex);
-	list_add_tail(&port->entry, &nvme_loop_ports);
-	mutex_unlock(&nvme_loop_ports_mutex);
-	return 0;
+	return xa_insert(&nvme_loop_ports, port->disc_addr.portid,
+			 port, GFP_KERNEL);
 }
 
 static void nvme_loop_remove_port(struct nvmet_port *port)
 {
-	mutex_lock(&nvme_loop_ports_mutex);
-	list_del_init(&port->entry);
-	mutex_unlock(&nvme_loop_ports_mutex);
+	xa_erase(&nvme_loop_ports, port->disc_addr.portid);
 
 	/*
 	 * Ensure any ctrls that are in the process of being
@@ -689,6 +684,7 @@ static int __init nvme_loop_init_module(void)
 		nvmet_unregister_transport(&nvme_loop_ops);
 
 	xa_init(&nvme_loop_ctrls);
+	xa_init(&nvme_loop_ports);
 
 	return ret;
 }
@@ -704,6 +700,7 @@ static void __exit nvme_loop_cleanup_module(void)
 	xa_for_each(&nvme_loop_ctrls, idx, ctrl)
 		nvme_delete_ctrl(&ctrl->ctrl);
 
+	xa_destroy(&nvme_loop_ports);
 	xa_destroy(&nvme_loop_ctrls);
 
 	flush_workqueue(nvme_delete_wq);
-- 
2.22.1




* Re: [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking
  2020-09-30  4:55 ` [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking Chaitanya Kulkarni
@ 2020-10-05 12:46   ` Christoph Hellwig
  2020-10-06  2:56     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2020-10-05 12:46 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: hch, linux-nvme, sagi

On Tue, Sep 29, 2020 at 09:55:56PM -0700, Chaitanya Kulkarni wrote:
> @@ -262,12 +260,10 @@ static void nvme_loop_free_ctrl(struct nvme_ctrl *nctrl)
>  {
>  	struct nvme_loop_ctrl *ctrl = to_loop_ctrl(nctrl);
>  
> -	if (list_empty(&ctrl->list))
> +	if (!xa_load(&nvme_loop_ctrls, nctrl->cntlid))
>  		goto free_ctrl;
>  
> -	mutex_lock(&nvme_loop_ctrl_mutex);
> -	list_del(&ctrl->list);
> -	mutex_unlock(&nvme_loop_ctrl_mutex);
> +	xa_erase(&nvme_loop_ctrls, nctrl->cntlid);

xa_erase is fine with an already deleted object, so the above xa_load
check doesn't make any sense.
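
In other words (a sketch under the documented XArray semantics, not
driver code), xa_erase() returns the erased entry, or NULL when the
index was already empty, so the release path can call it unconditionally:

```c
/* Sketch: no xa_load() pre-check needed; xa_erase() is a no-op
 * (returning NULL) when the index holds no entry. */
static void sketch_free_ctrl(struct nvme_ctrl *nctrl)
{
	xa_erase(&nvme_loop_ctrls, nctrl->cntlid);
	/* ... proceed to free the controller ... */
}
```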

> @@ -599,6 +590,12 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
>  	if (ret)
>  		goto out_free_queues;
>  
> +	/* unusual place to update xarray, makes unwind code simple */
> +	ret = xa_insert(&nvme_loop_ctrls, ctrl->ctrl.cntlid, &ctrl,
> +			GFP_KERNEL);
> +	if (ret)
> +		goto out_remove_ctrl;
> +

>  out_remove_admin_queue:
>  	nvme_loop_destroy_admin_queue(ctrl);
> +out_remove_ctrl:
> +	xa_erase(&nvme_loop_ctrls, ctrl->ctrl.cntlid);

This looks weird.  Why would you remove the controller when inserting
it fails?

Also did you do an audit that none of the lookups will do something
funny with the inserted but not fully initialized controller?

> @@ -678,7 +673,7 @@ static struct nvmf_transport_ops nvme_loop_transport = {
>  	.name		= "loop",
>  	.module		= THIS_MODULE,
>  	.create_ctrl	= nvme_loop_create_ctrl,
> -	.allowed_opts	= NVMF_OPT_TRADDR,
> +	.allowed_opts	= NVMF_OPT_TRADDR | NVMF_OPT_CTRL_LOSS_TMO,

This looks unrelated?

> +	xa_init(&nvme_loop_ctrls);

DEFINE_XARRAY seems like the better choice for nvme_loop_ctrls.


>  	return ret;
>  }
>  
>  static void __exit nvme_loop_cleanup_module(void)
>  {
> -	struct nvme_loop_ctrl *ctrl, *next;
> +	struct nvme_loop_ctrl *ctrl;
> +	unsigned long idx;
>  
>  	nvmf_unregister_transport(&nvme_loop_transport);
>  	nvmet_unregister_transport(&nvme_loop_ops);
>  
> -	mutex_lock(&nvme_loop_ctrl_mutex);
> -	list_for_each_entry_safe(ctrl, next, &nvme_loop_ctrl_list, list)
> +	xa_for_each(&nvme_loop_ctrls, idx, ctrl)
>  		nvme_delete_ctrl(&ctrl->ctrl);
> -	mutex_unlock(&nvme_loop_ctrl_mutex);
> +
> +	xa_destroy(&nvme_loop_ctrls);

What replaces the protection previously provided by nvme_loop_ctrl_mutex?



* Re: [PATCH V2 2/2] nvme-loop: use xarray for loop port tracking
  2020-09-30  4:55 ` [PATCH V2 2/2] nvme-loop: use xarray for loop port tracking Chaitanya Kulkarni
@ 2020-10-05 12:48   ` Christoph Hellwig
  0 siblings, 0 replies; 6+ messages in thread
From: Christoph Hellwig @ 2020-10-05 12:48 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: hch, linux-nvme, sagi

> +	unsigned long idx;
>  
> -	mutex_lock(&nvme_loop_ports_mutex);
> -	list_for_each_entry(p, &nvme_loop_ports, entry) {
> +	rcu_read_lock();
> +	xa_for_each(&nvme_loop_ports, idx, p) {

Why the rcu_read_lock()?  We'd need a synchronize_rcu() on the port
disable side.  But manual mutex locking seems more sensible here
anyway.
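
To make the concern concrete (a sketch assuming the documented
XArray/RCU semantics, not driver code): xa_for_each() already takes the
RCU read lock internally for each lookup, so a caller-side
rcu_read_lock() only protects the looked-up port if port teardown also
waits for a grace period before freeing:

```c
/* Sketch: the traversal itself needs no caller-side locking, but 'p'
 * may be freed by a concurrent nvme_loop_remove_port() unless freeing
 * is deferred past synchronize_rcu() or an external mutex serialises
 * lookup and removal. */
static struct nvmet_port *sketch_find_first_port(void)
{
	struct nvmet_port *p;
	unsigned long idx;

	xa_for_each(&nvme_loop_ports, idx, p)
		return p;	/* first port found */
	return NULL;
}
```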

> +	xa_init(&nvme_loop_ports);

I don't think this is needed.



* Re: [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking
  2020-10-05 12:46   ` Christoph Hellwig
@ 2020-10-06  2:56     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 6+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-06  2:56 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: sagi, linux-nvme

On 10/5/20 05:46, Christoph Hellwig wrote:
> On Tue, Sep 29, 2020 at 09:55:56PM -0700, Chaitanya Kulkarni wrote:
>> @@ -262,12 +260,10 @@ static void nvme_loop_free_ctrl(struct nvme_ctrl *nctrl)
>>  {
>>  	struct nvme_loop_ctrl *ctrl = to_loop_ctrl(nctrl);
>>  
>> -	if (list_empty(&ctrl->list))
>> +	if (!xa_load(&nvme_loop_ctrls, nctrl->cntlid))
>>  		goto free_ctrl;
>>  
>> -	mutex_lock(&nvme_loop_ctrl_mutex);
>> -	list_del(&ctrl->list);
>> -	mutex_unlock(&nvme_loop_ctrl_mutex);
>> +	xa_erase(&nvme_loop_ctrls, nctrl->cntlid);
> xa_erase is fine with an already deleted object, so the above xa_load
> check doesn't make any sense.
Okay, we can certainly get rid of the xa_load().
>> @@ -599,6 +590,12 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
>>  	if (ret)
>>  		goto out_free_queues;
>>  
>> +	/* unusual place to update xarray, makes unwind code simple */
>> +	ret = xa_insert(&nvme_loop_ctrls, ctrl->ctrl.cntlid, &ctrl,
>> +			GFP_KERNEL);
>> +	if (ret)
>> +		goto out_remove_ctrl;
>> +
>>  out_remove_admin_queue:
>>  	nvme_loop_destroy_admin_queue(ctrl);
>> +out_remove_ctrl:
>> +	xa_erase(&nvme_loop_ctrls, ctrl->ctrl.cntlid);
> This looks weird.  Why would you remove the controller when inserting
> it fails?
This is wrong; it should be "goto out_free_queues;".
> Also did you do an audit that none of the lookups will do something
> funny with the insered but not fully initialized controller?

Now looking at the code, xa_insert() should just replace the original
list_add_tail() and do the right error handling for destroying the I/O
queues, if any. Will adjust this patch in the next version.

>> @@ -678,7 +673,7 @@ static struct nvmf_transport_ops nvme_loop_transport = {
>>  	.name		= "loop",
>>  	.module		= THIS_MODULE,
>>  	.create_ctrl	= nvme_loop_create_ctrl,
>> -	.allowed_opts	= NVMF_OPT_TRADDR,
>> +	.allowed_opts	= NVMF_OPT_TRADDR | NVMF_OPT_CTRL_LOSS_TMO,
> This looks unrelated?
Yes, will remove.
>> +	xa_init(&nvme_loop_ctrls);
> DEFINE_XARRAY seems like the btter choice for nvme_loop_ctrls.
>
True.
>>  	return ret;
>>  }
>>  
>>  static void __exit nvme_loop_cleanup_module(void)
>>  {
>> -	struct nvme_loop_ctrl *ctrl, *next;
>> +	struct nvme_loop_ctrl *ctrl;
>> +	unsigned long idx;
>>  
>>  	nvmf_unregister_transport(&nvme_loop_transport);
>>  	nvmet_unregister_transport(&nvme_loop_ops);
>>  
>> -	mutex_lock(&nvme_loop_ctrl_mutex);
>> -	list_for_each_entry_safe(ctrl, next, &nvme_loop_ctrl_list, list)
>> +	xa_for_each(&nvme_loop_ctrls, idx, ctrl)
>>  		nvme_delete_ctrl(&ctrl->ctrl);
>> -	mutex_unlock(&nvme_loop_ctrl_mutex);
>> +
>> +	xa_destroy(&nvme_loop_ctrls);

Hmm, let me see if we can fix this in the next version without adding
more complexity.





end of thread, other threads:[~2020-10-06  2:56 UTC | newest]

Thread overview: 6+ messages
2020-09-30  4:55 [PATCH V2 0/2] nvme-loop: use xarray for ctrl and port list Chaitanya Kulkarni
2020-09-30  4:55 ` [PATCH V2 1/2] nvme-loop: use xarray for loop ctrl tracking Chaitanya Kulkarni
2020-10-05 12:46   ` Christoph Hellwig
2020-10-06  2:56     ` Chaitanya Kulkarni
2020-09-30  4:55 ` [PATCH V2 2/2] nvme-loop: use xarray for loop port tracking Chaitanya Kulkarni
2020-10-05 12:48   ` Christoph Hellwig
