linux-nvme.lists.infradead.org archive mirror
* [PATCH 0/2] nvme-auth: avoid locking during authentication
@ 2022-10-28 13:50 Hannes Reinecke
  2022-10-28 13:50 ` [PATCH 1/2] nvme-auth: allocate authentication buffer only during transaction Hannes Reinecke
  2022-10-28 13:50 ` [PATCH 2/2] nvme-auth: use xarray instead of linked list Hannes Reinecke
  0 siblings, 2 replies; 6+ messages in thread
From: Hannes Reinecke @ 2022-10-28 13:50 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke

Hi all,

blktests has uncovered a possible locking issue in the authentication
code. These patches rework the authentication context handling to
avoid long-running allocations and switch over to using an xarray.

As usual, comments and reviews are welcome.

Hannes Reinecke (2):
  nvme-auth: allocate authentication buffer only during transaction
  nvme-auth: use xarray instead of linked list

 drivers/nvme/host/auth.c | 163 +++++++++++++++++++++------------------
 drivers/nvme/host/nvme.h |   3 +-
 2 files changed, 88 insertions(+), 78 deletions(-)

-- 
2.35.3




* [PATCH 1/2] nvme-auth: allocate authentication buffer only during transaction
  2022-10-28 13:50 [PATCH 0/2] nvme-auth: avoid locking during authentication Hannes Reinecke
@ 2022-10-28 13:50 ` Hannes Reinecke
  2022-10-30  7:52   ` Christoph Hellwig
  2022-10-28 13:50 ` [PATCH 2/2] nvme-auth: use xarray instead of linked list Hannes Reinecke
  1 sibling, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2022-10-28 13:50 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke

The authentication buffer is only used during the authentication
transaction, so no need to keep it around.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/auth.c | 49 +++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 26 deletions(-)

diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index 3b63aa155beb..b68fb2c764f6 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -667,8 +667,6 @@ static void __nvme_auth_reset(struct nvme_dhchap_queue_context *chap)
 	kfree_sensitive(chap->sess_key);
 	chap->sess_key = NULL;
 	chap->sess_key_len = 0;
-	chap->status = 0;
-	chap->error = 0;
 	chap->s1 = 0;
 	chap->s2 = 0;
 	chap->transaction = 0;
@@ -687,7 +685,6 @@ static void __nvme_auth_free(struct nvme_dhchap_queue_context *chap)
 	kfree_sensitive(chap->host_key);
 	kfree_sensitive(chap->sess_key);
 	kfree_sensitive(chap->host_response);
-	kfree(chap->buf);
 	kfree(chap);
 }
 
@@ -700,6 +697,19 @@ static void __nvme_auth_work(struct work_struct *work)
 	int ret = 0;
 
 	chap->transaction = ctrl->transaction++;
+	chap->status = 0;
+	chap->error = 0;
+
+	/*
+	 * Allocate a large enough buffer for the entire negotiation:
+	 * 4k should be enough to ffdhe8192.
+	 */
+	chap->buf_size = 4096;
+	chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);
+	if (!chap->buf) {
+		chap->error = -ENOMEM;
+		return;
+	}
 
 	/* DH-HMAC-CHAP Step 1: send negotiate */
 	dev_dbg(ctrl->device, "%s: qid %d send negotiate\n",
@@ -707,13 +717,13 @@ static void __nvme_auth_work(struct work_struct *work)
 	ret = nvme_auth_set_dhchap_negotiate_data(ctrl, chap);
 	if (ret < 0) {
 		chap->error = ret;
-		return;
+		goto out_free;
 	}
 	tl = ret;
 	ret = nvme_auth_submit(ctrl, chap->qid, chap->buf, tl, true);
 	if (ret) {
 		chap->error = ret;
-		return;
+		goto out_free;
 	}
 
 	/* DH-HMAC-CHAP Step 2: receive challenge */
@@ -727,14 +737,14 @@ static void __nvme_auth_work(struct work_struct *work)
 			 "qid %d failed to receive challenge, %s %d\n",
 			 chap->qid, ret < 0 ? "error" : "nvme status", ret);
 		chap->error = ret;
-		return;
+		goto out_free;
 	}
 	ret = nvme_auth_receive_validate(ctrl, chap->qid, chap->buf, chap->transaction,
 					 NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE);
 	if (ret) {
 		chap->status = ret;
 		chap->error = NVME_SC_AUTH_REQUIRED;
-		return;
+		goto out_free;
 	}
 
 	ret = nvme_auth_process_dhchap_challenge(ctrl, chap);
@@ -790,7 +800,7 @@ static void __nvme_auth_work(struct work_struct *work)
 			 "qid %d failed to receive success1, %s %d\n",
 			 chap->qid, ret < 0 ? "error" : "nvme status", ret);
 		chap->error = ret;
-		return;
+		goto out_free;
 	}
 	ret = nvme_auth_receive_validate(ctrl, chap->qid,
 					 chap->buf, chap->transaction,
@@ -798,7 +808,7 @@ static void __nvme_auth_work(struct work_struct *work)
 	if (ret) {
 		chap->status = ret;
 		chap->error = NVME_SC_AUTH_REQUIRED;
-		return;
+		goto out_free;
 	}
 
 	if (ctrl->ctrl_key) {
@@ -828,10 +838,7 @@ static void __nvme_auth_work(struct work_struct *work)
 		if (ret)
 			chap->error = ret;
 	}
-	if (!ret) {
-		chap->error = 0;
-		return;
-	}
+	goto out_free;
 
 fail2:
 	dev_dbg(ctrl->device, "%s: qid %d send failure2, status %x\n",
@@ -844,6 +851,9 @@ static void __nvme_auth_work(struct work_struct *work)
 	 */
 	if (ret && !chap->error)
 		chap->error = ret;
+out_free:
+	kfree(chap->buf);
+	chap->buf = NULL;
 }
 
 int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
@@ -863,7 +873,6 @@ int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
 	mutex_lock(&ctrl->dhchap_auth_mutex);
 	/* Check if the context is already queued */
 	list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
-		WARN_ON(!chap->buf);
 		if (chap->qid == qid) {
 			dev_dbg(ctrl->device, "qid %d: re-using context\n", qid);
 			mutex_unlock(&ctrl->dhchap_auth_mutex);
@@ -881,18 +890,6 @@ int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
 	chap->qid = (qid == NVME_QID_ANY) ? 0 : qid;
 	chap->ctrl = ctrl;
 
-	/*
-	 * Allocate a large enough buffer for the entire negotiation:
-	 * 4k should be enough to ffdhe8192.
-	 */
-	chap->buf_size = 4096;
-	chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);
-	if (!chap->buf) {
-		mutex_unlock(&ctrl->dhchap_auth_mutex);
-		kfree(chap);
-		return -ENOMEM;
-	}
-
 	INIT_WORK(&chap->auth_work, __nvme_auth_work);
 	list_add(&chap->entry, &ctrl->dhchap_auth_list);
 	mutex_unlock(&ctrl->dhchap_auth_mutex);
-- 
2.35.3




* [PATCH 2/2] nvme-auth: use xarray instead of linked list
  2022-10-28 13:50 [PATCH 0/2] nvme-auth: avoid locking during authentication Hannes Reinecke
  2022-10-28 13:50 ` [PATCH 1/2] nvme-auth: allocate authentication buffer only during transaction Hannes Reinecke
@ 2022-10-28 13:50 ` Hannes Reinecke
  2022-10-30  8:00   ` Christoph Hellwig
  1 sibling, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2022-10-28 13:50 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke

The current design of holding the chap context is slightly awkward:
the context is allocated on demand, and we have to lock the list when
looking up contexts since we cannot know whether a context has been
allocated.

This patch moves the allocation of the chap context to before
authentication starts and stores the context in an xarray. With that we
can do away with the lock and access the context directly via the queue
number.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/auth.c | 116 ++++++++++++++++++++++-----------------
 drivers/nvme/host/nvme.h |   3 +-
 2 files changed, 66 insertions(+), 53 deletions(-)

diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index b68fb2c764f6..7b974bd0fa64 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -72,10 +72,12 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
 				     0, flags, nvme_max_retries);
 	if (ret > 0)
 		dev_warn(ctrl->device,
-			"qid %d auth_send failed with status %d\n", qid, ret);
+			"qid %d auth_%s failed with status %d\n",
+			 qid, auth_send ? "send" : "recv", ret);
 	else if (ret < 0)
 		dev_err(ctrl->device,
-			"qid %d auth_send failed with error %d\n", qid, ret);
+			"qid %d auth_%s failed with error %d\n",
+			qid, auth_send ? "send" : "recv", ret);
 	return ret;
 }
 
@@ -870,29 +872,42 @@ int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
 		return -ENOKEY;
 	}
 
-	mutex_lock(&ctrl->dhchap_auth_mutex);
-	/* Check if the context is already queued */
-	list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
-		if (chap->qid == qid) {
-			dev_dbg(ctrl->device, "qid %d: re-using context\n", qid);
-			mutex_unlock(&ctrl->dhchap_auth_mutex);
-			flush_work(&chap->auth_work);
-			__nvme_auth_reset(chap);
-			queue_work(nvme_wq, &chap->auth_work);
-			return 0;
-		}
-	}
-	chap = kzalloc(sizeof(*chap), GFP_KERNEL);
+	if (qid == NVME_QID_ANY)
+		qid = 0;
+	chap = xa_load(&ctrl->dhchap_auth_xa, qid);
 	if (!chap) {
-		mutex_unlock(&ctrl->dhchap_auth_mutex);
-		return -ENOMEM;
-	}
-	chap->qid = (qid == NVME_QID_ANY) ? 0 : qid;
-	chap->ctrl = ctrl;
+		int ret;
+
+		chap = kzalloc(sizeof(*chap), GFP_KERNEL);
+		if (!chap) {
+			dev_warn(ctrl->device,
+				 "qid %d: error allocation authentication", qid);
+			return -ENOMEM;
+		}
+		chap->qid = qid;
+		chap->ctrl = ctrl;
 
-	INIT_WORK(&chap->auth_work, __nvme_auth_work);
-	list_add(&chap->entry, &ctrl->dhchap_auth_list);
-	mutex_unlock(&ctrl->dhchap_auth_mutex);
+		INIT_WORK(&chap->auth_work, __nvme_auth_work);
+		ret = xa_insert(&ctrl->dhchap_auth_xa, qid, chap, GFP_KERNEL);
+		if (ret) {
+			dev_warn(ctrl->device,
+				 "qid %d: error %d inserting authentication",
+				 qid, ret);
+			kfree(chap);
+			return ret;
+		}
+	} else {
+		if (chap->qid != qid) {
+			dev_warn(ctrl->device,
+				 "qid %d: authentication qid mismatch (%d)!",
+				 chap->qid, qid);
+			chap = xa_erase(&ctrl->dhchap_auth_xa, qid);
+			__nvme_auth_free(chap);
+			return -ENOENT;
+		}
+		flush_work(&chap->auth_work);
+		__nvme_auth_reset(chap);
+	}
 	queue_work(nvme_wq, &chap->auth_work);
 	return 0;
 }
@@ -901,33 +916,35 @@ EXPORT_SYMBOL_GPL(nvme_auth_negotiate);
 int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
 {
 	struct nvme_dhchap_queue_context *chap;
-	int ret;
 
-	mutex_lock(&ctrl->dhchap_auth_mutex);
-	list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
-		if (chap->qid != qid)
-			continue;
-		mutex_unlock(&ctrl->dhchap_auth_mutex);
-		flush_work(&chap->auth_work);
-		ret = chap->error;
-		return ret;
+	if (qid == NVME_QID_ANY)
+		qid = 0;
+	chap = xa_load(&ctrl->dhchap_auth_xa, qid);
+	if (!chap) {
+		dev_warn(ctrl->device,
+			 "qid %d: authentication not initialized!",
+			 qid);
+		return -ENOENT;
+	} else if (chap->qid != qid) {
+		dev_warn(ctrl->device,
+			 "qid %d: authentication qid mismatch (%d)!",
+			 chap->qid, qid);
+		return -ENOENT;
 	}
-	mutex_unlock(&ctrl->dhchap_auth_mutex);
-	return -ENXIO;
+	flush_work(&chap->auth_work);
+	return chap->error;
 }
 EXPORT_SYMBOL_GPL(nvme_auth_wait);
 
 void nvme_auth_reset(struct nvme_ctrl *ctrl)
 {
 	struct nvme_dhchap_queue_context *chap;
+	unsigned long qid;
 
-	mutex_lock(&ctrl->dhchap_auth_mutex);
-	list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
-		mutex_unlock(&ctrl->dhchap_auth_mutex);
+	xa_for_each(&ctrl->dhchap_auth_xa, qid, chap) {
 		flush_work(&chap->auth_work);
 		__nvme_auth_reset(chap);
 	}
-	mutex_unlock(&ctrl->dhchap_auth_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_auth_reset);
 
@@ -947,7 +964,7 @@ static void nvme_dhchap_auth_work(struct work_struct *work)
 	ret = nvme_auth_wait(ctrl, 0);
 	if (ret) {
 		dev_warn(ctrl->device,
-			 "qid 0: authentication failed\n");
+			 "qid 0: authentication failed with %d\n", ret);
 		return;
 	}
 
@@ -969,9 +986,8 @@ static void nvme_dhchap_auth_work(struct work_struct *work)
 
 void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl)
 {
-	INIT_LIST_HEAD(&ctrl->dhchap_auth_list);
+	xa_init_flags(&ctrl->dhchap_auth_xa, XA_FLAGS_ALLOC);
 	INIT_WORK(&ctrl->dhchap_auth_work, nvme_dhchap_auth_work);
-	mutex_init(&ctrl->dhchap_auth_mutex);
 	if (!ctrl->opts)
 		return;
 	nvme_auth_generate_key(ctrl->opts->dhchap_secret, &ctrl->host_key);
@@ -981,27 +997,25 @@ EXPORT_SYMBOL_GPL(nvme_auth_init_ctrl);
 
 void nvme_auth_stop(struct nvme_ctrl *ctrl)
 {
-	struct nvme_dhchap_queue_context *chap = NULL, *tmp;
+	struct nvme_dhchap_queue_context *chap;
+	unsigned long qid;
 
 	cancel_work_sync(&ctrl->dhchap_auth_work);
-	mutex_lock(&ctrl->dhchap_auth_mutex);
-	list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry)
+	xa_for_each(&ctrl->dhchap_auth_xa, qid, chap)
 		cancel_work_sync(&chap->auth_work);
-	mutex_unlock(&ctrl->dhchap_auth_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_auth_stop);
 
 void nvme_auth_free(struct nvme_ctrl *ctrl)
 {
-	struct nvme_dhchap_queue_context *chap = NULL, *tmp;
+	struct nvme_dhchap_queue_context *chap;
+	unsigned long qid;
 
-	mutex_lock(&ctrl->dhchap_auth_mutex);
-	list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry) {
-		list_del_init(&chap->entry);
-		flush_work(&chap->auth_work);
+	xa_for_each(&ctrl->dhchap_auth_xa, qid, chap) {
+		chap = xa_erase(&ctrl->dhchap_auth_xa, qid);
 		__nvme_auth_free(chap);
 	}
-	mutex_unlock(&ctrl->dhchap_auth_mutex);
+	xa_destroy(&ctrl->dhchap_auth_xa);
 	if (ctrl->host_key) {
 		nvme_auth_free_key(ctrl->host_key);
 		ctrl->host_key = NULL;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 32d9dc2d957e..d0b2d3e4b63f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -338,8 +338,7 @@ struct nvme_ctrl {
 
 #ifdef CONFIG_NVME_AUTH
 	struct work_struct dhchap_auth_work;
-	struct list_head dhchap_auth_list;
-	struct mutex dhchap_auth_mutex;
+	struct xarray dhchap_auth_xa;
 	struct nvme_dhchap_key *host_key;
 	struct nvme_dhchap_key *ctrl_key;
 	u16 transaction;
-- 
2.35.3




* Re: [PATCH 1/2] nvme-auth: allocate authentication buffer only during transaction
  2022-10-28 13:50 ` [PATCH 1/2] nvme-auth: allocate authentication buffer only during transaction Hannes Reinecke
@ 2022-10-30  7:52   ` Christoph Hellwig
  2022-10-31 17:46     ` Hannes Reinecke
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2022-10-30  7:52 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme

> +
> +	/*
> +	 * Allocate a large enough buffer for the entire negotiation:
> +	 * 4k should be enough to ffdhe8192.

should or is?

> +	chap->buf_size = 4096;

why do we even need the buf_size member if it is constant?

> +	chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);

I think this needs to be GFP_NOIO as we can hit this during a reconnect.



* Re: [PATCH 2/2] nvme-auth: use xarray instead of linked list
  2022-10-28 13:50 ` [PATCH 2/2] nvme-auth: use xarray instead of linked list Hannes Reinecke
@ 2022-10-30  8:00   ` Christoph Hellwig
  0 siblings, 0 replies; 6+ messages in thread
From: Christoph Hellwig @ 2022-10-30  8:00 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme

On Fri, Oct 28, 2022 at 03:50:27PM +0200, Hannes Reinecke wrote:
> The current design of holding the chap context is slightly awkward:
> the context is allocated on demand, and we have to lock the list when
> looking up contexts since we cannot know whether a context has been
> allocated.
> 
> This patch moves the allocation of the chap context to before
> authentication starts and stores the context in an xarray. With that we
> can do away with the lock and access the context directly via the queue
> number.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/nvme/host/auth.c | 116 ++++++++++++++++++++++-----------------
>  drivers/nvme/host/nvme.h |   3 +-
>  2 files changed, 66 insertions(+), 53 deletions(-)
> 
> diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
> index b68fb2c764f6..7b974bd0fa64 100644
> --- a/drivers/nvme/host/auth.c
> +++ b/drivers/nvme/host/auth.c
> @@ -72,10 +72,12 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
>  				     0, flags, nvme_max_retries);
>  	if (ret > 0)
>  		dev_warn(ctrl->device,
> -			"qid %d auth_send failed with status %d\n", qid, ret);
> +			"qid %d auth_%s failed with status %d\n",
> +			 qid, auth_send ? "send" : "recv", ret);
>  	else if (ret < 0)
>  		dev_err(ctrl->device,
> -			"qid %d auth_send failed with error %d\n", qid, ret);
> +			"qid %d auth_%s failed with error %d\n",
> +			qid, auth_send ? "send" : "recv", ret);
>  	return ret;

This looks like an unrelated message fixup.

> @@ -870,29 +872,42 @@ int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
>  		return -ENOKEY;
>  	}
>  
> -	mutex_lock(&ctrl->dhchap_auth_mutex);

> +	if (qid == NVME_QID_ANY)
> +		qid = 0;

I can't see how NVME_QID_ANY would ever be passed here.

> +	chap = xa_load(&ctrl->dhchap_auth_xa, qid);
>  	if (!chap) {
> +		int ret;
> +
> +		chap = kzalloc(sizeof(*chap), GFP_KERNEL);
> +		if (!chap) {
> +			dev_warn(ctrl->device,
> +				 "qid %d: error allocation authentication", qid);
> +			return -ENOMEM;
> +		}
> +		chap->qid = qid;
> +		chap->ctrl = ctrl;
>  
> +		INIT_WORK(&chap->auth_work, __nvme_auth_work);
> +		ret = xa_insert(&ctrl->dhchap_auth_xa, qid, chap, GFP_KERNEL);

GFP_NOFS?

Also xa_insert can fail with -EBUSY here if someone concurrently inserted
an entry now that there is no locking.

> +	} else {
> +		if (chap->qid != qid) {
> +			dev_warn(ctrl->device,
> +				 "qid %d: authentication qid mismatch (%d)!",
> +				 chap->qid, qid);
> +			chap = xa_erase(&ctrl->dhchap_auth_xa, qid);
> +			__nvme_auth_free(chap);
> +			return -ENOENT;
> +		}

How can the qid not match given that the lookup is by qid?

> +		flush_work(&chap->auth_work);
> +		__nvme_auth_reset(chap);
> +	}

What protects us against someone concurrently freeing the entry here?

> @@ -901,33 +916,35 @@ EXPORT_SYMBOL_GPL(nvme_auth_negotiate);
>  int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
>  {
>  	struct nvme_dhchap_queue_context *chap;
>  
> +	if (qid == NVME_QID_ANY)
> +		qid = 0;

I can't see how NVME_QID_ANY gets passed here ever.


> +	chap = xa_load(&ctrl->dhchap_auth_xa, qid);
> +	if (!chap) {
> +		dev_warn(ctrl->device,
> +			 "qid %d: authentication not initialized!",
> +			 qid);
> +		return -ENOENT;
> +	} else if (chap->qid != qid) {

No need for an else after a return.  But I can't see how the qid
makes sense here.  But once again, what protects the entry from
being freed?

> @@ -947,7 +964,7 @@ static void nvme_dhchap_auth_work(struct work_struct *work)
>  	ret = nvme_auth_wait(ctrl, 0);
>  	if (ret) {
>  		dev_warn(ctrl->device,
> -			 "qid 0: authentication failed\n");
> +			 "qid 0: authentication failed with %d\n", ret);

This looks like an unrelated message cleanup.



* Re: [PATCH 1/2] nvme-auth: allocate authentication buffer only during transaction
  2022-10-30  7:52   ` Christoph Hellwig
@ 2022-10-31 17:46     ` Hannes Reinecke
  0 siblings, 0 replies; 6+ messages in thread
From: Hannes Reinecke @ 2022-10-31 17:46 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme

On 10/30/22 08:52, Christoph Hellwig wrote:
>> +
>> +	/*
>> +	 * Allocate a large enough buffer for the entire negotiation:
>> +	 * 4k should be enough to ffdhe8192.
> 
> should or is?
> 
Well, is.

>> +	chap->buf_size = 4096;
> 
> why do we even need the buf_size member if it is constant?
> 
Will be hardcoding it.

>> +	chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);
> 
> I think this needs to be GFP_NOIO as we can hit this during a reconnect.
Will do.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman



