linux-cifs.vger.kernel.org archive mirror
* [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes
@ 2022-09-13 22:19 Tom Talpey
  2022-09-13 22:19 ` [PATCH 1/6] Decrease the number of SMB3 smbdirect client SGEs Tom Talpey
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

Allocate fewer SGEs and standard packet sizes in both kernel SMBDirect
implementations.

The current maximum values (16 SGEs and 8192 bytes) cause failures on the
SoftiWARP provider, and are suboptimal on others. Reduce these to 6 and
1364. Additionally, recode smbd_send() to work with as few as 2 SGEs,
and for debug sanity, reformat client-side logging to more clearly show
addresses, lengths and flags in the appropriate base.

Tested over SoftiWARP and SoftRoCE with shell, Connectathon basic and general,
and to be tested with XFStests here at SMB3 IOLab this week.

Tom Talpey (6):
  Decrease the number of SMB3 smbdirect client SGEs
  Decrease the number of SMB3 smbdirect server SGEs
  Reduce client smbdirect max receive segment size
  Reduce server smbdirect max send/receive segment sizes
  Handle variable number of SGEs in client smbdirect send.
  Fix formatting of client smbdirect RDMA logging

 fs/cifs/smbdirect.c       | 227 ++++++++++++++++----------------------
 fs/cifs/smbdirect.h       |  14 ++-
 fs/ksmbd/transport_rdma.c |   6 +-
 3 files changed, 109 insertions(+), 138 deletions(-)

-- 
2.34.1



* [PATCH 1/6] Decrease the number of SMB3 smbdirect client SGEs
  2022-09-13 22:19 [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes Tom Talpey
@ 2022-09-13 22:19 ` Tom Talpey
  2022-09-13 22:19 ` [PATCH 2/6] Decrease the number of SMB3 smbdirect server SGEs Tom Talpey
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

The client-side SMBDirect layer requires no more than 6 send SGEs
and 1 receive SGE. The previous default of 16 for both send and
receive causes smbdirect to fail on the SoftiWARP (siw) provider,
and possibly others. Additionally, large numbers of SGEs reduce
performance significantly on some adapter implementations.

Also correct the frmr page count comment (not an SGE count).
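
For orientation, a minimal sketch (not part of this patch) of how the
6-entry send budget is spent: sge[0] always carries the SMBDirect
data-transfer header, which is why smbd_post_send_data() packs at most
SMBDIRECT_MAX_SEND_SGE - 1 data vectors, and receives post a single SGE
covering the whole receive buffer. The helper below is hypothetical; the
constants and ib_send_wr fields match the patched code.

/*
 * Sketch only, not from the patch: how a client send work request
 * uses its SGE budget. sketch_fill_send_wr() is a hypothetical helper;
 * the constants are the new values introduced by this patch.
 */
#include <rdma/ib_verbs.h>

#define SMBDIRECT_MAX_SEND_SGE	6	/* 1 header + up to 5 data SGEs */
#define SMBDIRECT_MAX_RECV_SGE	1	/* one SGE spans the recv buffer */

static void sketch_fill_send_wr(struct ib_send_wr *wr,
				struct ib_sge *sge, int num_data_sges)
{
	/* sge[0] is already mapped to the smbd_data_transfer header */
	wr->sg_list = sge;
	wr->num_sge = 1 + num_data_sges;	/* never exceeds 6 */
	wr->opcode = IB_WR_SEND;
	wr->send_flags = IB_SEND_SIGNALED;
}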

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/cifs/smbdirect.c | 26 ++++++++++++--------------
 fs/cifs/smbdirect.h | 14 +++++++++-----
 2 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 5fbbec22bcc8..f81229721b76 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -99,7 +99,7 @@ int smbd_keep_alive_interval = 120;
  * User configurable initial values for RDMA transport
  * The actual values used may be lower and are limited to hardware capabilities
  */
-/* Default maximum number of SGEs in a RDMA write/read */
+/* Default maximum number of pages in a single RDMA write/read */
 int smbd_max_frmr_depth = 2048;
 
 /* If payload is less than this byte, use RDMA send/recv not read/write */
@@ -1017,9 +1017,9 @@ static int smbd_post_send_data(
 {
 	int i;
 	u32 data_length = 0;
-	struct scatterlist sgl[SMBDIRECT_MAX_SGE];
+	struct scatterlist sgl[SMBDIRECT_MAX_SEND_SGE - 1];
 
-	if (n_vec > SMBDIRECT_MAX_SGE) {
+	if (n_vec > SMBDIRECT_MAX_SEND_SGE - 1) {
 		cifs_dbg(VFS, "Can't fit data to SGL, n_vec=%d\n", n_vec);
 		return -EINVAL;
 	}
@@ -1562,17 +1562,15 @@ static struct smbd_connection *_smbd_get_connection(
 	info->max_receive_size = smbd_max_receive_size;
 	info->keep_alive_interval = smbd_keep_alive_interval;
 
-	if (info->id->device->attrs.max_send_sge < SMBDIRECT_MAX_SGE) {
+	if (info->id->device->attrs.max_send_sge < SMBDIRECT_MAX_SEND_SGE ||
+	    info->id->device->attrs.max_recv_sge < SMBDIRECT_MAX_RECV_SGE) {
 		log_rdma_event(ERR,
-			"warning: device max_send_sge = %d too small\n",
-			info->id->device->attrs.max_send_sge);
-		log_rdma_event(ERR, "Queue Pair creation may fail\n");
-	}
-	if (info->id->device->attrs.max_recv_sge < SMBDIRECT_MAX_SGE) {
-		log_rdma_event(ERR,
-			"warning: device max_recv_sge = %d too small\n",
+			"device %.*s max_send_sge/max_recv_sge = %d/%d too small\n",
+			IB_DEVICE_NAME_MAX,
+			info->id->device->name,
+			info->id->device->attrs.max_send_sge,
 			info->id->device->attrs.max_recv_sge);
-		log_rdma_event(ERR, "Queue Pair creation may fail\n");
+		goto config_failed;
 	}
 
 	info->send_cq = NULL;
@@ -1598,8 +1596,8 @@ static struct smbd_connection *_smbd_get_connection(
 	qp_attr.qp_context = info;
 	qp_attr.cap.max_send_wr = info->send_credit_target;
 	qp_attr.cap.max_recv_wr = info->receive_credit_max;
-	qp_attr.cap.max_send_sge = SMBDIRECT_MAX_SGE;
-	qp_attr.cap.max_recv_sge = SMBDIRECT_MAX_SGE;
+	qp_attr.cap.max_send_sge = SMBDIRECT_MAX_SEND_SGE;
+	qp_attr.cap.max_recv_sge = SMBDIRECT_MAX_RECV_SGE;
 	qp_attr.cap.max_inline_data = 0;
 	qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR;
 	qp_attr.qp_type = IB_QPT_RC;
diff --git a/fs/cifs/smbdirect.h b/fs/cifs/smbdirect.h
index a87fca82a796..207ef979cd51 100644
--- a/fs/cifs/smbdirect.h
+++ b/fs/cifs/smbdirect.h
@@ -91,7 +91,7 @@ struct smbd_connection {
 	/* Memory registrations */
 	/* Maximum number of RDMA read/write outstanding on this connection */
 	int responder_resources;
-	/* Maximum number of SGEs in a RDMA write/read */
+	/* Maximum number of pages in a single RDMA write/read on this connection */
 	int max_frmr_depth;
 	/*
 	 * If payload is less than or equal to the threshold,
@@ -225,21 +225,25 @@ struct smbd_buffer_descriptor_v1 {
 	__le32 length;
 } __packed;
 
-/* Default maximum number of SGEs in a RDMA send/recv */
-#define SMBDIRECT_MAX_SGE	16
+/* Maximum number of SGEs used by smbdirect.c in any send work request */
+#define SMBDIRECT_MAX_SEND_SGE	6
+
 /* The context for a SMBD request */
 struct smbd_request {
 	struct smbd_connection *info;
 	struct ib_cqe cqe;
 
-	/* the SGE entries for this packet */
-	struct ib_sge sge[SMBDIRECT_MAX_SGE];
+	/* the SGE entries for this work request */
+	struct ib_sge sge[SMBDIRECT_MAX_SEND_SGE];
 	int num_sge;
 
 	/* SMBD packet header follows this structure */
 	u8 packet[];
 };
 
+/* Maximum number of SGEs used by smbdirect.c in any receive work request */
+#define SMBDIRECT_MAX_RECV_SGE	1
+
 /* The context for a SMBD response */
 struct smbd_response {
 	struct smbd_connection *info;
-- 
2.34.1



* [PATCH 2/6] Decrease the number of SMB3 smbdirect server SGEs
  2022-09-13 22:19 [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes Tom Talpey
  2022-09-13 22:19 ` [PATCH 1/6] Decrease the number of SMB3 smbdirect client SGEs Tom Talpey
@ 2022-09-13 22:19 ` Tom Talpey
  2022-09-13 22:19 ` [PATCH 3/6] Reduce client smbdirect max receive segment size Tom Talpey
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

The server-side SMBDirect layer requires no more than 6 send SGEs.
The previous default of 8 causes ksmbd to fail on the SoftiWARP
(siw) provider, and possibly others. Additionally, large numbers
of SGEs reduce performance significantly on some adapter
implementations.

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/ksmbd/transport_rdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
index 35b55ee94fe5..494b8e5af4b3 100644
--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -32,7 +32,7 @@
 /* SMB_DIRECT negotiation timeout in seconds */
 #define SMB_DIRECT_NEGOTIATE_TIMEOUT		120
 
-#define SMB_DIRECT_MAX_SEND_SGES		8
+#define SMB_DIRECT_MAX_SEND_SGES		6
 #define SMB_DIRECT_MAX_RECV_SGES		1
 
 /*
-- 
2.34.1



* [PATCH 3/6] Reduce client smbdirect max receive segment size
  2022-09-13 22:19 [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes Tom Talpey
  2022-09-13 22:19 ` [PATCH 1/6] Decrease the number of SMB3 smbdirect client SGEs Tom Talpey
  2022-09-13 22:19 ` [PATCH 2/6] Decrease the number of SMB3 smbdirect server SGEs Tom Talpey
@ 2022-09-13 22:19 ` Tom Talpey
  2022-09-13 22:19 ` [PATCH 4/6] Reduce server smbdirect max send/receive segment sizes Tom Talpey
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

Reduce the client smbdirect maximum receive segment size to 1364
bytes to match protocol norms. Larger buffers are unnecessary and
add significant memory overhead.

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/cifs/smbdirect.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index f81229721b76..4908ca54610c 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -90,7 +90,7 @@ int smbd_max_send_size = 1364;
 int smbd_max_fragmented_recv_size = 1024 * 1024;
 
 /*  The maximum single-message size which can be received */
-int smbd_max_receive_size = 8192;
+int smbd_max_receive_size = 1364;
 
 /* The timeout to initiate send of a keepalive message on idle */
 int smbd_keep_alive_interval = 120;
-- 
2.34.1



* [PATCH 4/6] Reduce server smbdirect max send/receive segment sizes
  2022-09-13 22:19 [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes Tom Talpey
                   ` (2 preceding siblings ...)
  2022-09-13 22:19 ` [PATCH 3/6] Reduce client smbdirect max receive segment size Tom Talpey
@ 2022-09-13 22:19 ` Tom Talpey
  2022-09-13 22:19 ` [PATCH 5/6] Handle variable number of SGEs in client smbdirect send Tom Talpey
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

Reduce the ksmbd smbdirect maximum send and receive segment sizes
to 1364 bytes to match protocol norms. Larger buffers are
unnecessary and add significant memory overhead.

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/ksmbd/transport_rdma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
index 494b8e5af4b3..0315bca3d53b 100644
--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -62,13 +62,13 @@ static int smb_direct_receive_credit_max = 255;
 static int smb_direct_send_credit_target = 255;
 
 /* The maximum single message size can be sent to remote peer */
-static int smb_direct_max_send_size = 8192;
+static int smb_direct_max_send_size = 1364;
 
 /*  The maximum fragmented upper-layer payload receive size supported */
 static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
 
 /*  The maximum single-message size which can be received */
-static int smb_direct_max_receive_size = 8192;
+static int smb_direct_max_receive_size = 1364;
 
 static int smb_direct_max_read_write_size = SMBD_DEFAULT_IOSIZE;
 
-- 
2.34.1



* [PATCH 5/6] Handle variable number of SGEs in client smbdirect send.
  2022-09-13 22:19 [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes Tom Talpey
                   ` (3 preceding siblings ...)
  2022-09-13 22:19 ` [PATCH 4/6] Reduce server smbdirect max send/receive segment sizes Tom Talpey
@ 2022-09-13 22:19 ` Tom Talpey
  2022-09-13 22:19 ` [PATCH 6/6] Fix formatting of client smbdirect RDMA logging Tom Talpey
       [not found] ` <CAH2r5muhROptfZ+fUjhCfwzoeJqNku4Nkpjv63N5WTdhpEWWoA@mail.gmail.com>
  6 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

When an outgoing request contains more scatter/gather segments than
can be mapped in a single RDMA send work request, use smbdirect
fragmentation to send it in multiple packets.
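
The loop below is a standalone, simplified sketch of the packing idea,
not the kernel code itself: each pass gathers at most
SMBDIRECT_MAX_SEND_SGE - 1 data vectors and at most one payload's worth
of bytes, resuming a partially-sent iovec at an offset on the next pass.
All names and the userspace framing are illustrative; the patch operates
on struct smb_rqst and posts each pass via smbd_post_send_data().

/*
 * Sketch only: simplified userspace model of the new smbd_send() loop.
 * Each pass packs at most MAX_DATA_SGES vectors and max_payload bytes,
 * resuming a partially consumed iovec at 'offset' on the next pass.
 */
#include <stdio.h>
#include <stddef.h>
#include <sys/uio.h>

#define MAX_DATA_SGES 5			/* SMBDIRECT_MAX_SEND_SGE - 1 */

static void pack_and_send(struct iovec *iov, int nvec, size_t max_payload)
{
	size_t remaining = 0, offset = 0;
	int i, start = 0;

	for (i = 0; i < nvec; i++)
		remaining += iov[i].iov_len;

	while (remaining > 0) {
		struct iovec vecs[MAX_DATA_SGES];
		size_t buflen = 0;
		int j = 0;

		i = start;
		while (i < nvec && j < MAX_DATA_SGES && buflen < max_payload) {
			vecs[j].iov_base = (char *)iov[i].iov_base + offset;
			if (buflen + (iov[i].iov_len - offset) > max_payload) {
				/* packet is full; resume this iovec next pass */
				vecs[j].iov_len = max_payload - buflen;
				offset += vecs[j].iov_len;
				buflen = max_payload;
			} else {
				vecs[j].iov_len = iov[i].iov_len - offset;
				buflen += vecs[j].iov_len;
				offset = 0;
				i++;
			}
			j++;
		}

		start = i;
		remaining -= buflen;
		/* the kernel code posts one RDMA send per pass here */
		printf("send: %d segment(s), %zu bytes, %zu remaining\n",
		       j, buflen, remaining);
	}
}

int main(void)
{
	static char a[2000], b[300];
	struct iovec iov[] = {
		{ .iov_base = a, .iov_len = sizeof(a) },
		{ .iov_base = b, .iov_len = sizeof(b) },
	};

	pack_and_send(iov, 2, 1364);	/* 1364 = default max payload */
	return 0;
}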

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/cifs/smbdirect.c | 185 ++++++++++++++++++--------------------------
 1 file changed, 77 insertions(+), 108 deletions(-)

diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 4908ca54610c..494ac916158f 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -1984,10 +1984,11 @@ int smbd_send(struct TCP_Server_Info *server,
 	int num_rqst, struct smb_rqst *rqst_array)
 {
 	struct smbd_connection *info = server->smbd_conn;
-	struct kvec vec;
+	struct kvec vecs[SMBDIRECT_MAX_SEND_SGE - 1];
 	int nvecs;
 	int size;
 	unsigned int buflen, remaining_data_length;
+	unsigned int offset, remaining_vec_data_length;
 	int start, i, j;
 	int max_iov_size =
 		info->max_send_size - sizeof(struct smbd_data_transfer);
@@ -1996,10 +1997,8 @@ int smbd_send(struct TCP_Server_Info *server,
 	struct smb_rqst *rqst;
 	int rqst_idx;
 
-	if (info->transport_status != SMBD_CONNECTED) {
-		rc = -EAGAIN;
-		goto done;
-	}
+	if (info->transport_status != SMBD_CONNECTED)
+		return -EAGAIN;
 
 	/*
 	 * Add in the page array if there is one. The caller needs to set
@@ -2010,125 +2009,95 @@ int smbd_send(struct TCP_Server_Info *server,
 	for (i = 0; i < num_rqst; i++)
 		remaining_data_length += smb_rqst_len(server, &rqst_array[i]);
 
-	if (remaining_data_length > info->max_fragmented_send_size) {
+	if (unlikely(remaining_data_length > info->max_fragmented_send_size)) {
+		/* assertion: payload never exceeds negotiated maximum */
 		log_write(ERR, "payload size %d > max size %d\n",
 			remaining_data_length, info->max_fragmented_send_size);
-		rc = -EINVAL;
-		goto done;
+		return -EINVAL;
 	}
 
 	log_write(INFO, "num_rqst=%d total length=%u\n",
 			num_rqst, remaining_data_length);
 
 	rqst_idx = 0;
-next_rqst:
-	rqst = &rqst_array[rqst_idx];
-	iov = rqst->rq_iov;
-
-	cifs_dbg(FYI, "Sending smb (RDMA): idx=%d smb_len=%lu\n",
-		rqst_idx, smb_rqst_len(server, rqst));
-	for (i = 0; i < rqst->rq_nvec; i++)
-		dump_smb(iov[i].iov_base, iov[i].iov_len);
-
-
-	log_write(INFO, "rqst_idx=%d nvec=%d rqst->rq_npages=%d rq_pagesz=%d rq_tailsz=%d buflen=%lu\n",
-		  rqst_idx, rqst->rq_nvec, rqst->rq_npages, rqst->rq_pagesz,
-		  rqst->rq_tailsz, smb_rqst_len(server, rqst));
-
-	start = i = 0;
-	buflen = 0;
-	while (true) {
-		buflen += iov[i].iov_len;
-		if (buflen > max_iov_size) {
-			if (i > start) {
-				remaining_data_length -=
-					(buflen-iov[i].iov_len);
-				log_write(INFO, "sending iov[] from start=%d i=%d nvecs=%d remaining_data_length=%d\n",
-					  start, i, i - start,
-					  remaining_data_length);
-				rc = smbd_post_send_data(
-					info, &iov[start], i-start,
-					remaining_data_length);
-				if (rc)
-					goto done;
-			} else {
-				/* iov[start] is too big, break it */
-				nvecs = (buflen+max_iov_size-1)/max_iov_size;
-				log_write(INFO, "iov[%d] iov_base=%p buflen=%d break to %d vectors\n",
-					  start, iov[start].iov_base,
-					  buflen, nvecs);
-				for (j = 0; j < nvecs; j++) {
-					vec.iov_base =
-						(char *)iov[start].iov_base +
-						j*max_iov_size;
-					vec.iov_len = max_iov_size;
-					if (j == nvecs-1)
-						vec.iov_len =
-							buflen -
-							max_iov_size*(nvecs-1);
-					remaining_data_length -= vec.iov_len;
-					log_write(INFO,
-						"sending vec j=%d iov_base=%p iov_len=%zu remaining_data_length=%d\n",
-						  j, vec.iov_base, vec.iov_len,
-						  remaining_data_length);
-					rc = smbd_post_send_data(
-						info, &vec, 1,
-						remaining_data_length);
-					if (rc)
-						goto done;
+	do {
+		rqst = &rqst_array[rqst_idx];
+		iov = rqst->rq_iov;
+
+		cifs_dbg(FYI, "Sending smb (RDMA): idx=%d smb_len=%lu\n",
+			rqst_idx, smb_rqst_len(server, rqst));
+		remaining_vec_data_length = 0;
+		for (i = 0; i < rqst->rq_nvec; i++) {
+			remaining_vec_data_length += iov[i].iov_len;
+			dump_smb(iov[i].iov_base, iov[i].iov_len);
+		}
+
+		log_write(INFO, "rqst_idx=%d nvec=%d rqst->rq_npages=%d rq_pagesz=%d rq_tailsz=%d buflen=%lu\n",
+			  rqst_idx, rqst->rq_nvec,
+			  rqst->rq_npages, rqst->rq_pagesz,
+			  rqst->rq_tailsz, smb_rqst_len(server, rqst));
+
+		start = 0;
+		offset = 0;
+		do {
+			buflen = 0;
+			i = start;
+			j = 0;
+			while (i < rqst->rq_nvec &&
+				j < SMBDIRECT_MAX_SEND_SGE - 1 &&
+				buflen < max_iov_size) {
+
+				vecs[j].iov_base = iov[i].iov_base + offset;
+				if (buflen + iov[i].iov_len > max_iov_size) {
+					vecs[j].iov_len =
+						max_iov_size - vecs[j].iov_len;
+					buflen = max_iov_size;
+					offset = vecs[j].iov_len;
+				} else {
+					vecs[j].iov_len =
+						iov[i].iov_len - offset;
+					buflen += vecs[j].iov_len;
+					offset = 0;
+					++i;
 				}
-				i++;
-				if (i == rqst->rq_nvec)
-					break;
+				++j;
 			}
+
+			remaining_vec_data_length -= buflen;
+			remaining_data_length -= buflen;
+			log_write(INFO, "sending %s iov[%d] from start=%d nvecs=%d remaining_data_length=%d\n",
+					remaining_vec_data_length > 0 ?
+						"partial" : "complete",
+					rqst->rq_nvec, start, j,
+					remaining_data_length);
+
 			start = i;
-			buflen = 0;
-		} else {
-			i++;
-			if (i == rqst->rq_nvec) {
-				/* send out all remaining vecs */
-				remaining_data_length -= buflen;
-				log_write(INFO, "sending iov[] from start=%d i=%d nvecs=%d remaining_data_length=%d\n",
-					  start, i, i - start,
+			rc = smbd_post_send_data(info, vecs, j, remaining_data_length);
+			if (rc)
+				goto done;
+		} while (remaining_vec_data_length > 0);
+
+		/* now sending pages if there are any */
+		for (i = 0; i < rqst->rq_npages; i++) {
+			rqst_page_get_length(rqst, i, &buflen, &offset);
+			nvecs = (buflen + max_iov_size - 1) / max_iov_size;
+			log_write(INFO, "sending pages buflen=%d nvecs=%d\n",
+				buflen, nvecs);
+			for (j = 0; j < nvecs; j++) {
+				size = min_t(unsigned int, max_iov_size, remaining_data_length);
+				remaining_data_length -= size;
+				log_write(INFO, "sending pages i=%d offset=%d size=%d remaining_data_length=%d\n",
+					  i, j * max_iov_size + offset, size,
 					  remaining_data_length);
-				rc = smbd_post_send_data(info, &iov[start],
-					i-start, remaining_data_length);
+				rc = smbd_post_send_page(
+					info, rqst->rq_pages[i],
+					j*max_iov_size + offset,
+					size, remaining_data_length);
 				if (rc)
 					goto done;
-				break;
 			}
 		}
-		log_write(INFO, "looping i=%d buflen=%d\n", i, buflen);
-	}
-
-	/* now sending pages if there are any */
-	for (i = 0; i < rqst->rq_npages; i++) {
-		unsigned int offset;
-
-		rqst_page_get_length(rqst, i, &buflen, &offset);
-		nvecs = (buflen + max_iov_size - 1) / max_iov_size;
-		log_write(INFO, "sending pages buflen=%d nvecs=%d\n",
-			buflen, nvecs);
-		for (j = 0; j < nvecs; j++) {
-			size = max_iov_size;
-			if (j == nvecs-1)
-				size = buflen - j*max_iov_size;
-			remaining_data_length -= size;
-			log_write(INFO, "sending pages i=%d offset=%d size=%d remaining_data_length=%d\n",
-				  i, j * max_iov_size + offset, size,
-				  remaining_data_length);
-			rc = smbd_post_send_page(
-				info, rqst->rq_pages[i],
-				j*max_iov_size + offset,
-				size, remaining_data_length);
-			if (rc)
-				goto done;
-		}
-	}
-
-	rqst_idx++;
-	if (rqst_idx < num_rqst)
-		goto next_rqst;
+	} while (++rqst_idx < num_rqst);
 
 done:
 	/*
-- 
2.34.1



* [PATCH 6/6] Fix formatting of client smbdirect RDMA logging
  2022-09-13 22:19 [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes Tom Talpey
                   ` (4 preceding siblings ...)
  2022-09-13 22:19 ` [PATCH 5/6] Handle variable number of SGEs in client smbdirect send Tom Talpey
@ 2022-09-13 22:19 ` Tom Talpey
       [not found] ` <CAH2r5muhROptfZ+fUjhCfwzoeJqNku4Nkpjv63N5WTdhpEWWoA@mail.gmail.com>
  6 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-13 22:19 UTC (permalink / raw)
  To: smfrench, linux-cifs; +Cc: linkinjeon, senozhatsky, bmt, Tom Talpey

Make the debug logging more consistent in formatting of addresses,
lengths, and bitfields.
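
For reference, the convention applied throughout, shown on one of the
reworked calls (taken from the patched smbd_post_send_negotiate_req()):
addresses and keys print as 0x-prefixed hex, while lengths and counts
print as unsigned decimal.

	log_rdma_send(INFO, "sge addr=0x%llx length=%u lkey=0x%x\n",
		request->sge[0].addr,
		request->sge[0].length, request->sge[0].lkey);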

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/cifs/smbdirect.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 494ac916158f..4491c4a10bfc 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -270,7 +270,7 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct smbd_request *request =
 		container_of(wc->wr_cqe, struct smbd_request, cqe);
 
-	log_rdma_send(INFO, "smbd_request %p completed wc->status=%d\n",
+	log_rdma_send(INFO, "smbd_request 0x%p completed wc->status=%d\n",
 		request, wc->status);
 
 	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
@@ -448,7 +448,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct smbd_connection *info = response->info;
 	int data_length = 0;
 
-	log_rdma_recv(INFO, "response=%p type=%d wc status=%d wc opcode %d byte_len=%d pkey_index=%x\n",
+	log_rdma_recv(INFO, "response=0x%p type=%d wc status=%d wc opcode %d byte_len=%d pkey_index=%u\n",
 		      response, response->type, wc->status, wc->opcode,
 		      wc->byte_len, wc->pkey_index);
 
@@ -723,7 +723,7 @@ static int smbd_post_send_negotiate_req(struct smbd_connection *info)
 	send_wr.opcode = IB_WR_SEND;
 	send_wr.send_flags = IB_SEND_SIGNALED;
 
-	log_rdma_send(INFO, "sge addr=%llx length=%x lkey=%x\n",
+	log_rdma_send(INFO, "sge addr=0x%llx length=%u lkey=0x%x\n",
 		request->sge[0].addr,
 		request->sge[0].length, request->sge[0].lkey);
 
@@ -792,7 +792,7 @@ static int smbd_post_send(struct smbd_connection *info,
 
 	for (i = 0; i < request->num_sge; i++) {
 		log_rdma_send(INFO,
-			"rdma_request sge[%d] addr=%llu length=%u\n",
+			"rdma_request sge[%d] addr=0x%llx length=%u\n",
 			i, request->sge[i].addr, request->sge[i].length);
 		ib_dma_sync_single_for_device(
 			info->id->device,
@@ -1079,7 +1079,7 @@ static int smbd_negotiate(struct smbd_connection *info)
 
 	response->type = SMBD_NEGOTIATE_RESP;
 	rc = smbd_post_recv(info, response);
-	log_rdma_event(INFO, "smbd_post_recv rc=%d iov.addr=%llx iov.length=%x iov.lkey=%x\n",
+	log_rdma_event(INFO, "smbd_post_recv rc=%d iov.addr=0x%llx iov.length=%u iov.lkey=0x%x\n",
 		       rc, response->sge.addr,
 		       response->sge.length, response->sge.lkey);
 	if (rc)
@@ -1539,7 +1539,7 @@ static struct smbd_connection *_smbd_get_connection(
 
 	if (smbd_send_credit_target > info->id->device->attrs.max_cqe ||
 	    smbd_send_credit_target > info->id->device->attrs.max_qp_wr) {
-		log_rdma_event(ERR, "consider lowering send_credit_target = %d. Possible CQE overrun, device reporting max_cpe %d max_qp_wr %d\n",
+		log_rdma_event(ERR, "consider lowering send_credit_target = %d. Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n",
 			       smbd_send_credit_target,
 			       info->id->device->attrs.max_cqe,
 			       info->id->device->attrs.max_qp_wr);
@@ -1548,7 +1548,7 @@ static struct smbd_connection *_smbd_get_connection(
 
 	if (smbd_receive_credit_max > info->id->device->attrs.max_cqe ||
 	    smbd_receive_credit_max > info->id->device->attrs.max_qp_wr) {
-		log_rdma_event(ERR, "consider lowering receive_credit_max = %d. Possible CQE overrun, device reporting max_cpe %d max_qp_wr %d\n",
+		log_rdma_event(ERR, "consider lowering receive_credit_max = %d. Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n",
 			       smbd_receive_credit_max,
 			       info->id->device->attrs.max_cqe,
 			       info->id->device->attrs.max_qp_wr);
-- 
2.34.1



* Re: [RFC PATCH 0/6] Reduce SMBDirect RDMA SGE counts and sizes
       [not found] ` <CAH2r5muhROptfZ+fUjhCfwzoeJqNku4Nkpjv63N5WTdhpEWWoA@mail.gmail.com>
@ 2022-09-22 19:35   ` Tom Talpey
  0 siblings, 0 replies; 8+ messages in thread
From: Tom Talpey @ 2022-09-22 19:35 UTC (permalink / raw)
  To: Steve French; +Cc: linux-cifs, linkinjeon, senozhatsky, bmt

[ Resending to fix the missing "vger."! ]

On 9/20/2022 9:58 PM, Steve French wrote:
> are these still current?

Hi Steve, yes these are current. As you know, we were going to add
siw-based RDMA testing to the buildbot during last week's IOLab,
but we didn't get the chance. And this week, I've been forced to
self-isolate and have therefore had no chance to test further.

Here's the deal:

1) The changes are tested by me as described in the message. They
passed all my confidence checks, including artificially reducing
SGEs all the way down to 2.

2) But, I haven't seen any reviews, or anyone testing.

3) I won't be very productive until next week, assuming I feel better.
I'll be back on it as soon as I can.

4) Because neither client nor server ever worked over softiWARP before,
the patches shouldn't go to stable.

5) Because they only address RDMA operation, they won't break ordinary
users.

Your call.

Tom.


> On Tue, Sep 13, 2022 at 5:04 PM Tom Talpey <tom@talpey.com> wrote:
>>
>> Allocate fewer SGEs and standard packet sizes in both kernel SMBDirect
>> implementations.
>>
>> The current maximum values (16 SGEs and 8192 bytes) cause failures on the
>> SoftiWARP provider, and are suboptimal on others. Reduce these to 6 and
>> 1364. Additionally, recode smbd_send() to work with as few as 2 SGEs,
>> and for debug sanity, reformat client-side logging to more clearly show
>> addresses, lengths and flags in the appropriate base.
>>
>> Tested over SoftiWARP and SoftRoCE with shell, Connectathon basic and general,
>> and to be tested with XFStests here at SMB3 IOLab this week.
>>
>> Tom Talpey (6):
>>    Decrease the number of SMB3 smbdirect client SGEs
>>    Decrease the number of SMB3 smbdirect server SGEs
>>    Reduce client smbdirect max receive segment size
>>    Reduce server smbdirect max send/receive segment sizes
>>    Handle variable number of SGEs in client smbdirect send.
>>    Fix formatting of client smbdirect RDMA logging
>>
>>   fs/cifs/smbdirect.c       | 227 ++++++++++++++++----------------------
>>   fs/cifs/smbdirect.h       |  14 ++-
>>   fs/ksmbd/transport_rdma.c |   6 +-
>>   3 files changed, 109 insertions(+), 138 deletions(-)
>>
>> --
>> 2.34.1
>>
> 
> 

