* [PATCH net-next 0/3] net/smc: fixes 2018-03-14
@ 2018-03-14 10:00 Ursula Braun
  2018-03-14 10:01 ` [PATCH net-next 1/3] net/smc: pay attention to MAX_ORDER for CQ entries Ursula Braun
                   ` (3 more replies)
From: Ursula Braun @ 2018-03-14 10:00 UTC
  To: davem
  Cc: netdev, linux-s390, linux-rdma, schwidefsky, heiko.carstens,
	raspl, ubraun

Here are smc changes for the net-next tree.
The first patch enables SMC to work with mlx5 RoCE devices.
Patches 2 and 3 deal with link group freeing.

Thanks, Ursula

Karsten Graul (1):
  net/smc: schedule free_work when link group is terminated

Ursula Braun (2):
  net/smc: pay attention to MAX_ORDER for CQ entries
  net/smc: free link group without pending free_work only

 net/smc/af_smc.c   |  1 +
 net/smc/smc_core.c | 23 +++++++++++++++--------
 net/smc/smc_ib.c   | 10 +++++++++-
 net/smc/smc_wr.h   |  1 -
 4 files changed, 25 insertions(+), 10 deletions(-)

-- 
2.13.5


* [PATCH net-next 1/3] net/smc: pay attention to MAX_ORDER for CQ entries
  2018-03-14 10:00 [PATCH net-next 0/3] net/smc: fixes 2018-03-14 Ursula Braun
@ 2018-03-14 10:01 ` Ursula Braun
  2018-03-14 10:01 ` [PATCH net-next 2/3] net/smc: free link group without pending free_work only Ursula Braun
                   ` (2 subsequent siblings)
From: Ursula Braun @ 2018-03-14 10:01 UTC
  To: davem
  Cc: netdev, linux-s390, linux-rdma, schwidefsky, heiko.carstens,
	raspl, ubraun

smc allocates a certain number of CQ entries for the RoCE devices it
uses. For mlx5 devices the chosen constant results in a large
allocation, causing this warning:

[13355.124656] WARNING: CPU: 3 PID: 16535 at mm/page_alloc.c:3883 __alloc_pages_nodemask+0x2be/0x10c0
[13355.124657] Modules linked in: smc_diag(O) smc(O) xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ip6table_filter ip6_tables iptable_filter mlx5_ib ib_core sunrpc mlx5_core s390_trng rng_core ghash_s390 prng aes_s390 des_s390 des_generic sha512_s390 sha256_s390 sha1_s390 sha_common ptp pps_core eadm_sch dm_multipath dm_mod vhost_net tun vhost tap sch_fq_codel kvm ip_tables x_tables autofs4 [last unloaded: smc]
[13355.124672] CPU: 3 PID: 16535 Comm: kworker/3:0 Tainted: G           O    4.14.0uschi #1
[13355.124673] Hardware name: IBM 3906 M04 704 (LPAR)
[13355.124675] Workqueue: events smc_listen_work [smc]
[13355.124677] task: 00000000e2f22100 task.stack: 0000000084720000
[13355.124678] Krnl PSW : 0704c00180000000 000000000029da76 (__alloc_pages_nodemask+0x2be/0x10c0)
[13355.124681]            R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
[13355.124682] Krnl GPRS: 0000000000000000 00550e00014080c0 0000000000000000 0000000000000001
[13355.124684]            000000000029d8b6 00000000f3bfd710 0000000000000000 00000000014080c0
[13355.124685]            0000000000000009 00000000ec277a00 0000000000200000 0000000000000000
[13355.124686]            0000000000000000 00000000000001ff 000000000029d8b6 0000000084723720
[13355.124708] Krnl Code: 000000000029da6a: a7110200		tmll	%r1,512
                          000000000029da6e: a774ff29		brc	7,29d8c0
                         #000000000029da72: a7f40001		brc	15,29da74
                         >000000000029da76: a7f4ff25		brc	15,29d8c0
                          000000000029da7a: a7380000		lhi	%r3,0
                          000000000029da7e: a7f4fef1		brc	15,29d860
                          000000000029da82: 5820f0c4		l	%r2,196(%r15)
                          000000000029da86: a53e0048		llilh	%r3,72
[13355.124720] Call Trace:
[13355.124722] ([<000000000029d8b6>] __alloc_pages_nodemask+0xfe/0x10c0)
[13355.124724]  [<000000000013bd1e>] s390_dma_alloc+0x6e/0x148
[13355.124733]  [<000003ff802eeba6>] mlx5_dma_zalloc_coherent_node+0x8e/0xe0 [mlx5_core]
[13355.124740]  [<000003ff802eee18>] mlx5_buf_alloc_node+0x70/0x108 [mlx5_core]
[13355.124744]  [<000003ff804eb410>] mlx5_ib_create_cq+0x558/0x898 [mlx5_ib]
[13355.124749]  [<000003ff80407d40>] ib_create_cq+0x48/0x88 [ib_core]
[13355.124751]  [<000003ff80109fba>] smc_ib_setup_per_ibdev+0x52/0x118 [smc]
[13355.124753]  [<000003ff8010bcb6>] smc_conn_create+0x65e/0x728 [smc]
[13355.124755]  [<000003ff801081a2>] smc_listen_work+0x2d2/0x540 [smc]
[13355.124756]  [<0000000000162c66>] process_one_work+0x1be/0x440
[13355.124758]  [<0000000000162f40>] worker_thread+0x58/0x458
[13355.124759]  [<0000000000169e7e>] kthread+0x14e/0x168
[13355.124760]  [<00000000009ce8be>] kernel_thread_starter+0x6/0xc
[13355.124762]  [<00000000009ce8b8>] kernel_thread_starter+0x0/0xc
[13355.124762] Last Breaking-Event-Address:
[13355.124764]  [<000000000029da72>] __alloc_pages_nodemask+0x2ba/0x10c0
[13355.124764] ---[ end trace 34be38b581c0b585 ]---

This patch reduces the smc constant for the maximum number of
allocated completion queue entries, SMC_MAX_CQE, by 2 to avoid high
round-up values in the mlx5 code, and reduces the number of allocated
completion queue entries even further if the final allocation for an
mlx5 device would hit the MAX_ORDER limit.
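
For illustration, a hedged user-space sketch of the new clamp
arithmetic; MAX_ORDER = 9 and PAGE_SIZE = 4096 are assumed values for
a configuration where the limit is hit, not values taken from the
patch:

#include <stdio.h>

#define SMC_MAX_CQE 32766	/* max. # of completion queue elements */
#define MAX_ORDER 9		/* assumed kernel config value */
#define PAGE_SIZE 4096		/* assumed page size */

int main(void)
{
	/* 64-byte CQE -> order 6; mlx5 uses 128-byte CQEs (order 7)
	 * on systems with 128-byte cache lines
	 */
	int cqe_size_order = 6;
	int smc_order = MAX_ORDER - cqe_size_order - 1;	/* 2 here */
	unsigned int cqe = SMC_MAX_CQE;

	if (SMC_MAX_CQE + 2 > (1U << smc_order) * PAGE_SIZE)
		cqe = (1U << smc_order) * PAGE_SIZE - 2;
	printf("cqe = %u\n", cqe);	/* prints 16382: clamped */
	return 0;
}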

Reported-by: Ihnken Menssen <menssen@de.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
---
 net/smc/smc_ib.c | 10 +++++++++-
 net/smc/smc_wr.h |  1 -
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
index 2a8957bd6d38..26df554f7588 100644
--- a/net/smc/smc_ib.c
+++ b/net/smc/smc_ib.c
@@ -23,6 +23,8 @@
 #include "smc_wr.h"
 #include "smc.h"
 
+#define SMC_MAX_CQE 32766	/* max. # of completion queue elements */
+
 #define SMC_QP_MIN_RNR_TIMER		5
 #define SMC_QP_TIMEOUT			15 /* 4096 * 2 ** timeout usec */
 #define SMC_QP_RETRY_CNT			7 /* 7: infinite */
@@ -438,9 +440,15 @@ int smc_ib_remember_port_attr(struct smc_ib_device *smcibdev, u8 ibport)
 long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev)
 {
 	struct ib_cq_init_attr cqattr =	{
-		.cqe = SMC_WR_MAX_CQE, .comp_vector = 0 };
+		.cqe = SMC_MAX_CQE, .comp_vector = 0 };
+	int cqe_size_order, smc_order;
 	long rc;
 
+	/* the calculated number of cq entries fits to mlx5 cq allocation */
+	cqe_size_order = cache_line_size() == 128 ? 7 : 6;
+	smc_order = MAX_ORDER - cqe_size_order - 1;
+	if (SMC_MAX_CQE + 2 > (0x00000001 << smc_order) * PAGE_SIZE)
+		cqattr.cqe = (0x00000001 << smc_order) * PAGE_SIZE - 2;
 	smcibdev->roce_cq_send = ib_create_cq(smcibdev->ibdev,
 					      smc_wr_tx_cq_handler, NULL,
 					      smcibdev, &cqattr);
diff --git a/net/smc/smc_wr.h b/net/smc/smc_wr.h
index ef0c3494c9cb..210bec3c3ebe 100644
--- a/net/smc/smc_wr.h
+++ b/net/smc/smc_wr.h
@@ -19,7 +19,6 @@
 #include "smc.h"
 #include "smc_core.h"
 
-#define SMC_WR_MAX_CQE 32768	/* max. # of completion queue elements */
 #define SMC_WR_BUF_CNT 16	/* # of ctrl buffers per link */
 
 #define SMC_WR_TX_WAIT_FREE_SLOT_TIME	(10 * HZ)
-- 
2.13.5


* [PATCH net-next 2/3] net/smc: free link group without pending free_work only
  2018-03-14 10:00 [PATCH net-next 0/3] net/smc: fixes 2018-03-14 Ursula Braun
  2018-03-14 10:01 ` [PATCH net-next 1/3] net/smc: pay attention to MAX_ORDER for CQ entries Ursula Braun
@ 2018-03-14 10:01 ` Ursula Braun
  2018-03-14 10:01 ` [PATCH net-next 3/3] net/smc: schedule free_work when link group is terminated Ursula Braun
  2018-03-14 17:41 ` [PATCH net-next 0/3] net/smc: fixes 2018-03-14 David Miller
From: Ursula Braun @ 2018-03-14 10:01 UTC
  To: davem
  Cc: netdev, linux-s390, linux-rdma, schwidefsky, heiko.carstens,
	raspl, ubraun

Make sure there is no pending or running free_work worker for a link
group when that link group is freed.
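
As a minimal sketch of the underlying pattern (hypothetical struct
and function names, not the SMC code itself):
cancel_delayed_work_sync() removes a pending delayed work and waits
for a running one, so after it returns the worker can no longer touch
the object being freed:

#include <linux/slab.h>
#include <linux/workqueue.h>

struct obj {				/* stand-in for the link group */
	struct delayed_work free_work;
};

static void obj_destroy(struct obj *o)
{
	/* cancel a pending free_work and wait for a running one */
	cancel_delayed_work_sync(&o->free_work);
	kfree(o);			/* safe: no worker references *o */
}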

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
---
 net/smc/af_smc.c   | 1 +
 net/smc/smc_core.c | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 2c6f4e0a9f3d..649489f825a5 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -1477,6 +1477,7 @@ static void __exit smc_exit(void)
 	spin_unlock_bh(&smc_lgr_list.lock);
 	list_for_each_entry_safe(lgr, lg, &lgr_freeing_list, list) {
 		list_del_init(&lgr->list);
+		cancel_delayed_work_sync(&lgr->free_work);
 		smc_lgr_free(lgr); /* free link group */
 	}
 	static_branch_disable(&tcp_have_smc);
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index f76f60e463cb..ec6189fe2a48 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -140,7 +140,8 @@ static void smc_lgr_free_work(struct work_struct *work)
 	list_del_init(&lgr->list); /* remove from smc_lgr_list */
 free:
 	spin_unlock_bh(&smc_lgr_list.lock);
-	smc_lgr_free(lgr);
+	if (!delayed_work_pending(&lgr->free_work))
+		smc_lgr_free(lgr);
 }
 
 /* create a new SMC link group */
-- 
2.13.5


* [PATCH net-next 3/3] net/smc: schedule free_work when link group is terminated
  2018-03-14 10:00 [PATCH net-next 0/3] net/smc: fixes 2018-03-14 Ursula Braun
  2018-03-14 10:01 ` [PATCH net-next 1/3] net/smc: pay attention to MAX_ORDER for CQ entries Ursula Braun
  2018-03-14 10:01 ` [PATCH net-next 2/3] net/smc: free link group without pending free_work only Ursula Braun
@ 2018-03-14 10:01 ` Ursula Braun
  2018-03-14 17:41 ` [PATCH net-next 0/3] net/smc: fixes 2018-03-14 David Miller
From: Ursula Braun @ 2018-03-14 10:01 UTC
  To: davem
  Cc: netdev, linux-s390, linux-rdma, schwidefsky, heiko.carstens,
	raspl, ubraun

From: Karsten Graul <kgraul@linux.vnet.ibm.com>

The free_work worker must be scheduled when the link group is
abnormally terminated; otherwise such a link group is never freed.
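
A hedged sketch of the scheduling helper this patch factors out; the
struct, names and delays below are illustrative, not the exact SMC
values:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct obj {				/* stand-in for the link group */
	struct delayed_work free_work;
	bool client;
};

#define FREE_DELAY_SERV	(600 * HZ)	/* illustrative delays only */
#define FREE_DELAY_CLNT	(FREE_DELAY_SERV + 10 * HZ)

static void schedule_free(struct obj *o)
{
	/* mod_delayed_work() queues free_work if it is idle and only
	 * re-arms the timer if it is already pending, so the normal
	 * unregister path and the terminate path can share this helper
	 */
	mod_delayed_work(system_wq, &o->free_work,
			 o->client ? FREE_DELAY_CLNT : FREE_DELAY_SERV);
}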

Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
---
 net/smc/smc_core.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index ec6189fe2a48..f44f6803f7ff 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -32,6 +32,17 @@
 
 static u32 smc_lgr_num;			/* unique link group number */
 
+static void smc_lgr_schedule_free_work(struct smc_link_group *lgr)
+{
+	/* client link group creation always follows the server link group
+	 * creation. For client use a somewhat higher removal delay time,
+	 * otherwise there is a risk of out-of-sync link groups.
+	 */
+	mod_delayed_work(system_wq, &lgr->free_work,
+			 lgr->role == SMC_CLNT ? SMC_LGR_FREE_DELAY_CLNT :
+						 SMC_LGR_FREE_DELAY_SERV);
+}
+
 /* Register connection's alert token in our lookup structure.
  * To use rbtrees we have to implement our own insert core.
  * Requires @conns_lock
@@ -111,13 +122,7 @@ static void smc_lgr_unregister_conn(struct smc_connection *conn)
 	write_unlock_bh(&lgr->conns_lock);
 	if (!reduced || lgr->conns_num)
 		return;
-	/* client link group creation always follows the server link group
-	 * creation. For client use a somewhat higher removal delay time,
-	 * otherwise there is a risk of out-of-sync link groups.
-	 */
-	mod_delayed_work(system_wq, &lgr->free_work,
-			 lgr->role == SMC_CLNT ? SMC_LGR_FREE_DELAY_CLNT :
-						 SMC_LGR_FREE_DELAY_SERV);
+	smc_lgr_schedule_free_work(lgr);
 }
 
 static void smc_lgr_free_work(struct work_struct *work)
@@ -344,6 +349,7 @@ void smc_lgr_terminate(struct smc_link_group *lgr)
 	}
 	write_unlock_bh(&lgr->conns_lock);
 	wake_up(&lgr->lnk[SMC_SINGLE_LINK].wr_reg_wait);
+	smc_lgr_schedule_free_work(lgr);
 }
 
 /* Determine vlan of internal TCP socket.
-- 
2.13.5


* Re: [PATCH net-next 0/3] net/smc: fixes 2018-03-14
  2018-03-14 10:00 [PATCH net-next 0/3] net/smc: fixes 2018-03-14 Ursula Braun
                   ` (2 preceding siblings ...)
  2018-03-14 10:01 ` [PATCH net-next 3/3] net/smc: schedule free_work when link group is terminated Ursula Braun
@ 2018-03-14 17:41 ` David Miller
From: David Miller @ 2018-03-14 17:41 UTC
  To: ubraun; +Cc: netdev, linux-s390, linux-rdma, schwidefsky, heiko.carstens, raspl

From: Ursula Braun <ubraun@linux.vnet.ibm.com>
Date: Wed, 14 Mar 2018 11:00:59 +0100

> Here are smc changes for the net-next tree.
> The first patch enables SMC to work with mlx5 RoCE devices.
> Patches 2 and 3 deal with link group freeing.

Series applied, thank you.
