* [PATCH rdma-next 00/10] Various clean ups to ib_cm
@ 2020-05-06 7:46 Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 01/10] RDMA/addr: Mark addr_resolve as might_sleep() Leon Romanovsky
` (10 more replies)
0 siblings, 11 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, Danit Goldberg, linux-kernel, linux-rdma
From: Leon Romanovsky <leonro@mellanox.com>
From Jason,
These are intended to be no-functional change edits that tidy up the
code flow or coding style.
Thanks
Danit Goldberg (1):
RDMA/cm: Remove unused store to ret in cm_rej_handler
Jason Gunthorpe (9):
RDMA/addr: Mark addr_resolve as might_sleep()
RDMA/cm: Remove return code from add_cm_id_to_port_list
RDMA/cm: Pull duplicated code into cm_queue_work_unlock()
RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait
RDMA/cm: Add a note explaining how the timewait is eventually freed
RDMA/cm: Make find_remote_id() return a cm_id_private
RDMA/cm: Remove the cm_free_id() wrapper function
RDMA/cm: Remove needless cm_id variable
RDMA/cm: Increment the refcount inside cm_find_listen()
drivers/infiniband/core/addr.c | 4 +
drivers/infiniband/core/cm.c | 239 +++++++++++----------------------
2 files changed, 85 insertions(+), 158 deletions(-)
--
2.26.2
* [PATCH rdma-next 01/10] RDMA/addr: Mark addr_resolve as might_sleep()
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 02/10] RDMA/cm: Remove return code from add_cm_id_to_port_list Leon Romanovsky
` (9 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
One path through fetch_ha() reaches ib_nl_fetch_ha(), which calls
nlmsg_new(GFP_KERNEL), a sleeping call. This path is very rare, so mark
fetch_ha() and the module's external entry point that conditionally calls
through to fetch_ha() as might_sleep().
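The value of the annotation can be sketched in a userspace model (illustrative names, not the kernel API): might_sleep() flags atomic-context callers unconditionally, even on the common path that never actually sleeps.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of the kernel's might_sleep() debug
 * annotation. In the kernel this warns under CONFIG_DEBUG_ATOMIC_SLEEP;
 * here atomic context and the warning are modeled as plain flags. */
static bool in_atomic_context;
static bool warned;

static void might_sleep_model(void)
{
	if (in_atomic_context)
		warned = true; /* kernel: WARN splat */
}

/* Rare sleeping path: stands in for nlmsg_new(GFP_KERNEL). */
static int rare_sleeping_alloc(void)
{
	return 0;
}

static int fetch_ha_model(bool take_rare_path)
{
	/* Annotate the entry point, so atomic callers are caught even
	 * when the rare sleeping branch is not taken on this call. */
	might_sleep_model();
	if (take_rare_path)
		return rare_sleeping_alloc();
	return 0;
}
```

A buggy atomic-context caller is flagged on the first call, not only on the rare run that happens to hit the allocation.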
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/addr.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
index 1753a9801b70..3a98439bba83 100644
--- a/drivers/infiniband/core/addr.c
+++ b/drivers/infiniband/core/addr.c
@@ -371,6 +371,8 @@ static int fetch_ha(const struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
(const void *)&dst_in6->sin6_addr;
sa_family_t family = dst_in->sa_family;
+ might_sleep();
+
/* If we have a gateway in IB mode then it must be an IB network */
if (has_gateway(dst, family) && dev_addr->network == RDMA_NETWORK_IB)
return ib_nl_fetch_ha(dev_addr, daddr, seq, family);
@@ -727,6 +729,8 @@ int roce_resolve_route_from_path(struct sa_path_rec *rec,
struct rdma_dev_addr dev_addr = {};
int ret;
+ might_sleep();
+
if (rec->roce.route_resolved)
return 0;
--
2.26.2
* [PATCH rdma-next 02/10] RDMA/cm: Remove return code from add_cm_id_to_port_list
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 01/10] RDMA/addr: Mark addr_resolve as might_sleep() Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 03/10] RDMA/cm: Remove unused store to ret in cm_rej_handler Leon Romanovsky
` (8 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
This cannot happen; all callers pass in one of the two pointers. Use
a WARN_ON guard instead of a return code.
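The idea can be sketched in a userspace model (illustrative names, not the kernel code): when an error return could only ever signal an impossible condition, drop the return value and flag the condition with a WARN_ON-style check instead.

```c
#include <assert.h>
#include <stdbool.h>

/* Model of WARN_ON(): records that the impossible branch was taken. */
static bool warned;
#define WARN_ON_MODEL(cond) do { if (cond) warned = true; } while (0)

struct priv_model {
	int av;
	int alt_av;
};

/* Returns void: the "else" branch cannot happen because every caller
 * passes either &p->av or &p->alt_av. */
static void add_to_port_list_model(struct priv_model *p, int *av)
{
	if (av == &p->av)
		; /* list_add_tail(prim_list) in the real code */
	else if (av == &p->alt_av)
		; /* list_add_tail(altr_list) in the real code */
	else
		WARN_ON_MODEL(true); /* impossible by contract */
}
```

Callers no longer need dead error-unwind code for a case that cannot occur.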
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index e77e01b03622..78c9acc9393a 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -478,24 +478,19 @@ static int cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
grh, &av->ah_attr);
}
-static int add_cm_id_to_port_list(struct cm_id_private *cm_id_priv,
- struct cm_av *av,
- struct cm_port *port)
+static void add_cm_id_to_port_list(struct cm_id_private *cm_id_priv,
+ struct cm_av *av, struct cm_port *port)
{
unsigned long flags;
- int ret = 0;
spin_lock_irqsave(&cm.lock, flags);
-
if (&cm_id_priv->av == av)
list_add_tail(&cm_id_priv->prim_list, &port->cm_priv_prim_list);
else if (&cm_id_priv->alt_av == av)
list_add_tail(&cm_id_priv->altr_list, &port->cm_priv_altr_list);
else
- ret = -EINVAL;
-
+ WARN_ON(true);
spin_unlock_irqrestore(&cm.lock, flags);
- return ret;
}
static struct cm_port *
@@ -576,12 +571,7 @@ static int cm_init_av_by_path(struct sa_path_rec *path,
return ret;
av->timeout = path->packet_life_time + 1;
-
- ret = add_cm_id_to_port_list(cm_id_priv, av, port);
- if (ret) {
- rdma_destroy_ah_attr(&new_ah_attr);
- return ret;
- }
+ add_cm_id_to_port_list(cm_id_priv, av, port);
rdma_move_ah_attr(&av->ah_attr, &new_ah_attr);
return 0;
}
--
2.26.2
* [PATCH rdma-next 03/10] RDMA/cm: Remove unused store to ret in cm_rej_handler
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 01/10] RDMA/addr: Mark addr_resolve as might_sleep() Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 02/10] RDMA/cm: Remove return code from add_cm_id_to_port_list Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 04/10] RDMA/cm: Pull duplicated code into cm_queue_work_unlock() Leon Romanovsky
` (7 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: Danit Goldberg, linux-rdma
From: Danit Goldberg <danitg@mellanox.com>
The code at the 'out' label doesn't read ret, so there is no need to set it.
Signed-off-by: Danit Goldberg <danitg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 78c9acc9393a..439055c463fa 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3083,7 +3083,6 @@ static int cm_rej_handler(struct cm_work *work)
__func__, be32_to_cpu(cm_id_priv->id.local_id),
cm_id_priv->id.state);
spin_unlock_irq(&cm_id_priv->lock);
- ret = -EINVAL;
goto out;
}
--
2.26.2
* [PATCH rdma-next 04/10] RDMA/cm: Pull duplicated code into cm_queue_work_unlock()
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (2 preceding siblings ...)
2020-05-06 7:46 ` [PATCH rdma-next 03/10] RDMA/cm: Remove unused store to ret in cm_rej_handler Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 05/10] RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait Leon Romanovsky
` (6 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
While unlocking a spinlock held by the caller is a disturbing pattern,
this extensively duplicated code is even worse. Pull all the duplicates
into a function and explain the purpose of the algorithm.
The call on the creation side in cm_req_handler() is different: it was
micro-optimized on the basis that work_count == -1 during creation.
Remove that micro-optimization and just use the normal function.
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 146 +++++++++++------------------------
1 file changed, 44 insertions(+), 102 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 439055c463fa..e7c20c44a361 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -83,8 +83,11 @@ const char *__attribute_const__ ibcm_reject_msg(int reason)
EXPORT_SYMBOL(ibcm_reject_msg);
struct cm_id_private;
+struct cm_work;
static int cm_add_one(struct ib_device *device);
static void cm_remove_one(struct ib_device *device, void *client_data);
+static void cm_process_work(struct cm_id_private *cm_id_priv,
+ struct cm_work *work);
static int cm_send_sidr_rep_locked(struct cm_id_private *cm_id_priv,
struct ib_cm_sidr_rep_param *param);
static int cm_send_dreq_locked(struct cm_id_private *cm_id_priv,
@@ -911,6 +914,35 @@ static void cm_free_work(struct cm_work *work)
kfree(work);
}
+static void cm_queue_work_unlock(struct cm_id_private *cm_id_priv,
+ struct cm_work *work)
+{
+ bool immediate;
+
+ /*
+ * To deliver the event to the user callback we have to drop the
+ * spinlock, however, we need to ensure that the user callback is single
+ * threaded and receives events in temporal order. If there are
+ * already events being processed then thread new events onto a list,
+ * the thread currently processing will pick them up.
+ */
+ immediate = atomic_inc_and_test(&cm_id_priv->work_count);
+ if (!immediate) {
+ list_add_tail(&work->list, &cm_id_priv->work_list);
+ /*
+ * This routine always consumes the incoming reference. Once queued
+ * to the work_list, a reference is held by the thread
+ * currently running cm_process_work() and this reference is not
+ * needed.
+ */
+ cm_deref_id(cm_id_priv);
+ }
+ spin_unlock_irq(&cm_id_priv->lock);
+
+ if (immediate)
+ cm_process_work(cm_id_priv, work);
+}
+
static inline int cm_convert_to_ms(int iba_time)
{
/* approximate conversion to ms from 4.096us x 2^iba_time */
@@ -2159,9 +2191,7 @@ static int cm_req_handler(struct cm_work *work)
/* Refcount belongs to the event, pairs with cm_process_work() */
refcount_inc(&cm_id_priv->refcount);
- atomic_inc(&cm_id_priv->work_count);
- spin_unlock_irq(&cm_id_priv->lock);
- cm_process_work(cm_id_priv, work);
+ cm_queue_work_unlock(cm_id_priv, work);
/*
* Since this ID was just created and was not made visible to other MAD
* handlers until the cm_finalize_id() above we know that the
@@ -2519,15 +2549,7 @@ static int cm_rep_handler(struct cm_work *work)
cm_id_priv->alt_av.timeout - 1);
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
error:
@@ -2538,7 +2560,6 @@ static int cm_rep_handler(struct cm_work *work)
static int cm_establish_handler(struct cm_work *work)
{
struct cm_id_private *cm_id_priv;
- int ret;
/* See comment in cm_establish about lookup. */
cm_id_priv = cm_acquire_id(work->local_id, work->remote_id);
@@ -2552,15 +2573,7 @@ static int cm_establish_handler(struct cm_work *work)
}
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
cm_deref_id(cm_id_priv);
@@ -2571,7 +2584,6 @@ static int cm_rtu_handler(struct cm_work *work)
{
struct cm_id_private *cm_id_priv;
struct cm_rtu_msg *rtu_msg;
- int ret;
rtu_msg = (struct cm_rtu_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_id(
@@ -2594,15 +2606,7 @@ static int cm_rtu_handler(struct cm_work *work)
cm_id_priv->id.state = IB_CM_ESTABLISHED;
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
cm_deref_id(cm_id_priv);
@@ -2795,7 +2799,6 @@ static int cm_dreq_handler(struct cm_work *work)
struct cm_id_private *cm_id_priv;
struct cm_dreq_msg *dreq_msg;
struct ib_mad_send_buf *msg = NULL;
- int ret;
dreq_msg = (struct cm_dreq_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_id(
@@ -2860,15 +2863,7 @@ static int cm_dreq_handler(struct cm_work *work)
}
cm_id_priv->id.state = IB_CM_DREQ_RCVD;
cm_id_priv->tid = dreq_msg->hdr.tid;
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
unlock: spin_unlock_irq(&cm_id_priv->lock);
@@ -2880,7 +2875,6 @@ static int cm_drep_handler(struct cm_work *work)
{
struct cm_id_private *cm_id_priv;
struct cm_drep_msg *drep_msg;
- int ret;
drep_msg = (struct cm_drep_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_id(
@@ -2901,15 +2895,7 @@ static int cm_drep_handler(struct cm_work *work)
cm_enter_timewait(cm_id_priv);
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
cm_deref_id(cm_id_priv);
@@ -3037,7 +3023,6 @@ static int cm_rej_handler(struct cm_work *work)
{
struct cm_id_private *cm_id_priv;
struct cm_rej_msg *rej_msg;
- int ret;
rej_msg = (struct cm_rej_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_rejected_id(rej_msg);
@@ -3086,15 +3071,7 @@ static int cm_rej_handler(struct cm_work *work)
goto out;
}
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
cm_deref_id(cm_id_priv);
@@ -3204,7 +3181,7 @@ static int cm_mra_handler(struct cm_work *work)
{
struct cm_id_private *cm_id_priv;
struct cm_mra_msg *mra_msg;
- int timeout, ret;
+ int timeout;
mra_msg = (struct cm_mra_msg *)work->mad_recv_wc->recv_buf.mad;
cm_id_priv = cm_acquire_mraed_id(mra_msg);
@@ -3264,15 +3241,7 @@ static int cm_mra_handler(struct cm_work *work)
cm_id_priv->msg->context[1] = (void *) (unsigned long)
cm_id_priv->id.state;
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
spin_unlock_irq(&cm_id_priv->lock);
@@ -3407,15 +3376,7 @@ static int cm_lap_handler(struct cm_work *work)
cm_id_priv->id.lap_state = IB_CM_LAP_RCVD;
cm_id_priv->tid = lap_msg->hdr.tid;
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
unlock: spin_unlock_irq(&cm_id_priv->lock);
@@ -3427,7 +3388,6 @@ static int cm_apr_handler(struct cm_work *work)
{
struct cm_id_private *cm_id_priv;
struct cm_apr_msg *apr_msg;
- int ret;
/* Currently Alternate path messages are not supported for
* RoCE link layer.
@@ -3462,16 +3422,7 @@ static int cm_apr_handler(struct cm_work *work)
cm_id_priv->id.lap_state = IB_CM_LAP_IDLE;
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
cm_id_priv->msg = NULL;
-
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
cm_deref_id(cm_id_priv);
@@ -3482,7 +3433,6 @@ static int cm_timewait_handler(struct cm_work *work)
{
struct cm_timewait_info *timewait_info;
struct cm_id_private *cm_id_priv;
- int ret;
timewait_info = container_of(work, struct cm_timewait_info, work);
spin_lock_irq(&cm.lock);
@@ -3501,15 +3451,7 @@ static int cm_timewait_handler(struct cm_work *work)
goto out;
}
cm_id_priv->id.state = IB_CM_IDLE;
- ret = atomic_inc_and_test(&cm_id_priv->work_count);
- if (!ret)
- list_add_tail(&work->list, &cm_id_priv->work_list);
- spin_unlock_irq(&cm_id_priv->lock);
-
- if (ret)
- cm_process_work(cm_id_priv, work);
- else
- cm_deref_id(cm_id_priv);
+ cm_queue_work_unlock(cm_id_priv, work);
return 0;
out:
cm_deref_id(cm_id_priv);
--
2.26.2
* [PATCH rdma-next 05/10] RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (3 preceding siblings ...)
2020-05-06 7:46 ` [PATCH rdma-next 04/10] RDMA/cm: Pull duplicated code into cm_queue_work_unlock() Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 06/10] RDMA/cm: Add a note explaining how the timewait is eventually freed Leon Romanovsky
` (5 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
Also rename it to cm_remove_remote(). This function now removes the
tracking of the remote ID/QPN in the red/black trees from a cm_id_private.
Replace an open-coded version with a call. The open-coded version deleted
only the remote_id, but at this call site the qpn cannot have been in the
RB tree either, so cm_remove_remote() does the same thing.
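Why the substitution is safe can be sketched in a userspace model (simplified fields, not the real structs): each rb-tree membership is guarded by an "inserted" flag, so removal touches only entries that were actually inserted and is idempotent.

```c
#include <assert.h>
#include <stdbool.h>

/* Each flag stands in for membership of the corresponding rb-tree
 * node (remote_id_node / remote_qp_node) in cm.remote_id_table and
 * cm.remote_qp_table. */
struct timewait_rb_model {
	bool inserted_remote_id;
	bool inserted_remote_qp;
};

static void remove_remote_model(struct timewait_rb_model *tw)
{
	if (tw->inserted_remote_id)
		tw->inserted_remote_id = false; /* rb_erase(remote_id_node) */
	if (tw->inserted_remote_qp)
		tw->inserted_remote_qp = false; /* rb_erase(remote_qp_node) */
}
```

At the cm_rep_handler() call site only the remote_id was ever inserted, so the generic function erases exactly what the open-coded version did, and calling it again is harmless.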
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index e7c20c44a361..3899e9d88eb1 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -968,8 +968,10 @@ static u8 cm_ack_timeout(u8 ca_ack_delay, u8 packet_life_time)
return min(31, ack_timeout);
}
-static void cm_cleanup_timewait(struct cm_timewait_info *timewait_info)
+static void cm_remove_remote(struct cm_id_private *cm_id_priv)
{
+ struct cm_timewait_info *timewait_info = cm_id_priv->timewait_info;
+
if (timewait_info->inserted_remote_id) {
rb_erase(&timewait_info->remote_id_node, &cm.remote_id_table);
timewait_info->inserted_remote_id = 0;
@@ -1008,7 +1010,7 @@ static void cm_enter_timewait(struct cm_id_private *cm_id_priv)
return;
spin_lock_irqsave(&cm.lock, flags);
- cm_cleanup_timewait(cm_id_priv->timewait_info);
+ cm_remove_remote(cm_id_priv);
list_add_tail(&cm_id_priv->timewait_info->list, &cm.timewait_list);
spin_unlock_irqrestore(&cm.lock, flags);
@@ -1039,7 +1041,7 @@ static void cm_reset_to_idle(struct cm_id_private *cm_id_priv)
cm_id_priv->id.state = IB_CM_IDLE;
if (cm_id_priv->timewait_info) {
spin_lock_irqsave(&cm.lock, flags);
- cm_cleanup_timewait(cm_id_priv->timewait_info);
+ cm_remove_remote(cm_id_priv);
spin_unlock_irqrestore(&cm.lock, flags);
kfree(cm_id_priv->timewait_info);
cm_id_priv->timewait_info = NULL;
@@ -1140,7 +1142,7 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
spin_lock(&cm.lock);
/* Required for cleanup paths related cm_req_handler() */
if (cm_id_priv->timewait_info) {
- cm_cleanup_timewait(cm_id_priv->timewait_info);
+ cm_remove_remote(cm_id_priv);
kfree(cm_id_priv->timewait_info);
cm_id_priv->timewait_info = NULL;
}
@@ -1986,7 +1988,7 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
/* Check for stale connections. */
timewait_info = cm_insert_remote_qpn(cm_id_priv->timewait_info);
if (timewait_info) {
- cm_cleanup_timewait(cm_id_priv->timewait_info);
+ cm_remove_remote(cm_id_priv);
cur_cm_id_priv = cm_acquire_id(timewait_info->work.local_id,
timewait_info->work.remote_id);
@@ -2007,7 +2009,7 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
cm_id_priv->id.device,
cpu_to_be64(IBA_GET(CM_REQ_SERVICE_ID, req_msg)));
if (!listen_cm_id_priv) {
- cm_cleanup_timewait(cm_id_priv->timewait_info);
+ cm_remove_remote(cm_id_priv);
spin_unlock_irq(&cm.lock);
cm_issue_rej(work->port, work->mad_recv_wc,
IB_CM_REJ_INVALID_SERVICE_ID, CM_MSG_RESPONSE_REQ,
@@ -2502,9 +2504,7 @@ static int cm_rep_handler(struct cm_work *work)
/* Check for a stale connection. */
timewait_info = cm_insert_remote_qpn(cm_id_priv->timewait_info);
if (timewait_info) {
- rb_erase(&cm_id_priv->timewait_info->remote_id_node,
- &cm.remote_id_table);
- cm_id_priv->timewait_info->inserted_remote_id = 0;
+ cm_remove_remote(cm_id_priv);
cur_cm_id_priv = cm_acquire_id(timewait_info->work.local_id,
timewait_info->work.remote_id);
--
2.26.2
* [PATCH rdma-next 06/10] RDMA/cm: Add a note explaining how the timewait is eventually freed
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (4 preceding siblings ...)
2020-05-06 7:46 ` [PATCH rdma-next 05/10] RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 07/10] RDMA/cm: Make find_remote_id() return a cm_id_private Leon Romanovsky
` (4 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
The way the cm_timewait_info is converted into a work and then freed
is very subtle and surprising; add a note clarifying the lifetime
here.
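The layout rule that the added BUILD_BUG_ON documents can be sketched in a userspace model (simplified fields, not the real structs): because the work is the first member of the timewait info, the two pointers are the same address, so freeing the work in cm_free_work() frees the whole cm_timewait_info.

```c
#include <assert.h>
#include <stddef.h>

struct work_model {
	int pending;
};

struct timewait_info_model {
	/* Must stay the first member; this is exactly what
	 * BUILD_BUG_ON(offsetof(struct cm_timewait_info, work) != 0)
	 * enforces at compile time in the real code. */
	struct work_model work;
	int remote_qpn;
};
```

With offset zero, a kfree() given the embedded work pointer is given the allocation's base address, which is why handing the timewait_info off as a work item and freeing it through cm_free_work() is correct.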
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 3899e9d88eb1..acb0c5c5c5cf 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1029,6 +1029,11 @@ static void cm_enter_timewait(struct cm_id_private *cm_id_priv)
msecs_to_jiffies(wait_time));
spin_unlock_irqrestore(&cm.lock, flags);
+ /*
+ * The timewait_info is converted into a work and gets freed during
+ * cm_free_work() in cm_timewait_handler().
+ */
+ BUILD_BUG_ON(offsetof(struct cm_timewait_info, work) != 0);
cm_id_priv->timewait_info = NULL;
}
--
2.26.2
* [PATCH rdma-next 07/10] RDMA/cm: Make find_remote_id() return a cm_id_private
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (5 preceding siblings ...)
2020-05-06 7:46 ` [PATCH rdma-next 06/10] RDMA/cm: Add a note explaining how the timewait is eventually freed Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 08/10] RDMA/cm: Remove the cm_free_id() wrapper function Leon Romanovsky
` (3 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
The only caller doesn't care about the timewait, so acquire and return the
cm_id_private from the function.
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 27 ++++++++++++---------------
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index acb0c5c5c5cf..dfbad8b924b3 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -742,12 +742,14 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info
return NULL;
}
-static struct cm_timewait_info * cm_find_remote_id(__be64 remote_ca_guid,
- __be32 remote_id)
+static struct cm_id_private *cm_find_remote_id(__be64 remote_ca_guid,
+ __be32 remote_id)
{
struct rb_node *node = cm.remote_id_table.rb_node;
struct cm_timewait_info *timewait_info;
+ struct cm_id_private *res = NULL;
+ spin_lock_irq(&cm.lock);
while (node) {
timewait_info = rb_entry(node, struct cm_timewait_info,
remote_id_node);
@@ -759,10 +761,14 @@ static struct cm_timewait_info * cm_find_remote_id(__be64 remote_ca_guid,
node = node->rb_left;
else if (be64_gt(remote_ca_guid, timewait_info->remote_ca_guid))
node = node->rb_right;
- else
- return timewait_info;
+ else {
+ res = cm_acquire_id(timewait_info->work.local_id,
+ timewait_info->work.remote_id);
+ break;
+ }
}
- return NULL;
+ spin_unlock_irq(&cm.lock);
+ return res;
}
static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info
@@ -2993,24 +2999,15 @@ static void cm_format_rej_event(struct cm_work *work)
static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
{
- struct cm_timewait_info *timewait_info;
struct cm_id_private *cm_id_priv;
__be32 remote_id;
remote_id = cpu_to_be32(IBA_GET(CM_REJ_LOCAL_COMM_ID, rej_msg));
if (IBA_GET(CM_REJ_REASON, rej_msg) == IB_CM_REJ_TIMEOUT) {
- spin_lock_irq(&cm.lock);
- timewait_info = cm_find_remote_id(
+ cm_id_priv = cm_find_remote_id(
*((__be64 *)IBA_GET_MEM_PTR(CM_REJ_ARI, rej_msg)),
remote_id);
- if (!timewait_info) {
- spin_unlock_irq(&cm.lock);
- return NULL;
- }
- cm_id_priv =
- cm_acquire_id(timewait_info->work.local_id, remote_id);
- spin_unlock_irq(&cm.lock);
} else if (IBA_GET(CM_REJ_MESSAGE_REJECTED, rej_msg) ==
CM_MSG_RESPONSE_REQ)
cm_id_priv = cm_acquire_id(
--
2.26.2
* [PATCH rdma-next 08/10] RDMA/cm: Remove the cm_free_id() wrapper function
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (6 preceding siblings ...)
2020-05-06 7:46 ` [PATCH rdma-next 07/10] RDMA/cm: Make find_remote_id() return a cm_id_private Leon Romanovsky
@ 2020-05-06 7:46 ` Leon Romanovsky
2020-05-06 7:47 ` [PATCH rdma-next 09/10] RDMA/cm: Remove needless cm_id variable Leon Romanovsky
` (2 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:46 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
Just call xa_erase_irq() directly during cm_destroy_id().
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index dfbad8b924b3..e70b4c70f00c 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -584,11 +584,6 @@ static u32 cm_local_id(__be32 local_id)
return (__force u32) (local_id ^ cm.random_id_operand);
}
-static void cm_free_id(__be32 local_id)
-{
- xa_erase_irq(&cm.local_id_table, cm_local_id(local_id));
-}
-
static struct cm_id_private *cm_acquire_id(__be32 local_id, __be32 remote_id)
{
struct cm_id_private *cm_id_priv;
@@ -1140,7 +1135,7 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
case IB_CM_TIMEWAIT:
/*
* The cm_acquire_id in cm_timewait_handler will stop working
- * once we do cm_free_id() below, so just move to idle here for
+ * once we do xa_erase below, so just move to idle here for
* consistency.
*/
cm_id->state = IB_CM_IDLE;
@@ -1170,7 +1165,7 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
spin_unlock(&cm.lock);
spin_unlock_irq(&cm_id_priv->lock);
- cm_free_id(cm_id->local_id);
+ xa_erase_irq(&cm.local_id_table, cm_local_id(cm_id->local_id));
cm_deref_id(cm_id_priv);
wait_for_completion(&cm_id_priv->comp);
while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
--
2.26.2
* [PATCH rdma-next 09/10] RDMA/cm: Remove needless cm_id variable
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (7 preceding siblings ...)
2020-05-06 7:46 ` [PATCH rdma-next 08/10] RDMA/cm: Remove the cm_free_id() wrapper function Leon Romanovsky
@ 2020-05-06 7:47 ` Leon Romanovsky
2020-05-06 7:47 ` [PATCH rdma-next 10/10] RDMA/cm: Increment the refcount inside cm_find_listen() Leon Romanovsky
2020-05-13 0:35 ` [PATCH rdma-next 00/10] Various clean ups to ib_cm Jason Gunthorpe
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:47 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
Just use the expression directly in its only reader.
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index e70b4c70f00c..c8b2088dcd0a 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1973,7 +1973,6 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
struct cm_id_private *listen_cm_id_priv, *cur_cm_id_priv;
struct cm_timewait_info *timewait_info;
struct cm_req_msg *req_msg;
- struct ib_cm_id *cm_id;
req_msg = (struct cm_req_msg *)work->mad_recv_wc->recv_buf.mad;
@@ -2003,8 +2002,7 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
IB_CM_REJ_STALE_CONN, CM_MSG_RESPONSE_REQ,
NULL, 0);
if (cur_cm_id_priv) {
- cm_id = &cur_cm_id_priv->id;
- ib_send_cm_dreq(cm_id, NULL, 0);
+ ib_send_cm_dreq(&cur_cm_id_priv->id, NULL, 0);
cm_deref_id(cur_cm_id_priv);
}
return NULL;
@@ -2460,7 +2458,6 @@ static int cm_rep_handler(struct cm_work *work)
struct cm_rep_msg *rep_msg;
int ret;
struct cm_id_private *cur_cm_id_priv;
- struct ib_cm_id *cm_id;
struct cm_timewait_info *timewait_info;
rep_msg = (struct cm_rep_msg *)work->mad_recv_wc->recv_buf.mad;
@@ -2526,8 +2523,7 @@ static int cm_rep_handler(struct cm_work *work)
IBA_GET(CM_REP_REMOTE_COMM_ID, rep_msg));
if (cur_cm_id_priv) {
- cm_id = &cur_cm_id_priv->id;
- ib_send_cm_dreq(cm_id, NULL, 0);
+ ib_send_cm_dreq(&cur_cm_id_priv->id, NULL, 0);
cm_deref_id(cur_cm_id_priv);
}
--
2.26.2
* [PATCH rdma-next 10/10] RDMA/cm: Increment the refcount inside cm_find_listen()
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (8 preceding siblings ...)
2020-05-06 7:47 ` [PATCH rdma-next 09/10] RDMA/cm: Remove needless cm_id variable Leon Romanovsky
@ 2020-05-06 7:47 ` Leon Romanovsky
2020-05-13 0:35 ` [PATCH rdma-next 00/10] Various clean ups to ib_cm Jason Gunthorpe
10 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2020-05-06 7:47 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma
From: Jason Gunthorpe <jgg@mellanox.com>
All callers need the 'get', so do it in a central place before returning
the pointer.
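The pattern can be sketched in a userspace model (illustrative names, single-threaded; in the real code cm.lock is held across the lookup): taking the reference inside the lookup means every caller receives an already "gotten" pointer and cannot forget the refcount_inc().

```c
#include <assert.h>
#include <stddef.h>

struct id_model {
	int refcount;
};

/* Stands in for the one entry found in the rb-tree walk. */
static struct id_model table_entry = { .refcount = 1 };

/* Model of cm_find_listen() after this patch: on a hit, the reference
 * is taken centrally, before the pointer escapes the lookup. */
static struct id_model *find_listen_model(int match)
{
	if (!match)
		return NULL;
	table_entry.refcount++; /* refcount_inc(&cm_id_priv->refcount) */
	return &table_entry;
}
```

Callers such as cm_match_req() and cm_sidr_req_handler() then drop their own refcount_inc() after the lookup, removing a class of missed-get bugs.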
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/core/cm.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index c8b2088dcd0a..44ab4fc7c5d4 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -690,9 +690,10 @@ static struct cm_id_private * cm_find_listen(struct ib_device *device,
cm_id_priv = rb_entry(node, struct cm_id_private, service_node);
if ((cm_id_priv->id.service_mask & service_id) ==
cm_id_priv->id.service_id &&
- (cm_id_priv->id.device == device))
+ (cm_id_priv->id.device == device)) {
+ refcount_inc(&cm_id_priv->refcount);
return cm_id_priv;
-
+ }
if (device < cm_id_priv->id.device)
node = node->rb_left;
else if (device > cm_id_priv->id.device)
@@ -2020,7 +2021,6 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
NULL, 0);
return NULL;
}
- refcount_inc(&listen_cm_id_priv->refcount);
spin_unlock_irq(&cm.lock);
return listen_cm_id_priv;
}
@@ -3591,7 +3591,6 @@ static int cm_sidr_req_handler(struct cm_work *work)
.status = IB_SIDR_UNSUPPORTED });
goto out; /* No match. */
}
- refcount_inc(&listen_cm_id_priv->refcount);
spin_unlock_irq(&cm.lock);
cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler;
--
2.26.2
* Re: [PATCH rdma-next 00/10] Various clean ups to ib_cm
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
` (9 preceding siblings ...)
2020-05-06 7:47 ` [PATCH rdma-next 10/10] RDMA/cm: Increment the refcount inside cm_find_listen() Leon Romanovsky
@ 2020-05-13 0:35 ` Jason Gunthorpe
10 siblings, 0 replies; 12+ messages in thread
From: Jason Gunthorpe @ 2020-05-13 0:35 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Doug Ledford, Leon Romanovsky, Danit Goldberg, linux-kernel, linux-rdma
On Wed, May 06, 2020 at 10:46:51AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
>
> From Jason,
>
> These are intended to be no-functional change edits that tidy up the
> code flow or coding style.
>
> Thanks
>
> Danit Goldberg (1):
> RDMA/cm: Remove unused store to ret in cm_rej_handler
>
> Jason Gunthorpe (9):
> RDMA/addr: Mark addr_resolve as might_sleep()
> RDMA/cm: Remove return code from add_cm_id_to_port_list
> RDMA/cm: Pull duplicated code into cm_queue_work_unlock()
> RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait
> RDMA/cm: Add a note explaining how the timewait is eventually freed
> RDMA/cm: Make find_remote_id() return a cm_id_private
> RDMA/cm: Remove the cm_free_id() wrapper function
> RDMA/cm: Remove needless cm_id variable
> RDMA/cm: Increment the refcount inside cm_find_listen()
Applied to for-next
Jason
Thread overview: 12+ messages
2020-05-06 7:46 [PATCH rdma-next 00/10] Various clean ups to ib_cm Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 01/10] RDMA/addr: Mark addr_resolve as might_sleep() Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 02/10] RDMA/cm: Remove return code from add_cm_id_to_port_list Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 03/10] RDMA/cm: Remove unused store to ret in cm_rej_handler Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 04/10] RDMA/cm: Pull duplicated code into cm_queue_work_unlock() Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 05/10] RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 06/10] RDMA/cm: Add a note explaining how the timewait is eventually freed Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 07/10] RDMA/cm: Make find_remote_id() return a cm_id_private Leon Romanovsky
2020-05-06 7:46 ` [PATCH rdma-next 08/10] RDMA/cm: Remove the cm_free_id() wrapper function Leon Romanovsky
2020-05-06 7:47 ` [PATCH rdma-next 09/10] RDMA/cm: Remove needless cm_id variable Leon Romanovsky
2020-05-06 7:47 ` [PATCH rdma-next 10/10] RDMA/cm: Increment the refcount inside cm_find_listen() Leon Romanovsky
2020-05-13 0:35 ` [PATCH rdma-next 00/10] Various clean ups to ib_cm Jason Gunthorpe