* [PATCH rdma-core 0/7] Add support for OPA classport info
@ 2017-03-15 21:24 Dasaratharaman Chandramouli
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

This series moves the classport info query initiation and update
from callers such as ipoib to the ib_sa module itself. The classport
info cache is updated whenever ib_sa receives an appropriate state
change event.

Since classport info is only used to check whether send-only full-member
support is enabled by the SM, we expose a function,
ib_sa_sendonly_fullmem_support, that can be called to check for that
support.

Additionally, we introduce support for opa classport info. These are
defined specifically for OPA devices and expose additional features in the
capability mask bits along with longer LID sizes in some of the other
fields.

Patches 1 to 3 fix checkpatch issues (one issue type per patch) in two
functions that patch 4 then moves. Patch 5 makes changes to implicitly
query and cache classport info. Patch 6 adds a verbs capability API for
core layers to query whether they are running on an OPA device. Finally,
patch 7 adds OPA classport info query support.

Dasaratharaman Chandramouli (7):
  IB/SA: Fix lines longer than 80 columns
  IB/SA: Add braces when using sizeof
  IB/SA: Remove unwanted braces
  IB/SA: Move functions update_sm_ah() and ib_sa_event()
  IB/SA: Modify SA to implicitly cache Class Port info
  IB/core: Add rdma_cap_opa_ah to expose opa address handles
  IB/SA: Add support to query opa classport info.

 drivers/infiniband/core/cma.c                  |  76 +---
 drivers/infiniband/core/sa_query.c             | 574 +++++++++++++++++--------
 drivers/infiniband/hw/hfi1/mad.c               |  25 --
 drivers/infiniband/ulp/ipoib/ipoib.h           |   1 -
 drivers/infiniband/ulp/ipoib/ipoib_main.c      |  71 ---
 drivers/infiniband/ulp/ipoib/ipoib_multicast.c |   9 +-
 include/rdma/ib_mad.h                          |  25 ++
 include/rdma/ib_sa.h                           |  13 +-
 include/rdma/ib_verbs.h                        |  16 +
 9 files changed, 447 insertions(+), 363 deletions(-)

-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* [PATCH rdma-core 1/7] IB/SA: Fix lines longer than 80 columns
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
  2017-03-15 21:24   ` [PATCH rdma-core 2/7] IB/SA: Add braces when using sizeof Dasaratharaman Chandramouli
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

This fixes a checkpatch issue. The fix is needed
so that some of these functions can be moved around
in the forthcoming patches.

Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 drivers/infiniband/core/sa_query.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 81b742c..60c0405 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -963,8 +963,10 @@ static void update_sm_ah(struct work_struct *work)
 	ah_attr.port_num = port->port_num;
 	if (port_attr.grh_required) {
 		ah_attr.ah_flags = IB_AH_GRH;
-		ah_attr.grh.dgid.global.subnet_prefix = cpu_to_be64(port_attr.subnet_prefix);
-		ah_attr.grh.dgid.global.interface_id = cpu_to_be64(IB_SA_WELL_KNOWN_GUID);
+		ah_attr.grh.dgid.global.subnet_prefix =
+			cpu_to_be64(port_attr.subnet_prefix);
+		ah_attr.grh.dgid.global.interface_id =
+			cpu_to_be64(IB_SA_WELL_KNOWN_GUID);
 	}
 
 	new_ah->ah = ib_create_ah(port->agent->qp->pd, &ah_attr);
@@ -979,10 +981,10 @@ static void update_sm_ah(struct work_struct *work)
 		kref_put(&port->sm_ah->ref, free_sm_ah);
 	port->sm_ah = new_ah;
 	spin_unlock_irq(&port->ah_lock);
-
 }
 
-static void ib_sa_event(struct ib_event_handler *handler, struct ib_event *event)
+static void ib_sa_event(struct ib_event_handler *handler,
+			struct ib_event *event)
 {
 	if (event->event == IB_EVENT_PORT_ERR    ||
 	    event->event == IB_EVENT_PORT_ACTIVE ||
@@ -993,8 +995,8 @@ static void ib_sa_event(struct ib_event_handler *handler, struct ib_event *event
 		unsigned long flags;
 		struct ib_sa_device *sa_dev =
 			container_of(handler, typeof(*sa_dev), event_handler);
-		struct ib_sa_port *port =
-			&sa_dev->port[event->element.port_num - sa_dev->start_port];
+		u8 port_num = event->element.port_num - sa_dev->start_port;
+		struct ib_sa_port *port = &sa_dev->port[port_num];
 
 		if (!rdma_cap_ib_sa(handler->device, port->port_num))
 			return;
@@ -1012,8 +1014,7 @@ static void ib_sa_event(struct ib_event_handler *handler, struct ib_event *event
 			port->classport_info.valid = false;
 			spin_unlock_irqrestore(&port->classport_lock, flags);
 		}
-		queue_work(ib_wq, &sa_dev->port[event->element.port_num -
-					    sa_dev->start_port].update_task);
+		queue_work(ib_wq, &sa_dev->port[port_num].update_task);
 	}
 }
 
-- 
1.8.3.1



* [PATCH rdma-core 2/7] IB/SA: Add braces when using sizeof
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2017-03-15 21:24   ` [PATCH rdma-core 1/7] IB/SA: Fix lines longer than 80 columns Dasaratharaman Chandramouli
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
  2017-03-15 21:24   ` [PATCH rdma-core 3/7] IB/SA: Remove unwanted braces Dasaratharaman Chandramouli
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

This fixes a checkpatch issue. The fix is needed
so that some of these functions can be moved around
in the forthcoming patches.

Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 drivers/infiniband/core/sa_query.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 60c0405..8cfe636 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -944,7 +944,7 @@ static void update_sm_ah(struct work_struct *work)
 		return;
 	}
 
-	new_ah = kmalloc(sizeof *new_ah, GFP_KERNEL);
+	new_ah = kmalloc(sizeof(*new_ah), GFP_KERNEL);
 	if (!new_ah) {
 		return;
 	}
@@ -957,7 +957,7 @@ static void update_sm_ah(struct work_struct *work)
 			 IB_DEFAULT_PKEY_FULL, &new_ah->pkey_index))
 		pr_err("Couldn't find index for default PKey\n");
 
-	memset(&ah_attr, 0, sizeof ah_attr);
+	memset(&ah_attr, 0, sizeof(ah_attr));
 	ah_attr.dlid     = port_attr.sm_lid;
 	ah_attr.sl       = port_attr.sm_sl;
 	ah_attr.port_num = port->port_num;
-- 
1.8.3.1



* [PATCH rdma-core 3/7] IB/SA: Remove unwanted braces
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2017-03-15 21:24   ` [PATCH rdma-core 1/7] IB/SA: Fix lines longer than 80 columns Dasaratharaman Chandramouli
  2017-03-15 21:24   ` [PATCH rdma-core 2/7] IB/SA: Add braces when using sizeof Dasaratharaman Chandramouli
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
  2017-03-15 21:24   ` [PATCH rdma-core 4/7] IB/SA: Move functions update_sm_ah() and ib_sa_event() Dasaratharaman Chandramouli
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

This fixes a checkpatch issue. The fix is needed
so that some of these functions can be moved around
in the forthcoming patches.

Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 drivers/infiniband/core/sa_query.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 8cfe636..b04b499 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -945,9 +945,8 @@ static void update_sm_ah(struct work_struct *work)
 	}
 
 	new_ah = kmalloc(sizeof(*new_ah), GFP_KERNEL);
-	if (!new_ah) {
+	if (!new_ah)
 		return;
-	}
 
 	kref_init(&new_ah->ref);
 	new_ah->src_path_mask = (1 << port_attr.lmc) - 1;
-- 
1.8.3.1



* [PATCH rdma-core 4/7] IB/SA: Move functions update_sm_ah() and ib_sa_event()
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-03-15 21:24   ` [PATCH rdma-core 3/7] IB/SA: Remove unwanted braces Dasaratharaman Chandramouli
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
  2017-03-15 21:24   ` [PATCH rdma-core 5/7] IB/SA: Modify SA to implicitly cache Class Port info Dasaratharaman Chandramouli
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

Moving these functions will facilitate changes to them
in the next patches. This is strictly a move; there are
no changes to the functions in any way.

Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 drivers/infiniband/core/sa_query.c | 172 ++++++++++++++++++-------------------
 1 file changed, 86 insertions(+), 86 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index b04b499..2181f8c 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -931,92 +931,6 @@ static void free_sm_ah(struct kref *kref)
 	kfree(sm_ah);
 }
 
-static void update_sm_ah(struct work_struct *work)
-{
-	struct ib_sa_port *port =
-		container_of(work, struct ib_sa_port, update_task);
-	struct ib_sa_sm_ah *new_ah;
-	struct ib_port_attr port_attr;
-	struct ib_ah_attr   ah_attr;
-
-	if (ib_query_port(port->agent->device, port->port_num, &port_attr)) {
-		pr_warn("Couldn't query port\n");
-		return;
-	}
-
-	new_ah = kmalloc(sizeof(*new_ah), GFP_KERNEL);
-	if (!new_ah)
-		return;
-
-	kref_init(&new_ah->ref);
-	new_ah->src_path_mask = (1 << port_attr.lmc) - 1;
-
-	new_ah->pkey_index = 0;
-	if (ib_find_pkey(port->agent->device, port->port_num,
-			 IB_DEFAULT_PKEY_FULL, &new_ah->pkey_index))
-		pr_err("Couldn't find index for default PKey\n");
-
-	memset(&ah_attr, 0, sizeof(ah_attr));
-	ah_attr.dlid     = port_attr.sm_lid;
-	ah_attr.sl       = port_attr.sm_sl;
-	ah_attr.port_num = port->port_num;
-	if (port_attr.grh_required) {
-		ah_attr.ah_flags = IB_AH_GRH;
-		ah_attr.grh.dgid.global.subnet_prefix =
-			cpu_to_be64(port_attr.subnet_prefix);
-		ah_attr.grh.dgid.global.interface_id =
-			cpu_to_be64(IB_SA_WELL_KNOWN_GUID);
-	}
-
-	new_ah->ah = ib_create_ah(port->agent->qp->pd, &ah_attr);
-	if (IS_ERR(new_ah->ah)) {
-		pr_warn("Couldn't create new SM AH\n");
-		kfree(new_ah);
-		return;
-	}
-
-	spin_lock_irq(&port->ah_lock);
-	if (port->sm_ah)
-		kref_put(&port->sm_ah->ref, free_sm_ah);
-	port->sm_ah = new_ah;
-	spin_unlock_irq(&port->ah_lock);
-}
-
-static void ib_sa_event(struct ib_event_handler *handler,
-			struct ib_event *event)
-{
-	if (event->event == IB_EVENT_PORT_ERR    ||
-	    event->event == IB_EVENT_PORT_ACTIVE ||
-	    event->event == IB_EVENT_LID_CHANGE  ||
-	    event->event == IB_EVENT_PKEY_CHANGE ||
-	    event->event == IB_EVENT_SM_CHANGE   ||
-	    event->event == IB_EVENT_CLIENT_REREGISTER) {
-		unsigned long flags;
-		struct ib_sa_device *sa_dev =
-			container_of(handler, typeof(*sa_dev), event_handler);
-		u8 port_num = event->element.port_num - sa_dev->start_port;
-		struct ib_sa_port *port = &sa_dev->port[port_num];
-
-		if (!rdma_cap_ib_sa(handler->device, port->port_num))
-			return;
-
-		spin_lock_irqsave(&port->ah_lock, flags);
-		if (port->sm_ah)
-			kref_put(&port->sm_ah->ref, free_sm_ah);
-		port->sm_ah = NULL;
-		spin_unlock_irqrestore(&port->ah_lock, flags);
-
-		if (event->event == IB_EVENT_SM_CHANGE ||
-		    event->event == IB_EVENT_CLIENT_REREGISTER ||
-		    event->event == IB_EVENT_LID_CHANGE) {
-			spin_lock_irqsave(&port->classport_lock, flags);
-			port->classport_info.valid = false;
-			spin_unlock_irqrestore(&port->classport_lock, flags);
-		}
-		queue_work(ib_wq, &sa_dev->port[port_num].update_task);
-	}
-}
-
 void ib_sa_register_client(struct ib_sa_client *client)
 {
 	atomic_set(&client->users, 1);
@@ -1897,6 +1811,92 @@ static void recv_handler(struct ib_mad_agent *mad_agent,
 	ib_free_recv_mad(mad_recv_wc);
 }
 
+static void update_sm_ah(struct work_struct *work)
+{
+	struct ib_sa_port *port =
+		container_of(work, struct ib_sa_port, update_task);
+	struct ib_sa_sm_ah *new_ah;
+	struct ib_port_attr port_attr;
+	struct ib_ah_attr   ah_attr;
+
+	if (ib_query_port(port->agent->device, port->port_num, &port_attr)) {
+		pr_warn("Couldn't query port\n");
+		return;
+	}
+
+	new_ah = kmalloc(sizeof(*new_ah), GFP_KERNEL);
+	if (!new_ah)
+		return;
+
+	kref_init(&new_ah->ref);
+	new_ah->src_path_mask = (1 << port_attr.lmc) - 1;
+
+	new_ah->pkey_index = 0;
+	if (ib_find_pkey(port->agent->device, port->port_num,
+			 IB_DEFAULT_PKEY_FULL, &new_ah->pkey_index))
+		pr_err("Couldn't find index for default PKey\n");
+
+	memset(&ah_attr, 0, sizeof(ah_attr));
+	ah_attr.dlid     = port_attr.sm_lid;
+	ah_attr.sl       = port_attr.sm_sl;
+	ah_attr.port_num = port->port_num;
+	if (port_attr.grh_required) {
+		ah_attr.ah_flags = IB_AH_GRH;
+		ah_attr.grh.dgid.global.subnet_prefix =
+			cpu_to_be64(port_attr.subnet_prefix);
+		ah_attr.grh.dgid.global.interface_id =
+			cpu_to_be64(IB_SA_WELL_KNOWN_GUID);
+	}
+
+	new_ah->ah = ib_create_ah(port->agent->qp->pd, &ah_attr);
+	if (IS_ERR(new_ah->ah)) {
+		pr_warn("Couldn't create new SM AH\n");
+		kfree(new_ah);
+		return;
+	}
+
+	spin_lock_irq(&port->ah_lock);
+	if (port->sm_ah)
+		kref_put(&port->sm_ah->ref, free_sm_ah);
+	port->sm_ah = new_ah;
+	spin_unlock_irq(&port->ah_lock);
+}
+
+static void ib_sa_event(struct ib_event_handler *handler,
+			struct ib_event *event)
+{
+	if (event->event == IB_EVENT_PORT_ERR    ||
+	    event->event == IB_EVENT_PORT_ACTIVE ||
+	    event->event == IB_EVENT_LID_CHANGE  ||
+	    event->event == IB_EVENT_PKEY_CHANGE ||
+	    event->event == IB_EVENT_SM_CHANGE   ||
+	    event->event == IB_EVENT_CLIENT_REREGISTER) {
+		unsigned long flags;
+		struct ib_sa_device *sa_dev =
+			container_of(handler, typeof(*sa_dev), event_handler);
+		u8 port_num = event->element.port_num - sa_dev->start_port;
+		struct ib_sa_port *port = &sa_dev->port[port_num];
+
+		if (!rdma_cap_ib_sa(handler->device, port->port_num))
+			return;
+
+		spin_lock_irqsave(&port->ah_lock, flags);
+		if (port->sm_ah)
+			kref_put(&port->sm_ah->ref, free_sm_ah);
+		port->sm_ah = NULL;
+		spin_unlock_irqrestore(&port->ah_lock, flags);
+
+		if (event->event == IB_EVENT_SM_CHANGE ||
+		    event->event == IB_EVENT_CLIENT_REREGISTER ||
+		    event->event == IB_EVENT_LID_CHANGE) {
+			spin_lock_irqsave(&port->classport_lock, flags);
+			port->classport_info.valid = false;
+			spin_unlock_irqrestore(&port->classport_lock, flags);
+		}
+		queue_work(ib_wq, &sa_dev->port[port_num].update_task);
+	}
+}
+
 static void ib_sa_add_one(struct ib_device *device)
 {
 	struct ib_sa_device *sa_dev;
-- 
1.8.3.1



* [PATCH rdma-core 5/7] IB/SA: Modify SA to implicitly cache Class Port info
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (3 preceding siblings ...)
  2017-03-15 21:24   ` [PATCH rdma-core 4/7] IB/SA: Move functions update_sm_ah() and ib_sa_event() Dasaratharaman Chandramouli
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
       [not found]     ` <1489613066-61684-6-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2017-03-15 21:24   ` [PATCH rdma-core 6/7] IB/core: Add rdma_cap_opa_ah to expose opa address handles Dasaratharaman Chandramouli
                     ` (2 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

SA will query and cache class port info as part of
its initialization. SA will also invalidate and
refresh the cache based on specific events. Callers such
as IPoIB and CM can query the SA to get the classportinfo
information. Apart from making the caller code much simpler,
this change puts the onus on the SA to query and maintain
classportinfo, much like how it maintains the address handle to the SM.

Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 drivers/infiniband/core/cma.c                  |  76 ++---------
 drivers/infiniband/core/sa_query.c             | 179 ++++++++++++++++++-------
 drivers/infiniband/ulp/ipoib/ipoib.h           |   1 -
 drivers/infiniband/ulp/ipoib/ipoib_main.c      |  71 ----------
 drivers/infiniband/ulp/ipoib/ipoib_multicast.c |   9 +-
 include/rdma/ib_sa.h                           |  12 +-
 6 files changed, 142 insertions(+), 206 deletions(-)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 5ed6ec9..421400a 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -3943,63 +3943,10 @@ static void cma_set_mgid(struct rdma_id_private *id_priv,
 	}
 }
 
-static void cma_query_sa_classport_info_cb(int status,
-					   struct ib_class_port_info *rec,
-					   void *context)
-{
-	struct class_port_info_context *cb_ctx = context;
-
-	WARN_ON(!context);
-
-	if (status || !rec) {
-		pr_debug("RDMA CM: %s port %u failed query ClassPortInfo status: %d\n",
-			 cb_ctx->device->name, cb_ctx->port_num, status);
-		goto out;
-	}
-
-	memcpy(cb_ctx->class_port_info, rec, sizeof(struct ib_class_port_info));
-
-out:
-	complete(&cb_ctx->done);
-}
-
-static int cma_query_sa_classport_info(struct ib_device *device, u8 port_num,
-				       struct ib_class_port_info *class_port_info)
-{
-	struct class_port_info_context *cb_ctx;
-	int ret;
-
-	cb_ctx = kmalloc(sizeof(*cb_ctx), GFP_KERNEL);
-	if (!cb_ctx)
-		return -ENOMEM;
-
-	cb_ctx->device = device;
-	cb_ctx->class_port_info = class_port_info;
-	cb_ctx->port_num = port_num;
-	init_completion(&cb_ctx->done);
-
-	ret = ib_sa_classport_info_rec_query(&sa_client, device, port_num,
-					     CMA_QUERY_CLASSPORT_INFO_TIMEOUT,
-					     GFP_KERNEL, cma_query_sa_classport_info_cb,
-					     cb_ctx, &cb_ctx->sa_query);
-	if (ret < 0) {
-		pr_err("RDMA CM: %s port %u failed to send ClassPortInfo query, ret: %d\n",
-		       device->name, port_num, ret);
-		goto out;
-	}
-
-	wait_for_completion(&cb_ctx->done);
-
-out:
-	kfree(cb_ctx);
-	return ret;
-}
-
 static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
 				 struct cma_multicast *mc)
 {
 	struct ib_sa_mcmember_rec rec;
-	struct ib_class_port_info class_port_info;
 	struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
 	ib_sa_comp_mask comp_mask;
 	int ret;
@@ -4020,21 +3967,14 @@ static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
 	rec.pkey = cpu_to_be16(ib_addr_get_pkey(dev_addr));
 	rec.join_state = mc->join_state;
 
-	if (rec.join_state == BIT(SENDONLY_FULLMEMBER_JOIN)) {
-		ret = cma_query_sa_classport_info(id_priv->id.device,
-						  id_priv->id.port_num,
-						  &class_port_info);
-
-		if (ret)
-			return ret;
-
-		if (!(ib_get_cpi_capmask2(&class_port_info) &
-		      IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT)) {
-			pr_warn("RDMA CM: %s port %u Unable to multicast join\n"
-				"RDMA CM: SM doesn't support Send Only Full Member option\n",
-				id_priv->id.device->name, id_priv->id.port_num);
-			return -EOPNOTSUPP;
-		}
+	if ((rec.join_state == BIT(SENDONLY_FULLMEMBER_JOIN)) &&
+	    (!ib_sa_sendonly_fullmem_support(&sa_client,
+					     id_priv->id.device,
+					     id_priv->id.port_num))) {
+		pr_warn("RDMA CM: %s port %u Unable to multicast join\n"
+			"RDMA CM: SM doesn't support Send Only Full Member option\n",
+			id_priv->id.device->name, id_priv->id.port_num);
+		return -EOPNOTSUPP;
 	}
 
 	comp_mask = IB_SA_MCMEMBER_REC_MGID | IB_SA_MCMEMBER_REC_PORT_GID |
diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 2181f8c..bc32989 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -56,6 +56,8 @@
 #define IB_SA_LOCAL_SVC_TIMEOUT_MIN		100
 #define IB_SA_LOCAL_SVC_TIMEOUT_DEFAULT		2000
 #define IB_SA_LOCAL_SVC_TIMEOUT_MAX		200000
+#define IB_SA_CPI_MAX_RETRY_CNT			3
 +#define IB_SA_CPI_RETRY_WAIT			1000 /* msecs */
 static int sa_local_svc_timeout_ms = IB_SA_LOCAL_SVC_TIMEOUT_DEFAULT;
 
 struct ib_sa_sm_ah {
@@ -67,6 +69,7 @@ struct ib_sa_sm_ah {
 
 struct ib_sa_classport_cache {
 	bool valid;
+	int retry_cnt;
 	struct ib_class_port_info data;
 };
 
@@ -75,6 +78,7 @@ struct ib_sa_port {
 	struct ib_sa_sm_ah  *sm_ah;
 	struct work_struct   update_task;
 	struct ib_sa_classport_cache classport_info;
+	struct delayed_work ib_cpi_work;
 	spinlock_t                   classport_lock; /* protects class port info set */
 	spinlock_t           ah_lock;
 	u8                   port_num;
@@ -123,7 +127,7 @@ struct ib_sa_guidinfo_query {
 };
 
 struct ib_sa_classport_info_query {
-	void (*callback)(int, struct ib_class_port_info *, void *);
+	void (*callback)(void *);
 	void *context;
 	struct ib_sa_query sa_query;
 };
@@ -1642,7 +1646,41 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
 }
 EXPORT_SYMBOL(ib_sa_guid_info_rec_query);
 
-/* Support get SA ClassPortInfo */
+bool ib_sa_sendonly_fullmem_support(struct ib_sa_client *client,
+				    struct ib_device *device,
+				    u8 port_num)
+{
+	struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
+	struct ib_sa_port *port;
+	bool ret = false;
+	unsigned long flags;
+
+	if (!sa_dev)
+		return ret;
+
+	port  = &sa_dev->port[port_num - sa_dev->start_port];
+
+	spin_lock_irqsave(&port->classport_lock, flags);
+	if (port->classport_info.valid)
+		ret = ib_get_cpi_capmask2(&port->classport_info.data) &
+			IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT;
+	spin_unlock_irqrestore(&port->classport_lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(ib_sa_sendonly_fullmem_support);
+
+struct ib_classport_info_context {
+	struct completion	done;
+	struct ib_sa_query	*sa_query;
+};
+
+static void ib_classportinfo_cb(void *context)
+{
+	struct ib_classport_info_context *cb_ctx = context;
+
+	complete(&cb_ctx->done);
+}
+
 static void ib_sa_classport_info_rec_callback(struct ib_sa_query *sa_query,
 					      int status,
 					      struct ib_sa_mad *mad)
@@ -1666,54 +1704,30 @@ static void ib_sa_classport_info_rec_callback(struct ib_sa_query *sa_query,
 			sa_query->port->classport_info.valid = true;
 		}
 		spin_unlock_irqrestore(&sa_query->port->classport_lock, flags);
-
-		query->callback(status, &rec, query->context);
-	} else {
-		query->callback(status, NULL, query->context);
 	}
+	query->callback(query->context);
 }
 
-static void ib_sa_portclass_info_rec_release(struct ib_sa_query *sa_query)
+static void ib_sa_classport_info_rec_release(struct ib_sa_query *sa_query)
 {
 	kfree(container_of(sa_query, struct ib_sa_classport_info_query,
 			   sa_query));
 }
 
-int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
-				   struct ib_device *device, u8 port_num,
-				   int timeout_ms, gfp_t gfp_mask,
-				   void (*callback)(int status,
-						    struct ib_class_port_info *resp,
-						    void *context),
-				   void *context,
-				   struct ib_sa_query **sa_query)
+static int ib_sa_classport_info_rec_query(struct ib_sa_port *port,
+					  int timeout_ms,
+					  void (*callback)(void *context),
+					  void *context,
+					  struct ib_sa_query **sa_query)
 {
-	struct ib_sa_classport_info_query *query;
-	struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
-	struct ib_sa_port *port;
 	struct ib_mad_agent *agent;
+	struct ib_sa_classport_info_query *query;
 	struct ib_sa_mad *mad;
-	struct ib_class_port_info cached_class_port_info;
+	gfp_t gfp_mask = GFP_KERNEL;
 	int ret;
-	unsigned long flags;
-
-	if (!sa_dev)
-		return -ENODEV;
 
-	port  = &sa_dev->port[port_num - sa_dev->start_port];
 	agent = port->agent;
 
-	/* Use cached ClassPortInfo attribute if valid instead of sending mad */
-	spin_lock_irqsave(&port->classport_lock, flags);
-	if (port->classport_info.valid && callback) {
-		memcpy(&cached_class_port_info, &port->classport_info.data,
-		       sizeof(cached_class_port_info));
-		spin_unlock_irqrestore(&port->classport_lock, flags);
-		callback(0, &cached_class_port_info, context);
-		return 0;
-	}
-	spin_unlock_irqrestore(&port->classport_lock, flags);
-
 	query = kzalloc(sizeof(*query), gfp_mask);
 	if (!query)
 		return -ENOMEM;
@@ -1721,20 +1735,16 @@ int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
 	query->sa_query.port = port;
 	ret = alloc_mad(&query->sa_query, gfp_mask);
 	if (ret)
-		goto err1;
+		goto err_free;
 
-	ib_sa_client_get(client);
-	query->sa_query.client = client;
-	query->callback        = callback;
-	query->context         = context;
+	query->callback = callback;
+	query->context = context;
 
 	mad = query->sa_query.mad_buf->mad;
 	init_mad(mad, agent);
 
-	query->sa_query.callback = callback ? ib_sa_classport_info_rec_callback : NULL;
-
-	query->sa_query.release  = ib_sa_portclass_info_rec_release;
-	/* support GET only */
+	query->sa_query.callback = ib_sa_classport_info_rec_callback;
+	query->sa_query.release  = ib_sa_classport_info_rec_release;
 	mad->mad_hdr.method	 = IB_MGMT_METHOD_GET;
 	mad->mad_hdr.attr_id	 = cpu_to_be16(IB_SA_ATTR_CLASS_PORTINFO);
 	mad->sa_hdr.comp_mask	 = 0;
@@ -1742,20 +1752,71 @@ int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
 
 	ret = send_mad(&query->sa_query, timeout_ms, gfp_mask);
 	if (ret < 0)
-		goto err2;
+		goto err_free_mad;
 
 	return ret;
 
-err2:
+err_free_mad:
 	*sa_query = NULL;
-	ib_sa_client_put(query->sa_query.client);
 	free_mad(&query->sa_query);
 
-err1:
+err_free:
 	kfree(query);
 	return ret;
 }
-EXPORT_SYMBOL(ib_sa_classport_info_rec_query);
+
+static void update_ib_cpi(struct work_struct *work)
+{
+	struct ib_sa_port *port =
+		container_of(work, struct ib_sa_port, ib_cpi_work.work);
+	struct ib_classport_info_context *cb_context;
+	unsigned long flags;
+	int ret;
+
+	/* If the classport info is valid, nothing
+	 * to do here.
+	 */
+	spin_lock_irqsave(&port->classport_lock, flags);
+	if (port->classport_info.valid) {
+		spin_unlock_irqrestore(&port->classport_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&port->classport_lock, flags);
+
+	cb_context = kmalloc(sizeof(*cb_context), GFP_KERNEL);
+	if (!cb_context)
+		goto err_nomem;
+
+	init_completion(&cb_context->done);
+
+	ret = ib_sa_classport_info_rec_query(port, 3000,
+					     ib_classportinfo_cb, cb_context,
+					     &cb_context->sa_query);
+	if (ret < 0)
+		goto free_cb_err;
+	wait_for_completion(&cb_context->done);
+free_cb_err:
+	kfree(cb_context);
+	spin_lock_irqsave(&port->classport_lock, flags);
+
+	/* If the classport info is still not valid, the query should have
+	 * failed for some reason. Retry issuing the query
+	 */
+	if (!port->classport_info.valid) {
+		port->classport_info.retry_cnt++;
+		if (port->classport_info.retry_cnt <=
+		    IB_SA_CPI_MAX_RETRY_CNT) {
+			unsigned long delay =
+				msecs_to_jiffies(IB_SA_CPI_RETRY_WAIT);
+
+			queue_delayed_work(ib_wq, &port->ib_cpi_work, delay);
+		}
+	}
+	spin_unlock_irqrestore(&port->classport_lock, flags);
+
+err_nomem:
+	return;
+}
 
 static void send_handler(struct ib_mad_agent *agent,
 			 struct ib_mad_send_wc *mad_send_wc)
@@ -1784,7 +1845,8 @@ static void send_handler(struct ib_mad_agent *agent,
 	spin_unlock_irqrestore(&idr_lock, flags);
 
 	free_mad(query);
-	ib_sa_client_put(query->client);
+	if (query->client)
+		ib_sa_client_put(query->client);
 	query->release(query);
 }
 
@@ -1894,6 +1956,19 @@ static void ib_sa_event(struct ib_event_handler *handler,
 			spin_unlock_irqrestore(&port->classport_lock, flags);
 		}
 		queue_work(ib_wq, &sa_dev->port[port_num].update_task);
+
 +		/* Query for class port info on a re-register event */
+		if ((event->event == IB_EVENT_CLIENT_REREGISTER) ||
+		    (event->event == IB_EVENT_PORT_ACTIVE)) {
+			unsigned long delay =
+				msecs_to_jiffies(IB_SA_CPI_RETRY_WAIT);
+
+			spin_lock_irqsave(&port->classport_lock, flags);
+			port->classport_info.retry_cnt = 0;
+			spin_unlock_irqrestore(&port->classport_lock, flags);
+			queue_delayed_work(ib_wq,
+					   &port->ib_cpi_work, delay);
+		}
 	}
 }
 
@@ -1934,6 +2009,8 @@ static void ib_sa_add_one(struct ib_device *device)
 			goto err;
 
 		INIT_WORK(&sa_dev->port[i].update_task, update_sm_ah);
+		INIT_DELAYED_WORK(&sa_dev->port[i].ib_cpi_work,
+				  update_ib_cpi);
 
 		count++;
 	}
@@ -1980,11 +2057,11 @@ static void ib_sa_remove_one(struct ib_device *device, void *client_data)
 		return;
 
 	ib_unregister_event_handler(&sa_dev->event_handler);
-
 	flush_workqueue(ib_wq);
 
 	for (i = 0; i <= sa_dev->end_port - sa_dev->start_port; ++i) {
 		if (rdma_cap_ib_sa(device, i + 1)) {
+			cancel_delayed_work_sync(&sa_dev->port[i].ib_cpi_work);
 			ib_unregister_mad_agent(sa_dev->port[i].agent);
 			if (sa_dev->port[i].sm_ah)
 				kref_put(&sa_dev->port[i].sm_ah->ref, free_sm_ah);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
index bed233b..060e543 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib.h
+++ b/drivers/infiniband/ulp/ipoib/ipoib.h
@@ -489,7 +489,6 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
 struct ipoib_path *__path_find(struct net_device *dev, void *gid);
 void ipoib_mark_paths_invalid(struct net_device *dev);
 void ipoib_flush_paths(struct net_device *dev);
-int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv);
 struct ipoib_dev_priv *ipoib_intf_alloc(const char *format);
 
 int ipoib_ib_dev_init(struct net_device *dev, struct ib_device *ca, int port);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 259c59f..1c70ae9 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -650,77 +650,6 @@ void ipoib_mark_paths_invalid(struct net_device *dev)
 	spin_unlock_irq(&priv->lock);
 }
 
-struct classport_info_context {
-	struct ipoib_dev_priv	*priv;
-	struct completion	done;
-	struct ib_sa_query	*sa_query;
-};
-
-static void classport_info_query_cb(int status, struct ib_class_port_info *rec,
-				    void *context)
-{
-	struct classport_info_context *cb_ctx = context;
-	struct ipoib_dev_priv *priv;
-
-	WARN_ON(!context);
-
-	priv = cb_ctx->priv;
-
-	if (status || !rec) {
-		pr_debug("device: %s failed query classport_info status: %d\n",
-			 priv->dev->name, status);
-		/* keeps the default, will try next mcast_restart */
-		priv->sm_fullmember_sendonly_support = false;
-		goto out;
-	}
-
-	if (ib_get_cpi_capmask2(rec) &
-	    IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT) {
-		pr_debug("device: %s enabled fullmember-sendonly for sendonly MCG\n",
-			 priv->dev->name);
-		priv->sm_fullmember_sendonly_support = true;
-	} else {
-		pr_debug("device: %s disabled fullmember-sendonly for sendonly MCG\n",
-			 priv->dev->name);
-		priv->sm_fullmember_sendonly_support = false;
-	}
-
-out:
-	complete(&cb_ctx->done);
-}
-
-int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv)
-{
-	struct classport_info_context *callback_context;
-	int ret;
-
-	callback_context = kmalloc(sizeof(*callback_context), GFP_KERNEL);
-	if (!callback_context)
-		return -ENOMEM;
-
-	callback_context->priv = priv;
-	init_completion(&callback_context->done);
-
-	ret = ib_sa_classport_info_rec_query(&ipoib_sa_client,
-					     priv->ca, priv->port, 3000,
-					     GFP_KERNEL,
-					     classport_info_query_cb,
-					     callback_context,
-					     &callback_context->sa_query);
-	if (ret < 0) {
-		pr_info("%s failed to send ib_sa_classport_info query, ret: %d\n",
-			priv->dev->name, ret);
-		kfree(callback_context);
-		return ret;
-	}
-
-	/* waiting for the callback to finish before returnning */
-	wait_for_completion(&callback_context->done);
-	kfree(callback_context);
-
-	return ret;
-}
-
 static void push_pseudo_header(struct sk_buff *skb, const char *daddr)
 {
 	struct ipoib_pseudo_header *phdr;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 69e146c..3e3a84f 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -331,7 +331,6 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
 	struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv,
 						   carrier_on_task);
 	struct ib_port_attr attr;
-	int ret;
 
 	if (ib_query_port(priv->ca, priv->port, &attr) ||
 	    attr.state != IB_PORT_ACTIVE) {
@@ -344,11 +343,9 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
 	 * because the broadcast group must always be joined first and is always
 	 * re-joined if the SM changes substantially.
 	 */
-	ret = ipoib_check_sm_sendonly_fullmember_support(priv);
-	if (ret < 0)
-		pr_debug("%s failed query sm support for sendonly-fullmember (ret: %d)\n",
-			 priv->dev->name, ret);
-
+	priv->sm_fullmember_sendonly_support =
+		ib_sa_sendonly_fullmem_support(&ipoib_sa_client,
+					       priv->ca, priv->port);
 	/*
 	 * Take rtnl_lock to avoid racing with ipoib_stop() and
 	 * turning the carrier back on while a device is being
diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
index fd0e532..46838c8 100644
--- a/include/rdma/ib_sa.h
+++ b/include/rdma/ib_sa.h
@@ -454,14 +454,8 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
 			      void *context,
 			      struct ib_sa_query **sa_query);
 
-/* Support get SA ClassPortInfo */
-int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
-				   struct ib_device *device, u8 port_num,
-				   int timeout_ms, gfp_t gfp_mask,
-				   void (*callback)(int status,
-						    struct ib_class_port_info *resp,
-						    void *context),
-				   void *context,
-				   struct ib_sa_query **sa_query);
+bool ib_sa_sendonly_fullmem_support(struct ib_sa_client *client,
+				    struct ib_device *device,
+				    u8 port_num);
 
 #endif /* IB_SA_H */
-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH rdma-core 6/7] IB/core: Add rdma_cap_opa_ah to expose opa address handles
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (4 preceding siblings ...)
  2017-03-15 21:24   ` [PATCH rdma-core 5/7] IB/SA: Modify SA to implicitly cache Class Port info Dasaratharaman Chandramouli
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
  2017-03-15 21:24   ` [PATCH rdma-core 7/7] IB/SA: Add support to query opa classport info Dasaratharaman Chandramouli
  2017-03-20  7:49   ` [PATCH rdma-core 0/7] Add support for OPA " Leon Romanovsky
  7 siblings, 0 replies; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

rdma_cap_opa_ah() enables core components to check if the
corresponding port supports OPA extended addressing.

Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 include/rdma/ib_verbs.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 8c61532..64cc4ef 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -493,6 +493,7 @@ static inline struct rdma_hw_stats *rdma_alloc_hw_stats_struct(
 /* Address format                       0x000FF000 */
 #define RDMA_CORE_CAP_AF_IB             0x00001000
 #define RDMA_CORE_CAP_ETH_AH            0x00002000
+#define RDMA_CORE_CAP_OPA_AH            0x00004000
 
 /* Protocol                             0xFFF00000 */
 #define RDMA_CORE_CAP_PROT_IB           0x00100000
@@ -2519,6 +2520,21 @@ static inline bool rdma_cap_eth_ah(const struct ib_device *device, u8 port_num)
 }
 
 /**
+ * rdma_cap_opa_ah - Check if the port of device supports
+ * OPA Address handles
+ * @device: Device to check
+ * @port_num: Port number to check
+ *
+ * Return: true if we are running on an OPA device which supports
+ * the extended OPA addressing.
+ */
+static inline bool rdma_cap_opa_ah(struct ib_device *device, u8 port_num)
+{
+	return (device->port_immutable[port_num].core_cap_flags &
+		RDMA_CORE_CAP_OPA_AH) == RDMA_CORE_CAP_OPA_AH;
+}
+
+/**
  * rdma_max_mad_size - Return the max MAD size required by this RDMA Port.
  *
  * @device: Device
-- 
1.8.3.1


* [PATCH rdma-core 7/7] IB/SA: Add support to query opa classport info.
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (5 preceding siblings ...)
  2017-03-15 21:24   ` [PATCH rdma-core 6/7] IB/core: Add rdma_cap_opa_ah to expose opa address handles Dasaratharaman Chandramouli
@ 2017-03-15 21:24   ` Dasaratharaman Chandramouli
  2017-03-20  7:49   ` [PATCH rdma-core 0/7] Add support for OPA " Leon Romanovsky
  7 siblings, 0 replies; 12+ messages in thread
From: Dasaratharaman Chandramouli @ 2017-03-15 21:24 UTC (permalink / raw)
  To: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma

For OPA devices, the SA will query the OPA classport info
instead of the IB-defined classport info.
OPA classport info exposes additional information and
capabilities that are specific to OPA devices.

Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 drivers/infiniband/core/sa_query.c | 229 +++++++++++++++++++++++++++++--------
 drivers/infiniband/hw/hfi1/mad.c   |  25 ----
 include/rdma/ib_mad.h              |  25 ++++
 include/rdma/ib_sa.h               |   1 +
 4 files changed, 206 insertions(+), 74 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index bc32989..b2b0c1f 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -67,10 +67,23 @@ struct ib_sa_sm_ah {
 	u8		     src_path_mask;
 };
 
+enum rdma_class_port_info_type {
+	RDMA_CLASS_PORT_INFO_IB,
+	RDMA_CLASS_PORT_INFO_OPA
+};
+
+struct rdma_class_port_info {
+	enum rdma_class_port_info_type type;
+	union {
+		struct ib_class_port_info ib;
+		struct opa_class_port_info opa;
+	};
+};
+
 struct ib_sa_classport_cache {
 	bool valid;
 	int retry_cnt;
-	struct ib_class_port_info data;
+	struct rdma_class_port_info data;
 };
 
 struct ib_sa_port {
@@ -107,6 +120,7 @@ struct ib_sa_query {
 
 #define IB_SA_ENABLE_LOCAL_SERVICE	0x00000001
 #define IB_SA_CANCEL			0x00000002
+#define IB_SA_QUERY_OPA			0x00000004
 
 struct ib_sa_service_query {
 	void (*callback)(int, struct ib_sa_service_rec *, void *);
@@ -405,80 +419,162 @@ struct ib_sa_mcmember_query {
 	  .size_bits    = 2*64 },
 };
 
-#define CLASSPORTINFO_REC_FIELD(field) \
+#define IB_CLASSPORTINFO_REC_FIELD(field) \
 	.struct_offset_bytes = offsetof(struct ib_class_port_info, field),	\
 	.struct_size_bytes   = sizeof((struct ib_class_port_info *)0)->field,	\
 	.field_name          = "ib_class_port_info:" #field
 
-static const struct ib_field classport_info_rec_table[] = {
-	{ CLASSPORTINFO_REC_FIELD(base_version),
+static const struct ib_field ib_classport_info_rec_table[] = {
+	{ IB_CLASSPORTINFO_REC_FIELD(base_version),
 	  .offset_words = 0,
 	  .offset_bits  = 0,
 	  .size_bits    = 8 },
-	{ CLASSPORTINFO_REC_FIELD(class_version),
+	{ IB_CLASSPORTINFO_REC_FIELD(class_version),
 	  .offset_words = 0,
 	  .offset_bits  = 8,
 	  .size_bits    = 8 },
-	{ CLASSPORTINFO_REC_FIELD(capability_mask),
+	{ IB_CLASSPORTINFO_REC_FIELD(capability_mask),
 	  .offset_words = 0,
 	  .offset_bits  = 16,
 	  .size_bits    = 16 },
-	{ CLASSPORTINFO_REC_FIELD(cap_mask2_resp_time),
+	{ IB_CLASSPORTINFO_REC_FIELD(cap_mask2_resp_time),
 	  .offset_words = 1,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
-	{ CLASSPORTINFO_REC_FIELD(redirect_gid),
+	{ IB_CLASSPORTINFO_REC_FIELD(redirect_gid),
 	  .offset_words = 2,
 	  .offset_bits  = 0,
 	  .size_bits    = 128 },
-	{ CLASSPORTINFO_REC_FIELD(redirect_tcslfl),
+	{ IB_CLASSPORTINFO_REC_FIELD(redirect_tcslfl),
 	  .offset_words = 6,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
-	{ CLASSPORTINFO_REC_FIELD(redirect_lid),
+	{ IB_CLASSPORTINFO_REC_FIELD(redirect_lid),
 	  .offset_words = 7,
 	  .offset_bits  = 0,
 	  .size_bits    = 16 },
-	{ CLASSPORTINFO_REC_FIELD(redirect_pkey),
+	{ IB_CLASSPORTINFO_REC_FIELD(redirect_pkey),
 	  .offset_words = 7,
 	  .offset_bits  = 16,
 	  .size_bits    = 16 },
 
-	{ CLASSPORTINFO_REC_FIELD(redirect_qp),
+	{ IB_CLASSPORTINFO_REC_FIELD(redirect_qp),
 	  .offset_words = 8,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
-	{ CLASSPORTINFO_REC_FIELD(redirect_qkey),
+	{ IB_CLASSPORTINFO_REC_FIELD(redirect_qkey),
 	  .offset_words = 9,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
 
-	{ CLASSPORTINFO_REC_FIELD(trap_gid),
+	{ IB_CLASSPORTINFO_REC_FIELD(trap_gid),
 	  .offset_words = 10,
 	  .offset_bits  = 0,
 	  .size_bits    = 128 },
-	{ CLASSPORTINFO_REC_FIELD(trap_tcslfl),
+	{ IB_CLASSPORTINFO_REC_FIELD(trap_tcslfl),
 	  .offset_words = 14,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
 
-	{ CLASSPORTINFO_REC_FIELD(trap_lid),
+	{ IB_CLASSPORTINFO_REC_FIELD(trap_lid),
 	  .offset_words = 15,
 	  .offset_bits  = 0,
 	  .size_bits    = 16 },
-	{ CLASSPORTINFO_REC_FIELD(trap_pkey),
+	{ IB_CLASSPORTINFO_REC_FIELD(trap_pkey),
 	  .offset_words = 15,
 	  .offset_bits  = 16,
 	  .size_bits    = 16 },
 
-	{ CLASSPORTINFO_REC_FIELD(trap_hlqp),
+	{ IB_CLASSPORTINFO_REC_FIELD(trap_hlqp),
+	  .offset_words = 16,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ IB_CLASSPORTINFO_REC_FIELD(trap_qkey),
+	  .offset_words = 17,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+};
+
+#define OPA_CLASSPORTINFO_REC_FIELD(field) \
+	.struct_offset_bytes =\
+		offsetof(struct opa_class_port_info, field),	\
+	.struct_size_bytes   = \
+		sizeof((struct opa_class_port_info *)0)->field,	\
+	.field_name          = "opa_class_port_info:" #field
+
+static const struct ib_field opa_classport_info_rec_table[] = {
+	{ OPA_CLASSPORTINFO_REC_FIELD(base_version),
+	  .offset_words = 0,
+	  .offset_bits  = 0,
+	  .size_bits    = 8 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(class_version),
+	  .offset_words = 0,
+	  .offset_bits  = 8,
+	  .size_bits    = 8 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(cap_mask),
+	  .offset_words = 0,
+	  .offset_bits  = 16,
+	  .size_bits    = 16 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(cap_mask2_resp_time),
+	  .offset_words = 1,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(redirect_gid),
+	  .offset_words = 2,
+	  .offset_bits  = 0,
+	  .size_bits    = 128 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(redirect_tc_fl),
+	  .offset_words = 6,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(redirect_lid),
+	  .offset_words = 7,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(redirect_sl_qp),
+	  .offset_words = 8,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(redirect_qkey),
+	  .offset_words = 9,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_gid),
+	  .offset_words = 10,
+	  .offset_bits  = 0,
+	  .size_bits    = 128 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_tc_fl),
+	  .offset_words = 14,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_lid),
+	  .offset_words = 15,
+	  .offset_bits  = 0,
+	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_hl_qp),
 	  .offset_words = 16,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
-	{ CLASSPORTINFO_REC_FIELD(trap_qkey),
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_qkey),
 	  .offset_words = 17,
 	  .offset_bits  = 0,
 	  .size_bits    = 32 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_pkey),
+	  .offset_words = 18,
+	  .offset_bits  = 0,
+	  .size_bits    = 16 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(redirect_pkey),
+	  .offset_words = 18,
+	  .offset_bits  = 16,
+	  .size_bits    = 16 },
+	{ OPA_CLASSPORTINFO_REC_FIELD(trap_sl),
+	  .offset_words = 19,
+	  .offset_bits  = 0,
+	  .size_bits    = 8 },
+	{ RESERVED,
+	  .offset_words = 19,
+	  .offset_bits  = 8,
+	  .size_bits    = 24 },
 };
 
 #define GUIDINFO_REC_FIELD(field) \
@@ -1104,7 +1200,7 @@ int ib_init_ah_from_path(struct ib_device *device, u8 port_num,
 }
 EXPORT_SYMBOL(ib_init_ah_from_path);
 
-static int alloc_mad(struct ib_sa_query *query, gfp_t gfp_mask)
+static int alloc_mad(struct ib_sa_query *query, gfp_t gfp_mask, bool is_opa)
 {
 	unsigned long flags;
 
@@ -1116,12 +1212,12 @@ static int alloc_mad(struct ib_sa_query *query, gfp_t gfp_mask)
 	kref_get(&query->port->sm_ah->ref);
 	query->sm_ah = query->port->sm_ah;
 	spin_unlock_irqrestore(&query->port->ah_lock, flags);
-
 	query->mad_buf = ib_create_send_mad(query->port->agent, 1,
 					    query->sm_ah->pkey_index,
 					    0, IB_MGMT_SA_HDR, IB_MGMT_SA_DATA,
 					    gfp_mask,
-					    IB_MGMT_BASE_VERSION);
+					    ((is_opa) ? OPA_MGMT_BASE_VERSION :
+					     IB_MGMT_BASE_VERSION));
 	if (IS_ERR(query->mad_buf)) {
 		kref_put(&query->sm_ah->ref, free_sm_ah);
 		return -ENOMEM;
@@ -1138,16 +1234,21 @@ static void free_mad(struct ib_sa_query *query)
 	kref_put(&query->sm_ah->ref, free_sm_ah);
 }
 
-static void init_mad(struct ib_sa_mad *mad, struct ib_mad_agent *agent)
+static void init_mad(struct ib_sa_mad *mad, struct ib_mad_agent *agent,
+		     bool is_opa)
 {
 	unsigned long flags;
 
 	memset(mad, 0, sizeof *mad);
 
-	mad->mad_hdr.base_version  = IB_MGMT_BASE_VERSION;
+	if (is_opa) {
+		mad->mad_hdr.base_version  = OPA_MGMT_BASE_VERSION;
+		mad->mad_hdr.class_version = OPA_SA_CLASS_VERSION;
+	} else {
+		mad->mad_hdr.base_version  = IB_MGMT_BASE_VERSION;
+		mad->mad_hdr.class_version = IB_SA_CLASS_VERSION;
+	}
 	mad->mad_hdr.mgmt_class    = IB_MGMT_CLASS_SUBN_ADM;
-	mad->mad_hdr.class_version = IB_SA_CLASS_VERSION;
-
 	spin_lock_irqsave(&tid_lock, flags);
 	mad->mad_hdr.tid           =
 		cpu_to_be64(((u64) agent->hi_tid) << 32 | tid++);
@@ -1291,7 +1392,7 @@ int ib_sa_path_rec_get(struct ib_sa_client *client,
 		return -ENOMEM;
 
 	query->sa_query.port     = port;
-	ret = alloc_mad(&query->sa_query, gfp_mask);
+	ret = alloc_mad(&query->sa_query, gfp_mask, false);
 	if (ret)
 		goto err1;
 
@@ -1301,7 +1402,7 @@ int ib_sa_path_rec_get(struct ib_sa_client *client,
 	query->context         = context;
 
 	mad = query->sa_query.mad_buf->mad;
-	init_mad(mad, agent);
+	init_mad(mad, agent, false);
 
 	query->sa_query.callback = callback ? ib_sa_path_rec_callback : NULL;
 	query->sa_query.release  = ib_sa_path_rec_release;
@@ -1416,7 +1517,7 @@ int ib_sa_service_rec_query(struct ib_sa_client *client,
 		return -ENOMEM;
 
 	query->sa_query.port     = port;
-	ret = alloc_mad(&query->sa_query, gfp_mask);
+	ret = alloc_mad(&query->sa_query, gfp_mask, false);
 	if (ret)
 		goto err1;
 
@@ -1426,7 +1527,7 @@ int ib_sa_service_rec_query(struct ib_sa_client *client,
 	query->context         = context;
 
 	mad = query->sa_query.mad_buf->mad;
-	init_mad(mad, agent);
+	init_mad(mad, agent, false);
 
 	query->sa_query.callback = callback ? ib_sa_service_rec_callback : NULL;
 	query->sa_query.release  = ib_sa_service_rec_release;
@@ -1508,7 +1609,7 @@ int ib_sa_mcmember_rec_query(struct ib_sa_client *client,
 		return -ENOMEM;
 
 	query->sa_query.port     = port;
-	ret = alloc_mad(&query->sa_query, gfp_mask);
+	ret = alloc_mad(&query->sa_query, gfp_mask, false);
 	if (ret)
 		goto err1;
 
@@ -1518,7 +1619,7 @@ int ib_sa_mcmember_rec_query(struct ib_sa_client *client,
 	query->context         = context;
 
 	mad = query->sa_query.mad_buf->mad;
-	init_mad(mad, agent);
+	init_mad(mad, agent, false);
 
 	query->sa_query.callback = callback ? ib_sa_mcmember_rec_callback : NULL;
 	query->sa_query.release  = ib_sa_mcmember_rec_release;
@@ -1605,7 +1706,7 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
 		return -ENOMEM;
 
 	query->sa_query.port = port;
-	ret = alloc_mad(&query->sa_query, gfp_mask);
+	ret = alloc_mad(&query->sa_query, gfp_mask, false);
 	if (ret)
 		goto err1;
 
@@ -1615,7 +1716,7 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
 	query->context         = context;
 
 	mad = query->sa_query.mad_buf->mad;
-	init_mad(mad, agent);
+	init_mad(mad, agent, false);
 
 	query->sa_query.callback = callback ? ib_sa_guidinfo_rec_callback : NULL;
 	query->sa_query.release  = ib_sa_guidinfo_rec_release;
@@ -1661,9 +1762,11 @@ bool ib_sa_sendonly_fullmem_support(struct ib_sa_client *client,
 	port  = &sa_dev->port[port_num - sa_dev->start_port];
 
 	spin_lock_irqsave(&port->classport_lock, flags);
-	if (port->classport_info.valid)
-		ret = ib_get_cpi_capmask2(&port->classport_info.data) &
-			IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT;
+	if (port->classport_info.valid) {
+		if (port->classport_info.data.type == RDMA_CLASS_PORT_INFO_IB)
+			ret = ib_get_cpi_capmask2(&port->classport_info.data.ib)
+				& IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT;
+	}
 	spin_unlock_irqrestore(&port->classport_lock, flags);
 	return ret;
 }
@@ -1688,22 +1791,47 @@ static void ib_sa_classport_info_rec_callback(struct ib_sa_query *sa_query,
 	unsigned long flags;
 	struct ib_sa_classport_info_query *query =
 		container_of(sa_query, struct ib_sa_classport_info_query, sa_query);
+	struct ib_sa_classport_cache *info = &sa_query->port->classport_info;
 
 	if (mad) {
-		struct ib_class_port_info rec;
+		if (sa_query->flags & IB_SA_QUERY_OPA) {
+			struct opa_class_port_info rec;
 
-		ib_unpack(classport_info_rec_table,
-			  ARRAY_SIZE(classport_info_rec_table),
-			  mad->data, &rec);
+			ib_unpack(opa_classport_info_rec_table,
+				  ARRAY_SIZE(opa_classport_info_rec_table),
+				  mad->data, &rec);
+
+			spin_lock_irqsave(&sa_query->port->classport_lock,
+					  flags);
+			if (!status && !info->valid) {
+				memcpy(&info->data.opa, &rec,
+				       sizeof(info->data.opa));
+
+				info->valid = true;
+				info->data.type = RDMA_CLASS_PORT_INFO_OPA;
+			}
+			spin_unlock_irqrestore(&sa_query->port->classport_lock,
+					       flags);
+
+		} else {
+			struct ib_class_port_info rec;
 
-		spin_lock_irqsave(&sa_query->port->classport_lock, flags);
-		if (!status && !sa_query->port->classport_info.valid) {
-			memcpy(&sa_query->port->classport_info.data, &rec,
-			       sizeof(sa_query->port->classport_info.data));
+			ib_unpack(ib_classport_info_rec_table,
+				  ARRAY_SIZE(ib_classport_info_rec_table),
+				  mad->data, &rec);
 
-			sa_query->port->classport_info.valid = true;
+			spin_lock_irqsave(&sa_query->port->classport_lock,
+					  flags);
+			if (!status && !info->valid) {
+				memcpy(&info->data.ib, &rec,
+				       sizeof(info->data.ib));
+
+				info->valid = true;
+				info->data.type = RDMA_CLASS_PORT_INFO_IB;
+			}
+			spin_unlock_irqrestore(&sa_query->port->classport_lock,
+					       flags);
 		}
-		spin_unlock_irqrestore(&sa_query->port->classport_lock, flags);
 	}
 	query->callback(query->context);
 }
@@ -1725,6 +1853,8 @@ static int ib_sa_classport_info_rec_query(struct ib_sa_port *port,
 	struct ib_sa_mad *mad;
 	gfp_t gfp_mask = GFP_KERNEL;
 	int ret;
+	bool is_opa = rdma_cap_opa_ah(port->agent->device,
+				      port->port_num);
 
 	agent = port->agent;
 
@@ -1733,7 +1863,8 @@ static int ib_sa_classport_info_rec_query(struct ib_sa_port *port,
 		return -ENOMEM;
 
 	query->sa_query.port = port;
-	ret = alloc_mad(&query->sa_query, gfp_mask);
+	query->sa_query.flags |= (is_opa) ? IB_SA_QUERY_OPA : 0;
+	ret = alloc_mad(&query->sa_query, gfp_mask, is_opa);
 	if (ret)
 		goto err_free;
 
@@ -1741,7 +1872,7 @@ static int ib_sa_classport_info_rec_query(struct ib_sa_port *port,
 	query->context = context;
 
 	mad = query->sa_query.mad_buf->mad;
-	init_mad(mad, agent);
+	init_mad(mad, agent, is_opa);
 
 	query->sa_query.callback = ib_sa_classport_info_rec_callback;
 	query->sa_query.release  = ib_sa_classport_info_rec_release;
diff --git a/drivers/infiniband/hw/hfi1/mad.c b/drivers/infiniband/hw/hfi1/mad.c
index 6e595af..c3d0e48 100644
--- a/drivers/infiniband/hw/hfi1/mad.c
+++ b/drivers/infiniband/hw/hfi1/mad.c
@@ -1986,31 +1986,6 @@ struct opa_pma_mad {
 	u8 data[2024];
 } __packed;
 
-struct opa_class_port_info {
-	u8 base_version;
-	u8 class_version;
-	__be16 cap_mask;
-	__be32 cap_mask2_resp_time;
-
-	u8 redirect_gid[16];
-	__be32 redirect_tc_fl;
-	__be32 redirect_lid;
-	__be32 redirect_sl_qp;
-	__be32 redirect_qkey;
-
-	u8 trap_gid[16];
-	__be32 trap_tc_fl;
-	__be32 trap_lid;
-	__be32 trap_hl_qp;
-	__be32 trap_qkey;
-
-	__be16 trap_pkey;
-	__be16 redirect_pkey;
-
-	u8 trap_sl_rsvd;
-	u8 reserved[3];
-} __packed;
-
 struct opa_port_status_req {
 	__u8 port_num;
 	__u8 reserved[3];
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index 981214b..8e75f5d 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -262,6 +262,31 @@ struct ib_class_port_info {
 	__be32			trap_qkey;
 };
 
+struct opa_class_port_info {
+	u8 base_version;
+	u8 class_version;
+	__be16 cap_mask;
+	__be32 cap_mask2_resp_time;
+
+	u8 redirect_gid[16];
+	__be32 redirect_tc_fl;
+	__be32 redirect_lid;
+	__be32 redirect_sl_qp;
+	__be32 redirect_qkey;
+
+	u8 trap_gid[16];
+	__be32 trap_tc_fl;
+	__be32 trap_lid;
+	__be32 trap_hl_qp;
+	__be32 trap_qkey;
+
+	__be16 trap_pkey;
+	__be16 redirect_pkey;
+
+	u8 trap_sl;
+	u8 reserved[3];
+} __packed;
+
 /**
  * ib_get_cpi_resp_time - Returns the resp_time value from
  * cap_mask2_resp_time in ib_class_port_info.
diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
index 46838c8..843b562 100644
--- a/include/rdma/ib_sa.h
+++ b/include/rdma/ib_sa.h
@@ -56,6 +56,7 @@ enum {
 	IB_SA_METHOD_GET_TRACE_TBL	= 0x13
 };
 
+#define OPA_SA_CLASS_VERSION	0x80
 enum {
 	IB_SA_ATTR_CLASS_PORTINFO    = 0x01,
 	IB_SA_ATTR_NOTICE	     = 0x02,
-- 
1.8.3.1


* Re: [PATCH rdma-core 5/7] IB/SA: Modify SA to implicitly cache Class Port info
       [not found]     ` <1489613066-61684-6-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
@ 2017-03-16 12:59       ` Hal Rosenstock
       [not found]         ` <c01d0080-26da-8eb9-59b6-6d959457ea0c-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Hal Rosenstock @ 2017-03-16 12:59 UTC (permalink / raw)
  To: Dasaratharaman Chandramouli, Don Hiatt, Ira Weiny, Doug Ledford,
	linux-rdma

On 3/15/2017 5:24 PM, Dasaratharaman Chandramouli wrote:
> SA will query and cache class port info as part of
> its initialization. SA will also invalidate and
> refresh the cache based on specific events. Callers such
> as IPoIB and CM can query the SA to get the classportinfo
> information. Apart from making the caller code much simpler,
> this change puts the onus on the SA to query and maintain
> classportinfo much like how it maintains the address handle to the SM.
> 
> Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
> Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
> Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
> ---
>  drivers/infiniband/core/cma.c                  |  76 ++---------
>  drivers/infiniband/core/sa_query.c             | 179 ++++++++++++++++++-------
>  drivers/infiniband/ulp/ipoib/ipoib.h           |   1 -
>  drivers/infiniband/ulp/ipoib/ipoib_main.c      |  71 ----------
>  drivers/infiniband/ulp/ipoib/ipoib_multicast.c |   9 +-
>  include/rdma/ib_sa.h                           |  12 +-
>  6 files changed, 142 insertions(+), 206 deletions(-)
> 
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index 5ed6ec9..421400a 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
> @@ -3943,63 +3943,10 @@ static void cma_set_mgid(struct rdma_id_private *id_priv,
>  	}
>  }
>  
> -static void cma_query_sa_classport_info_cb(int status,
> -					   struct ib_class_port_info *rec,
> -					   void *context)
> -{
> -	struct class_port_info_context *cb_ctx = context;
> -
> -	WARN_ON(!context);
> -
> -	if (status || !rec) {
> -		pr_debug("RDMA CM: %s port %u failed query ClassPortInfo status: %d\n",
> -			 cb_ctx->device->name, cb_ctx->port_num, status);
> -		goto out;
> -	}
> -
> -	memcpy(cb_ctx->class_port_info, rec, sizeof(struct ib_class_port_info));
> -
> -out:
> -	complete(&cb_ctx->done);
> -}
> -
> -static int cma_query_sa_classport_info(struct ib_device *device, u8 port_num,
> -				       struct ib_class_port_info *class_port_info)
> -{
> -	struct class_port_info_context *cb_ctx;
> -	int ret;
> -
> -	cb_ctx = kmalloc(sizeof(*cb_ctx), GFP_KERNEL);
> -	if (!cb_ctx)
> -		return -ENOMEM;
> -
> -	cb_ctx->device = device;
> -	cb_ctx->class_port_info = class_port_info;
> -	cb_ctx->port_num = port_num;
> -	init_completion(&cb_ctx->done);
> -
> -	ret = ib_sa_classport_info_rec_query(&sa_client, device, port_num,
> -					     CMA_QUERY_CLASSPORT_INFO_TIMEOUT,
> -					     GFP_KERNEL, cma_query_sa_classport_info_cb,
> -					     cb_ctx, &cb_ctx->sa_query);
> -	if (ret < 0) {
> -		pr_err("RDMA CM: %s port %u failed to send ClassPortInfo query, ret: %d\n",
> -		       device->name, port_num, ret);
> -		goto out;
> -	}
> -
> -	wait_for_completion(&cb_ctx->done);
> -
> -out:
> -	kfree(cb_ctx);
> -	return ret;
> -}
> -
>  static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
>  				 struct cma_multicast *mc)
>  {
>  	struct ib_sa_mcmember_rec rec;
> -	struct ib_class_port_info class_port_info;
>  	struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
>  	ib_sa_comp_mask comp_mask;
>  	int ret;
> @@ -4020,21 +3967,14 @@ static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
>  	rec.pkey = cpu_to_be16(ib_addr_get_pkey(dev_addr));
>  	rec.join_state = mc->join_state;
>  
> -	if (rec.join_state == BIT(SENDONLY_FULLMEMBER_JOIN)) {
> -		ret = cma_query_sa_classport_info(id_priv->id.device,
> -						  id_priv->id.port_num,
> -						  &class_port_info);
> -
> -		if (ret)
> -			return ret;
> -
> -		if (!(ib_get_cpi_capmask2(&class_port_info) &
> -		      IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT)) {
> -			pr_warn("RDMA CM: %s port %u Unable to multicast join\n"
> -				"RDMA CM: SM doesn't support Send Only Full Member option\n",
> -				id_priv->id.device->name, id_priv->id.port_num);
> -			return -EOPNOTSUPP;
> -		}
> +	if ((rec.join_state == BIT(SENDONLY_FULLMEMBER_JOIN)) &&
> +	    (!ib_sa_sendonly_fullmem_support(&sa_client,
> +					     id_priv->id.device,
> +					     id_priv->id.port_num))) {
> +		pr_warn("RDMA CM: %s port %u Unable to multicast join\n"
> +			"RDMA CM: SM doesn't support Send Only Full Member option\n",
> +			id_priv->id.device->name, id_priv->id.port_num);
> +		return -EOPNOTSUPP;
>  	}
>  
>  	comp_mask = IB_SA_MCMEMBER_REC_MGID | IB_SA_MCMEMBER_REC_PORT_GID |
> diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
> index 2181f8c..bc32989 100644
> --- a/drivers/infiniband/core/sa_query.c
> +++ b/drivers/infiniband/core/sa_query.c
> @@ -56,6 +56,8 @@
>  #define IB_SA_LOCAL_SVC_TIMEOUT_MIN		100
>  #define IB_SA_LOCAL_SVC_TIMEOUT_DEFAULT		2000
>  #define IB_SA_LOCAL_SVC_TIMEOUT_MAX		200000
> +#define IB_SA_CPI_MAX_RETRY_CNT			3
> +#define IB_SA_CPI_RETRY_WAIT			1000 /*msecs */
>  static int sa_local_svc_timeout_ms = IB_SA_LOCAL_SVC_TIMEOUT_DEFAULT;
>  
>  struct ib_sa_sm_ah {
> @@ -67,6 +69,7 @@ struct ib_sa_sm_ah {
>  
>  struct ib_sa_classport_cache {
>  	bool valid;
> +	int retry_cnt;
>  	struct ib_class_port_info data;
>  };
>  
> @@ -75,6 +78,7 @@ struct ib_sa_port {
>  	struct ib_sa_sm_ah  *sm_ah;
>  	struct work_struct   update_task;
>  	struct ib_sa_classport_cache classport_info;
> +	struct delayed_work ib_cpi_work;
>  	spinlock_t                   classport_lock; /* protects class port info set */
>  	spinlock_t           ah_lock;
>  	u8                   port_num;
> @@ -123,7 +127,7 @@ struct ib_sa_guidinfo_query {
>  };
>  
>  struct ib_sa_classport_info_query {
> -	void (*callback)(int, struct ib_class_port_info *, void *);
> +	void (*callback)(void *);
>  	void *context;
>  	struct ib_sa_query sa_query;
>  };
> @@ -1642,7 +1646,41 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
>  }
>  EXPORT_SYMBOL(ib_sa_guid_info_rec_query);
>  
> -/* Support get SA ClassPortInfo */
> +bool ib_sa_sendonly_fullmem_support(struct ib_sa_client *client,
> +				    struct ib_device *device,
> +				    u8 port_num)
> +{
> +	struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
> +	struct ib_sa_port *port;
> +	bool ret = false;
> +	unsigned long flags;
> +
> +	if (!sa_dev)
> +		return ret;
> +
> +	port  = &sa_dev->port[port_num - sa_dev->start_port];
> +
> +	spin_lock_irqsave(&port->classport_lock, flags);
> +	if (port->classport_info.valid)
> +		ret = ib_get_cpi_capmask2(&port->classport_info.data) &
> +			IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT;
> +	spin_unlock_irqrestore(&port->classport_lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(ib_sa_sendonly_fullmem_support);
> +
> +struct ib_classport_info_context {
> +	struct completion	done;
> +	struct ib_sa_query	*sa_query;
> +};
> +
> +static void ib_classportinfo_cb(void *context)
> +{
> +	struct ib_classport_info_context *cb_ctx = context;
> +
> +	complete(&cb_ctx->done);
> +}
> +
>  static void ib_sa_classport_info_rec_callback(struct ib_sa_query *sa_query,
>  					      int status,
>  					      struct ib_sa_mad *mad)
> @@ -1666,54 +1704,30 @@ static void ib_sa_classport_info_rec_callback(struct ib_sa_query *sa_query,
>  			sa_query->port->classport_info.valid = true;
>  		}
>  		spin_unlock_irqrestore(&sa_query->port->classport_lock, flags);
> -
> -		query->callback(status, &rec, query->context);
> -	} else {
> -		query->callback(status, NULL, query->context);
>  	}
> +	query->callback(query->context);
>  }
>  
> -static void ib_sa_portclass_info_rec_release(struct ib_sa_query *sa_query)
> +static void ib_sa_classport_info_rec_release(struct ib_sa_query *sa_query)
>  {
>  	kfree(container_of(sa_query, struct ib_sa_classport_info_query,
>  			   sa_query));
>  }
>  
> -int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
> -				   struct ib_device *device, u8 port_num,
> -				   int timeout_ms, gfp_t gfp_mask,
> -				   void (*callback)(int status,
> -						    struct ib_class_port_info *resp,
> -						    void *context),
> -				   void *context,
> -				   struct ib_sa_query **sa_query)
> +static int ib_sa_classport_info_rec_query(struct ib_sa_port *port,
> +					  int timeout_ms,
> +					  void (*callback)(void *context),
> +					  void *context,
> +					  struct ib_sa_query **sa_query)
>  {
> -	struct ib_sa_classport_info_query *query;
> -	struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
> -	struct ib_sa_port *port;
>  	struct ib_mad_agent *agent;
> +	struct ib_sa_classport_info_query *query;
>  	struct ib_sa_mad *mad;
> -	struct ib_class_port_info cached_class_port_info;
> +	gfp_t gfp_mask = GFP_KERNEL;
>  	int ret;
> -	unsigned long flags;
> -
> -	if (!sa_dev)
> -		return -ENODEV;
>  
> -	port  = &sa_dev->port[port_num - sa_dev->start_port];
>  	agent = port->agent;
>  
> -	/* Use cached ClassPortInfo attribute if valid instead of sending mad */
> -	spin_lock_irqsave(&port->classport_lock, flags);
> -	if (port->classport_info.valid && callback) {
> -		memcpy(&cached_class_port_info, &port->classport_info.data,
> -		       sizeof(cached_class_port_info));
> -		spin_unlock_irqrestore(&port->classport_lock, flags);
> -		callback(0, &cached_class_port_info, context);
> -		return 0;
> -	}
> -	spin_unlock_irqrestore(&port->classport_lock, flags);
> -
>  	query = kzalloc(sizeof(*query), gfp_mask);
>  	if (!query)
>  		return -ENOMEM;
> @@ -1721,20 +1735,16 @@ int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
>  	query->sa_query.port = port;
>  	ret = alloc_mad(&query->sa_query, gfp_mask);
>  	if (ret)
> -		goto err1;
> +		goto err_free;
>  
> -	ib_sa_client_get(client);
> -	query->sa_query.client = client;
> -	query->callback        = callback;
> -	query->context         = context;
> +	query->callback = callback;
> +	query->context = context;
>  
>  	mad = query->sa_query.mad_buf->mad;
>  	init_mad(mad, agent);
>  
> -	query->sa_query.callback = callback ? ib_sa_classport_info_rec_callback : NULL;
> -
> -	query->sa_query.release  = ib_sa_portclass_info_rec_release;
> -	/* support GET only */
> +	query->sa_query.callback = ib_sa_classport_info_rec_callback;
> +	query->sa_query.release  = ib_sa_classport_info_rec_release;
>  	mad->mad_hdr.method	 = IB_MGMT_METHOD_GET;
>  	mad->mad_hdr.attr_id	 = cpu_to_be16(IB_SA_ATTR_CLASS_PORTINFO);
>  	mad->sa_hdr.comp_mask	 = 0;
> @@ -1742,20 +1752,71 @@ int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
>  
>  	ret = send_mad(&query->sa_query, timeout_ms, gfp_mask);
>  	if (ret < 0)
> -		goto err2;
> +		goto err_free_mad;
>  
>  	return ret;
>  
> -err2:
> +err_free_mad:
>  	*sa_query = NULL;
> -	ib_sa_client_put(query->sa_query.client);
>  	free_mad(&query->sa_query);
>  
> -err1:
> +err_free:
>  	kfree(query);
>  	return ret;
>  }
> -EXPORT_SYMBOL(ib_sa_classport_info_rec_query);
> +
> +static void update_ib_cpi(struct work_struct *work)
> +{
> +	struct ib_sa_port *port =
> +		container_of(work, struct ib_sa_port, ib_cpi_work.work);
> +	struct ib_classport_info_context *cb_context;
> +	unsigned long flags;
> +	int ret;
> +
> +	/* If the classport info is valid, nothing
> +	 * to do here.
> +	 */
> +	spin_lock_irqsave(&port->classport_lock, flags);
> +	if (port->classport_info.valid) {
> +		spin_unlock_irqrestore(&port->classport_lock, flags);
> +		return;
> +	}
> +	spin_unlock_irqrestore(&port->classport_lock, flags);
> +
> +	cb_context = kmalloc(sizeof(*cb_context), GFP_KERNEL);
> +	if (!cb_context)
> +		goto err_nomem;
> +
> +	init_completion(&cb_context->done);
> +
> +	ret = ib_sa_classport_info_rec_query(port, 3000,
> +					     ib_classportinfo_cb, cb_context,
> +					     &cb_context->sa_query);
> +	if (ret < 0)
> +		goto free_cb_err;
> +	wait_for_completion(&cb_context->done);
> +free_cb_err:
> +	kfree(cb_context);
> +	spin_lock_irqsave(&port->classport_lock, flags);
> +
> +	/* If the classport info is still not valid, the query should have
> +	 * failed for some reason. Retry issuing the query
> +	 */
> +	if (!port->classport_info.valid) {
> +		port->classport_info.retry_cnt++;
> +		if (port->classport_info.retry_cnt <=
> +		    IB_SA_CPI_MAX_RETRY_CNT) {
> +			unsigned long delay =
> +				msecs_to_jiffies(IB_SA_CPI_RETRY_WAIT);
> +
> +			queue_delayed_work(ib_wq, &port->ib_cpi_work, delay);
> +		}
> +	}
> +	spin_unlock_irqrestore(&port->classport_lock, flags);
> +
> +err_nomem:
> +	return;
> +}
>  
>  static void send_handler(struct ib_mad_agent *agent,
>  			 struct ib_mad_send_wc *mad_send_wc)
> @@ -1784,7 +1845,8 @@ static void send_handler(struct ib_mad_agent *agent,
>  	spin_unlock_irqrestore(&idr_lock, flags);
>  
>  	free_mad(query);
> -	ib_sa_client_put(query->client);
> +	if (query->client)
> +		ib_sa_client_put(query->client);
>  	query->release(query);
>  }
>  
> @@ -1894,6 +1956,19 @@ static void ib_sa_event(struct ib_event_handler *handler,
>  			spin_unlock_irqrestore(&port->classport_lock, flags);
>  		}
>  		queue_work(ib_wq, &sa_dev->port[port_num].update_task);
> +
> +		/* Query for class port info on a re-register event */
> +		if ((event->event == IB_EVENT_CLIENT_REREGISTER) ||
> +		    (event->event == IB_EVENT_PORT_ACTIVE)) {

Since SA CPI is invalidated on SM change and LID change events,
shouldn't these events also be included here to retrigger the SA CPI query?

-- Hal

> +			unsigned long delay =
> +				msecs_to_jiffies(IB_SA_CPI_RETRY_WAIT);
> +
> +			spin_lock_irqsave(&port->classport_lock, flags);
> +			port->classport_info.retry_cnt = 0;
> +			spin_unlock_irqrestore(&port->classport_lock, flags);
> +			queue_delayed_work(ib_wq,
> +					   &port->ib_cpi_work, delay);
> +		}
>  	}
>  }
>  
> @@ -1934,6 +2009,8 @@ static void ib_sa_add_one(struct ib_device *device)
>  			goto err;
>  
>  		INIT_WORK(&sa_dev->port[i].update_task, update_sm_ah);
> +		INIT_DELAYED_WORK(&sa_dev->port[i].ib_cpi_work,
> +				  update_ib_cpi);
>  
>  		count++;
>  	}
> @@ -1980,11 +2057,11 @@ static void ib_sa_remove_one(struct ib_device *device, void *client_data)
>  		return;
>  
>  	ib_unregister_event_handler(&sa_dev->event_handler);
> -
>  	flush_workqueue(ib_wq);
>  
>  	for (i = 0; i <= sa_dev->end_port - sa_dev->start_port; ++i) {
>  		if (rdma_cap_ib_sa(device, i + 1)) {
> +			cancel_delayed_work_sync(&sa_dev->port[i].ib_cpi_work);
>  			ib_unregister_mad_agent(sa_dev->port[i].agent);
>  			if (sa_dev->port[i].sm_ah)
>  				kref_put(&sa_dev->port[i].sm_ah->ref, free_sm_ah);
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
> index bed233b..060e543 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib.h
> +++ b/drivers/infiniband/ulp/ipoib/ipoib.h
> @@ -489,7 +489,6 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
>  struct ipoib_path *__path_find(struct net_device *dev, void *gid);
>  void ipoib_mark_paths_invalid(struct net_device *dev);
>  void ipoib_flush_paths(struct net_device *dev);
> -int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv);
>  struct ipoib_dev_priv *ipoib_intf_alloc(const char *format);
>  
>  int ipoib_ib_dev_init(struct net_device *dev, struct ib_device *ca, int port);
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
> index 259c59f..1c70ae9 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
> @@ -650,77 +650,6 @@ void ipoib_mark_paths_invalid(struct net_device *dev)
>  	spin_unlock_irq(&priv->lock);
>  }
>  
> -struct classport_info_context {
> -	struct ipoib_dev_priv	*priv;
> -	struct completion	done;
> -	struct ib_sa_query	*sa_query;
> -};
> -
> -static void classport_info_query_cb(int status, struct ib_class_port_info *rec,
> -				    void *context)
> -{
> -	struct classport_info_context *cb_ctx = context;
> -	struct ipoib_dev_priv *priv;
> -
> -	WARN_ON(!context);
> -
> -	priv = cb_ctx->priv;
> -
> -	if (status || !rec) {
> -		pr_debug("device: %s failed query classport_info status: %d\n",
> -			 priv->dev->name, status);
> -		/* keeps the default, will try next mcast_restart */
> -		priv->sm_fullmember_sendonly_support = false;
> -		goto out;
> -	}
> -
> -	if (ib_get_cpi_capmask2(rec) &
> -	    IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT) {
> -		pr_debug("device: %s enabled fullmember-sendonly for sendonly MCG\n",
> -			 priv->dev->name);
> -		priv->sm_fullmember_sendonly_support = true;
> -	} else {
> -		pr_debug("device: %s disabled fullmember-sendonly for sendonly MCG\n",
> -			 priv->dev->name);
> -		priv->sm_fullmember_sendonly_support = false;
> -	}
> -
> -out:
> -	complete(&cb_ctx->done);
> -}
> -
> -int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv)
> -{
> -	struct classport_info_context *callback_context;
> -	int ret;
> -
> -	callback_context = kmalloc(sizeof(*callback_context), GFP_KERNEL);
> -	if (!callback_context)
> -		return -ENOMEM;
> -
> -	callback_context->priv = priv;
> -	init_completion(&callback_context->done);
> -
> -	ret = ib_sa_classport_info_rec_query(&ipoib_sa_client,
> -					     priv->ca, priv->port, 3000,
> -					     GFP_KERNEL,
> -					     classport_info_query_cb,
> -					     callback_context,
> -					     &callback_context->sa_query);
> -	if (ret < 0) {
> -		pr_info("%s failed to send ib_sa_classport_info query, ret: %d\n",
> -			priv->dev->name, ret);
> -		kfree(callback_context);
> -		return ret;
> -	}
> -
> -	/* waiting for the callback to finish before returnning */
> -	wait_for_completion(&callback_context->done);
> -	kfree(callback_context);
> -
> -	return ret;
> -}
> -
>  static void push_pseudo_header(struct sk_buff *skb, const char *daddr)
>  {
>  	struct ipoib_pseudo_header *phdr;
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> index 69e146c..3e3a84f 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> @@ -331,7 +331,6 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
>  	struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv,
>  						   carrier_on_task);
>  	struct ib_port_attr attr;
> -	int ret;
>  
>  	if (ib_query_port(priv->ca, priv->port, &attr) ||
>  	    attr.state != IB_PORT_ACTIVE) {
> @@ -344,11 +343,9 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
>  	 * because the broadcast group must always be joined first and is always
>  	 * re-joined if the SM changes substantially.
>  	 */
> -	ret = ipoib_check_sm_sendonly_fullmember_support(priv);
> -	if (ret < 0)
> -		pr_debug("%s failed query sm support for sendonly-fullmember (ret: %d)\n",
> -			 priv->dev->name, ret);
> -
> +	priv->sm_fullmember_sendonly_support =
> +		ib_sa_sendonly_fullmem_support(&ipoib_sa_client,
> +					       priv->ca, priv->port);
>  	/*
>  	 * Take rtnl_lock to avoid racing with ipoib_stop() and
>  	 * turning the carrier back on while a device is being
> diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
> index fd0e532..46838c8 100644
> --- a/include/rdma/ib_sa.h
> +++ b/include/rdma/ib_sa.h
> @@ -454,14 +454,8 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
>  			      void *context,
>  			      struct ib_sa_query **sa_query);
>  
> -/* Support get SA ClassPortInfo */
> -int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
> -				   struct ib_device *device, u8 port_num,
> -				   int timeout_ms, gfp_t gfp_mask,
> -				   void (*callback)(int status,
> -						    struct ib_class_port_info *resp,
> -						    void *context),
> -				   void *context,
> -				   struct ib_sa_query **sa_query);
> +bool ib_sa_sendonly_fullmem_support(struct ib_sa_client *client,
> +				    struct ib_device *device,
> +				    u8 port_num);
>  
>  #endif /* IB_SA_H */
> 
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-core 5/7] IB/SA: Modify SA to implicity cache Class Port info
       [not found]         ` <c01d0080-26da-8eb9-59b6-6d959457ea0c-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2017-03-16 18:27           ` Chandramouli, Dasaratharaman
  0 siblings, 0 replies; 12+ messages in thread
From: Chandramouli, Dasaratharaman @ 2017-03-16 18:27 UTC (permalink / raw)
  To: Hal Rosenstock, Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma



On 3/16/2017 5:59 AM, Hal Rosenstock wrote:
> On 3/15/2017 5:24 PM, Dasaratharaman Chandramouli wrote:
>> SA will query and cache class port info as part of
>> its initialization. SA will also invalidate and
>> refresh the cache based on specific events. Callers such
>> as IPoIB and CM can query the SA to get the classportinfo
>> information. Apart from making the caller code much simpler,
>> this change puts the onus on the SA to query and maintain
>> classportinfo, much like how it maintains the address handle to the SM.
>>
>> Reviewed-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
>> Reviewed-by: Don Hiatt <don.hiatt-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
>> Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
>> ---
>>  drivers/infiniband/core/cma.c                  |  76 ++---------
>>  drivers/infiniband/core/sa_query.c             | 179 ++++++++++++++++++-------
>>  drivers/infiniband/ulp/ipoib/ipoib.h           |   1 -
>>  drivers/infiniband/ulp/ipoib/ipoib_main.c      |  71 ----------
>>  drivers/infiniband/ulp/ipoib/ipoib_multicast.c |   9 +-
>>  include/rdma/ib_sa.h                           |  12 +-
>>  6 files changed, 142 insertions(+), 206 deletions(-)
>>
>> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
>> index 5ed6ec9..421400a 100644
>> --- a/drivers/infiniband/core/cma.c
>> +++ b/drivers/infiniband/core/cma.c
>> @@ -3943,63 +3943,10 @@ static void cma_set_mgid(struct rdma_id_private *id_priv,
>>  	}
>>  }
>>
>> -static void cma_query_sa_classport_info_cb(int status,
>> -					   struct ib_class_port_info *rec,
>> -					   void *context)
>> -{
>> -	struct class_port_info_context *cb_ctx = context;
>> -
>> -	WARN_ON(!context);
>> -
>> -	if (status || !rec) {
>> -		pr_debug("RDMA CM: %s port %u failed query ClassPortInfo status: %d\n",
>> -			 cb_ctx->device->name, cb_ctx->port_num, status);
>> -		goto out;
>> -	}
>> -
>> -	memcpy(cb_ctx->class_port_info, rec, sizeof(struct ib_class_port_info));
>> -
>> -out:
>> -	complete(&cb_ctx->done);
>> -}
>> -
>> -static int cma_query_sa_classport_info(struct ib_device *device, u8 port_num,
>> -				       struct ib_class_port_info *class_port_info)
>> -{
>> -	struct class_port_info_context *cb_ctx;
>> -	int ret;
>> -
>> -	cb_ctx = kmalloc(sizeof(*cb_ctx), GFP_KERNEL);
>> -	if (!cb_ctx)
>> -		return -ENOMEM;
>> -
>> -	cb_ctx->device = device;
>> -	cb_ctx->class_port_info = class_port_info;
>> -	cb_ctx->port_num = port_num;
>> -	init_completion(&cb_ctx->done);
>> -
>> -	ret = ib_sa_classport_info_rec_query(&sa_client, device, port_num,
>> -					     CMA_QUERY_CLASSPORT_INFO_TIMEOUT,
>> -					     GFP_KERNEL, cma_query_sa_classport_info_cb,
>> -					     cb_ctx, &cb_ctx->sa_query);
>> -	if (ret < 0) {
>> -		pr_err("RDMA CM: %s port %u failed to send ClassPortInfo query, ret: %d\n",
>> -		       device->name, port_num, ret);
>> -		goto out;
>> -	}
>> -
>> -	wait_for_completion(&cb_ctx->done);
>> -
>> -out:
>> -	kfree(cb_ctx);
>> -	return ret;
>> -}
>> -
>>  static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
>>  				 struct cma_multicast *mc)
>>  {
>>  	struct ib_sa_mcmember_rec rec;
>> -	struct ib_class_port_info class_port_info;
>>  	struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
>>  	ib_sa_comp_mask comp_mask;
>>  	int ret;
>> [...]
>>
>> @@ -1894,6 +1956,19 @@ static void ib_sa_event(struct ib_event_handler *handler,
>>  			spin_unlock_irqrestore(&port->classport_lock, flags);
>>  		}
>>  		queue_work(ib_wq, &sa_dev->port[port_num].update_task);
>> +
>> +		/* Query for class port info on a re-register event */
>> +		if ((event->event == IB_EVENT_CLIENT_REREGISTER) ||
>> +		    (event->event == IB_EVENT_PORT_ACTIVE)) {
>
> Since SA CPI is invalidated on SM change and LID change events,
> shouldn't these events also be included here to retrigger the SA CPI query?
>
> -- Hal
>

It certainly makes sense to trigger them on an SM change event. A LID
change doesn't necessarily mean that the SM is up and in a state to
respond to a CPI query. In any case, I will issue a retrigger on all
events that invalidate the cache.

>> [...]
>>  	    attr.state != IB_PORT_ACTIVE) {
>> @@ -344,11 +343,9 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
>>  	 * because the broadcast group must always be joined first and is always
>>  	 * re-joined if the SM changes substantially.
>>  	 */
>> -	ret = ipoib_check_sm_sendonly_fullmember_support(priv);
>> -	if (ret < 0)
>> -		pr_debug("%s failed query sm support for sendonly-fullmember (ret: %d)\n",
>> -			 priv->dev->name, ret);
>> -
>> +	priv->sm_fullmember_sendonly_support =
>> +		ib_sa_sendonly_fullmem_support(&ipoib_sa_client,
>> +					       priv->ca, priv->port);
>>  	/*
>>  	 * Take rtnl_lock to avoid racing with ipoib_stop() and
>>  	 * turning the carrier back on while a device is being
>> diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
>> index fd0e532..46838c8 100644
>> --- a/include/rdma/ib_sa.h
>> +++ b/include/rdma/ib_sa.h
>> @@ -454,14 +454,8 @@ int ib_sa_guid_info_rec_query(struct ib_sa_client *client,
>>  			      void *context,
>>  			      struct ib_sa_query **sa_query);
>>
>> -/* Support get SA ClassPortInfo */
>> -int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
>> -				   struct ib_device *device, u8 port_num,
>> -				   int timeout_ms, gfp_t gfp_mask,
>> -				   void (*callback)(int status,
>> -						    struct ib_class_port_info *resp,
>> -						    void *context),
>> -				   void *context,
>> -				   struct ib_sa_query **sa_query);
>> +bool ib_sa_sendonly_fullmem_support(struct ib_sa_client *client,
>> +				    struct ib_device *device,
>> +				    u8 port_num);
>>
>>  #endif /* IB_SA_H */
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


* Re: [PATCH rdma-core 0/7] Add support for OPA classport info
       [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (6 preceding siblings ...)
  2017-03-15 21:24   ` [PATCH rdma-core 7/7] IB/SA: Add support to query opa classport info Dasaratharaman Chandramouli
@ 2017-03-20  7:49   ` Leon Romanovsky
       [not found]     ` <20170320074911.GW2079-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  7 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2017-03-20  7:49 UTC (permalink / raw)
  To: Dasaratharaman Chandramouli
  Cc: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma


The "rdma-core" tag in the subject line is a little misleading. We
use it for patches intended for http://github.com/linux-rdma/rdma-core.

Thanks



* Re: [PATCH rdma-core 0/7] Add support for OPA classport info
       [not found]     ` <20170320074911.GW2079-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-03-20 18:21       ` Chandramouli, Dasaratharaman
  0 siblings, 0 replies; 12+ messages in thread
From: Chandramouli, Dasaratharaman @ 2017-03-20 18:21 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Don Hiatt, Ira Weiny, Doug Ledford, linux-rdma



On 3/20/2017 12:49 AM, Leon Romanovsky wrote:
> The "rdma-core" tag in the subject line is a little misleading. We
> use it for patches intended for http://github.com/linux-rdma/rdma-core.
>
> Thanks
>

Thanks for pointing it out. I had realized it after I posted v1. Will fix
it before posting v2.

Thanks.


end of thread, other threads:[~2017-03-20 18:21 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-15 21:24 [PATCH rdma-core 0/7] Add support for OPA classport info Dasaratharaman Chandramouli
     [not found] ` <1489613066-61684-1-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2017-03-15 21:24   ` [PATCH rdma-core 1/7] IB/SA: Fix lines longer than 80 columns Dasaratharaman Chandramouli
2017-03-15 21:24   ` [PATCH rdma-core 2/7] IB/SA: Add braces when using sizeof Dasaratharaman Chandramouli
2017-03-15 21:24   ` [PATCH rdma-core 3/7] IB/SA: Remove unwanted braces Dasaratharaman Chandramouli
2017-03-15 21:24   ` [PATCH rdma-core 4/7] IB/SA: Move functions update_sm_ah() and ib_sa_event() Dasaratharaman Chandramouli
2017-03-15 21:24   ` [PATCH rdma-core 5/7] IB/SA: Modify SA to implicity cache Class Port info Dasaratharaman Chandramouli
     [not found]     ` <1489613066-61684-6-git-send-email-dasaratharaman.chandramouli-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2017-03-16 12:59       ` Hal Rosenstock
     [not found]         ` <c01d0080-26da-8eb9-59b6-6d959457ea0c-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2017-03-16 18:27           ` Chandramouli, Dasaratharaman
2017-03-15 21:24   ` [PATCH rdma-core 6/7] IB/core: Add rdma_cap_opa_ah to expose opa address handles Dasaratharaman Chandramouli
2017-03-15 21:24   ` [PATCH rdma-core 7/7] IB/SA: Add support to query opa classport info Dasaratharaman Chandramouli
2017-03-20  7:49   ` [PATCH rdma-core 0/7] Add support for OPA " Leon Romanovsky
     [not found]     ` <20170320074911.GW2079-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-03-20 18:21       ` Chandramouli, Dasaratharaman
