* [PATCH 01/14] IB/mad: Clean up ib_find_send_mad
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
ib_find_send_mad only needs access to the MAD header. Change the local
variable to a struct ib_mad_hdr pointer and update the corresponding cast
and field accesses.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
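For reference, the narrower cast this patch introduces is safe because
struct ib_mad begins with its header; a minimal sketch of the layout from
include/rdma/ib_mad.h:

struct ib_mad_hdr {
	u8	base_version;
	u8	mgmt_class;
	u8	class_version;
	u8	method;
	__be16	status;
	__be16	class_specific;
	__be64	tid;
	__be16	attr_id;
	__be16	resv;
	__be32	attr_mod;
};

struct ib_mad {
	struct ib_mad_hdr	mad_hdr;
	u8			data[IB_MGMT_MAD_DATA];
};

/* a pointer to just the header is therefore always valid: */
struct ib_mad_hdr *mad_hdr = (struct ib_mad_hdr *)wc->recv_buf.mad;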
drivers/infiniband/core/mad.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 600af266838c..deefe5df9697 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -1815,18 +1815,18 @@ ib_find_send_mad(const struct ib_mad_agent_private *mad_agent_priv,
const struct ib_mad_recv_wc *wc)
{
struct ib_mad_send_wr_private *wr;
- struct ib_mad *mad;
+ struct ib_mad_hdr *mad_hdr;
- mad = (struct ib_mad *)wc->recv_buf.mad;
+ mad_hdr = (struct ib_mad_hdr *)wc->recv_buf.mad;
list_for_each_entry(wr, &mad_agent_priv->wait_list, agent_list) {
- if ((wr->tid == mad->mad_hdr.tid) &&
+ if ((wr->tid == mad_hdr->tid) &&
rcv_has_same_class(wr, wc) &&
/*
* Don't check GID for direct routed MADs.
* These might have permissive LIDs.
*/
- (is_direct(wc->recv_buf.mad->mad_hdr.mgmt_class) ||
+ (is_direct(mad_hdr->mgmt_class) ||
rcv_has_same_gid(mad_agent_priv, wr, wc)))
return (wr->status == IB_WC_SUCCESS) ? wr : NULL;
}
@@ -1837,14 +1837,14 @@ ib_find_send_mad(const struct ib_mad_agent_private *mad_agent_priv,
*/
list_for_each_entry(wr, &mad_agent_priv->send_list, agent_list) {
if (is_rmpp_data_mad(mad_agent_priv, wr->send_buf.mad) &&
- wr->tid == mad->mad_hdr.tid &&
+ wr->tid == mad_hdr->tid &&
wr->timeout &&
rcv_has_same_class(wr, wc) &&
/*
* Don't check GID for direct routed MADs.
* These might have permissive LIDs.
*/
- (is_direct(wc->recv_buf.mad->mad_hdr.mgmt_class) ||
+ (is_direct(mad_hdr->mgmt_class) ||
rcv_has_same_gid(mad_agent_priv, wr, wc)))
/* Verify request has not been canceled */
return (wr->status == IB_WC_SUCCESS) ? wr : NULL;
--
1.8.2
* [PATCH 02/14] IB/mad: Create an RMPP Base header
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
OPA RMPP MADs share the same MAD and RMPP headers as IB MADs. Split this
common prefix into a new struct ib_rmpp_base so that a future patch can
share it with OPA MADs.
Update existing RMPP code to use the new base header.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
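After the split, code that only needs the two headers can work with the
smaller base structure. A sketch of the resulting access patterns, where
recv_buf and mad_send_wr stand in for whatever context the caller already
has:

static void rmpp_base_usage(struct ib_mad_recv_buf *recv_buf,
			    struct ib_mad_send_wr_private *mad_send_wr)
{
	struct ib_rmpp_base *rmpp_base;
	struct ib_rmpp_mad *rmpp_mad;

	/* header-only users cast the receive buffer to the base... */
	rmpp_base = (struct ib_rmpp_base *)recv_buf->mad;
	if (ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)
		pr_debug("active RMPP MAD, seg %u\n",
			 be32_to_cpu(rmpp_base->rmpp_hdr.seg_num));

	/* ...while users of the full MAD now reach the headers through .base */
	rmpp_mad = mad_send_wr->send_buf.mad;
	rmpp_mad->base.rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_DATA;
}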
drivers/infiniband/core/mad.c | 18 +++---
drivers/infiniband/core/mad_rmpp.c | 124 ++++++++++++++++++-------------------
drivers/infiniband/core/user_mad.c | 16 ++---
include/rdma/ib_mad.h | 6 +-
4 files changed, 84 insertions(+), 80 deletions(-)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index deefe5df9697..9cd4ce8dfbd0 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -873,7 +873,7 @@ static int alloc_send_rmpp_list(struct ib_mad_send_wr_private *send_wr,
gfp_t gfp_mask)
{
struct ib_mad_send_buf *send_buf = &send_wr->send_buf;
- struct ib_rmpp_mad *rmpp_mad = send_buf->mad;
+ struct ib_rmpp_base *rmpp_base = send_buf->mad;
struct ib_rmpp_segment *seg = NULL;
int left, seg_size, pad;
@@ -899,10 +899,10 @@ static int alloc_send_rmpp_list(struct ib_mad_send_wr_private *send_wr,
if (pad)
memset(seg->data + seg_size - pad, 0, pad);
- rmpp_mad->rmpp_hdr.rmpp_version = send_wr->mad_agent_priv->
+ rmpp_base->rmpp_hdr.rmpp_version = send_wr->mad_agent_priv->
agent.rmpp_version;
- rmpp_mad->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_DATA;
- ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+ rmpp_base->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_DATA;
+ ib_set_rmpp_flags(&rmpp_base->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
send_wr->cur_seg = container_of(send_wr->rmpp_list.next,
struct ib_rmpp_segment, list);
@@ -1737,14 +1737,14 @@ out:
static int is_rmpp_data_mad(const struct ib_mad_agent_private *mad_agent_priv,
const struct ib_mad_hdr *mad_hdr)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
- rmpp_mad = (struct ib_rmpp_mad *)mad_hdr;
+ rmpp_base = (struct ib_rmpp_base *)mad_hdr;
return !mad_agent_priv->agent.rmpp_version ||
!ib_mad_kernel_rmpp_agent(&mad_agent_priv->agent) ||
- !(ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
+ !(ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) &
IB_MGMT_RMPP_FLAG_ACTIVE) ||
- (rmpp_mad->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_DATA);
+ (rmpp_base->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_DATA);
}
static inline int rcv_has_same_class(const struct ib_mad_send_wr_private *wr,
@@ -1886,7 +1886,7 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
if (!ib_mad_kernel_rmpp_agent(&mad_agent_priv->agent)
&& ib_is_mad_class_rmpp(mad_recv_wc->recv_buf.mad->mad_hdr.mgmt_class)
- && (ib_get_rmpp_flags(&((struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad)->rmpp_hdr)
+ && (ib_get_rmpp_flags(&((struct ib_rmpp_base *)mad_recv_wc->recv_buf.mad)->rmpp_hdr)
& IB_MGMT_RMPP_FLAG_ACTIVE)) {
/* user rmpp is in effect
* and this is an active RMPP MAD
diff --git a/drivers/infiniband/core/mad_rmpp.c b/drivers/infiniband/core/mad_rmpp.c
index f37878c9c06e..13279826342f 100644
--- a/drivers/infiniband/core/mad_rmpp.c
+++ b/drivers/infiniband/core/mad_rmpp.c
@@ -111,10 +111,10 @@ void ib_cancel_rmpp_recvs(struct ib_mad_agent_private *agent)
}
static void format_ack(struct ib_mad_send_buf *msg,
- struct ib_rmpp_mad *data,
+ struct ib_rmpp_base *data,
struct mad_rmpp_recv *rmpp_recv)
{
- struct ib_rmpp_mad *ack = msg->mad;
+ struct ib_rmpp_base *ack = msg->mad;
unsigned long flags;
memcpy(ack, &data->mad_hdr, msg->hdr_len);
@@ -143,7 +143,7 @@ static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
if (IS_ERR(msg))
return;
- format_ack(msg, (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
+ format_ack(msg, (struct ib_rmpp_base *) recv_wc->recv_buf.mad, rmpp_recv);
msg->ah = rmpp_recv->ah;
ret = ib_post_send_mad(msg, NULL);
if (ret)
@@ -180,20 +180,20 @@ static void ack_ds_ack(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *recv_wc)
{
struct ib_mad_send_buf *msg;
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
int ret;
msg = alloc_response_msg(&agent->agent, recv_wc);
if (IS_ERR(msg))
return;
- rmpp_mad = msg->mad;
- memcpy(rmpp_mad, recv_wc->recv_buf.mad, msg->hdr_len);
+ rmpp_base = msg->mad;
+ memcpy(rmpp_base, recv_wc->recv_buf.mad, msg->hdr_len);
- rmpp_mad->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
- ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
- rmpp_mad->rmpp_hdr.seg_num = 0;
- rmpp_mad->rmpp_hdr.paylen_newwin = cpu_to_be32(1);
+ rmpp_base->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
+ ib_set_rmpp_flags(&rmpp_base->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+ rmpp_base->rmpp_hdr.seg_num = 0;
+ rmpp_base->rmpp_hdr.paylen_newwin = cpu_to_be32(1);
ret = ib_post_send_mad(msg, NULL);
if (ret) {
@@ -213,23 +213,23 @@ static void nack_recv(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *recv_wc, u8 rmpp_status)
{
struct ib_mad_send_buf *msg;
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
int ret;
msg = alloc_response_msg(&agent->agent, recv_wc);
if (IS_ERR(msg))
return;
- rmpp_mad = msg->mad;
- memcpy(rmpp_mad, recv_wc->recv_buf.mad, msg->hdr_len);
+ rmpp_base = msg->mad;
+ memcpy(rmpp_base, recv_wc->recv_buf.mad, msg->hdr_len);
- rmpp_mad->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
- rmpp_mad->rmpp_hdr.rmpp_version = IB_MGMT_RMPP_VERSION;
- rmpp_mad->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ABORT;
- ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
- rmpp_mad->rmpp_hdr.rmpp_status = rmpp_status;
- rmpp_mad->rmpp_hdr.seg_num = 0;
- rmpp_mad->rmpp_hdr.paylen_newwin = 0;
+ rmpp_base->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
+ rmpp_base->rmpp_hdr.rmpp_version = IB_MGMT_RMPP_VERSION;
+ rmpp_base->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ABORT;
+ ib_set_rmpp_flags(&rmpp_base->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+ rmpp_base->rmpp_hdr.rmpp_status = rmpp_status;
+ rmpp_base->rmpp_hdr.seg_num = 0;
+ rmpp_base->rmpp_hdr.paylen_newwin = 0;
ret = ib_post_send_mad(msg, NULL);
if (ret) {
@@ -371,18 +371,18 @@ insert_rmpp_recv(struct ib_mad_agent_private *agent,
static inline int get_last_flag(struct ib_mad_recv_buf *seg)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
- rmpp_mad = (struct ib_rmpp_mad *) seg->mad;
- return ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) & IB_MGMT_RMPP_FLAG_LAST;
+ rmpp_base = (struct ib_rmpp_base *) seg->mad;
+ return ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) & IB_MGMT_RMPP_FLAG_LAST;
}
static inline int get_seg_num(struct ib_mad_recv_buf *seg)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
- rmpp_mad = (struct ib_rmpp_mad *) seg->mad;
- return be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num);
+ rmpp_base = (struct ib_rmpp_base *) seg->mad;
+ return be32_to_cpu(rmpp_base->rmpp_hdr.seg_num);
}
static inline struct ib_mad_recv_buf * get_next_seg(struct list_head *rmpp_list,
@@ -429,14 +429,14 @@ static void update_seg_num(struct mad_rmpp_recv *rmpp_recv,
static inline int get_mad_len(struct mad_rmpp_recv *rmpp_recv)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
int hdr_size, data_size, pad;
- rmpp_mad = (struct ib_rmpp_mad *)rmpp_recv->cur_seg_buf->mad;
+ rmpp_base = (struct ib_rmpp_base *)rmpp_recv->cur_seg_buf->mad;
- hdr_size = ib_get_mad_data_offset(rmpp_mad->mad_hdr.mgmt_class);
+ hdr_size = ib_get_mad_data_offset(rmpp_base->mad_hdr.mgmt_class);
data_size = sizeof(struct ib_rmpp_mad) - hdr_size;
- pad = IB_MGMT_RMPP_DATA - be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin);
+ pad = IB_MGMT_RMPP_DATA - be32_to_cpu(rmpp_base->rmpp_hdr.paylen_newwin);
if (pad > IB_MGMT_RMPP_DATA || pad < 0)
pad = 0;
@@ -565,20 +565,20 @@ static int send_next_seg(struct ib_mad_send_wr_private *mad_send_wr)
u32 paylen = 0;
rmpp_mad = mad_send_wr->send_buf.mad;
- ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
- rmpp_mad->rmpp_hdr.seg_num = cpu_to_be32(++mad_send_wr->seg_num);
+ ib_set_rmpp_flags(&rmpp_mad->base.rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+ rmpp_mad->base.rmpp_hdr.seg_num = cpu_to_be32(++mad_send_wr->seg_num);
if (mad_send_wr->seg_num == 1) {
- rmpp_mad->rmpp_hdr.rmpp_rtime_flags |= IB_MGMT_RMPP_FLAG_FIRST;
+ rmpp_mad->base.rmpp_hdr.rmpp_rtime_flags |= IB_MGMT_RMPP_FLAG_FIRST;
paylen = mad_send_wr->send_buf.seg_count * IB_MGMT_RMPP_DATA -
mad_send_wr->pad;
}
if (mad_send_wr->seg_num == mad_send_wr->send_buf.seg_count) {
- rmpp_mad->rmpp_hdr.rmpp_rtime_flags |= IB_MGMT_RMPP_FLAG_LAST;
+ rmpp_mad->base.rmpp_hdr.rmpp_rtime_flags |= IB_MGMT_RMPP_FLAG_LAST;
paylen = IB_MGMT_RMPP_DATA - mad_send_wr->pad;
}
- rmpp_mad->rmpp_hdr.paylen_newwin = cpu_to_be32(paylen);
+ rmpp_mad->base.rmpp_hdr.paylen_newwin = cpu_to_be32(paylen);
/* 2 seconds for an ACK until we can find the packet lifetime */
timeout = mad_send_wr->send_buf.timeout_ms;
@@ -642,19 +642,19 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
struct ib_mad_send_wr_private *mad_send_wr;
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
unsigned long flags;
int seg_num, newwin, ret;
- rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
- if (rmpp_mad->rmpp_hdr.rmpp_status) {
+ rmpp_base = (struct ib_rmpp_base *)mad_recv_wc->recv_buf.mad;
+ if (rmpp_base->rmpp_hdr.rmpp_status) {
abort_send(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
return;
}
- seg_num = be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num);
- newwin = be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin);
+ seg_num = be32_to_cpu(rmpp_base->rmpp_hdr.seg_num);
+ newwin = be32_to_cpu(rmpp_base->rmpp_hdr.paylen_newwin);
if (newwin < seg_num) {
abort_send(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_W2S);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_W2S);
@@ -739,7 +739,7 @@ process_rmpp_data(struct ib_mad_agent_private *agent,
struct ib_rmpp_hdr *rmpp_hdr;
u8 rmpp_status;
- rmpp_hdr = &((struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad)->rmpp_hdr;
+ rmpp_hdr = &((struct ib_rmpp_base *)mad_recv_wc->recv_buf.mad)->rmpp_hdr;
if (rmpp_hdr->rmpp_status) {
rmpp_status = IB_MGMT_RMPP_STATUS_BAD_STATUS;
@@ -768,30 +768,30 @@ bad:
static void process_rmpp_stop(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
- rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
+ rmpp_base = (struct ib_rmpp_base *)mad_recv_wc->recv_buf.mad;
- if (rmpp_mad->rmpp_hdr.rmpp_status != IB_MGMT_RMPP_STATUS_RESX) {
+ if (rmpp_base->rmpp_hdr.rmpp_status != IB_MGMT_RMPP_STATUS_RESX) {
abort_send(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
} else
- abort_send(agent, mad_recv_wc, rmpp_mad->rmpp_hdr.rmpp_status);
+ abort_send(agent, mad_recv_wc, rmpp_base->rmpp_hdr.rmpp_status);
}
static void process_rmpp_abort(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
- rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
+ rmpp_base = (struct ib_rmpp_base *)mad_recv_wc->recv_buf.mad;
- if (rmpp_mad->rmpp_hdr.rmpp_status < IB_MGMT_RMPP_STATUS_ABORT_MIN ||
- rmpp_mad->rmpp_hdr.rmpp_status > IB_MGMT_RMPP_STATUS_ABORT_MAX) {
+ if (rmpp_base->rmpp_hdr.rmpp_status < IB_MGMT_RMPP_STATUS_ABORT_MIN ||
+ rmpp_base->rmpp_hdr.rmpp_status > IB_MGMT_RMPP_STATUS_ABORT_MAX) {
abort_send(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
} else
- abort_send(agent, mad_recv_wc, rmpp_mad->rmpp_hdr.rmpp_status);
+ abort_send(agent, mad_recv_wc, rmpp_base->rmpp_hdr.rmpp_status);
}
struct ib_mad_recv_wc *
@@ -801,16 +801,16 @@ ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent,
struct ib_rmpp_mad *rmpp_mad;
rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
- if (!(rmpp_mad->rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE))
+ if (!(rmpp_mad->base.rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE))
return mad_recv_wc;
- if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) {
+ if (rmpp_mad->base.rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) {
abort_send(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_UNV);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_UNV);
goto out;
}
- switch (rmpp_mad->rmpp_hdr.rmpp_type) {
+ switch (rmpp_mad->base.rmpp_hdr.rmpp_type) {
case IB_MGMT_RMPP_TYPE_DATA:
return process_rmpp_data(agent, mad_recv_wc);
case IB_MGMT_RMPP_TYPE_ACK:
@@ -871,11 +871,11 @@ int ib_send_rmpp_mad(struct ib_mad_send_wr_private *mad_send_wr)
int ret;
rmpp_mad = mad_send_wr->send_buf.mad;
- if (!(ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
+ if (!(ib_get_rmpp_flags(&rmpp_mad->base.rmpp_hdr) &
IB_MGMT_RMPP_FLAG_ACTIVE))
return IB_RMPP_RESULT_UNHANDLED;
- if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) {
+ if (rmpp_mad->base.rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) {
mad_send_wr->seg_num = 1;
return IB_RMPP_RESULT_INTERNAL;
}
@@ -893,15 +893,15 @@ int ib_send_rmpp_mad(struct ib_mad_send_wr_private *mad_send_wr)
int ib_process_rmpp_send_wc(struct ib_mad_send_wr_private *mad_send_wr,
struct ib_mad_send_wc *mad_send_wc)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
int ret;
- rmpp_mad = mad_send_wr->send_buf.mad;
- if (!(ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
+ rmpp_base = mad_send_wr->send_buf.mad;
+ if (!(ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) &
IB_MGMT_RMPP_FLAG_ACTIVE))
return IB_RMPP_RESULT_UNHANDLED; /* RMPP not active */
- if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA)
+ if (rmpp_base->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA)
return IB_RMPP_RESULT_INTERNAL; /* ACK, STOP, or ABORT */
if (mad_send_wc->status != IB_WC_SUCCESS ||
@@ -931,11 +931,11 @@ int ib_process_rmpp_send_wc(struct ib_mad_send_wr_private *mad_send_wr,
int ib_retry_rmpp(struct ib_mad_send_wr_private *mad_send_wr)
{
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
int ret;
- rmpp_mad = mad_send_wr->send_buf.mad;
- if (!(ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
+ rmpp_base = mad_send_wr->send_buf.mad;
+ if (!(ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) &
IB_MGMT_RMPP_FLAG_ACTIVE))
return IB_RMPP_RESULT_UNHANDLED; /* RMPP not active */
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index e58d701b7791..acff2f4bb6dd 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -447,7 +447,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct ib_mad_agent *agent;
struct ib_ah_attr ah_attr;
struct ib_ah *ah;
- struct ib_rmpp_mad *rmpp_mad;
+ struct ib_rmpp_base *rmpp_base;
__be64 *tid;
int ret, data_len, hdr_len, copy_offset, rmpp_active;
@@ -503,13 +503,13 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
goto err_up;
}
- rmpp_mad = (struct ib_rmpp_mad *) packet->mad.data;
- hdr_len = ib_get_mad_data_offset(rmpp_mad->mad_hdr.mgmt_class);
+ rmpp_base = (struct ib_rmpp_base *) packet->mad.data;
+ hdr_len = ib_get_mad_data_offset(rmpp_base->mad_hdr.mgmt_class);
- if (ib_is_mad_class_rmpp(rmpp_mad->mad_hdr.mgmt_class)
+ if (ib_is_mad_class_rmpp(rmpp_base->mad_hdr.mgmt_class)
&& ib_mad_kernel_rmpp_agent(agent)) {
copy_offset = IB_MGMT_RMPP_HDR;
- rmpp_active = ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
+ rmpp_active = ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) &
IB_MGMT_RMPP_FLAG_ACTIVE;
} else {
copy_offset = IB_MGMT_MAD_HDR;
@@ -556,12 +556,12 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
tid = &((struct ib_mad_hdr *) packet->msg->mad)->tid;
*tid = cpu_to_be64(((u64) agent->hi_tid) << 32 |
(be64_to_cpup(tid) & 0xffffffff));
- rmpp_mad->mad_hdr.tid = *tid;
+ rmpp_base->mad_hdr.tid = *tid;
}
if (!ib_mad_kernel_rmpp_agent(agent)
- && ib_is_mad_class_rmpp(rmpp_mad->mad_hdr.mgmt_class)
- && (ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)) {
+ && ib_is_mad_class_rmpp(rmpp_base->mad_hdr.mgmt_class)
+ && (ib_get_rmpp_flags(&rmpp_base->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE)) {
spin_lock_irq(&file->send_lock);
list_add_tail(&packet->list, &file->send_list);
spin_unlock_irq(&file->send_lock);
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index c0ea51f90a03..1b37b493b2b4 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -181,9 +181,13 @@ struct ib_mad {
u8 data[IB_MGMT_MAD_DATA];
};
-struct ib_rmpp_mad {
+struct ib_rmpp_base {
struct ib_mad_hdr mad_hdr;
struct ib_rmpp_hdr rmpp_hdr;
+} __packed;
+
+struct ib_rmpp_mad {
+ struct ib_rmpp_base base;
u8 data[IB_MGMT_RMPP_DATA];
};
--
1.8.2
* [PATCH 03/14] IB/mad: Create handle_ib_smi
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Factor the Directed Route SMP processing in ib_mad_recv_done_handler out
into a new function, handle_ib_smi. This will help later patches separate
the processing of OPA SMPs from IBTA SMPs.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
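handle_ib_smi folds the directed-route handling into the existing
smi_action contract. For reference, the enums it returns and consumes, as
defined in drivers/infiniband/core/smi.h at this point in the series:

enum smi_action {
	IB_SMI_DISCARD,
	IB_SMI_HANDLE
};

enum smi_forward_action {
	IB_SMI_LOCAL,	/* SMP should be completed up the stack */
	IB_SMI_SEND,	/* received DR SMP should be forwarded to the send queue */
	IB_SMI_FORWARD	/* SMP should be forwarded (for switches only) */
};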
drivers/infiniband/core/mad.c | 85 +++++++++++++++++++++++++------------------
1 file changed, 49 insertions(+), 36 deletions(-)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 9cd4ce8dfbd0..96daba58afad 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -1924,6 +1924,52 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
}
}
+static enum smi_action handle_ib_smi(struct ib_mad_port_private *port_priv,
+ struct ib_mad_qp_info *qp_info,
+ struct ib_wc *wc,
+ int port_num,
+ struct ib_mad_private *recv,
+ struct ib_mad_private *response)
+{
+ enum smi_forward_action retsmi;
+
+ if (smi_handle_dr_smp_recv(&recv->mad.smp,
+ port_priv->device->node_type,
+ port_num,
+ port_priv->device->phys_port_cnt) ==
+ IB_SMI_DISCARD)
+ return IB_SMI_DISCARD;
+
+ retsmi = smi_check_forward_dr_smp(&recv->mad.smp);
+ if (retsmi == IB_SMI_LOCAL)
+ return IB_SMI_HANDLE;
+
+ if (retsmi == IB_SMI_SEND) { /* don't forward */
+ if (smi_handle_dr_smp_send(&recv->mad.smp,
+ port_priv->device->node_type,
+ port_num) == IB_SMI_DISCARD)
+ return IB_SMI_DISCARD;
+
+ if (smi_check_local_smp(&recv->mad.smp, port_priv->device) == IB_SMI_DISCARD)
+ return IB_SMI_DISCARD;
+ } else if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) {
+ /* forward case for switches */
+ memcpy(response, recv, sizeof(*response));
+ response->header.recv_wc.wc = &response->header.wc;
+ response->header.recv_wc.recv_buf.mad = &response->mad.mad;
+ response->header.recv_wc.recv_buf.grh = &response->grh;
+
+ agent_send_response(&response->mad.mad,
+ &response->grh, wc,
+ port_priv->device,
+ smi_get_fwd_port(&recv->mad.smp),
+ qp_info->qp->qp_num);
+
+ return IB_SMI_DISCARD;
+ }
+ return IB_SMI_HANDLE;
+}
+
static bool generate_unmatched_resp(struct ib_mad_private *recv,
struct ib_mad_private *response)
{
@@ -1996,45 +2042,12 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
if (recv->mad.mad.mad_hdr.mgmt_class ==
IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
- enum smi_forward_action retsmi;
-
- if (smi_handle_dr_smp_recv(&recv->mad.smp,
- port_priv->device->node_type,
- port_num,
- port_priv->device->phys_port_cnt) ==
- IB_SMI_DISCARD)
+ if (handle_ib_smi(port_priv, qp_info, wc, port_num, recv,
+ response)
+ == IB_SMI_DISCARD)
goto out;
-
- retsmi = smi_check_forward_dr_smp(&recv->mad.smp);
- if (retsmi == IB_SMI_LOCAL)
- goto local;
-
- if (retsmi == IB_SMI_SEND) { /* don't forward */
- if (smi_handle_dr_smp_send(&recv->mad.smp,
- port_priv->device->node_type,
- port_num) == IB_SMI_DISCARD)
- goto out;
-
- if (smi_check_local_smp(&recv->mad.smp, port_priv->device) == IB_SMI_DISCARD)
- goto out;
- } else if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) {
- /* forward case for switches */
- memcpy(response, recv, sizeof(*response));
- response->header.recv_wc.wc = &response->header.wc;
- response->header.recv_wc.recv_buf.mad = &response->mad.mad;
- response->header.recv_wc.recv_buf.grh = &response->grh;
-
- agent_send_response(&response->mad.mad,
- &response->grh, wc,
- port_priv->device,
- smi_get_fwd_port(&recv->mad.smp),
- qp_info->qp->qp_num);
-
- goto out;
- }
}
-local:
/* Give driver "right of first refusal" on incoming MAD */
if (port_priv->device->process_mad) {
ret = port_priv->device->process_mad(port_priv->device, 0,
--
1.8.2
* [PATCH 04/14] IB/mad: Add helper function for smi_handle_dr_smp_send
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
This helper function will be used later for processing both IB and OPA SMP
sends.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
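This will let the OPA patches later in this series wrap the same helper
around their own SMP layout; a sketch only, since the opa_smp fields,
opa_get_smp_direction() and OPA_LID_PERMISSIVE are names assumed here
(introduced by later patches):

static enum smi_action
opa_smi_handle_dr_smp_send(struct opa_smp *smp, u8 node_type, int port_num)
{
	return __smi_handle_dr_smp_send(node_type, port_num,
					&smp->hop_ptr, smp->hop_cnt,
					smp->route.dr.initial_path,
					smp->route.dr.return_path,
					opa_get_smp_direction(smp),
					smp->route.dr.dr_dlid == OPA_LID_PERMISSIVE,
					smp->route.dr.dr_slid == OPA_LID_PERMISSIVE);
}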
drivers/infiniband/core/smi.c | 81 +++++++++++++++++++++++++------------------
1 file changed, 47 insertions(+), 34 deletions(-)
diff --git a/drivers/infiniband/core/smi.c b/drivers/infiniband/core/smi.c
index e6c6810c8c41..4115b0979377 100644
--- a/drivers/infiniband/core/smi.c
+++ b/drivers/infiniband/core/smi.c
@@ -39,84 +39,81 @@
#include <rdma/ib_smi.h>
#include "smi.h"
-/*
- * Fixup a directed route SMP for sending
- * Return IB_SMI_DISCARD if the SMP should be discarded
- */
-enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
- u8 node_type, int port_num)
+static inline
+enum smi_action __smi_handle_dr_smp_send(u8 node_type, int port_num,
+ u8 *hop_ptr, u8 hop_cnt,
+ u8 *initial_path,
+ u8 *return_path,
+ u8 direction,
+ int dr_dlid_is_permissive,
+ int dr_slid_is_permissive)
{
- u8 hop_ptr, hop_cnt;
-
- hop_ptr = smp->hop_ptr;
- hop_cnt = smp->hop_cnt;
-
/* See section 14.2.2.2, Vol 1 IB spec */
/* C14-6 -- valid hop_cnt values are from 0 to 63 */
if (hop_cnt >= IB_SMP_MAX_PATH_HOPS)
return IB_SMI_DISCARD;
- if (!ib_get_smp_direction(smp)) {
+ if (!direction) {
/* C14-9:1 */
- if (hop_cnt && hop_ptr == 0) {
- smp->hop_ptr++;
- return (smp->initial_path[smp->hop_ptr] ==
+ if (hop_cnt && *hop_ptr == 0) {
+ (*hop_ptr)++;
+ return (initial_path[*hop_ptr] ==
port_num ? IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-9:2 */
- if (hop_ptr && hop_ptr < hop_cnt) {
+ if (*hop_ptr && *hop_ptr < hop_cnt) {
if (node_type != RDMA_NODE_IB_SWITCH)
return IB_SMI_DISCARD;
- /* smp->return_path set when received */
- smp->hop_ptr++;
- return (smp->initial_path[smp->hop_ptr] ==
+ /* return_path set when received */
+ (*hop_ptr)++;
+ return (initial_path[*hop_ptr] ==
port_num ? IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-9:3 -- We're at the end of the DR segment of path */
- if (hop_ptr == hop_cnt) {
- /* smp->return_path set when received */
- smp->hop_ptr++;
+ if (*hop_ptr == hop_cnt) {
+ /* return_path set when received */
+ (*hop_ptr)++;
return (node_type == RDMA_NODE_IB_SWITCH ||
- smp->dr_dlid == IB_LID_PERMISSIVE ?
+ dr_dlid_is_permissive ?
IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-9:4 -- hop_ptr = hop_cnt + 1 -> give to SMA/SM */
/* C14-9:5 -- Fail unreasonable hop pointer */
- return (hop_ptr == hop_cnt + 1 ? IB_SMI_HANDLE : IB_SMI_DISCARD);
+ return (*hop_ptr == hop_cnt + 1 ? IB_SMI_HANDLE : IB_SMI_DISCARD);
} else {
/* C14-13:1 */
- if (hop_cnt && hop_ptr == hop_cnt + 1) {
- smp->hop_ptr--;
- return (smp->return_path[smp->hop_ptr] ==
+ if (hop_cnt && *hop_ptr == hop_cnt + 1) {
+ (*hop_ptr)--;
+ return (return_path[*hop_ptr] ==
port_num ? IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-13:2 */
- if (2 <= hop_ptr && hop_ptr <= hop_cnt) {
+ if (2 <= *hop_ptr && *hop_ptr <= hop_cnt) {
if (node_type != RDMA_NODE_IB_SWITCH)
return IB_SMI_DISCARD;
- smp->hop_ptr--;
- return (smp->return_path[smp->hop_ptr] ==
+ (*hop_ptr)--;
+ return (return_path[*hop_ptr] ==
port_num ? IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-13:3 -- at the end of the DR segment of path */
- if (hop_ptr == 1) {
- smp->hop_ptr--;
+ if (*hop_ptr == 1) {
+ (*hop_ptr)--;
/* C14-13:3 -- SMPs destined for SM shouldn't be here */
return (node_type == RDMA_NODE_IB_SWITCH ||
- smp->dr_slid == IB_LID_PERMISSIVE ?
+ dr_slid_is_permissive ?
IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-13:4 -- hop_ptr = 0 -> should have gone to SM */
- if (hop_ptr == 0)
+ if (*hop_ptr == 0)
return IB_SMI_HANDLE;
/* C14-13:5 -- Check for unreasonable hop pointer */
@@ -125,6 +122,22 @@ enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
}
/*
+ * Fixup a directed route SMP for sending
+ * Return IB_SMI_DISCARD if the SMP should be discarded
+ */
+enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
+ u8 node_type, int port_num)
+{
+ return __smi_handle_dr_smp_send(node_type, port_num,
+ &smp->hop_ptr, smp->hop_cnt,
+ smp->initial_path,
+ smp->return_path,
+ ib_get_smp_direction(smp),
+ smp->dr_dlid == IB_LID_PERMISSIVE,
+ smp->dr_slid == IB_LID_PERMISSIVE);
+}
+
+/*
* Adjust information for a received SMP
* Return IB_SMI_DISCARD if the SMP should be dropped
*/
--
1.8.2
* [PATCH 05/14] IB/mad: Add helper function for smi_handle_dr_smp_recv
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
This helper function will be used later for processing both IB and OPA SMP
recvs.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
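A worked example of the receive-side rules (the values are illustrative,
not from the patch): an outbound DR SMP (direction == 0) with hop_cnt == 2
arrives at an intermediate switch through port 3, with hop_ptr already
incremented to 1 by the sender, so rule C14-9:2 applies:

u8 initial_path[IB_SMP_MAX_PATH_HOPS] = { 0, 1, 2 };
u8 return_path[IB_SMP_MAX_PATH_HOPS] = { 0 };
u8 hop_ptr = 1;

enum smi_action act =
	__smi_handle_dr_smp_recv(RDMA_NODE_IB_SWITCH, /* port_num */ 3,
				 /* phys_port_cnt */ 24, &hop_ptr,
				 /* hop_cnt */ 2, initial_path, return_path,
				 /* direction */ 0,
				 /* dr_dlid_is_permissive */ 0,
				 /* dr_slid_is_permissive */ 0);
/* act == IB_SMI_HANDLE (initial_path[2] == 2 <= 24), and return_path[1]
 * has been stamped with the ingress port, 3.
 */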
drivers/infiniband/core/smi.c | 80 +++++++++++++++++++++++++------------------
1 file changed, 47 insertions(+), 33 deletions(-)
diff --git a/drivers/infiniband/core/smi.c b/drivers/infiniband/core/smi.c
index 4115b0979377..b09fa1dec865 100644
--- a/drivers/infiniband/core/smi.c
+++ b/drivers/infiniband/core/smi.c
@@ -137,91 +137,105 @@ enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
smp->dr_slid == IB_LID_PERMISSIVE);
}
-/*
- * Adjust information for a received SMP
- * Return IB_SMI_DISCARD if the SMP should be dropped
- */
-enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type,
- int port_num, int phys_port_cnt)
+static inline
+enum smi_action __smi_handle_dr_smp_recv(u8 node_type, int port_num,
+ int phys_port_cnt,
+ u8 *hop_ptr, u8 hop_cnt,
+ u8 *initial_path,
+ u8 *return_path,
+ u8 direction,
+ int dr_dlid_is_permissive,
+ int dr_slid_is_permissive)
{
- u8 hop_ptr, hop_cnt;
-
- hop_ptr = smp->hop_ptr;
- hop_cnt = smp->hop_cnt;
-
/* See section 14.2.2.2, Vol 1 IB spec */
/* C14-6 -- valid hop_cnt values are from 0 to 63 */
if (hop_cnt >= IB_SMP_MAX_PATH_HOPS)
return IB_SMI_DISCARD;
- if (!ib_get_smp_direction(smp)) {
+ if (!direction) {
/* C14-9:1 -- sender should have incremented hop_ptr */
- if (hop_cnt && hop_ptr == 0)
+ if (hop_cnt && *hop_ptr == 0)
return IB_SMI_DISCARD;
/* C14-9:2 -- intermediate hop */
- if (hop_ptr && hop_ptr < hop_cnt) {
+ if (*hop_ptr && *hop_ptr < hop_cnt) {
if (node_type != RDMA_NODE_IB_SWITCH)
return IB_SMI_DISCARD;
- smp->return_path[hop_ptr] = port_num;
- /* smp->hop_ptr updated when sending */
- return (smp->initial_path[hop_ptr+1] <= phys_port_cnt ?
+ return_path[*hop_ptr] = port_num;
+ /* hop_ptr updated when sending */
+ return (initial_path[*hop_ptr+1] <= phys_port_cnt ?
IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-9:3 -- We're at the end of the DR segment of path */
- if (hop_ptr == hop_cnt) {
+ if (*hop_ptr == hop_cnt) {
if (hop_cnt)
- smp->return_path[hop_ptr] = port_num;
- /* smp->hop_ptr updated when sending */
+ return_path[*hop_ptr] = port_num;
+ /* hop_ptr updated when sending */
return (node_type == RDMA_NODE_IB_SWITCH ||
- smp->dr_dlid == IB_LID_PERMISSIVE ?
+ dr_dlid_is_permissive ?
IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-9:4 -- hop_ptr = hop_cnt + 1 -> give to SMA/SM */
/* C14-9:5 -- fail unreasonable hop pointer */
- return (hop_ptr == hop_cnt + 1 ? IB_SMI_HANDLE : IB_SMI_DISCARD);
+ return (*hop_ptr == hop_cnt + 1 ? IB_SMI_HANDLE : IB_SMI_DISCARD);
} else {
/* C14-13:1 */
- if (hop_cnt && hop_ptr == hop_cnt + 1) {
- smp->hop_ptr--;
- return (smp->return_path[smp->hop_ptr] ==
+ if (hop_cnt && *hop_ptr == hop_cnt + 1) {
+ (*hop_ptr)--;
+ return (return_path[*hop_ptr] ==
port_num ? IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-13:2 */
- if (2 <= hop_ptr && hop_ptr <= hop_cnt) {
+ if (2 <= *hop_ptr && *hop_ptr <= hop_cnt) {
if (node_type != RDMA_NODE_IB_SWITCH)
return IB_SMI_DISCARD;
- /* smp->hop_ptr updated when sending */
- return (smp->return_path[hop_ptr-1] <= phys_port_cnt ?
+ /* hop_ptr updated when sending */
+ return (return_path[*hop_ptr-1] <= phys_port_cnt ?
IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-13:3 -- We're at the end of the DR segment of path */
- if (hop_ptr == 1) {
- if (smp->dr_slid == IB_LID_PERMISSIVE) {
+ if (*hop_ptr == 1) {
+ if (dr_slid_is_permissive) {
/* giving SMP to SM - update hop_ptr */
- smp->hop_ptr--;
+ (*hop_ptr)--;
return IB_SMI_HANDLE;
}
- /* smp->hop_ptr updated when sending */
+ /* hop_ptr updated when sending */
return (node_type == RDMA_NODE_IB_SWITCH ?
IB_SMI_HANDLE : IB_SMI_DISCARD);
}
/* C14-13:4 -- hop_ptr = 0 -> give to SM */
/* C14-13:5 -- Check for unreasonable hop pointer */
- return (hop_ptr == 0 ? IB_SMI_HANDLE : IB_SMI_DISCARD);
+ return (*hop_ptr == 0 ? IB_SMI_HANDLE : IB_SMI_DISCARD);
}
}
+/*
+ * Adjust information for a received SMP
+ * Return IB_SMI_DISCARD if the SMP should be dropped
+ */
+enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type,
+ int port_num, int phys_port_cnt)
+{
+ return __smi_handle_dr_smp_recv(node_type, port_num, phys_port_cnt,
+ &smp->hop_ptr, smp->hop_cnt,
+ smp->initial_path,
+ smp->return_path,
+ ib_get_smp_direction(smp),
+ smp->dr_dlid == IB_LID_PERMISSIVE,
+ smp->dr_slid == IB_LID_PERMISSIVE);
+}
+
enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp)
{
u8 hop_ptr, hop_cnt;
--
1.8.2
* [PATCH 06/14] IB/mad: Add helper function for smi_check_forward_dr_smp
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
This helper function will be used later for processing both IB and OPA SMPs.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
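As a quick reference for the three dispositions (illustrative values,
outbound direction shown):

/* 0 < hop_ptr < hop_cnt               -> IB_SMI_FORWARD (switch relays it)
 * hop_ptr == hop_cnt, dlid permissive -> IB_SMI_SEND    (hand to send side)
 * hop_ptr == hop_cnt, dlid routed     -> IB_SMI_LOCAL   (consume locally)
 */
enum smi_forward_action f =
	__smi_check_forward_dr_smp(/* hop_ptr */ 2, /* hop_cnt */ 2,
				   /* direction */ 0,
				   /* dr_dlid_is_permissive */ 1,
				   /* dr_slid_is_permissive */ 0);
/* f == IB_SMI_SEND */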
drivers/infiniband/core/smi.c | 26 +++++++++++++++++---------
1 file changed, 17 insertions(+), 9 deletions(-)
diff --git a/drivers/infiniband/core/smi.c b/drivers/infiniband/core/smi.c
index b09fa1dec865..4969ce3338fb 100644
--- a/drivers/infiniband/core/smi.c
+++ b/drivers/infiniband/core/smi.c
@@ -236,21 +236,20 @@ enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type,
smp->dr_slid == IB_LID_PERMISSIVE);
}
-enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp)
+static inline
+enum smi_forward_action __smi_check_forward_dr_smp(u8 hop_ptr, u8 hop_cnt,
+ u8 direction,
+ int dr_dlid_is_permissive,
+ int dr_slid_is_permissive)
{
- u8 hop_ptr, hop_cnt;
-
- hop_ptr = smp->hop_ptr;
- hop_cnt = smp->hop_cnt;
-
- if (!ib_get_smp_direction(smp)) {
+ if (!direction) {
/* C14-9:2 -- intermediate hop */
if (hop_ptr && hop_ptr < hop_cnt)
return IB_SMI_FORWARD;
/* C14-9:3 -- at the end of the DR segment of path */
if (hop_ptr == hop_cnt)
- return (smp->dr_dlid == IB_LID_PERMISSIVE ?
+ return (dr_dlid_is_permissive ?
IB_SMI_SEND : IB_SMI_LOCAL);
/* C14-9:4 -- hop_ptr = hop_cnt + 1 -> give to SMA/SM */
@@ -263,10 +262,19 @@ enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp)
/* C14-13:3 -- at the end of the DR segment of path */
if (hop_ptr == 1)
- return (smp->dr_slid != IB_LID_PERMISSIVE ?
+ return (!dr_slid_is_permissive ?
IB_SMI_SEND : IB_SMI_LOCAL);
}
return IB_SMI_LOCAL;
+
+}
+
+enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp)
+{
+ return __smi_check_forward_dr_smp(smp->hop_ptr, smp->hop_cnt,
+ ib_get_smp_direction(smp),
+ smp->dr_dlid == IB_LID_PERMISSIVE,
+ smp->dr_slid == IB_LID_PERMISSIVE);
}
/*
--
1.8.2
* [PATCH 07/14] IB/mad: Add base version to ib_create_send_mad
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
In preparation for supporting the new OPA MAD base version, add a base
version parameter to ib_create_send_mad and set it to IB_MGMT_BASE_VERSION
for current users.
The definition of the new base version and its processing will come in
later patches.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
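Existing callers simply pass IB_MGMT_BASE_VERSION, as in the hunks below.
Once the new base version is defined, an OPA-aware caller would presumably
look like this (a sketch; OPA_MGMT_BASE_VERSION and OPA_MGMT_MAD_DATA are
names from later patches, assumed here):

msg = ib_create_send_mad(agent, qpn, pkey_index, 0,
			 IB_MGMT_MAD_HDR, OPA_MGMT_MAD_DATA,
			 GFP_KERNEL, OPA_MGMT_BASE_VERSION);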
drivers/infiniband/core/agent.c | 3 ++-
drivers/infiniband/core/cm.c | 6 ++++--
drivers/infiniband/core/mad.c | 3 ++-
drivers/infiniband/core/mad_rmpp.c | 6 ++++--
drivers/infiniband/core/sa_query.c | 3 ++-
drivers/infiniband/core/user_mad.c | 3 ++-
drivers/infiniband/hw/mlx4/mad.c | 3 ++-
drivers/infiniband/hw/mthca/mthca_mad.c | 3 ++-
drivers/infiniband/hw/qib/qib_iba7322.c | 3 ++-
drivers/infiniband/hw/qib/qib_mad.c | 3 ++-
drivers/infiniband/ulp/srpt/ib_srpt.c | 3 ++-
include/rdma/ib_mad.h | 4 +++-
12 files changed, 29 insertions(+), 14 deletions(-)
diff --git a/drivers/infiniband/core/agent.c b/drivers/infiniband/core/agent.c
index a6fc4d6dc7d7..5c7627c3278c 100644
--- a/drivers/infiniband/core/agent.c
+++ b/drivers/infiniband/core/agent.c
@@ -108,7 +108,8 @@ void agent_send_response(struct ib_mad *mad, struct ib_grh *grh,
send_buf = ib_create_send_mad(agent, wc->src_qp, wc->pkey_index, 0,
IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
- GFP_KERNEL);
+ GFP_KERNEL,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(send_buf)) {
dev_err(&device->dev, "ib_create_send_mad error\n");
goto err1;
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index c3be6663f4f5..dbddddd6fb5d 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -267,7 +267,8 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
m = ib_create_send_mad(mad_agent, cm_id_priv->id.remote_cm_qpn,
cm_id_priv->av.pkey_index,
0, IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
- GFP_ATOMIC);
+ GFP_ATOMIC,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(m)) {
ib_destroy_ah(ah);
return PTR_ERR(m);
@@ -297,7 +298,8 @@ static int cm_alloc_response_msg(struct cm_port *port,
m = ib_create_send_mad(port->mad_agent, 1, mad_recv_wc->wc->pkey_index,
0, IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
- GFP_ATOMIC);
+ GFP_ATOMIC,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(m)) {
ib_destroy_ah(ah);
return PTR_ERR(m);
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 96daba58afad..8055195b5d60 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -920,7 +920,8 @@ struct ib_mad_send_buf * ib_create_send_mad(struct ib_mad_agent *mad_agent,
u32 remote_qpn, u16 pkey_index,
int rmpp_active,
int hdr_len, int data_len,
- gfp_t gfp_mask)
+ gfp_t gfp_mask,
+ u8 base_version)
{
struct ib_mad_agent_private *mad_agent_priv;
struct ib_mad_send_wr_private *mad_send_wr;
diff --git a/drivers/infiniband/core/mad_rmpp.c b/drivers/infiniband/core/mad_rmpp.c
index 13279826342f..9c284d9b4fa9 100644
--- a/drivers/infiniband/core/mad_rmpp.c
+++ b/drivers/infiniband/core/mad_rmpp.c
@@ -139,7 +139,8 @@ static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
hdr_len = ib_get_mad_data_offset(recv_wc->recv_buf.mad->mad_hdr.mgmt_class);
msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
recv_wc->wc->pkey_index, 1, hdr_len,
- 0, GFP_KERNEL);
+ 0, GFP_KERNEL,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(msg))
return;
@@ -165,7 +166,8 @@ static struct ib_mad_send_buf *alloc_response_msg(struct ib_mad_agent *agent,
hdr_len = ib_get_mad_data_offset(recv_wc->recv_buf.mad->mad_hdr.mgmt_class);
msg = ib_create_send_mad(agent, recv_wc->wc->src_qp,
recv_wc->wc->pkey_index, 1,
- hdr_len, 0, GFP_KERNEL);
+ hdr_len, 0, GFP_KERNEL,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(msg))
ib_destroy_ah(ah);
else {
diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 7f7c8c9fa92c..78fbedd8d013 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -583,7 +583,8 @@ static int alloc_mad(struct ib_sa_query *query, gfp_t gfp_mask)
query->mad_buf = ib_create_send_mad(query->port->agent, 1,
query->sm_ah->pkey_index,
0, IB_MGMT_SA_HDR, IB_MGMT_SA_DATA,
- gfp_mask);
+ gfp_mask,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(query->mad_buf)) {
kref_put(&query->sm_ah->ref, free_sm_ah);
return -ENOMEM;
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index acff2f4bb6dd..6be72a563c61 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -520,7 +520,8 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
packet->msg = ib_create_send_mad(agent,
be32_to_cpu(packet->mad.hdr.qpn),
packet->mad.hdr.pkey_index, rmpp_active,
- hdr_len, data_len, GFP_KERNEL);
+ hdr_len, data_len, GFP_KERNEL,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(packet->msg)) {
ret = PTR_ERR(packet->msg);
goto err_ah;
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index 9cd2b002d7ae..d44614f9da49 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -366,7 +366,8 @@ static void forward_trap(struct mlx4_ib_dev *dev, u8 port_num, struct ib_mad *ma
if (agent) {
send_buf = ib_create_send_mad(agent, qpn, 0, 0, IB_MGMT_MAD_HDR,
- IB_MGMT_MAD_DATA, GFP_ATOMIC);
+ IB_MGMT_MAD_DATA, GFP_ATOMIC,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(send_buf))
return;
/*
diff --git a/drivers/infiniband/hw/mthca/mthca_mad.c b/drivers/infiniband/hw/mthca/mthca_mad.c
index 8881fa376e06..2a34cd2f92b7 100644
--- a/drivers/infiniband/hw/mthca/mthca_mad.c
+++ b/drivers/infiniband/hw/mthca/mthca_mad.c
@@ -170,7 +170,8 @@ static void forward_trap(struct mthca_dev *dev,
if (agent) {
send_buf = ib_create_send_mad(agent, qpn, 0, 0, IB_MGMT_MAD_HDR,
- IB_MGMT_MAD_DATA, GFP_ATOMIC);
+ IB_MGMT_MAD_DATA, GFP_ATOMIC,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(send_buf))
return;
/*
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index f32b4628e991..6c8ff10101c0 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -5502,7 +5502,8 @@ static void try_7322_ipg(struct qib_pportdata *ppd)
goto retry;
send_buf = ib_create_send_mad(agent, 0, 0, 0, IB_MGMT_MAD_HDR,
- IB_MGMT_MAD_DATA, GFP_ATOMIC);
+ IB_MGMT_MAD_DATA, GFP_ATOMIC,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(send_buf))
goto retry;
diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c
index 395f4046dba2..8dc3bc213845 100644
--- a/drivers/infiniband/hw/qib/qib_mad.c
+++ b/drivers/infiniband/hw/qib/qib_mad.c
@@ -83,7 +83,8 @@ static void qib_send_trap(struct qib_ibport *ibp, void *data, unsigned len)
return;
send_buf = ib_create_send_mad(agent, 0, 0, 0, IB_MGMT_MAD_HDR,
- IB_MGMT_MAD_DATA, GFP_ATOMIC);
+ IB_MGMT_MAD_DATA, GFP_ATOMIC,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(send_buf))
return;
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 9b84b4c0a000..68201d69d99e 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -476,7 +476,8 @@ static void srpt_mad_recv_handler(struct ib_mad_agent *mad_agent,
rsp = ib_create_send_mad(mad_agent, mad_wc->wc->src_qp,
mad_wc->wc->pkey_index, 0,
IB_MGMT_DEVICE_HDR, IB_MGMT_DEVICE_DATA,
- GFP_KERNEL);
+ GFP_KERNEL,
+ IB_MGMT_BASE_VERSION);
if (IS_ERR(rsp))
goto err_rsp;
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index 1b37b493b2b4..ddc82d65a9e5 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -622,6 +622,7 @@ int ib_process_mad_wc(struct ib_mad_agent *mad_agent,
* automatically adjust the allocated buffer size to account for any
* additional padding that may be necessary.
* @gfp_mask: GFP mask used for the memory allocation.
+ * @base_version: Base Version of this MAD
*
* This routine allocates a MAD for sending. The returned MAD send buffer
* will reference a data buffer usable for sending a MAD, along
@@ -637,7 +638,8 @@ struct ib_mad_send_buf *ib_create_send_mad(struct ib_mad_agent *mad_agent,
u32 remote_qpn, u16 pkey_index,
int rmpp_active,
int hdr_len, int data_len,
- gfp_t gfp_mask);
+ gfp_t gfp_mask,
+ u8 base_version);
/**
* ib_is_mad_class_rmpp - returns whether given management class
--
1.8.2
* [PATCH 08/14] IB/core: Add rdma_max_mad_size helper
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Add the max MAD size to the device immutable data structure and change all
drivers that support MADs to set it to IB_MGMT_MAD_SIZE.
Add a WARN_ON in ib_mad_port_open to verify that every device supports at
least IB_MGMT_MAD_SIZE.
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
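A device with larger MADs would override the default in its port_immutable
callback; a sketch under stated assumptions (the function name is
hypothetical, and OPA_MGMT_MAD_SIZE is a constant introduced later in this
series):

static int opa_dev_port_immutable(struct ib_device *ibdev, u8 port_num,
				  struct ib_port_immutable *immutable)
{
	struct ib_port_attr attr;
	int err;

	err = ib_query_port(ibdev, port_num, &attr);
	if (err)
		return err;

	immutable->pkey_tbl_len = attr.pkey_tbl_len;
	immutable->gid_tbl_len = attr.gid_tbl_len;
	immutable->core_cap_flags = RDMA_CORE_PORT_IBA_IB;
	/* jumbo MADs: larger than the IB_MGMT_MAD_SIZE minimum (assumed name) */
	immutable->max_mad_size = OPA_MGMT_MAD_SIZE;

	return 0;
}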
drivers/infiniband/core/mad.c | 4 ++++
drivers/infiniband/hw/ehca/ehca_main.c | 2 ++
drivers/infiniband/hw/ipath/ipath_verbs.c | 1 +
drivers/infiniband/hw/mlx4/main.c | 2 ++
drivers/infiniband/hw/mlx5/main.c | 1 +
drivers/infiniband/hw/mthca/mthca_provider.c | 1 +
drivers/infiniband/hw/ocrdma/ocrdma_main.c | 2 ++
drivers/infiniband/hw/qib/qib_verbs.c | 1 +
include/rdma/ib_mad.h | 1 +
include/rdma/ib_verbs.h | 15 +++++++++++++++
10 files changed, 30 insertions(+)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 8055195b5d60..ea25e9467cb5 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -2937,6 +2937,10 @@ static int ib_mad_port_open(struct ib_device *device,
unsigned long flags;
char name[sizeof "ib_mad123"];
int has_smi;
+ size_t max_mad_size = rdma_max_mad_size(device, port_num);
+
+ if (WARN_ON(max_mad_size < IB_MGMT_MAD_SIZE))
+ return -EFAULT;
/* Create new device info */
port_priv = kzalloc(sizeof *port_priv, GFP_KERNEL);
diff --git a/drivers/infiniband/hw/ehca/ehca_main.c b/drivers/infiniband/hw/ehca/ehca_main.c
index 5e30b72d3677..64c834b358d6 100644
--- a/drivers/infiniband/hw/ehca/ehca_main.c
+++ b/drivers/infiniband/hw/ehca/ehca_main.c
@@ -46,6 +46,7 @@
#include <linux/notifier.h>
#include <linux/memory.h>
+#include <rdma/ib_mad.h>
#include "ehca_classes.h"
#include "ehca_iverbs.h"
#include "ehca_mrmw.h"
@@ -444,6 +445,7 @@ static int ehca_port_immutable(struct ib_device *ibdev, u8 port_num,
immutable->pkey_tbl_len = attr.pkey_tbl_len;
immutable->gid_tbl_len = attr.gid_tbl_len;
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_IB;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
return 0;
}
diff --git a/drivers/infiniband/hw/ipath/ipath_verbs.c b/drivers/infiniband/hw/ipath/ipath_verbs.c
index 764081d305b6..c67e8c22dabc 100644
--- a/drivers/infiniband/hw/ipath/ipath_verbs.c
+++ b/drivers/infiniband/hw/ipath/ipath_verbs.c
@@ -1993,6 +1993,7 @@ static int ipath_port_immutable(struct ib_device *ibdev, u8 port_num,
immutable->pkey_tbl_len = attr.pkey_tbl_len;
immutable->gid_tbl_len = attr.gid_tbl_len;
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_IB;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
return 0;
}
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index c49dd0bb251a..7c62daf58302 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -2132,6 +2132,8 @@ static int mlx4_port_immutable(struct ib_device *ibdev, u8 port_num,
else
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
+
return 0;
}
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b2fdb9cfa645..bb7f718adc11 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1195,6 +1195,7 @@ static int mlx5_port_immutable(struct ib_device *ibdev, u8 port_num,
immutable->pkey_tbl_len = attr.pkey_tbl_len;
immutable->gid_tbl_len = attr.gid_tbl_len;
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_IB;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
return 0;
}
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 509d59e7a15a..db11d0cc5d59 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -1257,6 +1257,7 @@ static int mthca_port_immutable(struct ib_device *ibdev, u8 port_num,
immutable->pkey_tbl_len = attr.pkey_tbl_len;
immutable->gid_tbl_len = attr.gid_tbl_len;
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_IB;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
return 0;
}
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
index f55289869357..8a1398b253a2 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
@@ -30,6 +30,7 @@
#include <rdma/ib_verbs.h>
#include <rdma/ib_user_verbs.h>
#include <rdma/ib_addr.h>
+#include <rdma/ib_mad.h>
#include <linux/netdevice.h>
#include <net/addrconf.h>
@@ -215,6 +216,7 @@ static int ocrdma_port_immutable(struct ib_device *ibdev, u8 port_num,
immutable->pkey_tbl_len = attr.pkey_tbl_len;
immutable->gid_tbl_len = attr.gid_tbl_len;
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
return 0;
}
diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index dba1c92f1a54..e9e21f9c36e2 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -2053,6 +2053,7 @@ static int qib_port_immutable(struct ib_device *ibdev, u8 port_num,
immutable->pkey_tbl_len = attr.pkey_tbl_len;
immutable->gid_tbl_len = attr.gid_tbl_len;
immutable->core_cap_flags = RDMA_CORE_PORT_IBA_IB;
+ immutable->max_mad_size = IB_MGMT_MAD_SIZE;
return 0;
}
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index ddc82d65a9e5..20ed361a9224 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -135,6 +135,7 @@ enum {
IB_MGMT_SA_DATA = 200,
IB_MGMT_DEVICE_HDR = 64,
IB_MGMT_DEVICE_DATA = 192,
+ IB_MGMT_MAD_SIZE = IB_MGMT_MAD_HDR + IB_MGMT_MAD_DATA,
};
struct ib_mad_hdr {
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index ad499bda62a4..8859afea4e2f 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1523,6 +1523,7 @@ struct ib_port_immutable {
int pkey_tbl_len;
int gid_tbl_len;
u32 core_cap_flags;
+ u32 max_mad_size;
};
struct ib_device {
@@ -2040,6 +2041,20 @@ static inline bool rdma_cap_read_multi_sge(struct ib_device *device,
return !(device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_IWARP);
}
+/**
+ * rdma_max_mad_size - Return the max MAD size required by this RDMA Port.
+ *
+ * @device: Device
+ * @port_num: Port number
+ *
+ * Return the max MAD size required by the Port. Should return 0 if the port
+ * does not support MADs
+ */
+static inline size_t rdma_max_mad_size(struct ib_device *device, u8 port_num)
+{
+ return device->port_immutable[port_num].max_mad_size;
+}
+
int ib_query_gid(struct ib_device *device,
u8 port_num, int index, union ib_gid *gid);
--
1.8.2
* [PATCH 09/14] IB/mad: Convert allocations from kmem_cache to kmalloc
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
kmalloc is more flexible in supporting devices with different-sized MADs,
and research and testing showed that the current use of kmem_cache provides
no performance benefit over kmalloc.
Use the new max_mad_size specified by devices for the allocations and DMA maps.
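As a minimal sketch of the resulting allocation pattern (illustrative only;
mad_size is assumed to come from rdma_max_mad_size() for the port, and
alloc_mad_priv matches the helper added below):

static int example_alloc_one_mad(size_t mad_size)
{
	struct ib_mad_private *mad_priv;

	mad_priv = alloc_mad_priv(mad_size, GFP_KERNEL);
	if (!mad_priv)
		return -ENOMEM;

	/* ... post as a receive buffer, process completion, etc. ... */

	kfree(mad_priv);	/* plain kfree() pairs with the kmalloc */
	return 0;
}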
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
drivers/infiniband/core/mad.c | 82 +++++++++++++++++++------------------------
1 file changed, 37 insertions(+), 45 deletions(-)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index ea25e9467cb5..cb5c1e484901 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -59,8 +59,6 @@ MODULE_PARM_DESC(send_queue_size, "Size of send queue in number of work requests
module_param_named(recv_queue_size, mad_recvq_size, int, 0444);
MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");
-static struct kmem_cache *ib_mad_cache;
-
static struct list_head ib_mad_port_list;
static u32 ib_mad_client_id = 0;
@@ -717,6 +715,17 @@ static void build_smp_wc(struct ib_qp *qp,
wc->port_num = port_num;
}
+static struct ib_mad_private *alloc_mad_priv(size_t mad_size, gfp_t flags)
+{
+ return (kmalloc(sizeof(struct ib_mad_private_header) +
+ sizeof(struct ib_grh) + mad_size, flags));
+}
+
+static size_t max_mad_size(struct ib_mad_port_private *port_priv)
+{
+ return rdma_max_mad_size(port_priv->device, port_priv->port_num);
+}
+
/*
* Return 0 if SMP is to be sent
* Return 1 if SMP was consumed locally (whether or not solicited)
@@ -771,7 +780,9 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
}
local->mad_priv = NULL;
local->recv_mad_agent = NULL;
- mad_priv = kmem_cache_alloc(ib_mad_cache, GFP_ATOMIC);
+
+ mad_priv = alloc_mad_priv(max_mad_size(mad_agent_priv->qp_info->port_priv),
+ GFP_ATOMIC);
if (!mad_priv) {
ret = -ENOMEM;
dev_err(&device->dev, "No memory for local response MAD\n");
@@ -801,10 +812,10 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
*/
atomic_inc(&mad_agent_priv->refcount);
} else
- kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(mad_priv);
break;
case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED:
- kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(mad_priv);
break;
case IB_MAD_RESULT_SUCCESS:
/* Treat like an incoming receive MAD */
@@ -820,14 +831,14 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
* No receiving agent so drop packet and
* generate send completion.
*/
- kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(mad_priv);
break;
}
local->mad_priv = mad_priv;
local->recv_mad_agent = recv_mad_agent;
break;
default:
- kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(mad_priv);
kfree(local);
ret = -EINVAL;
goto out;
@@ -1238,7 +1249,7 @@ void ib_free_recv_mad(struct ib_mad_recv_wc *mad_recv_wc)
recv_wc);
priv = container_of(mad_priv_hdr, struct ib_mad_private,
header);
- kmem_cache_free(ib_mad_cache, priv);
+ kfree(priv);
}
}
EXPORT_SYMBOL(ib_free_recv_mad);
@@ -1971,6 +1982,11 @@ enum smi_action handle_ib_smi(struct ib_mad_port_private *port_priv,
return IB_SMI_HANDLE;
}
+static size_t mad_recv_buf_size(struct ib_mad_port_private *port_priv)
+{
+ return(sizeof(struct ib_grh) + max_mad_size(port_priv));
+}
+
static bool generate_unmatched_resp(struct ib_mad_private *recv,
struct ib_mad_private *response)
{
@@ -2011,8 +2027,7 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
recv = container_of(mad_priv_hdr, struct ib_mad_private, header);
ib_dma_unmap_single(port_priv->device,
recv->header.mapping,
- sizeof(struct ib_mad_private) -
- sizeof(struct ib_mad_private_header),
+ mad_recv_buf_size(port_priv),
DMA_FROM_DEVICE);
/* Setup MAD receive work completion from "normal" work completion */
@@ -2029,7 +2044,7 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
if (!validate_mad(&recv->mad.mad.mad_hdr, qp_info->qp->qp_num))
goto out;
- response = kmem_cache_alloc(ib_mad_cache, GFP_KERNEL);
+ response = alloc_mad_priv(max_mad_size(port_priv), GFP_KERNEL);
if (!response) {
dev_err(&port_priv->device->dev,
"ib_mad_recv_done_handler no memory for response buffer\n");
@@ -2089,7 +2104,7 @@ out:
if (response) {
ib_mad_post_receive_mads(qp_info, response);
if (recv)
- kmem_cache_free(ib_mad_cache, recv);
+ kfree(recv);
} else
ib_mad_post_receive_mads(qp_info, recv);
}
@@ -2549,7 +2564,7 @@ local_send_completion:
spin_lock_irqsave(&mad_agent_priv->lock, flags);
atomic_dec(&mad_agent_priv->refcount);
if (free_mad)
- kmem_cache_free(ib_mad_cache, local->mad_priv);
+ kfree(local->mad_priv);
kfree(local);
}
spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
@@ -2678,7 +2693,8 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
mad_priv = mad;
mad = NULL;
} else {
- mad_priv = kmem_cache_alloc(ib_mad_cache, GFP_KERNEL);
+ mad_priv =
+ alloc_mad_priv(max_mad_size(qp_info->port_priv), GFP_KERNEL);
if (!mad_priv) {
dev_err(&qp_info->port_priv->device->dev,
"No memory for receive buffer\n");
@@ -2688,8 +2704,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
}
sg_list.addr = ib_dma_map_single(qp_info->port_priv->device,
&mad_priv->grh,
- sizeof *mad_priv -
- sizeof mad_priv->header,
+ mad_recv_buf_size(qp_info->port_priv),
DMA_FROM_DEVICE);
if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device,
sg_list.addr))) {
@@ -2713,10 +2728,9 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
spin_unlock_irqrestore(&recv_queue->lock, flags);
ib_dma_unmap_single(qp_info->port_priv->device,
mad_priv->header.mapping,
- sizeof *mad_priv -
- sizeof mad_priv->header,
+ mad_recv_buf_size(qp_info->port_priv),
DMA_FROM_DEVICE);
- kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(mad_priv);
dev_err(&qp_info->port_priv->device->dev,
"ib_post_recv failed: %d\n", ret);
break;
@@ -2753,10 +2767,9 @@ static void cleanup_recv_queue(struct ib_mad_qp_info *qp_info)
ib_dma_unmap_single(qp_info->port_priv->device,
recv->header.mapping,
- sizeof(struct ib_mad_private) -
- sizeof(struct ib_mad_private_header),
+ mad_recv_buf_size(qp_info->port_priv),
DMA_FROM_DEVICE);
- kmem_cache_free(ib_mad_cache, recv);
+ kfree(recv);
}
qp_info->recv_queue.count = 0;
@@ -2937,9 +2950,8 @@ static int ib_mad_port_open(struct ib_device *device,
unsigned long flags;
char name[sizeof "ib_mad123"];
int has_smi;
- size_t max_mad_size = rdma_max_mad_size(device, port_num);
- if (WARN_ON(max_mad_size < IB_MGMT_MAD_SIZE))
+ if (WARN_ON(rdma_max_mad_size(device, port_num) < IB_MGMT_MAD_SIZE))
return -EFAULT;
/* Create new device info */
@@ -3149,45 +3161,25 @@ static struct ib_client mad_client = {
static int __init ib_mad_init_module(void)
{
- int ret;
-
mad_recvq_size = min(mad_recvq_size, IB_MAD_QP_MAX_SIZE);
mad_recvq_size = max(mad_recvq_size, IB_MAD_QP_MIN_SIZE);
mad_sendq_size = min(mad_sendq_size, IB_MAD_QP_MAX_SIZE);
mad_sendq_size = max(mad_sendq_size, IB_MAD_QP_MIN_SIZE);
- ib_mad_cache = kmem_cache_create("ib_mad",
- sizeof(struct ib_mad_private),
- 0,
- SLAB_HWCACHE_ALIGN,
- NULL);
- if (!ib_mad_cache) {
- pr_err("Couldn't create ib_mad cache\n");
- ret = -ENOMEM;
- goto error1;
- }
-
INIT_LIST_HEAD(&ib_mad_port_list);
if (ib_register_client(&mad_client)) {
pr_err("Couldn't register ib_mad client\n");
- ret = -EINVAL;
- goto error2;
+ return(-EINVAL);
}
return 0;
-
-error2:
- kmem_cache_destroy(ib_mad_cache);
-error1:
- return ret;
}
static void __exit ib_mad_cleanup_module(void)
{
ib_unregister_client(&mad_client);
- kmem_cache_destroy(ib_mad_cache);
}
module_init(ib_mad_init_module);
--
1.8.2
* [PATCH 10/14] IB/mad: Add MAD size parameters to process_mad
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
In support of variable-length MADs, add in and out MAD size parameters to
the process_mad call.
The out MAD size parameter is passed by reference such that it can be updated
by the agent to indicate the proper response length to be sent by the MAD
stack.
The in and out MAD parameters are made generic by specifying them as
ib_mad_hdr pointers.
Drivers are modified as needed.
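As a sketch, a driver that handles only fixed-size IB MADs validates the
new size parameters up front; this mirrors the pattern applied to the
drivers below (the function name is illustrative):

static int example_process_mad(struct ib_device *ibdev, int mad_flags,
			       u8 port_num, struct ib_wc *in_wc,
			       struct ib_grh *in_grh,
			       struct ib_mad_hdr *in, size_t in_mad_size,
			       struct ib_mad_hdr *out, size_t *out_mad_size)
{
	struct ib_mad *in_mad = (struct ib_mad *)in;
	struct ib_mad *out_mad = (struct ib_mad *)out;

	/* Reject anything other than a fixed 256-byte IB MAD. */
	if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
		    *out_mad_size != sizeof(*out_mad)))
		return IB_MAD_RESULT_FAILURE;

	/*
	 * Class-specific processing would go here; an agent generating a
	 * variable-length reply would update *out_mad_size before return.
	 */
	return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
}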
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
drivers/infiniband/core/mad.c | 26 +++++++++++++++++---------
drivers/infiniband/core/sysfs.c | 5 ++++-
drivers/infiniband/hw/amso1100/c2_provider.c | 5 ++++-
drivers/infiniband/hw/cxgb3/iwch_provider.c | 5 ++++-
drivers/infiniband/hw/cxgb4/provider.c | 7 +++++--
drivers/infiniband/hw/ehca/ehca_iverbs.h | 4 ++--
drivers/infiniband/hw/ehca/ehca_sqp.c | 9 ++++++++-
drivers/infiniband/hw/ipath/ipath_mad.c | 9 ++++++++-
drivers/infiniband/hw/ipath/ipath_verbs.h | 3 ++-
drivers/infiniband/hw/mlx4/mad.c | 10 +++++++++-
drivers/infiniband/hw/mlx4/mlx4_ib.h | 3 ++-
drivers/infiniband/hw/mlx5/mad.c | 9 ++++++++-
drivers/infiniband/hw/mlx5/mlx5_ib.h | 3 ++-
drivers/infiniband/hw/mthca/mthca_dev.h | 4 ++--
drivers/infiniband/hw/mthca/mthca_mad.c | 10 ++++++++--
drivers/infiniband/hw/nes/nes_verbs.c | 3 ++-
drivers/infiniband/hw/ocrdma/ocrdma_ah.c | 9 ++++++++-
drivers/infiniband/hw/ocrdma/ocrdma_ah.h | 3 ++-
drivers/infiniband/hw/ocrdma/ocrdma_stats.h | 1 +
drivers/infiniband/hw/qib/qib_mad.c | 9 ++++++++-
drivers/infiniband/hw/qib/qib_verbs.h | 3 ++-
include/rdma/ib_verbs.h | 8 +++++---
22 files changed, 113 insertions(+), 35 deletions(-)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index cb5c1e484901..a8ea6137771b 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -745,6 +745,8 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
u8 port_num;
struct ib_wc mad_wc;
struct ib_send_wr *send_wr = &mad_send_wr->send_wr;
+ size_t in_mad_size = max_mad_size(mad_agent_priv->qp_info->port_priv);
+ size_t out_mad_size;
if (device->node_type == RDMA_NODE_IB_SWITCH &&
smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
@@ -781,8 +783,8 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
local->mad_priv = NULL;
local->recv_mad_agent = NULL;
- mad_priv = alloc_mad_priv(max_mad_size(mad_agent_priv->qp_info->port_priv),
- GFP_ATOMIC);
+ out_mad_size = max_mad_size(mad_agent_priv->qp_info->port_priv);
+ mad_priv = alloc_mad_priv(out_mad_size, GFP_ATOMIC);
if (!mad_priv) {
ret = -ENOMEM;
dev_err(&device->dev, "No memory for local response MAD\n");
@@ -797,8 +799,9 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
/* No GRH for DR SMP */
ret = device->process_mad(device, 0, port_num, &mad_wc, NULL,
- (struct ib_mad *)smp,
- (struct ib_mad *)&mad_priv->mad);
+ (struct ib_mad_hdr *)smp, in_mad_size,
+ (struct ib_mad_hdr *)&mad_priv->mad,
+ &out_mad_size);
switch (ret)
{
case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY:
@@ -2017,6 +2020,7 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
struct ib_mad_agent_private *mad_agent;
int port_num;
int ret = IB_MAD_RESULT_SUCCESS;
+ size_t resp_mad_size;
mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id;
qp_info = mad_list->mad_queue->qp_info;
@@ -2044,7 +2048,8 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
if (!validate_mad(&recv->mad.mad.mad_hdr, qp_info->qp->qp_num))
goto out;
- response = alloc_mad_priv(max_mad_size(port_priv), GFP_KERNEL);
+ resp_mad_size = max_mad_size(port_priv);
+ response = alloc_mad_priv(resp_mad_size, GFP_KERNEL);
if (!response) {
dev_err(&port_priv->device->dev,
"ib_mad_recv_done_handler no memory for response buffer\n");
@@ -2069,8 +2074,10 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
ret = port_priv->device->process_mad(port_priv->device, 0,
port_priv->port_num,
wc, &recv->grh,
- &recv->mad.mad,
- &response->mad.mad);
+ (struct ib_mad_hdr *)&recv->mad.mad,
+ max_mad_size(port_priv),
+ (struct ib_mad_hdr *)&response->mad.mad,
+ &resp_mad_size);
if (ret & IB_MAD_RESULT_SUCCESS) {
if (ret & IB_MAD_RESULT_CONSUMED)
goto out;
@@ -2693,8 +2700,9 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
mad_priv = mad;
mad = NULL;
} else {
- mad_priv =
- alloc_mad_priv(max_mad_size(qp_info->port_priv), GFP_KERNEL);
+ size_t mad_size = max_mad_size(qp_info->port_priv);
+
+ mad_priv = alloc_mad_priv(mad_size, GFP_KERNEL);
if (!mad_priv) {
dev_err(&qp_info->port_priv->device->dev,
"No memory for receive buffer\n");
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
index d0334c101ecb..101ba69362e8 100644
--- a/drivers/infiniband/core/sysfs.c
+++ b/drivers/infiniband/core/sysfs.c
@@ -326,6 +326,7 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
int width = (tab_attr->index >> 16) & 0xff;
struct ib_mad *in_mad = NULL;
struct ib_mad *out_mad = NULL;
+ size_t out_mad_size = sizeof(*out_mad);
ssize_t ret;
if (!p->ibdev->process_mad)
@@ -347,7 +348,9 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
in_mad->data[41] = p->port_num; /* PortSelect field */
if ((p->ibdev->process_mad(p->ibdev, IB_MAD_IGNORE_MKEY,
- p->port_num, NULL, NULL, in_mad, out_mad) &
+ p->port_num, NULL, NULL,
+ (struct ib_mad_hdr *)in_mad, sizeof(*in_mad),
+ (struct ib_mad_hdr *)out_mad, &out_mad_size) &
(IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) !=
(IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) {
ret = -EINVAL;
diff --git a/drivers/infiniband/hw/amso1100/c2_provider.c b/drivers/infiniband/hw/amso1100/c2_provider.c
index d396c39918de..7a1f92549bfa 100644
--- a/drivers/infiniband/hw/amso1100/c2_provider.c
+++ b/drivers/infiniband/hw/amso1100/c2_provider.c
@@ -584,7 +584,10 @@ static int c2_process_mad(struct ib_device *ibdev,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in_mad,
+ size_t in_mad_size,
+ struct ib_mad_hdr *out_mad,
+ size_t *out_mad_size)
{
pr_debug("%s:%u\n", __func__, __LINE__);
return -ENOSYS;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 061ef08c92e2..c6b1a36accaa 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -87,7 +87,10 @@ static int iwch_process_mad(struct ib_device *ibdev,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in_mad,
+ size_t in_mad_size,
+ struct ib_mad_hdr *out_mad,
+ size_t *out_mad_size)
{
return -ENOSYS;
}
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index ef08a9f29451..12519473d7da 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -81,8 +81,11 @@ static int c4iw_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
static int c4iw_process_mad(struct ib_device *ibdev, int mad_flags,
u8 port_num, struct ib_wc *in_wc,
- struct ib_grh *in_grh, struct ib_mad *in_mad,
- struct ib_mad *out_mad)
+ struct ib_grh *in_grh,
+ struct ib_mad_hdr *in_mad,
+ size_t in_mad_size,
+ struct ib_mad_hdr *out_mad,
+ size_t *out_mad_size)
{
return -ENOSYS;
}
diff --git a/drivers/infiniband/hw/ehca/ehca_iverbs.h b/drivers/infiniband/hw/ehca/ehca_iverbs.h
index 077185b3fbd6..7ea3ea36a6ef 100644
--- a/drivers/infiniband/hw/ehca/ehca_iverbs.h
+++ b/drivers/infiniband/hw/ehca/ehca_iverbs.h
@@ -192,8 +192,8 @@ int ehca_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
int ehca_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad,
- struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
void ehca_poll_eqs(unsigned long data);
diff --git a/drivers/infiniband/hw/ehca/ehca_sqp.c b/drivers/infiniband/hw/ehca/ehca_sqp.c
index dba8f9f8b996..76b401720593 100644
--- a/drivers/infiniband/hw/ehca/ehca_sqp.c
+++ b/drivers/infiniband/hw/ehca/ehca_sqp.c
@@ -218,9 +218,16 @@ perf_reply:
int ehca_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
int ret;
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
if (!port_num || port_num > ibdev->phys_port_cnt || !in_wc)
return IB_MAD_RESULT_FAILURE;
diff --git a/drivers/infiniband/hw/ipath/ipath_mad.c b/drivers/infiniband/hw/ipath/ipath_mad.c
index e890e5ba0e01..b384cfd4685e 100644
--- a/drivers/infiniband/hw/ipath/ipath_mad.c
+++ b/drivers/infiniband/hw/ipath/ipath_mad.c
@@ -1491,9 +1491,16 @@ bail:
*/
int ipath_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
int ret;
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
switch (in_mad->mad_hdr.mgmt_class) {
case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
diff --git a/drivers/infiniband/hw/ipath/ipath_verbs.h b/drivers/infiniband/hw/ipath/ipath_verbs.h
index ae6cff4abffc..cd8dd09722f4 100644
--- a/drivers/infiniband/hw/ipath/ipath_verbs.h
+++ b/drivers/infiniband/hw/ipath/ipath_verbs.h
@@ -703,7 +703,8 @@ int ipath_process_mad(struct ib_device *ibdev,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
/*
* Compare the lower 24 bits of the two values.
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index d44614f9da49..ce1a6105411d 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -868,8 +868,16 @@ static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
+
switch (rdma_port_get_link_layer(ibdev, port_num)) {
case IB_LINK_LAYER_INFINIBAND:
return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index fce3934372a1..a8c92c6d403a 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -710,7 +710,8 @@ int mlx4_MAD_IFC(struct mlx4_ib_dev *dev, int mad_ifc_flags,
void *in_mad, void *response_mad);
int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
int mlx4_ib_mad_init(struct mlx4_ib_dev *dev);
void mlx4_ib_mad_cleanup(struct mlx4_ib_dev *dev);
diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c
index 9cf9a37bb5ff..76abd14b04c7 100644
--- a/drivers/infiniband/hw/mlx5/mad.c
+++ b/drivers/infiniband/hw/mlx5/mad.c
@@ -59,10 +59,17 @@ int mlx5_MAD_IFC(struct mlx5_ib_dev *dev, int ignore_mkey, int ignore_bkey,
int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
u16 slid;
int err;
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE);
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index dff1cfcdf476..d6c88b7f9d2d 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -587,7 +587,8 @@ int mlx5_ib_unmap_fmr(struct list_head *fmr_list);
int mlx5_ib_fmr_dealloc(struct ib_fmr *ibfmr);
int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev,
struct ib_ucontext *context,
struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/mthca/mthca_dev.h b/drivers/infiniband/hw/mthca/mthca_dev.h
index 7e6a6d64ad4e..60d15f13a597 100644
--- a/drivers/infiniband/hw/mthca/mthca_dev.h
+++ b/drivers/infiniband/hw/mthca/mthca_dev.h
@@ -578,8 +578,8 @@ int mthca_process_mad(struct ib_device *ibdev,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad,
- struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
int mthca_create_agents(struct mthca_dev *dev);
void mthca_free_agents(struct mthca_dev *dev);
diff --git a/drivers/infiniband/hw/mthca/mthca_mad.c b/drivers/infiniband/hw/mthca/mthca_mad.c
index 2a34cd2f92b7..78504be7fcca 100644
--- a/drivers/infiniband/hw/mthca/mthca_mad.c
+++ b/drivers/infiniband/hw/mthca/mthca_mad.c
@@ -198,13 +198,19 @@ int mthca_process_mad(struct ib_device *ibdev,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad,
- struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
int err;
u16 slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE);
u16 prev_lid = 0;
struct ib_port_attr pattr;
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
/* Forward locally generated traps to the SM */
if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP &&
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index 05530e3f6ff0..1eb0bae56381 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -3222,7 +3222,8 @@ static int nes_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
*/
static int nes_process_mad(struct ib_device *ibdev, int mad_flags,
u8 port_num, struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
nes_debug(NES_DBG_INIT, "\n");
return -ENOSYS;
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
index d812904f3984..74c2f2df7554 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
@@ -198,10 +198,17 @@ int ocrdma_process_mad(struct ib_device *ibdev,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
int status;
struct ocrdma_dev *dev;
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
switch (in_mad->mad_hdr.mgmt_class) {
case IB_MGMT_CLASS_PERF_MGMT:
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_ah.h b/drivers/infiniband/hw/ocrdma/ocrdma_ah.h
index 726a87cf22dc..7c29d17d179b 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_ah.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_ah.h
@@ -44,5 +44,6 @@ int ocrdma_process_mad(struct ib_device *,
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
#endif /* __OCRDMA_AH_H__ */
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_stats.h b/drivers/infiniband/hw/ocrdma/ocrdma_stats.h
index 091edd68a8a3..42e9f0d5da9b 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_stats.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_stats.h
@@ -29,6 +29,7 @@
#define __OCRDMA_STATS_H__
#include <linux/debugfs.h>
+#include <rdma/ib_mad.h>
#include "ocrdma.h"
#include "ocrdma_hw.h"
diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c
index 8dc3bc213845..13eefcee33dc 100644
--- a/drivers/infiniband/hw/qib/qib_mad.c
+++ b/drivers/infiniband/hw/qib/qib_mad.c
@@ -2402,11 +2402,18 @@ bail:
*/
int qib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad)
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size)
{
int ret;
struct qib_ibport *ibp = to_iport(ibdev, port);
struct qib_pportdata *ppd = ppd_from_ibp(ibp);
+ struct ib_mad *in_mad = (struct ib_mad *)in;
+ struct ib_mad *out_mad = (struct ib_mad *)out;
+
+ if (WARN_ON(in_mad_size != sizeof(*in_mad) ||
+ *out_mad_size != sizeof(*out_mad)))
+ return IB_MAD_RESULT_FAILURE;
switch (in_mad->mad_hdr.mgmt_class) {
case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
diff --git a/drivers/infiniband/hw/qib/qib_verbs.h b/drivers/infiniband/hw/qib/qib_verbs.h
index bfc8948fdd35..77f1d313932a 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.h
+++ b/drivers/infiniband/hw/qib/qib_verbs.h
@@ -873,7 +873,8 @@ void qib_sys_guid_chg(struct qib_ibport *ibp);
void qib_node_desc_chg(struct qib_ibport *ibp);
int qib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_wc *in_wc, struct ib_grh *in_grh,
- struct ib_mad *in_mad, struct ib_mad *out_mad);
+ struct ib_mad_hdr *in, size_t in_mad_size,
+ struct ib_mad_hdr *out, size_t *out_mad_size);
int qib_create_agents(struct qib_ibdev *dev);
void qib_free_agents(struct qib_ibdev *dev);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 8859afea4e2f..7b1dcb380987 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1452,7 +1452,7 @@ struct ib_flow {
struct ib_uobject *uobject;
};
-struct ib_mad;
+struct ib_mad_hdr;
struct ib_grh;
enum ib_process_mad_flags {
@@ -1693,8 +1693,10 @@ struct ib_device {
u8 port_num,
struct ib_wc *in_wc,
struct ib_grh *in_grh,
- struct ib_mad *in_mad,
- struct ib_mad *out_mad);
+ struct ib_mad_hdr *in_mad,
+ size_t in_mad_size,
+ struct ib_mad_hdr *out_mad,
+ size_t *out_mad_size);
struct ib_xrcd * (*alloc_xrcd)(struct ib_device *device,
struct ib_ucontext *ucontext,
struct ib_udata *udata);
--
1.8.2
* [PATCH 11/14] IB/core: Add rdma_cap_opa_mad helper
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
OPA MADs share a common header with IBTA MADs but with a different base version
and an extended length. These MADs increase the performance of management
traffic on OPA devices.
Sharing a common header with IBTA MADs allows us to reuse most of the MAD
processing code for OPA MADs, and also to support some IBTA MADs on OPA
devices.
This patch adds the Core Capability flag for OPA MADs as well as a helper
function for testing that flag, both of which are used in future patches.
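For illustration only, a port advertises the capability through its
immutable data and the core then tests it with the new helper; the driver
hook below is hypothetical, and OPA_MGMT_MAD_SIZE is introduced later in
this series:

static int example_port_immutable(struct ib_device *ibdev, u8 port_num,
				  struct ib_port_immutable *immutable)
{
	/* IB protocol/management caps plus the OPA MAD capability */
	immutable->core_cap_flags = RDMA_CORE_PORT_INTEL_OPA;
	immutable->max_mad_size = OPA_MGMT_MAD_SIZE;
	return 0;
}

Core code can then branch on the capability:

	if (rdma_cap_opa_mad(device, port_num))
		/* use the OPA base version and 2K MAD sizes */;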
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
include/rdma/ib_verbs.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 7b1dcb380987..6d729651901c 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -362,6 +362,7 @@ union rdma_protocol_stats {
#define RDMA_CORE_CAP_IB_CM 0x00000004
#define RDMA_CORE_CAP_IW_CM 0x00000008
#define RDMA_CORE_CAP_IB_SA 0x00000010
+#define RDMA_CORE_CAP_OPA_MAD 0x00000020
/* Address format 0x000FF000 */
#define RDMA_CORE_CAP_AF_IB 0x00001000
@@ -386,6 +387,8 @@ union rdma_protocol_stats {
| RDMA_CORE_CAP_ETH_AH)
#define RDMA_CORE_PORT_IWARP (RDMA_CORE_CAP_PROT_IWARP \
| RDMA_CORE_CAP_IW_CM)
+#define RDMA_CORE_PORT_INTEL_OPA (RDMA_CORE_PORT_IBA_IB \
+ | RDMA_CORE_CAP_OPA_MAD)
struct ib_port_attr {
enum ib_port_state state;
@@ -1873,6 +1876,24 @@ static inline bool rdma_cap_ib_mad(struct ib_device *device, u8 port_num)
}
/**
+ * rdma_cap_opa_mad - Check if the port of device provides support for OPA
+ * Management Datagrams.
+ * @device: Device to check
+ * @port_num: Port number to check
+ *
+ * Intel OmniPath devices extend and/or replace the InfiniBand Management
+ * datagrams with their own versions. These OPA MADs share many but not all of
+ * the characteristics of InfiniBand MADs.
+ *
+ * Return: true if the port supports OPA MAD packet formats.
+ */
+static inline bool rdma_cap_opa_mad(struct ib_device *device, u8 port_num)
+{
+ return (device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_OPA_MAD)
+ == RDMA_CORE_CAP_OPA_MAD;
+}
+
+/**
* rdma_cap_ib_smi - Check if the port of a device provides an Infiniband
* Subnet Management Agent (SMA) on the Subnet Management Interface (SMI).
* @device: Device to check
--
1.8.2
* [PATCH 12/14] IB/mad: Add partial Intel OPA MAD support
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
This patch adds processing of OPA MADs to ib_create_send_mad.
1) Create opa_mad and opa_rmpp_mad data structures.
OPA MADs are 2K derivatives of the ib_mad and ib_rmpp_mad structures.
Therefore they share the same common headers and much of the same processing
code as IB MADs.
2) Add Intel Omni-Path Architecture defines
3) Increase max management version to accommodate OPA
4) If the device supports OPA MADs process the OPA base version.
Set MAD size and sg lengths as appropriate
Split RMPP MADs as appropriate
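As a usage sketch (agent setup is elided; remote_qpn and pkey_index are
assumed to come from the caller's context), a sender on an OPA-capable
port passes the OPA base version so the stack sizes the MAD at 2K:

	struct ib_mad_send_buf *msg;

	msg = ib_create_send_mad(agent, remote_qpn, pkey_index,
				 0,			/* rmpp_active */
				 IB_MGMT_MAD_HDR,	/* hdr_len */
				 OPA_MGMT_MAD_DATA,	/* data_len */
				 GFP_KERNEL,
				 OPA_MGMT_BASE_VERSION);
	if (IS_ERR(msg))
		return PTR_ERR(msg);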
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
drivers/infiniband/core/mad.c | 37 +++++++++++++++++++++++++++----------
drivers/infiniband/core/mad_priv.h | 4 +++-
include/rdma/ib_mad.h | 23 +++++++++++++++++++++--
3 files changed, 51 insertions(+), 13 deletions(-)
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index a8ea6137771b..50a63247f6f9 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -861,11 +861,11 @@ out:
return ret;
}
-static int get_pad_size(int hdr_len, int data_len)
+static int get_pad_size(int hdr_len, int data_len, size_t mad_size)
{
int seg_size, pad;
- seg_size = sizeof(struct ib_mad) - hdr_len;
+ seg_size = mad_size - hdr_len;
if (data_len && seg_size) {
pad = seg_size - data_len % seg_size;
return pad == seg_size ? 0 : pad;
@@ -884,14 +884,14 @@ static void free_send_rmpp_list(struct ib_mad_send_wr_private *mad_send_wr)
}
static int alloc_send_rmpp_list(struct ib_mad_send_wr_private *send_wr,
- gfp_t gfp_mask)
+ size_t mad_size, gfp_t gfp_mask)
{
struct ib_mad_send_buf *send_buf = &send_wr->send_buf;
struct ib_rmpp_base *rmpp_base = send_buf->mad;
struct ib_rmpp_segment *seg = NULL;
int left, seg_size, pad;
- send_buf->seg_size = sizeof (struct ib_mad) - send_buf->hdr_len;
+ send_buf->seg_size = mad_size - send_buf->hdr_len;
seg_size = send_buf->seg_size;
pad = send_wr->pad;
@@ -941,20 +941,30 @@ struct ib_mad_send_buf * ib_create_send_mad(struct ib_mad_agent *mad_agent,
struct ib_mad_send_wr_private *mad_send_wr;
int pad, message_size, ret, size;
void *buf;
+ size_t mad_size;
+ bool opa;
mad_agent_priv = container_of(mad_agent, struct ib_mad_agent_private,
agent);
- pad = get_pad_size(hdr_len, data_len);
+
+ opa = rdma_cap_opa_mad(mad_agent->device, mad_agent->port_num);
+
+ if (opa && base_version == OPA_MGMT_BASE_VERSION)
+ mad_size = sizeof(struct opa_mad);
+ else
+ mad_size = sizeof(struct ib_mad);
+
+ pad = get_pad_size(hdr_len, data_len, mad_size);
message_size = hdr_len + data_len + pad;
if (ib_mad_kernel_rmpp_agent(mad_agent)) {
- if (!rmpp_active && message_size > sizeof(struct ib_mad))
+ if (!rmpp_active && message_size > mad_size)
return ERR_PTR(-EINVAL);
} else
- if (rmpp_active || message_size > sizeof(struct ib_mad))
+ if (rmpp_active || message_size > mad_size)
return ERR_PTR(-EINVAL);
- size = rmpp_active ? hdr_len : sizeof(struct ib_mad);
+ size = rmpp_active ? hdr_len : mad_size;
buf = kzalloc(sizeof *mad_send_wr + size, gfp_mask);
if (!buf)
return ERR_PTR(-ENOMEM);
@@ -969,7 +979,14 @@ struct ib_mad_send_buf * ib_create_send_mad(struct ib_mad_agent *mad_agent,
mad_send_wr->mad_agent_priv = mad_agent_priv;
mad_send_wr->sg_list[0].length = hdr_len;
mad_send_wr->sg_list[0].lkey = mad_agent->mr->lkey;
- mad_send_wr->sg_list[1].length = sizeof(struct ib_mad) - hdr_len;
+
+ /* OPA MADs don't have to be the full 2048 bytes */
+ if (opa && base_version == OPA_MGMT_BASE_VERSION &&
+ data_len < mad_size - hdr_len)
+ mad_send_wr->sg_list[1].length = data_len;
+ else
+ mad_send_wr->sg_list[1].length = mad_size - hdr_len;
+
mad_send_wr->sg_list[1].lkey = mad_agent->mr->lkey;
mad_send_wr->send_wr.wr_id = (unsigned long) mad_send_wr;
@@ -982,7 +999,7 @@ struct ib_mad_send_buf * ib_create_send_mad(struct ib_mad_agent *mad_agent,
mad_send_wr->send_wr.wr.ud.pkey_index = pkey_index;
if (rmpp_active) {
- ret = alloc_send_rmpp_list(mad_send_wr, gfp_mask);
+ ret = alloc_send_rmpp_list(mad_send_wr, mad_size, gfp_mask);
if (ret) {
kfree(buf);
return ERR_PTR(ret);
diff --git a/drivers/infiniband/core/mad_priv.h b/drivers/infiniband/core/mad_priv.h
index 7b19cba2adf0..b93e9a958725 100644
--- a/drivers/infiniband/core/mad_priv.h
+++ b/drivers/infiniband/core/mad_priv.h
@@ -56,7 +56,7 @@
/* Registration table sizes */
#define MAX_MGMT_CLASS 80
-#define MAX_MGMT_VERSION 8
+#define MAX_MGMT_VERSION 0x83
#define MAX_MGMT_OUI 8
#define MAX_MGMT_VENDOR_RANGE2 (IB_MGMT_CLASS_VENDOR_RANGE2_END - \
IB_MGMT_CLASS_VENDOR_RANGE2_START + 1)
@@ -80,6 +80,8 @@ struct ib_mad_private {
struct ib_mad mad;
struct ib_rmpp_mad rmpp_mad;
struct ib_smp smp;
+ struct opa_mad opa_mad;
+ struct opa_rmpp_mad opa_rmpp_mad;
} mad;
} __attribute__ ((packed));
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index 20ed361a9224..a8a6e9d2485e 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -42,8 +42,11 @@
#include <rdma/ib_verbs.h>
#include <uapi/rdma/ib_user_mad.h>
-/* Management base version */
+/* Management base versions */
#define IB_MGMT_BASE_VERSION 1
+#define OPA_MGMT_BASE_VERSION 0x80
+
+#define OPA_SMP_CLASS_VERSION 0x80
/* Management classes */
#define IB_MGMT_CLASS_SUBN_LID_ROUTED 0x01
@@ -136,6 +139,9 @@ enum {
IB_MGMT_DEVICE_HDR = 64,
IB_MGMT_DEVICE_DATA = 192,
IB_MGMT_MAD_SIZE = IB_MGMT_MAD_HDR + IB_MGMT_MAD_DATA,
+ OPA_MGMT_MAD_DATA = 2024,
+ OPA_MGMT_RMPP_DATA = 2012,
+ OPA_MGMT_MAD_SIZE = IB_MGMT_MAD_HDR + OPA_MGMT_MAD_DATA,
};
struct ib_mad_hdr {
@@ -182,6 +188,11 @@ struct ib_mad {
u8 data[IB_MGMT_MAD_DATA];
};
+struct opa_mad {
+ struct ib_mad_hdr mad_hdr;
+ u8 data[OPA_MGMT_MAD_DATA];
+};
+
struct ib_rmpp_base {
struct ib_mad_hdr mad_hdr;
struct ib_rmpp_hdr rmpp_hdr;
@@ -192,6 +203,11 @@ struct ib_rmpp_mad {
u8 data[IB_MGMT_RMPP_DATA];
};
+struct opa_rmpp_mad {
+ struct ib_rmpp_base base;
+ u8 data[OPA_MGMT_RMPP_DATA];
+};
+
struct ib_sa_mad {
struct ib_mad_hdr mad_hdr;
struct ib_rmpp_hdr rmpp_hdr;
@@ -406,7 +422,10 @@ struct ib_mad_send_wc {
struct ib_mad_recv_buf {
struct list_head list;
struct ib_grh *grh;
- struct ib_mad *mad;
+ union {
+ struct ib_mad *mad;
+ struct opa_mad *opa_mad;
+ };
};
/**
--
1.8.2
* [PATCH 13/14] IB/mad: Add partial Intel OPA MAD support
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Add OPA SMP processing functionality.
Define the new OPA SMP format and create support functions for it, using
the previously defined helper functions as appropriate.
These functions are defined in this patch and used in the final OPA MAD support
patch.
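A small sketch of the accessors handling the two routing layouts
(illustrative only; smp is assumed to point at a received OPA SMP):

static void example_dump_smp(struct opa_smp *smp)
{
	u8 *data = opa_get_smp_data(smp);

	/* Header plus payload always total sizeof(struct opa_smp) (2048). */
	pr_debug("OPA SMP: %zu header bytes, %zu data bytes, data[0]=%#x\n",
		 opa_get_smp_header_size(smp),
		 opa_get_smp_data_size(smp), data[0]);
}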
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
drivers/infiniband/core/mad_priv.h | 2 +
drivers/infiniband/core/opa_smi.h | 78 +++++++++++++++++++++++++++
drivers/infiniband/core/smi.c | 54 +++++++++++++++++++
include/rdma/opa_smi.h | 106 +++++++++++++++++++++++++++++++++++++
4 files changed, 240 insertions(+)
create mode 100644 drivers/infiniband/core/opa_smi.h
create mode 100644 include/rdma/opa_smi.h
diff --git a/drivers/infiniband/core/mad_priv.h b/drivers/infiniband/core/mad_priv.h
index b93e9a958725..6e8b02fdcc5f 100644
--- a/drivers/infiniband/core/mad_priv.h
+++ b/drivers/infiniband/core/mad_priv.h
@@ -41,6 +41,7 @@
#include <linux/workqueue.h>
#include <rdma/ib_mad.h>
#include <rdma/ib_smi.h>
+#include <rdma/opa_smi.h>
#define IB_MAD_QPS_CORE 2 /* Always QP0 and QP1 as a minimum */
@@ -82,6 +83,7 @@ struct ib_mad_private {
struct ib_smp smp;
struct opa_mad opa_mad;
struct opa_rmpp_mad opa_rmpp_mad;
+ struct opa_smp opa_smp;
} mad;
} __attribute__ ((packed));
diff --git a/drivers/infiniband/core/opa_smi.h b/drivers/infiniband/core/opa_smi.h
new file mode 100644
index 000000000000..62d91bfa4cb7
--- /dev/null
+++ b/drivers/infiniband/core/opa_smi.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (c) 2014 Intel Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ */
+
+#ifndef __OPA_SMI_H_
+#define __OPA_SMI_H_
+
+#include <rdma/ib_smi.h>
+#include <rdma/opa_smi.h>
+
+#include "smi.h"
+
+enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, u8 node_type,
+ int port_num, int phys_port_cnt);
+int opa_smi_get_fwd_port(struct opa_smp *smp);
+extern enum smi_forward_action opa_smi_check_forward_dr_smp(struct opa_smp *smp);
+extern enum smi_action opa_smi_handle_dr_smp_send(struct opa_smp *smp,
+ u8 node_type, int port_num);
+
+/*
+ * Return IB_SMI_HANDLE if the SMP should be handled by the local SMA/SM
+ * via process_mad
+ */
+static inline enum smi_action opa_smi_check_local_smp(struct opa_smp *smp,
+ struct ib_device *device)
+{
+ /* C14-9:3 -- We're at the end of the DR segment of path */
+ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */
+ return (device->process_mad &&
+ !opa_get_smp_direction(smp) &&
+ (smp->hop_ptr == smp->hop_cnt + 1)) ?
+ IB_SMI_HANDLE : IB_SMI_DISCARD;
+}
+
+/*
+ * Return IB_SMI_HANDLE if the SMP should be handled by the local SMA/SM
+ * via process_mad
+ */
+static inline enum smi_action opa_smi_check_local_returning_smp(struct opa_smp *smp,
+ struct ib_device *device)
+{
+ /* C14-13:3 -- We're at the end of the DR segment of path */
+ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */
+ return (device->process_mad &&
+ opa_get_smp_direction(smp) &&
+ !smp->hop_ptr) ? IB_SMI_HANDLE : IB_SMI_DISCARD;
+}
+
+#endif /* __OPA_SMI_H_ */
diff --git a/drivers/infiniband/core/smi.c b/drivers/infiniband/core/smi.c
index 4969ce3338fb..8ce332207596 100644
--- a/drivers/infiniband/core/smi.c
+++ b/drivers/infiniband/core/smi.c
@@ -5,6 +5,7 @@
* Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
* Copyright (c) 2004-2007 Voltaire Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
+ * Copyright (c) 2014 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -38,6 +39,7 @@
#include <rdma/ib_smi.h>
#include "smi.h"
+#include "opa_smi.h"
static inline
enum smi_action __smi_handle_dr_smp_send(u8 node_type, int port_num,
@@ -137,6 +139,20 @@ enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
smp->dr_slid == IB_LID_PERMISSIVE);
}
+enum smi_action opa_smi_handle_dr_smp_send(struct opa_smp *smp,
+ u8 node_type, int port_num)
+{
+ return __smi_handle_dr_smp_send(node_type, port_num,
+ &smp->hop_ptr, smp->hop_cnt,
+ smp->route.dr.initial_path,
+ smp->route.dr.return_path,
+ opa_get_smp_direction(smp),
+ smp->route.dr.dr_dlid ==
+ OPA_LID_PERMISSIVE,
+ smp->route.dr.dr_slid ==
+ OPA_LID_PERMISSIVE);
+}
+
static inline
enum smi_action __smi_handle_dr_smp_recv(u8 node_type, int port_num,
int phys_port_cnt,
@@ -236,6 +252,24 @@ enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type,
smp->dr_slid == IB_LID_PERMISSIVE);
}
+/*
+ * Adjust information for a received SMP
+ * Return IB_SMI_DISCARD if the SMP should be dropped
+ */
+enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, u8 node_type,
+ int port_num, int phys_port_cnt)
+{
+ return __smi_handle_dr_smp_recv(node_type, port_num, phys_port_cnt,
+ &smp->hop_ptr, smp->hop_cnt,
+ smp->route.dr.initial_path,
+ smp->route.dr.return_path,
+ opa_get_smp_direction(smp),
+ smp->route.dr.dr_dlid ==
+ OPA_LID_PERMISSIVE,
+ smp->route.dr.dr_slid ==
+ OPA_LID_PERMISSIVE);
+}
+
static inline
enum smi_forward_action __smi_check_forward_dr_smp(u8 hop_ptr, u8 hop_cnt,
u8 direction,
@@ -277,6 +311,16 @@ enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp)
smp->dr_slid != IB_LID_PERMISSIVE);
}
+enum smi_forward_action opa_smi_check_forward_dr_smp(struct opa_smp *smp)
+{
+ return __smi_check_forward_dr_smp(smp->hop_ptr, smp->hop_cnt,
+ opa_get_smp_direction(smp),
+ smp->route.dr.dr_dlid ==
+ OPA_LID_PERMISSIVE,
+ smp->route.dr.dr_slid ==
+ OPA_LID_PERMISSIVE);
+}
+
/*
* Return the forwarding port number from initial_path for outgoing SMP and
* from return_path for returning SMP
@@ -286,3 +330,13 @@ int smi_get_fwd_port(struct ib_smp *smp)
return (!ib_get_smp_direction(smp) ? smp->initial_path[smp->hop_ptr+1] :
smp->return_path[smp->hop_ptr-1]);
}
+
+/*
+ * Return the forwarding port number from initial_path for outgoing SMP and
+ * from return_path for returning SMP
+ */
+int opa_smi_get_fwd_port(struct opa_smp *smp)
+{
+ return !opa_get_smp_direction(smp) ? smp->route.dr.initial_path[smp->hop_ptr+1] :
+ smp->route.dr.return_path[smp->hop_ptr-1];
+}
diff --git a/include/rdma/opa_smi.h b/include/rdma/opa_smi.h
new file mode 100644
index 000000000000..29063e84c253
--- /dev/null
+++ b/include/rdma/opa_smi.h
@@ -0,0 +1,106 @@
+/*
+ * Copyright (c) 2014 Intel Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#if !defined(OPA_SMI_H)
+#define OPA_SMI_H
+
+#include <rdma/ib_mad.h>
+#include <rdma/ib_smi.h>
+
+#define OPA_SMP_LID_DATA_SIZE 2016
+#define OPA_SMP_DR_DATA_SIZE 1872
+#define OPA_SMP_MAX_PATH_HOPS 64
+
+#define OPA_SMI_CLASS_VERSION 0x80
+
+#define OPA_LID_PERMISSIVE cpu_to_be32(0xFFFFFFFF)
+
+struct opa_smp {
+ u8 base_version;
+ u8 mgmt_class;
+ u8 class_version;
+ u8 method;
+ __be16 status;
+ u8 hop_ptr;
+ u8 hop_cnt;
+ __be64 tid;
+ __be16 attr_id;
+ __be16 resv;
+ __be32 attr_mod;
+ __be64 mkey;
+ union {
+ struct {
+ uint8_t data[OPA_SMP_LID_DATA_SIZE];
+ } lid;
+ struct {
+ __be32 dr_slid;
+ __be32 dr_dlid;
+ u8 initial_path[OPA_SMP_MAX_PATH_HOPS];
+ u8 return_path[OPA_SMP_MAX_PATH_HOPS];
+ u8 reserved[8];
+ u8 data[OPA_SMP_DR_DATA_SIZE];
+ } dr;
+ } route;
+} __packed;
+
+
+static inline u8
+opa_get_smp_direction(struct opa_smp *smp)
+{
+ return ib_get_smp_direction((struct ib_smp *)smp);
+}
+
+static inline u8 *opa_get_smp_data(struct opa_smp *smp)
+{
+ if (smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ return smp->route.dr.data;
+
+ return smp->route.lid.data;
+}
+
+static inline size_t opa_get_smp_data_size(struct opa_smp *smp)
+{
+ if (smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ return sizeof(smp->route.dr.data);
+
+ return sizeof(smp->route.lid.data);
+}
+
+static inline size_t opa_get_smp_header_size(struct opa_smp *smp)
+{
+ if (smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ return sizeof(*smp) - sizeof(smp->route.dr.data);
+
+ return sizeof(*smp) - sizeof(smp->route.lid.data);
+}
+
+#endif /* OPA_SMI_H */
--
1.8.2
* [PATCH 14/14] IB/mad: Add final OPA MAD processing
From: ira.weiny-ral2JQCrhuEAvxtiuMwx3w @ 2015-05-20 8:13 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Ira Weiny
From: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
For devices which support OPA MADs:
Use previously defined SMP support functions.
Pass correct base version to ib_create_send_mad when processing OPA MADs.
Process the wc.pkey_index returned by agents for the response, because OPA
SMP packets must carry a valid pkey.
Carry the correct segment size (OPA vs IBTA) of RMPP messages within
ib_mad_recv_wc.
Handle variable-length OPA MADs by:
* Adjusting the 'fake' WC for locally routed SMPs to represent the
proper incoming byte_len
* Using out_mad_size from the local HCA agents
1) when sending agent responses on the wire
2) when passing responses through the local_completions
function
NOTE: wc.byte_len includes the GRH length and therefore is different from the
in_mad_size specified to the local HCA agents. out_mad_size should _not_
include the GRH length as it is added by the verbs layer and is not part
of MAD processing.
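To make the GRH accounting concrete, a hedged sketch of the size
relationship described above (the helper name is illustrative):

/* Derive the MAD-only length from a receive work completion. */
static size_t example_in_mad_size(const struct ib_wc *wc)
{
	/* wc->byte_len covers the 40-byte GRH plus the MAD payload... */
	return wc->byte_len - sizeof(struct ib_grh);
}

/* ...while out_mad_size stays MAD-only: the verbs layer prepends the
 * GRH on the response path, outside of MAD processing. */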
Signed-off-by: Ira Weiny <ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
drivers/infiniband/core/agent.c | 23 +++-
drivers/infiniband/core/agent.h | 3 +-
drivers/infiniband/core/mad.c | 222 +++++++++++++++++++++++++++++++------
drivers/infiniband/core/mad_priv.h | 1 +
drivers/infiniband/core/mad_rmpp.c | 20 +++-
drivers/infiniband/core/user_mad.c | 19 ++--
include/rdma/ib_mad.h | 2 +
7 files changed, 242 insertions(+), 48 deletions(-)
diff --git a/drivers/infiniband/core/agent.c b/drivers/infiniband/core/agent.c
index 5c7627c3278c..fb305635c95d 100644
--- a/drivers/infiniband/core/agent.c
+++ b/drivers/infiniband/core/agent.c
@@ -80,13 +80,16 @@ ib_get_agent_port(struct ib_device *device, int port_num)
void agent_send_response(struct ib_mad *mad, struct ib_grh *grh,
struct ib_wc *wc, struct ib_device *device,
- int port_num, int qpn)
+ int port_num, int qpn, u32 resp_mad_len,
+ bool opa)
{
struct ib_agent_port_private *port_priv;
struct ib_mad_agent *agent;
struct ib_mad_send_buf *send_buf;
struct ib_ah *ah;
struct ib_mad_send_wr_private *mad_send_wr;
+ size_t data_len;
+ u8 base_version;
if (device->node_type == RDMA_NODE_IB_SWITCH)
port_priv = ib_get_agent_port(device, 0);
@@ -106,16 +109,26 @@ void agent_send_response(struct ib_mad *mad, struct ib_grh *grh,
return;
}
+ /* On OPA devices base version determines MAD size */
+ base_version = mad->mad_hdr.base_version;
+ if (opa && base_version == OPA_MGMT_BASE_VERSION)
+ data_len = resp_mad_len - IB_MGMT_MAD_HDR;
+ else
+ data_len = IB_MGMT_MAD_DATA;
+
send_buf = ib_create_send_mad(agent, wc->src_qp, wc->pkey_index, 0,
- IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
- GFP_KERNEL,
- IB_MGMT_BASE_VERSION);
+ IB_MGMT_MAD_HDR, data_len, GFP_KERNEL,
+ base_version);
if (IS_ERR(send_buf)) {
dev_err(&device->dev, "ib_create_send_mad error\n");
goto err1;
}
- memcpy(send_buf->mad, mad, sizeof *mad);
+ if (opa && base_version == OPA_MGMT_BASE_VERSION)
+ memcpy(send_buf->mad, mad, resp_mad_len);
+ else
+ memcpy(send_buf->mad, mad, sizeof(*mad));
+
send_buf->ah = ah;
if (device->node_type == RDMA_NODE_IB_SWITCH) {
diff --git a/drivers/infiniband/core/agent.h b/drivers/infiniband/core/agent.h
index 6669287009c2..7ede18b34ca8 100644
--- a/drivers/infiniband/core/agent.h
+++ b/drivers/infiniband/core/agent.h
@@ -46,6 +46,7 @@ extern int ib_agent_port_close(struct ib_device *device, int port_num);
extern void agent_send_response(struct ib_mad *mad, struct ib_grh *grh,
struct ib_wc *wc, struct ib_device *device,
- int port_num, int qpn);
+ int port_num, int qpn, u32 resp_mad_len,
+ bool opa);
#endif /* __AGENT_H_ */
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 50a63247f6f9..e3328a353f92 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -3,6 +3,7 @@
* Copyright (c) 2005 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2009 HNR Consulting. All rights reserved.
+ * Copyright (c) 2014 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -44,6 +45,7 @@
#include "mad_priv.h"
#include "mad_rmpp.h"
#include "smi.h"
+#include "opa_smi.h"
#include "agent.h"
MODULE_LICENSE("Dual BSD/GPL");
@@ -736,6 +738,7 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
{
int ret = 0;
struct ib_smp *smp = mad_send_wr->send_buf.mad;
+ struct opa_smp *opa_smp = (struct opa_smp *)smp;
unsigned long flags;
struct ib_mad_local_private *local;
struct ib_mad_private *mad_priv;
@@ -747,6 +750,9 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
struct ib_send_wr *send_wr = &mad_send_wr->send_wr;
size_t in_mad_size = max_mad_size(mad_agent_priv->qp_info->port_priv);
size_t out_mad_size;
+ u16 drslid;
+ bool opa = rdma_cap_opa_mad(mad_agent_priv->qp_info->port_priv->device,
+ mad_agent_priv->qp_info->port_priv->port_num);
if (device->node_type == RDMA_NODE_IB_SWITCH &&
smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
@@ -760,19 +766,47 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
* If we are at the start of the LID routed part, don't update the
* hop_ptr or hop_cnt. See section 14.2.2, Vol 1 IB spec.
*/
- if ((ib_get_smp_direction(smp) ? smp->dr_dlid : smp->dr_slid) ==
- IB_LID_PERMISSIVE &&
- smi_handle_dr_smp_send(smp, device->node_type, port_num) ==
- IB_SMI_DISCARD) {
- ret = -EINVAL;
- dev_err(&device->dev, "Invalid directed route\n");
- goto out;
- }
+ if (opa && smp->class_version == OPA_SMP_CLASS_VERSION) {
+ u32 opa_drslid;
+ if ((opa_get_smp_direction(opa_smp)
+ ? opa_smp->route.dr.dr_dlid : opa_smp->route.dr.dr_slid) ==
+ OPA_LID_PERMISSIVE &&
+ opa_smi_handle_dr_smp_send(opa_smp, device->node_type,
+ port_num) == IB_SMI_DISCARD) {
+ ret = -EINVAL;
+ dev_err(&device->dev, "OPA Invalid directed route\n");
+ goto out;
+ }
+ opa_drslid = be32_to_cpu(opa_smp->route.dr.dr_slid);
+ if (opa_drslid != OPA_LID_PERMISSIVE &&
+ opa_drslid & 0xffff0000) {
+ ret = -EINVAL;
+ dev_err(&device->dev, "OPA Invalid dr_slid 0x%x\n",
+ opa_drslid);
+ goto out;
+ }
+ drslid = (u16)(opa_drslid & 0x0000ffff);
- /* Check to post send on QP or process locally */
- if (smi_check_local_smp(smp, device) == IB_SMI_DISCARD &&
- smi_check_local_returning_smp(smp, device) == IB_SMI_DISCARD)
- goto out;
+ /* Check to post send on QP or process locally */
+ if (opa_smi_check_local_smp(opa_smp, device) == IB_SMI_DISCARD &&
+ opa_smi_check_local_returning_smp(opa_smp, device) == IB_SMI_DISCARD)
+ goto out;
+ } else {
+ if ((ib_get_smp_direction(smp) ? smp->dr_dlid : smp->dr_slid) ==
+ IB_LID_PERMISSIVE &&
+ smi_handle_dr_smp_send(smp, device->node_type, port_num) ==
+ IB_SMI_DISCARD) {
+ ret = -EINVAL;
+ dev_err(&device->dev, "Invalid directed route\n");
+ goto out;
+ }
+ drslid = be16_to_cpu(smp->dr_slid);
+
+ /* Check to post send on QP or process locally */
+ if (smi_check_local_smp(smp, device) == IB_SMI_DISCARD &&
+ smi_check_local_returning_smp(smp, device) == IB_SMI_DISCARD)
+ goto out;
+ }
local = kmalloc(sizeof *local, GFP_ATOMIC);
if (!local) {
@@ -793,10 +827,16 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
}
build_smp_wc(mad_agent_priv->agent.qp,
- send_wr->wr_id, be16_to_cpu(smp->dr_slid),
+ send_wr->wr_id, drslid,
send_wr->wr.ud.pkey_index,
send_wr->wr.ud.port_num, &mad_wc);
+ if (opa && smp->base_version == OPA_MGMT_BASE_VERSION) {
+ mad_wc.byte_len = mad_send_wr->send_buf.hdr_len
+ + mad_send_wr->send_buf.data_len
+ + sizeof(struct ib_grh);
+ }
+
/* No GRH for DR SMP */
ret = device->process_mad(device, 0, port_num, &mad_wc, NULL,
(struct ib_mad_hdr *)smp, in_mad_size,
@@ -825,7 +865,10 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
port_priv = ib_get_mad_port(mad_agent_priv->agent.device,
mad_agent_priv->agent.port_num);
if (port_priv) {
- memcpy(&mad_priv->mad.mad, smp, sizeof(struct ib_mad));
+ if (opa && smp->base_version == OPA_MGMT_BASE_VERSION)
+ memcpy(&mad_priv->mad.mad, smp, sizeof(struct opa_mad));
+ else
+ memcpy(&mad_priv->mad.mad, smp, sizeof(struct ib_mad));
recv_mad_agent = find_mad_agent(port_priv,
&mad_priv->mad.mad);
}
@@ -848,6 +891,8 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
}
local->mad_send_wr = mad_send_wr;
+ local->mad_send_wr->send_wr.wr.ud.pkey_index = mad_wc.pkey_index;
+ local->return_wc_byte_len = out_mad_size;
/* Reference MAD agent until send side of local completion handled */
atomic_inc(&mad_agent_priv->refcount);
/* Queue local completion to local list */
@@ -1740,14 +1785,18 @@ out:
return mad_agent;
}
-static int validate_mad(const struct ib_mad_hdr *mad_hdr, u32 qp_num)
+static int validate_mad(const struct ib_mad_hdr *mad_hdr,
+ const struct ib_mad_qp_info *qp_info,
+ bool opa)
{
int valid = 0;
+ u32 qp_num = qp_info->qp->qp_num;
/* Make sure MAD base version is understood */
- if (mad_hdr->base_version != IB_MGMT_BASE_VERSION) {
- pr_err("MAD received with unsupported base version %d\n",
- mad_hdr->base_version);
+ if (mad_hdr->base_version != IB_MGMT_BASE_VERSION &&
+ (!opa || mad_hdr->base_version != OPA_MGMT_BASE_VERSION)) {
+ pr_err("MAD received with unsupported base version %d %s\n",
+ mad_hdr->base_version, opa ? "(opa)" : "");
goto out;
}
@@ -1995,7 +2044,9 @@ enum smi_action handle_ib_smi(struct ib_mad_port_private *port_priv,
&response->grh, wc,
port_priv->device,
smi_get_fwd_port(&recv->mad.smp),
- qp_info->qp->qp_num);
+ qp_info->qp->qp_num,
+ sizeof(struct ib_mad),
+ false);
return IB_SMI_DISCARD;
}
@@ -2008,7 +2059,9 @@ static size_t mad_recv_buf_size(struct ib_mad_port_private *port_priv)
}
static bool generate_unmatched_resp(struct ib_mad_private *recv,
- struct ib_mad_private *response)
+ struct ib_mad_private *response,
+ size_t *resp_len,
+ bool opa)
{
if (recv->mad.mad.mad_hdr.method == IB_MGMT_METHOD_GET ||
recv->mad.mad.mad_hdr.method == IB_MGMT_METHOD_SET) {
@@ -2022,11 +2075,90 @@ static bool generate_unmatched_resp(struct ib_mad_private *recv,
if (recv->mad.mad.mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
response->mad.mad.mad_hdr.status |= IB_SMP_DIRECTION;
+ if (opa && recv->mad.mad.mad_hdr.base_version == OPA_MGMT_BASE_VERSION) {
+ if (recv->mad.mad.mad_hdr.mgmt_class ==
+ IB_MGMT_CLASS_SUBN_LID_ROUTED ||
+ recv->mad.mad.mad_hdr.mgmt_class ==
+ IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ *resp_len = opa_get_smp_header_size(
+ (struct opa_smp *)&recv->mad.smp);
+ else
+ *resp_len = sizeof(struct ib_mad_hdr);
+ }
+
return true;
} else {
return false;
}
}
+
+static enum smi_action
+handle_opa_smi(struct ib_mad_port_private *port_priv,
+ struct ib_mad_qp_info *qp_info,
+ struct ib_wc *wc,
+ int port_num,
+ struct ib_mad_private *recv,
+ struct ib_mad_private *response)
+{
+ enum smi_forward_action retsmi;
+
+ if (opa_smi_handle_dr_smp_recv(&recv->mad.opa_smp,
+ port_priv->device->node_type,
+ port_num,
+ port_priv->device->phys_port_cnt) ==
+ IB_SMI_DISCARD)
+ return IB_SMI_DISCARD;
+
+ retsmi = opa_smi_check_forward_dr_smp(&recv->mad.opa_smp);
+ if (retsmi == IB_SMI_LOCAL)
+ return IB_SMI_HANDLE;
+
+ if (retsmi == IB_SMI_SEND) { /* don't forward */
+ if (opa_smi_handle_dr_smp_send(&recv->mad.opa_smp,
+ port_priv->device->node_type,
+ port_num) == IB_SMI_DISCARD)
+ return IB_SMI_DISCARD;
+
+ if (opa_smi_check_local_smp(&recv->mad.opa_smp, port_priv->device) == IB_SMI_DISCARD)
+ return IB_SMI_DISCARD;
+
+ } else if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) {
+ /* forward case for switches */
+ memcpy(response, recv, sizeof(*response));
+ response->header.recv_wc.wc = &response->header.wc;
+ response->header.recv_wc.recv_buf.opa_mad = &response->mad.opa_mad;
+ response->header.recv_wc.recv_buf.grh = &response->grh;
+
+ agent_send_response((struct ib_mad *)&response->mad.mad,
+ &response->grh, wc,
+ port_priv->device,
+ opa_smi_get_fwd_port(&recv->mad.opa_smp),
+ qp_info->qp->qp_num,
+ recv->header.wc.byte_len,
+ 1);
+
+ return IB_SMI_DISCARD;
+ }
+
+ return IB_SMI_HANDLE;
+}
+
+static enum smi_action
+handle_smi(struct ib_mad_port_private *port_priv,
+ struct ib_mad_qp_info *qp_info,
+ struct ib_wc *wc,
+ int port_num,
+ struct ib_mad_private *recv,
+ struct ib_mad_private *response,
+ bool opa)
+{
+ if (opa && recv->mad.mad.mad_hdr.base_version == OPA_MGMT_BASE_VERSION &&
+ recv->mad.mad.mad_hdr.class_version == OPA_SMI_CLASS_VERSION)
+ return handle_opa_smi(port_priv, qp_info, wc, port_num, recv, response);
+
+ return handle_ib_smi(port_priv, qp_info, wc, port_num, recv, response);
+}
+
static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
struct ib_wc *wc)
{
@@ -2038,11 +2170,15 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
int port_num;
int ret = IB_MAD_RESULT_SUCCESS;
size_t resp_mad_size;
+ bool opa;
mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id;
qp_info = mad_list->mad_queue->qp_info;
dequeue_mad(mad_list);
+ opa = rdma_cap_opa_mad(qp_info->port_priv->device,
+ qp_info->port_priv->port_num);
+
mad_priv_hdr = container_of(mad_list, struct ib_mad_private_header,
mad_list);
recv = container_of(mad_priv_hdr, struct ib_mad_private, header);
@@ -2054,7 +2190,13 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
/* Setup MAD receive work completion from "normal" work completion */
recv->header.wc = *wc;
recv->header.recv_wc.wc = &recv->header.wc;
- recv->header.recv_wc.mad_len = sizeof(struct ib_mad);
+ if (opa && recv->mad.mad.mad_hdr.base_version == OPA_MGMT_BASE_VERSION) {
+ recv->header.recv_wc.mad_len = wc->byte_len - sizeof(struct ib_grh);
+ recv->header.recv_wc.mad_seg_size = sizeof(struct opa_mad);
+ } else {
+ recv->header.recv_wc.mad_len = sizeof(struct ib_mad);
+ recv->header.recv_wc.mad_seg_size = sizeof(struct ib_mad);
+ }
recv->header.recv_wc.recv_buf.mad = &recv->mad.mad;
recv->header.recv_wc.recv_buf.grh = &recv->grh;
@@ -2062,7 +2204,7 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
snoop_recv(qp_info, &recv->header.recv_wc, IB_MAD_SNOOP_RECVS);
/* Validate MAD */
- if (!validate_mad(&recv->mad.mad.mad_hdr, qp_info->qp->qp_num))
+ if (!validate_mad(&recv->mad.mad.mad_hdr, qp_info, opa))
goto out;
resp_mad_size = max_mad_size(port_priv);
@@ -2080,8 +2222,7 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
if (recv->mad.mad.mad_hdr.mgmt_class ==
IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
- if (handle_ib_smi(port_priv, qp_info, wc, port_num, recv,
- response)
+ if (handle_smi(port_priv, qp_info, wc, port_num, recv, response, opa)
== IB_SMI_DISCARD)
goto out;
}
@@ -2103,7 +2244,9 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
&recv->grh, wc,
port_priv->device,
port_num,
- qp_info->qp->qp_num);
+ qp_info->qp->qp_num,
+ resp_mad_size,
+ opa);
goto out;
}
}
@@ -2118,9 +2261,12 @@ static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
*/
recv = NULL;
} else if ((ret & IB_MAD_RESULT_SUCCESS) &&
- generate_unmatched_resp(recv, response)) {
+ generate_unmatched_resp(recv, response, &resp_mad_size, opa)) {
agent_send_response(&response->mad.mad, &recv->grh, wc,
- port_priv->device, port_num, qp_info->qp->qp_num);
+ port_priv->device, port_num,
+ qp_info->qp->qp_num,
+ resp_mad_size,
+ opa);
}
out:
@@ -2522,10 +2668,14 @@ static void local_completions(struct work_struct *work)
int free_mad;
struct ib_wc wc;
struct ib_mad_send_wc mad_send_wc;
+ bool opa;
mad_agent_priv =
container_of(work, struct ib_mad_agent_private, local_work);
+ opa = rdma_cap_opa_mad(mad_agent_priv->qp_info->port_priv->device,
+ mad_agent_priv->qp_info->port_priv->port_num);
+
spin_lock_irqsave(&mad_agent_priv->lock, flags);
while (!list_empty(&mad_agent_priv->local_list)) {
local = list_entry(mad_agent_priv->local_list.next,
@@ -2535,6 +2685,7 @@ static void local_completions(struct work_struct *work)
spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
free_mad = 0;
if (local->mad_priv) {
+ u8 base_version;
recv_mad_agent = local->recv_mad_agent;
if (!recv_mad_agent) {
dev_err(&mad_agent_priv->agent.device->dev,
@@ -2550,11 +2701,20 @@ static void local_completions(struct work_struct *work)
build_smp_wc(recv_mad_agent->agent.qp,
(unsigned long) local->mad_send_wr,
be16_to_cpu(IB_LID_PERMISSIVE),
- 0, recv_mad_agent->agent.port_num, &wc);
+ local->mad_send_wr->send_wr.wr.ud.pkey_index,
+ recv_mad_agent->agent.port_num, &wc);
local->mad_priv->header.recv_wc.wc = &wc;
- local->mad_priv->header.recv_wc.mad_len =
- sizeof(struct ib_mad);
+
+ base_version = local->mad_priv->mad.mad.mad_hdr.base_version;
+ if (opa && base_version == OPA_MGMT_BASE_VERSION) {
+ local->mad_priv->header.recv_wc.mad_len = local->return_wc_byte_len;
+ local->mad_priv->header.recv_wc.mad_seg_size = sizeof(struct opa_mad);
+ } else {
+ local->mad_priv->header.recv_wc.mad_len = sizeof(struct ib_mad);
+ local->mad_priv->header.recv_wc.mad_seg_size = sizeof(struct ib_mad);
+ }
+
INIT_LIST_HEAD(&local->mad_priv->header.recv_wc.rmpp_list);
list_add(&local->mad_priv->header.recv_wc.recv_buf.list,
&local->mad_priv->header.recv_wc.rmpp_list);
@@ -2703,7 +2863,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
struct ib_mad_queue *recv_queue = &qp_info->recv_queue;
/* Initialize common scatter list fields */
- sg_list.length = sizeof *mad_priv - sizeof mad_priv->header;
+ sg_list.length = mad_recv_buf_size(qp_info->port_priv);
sg_list.lkey = (*qp_info->port_priv->mr).lkey;
/* Initialize common receive WR fields */
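The base-version acceptance rule that validate_mad() now enforces can be
distilled into a small predicate (a sketch, not code from the patch):

    /* Accept the IB base version always; the OPA base version only on OPA ports. */
    static bool base_version_ok(u8 base_version, bool opa)
    {
    	return base_version == IB_MGMT_BASE_VERSION ||
    	       (opa && base_version == OPA_MGMT_BASE_VERSION);
    }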
diff --git a/drivers/infiniband/core/mad_priv.h b/drivers/infiniband/core/mad_priv.h
index 6e8b02fdcc5f..e8db522cc447 100644
--- a/drivers/infiniband/core/mad_priv.h
+++ b/drivers/infiniband/core/mad_priv.h
@@ -154,6 +154,7 @@ struct ib_mad_local_private {
struct ib_mad_private *mad_priv;
struct ib_mad_agent_private *recv_mad_agent;
struct ib_mad_send_wr_private *mad_send_wr;
+ size_t return_wc_byte_len;
};
struct ib_mad_mgmt_method_table {
diff --git a/drivers/infiniband/core/mad_rmpp.c b/drivers/infiniband/core/mad_rmpp.c
index 9c284d9b4fa9..4930bb90319f 100644
--- a/drivers/infiniband/core/mad_rmpp.c
+++ b/drivers/infiniband/core/mad_rmpp.c
@@ -1,6 +1,7 @@
/*
* Copyright (c) 2005 Intel Inc. All rights reserved.
* Copyright (c) 2005-2006 Voltaire, Inc. All rights reserved.
+ * Copyright (c) 2014 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -67,6 +68,7 @@ struct mad_rmpp_recv {
u8 mgmt_class;
u8 class_version;
u8 method;
+ u8 base_version;
};
static inline void deref_rmpp_recv(struct mad_rmpp_recv *rmpp_recv)
@@ -318,6 +320,7 @@ create_rmpp_recv(struct ib_mad_agent_private *agent,
rmpp_recv->mgmt_class = mad_hdr->mgmt_class;
rmpp_recv->class_version = mad_hdr->class_version;
rmpp_recv->method = mad_hdr->method;
+ rmpp_recv->base_version = mad_hdr->base_version;
return rmpp_recv;
error: kfree(rmpp_recv);
@@ -433,14 +436,23 @@ static inline int get_mad_len(struct mad_rmpp_recv *rmpp_recv)
{
struct ib_rmpp_base *rmpp_base;
int hdr_size, data_size, pad;
+ int opa = rdma_cap_opa_mad(rmpp_recv->agent->qp_info->port_priv->device,
+ rmpp_recv->agent->qp_info->port_priv->port_num);
rmpp_base = (struct ib_rmpp_base *)rmpp_recv->cur_seg_buf->mad;
hdr_size = ib_get_mad_data_offset(rmpp_base->mad_hdr.mgmt_class);
- data_size = sizeof(struct ib_rmpp_mad) - hdr_size;
- pad = IB_MGMT_RMPP_DATA - be32_to_cpu(rmpp_base->rmpp_hdr.paylen_newwin);
- if (pad > IB_MGMT_RMPP_DATA || pad < 0)
- pad = 0;
+ if (opa && rmpp_recv->base_version == OPA_MGMT_BASE_VERSION) {
+ data_size = sizeof(struct opa_rmpp_mad) - hdr_size;
+ pad = OPA_MGMT_RMPP_DATA - be32_to_cpu(rmpp_base->rmpp_hdr.paylen_newwin);
+ if (pad > OPA_MGMT_RMPP_DATA || pad < 0)
+ pad = 0;
+ } else {
+ data_size = sizeof(struct ib_rmpp_mad) - hdr_size;
+ pad = IB_MGMT_RMPP_DATA - be32_to_cpu(rmpp_base->rmpp_hdr.paylen_newwin);
+ if (pad > IB_MGMT_RMPP_DATA || pad < 0)
+ pad = 0;
+ }
return hdr_size + rmpp_recv->seg_num * data_size - pad;
}
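For reference, the constants behind the two branches of get_mad_len()
above (values per the ib_mad.h definitions; treat the arithmetic as
illustrative):

    /*
     * IB:  sizeof(struct ib_rmpp_mad)  = 256,  IB_MGMT_RMPP_DATA  = 220
     * OPA: sizeof(struct opa_rmpp_mad) = 2048, OPA_MGMT_RMPP_DATA = 2012
     *
     * For a given management class (fixed hdr_size) each OPA RMPP
     * segment therefore carries 1792 more bytes than an IB segment,
     * and in both cases:
     *
     *     mad_len = hdr_size + seg_num * data_size - pad;
     */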
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index 6be72a563c61..8be631d94615 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -262,20 +262,23 @@ static ssize_t copy_recv_mad(struct ib_umad_file *file, char __user *buf,
{
struct ib_mad_recv_buf *recv_buf;
int left, seg_payload, offset, max_seg_payload;
+ size_t seg_size;
- /* We need enough room to copy the first (or only) MAD segment. */
recv_buf = &packet->recv_wc->recv_buf;
- if ((packet->length <= sizeof (*recv_buf->mad) &&
+ seg_size = packet->recv_wc->mad_seg_size;
+
+ /* We need enough room to copy the first (or only) MAD segment. */
+ if ((packet->length <= seg_size &&
count < hdr_size(file) + packet->length) ||
- (packet->length > sizeof (*recv_buf->mad) &&
- count < hdr_size(file) + sizeof (*recv_buf->mad)))
+ (packet->length > seg_size &&
+ count < hdr_size(file) + seg_size))
return -EINVAL;
if (copy_to_user(buf, &packet->mad, hdr_size(file)))
return -EFAULT;
buf += hdr_size(file);
- seg_payload = min_t(int, packet->length, sizeof (*recv_buf->mad));
+ seg_payload = min_t(int, packet->length, seg_size);
if (copy_to_user(buf, recv_buf->mad, seg_payload))
return -EFAULT;
@@ -292,7 +295,7 @@ static ssize_t copy_recv_mad(struct ib_umad_file *file, char __user *buf,
return -ENOSPC;
}
offset = ib_get_mad_data_offset(recv_buf->mad->mad_hdr.mgmt_class);
- max_seg_payload = sizeof (struct ib_mad) - offset;
+ max_seg_payload = seg_size - offset;
for (left = packet->length - seg_payload, buf += seg_payload;
left; left -= seg_payload, buf += seg_payload) {
@@ -450,6 +453,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct ib_rmpp_base *rmpp_base;
__be64 *tid;
int ret, data_len, hdr_len, copy_offset, rmpp_active;
+ u8 base_version;
if (count < hdr_size(file) + IB_MGMT_RMPP_HDR)
return -EINVAL;
@@ -516,12 +520,13 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
rmpp_active = 0;
}
+ base_version = ((struct ib_mad_hdr *)&packet->mad.data)->base_version;
data_len = count - hdr_size(file) - hdr_len;
packet->msg = ib_create_send_mad(agent,
be32_to_cpu(packet->mad.hdr.qpn),
packet->mad.hdr.pkey_index, rmpp_active,
hdr_len, data_len, GFP_KERNEL,
- IB_MGMT_BASE_VERSION);
+ base_version);
if (IS_ERR(packet->msg)) {
ret = PTR_ERR(packet->msg);
goto err_ah;
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index a8a6e9d2485e..e5b664098047 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -434,6 +434,7 @@ struct ib_mad_recv_buf {
* @recv_buf: Specifies the location of the received data buffer(s).
* @rmpp_list: Specifies a list of RMPP reassembled received MAD buffers.
* @mad_len: The length of the received MAD, without duplicated headers.
+ * @mad_seg_size: The size of individual MAD segments
*
* For received response, the wr_id contains a pointer to the ib_mad_send_buf
* for the corresponding send request.
@@ -443,6 +444,7 @@ struct ib_mad_recv_wc {
struct ib_mad_recv_buf recv_buf;
struct list_head rmpp_list;
int mad_len;
+ size_t mad_seg_size;
};
/**
--
1.8.2
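As a usage note on the new mad_seg_size field, a minimal userspace-style
sketch (hypothetical names and simplified types, mirroring the
copy_recv_mad() logic above) of how a consumer sizes its per-segment
copies:

    #include <stddef.h>

    /* Hypothetical, simplified view of the receive work completion. */
    struct recv_wc_demo {
    	size_t mad_len;      /* total reassembled MAD length           */
    	size_t mad_seg_size; /* per-segment size: 256 (IB), 2048 (OPA) */
    };

    /* Payload bytes available per segment after the class header,
     * as in copy_recv_mad(): max_seg_payload = seg_size - offset. */
    static size_t seg_payload(const struct recv_wc_demo *wc, size_t offset)
    {
    	return wc->mad_seg_size - offset;
    }

    /* Number of segment buffers a consumer walks for one MAD. */
    static size_t seg_count(const struct recv_wc_demo *wc, size_t offset)
    {
    	size_t per_seg = seg_payload(wc, offset);
    	return ((wc->mad_len - offset) + per_seg - 1) / per_seg;
    }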
* Re: [PATCH 00/14] IB/mad: Add support for OPA MAD processing.
[not found] ` <1432109615-19564-1-git-send-email-ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
` (13 preceding siblings ...)
2015-05-20 8:13 ` [PATCH 14/14] IB/mad: Add final OPA MAD processing ira.weiny-ral2JQCrhuEAvxtiuMwx3w
@ 2015-05-28 7:03 ` Or Gerlitz
2015-05-28 7:07 ` Or Gerlitz
15 siblings, 0 replies; 50+ messages in thread
From: Or Gerlitz @ 2015-05-28 7:03 UTC (permalink / raw)
To: ira.weiny-ral2JQCrhuEAvxtiuMwx3w, dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb
On 5/20/2015 11:13 AM, ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org wrote:
> Ira Weiny (14):
> IB/mad: Clean up ib_find_send_mad
> IB/mad: Create an RMPP Base header
> IB/mad: Create handle_ib_smi
> IB/mad: Add helper function for smi_handle_dr_smp_send
> IB/mad: Add helper function for smi_handle_dr_smp_recv
> IB/mad: Add helper function for smi_check_forward_dr_smp
> IB/mad: Add base version to ib_create_send_mad
> IB/core: Add rdma_max_mad_size helper
> IB/mad: Convert allocations from kmem_cache to kmalloc
> IB/mad: Add MAD size parameters to process_mad
> IB/core: Add rdma_cap_opa_mad helper
> IB/mad: Add partial Intel OPA MAD support
> IB/mad: Add partial Intel OPA MAD support
> IB/mad: Add final OPA MAD processing
Ira,
so again... patch titles can be much better:
"Add helper function name_of_function_x" isn't the way to go.
* Re: [PATCH 00/14] IB/mad: Add support for OPA MAD processing.
[not found] ` <1432109615-19564-1-git-send-email-ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
` (14 preceding siblings ...)
2015-05-28 7:03 ` [PATCH 00/14] IB/mad: Add support for " Or Gerlitz
@ 2015-05-28 7:07 ` Or Gerlitz
15 siblings, 0 replies; 50+ messages in thread
From: Or Gerlitz @ 2015-05-28 7:07 UTC (permalink / raw)
To: Weiny, Ira
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA,
sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb
On 5/20/2015 11:13 AM, ira.weiny-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org wrote:
> Ira Weiny (14):
> IB/mad: Clean up ib_find_send_mad
> IB/mad: Create an RMPP Base header
> IB/mad: Create handle_ib_smi
> IB/mad: Add helper function for smi_handle_dr_smp_send
> IB/mad: Add helper function for smi_handle_dr_smp_recv
> IB/mad: Add helper function for smi_check_forward_dr_smp
> IB/mad: Add base version to ib_create_send_mad
Ira,
Again... you should do better w.r.t. patch titles:
"Add helper function name_of_function_x" isn't the way to go; try
"Add helper function for functionality X".

no for "Add base version to function_x"
yes for "Add base version when doing X"

no for "Create function_x"
yes for "Add functionality X"

etc.