linux-rdma.vger.kernel.org archive mirror
From: Leon Romanovsky <leon@kernel.org>
To: Doug Ledford <dledford@redhat.com>, Jason Gunthorpe <jgg@mellanox.com>
Cc: Leon Romanovsky <leonro@mellanox.com>,
	RDMA mailing list <linux-rdma@vger.kernel.org>,
	Mike Marciniszyn <mike.marciniszyn@intel.com>,
	Ralph Campbell <ralph.campbell@qlogic.com>
Subject: [PATCH rdma-next 06/16] RDMA/qib: Delete redundant memset for MAD output buffer
Date: Tue, 29 Oct 2019 08:27:35 +0200	[thread overview]
Message-ID: <20191029062745.7932-7-leon@kernel.org> (raw)
In-Reply-To: <20191029062745.7932-1-leon@kernel.org>

From: Leon Romanovsky <leonro@mellanox.com>

There is no need to clear the MAD output buffer, because all
callers already invoke process_mad() with a zeroed output buffer.
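
As an illustrative sketch only (not the actual MAD core code; the
allocation site and the ibdev/mad_flags/port/in_mad names are assumed
to be in scope), the caller-side pattern that makes the handler-level
memset() redundant looks roughly like this:

	struct ib_mad *out_mad;
	int ret;

	/* The reply MAD is zeroed at allocation time... */
	out_mad = kzalloc(sizeof(*out_mad), GFP_KERNEL);
	if (!out_mad)
		return -ENOMEM;

	/*
	 * ...so handlers such as cc_get_classportinfo() never see stale
	 * data in ccp->mgmt_data and need no memset() of their own.
	 */
	ret = process_cc(ibdev, mad_flags, port, in_mad, out_mad);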

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/qib/qib_mad.c | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c
index f92faf5ec369..5a1e6371ea57 100644
--- a/drivers/infiniband/hw/qib/qib_mad.c
+++ b/drivers/infiniband/hw/qib/qib_mad.c
@@ -2098,8 +2098,6 @@ static int cc_get_classportinfo(struct ib_cc_mad *ccp,
 	struct ib_cc_classportinfo_attr *p =
 		(struct ib_cc_classportinfo_attr *)ccp->mgmt_data;
 
-	memset(ccp->mgmt_data, 0, sizeof(ccp->mgmt_data));
-
 	p->base_version = 1;
 	p->class_version = 1;
 	p->cap_mask = 0;
@@ -2120,8 +2118,6 @@ static int cc_get_congestion_info(struct ib_cc_mad *ccp,
 	struct qib_ibport *ibp = to_iport(ibdev, port);
 	struct qib_pportdata *ppd = ppd_from_ibp(ibp);
 
-	memset(ccp->mgmt_data, 0, sizeof(ccp->mgmt_data));
-
 	p->congestion_info = 0;
 	p->control_table_cap = ppd->cc_max_table_entries;
 
@@ -2138,8 +2134,6 @@ static int cc_get_congestion_setting(struct ib_cc_mad *ccp,
 	struct qib_pportdata *ppd = ppd_from_ibp(ibp);
 	struct ib_cc_congestion_entry_shadow *entries;
 
-	memset(ccp->mgmt_data, 0, sizeof(ccp->mgmt_data));
-
 	spin_lock(&ppd->cc_shadow_lock);
 
 	entries = ppd->congestion_entries_shadow->entries;
@@ -2176,8 +2170,6 @@ static int cc_get_congestion_control_table(struct ib_cc_mad *ccp,
 	if (cct_block_index > IB_CC_TABLE_CAP_DEFAULT - 1)
 		goto bail;
 
-	memset(ccp->mgmt_data, 0, sizeof(ccp->mgmt_data));
-
 	spin_lock(&ppd->cc_shadow_lock);
 
 	max_cct_block =
@@ -2296,12 +2288,6 @@ static int cc_set_congestion_control_table(struct ib_cc_mad *ccp,
 	return reply_failure((struct ib_smp *) ccp);
 }
 
-static int check_cc_key(struct qib_ibport *ibp,
-			struct ib_cc_mad *ccp, int mad_flags)
-{
-	return 0;
-}
-
 static int process_cc(struct ib_device *ibdev, int mad_flags,
 			u8 port, const struct ib_mad *in_mad,
 			struct ib_mad *out_mad)
@@ -2318,10 +2304,6 @@ static int process_cc(struct ib_device *ibdev, int mad_flags,
 		goto bail;
 	}
 
-	ret = check_cc_key(ibp, ccp, mad_flags);
-	if (ret)
-		goto bail;
-
 	switch (ccp->method) {
 	case IB_MGMT_METHOD_GET:
 		switch (ccp->attr_id) {
-- 
2.20.1



Thread overview: 24+ messages
2019-10-29  6:27 [PATCH rdma-next 00/16] MAD cleanup Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 01/16] RDMA/mad: Delete never implemented functions Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 02/16] RDMA/mad: Allocate zeroed MAD buffer Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 03/16] RDMA/mlx4: Delete redundant zero memset Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 04/16] RDMA/mlx5: " Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 05/16] RDMA/ocrdma: Clean MAD processing logic Leon Romanovsky
2019-10-29  6:27 ` Leon Romanovsky [this message]
2019-10-29  6:27 ` [PATCH rdma-next 07/16] RDMA/hfi1: Delete unreachable code Leon Romanovsky
2019-10-29 23:33   ` Ira Weiny
2019-10-30  0:02     ` Ira Weiny
2019-10-29  6:27 ` [PATCH rdma-next 08/16] RDMA/mlx4: " Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 09/16] RDMA/mlx5: " Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 10/16] RDMA/mthca: " Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 11/16] RDMA/ocrdma: Simplify process_mad function Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 12/16] RDMA/qib: Delete unreachable code Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 13/16] RDMA/mlx5: Rewrite MAD processing logic to be readable Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 14/16] RDMA/qib: Delete extra line Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 15/16] RDMA/qib: Delete unused variable in process_cc call Leon Romanovsky
2019-10-29  6:27 ` [PATCH rdma-next 16/16] RDMA: Change MAD processing function to remove extra casting and parameter Leon Romanovsky
2019-11-11 13:33   ` Marciniszyn, Mike
2019-11-11 15:06     ` Leon Romanovsky
2019-11-06 20:14 ` [PATCH rdma-next 00/16] MAD cleanup Jason Gunthorpe
2019-11-11 15:06   ` Leon Romanovsky
2019-11-13  0:37 ` Jason Gunthorpe

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20191029062745.7932-7-leon@kernel.org \
    --to=leon@kernel.org \
    --cc=dledford@redhat.com \
    --cc=jgg@mellanox.com \
    --cc=leonro@mellanox.com \
    --cc=linux-rdma@vger.kernel.org \
    --cc=mike.marciniszyn@intel.com \
    --cc=ralph.campbell@qlogic.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.