From: Weihang Li <liweihang@huawei.com>
To: <dledford@redhat.com>, <jgg@ziepe.ca>
Cc: <leon@kernel.org>, <linux-rdma@vger.kernel.org>, <linuxarm@huawei.com>
Subject: [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag
Date: Wed, 20 May 2020 21:53:12 +0800 [thread overview]
Message-ID: <1589982799-28728-3-git-send-email-liweihang@huawei.com> (raw)
In-Reply-To: <1589982799-28728-1-git-send-email-liweihang@huawei.com>
From: Lang Cheng <chenglang@huawei.com>
It's easier to understand and maintain the enable flags of a CQ using a
single u32 field than defining a separate field for every flag in struct
hns_roce_cq, and new feature flags can be added more conveniently in the
future.
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
drivers/infiniband/hw/hns/hns_roce_cq.c | 10 +++++-----
drivers/infiniband/hw/hns/hns_roce_device.h | 6 +++---
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 +++---
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index d2d7074..925fb77 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -186,8 +186,8 @@ static int alloc_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
&hr_cq->db);
if (err)
return err;
- hr_cq->db_en = 1;
- resp->cap_flags |= HNS_ROCE_SUPPORT_CQ_RECORD_DB;
+ hr_cq->flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
+ resp->cap_flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
}
} else {
if (has_db) {
@@ -196,7 +196,7 @@ static int alloc_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
return err;
hr_cq->set_ci_db = hr_cq->db.db_record;
*hr_cq->set_ci_db = 0;
- hr_cq->db_en = 1;
+ hr_cq->flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
}
hr_cq->cq_db_l = hr_dev->reg_base + hr_dev->odb_offset +
DB_REG_OFFSET * hr_dev->priv_uar.index;
@@ -210,10 +210,10 @@ static void free_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
{
struct hns_roce_ucontext *uctx;
- if (!hr_cq->db_en)
+ if (!(hr_cq->flags & HNS_ROCE_CQ_FLAG_RECORD_DB))
return;
- hr_cq->db_en = 0;
+ hr_cq->flags &= ~HNS_ROCE_CQ_FLAG_RECORD_DB;
if (udata) {
uctx = rdma_udata_to_drv_context(udata,
struct hns_roce_ucontext,
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 4fcd608e..06bafa1 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -135,8 +135,8 @@ enum {
HNS_ROCE_QP_CAP_SQ_RECORD_DB = BIT(1),
};
-enum {
- HNS_ROCE_SUPPORT_CQ_RECORD_DB = 1 << 0,
+enum hns_roce_cq_flags {
+ HNS_ROCE_CQ_FLAG_RECORD_DB = BIT(0),
};
enum hns_roce_qp_state {
@@ -454,7 +454,7 @@ struct hns_roce_cq {
struct ib_cq ib_cq;
struct hns_roce_mtr mtr;
struct hns_roce_db db;
- u8 db_en;
+ u32 flags;
spinlock_t lock;
u32 cq_depth;
u32 cons_index;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 7d0556ef..36a9871 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -2905,9 +2905,9 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
roce_set_field(cq_context->byte_40_cqe_ba, V2_CQC_BYTE_40_CQE_BA_M,
V2_CQC_BYTE_40_CQE_BA_S, (dma_handle >> (32 + 3)));
- if (hr_cq->db_en)
- roce_set_bit(cq_context->byte_44_db_record,
- V2_CQC_BYTE_44_DB_RECORD_EN_S, 1);
+ roce_set_bit(cq_context->byte_44_db_record,
+ V2_CQC_BYTE_44_DB_RECORD_EN_S,
+ (hr_cq->flags & HNS_ROCE_CQ_FLAG_RECORD_DB) ? 1 : 0);
roce_set_field(cq_context->byte_44_db_record,
V2_CQC_BYTE_44_DB_RECORD_ADDR_M,
--
2.8.1
Thread overview: 17+ messages
2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
2020-05-20 13:53 ` [PATCH for-next 1/9] RDMA/hns: Let software PI/CI grow naturally Weihang Li
2020-05-20 13:53 ` Weihang Li [this message]
2020-05-25 17:06 ` [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag Jason Gunthorpe
2020-05-26 2:57 ` liweihang
2020-05-26 12:08 ` Jason Gunthorpe
2020-05-28 1:15 ` liweihang
2020-05-20 13:53 ` [PATCH for-next 3/9] RDMA/hns: Optimize post and poll process Weihang Li
2020-05-20 13:53 ` [PATCH for-next 4/9] RDMA/hns: Remove unused code about assert Weihang Li
2020-05-20 13:53 ` [PATCH for-next 5/9] RDMA/hns: Rename QP buffer related function Weihang Li
2020-05-20 13:53 ` [PATCH for-next 6/9] RDMA/hns: Change all page_shift to unsigned Weihang Li
2020-05-20 13:53 ` [PATCH for-next 7/9] RDMA/hns: Change variables representing quantity " Weihang Li
2020-05-20 13:53 ` [PATCH for-next 8/9] RDMA/hns: Refactor the QP context filling process related to WQE buffer configure Weihang Li
2020-05-20 13:53 ` [PATCH for-next 9/9] RDMA/hns: Optimize the usage of MTR Weihang Li
2020-05-25 17:11 ` [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Jason Gunthorpe
2020-05-25 17:36 ` Leon Romanovsky
2020-05-26 3:13 ` liweihang