* [PATCH for-next 0/2] EFA messages & RDMA read statistics
@ 2020-09-15 14:14 Gal Pressman
2020-09-15 14:14 ` [PATCH for-next 1/2] RDMA/efa: Group keep alive received counter with other SW stats Gal Pressman
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Gal Pressman @ 2020-09-15 14:14 UTC (permalink / raw)
To: Jason Gunthorpe, Doug Ledford
Cc: linux-rdma, Alexander Matushevsky, Gal Pressman
Hi all,
This series contains a small cleanup to the way we store our
statistics, and exposes a new set of counters.
The newly exposed counters report send/receive work request statistics
as well as RDMA read work request statistics.
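Once applied, each counter shows up as a one-value-per-file entry in the
device's hw_counters directory in sysfs. A minimal sketch of reading them,
assuming a path like /sys/class/infiniband/<dev>/ports/1/hw_counters (the
demo below uses a mock directory with made-up values, since the real path
and device name depend on the system):

```python
import os
import tempfile

def read_hw_counters(path):
    """Read every one-value-per-file counter in an rdma hw_counters dir."""
    stats = {}
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            stats[name] = int(f.read().strip())
    return stats

# Demo against a mock directory; on a real system pass something like
# /sys/class/infiniband/<dev>/ports/1/hw_counters instead.
mock = tempfile.mkdtemp()
for name, val in [("send_wrs", 12), ("recv_wrs", 34), ("rdma_read_wrs", 5)]:
    with open(os.path.join(mock, name), "w") as f:
        f.write(f"{val}\n")

print(read_hw_counters(mock))
```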
Regards,
Gal
Daniel Kranzdorf (1):
RDMA/efa: Add messages and RDMA read work requests HW stats
Gal Pressman (1):
RDMA/efa: Group keep alive received counter with other SW stats
drivers/infiniband/hw/efa/efa.h | 8 +--
.../infiniband/hw/efa/efa_admin_cmds_defs.h | 30 ++++++++-
drivers/infiniband/hw/efa/efa_com_cmd.c | 26 ++++++--
drivers/infiniband/hw/efa/efa_com_cmd.h | 16 +++++
drivers/infiniband/hw/efa/efa_verbs.c | 65 ++++++++++++++-----
5 files changed, 117 insertions(+), 28 deletions(-)
base-commit: 9e054b13b2f747868c28539b3eb28256e237755f
--
2.28.0
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH for-next 1/2] RDMA/efa: Group keep alive received counter with other SW stats
2020-09-15 14:14 [PATCH for-next 0/2] EFA messages & RDMA read statistics Gal Pressman
@ 2020-09-15 14:14 ` Gal Pressman
2020-09-15 14:14 ` [PATCH for-next 2/2] RDMA/efa: Add messages and RDMA read work requests HW stats Gal Pressman
2020-09-22 23:21 ` [PATCH for-next 0/2] EFA messages & RDMA read statistics Jason Gunthorpe
2 siblings, 0 replies; 4+ messages in thread
From: Gal Pressman @ 2020-09-15 14:14 UTC (permalink / raw)
To: Jason Gunthorpe, Doug Ledford
Cc: linux-rdma, Alexander Matushevsky, Gal Pressman,
Daniel Kranzdorf, Yossi Leybovich
The keep alive received counter is a software stat; keep it grouped with
all the other software stats.
Since all stored stats are software stats, remove the efa_sw_stats
struct and use efa_stats instead.
Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
---
drivers/infiniband/hw/efa/efa.h | 8 ++-----
drivers/infiniband/hw/efa/efa_verbs.c | 31 ++++++++++++++-------------
2 files changed, 18 insertions(+), 21 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
index 64ae8ba6a7f6..e5d9712e98c4 100644
--- a/drivers/infiniband/hw/efa/efa.h
+++ b/drivers/infiniband/hw/efa/efa.h
@@ -33,7 +33,8 @@ struct efa_irq {
char name[EFA_IRQNAME_SIZE];
};
-struct efa_sw_stats {
+/* Don't use anything other than atomic64 */
+struct efa_stats {
atomic64_t alloc_pd_err;
atomic64_t create_qp_err;
atomic64_t create_cq_err;
@@ -41,11 +42,6 @@ struct efa_sw_stats {
atomic64_t alloc_ucontext_err;
atomic64_t create_ah_err;
atomic64_t mmap_err;
-};
-
-/* Don't use anything other than atomic64 */
-struct efa_stats {
- struct efa_sw_stats sw_stats;
atomic64_t keep_alive_rcvd;
};
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 52b7ea9fd4ee..c0c4eeed14cd 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -380,7 +380,7 @@ int efa_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
err_dealloc_pd:
efa_pd_dealloc(dev, result.pdn);
err_out:
- atomic64_inc(&dev->stats.sw_stats.alloc_pd_err);
+ atomic64_inc(&dev->stats.alloc_pd_err);
return err;
}
@@ -742,7 +742,7 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
err_free_qp:
kfree(qp);
err_out:
- atomic64_inc(&dev->stats.sw_stats.create_qp_err);
+ atomic64_inc(&dev->stats.create_qp_err);
return ERR_PTR(err);
}
@@ -1128,7 +1128,7 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
DMA_FROM_DEVICE);
err_out:
- atomic64_inc(&dev->stats.sw_stats.create_cq_err);
+ atomic64_inc(&dev->stats.create_cq_err);
return err;
}
@@ -1581,7 +1581,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
err_free:
kfree(mr);
err_out:
- atomic64_inc(&dev->stats.sw_stats.reg_mr_err);
+ atomic64_inc(&dev->stats.reg_mr_err);
return ERR_PTR(err);
}
@@ -1709,7 +1709,7 @@ int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata)
err_dealloc_uar:
efa_dealloc_uar(dev, result.uarn);
err_out:
- atomic64_inc(&dev->stats.sw_stats.alloc_ucontext_err);
+ atomic64_inc(&dev->stats.alloc_ucontext_err);
return err;
}
@@ -1742,7 +1742,7 @@ static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
ibdev_dbg(&dev->ibdev,
"pgoff[%#lx] does not have valid entry\n",
vma->vm_pgoff);
- atomic64_inc(&dev->stats.sw_stats.mmap_err);
+ atomic64_inc(&dev->stats.mmap_err);
return -EINVAL;
}
entry = to_emmap(rdma_entry);
@@ -1784,7 +1784,7 @@ static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
"Couldn't mmap address[%#llx] length[%#zx] mmap_flag[%d] err[%d]\n",
entry->address, rdma_entry->npages * PAGE_SIZE,
entry->mmap_flag, err);
- atomic64_inc(&dev->stats.sw_stats.mmap_err);
+ atomic64_inc(&dev->stats.mmap_err);
}
rdma_user_mmap_entry_put(rdma_entry);
@@ -1869,7 +1869,7 @@ int efa_create_ah(struct ib_ah *ibah,
err_destroy_ah:
efa_ah_destroy(dev, ah);
err_out:
- atomic64_inc(&dev->stats.sw_stats.create_ah_err);
+ atomic64_inc(&dev->stats.create_ah_err);
return err;
}
@@ -1930,13 +1930,14 @@ int efa_get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
s = &dev->stats;
stats->value[EFA_KEEP_ALIVE_RCVD] = atomic64_read(&s->keep_alive_rcvd);
- stats->value[EFA_ALLOC_PD_ERR] = atomic64_read(&s->sw_stats.alloc_pd_err);
- stats->value[EFA_CREATE_QP_ERR] = atomic64_read(&s->sw_stats.create_qp_err);
- stats->value[EFA_CREATE_CQ_ERR] = atomic64_read(&s->sw_stats.create_cq_err);
- stats->value[EFA_REG_MR_ERR] = atomic64_read(&s->sw_stats.reg_mr_err);
- stats->value[EFA_ALLOC_UCONTEXT_ERR] = atomic64_read(&s->sw_stats.alloc_ucontext_err);
- stats->value[EFA_CREATE_AH_ERR] = atomic64_read(&s->sw_stats.create_ah_err);
- stats->value[EFA_MMAP_ERR] = atomic64_read(&s->sw_stats.mmap_err);
+ stats->value[EFA_ALLOC_PD_ERR] = atomic64_read(&s->alloc_pd_err);
+ stats->value[EFA_CREATE_QP_ERR] = atomic64_read(&s->create_qp_err);
+ stats->value[EFA_CREATE_CQ_ERR] = atomic64_read(&s->create_cq_err);
+ stats->value[EFA_REG_MR_ERR] = atomic64_read(&s->reg_mr_err);
+ stats->value[EFA_ALLOC_UCONTEXT_ERR] =
+ atomic64_read(&s->alloc_ucontext_err);
+ stats->value[EFA_CREATE_AH_ERR] = atomic64_read(&s->create_ah_err);
+ stats->value[EFA_MMAP_ERR] = atomic64_read(&s->mmap_err);
return ARRAY_SIZE(efa_stats_names);
}
--
2.28.0
* [PATCH for-next 2/2] RDMA/efa: Add messages and RDMA read work requests HW stats
2020-09-15 14:14 [PATCH for-next 0/2] EFA messages & RDMA read statistics Gal Pressman
2020-09-15 14:14 ` [PATCH for-next 1/2] RDMA/efa: Group keep alive received counter with other SW stats Gal Pressman
@ 2020-09-15 14:14 ` Gal Pressman
2020-09-22 23:21 ` [PATCH for-next 0/2] EFA messages & RDMA read statistics Jason Gunthorpe
2 siblings, 0 replies; 4+ messages in thread
From: Gal Pressman @ 2020-09-15 14:14 UTC (permalink / raw)
To: Jason Gunthorpe, Doug Ledford
Cc: linux-rdma, Alexander Matushevsky, Daniel Kranzdorf,
Yossi Leybovich, Gal Pressman
From: Daniel Kranzdorf <dkkranzd@amazon.com>
Add separate stats types for send messages and RDMA read work requests.
Signed-off-by: Daniel Kranzdorf <dkkranzd@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
---
.../infiniband/hw/efa/efa_admin_cmds_defs.h | 30 +++++++++++++++-
drivers/infiniband/hw/efa/efa_com_cmd.c | 26 +++++++++++---
drivers/infiniband/hw/efa/efa_com_cmd.h | 16 +++++++++
drivers/infiniband/hw/efa/efa_verbs.c | 34 ++++++++++++++++++-
4 files changed, 99 insertions(+), 7 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_admin_cmds_defs.h b/drivers/infiniband/hw/efa/efa_admin_cmds_defs.h
index d9676ca0b958..b199e4ac6cf9 100644
--- a/drivers/infiniband/hw/efa/efa_admin_cmds_defs.h
+++ b/drivers/infiniband/hw/efa/efa_admin_cmds_defs.h
@@ -61,6 +61,8 @@ enum efa_admin_qp_state {
enum efa_admin_get_stats_type {
EFA_ADMIN_GET_STATS_TYPE_BASIC = 0,
+ EFA_ADMIN_GET_STATS_TYPE_MESSAGES = 1,
+ EFA_ADMIN_GET_STATS_TYPE_RDMA_READ = 2,
};
enum efa_admin_get_stats_scope {
@@ -528,10 +530,36 @@ struct efa_admin_basic_stats {
u64 rx_drops;
};
+struct efa_admin_messages_stats {
+ u64 send_bytes;
+
+ u64 send_wrs;
+
+ u64 recv_bytes;
+
+ u64 recv_wrs;
+};
+
+struct efa_admin_rdma_read_stats {
+ u64 read_wrs;
+
+ u64 read_bytes;
+
+ u64 read_wr_err;
+
+ u64 read_resp_bytes;
+};
+
struct efa_admin_acq_get_stats_resp {
struct efa_admin_acq_common_desc acq_common_desc;
- struct efa_admin_basic_stats basic_stats;
+ union {
+ struct efa_admin_basic_stats basic_stats;
+
+ struct efa_admin_messages_stats messages_stats;
+
+ struct efa_admin_rdma_read_stats rdma_read_stats;
+ } u;
};
struct efa_admin_get_set_feature_common_desc {
diff --git a/drivers/infiniband/hw/efa/efa_com_cmd.c b/drivers/infiniband/hw/efa/efa_com_cmd.c
index f24634cce1cb..f752ef64159c 100644
--- a/drivers/infiniband/hw/efa/efa_com_cmd.c
+++ b/drivers/infiniband/hw/efa/efa_com_cmd.c
@@ -752,11 +752,27 @@ int efa_com_get_stats(struct efa_com_dev *edev,
return err;
}
- result->basic_stats.tx_bytes = resp.basic_stats.tx_bytes;
- result->basic_stats.tx_pkts = resp.basic_stats.tx_pkts;
- result->basic_stats.rx_bytes = resp.basic_stats.rx_bytes;
- result->basic_stats.rx_pkts = resp.basic_stats.rx_pkts;
- result->basic_stats.rx_drops = resp.basic_stats.rx_drops;
+ switch (cmd.type) {
+ case EFA_ADMIN_GET_STATS_TYPE_BASIC:
+ result->basic_stats.tx_bytes = resp.u.basic_stats.tx_bytes;
+ result->basic_stats.tx_pkts = resp.u.basic_stats.tx_pkts;
+ result->basic_stats.rx_bytes = resp.u.basic_stats.rx_bytes;
+ result->basic_stats.rx_pkts = resp.u.basic_stats.rx_pkts;
+ result->basic_stats.rx_drops = resp.u.basic_stats.rx_drops;
+ break;
+ case EFA_ADMIN_GET_STATS_TYPE_MESSAGES:
+ result->messages_stats.send_bytes = resp.u.messages_stats.send_bytes;
+ result->messages_stats.send_wrs = resp.u.messages_stats.send_wrs;
+ result->messages_stats.recv_bytes = resp.u.messages_stats.recv_bytes;
+ result->messages_stats.recv_wrs = resp.u.messages_stats.recv_wrs;
+ break;
+ case EFA_ADMIN_GET_STATS_TYPE_RDMA_READ:
+ result->rdma_read_stats.read_wrs = resp.u.rdma_read_stats.read_wrs;
+ result->rdma_read_stats.read_bytes = resp.u.rdma_read_stats.read_bytes;
+ result->rdma_read_stats.read_wr_err = resp.u.rdma_read_stats.read_wr_err;
+ result->rdma_read_stats.read_resp_bytes = resp.u.rdma_read_stats.read_resp_bytes;
+ break;
+ }
return 0;
}
diff --git a/drivers/infiniband/hw/efa/efa_com_cmd.h b/drivers/infiniband/hw/efa/efa_com_cmd.h
index 9ebee129f477..eea4ebfbe6ec 100644
--- a/drivers/infiniband/hw/efa/efa_com_cmd.h
+++ b/drivers/infiniband/hw/efa/efa_com_cmd.h
@@ -240,8 +240,24 @@ struct efa_com_basic_stats {
u64 rx_drops;
};
+struct efa_com_messages_stats {
+ u64 send_bytes;
+ u64 send_wrs;
+ u64 recv_bytes;
+ u64 recv_wrs;
+};
+
+struct efa_com_rdma_read_stats {
+ u64 read_wrs;
+ u64 read_bytes;
+ u64 read_wr_err;
+ u64 read_resp_bytes;
+};
+
union efa_com_get_stats_result {
struct efa_com_basic_stats basic_stats;
+ struct efa_com_messages_stats messages_stats;
+ struct efa_com_rdma_read_stats rdma_read_stats;
};
void efa_com_set_dma_addr(dma_addr_t addr, u32 *addr_high, u32 *addr_low);
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index c0c4eeed14cd..9ff2a837a8f9 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -36,6 +36,14 @@ struct efa_user_mmap_entry {
op(EFA_RX_BYTES, "rx_bytes") \
op(EFA_RX_PKTS, "rx_pkts") \
op(EFA_RX_DROPS, "rx_drops") \
+ op(EFA_SEND_BYTES, "send_bytes") \
+ op(EFA_SEND_WRS, "send_wrs") \
+ op(EFA_RECV_BYTES, "recv_bytes") \
+ op(EFA_RECV_WRS, "recv_wrs") \
+ op(EFA_RDMA_READ_WRS, "rdma_read_wrs") \
+ op(EFA_RDMA_READ_BYTES, "rdma_read_bytes") \
+ op(EFA_RDMA_READ_WR_ERR, "rdma_read_wr_err") \
+ op(EFA_RDMA_READ_RESP_BYTES, "rdma_read_resp_bytes") \
op(EFA_SUBMITTED_CMDS, "submitted_cmds") \
op(EFA_COMPLETED_CMDS, "completed_cmds") \
op(EFA_CMDS_ERR, "cmds_err") \
@@ -1903,13 +1911,15 @@ int efa_get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
struct efa_com_get_stats_params params = {};
union efa_com_get_stats_result result;
struct efa_dev *dev = to_edev(ibdev);
+ struct efa_com_rdma_read_stats *rrs;
+ struct efa_com_messages_stats *ms;
struct efa_com_basic_stats *bs;
struct efa_com_stats_admin *as;
struct efa_stats *s;
int err;
- params.type = EFA_ADMIN_GET_STATS_TYPE_BASIC;
params.scope = EFA_ADMIN_GET_STATS_SCOPE_ALL;
+ params.type = EFA_ADMIN_GET_STATS_TYPE_BASIC;
err = efa_com_get_stats(&dev->edev, &params, &result);
if (err)
@@ -1922,6 +1932,28 @@ int efa_get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
stats->value[EFA_RX_PKTS] = bs->rx_pkts;
stats->value[EFA_RX_DROPS] = bs->rx_drops;
+ params.type = EFA_ADMIN_GET_STATS_TYPE_MESSAGES;
+ err = efa_com_get_stats(&dev->edev, &params, &result);
+ if (err)
+ return err;
+
+ ms = &result.messages_stats;
+ stats->value[EFA_SEND_BYTES] = ms->send_bytes;
+ stats->value[EFA_SEND_WRS] = ms->send_wrs;
+ stats->value[EFA_RECV_BYTES] = ms->recv_bytes;
+ stats->value[EFA_RECV_WRS] = ms->recv_wrs;
+
+ params.type = EFA_ADMIN_GET_STATS_TYPE_RDMA_READ;
+ err = efa_com_get_stats(&dev->edev, &params, &result);
+ if (err)
+ return err;
+
+ rrs = &result.rdma_read_stats;
+ stats->value[EFA_RDMA_READ_WRS] = rrs->read_wrs;
+ stats->value[EFA_RDMA_READ_BYTES] = rrs->read_bytes;
+ stats->value[EFA_RDMA_READ_WR_ERR] = rrs->read_wr_err;
+ stats->value[EFA_RDMA_READ_RESP_BYTES] = rrs->read_resp_bytes;
+
as = &dev->edev.aq.stats;
stats->value[EFA_SUBMITTED_CMDS] = atomic64_read(&as->submitted_cmd);
stats->value[EFA_COMPLETED_CMDS] = atomic64_read(&as->completed_cmd);
--
2.28.0
* Re: [PATCH for-next 0/2] EFA messages & RDMA read statistics
2020-09-15 14:14 [PATCH for-next 0/2] EFA messages & RDMA read statistics Gal Pressman
2020-09-15 14:14 ` [PATCH for-next 1/2] RDMA/efa: Group keep alive received counter with other SW stats Gal Pressman
2020-09-15 14:14 ` [PATCH for-next 2/2] RDMA/efa: Add messages and RDMA read work requests HW stats Gal Pressman
@ 2020-09-22 23:21 ` Jason Gunthorpe
2 siblings, 0 replies; 4+ messages in thread
From: Jason Gunthorpe @ 2020-09-22 23:21 UTC (permalink / raw)
To: Gal Pressman; +Cc: Doug Ledford, linux-rdma, Alexander Matushevsky
On Tue, Sep 15, 2020 at 05:14:47PM +0300, Gal Pressman wrote:
> Hi all,
>
> This small series contains a small cleanup to the way we store our
> statistics and exposes a new set of counters.
> The new exposed counters report send/receive work requests counters and
> RDMA read work requests counters.
>
> Regards,
> Gal
>
> Daniel Kranzdorf (1):
> RDMA/efa: Add messages and RDMA read work requests HW stats
>
> Gal Pressman (1):
> RDMA/efa: Group keep alive received counter with other SW stats
>
> drivers/infiniband/hw/efa/efa.h | 8 +--
> .../infiniband/hw/efa/efa_admin_cmds_defs.h | 30 ++++++++-
> drivers/infiniband/hw/efa/efa_com_cmd.c | 26 ++++++--
> drivers/infiniband/hw/efa/efa_com_cmd.h | 16 +++++
> drivers/infiniband/hw/efa/efa_verbs.c | 65 ++++++++++++++-----
> 5 files changed, 117 insertions(+), 28 deletions(-)
Applied to for-next, thanks
Jason