* [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
@ 2022-11-14 13:17 Eli Cohen
  2022-11-14 13:17 ` [PATH v2 1/8] vdpa/mlx5: Fix rule forwarding VLAN to TIR Eli Cohen
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

This series is a resend of a previously sent patch list. It adds a few
fixes, so I am treating it as the first version of a new series.

It adds a kernel config parameter, CONFIG_MLX5_VDPA_STEERING_DEBUG, which,
when enabled, allows reading RX unicast and multicast counters for tagged
and untagged traffic.

Examples:
$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/packets
$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/bytes

v1->v2:
1. Reorder patches so fixes are first
2. Break "Fix rule forwarding VLAN to TIR" into two patches
3. Squash fix for bug in first patch from "Add RX counters to debugfs"
4. Move clearing of nb_registered before calling mlx5_notifier_unregister() in mlx5_vdpa_dev_del()


Eli Cohen (8):
  vdpa/mlx5: Fix rule forwarding VLAN to TIR
  vdpa/mlx5: Return error on vlan ctrl commands if not supported
  vdpa/mlx5: Fix wrong mac address deletion
  vdpa/mlx5: Avoid using reslock in event_handler
  vdpa/mlx5: Avoid overwriting CVQ iotlb
  vdpa/mlx5: Move some definitions to a new header file
  vdpa/mlx5: Add debugfs subtree
  vdpa/mlx5: Add RX counters to debugfs

 drivers/vdpa/Kconfig               |  12 ++
 drivers/vdpa/mlx5/Makefile         |   2 +-
 drivers/vdpa/mlx5/core/mlx5_vdpa.h |   5 +-
 drivers/vdpa/mlx5/core/mr.c        |  44 ++---
 drivers/vdpa/mlx5/net/debug.c      | 152 ++++++++++++++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 250 ++++++++++++++---------------
 drivers/vdpa/mlx5/net/mlx5_vnet.h  |  94 +++++++++++
 7 files changed, 412 insertions(+), 147 deletions(-)
 create mode 100644 drivers/vdpa/mlx5/net/debug.c
 create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h

-- 
2.38.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATH v2 1/8] vdpa/mlx5: Fix rule forwarding VLAN to TIR
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-14 13:17 ` [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported Eli Cohen
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

Set the VLAN id to the header values field instead of overwriting the
headers criteria field.

Before this fix, VLAN filtering did not actually work: tagged packets
were forwarded to the TIR unfiltered.
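
For clarity, the headers_c (criteria) fields form the match mask while the
headers_v (value) fields carry the values a packet must match. The corrected
setup, shown here only as an illustration using the same macros and variables
as the diff below, is:

  /* criteria (mask): match on the whole first_vid field */
  MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, first_vid);
  /* value: the VLAN id a packet must carry to hit this rule */
  MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid, vid);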

Fixes: baf2ad3f6a98 ("vdpa/mlx5: Add RX MAC VLAN filter support")
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 90913365def4..3fb06dcee943 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1468,11 +1468,13 @@ static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
 	dmac_v = MLX5_ADDR_OF(fte_match_param, headers_v, outer_headers.dmac_47_16);
 	eth_broadcast_addr(dmac_c);
 	ether_addr_copy(dmac_v, mac);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
+	if (ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN)) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
+		MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, first_vid);
+	}
 	if (tagged) {
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, cvlan_tag, 1);
-		MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, first_vid);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, first_vid, vid);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid, vid);
 	}
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
 	dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
  2022-11-14 13:17 ` [PATH v2 1/8] vdpa/mlx5: Fix rule forwarding VLAN to TIR Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-15  3:10     ` Jason Wang
  2022-11-15  9:43   ` Eugenio Perez Martin
  2022-11-14 13:17 ` [PATH v2 3/8] vdpa/mlx5: Fix wrong mac address deletion Eli Cohen
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

Check whether VIRTIO_NET_F_CTRL_VLAN has been negotiated and return an
error if a control VQ VLAN command is received when it has not.

Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 3fb06dcee943..01da229d22da 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1823,6 +1823,9 @@ static virtio_net_ctrl_ack handle_ctrl_vlan(struct mlx5_vdpa_dev *mvdev, u8 cmd)
 	size_t read;
 	u16 id;
 
+	if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN)))
+		return status;
+
 	switch (cmd) {
 	case VIRTIO_NET_CTRL_VLAN_ADD:
 		read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, &vlan, sizeof(vlan));
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 3/8] vdpa/mlx5: Fix wrong mac address deletion
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
  2022-11-14 13:17 ` [PATH v2 1/8] vdpa/mlx5: Fix rule forwarding VLAN to TIR Eli Cohen
  2022-11-14 13:17 ` [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-14 13:17 ` [PATH v2 4/8] vdpa/mlx5: Avoid using reslock in event_handler Eli Cohen
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

Delete the old MAC from the table rather than the new one, which is not
in the table yet.

Fixes: baf2ad3f6a98 ("vdpa/mlx5: Add RX MAC VLAN filter support")
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 01da229d22da..b06260a37680 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1686,7 +1686,7 @@ static virtio_net_ctrl_ack handle_ctrl_mac(struct mlx5_vdpa_dev *mvdev, u8 cmd)
 
 		/* Need recreate the flow table entry, so that the packet could forward back
 		 */
-		mac_vlan_del(ndev, ndev->config.mac, 0, false);
+		mac_vlan_del(ndev, mac_back, 0, false);
 
 		if (mac_vlan_add(ndev, ndev->config.mac, 0, false)) {
 			mlx5_vdpa_warn(mvdev, "failed to insert forward rules, try to restore\n");
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 4/8] vdpa/mlx5: Avoid using reslock in event_handler
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
                   ` (2 preceding siblings ...)
  2022-11-14 13:17 ` [PATH v2 3/8] vdpa/mlx5: Fix wrong mac address deletion Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-14 13:17 ` [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb Eli Cohen
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

event_handler() runs in atomic context and must not acquire reslock. We
can still guarantee that the handler won't be called after suspend by
clearing nb_registered, unregistering the handler and flushing the
workqueue.
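
The ordering that keeps this safe, annotated for illustration (all lines
are taken from the diff below):

  /* suspend / device-removal path: */
  ndev->nb_registered = false;                       /* update_carrier() will skip the callback */
  mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);  /* no further event_handler() invocations */
  flush_workqueue(ndev->mvdev.wq);                   /* drain carrier work already queued */

  /* update_carrier() work item: */
  if (ndev->nb_registered && ndev->config_cb.callback)
          ndev->config_cb.callback(ndev->config_cb.private);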

Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index b06260a37680..98dd8ce8af26 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2845,8 +2845,8 @@ static int mlx5_vdpa_suspend(struct vdpa_device *vdev)
 	int i;
 
 	down_write(&ndev->reslock);
-	mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
 	ndev->nb_registered = false;
+	mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
 	flush_workqueue(ndev->mvdev.wq);
 	for (i = 0; i < ndev->cur_num_vqs; i++) {
 		mvq = &ndev->vqs[i];
@@ -3024,7 +3024,7 @@ static void update_carrier(struct work_struct *work)
 	else
 		ndev->config.status &= cpu_to_mlx5vdpa16(mvdev, ~VIRTIO_NET_S_LINK_UP);
 
-	if (ndev->config_cb.callback)
+	if (ndev->nb_registered && ndev->config_cb.callback)
 		ndev->config_cb.callback(ndev->config_cb.private);
 
 	kfree(wqent);
@@ -3041,21 +3041,13 @@ static int event_handler(struct notifier_block *nb, unsigned long event, void *p
 		switch (eqe->sub_type) {
 		case MLX5_PORT_CHANGE_SUBTYPE_DOWN:
 		case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE:
-			down_read(&ndev->reslock);
-			if (!ndev->nb_registered) {
-				up_read(&ndev->reslock);
-				return NOTIFY_DONE;
-			}
 			wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
-			if (!wqent) {
-				up_read(&ndev->reslock);
+			if (!wqent)
 				return NOTIFY_DONE;
-			}
 
 			wqent->mvdev = &ndev->mvdev;
 			INIT_WORK(&wqent->work, update_carrier);
 			queue_work(ndev->mvdev.wq, &wqent->work);
-			up_read(&ndev->reslock);
 			ret = NOTIFY_OK;
 			break;
 		default:
@@ -3242,8 +3234,8 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
 	struct workqueue_struct *wq;
 
 	if (ndev->nb_registered) {
-		mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
 		ndev->nb_registered = false;
+		mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
 	}
 	wq = mvdev->wq;
 	mvdev->wq = NULL;
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
                   ` (3 preceding siblings ...)
  2022-11-14 13:17 ` [PATH v2 4/8] vdpa/mlx5: Avoid using reslock in event_handler Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-15  3:33     ` Jason Wang
  2022-11-15  9:41   ` Eugenio Perez Martin
  2022-11-14 13:17 ` [PATH v2 6/8] vdpa/mlx5: Move some definitions to a new header file Eli Cohen
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

When qemu uses different address spaces for data and control virtqueues,
the current code would overwrite the control virtqueue iotlb through the
dup_iotlb call. Fix this by referring to the address space identifier
and the group to asid mapping to determine which mapping needs to be
updated. We also move the address space logic from mlx5 net to core
directory.
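
With this change, a mapping update for a given asid only touches the
mapping whose group is bound to that asid (condensed from the diff below,
for illustration; error handling elided):

  if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
          /* this asid backs the data virtqueues: rebuild the memory key */
          if (iotlb)
                  err = create_user_mr(mvdev, iotlb);
          else
                  err = create_dma_mr(mvdev, mr);
  }

  if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid)
          /* this asid backs the control VQ: refresh only its private iotlb copy */
          err = dup_iotlb(mvdev, iotlb);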

Reported-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  5 +--
 drivers/vdpa/mlx5/core/mr.c        | 44 ++++++++++++++++-----------
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 49 ++++++------------------------
 3 files changed, 39 insertions(+), 59 deletions(-)

diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
index 6af9fdbb86b7..058fbe28107e 100644
--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
@@ -116,8 +116,9 @@ int mlx5_vdpa_create_mkey(struct mlx5_vdpa_dev *mvdev, u32 *mkey, u32 *in,
 			  int inlen);
 int mlx5_vdpa_destroy_mkey(struct mlx5_vdpa_dev *mvdev, u32 mkey);
 int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
-			     bool *change_map);
-int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb);
+			     bool *change_map, unsigned int asid);
+int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+			unsigned int asid);
 void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev);
 
 #define mlx5_vdpa_warn(__dev, format, ...)                                                         \
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
index a639b9208d41..a4d7ee2339fa 100644
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -511,7 +511,8 @@ void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev)
 	mutex_unlock(&mr->mkey_mtx);
 }
 
-static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
+static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
+				struct vhost_iotlb *iotlb, unsigned int asid)
 {
 	struct mlx5_vdpa_mr *mr = &mvdev->mr;
 	int err;
@@ -519,42 +520,49 @@ static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
 	if (mr->initialized)
 		return 0;
 
-	if (iotlb)
-		err = create_user_mr(mvdev, iotlb);
-	else
-		err = create_dma_mr(mvdev, mr);
+	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
+		if (iotlb)
+			err = create_user_mr(mvdev, iotlb);
+		else
+			err = create_dma_mr(mvdev, mr);
 
-	if (err)
-		return err;
+		if (err)
+			return err;
+	}
 
-	err = dup_iotlb(mvdev, iotlb);
-	if (err)
-		goto out_err;
+	if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid) {
+		err = dup_iotlb(mvdev, iotlb);
+		if (err)
+			goto out_err;
+	}
 
 	mr->initialized = true;
 	return 0;
 
 out_err:
-	if (iotlb)
-		destroy_user_mr(mvdev, mr);
-	else
-		destroy_dma_mr(mvdev, mr);
+	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
+		if (iotlb)
+			destroy_user_mr(mvdev, mr);
+		else
+			destroy_dma_mr(mvdev, mr);
+	}
 
 	return err;
 }
 
-int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
+int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+			unsigned int asid)
 {
 	int err;
 
 	mutex_lock(&mvdev->mr.mkey_mtx);
-	err = _mlx5_vdpa_create_mr(mvdev, iotlb);
+	err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
 	mutex_unlock(&mvdev->mr.mkey_mtx);
 	return err;
 }
 
 int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
-			     bool *change_map)
+			     bool *change_map, unsigned int asid)
 {
 	struct mlx5_vdpa_mr *mr = &mvdev->mr;
 	int err = 0;
@@ -566,7 +574,7 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
 		*change_map = true;
 	}
 	if (!*change_map)
-		err = _mlx5_vdpa_create_mr(mvdev, iotlb);
+		err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
 	mutex_unlock(&mr->mkey_mtx);
 
 	return err;
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 98dd8ce8af26..3a6dbbc6440d 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2394,7 +2394,8 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
 	}
 }
 
-static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
+static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
+				struct vhost_iotlb *iotlb, unsigned int asid)
 {
 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 	int err;
@@ -2406,7 +2407,7 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
 
 	teardown_driver(ndev);
 	mlx5_vdpa_destroy_mr(mvdev);
-	err = mlx5_vdpa_create_mr(mvdev, iotlb);
+	err = mlx5_vdpa_create_mr(mvdev, iotlb, asid);
 	if (err)
 		goto err_mr;
 
@@ -2587,7 +2588,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 	++mvdev->generation;
 
 	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
-		if (mlx5_vdpa_create_mr(mvdev, NULL))
+		if (mlx5_vdpa_create_mr(mvdev, NULL, 0))
 			mlx5_vdpa_warn(mvdev, "create MR failed\n");
 	}
 	up_write(&ndev->reslock);
@@ -2623,41 +2624,20 @@ static u32 mlx5_vdpa_get_generation(struct vdpa_device *vdev)
 	return mvdev->generation;
 }
 
-static int set_map_control(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
-{
-	u64 start = 0ULL, last = 0ULL - 1;
-	struct vhost_iotlb_map *map;
-	int err = 0;
-
-	spin_lock(&mvdev->cvq.iommu_lock);
-	vhost_iotlb_reset(mvdev->cvq.iotlb);
-
-	for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
-	     map = vhost_iotlb_itree_next(map, start, last)) {
-		err = vhost_iotlb_add_range(mvdev->cvq.iotlb, map->start,
-					    map->last, map->addr, map->perm);
-		if (err)
-			goto out;
-	}
-
-out:
-	spin_unlock(&mvdev->cvq.iommu_lock);
-	return err;
-}
-
-static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
+static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+			unsigned int asid)
 {
 	bool change_map;
 	int err;
 
-	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
+	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map, asid);
 	if (err) {
 		mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
 		return err;
 	}
 
 	if (change_map)
-		err = mlx5_vdpa_change_map(mvdev, iotlb);
+		err = mlx5_vdpa_change_map(mvdev, iotlb, asid);
 
 	return err;
 }
@@ -2670,16 +2650,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
 	int err = -EINVAL;
 
 	down_write(&ndev->reslock);
-	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
-		err = set_map_data(mvdev, iotlb);
-		if (err)
-			goto out;
-	}
-
-	if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid)
-		err = set_map_control(mvdev, iotlb);
-
-out:
+	err = set_map_data(mvdev, iotlb, asid);
 	up_write(&ndev->reslock);
 	return err;
 }
@@ -3182,7 +3153,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 		goto err_mpfs;
 
 	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
-		err = mlx5_vdpa_create_mr(mvdev, NULL);
+		err = mlx5_vdpa_create_mr(mvdev, NULL, 0);
 		if (err)
 			goto err_res;
 	}
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 6/8] vdpa/mlx5: Move some definitions to a new header file
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
                   ` (4 preceding siblings ...)
  2022-11-14 13:17 ` [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-14 13:17 ` [PATH v2 7/8] vdpa/mlx5: Add debugfs subtree Eli Cohen
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

Move some definitions from mlx5_vnet.c to a newly added header file,
mlx5_vnet.h. These definitions are needed by the following patches, which
add a debugfs tree exposing information vital for debugging.

Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 45 +------------------------
 drivers/vdpa/mlx5/net/mlx5_vnet.h | 55 +++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+), 44 deletions(-)
 create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 3a6dbbc6440d..da54a188077d 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -18,15 +18,12 @@
 #include <linux/mlx5/mlx5_ifc_vdpa.h>
 #include <linux/mlx5/mpfs.h>
 #include "mlx5_vdpa.h"
+#include "mlx5_vnet.h"
 
 MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
 MODULE_DESCRIPTION("Mellanox VDPA driver");
 MODULE_LICENSE("Dual BSD/GPL");
 
-#define to_mlx5_vdpa_ndev(__mvdev)                                             \
-	container_of(__mvdev, struct mlx5_vdpa_net, mvdev)
-#define to_mvdev(__vdev) container_of((__vdev), struct mlx5_vdpa_dev, vdev)
-
 #define VALID_FEATURES_MASK                                                                        \
 	(BIT_ULL(VIRTIO_NET_F_CSUM) | BIT_ULL(VIRTIO_NET_F_GUEST_CSUM) |                                   \
 	 BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS) | BIT_ULL(VIRTIO_NET_F_MTU) | BIT_ULL(VIRTIO_NET_F_MAC) |   \
@@ -50,14 +47,6 @@ MODULE_LICENSE("Dual BSD/GPL");
 
 #define MLX5V_UNTAGGED 0x1000
 
-struct mlx5_vdpa_net_resources {
-	u32 tisn;
-	u32 tdn;
-	u32 tirn;
-	u32 rqtn;
-	bool valid;
-};
-
 struct mlx5_vdpa_cq_buf {
 	struct mlx5_frag_buf_ctrl fbc;
 	struct mlx5_frag_buf frag_buf;
@@ -146,38 +135,6 @@ static bool is_index_valid(struct mlx5_vdpa_dev *mvdev, u16 idx)
 	return idx <= mvdev->max_idx;
 }
 
-#define MLX5V_MACVLAN_SIZE 256
-
-struct mlx5_vdpa_net {
-	struct mlx5_vdpa_dev mvdev;
-	struct mlx5_vdpa_net_resources res;
-	struct virtio_net_config config;
-	struct mlx5_vdpa_virtqueue *vqs;
-	struct vdpa_callback *event_cbs;
-
-	/* Serialize vq resources creation and destruction. This is required
-	 * since memory map might change and we need to destroy and create
-	 * resources while driver in operational.
-	 */
-	struct rw_semaphore reslock;
-	struct mlx5_flow_table *rxft;
-	bool setup;
-	u32 cur_num_vqs;
-	u32 rqt_size;
-	bool nb_registered;
-	struct notifier_block nb;
-	struct vdpa_callback config_cb;
-	struct mlx5_vdpa_wq_ent cvq_ent;
-	struct hlist_head macvlan_hash[MLX5V_MACVLAN_SIZE];
-};
-
-struct macvlan_node {
-	struct hlist_node hlist;
-	struct mlx5_flow_handle *ucast_rule;
-	struct mlx5_flow_handle *mcast_rule;
-	u64 macvlan;
-};
-
 static void free_resources(struct mlx5_vdpa_net *ndev);
 static void init_mvqs(struct mlx5_vdpa_net *ndev);
 static int setup_driver(struct mlx5_vdpa_dev *mvdev);
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.h b/drivers/vdpa/mlx5/net/mlx5_vnet.h
new file mode 100644
index 000000000000..6691c879a6ca
--- /dev/null
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_VNET_H__
+#define __MLX5_VNET_H__
+
+#include "mlx5_vdpa.h"
+
+#define to_mlx5_vdpa_ndev(__mvdev)                                             \
+	container_of(__mvdev, struct mlx5_vdpa_net, mvdev)
+#define to_mvdev(__vdev) container_of((__vdev), struct mlx5_vdpa_dev, vdev)
+
+struct mlx5_vdpa_net_resources {
+	u32 tisn;
+	u32 tdn;
+	u32 tirn;
+	u32 rqtn;
+	bool valid;
+};
+
+#define MLX5V_MACVLAN_SIZE 256
+
+struct mlx5_vdpa_net {
+	struct mlx5_vdpa_dev mvdev;
+	struct mlx5_vdpa_net_resources res;
+	struct virtio_net_config config;
+	struct mlx5_vdpa_virtqueue *vqs;
+	struct vdpa_callback *event_cbs;
+
+	/* Serialize vq resources creation and destruction. This is required
+	 * since memory map might change and we need to destroy and create
+	 * resources while driver in operational.
+	 */
+	struct rw_semaphore reslock;
+	struct mlx5_flow_table *rxft;
+	struct dentry *rx_dent;
+	struct dentry *rx_table_dent;
+	bool setup;
+	u32 cur_num_vqs;
+	u32 rqt_size;
+	bool nb_registered;
+	struct notifier_block nb;
+	struct vdpa_callback config_cb;
+	struct mlx5_vdpa_wq_ent cvq_ent;
+	struct hlist_head macvlan_hash[MLX5V_MACVLAN_SIZE];
+};
+
+struct macvlan_node {
+	struct hlist_node hlist;
+	struct mlx5_flow_handle *ucast_rule;
+	struct mlx5_flow_handle *mcast_rule;
+	u64 macvlan;
+};
+
+#endif /* __MLX5_VNET_H__ */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 7/8] vdpa/mlx5: Add debugfs subtree
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
                   ` (5 preceding siblings ...)
  2022-11-14 13:17 ` [PATH v2 6/8] vdpa/mlx5: Move some definitions to a new header file Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-14 13:17 ` [PATH v2 8/8] vdpa/mlx5: Add RX counters to debugfs Eli Cohen
  2022-11-24  6:34 ` [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
  8 siblings, 0 replies; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

Add debugfs subtree and expose flow table ID and TIR number. This
information can be used by external tools to do extended
troubleshooting.

The information can be retrieved like so:
$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/table_id
$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/tirn

Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/mlx5/Makefile        |  2 +-
 drivers/vdpa/mlx5/net/debug.c     | 66 +++++++++++++++++++++++++++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 11 ++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.h |  9 +++++
 4 files changed, 87 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/mlx5/net/debug.c

diff --git a/drivers/vdpa/mlx5/Makefile b/drivers/vdpa/mlx5/Makefile
index f717978c83bf..e791394c33e3 100644
--- a/drivers/vdpa/mlx5/Makefile
+++ b/drivers/vdpa/mlx5/Makefile
@@ -1,4 +1,4 @@
 subdir-ccflags-y += -I$(srctree)/drivers/vdpa/mlx5/core
 
 obj-$(CONFIG_MLX5_VDPA_NET) += mlx5_vdpa.o
-mlx5_vdpa-$(CONFIG_MLX5_VDPA_NET) += net/mlx5_vnet.o core/resources.o core/mr.o
+mlx5_vdpa-$(CONFIG_MLX5_VDPA_NET) += net/mlx5_vnet.o core/resources.o core/mr.o net/debug.o
diff --git a/drivers/vdpa/mlx5/net/debug.c b/drivers/vdpa/mlx5/net/debug.c
new file mode 100644
index 000000000000..95e4801df211
--- /dev/null
+++ b/drivers/vdpa/mlx5/net/debug.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#include <linux/debugfs.h>
+#include <linux/mlx5/fs.h>
+#include "mlx5_vnet.h"
+
+static int tirn_show(struct seq_file *file, void *priv)
+{
+	struct mlx5_vdpa_net *ndev = file->private;
+
+	seq_printf(file, "0x%x\n", ndev->res.tirn);
+	return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(tirn);
+
+void mlx5_vdpa_remove_tirn(struct mlx5_vdpa_net *ndev)
+{
+	if (ndev->debugfs)
+		debugfs_remove(ndev->res.tirn_dent);
+}
+
+void mlx5_vdpa_add_tirn(struct mlx5_vdpa_net *ndev)
+{
+	ndev->res.tirn_dent = debugfs_create_file("tirn", 0444, ndev->rx_dent,
+						  ndev, &tirn_fops);
+}
+
+static int rx_flow_table_show(struct seq_file *file, void *priv)
+{
+	struct mlx5_vdpa_net *ndev = file->private;
+
+	seq_printf(file, "0x%x\n", mlx5_flow_table_id(ndev->rxft));
+	return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(rx_flow_table);
+
+void mlx5_vdpa_remove_rx_flow_table(struct mlx5_vdpa_net *ndev)
+{
+	if (ndev->debugfs)
+		debugfs_remove(ndev->rx_table_dent);
+}
+
+void mlx5_vdpa_add_rx_flow_table(struct mlx5_vdpa_net *ndev)
+{
+	ndev->rx_table_dent = debugfs_create_file("table_id", 0444, ndev->rx_dent,
+						  ndev, &rx_flow_table_fops);
+}
+
+void mlx5_vdpa_add_debugfs(struct mlx5_vdpa_net *ndev)
+{
+	struct mlx5_core_dev *mdev;
+
+	mdev = ndev->mvdev.mdev;
+	ndev->debugfs = debugfs_create_dir(dev_name(&ndev->mvdev.vdev.dev),
+					   mlx5_debugfs_get_dev_root(mdev));
+	if (!IS_ERR(ndev->debugfs))
+		ndev->rx_dent = debugfs_create_dir("rx", ndev->debugfs);
+}
+
+void mlx5_vdpa_remove_debugfs(struct dentry *dbg)
+{
+	debugfs_remove_recursive(dbg);
+}
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index da54a188077d..4b097e6ddba0 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1388,11 +1388,16 @@ static int create_tir(struct mlx5_vdpa_net *ndev)
 
 	err = mlx5_vdpa_create_tir(&ndev->mvdev, in, &ndev->res.tirn);
 	kfree(in);
+	if (err)
+		return err;
+
+	mlx5_vdpa_add_tirn(ndev);
 	return err;
 }
 
 static void destroy_tir(struct mlx5_vdpa_net *ndev)
 {
+	mlx5_vdpa_remove_tirn(ndev);
 	mlx5_vdpa_destroy_tir(&ndev->mvdev, ndev->res.tirn);
 }
 
@@ -1578,6 +1583,7 @@ static int setup_steering(struct mlx5_vdpa_net *ndev)
 		mlx5_vdpa_warn(&ndev->mvdev, "failed to create flow table\n");
 		return PTR_ERR(ndev->rxft);
 	}
+	mlx5_vdpa_add_rx_flow_table(ndev);
 
 	err = mac_vlan_add(ndev, ndev->config.mac, 0, false);
 	if (err)
@@ -1586,6 +1592,7 @@ static int setup_steering(struct mlx5_vdpa_net *ndev)
 	return 0;
 
 err_add:
+	mlx5_vdpa_remove_rx_flow_table(ndev);
 	mlx5_destroy_flow_table(ndev->rxft);
 	return err;
 }
@@ -1593,6 +1600,7 @@ static int setup_steering(struct mlx5_vdpa_net *ndev)
 static void teardown_steering(struct mlx5_vdpa_net *ndev)
 {
 	clear_mac_vlan_table(ndev);
+	mlx5_vdpa_remove_rx_flow_table(ndev);
 	mlx5_destroy_flow_table(ndev->rxft);
 }
 
@@ -3135,6 +3143,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	if (err)
 		goto err_reg;
 
+	mlx5_vdpa_add_debugfs(ndev);
 	mgtdev->ndev = ndev;
 	return 0;
 
@@ -3161,6 +3170,8 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 	struct workqueue_struct *wq;
 
+	mlx5_vdpa_remove_debugfs(ndev->debugfs);
+	ndev->debugfs = NULL;
 	if (ndev->nb_registered) {
 		ndev->nb_registered = false;
 		mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.h b/drivers/vdpa/mlx5/net/mlx5_vnet.h
index 6691c879a6ca..f2cef3925e5b 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.h
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.h
@@ -16,6 +16,7 @@ struct mlx5_vdpa_net_resources {
 	u32 tirn;
 	u32 rqtn;
 	bool valid;
+	struct dentry *tirn_dent;
 };
 
 #define MLX5V_MACVLAN_SIZE 256
@@ -43,6 +44,7 @@ struct mlx5_vdpa_net {
 	struct vdpa_callback config_cb;
 	struct mlx5_vdpa_wq_ent cvq_ent;
 	struct hlist_head macvlan_hash[MLX5V_MACVLAN_SIZE];
+	struct dentry *debugfs;
 };
 
 struct macvlan_node {
@@ -52,4 +54,11 @@ struct macvlan_node {
 	u64 macvlan;
 };
 
+void mlx5_vdpa_add_debugfs(struct mlx5_vdpa_net *ndev);
+void mlx5_vdpa_remove_debugfs(struct dentry *dbg);
+void mlx5_vdpa_add_rx_flow_table(struct mlx5_vdpa_net *ndev);
+void mlx5_vdpa_remove_rx_flow_table(struct mlx5_vdpa_net *ndev);
+void mlx5_vdpa_add_tirn(struct mlx5_vdpa_net *ndev);
+void mlx5_vdpa_remove_tirn(struct mlx5_vdpa_net *ndev);
+
 #endif /* __MLX5_VNET_H__ */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATH v2 8/8] vdpa/mlx5: Add RX counters to debugfs
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
                   ` (6 preceding siblings ...)
  2022-11-14 13:17 ` [PATH v2 7/8] vdpa/mlx5: Add debugfs subtree Eli Cohen
@ 2022-11-14 13:17 ` Eli Cohen
  2022-11-15  3:33     ` Jason Wang
  2022-11-24  6:34 ` [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
  8 siblings, 1 reply; 21+ messages in thread
From: Eli Cohen @ 2022-11-14 13:17 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization
  Cc: si-wei.liu, eperezma, lulu, Eli Cohen

For each interface, either VLAN tagged or untagged, add two hardware
counters: one for unicast and another for multicast. The counters count
RX packets and bytes and can be read through debugfs:

$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/packets
$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/bytes

This feature is controlled via the config option
MLX5_VDPA_STEERING_DEBUG. It is off by default as it may have some
impact on performance.
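
When the option is enabled, each steering rule gains a flow counter as a
second destination, and the debugfs files simply query it (condensed from
the helpers used in the diff below, for illustration):

  node->ucast_counter.counter = mlx5_fc_create(ndev->mvdev.mdev, false);
  dests[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
  dests[1].counter_id = mlx5_fc_id(node->ucast_counter.counter);
  flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;

  /* debugfs "packets" / "bytes" show handlers: */
  err = mlx5_fc_query(counter->mdev, counter->counter, &packets, &bytes);
  seq_printf(file, "0x%llx\n", packets);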

Signed-off-by: Eli Cohen <elic@nvidia.com>
---
 drivers/vdpa/Kconfig              |  12 ++++
 drivers/vdpa/mlx5/net/debug.c     |  86 ++++++++++++++++++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 116 +++++++++++++++++++++++-------
 drivers/vdpa/mlx5/net/mlx5_vnet.h |  30 ++++++++
 4 files changed, 217 insertions(+), 27 deletions(-)

diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
index 50f45d037611..43b716ec2d18 100644
--- a/drivers/vdpa/Kconfig
+++ b/drivers/vdpa/Kconfig
@@ -71,6 +71,18 @@ config MLX5_VDPA_NET
 	  be executed by the hardware. It also supports a variety of stateless
 	  offloads depending on the actual device used and firmware version.
 
+config MLX5_VDPA_STEERING_DEBUG
+	bool "expose steering counters on debugfs"
+	select MLX5_VDPA
+	help
+	  Expose RX steering counters in debugfs to aid in debugging. For each VLAN
+	  or non VLAN interface, two hardware counters are added to the RX flow
+	  table: one for unicast and one for multicast.
+	  The counters count the number of packets and bytes and expose them in
+	  debugfs. One can read the counters using, e.g.:
+	  cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/packets
+	  cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/bytes
+
 config VP_VDPA
 	tristate "Virtio PCI bridge vDPA driver"
 	select VIRTIO_PCI_LIB
diff --git a/drivers/vdpa/mlx5/net/debug.c b/drivers/vdpa/mlx5/net/debug.c
index 95e4801df211..60d6ac68cdc4 100644
--- a/drivers/vdpa/mlx5/net/debug.c
+++ b/drivers/vdpa/mlx5/net/debug.c
@@ -49,6 +49,92 @@ void mlx5_vdpa_add_rx_flow_table(struct mlx5_vdpa_net *ndev)
 						  ndev, &rx_flow_table_fops);
 }
 
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+static int packets_show(struct seq_file *file, void *priv)
+{
+	struct mlx5_vdpa_counter *counter = file->private;
+	u64 packets;
+	u64 bytes;
+	int err;
+
+	err = mlx5_fc_query(counter->mdev, counter->counter, &packets, &bytes);
+	if (err)
+		return err;
+
+	seq_printf(file, "0x%llx\n", packets);
+	return 0;
+}
+
+static int bytes_show(struct seq_file *file, void *priv)
+{
+	struct mlx5_vdpa_counter *counter = file->private;
+	u64 packets;
+	u64 bytes;
+	int err;
+
+	err = mlx5_fc_query(counter->mdev, counter->counter, &packets, &bytes);
+	if (err)
+		return err;
+
+	seq_printf(file, "0x%llx\n", bytes);
+	return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(packets);
+DEFINE_SHOW_ATTRIBUTE(bytes);
+
+static void add_counter_node(struct mlx5_vdpa_counter *counter,
+			     struct dentry *parent)
+{
+	debugfs_create_file("packets", 0444, parent, counter,
+			    &packets_fops);
+	debugfs_create_file("bytes", 0444, parent, counter,
+			    &bytes_fops);
+}
+
+void mlx5_vdpa_add_rx_counters(struct mlx5_vdpa_net *ndev,
+			       struct macvlan_node *node)
+{
+	static const char *ut = "untagged";
+	char vidstr[9];
+	u16 vid;
+
+	node->ucast_counter.mdev = ndev->mvdev.mdev;
+	node->mcast_counter.mdev = ndev->mvdev.mdev;
+	if (node->tagged) {
+		vid = key2vid(node->macvlan);
+		snprintf(vidstr, sizeof(vidstr), "0x%x", vid);
+	} else {
+		strcpy(vidstr, ut);
+	}
+
+	node->dent = debugfs_create_dir(vidstr, ndev->rx_dent);
+	if (IS_ERR(node->dent)) {
+		node->dent = NULL;
+		return;
+	}
+
+	node->ucast_counter.dent = debugfs_create_dir("ucast", node->dent);
+	if (IS_ERR(node->ucast_counter.dent))
+		return;
+
+	add_counter_node(&node->ucast_counter, node->ucast_counter.dent);
+
+	node->mcast_counter.dent = debugfs_create_dir("mcast", node->dent);
+	if (IS_ERR(node->mcast_counter.dent))
+		return;
+
+	add_counter_node(&node->mcast_counter, node->mcast_counter.dent);
+}
+
+void mlx5_vdpa_remove_rx_counters(struct mlx5_vdpa_net *ndev,
+				  struct macvlan_node *node)
+{
+	if (node->dent && ndev->debugfs)
+		debugfs_remove_recursive(node->dent);
+}
+#endif
+
 void mlx5_vdpa_add_debugfs(struct mlx5_vdpa_net *ndev)
 {
 	struct mlx5_core_dev *mdev;
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 4b097e6ddba0..6632651b1e54 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1404,12 +1404,55 @@ static void destroy_tir(struct mlx5_vdpa_net *ndev)
 #define MAX_STEERING_ENT 0x8000
 #define MAX_STEERING_GROUPS 2
 
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+       #define NUM_DESTS 2
+#else
+       #define NUM_DESTS 1
+#endif
+
+static int add_steering_counters(struct mlx5_vdpa_net *ndev,
+				 struct macvlan_node *node,
+				 struct mlx5_flow_act *flow_act,
+				 struct mlx5_flow_destination *dests)
+{
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+	int err;
+
+	node->ucast_counter.counter = mlx5_fc_create(ndev->mvdev.mdev, false);
+	if (IS_ERR(node->ucast_counter.counter))
+		return PTR_ERR(node->ucast_counter.counter);
+
+	node->mcast_counter.counter = mlx5_fc_create(ndev->mvdev.mdev, false);
+	if (IS_ERR(node->mcast_counter.counter)) {
+		err = PTR_ERR(node->mcast_counter.counter);
+		goto err_mcast_counter;
+	}
+
+	dests[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+	flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
+	return 0;
+
+err_mcast_counter:
+	mlx5_fc_destroy(ndev->mvdev.mdev, node->ucast_counter.counter);
+	return err;
+#else
+	return 0;
+#endif
+}
+
+static void remove_steering_counters(struct mlx5_vdpa_net *ndev,
+				     struct macvlan_node *node)
+{
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+	mlx5_fc_destroy(ndev->mvdev.mdev, node->mcast_counter.counter);
+	mlx5_fc_destroy(ndev->mvdev.mdev, node->ucast_counter.counter);
+#endif
+}
+
 static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
-					u16 vid, bool tagged,
-					struct mlx5_flow_handle **ucast,
-					struct mlx5_flow_handle **mcast)
+					struct macvlan_node *node)
 {
-	struct mlx5_flow_destination dest = {};
+	struct mlx5_flow_destination dests[NUM_DESTS] = {};
 	struct mlx5_flow_act flow_act = {};
 	struct mlx5_flow_handle *rule;
 	struct mlx5_flow_spec *spec;
@@ -1418,11 +1461,13 @@ static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
 	u8 *dmac_c;
 	u8 *dmac_v;
 	int err;
+	u16 vid;
 
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
 	if (!spec)
 		return -ENOMEM;
 
+	vid = key2vid(node->macvlan);
 	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
 	headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, outer_headers);
 	headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, outer_headers);
@@ -1434,44 +1479,58 @@ static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
 		MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, first_vid);
 	}
-	if (tagged) {
+	if (node->tagged) {
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, cvlan_tag, 1);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid, vid);
 	}
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
-	dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
-	dest.tir_num = ndev->res.tirn;
-	rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, &dest, 1);
-	if (IS_ERR(rule))
-		return PTR_ERR(rule);
+	dests[0].type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+	dests[0].tir_num = ndev->res.tirn;
+	err = add_steering_counters(ndev, node, &flow_act, dests);
+	if (err)
+		goto out_free;
+
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+	dests[1].counter_id = mlx5_fc_id(node->ucast_counter.counter);
+#endif
+	node->ucast_rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, dests, NUM_DESTS);
+	if (IS_ERR(rule)) {
+		err = PTR_ERR(rule);
+		goto err_ucast;
+	}
 
-	*ucast = rule;
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+	dests[1].counter_id = mlx5_fc_id(node->mcast_counter.counter);
+#endif
 
 	memset(dmac_c, 0, ETH_ALEN);
 	memset(dmac_v, 0, ETH_ALEN);
 	dmac_c[0] = 1;
 	dmac_v[0] = 1;
-	rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, &dest, 1);
-	kvfree(spec);
+	node->mcast_rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, dests, NUM_DESTS);
 	if (IS_ERR(rule)) {
 		err = PTR_ERR(rule);
 		goto err_mcast;
 	}
-
-	*mcast = rule;
+	kvfree(spec);
+	mlx5_vdpa_add_rx_counters(ndev, node);
 	return 0;
 
 err_mcast:
-	mlx5_del_flow_rules(*ucast);
+	mlx5_del_flow_rules(node->ucast_rule);
+err_ucast:
+	remove_steering_counters(ndev, node);
+out_free:
+	kvfree(spec);
 	return err;
 }
 
 static void mlx5_vdpa_del_mac_vlan_rules(struct mlx5_vdpa_net *ndev,
-					 struct mlx5_flow_handle *ucast,
-					 struct mlx5_flow_handle *mcast)
+					 struct macvlan_node *node)
 {
-	mlx5_del_flow_rules(ucast);
-	mlx5_del_flow_rules(mcast);
+	mlx5_vdpa_remove_rx_counters(ndev, node);
+	mlx5_del_flow_rules(node->ucast_rule);
+	mlx5_del_flow_rules(node->mcast_rule);
 }
 
 static u64 search_val(u8 *mac, u16 vlan, bool tagged)
@@ -1505,14 +1564,14 @@ static struct macvlan_node *mac_vlan_lookup(struct mlx5_vdpa_net *ndev, u64 valu
 	return NULL;
 }
 
-static int mac_vlan_add(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vlan, bool tagged) // vlan -> vid
+static int mac_vlan_add(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vid, bool tagged)
 {
 	struct macvlan_node *ptr;
 	u64 val;
 	u32 idx;
 	int err;
 
-	val = search_val(mac, vlan, tagged);
+	val = search_val(mac, vid, tagged);
 	if (mac_vlan_lookup(ndev, val))
 		return -EEXIST;
 
@@ -1520,12 +1579,13 @@ static int mac_vlan_add(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vlan, bool tagg
 	if (!ptr)
 		return -ENOMEM;
 
-	err = mlx5_vdpa_add_mac_vlan_rules(ndev, ndev->config.mac, vlan, tagged,
-					   &ptr->ucast_rule, &ptr->mcast_rule);
+	ptr->tagged = tagged;
+	ptr->macvlan = val;
+	ptr->ndev = ndev;
+	err = mlx5_vdpa_add_mac_vlan_rules(ndev, ndev->config.mac, ptr);
 	if (err)
 		goto err_add;
 
-	ptr->macvlan = val;
 	idx = hash_64(val, 8);
 	hlist_add_head(&ptr->hlist, &ndev->macvlan_hash[idx]);
 	return 0;
@@ -1544,7 +1604,8 @@ static void mac_vlan_del(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vlan, bool tag
 		return;
 
 	hlist_del(&ptr->hlist);
-	mlx5_vdpa_del_mac_vlan_rules(ndev, ptr->ucast_rule, ptr->mcast_rule);
+	mlx5_vdpa_del_mac_vlan_rules(ndev, ptr);
+	remove_steering_counters(ndev, ptr);
 	kfree(ptr);
 }
 
@@ -1557,7 +1618,8 @@ static void clear_mac_vlan_table(struct mlx5_vdpa_net *ndev)
 	for (i = 0; i < MLX5V_MACVLAN_SIZE; i++) {
 		hlist_for_each_entry_safe(pos, n, &ndev->macvlan_hash[i], hlist) {
 			hlist_del(&pos->hlist);
-			mlx5_vdpa_del_mac_vlan_rules(ndev, pos->ucast_rule, pos->mcast_rule);
+			mlx5_vdpa_del_mac_vlan_rules(ndev, pos);
+			remove_steering_counters(ndev, pos);
 			kfree(pos);
 		}
 	}
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.h b/drivers/vdpa/mlx5/net/mlx5_vnet.h
index f2cef3925e5b..c90a89e1de4d 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.h
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.h
@@ -21,6 +21,11 @@ struct mlx5_vdpa_net_resources {
 
 #define MLX5V_MACVLAN_SIZE 256
 
+static inline u16 key2vid(u64 key)
+{
+	return (u16)(key >> 48) & 0xfff;
+}
+
 struct mlx5_vdpa_net {
 	struct mlx5_vdpa_dev mvdev;
 	struct mlx5_vdpa_net_resources res;
@@ -47,11 +52,24 @@ struct mlx5_vdpa_net {
 	struct dentry *debugfs;
 };
 
+struct mlx5_vdpa_counter {
+	struct mlx5_fc *counter;
+	struct dentry *dent;
+	struct mlx5_core_dev *mdev;
+};
+
 struct macvlan_node {
 	struct hlist_node hlist;
 	struct mlx5_flow_handle *ucast_rule;
 	struct mlx5_flow_handle *mcast_rule;
 	u64 macvlan;
+	struct mlx5_vdpa_net *ndev;
+	bool tagged;
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+	struct dentry *dent;
+	struct mlx5_vdpa_counter ucast_counter;
+	struct mlx5_vdpa_counter mcast_counter;
+#endif
 };
 
 void mlx5_vdpa_add_debugfs(struct mlx5_vdpa_net *ndev);
@@ -60,5 +78,17 @@ void mlx5_vdpa_add_rx_flow_table(struct mlx5_vdpa_net *ndev);
 void mlx5_vdpa_remove_rx_flow_table(struct mlx5_vdpa_net *ndev);
 void mlx5_vdpa_add_tirn(struct mlx5_vdpa_net *ndev);
 void mlx5_vdpa_remove_tirn(struct mlx5_vdpa_net *ndev);
+#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
+void mlx5_vdpa_add_rx_counters(struct mlx5_vdpa_net *ndev,
+			       struct macvlan_node *node);
+void mlx5_vdpa_remove_rx_counters(struct mlx5_vdpa_net *ndev,
+				  struct macvlan_node *node);
+#else
+static inline void mlx5_vdpa_add_rx_counters(struct mlx5_vdpa_net *ndev,
+					     struct macvlan_node *node) {}
+static inline void mlx5_vdpa_remove_rx_counters(struct mlx5_vdpa_net *ndev,
+						struct macvlan_node *node) {}
+#endif
+
 
 #endif /* __MLX5_VNET_H__ */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported
  2022-11-14 13:17 ` [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported Eli Cohen
@ 2022-11-15  3:10     ` Jason Wang
  2022-11-15  9:43   ` Eugenio Perez Martin
  1 sibling, 0 replies; 21+ messages in thread
From: Jason Wang @ 2022-11-15  3:10 UTC (permalink / raw)
  To: Eli Cohen; +Cc: lulu, mst, linux-kernel, virtualization, eperezma

On Mon, Nov 14, 2022 at 9:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Check whether VIRTIO_NET_F_CTRL_VLAN has been negotiated and return an
> error if a control VQ VLAN command is received when it has not.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 3fb06dcee943..01da229d22da 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1823,6 +1823,9 @@ static virtio_net_ctrl_ack handle_ctrl_vlan(struct mlx5_vdpa_dev *mvdev, u8 cmd)
>         size_t read;
>         u16 id;
>
> +       if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN)))
> +               return status;
> +
>         switch (cmd) {
>         case VIRTIO_NET_CTRL_VLAN_ADD:
>                 read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, &vlan, sizeof(vlan));
> --
> 2.38.1
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported
@ 2022-11-15  3:10     ` Jason Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Jason Wang @ 2022-11-15  3:10 UTC (permalink / raw)
  To: Eli Cohen; +Cc: mst, linux-kernel, virtualization, si-wei.liu, eperezma, lulu

On Mon, Nov 14, 2022 at 9:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Check whether VIRTIO_NET_F_CTRL_VLAN has been negotiated and return an
> error if a control VQ VLAN command is received when it has not.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 3fb06dcee943..01da229d22da 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1823,6 +1823,9 @@ static virtio_net_ctrl_ack handle_ctrl_vlan(struct mlx5_vdpa_dev *mvdev, u8 cmd)
>         size_t read;
>         u16 id;
>
> +       if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN)))
> +               return status;
> +
>         switch (cmd) {
>         case VIRTIO_NET_CTRL_VLAN_ADD:
>                 read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, &vlan, sizeof(vlan));
> --
> 2.38.1
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb
  2022-11-14 13:17 ` [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb Eli Cohen
@ 2022-11-15  3:33     ` Jason Wang
  2022-11-15  9:41   ` Eugenio Perez Martin
  1 sibling, 0 replies; 21+ messages in thread
From: Jason Wang @ 2022-11-15  3:33 UTC (permalink / raw)
  To: Eli Cohen; +Cc: mst, linux-kernel, virtualization, si-wei.liu, eperezma, lulu

On Mon, Nov 14, 2022 at 9:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> When qemu uses different address spaces for data and control virtqueues,
> the current code would overwrite the control virtqueue iotlb through the
> dup_iotlb call. Fix this by referring to the address space identifier
> and the group to asid mapping to determine which mapping needs to be
> updated. We also move the address space logic from mlx5 net to core
> directory.
>
> Reported-by: Eugenio Pérez <eperezma@redhat.com>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  5 +--
>  drivers/vdpa/mlx5/core/mr.c        | 44 ++++++++++++++++-----------
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 49 ++++++------------------------
>  3 files changed, 39 insertions(+), 59 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> index 6af9fdbb86b7..058fbe28107e 100644
> --- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> +++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> @@ -116,8 +116,9 @@ int mlx5_vdpa_create_mkey(struct mlx5_vdpa_dev *mvdev, u32 *mkey, u32 *in,
>                           int inlen);
>  int mlx5_vdpa_destroy_mkey(struct mlx5_vdpa_dev *mvdev, u32 mkey);
>  int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> -                            bool *change_map);
> -int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb);
> +                            bool *change_map, unsigned int asid);
> +int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid);
>  void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev);
>
>  #define mlx5_vdpa_warn(__dev, format, ...)                                                         \
> diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
> index a639b9208d41..a4d7ee2339fa 100644
> --- a/drivers/vdpa/mlx5/core/mr.c
> +++ b/drivers/vdpa/mlx5/core/mr.c
> @@ -511,7 +511,8 @@ void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev)
>         mutex_unlock(&mr->mkey_mtx);
>  }
>
> -static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
> +                               struct vhost_iotlb *iotlb, unsigned int asid)
>  {
>         struct mlx5_vdpa_mr *mr = &mvdev->mr;
>         int err;
> @@ -519,42 +520,49 @@ static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
>         if (mr->initialized)
>                 return 0;
>
> -       if (iotlb)
> -               err = create_user_mr(mvdev, iotlb);
> -       else
> -               err = create_dma_mr(mvdev, mr);
> +       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> +               if (iotlb)
> +                       err = create_user_mr(mvdev, iotlb);
> +               else
> +                       err = create_dma_mr(mvdev, mr);
>
> -       if (err)
> -               return err;
> +               if (err)
> +                       return err;
> +       }
>
> -       err = dup_iotlb(mvdev, iotlb);
> -       if (err)
> -               goto out_err;
> +       if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid) {
> +               err = dup_iotlb(mvdev, iotlb);
> +               if (err)
> +                       goto out_err;
> +       }
>
>         mr->initialized = true;
>         return 0;
>
>  out_err:
> -       if (iotlb)
> -               destroy_user_mr(mvdev, mr);
> -       else
> -               destroy_dma_mr(mvdev, mr);
> +       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> +               if (iotlb)
> +                       destroy_user_mr(mvdev, mr);
> +               else
> +                       destroy_dma_mr(mvdev, mr);
> +       }
>
>         return err;
>  }
>
> -int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid)
>  {
>         int err;
>
>         mutex_lock(&mvdev->mr.mkey_mtx);
> -       err = _mlx5_vdpa_create_mr(mvdev, iotlb);
> +       err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         mutex_unlock(&mvdev->mr.mkey_mtx);
>         return err;
>  }
>
>  int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> -                            bool *change_map)
> +                            bool *change_map, unsigned int asid)
>  {
>         struct mlx5_vdpa_mr *mr = &mvdev->mr;
>         int err = 0;
> @@ -566,7 +574,7 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
>                 *change_map = true;
>         }
>         if (!*change_map)
> -               err = _mlx5_vdpa_create_mr(mvdev, iotlb);
> +               err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         mutex_unlock(&mr->mkey_mtx);
>
>         return err;
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 98dd8ce8af26..3a6dbbc6440d 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2394,7 +2394,8 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
>         }
>  }
>
> -static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
> +                               struct vhost_iotlb *iotlb, unsigned int asid)
>  {
>         struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
>         int err;
> @@ -2406,7 +2407,7 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
>
>         teardown_driver(ndev);
>         mlx5_vdpa_destroy_mr(mvdev);
> -       err = mlx5_vdpa_create_mr(mvdev, iotlb);
> +       err = mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         if (err)
>                 goto err_mr;
>
> @@ -2587,7 +2588,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>         ++mvdev->generation;
>
>         if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> -               if (mlx5_vdpa_create_mr(mvdev, NULL))
> +               if (mlx5_vdpa_create_mr(mvdev, NULL, 0))
>                         mlx5_vdpa_warn(mvdev, "create MR failed\n");
>         }
>         up_write(&ndev->reslock);
> @@ -2623,41 +2624,20 @@ static u32 mlx5_vdpa_get_generation(struct vdpa_device *vdev)
>         return mvdev->generation;
>  }
>
> -static int set_map_control(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> -{
> -       u64 start = 0ULL, last = 0ULL - 1;
> -       struct vhost_iotlb_map *map;
> -       int err = 0;
> -
> -       spin_lock(&mvdev->cvq.iommu_lock);
> -       vhost_iotlb_reset(mvdev->cvq.iotlb);
> -
> -       for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
> -            map = vhost_iotlb_itree_next(map, start, last)) {
> -               err = vhost_iotlb_add_range(mvdev->cvq.iotlb, map->start,
> -                                           map->last, map->addr, map->perm);
> -               if (err)
> -                       goto out;
> -       }
> -
> -out:
> -       spin_unlock(&mvdev->cvq.iommu_lock);
> -       return err;
> -}
> -
> -static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid)
>  {
>         bool change_map;
>         int err;
>
> -       err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
> +       err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map, asid);
>         if (err) {
>                 mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
>                 return err;
>         }
>
>         if (change_map)
> -               err = mlx5_vdpa_change_map(mvdev, iotlb);
> +               err = mlx5_vdpa_change_map(mvdev, iotlb, asid);
>
>         return err;
>  }
> @@ -2670,16 +2650,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
>         int err = -EINVAL;
>
>         down_write(&ndev->reslock);
> -       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> -               err = set_map_data(mvdev, iotlb);
> -               if (err)
> -                       goto out;
> -       }
> -
> -       if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid)
> -               err = set_map_control(mvdev, iotlb);
> -
> -out:
> +       err = set_map_data(mvdev, iotlb, asid);
>         up_write(&ndev->reslock);
>         return err;
>  }
> @@ -3182,7 +3153,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>                 goto err_mpfs;
>
>         if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> -               err = mlx5_vdpa_create_mr(mvdev, NULL);
> +               err = mlx5_vdpa_create_mr(mvdev, NULL, 0);
>                 if (err)
>                         goto err_res;
>         }
> --
> 2.38.1
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb
@ 2022-11-15  3:33     ` Jason Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Jason Wang @ 2022-11-15  3:33 UTC (permalink / raw)
  To: Eli Cohen; +Cc: lulu, mst, linux-kernel, virtualization, eperezma

On Mon, Nov 14, 2022 at 9:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> When qemu uses different address spaces for data and control virtqueues,
> the current code would overwrite the control virtqueue iotlb through the
> dup_iotlb call. Fix this by referring to the address space identifier
> and the group to asid mapping to determine which mapping needs to be
> updated. We also move the address space logic from mlx5 net to core
> directory.
>
> Reported-by: Eugenio Pérez <eperezma@redhat.com>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks
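
For anyone following along, this is roughly how userspace (e.g. QEMU) ends up
with separate address spaces for the data and control virtqueue groups in the
first place. A minimal sketch using the vhost-vdpa uapi; the helper name and
the group/ASID numbers are assumptions for illustration, not something taken
from this series:

    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Move the CVQ group to a dedicated ASID so that later VHOST_IOTLB
     * updates for the CVQ no longer touch the data VQ mappings. */
    static int split_cvq_address_space(int vhost_vdpa_fd, unsigned int cvq_group)
    {
            struct vhost_vring_state state = {
                    .index = cvq_group, /* virtqueue group of the CVQ */
                    .num = 1,           /* dedicated ASID; data VQs stay in ASID 0 */
            };

            return ioctl(vhost_vdpa_fd, VHOST_VDPA_SET_GROUP_ASID, &state);
    }

Once the CVQ group sits in its own ASID, .set_map() is called once per ASID
and the driver has to pick which mapping to update, which is what this patch
addresses.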

> ---
>  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  5 +--
>  drivers/vdpa/mlx5/core/mr.c        | 44 ++++++++++++++++-----------
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 49 ++++++------------------------
>  3 files changed, 39 insertions(+), 59 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> index 6af9fdbb86b7..058fbe28107e 100644
> --- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> +++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> @@ -116,8 +116,9 @@ int mlx5_vdpa_create_mkey(struct mlx5_vdpa_dev *mvdev, u32 *mkey, u32 *in,
>                           int inlen);
>  int mlx5_vdpa_destroy_mkey(struct mlx5_vdpa_dev *mvdev, u32 mkey);
>  int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> -                            bool *change_map);
> -int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb);
> +                            bool *change_map, unsigned int asid);
> +int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid);
>  void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev);
>
>  #define mlx5_vdpa_warn(__dev, format, ...)                                                         \
> diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
> index a639b9208d41..a4d7ee2339fa 100644
> --- a/drivers/vdpa/mlx5/core/mr.c
> +++ b/drivers/vdpa/mlx5/core/mr.c
> @@ -511,7 +511,8 @@ void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev)
>         mutex_unlock(&mr->mkey_mtx);
>  }
>
> -static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
> +                               struct vhost_iotlb *iotlb, unsigned int asid)
>  {
>         struct mlx5_vdpa_mr *mr = &mvdev->mr;
>         int err;
> @@ -519,42 +520,49 @@ static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
>         if (mr->initialized)
>                 return 0;
>
> -       if (iotlb)
> -               err = create_user_mr(mvdev, iotlb);
> -       else
> -               err = create_dma_mr(mvdev, mr);
> +       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> +               if (iotlb)
> +                       err = create_user_mr(mvdev, iotlb);
> +               else
> +                       err = create_dma_mr(mvdev, mr);
>
> -       if (err)
> -               return err;
> +               if (err)
> +                       return err;
> +       }
>
> -       err = dup_iotlb(mvdev, iotlb);
> -       if (err)
> -               goto out_err;
> +       if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid) {
> +               err = dup_iotlb(mvdev, iotlb);
> +               if (err)
> +                       goto out_err;
> +       }
>
>         mr->initialized = true;
>         return 0;
>
>  out_err:
> -       if (iotlb)
> -               destroy_user_mr(mvdev, mr);
> -       else
> -               destroy_dma_mr(mvdev, mr);
> +       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> +               if (iotlb)
> +                       destroy_user_mr(mvdev, mr);
> +               else
> +                       destroy_dma_mr(mvdev, mr);
> +       }
>
>         return err;
>  }
>
> -int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid)
>  {
>         int err;
>
>         mutex_lock(&mvdev->mr.mkey_mtx);
> -       err = _mlx5_vdpa_create_mr(mvdev, iotlb);
> +       err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         mutex_unlock(&mvdev->mr.mkey_mtx);
>         return err;
>  }
>
>  int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> -                            bool *change_map)
> +                            bool *change_map, unsigned int asid)
>  {
>         struct mlx5_vdpa_mr *mr = &mvdev->mr;
>         int err = 0;
> @@ -566,7 +574,7 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
>                 *change_map = true;
>         }
>         if (!*change_map)
> -               err = _mlx5_vdpa_create_mr(mvdev, iotlb);
> +               err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         mutex_unlock(&mr->mkey_mtx);
>
>         return err;
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 98dd8ce8af26..3a6dbbc6440d 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2394,7 +2394,8 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
>         }
>  }
>
> -static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
> +                               struct vhost_iotlb *iotlb, unsigned int asid)
>  {
>         struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
>         int err;
> @@ -2406,7 +2407,7 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
>
>         teardown_driver(ndev);
>         mlx5_vdpa_destroy_mr(mvdev);
> -       err = mlx5_vdpa_create_mr(mvdev, iotlb);
> +       err = mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         if (err)
>                 goto err_mr;
>
> @@ -2587,7 +2588,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>         ++mvdev->generation;
>
>         if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> -               if (mlx5_vdpa_create_mr(mvdev, NULL))
> +               if (mlx5_vdpa_create_mr(mvdev, NULL, 0))
>                         mlx5_vdpa_warn(mvdev, "create MR failed\n");
>         }
>         up_write(&ndev->reslock);
> @@ -2623,41 +2624,20 @@ static u32 mlx5_vdpa_get_generation(struct vdpa_device *vdev)
>         return mvdev->generation;
>  }
>
> -static int set_map_control(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> -{
> -       u64 start = 0ULL, last = 0ULL - 1;
> -       struct vhost_iotlb_map *map;
> -       int err = 0;
> -
> -       spin_lock(&mvdev->cvq.iommu_lock);
> -       vhost_iotlb_reset(mvdev->cvq.iotlb);
> -
> -       for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
> -            map = vhost_iotlb_itree_next(map, start, last)) {
> -               err = vhost_iotlb_add_range(mvdev->cvq.iotlb, map->start,
> -                                           map->last, map->addr, map->perm);
> -               if (err)
> -                       goto out;
> -       }
> -
> -out:
> -       spin_unlock(&mvdev->cvq.iommu_lock);
> -       return err;
> -}
> -
> -static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid)
>  {
>         bool change_map;
>         int err;
>
> -       err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
> +       err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map, asid);
>         if (err) {
>                 mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
>                 return err;
>         }
>
>         if (change_map)
> -               err = mlx5_vdpa_change_map(mvdev, iotlb);
> +               err = mlx5_vdpa_change_map(mvdev, iotlb, asid);
>
>         return err;
>  }
> @@ -2670,16 +2650,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
>         int err = -EINVAL;
>
>         down_write(&ndev->reslock);
> -       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> -               err = set_map_data(mvdev, iotlb);
> -               if (err)
> -                       goto out;
> -       }
> -
> -       if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid)
> -               err = set_map_control(mvdev, iotlb);
> -
> -out:
> +       err = set_map_data(mvdev, iotlb, asid);
>         up_write(&ndev->reslock);
>         return err;
>  }
> @@ -3182,7 +3153,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>                 goto err_mpfs;
>
>         if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> -               err = mlx5_vdpa_create_mr(mvdev, NULL);
> +               err = mlx5_vdpa_create_mr(mvdev, NULL, 0);
>                 if (err)
>                         goto err_res;
>         }
> --
> 2.38.1
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 8/8] vdpa/mlx5: Add RX counters to debugfs
  2022-11-14 13:17 ` [PATH v2 8/8] vdpa/mlx5: Add RX counters to debugfs Eli Cohen
@ 2022-11-15  3:33     ` Jason Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Jason Wang @ 2022-11-15  3:33 UTC (permalink / raw)
  To: Eli Cohen; +Cc: lulu, mst, linux-kernel, virtualization, eperezma

On Mon, Nov 14, 2022 at 9:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> For each interface, either VLAN tagged or untagged, add two hardware
> counters: one for unicast and another for multicast. The counters count
> RX packets and bytes and can be read through debugfs:
>
> $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/packets
> $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/bytes
>
> This feature is controlled via the config option
> MLX5_VDPA_STEERING_DEBUG. It is off by default as it may have some
> impact on performance.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks
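
As a quick illustration of consuming these files from a program rather than
cat, a minimal sketch of a reader follows. The path is the example path from
the commit message, the hex parsing matches the 0x%llx format used by the
seq_file handlers in the patch, and error handling is intentionally minimal:

    #include <stdio.h>

    int main(void)
    {
            const char *path =
                    "/sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/packets";
            unsigned long long packets;
            FILE *f = fopen(path, "r");

            if (!f) {
                    perror(path);
                    return 1;
            }
            /* values are emitted as 0x%llx, so parse them as hex */
            if (fscanf(f, "%llx", &packets) == 1)
                    printf("untagged ucast packets: %llu\n", packets);
            fclose(f);
            return 0;
    }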

> ---
>  drivers/vdpa/Kconfig              |  12 ++++
>  drivers/vdpa/mlx5/net/debug.c     |  86 ++++++++++++++++++++++
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 116 +++++++++++++++++++++++-------
>  drivers/vdpa/mlx5/net/mlx5_vnet.h |  30 ++++++++
>  4 files changed, 217 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
> index 50f45d037611..43b716ec2d18 100644
> --- a/drivers/vdpa/Kconfig
> +++ b/drivers/vdpa/Kconfig
> @@ -71,6 +71,18 @@ config MLX5_VDPA_NET
>           be executed by the hardware. It also supports a variety of stateless
>           offloads depending on the actual device used and firmware version.
>
> +config MLX5_VDPA_STEERING_DEBUG
> +       bool "expose steering counters on debugfs"
> +       select MLX5_VDPA
> +       help
> +         Expose RX steering counters in debugfs to aid in debugging. For each VLAN
> +         or non-VLAN interface, two hardware counters are added to the RX flow
> +         table: one for unicast and one for multicast.
> +         The counters count the number of packets and bytes and expose them in
> +         debugfs. One can read the counters using, e.g.:
> +         cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/packets
> +         cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/bytes
> +
>  config VP_VDPA
>         tristate "Virtio PCI bridge vDPA driver"
>         select VIRTIO_PCI_LIB
> diff --git a/drivers/vdpa/mlx5/net/debug.c b/drivers/vdpa/mlx5/net/debug.c
> index 95e4801df211..60d6ac68cdc4 100644
> --- a/drivers/vdpa/mlx5/net/debug.c
> +++ b/drivers/vdpa/mlx5/net/debug.c
> @@ -49,6 +49,92 @@ void mlx5_vdpa_add_rx_flow_table(struct mlx5_vdpa_net *ndev)
>                                                   ndev, &rx_flow_table_fops);
>  }
>
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +static int packets_show(struct seq_file *file, void *priv)
> +{
> +       struct mlx5_vdpa_counter *counter = file->private;
> +       u64 packets;
> +       u64 bytes;
> +       int err;
> +
> +       err = mlx5_fc_query(counter->mdev, counter->counter, &packets, &bytes);
> +       if (err)
> +               return err;
> +
> +       seq_printf(file, "0x%llx\n", packets);
> +       return 0;
> +}
> +
> +static int bytes_show(struct seq_file *file, void *priv)
> +{
> +       struct mlx5_vdpa_counter *counter = file->private;
> +       u64 packets;
> +       u64 bytes;
> +       int err;
> +
> +       err = mlx5_fc_query(counter->mdev, counter->counter, &packets, &bytes);
> +       if (err)
> +               return err;
> +
> +       seq_printf(file, "0x%llx\n", bytes);
> +       return 0;
> +}
> +
> +DEFINE_SHOW_ATTRIBUTE(packets);
> +DEFINE_SHOW_ATTRIBUTE(bytes);
> +
> +static void add_counter_node(struct mlx5_vdpa_counter *counter,
> +                            struct dentry *parent)
> +{
> +       debugfs_create_file("packets", 0444, parent, counter,
> +                           &packets_fops);
> +       debugfs_create_file("bytes", 0444, parent, counter,
> +                           &bytes_fops);
> +}
> +
> +void mlx5_vdpa_add_rx_counters(struct mlx5_vdpa_net *ndev,
> +                              struct macvlan_node *node)
> +{
> +       static const char *ut = "untagged";
> +       char vidstr[9];
> +       u16 vid;
> +
> +       node->ucast_counter.mdev = ndev->mvdev.mdev;
> +       node->mcast_counter.mdev = ndev->mvdev.mdev;
> +       if (node->tagged) {
> +               vid = key2vid(node->macvlan);
> +               snprintf(vidstr, sizeof(vidstr), "0x%x", vid);
> +       } else {
> +               strcpy(vidstr, ut);
> +       }
> +
> +       node->dent = debugfs_create_dir(vidstr, ndev->rx_dent);
> +       if (IS_ERR(node->dent)) {
> +               node->dent = NULL;
> +               return;
> +       }
> +
> +       node->ucast_counter.dent = debugfs_create_dir("ucast", node->dent);
> +       if (IS_ERR(node->ucast_counter.dent))
> +               return;
> +
> +       add_counter_node(&node->ucast_counter, node->ucast_counter.dent);
> +
> +       node->mcast_counter.dent = debugfs_create_dir("mcast", node->dent);
> +       if (IS_ERR(node->mcast_counter.dent))
> +               return;
> +
> +       add_counter_node(&node->mcast_counter, node->mcast_counter.dent);
> +}
> +
> +void mlx5_vdpa_remove_rx_counters(struct mlx5_vdpa_net *ndev,
> +                                 struct macvlan_node *node)
> +{
> +       if (node->dent && ndev->debugfs)
> +               debugfs_remove_recursive(node->dent);
> +}
> +#endif
> +
>  void mlx5_vdpa_add_debugfs(struct mlx5_vdpa_net *ndev)
>  {
>         struct mlx5_core_dev *mdev;
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 4b097e6ddba0..6632651b1e54 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1404,12 +1404,55 @@ static void destroy_tir(struct mlx5_vdpa_net *ndev)
>  #define MAX_STEERING_ENT 0x8000
>  #define MAX_STEERING_GROUPS 2
>
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +       #define NUM_DESTS 2
> +#else
> +       #define NUM_DESTS 1
> +#endif
> +
> +static int add_steering_counters(struct mlx5_vdpa_net *ndev,
> +                                struct macvlan_node *node,
> +                                struct mlx5_flow_act *flow_act,
> +                                struct mlx5_flow_destination *dests)
> +{
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +       int err;
> +
> +       node->ucast_counter.counter = mlx5_fc_create(ndev->mvdev.mdev, false);
> +       if (IS_ERR(node->ucast_counter.counter))
> +               return PTR_ERR(node->ucast_counter.counter);
> +
> +       node->mcast_counter.counter = mlx5_fc_create(ndev->mvdev.mdev, false);
> +       if (IS_ERR(node->mcast_counter.counter)) {
> +               err = PTR_ERR(node->mcast_counter.counter);
> +               goto err_mcast_counter;
> +       }
> +
> +       dests[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
> +       flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
> +       return 0;
> +
> +err_mcast_counter:
> +       mlx5_fc_destroy(ndev->mvdev.mdev, node->ucast_counter.counter);
> +       return err;
> +#else
> +       return 0;
> +#endif
> +}
> +
> +static void remove_steering_counters(struct mlx5_vdpa_net *ndev,
> +                                    struct macvlan_node *node)
> +{
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +       mlx5_fc_destroy(ndev->mvdev.mdev, node->mcast_counter.counter);
> +       mlx5_fc_destroy(ndev->mvdev.mdev, node->ucast_counter.counter);
> +#endif
> +}
> +
>  static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
> -                                       u16 vid, bool tagged,
> -                                       struct mlx5_flow_handle **ucast,
> -                                       struct mlx5_flow_handle **mcast)
> +                                       struct macvlan_node *node)
>  {
> -       struct mlx5_flow_destination dest = {};
> +       struct mlx5_flow_destination dests[NUM_DESTS] = {};
>         struct mlx5_flow_act flow_act = {};
>         struct mlx5_flow_handle *rule;
>         struct mlx5_flow_spec *spec;
> @@ -1418,11 +1461,13 @@ static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
>         u8 *dmac_c;
>         u8 *dmac_v;
>         int err;
> +       u16 vid;
>
>         spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
>         if (!spec)
>                 return -ENOMEM;
>
> +       vid = key2vid(node->macvlan);
>         spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
>         headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, outer_headers);
>         headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, outer_headers);
> @@ -1434,44 +1479,58 @@ static int mlx5_vdpa_add_mac_vlan_rules(struct mlx5_vdpa_net *ndev, u8 *mac,
>                 MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
>                 MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, first_vid);
>         }
> -       if (tagged) {
> +       if (node->tagged) {
>                 MLX5_SET(fte_match_set_lyr_2_4, headers_v, cvlan_tag, 1);
>                 MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid, vid);
>         }
>         flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
> -       dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
> -       dest.tir_num = ndev->res.tirn;
> -       rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, &dest, 1);
> -       if (IS_ERR(rule))
> -               return PTR_ERR(rule);
> +       dests[0].type = MLX5_FLOW_DESTINATION_TYPE_TIR;
> +       dests[0].tir_num = ndev->res.tirn;
> +       err = add_steering_counters(ndev, node, &flow_act, dests);
> +       if (err)
> +               goto out_free;
> +
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +       dests[1].counter_id = mlx5_fc_id(node->ucast_counter.counter);
> +#endif
> +       node->ucast_rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, dests, NUM_DESTS);
> +       if (IS_ERR(node->ucast_rule)) {
> +               err = PTR_ERR(node->ucast_rule);
> +               goto err_ucast;
> +       }
>
> -       *ucast = rule;
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +       dests[1].counter_id = mlx5_fc_id(node->mcast_counter.counter);
> +#endif
>
>         memset(dmac_c, 0, ETH_ALEN);
>         memset(dmac_v, 0, ETH_ALEN);
>         dmac_c[0] = 1;
>         dmac_v[0] = 1;
> -       rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, &dest, 1);
> -       kvfree(spec);
> +       node->mcast_rule = mlx5_add_flow_rules(ndev->rxft, spec, &flow_act, dests, NUM_DESTS);
>         if (IS_ERR(rule)) {
>                 err = PTR_ERR(rule);
>                 goto err_mcast;
>         }
> -
> -       *mcast = rule;
> +       kvfree(spec);
> +       mlx5_vdpa_add_rx_counters(ndev, node);
>         return 0;
>
>  err_mcast:
> -       mlx5_del_flow_rules(*ucast);
> +       mlx5_del_flow_rules(node->ucast_rule);
> +err_ucast:
> +       remove_steering_counters(ndev, node);
> +out_free:
> +       kvfree(spec);
>         return err;
>  }
>
>  static void mlx5_vdpa_del_mac_vlan_rules(struct mlx5_vdpa_net *ndev,
> -                                        struct mlx5_flow_handle *ucast,
> -                                        struct mlx5_flow_handle *mcast)
> +                                        struct macvlan_node *node)
>  {
> -       mlx5_del_flow_rules(ucast);
> -       mlx5_del_flow_rules(mcast);
> +       mlx5_vdpa_remove_rx_counters(ndev, node);
> +       mlx5_del_flow_rules(node->ucast_rule);
> +       mlx5_del_flow_rules(node->mcast_rule);
>  }
>
>  static u64 search_val(u8 *mac, u16 vlan, bool tagged)
> @@ -1505,14 +1564,14 @@ static struct macvlan_node *mac_vlan_lookup(struct mlx5_vdpa_net *ndev, u64 valu
>         return NULL;
>  }
>
> -static int mac_vlan_add(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vlan, bool tagged) // vlan -> vid
> +static int mac_vlan_add(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vid, bool tagged)
>  {
>         struct macvlan_node *ptr;
>         u64 val;
>         u32 idx;
>         int err;
>
> -       val = search_val(mac, vlan, tagged);
> +       val = search_val(mac, vid, tagged);
>         if (mac_vlan_lookup(ndev, val))
>                 return -EEXIST;
>
> @@ -1520,12 +1579,13 @@ static int mac_vlan_add(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vlan, bool tagg
>         if (!ptr)
>                 return -ENOMEM;
>
> -       err = mlx5_vdpa_add_mac_vlan_rules(ndev, ndev->config.mac, vlan, tagged,
> -                                          &ptr->ucast_rule, &ptr->mcast_rule);
> +       ptr->tagged = tagged;
> +       ptr->macvlan = val;
> +       ptr->ndev = ndev;
> +       err = mlx5_vdpa_add_mac_vlan_rules(ndev, ndev->config.mac, ptr);
>         if (err)
>                 goto err_add;
>
> -       ptr->macvlan = val;
>         idx = hash_64(val, 8);
>         hlist_add_head(&ptr->hlist, &ndev->macvlan_hash[idx]);
>         return 0;
> @@ -1544,7 +1604,8 @@ static void mac_vlan_del(struct mlx5_vdpa_net *ndev, u8 *mac, u16 vlan, bool tag
>                 return;
>
>         hlist_del(&ptr->hlist);
> -       mlx5_vdpa_del_mac_vlan_rules(ndev, ptr->ucast_rule, ptr->mcast_rule);
> +       mlx5_vdpa_del_mac_vlan_rules(ndev, ptr);
> +       remove_steering_counters(ndev, ptr);
>         kfree(ptr);
>  }
>
> @@ -1557,7 +1618,8 @@ static void clear_mac_vlan_table(struct mlx5_vdpa_net *ndev)
>         for (i = 0; i < MLX5V_MACVLAN_SIZE; i++) {
>                 hlist_for_each_entry_safe(pos, n, &ndev->macvlan_hash[i], hlist) {
>                         hlist_del(&pos->hlist);
> -                       mlx5_vdpa_del_mac_vlan_rules(ndev, pos->ucast_rule, pos->mcast_rule);
> +                       mlx5_vdpa_del_mac_vlan_rules(ndev, pos);
> +                       remove_steering_counters(ndev, pos);
>                         kfree(pos);
>                 }
>         }
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.h b/drivers/vdpa/mlx5/net/mlx5_vnet.h
> index f2cef3925e5b..c90a89e1de4d 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.h
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.h
> @@ -21,6 +21,11 @@ struct mlx5_vdpa_net_resources {
>
>  #define MLX5V_MACVLAN_SIZE 256
>
> +static inline u16 key2vid(u64 key)
> +{
> +       return (u16)(key >> 48) & 0xfff;
> +}
> +
>  struct mlx5_vdpa_net {
>         struct mlx5_vdpa_dev mvdev;
>         struct mlx5_vdpa_net_resources res;
> @@ -47,11 +52,24 @@ struct mlx5_vdpa_net {
>         struct dentry *debugfs;
>  };
>
> +struct mlx5_vdpa_counter {
> +       struct mlx5_fc *counter;
> +       struct dentry *dent;
> +       struct mlx5_core_dev *mdev;
> +};
> +
>  struct macvlan_node {
>         struct hlist_node hlist;
>         struct mlx5_flow_handle *ucast_rule;
>         struct mlx5_flow_handle *mcast_rule;
>         u64 macvlan;
> +       struct mlx5_vdpa_net *ndev;
> +       bool tagged;
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +       struct dentry *dent;
> +       struct mlx5_vdpa_counter ucast_counter;
> +       struct mlx5_vdpa_counter mcast_counter;
> +#endif
>  };
>
>  void mlx5_vdpa_add_debugfs(struct mlx5_vdpa_net *ndev);
> @@ -60,5 +78,17 @@ void mlx5_vdpa_add_rx_flow_table(struct mlx5_vdpa_net *ndev);
>  void mlx5_vdpa_remove_rx_flow_table(struct mlx5_vdpa_net *ndev);
>  void mlx5_vdpa_add_tirn(struct mlx5_vdpa_net *ndev);
>  void mlx5_vdpa_remove_tirn(struct mlx5_vdpa_net *ndev);
> +#if defined(CONFIG_MLX5_VDPA_STEERING_DEBUG)
> +void mlx5_vdpa_add_rx_counters(struct mlx5_vdpa_net *ndev,
> +                              struct macvlan_node *node);
> +void mlx5_vdpa_remove_rx_counters(struct mlx5_vdpa_net *ndev,
> +                                 struct macvlan_node *node);
> +#else
> +static inline void mlx5_vdpa_add_rx_counters(struct mlx5_vdpa_net *ndev,
> +                                            struct macvlan_node *node) {}
> +static inline void mlx5_vdpa_remove_rx_counters(struct mlx5_vdpa_net *ndev,
> +                                               struct macvlan_node *node) {}
> +#endif
> +
>
>  #endif /* __MLX5_VNET_H__ */
> --
> 2.38.1
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb
  2022-11-14 13:17 ` [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb Eli Cohen
  2022-11-15  3:33     ` Jason Wang
@ 2022-11-15  9:41   ` Eugenio Perez Martin
  1 sibling, 0 replies; 21+ messages in thread
From: Eugenio Perez Martin @ 2022-11-15  9:41 UTC (permalink / raw)
  To: Eli Cohen; +Cc: mst, jasowang, linux-kernel, virtualization, si-wei.liu, lulu

On Mon, Nov 14, 2022 at 2:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> When qemu uses different address spaces for data and control virtqueues,
> the current code would overwrite the control virtqueue iotlb through the
> dup_iotlb call. Fix this by referring to the address space identifier
> and the group to asid mapping to determine which mapping needs to be
> updated. We also move the address space logic from mlx5 net to core
> directory.
>
> Reported-by: Eugenio Pérez <eperezma@redhat.com>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Eugenio Pérez <eperezma@redhat.com>

> ---
>  drivers/vdpa/mlx5/core/mlx5_vdpa.h |  5 +--
>  drivers/vdpa/mlx5/core/mr.c        | 44 ++++++++++++++++-----------
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 49 ++++++------------------------
>  3 files changed, 39 insertions(+), 59 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> index 6af9fdbb86b7..058fbe28107e 100644
> --- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> +++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
> @@ -116,8 +116,9 @@ int mlx5_vdpa_create_mkey(struct mlx5_vdpa_dev *mvdev, u32 *mkey, u32 *in,
>                           int inlen);
>  int mlx5_vdpa_destroy_mkey(struct mlx5_vdpa_dev *mvdev, u32 mkey);
>  int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> -                            bool *change_map);
> -int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb);
> +                            bool *change_map, unsigned int asid);
> +int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid);
>  void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev);
>
>  #define mlx5_vdpa_warn(__dev, format, ...)                                                         \
> diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
> index a639b9208d41..a4d7ee2339fa 100644
> --- a/drivers/vdpa/mlx5/core/mr.c
> +++ b/drivers/vdpa/mlx5/core/mr.c
> @@ -511,7 +511,8 @@ void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev)
>         mutex_unlock(&mr->mkey_mtx);
>  }
>
> -static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
> +                               struct vhost_iotlb *iotlb, unsigned int asid)
>  {
>         struct mlx5_vdpa_mr *mr = &mvdev->mr;
>         int err;
> @@ -519,42 +520,49 @@ static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
>         if (mr->initialized)
>                 return 0;
>
> -       if (iotlb)
> -               err = create_user_mr(mvdev, iotlb);
> -       else
> -               err = create_dma_mr(mvdev, mr);
> +       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> +               if (iotlb)
> +                       err = create_user_mr(mvdev, iotlb);
> +               else
> +                       err = create_dma_mr(mvdev, mr);
>
> -       if (err)
> -               return err;
> +               if (err)
> +                       return err;
> +       }
>
> -       err = dup_iotlb(mvdev, iotlb);
> -       if (err)
> -               goto out_err;
> +       if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid) {
> +               err = dup_iotlb(mvdev, iotlb);
> +               if (err)
> +                       goto out_err;
> +       }
>
>         mr->initialized = true;
>         return 0;
>
>  out_err:
> -       if (iotlb)
> -               destroy_user_mr(mvdev, mr);
> -       else
> -               destroy_dma_mr(mvdev, mr);
> +       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> +               if (iotlb)
> +                       destroy_user_mr(mvdev, mr);
> +               else
> +                       destroy_dma_mr(mvdev, mr);
> +       }
>
>         return err;
>  }
>
> -int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid)
>  {
>         int err;
>
>         mutex_lock(&mvdev->mr.mkey_mtx);
> -       err = _mlx5_vdpa_create_mr(mvdev, iotlb);
> +       err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         mutex_unlock(&mvdev->mr.mkey_mtx);
>         return err;
>  }
>
>  int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> -                            bool *change_map)
> +                            bool *change_map, unsigned int asid)
>  {
>         struct mlx5_vdpa_mr *mr = &mvdev->mr;
>         int err = 0;
> @@ -566,7 +574,7 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
>                 *change_map = true;
>         }
>         if (!*change_map)
> -               err = _mlx5_vdpa_create_mr(mvdev, iotlb);
> +               err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         mutex_unlock(&mr->mkey_mtx);
>
>         return err;
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 98dd8ce8af26..3a6dbbc6440d 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2394,7 +2394,8 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
>         }
>  }
>
> -static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
> +                               struct vhost_iotlb *iotlb, unsigned int asid)
>  {
>         struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
>         int err;
> @@ -2406,7 +2407,7 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
>
>         teardown_driver(ndev);
>         mlx5_vdpa_destroy_mr(mvdev);
> -       err = mlx5_vdpa_create_mr(mvdev, iotlb);
> +       err = mlx5_vdpa_create_mr(mvdev, iotlb, asid);
>         if (err)
>                 goto err_mr;
>
> @@ -2587,7 +2588,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
>         ++mvdev->generation;
>
>         if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> -               if (mlx5_vdpa_create_mr(mvdev, NULL))
> +               if (mlx5_vdpa_create_mr(mvdev, NULL, 0))
>                         mlx5_vdpa_warn(mvdev, "create MR failed\n");
>         }
>         up_write(&ndev->reslock);
> @@ -2623,41 +2624,20 @@ static u32 mlx5_vdpa_get_generation(struct vdpa_device *vdev)
>         return mvdev->generation;
>  }
>
> -static int set_map_control(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> -{
> -       u64 start = 0ULL, last = 0ULL - 1;
> -       struct vhost_iotlb_map *map;
> -       int err = 0;
> -
> -       spin_lock(&mvdev->cvq.iommu_lock);
> -       vhost_iotlb_reset(mvdev->cvq.iotlb);
> -
> -       for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
> -            map = vhost_iotlb_itree_next(map, start, last)) {
> -               err = vhost_iotlb_add_range(mvdev->cvq.iotlb, map->start,
> -                                           map->last, map->addr, map->perm);
> -               if (err)
> -                       goto out;
> -       }
> -
> -out:
> -       spin_unlock(&mvdev->cvq.iommu_lock);
> -       return err;
> -}
> -
> -static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
> +static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
> +                       unsigned int asid)
>  {
>         bool change_map;
>         int err;
>
> -       err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
> +       err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map, asid);
>         if (err) {
>                 mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
>                 return err;
>         }
>
>         if (change_map)
> -               err = mlx5_vdpa_change_map(mvdev, iotlb);
> +               err = mlx5_vdpa_change_map(mvdev, iotlb, asid);
>
>         return err;
>  }
> @@ -2670,16 +2650,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
>         int err = -EINVAL;
>
>         down_write(&ndev->reslock);
> -       if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
> -               err = set_map_data(mvdev, iotlb);
> -               if (err)
> -                       goto out;
> -       }
> -
> -       if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid)
> -               err = set_map_control(mvdev, iotlb);
> -
> -out:
> +       err = set_map_data(mvdev, iotlb, asid);
>         up_write(&ndev->reslock);
>         return err;
>  }
> @@ -3182,7 +3153,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>                 goto err_mpfs;
>
>         if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
> -               err = mlx5_vdpa_create_mr(mvdev, NULL);
> +               err = mlx5_vdpa_create_mr(mvdev, NULL, 0);
>                 if (err)
>                         goto err_res;
>         }
> --
> 2.38.1
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported
  2022-11-14 13:17 ` [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported Eli Cohen
  2022-11-15  3:10     ` Jason Wang
@ 2022-11-15  9:43   ` Eugenio Perez Martin
  1 sibling, 0 replies; 21+ messages in thread
From: Eugenio Perez Martin @ 2022-11-15  9:43 UTC (permalink / raw)
  To: Eli Cohen; +Cc: mst, jasowang, linux-kernel, virtualization, si-wei.liu, lulu

On Mon, Nov 14, 2022 at 2:18 PM Eli Cohen <elic@nvidia.com> wrote:
>
> Check if VIRTIO_NET_F_CTRL_VLAN is negotiated and return error if
> control VQ command is received.
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>

Acked-by: Eugenio Pérez <eperezma@redhat.com>
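
For reference, the command being gated here carries a VLAN id right after the
generic control header. The struct below only illustrates that layout; the
driver reads the header and the id separately and does not define such a
struct itself:

    #include <linux/virtio_net.h>
    #include <linux/virtio_types.h>

    struct ctrl_vlan_cmd {
            struct virtio_net_ctrl_hdr hdr; /* class = VIRTIO_NET_CTRL_VLAN,
                                             * cmd = VIRTIO_NET_CTRL_VLAN_ADD or _DEL
                                             */
            __virtio16 vid;                 /* VLAN id to add to or remove from the filter */
    };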

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 3fb06dcee943..01da229d22da 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1823,6 +1823,9 @@ static virtio_net_ctrl_ack handle_ctrl_vlan(struct mlx5_vdpa_dev *mvdev, u8 cmd)
>         size_t read;
>         u16 id;
>
> +       if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN)))
> +               return status;
> +
>         switch (cmd) {
>         case VIRTIO_NET_CTRL_VLAN_ADD:
>                 read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, &vlan, sizeof(vlan));
> --
> 2.38.1
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
  2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
                   ` (7 preceding siblings ...)
  2022-11-14 13:17 ` [PATH v2 8/8] vdpa/mlx5: Add RX counters to debugfs Eli Cohen
@ 2022-11-24  6:34 ` Eli Cohen
  2022-12-13  7:33   ` Eli Cohen
  8 siblings, 1 reply; 21+ messages in thread
From: Eli Cohen @ 2022-11-24  6:34 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization; +Cc: si-wei.liu, eperezma, lulu

Hi Michael,

Are you going to pull this series? It has been reviewed.


> -----Original Message-----
> From: Eli Cohen <elic@nvidia.com>
> Sent: Monday, 14 November 2022 15:18
> To: mst@redhat.com; jasowang@redhat.com; linux-kernel@vger.kernel.org;
> virtualization@lists.linux-foundation.org
> Cc: si-wei.liu@oracle.com; eperezma@redhat.com; lulu@redhat.com; Eli
> Cohen <elic@nvidia.com>
> Subject: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
> 
> This series is a resend of previously sent patch list. It adds a few
> fixes so I treat as a v0 of a new series.
> 
> It adds a kernel config param CONFIG_MLX5_VDPA_STEERING_DEBUG that
> when
> eabled allows to read rx unicast and multicast counters per tagged or untagged
> traffic.
> 
> Examples:
> $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-
> 0/rx/untagged/mcast/packets
> $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/bytes
> 
> v1->v2:
> 1. Reorder patches so fixes are first
> 2. Break "Fix rule forwarding VLAN to TIR" into two patches
> 3. Squash fix for bug in first patch from "Add RX counters to debugfs"
> 4. Move clearing of nb_registered before calling mlx5_notifier_unregister() in
> mlx5_vdpa_dev_del()
> 
> 
> Eli Cohen (8):
>   vdpa/mlx5: Fix rule forwarding VLAN to TIR
>   vdpa/mlx5: Return error on vlan ctrl commands if not supported
>   vdpa/mlx5: Fix wrong mac address deletion
>   vdpa/mlx5: Avoid using reslock in event_handler
>   vdpa/mlx5: Avoid overwriting CVQ iotlb
>   vdpa/mlx5: Move some definitions to a new header file
>   vdpa/mlx5: Add debugfs subtree
>   vdpa/mlx5: Add RX counters to debugfs
> 
>  drivers/vdpa/Kconfig               |  12 ++
>  drivers/vdpa/mlx5/Makefile         |   2 +-
>  drivers/vdpa/mlx5/core/mlx5_vdpa.h |   5 +-
>  drivers/vdpa/mlx5/core/mr.c        |  44 ++---
>  drivers/vdpa/mlx5/net/debug.c      | 152 ++++++++++++++++++
>  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 250 ++++++++++++++---------------
>  drivers/vdpa/mlx5/net/mlx5_vnet.h  |  94 +++++++++++
>  7 files changed, 412 insertions(+), 147 deletions(-)
>  create mode 100644 drivers/vdpa/mlx5/net/debug.c
>  create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h
> 
> --
> 2.38.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
  2022-11-24  6:34 ` [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
@ 2022-12-13  7:33   ` Eli Cohen
  2022-12-13 15:19       ` Michael S. Tsirkin
  0 siblings, 1 reply; 21+ messages in thread
From: Eli Cohen @ 2022-12-13  7:33 UTC (permalink / raw)
  To: mst, jasowang, linux-kernel, virtualization; +Cc: si-wei.liu, eperezma, lulu

Michael?

> -----Original Message-----
> From: Eli Cohen
> Sent: Thursday, 24 November 2022 8:34
> To: mst@redhat.com; jasowang@redhat.com; linux-kernel@vger.kernel.org;
> virtualization@lists.linux-foundation.org
> Cc: si-wei.liu@oracle.com; eperezma@redhat.com; lulu@redhat.com
> Subject: RE: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
> 
> Hi Michael,
> 
> Are you going to pull this series? It has been reviewed.
> 
> 
> > -----Original Message-----
> > From: Eli Cohen <elic@nvidia.com>
> > Sent: Monday, 14 November 2022 15:18
> > To: mst@redhat.com; jasowang@redhat.com; linux-
> kernel@vger.kernel.org;
> > virtualization@lists.linux-foundation.org
> > Cc: si-wei.liu@oracle.com; eperezma@redhat.com; lulu@redhat.com; Eli
> > Cohen <elic@nvidia.com>
> > Subject: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
> >
> > This series is a resend of a previously sent patch list. It adds a few
> > fixes, so I treat it as a v0 of a new series.
> >
> > It adds a kernel config param CONFIG_MLX5_VDPA_STEERING_DEBUG that,
> > when enabled, allows reading rx unicast and multicast counters for tagged
> > or untagged traffic.
> >
> > Examples:
> > $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/packets
> > $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/bytes
> >
> > v1->v2:
> > 1. Reorder patches so fixes are first
> > 2. Break "Fix rule forwarding VLAN to TIR" into two patches
> > 3. Squash fix for bug in first patch from "Add RX counters to debugfs"
> > 4. Move clearing of nb_registered before calling mlx5_notifier_unregister() in
> >    mlx5_vdpa_dev_del()
> >
> >
> > Eli Cohen (8):
> >   vdpa/mlx5: Fix rule forwarding VLAN to TIR
> >   vdpa/mlx5: Return error on vlan ctrl commands if not supported
> >   vdpa/mlx5: Fix wrong mac address deletion
> >   vdpa/mlx5: Avoid using reslock in event_handler
> >   vdpa/mlx5: Avoid overwriting CVQ iotlb
> >   vdpa/mlx5: Move some definitions to a new header file
> >   vdpa/mlx5: Add debugfs subtree
> >   vdpa/mlx5: Add RX counters to debugfs
> >
> >  drivers/vdpa/Kconfig               |  12 ++
> >  drivers/vdpa/mlx5/Makefile         |   2 +-
> >  drivers/vdpa/mlx5/core/mlx5_vdpa.h |   5 +-
> >  drivers/vdpa/mlx5/core/mr.c        |  44 ++---
> >  drivers/vdpa/mlx5/net/debug.c      | 152 ++++++++++++++++++
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 250 ++++++++++++++---------------
> >  drivers/vdpa/mlx5/net/mlx5_vnet.h  |  94 +++++++++++
> >  7 files changed, 412 insertions(+), 147 deletions(-)
> >  create mode 100644 drivers/vdpa/mlx5/net/debug.c
> >  create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h
> >
> > --
> > 2.38.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
  2022-12-13  7:33   ` Eli Cohen
@ 2022-12-13 15:19       ` Michael S. Tsirkin
  0 siblings, 0 replies; 21+ messages in thread
From: Michael S. Tsirkin @ 2022-12-13 15:19 UTC (permalink / raw)
  To: Eli Cohen; +Cc: lulu, linux-kernel, virtualization, eperezma

Yes it's all going into the next pull, thanks!

On Tue, Dec 13, 2022 at 07:33:08AM +0000, Eli Cohen wrote:
> Michael?
> 
> > -----Original Message-----
> > From: Eli Cohen
> > Sent: Thursday, 24 November 2022 8:34
> > To: mst@redhat.com; jasowang@redhat.com; linux-kernel@vger.kernel.org;
> > virtualization@lists.linux-foundation.org
> > Cc: si-wei.liu@oracle.com; eperezma@redhat.com; lulu@redhat.com
> > Subject: RE: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
> > 
> > Hi Michael,
> > 
> > Are you going to pull this series? It has been reviewed.
> > 
> > 
> > > -----Original Message-----
> > > From: Eli Cohen <elic@nvidia.com>
> > > Sent: Monday, 14 November 2022 15:18
> > > To: mst@redhat.com; jasowang@redhat.com; linux-kernel@vger.kernel.org;
> > > virtualization@lists.linux-foundation.org
> > > Cc: si-wei.liu@oracle.com; eperezma@redhat.com; lulu@redhat.com; Eli
> > > Cohen <elic@nvidia.com>
> > > Subject: [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes
> > >
> > > This series is a resend of a previously sent patch list. It adds a few
> > > fixes, so I treat it as a v0 of a new series.
> > >
> > > It adds a kernel config param CONFIG_MLX5_VDPA_STEERING_DEBUG that,
> > > when enabled, allows reading rx unicast and multicast counters for tagged
> > > or untagged traffic.
> > >
> > > Examples:
> > > $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/mcast/packets
> > > $ cat /sys/kernel/debug/mlx5/mlx5_core.sf.1/vdpa-0/rx/untagged/ucast/bytes
> > >
> > > v1->v2:
> > > 1. Reorder patches so fixes are first
> > > 2. Break "Fix rule forwarding VLAN to TIR" into two patches
> > > 3. Squash fix for bug in first patch from "Add RX counters to debugfs"
> > > 4. Move clearing of nb_registered before calling mlx5_notifier_unregister() in
> > >    mlx5_vdpa_dev_del()
> > >
> > >
> > > Eli Cohen (8):
> > >   vdpa/mlx5: Fix rule forwarding VLAN to TIR
> > >   vdpa/mlx5: Return error on vlan ctrl commands if not supported
> > >   vdpa/mlx5: Fix wrong mac address deletion
> > >   vdpa/mlx5: Avoid using reslock in event_handler
> > >   vdpa/mlx5: Avoid overwriting CVQ iotlb
> > >   vdpa/mlx5: Move some definitions to a new header file
> > >   vdpa/mlx5: Add debugfs subtree
> > >   vdpa/mlx5: Add RX counters to debugfs
> > >
> > >  drivers/vdpa/Kconfig               |  12 ++
> > >  drivers/vdpa/mlx5/Makefile         |   2 +-
> > >  drivers/vdpa/mlx5/core/mlx5_vdpa.h |   5 +-
> > >  drivers/vdpa/mlx5/core/mr.c        |  44 ++---
> > >  drivers/vdpa/mlx5/net/debug.c      | 152 ++++++++++++++++++
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.c  | 250 ++++++++++++++---------------
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.h  |  94 +++++++++++
> > >  7 files changed, 412 insertions(+), 147 deletions(-)
> > >  create mode 100644 drivers/vdpa/mlx5/net/debug.c
> > >  create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h
> > >
> > > --
> > > 2.38.1
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2022-12-13 15:20 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-14 13:17 [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
2022-11-14 13:17 ` [PATH v2 1/8] vdpa/mlx5: Fix rule forwarding VLAN to TIR Eli Cohen
2022-11-14 13:17 ` [PATH v2 2/8] vdpa/mlx5: Return error on vlan ctrl commands if not supported Eli Cohen
2022-11-15  3:10   ` Jason Wang
2022-11-15  3:10     ` Jason Wang
2022-11-15  9:43   ` Eugenio Perez Martin
2022-11-14 13:17 ` [PATH v2 3/8] vdpa/mlx5: Fix wrong mac address deletion Eli Cohen
2022-11-14 13:17 ` [PATH v2 4/8] vdpa/mlx5: Avoid using reslock in event_handler Eli Cohen
2022-11-14 13:17 ` [PATH v2 5/8] vdpa/mlx5: Avoid overwriting CVQ iotlb Eli Cohen
2022-11-15  3:33   ` Jason Wang
2022-11-15  3:33     ` Jason Wang
2022-11-15  9:41   ` Eugenio Perez Martin
2022-11-14 13:17 ` [PATH v2 6/8] vdpa/mlx5: Move some definitions to a new header file Eli Cohen
2022-11-14 13:17 ` [PATH v2 7/8] vdpa/mlx5: Add debugfs subtree Eli Cohen
2022-11-14 13:17 ` [PATH v2 8/8] vdpa/mlx5: Add RX counters to debugfs Eli Cohen
2022-11-15  3:33   ` Jason Wang
2022-11-15  3:33     ` Jason Wang
2022-11-24  6:34 ` [PATH v2 0/8] vdpa/mlx5: Add debugfs subtree and fixes Eli Cohen
2022-12-13  7:33   ` Eli Cohen
2022-12-13 15:19     ` Michael S. Tsirkin
