* [PATCH iwl-next v1 0/4] change MSI-X vectors per VF
@ 2023-06-15 12:38 ` Michal Swiatkowski
  0 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-15 12:38 UTC (permalink / raw)
  To: intel-wired-lan
  Cc: netdev, jacob.e.keller, przemyslaw.kitszel, Michal Swiatkowski

Hi,

This patchset implements the sysfs API introduced in [1].

It allows the user to assign a different number of MSI-X vectors to each
VF, for example when VMs have different numbers of virtual cores.

Example:
1. Turn off autoprobe
echo 0 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_drivers_autoprobe
2. Create VFs
echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
3. Configure MSI-X
echo 20 > /sys/class/pci_bus/0000\:18/device/0000\:18\:01.0/sriov_vf_msix_count
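Optionally (a sketch of follow-up steps, not part of the original flow;
both attributes below are the standard PCI sysfs ones from [1]):
4. Check the total MSI-X pool available for the VFs
cat /sys/bus/pci/devices/0000\:18\:00.0/sriov_vf_total_msix
5. Re-enable autoprobe so VF drivers bind with the new vector counts
echo 1 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_drivers_autoprobe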

[1] https://lore.kernel.org/netdev/20210314124256.70253-1-leon@kernel.org/

Michal Swiatkowski (4):
  ice: implement num_msix field per VF
  ice: add bitmap to track VF MSI-X usage
  ice: set MSI-X vector count on VF
  ice: manage VFs MSI-X using resource tracking

 drivers/net/ethernet/intel/ice/ice.h          |   2 +
 drivers/net/ethernet/intel/ice/ice_lib.c      |   2 +-
 drivers/net/ethernet/intel/ice/ice_main.c     |   2 +
 drivers/net/ethernet/intel/ice/ice_sriov.c    | 257 ++++++++++++++++--
 drivers/net/ethernet/intel/ice/ice_sriov.h    |  13 +
 drivers/net/ethernet/intel/ice/ice_vf_lib.h   |   4 +-
 drivers/net/ethernet/intel/ice/ice_virtchnl.c |   2 +-
 7 files changed, 258 insertions(+), 24 deletions(-)

-- 
2.40.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH iwl-next v1 1/4] ice: implement num_msix field per VF
  2023-06-15 12:38 ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-15 12:38   ` Michal Swiatkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-15 12:38 UTC (permalink / raw)
  To: intel-wired-lan
  Cc: netdev, jacob.e.keller, przemyslaw.kitszel, Michal Swiatkowski

Store the number of MSI-X vectors per VF in the VF structure instead of
in the pf struct. It is used to calculate the number of q_vectors (and
queues) for the VF VSI.

Calculate vector indexes based on this new field.

Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c      |  2 +-
 drivers/net/ethernet/intel/ice/ice_sriov.c    | 13 +++++++++----
 drivers/net/ethernet/intel/ice/ice_vf_lib.h   |  4 +++-
 drivers/net/ethernet/intel/ice/ice_virtchnl.c |  2 +-
 4 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index e8142bea2eb2..24a0bf403445 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -229,7 +229,7 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
 		 * of queues vectors, subtract 1 (ICE_NONQ_VECS_VF) from the
 		 * original vector count
 		 */
-		vsi->num_q_vectors = pf->vfs.num_msix_per - ICE_NONQ_VECS_VF;
+		vsi->num_q_vectors = vf->num_msix - ICE_NONQ_VECS_VF;
 		break;
 	case ICE_VSI_CTRL:
 		vsi->alloc_txq = 1;
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 2ea6d24977a6..3137e772a64b 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -64,7 +64,7 @@ static void ice_free_vf_res(struct ice_vf *vf)
 		vf->num_mac = 0;
 	}
 
-	last_vector_idx = vf->first_vector_idx + pf->vfs.num_msix_per - 1;
+	last_vector_idx = vf->first_vector_idx + vf->num_msix - 1;
 
 	/* clear VF MDD event information */
 	memset(&vf->mdd_tx_events, 0, sizeof(vf->mdd_tx_events));
@@ -102,7 +102,7 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)
 	wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), 0);
 
 	first = vf->first_vector_idx;
-	last = first + pf->vfs.num_msix_per - 1;
+	last = first + vf->num_msix - 1;
 	for (v = first; v <= last; v++) {
 		u32 reg;
 
@@ -280,12 +280,12 @@ static void ice_ena_vf_msix_mappings(struct ice_vf *vf)
 
 	hw = &pf->hw;
 	pf_based_first_msix = vf->first_vector_idx;
-	pf_based_last_msix = (pf_based_first_msix + pf->vfs.num_msix_per) - 1;
+	pf_based_last_msix = (pf_based_first_msix + vf->num_msix) - 1;
 
 	device_based_first_msix = pf_based_first_msix +
 		pf->hw.func_caps.common_cap.msix_vector_first_id;
 	device_based_last_msix =
-		(device_based_first_msix + pf->vfs.num_msix_per) - 1;
+		(device_based_first_msix + vf->num_msix) - 1;
 	device_based_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
 
 	reg = (((device_based_first_msix << VPINT_ALLOC_FIRST_S) &
@@ -814,6 +814,11 @@ static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
 
 		vf->vf_sw_id = pf->first_sw;
 
+		/* set default number of MSI-X */
+		vf->num_msix = pf->vfs.num_msix_per;
+		vf->num_vf_qs = pf->vfs.num_qps_per;
+		ice_vc_set_default_allowlist(vf);
+
 		hash_add_rcu(vfs->table, &vf->entry, vf_id);
 	}
 
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
index 67172fdd9bc2..4dbfb7e26bfa 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
@@ -72,7 +72,7 @@ struct ice_vfs {
 	struct mutex table_lock;	/* Lock for protecting the hash table */
 	u16 num_supported;		/* max supported VFs on this PF */
 	u16 num_qps_per;		/* number of queue pairs per VF */
-	u16 num_msix_per;		/* number of MSI-X vectors per VF */
+	u16 num_msix_per;		/* default MSI-X vectors per VF */
 	unsigned long last_printed_mdd_jiffies;	/* MDD message rate limit */
 };
 
@@ -133,6 +133,8 @@ struct ice_vf {
 
 	/* devlink port data */
 	struct devlink_port devlink_port;
+
+	u16 num_msix;			/* num of MSI-X configured on this VF */
 };
 
 /* Flags for controlling behavior of ice_reset_vf */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index efbc2968a7bf..37b588774ac1 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -498,7 +498,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 	vfres->num_vsis = 1;
 	/* Tx and Rx queue are equal for VF */
 	vfres->num_queue_pairs = vsi->num_txq;
-	vfres->max_vectors = vf->pf->vfs.num_msix_per;
+	vfres->max_vectors = vf->num_msix;
 	vfres->rss_key_size = ICE_VSIQF_HKEY_ARRAY_SIZE;
 	vfres->rss_lut_size = ICE_VSIQF_HLUT_ARRAY_SIZE;
 	vfres->max_mtu = ice_vc_get_max_frame_size(vf);
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread
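
A minimal sketch of the change above, using only names from the diff: a VF
configured with e.g. 17 MSI-X vectors ends up with 16 queue vectors, since
one vector (ICE_NONQ_VECS_VF) is reserved for non-queue interrupts.

	/* before: one PF-wide value applied to every VF */
	vsi->num_q_vectors = pf->vfs.num_msix_per - ICE_NONQ_VECS_VF;
	/* after: each VF carries its own count, seeded from the old default */
	vsi->num_q_vectors = vf->num_msix - ICE_NONQ_VECS_VF;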

* [PATCH iwl-next v1 2/4] ice: add bitmap to track VF MSI-X usage
  2023-06-15 12:38 ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-15 12:38   ` Michal Swiatkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-15 12:38 UTC (permalink / raw)
  To: intel-wired-lan
  Cc: netdev, jacob.e.keller, przemyslaw.kitszel, Michal Swiatkowski

Create a bitmap to track MSI-X usage for VFs. The bitmap is sized to the
total number of MSI-X vectors on the device, because at init time the
number of MSI-X vectors that will be used by the VFs isn't known.

The bitmap is used in a follow-up patch to provide a contiguous block of
MSI-X indexes for each created VF.

Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h       | 2 ++
 drivers/net/ethernet/intel/ice/ice_sriov.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 176e281dfa24..98d2e70a719d 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -541,6 +541,8 @@ struct ice_pf {
 	 * MSIX vectors allowed on this PF.
 	 */
 	u16 sriov_base_vector;
+	unsigned long *sriov_irq_bm;	/* bitmap to track irq usage */
+	u16 sriov_irq_size;		/* size of the irq_bm bitmap */
 
 	u16 ctrl_vsi_idx;		/* control VSI index in pf->vsi array */
 
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 3137e772a64b..da0f1deef89b 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -138,6 +138,8 @@ static int ice_sriov_free_msix_res(struct ice_pf *pf)
 	if (!pf)
 		return -EINVAL;
 
+	bitmap_free(pf->sriov_irq_bm);
+	pf->sriov_irq_size = 0;
 	pf->sriov_base_vector = 0;
 
 	return 0;
@@ -836,10 +838,16 @@ static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
  */
 static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
 {
+	int total_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;
 	struct device *dev = ice_pf_to_dev(pf);
 	struct ice_hw *hw = &pf->hw;
 	int ret;
 
+	pf->sriov_irq_bm = bitmap_zalloc(total_vectors, GFP_KERNEL);
+	if (!pf->sriov_irq_bm)
+		return -ENOMEM;
+	pf->sriov_irq_size = total_vectors;
+
 	/* Disable global interrupt 0 so we don't try to handle the VFLR. */
 	wr32(hw, GLINT_DYN_CTL(pf->oicr_irq.index),
 	     ICE_ITR_NONE << GLINT_DYN_CTL_ITR_INDX_S);
@@ -898,6 +906,7 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
 	/* rearm interrupts here */
 	ice_irq_dynamic_ena(hw, NULL, NULL);
 	clear_bit(ICE_OICR_INTR_DIS, pf->state);
+	bitmap_free(pf->sriov_irq_bm);
 	return ret;
 }
 
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread
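
A minimal sketch of how this bitmap is meant to be consumed (the search
and set/clear calls are taken from a later patch in this series; the
1024-vector size and 17-vector request are only examples):

	unsigned long *bm = bitmap_zalloc(1024, GFP_KERNEL); /* one bit per device vector */
	if (!bm)
		return -ENOMEM;
	/* find a contiguous run of 17 free vectors for a VF */
	int res = bitmap_find_next_zero_area(bm, 1024, 0, 17, 0);
	if (res < 1024)
		bitmap_set(bm, res, 17);	/* mark the run as used */
	...
	bitmap_clear(bm, res, 17);		/* release it on VF teardown */
	bitmap_free(bm);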

* [PATCH iwl-next v1 3/4] ice: set MSI-X vector count on VF
  2023-06-15 12:38 ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-15 12:38   ` Michal Swiatkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-15 12:38 UTC (permalink / raw)
  To: intel-wired-lan
  Cc: netdev, jacob.e.keller, przemyslaw.kitszel, Michal Swiatkowski

Implement the ops needed to set the MSI-X vector count on a VF.

sriov_get_vf_total_msix() should return the total number of MSI-X vectors
that can be used by the VFs. Return the value set by the devlink resources
API (pf->req_msix.vf).

sriov_set_msix_vec_count() sets the number of MSI-X vectors on a
particular VF. Disable the VF register mappings, rebuild the VSI with the
new MSI-X and queue values, and enable the new VF register mappings.

For best performance, set the number of queues equal to the number of
MSI-X vectors.

Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c  |  2 +
 drivers/net/ethernet/intel/ice/ice_sriov.c | 69 ++++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_sriov.h | 13 ++++
 3 files changed, 84 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 9d55290b6dcc..1061161ec737 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -5545,6 +5545,8 @@ static struct pci_driver ice_driver = {
 #endif /* CONFIG_PM */
 	.shutdown = ice_shutdown,
 	.sriov_configure = ice_sriov_configure,
+	.sriov_get_vf_total_msix = ice_sriov_get_vf_total_msix,
+	.sriov_set_msix_vec_count = ice_sriov_set_msix_vec_count,
 	.err_handler = &ice_pci_err_handler
 };
 
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index da0f1deef89b..e20ef1924fae 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -971,6 +971,75 @@ static int ice_check_sriov_allowed(struct ice_pf *pf)
 	return 0;
 }
 
+/**
+ * ice_sriov_get_vf_total_msix - return number of MSI-X used by VFs
+ * @pdev: pointer to pci_dev struct
+ *
+ * The function is called via sysfs ops
+ */
+u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
+{
+	struct ice_pf *pf = pci_get_drvdata(pdev);
+
+	return pf->sriov_irq_size - ice_get_max_used_msix_vector(pf);
+}
+
+/**
+ * ice_sriov_set_msix_vec_count
+ * @vf_dev: pointer to pci_dev struct of VF device
+ * @msix_vec_count: new value for MSI-X amount on this VF
+ *
+ * Set requested MSI-X, queues and registers for @vf_dev.
+ *
+ * First do some sanity checks like if there are any VFs, if the new value
+ * is correct etc. Then disable old mapping (MSI-X and queues registers), change
+ * MSI-X and queues, rebuild VSI and enable new mapping.
+ *
+ * If possible (no driver bound to the VF), also try to remap the other VFs
+ * to linearize irq register usage.
+ */
+int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+{
+	struct pci_dev *pdev = pci_physfn(vf_dev);
+	struct ice_pf *pf = pci_get_drvdata(pdev);
+	struct ice_vf *vf;
+	u16 queues;
+	int id;
+
+	if (!ice_get_num_vfs(pf))
+		return -ENOENT;
+
+	if (!msix_vec_count)
+		return 0;
+
+	queues = msix_vec_count;
+	/* add 1 MSI-X for OICR */
+	msix_vec_count += 1;
+
+	/* Transition of PCI VF function number to function_id */
+	for (id = 0; id < pci_num_vf(pdev); id++) {
+		if (vf_dev->devfn == pci_iov_virtfn_devfn(pdev, id))
+			break;
+	}
+
+	if (id == pci_num_vf(pdev))
+		return -ENOENT;
+
+	vf = ice_get_vf_by_id(pf, id);
+
+	if (!vf)
+		return -ENOENT;
+
+	ice_dis_vf_mappings(vf);
+	vf->num_msix = msix_vec_count;
+	vf->num_vf_qs = queues;
+	ice_vsi_rebuild(ice_get_vf_vsi(vf), ICE_VSI_FLAG_NO_INIT);
+	ice_ena_vf_mappings(vf);
+	ice_put_vf(vf);
+
+	return 0;
+}
+
 /**
  * ice_sriov_configure - Enable or change number of VFs via sysfs
  * @pdev: pointer to a pci_dev structure
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.h b/drivers/net/ethernet/intel/ice/ice_sriov.h
index 346cb2666f3a..77e3dc5feefd 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.h
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.h
@@ -60,6 +60,8 @@ void ice_print_vfs_mdd_events(struct ice_pf *pf);
 void ice_print_vf_rx_mdd_event(struct ice_vf *vf);
 bool
 ice_vc_validate_pattern(struct ice_vf *vf, struct virtchnl_proto_hdrs *proto);
+u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev);
+int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count);
 #else /* CONFIG_PCI_IOV */
 static inline void ice_process_vflr_event(struct ice_pf *pf) { }
 static inline void ice_free_vfs(struct ice_pf *pf) { }
@@ -142,5 +144,16 @@ ice_get_vf_stats(struct net_device __always_unused *netdev,
 {
 	return -EOPNOTSUPP;
 }
+
+static inline u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
+{
+	return 0;
+}
+
+static inline int
+ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_PCI_IOV */
 #endif /* _ICE_SRIOV_H_ */
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread
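
A worked example of the flow above (values are illustrative): writing 20
to sriov_vf_msix_count gives the VF 20 queue pairs and 21 MSI-X vectors,
because one extra vector is always added for the OICR. The VF's function
id is found by matching vf_dev->devfn against pci_iov_virtfn_devfn() for
each enabled VF, and -ENOENT is returned if no VF matches.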

* [Intel-wired-lan] [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
  2023-06-15 12:38 ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-15 12:38   ` Michal Swiatkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-15 12:38 UTC (permalink / raw)
  To: intel-wired-lan; +Cc: netdev, przemyslaw.kitszel

Track MSI-X for VFs using a bitmap, setting and clearing bits during
allocation and freeing.

Try to linearize irq usage for VFs by freeing their vectors and
allocating them once again. Do it only for VFs that aren't currently
running.

Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_sriov.c | 170 ++++++++++++++++++---
 1 file changed, 151 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index e20ef1924fae..78a41163755b 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -246,22 +246,6 @@ static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
 	return vsi;
 }
 
-/**
- * ice_calc_vf_first_vector_idx - Calculate MSIX vector index in the PF space
- * @pf: pointer to PF structure
- * @vf: pointer to VF that the first MSIX vector index is being calculated for
- *
- * This returns the first MSIX vector index in PF space that is used by this VF.
- * This index is used when accessing PF relative registers such as
- * GLINT_VECT2FUNC and GLINT_DYN_CTL.
- * This will always be the OICR index in the AVF driver so any functionality
- * using vf->first_vector_idx for queue configuration will have to increment by
- * 1 to avoid meddling with the OICR index.
- */
-static int ice_calc_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
-{
-	return pf->sriov_base_vector + vf->vf_id * pf->vfs.num_msix_per;
-}
 
 /**
  * ice_ena_vf_msix_mappings - enable VF MSIX mappings in hardware
@@ -528,6 +512,52 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16 num_vfs)
 	return 0;
 }
 
+/**
+ * ice_sriov_get_irqs - get irqs for SR-IOV use case
+ * @pf: pointer to PF structure
+ * @needed: number of irqs to get
+ *
+ * This returns the first MSI-X vector index in PF space that is used by this
+ * VF. This index is used when accessing PF relative registers such as
+ * GLINT_VECT2FUNC and GLINT_DYN_CTL.
+ * This will always be the OICR index in the AVF driver, so any functionality
+ * using vf->first_vector_idx for queue configuration will have to increment
+ * by 1 to avoid meddling with the OICR index.
+ *
+ * Only SR-IOV specific vectors are tracked in sriov_irq_bm. SR-IOV vectors
+ * are allocated from the end of the global irq index space: the first bit in
+ * sriov_irq_bm corresponds to the last irq index, and so on. This simplifies
+ * extending the SR-IOV range, which always spans from sriov_base_vector to
+ * the last irq index; sriov_base_vector is moved when the range grows or shrinks.
+ */
+static int ice_sriov_get_irqs(struct ice_pf *pf, u16 needed)
+{
+	int res = bitmap_find_next_zero_area(pf->sriov_irq_bm,
+					     pf->sriov_irq_size, 0, needed, 0);
+	/* conversion from number in bitmap to global irq index */
+	int index = pf->sriov_irq_size - res - needed;
+
+	if (res >= pf->sriov_irq_size || index < pf->sriov_base_vector)
+		return -ENOENT;
+
+	bitmap_set(pf->sriov_irq_bm, res, needed);
+	return index;
+}
+
+/**
+ * ice_sriov_free_irqs - free irqs used by the VF
+ * @pf: pointer to PF structure
+ * @vf: pointer to VF structure
+ */
+static void ice_sriov_free_irqs(struct ice_pf *pf, struct ice_vf *vf)
+{
+	/* Move back from first vector index to first index in bitmap */
+	int bm_i = pf->sriov_irq_size - vf->first_vector_idx - vf->num_msix;
+
+	bitmap_clear(pf->sriov_irq_bm, bm_i, vf->num_msix);
+	vf->first_vector_idx = 0;
+}
+
 /**
  * ice_init_vf_vsi_res - initialize/setup VF VSI resources
  * @vf: VF to initialize/setup the VSI for
@@ -541,7 +571,9 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf)
 	struct ice_vsi *vsi;
 	int err;
 
-	vf->first_vector_idx = ice_calc_vf_first_vector_idx(pf, vf);
+	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+	if (vf->first_vector_idx < 0)
+		return -ENOMEM;
 
 	vsi = ice_vf_vsi_setup(vf);
 	if (!vsi)
@@ -984,6 +1016,52 @@ u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
 	return pf->sriov_irq_size - ice_get_max_used_msix_vector(pf);
 }
 
+static int ice_sriov_move_base_vector(struct ice_pf *pf, int move)
+{
+	if (pf->sriov_base_vector - move < ice_get_max_used_msix_vector(pf))
+		return -ENOMEM;
+
+	pf->sriov_base_vector -= move;
+	return 0;
+}
+
+static void ice_sriov_remap_vectors(struct ice_pf *pf, u16 restricted_id)
+{
+	u16 vf_ids[ICE_MAX_SRIOV_VFS];
+	struct ice_vf *tmp_vf;
+	int to_remap = 0, bkt;
+
+	/* For better irq usage, try to remap the irqs of VFs
+	 * that aren't running yet
+	 */
+	ice_for_each_vf(pf, bkt, tmp_vf) {
+		/* skip VF which is changing the number of MSI-X */
+		if (restricted_id == tmp_vf->vf_id ||
+		    test_bit(ICE_VF_STATE_ACTIVE, tmp_vf->vf_states))
+			continue;
+
+		ice_dis_vf_mappings(tmp_vf);
+		ice_sriov_free_irqs(pf, tmp_vf);
+
+		vf_ids[to_remap] = tmp_vf->vf_id;
+		to_remap += 1;
+	}
+
+	for (int i = 0; i < to_remap; i++) {
+		tmp_vf = ice_get_vf_by_id(pf, vf_ids[i]);
+		if (!tmp_vf)
+			continue;
+
+		tmp_vf->first_vector_idx =
+			ice_sriov_get_irqs(pf, tmp_vf->num_msix);
+		/* there is no need to rebuild VSI as we are only changing the
+		 * vector indexes, not the number of MSI-X vectors or queues
+		 */
+		ice_ena_vf_mappings(tmp_vf);
+		ice_put_vf(tmp_vf);
+	}
+}
+
 /**
  * ice_sriov_set_msix_vec_count
  * @vf_dev: pointer to pci_dev struct of VF device
@@ -1002,8 +1080,9 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
 {
 	struct pci_dev *pdev = pci_physfn(vf_dev);
 	struct ice_pf *pf = pci_get_drvdata(pdev);
+	u16 prev_msix, prev_queues, queues;
+	bool needs_rebuild = false;
 	struct ice_vf *vf;
-	u16 queues;
 	int id;
 
 	if (!ice_get_num_vfs(pf))
@@ -1016,6 +1095,13 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
 	/* add 1 MSI-X for OICR */
 	msix_vec_count += 1;
 
+	if (queues > min(ice_get_avail_txq_count(pf),
+			 ice_get_avail_rxq_count(pf)))
+		return -EINVAL;
+
+	if (msix_vec_count < ICE_MIN_INTR_PER_VF)
+		return -EINVAL;
+
 	/* Transition of PCI VF function number to function_id */
 	for (id = 0; id < pci_num_vf(pdev); id++) {
 		if (vf_dev->devfn == pci_iov_virtfn_devfn(pdev, id))
@@ -1030,14 +1116,60 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
 	if (!vf)
 		return -ENOENT;
 
+	prev_msix = vf->num_msix;
+	prev_queues = vf->num_vf_qs;
+
+	if (ice_sriov_move_base_vector(pf, msix_vec_count - prev_msix)) {
+		ice_put_vf(vf);
+		return -ENOSPC;
+	}
+
 	ice_dis_vf_mappings(vf);
+	ice_sriov_free_irqs(pf, vf);
+
+	/* Remap all VFs besides the one that is now being configured */
+	ice_sriov_remap_vectors(pf, vf->vf_id);
+
 	vf->num_msix = msix_vec_count;
 	vf->num_vf_qs = queues;
-	ice_vsi_rebuild(ice_get_vf_vsi(vf), ICE_VSI_FLAG_NO_INIT);
+	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+	if (vf->first_vector_idx < 0)
+		goto unroll;
+
+	ice_vf_vsi_release(vf);
+	if (vf->vf_ops->create_vsi(vf)) {
+		/* Try to rebuild with previous values */
+		needs_rebuild = true;
+		goto unroll;
+	}
+
+	dev_info(ice_pf_to_dev(pf),
+		 "Changing VF %d resources to %d vectors and %d queues\n",
+		 vf->vf_id, vf->num_msix, vf->num_vf_qs);
+
 	ice_ena_vf_mappings(vf);
 	ice_put_vf(vf);
 
 	return 0;
+
+unroll:
+	dev_info(ice_pf_to_dev(pf),
+		 "Can't set %d vectors on VF %d, falling back to %d\n",
+		 vf->num_msix, vf->vf_id, prev_msix);
+
+	vf->num_msix = prev_msix;
+	vf->num_vf_qs = prev_queues;
+	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+	if (vf->first_vector_idx < 0)
+		return -EINVAL;
+
+	if (needs_rebuild)
+		vf->vf_ops->create_vsi(vf);
+
+	ice_ena_vf_mappings(vf);
+	ice_put_vf(vf);
+
+	return -EINVAL;
 }
 
 /**
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread
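
A worked example of the reversed indexing in ice_sriov_get_irqs() (sizes
are illustrative): with sriov_irq_size = 1024 and needed = 17, the first
allocation finds res = 0, which maps to global index 1024 - 0 - 17 = 1007,
so the VF occupies vectors 1007..1023 at the very end of the irq space.
A second 17-vector request then finds res = 17 and lands directly below:

	/* second allocation, same math as in the patch */
	res = bitmap_find_next_zero_area(pf->sriov_irq_bm, 1024, 0, 17, 0); /* 17 */
	index = 1024 - res - 17;	/* 990, i.e. vectors 990..1006 */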

* Re: [Intel-wired-lan] [PATCH iwl-next v1 1/4] ice: implement num_msix field per VF
  2023-06-15 12:38   ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-15 14:22     ` Maciej Fijalkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Maciej Fijalkowski @ 2023-06-15 14:22 UTC (permalink / raw)
  To: Michal Swiatkowski; +Cc: intel-wired-lan, netdev, przemyslaw.kitszel

On Thu, Jun 15, 2023 at 02:38:27PM +0200, Michal Swiatkowski wrote:
> Store the number of MSI-X vectors per VF in the VF structure instead of
> in the pf struct. It is used to calculate the number of q_vectors (and
> queues) for the VF VSI.
> 
> Calculate vector indexes based on this new field.

Can you explain why? From a standalone POV the reasoning is not clear.

> 
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c      |  2 +-
>  drivers/net/ethernet/intel/ice/ice_sriov.c    | 13 +++++++++----
>  drivers/net/ethernet/intel/ice/ice_vf_lib.h   |  4 +++-
>  drivers/net/ethernet/intel/ice/ice_virtchnl.c |  2 +-
>  4 files changed, 14 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> index e8142bea2eb2..24a0bf403445 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -229,7 +229,7 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
>  		 * of queues vectors, subtract 1 (ICE_NONQ_VECS_VF) from the
>  		 * original vector count
>  		 */
> -		vsi->num_q_vectors = pf->vfs.num_msix_per - ICE_NONQ_VECS_VF;
> +		vsi->num_q_vectors = vf->num_msix - ICE_NONQ_VECS_VF;
>  		break;
>  	case ICE_VSI_CTRL:
>  		vsi->alloc_txq = 1;
> diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
> index 2ea6d24977a6..3137e772a64b 100644
> --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
> @@ -64,7 +64,7 @@ static void ice_free_vf_res(struct ice_vf *vf)
>  		vf->num_mac = 0;
>  	}
>  
> -	last_vector_idx = vf->first_vector_idx + pf->vfs.num_msix_per - 1;
> +	last_vector_idx = vf->first_vector_idx + vf->num_msix - 1;
>  
>  	/* clear VF MDD event information */
>  	memset(&vf->mdd_tx_events, 0, sizeof(vf->mdd_tx_events));
> @@ -102,7 +102,7 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)
>  	wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), 0);
>  
>  	first = vf->first_vector_idx;
> -	last = first + pf->vfs.num_msix_per - 1;
> +	last = first + vf->num_msix - 1;
>  	for (v = first; v <= last; v++) {
>  		u32 reg;
>  
> @@ -280,12 +280,12 @@ static void ice_ena_vf_msix_mappings(struct ice_vf *vf)
>  
>  	hw = &pf->hw;
>  	pf_based_first_msix = vf->first_vector_idx;
> -	pf_based_last_msix = (pf_based_first_msix + pf->vfs.num_msix_per) - 1;
> +	pf_based_last_msix = (pf_based_first_msix + vf->num_msix) - 1;
>  
>  	device_based_first_msix = pf_based_first_msix +
>  		pf->hw.func_caps.common_cap.msix_vector_first_id;
>  	device_based_last_msix =
> -		(device_based_first_msix + pf->vfs.num_msix_per) - 1;
> +		(device_based_first_msix + vf->num_msix) - 1;
>  	device_based_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
>  
>  	reg = (((device_based_first_msix << VPINT_ALLOC_FIRST_S) &
> @@ -814,6 +814,11 @@ static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
>  
>  		vf->vf_sw_id = pf->first_sw;
>  
> +		/* set default number of MSI-X */
> +		vf->num_msix = pf->vfs.num_msix_per;
> +		vf->num_vf_qs = pf->vfs.num_qps_per;
> +		ice_vc_set_default_allowlist(vf);
> +
>  		hash_add_rcu(vfs->table, &vf->entry, vf_id);
>  	}
>  
> diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> index 67172fdd9bc2..4dbfb7e26bfa 100644
> --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> @@ -72,7 +72,7 @@ struct ice_vfs {
>  	struct mutex table_lock;	/* Lock for protecting the hash table */
>  	u16 num_supported;		/* max supported VFs on this PF */
>  	u16 num_qps_per;		/* number of queue pairs per VF */
> -	u16 num_msix_per;		/* number of MSI-X vectors per VF */
> +	u16 num_msix_per;		/* default MSI-X vectors per VF */
>  	unsigned long last_printed_mdd_jiffies;	/* MDD message rate limit */
>  };
>  
> @@ -133,6 +133,8 @@ struct ice_vf {
>  
>  	/* devlink port data */
>  	struct devlink_port devlink_port;
> +
> +	u16 num_msix;			/* num of MSI-X configured on this VF */
>  };
>  
>  /* Flags for controlling behavior of ice_reset_vf */
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> index efbc2968a7bf..37b588774ac1 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> @@ -498,7 +498,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
>  	vfres->num_vsis = 1;
>  	/* Tx and Rx queue are equal for VF */
>  	vfres->num_queue_pairs = vsi->num_txq;
> -	vfres->max_vectors = vf->pf->vfs.num_msix_per;
> +	vfres->max_vectors = vf->num_msix;
>  	vfres->rss_key_size = ICE_VSIQF_HKEY_ARRAY_SIZE;
>  	vfres->rss_lut_size = ICE_VSIQF_HLUT_ARRAY_SIZE;
>  	vfres->max_mtu = ice_vc_get_max_frame_size(vf);
> -- 
> 2.40.1
> 
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan@osuosl.org
> https://lists.osuosl.org/mailman/listinfo/intel-wired-lan

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Intel-wired-lan] [PATCH iwl-next v1 1/4] ice: implement num_msix field per VF
  2023-06-15 14:22     ` Maciej Fijalkowski
@ 2023-06-15 14:43       ` Michal Swiatkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-15 14:43 UTC (permalink / raw)
  To: Maciej Fijalkowski; +Cc: netdev, intel-wired-lan, przemyslaw.kitszel

On Thu, Jun 15, 2023 at 04:22:35PM +0200, Maciej Fijalkowski wrote:
> On Thu, Jun 15, 2023 at 02:38:27PM +0200, Michal Swiatkowski wrote:
> > Store the amount of MSI-X per VF instead of storing it in pf struct. It
> > is used to calculate number of q_vectors (and queues) for VF VSI.
> > 
> > Calculate vector indexes based on this new field.
> 
> Can you explain why? From a standalone POV the reasoning is not clear.
> 

Maybe I should reword it. Previously we had pf->vfs.num_msix_per - the
number of MSI-X vectors for each VF, which was the same for all VFs. After
this change the user is allowed to set the MSI-X count per VF, so we need
a new field in the VF struct to track it. The calculation of queues /
vectors / indexes is the same as it was, but the number can now differ
between VFs, so instead of basing the calculation on the global per-VF
value we have to base it on the actual MSI-X count of the given VF.

I feel like I overcomplicated a simple thing with this commit message.
The calculation remains the same; we just have a per-VF field to store
the MSI-X count instead of one field for all of the VFs.
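
For illustration, the before/after of one such calculation, taken straight
from the diff in this patch (no new logic; only the source of the count
changes):

	/* before: every VF was assumed to own the same number of vectors */
	last_vector_idx = vf->first_vector_idx + pf->vfs.num_msix_per - 1;

	/* after: each VF carries its own count, so the ranges can differ */
	last_vector_idx = vf->first_vector_idx + vf->num_msix - 1;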

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Intel-wired-lan] [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
  2023-06-15 12:38   ` Michal Swiatkowski
@ 2023-06-15 15:57     ` Keller, Jacob E
  -1 siblings, 0 replies; 24+ messages in thread
From: Keller, Jacob E @ 2023-06-15 15:57 UTC (permalink / raw)
  To: Michal Swiatkowski, intel-wired-lan; +Cc: netdev, Kitszel, Przemyslaw



> -----Original Message-----
> From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> Sent: Thursday, June 15, 2023 5:39 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Keller, Jacob E <jacob.e.keller@intel.com>; Kitszel,
> Przemyslaw <przemyslaw.kitszel@intel.com>; Michal Swiatkowski
> <michal.swiatkowski@linux.intel.com>
> Subject: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
> 
> Track MSI-X for VFs using bitmap, by setting and clearing bitmap during
> allocation and freeing.
> 
> Try to linearize irqs usage for VFs, by freeing them and allocating once
> again. Do it only for VFs that aren't currently running.
> 
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_sriov.c | 170 ++++++++++++++++++---
>  1 file changed, 151 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c
> b/drivers/net/ethernet/intel/ice/ice_sriov.c
> index e20ef1924fae..78a41163755b 100644
> --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
> @@ -246,22 +246,6 @@ static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
>  	return vsi;
>  }
> 
> -/**
> - * ice_calc_vf_first_vector_idx - Calculate MSIX vector index in the PF space
> - * @pf: pointer to PF structure
> - * @vf: pointer to VF that the first MSIX vector index is being calculated for
> - *
> - * This returns the first MSIX vector index in PF space that is used by this VF.
> - * This index is used when accessing PF relative registers such as
> - * GLINT_VECT2FUNC and GLINT_DYN_CTL.
> - * This will always be the OICR index in the AVF driver so any functionality
> - * using vf->first_vector_idx for queue configuration will have to increment by
> - * 1 to avoid meddling with the OICR index.
> - */
> -static int ice_calc_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
> -{
> -	return pf->sriov_base_vector + vf->vf_id * pf->vfs.num_msix_per;
> -}
> 
>  /**
>   * ice_ena_vf_msix_mappings - enable VF MSIX mappings in hardware
> @@ -528,6 +512,52 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16
> num_vfs)
>  	return 0;
>  }
> 
> +/**
> + * ice_sriov_get_irqs - get irqs for SR-IOV use case
> + * @pf: pointer to PF structure
> + * @needed: number of irqs to get
> + *
> + * This returns the first MSI-X vector index in PF space that is used by this
> + * VF. This index is used when accessing PF relative registers such as
> + * GLINT_VECT2FUNC and GLINT_DYN_CTL.
> + * This will always be the OICR index in the AVF driver so any functionality
> + * using vf->first_vector_idx for queue configuration will have to increment by
> + * 1 to avoid meddling with the OICR index.
> + *
> + * Only SR-IOV specific vectors are tracked in sriov_irq_bm. SR-IOV vectors
> + * are allocated from the end of the global irq index space; the first bit in
> + * sriov_irq_bm corresponds to the last irq index and so on, which simplifies
> + * growing the SR-IOV vector range. The vectors always occupy the range from
> + * sriov_base_vector up to the last irq index, and sriov_base_vector is moved
> + * when that range grows or shrinks.
> + */
> +static int ice_sriov_get_irqs(struct ice_pf *pf, u16 needed)
> +{
> +	int res = bitmap_find_next_zero_area(pf->sriov_irq_bm,
> +					     pf->sriov_irq_size, 0, needed, 0);
> +	/* conversion from number in bitmap to global irq index */
> +	int index = pf->sriov_irq_size - res - needed;
> +
> +	if (res >= pf->sriov_irq_size || index < pf->sriov_base_vector)
> +		return -ENOENT;
> +
> +	bitmap_set(pf->sriov_irq_bm, res, needed);
> +	return index;

Shouldn't it be possible to use the xarray that was recently done for dynamic IRQ allocation for this now? It might take a little more refactor work though, hmm. It feels weird to introduce yet another data structure for a nearly identical purpose...
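
For context, a rough sketch of what that dynamic allocation path hands out
(assuming the ice_alloc_irq() helper from the recent IRQ refactor; an
illustration, not a claim about its exact semantics):

	struct msi_map map = ice_alloc_irq(pf, false);

	if (map.index < 0)
		return map.index;	/* no free vector available */
	/* map.index is a single PF-space vector; the API gives no
	 * contiguous multi-vector range of the kind the VF mapping needs.
	 */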

> +}
> +
> +/**
> + * ice_sriov_free_irqs - free irqs used by the VF
> + * @pf: pointer to PF structure
> + * @vf: pointer to VF structure
> + */
> +static void ice_sriov_free_irqs(struct ice_pf *pf, struct ice_vf *vf)
> +{
> +	/* Move back from first vector index to first index in bitmap */
> +	int bm_i = pf->sriov_irq_size - vf->first_vector_idx - vf->num_msix;
> +
> +	bitmap_clear(pf->sriov_irq_bm, bm_i, vf->num_msix);
> +	vf->first_vector_idx = 0;
> +}
> +
>  /**
>   * ice_init_vf_vsi_res - initialize/setup VF VSI resources
>   * @vf: VF to initialize/setup the VSI for
> @@ -541,7 +571,9 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf)
>  	struct ice_vsi *vsi;
>  	int err;
> 
> -	vf->first_vector_idx = ice_calc_vf_first_vector_idx(pf, vf);
> +	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
> +	if (vf->first_vector_idx < 0)
> +		return -ENOMEM;
> 
>  	vsi = ice_vf_vsi_setup(vf);
>  	if (!vsi)
> @@ -984,6 +1016,52 @@ u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
>  	return pf->sriov_irq_size - ice_get_max_used_msix_vector(pf);
>  }
> 
> +static int ice_sriov_move_base_vector(struct ice_pf *pf, int move)
> +{
> +	if (pf->sriov_base_vector - move < ice_get_max_used_msix_vector(pf))
> +		return -ENOMEM;
> +
> +	pf->sriov_base_vector -= move;
> +	return 0;
> +}
> +
> +static void ice_sriov_remap_vectors(struct ice_pf *pf, u16 restricted_id)
> +{
> +	u16 vf_ids[ICE_MAX_SRIOV_VFS];
> +	struct ice_vf *tmp_vf;
> +	int to_remap = 0, bkt;
> +
> +	/* For better irqs usage try to remap irqs of VFs
> +	 * that aren't running yet
> +	 */
> +	ice_for_each_vf(pf, bkt, tmp_vf) {
> +		/* skip VF which is changing the number of MSI-X */
> +		if (restricted_id == tmp_vf->vf_id ||
> +		    test_bit(ICE_VF_STATE_ACTIVE, tmp_vf->vf_states))
> +			continue;
> +
> +		ice_dis_vf_mappings(tmp_vf);
> +		ice_sriov_free_irqs(pf, tmp_vf);
> +
> +		vf_ids[to_remap] = tmp_vf->vf_id;
> +		to_remap += 1;
> +	}
> +
> +	for (int i = 0; i < to_remap; i++) {
> +		tmp_vf = ice_get_vf_by_id(pf, vf_ids[i]);
> +		if (!tmp_vf)
> +			continue;
> +
> +		tmp_vf->first_vector_idx =
> +			ice_sriov_get_irqs(pf, tmp_vf->num_msix);
> +		/* there is no need to rebuild the VSI as we are only changing
> +		 * the vector indexes, not the number of MSI-X vectors or queues
> +		 */
> +		ice_ena_vf_mappings(tmp_vf);
> +		ice_put_vf(tmp_vf);
> +	}
> +}
> +
>  /**
>   * ice_sriov_set_msix_vec_count
>   * @vf_dev: pointer to pci_dev struct of VF device
> @@ -1002,8 +1080,9 @@ int ice_sriov_set_msix_vec_count(struct pci_dev
> *vf_dev, int msix_vec_count)
>  {
>  	struct pci_dev *pdev = pci_physfn(vf_dev);
>  	struct ice_pf *pf = pci_get_drvdata(pdev);
> +	u16 prev_msix, prev_queues, queues;
> +	bool needs_rebuild = false;
>  	struct ice_vf *vf;
> -	u16 queues;
>  	int id;
> 
>  	if (!ice_get_num_vfs(pf))
> @@ -1016,6 +1095,13 @@ int ice_sriov_set_msix_vec_count(struct pci_dev
> *vf_dev, int msix_vec_count)
>  	/* add 1 MSI-X for OICR */
>  	msix_vec_count += 1;
> 
> +	if (queues > min(ice_get_avail_txq_count(pf),
> +			 ice_get_avail_rxq_count(pf)))
> +		return -EINVAL;
> +
> +	if (msix_vec_count < ICE_MIN_INTR_PER_VF)
> +		return -EINVAL;
> +
>  	/* Transition of PCI VF function number to function_id */
>  	for (id = 0; id < pci_num_vf(pdev); id++) {
>  		if (vf_dev->devfn == pci_iov_virtfn_devfn(pdev, id))
> @@ -1030,14 +1116,60 @@ int ice_sriov_set_msix_vec_count(struct pci_dev
> *vf_dev, int msix_vec_count)
>  	if (!vf)
>  		return -ENOENT;
> 
> +	prev_msix = vf->num_msix;
> +	prev_queues = vf->num_vf_qs;
> +
> +	if (ice_sriov_move_base_vector(pf, msix_vec_count - prev_msix)) {
> +		ice_put_vf(vf);
> +		return -ENOSPC;
> +	}
> +
>  	ice_dis_vf_mappings(vf);
> +	ice_sriov_free_irqs(pf, vf);
> +
> +	/* Remap all VFs besides the one that is now being configured */
> +	ice_sriov_remap_vectors(pf, vf->vf_id);
> +
>  	vf->num_msix = msix_vec_count;
>  	vf->num_vf_qs = queues;
> -	ice_vsi_rebuild(ice_get_vf_vsi(vf), ICE_VSI_FLAG_NO_INIT);
> +	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
> +	if (vf->first_vector_idx < 0)
> +		goto unroll;
> +
> +	ice_vf_vsi_release(vf);
> +	if (vf->vf_ops->create_vsi(vf)) {
> +		/* Try to rebuild with previous values */
> +		needs_rebuild = true;
> +		goto unroll;
> +	}
> +
> +	dev_info(ice_pf_to_dev(pf),
> +		 "Changing VF %d resources to %d vectors and %d queues\n",
> +		 vf->vf_id, vf->num_msix, vf->num_vf_qs);
> +
>  	ice_ena_vf_mappings(vf);
>  	ice_put_vf(vf);
> 
>  	return 0;
> +
> +unroll:
> +	dev_info(ice_pf_to_dev(pf),
> +		 "Can't set %d vectors on VF %d, falling back to %d\n",
> +		 vf->num_msix, vf->vf_id, prev_msix);
> +
> +	vf->num_msix = prev_msix;
> +	vf->num_vf_qs = prev_queues;
> +	vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
> +	if (vf->first_vector_idx < 0)
> +		return -EINVAL;
> +
> +	if (needs_rebuild)
> +		vf->vf_ops->create_vsi(vf);
> +
> +	ice_ena_vf_mappings(vf);
> +	ice_put_vf(vf);
> +
> +	return -EINVAL;
>  }
> 
>  /**
> --
> 2.40.1
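
Stepping back, the reworked ice_sriov_set_msix_vec_count() above condenses
to roughly this flow (orientation only, not compilable as-is):

	/* 1. release the target VF's current vector range */
	ice_dis_vf_mappings(vf);
	ice_sriov_free_irqs(pf, vf);
	/* 2. repack the ranges of all idle VFs to avoid fragmentation */
	ice_sriov_remap_vectors(pf, vf->vf_id);
	/* 3. take a contiguous range for the new count and rebuild the VSI;
	 * any failure falls back to the previous count via the unroll path
	 */
	vf->first_vector_idx = ice_sriov_get_irqs(pf, msix_vec_count);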


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
  2023-06-15 15:57     ` Keller, Jacob E
@ 2023-06-16  8:37       ` Michal Swiatkowski
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Swiatkowski @ 2023-06-16  8:37 UTC (permalink / raw)
  To: Keller, Jacob E; +Cc: intel-wired-lan, netdev, Kitszel, Przemyslaw

On Thu, Jun 15, 2023 at 03:57:37PM +0000, Keller, Jacob E wrote:
> 
> 
> > -----Original Message-----
> > From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > Sent: Thursday, June 15, 2023 5:39 AM
> > To: intel-wired-lan@lists.osuosl.org
> > Cc: netdev@vger.kernel.org; Keller, Jacob E <jacob.e.keller@intel.com>; Kitszel,
> > Przemyslaw <przemyslaw.kitszel@intel.com>; Michal Swiatkowski
> > <michal.swiatkowski@linux.intel.com>
> > Subject: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
> > 
> > Track MSI-X for VFs using bitmap, by setting and clearing bitmap during
> > allocation and freeing.
> > 
> > Try to linearize irqs usage for VFs, by freeing them and allocating once
> > again. Do it only for VFs that aren't currently running.
> > 
> > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_sriov.c | 170 ++++++++++++++++++---
> >  1 file changed, 151 insertions(+), 19 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c
> > b/drivers/net/ethernet/intel/ice/ice_sriov.c
> > index e20ef1924fae..78a41163755b 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
> > @@ -246,22 +246,6 @@ static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
> >  	return vsi;
> >  }
> > 
> > -/**
> > - * ice_calc_vf_first_vector_idx - Calculate MSIX vector index in the PF space
> > - * @pf: pointer to PF structure
> > - * @vf: pointer to VF that the first MSIX vector index is being calculated for
> > - *
> > - * This returns the first MSIX vector index in PF space that is used by this VF.
> > - * This index is used when accessing PF relative registers such as
> > - * GLINT_VECT2FUNC and GLINT_DYN_CTL.
> > - * This will always be the OICR index in the AVF driver so any functionality
> > - * using vf->first_vector_idx for queue configuration will have to increment by
> > - * 1 to avoid meddling with the OICR index.
> > - */
> > -static int ice_calc_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
> > -{
> > -	return pf->sriov_base_vector + vf->vf_id * pf->vfs.num_msix_per;
> > -}
> > 
> >  /**
> >   * ice_ena_vf_msix_mappings - enable VF MSIX mappings in hardware
> > @@ -528,6 +512,52 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16
> > num_vfs)
> >  	return 0;
> >  }
> > 
> > +/**
> > > + * ice_sriov_get_irqs - get irqs for SR-IOV use case
> > > + * @pf: pointer to PF structure
> > > + * @needed: number of irqs to get
> > > + *
> > > + * This returns the first MSI-X vector index in PF space that is used by this
> > > + * VF. This index is used when accessing PF relative registers such as
> > > + * GLINT_VECT2FUNC and GLINT_DYN_CTL.
> > > + * This will always be the OICR index in the AVF driver so any functionality
> > > + * using vf->first_vector_idx for queue configuration will have to increment by
> > > + * 1 to avoid meddling with the OICR index.
> > > + *
> > > + * Only SR-IOV specific vectors are tracked in sriov_irq_bm. SR-IOV vectors
> > > + * are allocated from the end of the global irq index space; the first bit in
> > > + * sriov_irq_bm corresponds to the last irq index and so on, which simplifies
> > > + * growing the SR-IOV vector range. The vectors always occupy the range from
> > > + * sriov_base_vector up to the last irq index, and sriov_base_vector is moved
> > > + * when that range grows or shrinks.
> > + */
> > +static int ice_sriov_get_irqs(struct ice_pf *pf, u16 needed)
> > +{
> > +	int res = bitmap_find_next_zero_area(pf->sriov_irq_bm,
> > +					     pf->sriov_irq_size, 0, needed, 0);
> > +	/* conversion from number in bitmap to global irq index */
> > +	int index = pf->sriov_irq_size - res - needed;
> > +
> > +	if (res >= pf->sriov_irq_size || index < pf->sriov_base_vector)
> > +		return -ENOENT;
> > +
> > +	bitmap_set(pf->sriov_irq_bm, res, needed);
> > +	return index;
> 
> Shouldn't it be possible to use the xarray that was recently done for dynamic IRQ allocation for this now? It might take a little more refactor work though, hmm. It feels weird to introduce yet another data structure for a nearly identical purpose...
> 

I used a bitmap because it was easy to get contiguous irq indexes (the VF
needs its indexes to be contiguous). Do you know how to achieve that with
an xarray? I felt that storing entries in an xarray and searching for a
contiguous range was more complicated than a bitmap, but maybe I'm just
not aware of an xarray mechanism useful for that purpose. If you know of
one, please point me to it and I will rewrite this to use an xarray.
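
To make the contiguity point concrete, a minimal sketch of the bitmap
approach (the values are illustrative, not taken from the driver):

	unsigned long pos;
	int index;

	/* find `needed` adjacent zero bits and claim them */
	pos = bitmap_find_next_zero_area(bm, size, 0, needed, 0);
	if (pos >= size)
		return -ENOENT;		/* no contiguous hole big enough */
	bitmap_set(bm, pos, needed);

	/* reverse mapping to a global irq index, as in ice_sriov_get_irqs():
	 * with size = 512, needed = 16 and pos = 0 this yields
	 * index = 512 - 0 - 16 = 496, i.e. vectors 496..511 at the very end
	 * of the global space
	 */
	index = size - pos - needed;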

Thanks

[...]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Intel-wired-lan] [PATCH iwl-next v1 0/4] change MSI-X vectors per VF
  2023-06-15 12:38 ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-16 20:37   ` Tony Nguyen
  -1 siblings, 0 replies; 24+ messages in thread
From: Tony Nguyen @ 2023-06-16 20:37 UTC (permalink / raw)
  To: Michal Swiatkowski, intel-wired-lan; +Cc: netdev, przemyslaw.kitszel

On 6/15/2023 5:38 AM, Michal Swiatkowski wrote:
> Hi,
> 
> This patchset is implementing sysfs API introduced here [1].
> 
> It will allow user to assign different amount of MSI-X vectors to VF.
> For example when there are VMs with different number of virtual cores.
> 
> Example:
> 1. Turn off autoprobe
> echo 0 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_drivers_autoprobe
> 2. Create VFs
> echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
> 3. Configure MSI-X
> echo 20 > /sys/class/pci_bus/0000\:18/device/0000\:18\:01.0/sriov_vf_msix_count
> 
> [1] https://lore.kernel.org/netdev/20210314124256.70253-1-leon@kernel.org/
> 
> Michal Swiatkowski (4):
>    ice: implement num_msix field per VF
>    ice: add bitmap to track VF MSI-X usage
>    ice: set MSI-X vector count on VF
>    ice: manage VFs MSI-X using resource tracking
> 
>   drivers/net/ethernet/intel/ice/ice.h          |   2 +
>   drivers/net/ethernet/intel/ice/ice_lib.c      |   2 +-
>   drivers/net/ethernet/intel/ice/ice_main.c     |   2 +
>   drivers/net/ethernet/intel/ice/ice_sriov.c    | 257 ++++++++++++++++--
>   drivers/net/ethernet/intel/ice/ice_sriov.h    |  13 +
>   drivers/net/ethernet/intel/ice/ice_vf_lib.h   |   4 +-
>   drivers/net/ethernet/intel/ice/ice_virtchnl.c |   2 +-
>   7 files changed, 258 insertions(+), 24 deletions(-)

This doesn't apply to net-queue; however, it does seem to apply to
net-next. Please base your patches on the tree that you are targeting.
While most of the time it may not matter, in some cases, like this one,
it does.
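
For instance, something along these lines (the tree URL and branch are
illustrative; substitute the tree the series actually targets):

	git fetch https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git main
	git rebase FETCH_HEAD	# re-apply the series on top of the fetched tip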

Thanks,
Tony


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
  2023-06-16  8:37       ` [Intel-wired-lan] " Michal Swiatkowski
@ 2023-06-20  5:37         ` Keller, Jacob E
  -1 siblings, 0 replies; 24+ messages in thread
From: Keller, Jacob E @ 2023-06-20  5:37 UTC (permalink / raw)
  To: Michal Swiatkowski; +Cc: intel-wired-lan, netdev, Kitszel, Przemyslaw



> -----Original Message-----
> From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> Sent: Friday, June 16, 2023 1:37 AM
> To: Keller, Jacob E <jacob.e.keller@intel.com>
> Cc: intel-wired-lan@lists.osuosl.org; netdev@vger.kernel.org; Kitszel, Przemyslaw
> <przemyslaw.kitszel@intel.com>
> Subject: Re: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource
> tracking
> 
> On Thu, Jun 15, 2023 at 03:57:37PM +0000, Keller, Jacob E wrote:
> >
> >
> > > -----Original Message-----
> > > From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > > Sent: Thursday, June 15, 2023 5:39 AM
> > > To: intel-wired-lan@lists.osuosl.org
> > > Cc: netdev@vger.kernel.org; Keller, Jacob E <jacob.e.keller@intel.com>;
> Kitszel,
> > > Przemyslaw <przemyslaw.kitszel@intel.com>; Michal Swiatkowski
> > > <michal.swiatkowski@linux.intel.com>
> > > Subject: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource
> tracking
> > >
> > > Track MSI-X for VFs using bitmap, by setting and clearing bitmap during
> > > allocation and freeing.
> > >
> > > Try to linearize irqs usage for VFs, by freeing them and allocating once
> > > again. Do it only for VFs that aren't currently running.
> > >
> > > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > > Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> > > ---
> > >  drivers/net/ethernet/intel/ice/ice_sriov.c | 170 ++++++++++++++++++---
> > >  1 file changed, 151 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c
> > > b/drivers/net/ethernet/intel/ice/ice_sriov.c
> > > index e20ef1924fae..78a41163755b 100644
> > > --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> > > +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
> > > @@ -246,22 +246,6 @@ static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
> > >  	return vsi;
> > >  }
> > >
> > > -/**
> > > - * ice_calc_vf_first_vector_idx - Calculate MSIX vector index in the PF space
> > > - * @pf: pointer to PF structure
> > > - * @vf: pointer to VF that the first MSIX vector index is being calculated for
> > > - *
> > > - * This returns the first MSIX vector index in PF space that is used by this VF.
> > > - * This index is used when accessing PF relative registers such as
> > > - * GLINT_VECT2FUNC and GLINT_DYN_CTL.
> > > - * This will always be the OICR index in the AVF driver so any functionality
> > > - * using vf->first_vector_idx for queue configuration will have to increment by
> > > - * 1 to avoid meddling with the OICR index.
> > > - */
> > > -static int ice_calc_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
> > > -{
> > > -	return pf->sriov_base_vector + vf->vf_id * pf->vfs.num_msix_per;
> > > -}
> > >
> > >  /**
> > >   * ice_ena_vf_msix_mappings - enable VF MSIX mappings in hardware
> > > @@ -528,6 +512,52 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16
> > > num_vfs)
> > >  	return 0;
> > >  }
> > >
> > > +/**
> > > + * ice_sriov_get_irqs - get irqs for SR-IOV use case
> > > + * @pf: pointer to PF structure
> > > + * @needed: number of irqs to get
> > > + *
> > > + * This returns the first MSI-X vector index in PF space that is used by this
> > > + * VF. This index is used when accessing PF relative registers such as
> > > + * GLINT_VECT2FUNC and GLINT_DYN_CTL.
> > > + * This will always be the OICR index in the AVF driver so any functionality
> > > + * using vf->first_vector_idx for queue configuration will have to increment by
> > > + * 1 to avoid meddling with the OICR index.
> > > + *
> > > + * Only SR-IOV specific vectors are tracked in sriov_irq_bm. SR-IOV vectors
> > > + * are allocated from the end of the global irq index space; the first bit in
> > > + * sriov_irq_bm corresponds to the last irq index and so on, which simplifies
> > > + * growing the SR-IOV vector range. The vectors always occupy the range from
> > > + * sriov_base_vector up to the last irq index, and sriov_base_vector is moved
> > > + * when that range grows or shrinks.
> > > + */
> > > +static int ice_sriov_get_irqs(struct ice_pf *pf, u16 needed)
> > > +{
> > > +	int res = bitmap_find_next_zero_area(pf->sriov_irq_bm,
> > > +					     pf->sriov_irq_size, 0, needed, 0);
> > > +	/* conversion from number in bitmap to global irq index */
> > > +	int index = pf->sriov_irq_size - res - needed;
> > > +
> > > +	if (res >= pf->sriov_irq_size || index < pf->sriov_base_vector)
> > > +		return -ENOENT;
> > > +
> > > +	bitmap_set(pf->sriov_irq_bm, res, needed);
> > > +	return index;
> >
> > Shouldn't it be possible to use the xarray that was recently done for
> > dynamic IRQ allocation for this now? It might take a little more refactor
> > work though, hmm. It feels weird to introduce yet another data structure
> > for a nearly identical purpose...
> >
> 
> I used a bitmap because it was easy to get contiguous irq indexes (the VF
> needs its indexes to be contiguous). Do you know how to achieve that with
> an xarray? I felt that storing entries in an xarray and searching for a
> contiguous range was more complicated than a bitmap, but maybe I'm just
> not aware of an xarray mechanism useful for that purpose. If you know of
> one, please point me to it and I will rewrite this to use an xarray.
> 
> Thanks
> 
> [...]

My goal wasn't specifically to use an xarray, but rather to reuse the existing IRQ tracking data structure (which happens to be an xarray) instead of adding another, separate tracking structure. I don't know if that is feasible, so maybe having a separate bitmap is necessary.
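
A rough sketch of the mismatch, using the generic xarray API rather than
anything ice-specific (tracker, entry and max_irq are placeholders):

	u32 id;
	int ret;

	/* xa_alloc() hands back *some* free index within the limit;
	 * consecutive calls give no contiguity guarantee, while the VF
	 * mapping needs num_msix consecutive vector indexes
	 */
	ret = xa_alloc(&tracker, &id, entry, XA_LIMIT(0, max_irq), GFP_KERNEL);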

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking
  2023-06-15 15:57     ` Keller, Jacob E
@ 2023-11-23 17:22       ` Romanowski, Rafal
  -1 siblings, 0 replies; 24+ messages in thread
From: Romanowski, Rafal @ 2023-11-23 17:22 UTC (permalink / raw)
  To: Keller, Jacob E, Michal Swiatkowski, intel-wired-lan
  Cc: netdev, Kitszel, Przemyslaw

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Keller, Jacob E
> Sent: Thursday, June 15, 2023 5:58 PM
> To: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>; intel-wired-
> lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Kitszel, Przemyslaw
> <przemyslaw.kitszel@intel.com>
> Subject: Re: [Intel-wired-lan] [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X
> using resource tracking
> 
> 
> 
> > -----Original Message-----
> > From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > Sent: Thursday, June 15, 2023 5:39 AM
> > To: intel-wired-lan@lists.osuosl.org
> > Cc: netdev@vger.kernel.org; Keller, Jacob E
> > <jacob.e.keller@intel.com>; Kitszel, Przemyslaw
> > <przemyslaw.kitszel@intel.com>; Michal Swiatkowski
> > <michal.swiatkowski@linux.intel.com>
> > Subject: [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource
> > tracking
> >
> > Track MSI-X for VFs using bitmap, by setting and clearing bitmap
> > during allocation and freeing.
> >
> > Try to linearize irqs usage for VFs, by freeing them and allocating
> > once again. Do it only for VFs that aren't currently running.
> >
> > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_sriov.c | 170
> > ++++++++++++++++++---
> >  1 file changed, 151 insertions(+), 19 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c
> > b/drivers/net/ethernet/intel/ice/ice_sriov.c
> > index e20ef1924fae..78a41163755b 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c


Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>



^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2023-11-23 17:23 UTC | newest]

Thread overview: 24+ messages
2023-06-15 12:38 [PATCH iwl-next v1 0/4] change MSI-X vectors per VF Michal Swiatkowski
2023-06-15 12:38 ` [PATCH iwl-next v1 1/4] ice: implement num_msix field " Michal Swiatkowski
2023-06-15 14:22   ` Maciej Fijalkowski
2023-06-15 14:43     ` Michal Swiatkowski
2023-06-15 12:38 ` [PATCH iwl-next v1 2/4] ice: add bitmap to track VF MSI-X usage Michal Swiatkowski
2023-06-15 12:38 ` [PATCH iwl-next v1 3/4] ice: set MSI-X vector count on VF Michal Swiatkowski
2023-06-15 12:38 ` [PATCH iwl-next v1 4/4] ice: manage VFs MSI-X using resource tracking Michal Swiatkowski
2023-06-15 15:57   ` Keller, Jacob E
2023-06-16  8:37     ` Michal Swiatkowski
2023-06-20  5:37       ` Keller, Jacob E
2023-11-23 17:22     ` Romanowski, Rafal
2023-06-16 20:37 ` [PATCH iwl-next v1 0/4] change MSI-X vectors per VF Tony Nguyen
