* [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration
From: Simon Horman @ 2022-09-20 15:14 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, oss-drivers, Diana Wang, Peng Zhang

Hi,

this short series adds the max_vf_queue generic devlink device parameter.
Its intention is to allow configuration of the number of queues
associated with VFs, and to facilitate having VFs with different queue
counts.

The series also adds support for multi-queue VFs to the nfp driver
and support for the max_vf_queue feature described above.

Diana Wang (1):
  nfp: support VF multi-queues configuration

Peng Zhang (2):
  devlink: Add new "max_vf_queue" generic device param
  nfp: devlink: add the devlink parameter "max_vf_queue" support

 .../networking/devlink/devlink-params.rst     |   5 +
 Documentation/networking/devlink/nfp.rst      |   2 +
 .../ethernet/netronome/nfp/devlink_param.c    | 114 ++++++++++++++++++
 drivers/net/ethernet/netronome/nfp/nfp_main.c |   6 +
 drivers/net/ethernet/netronome/nfp/nfp_main.h |  13 ++
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |   1 +
 .../net/ethernet/netronome/nfp/nfp_net_main.c |   3 +
 .../ethernet/netronome/nfp/nfp_net_sriov.c    | 101 ++++++++++++++++
 .../ethernet/netronome/nfp/nfp_net_sriov.h    |   3 +
 include/net/devlink.h                         |   4 +
 net/core/devlink.c                            |   5 +
 11 files changed, 257 insertions(+)

-- 
2.30.2


* [PATCH/RFC net-next 1/3] nfp: support VF multi-queues configuration
From: Simon Horman @ 2022-09-20 15:14 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, oss-drivers, Diana Wang, Peng Zhang

From: Diana Wang <na.wang@corigine.com>

Add support for configuring VF multi-queue.

This configures the maximum queue number for each VF; the user can
still modify the number of queues in use with
ethtool -L <intf>

The queue count that can be configured for each VF is one of
{16, 8, 4, 2, 1}, and the total number of configured queues must not
exceed the available VF queue resources.

If more VFs are created than were configured, the VF count is
validated against the unused queues: the number of remaining queues
must not be smaller than the number of extra VFs. If this condition
is met, each extra VF is assigned a single queue.

If not configured (the default mode), the created VFs divide the
total VF queues equally, with each share rounded down to a power
of 2.
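
Purely as an illustration (not part of this patch), the default
division can be sketched in user-space C; the helper name and the
queue-type table below are hypothetical:

#include <stdio.h>

/* Per-VF queue counts supported by the firmware, largest first. */
static const unsigned int vf_queue_types[] = { 16, 8, 4, 2, 1 };

/* Default mode: share max_vf_queues equally among num_vfs and round
 * each VF's share down to the nearest supported power of 2.
 */
static unsigned int default_vf_queues(unsigned int max_vf_queues,
				      unsigned int num_vfs)
{
	unsigned int i;

	for (i = 0; i < sizeof(vf_queue_types) / sizeof(vf_queue_types[0]); i++)
		if (max_vf_queues / num_vfs >= vf_queue_types[i])
			return vf_queue_types[i];
	return 0;	/* not enough queues for even one per VF */
}

int main(void)
{
	/* e.g. 63 VF queues shared by 6 VFs: 10 each, rounded down to 8 */
	printf("%u\n", default_vf_queues(63, 6));
	return 0;
}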

Signed-off-by: Diana Wang <na.wang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
---
 drivers/net/ethernet/netronome/nfp/nfp_main.c |   6 ++
 drivers/net/ethernet/netronome/nfp/nfp_main.h |  13 +++
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |   1 +
 .../net/ethernet/netronome/nfp/nfp_net_main.c |   3 +
 .../ethernet/netronome/nfp/nfp_net_sriov.c    | 101 ++++++++++++++++++
 .../ethernet/netronome/nfp/nfp_net_sriov.h    |   3 +
 6 files changed, 127 insertions(+)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
index 873429f7a6da..be1744d5b7ea 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
@@ -29,6 +29,7 @@
 #include "nfp_app.h"
 #include "nfp_main.h"
 #include "nfp_net.h"
+#include "nfp_net_sriov.h"
 
 static const char nfp_driver_name[] = "nfp";
 
@@ -252,6 +253,10 @@ static int nfp_pcie_sriov_enable(struct pci_dev *pdev, int num_vfs)
 		return -EINVAL;
 	}
 
+	err = nfp_vf_queues_config(pf, num_vfs);
+	if (err)
+		return err;
+
 	err = pci_enable_sriov(pdev, num_vfs);
 	if (err) {
 		dev_warn(&pdev->dev, "Failed to enable PCI SR-IOV: %d\n", err);
@@ -782,6 +787,7 @@ static int nfp_pci_probe(struct pci_dev *pdev,
 	if (err)
 		goto err_fw_unload;
 
+	pf->default_config_vfs_queue = true;
 	pf->num_vfs = pci_num_vf(pdev);
 	if (pf->num_vfs > pf->limit_vfs) {
 		dev_err(&pdev->dev,
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.h b/drivers/net/ethernet/netronome/nfp/nfp_main.h
index 6805af186f1b..37b2bd6091f0 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.h
@@ -17,6 +17,12 @@
 #include <linux/workqueue.h>
 #include <net/devlink.h>
 
+ /* Number of max-queue-number types that can be configured;
+  * the supported per-VF queue counts are 16, 8, 4, 2 and 1.
+  */
+#define NFP_NET_CFG_QUEUE_TYPE		5
+#define NFP_NET_CFG_MAX_Q(type)		(1 << (NFP_NET_CFG_QUEUE_TYPE - (type) - 1))
+
 struct dentry;
 struct device;
 struct pci_dev;
@@ -63,6 +69,10 @@ struct nfp_dumpspec {
  * @irq_entries:	Array of MSI-X entries for all vNICs
  * @limit_vfs:		Number of VFs supported by firmware (~0 for PCI limit)
  * @num_vfs:		Number of SR-IOV VFs enabled
+ * @max_vf_queues:	Number of queues that can be allocated to VFs
+ * @config_vfs_queue:	Number of VFs configured for each max-queue-number
+ *			type; the distributable queue counts are {16, 8, 4, 2, 1}
+ * @default_config_vfs_queue:	True if queues are distributed evenly among VFs
  * @fw_loaded:		Is the firmware loaded?
  * @unload_fw_on_remove:Do we need to unload firmware on driver removal?
  * @sp_indiff:		Is the firmware indifferent to physical port speed?
@@ -112,6 +122,9 @@ struct nfp_pf {
 
 	unsigned int limit_vfs;
 	unsigned int num_vfs;
+	unsigned int max_vf_queues;
+	u8 config_vfs_queue[NFP_NET_CFG_QUEUE_TYPE];
+	bool default_config_vfs_queue;
 
 	bool fw_loaded;
 	bool unload_fw_on_remove;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index a101ff30a1ae..5deeae87b684 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -78,6 +78,7 @@
 /* Queue/Ring definitions */
 #define NFP_NET_MAX_TX_RINGS	64	/* Max. # of Tx rings per device */
 #define NFP_NET_MAX_RX_RINGS	64	/* Max. # of Rx rings per device */
+#define NFP_NET_CTRL_RINGS	1	/* Max. # of Ctrl rings per device */
 #define NFP_NET_MAX_R_VECS	(NFP_NET_MAX_TX_RINGS > NFP_NET_MAX_RX_RINGS ? \
 				 NFP_NET_MAX_TX_RINGS : NFP_NET_MAX_RX_RINGS)
 #define NFP_NET_MAX_IRQS	(NFP_NET_NON_Q_VECTORS + NFP_NET_MAX_R_VECS)
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
index e2d4c487e8de..e5d5f88e60c7 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c
@@ -296,6 +296,7 @@ static int nfp_net_pf_init_vnics(struct nfp_pf *pf)
 		if (err)
 			goto err_prev_deinit;
 
+		pf->max_vf_queues -= nn->max_r_vecs;
 		id++;
 	}
 
@@ -794,6 +795,8 @@ int nfp_net_pci_probe(struct nfp_pf *pf)
 		}
 	}
 
+	pf->max_vf_queues = NFP_NET_MAX_R_VECS - NFP_NET_CTRL_RINGS;
+
 	err = nfp_net_pf_app_init(pf, qc_bar, stride);
 	if (err)
 		goto err_unmap;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.c b/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.c
index 6eeeb0fda91f..eca6e65089f4 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.c
@@ -29,6 +29,9 @@ nfp_net_sriov_check(struct nfp_app *app, int vf, u16 cap, const char *msg, bool
 		return -EOPNOTSUPP;
 	}
 
+	if (cap == NFP_NET_VF_CFG_MB_CAP_QUEUE_CONFIG)
+		return 0;
+
 	if (vf < 0 || vf >= app->pf->num_vfs) {
 		if (warn)
 			nfp_warn(app->pf->cpp, "invalid VF id %d\n", vf);
@@ -309,3 +312,101 @@ int nfp_app_get_vf_config(struct net_device *netdev, int vf,
 
 	return 0;
 }
+
+static int nfp_set_vf_queue_config(struct nfp_pf *pf, int num_vfs)
+{
+	unsigned char config_content[sizeof(u32)] = {0};
+	unsigned int i, j, k, cfg_vf_count, offset;
+	struct nfp_net *nn;
+	u32 raw;
+	int err;
+
+	raw = 0; k = 0; cfg_vf_count = 0;
+	offset = NFP_NET_VF_CFG_MB_SZ + pf->limit_vfs * NFP_NET_VF_CFG_SZ;
+
+	for (i = 0; i < NFP_NET_CFG_QUEUE_TYPE; i++) {
+		for (j = 0; j < pf->config_vfs_queue[i]; j++) {
+			config_content[k++] = NFP_NET_CFG_MAX_Q(i);
+			cfg_vf_count++;
+			if (k == sizeof(raw) || cfg_vf_count == num_vfs) {
+				raw = config_content[0] |
+				      (config_content[1] << BITS_PER_BYTE) |
+				      (config_content[2] << (2 * BITS_PER_BYTE)) |
+				      (config_content[3] << (3 * BITS_PER_BYTE));
+				writel(raw, pf->vfcfg_tbl2 + offset);
+				offset += sizeof(raw);
+				memset(config_content, 0, sizeof(u32));
+				k = 0;
+			}
+		}
+	}
+
+	writew(NFP_NET_VF_CFG_MB_UPD_QUEUE_CONFIG, pf->vfcfg_tbl2 + NFP_NET_VF_CFG_MB_UPD);
+
+	nn = list_first_entry(&pf->vnics, struct nfp_net, vnic_list);
+	err = nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_VF);
+	if (err) {
+		nfp_warn(pf->cpp,
+			 "FW reconfig VF config queue failed: %d\n", err);
+		return -EINVAL;
+	}
+
+	err = readw(pf->vfcfg_tbl2 + NFP_NET_VF_CFG_MB_RET);
+	if (err) {
+		nfp_warn(pf->cpp,
+			 "FW refused VF config queue update with errno: %d\n", err);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int nfp_vf_queues_config(struct nfp_pf *pf, int num_vfs)
+{
+	unsigned int i, j, cfg_num_queues = 0, cfg_num_vfs;
+
+	if (nfp_net_sriov_check(pf->app, 0, NFP_NET_VF_CFG_MB_CAP_QUEUE_CONFIG, "max_queue", true))
+		return 0;
+
+	/* In default mode, the created VFs divide all the VF queues equally,
+	 * with each share rounded down to a power of 2
+	 */
+	if (pf->default_config_vfs_queue) {
+		memset(pf->config_vfs_queue, 0, NFP_NET_CFG_QUEUE_TYPE);
+		j = pf->max_vf_queues / num_vfs;
+		for (i = 0; i < NFP_NET_CFG_QUEUE_TYPE; i++) {
+			if (j >= NFP_NET_CFG_MAX_Q(i)) {
+				pf->config_vfs_queue[i] = num_vfs;
+				break;
+			}
+		}
+		return nfp_set_vf_queue_config(pf, num_vfs);
+	}
+
+	for (i = 0, cfg_num_vfs = 0; i < NFP_NET_CFG_QUEUE_TYPE; i++) {
+		cfg_num_queues += NFP_NET_CFG_MAX_Q(i) * pf->config_vfs_queue[i];
+		cfg_num_vfs += pf->config_vfs_queue[i];
+	}
+
+	if (cfg_num_queues > pf->max_vf_queues) {
+		dev_warn(&pf->pdev->dev,
+			 "Configured queue count exceeds the total number of VF queues.\n");
+		return -EINVAL;
+	}
+
+	cfg_num_queues = pf->max_vf_queues - cfg_num_queues;
+
+	if (num_vfs > cfg_num_vfs) {
+		cfg_num_vfs = num_vfs - cfg_num_vfs;
+		if (cfg_num_queues < cfg_num_vfs) {
+			dev_warn(&pf->pdev->dev,
+				 "Not enough remaining queues to allocate to the extra VFs.\n");
+			return -EINVAL;
+		}
+		dev_info(&pf->pdev->dev,
+			 "Each extra VF is allocated a single queue.\n");
+		pf->config_vfs_queue[NFP_NET_CFG_QUEUE_TYPE - 1] += cfg_num_vfs;
+	}
+
+	return nfp_set_vf_queue_config(pf, num_vfs);
+}
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.h b/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.h
index 2d445fa199dc..36df29fdaf0e 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_sriov.h
@@ -21,6 +21,7 @@
 #define   NFP_NET_VF_CFG_MB_CAP_TRUST			  (0x1 << 4)
 #define   NFP_NET_VF_CFG_MB_CAP_VLAN_PROTO		  (0x1 << 5)
 #define   NFP_NET_VF_CFG_MB_CAP_RATE			  (0x1 << 6)
+#define   NFP_NET_VF_CFG_MB_CAP_QUEUE_CONFIG		  (0x1 << 7)
 #define NFP_NET_VF_CFG_MB_RET				0x2
 #define NFP_NET_VF_CFG_MB_UPD				0x4
 #define   NFP_NET_VF_CFG_MB_UPD_MAC			  (0x1 << 0)
@@ -30,6 +31,7 @@
 #define   NFP_NET_VF_CFG_MB_UPD_TRUST			  (0x1 << 4)
 #define   NFP_NET_VF_CFG_MB_UPD_VLAN_PROTO		  (0x1 << 5)
 #define   NFP_NET_VF_CFG_MB_UPD_RATE			  (0x1 << 6)
+#define   NFP_NET_VF_CFG_MB_UPD_QUEUE_CONFIG		  (0x1 << 7)
 #define NFP_NET_VF_CFG_MB_VF_NUM			0x7
 
 /* VF config entry
@@ -67,5 +69,6 @@ int nfp_app_set_vf_link_state(struct net_device *netdev, int vf,
 			      int link_state);
 int nfp_app_get_vf_config(struct net_device *netdev, int vf,
 			  struct ifla_vf_info *ivi);
+int nfp_vf_queues_config(struct nfp_pf *pf, int num_vfs);
 
 #endif /* _NFP_NET_SRIOV_H_ */
-- 
2.30.2


* [PATCH/RFC net-next 2/3] devlink: Add new "max_vf_queue" generic device param
From: Simon Horman @ 2022-09-20 15:14 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, oss-drivers, Diana Wang, Peng Zhang

From: Peng Zhang <peng.zhang@corigine.com>

A VF's max-queue-number is the maximum number of queues available to
that VF.

Add a new generic device parameter to dynamically configure the
max-queue-number of each VF to be created.

The string format is vendor specific. The suggested format is
...-V-W-X-Y-Z, where V is the number of VFs created with 16 queues,
W the number of VFs created with 8 queues, and so on, down to Z, the
number of VFs created with 1 queue.

For example, to configure
* 1x VF with 128 queues
* 1x VF with 64 queues
* 0x VF with 32 queues
* 0x VF with 16 queues
* 12x VF with 8 queues
* 2x VF with 4 queues
* 2x VF with 2 queues
* 0x VF with 1 queue, execute:

$ devlink dev param set pci/0000:01:00.0 \
                          name max_vf_queue value \
                          "1-1-0-0-12-2-2-0" cmode runtime

When more VFs are created than are covered by this parameter, the
extra VFs' max-queue-number is vendor specific.

If this parameter is not set, the VFs' max-queue-number is also
vendor specific.
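
As an illustration of the suggested format only (a hypothetical
user-space sketch, not part of this patch), the value above could be
parsed as follows, assuming eight queue types from 128 down to 1:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char value[] = "1-1-0-0-12-2-2-0";	/* the example above */
	unsigned int counts[8] = { 0 };		/* VFs per queue type */
	unsigned int queues = 128;
	unsigned int i = 0;
	char *tok;

	/* One dash-separated field per queue type, largest first. */
	for (tok = strtok(value, "-"); tok && i < 8;
	     tok = strtok(NULL, "-"))
		counts[i++] = strtoul(tok, NULL, 10);

	for (i = 0; i < 8; i++, queues /= 2)
		printf("%u VF(s) with %u queue(s)\n", counts[i], queues);
	return 0;
}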

Signed-off-by: Peng Zhang <peng.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
---
 Documentation/networking/devlink/devlink-params.rst | 5 +++++
 include/net/devlink.h                               | 4 ++++
 net/core/devlink.c                                  | 5 +++++
 3 files changed, 14 insertions(+)

diff --git a/Documentation/networking/devlink/devlink-params.rst b/Documentation/networking/devlink/devlink-params.rst
index 4e01dc32bc08..4b415b1acc9d 100644
--- a/Documentation/networking/devlink/devlink-params.rst
+++ b/Documentation/networking/devlink/devlink-params.rst
@@ -137,3 +137,8 @@ own name.
    * - ``event_eq_size``
      - u32
      - Control the size of asynchronous control events EQ.
+   * - ``max_vf_queue``
+     - String
+     - Configure the maximum queue number of each VF to be created. When
+       more VFs are created than are covered by this parameter, the extra
+       VFs' max-queue-number is vendor specific.
diff --git a/include/net/devlink.h b/include/net/devlink.h
index 264aa98e6da6..b756be29f824 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -496,6 +496,7 @@ enum devlink_param_generic_id {
 	DEVLINK_PARAM_GENERIC_ID_ENABLE_IWARP,
 	DEVLINK_PARAM_GENERIC_ID_IO_EQ_SIZE,
 	DEVLINK_PARAM_GENERIC_ID_EVENT_EQ_SIZE,
+	DEVLINK_PARAM_GENERIC_ID_MAX_VF_QUEUE,
 
 	/* add new param generic ids above here*/
 	__DEVLINK_PARAM_GENERIC_ID_MAX,
@@ -554,6 +555,9 @@ enum devlink_param_generic_id {
 #define DEVLINK_PARAM_GENERIC_EVENT_EQ_SIZE_NAME "event_eq_size"
 #define DEVLINK_PARAM_GENERIC_EVENT_EQ_SIZE_TYPE DEVLINK_PARAM_TYPE_U32
 
+#define DEVLINK_PARAM_GENERIC_MAX_VF_QUEUE_NAME "max_vf_queue"
+#define DEVLINK_PARAM_GENERIC_MAX_VF_QUEUE_TYPE DEVLINK_PARAM_TYPE_STRING
+
 #define DEVLINK_PARAM_GENERIC(_id, _cmodes, _get, _set, _validate)	\
 {									\
 	.id = DEVLINK_PARAM_GENERIC_ID_##_id,				\
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 7776dc82f88d..f447e8b11a80 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -5145,6 +5145,11 @@ static const struct devlink_param devlink_param_generic[] = {
 		.name = DEVLINK_PARAM_GENERIC_EVENT_EQ_SIZE_NAME,
 		.type = DEVLINK_PARAM_GENERIC_EVENT_EQ_SIZE_TYPE,
 	},
+	{
+		.id = DEVLINK_PARAM_GENERIC_ID_MAX_VF_QUEUE,
+		.name = DEVLINK_PARAM_GENERIC_MAX_VF_QUEUE_NAME,
+		.type = DEVLINK_PARAM_GENERIC_MAX_VF_QUEUE_TYPE,
+	},
 };
 
 static int devlink_param_generic_verify(const struct devlink_param *param)
-- 
2.30.2


* [PATCH/RFC net-next 3/3] nfp: devlink: add the devlink parameter "max_vf_queue" support
From: Simon Horman @ 2022-09-20 15:14 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, oss-drivers, Diana Wang, Peng Zhang

From: Peng Zhang <peng.zhang@corigine.com>

A VF's max-queue-number is the maximum number of queues available to
that VF.

Different users need different VF max-queue-numbers, and configuring
the VF max-queue-number dynamically can better match their use cases.
Hence, allow the user to configure the VF max-queue-number.

The string format is V-W-X-Y-Z, where V is the number of VFs created
with 16 queues, W the number of VFs created with 8 queues, and so on,
down to Z, the number of VFs created with 1 queue. Currently the
largest supported VF max-queue-number is 16.

For example, to configure
* 1x VF with 16 queues
* 1x VF with 8 queues
* 2x VF with 4 queues
* 2x VF with 2 queues
* 0x VF with 1 queue, execute:

$ devlink dev param set pci/0000:01:00.0 \
		   name max_vf_queue value "1-1-2-2-0" cmode runtime

The nfp also supports variable-length values: Z, Y-Z, X-Y-Z and
W-X-Y-Z are also valid formats, in which the fields that aren't
supplied are treated as zero.

For example, execute:
$ devlink dev param set pci/0000:01:00.0 \
                   name max_vf_queue value "1-1" cmode runtime

This configures the queues as follows:
* 0x VF with 16 queues
* 0x VF with 8 queues
* 0x VF with 4 queues
* 1x VF with 2 queues
* 1x VF with 1 queue

When more VFs are created than are covered by this parameter, the
extra VFs' max-queue-number is 1.

If this parameter is not set, each nfp VF's max-queue-number is the
average of the total VF queues over the number of VFs, rounded down
to a power of 2. Setting the parameter to "0-0-0-0-0" also selects
this default behaviour.

Signed-off-by: Peng Zhang <peng.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
---
 Documentation/networking/devlink/nfp.rst      |   2 +
 .../ethernet/netronome/nfp/devlink_param.c    | 114 ++++++++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/Documentation/networking/devlink/nfp.rst b/Documentation/networking/devlink/nfp.rst
index a1717db0dfcc..1936cee16dbe 100644
--- a/Documentation/networking/devlink/nfp.rst
+++ b/Documentation/networking/devlink/nfp.rst
@@ -18,6 +18,8 @@ Parameters
      - permanent
    * - ``reset_dev_on_drv_probe``
      - permanent
+   * - ``max_vf_queue``
+     - runtime
 
 Info versions
 =============
diff --git a/drivers/net/ethernet/netronome/nfp/devlink_param.c b/drivers/net/ethernet/netronome/nfp/devlink_param.c
index db297ee4d7ad..5856b45601f7 100644
--- a/drivers/net/ethernet/netronome/nfp/devlink_param.c
+++ b/drivers/net/ethernet/netronome/nfp/devlink_param.c
@@ -6,6 +6,7 @@
 #include "nfpcore/nfp.h"
 #include "nfpcore/nfp_nsp.h"
 #include "nfp_main.h"
+#include "nfp_net.h"
 
 /**
  * struct nfp_devlink_param_u8_arg - Devlink u8 parameter get/set arguments
@@ -191,7 +192,120 @@ nfp_devlink_param_u8_validate(struct devlink *devlink, u32 id,
 	return 0;
 }
 
+static int
+nfp_devlink_vq_config_get(struct devlink *devlink, u32 id,
+			  struct devlink_param_gset_ctx *ctx)
+{
+	struct nfp_pf *pf = devlink_priv(devlink);
+	char config[4 * NFP_NET_CFG_QUEUE_TYPE];
+	int i, len;
+
+	if (!pf)
+		return -ENODEV;
+
+	for (i = 0, len = 0; i < NFP_NET_CFG_QUEUE_TYPE; i++)
+		len += snprintf(config + len, sizeof(config) - len, "%03d-", pf->config_vfs_queue[i]);
+	config[len - 1] = '\0';
+
+	strscpy(ctx->val.vstr, config, sizeof(ctx->val.vstr));
+
+	return 0;
+}
+
+static int
+nfp_devlink_vq_config_set(struct devlink *devlink, u32 id,
+			  struct devlink_param_gset_ctx *ctx)
+{
+	unsigned int config_vfs_queue[NFP_NET_CFG_QUEUE_TYPE];
+	char vq[__DEVLINK_PARAM_MAX_STRING_VALUE];
+	struct nfp_pf *pf = devlink_priv(devlink);
+	char *num_vf, *value;
+	u8 config = 0;
+	int i;
+
+	if (!pf)
+		return -ENODEV;
+
+	strscpy(vq, ctx->val.vstr, sizeof(vq));
+	value = vq;
+	memset(config_vfs_queue, 0, sizeof(config_vfs_queue));
+
+	num_vf = strsep(&value, "-");
+	while (num_vf) {
+		if (kstrtouint(num_vf, 10, &config_vfs_queue[config++]))
+			return -EINVAL;
+		num_vf = strsep(&value, "-");
+	}
+
+	pf->default_config_vfs_queue = true;
+	memset(pf->config_vfs_queue, 0, sizeof(pf->config_vfs_queue));
+
+	for (i = NFP_NET_CFG_QUEUE_TYPE - 1; i >= 0; i--) {
+		if (config >= 1) {
+			pf->config_vfs_queue[i] = config_vfs_queue[--config];
+			if (pf->config_vfs_queue[i] && pf->default_config_vfs_queue)
+				pf->default_config_vfs_queue = false;
+		}
+	}
+
+	return 0;
+}
+
+static int
+nfp_devlink_vq_config_validate(struct devlink *devlink, u32 id,
+			       union devlink_param_value val,
+			       struct netlink_ext_ack *extack)
+{
+	unsigned int config_vfs_queue[NFP_NET_CFG_QUEUE_TYPE];
+	char vq[__DEVLINK_PARAM_MAX_STRING_VALUE];
+	struct nfp_pf *pf = devlink_priv(devlink);
+	char *num_vf, *value;
+	u32 total_q_num = 0;
+	u32 config = 0;
+	int i;
+
+	if (!pf) {
+		NL_SET_ERR_MSG_MOD(extack, "Can't find the device.");
+		return -ENODEV;
+	}
+
+	strscpy(vq, val.vstr, sizeof(vq));
+	value = vq;
+	memset(config_vfs_queue, 0, sizeof(config_vfs_queue));
+
+	num_vf = strsep(&value, "-");
+	while (num_vf) {
+		if (kstrtouint(num_vf, 10, &config_vfs_queue[config++])) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Invalid input format, expected format: V-W-X-Y-Z.");
+			return -EINVAL;
+		}
+
+		if (config > NFP_NET_CFG_QUEUE_TYPE) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "More queue-type fields were supplied than the number of supported queue types.");
+			return -EINVAL;
+		}
+		num_vf = strsep(&value, "-");
+	}
+
+	for (i = NFP_NET_CFG_QUEUE_TYPE - 1; i >= 0 && config; i--)
+		total_q_num += config_vfs_queue[--config] * NFP_NET_CFG_MAX_Q(i);
+
+	if (total_q_num > pf->max_vf_queues) {
+		NL_SET_ERR_MSG_MOD(extack, "The configured queue count exceeds the available VF queues.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static const struct devlink_param nfp_devlink_params[] = {
+	DEVLINK_PARAM_GENERIC(MAX_VF_QUEUE,
+			      BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+			      nfp_devlink_vq_config_get,
+			      nfp_devlink_vq_config_set,
+			      nfp_devlink_vq_config_validate),
 	DEVLINK_PARAM_GENERIC(FW_LOAD_POLICY,
 			      BIT(DEVLINK_PARAM_CMODE_PERMANENT),
 			      nfp_devlink_param_u8_get,
-- 
2.30.2


* Re: [PATCH/RFC net-next 2/3] devlink: Add new "max_vf_queue" generic device param
From: Edward Cree @ 2022-09-20 18:27 UTC (permalink / raw)
  To: Simon Horman, David Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, oss-drivers, Diana Wang, Peng Zhang

On 20/09/2022 16:14, Simon Horman wrote:
> From: Peng Zhang <peng.zhang@corigine.com>
> 
> A VF's max-queue-number is the maximum number of queues available to
> that VF.
> 
> Add a new generic device parameter to dynamically configure the
> max-queue-number of each VF to be created.
> 
> The string format is vendor specific. The suggested format is
> ...-V-W-X-Y-Z, where V is the number of VFs created with 16 queues,
> W the number of VFs created with 8 queues, and so on, down to Z, the
> number of VFs created with 1 queue.

I don't like this.
If I'm correctly understanding, it hardcodes an assumption that
 lower-numbered VFs will be the ones with more queues, and also
 makes it difficult to change a VF's max-queues on the fly.
Why not instead have a per-VF operation to set that VF's max
 queue count?  Ideally through the VF representor, perhaps as
 an ethtool param/tunable, rather than devlink.  Then the
 mechanism is flexible and makes no assumptions about policy.

-ed

* RE: [PATCH/RFC net-next 2/3] devlink: Add new "max_vf_queue" generic device param
From: Yinjun Zhang @ 2022-09-21  1:47 UTC (permalink / raw)
  To: Edward Cree, Simon Horman, David Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, oss-drivers, Diana Wang, Nole Zhang

On Tue, 20 Sep 2022 19:27:48 +0100, Edward Cree wrote:
> On 20/09/2022 16:14, Simon Horman wrote:
> > From: Peng Zhang <peng.zhang@corigine.com>
> >
> > A VF's max-queue-number is the maximum number of queues available
> > to that VF.
> >
> > Add a new generic device parameter to dynamically configure the
> > max-queue-number of each VF to be created.
> >
> > The string format is vendor specific. The suggested format is
> > ...-V-W-X-Y-Z, where V is the number of VFs created with 16 queues,
> > W the number of VFs created with 8 queues, and so on, down to Z,
> > the number of VFs created with 1 queue.
> 
> I don't like this.
> If I'm correctly understanding, it hardcodes an assumption that
>  lower-numbered VFs will be the ones with more queues, and also

Usually all VFs have the same max-q-num, so a config like
"...-0-N-0-.." will be the most common case. If you want some VFs to
have more queues than the others, then yes, the lower-numbered VFs
always have more. We think that's OK, since the user can decide which
VFs to use for what purpose. If you need VFs with fewer queues, then
use the higher-numbered VFs.
And the format is just a recommendation; the parsing is implemented
in the NIC driver, so it can be vendor specific.

>  makes it difficult to change a VF's max-queues on the fly.
> Why not instead have a per-VF operation to set that VF's max
>  queue count?  Ideally through the VF representor, perhaps as
>  an ethtool param/tunable, rather than devlink.  Then the
>  mechanism is flexible and makes no assumptions about policy.

The background of this configuration proposal is that we need a
way to reallocate the queue resource that is shared by all VFs.
So it should be pre-configured before VFs are created, which is why
the configuration is per-PF or per-pcidev. We're not supposed to
change the max-q-number once a VF is created.

Thanks.

> 
> -ed

* driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Jakub Kicinski @ 2022-09-21 13:34 UTC (permalink / raw)
  To: Simon Horman
  Cc: David Miller, Paolo Abeni, netdev, oss-drivers, Diana Wang,
	Peng Zhang, Michael Chan, Andy Gospodarek, Gal Pressman,
	Saeed Mahameed, Jesse Brandeburg, Tony Nguyen, Edward Cree,
	Vladimir Oltean, Andrew Lunn

On Tue, 20 Sep 2022 16:14:16 +0100 Simon Horman wrote:
> this short series adds the max_vf_queue generic devlink device parameter.
> Its intention is to allow configuration of the number of queues
> associated with VFs, and to facilitate having VFs with different queue
> counts.
> 
> The series also adds support for multi-queue VFs to the nfp driver
> and support for the max_vf_queue feature described above.

I think a similar API was discussed in the past by... Broadcom?
IIRC they wanted more flexibility, i.e. being able to set the
guaranteed and max allowed queue count.

Overall this seems like a typical resource division problem so 
we should try to use the devlink resource API or similar. More 
complex policies like guaranteed+max are going to be a pain over
params.


I wanted to ask a more general question, however. I see that you
haven't CCed even the major (for some def.) vendors' maintainers.

Would it be helpful for participation if we had a separate mailing 
list for discussing driver uAPI introduction which would hopefully 
be lower traffic? Or perhaps we can require a subject tag ([PATCH
net-next uapi] ?) so that people can set up email filters?

The cost is obviously yet another process thing to remember, and 
while this is nothing that lore+lei can't already do based on file 
path filters - I doubt y'all care enough to set that up for
yourselves... :)

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Simon Horman @ 2022-09-21 13:39 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, Paolo Abeni, netdev, oss-drivers, Diana Wang,
	Peng Zhang, Michael Chan, Andy Gospodarek, Gal Pressman,
	Saeed Mahameed, Jesse Brandeburg, Tony Nguyen, Edward Cree,
	Vladimir Oltean, Andrew Lunn

On Wed, Sep 21, 2022 at 06:34:48AM -0700, Jakub Kicinski wrote:
> On Tue, 20 Sep 2022 16:14:16 +0100 Simon Horman wrote:
> > this short series adds the max_vf_queue generic devlink device parameter.
> > Its intention is to allow configuration of the number of queues
> > associated with VFs, and to facilitate having VFs with different queue
> > counts.
> > 
> > The series also adds support for multi-queue VFs to the nfp driver
> > and support for the max_vf_queue feature described above.
> 
> I think a similar API was discussed in the past by... Broadcom?
> IIRC they wanted more flexibility, i.e. being able to set the
> guaranteed and max allowed queue count.
> 
> Overall this seems like a typical resource division problem so 
> we should try to use the devlink resource API or similar. More 
> complex policies like guaranteed+max are going to be a pain over
> params.
> 
> 
> I wanted to ask a more general question, however. I see that you
> haven't CCed even the major (for some def.) vendors' maintainers.

Sorry about that. I should have considered doing so in the first place.

> Would it be helpful for participation if we had a separate mailing 
> list for discussing driver uAPI introduction which would hopefully 
> be lower traffic? Or perhaps we can require a subject tag ([PATCH
> net-next uapi] ?) so that people can set up email filters?
> 
> The cost is obviously yet another process thing to remember, and 
> while this is nothing that lore+lei can't already do based on file 
> path filters - I doubt y'all care enough to set that up for
> yourselves... :)

Not defending myself here. And not sure if this is helpful.
But the issue for me at the time was not being clear on how to
reach the right audience.

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Gal Pressman @ 2022-09-22 13:37 UTC (permalink / raw)
  To: Simon Horman, Jakub Kicinski
  Cc: David Miller, Paolo Abeni, netdev, oss-drivers, Diana Wang,
	Peng Zhang, Michael Chan, Andy Gospodarek, Saeed Mahameed,
	Jesse Brandeburg, Tony Nguyen, Edward Cree, Vladimir Oltean,
	Andrew Lunn

On 21/09/2022 16:39, Simon Horman wrote:
> On Wed, Sep 21, 2022 at 06:34:48AM -0700, Jakub Kicinski wrote:
>> On Tue, 20 Sep 2022 16:14:16 +0100 Simon Horman wrote:
>>> this short series adds the max_vf_queue generic devlink device parameter.
>>> Its intention is to allow configuration of the number of queues
>>> associated with VFs, and to facilitate having VFs with different queue
>>> counts.
>>>
>>> The series also adds support for multi-queue VFs to the nfp driver
>>> and support for the max_vf_queue feature described above.
>> I think a similar API was discussed in the past by... Broadcom?
>> IIRC they wanted more flexibility, i.e. being able to set the
>> guaranteed and max allowed queue count.
>>
>> Overall this seems like a typical resource division problem so 
>> we should try to use the devlink resource API or similar. More 
>> complex policies like guaranteed+max are going to be a pain over
>> params.
>>
>>
>> I wanted to ask a more general question, however. I see that you
>> haven't CCed even the major (for some def.) vendors' maintainers.
> Sorry about that. I should have considered doing so in the first place.
>
>> Would it be helpful for participation if we had a separate mailing 
>> list for discussing driver uAPI introduction which would hopefully 
>> be lower traffic? Or perhaps we can require a subject tag ([PATCH
>> net-next uapi] ?) so that people can set up email filters?
>>
>> The cost is obviously yet another process thing to remember, and 
>> while this is nothing that lore+lei can't already do based on file 
>> path filters - I doubt y'all care enough to set that up for
>> yourselves... :)

It's not that simple though; this series, for example, doesn't touch
any uapi files directly.

> Not defending myself here. And not sure if this is helpful.
> But the issue for me at the time was not being clear on how to
> reach the right audience.

It's helpful if the right audience is subscribed to such a list.

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Jakub Kicinski @ 2022-09-22 13:49 UTC (permalink / raw)
  To: Gal Pressman
  Cc: Simon Horman, David Miller, Paolo Abeni, netdev, oss-drivers,
	Diana Wang, Peng Zhang, Michael Chan, Andy Gospodarek,
	Saeed Mahameed, Jesse Brandeburg, Tony Nguyen, Edward Cree,
	Vladimir Oltean, Andrew Lunn

On Thu, 22 Sep 2022 16:37:21 +0300 Gal Pressman wrote:
> >> The cost is obviously yet another process thing to remember, and 
> >> while this is nothing that lore+lei can't already do based on file 
> >> path filters - I doubt y'all care enough to set that up for
> >> yourselves... :)  
> 
> It's not that simple though; this series, for example, doesn't touch
> any uapi files directly.

Right, the joys of devlink params. I'd suspect a good filter would need
to match on both uapi and Documentation paths.

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Leon Romanovsky @ 2022-09-22 16:04 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Simon Horman, David Miller, Paolo Abeni, netdev, oss-drivers,
	Diana Wang, Peng Zhang, Michael Chan, Andy Gospodarek,
	Gal Pressman, Saeed Mahameed, Jesse Brandeburg, Tony Nguyen,
	Edward Cree, Vladimir Oltean, Andrew Lunn

On Wed, Sep 21, 2022 at 06:34:48AM -0700, Jakub Kicinski wrote:
> On Tue, 20 Sep 2022 16:14:16 +0100 Simon Horman wrote:
> > this short series adds the max_vf_queue generic devlink device parameter.
> > Its intention is to allow configuration of the number of queues
> > associated with VFs, and to facilitate having VFs with different queue
> > counts.

<...>

> Would it be helpful for participation if we had a separate mailing 
> list for discussing driver uAPI introduction which would hopefully 
> be lower traffic?

Please don't. It will lead to the opposite situation, where UAPI
discussions are hidden from most people. IMHO, every net vendor
should be subscribed to the netdev mailing list and should read,
review and participate.

Thanks

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
  1 sibling, 1 reply; 16+ messages in thread
From: Jakub Kicinski @ 2022-09-22 16:14 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Simon Horman, David Miller, Paolo Abeni, netdev, oss-drivers,
	Diana Wang, Peng Zhang, Michael Chan, Andy Gospodarek,
	Gal Pressman, Saeed Mahameed, Jesse Brandeburg, Tony Nguyen,
	Edward Cree, Vladimir Oltean, Andrew Lunn

On Thu, 22 Sep 2022 19:04:19 +0300 Leon Romanovsky wrote:
> > Would it be helpful for participation if we had a separate mailing 
> > list for discussing driver uAPI introduction which would hopefully 
> > be lower traffic?  
> 
> Please don't. It will lead to the opposite situation, where UAPI
> discussions are hidden from most people.

Oh, it'd just be an additional list, the patches would still need 
to CC netdev.

> IMHO, every net vendor should be subscribed to the netdev mailing
> list and should read, review and participate.

Alright, so we got two no votes so far.

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Andrew Lunn @ 2022-09-22 17:37 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jakub Kicinski, Simon Horman, David Miller, Paolo Abeni, netdev,
	oss-drivers, Diana Wang, Peng Zhang, Michael Chan,
	Andy Gospodarek, Gal Pressman, Saeed Mahameed, Jesse Brandeburg,
	Tony Nguyen, Edward Cree, Vladimir Oltean

On Thu, Sep 22, 2022 at 07:04:19PM +0300, Leon Romanovsky wrote:
> On Wed, Sep 21, 2022 at 06:34:48AM -0700, Jakub Kicinski wrote:
> > On Tue, 20 Sep 2022 16:14:16 +0100 Simon Horman wrote:
> > > this short series adds the max_vf_queue generic devlink device parameter.
> > > Its intention is to allow configuration of the number of queues
> > > associated with VFs, and to facilitate having VFs with different queue
> > > counts.
> 
> <...>
> 
> > Would it be helpful for participation if we had a separate mailing 
> > list for discussing driver uAPI introduction which would hopefully 
> > be lower traffic?
> 
> Please don't. It will lead to the opposite situation, where UAPI
> discussions are hidden from most people. IMHO, every net vendor
> should be subscribed to the netdev mailing list and should read,
> review and participate.

Good in theory, but how often do you really see it happen?

How many vendor developers really do review other vendors' driver
patches? It tends to be vendor-neutral reviewers who do reviews across
multiple vendors. I wish there was more cross-vendor review; it would
prevent having to repeat the same review comments again and again.
There are probably also a lot of good ideas in some drivers which
should be spread to other drivers. But if you don't look outside your
silo, you are blind to them.

However, I do agree: although netdev is not perfect, I think it is
better than another list, which would get even more ignored.

      Andrew

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Leon Romanovsky @ 2022-09-22 19:08 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Simon Horman, David Miller, Paolo Abeni, netdev, oss-drivers,
	Diana Wang, Peng Zhang, Michael Chan, Andy Gospodarek,
	Gal Pressman, Saeed Mahameed, Jesse Brandeburg, Tony Nguyen,
	Edward Cree, Vladimir Oltean, Andrew Lunn

On Thu, Sep 22, 2022 at 09:14:14AM -0700, Jakub Kicinski wrote:
> On Thu, 22 Sep 2022 19:04:19 +0300 Leon Romanovsky wrote:
> > > Would it be helpful for participation if we had a separate mailing 
> > > list for discussing driver uAPI introduction which would hopefully 
> > > be lower traffic?  
> > 
> > Please don't. It will lead to the opposite situation, where UAPI
> > discussions are hidden from most people.
> 
> Oh, it'd just be an additional list, the patches would still need 
> to CC netdev.

First, there is already such an ML: https://lore.kernel.org/linux-api/
Second, you disconnect the discussion from the actual patches, and
this can cause the same arguments to be repeated while reviewing
patches.

> 
> > IMHO, every net vendor should be subscribed to the netdev mailing
> > list and should read, review and participate.
> 
> Alright, so we got two no votes so far.

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Jakub Kicinski @ 2022-09-22 19:24 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Simon Horman, David Miller, Paolo Abeni, netdev, oss-drivers,
	Diana Wang, Peng Zhang, Michael Chan, Andy Gospodarek,
	Gal Pressman, Saeed Mahameed, Jesse Brandeburg, Tony Nguyen,
	Edward Cree, Vladimir Oltean, Andrew Lunn

On Thu, 22 Sep 2022 22:08:22 +0300 Leon Romanovsky wrote:
> > > Please don't. It will lead to the opposite situation, where UAPI
> > > discussions are hidden from most people.
> > 
> > Oh, it'd just be an additional list, the patches would still need 
> > to CC netdev.  
> 
> First, there is already such an ML: https://lore.kernel.org/linux-api/
> Second, you disconnect the discussion from the actual patches, and
> this can cause the same arguments to be repeated while reviewing
> patches.

I phrased what I meant poorly. The list would be in addition to netdev.
All patches CCing the new list would also have to CC netdev. So if you
subscribe and read netdev that'd be enough, no extra effort needed.

So no disconnect from patches or repetition. linux-api is obviously 
not a subset of netdev, it's not a fit.

* Re: driver uABI review list? (was: Re: [PATCH/RFC net-next 0/3] nfp: support VF multi-queues configuration)
From: Leon Romanovsky @ 2022-09-22 19:31 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: Jakub Kicinski, Simon Horman, David Miller, Paolo Abeni, netdev,
	oss-drivers, Diana Wang, Peng Zhang, Michael Chan,
	Andy Gospodarek, Gal Pressman, Saeed Mahameed, Jesse Brandeburg,
	Tony Nguyen, Edward Cree, Vladimir Oltean

On Thu, Sep 22, 2022 at 07:37:08PM +0200, Andrew Lunn wrote:
> On Thu, Sep 22, 2022 at 07:04:19PM +0300, Leon Romanovsky wrote:
> > On Wed, Sep 21, 2022 at 06:34:48AM -0700, Jakub Kicinski wrote:
> > > On Tue, 20 Sep 2022 16:14:16 +0100 Simon Horman wrote:
> > > > this short series adds the max_vf_queue generic devlink device parameter.
> > > > Its intention is to allow configuration of the number of queues
> > > > associated with VFs, and to facilitate having VFs with different queue
> > > > counts.
> > 
> > <...>
> > 
> > > Would it be helpful for participation if we had a separate mailing 
> > > list for discussing driver uAPI introduction which would hopefully 
> > > be lower traffic?
> > 
> > Please don't. It will lead to the opposite situation, where UAPI
> > discussions are hidden from most people. IMHO, every net vendor
> > should be subscribed to the netdev mailing list and should read,
> > review and participate.
> 
> Good in theory, but how often do you really see it happen?

I agree that the situation in netdev is not ideal, but it can be
improved by slightly changing the acceptance criteria.

As a rough idea (influenced by the DRM subsystem), require cross-vendor
review prior to merge. It doesn't need to be universal, and could apply
only to the most active companies. If reviews are not happening in a
sensible time frame, there are ways "to punish" a vendor that was asked
to review.

Thanks
