All of lore.kernel.org
* [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks
@ 2022-07-29  7:10 Jiri Pirko
  2022-07-29  7:10 ` [patch net-next 1/4] net: devlink: introduce "unregistering" mark and use it during devlinks iteration Jiri Pirko
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Jiri Pirko @ 2022-07-29  7:10 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

From: Jiri Pirko <jiri@nvidia.com>

The aim of this patchset is to remove devlink_mutex and eventually
enable parallel ops on the devlink netlink interface.

Jiri Pirko (4):
  net: devlink: introduce "unregistering" mark and use it during
    devlinks iteration
  net: devlink: convert reload command to take implicit devlink->lock
  net: devlink: remove devlink_mutex
  net: devlink: enable parallel ops on netlink interface

 drivers/net/ethernet/mellanox/mlx4/main.c     |   4 -
 .../net/ethernet/mellanox/mlx5/core/devlink.c |   4 -
 drivers/net/ethernet/mellanox/mlxsw/core.c    |   4 -
 drivers/net/netdevsim/dev.c                   |   6 -
 net/core/devlink.c                            | 110 ++++--------------
 5 files changed, 21 insertions(+), 107 deletions(-)

-- 
2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [patch net-next 1/4] net: devlink: introduce "unregistering" mark and use it during devlinks iteration
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
@ 2022-07-29  7:10 ` Jiri Pirko
  2022-07-29  7:10 ` [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock Jiri Pirko
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Jiri Pirko @ 2022-07-29  7:10 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

From: Jiri Pirko <jiri@nvidia.com>

Add a new mark called "unregistering", set at the beginning of the
devlink_unregister() function. Check this mark during devlinks
iteration in order to avoid taking a reference on a devlink that is
currently being unregistered.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
 net/core/devlink.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/net/core/devlink.c b/net/core/devlink.c
index c43c96554a3e..6b20196ada1a 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -207,6 +207,7 @@ static const struct nla_policy devlink_selftest_nl_policy[DEVLINK_ATTR_SELFTEST_
 
 static DEFINE_XARRAY_FLAGS(devlinks, XA_FLAGS_ALLOC);
 #define DEVLINK_REGISTERED XA_MARK_1
+#define DEVLINK_UNREGISTERING XA_MARK_2
 
 /* devlink instances are open to the access from the user space after
  * devlink_register() call. Such logical barrier allows us to have certain
@@ -305,6 +306,14 @@ devlinks_xa_find_get(struct net *net, unsigned long *indexp, xa_mark_t filter,
 	devlink = xa_find_fn(&devlinks, indexp, ULONG_MAX, DEVLINK_REGISTERED);
 	if (!devlink)
 		goto unlock;
+
+	/* In case devlink_unregister() was already called and "unregistering"
+	 * mark was set, do not allow to get a devlink reference here.
+	 * This prevents live-lock of devlink_unregister() wait for completion.
+	 */
+	if (xa_get_mark(&devlinks, *indexp, DEVLINK_UNREGISTERING))
+		goto retry;
+
 	/* For a possible retry, the xa_find_after() should be always used */
 	xa_find_fn = xa_find_after;
 	if (!devlink_try_get(devlink))
@@ -9809,11 +9818,13 @@ void devlink_unregister(struct devlink *devlink)
 	ASSERT_DEVLINK_REGISTERED(devlink);
 	/* Make sure that we are in .remove() routine */
 
+	xa_set_mark(&devlinks, devlink->index, DEVLINK_UNREGISTERING);
 	devlink_put(devlink);
 	wait_for_completion(&devlink->comp);
 
 	devlink_notify_unregister(devlink);
 	xa_clear_mark(&devlinks, devlink->index, DEVLINK_REGISTERED);
+	xa_clear_mark(&devlinks, devlink->index, DEVLINK_UNREGISTERING);
 }
 EXPORT_SYMBOL_GPL(devlink_unregister);
 
-- 
2.35.3



* [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
  2022-07-29  7:10 ` [patch net-next 1/4] net: devlink: introduce "unregistering" mark and use it during devlinks iteration Jiri Pirko
@ 2022-07-29  7:10 ` Jiri Pirko
  2022-08-05 16:21   ` Keller, Jacob E
  2022-07-29  7:10 ` [patch net-next 3/4] net: devlink: remove devlink_mutex Jiri Pirko
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Jiri Pirko @ 2022-07-29  7:10 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

From: Jiri Pirko <jiri@nvidia.com>

Convert the reload command to behave the same way as the rest of the
commands and let it be called with devlink->lock held. Remove the
temporary devl_lock taking from drivers. As the DEVLINK_NL_FLAG_NO_LOCK
flag is no longer used, remove it as well.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx4/main.c      |  4 ----
 .../net/ethernet/mellanox/mlx5/core/devlink.c  |  4 ----
 drivers/net/ethernet/mellanox/mlxsw/core.c     |  4 ----
 drivers/net/netdevsim/dev.c                    |  6 ------
 net/core/devlink.c                             | 18 +++++-------------
 5 files changed, 5 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index 2c764d1d897d..78c5f40382c9 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -3958,11 +3958,9 @@ static int mlx4_devlink_reload_down(struct devlink *devlink, bool netns_change,
 		NL_SET_ERR_MSG_MOD(extack, "Namespace change is not supported");
 		return -EOPNOTSUPP;
 	}
-	devl_lock(devlink);
 	if (persist->num_vfs)
 		mlx4_warn(persist->dev, "Reload performed on PF, will cause reset on operating Virtual Functions\n");
 	mlx4_restart_one_down(persist->pdev);
-	devl_unlock(devlink);
 	return 0;
 }
 
@@ -3975,10 +3973,8 @@ static int mlx4_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
 	struct mlx4_dev_persistent *persist = dev->persist;
 	int err;
 
-	devl_lock(devlink);
 	*actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
 	err = mlx4_restart_one_up(persist->pdev, true, devlink);
-	devl_unlock(devlink);
 	if (err)
 		mlx4_err(persist->dev, "mlx4_restart_one_up failed, ret=%d\n",
 			 err);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
index 1c05a7091698..66c6a7017695 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
@@ -164,7 +164,6 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
 		NL_SET_ERR_MSG_MOD(extack, "reload while VFs are present is unfavorable");
 	}
 
-	devl_lock(devlink);
 	switch (action) {
 	case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
 		mlx5_unload_one_devl_locked(dev);
@@ -181,7 +180,6 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
 		ret = -EOPNOTSUPP;
 	}
 
-	devl_unlock(devlink);
 	return ret;
 }
 
@@ -192,7 +190,6 @@ static int mlx5_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
 	struct mlx5_core_dev *dev = devlink_priv(devlink);
 	int ret = 0;
 
-	devl_lock(devlink);
 	*actions_performed = BIT(action);
 	switch (action) {
 	case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
@@ -211,7 +208,6 @@ static int mlx5_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
 		ret = -EOPNOTSUPP;
 	}
 
-	devl_unlock(devlink);
 	return ret;
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index a48f893cf7b0..e12918b6baa1 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -1499,9 +1499,7 @@ mlxsw_devlink_core_bus_device_reload_down(struct devlink *devlink,
 	if (!(mlxsw_core->bus->features & MLXSW_BUS_F_RESET))
 		return -EOPNOTSUPP;
 
-	devl_lock(devlink);
 	mlxsw_core_bus_device_unregister(mlxsw_core, true);
-	devl_unlock(devlink);
 	return 0;
 }
 
@@ -1515,12 +1513,10 @@ mlxsw_devlink_core_bus_device_reload_up(struct devlink *devlink, enum devlink_re
 
 	*actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
 			     BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE);
-	devl_lock(devlink);
 	err = mlxsw_core_bus_device_register(mlxsw_core->bus_info,
 					     mlxsw_core->bus,
 					     mlxsw_core->bus_priv, true,
 					     devlink, extack);
-	devl_unlock(devlink);
 	return err;
 }
 
diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
index 5802e80e8fe1..e88f783c297e 100644
--- a/drivers/net/netdevsim/dev.c
+++ b/drivers/net/netdevsim/dev.c
@@ -948,18 +948,15 @@ static int nsim_dev_reload_down(struct devlink *devlink, bool netns_change,
 {
 	struct nsim_dev *nsim_dev = devlink_priv(devlink);
 
-	devl_lock(devlink);
 	if (nsim_dev->dont_allow_reload) {
 		/* For testing purposes, user set debugfs dont_allow_reload
 		 * value to true. So forbid it.
 		 */
 		NL_SET_ERR_MSG_MOD(extack, "User forbid the reload for testing purposes");
-		devl_unlock(devlink);
 		return -EOPNOTSUPP;
 	}
 
 	nsim_dev_reload_destroy(nsim_dev);
-	devl_unlock(devlink);
 	return 0;
 }
 
@@ -970,19 +967,16 @@ static int nsim_dev_reload_up(struct devlink *devlink, enum devlink_reload_actio
 	struct nsim_dev *nsim_dev = devlink_priv(devlink);
 	int ret;
 
-	devl_lock(devlink);
 	if (nsim_dev->fail_reload) {
 		/* For testing purposes, user set debugfs fail_reload
 		 * value to true. Fail right away.
 		 */
 		NL_SET_ERR_MSG_MOD(extack, "User setup the reload to fail for testing purposes");
-		devl_unlock(devlink);
 		return -EINVAL;
 	}
 
 	*actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
 	ret = nsim_dev_reload_create(nsim_dev, extack);
-	devl_unlock(devlink);
 	return ret;
 }
 
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 6b20196ada1a..57865b231364 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -768,12 +768,6 @@ devlink_region_snapshot_get_by_id(struct devlink_region *region, u32 id)
 #define DEVLINK_NL_FLAG_NEED_RATE_NODE		BIT(3)
 #define DEVLINK_NL_FLAG_NEED_LINECARD		BIT(4)
 
-/* The per devlink instance lock is taken by default in the pre-doit
- * operation, yet several commands do not require this. The global
- * devlink lock is taken and protects from disruption by user-calls.
- */
-#define DEVLINK_NL_FLAG_NO_LOCK			BIT(5)
-
 static int devlink_nl_pre_doit(const struct genl_ops *ops,
 			       struct sk_buff *skb, struct genl_info *info)
 {
@@ -788,8 +782,7 @@ static int devlink_nl_pre_doit(const struct genl_ops *ops,
 		mutex_unlock(&devlink_mutex);
 		return PTR_ERR(devlink);
 	}
-	if (~ops->internal_flags & DEVLINK_NL_FLAG_NO_LOCK)
-		devl_lock(devlink);
+	devl_lock(devlink);
 	info->user_ptr[0] = devlink;
 	if (ops->internal_flags & DEVLINK_NL_FLAG_NEED_PORT) {
 		devlink_port = devlink_port_get_from_info(devlink, info);
@@ -831,8 +824,7 @@ static int devlink_nl_pre_doit(const struct genl_ops *ops,
 	return 0;
 
 unlock:
-	if (~ops->internal_flags & DEVLINK_NL_FLAG_NO_LOCK)
-		devl_unlock(devlink);
+	devl_unlock(devlink);
 	devlink_put(devlink);
 	mutex_unlock(&devlink_mutex);
 	return err;
@@ -849,8 +841,7 @@ static void devlink_nl_post_doit(const struct genl_ops *ops,
 		linecard = info->user_ptr[1];
 		devlink_linecard_put(linecard);
 	}
-	if (~ops->internal_flags & DEVLINK_NL_FLAG_NO_LOCK)
-		devl_unlock(devlink);
+	devl_unlock(devlink);
 	devlink_put(devlink);
 	mutex_unlock(&devlink_mutex);
 }
@@ -9414,7 +9405,6 @@ static const struct genl_small_ops devlink_nl_ops[] = {
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = devlink_nl_cmd_reload,
 		.flags = GENL_ADMIN_PERM,
-		.internal_flags = DEVLINK_NL_FLAG_NO_LOCK,
 	},
 	{
 		.cmd = DEVLINK_CMD_PARAM_GET,
@@ -12507,10 +12497,12 @@ static void __net_exit devlink_pernet_pre_exit(struct net *net)
 	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(net, index, devlink) {
 		WARN_ON(!(devlink->features & DEVLINK_F_RELOAD));
+		mutex_lock(&devlink->lock);
 		err = devlink_reload(devlink, &init_net,
 				     DEVLINK_RELOAD_ACTION_DRIVER_REINIT,
 				     DEVLINK_RELOAD_LIMIT_UNSPEC,
 				     &actions_performed, NULL);
+		mutex_unlock(&devlink->lock);
 		if (err && err != -EOPNOTSUPP)
 			pr_warn("Failed to reload devlink instance into init_net\n");
 		devlink_put(devlink);
-- 
2.35.3



* [patch net-next 3/4] net: devlink: remove devlink_mutex
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
  2022-07-29  7:10 ` [patch net-next 1/4] net: devlink: introduce "unregistering" mark and use it during devlinks iteration Jiri Pirko
  2022-07-29  7:10 ` [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock Jiri Pirko
@ 2022-07-29  7:10 ` Jiri Pirko
  2022-07-29  7:10 ` [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface Jiri Pirko
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Jiri Pirko @ 2022-07-29  7:10 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

From: Jiri Pirko <jiri@nvidia.com>

All accesses to the devlink structure from user space and from drivers
are locked by the per-instance devlink->lock mutex. Also, iteration over
the devlinks xarray is handled by iteration helpers that take a devlink
reference.

Therefore, remove devlink_mutex as it is no longer needed.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
 net/core/devlink.c | 80 +++-------------------------------------------
 1 file changed, 4 insertions(+), 76 deletions(-)

diff --git a/net/core/devlink.c b/net/core/devlink.c
index 57865b231364..06cd7c1a1f0a 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -225,12 +225,6 @@ static DEFINE_XARRAY_FLAGS(devlinks, XA_FLAGS_ALLOC);
 #define ASSERT_DEVLINK_NOT_REGISTERED(d)                                       \
 	WARN_ON_ONCE(xa_get_mark(&devlinks, (d)->index, DEVLINK_REGISTERED))
 
-/* devlink_mutex
- *
- * An overall lock guarding every operation coming from userspace.
- */
-static DEFINE_MUTEX(devlink_mutex);
-
 struct net *devlink_net(const struct devlink *devlink)
 {
 	return read_pnet(&devlink->_net);
@@ -776,12 +770,9 @@ static int devlink_nl_pre_doit(const struct genl_ops *ops,
 	struct devlink *devlink;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlink = devlink_get_from_attrs(genl_info_net(info), info->attrs);
-	if (IS_ERR(devlink)) {
-		mutex_unlock(&devlink_mutex);
+	if (IS_ERR(devlink))
 		return PTR_ERR(devlink);
-	}
 	devl_lock(devlink);
 	info->user_ptr[0] = devlink;
 	if (ops->internal_flags & DEVLINK_NL_FLAG_NEED_PORT) {
@@ -826,7 +817,6 @@ static int devlink_nl_pre_doit(const struct genl_ops *ops,
 unlock:
 	devl_unlock(devlink);
 	devlink_put(devlink);
-	mutex_unlock(&devlink_mutex);
 	return err;
 }
 
@@ -843,7 +833,6 @@ static void devlink_nl_post_doit(const struct genl_ops *ops,
 	}
 	devl_unlock(devlink);
 	devlink_put(devlink);
-	mutex_unlock(&devlink_mutex);
 }
 
 static struct genl_family devlink_nl_family;
@@ -1408,7 +1397,6 @@ static int devlink_nl_cmd_rate_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(devlink_rate, &devlink->rate_list, list) {
@@ -1433,7 +1421,6 @@ static int devlink_nl_cmd_rate_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
 	if (err != -EMSGSIZE)
 		return err;
 
@@ -1504,7 +1491,6 @@ static int devlink_nl_cmd_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		if (idx < start) {
 			idx++;
@@ -1521,8 +1507,6 @@ static int devlink_nl_cmd_get_dumpit(struct sk_buff *msg,
 		idx++;
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -1559,7 +1543,6 @@ static int devlink_nl_cmd_port_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(devlink_port, &devlink->port_list, list) {
@@ -1583,8 +1566,6 @@ static int devlink_nl_cmd_port_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -2238,7 +2219,6 @@ static int devlink_nl_cmd_linecard_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		mutex_lock(&devlink->linecards_lock);
 		list_for_each_entry(linecard, &devlink->linecard_list, list) {
@@ -2265,8 +2245,6 @@ static int devlink_nl_cmd_linecard_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -2503,7 +2481,6 @@ static int devlink_nl_cmd_sb_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(devlink_sb, &devlink->sb_list, list) {
@@ -2527,8 +2504,6 @@ static int devlink_nl_cmd_sb_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -2648,7 +2623,6 @@ static int devlink_nl_cmd_sb_pool_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		if (!devlink->ops->sb_pool_get)
 			goto retry;
@@ -2672,8 +2646,6 @@ static int devlink_nl_cmd_sb_pool_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	if (err != -EMSGSIZE)
 		return err;
 
@@ -2865,7 +2837,6 @@ static int devlink_nl_cmd_sb_port_pool_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		if (!devlink->ops->sb_port_pool_get)
 			goto retry;
@@ -2889,8 +2860,6 @@ static int devlink_nl_cmd_sb_port_pool_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	if (err != -EMSGSIZE)
 		return err;
 
@@ -3110,7 +3079,6 @@ devlink_nl_cmd_sb_tc_pool_bind_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		if (!devlink->ops->sb_tc_pool_bind_get)
 			goto retry;
@@ -3135,8 +3103,6 @@ devlink_nl_cmd_sb_tc_pool_bind_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	if (err != -EMSGSIZE)
 		return err;
 
@@ -4908,7 +4874,6 @@ static int devlink_nl_cmd_selftests_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		if (idx < start || !devlink->ops->selftest_check)
 			goto inc;
@@ -4927,7 +4892,6 @@ static int devlink_nl_cmd_selftests_get_dumpit(struct sk_buff *msg,
 		idx++;
 		devlink_put(devlink);
 	}
-	mutex_unlock(&devlink_mutex);
 
 	if (err != -EMSGSIZE)
 		return err;
@@ -5393,7 +5357,6 @@ static int devlink_nl_cmd_param_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(param_item, &devlink->param_list, list) {
@@ -5419,8 +5382,6 @@ static int devlink_nl_cmd_param_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	if (err != -EMSGSIZE)
 		return err;
 
@@ -5621,7 +5582,6 @@ static int devlink_nl_cmd_port_param_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(devlink_port, &devlink->port_list, list) {
@@ -5652,8 +5612,6 @@ static int devlink_nl_cmd_port_param_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	if (err != -EMSGSIZE)
 		return err;
 
@@ -6208,7 +6166,6 @@ static int devlink_nl_cmd_region_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		err = devlink_nl_cmd_region_get_devlink_dumpit(msg, cb, devlink,
 							       &idx, start);
@@ -6217,7 +6174,6 @@ static int devlink_nl_cmd_region_get_dumpit(struct sk_buff *msg,
 			goto out;
 	}
 out:
-	mutex_unlock(&devlink_mutex);
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -6483,12 +6439,9 @@ static int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
 
 	start_offset = *((u64 *)&cb->args[0]);
 
-	mutex_lock(&devlink_mutex);
 	devlink = devlink_get_from_attrs(sock_net(cb->skb->sk), attrs);
-	if (IS_ERR(devlink)) {
-		err = PTR_ERR(devlink);
-		goto out_dev;
-	}
+	if (IS_ERR(devlink))
+		return PTR_ERR(devlink);
 
 	devl_lock(devlink);
 
@@ -6588,8 +6541,6 @@ static int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
 	genlmsg_end(skb, hdr);
 	devl_unlock(devlink);
 	devlink_put(devlink);
-	mutex_unlock(&devlink_mutex);
-
 	return skb->len;
 
 nla_put_failure:
@@ -6597,8 +6548,6 @@ static int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
 out_unlock:
 	devl_unlock(devlink);
 	devlink_put(devlink);
-out_dev:
-	mutex_unlock(&devlink_mutex);
 	return err;
 }
 
@@ -6747,7 +6696,6 @@ static int devlink_nl_cmd_info_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err = 0;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		if (idx < start || !devlink->ops->info_get)
 			goto inc;
@@ -6768,7 +6716,6 @@ static int devlink_nl_cmd_info_get_dumpit(struct sk_buff *msg,
 		idx++;
 		devlink_put(devlink);
 	}
-	mutex_unlock(&devlink_mutex);
 
 	if (err != -EMSGSIZE)
 		return err;
@@ -7847,18 +7794,13 @@ devlink_health_reporter_get_from_cb(struct netlink_callback *cb)
 	struct nlattr **attrs = info->attrs;
 	struct devlink *devlink;
 
-	mutex_lock(&devlink_mutex);
 	devlink = devlink_get_from_attrs(sock_net(cb->skb->sk), attrs);
 	if (IS_ERR(devlink))
-		goto unlock;
+		return NULL;
 
 	reporter = devlink_health_reporter_get_from_attrs(devlink, attrs);
 	devlink_put(devlink);
-	mutex_unlock(&devlink_mutex);
 	return reporter;
-unlock:
-	mutex_unlock(&devlink_mutex);
-	return NULL;
 }
 
 void
@@ -7924,7 +7866,6 @@ devlink_nl_cmd_health_reporter_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		mutex_lock(&devlink->reporters_lock);
 		list_for_each_entry(reporter, &devlink->reporter_list,
@@ -7976,8 +7917,6 @@ devlink_nl_cmd_health_reporter_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -8510,7 +8449,6 @@ static int devlink_nl_cmd_trap_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(trap_item, &devlink->trap_list, list) {
@@ -8534,8 +8472,6 @@ static int devlink_nl_cmd_trap_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -8730,7 +8666,6 @@ static int devlink_nl_cmd_trap_group_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(group_item, &devlink->trap_group_list,
@@ -8755,8 +8690,6 @@ static int devlink_nl_cmd_trap_group_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -9037,7 +8970,6 @@ static int devlink_nl_cmd_trap_policer_get_dumpit(struct sk_buff *msg,
 	int idx = 0;
 	int err;
 
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(sock_net(msg->sk), index, devlink) {
 		devl_lock(devlink);
 		list_for_each_entry(policer_item, &devlink->trap_policer_list,
@@ -9062,8 +8994,6 @@ static int devlink_nl_cmd_trap_policer_get_dumpit(struct sk_buff *msg,
 		devlink_put(devlink);
 	}
 out:
-	mutex_unlock(&devlink_mutex);
-
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -12494,7 +12424,6 @@ static void __net_exit devlink_pernet_pre_exit(struct net *net)
 	/* In case network namespace is getting destroyed, reload
 	 * all devlink instances from this namespace into init_net.
 	 */
-	mutex_lock(&devlink_mutex);
 	devlinks_xa_for_each_registered_get(net, index, devlink) {
 		WARN_ON(!(devlink->features & DEVLINK_F_RELOAD));
 		mutex_lock(&devlink->lock);
@@ -12507,7 +12436,6 @@ static void __net_exit devlink_pernet_pre_exit(struct net *net)
 			pr_warn("Failed to reload devlink instance into init_net\n");
 		devlink_put(devlink);
 	}
-	mutex_unlock(&devlink_mutex);
 }
 
 static struct pernet_operations devlink_pernet_ops __net_initdata = {
-- 
2.35.3



* [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
                   ` (2 preceding siblings ...)
  2022-07-29  7:10 ` [patch net-next 3/4] net: devlink: remove devlink_mutex Jiri Pirko
@ 2022-07-29  7:10 ` Jiri Pirko
  2022-08-20 19:44   ` Jakub Kicinski
  2022-07-30  3:04 ` [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jakub Kicinski
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Jiri Pirko @ 2022-07-29  7:10 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

From: Jiri Pirko <jiri@nvidia.com>

As the devlink_mutex was removed and all devlink instances are
protected individually by the devlink->lock mutex, allow the netlink ops
to run in parallel and therefore allow users to execute commands on
multiple devlink instances simultaneously.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
 net/core/devlink.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/core/devlink.c b/net/core/devlink.c
index 06cd7c1a1f0a..889e7e3d3e8a 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -9505,6 +9505,7 @@ static struct genl_family devlink_nl_family __ro_after_init = {
 	.maxattr	= DEVLINK_ATTR_MAX,
 	.policy = devlink_nl_policy,
 	.netnsok	= true,
+	.parallel_ops	= true,
 	.pre_doit	= devlink_nl_pre_doit,
 	.post_doit	= devlink_nl_post_doit,
 	.module		= THIS_MODULE,
-- 
2.35.3



* Re: [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
                   ` (3 preceding siblings ...)
  2022-07-29  7:10 ` [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface Jiri Pirko
@ 2022-07-30  3:04 ` Jakub Kicinski
  2022-07-30  9:26   ` Jiri Pirko
  2022-07-30 15:53 ` Ido Schimmel
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2022-07-30  3:04 UTC (permalink / raw)
  To: Jiri Pirko, idosch, leon, moshe
  Cc: netdev, davem, petrm, pabeni, edumazet, mlxsw, saeedm, tariqt

On Fri, 29 Jul 2022 09:10:34 +0200 Jiri Pirko wrote:
> From: Jiri Pirko <jiri@nvidia.com>
> 
> Aim of this patchset is to remove devlink_mutex and eventually to enable
> parallel ops on devlink netlink interface.

Let's do this! Worse comes to worst it's a relatively small revert.

Reviewed-by: Jakub Kicinski <kuba@kernel.org>

Thanks to everyone involved!


* Re: [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks
  2022-07-30  3:04 ` [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jakub Kicinski
@ 2022-07-30  9:26   ` Jiri Pirko
  0 siblings, 0 replies; 15+ messages in thread
From: Jiri Pirko @ 2022-07-30  9:26 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jiri Pirko, idosch, leon, moshe, netdev, davem, petrm, pabeni,
	edumazet, mlxsw, saeedm, tariqt

Sat, Jul 30, 2022 at 05:04:34AM CEST, kuba@kernel.org wrote:
>On Fri, 29 Jul 2022 09:10:34 +0200 Jiri Pirko wrote:
>> From: Jiri Pirko <jiri@nvidia.com>
>> 
>> Aim of this patchset is to remove devlink_mutex and eventually to enable
>> parallel ops on devlink netlink interface.
>
>Let's do this! Worse comes to worst it's a relatively small revert.
>
>Reviewed-by: Jakub Kicinski <kuba@kernel.org>
>
>Thanks to everyone involved!

Thanks!


* Re: [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
                   ` (4 preceding siblings ...)
  2022-07-30  3:04 ` [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jakub Kicinski
@ 2022-07-30 15:53 ` Ido Schimmel
  2022-07-31  6:02 ` Ido Schimmel
  2022-08-01 11:40 ` patchwork-bot+netdevbpf
  7 siblings, 0 replies; 15+ messages in thread
From: Ido Schimmel @ 2022-07-30 15:53 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, kuba, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

On Fri, Jul 29, 2022 at 09:10:34AM +0200, Jiri Pirko wrote:
> From: Jiri Pirko <jiri@nvidia.com>
> 
> Aim of this patchset is to remove devlink_mutex and eventually to enable
> parallel ops on devlink netlink interface.

Will report regression results tomorrow morning.


* Re: [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
                   ` (5 preceding siblings ...)
  2022-07-30 15:53 ` Ido Schimmel
@ 2022-07-31  6:02 ` Ido Schimmel
  2022-08-01 11:40 ` patchwork-bot+netdevbpf
  7 siblings, 0 replies; 15+ messages in thread
From: Ido Schimmel @ 2022-07-31  6:02 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, kuba, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

On Fri, Jul 29, 2022 at 09:10:34AM +0200, Jiri Pirko wrote:
> From: Jiri Pirko <jiri@nvidia.com>
> 
> Aim of this patchset is to remove devlink_mutex and eventually to enable
> parallel ops on devlink netlink interface.

Tested-by: Ido Schimmel <idosch@nvidia.com>


* Re: [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks
  2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
                   ` (6 preceding siblings ...)
  2022-07-31  6:02 ` Ido Schimmel
@ 2022-08-01 11:40 ` patchwork-bot+netdevbpf
  7 siblings, 0 replies; 15+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-08-01 11:40 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw,
	saeedm, tariqt, leon, moshe

Hello:

This series was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Fri, 29 Jul 2022 09:10:34 +0200 you wrote:
> From: Jiri Pirko <jiri@nvidia.com>
> 
> Aim of this patchset is to remove devlink_mutex and eventually to enable
> parallel ops on devlink netlink interface.
> 
> Jiri Pirko (4):
>   net: devlink: introduce "unregistering" mark and use it during
>     devlinks iteration
>   net: devlink: convert reload command to take implicit devlink->lock
>   net: devlink: remove devlink_mutex
>   net: devlink: enable parallel ops on netlink interface
> 
> [...]

Here is the summary with links:
  - [net-next,1/4] net: devlink: introduce "unregistering" mark and use it during devlinks iteration
    https://git.kernel.org/netdev/net-next/c/c2368b19807a
  - [net-next,2/4] net: devlink: convert reload command to take implicit devlink->lock
    https://git.kernel.org/netdev/net-next/c/644a66c60f02
  - [net-next,3/4] net: devlink: remove devlink_mutex
    https://git.kernel.org/netdev/net-next/c/d3efc2a6a6d8
  - [net-next,4/4] net: devlink: enable parallel ops on netlink interface
    https://git.kernel.org/netdev/net-next/c/09b278462f16

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock
  2022-07-29  7:10 ` [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock Jiri Pirko
@ 2022-08-05 16:21   ` Keller, Jacob E
  2022-08-05 19:58     ` Jakub Kicinski
  0 siblings, 1 reply; 15+ messages in thread
From: Keller, Jacob E @ 2022-08-05 16:21 UTC (permalink / raw)
  To: Jiri Pirko, netdev
  Cc: davem, kuba, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe



> -----Original Message-----
> From: Jiri Pirko <jiri@resnulli.us>
> Sent: Friday, July 29, 2022 12:11 AM
> To: netdev@vger.kernel.org
> Cc: davem@davemloft.net; kuba@kernel.org; idosch@nvidia.com;
> petrm@nvidia.com; pabeni@redhat.com; edumazet@google.com;
> mlxsw@nvidia.com; saeedm@nvidia.com; tariqt@nvidia.com; leon@kernel.org;
> moshe@nvidia.com
> Subject: [patch net-next 2/4] net: devlink: convert reload command to take
> implicit devlink->lock
> 
> From: Jiri Pirko <jiri@nvidia.com>
> 
> Convert reload command to behave the same way as the rest of the
> commands and let it be called with devlink->lock held. Remove the
> temporary devl_lock taking from drivers. As the DEVLINK_NL_FLAG_NO_LOCK
> flag is no longer used, remove it alongside.
> 
> Signed-off-by: Jiri Pirko <jiri@nvidia.com>

Wasn't reload avoiding the lock done so that drivers could perform other devlink operations during reload? Or is that no longer a problem with the recent refactors that moved devlink initialization and such earlier, so that it's now safe?

Thanks,
Jake

> ---
>  drivers/net/ethernet/mellanox/mlx4/main.c      |  4 ----
>  .../net/ethernet/mellanox/mlx5/core/devlink.c  |  4 ----
>  drivers/net/ethernet/mellanox/mlxsw/core.c     |  4 ----
>  drivers/net/netdevsim/dev.c                    |  6 ------
>  net/core/devlink.c                             | 18 +++++-------------
>  5 files changed, 5 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c
> b/drivers/net/ethernet/mellanox/mlx4/main.c
> index 2c764d1d897d..78c5f40382c9 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/main.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/main.c
> @@ -3958,11 +3958,9 @@ static int mlx4_devlink_reload_down(struct devlink
> *devlink, bool netns_change,
>  		NL_SET_ERR_MSG_MOD(extack, "Namespace change is not
> supported");
>  		return -EOPNOTSUPP;
>  	}
> -	devl_lock(devlink);
>  	if (persist->num_vfs)
>  		mlx4_warn(persist->dev, "Reload performed on PF, will cause
> reset on operating Virtual Functions\n");
>  	mlx4_restart_one_down(persist->pdev);
> -	devl_unlock(devlink);
>  	return 0;
>  }
> 
> @@ -3975,10 +3973,8 @@ static int mlx4_devlink_reload_up(struct devlink
> *devlink, enum devlink_reload_a
>  	struct mlx4_dev_persistent *persist = dev->persist;
>  	int err;
> 
> -	devl_lock(devlink);
>  	*actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
>  	err = mlx4_restart_one_up(persist->pdev, true, devlink);
> -	devl_unlock(devlink);
>  	if (err)
>  		mlx4_err(persist->dev, "mlx4_restart_one_up failed, ret=%d\n",
>  			 err);
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
> b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
> index 1c05a7091698..66c6a7017695 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
> @@ -164,7 +164,6 @@ static int mlx5_devlink_reload_down(struct devlink
> *devlink, bool netns_change,
>  		NL_SET_ERR_MSG_MOD(extack, "reload while VFs are present is
> unfavorable");
>  	}
> 
> -	devl_lock(devlink);
>  	switch (action) {
>  	case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
>  		mlx5_unload_one_devl_locked(dev);
> @@ -181,7 +180,6 @@ static int mlx5_devlink_reload_down(struct devlink
> *devlink, bool netns_change,
>  		ret = -EOPNOTSUPP;
>  	}
> 
> -	devl_unlock(devlink);
>  	return ret;
>  }
> 
> @@ -192,7 +190,6 @@ static int mlx5_devlink_reload_up(struct devlink *devlink,
> enum devlink_reload_a
>  	struct mlx5_core_dev *dev = devlink_priv(devlink);
>  	int ret = 0;
> 
> -	devl_lock(devlink);
>  	*actions_performed = BIT(action);
>  	switch (action) {
>  	case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
> @@ -211,7 +208,6 @@ static int mlx5_devlink_reload_up(struct devlink *devlink,
> enum devlink_reload_a
>  		ret = -EOPNOTSUPP;
>  	}
> 
> -	devl_unlock(devlink);
>  	return ret;
>  }
> 
> diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c
> b/drivers/net/ethernet/mellanox/mlxsw/core.c
> index a48f893cf7b0..e12918b6baa1 100644
> --- a/drivers/net/ethernet/mellanox/mlxsw/core.c
> +++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
> @@ -1499,9 +1499,7 @@ mlxsw_devlink_core_bus_device_reload_down(struct
> devlink *devlink,
>  	if (!(mlxsw_core->bus->features & MLXSW_BUS_F_RESET))
>  		return -EOPNOTSUPP;
> 
> -	devl_lock(devlink);
>  	mlxsw_core_bus_device_unregister(mlxsw_core, true);
> -	devl_unlock(devlink);
>  	return 0;
>  }
> 
> @@ -1515,12 +1513,10 @@ mlxsw_devlink_core_bus_device_reload_up(struct
> devlink *devlink, enum devlink_re
> 
>  	*actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
>  			     BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE);
> -	devl_lock(devlink);
>  	err = mlxsw_core_bus_device_register(mlxsw_core->bus_info,
>  					     mlxsw_core->bus,
>  					     mlxsw_core->bus_priv, true,
>  					     devlink, extack);
> -	devl_unlock(devlink);
>  	return err;
>  }
> 
> diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
> index 5802e80e8fe1..e88f783c297e 100644
> --- a/drivers/net/netdevsim/dev.c
> +++ b/drivers/net/netdevsim/dev.c
> @@ -948,18 +948,15 @@ static int nsim_dev_reload_down(struct devlink
> *devlink, bool netns_change,
>  {
>  	struct nsim_dev *nsim_dev = devlink_priv(devlink);
> 
> -	devl_lock(devlink);
>  	if (nsim_dev->dont_allow_reload) {
>  		/* For testing purposes, user set debugfs dont_allow_reload
>  		 * value to true. So forbid it.
>  		 */
>  		NL_SET_ERR_MSG_MOD(extack, "User forbid the reload for
> testing purposes");
> -		devl_unlock(devlink);
>  		return -EOPNOTSUPP;
>  	}
> 
>  	nsim_dev_reload_destroy(nsim_dev);
> -	devl_unlock(devlink);
>  	return 0;
>  }
> 
> @@ -970,19 +967,16 @@ static int nsim_dev_reload_up(struct devlink *devlink,
> enum devlink_reload_actio
>  	struct nsim_dev *nsim_dev = devlink_priv(devlink);
>  	int ret;
> 
> -	devl_lock(devlink);
>  	if (nsim_dev->fail_reload) {
>  		/* For testing purposes, user set debugfs fail_reload
>  		 * value to true. Fail right away.
>  		 */
>  		NL_SET_ERR_MSG_MOD(extack, "User setup the reload to fail for
> testing purposes");
> -		devl_unlock(devlink);
>  		return -EINVAL;
>  	}
> 
>  	*actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
>  	ret = nsim_dev_reload_create(nsim_dev, extack);
> -	devl_unlock(devlink);
>  	return ret;
>  }
> 
> diff --git a/net/core/devlink.c b/net/core/devlink.c
> index 6b20196ada1a..57865b231364 100644
> --- a/net/core/devlink.c
> +++ b/net/core/devlink.c
> @@ -768,12 +768,6 @@ devlink_region_snapshot_get_by_id(struct
> devlink_region *region, u32 id)
>  #define DEVLINK_NL_FLAG_NEED_RATE_NODE		BIT(3)
>  #define DEVLINK_NL_FLAG_NEED_LINECARD		BIT(4)
> 
> -/* The per devlink instance lock is taken by default in the pre-doit
> - * operation, yet several commands do not require this. The global
> - * devlink lock is taken and protects from disruption by user-calls.
> - */
> -#define DEVLINK_NL_FLAG_NO_LOCK			BIT(5)
> -
>  static int devlink_nl_pre_doit(const struct genl_ops *ops,
>  			       struct sk_buff *skb, struct genl_info *info)
>  {
> @@ -788,8 +782,7 @@ static int devlink_nl_pre_doit(const struct genl_ops *ops,
>  		mutex_unlock(&devlink_mutex);
>  		return PTR_ERR(devlink);
>  	}
> -	if (~ops->internal_flags & DEVLINK_NL_FLAG_NO_LOCK)
> -		devl_lock(devlink);
> +	devl_lock(devlink);
>  	info->user_ptr[0] = devlink;
>  	if (ops->internal_flags & DEVLINK_NL_FLAG_NEED_PORT) {
>  		devlink_port = devlink_port_get_from_info(devlink, info);
> @@ -831,8 +824,7 @@ static int devlink_nl_pre_doit(const struct genl_ops *ops,
>  	return 0;
> 
>  unlock:
> -	if (~ops->internal_flags & DEVLINK_NL_FLAG_NO_LOCK)
> -		devl_unlock(devlink);
> +	devl_unlock(devlink);
>  	devlink_put(devlink);
>  	mutex_unlock(&devlink_mutex);
>  	return err;
> @@ -849,8 +841,7 @@ static void devlink_nl_post_doit(const struct genl_ops
> *ops,
>  		linecard = info->user_ptr[1];
>  		devlink_linecard_put(linecard);
>  	}
> -	if (~ops->internal_flags & DEVLINK_NL_FLAG_NO_LOCK)
> -		devl_unlock(devlink);
> +	devl_unlock(devlink);
>  	devlink_put(devlink);
>  	mutex_unlock(&devlink_mutex);
>  }
> @@ -9414,7 +9405,6 @@ static const struct genl_small_ops devlink_nl_ops[] = {
>  		.validate = GENL_DONT_VALIDATE_STRICT |
> GENL_DONT_VALIDATE_DUMP,
>  		.doit = devlink_nl_cmd_reload,
>  		.flags = GENL_ADMIN_PERM,
> -		.internal_flags = DEVLINK_NL_FLAG_NO_LOCK,
>  	},
>  	{
>  		.cmd = DEVLINK_CMD_PARAM_GET,
> @@ -12507,10 +12497,12 @@ static void __net_exit
> devlink_pernet_pre_exit(struct net *net)
>  	mutex_lock(&devlink_mutex);
>  	devlinks_xa_for_each_registered_get(net, index, devlink) {
>  		WARN_ON(!(devlink->features & DEVLINK_F_RELOAD));
> +		mutex_lock(&devlink->lock);
>  		err = devlink_reload(devlink, &init_net,
>  				     DEVLINK_RELOAD_ACTION_DRIVER_REINIT,
>  				     DEVLINK_RELOAD_LIMIT_UNSPEC,
>  				     &actions_performed, NULL);
> +		mutex_unlock(&devlink->lock);
>  		if (err && err != -EOPNOTSUPP)
>  			pr_warn("Failed to reload devlink instance into
> init_net\n");
>  		devlink_put(devlink);
> --
> 2.35.3


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock
  2022-08-05 16:21   ` Keller, Jacob E
@ 2022-08-05 19:58     ` Jakub Kicinski
  2022-08-05 20:12       ` Keller, Jacob E
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2022-08-05 19:58 UTC (permalink / raw)
  To: Keller, Jacob E
  Cc: Jiri Pirko, netdev, davem, idosch, petrm, pabeni, edumazet,
	mlxsw, saeedm, tariqt, leon, moshe

On Fri, 5 Aug 2022 16:21:16 +0000 Keller, Jacob E wrote:
> > Convert reload command to behave the same way as the rest of the
> > commands and let it be called with devlink->lock held. Remove the
> > temporary devl_lock taking from drivers. As the DEVLINK_NL_FLAG_NO_LOCK
> > flag is no longer used, remove it alongside.
> > 
> > Signed-off-by: Jiri Pirko <jiri@nvidia.com>  
> 
> Wasn't reload avoiding the lock done so that drivers could perform other devlink operations during reload? Or is that no longer a problem with the recent refactors that moved devlink initialization and such earlier, so that it's now safe?

Drivers have control over the lock now, they can take the lock
themselves (devl_lock()) and call the "I'm already holding the lock"
functions. So the expectation is that shared paths used by probe and
reload will always run under the lock.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock
  2022-08-05 19:58     ` Jakub Kicinski
@ 2022-08-05 20:12       ` Keller, Jacob E
  0 siblings, 0 replies; 15+ messages in thread
From: Keller, Jacob E @ 2022-08-05 20:12 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jiri Pirko, netdev, davem, idosch, petrm, pabeni, edumazet,
	mlxsw, saeedm, tariqt, leon, moshe



> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Friday, August 05, 2022 12:59 PM
> To: Keller, Jacob E <jacob.e.keller@intel.com>
> Cc: Jiri Pirko <jiri@resnulli.us>; netdev@vger.kernel.org; davem@davemloft.net;
> idosch@nvidia.com; petrm@nvidia.com; pabeni@redhat.com;
> edumazet@google.com; mlxsw@nvidia.com; saeedm@nvidia.com;
> tariqt@nvidia.com; leon@kernel.org; moshe@nvidia.com
> Subject: Re: [patch net-next 2/4] net: devlink: convert reload command to take
> implicit devlink->lock
> 
> On Fri, 5 Aug 2022 16:21:16 +0000 Keller, Jacob E wrote:
> > > Convert reload command to behave the same way as the rest of the
> > > commands and let it be called with devlink->lock held. Remove the
> > > temporary devl_lock taking from drivers. As the DEVLINK_NL_FLAG_NO_LOCK
> > > flag is no longer used, remove it alongside.
> > >
> > > Signed-off-by: Jiri Pirko <jiri@nvidia.com>
> >
> > Wasn't reload avoiding the lock done so that drivers could perform other
> devlink operations during reload? Or is that no longer a problem with the recent
> refactors done to move devlink initialization and such earlier so that it's now safe?
> 
> Drivers have control over the lock now, they can take the lock
> themselves (devl_lock()) and call the "I'm already holding the lock"
> functions. So the expectation is that shared paths used by probe and
> reload will always run under the lock.

Ah perfect. So essentially, they already take the lock and are calling the "I'm locked" version of the functions, but this now moves the take-the-lock part into the core.

Ok great! That sounds better than assuming drivers will take the lock themselves.

Thanks,
Jake

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface
  2022-07-29  7:10 ` [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface Jiri Pirko
@ 2022-08-20 19:44   ` Jakub Kicinski
  2022-08-22 12:26     ` Jiri Pirko
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2022-08-20 19:44 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

On Fri, 29 Jul 2022 09:10:38 +0200 Jiri Pirko wrote:
> From: Jiri Pirko <jiri@nvidia.com>
> 
> As the devlink_mutex was removed and all devlink instances are protected
> individually by devlink->lock mutex, allow the netlink ops to run
> in parallel and therefore allow user to execute commands on multiple
> devlink instances simultaneously.

Could you update the "Locking" section of
Documentation/networking/devlink/index.rst
with the full info and recommendations?

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface
  2022-08-20 19:44   ` Jakub Kicinski
@ 2022-08-22 12:26     ` Jiri Pirko
  0 siblings, 0 replies; 15+ messages in thread
From: Jiri Pirko @ 2022-08-22 12:26 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, davem, idosch, petrm, pabeni, edumazet, mlxsw, saeedm,
	tariqt, leon, moshe

Sat, Aug 20, 2022 at 09:44:59PM CEST, kuba@kernel.org wrote:
>On Fri, 29 Jul 2022 09:10:38 +0200 Jiri Pirko wrote:
>> From: Jiri Pirko <jiri@nvidia.com>
>> 
>> As the devlink_mutex was removed and all devlink instances are protected
>> individually by devlink->lock mutex, allow the netlink ops to run
>> in parallel and therefore allow user to execute commands on multiple
>> devlink instances simultaneously.
>
>Could you update the "Locking" section of
>Documentation/networking/devlink/index.rst
>with the full info and recommendations?

Will do.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2022-08-22 12:26 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-29  7:10 [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jiri Pirko
2022-07-29  7:10 ` [patch net-next 1/4] net: devlink: introduce "unregistering" mark and use it during devlinks iteration Jiri Pirko
2022-07-29  7:10 ` [patch net-next 2/4] net: devlink: convert reload command to take implicit devlink->lock Jiri Pirko
2022-08-05 16:21   ` Keller, Jacob E
2022-08-05 19:58     ` Jakub Kicinski
2022-08-05 20:12       ` Keller, Jacob E
2022-07-29  7:10 ` [patch net-next 3/4] net: devlink: remove devlink_mutex Jiri Pirko
2022-07-29  7:10 ` [patch net-next 4/4] net: devlink: enable parallel ops on netlink interface Jiri Pirko
2022-08-20 19:44   ` Jakub Kicinski
2022-08-22 12:26     ` Jiri Pirko
2022-07-30  3:04 ` [patch net-next 0/4] net: devlink: allow parallel commands on multiple devlinks Jakub Kicinski
2022-07-30  9:26   ` Jiri Pirko
2022-07-30 15:53 ` Ido Schimmel
2022-07-31  6:02 ` Ido Schimmel
2022-08-01 11:40 ` patchwork-bot+netdevbpf
