* [PATCH net-next 0/9] devlink: Add support for region access
@ 2018-03-29 16:07 Alex Vesker
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

This is a proposal which will allow access to driver defined address
regions using devlink. Each device can create its supported address
regions and register them. A device which exposes a region will allow
access to it using devlink.

The suggested implementation allows exposing regions to the user and
reading or dumping snapshots taken from different regions.
A snapshot represents a memory image of a region taken by the driver.

Once a device collects a snapshot of an address region, it can later be
exposed using the devlink region read or dump commands.
This makes it possible to analyze the snapshots at a later time.

The major benefit of this support is not only providing access to
internal address regions that were previously inaccessible to the user,
but also offering an additional way to debug complex error states using
region snapshots.

Implemented commands:
$ devlink region help
$ devlink region show [ DEV/REGION ]
$ devlink region del DEV/REGION snapshot SNAPSHOT_ID
$ devlink region dump DEV/REGION [ snapshot SNAPSHOT_ID ]
$ devlink region read DEV/REGION [ snapshot SNAPSHOT_ID ]
	address ADDRESS length LENGTH

Show all of the exposed regions with region sizes:
$ devlink region show
pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
pci/0000:00:05.0/fw-health: size 64 snapshot [1 2]

Delete a snapshot using:
$ devlink region del pci/0000:00:05.0/cr-space snapshot 1

Dump a snapshot:
$ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5

Read a specific part of a snapshot:
$ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0 
	length 16
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
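
The hex layout above (a zero-padded 16-digit offset followed by eight
2-byte words) can be reproduced in userspace. The following Python
sketch is illustrative only and not part of the series; all names in
it are mine:

```python
def dump_lines(data, base=0):
    """Format a snapshot buffer the way the examples above print it:
    a zero-padded 16-hex-digit offset, then eight 2-byte hex words."""
    lines = []
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        # Split the 16-byte row into 2-byte groups, lowercase hex.
        words = [chunk[i:i + 2].hex() for i in range(0, len(chunk), 2)]
        lines.append("%016x %s" % (base + off, " ".join(words)))
    return lines
```

Feeding it the first 16 bytes of the fw-health snapshot shown above
reproduces the first output line byte for byte.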

For more information, see the devlink-region.8 man page.

Future:
There is a plan to extend this support with a write command, as well
as reading and dumping live regions.

Alex Vesker (9):
  devlink: Add support for creating and destroying regions
  devlink: Add callback to query for snapshot id before snapshot create
  devlink: Add support for creating region snapshots
  devlink: Add support for region get command
  devlink: Extend the support querying for region snapshot IDs
  devlink: Add support for region snapshot delete command
  devlink: Add support for region snapshot read command
  net/mlx4_core: Add health buffer address capability
  net/mlx4_core: Add Crdump FW snapshot support

 drivers/net/ethernet/mellanox/mlx4/Makefile |   2 +-
 drivers/net/ethernet/mellanox/mlx4/catas.c  |   6 +-
 drivers/net/ethernet/mellanox/mlx4/crdump.c | 224 ++++++++++
 drivers/net/ethernet/mellanox/mlx4/fw.c     |   5 +-
 drivers/net/ethernet/mellanox/mlx4/fw.h     |   1 +
 drivers/net/ethernet/mellanox/mlx4/main.c   |  11 +-
 drivers/net/ethernet/mellanox/mlx4/mlx4.h   |   4 +
 include/linux/mlx4/device.h                 |   7 +
 include/net/devlink.h                       |  39 ++
 include/uapi/linux/devlink.h                |  18 +
 net/core/devlink.c                          | 646 ++++++++++++++++++++++++++++
 11 files changed, 958 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx4/crdump.c

-- 
1.8.3.1


* [PATCH net-next 1/9] devlink: Add support for creating and destroying regions
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

This allows a device to register its supported address regions.
Each address region can be accessed directly, for example to read
the snapshots taken of this address space.
Drivers are free to choose names for their regions; example
region names are: pci cr-space, register-space.
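
To illustrate the semantics of devlink_region_create() and
devlink_region_destroy() added below (duplicate region names are
rejected with -EEXIST; regions live on a per-device list), here is a
rough userspace Python model. It is not kernel code and all names in
it are mine:

```python
class DevlinkRegion:
    """Model of the per-region bookkeeping in this patch:
    a name, a size, a snapshot budget, and a snapshot list."""
    def __init__(self, name, max_snapshots, size):
        self.name = name
        self.max_snapshots = max_snapshots
        self.cur_snapshots = 0
        self.size = size
        self.snapshots = []

class Devlink:
    def __init__(self):
        self.regions = []   # models devlink->region_list

    def region_create(self, name, max_snapshots, size):
        # Mirrors devlink_region_create(): duplicate names are rejected.
        if any(r.name == name for r in self.regions):
            raise FileExistsError(name)   # kernel returns -EEXIST
        region = DevlinkRegion(name, max_snapshots, size)
        self.regions.append(region)
        return region

    def region_destroy(self, region):
        # Mirrors devlink_region_destroy(): unlink and free.
        self.regions.remove(region)
```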

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/devlink.h | 22 ++++++++++++++
 net/core/devlink.c    | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+)

diff --git a/include/net/devlink.h b/include/net/devlink.h
index e21d8ca..784a33c 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -28,6 +28,7 @@ struct devlink {
 	struct list_head dpipe_table_list;
 	struct list_head resource_list;
 	struct devlink_dpipe_headers *dpipe_headers;
+	struct list_head region_list;
 	const struct devlink_ops *ops;
 	struct device *dev;
 	possible_net_t _net;
@@ -294,6 +295,8 @@ struct devlink_resource {
 
 #define DEVLINK_RESOURCE_ID_PARENT_TOP 0
 
+struct devlink_region;
+
 struct devlink_ops {
 	int (*reload)(struct devlink *devlink);
 	int (*port_type_set)(struct devlink_port *devlink_port,
@@ -419,6 +422,11 @@ int devlink_resource_size_get(struct devlink *devlink,
 int devlink_dpipe_table_resource_set(struct devlink *devlink,
 				     const char *table_name, u64 resource_id,
 				     u64 resource_units);
+struct devlink_region *devlink_region_create(struct devlink *devlink,
+					     const char *region_name,
+					     u32 region_max_snapshots,
+					     u64 region_size);
+void devlink_region_destroy(struct devlink_region *region);
 
 #else
 
@@ -589,6 +597,20 @@ static inline bool devlink_dpipe_table_counter_enabled(struct devlink *devlink,
 	return -EOPNOTSUPP;
 }
 
+static inline struct devlink_region *
+devlink_region_create(struct devlink *devlink,
+		      const char *region_name,
+		      u32 region_max_snapshots,
+		      u64 region_size)
+{
+	return NULL;
+}
+
+static inline void
+devlink_region_destroy(struct devlink_region *region)
+{
+}
+
 #endif
 
 #endif /* _NET_DEVLINK_H_ */
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 9236e42..fd5b9f6 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -326,6 +326,28 @@ static int devlink_sb_pool_index_get_from_info(struct devlink_sb *devlink_sb,
 						  pool_type, p_tc_index);
 }
 
+struct devlink_region {
+	struct devlink *devlink;
+	struct list_head list;
+	const char *name;
+	struct list_head snapshot_list;
+	u32 max_snapshots;
+	u32 cur_snapshots;
+	u64 size;
+};
+
+static struct devlink_region *
+devlink_region_get_by_name(struct devlink *devlink, const char *region_name)
+{
+	struct devlink_region *region;
+
+	list_for_each_entry(region, &devlink->region_list, list)
+		if (!strcmp(region->name, region_name))
+			return region;
+
+	return NULL;
+}
+
 #define DEVLINK_NL_FLAG_NEED_DEVLINK	BIT(0)
 #define DEVLINK_NL_FLAG_NEED_PORT	BIT(1)
 #define DEVLINK_NL_FLAG_NEED_SB		BIT(2)
@@ -2820,6 +2842,7 @@ struct devlink *devlink_alloc(const struct devlink_ops *ops, size_t priv_size)
 	INIT_LIST_HEAD(&devlink->sb_list);
 	INIT_LIST_HEAD_RCU(&devlink->dpipe_table_list);
 	INIT_LIST_HEAD(&devlink->resource_list);
+	INIT_LIST_HEAD(&devlink->region_list);
 	mutex_init(&devlink->lock);
 	return devlink;
 }
@@ -3315,6 +3338,67 @@ int devlink_dpipe_table_resource_set(struct devlink *devlink,
 }
 EXPORT_SYMBOL_GPL(devlink_dpipe_table_resource_set);
 
+/**
+ *	devlink_region_create - create a new address region
+ *
+ *	@devlink: devlink
+ *	@region_name: region name
+ *	@region_max_snapshots: Maximum supported number of snapshots for region
+ *	@region_size: size of region
+ */
+struct devlink_region *devlink_region_create(struct devlink *devlink,
+					     const char *region_name,
+					     u32 region_max_snapshots,
+					     u64 region_size)
+{
+	struct devlink_region *region;
+	int err = 0;
+
+	mutex_lock(&devlink->lock);
+
+	if (devlink_region_get_by_name(devlink, region_name)) {
+		err = -EEXIST;
+		goto unlock;
+	}
+
+	region = kzalloc(sizeof(*region), GFP_KERNEL);
+	if (!region) {
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	region->devlink = devlink;
+	region->max_snapshots = region_max_snapshots;
+	region->name = region_name;
+	region->size = region_size;
+	INIT_LIST_HEAD(&region->snapshot_list);
+	list_add_tail(&region->list, &devlink->region_list);
+
+	mutex_unlock(&devlink->lock);
+	return region;
+
+unlock:
+	mutex_unlock(&devlink->lock);
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(devlink_region_create);
+
+/**
+ *	devlink_region_destroy - destroy address region
+ *
+ *	@region: devlink region to destroy
+ */
+void devlink_region_destroy(struct devlink_region *region)
+{
+	struct devlink *devlink = region->devlink;
+
+	mutex_lock(&devlink->lock);
+	list_del(&region->list);
+	mutex_unlock(&devlink->lock);
+	kfree(region);
+}
+EXPORT_SYMBOL_GPL(devlink_region_destroy);
+
 static int __init devlink_module_init(void)
 {
 	return genl_register_family(&devlink_nl_family);
-- 
1.8.3.1


* [PATCH net-next 2/9] devlink: Add callback to query for snapshot id before snapshot create
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

To restrict the driver's snapshot ID selection, a new callback is
introduced: the driver queries for a snapshot ID before creating
a new snapshot. This also allows giving the same ID to multiple
snapshots taken of different regions at the same time.
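
The ID getter below is just a per-device counter pre-incremented under
the devlink lock, so the first call returns 1 and every caller sees a
fresh value. A minimal Python model (names mine, not kernel code):

```python
class Devlink:
    """Model of the per-device snapshot ID counter: one counter per
    devlink instance, so snapshots of different regions taken by the
    same trigger can share an ID."""
    def __init__(self):
        self.snapshot_id = 0

    def region_snapshot_id_get(self):
        # Mirrors id = ++devlink->snapshot_id in the patch:
        # increment first, then hand the new value to the caller.
        self.snapshot_id += 1
        return self.snapshot_id
```

A driver would fetch one ID and pass the same value when creating
snapshots of several regions for the same event.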

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/devlink.h |  8 ++++++++
 net/core/devlink.c    | 21 +++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/include/net/devlink.h b/include/net/devlink.h
index 784a33c..5697c55 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -29,6 +29,7 @@ struct devlink {
 	struct list_head resource_list;
 	struct devlink_dpipe_headers *dpipe_headers;
 	struct list_head region_list;
+	u32 snapshot_id;
 	const struct devlink_ops *ops;
 	struct device *dev;
 	possible_net_t _net;
@@ -427,6 +428,7 @@ struct devlink_region *devlink_region_create(struct devlink *devlink,
 					     u32 region_max_snapshots,
 					     u64 region_size);
 void devlink_region_destroy(struct devlink_region *region);
+u32 devlink_region_shapshot_id_get(struct devlink *devlink);
 
 #else
 
@@ -611,6 +613,12 @@ static inline bool devlink_dpipe_table_counter_enabled(struct devlink *devlink,
 {
 }
 
+static inline u32
+devlink_region_shapshot_id_get(struct devlink *devlink)
+{
+	return 0;
+}
+
 #endif
 
 #endif /* _NET_DEVLINK_H_ */
diff --git a/net/core/devlink.c b/net/core/devlink.c
index fd5b9f6..4822a08 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -3399,6 +3399,27 @@ void devlink_region_destroy(struct devlink_region *region)
 }
 EXPORT_SYMBOL_GPL(devlink_region_destroy);
 
+/**
+ *	devlink_region_shapshot_id_get - get snapshot ID
+ *
+ *	This function should be called when creating a new snapshot.
+ *	The driver should use the same ID for multiple snapshots taken
+ *	of multiple regions at the same time or by the same trigger.
+ *
+ *	@devlink: devlink
+ */
+u32 devlink_region_shapshot_id_get(struct devlink *devlink)
+{
+	u32 id;
+
+	mutex_lock(&devlink->lock);
+	id = ++devlink->snapshot_id;
+	mutex_unlock(&devlink->lock);
+
+	return id;
+}
+EXPORT_SYMBOL_GPL(devlink_region_shapshot_id_get);
+
 static int __init devlink_module_init(void)
 {
 	return genl_register_family(&devlink_nl_family);
-- 
1.8.3.1


* [PATCH net-next 3/9] devlink: Add support for creating region snapshots
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

Each device address region can store multiple snapshots; each
snapshot is identified by a unique numerical ID. This ID is used
when deleting a snapshot or when showing a specific snapshot of an
address region. This patch exposes a callback for adding a new
snapshot (data, data length and ID) to an address region.
Snapshots can be deleted from the devlink user tool or when a
region is destroyed.
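
The snapshot-create path below enforces two checks: a full region
(cur_snapshots == max_snapshots) fails with -ENOMEM, and a duplicate
ID fails with -EEXIST; otherwise the data is copied into the
snapshot. A small Python model of those semantics (names mine, not
kernel code):

```python
class Region:
    def __init__(self, max_snapshots):
        self.max_snapshots = max_snapshots
        self.snapshots = {}   # id -> copied data

    def snapshot_create(self, data, snapshot_id):
        # Mirrors devlink_region_snapshot_create(): a full region or a
        # duplicate ID fails; otherwise the data is copied and stored.
        if len(self.snapshots) == self.max_snapshots:
            raise MemoryError        # kernel returns -ENOMEM
        if snapshot_id in self.snapshots:
            raise FileExistsError    # kernel returns -EEXIST
        self.snapshots[snapshot_id] = bytes(data)  # copy, like memcpy

    def snapshot_del(self, snapshot_id):
        # Mirrors devlink_region_snapshot_del().
        del self.snapshots[snapshot_id]
```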

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/devlink.h |  9 +++++
 net/core/devlink.c    | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/include/net/devlink.h b/include/net/devlink.h
index 5697c55..83e569f 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -429,6 +429,8 @@ struct devlink_region *devlink_region_create(struct devlink *devlink,
 					     u64 region_size);
 void devlink_region_destroy(struct devlink_region *region);
 u32 devlink_region_shapshot_id_get(struct devlink *devlink);
+int devlink_region_snapshot_create(struct devlink_region *region, u64 data_len,
+				   u8 *data, u32 snapshot_id);
 
 #else
 
@@ -619,6 +621,13 @@ static inline bool devlink_dpipe_table_counter_enabled(struct devlink *devlink,
 	return 0;
 }
 
+static inline int
+devlink_region_snapshot_create(struct devlink_region *region, u64 data_len,
+			       u8 *data, u32 snapshot_id)
+{
+	return 0;
+}
+
 #endif
 
 #endif /* _NET_DEVLINK_H_ */
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 4822a08..785e87d 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -336,6 +336,14 @@ struct devlink_region {
 	u64 size;
 };
 
+struct devlink_snapshot {
+	struct list_head list;
+	struct devlink_region *region;
+	u64 data_len;
+	u8 *data;
+	u32 id;
+};
+
 static struct devlink_region *
 devlink_region_get_by_name(struct devlink *devlink, const char *region_name)
 {
@@ -348,6 +356,26 @@ struct devlink_region {
 	return NULL;
 }
 
+static struct devlink_snapshot *
+devlink_region_snapshot_get_by_id(struct devlink_region *region, u32 id)
+{
+	struct devlink_snapshot *snapshot;
+
+	list_for_each_entry(snapshot, &region->snapshot_list, list)
+		if (snapshot->id == id)
+			return snapshot;
+
+	return NULL;
+}
+
+static void devlink_region_snapshot_del(struct devlink_snapshot *snapshot)
+{
+	snapshot->region->cur_snapshots--;
+	list_del(&snapshot->list);
+	kfree(snapshot->data);
+	kfree(snapshot);
+}
+
 #define DEVLINK_NL_FLAG_NEED_DEVLINK	BIT(0)
 #define DEVLINK_NL_FLAG_NEED_PORT	BIT(1)
 #define DEVLINK_NL_FLAG_NEED_SB		BIT(2)
@@ -3391,8 +3419,14 @@ struct devlink_region *devlink_region_create(struct devlink *devlink,
 void devlink_region_destroy(struct devlink_region *region)
 {
 	struct devlink *devlink = region->devlink;
+	struct devlink_snapshot *snapshot, *ts;
 
 	mutex_lock(&devlink->lock);
+
+	/* Free all snapshots of region */
+	list_for_each_entry_safe(snapshot, ts, &region->snapshot_list, list)
+		devlink_region_snapshot_del(snapshot);
+
 	list_del(&region->list);
 	mutex_unlock(&devlink->lock);
 	kfree(region);
@@ -3420,6 +3454,71 @@ u32 devlink_region_shapshot_id_get(struct devlink *devlink)
 }
 EXPORT_SYMBOL_GPL(devlink_region_shapshot_id_get);
 
+/**
+ *	devlink_region_snapshot_create - create a new snapshot
+ *	This will add a new snapshot of a region. The snapshot
+ *	will be stored on the region struct and can be accessed
+ *	from devlink. This is useful for future analyses of snapshots.
+ *	Multiple snapshots can be created on a region.
+ *	The @snapshot_id should be obtained using the getter function.
+ *
+ *	@region: devlink region of the snapshot
+ *	@data_len: size of snapshot data
+ *	@data: snapshot data
+ *	@snapshot_id: snapshot id to be created
+ */
+int devlink_region_snapshot_create(struct devlink_region *region, u64 data_len,
+				   u8 *data, u32 snapshot_id)
+{
+	struct devlink *devlink = region->devlink;
+	struct devlink_snapshot *snapshot;
+	int err;
+
+	mutex_lock(&devlink->lock);
+
+	/* check if region can hold one more snapshot */
+	if (region->cur_snapshots == region->max_snapshots) {
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	if (devlink_region_snapshot_get_by_id(region, snapshot_id)) {
+		err = -EEXIST;
+		goto unlock;
+	}
+
+	snapshot = kzalloc(sizeof(*snapshot), GFP_KERNEL);
+	if (!snapshot) {
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	snapshot->data = kzalloc(data_len, GFP_KERNEL);
+	if (!snapshot->data) {
+		err = -ENOMEM;
+		goto free_snapshot;
+	}
+
+	snapshot->id = snapshot_id;
+	snapshot->region = region;
+	snapshot->data_len = data_len;
+	memcpy(snapshot->data, data, data_len);
+
+	list_add_tail(&snapshot->list, &region->snapshot_list);
+
+	region->cur_snapshots++;
+
+	mutex_unlock(&devlink->lock);
+	return 0;
+
+free_snapshot:
+	kfree(snapshot);
+unlock:
+	mutex_unlock(&devlink->lock);
+	return err;
+}
+EXPORT_SYMBOL_GPL(devlink_region_snapshot_create);
+
 static int __init devlink_module_init(void)
 {
 	return genl_register_family(&devlink_nl_family);
-- 
1.8.3.1


* [PATCH net-next 4/9] devlink: Add support for region get command
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

Add support for the DEVLINK_CMD_REGION_GET command, which is used
to query the DEV/REGION values supported by devlink devices.
Both doit and dumpit are supported.

Reply includes:
  BUS_NAME, DEVICE_NAME, REGION_NAME, REGION_SIZE
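
The dumpit handler below resumes from cb->args[0] on each netlink
invocation: it walks every region of every device, skips the first
`start` entries, and stops once the message fills up. A rough Python
model of that pagination (names mine, not kernel code):

```python
def region_get_dumpit(devlinks, start, max_per_msg):
    """Walk all regions of all devices, skipping the first `start`
    entries (cb->args[0] from the previous call) and stopping when
    the message is full. Returns (entries, next_start)."""
    out = []
    idx = 0
    for regions in devlinks:
        for region in regions:
            if idx < start:
                idx += 1
                continue
            if len(out) == max_per_msg:   # stand-in for a full skb
                return out, idx           # resume here next call
            out.append(region)
            idx += 1
    return out, idx
```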

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/uapi/linux/devlink.h |   6 +++
 net/core/devlink.c           | 114 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 120 insertions(+)

diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 1df65a4..8d24f49 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -78,6 +78,9 @@ enum devlink_command {
 	 */
 	DEVLINK_CMD_RELOAD,
 
+	DEVLINK_CMD_REGION_GET,
+	DEVLINK_CMD_REGION_SET,
+
 	/* add new commands above here */
 	__DEVLINK_CMD_MAX,
 	DEVLINK_CMD_MAX = __DEVLINK_CMD_MAX - 1
@@ -224,6 +227,9 @@ enum devlink_attr {
 	DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_ID,	/* u64 */
 	DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_UNITS,/* u64 */
 
+	DEVLINK_ATTR_REGION_NAME,		/* string */
+	DEVLINK_ATTR_REGION_SIZE,		/* u64 */
+
 	/* add new attributes above here, update the policy in devlink.c */
 
 	__DEVLINK_ATTR_MAX,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 785e87d..20d243d 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -2630,6 +2630,111 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
 	return devlink->ops->reload(devlink);
 }
 
+static int devlink_nl_region_fill(struct sk_buff *msg, struct devlink *devlink,
+				  enum devlink_command cmd, u32 portid,
+				  u32 seq, int flags,
+				  struct devlink_region *region)
+{
+	void *hdr;
+	int err;
+
+	hdr = genlmsg_put(msg, portid, seq, &devlink_nl_family, flags, cmd);
+	if (!hdr)
+		return -EMSGSIZE;
+
+	err = devlink_nl_put_handle(msg, devlink);
+	if (err)
+		goto nla_put_failure;
+
+	err = nla_put_string(msg, DEVLINK_ATTR_REGION_NAME, region->name);
+	if (err)
+		goto nla_put_failure;
+
+	err = nla_put_u64_64bit(msg, DEVLINK_ATTR_REGION_SIZE,
+				region->size,
+				DEVLINK_ATTR_PAD);
+	if (err)
+		goto nla_put_failure;
+
+	genlmsg_end(msg, hdr);
+	return 0;
+
+nla_put_failure:
+	genlmsg_cancel(msg, hdr);
+	return err;
+}
+
+static int devlink_nl_cmd_region_get_doit(struct sk_buff *skb,
+					  struct genl_info *info)
+{
+	struct devlink *devlink = info->user_ptr[0];
+	struct devlink_region *region;
+	const char *region_name;
+	struct sk_buff *msg;
+	int err;
+
+	if (!info->attrs[DEVLINK_ATTR_REGION_NAME])
+		return -EINVAL;
+
+	region_name = nla_data(info->attrs[DEVLINK_ATTR_REGION_NAME]);
+	region = devlink_region_get_by_name(devlink, region_name);
+	if (!region)
+		return -EINVAL;
+
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return -ENOMEM;
+
+	err = devlink_nl_region_fill(msg, devlink, DEVLINK_CMD_REGION_GET,
+				     info->snd_portid, info->snd_seq, 0,
+				     region);
+	if (err) {
+		nlmsg_free(msg);
+		return err;
+	}
+
+	return genlmsg_reply(msg, info);
+}
+
+static int devlink_nl_cmd_region_get_dumpit(struct sk_buff *msg,
+					    struct netlink_callback *cb)
+{
+	struct devlink_region *region;
+	struct devlink *devlink;
+	int start = cb->args[0];
+	int idx = 0;
+	int err;
+
+	mutex_lock(&devlink_mutex);
+	list_for_each_entry(devlink, &devlink_list, list) {
+		if (!net_eq(devlink_net(devlink), sock_net(msg->sk)))
+			continue;
+
+		mutex_lock(&devlink->lock);
+		list_for_each_entry(region, &devlink->region_list, list) {
+			if (idx < start) {
+				idx++;
+				continue;
+			}
+			err = devlink_nl_region_fill(msg, devlink,
+						     DEVLINK_CMD_REGION_GET,
+						     NETLINK_CB(cb->skb).portid,
+						     cb->nlh->nlmsg_seq,
+						     NLM_F_MULTI, region);
+			if (err) {
+				mutex_unlock(&devlink->lock);
+				goto out;
+			}
+			idx++;
+		}
+		mutex_unlock(&devlink->lock);
+	}
+out:
+	mutex_unlock(&devlink_mutex);
+	cb->args[0] = idx;
+	return msg->len;
+}
+
 static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
 	[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING },
 	[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING },
@@ -2650,6 +2755,7 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
 	[DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED] = { .type = NLA_U8 },
 	[DEVLINK_ATTR_RESOURCE_ID] = { .type = NLA_U64},
 	[DEVLINK_ATTR_RESOURCE_SIZE] = { .type = NLA_U64},
+	[DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING },
 };
 
 static const struct genl_ops devlink_nl_ops[] = {
@@ -2832,6 +2938,14 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
 		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK |
 				  DEVLINK_NL_FLAG_NO_LOCK,
 	},
+	{
+		.cmd = DEVLINK_CMD_REGION_GET,
+		.doit = devlink_nl_cmd_region_get_doit,
+		.dumpit = devlink_nl_cmd_region_get_dumpit,
+		.policy = devlink_nl_policy,
+		.flags = GENL_ADMIN_PERM,
+		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
+	},
 };
 
 static struct genl_family devlink_nl_family __ro_after_init = {
-- 
1.8.3.1


* [PATCH net-next 5/9] devlink: Extend the support querying for region snapshot IDs
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

Extend the DEVLINK_CMD_REGION_GET command to also return the IDs of
the snapshots currently present on the region.
Each reply will include a nested snapshots attribute that can
contain multiple snapshot attributes, each with an ID.
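
Rendered as a plain data structure, the extended reply nests one
REGION_SNAPSHOT entry per ID inside REGION_SNAPSHOTS. The dict below
is only a shape sketch of the attribute layout, not the actual
netlink encoding; names are mine:

```python
def region_reply(bus, dev, region_name, size, snapshot_ids):
    """Shape of the extended REGION_GET reply: handle attributes,
    region name/size, and a nested list of snapshot IDs."""
    return {
        "bus_name": bus,
        "dev_name": dev,
        "region_name": region_name,
        "region_size": size,
        # DEVLINK_ATTR_REGION_SNAPSHOTS nests one
        # DEVLINK_ATTR_REGION_SNAPSHOT per ID.
        "region_snapshots": [{"snapshot_id": i} for i in snapshot_ids],
    }
```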

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/uapi/linux/devlink.h |  3 +++
 net/core/devlink.c           | 53 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 8d24f49..786185a 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -229,6 +229,9 @@ enum devlink_attr {
 
 	DEVLINK_ATTR_REGION_NAME,		/* string */
 	DEVLINK_ATTR_REGION_SIZE,		/* u64 */
+	DEVLINK_ATTR_REGION_SNAPSHOTS,		/* nested */
+	DEVLINK_ATTR_REGION_SNAPSHOT,		/* nested */
+	DEVLINK_ATTR_REGION_SNAPSHOT_ID,	/* u32 */
 
 	/* add new attributes above here, update the policy in devlink.c */
 
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 20d243d..915bb33 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -2630,6 +2630,55 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
 	return devlink->ops->reload(devlink);
 }
 
+static int devlink_nl_region_snapshot_id_put(struct sk_buff *msg,
+					     struct devlink *devlink,
+					     struct devlink_snapshot *snapshot)
+{
+	struct nlattr *snap_attr;
+	int err;
+
+	snap_attr = nla_nest_start(msg, DEVLINK_ATTR_REGION_SNAPSHOT);
+	if (!snap_attr)
+		return -EINVAL;
+
+	err = nla_put_u32(msg, DEVLINK_ATTR_REGION_SNAPSHOT_ID, snapshot->id);
+	if (err)
+		goto nla_put_failure;
+
+	nla_nest_end(msg, snap_attr);
+	return 0;
+
+nla_put_failure:
+	nla_nest_cancel(msg, snap_attr);
+	return err;
+}
+
+static int devlink_nl_region_snapshots_id_put(struct sk_buff *msg,
+					      struct devlink *devlink,
+					      struct devlink_region *region)
+{
+	struct devlink_snapshot *snapshot;
+	struct nlattr *snapshots_attr;
+	int err;
+
+	snapshots_attr = nla_nest_start(msg, DEVLINK_ATTR_REGION_SNAPSHOTS);
+	if (!snapshots_attr)
+		return -EINVAL;
+
+	list_for_each_entry(snapshot, &region->snapshot_list, list) {
+		err = devlink_nl_region_snapshot_id_put(msg, devlink, snapshot);
+		if (err)
+			goto nla_put_failure;
+	}
+
+	nla_nest_end(msg, snapshots_attr);
+	return 0;
+
+nla_put_failure:
+	nla_nest_cancel(msg, snapshots_attr);
+	return err;
+}
+
 static int devlink_nl_region_fill(struct sk_buff *msg, struct devlink *devlink,
 				  enum devlink_command cmd, u32 portid,
 				  u32 seq, int flags,
@@ -2656,6 +2705,10 @@ static int devlink_nl_region_fill(struct sk_buff *msg, struct devlink *devlink,
 	if (err)
 		goto nla_put_failure;
 
+	err = devlink_nl_region_snapshots_id_put(msg, devlink, region);
+	if (err)
+		goto nla_put_failure;
+
 	genlmsg_end(msg, hdr);
 	return 0;
 
-- 
1.8.3.1


* [PATCH net-next 6/9] devlink: Add support for region snapshot delete command
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

Add support for DEVLINK_CMD_REGION_DEL, used for deleting a snapshot
from a region. The snapshot ID is required.
Also add notification support for NEW and DEL of snapshots.
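
One detail worth noting in the handler below: an unknown region or
snapshot ID fails with -EINVAL, and the DEL notification is sent
before the snapshot is freed, so listeners still receive a valid ID.
A minimal Python model of that ordering (names mine, not kernel
code):

```python
def region_del(events, region_name, snapshots, snapshot_id):
    """Model of devlink_nl_cmd_region_del(): validate the ID, notify
    listeners, then free the snapshot."""
    if snapshot_id not in snapshots:
        raise ValueError(snapshot_id)   # kernel returns -EINVAL
    # Notify first, delete second, matching the patch's ordering.
    events.append(("DEL", region_name, snapshot_id))
    del snapshots[snapshot_id]
```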

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/uapi/linux/devlink.h |  2 +
 net/core/devlink.c           | 93 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 95 insertions(+)

diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 786185a..8662a03 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -80,6 +80,8 @@ enum devlink_command {
 
 	DEVLINK_CMD_REGION_GET,
 	DEVLINK_CMD_REGION_SET,
+	DEVLINK_CMD_REGION_NEW,
+	DEVLINK_CMD_REGION_DEL,
 
 	/* add new commands above here */
 	__DEVLINK_CMD_MAX,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 915bb33..f5c90a8 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -2717,6 +2717,58 @@ static int devlink_nl_region_fill(struct sk_buff *msg, struct devlink *devlink,
 	return err;
 }
 
+static void devlink_nl_region_notify(struct devlink_region *region,
+				     struct devlink_snapshot *snapshot,
+				     enum devlink_command cmd)
+{
+	struct devlink *devlink = region->devlink;
+	struct sk_buff *msg;
+	void *hdr;
+	int err;
+
+	WARN_ON(cmd != DEVLINK_CMD_REGION_NEW && cmd != DEVLINK_CMD_REGION_DEL);
+
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return;
+
+	hdr = genlmsg_put(msg, 0, 0, &devlink_nl_family, 0, cmd);
+	if (!hdr)
+		goto out_free_msg;
+
+	err = devlink_nl_put_handle(msg, devlink);
+	if (err)
+		goto out_cancel_msg;
+
+	err = nla_put_string(msg, DEVLINK_ATTR_REGION_NAME,
+			     region->name);
+	if (err)
+		goto out_cancel_msg;
+
+	if (snapshot) {
+		err = nla_put_u32(msg, DEVLINK_ATTR_REGION_SNAPSHOT_ID,
+				  snapshot->id);
+		if (err)
+			goto out_cancel_msg;
+	} else {
+		err = nla_put_u64_64bit(msg, DEVLINK_ATTR_REGION_SIZE,
+					region->size, DEVLINK_ATTR_PAD);
+		if (err)
+			goto out_cancel_msg;
+	}
+	genlmsg_end(msg, hdr);
+
+	genlmsg_multicast_netns(&devlink_nl_family, devlink_net(devlink),
+				msg, 0, DEVLINK_MCGRP_CONFIG, GFP_KERNEL);
+
+	return;
+
+out_cancel_msg:
+	genlmsg_cancel(msg, hdr);
+out_free_msg:
+	nlmsg_free(msg);
+}
+
 static int devlink_nl_cmd_region_get_doit(struct sk_buff *skb,
 					  struct genl_info *info)
 {
@@ -2788,6 +2840,35 @@ static int devlink_nl_cmd_region_get_dumpit(struct sk_buff *msg,
 	return msg->len;
 }
 
+static int devlink_nl_cmd_region_del(struct sk_buff *skb,
+				     struct genl_info *info)
+{
+	struct devlink *devlink = info->user_ptr[0];
+	struct devlink_snapshot *snapshot;
+	struct devlink_region *region;
+	const char *region_name;
+	u32 snapshot_id;
+
+	if (!info->attrs[DEVLINK_ATTR_REGION_NAME] ||
+	    !info->attrs[DEVLINK_ATTR_REGION_SNAPSHOT_ID])
+		return -EINVAL;
+
+	region_name = nla_data(info->attrs[DEVLINK_ATTR_REGION_NAME]);
+	snapshot_id = nla_get_u32(info->attrs[DEVLINK_ATTR_REGION_SNAPSHOT_ID]);
+
+	region = devlink_region_get_by_name(devlink, region_name);
+	if (!region)
+		return -EINVAL;
+
+	snapshot = devlink_region_snapshot_get_by_id(region, snapshot_id);
+	if (!snapshot)
+		return -EINVAL;
+
+	devlink_nl_region_notify(region, snapshot, DEVLINK_CMD_REGION_DEL);
+	devlink_region_snapshot_del(snapshot);
+	return 0;
+}
+
 static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
 	[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING },
 	[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING },
@@ -2809,6 +2890,7 @@ static int devlink_nl_cmd_region_get_dumpit(struct sk_buff *msg,
 	[DEVLINK_ATTR_RESOURCE_ID] = { .type = NLA_U64},
 	[DEVLINK_ATTR_RESOURCE_SIZE] = { .type = NLA_U64},
 	[DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING },
+	[DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32 },
 };
 
 static const struct genl_ops devlink_nl_ops[] = {
@@ -2999,6 +3081,13 @@ static int devlink_nl_cmd_region_get_dumpit(struct sk_buff *msg,
 		.flags = GENL_ADMIN_PERM,
 		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
 	},
+	{
+		.cmd = DEVLINK_CMD_REGION_DEL,
+		.doit = devlink_nl_cmd_region_del,
+		.policy = devlink_nl_policy,
+		.flags = GENL_ADMIN_PERM,
+		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
+	},
 };
 
 static struct genl_family devlink_nl_family __ro_after_init = {
@@ -3568,6 +3657,7 @@ struct devlink_region *devlink_region_create(struct devlink *devlink,
 	region->size = region_size;
 	INIT_LIST_HEAD(&region->snapshot_list);
 	list_add_tail(&region->list, &devlink->region_list);
+	devlink_nl_region_notify(region, NULL, DEVLINK_CMD_REGION_NEW);
 
 	mutex_unlock(&devlink->lock);
 	return region;
@@ -3595,6 +3685,8 @@ void devlink_region_destroy(struct devlink_region *region)
 		devlink_region_snapshot_del(snapshot);
 
 	list_del(&region->list);
+
+	devlink_nl_region_notify(region, NULL, DEVLINK_CMD_REGION_DEL);
 	mutex_unlock(&devlink->lock);
 	kfree(region);
 }
@@ -3675,6 +3767,7 @@ int devlink_region_snapshot_create(struct devlink_region *region, u64 data_len,
 
 	region->cur_snapshots++;
 
+	devlink_nl_region_notify(region, snapshot, DEVLINK_CMD_REGION_NEW);
 	mutex_unlock(&devlink->lock);
 	return 0;
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH net-next 7/9] devlink: Add support for region snapshot read command
  2018-03-29 16:07 [PATCH net-next 0/9] devlink: Add support for region access Alex Vesker
                   ` (5 preceding siblings ...)
  2018-03-29 16:07 ` [PATCH net-next 6/9] devlink: Add support for region snapshot delete command Alex Vesker
@ 2018-03-29 16:07 ` Alex Vesker
  2018-03-29 16:07 ` [PATCH net-next 8/9] net/mlx4_core: Add health buffer address capability Alex Vesker
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

Add support for DEVLINK_CMD_REGION_READ, used for both reading
and dumping region data. Read allows reading from a region-specific
address for a given length; dump allows reading the full region.
If only a snapshot ID is provided, a snapshot dump will be done.
If a snapshot ID, address and length are provided, a snapshot read
will be done.

This interface currently serves snapshot access and will be used in
the same way to access live data on the region.
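As a sanity check on the chunking behavior described above, here is a
minimal user-space C model of the chunk loop used when filling the
netlink message; the constant matches DEVLINK_REGION_READ_CHUNK_SIZE
from the patch, but the helper name is illustrative, not a kernel API:

```c
#include <assert.h>
#include <stdint.h>

#define DEVLINK_REGION_READ_CHUNK_SIZE 256

/* Return how many chunk attributes a read of [start_offset, end_offset)
 * would produce: full 256-byte chunks plus one trailing partial chunk. */
static unsigned int count_chunks(uint64_t start_offset, uint64_t end_offset)
{
	uint64_t curr_offset = start_offset;
	unsigned int chunks = 0;

	while (curr_offset < end_offset) {
		uint32_t data_size;

		if (end_offset - curr_offset < DEVLINK_REGION_READ_CHUNK_SIZE)
			data_size = end_offset - curr_offset;
		else
			data_size = DEVLINK_REGION_READ_CHUNK_SIZE;

		curr_offset += data_size;
		chunks++;
	}
	return chunks;
}
```

A 1000-byte read, for example, produces three full chunks and one
232-byte tail chunk.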

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/uapi/linux/devlink.h |   7 ++
 net/core/devlink.c           | 182 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 189 insertions(+)

diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 8662a03..e9e94dd 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -82,6 +82,7 @@ enum devlink_command {
 	DEVLINK_CMD_REGION_SET,
 	DEVLINK_CMD_REGION_NEW,
 	DEVLINK_CMD_REGION_DEL,
+	DEVLINK_CMD_REGION_READ,
 
 	/* add new commands above here */
 	__DEVLINK_CMD_MAX,
@@ -235,6 +236,12 @@ enum devlink_attr {
 	DEVLINK_ATTR_REGION_SNAPSHOT,		/* nested */
 	DEVLINK_ATTR_REGION_SNAPSHOT_ID,	/* u32 */
 
+	DEVLINK_ATTR_REGION_CHUNKS,		/* nested */
+	DEVLINK_ATTR_REGION_CHUNK,		/* nested */
+	DEVLINK_ATTR_REGION_CHUNK_DATA,		/* binary */
+	DEVLINK_ATTR_REGION_CHUNK_ADDR,		/* u64 */
+	DEVLINK_ATTR_REGION_CHUNK_LEN,		/* u64 */
+
 	/* add new attributes above here, update the policy in devlink.c */
 
 	__DEVLINK_ATTR_MAX,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index f5c90a8..101c6ef 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -2869,6 +2869,181 @@ static int devlink_nl_cmd_region_del(struct sk_buff *skb,
 	return 0;
 }
 
+static int devlink_nl_cmd_region_read_chunk_fill(struct sk_buff *msg,
+						 struct devlink *devlink,
+						 u8 *chunk, u32 chunk_size,
+						 u64 addr)
+{
+	struct nlattr *chunk_attr;
+	int err;
+
+	chunk_attr = nla_nest_start(msg, DEVLINK_ATTR_REGION_CHUNK);
+	if (!chunk_attr)
+		return -EINVAL;
+
+	err = nla_put(msg, DEVLINK_ATTR_REGION_CHUNK_DATA, chunk_size, chunk);
+	if (err)
+		goto nla_put_failure;
+
+	err = nla_put_u64_64bit(msg, DEVLINK_ATTR_REGION_CHUNK_ADDR, addr,
+				DEVLINK_ATTR_PAD);
+	if (err)
+		goto nla_put_failure;
+
+	nla_nest_end(msg, chunk_attr);
+	return 0;
+
+nla_put_failure:
+	nla_nest_cancel(msg, chunk_attr);
+	return err;
+}
+
+#define DEVLINK_REGION_READ_CHUNK_SIZE 256
+
+static int devlink_nl_region_read_snapshot_fill(struct sk_buff *skb,
+						struct devlink *devlink,
+						struct devlink_region *region,
+						struct nlattr **attrs,
+						u64 start_offset,
+						u64 end_offset,
+						bool dump,
+						u64 *new_offset)
+{
+	struct devlink_snapshot *snapshot;
+	u64 curr_offset = start_offset;
+	u32 snapshot_id;
+	int err = 0;
+
+	*new_offset = start_offset;
+
+	snapshot_id = nla_get_u32(attrs[DEVLINK_ATTR_REGION_SNAPSHOT_ID]);
+	snapshot = devlink_region_snapshot_get_by_id(region, snapshot_id);
+	if (!snapshot)
+		return -EINVAL;
+
+	if (end_offset > snapshot->data_len || dump)
+		end_offset = snapshot->data_len;
+
+	while (curr_offset < end_offset) {
+		u32 data_size;
+		u8 *data;
+
+		if (end_offset - curr_offset < DEVLINK_REGION_READ_CHUNK_SIZE)
+			data_size = end_offset - curr_offset;
+		else
+			data_size = DEVLINK_REGION_READ_CHUNK_SIZE;
+
+		data = &snapshot->data[curr_offset];
+		err = devlink_nl_cmd_region_read_chunk_fill(skb, devlink,
+							    data, data_size,
+							    curr_offset);
+		if (err)
+			break;
+
+		curr_offset += data_size;
+	}
+	*new_offset = curr_offset;
+
+	return err;
+}
+
+static int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
+					     struct netlink_callback *cb)
+{
+	u64 ret_offset, start_offset, end_offset = 0;
+	struct nlattr *attrs[DEVLINK_ATTR_MAX + 1];
+	const struct genl_ops *ops = cb->data;
+	struct devlink_region *region;
+	struct nlattr *chunks_attr;
+	const char *region_name;
+	struct devlink *devlink;
+	bool dump = true;
+	void *hdr;
+	int err;
+
+	start_offset = *((u64 *)&cb->args[0]);
+
+	err = nlmsg_parse(cb->nlh, GENL_HDRLEN + devlink_nl_family.hdrsize,
+			  attrs, DEVLINK_ATTR_MAX, ops->policy, NULL);
+	if (err)
+		goto out;
+
+	devlink = devlink_get_from_attrs(sock_net(cb->skb->sk), attrs);
+	if (IS_ERR(devlink))
+		goto out;
+
+	mutex_lock(&devlink_mutex);
+	mutex_lock(&devlink->lock);
+
+	if (!attrs[DEVLINK_ATTR_REGION_NAME] ||
+	    !attrs[DEVLINK_ATTR_REGION_SNAPSHOT_ID])
+		goto out_unlock;
+
+	region_name = nla_data(attrs[DEVLINK_ATTR_REGION_NAME]);
+	region = devlink_region_get_by_name(devlink, region_name);
+	if (!region)
+		goto out_unlock;
+
+	hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+			  &devlink_nl_family, NLM_F_ACK | NLM_F_MULTI,
+			  DEVLINK_CMD_REGION_READ);
+	if (!hdr)
+		goto out_unlock;
+
+	err = devlink_nl_put_handle(skb, devlink);
+	if (err)
+		goto nla_put_failure;
+
+	err = nla_put_string(skb, DEVLINK_ATTR_REGION_NAME, region_name);
+	if (err)
+		goto nla_put_failure;
+
+	chunks_attr = nla_nest_start(skb, DEVLINK_ATTR_REGION_CHUNKS);
+	if (!chunks_attr)
+		goto nla_put_failure;
+
+	if (attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR] &&
+	    attrs[DEVLINK_ATTR_REGION_CHUNK_LEN]) {
+		if (!start_offset)
+			start_offset =
+				nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR]);
+
+		end_offset = nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR]);
+		end_offset += nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_LEN]);
+		dump = false;
+	}
+
+	err = devlink_nl_region_read_snapshot_fill(skb, devlink,
+						   region, attrs,
+						   start_offset,
+						   end_offset, dump,
+						   &ret_offset);
+
+	if (err && err != -EMSGSIZE)
+		goto nla_put_failure;
+
+	/* Check if there was any progress done to prevent infinite loop */
+	if (ret_offset == start_offset)
+		goto nla_put_failure;
+
+	*((u64 *)&cb->args[0]) = ret_offset;
+
+	nla_nest_end(skb, chunks_attr);
+	genlmsg_end(skb, hdr);
+	mutex_unlock(&devlink->lock);
+	mutex_unlock(&devlink_mutex);
+
+	return skb->len;
+
+nla_put_failure:
+	genlmsg_cancel(skb, hdr);
+out_unlock:
+	mutex_unlock(&devlink->lock);
+	mutex_unlock(&devlink_mutex);
+out:
+	return 0;
+}
+
 static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
 	[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING },
 	[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING },
@@ -3088,6 +3263,13 @@ static int devlink_nl_cmd_region_del(struct sk_buff *skb,
 		.flags = GENL_ADMIN_PERM,
 		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
 	},
+	{
+		.cmd = DEVLINK_CMD_REGION_READ,
+		.dumpit = devlink_nl_cmd_region_read_dumpit,
+		.policy = devlink_nl_policy,
+		.flags = GENL_ADMIN_PERM,
+		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
+	},
 };
 
 static struct genl_family devlink_nl_family __ro_after_init = {
-- 
1.8.3.1


* [PATCH net-next 8/9] net/mlx4_core: Add health buffer address capability
  2018-03-29 16:07 [PATCH net-next 0/9] devlink: Add support for region access Alex Vesker
                   ` (6 preceding siblings ...)
  2018-03-29 16:07 ` [PATCH net-next 7/9] devlink: Add support for region snapshot read command Alex Vesker
@ 2018-03-29 16:07 ` Alex Vesker
  2018-03-29 16:07 ` [PATCH net-next 9/9] net/mlx4_core: Add Crdump FW snapshot support Alex Vesker
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

The health buffer address is a 32-bit PCI address offset provided by
the FW. This offset is used for reading FW health debug data
located in the shared CR space. CR space is accessible to both the
driver and the FW and allows for different queries and configurations.
The health buffer is always 64B of readable data followed by a
lock which is used to block volatile CR space access.
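A small C sketch of the layout stated above (64 readable bytes followed
by a 4-byte lock word, which is not part of the readable data); the
function and selftest names are ours, and the word-by-word copy mirrors
what the driver later does with readl() on the mapped CR space:

```c
#include <assert.h>
#include <stdint.h>

#define HEALTH_BUFFER_SIZE 0x40

/* Copy the readable part of the health buffer word by word; the lock
 * word that follows the 64 data bytes is deliberately not copied. */
static void health_buffer_copy(const uint32_t *cr_health, uint32_t *out)
{
	int i;

	for (i = 0; i < HEALTH_BUFFER_SIZE / 4; i++)
		out[i] = cr_health[i];
}

/* Fill a fake buffer (16 data words + 1 lock word), copy it, and check
 * that exactly the 64 readable bytes came across. Returns 1 on success. */
static int health_copy_selftest(void)
{
	uint32_t src[HEALTH_BUFFER_SIZE / 4 + 1]; /* data + lock word */
	uint32_t dst[HEALTH_BUFFER_SIZE / 4] = {0};
	int i;

	for (i = 0; i < HEALTH_BUFFER_SIZE / 4 + 1; i++)
		src[i] = 0x1000u + (uint32_t)i;
	health_buffer_copy(src, dst);
	for (i = 0; i < HEALTH_BUFFER_SIZE / 4; i++)
		if (dst[i] != 0x1000u + (uint32_t)i)
			return 0;
	return 1;
}
```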

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx4/fw.c   | 5 ++++-
 drivers/net/ethernet/mellanox/mlx4/fw.h   | 1 +
 drivers/net/ethernet/mellanox/mlx4/main.c | 1 +
 include/linux/mlx4/device.h               | 1 +
 4 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index 634f603..4bb266e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -823,7 +823,7 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
 #define QUERY_DEV_CAP_QP_RATE_LIMIT_NUM_OFFSET	0xcc
 #define QUERY_DEV_CAP_QP_RATE_LIMIT_MAX_OFFSET	0xd0
 #define QUERY_DEV_CAP_QP_RATE_LIMIT_MIN_OFFSET	0xd2
-
+#define QUERY_DEV_CAP_HEALTH_BUFFER_ADDRESS_OFFSET	0xe4
 
 	dev_cap->flags2 = 0;
 	mailbox = mlx4_alloc_cmd_mailbox(dev);
@@ -1078,6 +1078,9 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
 		dev_cap->rl_caps.min_unit = size >> 14;
 	}
 
+	MLX4_GET(dev_cap->health_buffer_addrs, outbox,
+		 QUERY_DEV_CAP_HEALTH_BUFFER_ADDRESS_OFFSET);
+
 	MLX4_GET(field32, outbox, QUERY_DEV_CAP_EXT_2_FLAGS_OFFSET);
 	if (field32 & (1 << 16))
 		dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_UPDATE_QP;
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
index cd6399c..650ae08 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.h
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
@@ -128,6 +128,7 @@ struct mlx4_dev_cap {
 	u32 dmfs_high_rate_qpn_base;
 	u32 dmfs_high_rate_qpn_range;
 	struct mlx4_rate_limit_caps rl_caps;
+	u32 health_buffer_addrs;
 	struct mlx4_port_cap port_cap[MLX4_MAX_PORTS + 1];
 	bool wol_port[MLX4_MAX_PORTS + 1];
 };
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index 100ded5..acc6ccc 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -427,6 +427,7 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
 	dev->caps.max_rss_tbl_sz     = dev_cap->max_rss_tbl_sz;
 	dev->caps.wol_port[1]          = dev_cap->wol_port[1];
 	dev->caps.wol_port[2]          = dev_cap->wol_port[2];
+	dev->caps.health_buffer_addrs  = dev_cap->health_buffer_addrs;
 
 	/* Save uar page shift */
 	if (!mlx4_is_slave(dev)) {
diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
index b2423ba..1e4b0f1 100644
--- a/include/linux/mlx4/device.h
+++ b/include/linux/mlx4/device.h
@@ -633,6 +633,7 @@ struct mlx4_caps {
 	u32			vf_caps;
 	bool			wol_port[MLX4_MAX_PORTS + 1];
 	struct mlx4_rate_limit_caps rl_caps;
+	u32			health_buffer_addrs;
 };
 
 struct mlx4_buf_list {
-- 
1.8.3.1


* [PATCH net-next 9/9] net/mlx4_core: Add Crdump FW snapshot support
  2018-03-29 16:07 [PATCH net-next 0/9] devlink: Add support for region access Alex Vesker
                   ` (7 preceding siblings ...)
  2018-03-29 16:07 ` [PATCH net-next 8/9] net/mlx4_core: Add health buffer address capability Alex Vesker
@ 2018-03-29 16:07 ` Alex Vesker
  2018-03-29 17:13 ` [PATCH net-next 0/9] devlink: Add support for region access Andrew Lunn
  2018-03-29 18:23 ` Andrew Lunn
  10 siblings, 0 replies; 28+ messages in thread
From: Alex Vesker @ 2018-03-29 16:07 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Tariq Toukan, Jiri Pirko, Alex Vesker

Crdump allows the driver to create a snapshot of the FW PCI
crspace and health buffer during a critical FW issue.
A snapshot will be taken in case of a FW command timeout, the FW
getting stuck, or a non-zero value in the catastrophic buffer.

The snapshots are exposed using devlink: the cr-space and fw-health
address regions are registered on init, and snapshots are attached
to them once a new snapshot is collected by the driver.
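The trigger conditions listed above can be modeled as a simple
predicate; this is an illustrative sketch of the decision, not the
driver's actual code or names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Collect a crdump when the FW command interface timed out, the FW is
 * stuck, or the catastrophic buffer holds a non-zero value. */
static bool crdump_should_collect(bool cmd_timeout, bool fw_stuck,
				  uint32_t catas_buf)
{
	return cmd_timeout || fw_stuck || catas_buf != 0;
}
```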

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx4/Makefile |   2 +-
 drivers/net/ethernet/mellanox/mlx4/catas.c  |   6 +-
 drivers/net/ethernet/mellanox/mlx4/crdump.c | 224 ++++++++++++++++++++++++++++
 drivers/net/ethernet/mellanox/mlx4/main.c   |  10 +-
 drivers/net/ethernet/mellanox/mlx4/mlx4.h   |   4 +
 include/linux/mlx4/device.h                 |   6 +
 6 files changed, 248 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx4/crdump.c

diff --git a/drivers/net/ethernet/mellanox/mlx4/Makefile b/drivers/net/ethernet/mellanox/mlx4/Makefile
index 16b10d0..3f40077 100644
--- a/drivers/net/ethernet/mellanox/mlx4/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx4/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_MLX4_CORE)		+= mlx4_core.o
 
 mlx4_core-y :=	alloc.o catas.o cmd.o cq.o eq.o fw.o fw_qos.o icm.o intf.o \
 		main.o mcg.o mr.o pd.o port.o profile.o qp.o reset.o sense.o \
-		srq.o resource_tracker.o
+		srq.o resource_tracker.o crdump.o
 
 obj-$(CONFIG_MLX4_EN)               += mlx4_en.o
 
diff --git a/drivers/net/ethernet/mellanox/mlx4/catas.c b/drivers/net/ethernet/mellanox/mlx4/catas.c
index e2b6b0c..e9fdf14 100644
--- a/drivers/net/ethernet/mellanox/mlx4/catas.c
+++ b/drivers/net/ethernet/mellanox/mlx4/catas.c
@@ -178,10 +178,12 @@ void mlx4_enter_error_state(struct mlx4_dev_persistent *persist)
 
 	dev = persist->dev;
 	mlx4_err(dev, "device is going to be reset\n");
-	if (mlx4_is_slave(dev))
+	if (mlx4_is_slave(dev)) {
 		err = mlx4_reset_slave(dev);
-	else
+	} else {
+		mlx4_crdump_collect(dev);
 		err = mlx4_reset_master(dev);
+	}
 
 	if (!err) {
 		mlx4_err(dev, "device was reset successfully\n");
diff --git a/drivers/net/ethernet/mellanox/mlx4/crdump.c b/drivers/net/ethernet/mellanox/mlx4/crdump.c
new file mode 100644
index 0000000..677d2d9
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx4/crdump.c
@@ -0,0 +1,224 @@
+/*
+ * Copyright (c) 2018, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "mlx4.h"
+
+#define BAD_ACCESS			0xBADACCE5
+#define HEALTH_BUFFER_SIZE		0x40
+#define CR_ENABLE_BIT			swab32(BIT(6))
+#define CR_ENABLE_BIT_OFFSET		0xF3F04
+#define MAX_NUM_OF_DUMPS_TO_STORE	(8)
+
+const char *region_cr_space_str = "cr-space";
+const char *region_fw_health_str = "fw-health";
+
+/* Set to true in case the CR enable bit was set before the crdump */
+static bool crdump_enable_bit_set;
+
+static void crdump_enable_crspace_access(struct mlx4_dev *dev, u8 *cr_space)
+{
+	/* Get current enable bit value */
+	crdump_enable_bit_set =
+		readl(cr_space + CR_ENABLE_BIT_OFFSET) & CR_ENABLE_BIT;
+
+	/* Enable FW CR filter (set bit6 to 0) */
+	if (crdump_enable_bit_set)
+		writel(readl(cr_space + CR_ENABLE_BIT_OFFSET) & ~CR_ENABLE_BIT,
+		       cr_space + CR_ENABLE_BIT_OFFSET);
+
+	/* Enable blocking of volatile CR space accesses */
+	writel(swab32(1), cr_space + dev->caps.health_buffer_addrs +
+	       HEALTH_BUFFER_SIZE);
+}
+
+static void crdump_disable_crspace_access(struct mlx4_dev *dev, u8 *cr_space)
+{
+	/* Disable blocking of volatile CR space accesses */
+	writel(0, cr_space + dev->caps.health_buffer_addrs +
+	       HEALTH_BUFFER_SIZE);
+
+	/* Restore FW CR filter value (set bit6 to original value) */
+	if (crdump_enable_bit_set)
+		writel(readl(cr_space + CR_ENABLE_BIT_OFFSET) | CR_ENABLE_BIT,
+		       cr_space + CR_ENABLE_BIT_OFFSET);
+}
+
+void mlx4_crdump_collect_crspace(struct mlx4_dev *dev, u8 *cr_space, u32 id)
+{
+	struct mlx4_fw_crdump *crdump = &dev->persist->crdump;
+	struct pci_dev *pdev = dev->persist->pdev;
+	unsigned long cr_res_size;
+	u8 *crspace_data;
+	int offset;
+	int err;
+
+	if (!crdump->region_crspace) {
+		mlx4_err(dev, "crdump: cr-space region is NULL\n");
+		return;
+	}
+
+	/* Try to collect CR space */
+	cr_res_size = pci_resource_len(pdev, 0);
+	crspace_data = kzalloc(cr_res_size, GFP_KERNEL);
+	if (crspace_data) {
+		for (offset = 0; offset < cr_res_size; offset += 4)
+			*(u32 *)(crspace_data + offset) =
+					readl(cr_space + offset);
+
+		err = devlink_region_snapshot_create(crdump->region_crspace,
+						     cr_res_size, crspace_data,
+						     id);
+		if (err)
+			mlx4_warn(dev, "crdump: devlink create %s snapshot id %d err %d\n",
+				  region_cr_space_str, id, err);
+		else
+			mlx4_info(dev, "crdump: added snapshot %d to devlink region %s\n",
+				  id, region_cr_space_str);
+
+		kfree(crspace_data);
+	} else {
+		mlx4_err(dev, "crdump: Failed to allocate crspace buffer\n");
+	}
+}
+
+void mlx4_crdump_collect_fw_health(struct mlx4_dev *dev, u8 *cr_space, u32 id)
+{
+	struct mlx4_fw_crdump *crdump = &dev->persist->crdump;
+	u8 *health_data;
+	int offset;
+	int err;
+
+	if (!crdump->region_fw_health) {
+		mlx4_err(dev, "crdump: fw-health region is NULL\n");
+		return;
+	}
+
+	/* Try to collect health buffer */
+	health_data = kzalloc(HEALTH_BUFFER_SIZE, GFP_KERNEL);
+	if (health_data) {
+		u8 *health_buf_s = cr_space + dev->caps.health_buffer_addrs;
+
+		for (offset = 0; offset < HEALTH_BUFFER_SIZE; offset += 4)
+			*(u32 *)(health_data + offset) =
+					readl(health_buf_s + offset);
+
+		err = devlink_region_snapshot_create(crdump->region_fw_health,
+						     HEALTH_BUFFER_SIZE,
+						     health_data,
+						     id);
+		if (err)
+			mlx4_warn(dev, "crdump: devlink create %s snapshot id %d err %d\n",
+				  region_fw_health_str, id, err);
+		else
+			mlx4_info(dev, "crdump: added snapshot %d to devlink region %s\n",
+				  id, region_fw_health_str);
+
+		kfree(health_data);
+	} else {
+		mlx4_err(dev, "crdump: Failed to allocate health buffer\n");
+	}
+}
+
+int mlx4_crdump_collect(struct mlx4_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(mlx4_priv(dev));
+	struct pci_dev *pdev = dev->persist->pdev;
+	unsigned long cr_res_size;
+	u8 *cr_space;
+	u32 id;
+
+	if (!dev->caps.health_buffer_addrs) {
+		mlx4_info(dev, "crdump: FW doesn't support health buffer access, skipping\n");
+		return 0;
+	}
+
+	cr_res_size = pci_resource_len(pdev, 0);
+
+	cr_space = ioremap(pci_resource_start(pdev, 0), cr_res_size);
+	if (!cr_space) {
+		mlx4_err(dev, "crdump: Failed to map pci cr region\n");
+		return -ENODEV;
+	}
+
+	crdump_enable_crspace_access(dev, cr_space);
+
+	/* Get the available snapshot ID for the dumps */
+	id = devlink_region_shapshot_id_get(devlink);
+
+	/* Try to capture dumps */
+	mlx4_crdump_collect_crspace(dev, cr_space, id);
+	mlx4_crdump_collect_fw_health(dev, cr_space, id);
+
+	crdump_disable_crspace_access(dev, cr_space);
+
+	iounmap(cr_space);
+	return 0;
+}
+
+int mlx4_crdump_init(struct mlx4_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(mlx4_priv(dev));
+	struct mlx4_fw_crdump *crdump = &dev->persist->crdump;
+	struct pci_dev *pdev = dev->persist->pdev;
+
+	/* Create cr-space region */
+	crdump->region_crspace =
+		devlink_region_create(devlink,
+				      region_cr_space_str,
+				      MAX_NUM_OF_DUMPS_TO_STORE,
+				      pci_resource_len(pdev, 0));
+	if (IS_ERR(crdump->region_crspace))
+		mlx4_warn(dev, "crdump: create devlink region %s err %ld\n",
+			  region_cr_space_str,
+			  PTR_ERR(crdump->region_crspace));
+
+	/* Create fw-health region */
+	crdump->region_fw_health =
+		devlink_region_create(devlink,
+				      region_fw_health_str,
+				      MAX_NUM_OF_DUMPS_TO_STORE,
+				      HEALTH_BUFFER_SIZE);
+	if (IS_ERR(crdump->region_fw_health))
+		mlx4_warn(dev, "crdump: create devlink region %s err %ld\n",
+			  region_fw_health_str,
+			  PTR_ERR(crdump->region_fw_health));
+
+	return 0;
+}
+
+void mlx4_crdump_end(struct mlx4_dev *dev)
+{
+	struct mlx4_fw_crdump *crdump = &dev->persist->crdump;
+
+	devlink_region_destroy(crdump->region_fw_health);
+	devlink_region_destroy(crdump->region_crspace);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index acc6ccc..869b163 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -3787,10 +3787,14 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data,
 		}
 	}
 
-	err = mlx4_catas_init(&priv->dev);
+	err = mlx4_crdump_init(&priv->dev);
 	if (err)
 		goto err_release_regions;
 
+	err = mlx4_catas_init(&priv->dev);
+	if (err)
+		goto err_crdump;
+
 	err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv, 0);
 	if (err)
 		goto err_catas;
@@ -3800,6 +3804,9 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data,
 err_catas:
 	mlx4_catas_end(&priv->dev);
 
+err_crdump:
+	mlx4_crdump_end(&priv->dev);
+
 err_release_regions:
 	pci_release_regions(pdev);
 
@@ -4005,6 +4012,7 @@ static void mlx4_remove_one(struct pci_dev *pdev)
 	else
 		mlx4_info(dev, "%s: interface is down\n", __func__);
 	mlx4_catas_end(dev);
+	mlx4_crdump_end(dev);
 	if (dev->flags & MLX4_FLAG_SRIOV && !active_vfs) {
 		mlx4_warn(dev, "Disabling SR-IOV\n");
 		pci_disable_sriov(pdev);
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h
index c68da19..809b4e7 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h
@@ -1042,6 +1042,8 @@ int mlx4_calc_vf_counters(struct mlx4_dev *dev, int slave, int port,
 void mlx4_stop_catas_poll(struct mlx4_dev *dev);
 int mlx4_catas_init(struct mlx4_dev *dev);
 void mlx4_catas_end(struct mlx4_dev *dev);
+int mlx4_crdump_init(struct mlx4_dev *dev);
+void mlx4_crdump_end(struct mlx4_dev *dev);
 int mlx4_restart_one(struct pci_dev *pdev);
 int mlx4_register_device(struct mlx4_dev *dev);
 void mlx4_unregister_device(struct mlx4_dev *dev);
@@ -1227,6 +1229,8 @@ int mlx4_comm_cmd(struct mlx4_dev *dev, u8 cmd, u16 param,
 void mlx4_enter_error_state(struct mlx4_dev_persistent *persist);
 int mlx4_comm_internal_err(u32 slave_read);
 
+int mlx4_crdump_collect(struct mlx4_dev *dev);
+
 int mlx4_SENSE_PORT(struct mlx4_dev *dev, int port,
 		    enum mlx4_port_type *type);
 void mlx4_do_sense_ports(struct mlx4_dev *dev,
diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
index 1e4b0f1..b5d8e7d 100644
--- a/include/linux/mlx4/device.h
+++ b/include/linux/mlx4/device.h
@@ -855,6 +855,11 @@ struct mlx4_vf_dev {
 	u8			n_ports;
 };
 
+struct mlx4_fw_crdump {
+	struct devlink_region *region_crspace;
+	struct devlink_region *region_fw_health;
+};
+
 enum mlx4_pci_status {
 	MLX4_PCI_STATUS_DISABLED,
 	MLX4_PCI_STATUS_ENABLED,
@@ -875,6 +880,7 @@ struct mlx4_dev_persistent {
 	u8	interface_state;
 	struct mutex		pci_status_mutex; /* sync pci state */
 	enum mlx4_pci_status	pci_status;
+	struct mlx4_fw_crdump	crdump;
 };
 
 struct mlx4_dev {
-- 
1.8.3.1


* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 16:07 [PATCH net-next 0/9] devlink: Add support for region access Alex Vesker
                   ` (8 preceding siblings ...)
  2018-03-29 16:07 ` [PATCH net-next 9/9] net/mlx4_core: Add Crdump FW snapshot support Alex Vesker
@ 2018-03-29 17:13 ` Andrew Lunn
  2018-03-29 18:59   ` Alex Vesker
  2018-03-29 18:23 ` Andrew Lunn
  10 siblings, 1 reply; 28+ messages in thread
From: Andrew Lunn @ 2018-03-29 17:13 UTC (permalink / raw)
  To: Alex Vesker; +Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

On Thu, Mar 29, 2018 at 07:07:43PM +0300, Alex Vesker wrote:
> This is a proposal which will allow access to driver defined address
> regions using devlink. Each device can create its supported address
> regions and register them. A device which exposes a region will allow
> access to it using devlink.
> 
> The suggested implementation will allow exposing regions to the user,
> reading and dumping snapshots taken from different regions. 
> A snapshot represents a memory image of a region taken by the driver.
> 
> If a device collects a snapshot of an address region it can be later
> exposed using devlink region read or dump commands.
> This functionality allows for future analyses on the snapshots to be
> done.

Hi Alex

So the device is in charge of making a snapshot? A user cannot
initiate it?

Seems like if i'm trying to debug something, i want to take a snapshot
in the good state, issue the command which breaks things, and then
take another snapshot. Looking at the diff then gives me an idea what
happened.

> Show all of the exposed regions with region sizes:
> $ devlink region show
> pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]

So you have 2Mbytes of snapshot data. Is this held in the device, or
kernel memory?

> Dump a snapshot:
> $ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
> 0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
> 0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
> 0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
> 0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5
> 
> Read a specific part of a snapshot:
> $ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0 
> 	length 16
> 0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30

Why a separate command? It seems to be just a subset of dump.

    Andrew


* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 16:07 [PATCH net-next 0/9] devlink: Add support for region access Alex Vesker
                   ` (9 preceding siblings ...)
  2018-03-29 17:13 ` [PATCH net-next 0/9] devlink: Add support for region access Andrew Lunn
@ 2018-03-29 18:23 ` Andrew Lunn
  2018-03-30  9:51   ` Rahul Lakkireddy
  10 siblings, 1 reply; 28+ messages in thread
From: Andrew Lunn @ 2018-03-29 18:23 UTC (permalink / raw)
  To: Rahul Lakkireddy, Alex Vesker
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

On Thu, Mar 29, 2018 at 07:07:43PM +0300, Alex Vesker wrote:
> This is a proposal which will allow access to driver defined address
> regions using devlink. Each device can create its supported address
> regions and register them. A device which exposes a region will allow
> access to it using devlink.

Hi Alex

Did you see the work Rahul Lakkireddy has been doing?

https://patchwork.kernel.org/patch/10305935/

It seems like these are similar, or at least overlapping. We probably
want one solution for both.

     Andrew


* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 17:13 ` [PATCH net-next 0/9] devlink: Add support for region access Andrew Lunn
@ 2018-03-29 18:59   ` Alex Vesker
  2018-03-29 19:51     ` Andrew Lunn
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Vesker @ 2018-03-29 18:59 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko



On 3/29/2018 8:13 PM, Andrew Lunn wrote:
> On Thu, Mar 29, 2018 at 07:07:43PM +0300, Alex Vesker wrote:
>> This is a proposal which will allow access to driver defined address
>> regions using devlink. Each device can create its supported address
>> regions and register them. A device which exposes a region will allow
>> access to it using devlink.
>>
>> The suggested implementation will allow exposing regions to the user,
>> reading and dumping snapshots taken from different regions.
>> A snapshot represents a memory image of a region taken by the driver.
>>
>> If a device collects a snapshot of an address region it can be later
>> exposed using devlink region read or dump commands.
>> This functionality allows for future analyses on the snapshots to be
>> done.
> Hi Alex
>
> So the device is in charge of making a snapshot? A user cannot
> initiate it?
Hi,
Correct, currently the user cannot initiate saving a snapshot, but
as I said in the cover letter, support for dumping "live" regions
is planned.

> Seems like if i'm trying to debug something, i want to take a snapshot
> in the good state, issue the command which breaks things, and then
> take another snapshot. Looking at the diff then gives me an idea what
> happened.
>
>> Show all of the exposed regions with region sizes:
>> $ devlink region show
>> pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
> So you have 2Mbytes of snapshot data. Is this held in the device, or
> kernel memory?
This is allocated in devlink; the maximum number of snapshots is set by
the driver.

>> Dump a snapshot:
>> $ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
>> 0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
>> 0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
>> 0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
>> 0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5
>>
>> Read a specific part of a snapshot:
>> $ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0
>> 	length 16
>> 0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
> Why a separate command? It seems to be just a subset of dump.

This is useful when debugging values at specific addresses; it also
brings the API one step closer to a read and write API.
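For post-processing, the hex dump format shown in the examples above can be parsed with a short script. This is an illustrative sketch only, assuming the "ADDRESS plus eight 16-bit big-endian words" layout seen in this thread, not part of any devlink tooling:

```python
# Parse one line of `devlink region dump` output into (offset, bytes).
# Assumption: the line layout is an address followed by eight 16-bit
# big-endian words, as in the sample output quoted in this thread.
def parse_dump_line(line):
    fields = line.split()
    offset = int(fields[0], 16)
    data = b"".join(int(w, 16).to_bytes(2, "big") for w in fields[1:])
    return offset, data

offset, data = parse_dump_line(
    "0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30")
assert offset == 0
assert len(data) == 16          # eight 16-bit words
assert data[:2] == b"\x00\x14"  # first word, big-endian
```

A tool like this makes it easy to diff two snapshots offset-by-offset, the debugging workflow Andrew describes above.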

>
>      Andrew

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 18:59   ` Alex Vesker
@ 2018-03-29 19:51     ` Andrew Lunn
  2018-03-30  5:28       ` Alex Vesker
                         ` (2 more replies)
  0 siblings, 3 replies; 28+ messages in thread
From: Andrew Lunn @ 2018-03-29 19:51 UTC (permalink / raw)
  To: Alex Vesker; +Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

> >>Show all of the exposed regions with region sizes:
> >>$ devlink region show
> >>pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
> >So you have 2Mbytes of snapshot data. Is this held in the device, or
> >kernel memory?
> This is allocated in devlink, the maximum number of snapshots is set by the
> driver.

And it seems to want contiguous pages. How well does that work after
the system has been running for a while and memory is fragmented?

> >>Dump a snapshot:
> >>$ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
> >>0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
> >>0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
> >>0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
> >>0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5
> >>
> >>Read a specific part of a snapshot:
> >>$ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0
> >>	length 16
> >>0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
> >Why a separate command? It seems to be just a subset of dump.
> 
> This is useful when debugging values on specific addresses, this also
> brings the API one step closer for a read and write API.

The functionality is useful, yes. But why two commands? Why not one
command, dump, which takes optional parameters?

Also, i doubt write support will be accepted. That sounds like the
start of an API to allow a user space driver.

      Andrew

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 19:51     ` Andrew Lunn
@ 2018-03-30  5:28       ` Alex Vesker
  2018-03-30 14:34         ` Andrew Lunn
  2018-03-30 18:07         ` David Miller
  2018-03-30 10:21       ` Jiri Pirko
  2018-03-30 18:07       ` David Miller
  2 siblings, 2 replies; 28+ messages in thread
From: Alex Vesker @ 2018-03-30  5:28 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko



On 3/29/2018 10:51 PM, Andrew Lunn wrote:
>>>> Show all of the exposed regions with region sizes:
>>>> $ devlink region show
>>>> pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
>>> So you have 2Mbytes of snapshot data. Is this held in the device, or
>>> kernel memory?
>> This is allocated in devlink, the maximum number of snapshots is set by the
>> driver.
> And it seems to want contiguous pages. How well does that work after
> the system has been running for a while and memory is fragmented?

The allocation can be changed, there is no real need for contiguous pages.
It is important to note that the number of snapshots is limited by the
driver; this can be based on the dump size or the expected frequency of
collection. I also prefer not to pre-allocate this memory.
>>>> Dump a snapshot:
>>>> $ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
>>>> 0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
>>>> 0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
>>>> 0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
>>>> 0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5
>>>>
>>>> Read a specific part of a snapshot:
>>>> $ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0
>>>> 	length 16
>>>> 0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
>>> Why a separate command? It seems to be just a subset of dump.
>> This is useful when debugging values on specific addresses, this also
>> brings the API one step closer for a read and write API.
> The functionality is useful, yes. But why two commands? Why not one
> command, dump, which takes optional parameters?

Dump in devlink means providing all the data, so saying dump address x
length y sounds confusing. Do you see this as a critical issue?

> Also, i doubt write support will be accepted. That sounds like the
> start of an API to allow a user space driver.

If this will be an issue we will stay with read access only.

>
>        Andrew

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 18:23 ` Andrew Lunn
@ 2018-03-30  9:51   ` Rahul Lakkireddy
  2018-03-30 10:24     ` Jiri Pirko
  0 siblings, 1 reply; 28+ messages in thread
From: Rahul Lakkireddy @ 2018-03-30  9:51 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: Alex Vesker, David S. Miller, netdev, Tariq Toukan, Jiri Pirko

On Thursday, March 03/29/18, 2018 at 23:53:43 +0530, Andrew Lunn wrote:
> On Thu, Mar 29, 2018 at 07:07:43PM +0300, Alex Vesker wrote:
> > This is a proposal which will allow access to driver defined address
> > regions using devlink. Each device can create its supported address
> > regions and register them. A device which exposes a region will allow
> > access to it using devlink.
> 
> Hi Alex
> 
> Did you see the work Rahul Lakkireddy has been doing?
> 
> https://patchwork.kernel.org/patch/10305935/
> 
> It seems like these are similar, or at least overlapping. We probably
> want one solution for both.
> 

We're already collecting hardware snapshots when the system is live with
ethtool --getdump (which the devlink tool is now trying to do).

We are now in the process of adding support to collect hardware
snapshots during kernel panic.

Thanks,
Rahul

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 19:51     ` Andrew Lunn
  2018-03-30  5:28       ` Alex Vesker
@ 2018-03-30 10:21       ` Jiri Pirko
  2018-03-30 18:07       ` David Miller
  2 siblings, 0 replies; 28+ messages in thread
From: Jiri Pirko @ 2018-03-30 10:21 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: Alex Vesker, David S. Miller, netdev, Tariq Toukan, Jiri Pirko

Thu, Mar 29, 2018 at 09:51:54PM CEST, andrew@lunn.ch wrote:
>> >>Show all of the exposed regions with region sizes:
>> >>$ devlink region show
>> >>pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
>> >So you have 2Mbytes of snapshot data. Is this held in the device, or
>> >kernel memory?
>> This is allocated in devlink, the maximum number of snapshots is set by the
>> driver.
>
>And it seems to want contiguous pages. How well does that work after
>the system has been running for a while and memory is fragmented?
>
>> >>Dump a snapshot:
>> >>$ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
>> >>0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
>> >>0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
>> >>0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
>> >>0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5
>> >>
>> >>Read a specific part of a snapshot:
>> >>$ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0
>> >>	length 16
>> >>0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
>> >Why a separate command? It seems to be just a subset of dump.
>> 
>> This is useful when debugging values on specific addresses, this also
>> brings the API one step closer for a read and write API.
>
>The functionality is useful, yes. But why two commands? Why not one
>command, dump, which takes optional parameters?

Between userspace and kernel, this is implemented as a single command.
So this is just a userspace wrapper. I think it is nice to provide clear
commands to the user so he is not confused about what he is doing. Also,
as Alex mentioned, we plan to have a write command which will take the
same command-line args as read. These two should be in sync.


>
>Also, i doubt write support will be accepted. That sounds like the
>start of an API to allow a user space driver.

We discussed that at netconf in Seoul and it was agreed it is needed.
We have two options: some out-of-tree crap utils with access via /dev/mem,
or something which is well defined and implemented by in-tree drivers.
Writes will serve debugging purposes, even tuning and bug hunting
in production. For that, we need a standard way to do it.
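The split Jiri describes — two userspace verbs over one kernel command — can be sketched as follows. All names here are illustrative, not the actual devlink netlink attributes or iproute2 code:

```python
# One "kernel-side" operation with an optional (address, length) window;
# `dump` and `read` are just two userspace verbs over it.
def region_request(snapshot, address=None, length=None):
    """An omitted window means the whole snapshot."""
    start = 0 if address is None else address
    end = len(snapshot) if length is None else start + length
    return snapshot[start:end]

def cmd_dump(snapshot):                    # devlink region dump
    return region_request(snapshot)

def cmd_read(snapshot, address, length):   # devlink region read
    return region_request(snapshot, address, length)

snap = bytes(range(64))
assert cmd_dump(snap) == snap              # whole region
assert cmd_read(snap, 16, 8) == snap[16:24]  # sub-window
```

This also shows why the two verbs cost nothing on the kernel side: `read` is the same request with the optional window attributes filled in.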

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30  9:51   ` Rahul Lakkireddy
@ 2018-03-30 10:24     ` Jiri Pirko
  0 siblings, 0 replies; 28+ messages in thread
From: Jiri Pirko @ 2018-03-30 10:24 UTC (permalink / raw)
  To: Rahul Lakkireddy
  Cc: Andrew Lunn, Alex Vesker, David S. Miller, netdev, Tariq Toukan,
	Jiri Pirko

Fri, Mar 30, 2018 at 11:51:57AM CEST, rahul.lakkireddy@chelsio.com wrote:
>On Thursday, March 03/29/18, 2018 at 23:53:43 +0530, Andrew Lunn wrote:
>> On Thu, Mar 29, 2018 at 07:07:43PM +0300, Alex Vesker wrote:
>> > This is a proposal which will allow access to driver defined address
>> > regions using devlink. Each device can create its supported address
>> > regions and register them. A device which exposes a region will allow
>> > access to it using devlink.
>> 
>> Hi Alex
>> 
>> Did you see the work Rahul Lakkireddy has been doing?
>> 
>> https://patchwork.kernel.org/patch/10305935/
>> 
>> It seems like these are similar, or at least overlapping. We probably
>> want one solution for both.
>> 
>
>We're already collecting hardware snapshots when system is live with
>ethtool --getdump (which devlink tool is now trying to do).

Ethtool is definitely the wrong tool for this. It uses a netdev as a
handle, but the dumps happen on the parent device - represented by a
devlink instance.
Also, in devlink we have notifications, so a daemon can actually listen
on a socket to see if there is a new dump available due to a critical
event etc.


>
>We are now in the process of adding support to collect hardware
>snapshots during kernel panic.
>
>Thanks,
>Rahul

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30  5:28       ` Alex Vesker
@ 2018-03-30 14:34         ` Andrew Lunn
  2018-03-30 16:57           ` David Ahern
  2018-03-30 18:07         ` David Miller
  1 sibling, 1 reply; 28+ messages in thread
From: Andrew Lunn @ 2018-03-30 14:34 UTC (permalink / raw)
  To: Alex Vesker; +Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

> >And it seems to want contiguous pages. How well does that work after
> >the system has been running for a while and memory is fragmented?
> 
> The allocation can be changed, there is no read need for contiguous pages.
> It is important to note that we the amount of snapshots is limited by the
> driver
> this can be based on the dump size or expected frequency of collection.
> I also prefer not to pre-allocate this memory.

The driver code also asks for a 1MB contiguous chunk of memory!  You
really should think about this API: how can you avoid double memory
allocations? And can kvmalloc be used? But then you get into the
problem of DMA'ing the memory from the device...

This API also does not scale. 1MB is actually quite small. I'm sure
there is firmware running on CPUs with a lot more than 1MB of RAM.
How well does this API work with 64MB? Say I wanted to snapshot my
GPU? Or the MC/BMC?

> Dump in devlink means provide all the data, saying dump address x length y
> sounds
> confusing.  Do you see this as a critical issue?

No, I don't. But nearly every tool I've used has one command,
e.g. uboot, coreboot, gdb, od, hexdump. Even ethtool has [offset N] [length N].

How many tools can you name which have two different commands, rather
than one with options?

      Andrew

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30 14:34         ` Andrew Lunn
@ 2018-03-30 16:57           ` David Ahern
  2018-03-30 19:39             ` Alex Vesker
  0 siblings, 1 reply; 28+ messages in thread
From: David Ahern @ 2018-03-30 16:57 UTC (permalink / raw)
  To: Andrew Lunn, Alex Vesker
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

On 3/30/18 8:34 AM, Andrew Lunn wrote:
>>> And it seems to want contiguous pages. How well does that work after
>>> the system has been running for a while and memory is fragmented?
>>
>> The allocation can be changed, there is no read need for contiguous pages.
>> It is important to note that we the amount of snapshots is limited by the
>> driver
>> this can be based on the dump size or expected frequency of collection.
>> I also prefer not to pre-allocate this memory.
> 
> The driver code also asks for a 1MB contiguous chunk of memory!  You
> really should think about this API, how can you avoid double memory
> allocations. And can kvmalloc be used. But then you get into the
> problem for DMA'ing the memory from the device...
> 
> This API also does not scale. 1MB is actually quite small. I'm sure
> there is firmware running on CPUs with a lot more than 1MB of RAM.
> How well does with API work with 64MB? Say i wanted to snapshot my
> GPU? Or the MC/BMC?
> 

That, and the drivers control the number of snapshots. The user should be
able to control the number of snapshots, and have an option to remove all
snapshots to free up that memory.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30  5:28       ` Alex Vesker
  2018-03-30 14:34         ` Andrew Lunn
@ 2018-03-30 18:07         ` David Miller
  1 sibling, 0 replies; 28+ messages in thread
From: David Miller @ 2018-03-30 18:07 UTC (permalink / raw)
  To: valex; +Cc: andrew, netdev, tariqt, jiri

From: Alex Vesker <valex@mellanox.com>
Date: Fri, 30 Mar 2018 08:28:39 +0300

> On 3/29/2018 10:51 PM, Andrew Lunn wrote:
>> Also, i doubt write support will be accepted. That sounds like the
>> start of an API to allow a user space driver.
> 
> If this will be an issue we will stay with read access only.

Because of registers which are accessed indirectly, it's hard to avoid
supporting writes in some way.

This interface is not for providing a way to do userland drivers, it's
for diagnostics only.  And indeed we did discuss this at netconf and
we had broad consensus on this matter at the time.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-29 19:51     ` Andrew Lunn
  2018-03-30  5:28       ` Alex Vesker
  2018-03-30 10:21       ` Jiri Pirko
@ 2018-03-30 18:07       ` David Miller
  2 siblings, 0 replies; 28+ messages in thread
From: David Miller @ 2018-03-30 18:07 UTC (permalink / raw)
  To: andrew; +Cc: valex, netdev, tariqt, jiri

From: Andrew Lunn <andrew@lunn.ch>
Date: Thu, 29 Mar 2018 21:51:54 +0200

> And it seems to want contiguous pages. How well does that work after
> the system has been running for a while and memory is fragmented?

Indeed this will be a problem.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30 16:57           ` David Ahern
@ 2018-03-30 19:39             ` Alex Vesker
  2018-03-30 22:26               ` David Ahern
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Vesker @ 2018-03-30 19:39 UTC (permalink / raw)
  To: David Ahern, Andrew Lunn
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko



On 3/30/2018 7:57 PM, David Ahern wrote:
> On 3/30/18 8:34 AM, Andrew Lunn wrote:
>>>> And it seems to want contiguous pages. How well does that work after
>>>> the system has been running for a while and memory is fragmented?
>>> The allocation can be changed, there is no read need for contiguous pages.
>>> It is important to note that we the amount of snapshots is limited by the
>>> driver
>>> this can be based on the dump size or expected frequency of collection.
>>> I also prefer not to pre-allocate this memory.
>> The driver code also asks for a 1MB contiguous chunk of memory!  You
>> really should think about this API, how can you avoid double memory
>> allocations. And can kvmalloc be used. But then you get into the
>> problem for DMA'ing the memory from the device...
>>
>> This API also does not scale. 1MB is actually quite small. I'm sure
>> there is firmware running on CPUs with a lot more than 1MB of RAM.
>> How well does with API work with 64MB? Say i wanted to snapshot my
>> GPU? Or the MC/BMC?
>>
> That and the drivers control the number of snapshots. The user should be
> able to control the number of snapshots, and an option to remove all
> snapshots to free up that memory.

There is an option to free up this memory, using the delete command.
The reason I added the option to control the number of snapshots from
the driver side only is that the driver knows the size of the snapshots
and when/why they will be taken.
For example, in our mlx4 driver the snapshots are taken on rare failures;
the snapshot is quite large, and from past analyses the first dump is
usually the important one, which means that 8 is more than enough in my
case. If a user wants more than that, he can always monitor notifications,
read the snapshot and delete it once backed up; there is no reason to keep
all of this data in the kernel.
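The lifecycle described here — a driver-fixed cap, with the delete command freeing slots — can be modeled in a few lines. This is an illustrative sketch of the semantics only, not the devlink implementation:

```python
# Model of region snapshot bookkeeping: the driver fixes the maximum
# number of snapshots; users can delete snapshots to free memory but
# cannot raise the limit.
class RegionSnapshots:
    def __init__(self, max_snapshots):
        self.max_snapshots = max_snapshots  # chosen by the driver
        self.snapshots = {}                 # snapshot id -> bytes

    def take(self, snap_id, data):
        """Driver-initiated capture; dropped silently when slots are full."""
        if len(self.snapshots) >= self.max_snapshots:
            return False
        self.snapshots[snap_id] = data
        return True

    def delete(self, snap_id):              # devlink region del
        self.snapshots.pop(snap_id, None)

region = RegionSnapshots(max_snapshots=2)
assert region.take(1, b"\x00" * 64)
assert region.take(2, b"\x01" * 64)
assert not region.take(3, b"\x02" * 64)  # over the driver's limit
region.delete(1)                         # user frees a slot
assert region.take(3, b"\x02" * 64)
```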

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30 19:39             ` Alex Vesker
@ 2018-03-30 22:26               ` David Ahern
  2018-03-31  6:11                 ` Alex Vesker
  0 siblings, 1 reply; 28+ messages in thread
From: David Ahern @ 2018-03-30 22:26 UTC (permalink / raw)
  To: Alex Vesker, Andrew Lunn
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

On 3/30/18 1:39 PM, Alex Vesker wrote:
> 
> 
> On 3/30/2018 7:57 PM, David Ahern wrote:
>> On 3/30/18 8:34 AM, Andrew Lunn wrote:
>>>>> And it seems to want contiguous pages. How well does that work after
>>>>> the system has been running for a while and memory is fragmented?
>>>> The allocation can be changed, there is no read need for contiguous
>>>> pages.
>>>> It is important to note that we the amount of snapshots is limited
>>>> by the
>>>> driver
>>>> this can be based on the dump size or expected frequency of collection.
>>>> I also prefer not to pre-allocate this memory.
>>> The driver code also asks for a 1MB contiguous chunk of memory!  You
>>> really should think about this API, how can you avoid double memory
>>> allocations. And can kvmalloc be used. But then you get into the
>>> problem for DMA'ing the memory from the device...
>>>
>>> This API also does not scale. 1MB is actually quite small. I'm sure
>>> there is firmware running on CPUs with a lot more than 1MB of RAM.
>>> How well does with API work with 64MB? Say i wanted to snapshot my
>>> GPU? Or the MC/BMC?
>>>
>> That and the drivers control the number of snapshots. The user should be
>> able to control the number of snapshots, and an option to remove all
>> snapshots to free up that memory.
> 
> There is an option to free up this memory, using a delete command.
> The reason I added the option to control the number of snapshots from
> the driver side only is because the driver knows the size of the snapshots
> and when/why they will be taken.
> For example in our mlx4 driver the snapshots are taken on rare failures,
> the snapshot is quite large and from past analyses the first dump is
> usually
> the important one, this means that 8 is more than enough in my case.
> If a user wants more than that he can always monitor notification read
> the snapshot and delete once backup-ed, there is no reason for keeping
> all of this data in the kernel.
> 
> 

I was thinking less, i.e., a user says keep only 1 or 2 snapshots, or
disable snapshots altogether.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-30 22:26               ` David Ahern
@ 2018-03-31  6:11                 ` Alex Vesker
  2018-03-31 15:53                   ` Andrew Lunn
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Vesker @ 2018-03-31  6:11 UTC (permalink / raw)
  To: David Ahern, Andrew Lunn
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko



On 3/31/2018 1:26 AM, David Ahern wrote:
> On 3/30/18 1:39 PM, Alex Vesker wrote:
>>
>> On 3/30/2018 7:57 PM, David Ahern wrote:
>>> On 3/30/18 8:34 AM, Andrew Lunn wrote:
>>>>>> And it seems to want contiguous pages. How well does that work after
>>>>>> the system has been running for a while and memory is fragmented?
>>>>> The allocation can be changed, there is no read need for contiguous
>>>>> pages.
>>>>> It is important to note that we the amount of snapshots is limited
>>>>> by the
>>>>> driver
>>>>> this can be based on the dump size or expected frequency of collection.
>>>>> I also prefer not to pre-allocate this memory.
>>>> The driver code also asks for a 1MB contiguous chunk of memory!  You
>>>> really should think about this API, how can you avoid double memory
>>>> allocations. And can kvmalloc be used. But then you get into the
>>>> problem for DMA'ing the memory from the device...
>>>>
>>>> This API also does not scale. 1MB is actually quite small. I'm sure
>>>> there is firmware running on CPUs with a lot more than 1MB of RAM.
>>>> How well does with API work with 64MB? Say i wanted to snapshot my
>>>> GPU? Or the MC/BMC?
>>>>
>>> That and the drivers control the number of snapshots. The user should be
>>> able to control the number of snapshots, and an option to remove all
>>> snapshots to free up that memory.
>> There is an option to free up this memory, using a delete command.
>> The reason I added the option to control the number of snapshots from
>> the driver side only is because the driver knows the size of the snapshots
>> and when/why they will be taken.
>> For example in our mlx4 driver the snapshots are taken on rare failures,
>> the snapshot is quite large and from past analyses the first dump is
>> usually
>> the important one, this means that 8 is more than enough in my case.
>> If a user wants more than that he can always monitor notification read
>> the snapshot and delete once backup-ed, there is no reason for keeping
>> all of this data in the kernel.
>>
>>
> I was thinking less. ie., a user says keep only 1 or 2 snapshots or
> disable snapshots altogether.
Devlink configuration is not persistent if the driver is reloaded; currently
there is no way to sync this. One or two might not leave enough time to
read, delete and make room for the next one; as I said, each driver should
do its calculations here based on frequency, size and even the time it
takes to capture a snapshot. The user can't know if one snapshot is enough
for debug; I saw cases in which debug requires more than one snapshot to
make sure a health clock is incremented and the FW is alive.

I want to be able to log in at a customer site and access this snapshot
without any previous configuration from the user, without asking them to
enable the feature and then waiting for a repro... this will help with
debugging issues that are hard to reproduce; I don't see any reason
to disable this.
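The health-clock check mentioned above — comparing a firmware counter across two snapshots to confirm the FW is alive — can be sketched as follows. The counter offset and width are made-up examples, not mlx4's actual fw-health layout:

```python
def fw_alive(snap_a, snap_b, clock_offset=0):
    """Compare a 32-bit big-endian health counter between two snapshots
    taken some time apart. The offset is hypothetical, for illustration."""
    a = int.from_bytes(snap_a[clock_offset:clock_offset + 4], "big")
    b = int.from_bytes(snap_b[clock_offset:clock_offset + 4], "big")
    return b > a  # a stuck counter suggests dead firmware

# Counter advanced between snapshots: firmware looks alive.
assert fw_alive(b"\x00\x00\x00\x05" + b"\x00" * 12,
                b"\x00\x00\x00\x09" + b"\x00" * 12)
# Counter stuck: firmware may be dead.
assert not fw_alive(b"\x00\x00\x00\x05" + b"\x00" * 12,
                    b"\x00\x00\x00\x05" + b"\x00" * 12)
```

This is why a single snapshot is sometimes not enough for this class of debug: liveness is a property of two captures, not one.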

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-31  6:11                 ` Alex Vesker
@ 2018-03-31 15:53                   ` Andrew Lunn
  2018-03-31 17:21                     ` David Ahern
  0 siblings, 1 reply; 28+ messages in thread
From: Andrew Lunn @ 2018-03-31 15:53 UTC (permalink / raw)
  To: Alex Vesker
  Cc: David Ahern, David S. Miller, netdev, Tariq Toukan, Jiri Pirko

> I want to be able to login to a customer and accessing this snapshot
> without any previous configuration from the user and not asking for
> enabling the feature and then waiting for a repro...this will help
> debugging issues that are hard to reproduce, I don't see any reason
> to disable this.

The likely reality is 99.9% of these snapshots will never be seen or
used. But they take up memory sitting there doing nothing. And if the
snapshot is 2GB, that is a lot of memory. I expect a system admin
wants to be able to choose to enable this feature or not, because of
that memory. You should also consider implementing the memory pressure
callbacks, so you can discard snapshots, rather than OOM the machine.

	   Andrew

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-31 15:53                   ` Andrew Lunn
@ 2018-03-31 17:21                     ` David Ahern
  2018-04-04 11:07                       ` Alex Vesker
  0 siblings, 1 reply; 28+ messages in thread
From: David Ahern @ 2018-03-31 17:21 UTC (permalink / raw)
  To: Andrew Lunn, Alex Vesker
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko

On 3/31/18 9:53 AM, Andrew Lunn wrote:
>> I want to be able to login to a customer and accessing this snapshot
>> without any previous configuration from the user and not asking for
>> enabling the feature and then waiting for a repro...this will help
>> debugging issues that are hard to reproduce, I don't see any reason
>> to disable this.
> 
> The likely reality is 99.9% of these snapshots will never be seen or
> used. But they take up memory sitting there doing nothing. And if the
> snapshot is 2GB, that is a lot of memory. I expect a system admin
> wants to be able to choose to enable this feature or not, because of
> that memory. You should also consider implementing the memory pressure
> callbacks, so you can discard snapshots, rather than OOM the machine.
> 

That is exactly my point. Nobody wants one rogue device triggering
snapshots, consuming system resources and with no options to disable it.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH net-next 0/9] devlink: Add support for region access
  2018-03-31 17:21                     ` David Ahern
@ 2018-04-04 11:07                       ` Alex Vesker
  0 siblings, 0 replies; 28+ messages in thread
From: Alex Vesker @ 2018-04-04 11:07 UTC (permalink / raw)
  To: David Ahern, Andrew Lunn
  Cc: David S. Miller, netdev, Tariq Toukan, Jiri Pirko



On 3/31/2018 8:21 PM, David Ahern wrote:
> On 3/31/18 9:53 AM, Andrew Lunn wrote:
>>> I want to be able to login to a customer and accessing this snapshot
>>> without any previous configuration from the user and not asking for
>>> enabling the feature and then waiting for a repro...this will help
>>> debugging issues that are hard to reproduce, I don't see any reason
>>> to disable this.
>> The likely reality is 99.9% of these snapshots will never be seen or
>> used. But they take up memory sitting there doing nothing. And if the
>> snapshot is 2GB, that is a lot of memory. I expect a system admin
>> wants to be able to choose to enable this feature or not, because of
>> that memory. You should also consider implementing the memory pressure
>> callbacks, so you can discard snapshots, rather than OOM the machine.
>>
> That is exactly my point. Nobody wants one rogue device triggering
> snapshots, consuming system resources and with no options to disable it.


OK, currently there is a task to add persistent/permanent configuration
support to devlink. Once this support is in I will add my code on top of it.
This will allow a user to enable the snapshot functionality on the driver.
Regarding the double contiguous memory allocation, I will change this to a
single vmalloc in the driver when adding a snapshot. Tell me what you think.

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2018-04-04 11:07 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-29 16:07 [PATCH net-next 0/9] devlink: Add support for region access Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 1/9] devlink: Add support for creating and destroying regions Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 2/9] devlink: Add callback to query for snapshot id before snapshot create Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 3/9] devlink: Add support for creating region snapshots Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 4/9] devlink: Add support for region get command Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 5/9] devlink: Extend the support querying for region snapshot IDs Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 6/9] devlink: Add support for region snapshot delete command Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 7/9] devlink: Add support for region snapshot read command Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 8/9] net/mlx4_core: Add health buffer address capability Alex Vesker
2018-03-29 16:07 ` [PATCH net-next 9/9] net/mlx4_core: Add Crdump FW snapshot support Alex Vesker
2018-03-29 17:13 ` [PATCH net-next 0/9] devlink: Add support for region access Andrew Lunn
2018-03-29 18:59   ` Alex Vesker
2018-03-29 19:51     ` Andrew Lunn
2018-03-30  5:28       ` Alex Vesker
2018-03-30 14:34         ` Andrew Lunn
2018-03-30 16:57           ` David Ahern
2018-03-30 19:39             ` Alex Vesker
2018-03-30 22:26               ` David Ahern
2018-03-31  6:11                 ` Alex Vesker
2018-03-31 15:53                   ` Andrew Lunn
2018-03-31 17:21                     ` David Ahern
2018-04-04 11:07                       ` Alex Vesker
2018-03-30 18:07         ` David Miller
2018-03-30 10:21       ` Jiri Pirko
2018-03-30 18:07       ` David Miller
2018-03-29 18:23 ` Andrew Lunn
2018-03-30  9:51   ` Rahul Lakkireddy
2018-03-30 10:24     ` Jiri Pirko
