linux-remoteproc.vger.kernel.org archive mirror
* [PATCH v7 0/4] Extend coredump functionality
@ 2020-07-09 20:31 Rishabh Bhatnagar
  2020-07-09 20:31 ` [PATCH v7 1/4] remoteproc: Move coredump functionality to a new file Rishabh Bhatnagar
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Rishabh Bhatnagar @ 2020-07-09 20:31 UTC (permalink / raw)
  To: linux-remoteproc, linux-kernel
  Cc: bjorn.andersson, mathieu.poirier, tsoni, psodagud, sidgup,
	Rishabh Bhatnagar

This patch series moves the coredump functionality to a separate
file and adds an "inline" coredump feature. An inline coredump copies
segments directly from device memory to userspace while the dump is
read, which avoids extra memory usage at the cost of speed. Recovery
is stalled until all data has been read by userspace.
This patchset also includes Sibi Sankar's patch to handle chunk sizes
smaller than the segment size, which makes inline coredump work for
the modem:
https://patchwork.kernel.org/patch/11637157/

Changelog:

v6 -> v7:
- Include Sibi's patch as part of this patchset
- Add a linefeed when displaying the coredump configuration in debugfs
- Fix a typo in remoteproc.h

v5 -> v6:
- Fix an unsigned comparison with a negative value, found with gcc-9.3.0

v4 -> v5:
- Rebase on top of linux-next
- Modify the qcom_q6v5_mss driver as a result of rebasing on the latest tip

v3 -> v4:
- Write a helper function to copy segment memory for every dump format
- Change the segment dump function to take an offset and size, and
  convert the mss driver accordingly

v2 -> v3:
- Move the entire coredump functionality to remoteproc_coredump.c
- Modify rproc_coredump to perform the dump according to the
  configuration set by userspace
- Move the userspace configuration from sysfs to debugfs
- Keep the default coredump implementation as is

v1 -> v2:
- Introduce a new file for coredump
- Add a userspace sysfs configuration for the dump type


Rishabh Bhatnagar (3):
  remoteproc: Move coredump functionality to a new file
  remoteproc: Add inline coredump functionality
  remoteproc: Add coredump debugfs entry

Sibi Sankar (1):
  remoteproc: qcom_q6v5_mss: Replace mask based tracking with size

 drivers/remoteproc/Makefile              |   1 +
 drivers/remoteproc/qcom_q6v5_mss.c       |  22 +--
 drivers/remoteproc/remoteproc_core.c     | 191 ------------------
 drivers/remoteproc/remoteproc_coredump.c | 326 +++++++++++++++++++++++++++++++
 drivers/remoteproc/remoteproc_debugfs.c  |  90 +++++++++
 drivers/remoteproc/remoteproc_internal.h |   4 +
 include/linux/remoteproc.h               |  21 +-
 7 files changed, 451 insertions(+), 204 deletions(-)
 create mode 100644 drivers/remoteproc/remoteproc_coredump.c

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* [PATCH v7 1/4] remoteproc: Move coredump functionality to a new file
  2020-07-09 20:31 [PATCH v7 0/4] Extend coredump functionality Rishabh Bhatnagar
@ 2020-07-09 20:31 ` Rishabh Bhatnagar
  2020-07-09 20:31 ` [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size Rishabh Bhatnagar
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 15+ messages in thread
From: Rishabh Bhatnagar @ 2020-07-09 20:31 UTC (permalink / raw)
  To: linux-remoteproc, linux-kernel
  Cc: bjorn.andersson, mathieu.poirier, tsoni, psodagud, sidgup,
	Rishabh Bhatnagar

Move all coredump functionality to a separate file. This is
done so that the current functionality can be extended in
future patchsets.

Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
Tested-by: Sibi Sankar <sibis@codeaurora.org>
---
 drivers/remoteproc/Makefile              |   1 +
 drivers/remoteproc/remoteproc_core.c     | 191 -----------------------------
 drivers/remoteproc/remoteproc_coredump.c | 204 +++++++++++++++++++++++++++++++
 drivers/remoteproc/remoteproc_internal.h |   4 +
 4 files changed, 209 insertions(+), 191 deletions(-)
 create mode 100644 drivers/remoteproc/remoteproc_coredump.c

diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
index e8b886e..8702a4e 100644
--- a/drivers/remoteproc/Makefile
+++ b/drivers/remoteproc/Makefile
@@ -5,6 +5,7 @@
 
 obj-$(CONFIG_REMOTEPROC)		+= remoteproc.o
 remoteproc-y				:= remoteproc_core.o
+remoteproc-y				+= remoteproc_coredump.o
 remoteproc-y				+= remoteproc_debugfs.o
 remoteproc-y				+= remoteproc_sysfs.o
 remoteproc-y				+= remoteproc_virtio.o
diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 9f04c30..57db042 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -26,7 +26,6 @@
 #include <linux/firmware.h>
 #include <linux/string.h>
 #include <linux/debugfs.h>
-#include <linux/devcoredump.h>
 #include <linux/rculist.h>
 #include <linux/remoteproc.h>
 #include <linux/pm_runtime.h>
@@ -41,7 +40,6 @@
 #include <linux/platform_device.h>
 
 #include "remoteproc_internal.h"
-#include "remoteproc_elf_helpers.h"
 
 #define HIGH_BITS_MASK 0xFFFFFFFF00000000ULL
 
@@ -1239,19 +1237,6 @@ static int rproc_alloc_registered_carveouts(struct rproc *rproc)
 	return 0;
 }
 
-/**
- * rproc_coredump_cleanup() - clean up dump_segments list
- * @rproc: the remote processor handle
- */
-static void rproc_coredump_cleanup(struct rproc *rproc)
-{
-	struct rproc_dump_segment *entry, *tmp;
-
-	list_for_each_entry_safe(entry, tmp, &rproc->dump_segments, node) {
-		list_del(&entry->node);
-		kfree(entry);
-	}
-}
 
 /**
  * rproc_resource_cleanup() - clean up and free all acquired resources
@@ -1518,182 +1503,6 @@ static int rproc_stop(struct rproc *rproc, bool crashed)
 	return 0;
 }
 
-/**
- * rproc_coredump_add_segment() - add segment of device memory to coredump
- * @rproc:	handle of a remote processor
- * @da:		device address
- * @size:	size of segment
- *
- * Add device memory to the list of segments to be included in a coredump for
- * the remoteproc.
- *
- * Return: 0 on success, negative errno on error.
- */
-int rproc_coredump_add_segment(struct rproc *rproc, dma_addr_t da, size_t size)
-{
-	struct rproc_dump_segment *segment;
-
-	segment = kzalloc(sizeof(*segment), GFP_KERNEL);
-	if (!segment)
-		return -ENOMEM;
-
-	segment->da = da;
-	segment->size = size;
-
-	list_add_tail(&segment->node, &rproc->dump_segments);
-
-	return 0;
-}
-EXPORT_SYMBOL(rproc_coredump_add_segment);
-
-/**
- * rproc_coredump_add_custom_segment() - add custom coredump segment
- * @rproc:	handle of a remote processor
- * @da:		device address
- * @size:	size of segment
- * @dumpfn:	custom dump function called for each segment during coredump
- * @priv:	private data
- *
- * Add device memory to the list of segments to be included in the coredump
- * and associate the segment with the given custom dump function and private
- * data.
- *
- * Return: 0 on success, negative errno on error.
- */
-int rproc_coredump_add_custom_segment(struct rproc *rproc,
-				      dma_addr_t da, size_t size,
-				      void (*dumpfn)(struct rproc *rproc,
-						     struct rproc_dump_segment *segment,
-						     void *dest),
-				      void *priv)
-{
-	struct rproc_dump_segment *segment;
-
-	segment = kzalloc(sizeof(*segment), GFP_KERNEL);
-	if (!segment)
-		return -ENOMEM;
-
-	segment->da = da;
-	segment->size = size;
-	segment->priv = priv;
-	segment->dump = dumpfn;
-
-	list_add_tail(&segment->node, &rproc->dump_segments);
-
-	return 0;
-}
-EXPORT_SYMBOL(rproc_coredump_add_custom_segment);
-
-/**
- * rproc_coredump_set_elf_info() - set coredump elf information
- * @rproc:	handle of a remote processor
- * @class:	elf class for coredump elf file
- * @machine:	elf machine for coredump elf file
- *
- * Set elf information which will be used for coredump elf file.
- *
- * Return: 0 on success, negative errno on error.
- */
-int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine)
-{
-	if (class != ELFCLASS64 && class != ELFCLASS32)
-		return -EINVAL;
-
-	rproc->elf_class = class;
-	rproc->elf_machine = machine;
-
-	return 0;
-}
-EXPORT_SYMBOL(rproc_coredump_set_elf_info);
-
-/**
- * rproc_coredump() - perform coredump
- * @rproc:	rproc handle
- *
- * This function will generate an ELF header for the registered segments
- * and create a devcoredump device associated with rproc.
- */
-static void rproc_coredump(struct rproc *rproc)
-{
-	struct rproc_dump_segment *segment;
-	void *phdr;
-	void *ehdr;
-	size_t data_size;
-	size_t offset;
-	void *data;
-	void *ptr;
-	u8 class = rproc->elf_class;
-	int phnum = 0;
-
-	if (list_empty(&rproc->dump_segments))
-		return;
-
-	if (class == ELFCLASSNONE) {
-		dev_err(&rproc->dev, "Elf class is not set\n");
-		return;
-	}
-
-	data_size = elf_size_of_hdr(class);
-	list_for_each_entry(segment, &rproc->dump_segments, node) {
-		data_size += elf_size_of_phdr(class) + segment->size;
-
-		phnum++;
-	}
-
-	data = vmalloc(data_size);
-	if (!data)
-		return;
-
-	ehdr = data;
-
-	memset(ehdr, 0, elf_size_of_hdr(class));
-	/* e_ident field is common for both elf32 and elf64 */
-	elf_hdr_init_ident(ehdr, class);
-
-	elf_hdr_set_e_type(class, ehdr, ET_CORE);
-	elf_hdr_set_e_machine(class, ehdr, rproc->elf_machine);
-	elf_hdr_set_e_version(class, ehdr, EV_CURRENT);
-	elf_hdr_set_e_entry(class, ehdr, rproc->bootaddr);
-	elf_hdr_set_e_phoff(class, ehdr, elf_size_of_hdr(class));
-	elf_hdr_set_e_ehsize(class, ehdr, elf_size_of_hdr(class));
-	elf_hdr_set_e_phentsize(class, ehdr, elf_size_of_phdr(class));
-	elf_hdr_set_e_phnum(class, ehdr, phnum);
-
-	phdr = data + elf_hdr_get_e_phoff(class, ehdr);
-	offset = elf_hdr_get_e_phoff(class, ehdr);
-	offset += elf_size_of_phdr(class) * elf_hdr_get_e_phnum(class, ehdr);
-
-	list_for_each_entry(segment, &rproc->dump_segments, node) {
-		memset(phdr, 0, elf_size_of_phdr(class));
-		elf_phdr_set_p_type(class, phdr, PT_LOAD);
-		elf_phdr_set_p_offset(class, phdr, offset);
-		elf_phdr_set_p_vaddr(class, phdr, segment->da);
-		elf_phdr_set_p_paddr(class, phdr, segment->da);
-		elf_phdr_set_p_filesz(class, phdr, segment->size);
-		elf_phdr_set_p_memsz(class, phdr, segment->size);
-		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
-		elf_phdr_set_p_align(class, phdr, 0);
-
-		if (segment->dump) {
-			segment->dump(rproc, segment, data + offset);
-		} else {
-			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
-			if (!ptr) {
-				dev_err(&rproc->dev,
-					"invalid coredump segment (%pad, %zu)\n",
-					&segment->da, segment->size);
-				memset(data + offset, 0xff, segment->size);
-			} else {
-				memcpy(data + offset, ptr, segment->size);
-			}
-		}
-
-		offset += elf_phdr_get_p_filesz(class, phdr);
-		phdr += elf_size_of_phdr(class);
-	}
-
-	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
-}
 
 /**
  * rproc_trigger_recovery() - recover a remoteproc
diff --git a/drivers/remoteproc/remoteproc_coredump.c b/drivers/remoteproc/remoteproc_coredump.c
new file mode 100644
index 0000000..ded0244
--- /dev/null
+++ b/drivers/remoteproc/remoteproc_coredump.c
@@ -0,0 +1,204 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Coredump functionality for Remoteproc framework.
+ *
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/devcoredump.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/remoteproc.h>
+#include "remoteproc_internal.h"
+#include "remoteproc_elf_helpers.h"
+
+/**
+ * rproc_coredump_cleanup() - clean up dump_segments list
+ * @rproc: the remote processor handle
+ */
+void rproc_coredump_cleanup(struct rproc *rproc)
+{
+	struct rproc_dump_segment *entry, *tmp;
+
+	list_for_each_entry_safe(entry, tmp, &rproc->dump_segments, node) {
+		list_del(&entry->node);
+		kfree(entry);
+	}
+}
+
+/**
+ * rproc_coredump_add_segment() - add segment of device memory to coredump
+ * @rproc:	handle of a remote processor
+ * @da:		device address
+ * @size:	size of segment
+ *
+ * Add device memory to the list of segments to be included in a coredump for
+ * the remoteproc.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int rproc_coredump_add_segment(struct rproc *rproc, dma_addr_t da, size_t size)
+{
+	struct rproc_dump_segment *segment;
+
+	segment = kzalloc(sizeof(*segment), GFP_KERNEL);
+	if (!segment)
+		return -ENOMEM;
+
+	segment->da = da;
+	segment->size = size;
+
+	list_add_tail(&segment->node, &rproc->dump_segments);
+
+	return 0;
+}
+EXPORT_SYMBOL(rproc_coredump_add_segment);
+
+/**
+ * rproc_coredump_add_custom_segment() - add custom coredump segment
+ * @rproc:	handle of a remote processor
+ * @da:		device address
+ * @size:	size of segment
+ * @dumpfn:	custom dump function called for each segment during coredump
+ * @priv:	private data
+ *
+ * Add device memory to the list of segments to be included in the coredump
+ * and associate the segment with the given custom dump function and private
+ * data.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int rproc_coredump_add_custom_segment(struct rproc *rproc,
+				      dma_addr_t da, size_t size,
+				      void (*dumpfn)(struct rproc *rproc,
+						     struct rproc_dump_segment *segment,
+						     void *dest),
+				      void *priv)
+{
+	struct rproc_dump_segment *segment;
+
+	segment = kzalloc(sizeof(*segment), GFP_KERNEL);
+	if (!segment)
+		return -ENOMEM;
+
+	segment->da = da;
+	segment->size = size;
+	segment->priv = priv;
+	segment->dump = dumpfn;
+
+	list_add_tail(&segment->node, &rproc->dump_segments);
+
+	return 0;
+}
+EXPORT_SYMBOL(rproc_coredump_add_custom_segment);
+
+/**
+ * rproc_coredump_set_elf_info() - set coredump elf information
+ * @rproc:	handle of a remote processor
+ * @class:	elf class for coredump elf file
+ * @machine:	elf machine for coredump elf file
+ *
+ * Set elf information which will be used for coredump elf file.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine)
+{
+	if (class != ELFCLASS64 && class != ELFCLASS32)
+		return -EINVAL;
+
+	rproc->elf_class = class;
+	rproc->elf_machine = machine;
+
+	return 0;
+}
+EXPORT_SYMBOL(rproc_coredump_set_elf_info);
+
+/**
+ * rproc_coredump() - perform coredump
+ * @rproc:	rproc handle
+ *
+ * This function will generate an ELF header for the registered segments
+ * and create a devcoredump device associated with rproc.
+ */
+void rproc_coredump(struct rproc *rproc)
+{
+	struct rproc_dump_segment *segment;
+	void *phdr;
+	void *ehdr;
+	size_t data_size;
+	size_t offset;
+	void *data;
+	void *ptr;
+	u8 class = rproc->elf_class;
+	int phnum = 0;
+
+	if (list_empty(&rproc->dump_segments))
+		return;
+
+	if (class == ELFCLASSNONE) {
+		dev_err(&rproc->dev, "Elf class is not set\n");
+		return;
+	}
+
+	data_size = elf_size_of_hdr(class);
+	list_for_each_entry(segment, &rproc->dump_segments, node) {
+		data_size += elf_size_of_phdr(class) + segment->size;
+
+		phnum++;
+	}
+
+	data = vmalloc(data_size);
+	if (!data)
+		return;
+
+	ehdr = data;
+
+	memset(ehdr, 0, elf_size_of_hdr(class));
+	/* e_ident field is common for both elf32 and elf64 */
+	elf_hdr_init_ident(ehdr, class);
+
+	elf_hdr_set_e_type(class, ehdr, ET_CORE);
+	elf_hdr_set_e_machine(class, ehdr, rproc->elf_machine);
+	elf_hdr_set_e_version(class, ehdr, EV_CURRENT);
+	elf_hdr_set_e_entry(class, ehdr, rproc->bootaddr);
+	elf_hdr_set_e_phoff(class, ehdr, elf_size_of_hdr(class));
+	elf_hdr_set_e_ehsize(class, ehdr, elf_size_of_hdr(class));
+	elf_hdr_set_e_phentsize(class, ehdr, elf_size_of_phdr(class));
+	elf_hdr_set_e_phnum(class, ehdr, phnum);
+
+	phdr = data + elf_hdr_get_e_phoff(class, ehdr);
+	offset = elf_hdr_get_e_phoff(class, ehdr);
+	offset += elf_size_of_phdr(class) * elf_hdr_get_e_phnum(class, ehdr);
+
+	list_for_each_entry(segment, &rproc->dump_segments, node) {
+		memset(phdr, 0, elf_size_of_phdr(class));
+		elf_phdr_set_p_type(class, phdr, PT_LOAD);
+		elf_phdr_set_p_offset(class, phdr, offset);
+		elf_phdr_set_p_vaddr(class, phdr, segment->da);
+		elf_phdr_set_p_paddr(class, phdr, segment->da);
+		elf_phdr_set_p_filesz(class, phdr, segment->size);
+		elf_phdr_set_p_memsz(class, phdr, segment->size);
+		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
+		elf_phdr_set_p_align(class, phdr, 0);
+
+		if (segment->dump) {
+			segment->dump(rproc, segment, data + offset);
+		} else {
+			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
+			if (!ptr) {
+				dev_err(&rproc->dev,
+					"invalid coredump segment (%pad, %zu)\n",
+					&segment->da, segment->size);
+				memset(data + offset, 0xff, segment->size);
+			} else {
+				memcpy(data + offset, ptr, segment->size);
+			}
+		}
+
+		offset += elf_phdr_get_p_filesz(class, phdr);
+		phdr += elf_size_of_phdr(class);
+	}
+
+	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
+}
diff --git a/drivers/remoteproc/remoteproc_internal.h b/drivers/remoteproc/remoteproc_internal.h
index 4ba7cb5..97d441b 100644
--- a/drivers/remoteproc/remoteproc_internal.h
+++ b/drivers/remoteproc/remoteproc_internal.h
@@ -47,6 +47,10 @@ extern struct class rproc_class;
 int rproc_init_sysfs(void);
 void rproc_exit_sysfs(void);
 
+/* from remoteproc_coredump.c */
+void rproc_coredump_cleanup(struct rproc *rproc);
+void rproc_coredump(struct rproc *rproc);
+
 void rproc_free_vring(struct rproc_vring *rvring);
 int rproc_alloc_vring(struct rproc_vdev *rvdev, int i);
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size
  2020-07-09 20:31 [PATCH v7 0/4] Extend coredump functionality Rishabh Bhatnagar
  2020-07-09 20:31 ` [PATCH v7 1/4] remoteproc: Move coredump functionality to a new file Rishabh Bhatnagar
@ 2020-07-09 20:31 ` Rishabh Bhatnagar
  2020-07-14 17:18   ` Mathieu Poirier
  2020-07-20 16:57   ` Mathieu Poirier
  2020-07-09 20:31 ` [PATCH v7 3/4] remoteproc: Add inline coredump functionality Rishabh Bhatnagar
  2020-07-09 20:31 ` [PATCH v7 4/4] remoteproc: Add coredump debugfs entry Rishabh Bhatnagar
  3 siblings, 2 replies; 15+ messages in thread
From: Rishabh Bhatnagar @ 2020-07-09 20:31 UTC (permalink / raw)
  To: linux-remoteproc, linux-kernel
  Cc: bjorn.andersson, mathieu.poirier, tsoni, psodagud, sidgup,
	Sibi Sankar, Rishabh Bhatnagar

From: Sibi Sankar <sibis@codeaurora.org>

In order to land inline coredump support for mss, the dump_segment
function would need to support granularities less than the segment
size. This is achieved by replacing mask based tracking with size.

Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
---
 drivers/remoteproc/qcom_q6v5_mss.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index feb70283b..c6ce032 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -181,8 +181,8 @@ struct q6v5 {
 	bool running;
 
 	bool dump_mba_loaded;
-	unsigned long dump_segment_mask;
-	unsigned long dump_complete_mask;
+	size_t current_dump_size;
+	size_t total_dump_size;
 
 	phys_addr_t mba_phys;
 	void *mba_region;
@@ -1203,7 +1203,6 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
 {
 	int ret = 0;
 	struct q6v5 *qproc = rproc->priv;
-	unsigned long mask = BIT((unsigned long)segment->priv);
 	int offset = segment->da - qproc->mpss_reloc;
 	void *ptr = NULL;
 
@@ -1229,10 +1228,10 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
 		memset(dest, 0xff, segment->size);
 	}
 
-	qproc->dump_segment_mask |= mask;
+	qproc->current_dump_size += segment->size;
 
 	/* Reclaim mba after copying segments */
-	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
+	if (qproc->current_dump_size == qproc->total_dump_size) {
 		if (qproc->dump_mba_loaded) {
 			/* Try to reset ownership back to Q6 */
 			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
@@ -1274,7 +1273,7 @@ static int q6v5_start(struct rproc *rproc)
 			"Failed to reclaim mba buffer system may become unstable\n");
 
 	/* Reset Dump Segment Mask */
-	qproc->dump_segment_mask = 0;
+	qproc->current_dump_size = 0;
 	qproc->running = true;
 
 	return 0;
@@ -1323,7 +1322,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
 
 	ehdr = (struct elf32_hdr *)fw->data;
 	phdrs = (struct elf32_phdr *)(ehdr + 1);
-	qproc->dump_complete_mask = 0;
+	qproc->total_dump_size = 0;
 
 	for (i = 0; i < ehdr->e_phnum; i++) {
 		phdr = &phdrs[i];
@@ -1338,7 +1337,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
 		if (ret)
 			break;
 
-		qproc->dump_complete_mask |= BIT(i);
+		qproc->total_dump_size += phdr->p_memsz;
 	}
 
 	release_firmware(fw);
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* [PATCH v7 3/4] remoteproc: Add inline coredump functionality
  2020-07-09 20:31 [PATCH v7 0/4] Extend coredump functionality Rishabh Bhatnagar
  2020-07-09 20:31 ` [PATCH v7 1/4] remoteproc: Move coredump functionality to a new file Rishabh Bhatnagar
  2020-07-09 20:31 ` [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size Rishabh Bhatnagar
@ 2020-07-09 20:31 ` Rishabh Bhatnagar
  2020-07-14 17:53   ` Mathieu Poirier
  2020-07-16  8:25   ` Sibi Sankar
  2020-07-09 20:31 ` [PATCH v7 4/4] remoteproc: Add coredump debugfs entry Rishabh Bhatnagar
  3 siblings, 2 replies; 15+ messages in thread
From: Rishabh Bhatnagar @ 2020-07-09 20:31 UTC (permalink / raw)
  To: linux-remoteproc, linux-kernel
  Cc: bjorn.andersson, mathieu.poirier, tsoni, psodagud, sidgup,
	Rishabh Bhatnagar

The current coredump implementation uses vmalloc area to copy
all the segments. But this might put strain on low memory targets
as the firmware size sometimes is in tens of MBs. The situation
becomes worse if there are multiple remote processors undergoing
recovery at the same time. This patch adds inline coredump
functionality that avoids extra memory usage. This requires
recovery to be halted until data is read by userspace and free
function is called. Also modify the qcom_q6v5_mss driver to include
size and offset in the segment dump function.
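
With the inline mechanism selected, the resulting userspace flow looks
roughly like this (a sketch; the devcoredump instance number and paths
vary by system):

  # After a crash the dump appears as a devcoredump instance. Recovery
  # stalls until the dump data is read and freed; the devcoredump
  # framework also frees it on its own after 5 minutes.
  cat /sys/class/devcoredump/devcd0/data > rproc-core.elf
  echo 1 > /sys/class/devcoredump/devcd0/data   # free the dump, unblocking recovery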

Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
---
 drivers/remoteproc/qcom_q6v5_mss.c       |  11 ++-
 drivers/remoteproc/remoteproc_coredump.c | 160 +++++++++++++++++++++++++++----
 include/linux/remoteproc.h               |  21 +++-
 3 files changed, 166 insertions(+), 26 deletions(-)

diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index c6ce032..79df354 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -1199,11 +1199,12 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
 
 static void qcom_q6v5_dump_segment(struct rproc *rproc,
 				   struct rproc_dump_segment *segment,
-				   void *dest)
+				   void *dest, size_t cp_offset, size_t size)
 {
 	int ret = 0;
 	struct q6v5 *qproc = rproc->priv;
 	int offset = segment->da - qproc->mpss_reloc;
+	size_t cp_size = size ? size : segment->size;
 	void *ptr = NULL;
 
 	/* Unlock mba before copying segments */
@@ -1219,16 +1220,16 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
 	}
 
 	if (!ret)
-		ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
+		ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, cp_size);
 
 	if (ptr) {
-		memcpy(dest, ptr, segment->size);
+		memcpy(dest, ptr, cp_size);
 		iounmap(ptr);
 	} else {
-		memset(dest, 0xff, segment->size);
+		memset(dest, 0xff, cp_size);
 	}
 
-	qproc->current_dump_size += segment->size;
+	qproc->current_dump_size += cp_size;
 
 	/* Reclaim mba after copying segments */
 	if (qproc->current_dump_size == qproc->total_dump_size) {
diff --git a/drivers/remoteproc/remoteproc_coredump.c b/drivers/remoteproc/remoteproc_coredump.c
index ded0244..646886f 100644
--- a/drivers/remoteproc/remoteproc_coredump.c
+++ b/drivers/remoteproc/remoteproc_coredump.c
@@ -5,6 +5,7 @@
  * Copyright (c) 2020, The Linux Foundation. All rights reserved.
  */
 
+#include <linux/completion.h>
 #include <linux/devcoredump.h>
 #include <linux/device.h>
 #include <linux/kernel.h>
@@ -12,6 +13,12 @@
 #include "remoteproc_internal.h"
 #include "remoteproc_elf_helpers.h"
 
+struct rproc_coredump_state {
+	struct rproc *rproc;
+	void *header;
+	struct completion dump_done;
+};
+
 /**
  * rproc_coredump_cleanup() - clean up dump_segments list
  * @rproc: the remote processor handle
@@ -72,7 +79,8 @@ int rproc_coredump_add_custom_segment(struct rproc *rproc,
 				      dma_addr_t da, size_t size,
 				      void (*dumpfn)(struct rproc *rproc,
 						     struct rproc_dump_segment *segment,
-						     void *dest),
+						     void *dest, size_t offset,
+						     size_t size),
 				      void *priv)
 {
 	struct rproc_dump_segment *segment;
@@ -114,12 +122,110 @@ int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine)
 }
 EXPORT_SYMBOL(rproc_coredump_set_elf_info);
 
+static void rproc_coredump_free(void *data)
+{
+	struct rproc_coredump_state *dump_state = data;
+
+	complete(&dump_state->dump_done);
+	vfree(dump_state->header);
+}
+
+static void *rproc_coredump_find_segment(loff_t user_offset,
+					 struct list_head *segments,
+					 size_t *data_left)
+{
+	struct rproc_dump_segment *segment;
+
+	list_for_each_entry(segment, segments, node) {
+		if (user_offset < segment->size) {
+			*data_left = segment->size - user_offset;
+			return segment;
+		}
+		user_offset -= segment->size;
+	}
+
+	*data_left = 0;
+	return NULL;
+}
+
+static void rproc_copy_segment(struct rproc *rproc, void *dest,
+			       struct rproc_dump_segment *segment,
+			       size_t offset, size_t size)
+{
+	void *ptr;
+
+	if (segment->dump) {
+		segment->dump(rproc, segment, dest, offset, size);
+	} else {
+		ptr = rproc_da_to_va(rproc, segment->da + offset, size);
+		if (!ptr) {
+			dev_err(&rproc->dev,
+				"invalid copy request for segment %pad with offset %zu and size %zu)\n",
+				&segment->da, offset, size);
+			memset(dest, 0xff, size);
+		} else {
+			memcpy(dest, ptr, size);
+		}
+	}
+}
+
+static ssize_t rproc_coredump_read(char *buffer, loff_t offset, size_t count,
+				   void *data, size_t header_sz)
+{
+	size_t seg_data, bytes_left = count;
+	ssize_t copy_sz;
+	struct rproc_dump_segment *seg;
+	struct rproc_coredump_state *dump_state = data;
+	struct rproc *rproc = dump_state->rproc;
+	void *elfcore = dump_state->header;
+
+	/* Copy the vmalloc'ed header first. */
+	if (offset < header_sz) {
+		copy_sz = memory_read_from_buffer(buffer, count, &offset,
+						  elfcore, header_sz);
+
+		return copy_sz;
+	}
+
+	/*
+	 * Find out the segment memory chunk to be copied based on offset.
+	 * Keep copying data until count bytes are read.
+	 */
+	while (bytes_left) {
+		seg = rproc_coredump_find_segment(offset - header_sz,
+						  &rproc->dump_segments,
+						  &seg_data);
+		/* EOF check */
+		if (!seg) {
+			dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
+				 offset);
+			break;
+		}
+
+		copy_sz = min_t(size_t, bytes_left, seg_data);
+
+		rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
+				   copy_sz);
+
+		offset += copy_sz;
+		buffer += copy_sz;
+		bytes_left -= copy_sz;
+	}
+
+	return count - bytes_left;
+}
+
 /**
  * rproc_coredump() - perform coredump
  * @rproc:	rproc handle
  *
  * This function will generate an ELF header for the registered segments
- * and create a devcoredump device associated with rproc.
+ * and create a devcoredump device associated with rproc. Based on the
+ * coredump configuration this function will directly copy the segments
+ * from device memory to userspace or copy segments from device memory to
+ * a separate buffer, which can then be read by userspace.
+ * The first approach avoids using extra vmalloc memory. But it will stall
+ * recovery flow until dump is read by userspace.
  */
 void rproc_coredump(struct rproc *rproc)
 {
@@ -129,11 +235,13 @@ void rproc_coredump(struct rproc *rproc)
 	size_t data_size;
 	size_t offset;
 	void *data;
-	void *ptr;
 	u8 class = rproc->elf_class;
 	int phnum = 0;
+	struct rproc_coredump_state dump_state;
+	enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
 
-	if (list_empty(&rproc->dump_segments))
+	if (list_empty(&rproc->dump_segments) ||
+	    dump_conf == RPROC_COREDUMP_DISABLED)
 		return;
 
 	if (class == ELFCLASSNONE) {
@@ -143,7 +251,14 @@ void rproc_coredump(struct rproc *rproc)
 
 	data_size = elf_size_of_hdr(class);
 	list_for_each_entry(segment, &rproc->dump_segments, node) {
-		data_size += elf_size_of_phdr(class) + segment->size;
+		/*
+		 * For default configuration buffer includes headers & segments.
+		 * For inline dump buffer just includes headers as segments are
+		 * directly read from device memory.
+		 */
+		data_size += elf_size_of_phdr(class);
+		if (dump_conf == RPROC_COREDUMP_DEFAULT)
+			data_size += segment->size;
 
 		phnum++;
 	}
@@ -182,23 +297,30 @@ void rproc_coredump(struct rproc *rproc)
 		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
 		elf_phdr_set_p_align(class, phdr, 0);
 
-		if (segment->dump) {
-			segment->dump(rproc, segment, data + offset);
-		} else {
-			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
-			if (!ptr) {
-				dev_err(&rproc->dev,
-					"invalid coredump segment (%pad, %zu)\n",
-					&segment->da, segment->size);
-				memset(data + offset, 0xff, segment->size);
-			} else {
-				memcpy(data + offset, ptr, segment->size);
-			}
-		}
+		if (dump_conf == RPROC_COREDUMP_DEFAULT)
+			rproc_copy_segment(rproc, data + offset, segment, 0,
+					   segment->size);
 
 		offset += elf_phdr_get_p_filesz(class, phdr);
 		phdr += elf_size_of_phdr(class);
 	}
 
-	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
+	if (dump_conf == RPROC_COREDUMP_DEFAULT) {
+		dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
+		return;
+	}
+
+	/* Initialize the dump state struct to be used by rproc_coredump_read */
+	dump_state.rproc = rproc;
+	dump_state.header = data;
+	init_completion(&dump_state.dump_done);
+
+	dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
+		      rproc_coredump_read, rproc_coredump_free);
+
+	/*
+	 * Wait until the dump is read and free is called. Data is freed
+	 * by devcoredump framework automatically after 5 minutes.
+	 */
+	wait_for_completion(&dump_state.dump_done);
 }
diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
index e7b7bab..38d037d 100644
--- a/include/linux/remoteproc.h
+++ b/include/linux/remoteproc.h
@@ -435,6 +435,20 @@ enum rproc_crash_type {
 };
 
 /**
+ * enum rproc_dump_mechanism - Coredump options for core
+ * @RPROC_COREDUMP_DEFAULT:	Copy dump to separate buffer and carry on with
+ *				recovery
+ * @RPROC_COREDUMP_INLINE:	Read segments directly from device memory. Stall
+ *				recovery until all segments are read
+ * @RPROC_COREDUMP_DISABLED:	Don't perform any dump
+ */
+enum rproc_dump_mechanism {
+	RPROC_COREDUMP_DEFAULT,
+	RPROC_COREDUMP_INLINE,
+	RPROC_COREDUMP_DISABLED,
+};
+
+/**
  * struct rproc_dump_segment - segment info from ELF header
  * @node:	list node related to the rproc segment list
  * @da:		device address of the segment
@@ -451,7 +465,7 @@ struct rproc_dump_segment {
 
 	void *priv;
 	void (*dump)(struct rproc *rproc, struct rproc_dump_segment *segment,
-		     void *dest);
+		     void *dest, size_t offset, size_t size);
 	loff_t offset;
 };
 
@@ -466,6 +480,7 @@ struct rproc_dump_segment {
  * @dev: virtual device for refcounting and common remoteproc behavior
  * @power: refcount of users who need this rproc powered up
  * @state: state of the device
+ * @dump_conf: Currently selected coredump configuration
  * @lock: lock which protects concurrent manipulations of the rproc
  * @dbg_dir: debugfs directory of this rproc device
  * @traces: list of trace buffers
@@ -499,6 +514,7 @@ struct rproc {
 	struct device dev;
 	atomic_t power;
 	unsigned int state;
+	enum rproc_dump_mechanism dump_conf;
 	struct mutex lock;
 	struct dentry *dbg_dir;
 	struct list_head traces;
@@ -630,7 +646,8 @@ int rproc_coredump_add_custom_segment(struct rproc *rproc,
 				      dma_addr_t da, size_t size,
 				      void (*dumpfn)(struct rproc *rproc,
 						     struct rproc_dump_segment *segment,
-						     void *dest),
+						     void *dest, size_t offset,
+						     size_t size),
 				      void *priv);
 int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine);
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* [PATCH v7 4/4] remoteproc: Add coredump debugfs entry
  2020-07-09 20:31 [PATCH v7 0/4] Extend coredump functionality Rishabh Bhatnagar
                   ` (2 preceding siblings ...)
  2020-07-09 20:31 ` [PATCH v7 3/4] remoteproc: Add inline coredump functionality Rishabh Bhatnagar
@ 2020-07-09 20:31 ` Rishabh Bhatnagar
  3 siblings, 0 replies; 15+ messages in thread
From: Rishabh Bhatnagar @ 2020-07-09 20:31 UTC (permalink / raw)
  To: linux-remoteproc, linux-kernel
  Cc: bjorn.andersson, mathieu.poirier, tsoni, psodagud, sidgup,
	Rishabh Bhatnagar

Add a coredump debugfs entry to configure the type of dump that will
be collected during recovery. Users can select between the default and
inline coredump mechanisms, or disable coredump collection entirely.
Each remote processor can be configured independently.
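
For example, assuming debugfs is mounted at /sys/kernel/debug and the
instance of interest is remoteproc0 (paths are illustrative):

  # select inline dumps for this remote processor
  echo inline > /sys/kernel/debug/remoteproc/remoteproc0/coredump
  # read back the currently selected mechanism
  cat /sys/kernel/debug/remoteproc/remoteproc0/coredump
  inline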

Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Tested-by: Sibi Sankar <sibis@codeaurora.org>
Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
---
 drivers/remoteproc/remoteproc_debugfs.c | 90 +++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/drivers/remoteproc/remoteproc_debugfs.c b/drivers/remoteproc/remoteproc_debugfs.c
index 732770e..16a4196 100644
--- a/drivers/remoteproc/remoteproc_debugfs.c
+++ b/drivers/remoteproc/remoteproc_debugfs.c
@@ -28,6 +28,94 @@
 static struct dentry *rproc_dbg;
 
 /*
+ * A coredump-configuration-to-string lookup table, for exposing a
+ * human readable configuration via debugfs. Always keep in sync with
+ * enum rproc_dump_mechanism
+ */
+static const char * const rproc_coredump_str[] = {
+	[RPROC_COREDUMP_DEFAULT]	= "default",
+	[RPROC_COREDUMP_INLINE]		= "inline",
+	[RPROC_COREDUMP_DISABLED]	= "disabled",
+};
+
+/* Expose the current coredump configuration via debugfs */
+static ssize_t rproc_coredump_read(struct file *filp, char __user *userbuf,
+				   size_t count, loff_t *ppos)
+{
+	struct rproc *rproc = filp->private_data;
+	char buf[20];
+	int len;
+
+	len = scnprintf(buf, sizeof(buf), "%s\n",
+			rproc_coredump_str[rproc->dump_conf]);
+
+	return simple_read_from_buffer(userbuf, count, ppos, buf, len);
+}
+
+/*
+ * By writing to the 'coredump' debugfs entry, we control the behavior of the
+ * coredump mechanism dynamically. The default value of this entry is "default".
+ *
+ * The 'coredump' debugfs entry supports these commands:
+ *
+ * default:	This is the default coredump mechanism. When the remoteproc
+ *		crashes the entire coredump will be copied to a separate buffer
+ *		and exposed to userspace.
+ *
+ * inline:	The coredump will not be copied to a separate buffer and the
+ *		recovery process will have to wait until data is read by
+ *		userspace. But this avoids usage of extra memory.
+ *
+ * disabled:	This will disable coredump. Recovery will proceed without
+ *		collecting any dump.
+ */
+static ssize_t rproc_coredump_write(struct file *filp,
+				    const char __user *user_buf, size_t count,
+				    loff_t *ppos)
+{
+	struct rproc *rproc = filp->private_data;
+	int ret, err = 0;
+	char buf[20];
+
+	if (count > sizeof(buf))
+		return -EINVAL;
+
+	ret = copy_from_user(buf, user_buf, count);
+	if (ret)
+		return -EFAULT;
+
+	/* remove end of line */
+	if (buf[count - 1] == '\n')
+		buf[count - 1] = '\0';
+
+	if (rproc->state == RPROC_CRASHED) {
+		dev_err(&rproc->dev, "can't change coredump configuration\n");
+		err = -EBUSY;
+		goto out;
+	}
+
+	if (!strncmp(buf, "disabled", count)) {
+		rproc->dump_conf = RPROC_COREDUMP_DISABLED;
+	} else if (!strncmp(buf, "inline", count)) {
+		rproc->dump_conf = RPROC_COREDUMP_INLINE;
+	} else if (!strncmp(buf, "default", count)) {
+		rproc->dump_conf = RPROC_COREDUMP_DEFAULT;
+	} else {
+		dev_err(&rproc->dev, "Invalid coredump configuration\n");
+		err = -EINVAL;
+	}
+out:
+	return err ? err : count;
+}
+
+static const struct file_operations rproc_coredump_fops = {
+	.read = rproc_coredump_read,
+	.write = rproc_coredump_write,
+	.open = simple_open,
+	.llseek = generic_file_llseek,
+};
+
+/*
  * Some remote processors may support dumping trace logs into a shared
  * memory buffer. We expose this trace buffer using debugfs, so users
  * can easily tell what's going on remotely.
@@ -337,6 +425,8 @@ void rproc_create_debug_dir(struct rproc *rproc)
 			    rproc, &rproc_rsc_table_fops);
 	debugfs_create_file("carveout_memories", 0400, rproc->dbg_dir,
 			    rproc, &rproc_carveouts_fops);
+	debugfs_create_file("coredump", 0600, rproc->dbg_dir,
+			    rproc, &rproc_coredump_fops);
 }
 
 void __init rproc_init_debugfs(void)
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* Re: [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size
  2020-07-09 20:31 ` [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size Rishabh Bhatnagar
@ 2020-07-14 17:18   ` Mathieu Poirier
  2020-07-14 18:12     ` Sibi Sankar
  2020-07-16  8:10     ` Sibi Sankar
  2020-07-20 16:57   ` Mathieu Poirier
  1 sibling, 2 replies; 15+ messages in thread
From: Mathieu Poirier @ 2020-07-14 17:18 UTC (permalink / raw)
  To: Rishabh Bhatnagar
  Cc: linux-remoteproc, linux-kernel, bjorn.andersson, tsoni, psodagud,
	sidgup, Sibi Sankar

On Thu, Jul 09, 2020 at 01:31:54PM -0700, Rishabh Bhatnagar wrote:
> From: Sibi Sankar <sibis@codeaurora.org>
> 
> In order to land inline coredump support for mss, the dump_segment
> function would need to support granularities less than the segment
> size. This is achieved by replacing mask based tracking with size.
> 
> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
> ---
>  drivers/remoteproc/qcom_q6v5_mss.c | 15 +++++++--------
>  1 file changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
> index feb70283b..c6ce032 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -181,8 +181,8 @@ struct q6v5 {
>  	bool running;
>  
>  	bool dump_mba_loaded;
> -	unsigned long dump_segment_mask;
> -	unsigned long dump_complete_mask;
> +	size_t current_dump_size;
> +	size_t total_dump_size;
>  
>  	phys_addr_t mba_phys;
>  	void *mba_region;
> @@ -1203,7 +1203,6 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  {
>  	int ret = 0;
>  	struct q6v5 *qproc = rproc->priv;
> -	unsigned long mask = BIT((unsigned long)segment->priv);
>  	int offset = segment->da - qproc->mpss_reloc;
>  	void *ptr = NULL;
>  
> @@ -1229,10 +1228,10 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  		memset(dest, 0xff, segment->size);
>  	}
>  
> -	qproc->dump_segment_mask |= mask;
> +	qproc->current_dump_size += segment->size;
>  
>  	/* Reclaim mba after copying segments */
> -	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
> +	if (qproc->current_dump_size == qproc->total_dump_size) {
>  		if (qproc->dump_mba_loaded) {
>  			/* Try to reset ownership back to Q6 */
>  			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
> @@ -1274,7 +1273,7 @@ static int q6v5_start(struct rproc *rproc)
>  			"Failed to reclaim mba buffer system may become unstable\n");
>  
>  	/* Reset Dump Segment Mask */
> -	qproc->dump_segment_mask = 0;
> +	qproc->current_dump_size = 0;
>  	qproc->running = true;
>  
>  	return 0;
> @@ -1323,7 +1322,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
>  
>  	ehdr = (struct elf32_hdr *)fw->data;
>  	phdrs = (struct elf32_phdr *)(ehdr + 1);
> -	qproc->dump_complete_mask = 0;
> +	qproc->total_dump_size = 0;
>  
>  	for (i = 0; i < ehdr->e_phnum; i++) {
>  		phdr = &phdrs[i];
> @@ -1338,7 +1337,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
>  		if (ret)
>  			break;

There is also no longer a need to carry the 'i' in:

                ret = rproc_coredump_add_custom_segment(rproc, phdr->p_paddr,
                                                        phdr->p_memsz,
                                                        qcom_q6v5_dump_segment,
                                                        (void *)i);
>  
> -		qproc->dump_complete_mask |= BIT(i);
> +		qproc->total_dump_size += phdr->p_memsz;
>  	}
>  
>  	release_firmware(fw);
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 


* Re: [PATCH v7 3/4] remoteproc: Add inline coredump functionality
  2020-07-09 20:31 ` [PATCH v7 3/4] remoteproc: Add inline coredump functionality Rishabh Bhatnagar
@ 2020-07-14 17:53   ` Mathieu Poirier
  2020-07-15 19:22     ` rishabhb
  2020-07-16  8:25   ` Sibi Sankar
  1 sibling, 1 reply; 15+ messages in thread
From: Mathieu Poirier @ 2020-07-14 17:53 UTC (permalink / raw)
  To: Rishabh Bhatnagar
  Cc: linux-remoteproc, linux-kernel, bjorn.andersson, tsoni, psodagud, sidgup

On Thu, Jul 09, 2020 at 01:31:55PM -0700, Rishabh Bhatnagar wrote:
> The current coredump implementation uses vmalloc area to copy
> all the segments. But this might put strain on low memory targets
> as the firmware size sometimes is in tens of MBs. The situation
> becomes worse if there are multiple remote processors undergoing
> recovery at the same time. This patch adds inline coredump
> functionality that avoids extra memory usage. This requires
> recovery to be halted until data is read by userspace and free
> function is called. Also modify the qcom_q6v5_mss driver to include
> size and offset in the segment dump function.
> 
> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
> Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
> ---
>  drivers/remoteproc/qcom_q6v5_mss.c       |  11 ++-
>  drivers/remoteproc/remoteproc_coredump.c | 160 +++++++++++++++++++++++++++----
>  include/linux/remoteproc.h               |  21 +++-
>  3 files changed, 166 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
> index c6ce032..79df354 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -1199,11 +1199,12 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
>  
>  static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  				   struct rproc_dump_segment *segment,
> -				   void *dest)
> +				   void *dest, size_t cp_offset, size_t size)

Now that I'm taking another look at this, refactoring qcom_q6v5_dump_segment()
should be in another patch - please split them.

>  {
>  	int ret = 0;
>  	struct q6v5 *qproc = rproc->priv;
>  	int offset = segment->da - qproc->mpss_reloc;
> +	size_t cp_size = size ? size : segment->size;

I can't find a scenario where size is 0 - can you explain when 
such condition occurs?

Thanks,
Mathieu 


>  	void *ptr = NULL;
>  
>  	/* Unlock mba before copying segments */
> @@ -1219,16 +1220,16 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  	}
>  
>  	if (!ret)
> -		ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
> +		ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, cp_size);
>  
>  	if (ptr) {
> -		memcpy(dest, ptr, segment->size);
> +		memcpy(dest, ptr, cp_size);
>  		iounmap(ptr);
>  	} else {
> -		memset(dest, 0xff, segment->size);
> +		memset(dest, 0xff, cp_size);
>  	}
>  
> -	qproc->current_dump_size += segment->size;
> +	qproc->current_dump_size += cp_size;
>  
>  	/* Reclaim mba after copying segments */
>  	if (qproc->current_dump_size == qproc->total_dump_size) {
> diff --git a/drivers/remoteproc/remoteproc_coredump.c b/drivers/remoteproc/remoteproc_coredump.c
> index ded0244..646886f 100644
> --- a/drivers/remoteproc/remoteproc_coredump.c
> +++ b/drivers/remoteproc/remoteproc_coredump.c
> @@ -5,6 +5,7 @@
>   * Copyright (c) 2020, The Linux Foundation. All rights reserved.
>   */
>  
> +#include <linux/completion.h>
>  #include <linux/devcoredump.h>
>  #include <linux/device.h>
>  #include <linux/kernel.h>
> @@ -12,6 +13,12 @@
>  #include "remoteproc_internal.h"
>  #include "remoteproc_elf_helpers.h"
>  
> +struct rproc_coredump_state {
> +	struct rproc *rproc;
> +	void *header;
> +	struct completion dump_done;
> +};
> +
>  /**
>   * rproc_coredump_cleanup() - clean up dump_segments list
>   * @rproc: the remote processor handle
> @@ -72,7 +79,8 @@ int rproc_coredump_add_custom_segment(struct rproc *rproc,
>  				      dma_addr_t da, size_t size,
>  				      void (*dumpfn)(struct rproc *rproc,
>  						     struct rproc_dump_segment *segment,
> -						     void *dest),
> +						     void *dest, size_t offset,
> +						     size_t size),
>  				      void *priv)
>  {
>  	struct rproc_dump_segment *segment;
> @@ -114,12 +122,110 @@ int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine)
>  }
>  EXPORT_SYMBOL(rproc_coredump_set_elf_info);
>  
> +static void rproc_coredump_free(void *data)
> +{
> +	struct rproc_coredump_state *dump_state = data;
> +
> +	complete(&dump_state->dump_done);
> +	vfree(dump_state->header);
> +}
> +
> +static void *rproc_coredump_find_segment(loff_t user_offset,
> +					 struct list_head *segments,
> +					 size_t *data_left)
> +{
> +	struct rproc_dump_segment *segment;
> +
> +	list_for_each_entry(segment, segments, node) {
> +		if (user_offset < segment->size) {
> +			*data_left = segment->size - user_offset;
> +			return segment;
> +		}
> +		user_offset -= segment->size;
> +	}
> +
> +	*data_left = 0;
> +	return NULL;
> +}
> +
> +static void rproc_copy_segment(struct rproc *rproc, void *dest,
> +			       struct rproc_dump_segment *segment,
> +			       size_t offset, size_t size)
> +{
> +	void *ptr;
> +
> +	if (segment->dump) {
> +		segment->dump(rproc, segment, dest, offset, size);
> +	} else {
> +		ptr = rproc_da_to_va(rproc, segment->da + offset, size);
> +		if (!ptr) {
> +			dev_err(&rproc->dev,
> +				"invalid copy request for segment %pad with offset %zu and size %zu)\n",
> +				&segment->da, offset, size);
> +			memset(dest, 0xff, size);
> +		} else {
> +			memcpy(dest, ptr, size);
> +		}
> +	}
> +}
> +
> +static ssize_t rproc_coredump_read(char *buffer, loff_t offset, size_t count,
> +				   void *data, size_t header_sz)
> +{
> +	size_t seg_data, bytes_left = count;
> +	ssize_t copy_sz;
> +	struct rproc_dump_segment *seg;
> +	struct rproc_coredump_state *dump_state = data;
> +	struct rproc *rproc = dump_state->rproc;
> +	void *elfcore = dump_state->header;
> +
> +	/* Copy the vmalloc'ed header first. */
> +	if (offset < header_sz) {
> +		copy_sz = memory_read_from_buffer(buffer, count, &offset,
> +						  elfcore, header_sz);
> +
> +		return copy_sz;
> +	}
> +
> +	/*
> +	 * Find out the segment memory chunk to be copied based on offset.
> +	 * Keep copying data until count bytes are read.
> +	 */
> +	while (bytes_left) {
> +		seg = rproc_coredump_find_segment(offset - header_sz,
> +						  &rproc->dump_segments,
> +						  &seg_data);
> +		/* EOF check */
> +		if (!seg) {
> +			dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
> +				 offset);
> +			break;
> +		}
> +
> +		copy_sz = min_t(size_t, bytes_left, seg_data);
> +
> +		rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
> +				   copy_sz);
> +
> +		offset += copy_sz;
> +		buffer += copy_sz;
> +		bytes_left -= copy_sz;
> +	}
> +
> +	return count - bytes_left;
> +}
> +
>  /**
>   * rproc_coredump() - perform coredump
>   * @rproc:	rproc handle
>   *
>   * This function will generate an ELF header for the registered segments
> - * and create a devcoredump device associated with rproc.
> + * and create a devcoredump device associated with rproc. Based on the
> + * coredump configuration this function will directly copy the segments
> + * from device memory to userspace or copy segments from device memory to
> + * a separate buffer, which can then be read by userspace.
> + * The first approach avoids using extra vmalloc memory. But it will stall
> + * recovery flow until dump is read by userspace.
>   */
>  void rproc_coredump(struct rproc *rproc)
>  {
> @@ -129,11 +235,13 @@ void rproc_coredump(struct rproc *rproc)
>  	size_t data_size;
>  	size_t offset;
>  	void *data;
> -	void *ptr;
>  	u8 class = rproc->elf_class;
>  	int phnum = 0;
> +	struct rproc_coredump_state dump_state;
> +	enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
>  
> -	if (list_empty(&rproc->dump_segments))
> +	if (list_empty(&rproc->dump_segments) ||
> +	    dump_conf == RPROC_COREDUMP_DISABLED)
>  		return;
>  
>  	if (class == ELFCLASSNONE) {
> @@ -143,7 +251,14 @@ void rproc_coredump(struct rproc *rproc)
>  
>  	data_size = elf_size_of_hdr(class);
>  	list_for_each_entry(segment, &rproc->dump_segments, node) {
> -		data_size += elf_size_of_phdr(class) + segment->size;
> +		/*
> +		 * For default configuration buffer includes headers & segments.
> +		 * For inline dump buffer just includes headers as segments are
> +		 * directly read from device memory.
> +		 */
> +		data_size += elf_size_of_phdr(class);
> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
> +			data_size += segment->size;
>  
>  		phnum++;
>  	}
> @@ -182,23 +297,30 @@ void rproc_coredump(struct rproc *rproc)
>  		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
>  		elf_phdr_set_p_align(class, phdr, 0);
>  
> -		if (segment->dump) {
> -			segment->dump(rproc, segment, data + offset);
> -		} else {
> -			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
> -			if (!ptr) {
> -				dev_err(&rproc->dev,
> -					"invalid coredump segment (%pad, %zu)\n",
> -					&segment->da, segment->size);
> -				memset(data + offset, 0xff, segment->size);
> -			} else {
> -				memcpy(data + offset, ptr, segment->size);
> -			}
> -		}
> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
> +			rproc_copy_segment(rproc, data + offset, segment, 0,
> +					   segment->size);
>  
>  		offset += elf_phdr_get_p_filesz(class, phdr);
>  		phdr += elf_size_of_phdr(class);
>  	}
>  
> -	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
> +	if (dump_conf == RPROC_COREDUMP_DEFAULT) {
> +		dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
> +		return;
> +	}
> +
> +	/* Initialize the dump state struct to be used by rproc_coredump_read */
> +	dump_state.rproc = rproc;
> +	dump_state.header = data;
> +	init_completion(&dump_state.dump_done);
> +
> +	dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
> +		      rproc_coredump_read, rproc_coredump_free);
> +
> +	/*
> +	 * Wait until the dump is read and free is called. Data is freed
> +	 * by devcoredump framework automatically after 5 minutes.
> +	 */
> +	wait_for_completion(&dump_state.dump_done);
>  }
> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
> index e7b7bab..38d037d 100644
> --- a/include/linux/remoteproc.h
> +++ b/include/linux/remoteproc.h
> @@ -435,6 +435,20 @@ enum rproc_crash_type {
>  };
>  
>  /**
> + * enum rproc_dump_mechanism - Coredump options for core
> + * @RPROC_COREDUMP_DEFAULT:	Copy dump to separate buffer and carry on with
> + *				recovery
> + * @RPROC_COREDUMP_INLINE:	Read segments directly from device memory. Stall
> + *				recovery until all segments are read
> + * @RPROC_COREDUMP_DISABLED:	Don't perform any dump
> + */
> +enum rproc_dump_mechanism {
> +	RPROC_COREDUMP_DEFAULT,
> +	RPROC_COREDUMP_INLINE,
> +	RPROC_COREDUMP_DISABLED,
> +};
> +
> +/**
>   * struct rproc_dump_segment - segment info from ELF header
>   * @node:	list node related to the rproc segment list
>   * @da:		device address of the segment
> @@ -451,7 +465,7 @@ struct rproc_dump_segment {
>  
>  	void *priv;
>  	void (*dump)(struct rproc *rproc, struct rproc_dump_segment *segment,
> -		     void *dest);
> +		     void *dest, size_t offset, size_t size);
>  	loff_t offset;
>  };
>  
> @@ -466,6 +480,7 @@ struct rproc_dump_segment {
>   * @dev: virtual device for refcounting and common remoteproc behavior
>   * @power: refcount of users who need this rproc powered up
>   * @state: state of the device
> + * @dump_conf: Currently selected coredump configuration
>   * @lock: lock which protects concurrent manipulations of the rproc
>   * @dbg_dir: debugfs directory of this rproc device
>   * @traces: list of trace buffers
> @@ -499,6 +514,7 @@ struct rproc {
>  	struct device dev;
>  	atomic_t power;
>  	unsigned int state;
> +	enum rproc_dump_mechanism dump_conf;
>  	struct mutex lock;
>  	struct dentry *dbg_dir;
>  	struct list_head traces;
> @@ -630,7 +646,8 @@ int rproc_coredump_add_custom_segment(struct rproc *rproc,
>  				      dma_addr_t da, size_t size,
>  				      void (*dumpfn)(struct rproc *rproc,
>  						     struct rproc_dump_segment *segment,
> -						     void *dest),
> +						     void *dest, size_t offset,
> +						     size_t size),
>  				      void *priv);
>  int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine);
>  
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 


* Re: [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size
  2020-07-14 17:18   ` Mathieu Poirier
@ 2020-07-14 18:12     ` Sibi Sankar
  2020-07-16  8:10     ` Sibi Sankar
  1 sibling, 0 replies; 15+ messages in thread
From: Sibi Sankar @ 2020-07-14 18:12 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: Rishabh Bhatnagar, linux-remoteproc, linux-kernel,
	bjorn.andersson, tsoni, psodagud, sidgup, linux-kernel-owner

On 2020-07-14 22:48, Mathieu Poirier wrote:
> On Thu, Jul 09, 2020 at 01:31:54PM -0700, Rishabh Bhatnagar wrote:
>> From: Sibi Sankar <sibis@codeaurora.org>
>> 
>> In order to land inline coredump support for mss, the dump_segment
>> function would need to support granularities less than the segment
>> size. This is achieved by replacing mask based tracking with size.
>> 
>> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>> ---
>>  drivers/remoteproc/qcom_q6v5_mss.c | 15 +++++++--------
>>  1 file changed, 7 insertions(+), 8 deletions(-)
>> 
>> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
>> index feb70283b..c6ce032 100644
>> --- a/drivers/remoteproc/qcom_q6v5_mss.c
>> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
>> @@ -181,8 +181,8 @@ struct q6v5 {
>>  	bool running;
>> 
>>  	bool dump_mba_loaded;
>> -	unsigned long dump_segment_mask;
>> -	unsigned long dump_complete_mask;
>> +	size_t current_dump_size;
>> +	size_t total_dump_size;
>> 
>>  	phys_addr_t mba_phys;
>>  	void *mba_region;
>> @@ -1203,7 +1203,6 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>>  {
>>  	int ret = 0;
>>  	struct q6v5 *qproc = rproc->priv;
>> -	unsigned long mask = BIT((unsigned long)segment->priv);
>>  	int offset = segment->da - qproc->mpss_reloc;
>>  	void *ptr = NULL;
>> 
>> @@ -1229,10 +1228,10 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>>  		memset(dest, 0xff, segment->size);
>>  	}
>> 
>> -	qproc->dump_segment_mask |= mask;
>> +	qproc->current_dump_size += segment->size;
>> 
>>  	/* Reclaim mba after copying segments */
>> -	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
>> +	if (qproc->current_dump_size == qproc->total_dump_size) {
>>  		if (qproc->dump_mba_loaded) {
>>  			/* Try to reset ownership back to Q6 */
>>  			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
>> @@ -1274,7 +1273,7 @@ static int q6v5_start(struct rproc *rproc)
>>  			"Failed to reclaim mba buffer system may become unstable\n");
>> 
>>  	/* Reset Dump Segment Mask */
>> -	qproc->dump_segment_mask = 0;
>> +	qproc->current_dump_size = 0;
>>  	qproc->running = true;
>> 
>>  	return 0;
>> @@ -1323,7 +1322,7 @@ static int 
>> qcom_q6v5_register_dump_segments(struct rproc *rproc,
>> 
>>  	ehdr = (struct elf32_hdr *)fw->data;
>>  	phdrs = (struct elf32_phdr *)(ehdr + 1);
>> -	qproc->dump_complete_mask = 0;
>> +	qproc->total_dump_size = 0;
>> 
>>  	for (i = 0; i < ehdr->e_phnum; i++) {
>>  		phdr = &phdrs[i];
>> @@ -1338,7 +1337,7 @@ static int 
>> qcom_q6v5_register_dump_segments(struct rproc *rproc,
>>  		if (ret)
>>  			break;
> 
> There is also no longer a need to carry the 'i' in:
> 
>                 ret = rproc_coredump_add_custom_segment(rproc, 
> phdr->p_paddr,
>                                                         phdr->p_memsz,
>                                                         
> qcom_q6v5_dump_segment,
>                                                         (void *)i);

Mathieu,
Thanks for the review. Will pass in
NULL instead in the next re-spin.
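
For reference, the registration would then look something like this
(sketch based on the call quoted above, with the priv cookie dropped):

	ret = rproc_coredump_add_custom_segment(rproc, phdr->p_paddr,
						phdr->p_memsz,
						qcom_q6v5_dump_segment,
						NULL);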

>> 
>> -		qproc->dump_complete_mask |= BIT(i);
>> +		qproc->total_dump_size += phdr->p_memsz;
>>  	}
>> 
>>  	release_firmware(fw);
>> --
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
>> Forum,
>> a Linux Foundation Collaborative Project
>> 

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/4] remoteproc: Add inline coredump functionality
  2020-07-14 17:53   ` Mathieu Poirier
@ 2020-07-15 19:22     ` rishabhb
  2020-07-15 21:55       ` Mathieu Poirier
  0 siblings, 1 reply; 15+ messages in thread
From: rishabhb @ 2020-07-15 19:22 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: linux-remoteproc, linux-kernel, bjorn.andersson, tsoni, psodagud,
	sidgup, linux-remoteproc-owner

On 2020-07-14 10:53, Mathieu Poirier wrote:
> On Thu, Jul 09, 2020 at 01:31:55PM -0700, Rishabh Bhatnagar wrote:
>> The current coredump implementation uses vmalloc area to copy
>> all the segments. But this might put strain on low memory targets
>> as the firmware size sometimes is in tens of MBs. The situation
>> becomes worse if there are multiple remote processors undergoing
>> recovery at the same time. This patch adds inline coredump
>> functionality that avoids extra memory usage. This requires
>> recovery to be halted until data is read by userspace and free
>> function is called. Also modify the qcom_q6v5_mss driver to include
>> size and offset in the segment dump function.
>> 
>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>> Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
>> ---
>>  drivers/remoteproc/qcom_q6v5_mss.c       |  11 ++-
>>  drivers/remoteproc/remoteproc_coredump.c | 160 
>> +++++++++++++++++++++++++++----
>>  include/linux/remoteproc.h               |  21 +++-
>>  3 files changed, 166 insertions(+), 26 deletions(-)
>> 
>> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c 
>> b/drivers/remoteproc/qcom_q6v5_mss.c
>> index c6ce032..79df354 100644
>> --- a/drivers/remoteproc/qcom_q6v5_mss.c
>> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
>> @@ -1199,11 +1199,12 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
>> 
>>  static void qcom_q6v5_dump_segment(struct rproc *rproc,
>>  				   struct rproc_dump_segment *segment,
>> -				   void *dest)
>> +				   void *dest, size_t cp_offset, size_t size)
> 
> Now that I'm taking another look at this, refactoring 
> qcom_q6v5_dump_segment()
> should be in another patch - please split them.
> 
Hi Mathieu,
Thanks for reviewing.

Since I'm changing the API signature, taking this patch without the
q6v5_mss changes (i.e. splitting them into a separate patch) would cause
compilation errors. Is that ok?
>>  {
>>  	int ret = 0;
>>  	struct q6v5 *qproc = rproc->priv;
>>  	int offset = segment->da - qproc->mpss_reloc;
>> +	size_t cp_size = size ? size : segment->size;
> 
> I can't find a scenario where size is 0 - can you explain when
> such condition occurs?
> 
There is no such case right now. I can get rid of the check.
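
A minimal sketch of the simplification, assuming every caller passes an
explicit non-zero size:

	/* No zero-size callers exist, so drop the fallback. */
	size_t cp_size = size;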
> Thanks,
> Mathieu
> 
> 
>>  	void *ptr = NULL;
>> 
>>  	/* Unlock mba before copying segments */
>> @@ -1219,16 +1220,16 @@ static void qcom_q6v5_dump_segment(struct 
>> rproc *rproc,
>>  	}
>> 
>>  	if (!ret)
>> -		ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
>> +		ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, cp_size);
>> 
>>  	if (ptr) {
>> -		memcpy(dest, ptr, segment->size);
>> +		memcpy(dest, ptr, cp_size);
>>  		iounmap(ptr);
>>  	} else {
>> -		memset(dest, 0xff, segment->size);
>> +		memset(dest, 0xff, cp_size);
>>  	}
>> 
>> -	qproc->current_dump_size += segment->size;
>> +	qproc->current_dump_size += cp_size;
>> 
>>  	/* Reclaim mba after copying segments */
>>  	if (qproc->current_dump_size == qproc->total_dump_size) {
>> diff --git a/drivers/remoteproc/remoteproc_coredump.c 
>> b/drivers/remoteproc/remoteproc_coredump.c
>> index ded0244..646886f 100644
>> --- a/drivers/remoteproc/remoteproc_coredump.c
>> +++ b/drivers/remoteproc/remoteproc_coredump.c
>> @@ -5,6 +5,7 @@
>>   * Copyright (c) 2020, The Linux Foundation. All rights reserved.
>>   */
>> 
>> +#include <linux/completion.h>
>>  #include <linux/devcoredump.h>
>>  #include <linux/device.h>
>>  #include <linux/kernel.h>
>> @@ -12,6 +13,12 @@
>>  #include "remoteproc_internal.h"
>>  #include "remoteproc_elf_helpers.h"
>> 
>> +struct rproc_coredump_state {
>> +	struct rproc *rproc;
>> +	void *header;
>> +	struct completion dump_done;
>> +};
>> +
>>  /**
>>   * rproc_coredump_cleanup() - clean up dump_segments list
>>   * @rproc: the remote processor handle
>> @@ -72,7 +79,8 @@ int rproc_coredump_add_custom_segment(struct rproc 
>> *rproc,
>>  				      dma_addr_t da, size_t size,
>>  				      void (*dumpfn)(struct rproc *rproc,
>>  						     struct rproc_dump_segment *segment,
>> -						     void *dest),
>> +						     void *dest, size_t offset,
>> +						     size_t size),
>>  				      void *priv)
>>  {
>>  	struct rproc_dump_segment *segment;
>> @@ -114,12 +122,110 @@ int rproc_coredump_set_elf_info(struct rproc 
>> *rproc, u8 class, u16 machine)
>>  }
>>  EXPORT_SYMBOL(rproc_coredump_set_elf_info);
>> 
>> +static void rproc_coredump_free(void *data)
>> +{
>> +	struct rproc_coredump_state *dump_state = data;
>> +
>> +	complete(&dump_state->dump_done);
>> +	vfree(dump_state->header);
>> +}
>> +
>> +static void *rproc_coredump_find_segment(loff_t user_offset,
>> +					 struct list_head *segments,
>> +					 size_t *data_left)
>> +{
>> +	struct rproc_dump_segment *segment;
>> +
>> +	list_for_each_entry(segment, segments, node) {
>> +		if (user_offset < segment->size) {
>> +			*data_left = segment->size - user_offset;
>> +			return segment;
>> +		}
>> +		user_offset -= segment->size;
>> +	}
>> +
>> +	*data_left = 0;
>> +	return NULL;
>> +}
>> +
>> +static void rproc_copy_segment(struct rproc *rproc, void *dest,
>> +			       struct rproc_dump_segment *segment,
>> +			       size_t offset, size_t size)
>> +{
>> +	void *ptr;
>> +
>> +	if (segment->dump) {
>> +		segment->dump(rproc, segment, dest, offset, size);
>> +	} else {
>> +		ptr = rproc_da_to_va(rproc, segment->da + offset, size);
>> +		if (!ptr) {
>> +			dev_err(&rproc->dev,
>> +				"invalid copy request for segment %pad with offset %zu and size 
>> %zu)\n",
>> +				&segment->da, offset, size);
>> +			memset(dest, 0xff, size);
>> +		} else {
>> +			memcpy(dest, ptr, size);
>> +		}
>> +	}
>> +}
>> +
>> +static ssize_t rproc_coredump_read(char *buffer, loff_t offset, 
>> size_t count,
>> +				   void *data, size_t header_sz)
>> +{
>> +	size_t seg_data, bytes_left = count;
>> +	ssize_t copy_sz;
>> +	struct rproc_dump_segment *seg;
>> +	struct rproc_coredump_state *dump_state = data;
>> +	struct rproc *rproc = dump_state->rproc;
>> +	void *elfcore = dump_state->header;
>> +
>> +	/* Copy the vmalloc'ed header first. */
>> +	if (offset < header_sz) {
>> +		copy_sz = memory_read_from_buffer(buffer, count, &offset,
>> +						  elfcore, header_sz);
>> +
>> +		return copy_sz;
>> +	}
>> +
>> +	/*
>> +	 * Find out the segment memory chunk to be copied based on offset.
>> +	 * Keep copying data until count bytes are read.
>> +	 */
>> +	while (bytes_left) {
>> +		seg = rproc_coredump_find_segment(offset - header_sz,
>> +						  &rproc->dump_segments,
>> +						  &seg_data);
>> +		/* EOF check */
>> +		if (!seg) {
>> +			dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
>> +				 offset);
>> +			break;
>> +		}
>> +
>> +		copy_sz = min_t(size_t, bytes_left, seg_data);
>> +
>> +		rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
>> +				   copy_sz);
>> +
>> +		offset += copy_sz;
>> +		buffer += copy_sz;
>> +		bytes_left -= copy_sz;
>> +	}
>> +
>> +	return count - bytes_left;
>> +}
>> +
>>  /**
>>   * rproc_coredump() - perform coredump
>>   * @rproc:	rproc handle
>>   *
>>   * This function will generate an ELF header for the registered 
>> segments
>> - * and create a devcoredump device associated with rproc.
>> + * and create a devcoredump device associated with rproc. Based on 
>> the
>> + * coredump configuration this function will directly copy the 
>> segments
>> + * from device memory to userspace or copy segments from device 
>> memory to
>> + * a separate buffer, which can then be read by userspace.
>> + * The first approach avoids using extra vmalloc memory. But it will 
>> stall
>> + * recovery flow until dump is read by userspace.
>>   */
>>  void rproc_coredump(struct rproc *rproc)
>>  {
>> @@ -129,11 +235,13 @@ void rproc_coredump(struct rproc *rproc)
>>  	size_t data_size;
>>  	size_t offset;
>>  	void *data;
>> -	void *ptr;
>>  	u8 class = rproc->elf_class;
>>  	int phnum = 0;
>> +	struct rproc_coredump_state dump_state;
>> +	enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
>> 
>> -	if (list_empty(&rproc->dump_segments))
>> +	if (list_empty(&rproc->dump_segments) ||
>> +	    dump_conf == RPROC_COREDUMP_DISABLED)
>>  		return;
>> 
>>  	if (class == ELFCLASSNONE) {
>> @@ -143,7 +251,14 @@ void rproc_coredump(struct rproc *rproc)
>> 
>>  	data_size = elf_size_of_hdr(class);
>>  	list_for_each_entry(segment, &rproc->dump_segments, node) {
>> -		data_size += elf_size_of_phdr(class) + segment->size;
>> +		/*
>> +		 * For default configuration buffer includes headers & segments.
>> +		 * For inline dump buffer just includes headers as segments are
>> +		 * directly read from device memory.
>> +		 */
>> +		data_size += elf_size_of_phdr(class);
>> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
>> +			data_size += segment->size;
>> 
>>  		phnum++;
>>  	}
>> @@ -182,23 +297,30 @@ void rproc_coredump(struct rproc *rproc)
>>  		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
>>  		elf_phdr_set_p_align(class, phdr, 0);
>> 
>> -		if (segment->dump) {
>> -			segment->dump(rproc, segment, data + offset);
>> -		} else {
>> -			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
>> -			if (!ptr) {
>> -				dev_err(&rproc->dev,
>> -					"invalid coredump segment (%pad, %zu)\n",
>> -					&segment->da, segment->size);
>> -				memset(data + offset, 0xff, segment->size);
>> -			} else {
>> -				memcpy(data + offset, ptr, segment->size);
>> -			}
>> -		}
>> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
>> +			rproc_copy_segment(rproc, data + offset, segment, 0,
>> +					   segment->size);
>> 
>>  		offset += elf_phdr_get_p_filesz(class, phdr);
>>  		phdr += elf_size_of_phdr(class);
>>  	}
>> 
>> -	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
>> +	if (dump_conf == RPROC_COREDUMP_DEFAULT) {
>> +		dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
>> +		return;
>> +	}
>> +
>> +	/* Initialize the dump state struct to be used by 
>> rproc_coredump_read */
>> +	dump_state.rproc = rproc;
>> +	dump_state.header = data;
>> +	init_completion(&dump_state.dump_done);
>> +
>> +	dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
>> +		      rproc_coredump_read, rproc_coredump_free);
>> +
>> +	/*
>> +	 * Wait until the dump is read and free is called. Data is freed
>> +	 * by devcoredump framework automatically after 5 minutes.
>> +	 */
>> +	wait_for_completion(&dump_state.dump_done);
>>  }
>> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
>> index e7b7bab..38d037d 100644
>> --- a/include/linux/remoteproc.h
>> +++ b/include/linux/remoteproc.h
>> @@ -435,6 +435,20 @@ enum rproc_crash_type {
>>  };
>> 
>>  /**
>> + * enum rproc_dump_mechanism - Coredump options for core
>> + * @RPROC_COREDUMP_DEFAULT:	Copy dump to separate buffer and carry on 
>> with
>> +				recovery
>> + * @RPROC_COREDUMP_INLINE:	Read segments directly from device memory. 
>> Stall
>> +				recovery until all segments are read
>> + * @RPROC_COREDUMP_DISABLED:	Don't perform any dump
>> + */
>> +enum rproc_dump_mechanism {
>> +	RPROC_COREDUMP_DEFAULT,
>> +	RPROC_COREDUMP_INLINE,
>> +	RPROC_COREDUMP_DISABLED,
>> +};
>> +
>> +/**
>>   * struct rproc_dump_segment - segment info from ELF header
>>   * @node:	list node related to the rproc segment list
>>   * @da:		device address of the segment
>> @@ -451,7 +465,7 @@ struct rproc_dump_segment {
>> 
>>  	void *priv;
>>  	void (*dump)(struct rproc *rproc, struct rproc_dump_segment 
>> *segment,
>> -		     void *dest);
>> +		     void *dest, size_t offset, size_t size);
>>  	loff_t offset;
>>  };
>> 
>> @@ -466,6 +480,7 @@ struct rproc_dump_segment {
>>   * @dev: virtual device for refcounting and common remoteproc 
>> behavior
>>   * @power: refcount of users who need this rproc powered up
>>   * @state: state of the device
>> + * @dump_conf: Currently selected coredump configuration
>>   * @lock: lock which protects concurrent manipulations of the rproc
>>   * @dbg_dir: debugfs directory of this rproc device
>>   * @traces: list of trace buffers
>> @@ -499,6 +514,7 @@ struct rproc {
>>  	struct device dev;
>>  	atomic_t power;
>>  	unsigned int state;
>> +	enum rproc_dump_mechanism dump_conf;
>>  	struct mutex lock;
>>  	struct dentry *dbg_dir;
>>  	struct list_head traces;
>> @@ -630,7 +646,8 @@ int rproc_coredump_add_custom_segment(struct rproc 
>> *rproc,
>>  				      dma_addr_t da, size_t size,
>>  				      void (*dumpfn)(struct rproc *rproc,
>>  						     struct rproc_dump_segment *segment,
>> -						     void *dest),
>> +						     void *dest, size_t offset,
>> +						     size_t size),
>>  				      void *priv);
>>  int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 
>> machine);
>> 
>> --
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
>> Forum,
>> a Linux Foundation Collaborative Project
>> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/4] remoteproc: Add inline coredump functionality
  2020-07-15 19:22     ` rishabhb
@ 2020-07-15 21:55       ` Mathieu Poirier
  0 siblings, 0 replies; 15+ messages in thread
From: Mathieu Poirier @ 2020-07-15 21:55 UTC (permalink / raw)
  To: Rishabh Bhatnagar
  Cc: linux-remoteproc, Linux Kernel Mailing List, Bjorn Andersson,
	tsoni, psodagud, Siddharth Gupta, linux-remoteproc-owner

On Wed, 15 Jul 2020 at 13:22, <rishabhb@codeaurora.org> wrote:
>
> On 2020-07-14 10:53, Mathieu Poirier wrote:
> > On Thu, Jul 09, 2020 at 01:31:55PM -0700, Rishabh Bhatnagar wrote:
> >> The current coredump implementation uses vmalloc area to copy
> >> all the segments. But this might put strain on low memory targets
> >> as the firmware size sometimes is in tens of MBs. The situation
> >> becomes worse if there are multiple remote processors undergoing
> >> recovery at the same time. This patch adds inline coredump
> >> functionality that avoids extra memory usage. This requires
> >> recovery to be halted until data is read by userspace and free
> >> function is called. Also modify the qcom_q6v5_mss driver to include
> >> size and offset in the segment dump function.
> >>
> >> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
> >> Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
> >> ---
> >>  drivers/remoteproc/qcom_q6v5_mss.c       |  11 ++-
> >>  drivers/remoteproc/remoteproc_coredump.c | 160
> >> +++++++++++++++++++++++++++----
> >>  include/linux/remoteproc.h               |  21 +++-
> >>  3 files changed, 166 insertions(+), 26 deletions(-)
> >>
> >> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c
> >> b/drivers/remoteproc/qcom_q6v5_mss.c
> >> index c6ce032..79df354 100644
> >> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> >> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> >> @@ -1199,11 +1199,12 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
> >>
> >>  static void qcom_q6v5_dump_segment(struct rproc *rproc,
> >>                                 struct rproc_dump_segment *segment,
> >> -                               void *dest)
> >> +                               void *dest, size_t cp_offset, size_t size)
> >
> > Now that I'm taking another look at this, refactoring
> > qcom_q6v5_dump_segment()
> > should be in another patch - please split them.
> >
> Hi Mathieu,
> Thanks for reviewing.
>
> Since I'm changing the API signature, taking this patch without the
> q6v5_mss changes (i.e. splitting them into a separate patch) would cause
> compilation errors. Is that ok?

Just fix the arguments in q6v5_mss to match the signature and then do
another patch to implement the inline coredump feature.
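
Something along these lines for the first patch, perhaps (a rough
sketch with the mba unlock/reclaim logic elided; the new parameters are
accepted but deliberately unused, so behaviour is unchanged until the
inline coredump patch starts passing real sub-segment windows):

	static void qcom_q6v5_dump_segment(struct rproc *rproc,
					   struct rproc_dump_segment *segment,
					   void *dest, size_t cp_offset,
					   size_t size)
	{
		struct q6v5 *qproc = rproc->priv;
		int offset = segment->da - qproc->mpss_reloc;
		void *ptr;

		/* cp_offset/size are ignored here for now. */
		ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
		if (ptr) {
			memcpy(dest, ptr, segment->size);
			iounmap(ptr);
		} else {
			memset(dest, 0xff, segment->size);
		}

		qproc->current_dump_size += segment->size;
	}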

> >>  {
> >>      int ret = 0;
> >>      struct q6v5 *qproc = rproc->priv;
> >>      int offset = segment->da - qproc->mpss_reloc;
> >> +    size_t cp_size = size ? size : segment->size;
> >
> > I can't find a scenario where size is 0 - can you explain when
> > such condition occurs?
> >
> There is no such case right now. I can get rid of the check.
> > Thanks,
> > Mathieu
> >
> >
> >>      void *ptr = NULL;
> >>
> >>      /* Unlock mba before copying segments */
> >> @@ -1219,16 +1220,16 @@ static void qcom_q6v5_dump_segment(struct
> >> rproc *rproc,
> >>      }
> >>
> >>      if (!ret)
> >> -            ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
> >> +            ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, cp_size);
> >>
> >>      if (ptr) {
> >> -            memcpy(dest, ptr, segment->size);
> >> +            memcpy(dest, ptr, cp_size);
> >>              iounmap(ptr);
> >>      } else {
> >> -            memset(dest, 0xff, segment->size);
> >> +            memset(dest, 0xff, cp_size);
> >>      }
> >>
> >> -    qproc->current_dump_size += segment->size;
> >> +    qproc->current_dump_size += cp_size;
> >>
> >>      /* Reclaim mba after copying segments */
> >>      if (qproc->current_dump_size == qproc->total_dump_size) {
> >> diff --git a/drivers/remoteproc/remoteproc_coredump.c
> >> b/drivers/remoteproc/remoteproc_coredump.c
> >> index ded0244..646886f 100644
> >> --- a/drivers/remoteproc/remoteproc_coredump.c
> >> +++ b/drivers/remoteproc/remoteproc_coredump.c
> >> @@ -5,6 +5,7 @@
> >>   * Copyright (c) 2020, The Linux Foundation. All rights reserved.
> >>   */
> >>
> >> +#include <linux/completion.h>
> >>  #include <linux/devcoredump.h>
> >>  #include <linux/device.h>
> >>  #include <linux/kernel.h>
> >> @@ -12,6 +13,12 @@
> >>  #include "remoteproc_internal.h"
> >>  #include "remoteproc_elf_helpers.h"
> >>
> >> +struct rproc_coredump_state {
> >> +    struct rproc *rproc;
> >> +    void *header;
> >> +    struct completion dump_done;
> >> +};
> >> +
> >>  /**
> >>   * rproc_coredump_cleanup() - clean up dump_segments list
> >>   * @rproc: the remote processor handle
> >> @@ -72,7 +79,8 @@ int rproc_coredump_add_custom_segment(struct rproc
> >> *rproc,
> >>                                    dma_addr_t da, size_t size,
> >>                                    void (*dumpfn)(struct rproc *rproc,
> >>                                                   struct rproc_dump_segment *segment,
> >> -                                                 void *dest),
> >> +                                                 void *dest, size_t offset,
> >> +                                                 size_t size),
> >>                                    void *priv)
> >>  {
> >>      struct rproc_dump_segment *segment;
> >> @@ -114,12 +122,110 @@ int rproc_coredump_set_elf_info(struct rproc
> >> *rproc, u8 class, u16 machine)
> >>  }
> >>  EXPORT_SYMBOL(rproc_coredump_set_elf_info);
> >>
> >> +static void rproc_coredump_free(void *data)
> >> +{
> >> +    struct rproc_coredump_state *dump_state = data;
> >> +
> >> +    complete(&dump_state->dump_done);
> >> +    vfree(dump_state->header);
> >> +}
> >> +
> >> +static void *rproc_coredump_find_segment(loff_t user_offset,
> >> +                                     struct list_head *segments,
> >> +                                     size_t *data_left)
> >> +{
> >> +    struct rproc_dump_segment *segment;
> >> +
> >> +    list_for_each_entry(segment, segments, node) {
> >> +            if (user_offset < segment->size) {
> >> +                    *data_left = segment->size - user_offset;
> >> +                    return segment;
> >> +            }
> >> +            user_offset -= segment->size;
> >> +    }
> >> +
> >> +    *data_left = 0;
> >> +    return NULL;
> >> +}
> >> +
> >> +static void rproc_copy_segment(struct rproc *rproc, void *dest,
> >> +                           struct rproc_dump_segment *segment,
> >> +                           size_t offset, size_t size)
> >> +{
> >> +    void *ptr;
> >> +
> >> +    if (segment->dump) {
> >> +            segment->dump(rproc, segment, dest, offset, size);
> >> +    } else {
> >> +            ptr = rproc_da_to_va(rproc, segment->da + offset, size);
> >> +            if (!ptr) {
> >> +                    dev_err(&rproc->dev,
> >> +                            "invalid copy request for segment %pad with offset %zu and size
> >> %zu)\n",
> >> +                            &segment->da, offset, size);
> >> +                    memset(dest, 0xff, size);
> >> +            } else {
> >> +                    memcpy(dest, ptr, size);
> >> +            }
> >> +    }
> >> +}
> >> +
> >> +static ssize_t rproc_coredump_read(char *buffer, loff_t offset,
> >> size_t count,
> >> +                               void *data, size_t header_sz)
> >> +{
> >> +    size_t seg_data, bytes_left = count;
> >> +    ssize_t copy_sz;
> >> +    struct rproc_dump_segment *seg;
> >> +    struct rproc_coredump_state *dump_state = data;
> >> +    struct rproc *rproc = dump_state->rproc;
> >> +    void *elfcore = dump_state->header;
> >> +
> >> +    /* Copy the vmalloc'ed header first. */
> >> +    if (offset < header_sz) {
> >> +            copy_sz = memory_read_from_buffer(buffer, count, &offset,
> >> +                                              elfcore, header_sz);
> >> +
> >> +            return copy_sz;
> >> +    }
> >> +
> >> +    /*
> >> +     * Find out the segment memory chunk to be copied based on offset.
> >> +     * Keep copying data until count bytes are read.
> >> +     */
> >> +    while (bytes_left) {
> >> +            seg = rproc_coredump_find_segment(offset - header_sz,
> >> +                                              &rproc->dump_segments,
> >> +                                              &seg_data);
> >> +            /* EOF check */
> >> +            if (!seg) {
> >> +                    dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
> >> +                             offset);
> >> +                    break;
> >> +            }
> >> +
> >> +            copy_sz = min_t(size_t, bytes_left, seg_data);
> >> +
> >> +            rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
> >> +                               copy_sz);
> >> +
> >> +            offset += copy_sz;
> >> +            buffer += copy_sz;
> >> +            bytes_left -= copy_sz;
> >> +    }
> >> +
> >> +    return count - bytes_left;
> >> +}
> >> +
> >>  /**
> >>   * rproc_coredump() - perform coredump
> >>   * @rproc:  rproc handle
> >>   *
> >>   * This function will generate an ELF header for the registered
> >> segments
> >> - * and create a devcoredump device associated with rproc.
> >> + * and create a devcoredump device associated with rproc. Based on
> >> the
> >> + * coredump configuration this function will directly copy the
> >> segments
> >> + * from device memory to userspace or copy segments from device
> >> memory to
> >> + * a separate buffer, which can then be read by userspace.
> >> + * The first approach avoids using extra vmalloc memory. But it will
> >> stall
> >> + * recovery flow until dump is read by userspace.
> >>   */
> >>  void rproc_coredump(struct rproc *rproc)
> >>  {
> >> @@ -129,11 +235,13 @@ void rproc_coredump(struct rproc *rproc)
> >>      size_t data_size;
> >>      size_t offset;
> >>      void *data;
> >> -    void *ptr;
> >>      u8 class = rproc->elf_class;
> >>      int phnum = 0;
> >> +    struct rproc_coredump_state dump_state;
> >> +    enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
> >>
> >> -    if (list_empty(&rproc->dump_segments))
> >> +    if (list_empty(&rproc->dump_segments) ||
> >> +        dump_conf == RPROC_COREDUMP_DISABLED)
> >>              return;
> >>
> >>      if (class == ELFCLASSNONE) {
> >> @@ -143,7 +251,14 @@ void rproc_coredump(struct rproc *rproc)
> >>
> >>      data_size = elf_size_of_hdr(class);
> >>      list_for_each_entry(segment, &rproc->dump_segments, node) {
> >> -            data_size += elf_size_of_phdr(class) + segment->size;
> >> +            /*
> >> +             * For default configuration buffer includes headers & segments.
> >> +             * For inline dump buffer just includes headers as segments are
> >> +             * directly read from device memory.
> >> +             */
> >> +            data_size += elf_size_of_phdr(class);
> >> +            if (dump_conf == RPROC_COREDUMP_DEFAULT)
> >> +                    data_size += segment->size;
> >>
> >>              phnum++;
> >>      }
> >> @@ -182,23 +297,30 @@ void rproc_coredump(struct rproc *rproc)
> >>              elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
> >>              elf_phdr_set_p_align(class, phdr, 0);
> >>
> >> -            if (segment->dump) {
> >> -                    segment->dump(rproc, segment, data + offset);
> >> -            } else {
> >> -                    ptr = rproc_da_to_va(rproc, segment->da, segment->size);
> >> -                    if (!ptr) {
> >> -                            dev_err(&rproc->dev,
> >> -                                    "invalid coredump segment (%pad, %zu)\n",
> >> -                                    &segment->da, segment->size);
> >> -                            memset(data + offset, 0xff, segment->size);
> >> -                    } else {
> >> -                            memcpy(data + offset, ptr, segment->size);
> >> -                    }
> >> -            }
> >> +            if (dump_conf == RPROC_COREDUMP_DEFAULT)
> >> +                    rproc_copy_segment(rproc, data + offset, segment, 0,
> >> +                                       segment->size);
> >>
> >>              offset += elf_phdr_get_p_filesz(class, phdr);
> >>              phdr += elf_size_of_phdr(class);
> >>      }
> >>
> >> -    dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
> >> +    if (dump_conf == RPROC_COREDUMP_DEFAULT) {
> >> +            dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
> >> +            return;
> >> +    }
> >> +
> >> +    /* Initialize the dump state struct to be used by
> >> rproc_coredump_read */
> >> +    dump_state.rproc = rproc;
> >> +    dump_state.header = data;
> >> +    init_completion(&dump_state.dump_done);
> >> +
> >> +    dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
> >> +                  rproc_coredump_read, rproc_coredump_free);
> >> +
> >> +    /*
> >> +     * Wait until the dump is read and free is called. Data is freed
> >> +     * by devcoredump framework automatically after 5 minutes.
> >> +     */
> >> +    wait_for_completion(&dump_state.dump_done);
> >>  }
> >> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
> >> index e7b7bab..38d037d 100644
> >> --- a/include/linux/remoteproc.h
> >> +++ b/include/linux/remoteproc.h
> >> @@ -435,6 +435,20 @@ enum rproc_crash_type {
> >>  };
> >>
> >>  /**
> >> + * enum rproc_dump_mechanism - Coredump options for core
> >> + * @RPROC_COREDUMP_DEFAULT: Copy dump to separate buffer and carry on
> >> with
> >> +                            recovery
> >> + * @RPROC_COREDUMP_INLINE:  Read segments directly from device memory.
> >> Stall
> >> +                            recovery until all segments are read
> >> + * @RPROC_COREDUMP_DISABLED:        Don't perform any dump
> >> + */
> >> +enum rproc_dump_mechanism {
> >> +    RPROC_COREDUMP_DEFAULT,
> >> +    RPROC_COREDUMP_INLINE,
> >> +    RPROC_COREDUMP_DISABLED,
> >> +};
> >> +
> >> +/**
> >>   * struct rproc_dump_segment - segment info from ELF header
> >>   * @node:   list node related to the rproc segment list
> >>   * @da:             device address of the segment
> >> @@ -451,7 +465,7 @@ struct rproc_dump_segment {
> >>
> >>      void *priv;
> >>      void (*dump)(struct rproc *rproc, struct rproc_dump_segment
> >> *segment,
> >> -                 void *dest);
> >> +                 void *dest, size_t offset, size_t size);
> >>      loff_t offset;
> >>  };
> >>
> >> @@ -466,6 +480,7 @@ struct rproc_dump_segment {
> >>   * @dev: virtual device for refcounting and common remoteproc
> >> behavior
> >>   * @power: refcount of users who need this rproc powered up
> >>   * @state: state of the device
> >> + * @dump_conf: Currently selected coredump configuration
> >>   * @lock: lock which protects concurrent manipulations of the rproc
> >>   * @dbg_dir: debugfs directory of this rproc device
> >>   * @traces: list of trace buffers
> >> @@ -499,6 +514,7 @@ struct rproc {
> >>      struct device dev;
> >>      atomic_t power;
> >>      unsigned int state;
> >> +    enum rproc_dump_mechanism dump_conf;
> >>      struct mutex lock;
> >>      struct dentry *dbg_dir;
> >>      struct list_head traces;
> >> @@ -630,7 +646,8 @@ int rproc_coredump_add_custom_segment(struct rproc
> >> *rproc,
> >>                                    dma_addr_t da, size_t size,
> >>                                    void (*dumpfn)(struct rproc *rproc,
> >>                                                   struct rproc_dump_segment *segment,
> >> -                                                 void *dest),
> >> +                                                 void *dest, size_t offset,
> >> +                                                 size_t size),
> >>                                    void *priv);
> >>  int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16
> >> machine);
> >>
> >> --
> >> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
> >> Forum,
> >> a Linux Foundation Collaborative Project
> >>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size
  2020-07-14 17:18   ` Mathieu Poirier
  2020-07-14 18:12     ` Sibi Sankar
@ 2020-07-16  8:10     ` Sibi Sankar
  2020-07-16 18:24       ` rishabhb
  1 sibling, 1 reply; 15+ messages in thread
From: Sibi Sankar @ 2020-07-16  8:10 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: Rishabh Bhatnagar, linux-remoteproc, linux-kernel,
	bjorn.andersson, tsoni, psodagud, sidgup, linux-kernel-owner

On 2020-07-14 22:48, Mathieu Poirier wrote:
> On Thu, Jul 09, 2020 at 01:31:54PM -0700, Rishabh Bhatnagar wrote:
>> From: Sibi Sankar <sibis@codeaurora.org>
>> 
>> In order to land inline coredump support for mss, the dump_segment
>> function would need to support granularities less than the segment
>> size. This is achieved by replacing mask based tracking with size.
>> 
>> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>> ---
>>  drivers/remoteproc/qcom_q6v5_mss.c | 15 +++++++--------
>>  1 file changed, 7 insertions(+), 8 deletions(-)
>> 
>> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c 
>> b/drivers/remoteproc/qcom_q6v5_mss.c
>> index feb70283b..c6ce032 100644
>> --- a/drivers/remoteproc/qcom_q6v5_mss.c
>> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
>> @@ -181,8 +181,8 @@ struct q6v5 {
>>  	bool running;
>> 
>>  	bool dump_mba_loaded;
>> -	unsigned long dump_segment_mask;
>> -	unsigned long dump_complete_mask;
>> +	size_t current_dump_size;
>> +	size_t total_dump_size;
>> 
>>  	phys_addr_t mba_phys;
>>  	void *mba_region;
>> @@ -1203,7 +1203,6 @@ static void qcom_q6v5_dump_segment(struct rproc 
>> *rproc,
>>  {
>>  	int ret = 0;
>>  	struct q6v5 *qproc = rproc->priv;
>> -	unsigned long mask = BIT((unsigned long)segment->priv);
>>  	int offset = segment->da - qproc->mpss_reloc;
>>  	void *ptr = NULL;
>> 
>> @@ -1229,10 +1228,10 @@ static void qcom_q6v5_dump_segment(struct 
>> rproc *rproc,
>>  		memset(dest, 0xff, segment->size);
>>  	}
>> 
>> -	qproc->dump_segment_mask |= mask;
>> +	qproc->current_dump_size += segment->size;
>> 
>>  	/* Reclaim mba after copying segments */
>> -	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
>> +	if (qproc->current_dump_size == qproc->total_dump_size) {
>>  		if (qproc->dump_mba_loaded) {
>>  			/* Try to reset ownership back to Q6 */
>>  			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
>> @@ -1274,7 +1273,7 @@ static int q6v5_start(struct rproc *rproc)
>>  			"Failed to reclaim mba buffer system may become unstable\n");
>> 
>>  	/* Reset Dump Segment Mask */
>> -	qproc->dump_segment_mask = 0;
>> +	qproc->current_dump_size = 0;
>>  	qproc->running = true;
>> 
>>  	return 0;
>> @@ -1323,7 +1322,7 @@ static int 
>> qcom_q6v5_register_dump_segments(struct rproc *rproc,
>> 
>>  	ehdr = (struct elf32_hdr *)fw->data;
>>  	phdrs = (struct elf32_phdr *)(ehdr + 1);
>> -	qproc->dump_complete_mask = 0;
>> +	qproc->total_dump_size = 0;
>> 
>>  	for (i = 0; i < ehdr->e_phnum; i++) {
>>  		phdr = &phdrs[i];
>> @@ -1338,7 +1337,7 @@ static int 
>> qcom_q6v5_register_dump_segments(struct rproc *rproc,
>>  		if (ret)
>>  			break;
> 
> There is also no longer a need to carry the 'i' in:
> 
>                 ret = rproc_coredump_add_custom_segment(rproc, 
> phdr->p_paddr,
>                                                         phdr->p_memsz,
>                                                         
> qcom_q6v5_dump_segment,
>                                                         (void *)i);

I assume Rishabh will re-spin the
series today and this will be taken
care of as well.

>> 
>> -		qproc->dump_complete_mask |= BIT(i);
>> +		qproc->total_dump_size += phdr->p_memsz;
>>  	}
>> 
>>  	release_firmware(fw);
>> --
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
>> Forum,
>> a Linux Foundation Collaborative Project
>> 

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/4] remoteproc: Add inline coredump functionality
  2020-07-09 20:31 ` [PATCH v7 3/4] remoteproc: Add inline coredump functionality Rishabh Bhatnagar
  2020-07-14 17:53   ` Mathieu Poirier
@ 2020-07-16  8:25   ` Sibi Sankar
  2020-07-16 18:25     ` rishabhb
  1 sibling, 1 reply; 15+ messages in thread
From: Sibi Sankar @ 2020-07-16  8:25 UTC (permalink / raw)
  To: Rishabh Bhatnagar
  Cc: linux-remoteproc, linux-kernel, bjorn.andersson, mathieu.poirier,
	tsoni, psodagud, sidgup, linux-kernel-owner

On 2020-07-10 02:01, Rishabh Bhatnagar wrote:
> The current coredump implementation uses vmalloc area to copy
> all the segments. But this might put strain on low memory targets
> as the firmware size sometimes is in tens of MBs. The situation
> becomes worse if there are multiple remote processors undergoing
> recovery at the same time. This patch adds inline coredump
> functionality that avoids extra memory usage. This requires
> recovery to be halted until data is read by userspace and free
> function is called. Also modify the qcom_q6v5_mss driver to include
> size and offset in the segment dump function.
> 
> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
> Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
> ---
>  drivers/remoteproc/qcom_q6v5_mss.c       |  11 ++-
>  drivers/remoteproc/remoteproc_coredump.c | 160 
> +++++++++++++++++++++++++++----
>  include/linux/remoteproc.h               |  21 +++-
>  3 files changed, 166 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c
> b/drivers/remoteproc/qcom_q6v5_mss.c
> index c6ce032..79df354 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -1199,11 +1199,12 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
> 
>  static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  				   struct rproc_dump_segment *segment,
> -				   void *dest)
> +				   void *dest, size_t cp_offset, size_t size)
>  {
>  	int ret = 0;
>  	struct q6v5 *qproc = rproc->priv;
>  	int offset = segment->da - qproc->mpss_reloc;
> +	size_t cp_size = size ? size : segment->size;
>  	void *ptr = NULL;
> 
>  	/* Unlock mba before copying segments */
> @@ -1219,16 +1220,16 @@ static void qcom_q6v5_dump_segment(struct rproc 
> *rproc,
>  	}
> 
>  	if (!ret)
> -		ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
> +		ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, cp_size);
> 
>  	if (ptr) {
> -		memcpy(dest, ptr, segment->size);
> +		memcpy(dest, ptr, cp_size);
>  		iounmap(ptr);
>  	} else {
> -		memset(dest, 0xff, segment->size);
> +		memset(dest, 0xff, cp_size);
>  	}
> 
> -	qproc->current_dump_size += segment->size;
> +	qproc->current_dump_size += cp_size;
> 
>  	/* Reclaim mba after copying segments */
>  	if (qproc->current_dump_size == qproc->total_dump_size) {
> diff --git a/drivers/remoteproc/remoteproc_coredump.c
> b/drivers/remoteproc/remoteproc_coredump.c
> index ded0244..646886f 100644
> --- a/drivers/remoteproc/remoteproc_coredump.c
> +++ b/drivers/remoteproc/remoteproc_coredump.c
> @@ -5,6 +5,7 @@
>   * Copyright (c) 2020, The Linux Foundation. All rights reserved.
>   */
> 
> +#include <linux/completion.h>
>  #include <linux/devcoredump.h>
>  #include <linux/device.h>
>  #include <linux/kernel.h>
> @@ -12,6 +13,12 @@
>  #include "remoteproc_internal.h"
>  #include "remoteproc_elf_helpers.h"
> 
> +struct rproc_coredump_state {
> +	struct rproc *rproc;
> +	void *header;
> +	struct completion dump_done;
> +};
> +
>  /**
>   * rproc_coredump_cleanup() - clean up dump_segments list
>   * @rproc: the remote processor handle
> @@ -72,7 +79,8 @@ int rproc_coredump_add_custom_segment(struct rproc 
> *rproc,
>  				      dma_addr_t da, size_t size,
>  				      void (*dumpfn)(struct rproc *rproc,
>  						     struct rproc_dump_segment *segment,
> -						     void *dest),
> +						     void *dest, size_t offset,
> +						     size_t size),
>  				      void *priv)
>  {
>  	struct rproc_dump_segment *segment;
> @@ -114,12 +122,110 @@ int rproc_coredump_set_elf_info(struct rproc
> *rproc, u8 class, u16 machine)
>  }
>  EXPORT_SYMBOL(rproc_coredump_set_elf_info);
> 
> +static void rproc_coredump_free(void *data)
> +{
> +	struct rproc_coredump_state *dump_state = data;
> +
> +	complete(&dump_state->dump_done);
> +	vfree(dump_state->header);

Rishabh,

I observed a system hang when I trigger
recovery of a number of remoteprocs
simultaneously (consistently, after a
number of iterations). The hang goes away
if I free the header first and then issue
the dump completion, and that does seem
to be the right thing to do.
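
i.e., roughly (a sketch of the suggested reordering; dump_state lives
on the stack of rproc_coredump(), which is blocked in
wait_for_completion(), so nothing may touch it after the waiter is
released):

	static void rproc_coredump_free(void *data)
	{
		struct rproc_coredump_state *dump_state = data;

		/* Free the header before releasing the waiter: once
		 * complete() runs, rproc_coredump() returns and the
		 * on-stack dump_state goes out of scope.
		 */
		vfree(dump_state->header);
		complete(&dump_state->dump_done);
	}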

> +}
> +
> +static void *rproc_coredump_find_segment(loff_t user_offset,
> +					 struct list_head *segments,
> +					 size_t *data_left)
> +{
> +	struct rproc_dump_segment *segment;
> +
> +	list_for_each_entry(segment, segments, node) {
> +		if (user_offset < segment->size) {
> +			*data_left = segment->size - user_offset;
> +			return segment;
> +		}
> +		user_offset -= segment->size;
> +	}
> +
> +	*data_left = 0;
> +	return NULL;
> +}
> +
> +static void rproc_copy_segment(struct rproc *rproc, void *dest,
> +			       struct rproc_dump_segment *segment,
> +			       size_t offset, size_t size)
> +{
> +	void *ptr;
> +
> +	if (segment->dump) {
> +		segment->dump(rproc, segment, dest, offset, size);
> +	} else {
> +		ptr = rproc_da_to_va(rproc, segment->da + offset, size);
> +		if (!ptr) {
> +			dev_err(&rproc->dev,
> +				"invalid copy request for segment %pad with offset %zu and size 
> %zu)\n",
> +				&segment->da, offset, size);
> +			memset(dest, 0xff, size);
> +		} else {
> +			memcpy(dest, ptr, size);
> +		}
> +	}
> +}
> +
> +static ssize_t rproc_coredump_read(char *buffer, loff_t offset, size_t 
> count,
> +				   void *data, size_t header_sz)
> +{
> +	size_t seg_data, bytes_left = count;
> +	ssize_t copy_sz;
> +	struct rproc_dump_segment *seg;
> +	struct rproc_coredump_state *dump_state = data;
> +	struct rproc *rproc = dump_state->rproc;
> +	void *elfcore = dump_state->header;
> +
> +	/* Copy the vmalloc'ed header first. */
> +	if (offset < header_sz) {
> +		copy_sz = memory_read_from_buffer(buffer, count, &offset,
> +						  elfcore, header_sz);
> +
> +		return copy_sz;
> +	}
> +
> +	/*
> +	 * Find out the segment memory chunk to be copied based on offset.
> +	 * Keep copying data until count bytes are read.
> +	 */
> +	while (bytes_left) {
> +		seg = rproc_coredump_find_segment(offset - header_sz,
> +						  &rproc->dump_segments,
> +						  &seg_data);
> +		/* EOF check */
> +		if (!seg) {
> +			dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
> +				 offset);
> +			break;
> +		}
> +
> +		copy_sz = min_t(size_t, bytes_left, seg_data);
> +
> +		rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
> +				   copy_sz);
> +
> +		offset += copy_sz;
> +		buffer += copy_sz;
> +		bytes_left -= copy_sz;
> +	}
> +
> +	return count - bytes_left;
> +}
> +
>  /**
>   * rproc_coredump() - perform coredump
>   * @rproc:	rproc handle
>   *
>   * This function will generate an ELF header for the registered 
> segments
> - * and create a devcoredump device associated with rproc.
> + * and create a devcoredump device associated with rproc. Based on the
> + * coredump configuration this function will directly copy the 
> segments
> + * from device memory to userspace or copy segments from device memory 
> to
> + * a separate buffer, which can then be read by userspace.
> + * The first approach avoids using extra vmalloc memory. But it will 
> stall
> + * recovery flow until dump is read by userspace.
>   */
>  void rproc_coredump(struct rproc *rproc)
>  {
> @@ -129,11 +235,13 @@ void rproc_coredump(struct rproc *rproc)
>  	size_t data_size;
>  	size_t offset;
>  	void *data;
> -	void *ptr;
>  	u8 class = rproc->elf_class;
>  	int phnum = 0;
> +	struct rproc_coredump_state dump_state;
> +	enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
> 
> -	if (list_empty(&rproc->dump_segments))
> +	if (list_empty(&rproc->dump_segments) ||
> +	    dump_conf == RPROC_COREDUMP_DISABLED)
>  		return;
> 
>  	if (class == ELFCLASSNONE) {
> @@ -143,7 +251,14 @@ void rproc_coredump(struct rproc *rproc)
> 
>  	data_size = elf_size_of_hdr(class);
>  	list_for_each_entry(segment, &rproc->dump_segments, node) {
> -		data_size += elf_size_of_phdr(class) + segment->size;
> +		/*
> +		 * For default configuration buffer includes headers & segments.
> +		 * For inline dump buffer just includes headers as segments are
> +		 * directly read from device memory.
> +		 */
> +		data_size += elf_size_of_phdr(class);
> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
> +			data_size += segment->size;
> 
>  		phnum++;
>  	}
> @@ -182,23 +297,30 @@ void rproc_coredump(struct rproc *rproc)
>  		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
>  		elf_phdr_set_p_align(class, phdr, 0);
> 
> -		if (segment->dump) {
> -			segment->dump(rproc, segment, data + offset);
> -		} else {
> -			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
> -			if (!ptr) {
> -				dev_err(&rproc->dev,
> -					"invalid coredump segment (%pad, %zu)\n",
> -					&segment->da, segment->size);
> -				memset(data + offset, 0xff, segment->size);
> -			} else {
> -				memcpy(data + offset, ptr, segment->size);
> -			}
> -		}
> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
> +			rproc_copy_segment(rproc, data + offset, segment, 0,
> +					   segment->size);
> 
>  		offset += elf_phdr_get_p_filesz(class, phdr);
>  		phdr += elf_size_of_phdr(class);
>  	}
> 
> -	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
> +	if (dump_conf == RPROC_COREDUMP_DEFAULT) {
> +		dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
> +		return;
> +	}
> +
> +	/* Initialize the dump state struct to be used by rproc_coredump_read 
> */
> +	dump_state.rproc = rproc;
> +	dump_state.header = data;
> +	init_completion(&dump_state.dump_done);
> +
> +	dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
> +		      rproc_coredump_read, rproc_coredump_free);
> +
> +	/*
> +	 * Wait until the dump is read and free is called. Data is freed
> +	 * by devcoredump framework automatically after 5 minutes.
> +	 */
> +	wait_for_completion(&dump_state.dump_done);
>  }
> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
> index e7b7bab..38d037d 100644
> --- a/include/linux/remoteproc.h
> +++ b/include/linux/remoteproc.h
> @@ -435,6 +435,20 @@ enum rproc_crash_type {
>  };
> 
>  /**
> + * enum rproc_dump_mechanism - Coredump options for core
> + * @RPROC_COREDUMP_DEFAULT:	Copy dump to separate buffer and carry on 
> with
> +				recovery
> + * @RPROC_COREDUMP_INLINE:	Read segments directly from device memory. 
> Stall
> +				recovery until all segments are read
> + * @RPROC_COREDUMP_DISABLED:	Don't perform any dump
> + */
> +enum rproc_dump_mechanism {
> +	RPROC_COREDUMP_DEFAULT,
> +	RPROC_COREDUMP_INLINE,
> +	RPROC_COREDUMP_DISABLED,
> +};
> +
> +/**
>   * struct rproc_dump_segment - segment info from ELF header
>   * @node:	list node related to the rproc segment list
>   * @da:		device address of the segment
> @@ -451,7 +465,7 @@ struct rproc_dump_segment {
> 
>  	void *priv;
>  	void (*dump)(struct rproc *rproc, struct rproc_dump_segment *segment,
> -		     void *dest);
> +		     void *dest, size_t offset, size_t size);
>  	loff_t offset;
>  };
> 
> @@ -466,6 +480,7 @@ struct rproc_dump_segment {
>   * @dev: virtual device for refcounting and common remoteproc behavior
>   * @power: refcount of users who need this rproc powered up
>   * @state: state of the device
> + * @dump_conf: Currently selected coredump configuration
>   * @lock: lock which protects concurrent manipulations of the rproc
>   * @dbg_dir: debugfs directory of this rproc device
>   * @traces: list of trace buffers
> @@ -499,6 +514,7 @@ struct rproc {
>  	struct device dev;
>  	atomic_t power;
>  	unsigned int state;
> +	enum rproc_dump_mechanism dump_conf;
>  	struct mutex lock;
>  	struct dentry *dbg_dir;
>  	struct list_head traces;
> @@ -630,7 +646,8 @@ int rproc_coredump_add_custom_segment(struct rproc 
> *rproc,
>  				      dma_addr_t da, size_t size,
>  				      void (*dumpfn)(struct rproc *rproc,
>  						     struct rproc_dump_segment *segment,
> -						     void *dest),
> +						     void *dest, size_t offset,
> +						     size_t size),
>  				      void *priv);
>  int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 
> machine);

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size
  2020-07-16  8:10     ` Sibi Sankar
@ 2020-07-16 18:24       ` rishabhb
  0 siblings, 0 replies; 15+ messages in thread
From: rishabhb @ 2020-07-16 18:24 UTC (permalink / raw)
  To: Sibi Sankar
  Cc: Mathieu Poirier, linux-remoteproc, linux-kernel, bjorn.andersson,
	tsoni, psodagud, sidgup, linux-kernel-owner,
	linux-remoteproc-owner

On 2020-07-16 01:10, Sibi Sankar wrote:
> On 2020-07-14 22:48, Mathieu Poirier wrote:
>> On Thu, Jul 09, 2020 at 01:31:54PM -0700, Rishabh Bhatnagar wrote:
>>> From: Sibi Sankar <sibis@codeaurora.org>
>>> 
>>> In order to land inline coredump support for mss, the dump_segment
>>> function would need to support granularities less than the segment
>>> size. This is achieved by replacing mask based tracking with size.
>>> 
>>> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
>>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>>> ---
>>>  drivers/remoteproc/qcom_q6v5_mss.c | 15 +++++++--------
>>>  1 file changed, 7 insertions(+), 8 deletions(-)
>>> 
>>> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c 
>>> b/drivers/remoteproc/qcom_q6v5_mss.c
>>> index feb70283b..c6ce032 100644
>>> --- a/drivers/remoteproc/qcom_q6v5_mss.c
>>> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
>>> @@ -181,8 +181,8 @@ struct q6v5 {
>>>  	bool running;
>>> 
>>>  	bool dump_mba_loaded;
>>> -	unsigned long dump_segment_mask;
>>> -	unsigned long dump_complete_mask;
>>> +	size_t current_dump_size;
>>> +	size_t total_dump_size;
>>> 
>>>  	phys_addr_t mba_phys;
>>>  	void *mba_region;
>>> @@ -1203,7 +1203,6 @@ static void qcom_q6v5_dump_segment(struct rproc 
>>> *rproc,
>>>  {
>>>  	int ret = 0;
>>>  	struct q6v5 *qproc = rproc->priv;
>>> -	unsigned long mask = BIT((unsigned long)segment->priv);
>>>  	int offset = segment->da - qproc->mpss_reloc;
>>>  	void *ptr = NULL;
>>> 
>>> @@ -1229,10 +1228,10 @@ static void qcom_q6v5_dump_segment(struct 
>>> rproc *rproc,
>>>  		memset(dest, 0xff, segment->size);
>>>  	}
>>> 
>>> -	qproc->dump_segment_mask |= mask;
>>> +	qproc->current_dump_size += segment->size;
>>> 
>>>  	/* Reclaim mba after copying segments */
>>> -	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
>>> +	if (qproc->current_dump_size == qproc->total_dump_size) {
>>>  		if (qproc->dump_mba_loaded) {
>>>  			/* Try to reset ownership back to Q6 */
>>>  			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
>>> @@ -1274,7 +1273,7 @@ static int q6v5_start(struct rproc *rproc)
>>>  			"Failed to reclaim mba buffer system may become unstable\n");
>>> 
>>>  	/* Reset Dump Segment Mask */
>>> -	qproc->dump_segment_mask = 0;
>>> +	qproc->current_dump_size = 0;
>>>  	qproc->running = true;
>>> 
>>>  	return 0;
>>> @@ -1323,7 +1322,7 @@ static int 
>>> qcom_q6v5_register_dump_segments(struct rproc *rproc,
>>> 
>>>  	ehdr = (struct elf32_hdr *)fw->data;
>>>  	phdrs = (struct elf32_phdr *)(ehdr + 1);
>>> -	qproc->dump_complete_mask = 0;
>>> +	qproc->total_dump_size = 0;
>>> 
>>>  	for (i = 0; i < ehdr->e_phnum; i++) {
>>>  		phdr = &phdrs[i];
>>> @@ -1338,7 +1337,7 @@ static int 
>>> qcom_q6v5_register_dump_segments(struct rproc *rproc,
>>>  		if (ret)
>>>  			break;
>> 
>> There is also no longer a need to carry the 'i' in:
>> 
>>                 ret = rproc_coredump_add_custom_segment(rproc, 
>> phdr->p_paddr,
>>                                                         phdr->p_memsz,
>>                                                         
>> qcom_q6v5_dump_segment,
>>                                                         (void *)i);
> 
> I assume Rishabh will re-spin the
> series today and this will be taken
> care of as well.
> 
Hi Sibi,
I'll respin and add the fixes.
Thanks
>>> 
>>> -		qproc->dump_complete_mask |= BIT(i);
>>> +		qproc->total_dump_size += phdr->p_memsz;
>>>  	}
>>> 
>>>  	release_firmware(fw);
>>> --
>>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
>>> Forum,
>>> a Linux Foundation Collaborative Project
>>> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/4] remoteproc: Add inline coredump functionality
  2020-07-16  8:25   ` Sibi Sankar
@ 2020-07-16 18:25     ` rishabhb
  0 siblings, 0 replies; 15+ messages in thread
From: rishabhb @ 2020-07-16 18:25 UTC (permalink / raw)
  To: Sibi Sankar
  Cc: linux-remoteproc, linux-kernel, bjorn.andersson, mathieu.poirier,
	tsoni, psodagud, sidgup, linux-kernel-owner

On 2020-07-16 01:25, Sibi Sankar wrote:
> On 2020-07-10 02:01, Rishabh Bhatnagar wrote:
>> The current coredump implementation uses vmalloc area to copy
>> all the segments. But this might put strain on low memory targets
>> as the firmware size sometimes is in tens of MBs. The situation
>> becomes worse if there are multiple remote processors undergoing
>> recovery at the same time. This patch adds inline coredump
>> functionality that avoids extra memory usage. This requires
>> recovery to be halted until data is read by userspace and free
>> function is called. Also modify the qcom_q6v5_mss driver to include
>> size and offset in the segment dump function.
>> 
>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>> Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
>> ---
>>  drivers/remoteproc/qcom_q6v5_mss.c       |  11 ++-
>>  drivers/remoteproc/remoteproc_coredump.c | 160 
>> +++++++++++++++++++++++++++----
>>  include/linux/remoteproc.h               |  21 +++-
>>  3 files changed, 166 insertions(+), 26 deletions(-)
>> 
>> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c
>> b/drivers/remoteproc/qcom_q6v5_mss.c
>> index c6ce032..79df354 100644
>> --- a/drivers/remoteproc/qcom_q6v5_mss.c
>> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
>> @@ -1199,11 +1199,12 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
>> 
>>  static void qcom_q6v5_dump_segment(struct rproc *rproc,
>>  				   struct rproc_dump_segment *segment,
>> -				   void *dest)
>> +				   void *dest, size_t cp_offset, size_t size)
>>  {
>>  	int ret = 0;
>>  	struct q6v5 *qproc = rproc->priv;
>>  	int offset = segment->da - qproc->mpss_reloc;
>> +	size_t cp_size = size ? size : segment->size;
>>  	void *ptr = NULL;
>> 
>>  	/* Unlock mba before copying segments */
>> @@ -1219,16 +1220,16 @@ static void qcom_q6v5_dump_segment(struct 
>> rproc *rproc,
>>  	}
>> 
>>  	if (!ret)
>> -		ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
>> +		ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, cp_size);
>> 
>>  	if (ptr) {
>> -		memcpy(dest, ptr, segment->size);
>> +		memcpy(dest, ptr, cp_size);
>>  		iounmap(ptr);
>>  	} else {
>> -		memset(dest, 0xff, segment->size);
>> +		memset(dest, 0xff, cp_size);
>>  	}
>> 
>> -	qproc->current_dump_size += segment->size;
>> +	qproc->current_dump_size += cp_size;
>> 
>>  	/* Reclaim mba after copying segments */
>>  	if (qproc->current_dump_size == qproc->total_dump_size) {
>> diff --git a/drivers/remoteproc/remoteproc_coredump.c
>> b/drivers/remoteproc/remoteproc_coredump.c
>> index ded0244..646886f 100644
>> --- a/drivers/remoteproc/remoteproc_coredump.c
>> +++ b/drivers/remoteproc/remoteproc_coredump.c
>> @@ -5,6 +5,7 @@
>>   * Copyright (c) 2020, The Linux Foundation. All rights reserved.
>>   */
>> 
>> +#include <linux/completion.h>
>>  #include <linux/devcoredump.h>
>>  #include <linux/device.h>
>>  #include <linux/kernel.h>
>> @@ -12,6 +13,12 @@
>>  #include "remoteproc_internal.h"
>>  #include "remoteproc_elf_helpers.h"
>> 
>> +struct rproc_coredump_state {
>> +	struct rproc *rproc;
>> +	void *header;
>> +	struct completion dump_done;
>> +};
>> +
>>  /**
>>   * rproc_coredump_cleanup() - clean up dump_segments list
>>   * @rproc: the remote processor handle
>> @@ -72,7 +79,8 @@ int rproc_coredump_add_custom_segment(struct rproc 
>> *rproc,
>>  				      dma_addr_t da, size_t size,
>>  				      void (*dumpfn)(struct rproc *rproc,
>>  						     struct rproc_dump_segment *segment,
>> -						     void *dest),
>> +						     void *dest, size_t offset,
>> +						     size_t size),
>>  				      void *priv)
>>  {
>>  	struct rproc_dump_segment *segment;
>> @@ -114,12 +122,110 @@ int rproc_coredump_set_elf_info(struct rproc
>> *rproc, u8 class, u16 machine)
>>  }
>>  EXPORT_SYMBOL(rproc_coredump_set_elf_info);
>> 
>> +static void rproc_coredump_free(void *data)
>> +{
>> +	struct rproc_coredump_state *dump_state = data;
>> +
>> +	complete(&dump_state->dump_done);
>> +	vfree(dump_state->header);
> 
> Rishabh,
> 
> I observed a system hang when I trigger
> recovery of a number of remoteprocs
> simultaneously (consistently, after a
> number of iterations). The hang goes away
> if I free the header first and then issue
> the dump completion, and that does seem
> to be the right thing to do.
> 
Makes sense, I'll switch it in the next patchset.
>> +}
>> +
>> +static void *rproc_coredump_find_segment(loff_t user_offset,
>> +					 struct list_head *segments,
>> +					 size_t *data_left)
>> +{
>> +	struct rproc_dump_segment *segment;
>> +
>> +	list_for_each_entry(segment, segments, node) {
>> +		if (user_offset < segment->size) {
>> +			*data_left = segment->size - user_offset;
>> +			return segment;
>> +		}
>> +		user_offset -= segment->size;
>> +	}
>> +
>> +	*data_left = 0;
>> +	return NULL;
>> +}
>> +
>> +static void rproc_copy_segment(struct rproc *rproc, void *dest,
>> +			       struct rproc_dump_segment *segment,
>> +			       size_t offset, size_t size)
>> +{
>> +	void *ptr;
>> +
>> +	if (segment->dump) {
>> +		segment->dump(rproc, segment, dest, offset, size);
>> +	} else {
>> +		ptr = rproc_da_to_va(rproc, segment->da + offset, size);
>> +		if (!ptr) {
>> +			dev_err(&rproc->dev,
>> +				"invalid copy request for segment %pad with offset %zu and size %zu)\n",
>> +				&segment->da, offset, size);
>> +			memset(dest, 0xff, size);
>> +		} else {
>> +			memcpy(dest, ptr, size);
>> +		}
>> +	}
>> +}
>> +
>> +static ssize_t rproc_coredump_read(char *buffer, loff_t offset, size_t count,
>> +				   void *data, size_t header_sz)
>> +{
>> +	size_t seg_data, bytes_left = count;
>> +	ssize_t copy_sz;
>> +	struct rproc_dump_segment *seg;
>> +	struct rproc_coredump_state *dump_state = data;
>> +	struct rproc *rproc = dump_state->rproc;
>> +	void *elfcore = dump_state->header;
>> +
>> +	/* Copy the vmalloc'ed header first. */
>> +	if (offset < header_sz) {
>> +		copy_sz = memory_read_from_buffer(buffer, count, &offset,
>> +						  elfcore, header_sz);
>> +
>> +		return copy_sz;
>> +	}
>> +
>> +	/*
>> +	 * Find out the segment memory chunk to be copied based on offset.
>> +	 * Keep copying data until count bytes are read.
>> +	 */
>> +	while (bytes_left) {
>> +		seg = rproc_coredump_find_segment(offset - header_sz,
>> +						  &rproc->dump_segments,
>> +						  &seg_data);
>> +		/* EOF check */
>> +		if (!seg) {
>> +			dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
>> +				 offset);
>> +			break;
>> +		}
>> +
>> +		copy_sz = min_t(size_t, bytes_left, seg_data);
>> +
>> +		rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
>> +				   copy_sz);
>> +
>> +		offset += copy_sz;
>> +		buffer += copy_sz;
>> +		bytes_left -= copy_sz;
>> +	}
>> +
>> +	return count - bytes_left;
>> +}
>> +
>>  /**
>>   * rproc_coredump() - perform coredump
>>   * @rproc:	rproc handle
>>   *
>>   * This function will generate an ELF header for the registered segments
>> - * and create a devcoredump device associated with rproc.
>> + * and create a devcoredump device associated with rproc. Based on the
>> + * coredump configuration this function will directly copy the segments
>> + * from device memory to userspace or copy segments from device memory to
>> + * a separate buffer, which can then be read by userspace.
>> + * The first approach avoids using extra vmalloc memory. But it will stall
>> + * recovery flow until dump is read by userspace.
>>   */
>>  void rproc_coredump(struct rproc *rproc)
>>  {
>> @@ -129,11 +235,13 @@ void rproc_coredump(struct rproc *rproc)
>>  	size_t data_size;
>>  	size_t offset;
>>  	void *data;
>> -	void *ptr;
>>  	u8 class = rproc->elf_class;
>>  	int phnum = 0;
>> +	struct rproc_coredump_state dump_state;
>> +	enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
>> 
>> -	if (list_empty(&rproc->dump_segments))
>> +	if (list_empty(&rproc->dump_segments) ||
>> +	    dump_conf == RPROC_COREDUMP_DISABLED)
>>  		return;
>> 
>>  	if (class == ELFCLASSNONE) {
>> @@ -143,7 +251,14 @@ void rproc_coredump(struct rproc *rproc)
>> 
>>  	data_size = elf_size_of_hdr(class);
>>  	list_for_each_entry(segment, &rproc->dump_segments, node) {
>> -		data_size += elf_size_of_phdr(class) + segment->size;
>> +		/*
>> +		 * For default configuration buffer includes headers & segments.
>> +		 * For inline dump buffer just includes headers as segments are
>> +		 * directly read from device memory.
>> +		 */
>> +		data_size += elf_size_of_phdr(class);
>> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
>> +			data_size += segment->size;
>> 
>>  		phnum++;
>>  	}
>> @@ -182,23 +297,30 @@ void rproc_coredump(struct rproc *rproc)
>>  		elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
>>  		elf_phdr_set_p_align(class, phdr, 0);
>> 
>> -		if (segment->dump) {
>> -			segment->dump(rproc, segment, data + offset);
>> -		} else {
>> -			ptr = rproc_da_to_va(rproc, segment->da, segment->size);
>> -			if (!ptr) {
>> -				dev_err(&rproc->dev,
>> -					"invalid coredump segment (%pad, %zu)\n",
>> -					&segment->da, segment->size);
>> -				memset(data + offset, 0xff, segment->size);
>> -			} else {
>> -				memcpy(data + offset, ptr, segment->size);
>> -			}
>> -		}
>> +		if (dump_conf == RPROC_COREDUMP_DEFAULT)
>> +			rproc_copy_segment(rproc, data + offset, segment, 0,
>> +					   segment->size);
>> 
>>  		offset += elf_phdr_get_p_filesz(class, phdr);
>>  		phdr += elf_size_of_phdr(class);
>>  	}
>> 
>> -	dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
>> +	if (dump_conf == RPROC_COREDUMP_DEFAULT) {
>> +		dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
>> +		return;
>> +	}
>> +
>> +	/* Initialize the dump state struct to be used by rproc_coredump_read */
>> +	dump_state.rproc = rproc;
>> +	dump_state.header = data;
>> +	init_completion(&dump_state.dump_done);
>> +
>> +	dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
>> +		      rproc_coredump_read, rproc_coredump_free);
>> +
>> +	/*
>> +	 * Wait until the dump is read and free is called. Data is freed
>> +	 * by devcoredump framework automatically after 5 minutes.
>> +	 */
>> +	wait_for_completion(&dump_state.dump_done);
>>  }
>> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
>> index e7b7bab..38d037d 100644
>> --- a/include/linux/remoteproc.h
>> +++ b/include/linux/remoteproc.h
>> @@ -435,6 +435,20 @@ enum rproc_crash_type {
>>  };
>> 
>>  /**
>> + * enum rproc_dump_mechanism - Coredump options for core
>> + * @RPROC_COREDUMP_DEFAULT:	Copy dump to separate buffer and carry on with
>> +				recovery
>> + * @RPROC_COREDUMP_INLINE:	Read segments directly from device memory. Stall
>> +				recovery until all segments are read
>> + * @RPROC_COREDUMP_DISABLED:	Don't perform any dump
>> + */
>> +enum rproc_dump_mechanism {
>> +	RPROC_COREDUMP_DEFAULT,
>> +	RPROC_COREDUMP_INLINE,
>> +	RPROC_COREDUMP_DISABLED,
>> +};
>> +
>> +/**
>>   * struct rproc_dump_segment - segment info from ELF header
>>   * @node:	list node related to the rproc segment list
>>   * @da:		device address of the segment
>> @@ -451,7 +465,7 @@ struct rproc_dump_segment {
>> 
>>  	void *priv;
>>  	void (*dump)(struct rproc *rproc, struct rproc_dump_segment *segment,
>> -		     void *dest);
>> +		     void *dest, size_t offset, size_t size);
>>  	loff_t offset;
>>  };
>> 
>> @@ -466,6 +480,7 @@ struct rproc_dump_segment {
>>   * @dev: virtual device for refcounting and common remoteproc behavior
>>   * @power: refcount of users who need this rproc powered up
>>   * @state: state of the device
>> + * @dump_conf: Currently selected coredump configuration
>>   * @lock: lock which protects concurrent manipulations of the rproc
>>   * @dbg_dir: debugfs directory of this rproc device
>>   * @traces: list of trace buffers
>> @@ -499,6 +514,7 @@ struct rproc {
>>  	struct device dev;
>>  	atomic_t power;
>>  	unsigned int state;
>> +	enum rproc_dump_mechanism dump_conf;
>>  	struct mutex lock;
>>  	struct dentry *dbg_dir;
>>  	struct list_head traces;
>> @@ -630,7 +646,8 @@ int rproc_coredump_add_custom_segment(struct rproc *rproc,
>>  				      dma_addr_t da, size_t size,
>>  				      void (*dumpfn)(struct rproc *rproc,
>>  						     struct rproc_dump_segment *segment,
>> -						     void *dest),
>> +						     void *dest, size_t offset,
>> +						     size_t size),
>>  				      void *priv);
>>  int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine);
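
Note for readers following the inline-dump flow: the heart of
rproc_coredump_read() above is mapping the flat offset that devcoredump
hands back into the list of variable-sized segments, after first serving
the vmalloc'ed ELF header. The standalone sketch below reproduces only
that arithmetic; struct demo_segment, find_segment() and the sizes in
main() are invented for illustration and are not part of the patch.

#include <stddef.h>
#include <stdio.h>

struct demo_segment {
	size_t size;
};

/* Map a segment-space offset to (segment index, bytes left in that segment). */
static int find_segment(const struct demo_segment *segs, int nsegs,
			size_t user_offset, size_t *data_left)
{
	int i;

	for (i = 0; i < nsegs; i++) {
		if (user_offset < segs[i].size) {
			*data_left = segs[i].size - user_offset;
			return i;
		}
		user_offset -= segs[i].size;
	}
	*data_left = 0;
	return -1;	/* ran past the last segment: EOF */
}

int main(void)
{
	/* Three fake segments behind a pretend ELF header of 0x34 bytes. */
	const struct demo_segment segs[] = { { 0x100 }, { 0x40 }, { 0x200 } };
	const size_t header_sz = 0x34;
	const size_t offsets[] = { 0x34, 0x134, 0x170, 0x374 };
	size_t i, left;

	for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
		int seg = find_segment(segs, 3, offsets[i] - header_sz, &left);

		if (seg < 0)
			printf("offset %#zx: EOF\n", offsets[i]);
		else
			printf("offset %#zx: segment %d, %#zx bytes left\n",
			       offsets[i], seg, left);
	}
	return 0;
}

The same walk is why inline mode must stall recovery: segments are only
read out of device memory at read() time, so the wait_for_completion()
at the end of rproc_coredump() holds off recovery until userspace drains
the dump (or devcoredump frees the data itself after its timeout).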

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size
  2020-07-09 20:31 ` [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size Rishabh Bhatnagar
  2020-07-14 17:18   ` Mathieu Poirier
@ 2020-07-20 16:57   ` Mathieu Poirier
  1 sibling, 0 replies; 15+ messages in thread
From: Mathieu Poirier @ 2020-07-20 16:57 UTC (permalink / raw)
  To: Rishabh Bhatnagar
  Cc: linux-remoteproc, linux-kernel, bjorn.andersson, tsoni, psodagud,
	sidgup, Sibi Sankar

On Thu, Jul 09, 2020 at 01:31:54PM -0700, Rishabh Bhatnagar wrote:
> From: Sibi Sankar <sibis@codeaurora.org>
> 
> In order to land inline coredump support for mss, the dump_segment
> function would need to support granularities less than the segment
> size. This is achieved by replacing mask based tracking with size.
> 
> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>

Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>

> ---
>  drivers/remoteproc/qcom_q6v5_mss.c | 15 +++++++--------
>  1 file changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
> index feb70283b..c6ce032 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -181,8 +181,8 @@ struct q6v5 {
>  	bool running;
>  
>  	bool dump_mba_loaded;
> -	unsigned long dump_segment_mask;
> -	unsigned long dump_complete_mask;
> +	size_t current_dump_size;
> +	size_t total_dump_size;
>  
>  	phys_addr_t mba_phys;
>  	void *mba_region;
> @@ -1203,7 +1203,6 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  {
>  	int ret = 0;
>  	struct q6v5 *qproc = rproc->priv;
> -	unsigned long mask = BIT((unsigned long)segment->priv);
>  	int offset = segment->da - qproc->mpss_reloc;
>  	void *ptr = NULL;
>  
> @@ -1229,10 +1228,10 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
>  		memset(dest, 0xff, segment->size);
>  	}
>  
> -	qproc->dump_segment_mask |= mask;
> +	qproc->current_dump_size += segment->size;
>  
>  	/* Reclaim mba after copying segments */
> -	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
> +	if (qproc->current_dump_size == qproc->total_dump_size) {
>  		if (qproc->dump_mba_loaded) {
>  			/* Try to reset ownership back to Q6 */
>  			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
> @@ -1274,7 +1273,7 @@ static int q6v5_start(struct rproc *rproc)
>  			"Failed to reclaim mba buffer system may become unstable\n");
>  
>  	/* Reset Dump Segment Mask */
> -	qproc->dump_segment_mask = 0;
> +	qproc->current_dump_size = 0;
>  	qproc->running = true;
>  
>  	return 0;
> @@ -1323,7 +1322,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
>  
>  	ehdr = (struct elf32_hdr *)fw->data;
>  	phdrs = (struct elf32_phdr *)(ehdr + 1);
> -	qproc->dump_complete_mask = 0;
> +	qproc->total_dump_size = 0;
>  
>  	for (i = 0; i < ehdr->e_phnum; i++) {
>  		phdr = &phdrs[i];
> @@ -1338,7 +1337,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
>  		if (ret)
>  			break;
>  
> -		qproc->dump_complete_mask |= BIT(i);
> +		qproc->total_dump_size += phdr->p_memsz;
>  	}
>  
>  	release_firmware(fw);
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 
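
As an aside on why the mask-to-size switch matters for the inline path:
a completion bitmask can only record "segment i was dumped in full,
once", while a byte counter also reaches the reclaim condition when a
segment is delivered in several sub-segment chunks. A toy model of just
that bookkeeping, with all names invented for illustration:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Toy model of the tracking change: total/current byte counters hit
 * the "done" condition even when one segment arrives as several
 * chunks, which a per-segment completion bitmask cannot express.
 */
struct demo_tracker {
	size_t total_dump_size;
	size_t current_dump_size;
};

/* Returns true once every byte of every segment has been dumped. */
static bool chunk_dumped(struct demo_tracker *t, size_t chunk_size)
{
	t->current_dump_size += chunk_size;
	return t->current_dump_size == t->total_dump_size;
}

int main(void)
{
	struct demo_tracker t = { .total_dump_size = 0x300 };

	/* Segment 0 (0x100 bytes) arrives in two chunks, segment 1 in one. */
	printf("done=%d\n", chunk_dumped(&t, 0x80));	/* done=0 */
	printf("done=%d\n", chunk_dumped(&t, 0x80));	/* done=0 */
	printf("done=%d\n", chunk_dumped(&t, 0x200));	/* done=1 */
	return 0;
}

In the driver, a true result corresponds to the reclaim-mba branch in
qcom_q6v5_dump_segment(), where current_dump_size is compared against
total_dump_size.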

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2020-07-20 16:57 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-09 20:31 [PATCH v7 0/4] Extend coredump functionality Rishabh Bhatnagar
2020-07-09 20:31 ` [PATCH v7 1/4] remoteproc: Move coredump functionality to a new file Rishabh Bhatnagar
2020-07-09 20:31 ` [PATCH v1 2/4] remoteproc: qcom_q6v5_mss: Replace mask based tracking with size Rishabh Bhatnagar
2020-07-14 17:18   ` Mathieu Poirier
2020-07-14 18:12     ` Sibi Sankar
2020-07-16  8:10     ` Sibi Sankar
2020-07-16 18:24       ` rishabhb
2020-07-20 16:57   ` Mathieu Poirier
2020-07-09 20:31 ` [PATCH v7 3/4] remoteproc: Add inline coredump functionality Rishabh Bhatnagar
2020-07-14 17:53   ` Mathieu Poirier
2020-07-15 19:22     ` rishabhb
2020-07-15 21:55       ` Mathieu Poirier
2020-07-16  8:25   ` Sibi Sankar
2020-07-16 18:25     ` rishabhb
2020-07-09 20:31 ` [PATCH v7 4/4] remoteproc: Add coredump debugfs entry Rishabh Bhatnagar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).