* [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
@ 2019-11-12 11:08 Bhupesh Sharma
  2019-11-12 11:08 ` [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available) Bhupesh Sharma
                   ` (5 more replies)
  0 siblings, 6 replies; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-12 11:08 UTC (permalink / raw)
  To: kexec; +Cc: John Donnelly, bhsharma, bhupesh.linux, Kazuhito Hagio

Changes since v3:
----------------
- v3 can be seen here:
  http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
- Added a new patch (via [PATCH 4/4]) which marks the '--mem-usage' option
  as unsupported for the arm64 architecture. Since newer arm64 kernels
  support both 48-bit and 52-bit VA address spaces with a single binary,
  the address of kernel symbols like _stext, which could earlier be used
  to determine the VA_BITS value, can no longer be used to determine
  whether VA_BITS is set to 48 or 52 in the kernel. Hence, for now, it
  makes sense to mark the '--mem-usage' option as unsupported for arm64
  until we have more clarity from the arm64 kernel maintainers on how to
  handle this in future kernel/makedumpfile versions.

Changes since v2:
----------------
- v2 can be seen here:
  http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
- I missed some of Kazu's comments on the LVA v1 patch when I sent out
  v2, so they are addressed now in v3.
- Also added a patch that adds a tree-wide feature to read
  'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).

Changes since v1:
----------------
- v1 was sent as two separate patches:
  http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
  (ARMv8.2-LPA)
  http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
  (ARMv8.2-LVA)
- v2 combined the two in a single patchset and also addresses Kazu's
  review comments.

This patchset adds support for ARMv8.2 extensions in makedumpfile code.
I cover the following four cases with this patchset:
 - 48-bit kernel VA + 52-bit PA (LPA)
 - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
 - 48-bit kernel VA + 52-bit user-space VA (LVA)
 - 52-bit kernel VA + 52-bit user-space VA (Full LVA)

This has been tested for the following use cases:
1. Creating a dumpfile using /proc/vmcore,
2. Creating a dumpfile using /proc/kcore, and
3. Post-processing a vmcore.

I have tested this patchset on the following platforms, with kernels
which do and do not support ARMv8.2 features:
1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
   ampere-osprey.
2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
   simulation model).

A preparation patch has also been added to this patchset which adds a
common feature for archs (except arm64, for which similar support is
added via a subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
vmcoreinfo (if available).

I recently posted two kernel patches (see [0] and [1]) which append
'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
code, so that user-space code can benefit from the same.

This patchset ensures backward compatibility for kernel versions in
which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
vmcoreinfo.

[0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
[1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html

Cc: John Donnelly <john.p.donnelly@oracle.com>
Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
Cc: kexec@lists.infradead.org

Bhupesh Sharma (4):
  tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
  makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
    support)
  makedumpfile: Mark --mem-usage option unsupported for arm64

 arch/arm.c     |   8 +-
 arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
 arch/ia64.c    |   7 +-
 arch/ppc.c     |   8 +-
 arch/ppc64.c   |  49 ++++---
 arch/s390x.c   |  29 ++--
 arch/sparc64.c |   9 +-
 arch/x86.c     |  34 +++--
 arch/x86_64.c  |  27 ++--
 makedumpfile.c |   7 +
 makedumpfile.h |   3 +-
 11 files changed, 439 insertions(+), 180 deletions(-)

-- 
2.7.4


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


* [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
  2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
@ 2019-11-12 11:08 ` Bhupesh Sharma
  2019-12-04 17:34   ` Kazuhito Hagio
  2019-11-12 11:08 ` [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-12 11:08 UTC (permalink / raw)
  To: kexec; +Cc: John Donnelly, bhsharma, bhupesh.linux, Kazuhito Hagio

This patch adds a common feature for archs (except arm64, for which
similar support is added via a subsequent patch) to retrieve
'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).

I recently posted a kernel patch (see [0]) which appends
'MAX_PHYSMEM_BITS' to vmcoreinfo in the core code itself rather than
in arch-specific code, so that user-space code can also benefit from
this addition to the vmcoreinfo and use it as a standard way of
determining 'SECTIONS_SHIFT' value in 'makedumpfile' utility.

This patch ensures backward compatibility for kernel versions in which
'MAX_PHYSMEM_BITS' is not available in vmcoreinfo.

[0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html

Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
Cc: John Donnelly <john.p.donnelly@oracle.com>
Cc: kexec@lists.infradead.org
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
---
 arch/arm.c     |  8 +++++++-
 arch/ia64.c    |  7 ++++++-
 arch/ppc.c     |  8 +++++++-
 arch/ppc64.c   | 49 ++++++++++++++++++++++++++++---------------------
 arch/s390x.c   | 29 ++++++++++++++++++-----------
 arch/sparc64.c |  9 +++++++--
 arch/x86.c     | 34 ++++++++++++++++++++--------------
 arch/x86_64.c  | 27 ++++++++++++++++-----------
 8 files changed, 109 insertions(+), 62 deletions(-)

diff --git a/arch/arm.c b/arch/arm.c
index af7442ac70bf..33536fc4dfc9 100644
--- a/arch/arm.c
+++ b/arch/arm.c
@@ -81,7 +81,13 @@ int
 get_machdep_info_arm(void)
 {
 	info->page_offset = SYMBOL(_stext) & 0xffff0000UL;
-	info->max_physmem_bits = _MAX_PHYSMEM_BITS;
+
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+	else
+		info->max_physmem_bits = _MAX_PHYSMEM_BITS;
+
 	info->kernel_start = SYMBOL(_stext);
 	info->section_size_bits = _SECTION_SIZE_BITS;
 
diff --git a/arch/ia64.c b/arch/ia64.c
index 6c33cc7c8288..fb44dda47172 100644
--- a/arch/ia64.c
+++ b/arch/ia64.c
@@ -85,7 +85,12 @@ get_machdep_info_ia64(void)
 	}
 
 	info->section_size_bits = _SECTION_SIZE_BITS;
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
+
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+	else
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
 
 	return TRUE;
 }
diff --git a/arch/ppc.c b/arch/ppc.c
index 37c6a3b60cd3..ed9447427a30 100644
--- a/arch/ppc.c
+++ b/arch/ppc.c
@@ -31,7 +31,13 @@ get_machdep_info_ppc(void)
 	unsigned long vmlist, vmap_area_list, vmalloc_start;
 
 	info->section_size_bits = _SECTION_SIZE_BITS;
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
+
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+	else
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
+
 	info->page_offset = __PAGE_OFFSET;
 
 	if (SYMBOL(_stext) != NOT_FOUND_SYMBOL)
diff --git a/arch/ppc64.c b/arch/ppc64.c
index 9d8f2525f608..a3984eebdced 100644
--- a/arch/ppc64.c
+++ b/arch/ppc64.c
@@ -466,30 +466,37 @@ int
 set_ppc64_max_physmem_bits(void)
 {
 	long array_len = ARRAY_LENGTH(mem_section);
-	/*
-	 * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
-	 * newer kernels 3.7 onwards uses 46 bits.
-	 */
-
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
-	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
-		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
-		return TRUE;
-
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
-	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
-		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
-		return TRUE;
 
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
-	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
-		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
 		return TRUE;
+	} else {
+		/*
+		 * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
+		 * newer kernels 3.7 onwards uses 46 bits.
+		 */
 
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
-	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
-		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
-		return TRUE;
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
+		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
+				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+			return TRUE;
+
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
+		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
+				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+			return TRUE;
+
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
+		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
+				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+			return TRUE;
+
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
+		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
+				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+			return TRUE;
+	}
 
 	return FALSE;
 }
diff --git a/arch/s390x.c b/arch/s390x.c
index bf9d58e54fb7..4d17a783e5bd 100644
--- a/arch/s390x.c
+++ b/arch/s390x.c
@@ -63,20 +63,27 @@ int
 set_s390x_max_physmem_bits(void)
 {
 	long array_len = ARRAY_LENGTH(mem_section);
-	/*
-	 * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
-	 * newer kernels uses 46 bits.
-	 */
 
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
-	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
-		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
 		return TRUE;
+	} else {
+		/*
+		 * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
+		 * newer kernels uses 46 bits.
+		 */
 
-	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
-	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
-		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
-		return TRUE;
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
+		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
+				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+			return TRUE;
+
+		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
+		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
+				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
+			return TRUE;
+	}
 
 	return FALSE;
 }
diff --git a/arch/sparc64.c b/arch/sparc64.c
index 1cfaa854ce6d..b93a05bdfe59 100644
--- a/arch/sparc64.c
+++ b/arch/sparc64.c
@@ -25,10 +25,15 @@ int get_versiondep_info_sparc64(void)
 {
 	info->section_size_bits = _SECTION_SIZE_BITS;
 
-	if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+	else if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
 		info->max_physmem_bits = _MAX_PHYSMEM_BITS_L4;
-	else {
+	else
 		info->max_physmem_bits = _MAX_PHYSMEM_BITS_L3;
+
+	if (info->kernel_version < KERNEL_VERSION(3, 8, 13)) {
 		info->flag_vmemmap = TRUE;
 		info->vmemmap_start = VMEMMAP_BASE_SPARC64;
 		info->vmemmap_end = VMEMMAP_BASE_SPARC64 +
diff --git a/arch/x86.c b/arch/x86.c
index 3fdae93084b8..f1b43d4c8179 100644
--- a/arch/x86.c
+++ b/arch/x86.c
@@ -72,21 +72,27 @@ get_machdep_info_x86(void)
 {
 	unsigned long vmlist, vmap_area_list, vmalloc_start;
 
-	/* PAE */
-	if ((vt.mem_flags & MEMORY_X86_PAE)
-	    || ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
-	      && (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
-	      && ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
-	      == 512)) {
-		DEBUG_MSG("\n");
-		DEBUG_MSG("PAE          : ON\n");
-		vt.mem_flags |= MEMORY_X86_PAE;
-		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
-	} else {
-		DEBUG_MSG("\n");
-		DEBUG_MSG("PAE          : OFF\n");
-		info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+	else {
+		/* PAE */
+		if ((vt.mem_flags & MEMORY_X86_PAE)
+				|| ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
+					&& (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
+					&& ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
+					== 512)) {
+			DEBUG_MSG("\n");
+			DEBUG_MSG("PAE          : ON\n");
+			vt.mem_flags |= MEMORY_X86_PAE;
+			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
+		} else {
+			DEBUG_MSG("\n");
+			DEBUG_MSG("PAE          : OFF\n");
+			info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
+		}
 	}
+
 	info->page_offset = __PAGE_OFFSET;
 
 	if (SYMBOL(_stext) == NOT_FOUND_SYMBOL) {
diff --git a/arch/x86_64.c b/arch/x86_64.c
index 876644f932be..eff90307552c 100644
--- a/arch/x86_64.c
+++ b/arch/x86_64.c
@@ -268,17 +268,22 @@ get_machdep_info_x86_64(void)
 int
 get_versiondep_info_x86_64(void)
 {
-	/*
-	 * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
-	 */
-	if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
-		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
-	else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
-		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
-	else if(check_5level_paging())
-		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
-	else
-		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
+	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+	} else {
+		/*
+		 * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
+		 */
+		if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
+			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
+		else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
+			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
+		else if(check_5level_paging())
+			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
+		else
+			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
+	}
 
 	if (!get_page_offset_x86_64())
 		return FALSE;
-- 
2.7.4




* [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
  2019-11-12 11:08 ` [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available) Bhupesh Sharma
@ 2019-11-12 11:08 ` Bhupesh Sharma
  2019-12-04 17:36   ` Kazuhito Hagio
  2019-11-12 11:08 ` [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support) Bhupesh Sharma
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-12 11:08 UTC (permalink / raw)
  To: kexec; +Cc: John Donnelly, bhsharma, bhupesh.linux, Kazuhito Hagio

The ARMv8.2-LPA architecture extension (if available on the underlying
hardware) can support 52-bit physical addresses, while the kernel
virtual addresses remain 48-bit.

Make sure that we read the 52-bit PA capability from the
'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo), and adjust
the pte_to_phys() mask values and the page-table walk accordingly.

Also make sure that this works well on existing 48-bit PA platforms,
and in environments which run newer kernels with 52-bit PA support on
hardware which is not ARMv8.2-LPA compliant.

I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to
vmcoreinfo for arm64 (see [0]).

This patch is in accordance with the ARMv8 Architecture Reference
Manual, version D.a.

[0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html

Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
Cc: John Donnelly <john.p.donnelly@oracle.com>
Cc: kexec@lists.infradead.org
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
---
 arch/arm64.c | 292 +++++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 204 insertions(+), 88 deletions(-)

diff --git a/arch/arm64.c b/arch/arm64.c
index 3516b340adfd..ecb19139e178 100644
--- a/arch/arm64.c
+++ b/arch/arm64.c
@@ -39,72 +39,184 @@ typedef struct {
 	unsigned long pte;
 } pte_t;
 
+#define __pte(x)	((pte_t) { (x) } )
+#define __pmd(x)	((pmd_t) { (x) } )
+#define __pud(x)	((pud_t) { (x) } )
+#define __pgd(x)	((pgd_t) { (x) } )
+
+static int lpa_52_bit_support_available;
 static int pgtable_level;
 static int va_bits;
 static unsigned long kimage_voffset;
 
-#define SZ_4K			(4 * 1024)
-#define SZ_16K			(16 * 1024)
-#define SZ_64K			(64 * 1024)
-#define SZ_128M			(128 * 1024 * 1024)
+#define SZ_4K			4096
+#define SZ_16K			16384
+#define SZ_64K			65536
 
-#define PAGE_OFFSET_36 ((0xffffffffffffffffUL) << 36)
-#define PAGE_OFFSET_39 ((0xffffffffffffffffUL) << 39)
-#define PAGE_OFFSET_42 ((0xffffffffffffffffUL) << 42)
-#define PAGE_OFFSET_47 ((0xffffffffffffffffUL) << 47)
-#define PAGE_OFFSET_48 ((0xffffffffffffffffUL) << 48)
+#define PAGE_OFFSET_36		((0xffffffffffffffffUL) << 36)
+#define PAGE_OFFSET_39		((0xffffffffffffffffUL) << 39)
+#define PAGE_OFFSET_42		((0xffffffffffffffffUL) << 42)
+#define PAGE_OFFSET_47		((0xffffffffffffffffUL) << 47)
+#define PAGE_OFFSET_48		((0xffffffffffffffffUL) << 48)
+#define PAGE_OFFSET_52		((0xffffffffffffffffUL) << 52)
 
 #define pgd_val(x)		((x).pgd)
 #define pud_val(x)		(pgd_val((x).pgd))
 #define pmd_val(x)		(pud_val((x).pud))
 #define pte_val(x)		((x).pte)
 
-#define PAGE_MASK		(~(PAGESIZE() - 1))
-#define PGDIR_SHIFT		((PAGESHIFT() - 3) * pgtable_level + 3)
-#define PTRS_PER_PGD		(1 << (va_bits - PGDIR_SHIFT))
-#define PUD_SHIFT		get_pud_shift_arm64()
-#define PUD_SIZE		(1UL << PUD_SHIFT)
-#define PUD_MASK		(~(PUD_SIZE - 1))
-#define PTRS_PER_PTE		(1 << (PAGESHIFT() - 3))
-#define PTRS_PER_PUD		PTRS_PER_PTE
-#define PMD_SHIFT		((PAGESHIFT() - 3) * 2 + 3)
-#define PMD_SIZE		(1UL << PMD_SHIFT)
-#define PMD_MASK		(~(PMD_SIZE - 1))
+/* See 'include/uapi/linux/const.h' for definitions below */
+#define __AC(X,Y)	(X##Y)
+#define _AC(X,Y)	__AC(X,Y)
+#define _AT(T,X)	((T)(X))
+
+/* See 'include/asm/pgtable-types.h' for definitions below */
+typedef unsigned long pteval_t;
+typedef unsigned long pmdval_t;
+typedef unsigned long pudval_t;
+typedef unsigned long pgdval_t;
+
+#define PAGE_SHIFT	PAGESHIFT()
+
+/* See 'arch/arm64/include/asm/pgtable-hwdef.h' for definitions below */
+
+#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n)	((PAGE_SHIFT - 3) * (4 - (n)) + 3)
+
+#define PTRS_PER_PTE		(1 << (PAGE_SHIFT - 3))
+
+/*
+ * PMD_SHIFT determines the size a level 2 page table entry can map.
+ */
+#define PMD_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
+#define PMD_SIZE		(_AC(1, UL) << PMD_SHIFT)
+#define PMD_MASK		(~(PMD_SIZE-1))
 #define PTRS_PER_PMD		PTRS_PER_PTE
 
-#define PAGE_PRESENT		(1 << 0)
+/*
+ * PUD_SHIFT determines the size a level 1 page table entry can map.
+ */
+#define PUD_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
+#define PUD_SIZE		(_AC(1, UL) << PUD_SHIFT)
+#define PUD_MASK		(~(PUD_SIZE-1))
+#define PTRS_PER_PUD		PTRS_PER_PTE
+
+/*
+ * PGDIR_SHIFT determines the size a top-level page table entry can map
+ * (depending on the configuration, this level can be 0, 1 or 2).
+ */
+#define PGDIR_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (pgtable_level))
+#define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
+#define PGDIR_MASK		(~(PGDIR_SIZE-1))
+#define PTRS_PER_PGD		(1 << ((va_bits) - PGDIR_SHIFT))
+
+/*
+ * Section address mask and size definitions.
+ */
 #define SECTIONS_SIZE_BITS	30
-/* Highest possible physical address supported */
-#define PHYS_MASK_SHIFT		48
-#define PHYS_MASK		((1UL << PHYS_MASK_SHIFT) - 1)
+
 /*
- * Remove the highest order bits that are not a part of the
- * physical address in a section
+ * Hardware page table definitions.
+ *
+ * Level 1 descriptor (PUD).
  */
-#define PMD_SECTION_MASK	((1UL << 40) - 1)
+#define PUD_TYPE_TABLE		(_AT(pudval_t, 3) << 0)
+#define PUD_TABLE_BIT		(_AT(pudval_t, 1) << 1)
+#define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
+#define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
 
-#define PMD_TYPE_MASK		3
-#define PMD_TYPE_SECT		1
-#define PMD_TYPE_TABLE		3
+/*
+ * Level 2 descriptor (PMD).
+ */
+#define PMD_TYPE_MASK		(_AT(pmdval_t, 3) << 0)
+#define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)
+#define PMD_TYPE_TABLE		(_AT(pmdval_t, 3) << 0)
+#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
+#define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)
+
+/*
+ * Level 3 descriptor (PTE).
+ */
+#define PTE_ADDR_LOW		(((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
+#define PTE_ADDR_HIGH		(_AT(pteval_t, 0xf) << 12)
+
+static inline unsigned long
+get_pte_addr_mask_arm64(void)
+{
+	if (lpa_52_bit_support_available)
+		return (PTE_ADDR_LOW | PTE_ADDR_HIGH);
+	else
+		return PTE_ADDR_LOW;
+}
+
+#define PTE_ADDR_MASK		get_pte_addr_mask_arm64()
 
-#define PUD_TYPE_MASK		3
-#define PUD_TYPE_SECT		1
-#define PUD_TYPE_TABLE		3
+#define PAGE_MASK		(~(PAGESIZE() - 1))
+#define PAGE_PRESENT		(1 << 0)
 
+/* Helper API to convert between a physical address and its placement
+ * in a page table entry, taking care of 52-bit addresses.
+ */
+static inline unsigned long
+__pte_to_phys(pte_t pte)
+{
+	if (lpa_52_bit_support_available)
+		return ((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36));
+	else
+		return (pte_val(pte) & PTE_ADDR_MASK);
+}
+
+/* Find an entry in a page-table-directory */
 #define pgd_index(vaddr) 		(((vaddr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
-#define pgd_offset(pgdir, vaddr)	((pgd_t *)(pgdir) + pgd_index(vaddr))
 
-#define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
-#define pmd_page_paddr(pmd)		(pmd_val(pmd) & PHYS_MASK & (int32_t)PAGE_MASK)
-#define pte_offset(dir, vaddr) 		((pte_t*)pmd_page_paddr((*dir)) + pte_index(vaddr))
+static inline pte_t
+pgd_pte(pgd_t pgd)
+{
+	return __pte(pgd_val(pgd));
+}
 
-#define pmd_index(vaddr)		(((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
-#define pud_page_paddr(pud)		(pud_val(pud) & PHYS_MASK & (int32_t)PAGE_MASK)
-#define pmd_offset_pgtbl_lvl_2(pud, vaddr) ((pmd_t *)pud)
-#define pmd_offset_pgtbl_lvl_3(pud, vaddr) ((pmd_t *)pud_page_paddr((*pud)) + pmd_index(vaddr))
+#define __pgd_to_phys(pgd)		__pte_to_phys(pgd_pte(pgd))
+#define pgd_offset(pgd, vaddr)		((pgd_t *)(pgd) + pgd_index(vaddr))
+
+static inline pte_t pud_pte(pud_t pud)
+{
+	return __pte(pud_val(pud));
+}
 
+static inline unsigned long
+pgd_page_paddr(pgd_t pgd)
+{
+	return __pgd_to_phys(pgd);
+}
+
+/* Find an entry in the first-level page table. */
 #define pud_index(vaddr)		(((vaddr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
-#define pgd_page_paddr(pgd)		(pgd_val(pgd) & PHYS_MASK & (int32_t)PAGE_MASK)
+#define __pud_to_phys(pud)		__pte_to_phys(pud_pte(pud))
+
+static inline unsigned long
+pud_page_paddr(pud_t pud)
+{
+	return __pud_to_phys(pud);
+}
+
+/* Find an entry in the second-level page table. */
+#define pmd_index(vaddr)		(((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+
+static inline pte_t pmd_pte(pmd_t pmd)
+{
+	return __pte(pmd_val(pmd));
+}
+
+#define __pmd_to_phys(pmd)		__pte_to_phys(pmd_pte(pmd))
+
+static inline unsigned long
+pmd_page_paddr(pmd_t pmd)
+{
+	return __pmd_to_phys(pmd);
+}
+
+/* Find an entry in the third-level page table. */
+#define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
+#define pte_offset(dir, vaddr) 		(pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
 
 static unsigned long long
 __pa(unsigned long vaddr)
@@ -116,32 +228,22 @@ __pa(unsigned long vaddr)
 		return (vaddr - kimage_voffset);
 }
 
-static int
-get_pud_shift_arm64(void)
+static pud_t *
+pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
 {
-	if (pgtable_level == 4)
-		return ((PAGESHIFT() - 3) * 3 + 3);
+	if (pgtable_level > 3)
+		return (pud_t *)(pgd_page_paddr(*pgdv) + pud_index(vaddr) * sizeof(pud_t));
 	else
-		return PGDIR_SHIFT;
+		return (pud_t *)(pgda);
 }
 
 static pmd_t *
 pmd_offset(pud_t *puda, pud_t *pudv, unsigned long vaddr)
 {
-	if (pgtable_level == 2) {
-		return pmd_offset_pgtbl_lvl_2(puda, vaddr);
-	} else {
-		return pmd_offset_pgtbl_lvl_3(pudv, vaddr);
-	}
-}
-
-static pud_t *
-pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
-{
-	if (pgtable_level == 4)
-		return ((pud_t *)pgd_page_paddr((*pgdv)) + pud_index(vaddr));
+	if (pgtable_level > 2)
+		return (pmd_t *)(pud_page_paddr(*pudv) + pmd_index(vaddr) * sizeof(pmd_t));
 	else
-		return (pud_t *)(pgda);
+		return (pmd_t*)(puda);
 }
 
 static int calculate_plat_config(void)
@@ -307,6 +409,14 @@ get_stext_symbol(void)
 int
 get_machdep_info_arm64(void)
 {
+	/* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
+	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
+		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
+		if (info->max_physmem_bits == 52)
+			lpa_52_bit_support_available = 1;
+	} else
+		info->max_physmem_bits = 48;
+
 	/* Check if va_bits is still not initialized. If still 0, call
 	 * get_versiondep_info() to initialize the same.
 	 */
@@ -319,12 +429,11 @@ get_machdep_info_arm64(void)
 	}
 
 	kimage_voffset = NUMBER(kimage_voffset);
-	info->max_physmem_bits = PHYS_MASK_SHIFT;
 	info->section_size_bits = SECTIONS_SIZE_BITS;
 
 	DEBUG_MSG("kimage_voffset   : %lx\n", kimage_voffset);
-	DEBUG_MSG("max_physmem_bits : %lx\n", info->max_physmem_bits);
-	DEBUG_MSG("section_size_bits: %lx\n", info->section_size_bits);
+	DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits);
+	DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits);
 
 	return TRUE;
 }
@@ -382,6 +491,19 @@ get_versiondep_info_arm64(void)
 	return TRUE;
 }
 
+/* 1GB section for Page Table level = 4 and Page Size = 4KB */
+static int
+is_pud_sect(pud_t pud)
+{
+	return ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT);
+}
+
+static int
+is_pmd_sect(pmd_t pmd)
+{
+	return ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT);
+}
+
 /*
  * vaddr_to_paddr_arm64() - translate arbitrary virtual address to physical
  * @vaddr: virtual address to translate
@@ -419,10 +541,9 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
 		return NOT_PADDR;
 	}
 
-	if ((pud_val(pudv) & PUD_TYPE_MASK) == PUD_TYPE_SECT) {
-		/* 1GB section for Page Table level = 4 and Page Size = 4KB */
-		paddr = (pud_val(pudv) & (PUD_MASK & PMD_SECTION_MASK))
-					+ (vaddr & (PUD_SIZE - 1));
+	if (is_pud_sect(pudv)) {
+		paddr = (pud_page_paddr(pudv) & PUD_MASK) +
+				(vaddr & (PUD_SIZE - 1));
 		return paddr;
 	}
 
@@ -432,29 +553,24 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
 		return NOT_PADDR;
 	}
 
-	switch (pmd_val(pmdv) & PMD_TYPE_MASK) {
-	case PMD_TYPE_TABLE:
-		ptea = pte_offset(&pmdv, vaddr);
-		/* 64k page */
-		if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
-			ERRMSG("Can't read pte\n");
-			return NOT_PADDR;
-		}
+	if (is_pmd_sect(pmdv)) {
+		paddr = (pmd_page_paddr(pmdv) & PMD_MASK) +
+				(vaddr & (PMD_SIZE - 1));
+		return paddr;
+	}
 
-		if (!(pte_val(ptev) & PAGE_PRESENT)) {
-			ERRMSG("Can't get a valid pte.\n");
-			return NOT_PADDR;
-		} else {
+	ptea = (pte_t *)pte_offset(&pmdv, vaddr);
+	if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
+		ERRMSG("Can't read pte\n");
+		return NOT_PADDR;
+	}
 
-			paddr = (PAGEBASE(pte_val(ptev)) & PHYS_MASK)
-					+ (vaddr & (PAGESIZE() - 1));
-		}
-		break;
-	case PMD_TYPE_SECT:
-		/* 512MB section for Page Table level = 3 and Page Size = 64KB*/
-		paddr = (pmd_val(pmdv) & (PMD_MASK & PMD_SECTION_MASK))
-					+ (vaddr & (PMD_SIZE - 1));
-		break;
+	if (!(pte_val(ptev) & PAGE_PRESENT)) {
+		ERRMSG("Can't get a valid pte.\n");
+		return NOT_PADDR;
+	} else {
+		paddr = __pte_to_phys(ptev) +
+				(vaddr & (PAGESIZE() - 1));
 	}
 
 	return paddr;
-- 
2.7.4




* [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support)
  2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
  2019-11-12 11:08 ` [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available) Bhupesh Sharma
  2019-11-12 11:08 ` [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
@ 2019-11-12 11:08 ` Bhupesh Sharma
  2019-12-04 17:45   ` Kazuhito Hagio
  2019-11-12 11:08 ` [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64 Bhupesh Sharma
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-12 11:08 UTC (permalink / raw)
  To: kexec; +Cc: John Donnelly, bhsharma, bhupesh.linux, Kazuhito Hagio

With the ARMv8.2-LVA architecture extension, arm64 hardware which
supports this extension can support up to 52-bit virtual addresses. It
is especially useful for having a 52-bit user-space virtual address
space while the kernel can still retain 48-bit/52-bit virtual
addressing.

Since support for this extension is currently enabled in the kernel
via a config flag (CONFIG_ARM64_VA_BITS_52), there is no clear
mechanism in user-space to determine the value of this config flag
and use it to determine the kernel-space VA address range values.

'makedumpfile' can instead use 'TCR_EL1.T1SZ' value from vmcoreinfo
which indicates the size offset of the memory region addressed by
TTBR1_EL1 (and hence can be used for determining the
vabits_actual value).

The user-space computation for determining whether an address lies in
the linear map range is the same as we have in kernel-space:

  #define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))

I have sent a kernel patch upstream to add 'TCR_EL1.T1SZ' to
vmcoreinfo for arm64 (see [0]).

This patch is in accordance with the ARMv8 Architecture Reference
Manual, version D.a.

Note that with these changes the '--mem-usage' option will not work
properly for arm64 (a subsequent patch in this series addresses this),
and there is an on-going discussion with the arm64 maintainers to find
a way out (via standard kernel symbols like _stext).

[0]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html

Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
Cc: John Donnelly <john.p.donnelly@oracle.com>
Cc: kexec@lists.infradead.org
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
---
 arch/arm64.c   | 148 +++++++++++++++++++++++++++++++++++++++++++++------------
 makedumpfile.c |   2 +
 makedumpfile.h |   3 +-
 3 files changed, 122 insertions(+), 31 deletions(-)

diff --git a/arch/arm64.c b/arch/arm64.c
index ecb19139e178..094d73b8a60f 100644
--- a/arch/arm64.c
+++ b/arch/arm64.c
@@ -47,6 +47,7 @@ typedef struct {
 static int lpa_52_bit_support_available;
 static int pgtable_level;
 static int va_bits;
+static int vabits_actual;
 static unsigned long kimage_voffset;
 
 #define SZ_4K			4096
@@ -218,12 +219,19 @@ pmd_page_paddr(pmd_t pmd)
 #define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
 #define pte_offset(dir, vaddr) 		(pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
 
+/*
+ * The linear kernel range starts at the bottom of the virtual address
+ * space. Testing the top bit for the start of the region is a
+ * sufficient check and avoids having to worry about the tag.
+ */
+#define is_linear_addr(addr)	(!(((unsigned long)addr) & (1UL << (vabits_actual - 1))))
+
 static unsigned long long
 __pa(unsigned long vaddr)
 {
 	if (kimage_voffset == NOT_FOUND_NUMBER ||
-			(vaddr >= PAGE_OFFSET))
-		return (vaddr - PAGE_OFFSET + info->phys_base);
+			is_linear_addr(vaddr))
+		return (vaddr + info->phys_base - PAGE_OFFSET);
 	else
 		return (vaddr - kimage_voffset);
 }
@@ -253,6 +261,7 @@ static int calculate_plat_config(void)
 			(PAGESIZE() == SZ_64K && va_bits == 42)) {
 		pgtable_level = 2;
 	} else if ((PAGESIZE() == SZ_64K && va_bits == 48) ||
+			(PAGESIZE() == SZ_64K && va_bits == 52) ||
 			(PAGESIZE() == SZ_4K && va_bits == 39) ||
 			(PAGESIZE() == SZ_16K && va_bits == 47)) {
 		pgtable_level = 3;
@@ -287,6 +296,16 @@ get_phys_base_arm64(void)
 		return TRUE;
 	}
 
+	/* If both vabits_actual and va_bits are now initialized, always
+	 * prefer vabits_actual over va_bits to calculate PAGE_OFFSET
+	 * value.
+	 */
+	if (vabits_actual && va_bits && vabits_actual != va_bits) {
+		info->page_offset = (-(1UL << vabits_actual));
+		DEBUG_MSG("page_offset    : %lx (via vabits_actual)\n",
+				info->page_offset);
+	}
+
 	if (get_num_pt_loads() && PAGE_OFFSET) {
 		for (i = 0;
 		    get_pt_load(i, &phys_start, NULL, &virt_start, NULL);
@@ -406,6 +425,73 @@ get_stext_symbol(void)
 	return(found ? kallsym : FALSE);
 }
 
+static int
+get_va_bits_from_stext_arm64(void)
+{
+	ulong _stext;
+
+	_stext = get_stext_symbol();
+	if (!_stext) {
+		ERRMSG("Can't get the symbol of _stext.\n");
+		return FALSE;
+	}
+
+	/* Derive va_bits as per arch/arm64/Kconfig. Note that this is a
+	 * best case approximation at the moment, as there can be
+	 * inconsistencies in this calculation (e.g., for
+	 * 52-bit kernel VA case, even the 48th bit might be set in
+	 * the _stext symbol).
+	 *
+	 * So, we need to rely on the actual VA_BITS symbol in the
+	 * vmcoreinfo for an accurate value.
+	 *
+	 * TODO: Improve this further once there is a closure with arm64
+	 * kernel maintainers on the same.
+	 */
+	if ((_stext & PAGE_OFFSET_52) == PAGE_OFFSET_52) {
+		va_bits = 52;
+	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
+		va_bits = 48;
+	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
+		va_bits = 47;
+	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
+		va_bits = 42;
+	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
+		va_bits = 39;
+	} else if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
+		va_bits = 36;
+	} else {
+		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
+		return FALSE;
+	}
+
+	DEBUG_MSG("va_bits    : %d (_stext) (approximation)\n", va_bits);
+
+	return TRUE;
+}
+
+static void
+get_page_offset_arm64(void)
+{
+	/* Check if 'vabits_actual' is initialized yet.
+	 * If not, our best bet is to use 'va_bits' to calculate
+	 * the PAGE_OFFSET value, otherwise use 'vabits_actual'
+	 * for the same.
+	 *
+	 * See arch/arm64/include/asm/memory.h for more details.
+	 */
+	if (!vabits_actual) {
+		info->page_offset = (-(1UL << va_bits));
+		DEBUG_MSG("page_offset    : %lx (approximation)\n",
+					info->page_offset);
+	} else {
+		info->page_offset = (-(1UL << vabits_actual));
+		DEBUG_MSG("page_offset    : %lx (accurate)\n",
+					info->page_offset);
+	}
+
+}
+
 int
 get_machdep_info_arm64(void)
 {
@@ -420,8 +506,33 @@ get_machdep_info_arm64(void)
 	/* Check if va_bits is still not initialized. If still 0, call
 	 * get_versiondep_info() to initialize the same.
 	 */
+	if (NUMBER(VA_BITS) != NOT_FOUND_NUMBER) {
+		va_bits = NUMBER(VA_BITS);
+		DEBUG_MSG("va_bits        : %d (vmcoreinfo)\n",
+				va_bits);
+	}
+
+	/* Check if va_bits is still not initialized. If still 0, call
+	 * get_versiondep_info() to initialize the same from _stext
+	 * symbol.
+	 */
 	if (!va_bits)
-		get_versiondep_info_arm64();
+		if (get_va_bits_from_stext_arm64() == FALSE)
+			return FALSE;
+
+	get_page_offset_arm64();
+
+	/* See TCR_EL1, Translation Control Register (EL1) register
+	 * description in the ARMv8 Architecture Reference Manual.
+	 * Basically, we can use the TCR_EL1.T1SZ
+	 * value to determine the virtual addressing range supported
+	 * in the kernel-space (i.e. vabits_actual).
+	 */
+	if (NUMBER(tcr_el1_t1sz) != NOT_FOUND_NUMBER) {
+		vabits_actual = 64 - NUMBER(tcr_el1_t1sz);
+		DEBUG_MSG("vabits_actual  : %d (vmcoreinfo)\n",
+				vabits_actual);
+	}
 
 	if (!calculate_plat_config()) {
 		ERRMSG("Can't determine platform config values\n");
@@ -459,34 +570,11 @@ get_xen_info_arm64(void)
 int
 get_versiondep_info_arm64(void)
 {
-	ulong _stext;
-
-	_stext = get_stext_symbol();
-	if (!_stext) {
-		ERRMSG("Can't get the symbol of _stext.\n");
-		return FALSE;
-	}
-
-	/* Derive va_bits as per arch/arm64/Kconfig */
-	if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
-		va_bits = 36;
-	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
-		va_bits = 39;
-	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
-		va_bits = 42;
-	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
-		va_bits = 47;
-	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
-		va_bits = 48;
-	} else {
-		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
-		return FALSE;
-	}
-
-	info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
+	if (!va_bits)
+		if (get_va_bits_from_stext_arm64() == FALSE)
+			return FALSE;
 
-	DEBUG_MSG("va_bits      : %d\n", va_bits);
-	DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
+	get_page_offset_arm64();
 
 	return TRUE;
 }
diff --git a/makedumpfile.c b/makedumpfile.c
index 4a000112ba59..baf559e4d74e 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -2314,6 +2314,7 @@ write_vmcoreinfo_data(void)
 	WRITE_NUMBER("HUGETLB_PAGE_DTOR", HUGETLB_PAGE_DTOR);
 #ifdef __aarch64__
 	WRITE_NUMBER("VA_BITS", VA_BITS);
+	WRITE_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
 	WRITE_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
 	WRITE_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
 #endif
@@ -2720,6 +2721,7 @@ read_vmcoreinfo(void)
 	READ_NUMBER("KERNEL_IMAGE_SIZE", KERNEL_IMAGE_SIZE);
 #ifdef __aarch64__
 	READ_NUMBER("VA_BITS", VA_BITS);
+	READ_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
 	READ_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
 	READ_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
 #endif
diff --git a/makedumpfile.h b/makedumpfile.h
index ac11e906b5b7..7eab6507c8df 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -974,7 +974,7 @@ int get_versiondep_info_arm64(void);
 int get_xen_basic_info_arm64(void);
 int get_xen_info_arm64(void);
 unsigned long get_kaslr_offset_arm64(unsigned long vaddr);
-#define paddr_to_vaddr_arm64(X) (((X) - info->phys_base) | PAGE_OFFSET)
+#define paddr_to_vaddr_arm64(X) (((X) - (info->phys_base - PAGE_OFFSET)))
 
 #define find_vmemmap()		stub_false()
 #define vaddr_to_paddr(X)	vaddr_to_paddr_arm64(X)
@@ -1937,6 +1937,7 @@ struct number_table {
 	long	KERNEL_IMAGE_SIZE;
 #ifdef __aarch64__
 	long 	VA_BITS;
+	unsigned long	tcr_el1_t1sz;
 	unsigned long	PHYS_OFFSET;
 	unsigned long	kimage_voffset;
 #endif
-- 
2.7.4


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64
  2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
                   ` (2 preceding siblings ...)
  2019-11-12 11:08 ` [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support) Bhupesh Sharma
@ 2019-11-12 11:08 ` Bhupesh Sharma
  2019-12-04 17:49   ` Kazuhito Hagio
  2019-11-13 21:59 ` [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Kazuhito Hagio
  2019-11-18  5:12 ` Prabhakar Kushwaha
  5 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-12 11:08 UTC (permalink / raw)
  To: kexec; +Cc: John Donnelly, bhsharma, bhupesh.linux, Kazuhito Hagio

This patch marks '--mem-usage' option as unsupported for arm64
architecture.

With the newer arm64 kernels supporting 48-bit/52-bit VA address spaces
via a single binary, the address of kernel symbols like _stext, which
could earlier be used to determine the VA_BITS value, can no longer be
used to determine whether VA_BITS is set to 48 or 52 in the kernel
space.

Hence for now, it makes sense to mark the '--mem-usage' option as
unsupported for arm64 until we have more clarity from the arm64 kernel
maintainers on how to manage this in future kernel/makedumpfile
versions.

Cc: John Donnelly <john.p.donnelly@oracle.com>
Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
Cc: kexec@lists.infradead.org
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
---
 makedumpfile.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/makedumpfile.c b/makedumpfile.c
index baf559e4d74e..ae60466a1e9c 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -11564,6 +11564,11 @@ main(int argc, char *argv[])
 		MSG("\n");
 		MSG("The dmesg log is saved to %s.\n", info->name_dumpfile);
 	} else if (info->flag_mem_usage) {
+#ifdef __aarch64__
+		MSG("mem-usage not supported for arm64 architecture.\n");
+		goto out;
+#endif
+
 		if (!check_param_for_creating_dumpfile(argc, argv)) {
 			MSG("Commandline parameter is invalid.\n");
 			MSG("Try `makedumpfile --help' for more information.\n");
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* RE: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
                   ` (3 preceding siblings ...)
  2019-11-12 11:08 ` [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64 Bhupesh Sharma
@ 2019-11-13 21:59 ` Kazuhito Hagio
  2019-11-14 19:10   ` Bhupesh Sharma
  2019-11-18  5:12 ` Prabhakar Kushwaha
  5 siblings, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-11-13 21:59 UTC (permalink / raw)
  To: Bhupesh Sharma; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Bhupesh,

Thanks for the updated patchset.

I'm taking a look at this, but I will be out of office from tomorrow
until Nov 29th, so please expect some (long) delays in my response..

Thanks,
Kazu

> -----Original Message-----
> Changes since v3:
> ----------------
> - v3 can be seen here:
>   http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
> - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
>   unsupported for arm64 architecture. With the newer arm64 kernels
>   supporting 48-bit/52-bit VA address spaces and keeping a single
>   binary for supporting the same, the address of
>   kernel symbols like _stext, which could be earlier used to determine
>   VA_BITS value, can no longer to determine whether VA_BITS is set to 48
>   or 52 in the kernel space. Hence for now, it makes sense to mark
>   '--mem-usage' option as unsupported for arm64 architecture until
>   we have more clarity from arm64 kernel maintainers on how to manage
>   the same in future kernel/makedumpfile versions.
> 
> Changes since v2:
> ----------------
> - v2 can be seen here:
>   http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
> - I missed some comments from Kazu sent on the LVA v1 patch when I sent
>   out the v2. So, addressing them now in v3.
> - Also added a patch that adds a tree-wide feature to read
>   'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
> 
> Changes since v1:
> ----------------
> - v1 was sent as two separate patches:
>   http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
>   (ARMv8.2-LPA)
>   http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
>   (ARMv8.2-LVA)
> - v2 combined the two in a single patchset and also addresses Kazu's
>   review comments.
> 
> This patchset adds support for ARMv8.2 extensions in makedumpfile code.
> I cover the following four cases with this patchset:
>  - 48-bit kernel VA + 52-bit PA (LPA)
>  - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
>  - 48-bit kernel VA + 52-bit user-space VA (LVA)
>  - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
> 
> This has been tested for the following user-cases:
> 1. Creating a dumpfile using /proc/vmcore,
> 2. Creating a dumpfile using /proc/kcore, and
> 3. Post-processing a vmcore.
> 
> I have tested this patchset on the following platforms, with kernels
> which support/do-not-support ARMv8.2 features:
> 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
>    ampere-osprey.
> 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
>    simulation model).
> 
> Also a preparation patch has been added in this patchset which adds a
> common feature for archs (except arm64, for which similar support is
> added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
> vmcoreinfo (if available).
> 
> I recently posted two kernel patches (see [0] and [1]) which append
> 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
> code, so that user-space code can benefit from the same.
> 
> This patchset ensures backward compatibility for kernel versions in
> which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
> vmcoreinfo.
> 
> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> 
> Cc: John Donnelly <john.p.donnelly@oracle.com>
> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> Cc: kexec@lists.infradead.org
> 
> Bhupesh Sharma (4):
>   tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
>   makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
>   makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
>     support)
>   makedumpfile: Mark --mem-usage option unsupported for arm64
> 
>  arch/arm.c     |   8 +-
>  arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
>  arch/ia64.c    |   7 +-
>  arch/ppc.c     |   8 +-
>  arch/ppc64.c   |  49 ++++---
>  arch/s390x.c   |  29 ++--
>  arch/sparc64.c |   9 +-
>  arch/x86.c     |  34 +++--
>  arch/x86_64.c  |  27 ++--
>  makedumpfile.c |   7 +
>  makedumpfile.h |   3 +-
>  11 files changed, 439 insertions(+), 180 deletions(-)
> 
> --
> 2.7.4
> 
> 
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec




^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-13 21:59 ` [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Kazuhito Hagio
@ 2019-11-14 19:10   ` Bhupesh Sharma
  0 siblings, 0 replies; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-14 19:10 UTC (permalink / raw)
  To: Kazuhito Hagio; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Kazu,

On Thu, Nov 14, 2019 at 3:31 AM Kazuhito Hagio <k-hagio@ab.jp.nec.com> wrote:
>
> Hi Bhupesh,
>
> Thanks for the updated patchset.
>
> I'm taking a look at this, but I will be out of office from tomorrow
> until Nov 29th, so please expect some (long) delays in my response..

Thanks a lot for your message. Sure, let's discuss this more when you
return from your holidays.

Regards,
Bhupesh

>
> Thanks,
> Kazu
>
> > -----Original Message-----
> > Changes since v3:
> > ----------------
> > - v3 can be seen here:
> >   http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
> > - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
> >   unsupported for arm64 architecture. With the newer arm64 kernels
> >   supporting 48-bit/52-bit VA address spaces and keeping a single
> >   binary for supporting the same, the address of
> >   kernel symbols like _stext, which could be earlier used to determine
> >   VA_BITS value, can no longer to determine whether VA_BITS is set to 48
> >   or 52 in the kernel space. Hence for now, it makes sense to mark
> >   '--mem-usage' option as unsupported for arm64 architecture until
> >   we have more clarity from arm64 kernel maintainers on how to manage
> >   the same in future kernel/makedumpfile versions.
> >
> > Changes since v2:
> > ----------------
> > - v2 can be seen here:
> >   http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
> > - I missed some comments from Kazu sent on the LVA v1 patch when I sent
> >   out the v2. So, addressing them now in v3.
> > - Also added a patch that adds a tree-wide feature to read
> >   'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
> >
> > Changes since v1:
> > ----------------
> > - v1 was sent as two separate patches:
> >   http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
> >   (ARMv8.2-LPA)
> >   http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
> >   (ARMv8.2-LVA)
> > - v2 combined the two in a single patchset and also addresses Kazu's
> >   review comments.
> >
> > This patchset adds support for ARMv8.2 extensions in makedumpfile code.
> > I cover the following four cases with this patchset:
> >  - 48-bit kernel VA + 52-bit PA (LPA)
> >  - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
> >  - 48-bit kernel VA + 52-bit user-space VA (LVA)
> >  - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
> >
> > This has been tested for the following user-cases:
> > 1. Creating a dumpfile using /proc/vmcore,
> > 2. Creating a dumpfile using /proc/kcore, and
> > 3. Post-processing a vmcore.
> >
> > I have tested this patchset on the following platforms, with kernels
> > which support/do-not-support ARMv8.2 features:
> > 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
> >    ampere-osprey.
> > 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
> >    simulation model).
> >
> > Also a preparation patch has been added in this patchset which adds a
> > common feature for archs (except arm64, for which similar support is
> > added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
> > vmcoreinfo (if available).
> >
> > I recently posted two kernel patches (see [0] and [1]) which append
> > 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
> > code, so that user-space code can benefit from the same.
> >
> > This patchset ensures backward compatibility for kernel versions in
> > which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
> > vmcoreinfo.
> >
> > [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> > [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> >
> > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > Cc: kexec@lists.infradead.org
> >
> > Bhupesh Sharma (4):
> >   tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
> >   makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
> >   makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
> >     support)
> >   makedumpfile: Mark --mem-usage option unsupported for arm64
> >
> >  arch/arm.c     |   8 +-
> >  arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
> >  arch/ia64.c    |   7 +-
> >  arch/ppc.c     |   8 +-
> >  arch/ppc64.c   |  49 ++++---
> >  arch/s390x.c   |  29 ++--
> >  arch/sparc64.c |   9 +-
> >  arch/x86.c     |  34 +++--
> >  arch/x86_64.c  |  27 ++--
> >  makedumpfile.c |   7 +
> >  makedumpfile.h |   3 +-
> >  11 files changed, 439 insertions(+), 180 deletions(-)
> >
> > --
> > 2.7.4
> >
> >
> > _______________________________________________
> > kexec mailing list
> > kexec@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/kexec
>
>
>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
>



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
                   ` (4 preceding siblings ...)
  2019-11-13 21:59 ` [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Kazuhito Hagio
@ 2019-11-18  5:12 ` Prabhakar Kushwaha
  2019-11-18 17:11   ` John Donnelly
  2019-11-18 18:56   ` Bhupesh Sharma
  5 siblings, 2 replies; 34+ messages in thread
From: Prabhakar Kushwaha @ 2019-11-18  5:12 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: John Donnelly, Prabhakar Kushwaha,
	Ganapatrao Prabhakerrao Kulkarni, kexec mailing list,
	Kazuhito Hagio, bhupesh.linux

Re-sending in plain text mode.

On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
>
> Changes since v3:
> ----------------
> - v3 can be seen here:
>   http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
> - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
>   unsupported for arm64 architecture. With the newer arm64 kernels
>   supporting 48-bit/52-bit VA address spaces and keeping a single
>   binary for supporting the same, the address of
>   kernel symbols like _stext, which could be earlier used to determine
>   VA_BITS value, can no longer to determine whether VA_BITS is set to 48
>   or 52 in the kernel space. Hence for now, it makes sense to mark
>   '--mem-usage' option as unsupported for arm64 architecture until
>   we have more clarity from arm64 kernel maintainers on how to manage
>   the same in future kernel/makedumpfile versions.
>
> Changes since v2:
> ----------------
> - v2 can be seen here:
>   http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
> - I missed some comments from Kazu sent on the LVA v1 patch when I sent
>   out the v2. So, addressing them now in v3.
> - Also added a patch that adds a tree-wide feature to read
>   'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
>
> Changes since v1:
> ----------------
> - v1 was sent as two separate patches:
>   http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
>   (ARMv8.2-LPA)
>   http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
>   (ARMv8.2-LVA)
> - v2 combined the two in a single patchset and also addresses Kazu's
>   review comments.
>
> This patchset adds support for ARMv8.2 extensions in makedumpfile code.
> I cover the following four cases with this patchset:
>  - 48-bit kernel VA + 52-bit PA (LPA)
>  - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
>  - 48-bit kernel VA + 52-bit user-space VA (LVA)
>  - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
>
> This has been tested for the following user-cases:
> 1. Creating a dumpfile using /proc/vmcore,
> 2. Creating a dumpfile using /proc/kcore, and
> 3. Post-processing a vmcore.
>
> I have tested this patchset on the following platforms, with kernels
> which support/do-not-support ARMv8.2 features:
> 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
>    ampere-osprey.
> 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
>    simulation model).
>
> Also a preparation patch has been added in this patchset which adds a
> common feature for archs (except arm64, for which similar support is
> added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
> vmcoreinfo (if available).
>
> I recently posted two kernel patches (see [0] and [1]) which append
> 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
> code, so that user-space code can benefit from the same.
>
> This patchset ensures backward compatibility for kernel versions in
> which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
> vmcoreinfo.
>
> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
>
> Cc: John Donnelly <john.p.donnelly@oracle.com>
> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> Cc: kexec@lists.infradead.org
>
> Bhupesh Sharma (4):
>   tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
>   makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
>   makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
>     support)
>   makedumpfile: Mark --mem-usage option unsupported for arm64
>
>  arch/arm.c     |   8 +-
>  arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
>  arch/ia64.c    |   7 +-
>  arch/ppc.c     |   8 +-
>  arch/ppc64.c   |  49 ++++---
>  arch/s390x.c   |  29 ++--
>  arch/sparc64.c |   9 +-
>  arch/x86.c     |  34 +++--
>  arch/x86_64.c  |  27 ++--
>  makedumpfile.c |   7 +
>  makedumpfile.h |   3 +-
>  11 files changed, 439 insertions(+), 180 deletions(-)
>
> --

Tested this patch-set on Marvell's TX2 platform on top
commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
(devel branch)

Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>

--pk


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-18  5:12 ` Prabhakar Kushwaha
@ 2019-11-18 17:11   ` John Donnelly
  2019-11-18 19:01     ` Bhupesh Sharma
  2019-11-18 18:56   ` Bhupesh Sharma
  1 sibling, 1 reply; 34+ messages in thread
From: John Donnelly @ 2019-11-18 17:11 UTC (permalink / raw)
  To: Prabhakar Kushwaha
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	Bhupesh Sharma, kexec mailing list, Kazuhito Hagio,
	bhupesh.linux

Hi,

See below.

> On Nov 17, 2019, at 11:12 PM, Prabhakar Kushwaha <prabhakar.pkin@gmail.com> wrote:
> 
> Re-sending in plain text mode.
> 
> On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
>> 
>> Changes since v3:
>> ----------------
>> - v3 can be seen here:
>>  http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
>> - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
>>  unsupported for arm64 architecture. With the newer arm64 kernels
>>  supporting 48-bit/52-bit VA address spaces and keeping a single
>>  binary for supporting the same, the address of
>>  kernel symbols like _stext, which could be earlier used to determine
>>  VA_BITS value, can no longer to determine whether VA_BITS is set to 48
>>  or 52 in the kernel space. Hence for now, it makes sense to mark
>>  '--mem-usage' option as unsupported for arm64 architecture until
>>  we have more clarity from arm64 kernel maintainers on how to manage
>>  the same in future kernel/makedumpfile versions.
>> 
>> Changes since v2:
>> ----------------
>> - v2 can be seen here:
>>  http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
>> - I missed some comments from Kazu sent on the LVA v1 patch when I sent
>>  out the v2. So, addressing them now in v3.
>> - Also added a patch that adds a tree-wide feature to read
>>  'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
>> 
>> Changes since v1:
>> ----------------
>> - v1 was sent as two separate patches:
>>  http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
>>  (ARMv8.2-LPA)
>>  http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
>>  (ARMv8.2-LVA)
>> - v2 combined the two in a single patchset and also addresses Kazu's
>>  review comments.
>> 
>> This patchset adds support for ARMv8.2 extensions in makedumpfile code.
>> I cover the following four cases with this patchset:
>> - 48-bit kernel VA + 52-bit PA (LPA)
>> - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
>> - 48-bit kernel VA + 52-bit user-space VA (LVA)
>> - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
>> 
>> This has been tested for the following user-cases:
>> 1. Creating a dumpfile using /proc/vmcore,
>> 2. Creating a dumpfile using /proc/kcore, and
>> 3. Post-processing a vmcore.
>> 
>> I have tested this patchset on the following platforms, with kernels
>> which support/do-not-support ARMv8.2 features:
>> 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
>>   ampere-osprey.
>> 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
>>   simulation model).
>> 
>> Also a preparation patch has been added in this patchset which adds a
>> common feature for archs (except arm64, for which similar support is
>> added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
>> vmcoreinfo (if available).
>> 
>> I recently posted two kernel patches (see [0] and [1]) which append
>> 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
>> code, so that user-space code can benefit from the same.
>> 
>> This patchset ensures backward compatibility for kernel versions in
>> which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
>> vmcoreinfo.
>> 
>> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
>> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
>> 
>> Cc: John Donnelly <john.p.donnelly@oracle.com>
>> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
>> Cc: kexec@lists.infradead.org
>> 
>> Bhupesh Sharma (4):
>>  tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
>>  makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
>>  makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
>>    support)
>>  makedumpfile: Mark --mem-usage option unsupported for arm64
>> 
>> arch/arm.c     |   8 +-
>> arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
>> arch/ia64.c    |   7 +-
>> arch/ppc.c     |   8 +-
>> arch/ppc64.c   |  49 ++++---
>> arch/s390x.c   |  29 ++--
>> arch/sparc64.c |   9 +-
>> arch/x86.c     |  34 +++--
>> arch/x86_64.c  |  27 ++--
>> makedumpfile.c |   7 +
>> makedumpfile.h |   3 +-
>> 11 files changed, 439 insertions(+), 180 deletions(-)
>> 
>> --
> 
> Tested this patch-set on Marvell's TX2 platform on top
> commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
> (devel branch)
> 
> Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
> 
> —p


   Hi,
   
   I tested this on an ARMv8.1 platform with a 5.4-rc4 kernel and it fails:



kdump: saving vmcore-dmesg.txt
kdump: saving vmcore-dmesg.txt complete
kdump: saving vmcore
sadump: unsuppor     phys_start         phys_end       virt_start         virt_end
LOAD[ 0]         92a80000         95040000 ffff800010080000 ffff800012640000
LOAD[ 1]         90000000         92000000 ffffc00010000000 ffffc00012000000
LOAD[ 2]         928c0000         dfe00000 ffffc000128c0000 ffffc0005fe00000
LOAD[ 3]         ffe00000         fffa0000 ffffc0007fe00000 ffffc0007ffa0000
LOAD[ 4]        880000000       1000000000 ffffc00800000000 ffffc00f80000000
LOAD[ 5]       8800000000       bff7010000 ffffc08780000000 ffffc0bf77010000
LOAD[ 6]       bff7040000       bff7740000 ffffc0bf77040000 ffffc0bf77740000
LOAD[ 7]       bff7770000       bff8020000 ffffc0bf77770000 ffffc0bf78020000
LOAD[ 8]       bff8050000       bff8070000 ffffc0bf78050000 ffffc0bf78070000
LOAD[ 9]       bff80d0000       bff8270000 ffffc0bf780d0000 ffffc0bf78270000
LOAD[10]       bff8280000       bff83d0000 ffffc0bf78280000 ffffc0bf783d0000
LOAD[11]       bff8870000       bffc1a0000 ffffc0bf78870000 ffffc0bf7c1a0000
LOAD[12]       bffc1c0000       bffc1d0000 ffffc0bf7c1c0000 ffffc0bf7c1d0000
LOAD[13]       bffe210000       bfffd10000 ffffc0bf7e210000 ffffc0bf7fd10000
LOAD[14]       bfffd40000       bfffd50000 ffffc0bf7fd40000 ffffc0bf7fd50000
LOAD[15]       bfffe00000       c000000000 ffffc0bf7fe00000 ffffc0bf80000000
Linux kdump
VMCOREINFO   :
  OSRELEASE=5.4.0-0
  PAGESIZE=65536
page_size    : 65536
  SYMBOL(init_uts_ns)=ffff800011ac5ca8
  SYMBOL(node_online_map)=ffff800011abd490
  SYMBOL(swapper_pg_dir)=ffff800011340000
  SYMBOL(_stext)=ffff800010081000
  SYMBOL(vmap_area_list)=ffff800011b89898
  SYMBOL(mem_section)=ffff00bf7be7e300
  LENGTH(mem_section)=64
  SIZE(mem_section)=16
  OFFSET(mem_section.section_mem_map)=0
  SIZE(page)=64
  SIZE(pglist_data)=6912
  SIZE(zone)=1920
  SIZE(free_area)=104
  SIZE(list_head)=16
  SIZE(nodemask_t)=8
  OFFSET(page.flags)=0
  OFFSET(page._refcount)=52
  OFFSET(page.mapping)=24
  OFFSET(page.lru)=8
  OFFSET(page._mapcount)=48
  OFFSET(page.private)=40
  OFFSET(page.compound_dtor)=16
  OFFSET(page.compound_order)=17
  OFFSET(page.compound_head)=8
  OFFSET(pglist_data.node_zones)=0
  OFFSET(pglist_data.nr_zones)=6176
  OFFSET(pglist_data.node_start_pfn)=6184
  OFFSET(pglist_data.node_spanned_pages)=6200
  OFFSET(pglist_data.node_id)=6208
  OFFSET(zone.free_area)=192
  OFFSET(zone.vm_stat)=1728
  OFFSET(zone.spanned_pages)=104
  OFFSET(free_area.free_list)=0
  OFFSET(list_head.next)=0
  OFFSET(list_head.prev)=8
  OFFSET(vmap_are14
  SYMBOL(logt_idx)=ffff800011ed7294
  SYMBOL(clear_idx)=ffff800011ed4ce0
 og)=16
  OFFSET(printk_log.ts_nsec)=0
  OFFSET(printk_log.len)=8
  OFFSET(printk_log.text_len)=10
  OFFSET(printk_log.dict_len)=12
  LENGTH(free_area.free_list)=6
  NUMBER(NR_FREE_PAGES)=0
  NUMBER(PG_lru)=4
  NUMBER(PG_private)=13
  NUMBER(PG_swapcache)=10
  NUMBER(PG_swapbacked)=19
  NUMBER(PG_slab)=9
  NUMBER(PG_hwpoison)=22
  NUMBER(PG_head_mask)=65536
  NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
  NUMBER(HUGETLB_PAGE_DTOR)=2
  NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
  NUMBER(VA_BITS)=52
  NUMBER(kimage_voffset)=0xffff7fff7d600000
  NUMBER(PHYS_OFFSET)=0x80000000
  KERNELOFFSET=0
  CRASHTIME=1574096441

phys_base    : 80000000 (vmcoreinfo)

max_mapnr    : c00000
There is enough free memory to be done in one cycle.

Buffer size for the cyclic mode: 3145728
va_bits      : 47
page_offset  : ffffc00000000000
calculate_plat_config: Parm64: Can't detd
[FAILED] Failed to start Kdump Vmcore Save Service.


< reboot >


Can you add a version banner to makedumpfile so we can be sure of what is being used when it starts?

Thanks!

> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-18  5:12 ` Prabhakar Kushwaha
  2019-11-18 17:11   ` John Donnelly
@ 2019-11-18 18:56   ` Bhupesh Sharma
  1 sibling, 0 replies; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-18 18:56 UTC (permalink / raw)
  To: Prabhakar Kushwaha
  Cc: John Donnelly, Prabhakar Kushwaha,
	Ganapatrao Prabhakerrao Kulkarni, kexec mailing list,
	Kazuhito Hagio, Bhupesh SHARMA

Hi Prabhakar,

On Mon, Nov 18, 2019 at 10:42 AM Prabhakar Kushwaha
<prabhakar.pkin@gmail.com> wrote:
>
> Re-sending in plain text mode.
>
> On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
> >
> > Changes since v3:
> > ----------------
> > - v3 can be seen here:
> >   http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
> > - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
> >   unsupported for arm64 architecture. With the newer arm64 kernels
> >   supporting 48-bit/52-bit VA address spaces and keeping a single
> >   binary for supporting the same, the address of
> >   kernel symbols like _stext, which could earlier be used to determine
> >   the VA_BITS value, can no longer be used to tell whether VA_BITS is set to 48
> >   or 52 in the kernel space. Hence for now, it makes sense to mark
> >   '--mem-usage' option as unsupported for arm64 architecture until
> >   we have more clarity from arm64 kernel maintainers on how to manage
> >   the same in future kernel/makedumpfile versions.
> >
> > Changes since v2:
> > ----------------
> > - v2 can be seen here:
> >   http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
> > - I missed some comments from Kazu sent on the LVA v1 patch when I sent
> >   out the v2. So, addressing them now in v3.
> > - Also added a patch that adds a tree-wide feature to read
> >   'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
> >
> > Changes since v1:
> > ----------------
> > - v1 was sent as two separate patches:
> >   http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
> >   (ARMv8.2-LPA)
> >   http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
> >   (ARMv8.2-LVA)
> > - v2 combined the two in a single patchset and also addresses Kazu's
> >   review comments.
> >
> > This patchset adds support for ARMv8.2 extensions in makedumpfile code.
> > I cover the following four cases with this patchset:
> >  - 48-bit kernel VA + 52-bit PA (LPA)
> >  - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
> >  - 48-bit kernel VA + 52-bit user-space VA (LVA)
> >  - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
> >
> > This has been tested for the following use cases:
> > 1. Creating a dumpfile using /proc/vmcore,
> > 2. Creating a dumpfile using /proc/kcore, and
> > 3. Post-processing a vmcore.
> >
> > I have tested this patchset on the following platforms, with kernels
> > which support/do-not-support ARMv8.2 features:
> > 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
> >    ampere-osprey.
> > 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
> >    simulation model).
> >
> > Also a preparation patch has been added in this patchset which adds a
> > common feature for archs (except arm64, for which similar support is
> > added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
> > vmcoreinfo (if available).
> >
> > I recently posted two kernel patches (see [0] and [1]) which append
> > 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
> > code, so that user-space code can benefit from the same.
> >
> > This patchset ensures backward compatibility for kernel versions in
> > which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
> > vmcoreinfo.
> >
> > [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> > [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> >
> > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > Cc: kexec@lists.infradead.org
> >
> > Bhupesh Sharma (4):
> >   tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
> >   makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
> >   makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
> >     support)
> >   makedumpfile: Mark --mem-usage option unsupported for arm64
> >
> >  arch/arm.c     |   8 +-
> >  arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
> >  arch/ia64.c    |   7 +-
> >  arch/ppc.c     |   8 +-
> >  arch/ppc64.c   |  49 ++++---
> >  arch/s390x.c   |  29 ++--
> >  arch/sparc64.c |   9 +-
> >  arch/x86.c     |  34 +++--
> >  arch/x86_64.c  |  27 ++--
> >  makedumpfile.c |   7 +
> >  makedumpfile.h |   3 +-
> >  11 files changed, 439 insertions(+), 180 deletions(-)
> >
> > --
>
> Tested this patch-set on Marvell's TX2 platform on top
> commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
> (devel branch)
>
> Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>

Thanks for testing the patchset.

Regards,
Bhupesh


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-18 17:11   ` John Donnelly
@ 2019-11-18 19:01     ` Bhupesh Sharma
  2019-11-18 19:12       ` John Donnelly
  2019-11-20 16:33       ` John Donnelly
  0 siblings, 2 replies; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-18 19:01 UTC (permalink / raw)
  To: John Donnelly
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA

Hi John,

On Mon, Nov 18, 2019 at 10:41 PM John Donnelly
<john.p.donnelly@oracle.com> wrote:
>
> Hi,
>
> See below.
>
> > On Nov 17, 2019, at 11:12 PM, Prabhakar Kushwaha <prabhakar.pkin@gmail.com> wrote:
> >
> > Re-sending in plain text mode.
> >
> > On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
> >>
> >> Changes since v3:
> >> ----------------
> >> - v3 can be seen here:
> >>  http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
> >> - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
> >>  unsupported for arm64 architecture. With the newer arm64 kernels
> >>  supporting 48-bit/52-bit VA address spaces and keeping a single
> >>  binary for supporting the same, the address of
> >>  kernel symbols like _stext, which could earlier be used to determine
> >>  the VA_BITS value, can no longer be used to tell whether VA_BITS is set to 48
> >>  or 52 in the kernel space. Hence for now, it makes sense to mark
> >>  '--mem-usage' option as unsupported for arm64 architecture until
> >>  we have more clarity from arm64 kernel maintainers on how to manage
> >>  the same in future kernel/makedumpfile versions.
> >>
> >> Changes since v2:
> >> ----------------
> >> - v2 can be seen here:
> >>  http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
> >> - I missed some comments from Kazu sent on the LVA v1 patch when I sent
> >>  out the v2. So, addressing them now in v3.
> >> - Also added a patch that adds a tree-wide feature to read
> >>  'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
> >>
> >> Changes since v1:
> >> ----------------
> >> - v1 was sent as two separate patches:
> >>  http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
> >>  (ARMv8.2-LPA)
> >>  http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
> >>  (ARMv8.2-LVA)
> >> - v2 combined the two in a single patchset and also addresses Kazu's
> >>  review comments.
> >>
> >> This patchset adds support for ARMv8.2 extensions in makedumpfile code.
> >> I cover the following four cases with this patchset:
> >> - 48-bit kernel VA + 52-bit PA (LPA)
> >> - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
> >> - 48-bit kernel VA + 52-bit user-space VA (LVA)
> >> - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
> >>
> >> This has been tested for the following use cases:
> >> 1. Creating a dumpfile using /proc/vmcore,
> >> 2. Creating a dumpfile using /proc/kcore, and
> >> 3. Post-processing a vmcore.
> >>
> >> I have tested this patchset on the following platforms, with kernels
> >> which support/do-not-support ARMv8.2 features:
> >> 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
> >>   ampere-osprey.
> >> 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
> >>   simulation model).
> >>
> >> Also a preparation patch has been added in this patchset which adds a
> >> common feature for archs (except arm64, for which similar support is
> >> added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
> >> vmcoreinfo (if available).
> >>
> >> I recently posted two kernel patches (see [0] and [1]) which append
> >> 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
> >> code, so that user-space code can benefit from the same.
> >>
> >> This patchset ensures backward compatibility for kernel versions in
> >> which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
> >> vmcoreinfo.
> >>
> >> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> >> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> >>
> >> Cc: John Donnelly <john.p.donnelly@oracle.com>
> >> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> >> Cc: kexec@lists.infradead.org
> >>
> >> Bhupesh Sharma (4):
> >>  tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
> >>  makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
> >>  makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
> >>    support)
> >>  makedumpfile: Mark --mem-usage option unsupported for arm64
> >>
> >> arch/arm.c     |   8 +-
> >> arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
> >> arch/ia64.c    |   7 +-
> >> arch/ppc.c     |   8 +-
> >> arch/ppc64.c   |  49 ++++---
> >> arch/s390x.c   |  29 ++--
> >> arch/sparc64.c |   9 +-
> >> arch/x86.c     |  34 +++--
> >> arch/x86_64.c  |  27 ++--
> >> makedumpfile.c |   7 +
> >> makedumpfile.h |   3 +-
> >> 11 files changed, 439 insertions(+), 180 deletions(-)
> >>
> >> --
> >
> > Tested this patch-set on Marvell's TX2 platform on top
> > commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
> > (devel branch)
> >
> > Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
> >
> > —p
>
>
>    Hi,
>
>    I tested this on an ARMv8.1 platform with a 5.4-rc4 kernel and it fails:
>
>
>
> kdump: saving vmcore-dmesg.txt
> kdump: saving vmcore-dmesg.txt complete
> kdump: saving vmcore
> sadump: unsuppor     phys_start         phys_end       virt_start         virt_end
> LOAD[ 0]         92a80000         95040000 ffff800010080000 ffff800012640000
> LOAD[ 1]         90000000         92000000 ffffc00010000000 ffffc00012000000
> LOAD[ 2]         928c0000         dfe00000 ffffc000128c0000 ffffc0005fe00000
> LOAD[ 3]         ffe00000         fffa0000 ffffc0007fe00000 ffffc0007ffa0000
> LOAD[ 4]        880000000       1000000000 ffffc00800000000 ffffc00f80000000
> LOAD[ 5]       8800000000       bff7010000 ffffc08780000000 ffffc0bf77010000
> LOAD[ 6]       bff7040000       bff7740000 ffffc0bf77040000 ffffc0bf77740000
> LOAD[ 7]       bff7770000       bff8020000 ffffc0bf77770000 ffffc0bf78020000
> LOAD[ 8]       bff8050000       bff8070000 ffffc0bf78050000 ffffc0bf78070000
> LOAD[ 9]       bff80d0000       bff8270000 ffffc0bf780d0000 ffffc0bf78270000
> LOAD[10]       bff8280000       bff83d0000 ffffc0bf78280000 ffffc0bf783d0000
> LOAD[11]       bff8870000       bffc1a0000 ffffc0bf78870000 ffffc0bf7c1a0000
> LOAD[12]       bffc1c0000       bffc1d0000 ffffc0bf7c1c0000 ffffc0bf7c1d0000
> LOAD[13]       bffe210000       bfffd10000 ffffc0bf7e210000 ffffc0bf7fd10000
> LOAD[14]       bfffd40000       bfffd50000 ffffc0bf7fd40000 ffffc0bf7fd50000
> LOAD[15]       bfffe00000       c000000000 ffffc0bf7fe00000 ffffc0bf80000000
> Linux kdump
> VMCOREINFO   :
>   OSRELEASE=5.4.0-0
>   PAGESIZE=65536
> page_size    : 65536
>   SYMBOL(init_uts_ns)=ffff800011ac5ca8
>   SYMBOL(node_online_map)=ffff800011abd490
>   SYMBOL(swapper_pg_dir)=ffff800011340000
>   SYMBOL(_stext)=ffff800010081000
>   SYMBOL(vmap_area_list)=ffff800011b89898
>   SYMBOL(mem_section)=ffff00bf7be7e300
>   LENGTH(mem_section)=64
>   SIZE(mem_section)=16
>   OFFSET(mem_section.section_mem_map)=0
>   SIZE(page)=64
>   SIZE(pglist_data)=6912
>   SIZE(zone)=1920
>   SIZE(free_area)=104
>   SIZE(list_head)=16
>   SIZE(nodemask_t)=8
>   OFFSET(page.flags)=0
>   OFFSET(page._refcount)=52
>   OFFSET(page.mapping)=24
>   OFFSET(page.lru)=8
>   OFFSET(page._mapcount)=48
>   OFFSET(page.private)=40
>   OFFSET(page.compound_dtor)=16
>   OFFSET(page.compound_order)=17
>   OFFSET(page.compound_head)=8
>   OFFSET(pglist_data.node_zones)=0
>   OFFSET(pglist_data.nr_zones)=6176
>   OFFSET(pglist_data.node_start_pfn)=6184
>   OFFSET(pglist_data.node_spanned_pages)=6200
>   OFFSET(pglist_data.node_id)=6208
>   OFFSET(zone.free_area)=192
>   OFFSET(zone.vm_stat)=1728
>   OFFSET(zone.spanned_pages)=104
>   OFFSET(free_area.free_list)=0
>   OFFSET(list_head.next)=0
>   OFFSET(list_head.prev)=8
>   OFFSET(vmap_are14
>   SYMBOL(logt_idx)=ffff800011ed7294
>   SYMBOL(clear_idx)=ffff800011ed4ce0
>  og)=16
>   OFFSET(printk_log.ts_nsec)=0
>   OFFSET(printk_log.len)=8
>   OFFSET(printk_log.text_len)=10
>   OFFSET(printk_log.dict_len)=12
>   LENGTH(free_area.free_list)=6
>   NUMBER(NR_FREE_PAGES)=0
>   NUMBER(PG_lru)=4
>   NUMBER(PG_private)=13
>   NUMBER(PG_swapcache)=10
>   NUMBER(PG_swapbacked)=19
>   NUMBER(PG_slab)=9
>   NUMBER(PG_hwpoison)=22
>   NUMBER(PG_head_mask)=65536
>   NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
>   NUMBER(HUGETLB_PAGE_DTOR)=2
>   NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
>   NUMBER(VA_BITS)=52
>   NUMBER(kimage_voffset)=0xffff7fff7d600000
>   NUMBER(PHYS_OFFSET)=0x80000000
>   KERNELOFFSET=0
>   CRASHTIME=1574096441
>
> phys_base    : 80000000 (vmcoreinfo)
>
> max_mapnr    : c00000
> There is enough free memory to be done in one cycle.
>
> Buffer size for the cyclic mode: 3145728
> va_bits      : 47
> page_offset  : ffffc00000000000
> calculate_plat_config: Parm64: Can't detd
> [FAILED] Failed to start Kdump Vmcore Save Service.
>
>
> < reboot >
>
>
> Can you add a version banner to makedumpfile so we can be sure of what is being used when it starts?

It will not work with a default vanilla (upstream) kernel, as you need to
apply the patches which export 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' in
vmcoreinfo (see [0] and [1] for details).

I mentioned the same in the cover letter (see:
<http://lists.infradead.org/pipermail/kexec/2019-November/023963.html>)

[0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
[1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html

Regards,
Bhupesh


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-18 19:01     ` Bhupesh Sharma
@ 2019-11-18 19:12       ` John Donnelly
  2019-11-18 20:00         ` John Donnelly
  2019-11-20 16:33       ` John Donnelly
  1 sibling, 1 reply; 34+ messages in thread
From: John Donnelly @ 2019-11-18 19:12 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA

I will update and test a new kernel.




> On Nov 18, 2019, at 1:01 PM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
> 
> Hi John,
> 
> On Mon, Nov 18, 2019 at 10:41 PM John Donnelly
> <john.p.donnelly@oracle.com> wrote:
>> 
>> Hi,
>> 
>> See below.
>> 
>>> On Nov 17, 2019, at 11:12 PM, Prabhakar Kushwaha <prabhakar.pkin@gmail.com> wrote:
>>> 
>>> Re-sending in plain text mode.
>>> 
>>> On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
>>>> 
>>>> Changes since v3:
>>>> ----------------
>>>> - v3 can be seen here:
>>>> http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
>>>> - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
>>>> unsupported for arm64 architecture. With the newer arm64 kernels
>>>> supporting 48-bit/52-bit VA address spaces and keeping a single
>>>> binary for supporting the same, the address of
>>>> kernel symbols like _stext, which could earlier be used to determine
>>>> the VA_BITS value, can no longer be used to tell whether VA_BITS is set to 48
>>>> or 52 in the kernel space. Hence for now, it makes sense to mark
>>>> '--mem-usage' option as unsupported for arm64 architecture until
>>>> we have more clarity from arm64 kernel maintainers on how to manage
>>>> the same in future kernel/makedumpfile versions.
>>>> 
>>>> Changes since v2:
>>>> ----------------
>>>> - v2 can be seen here:
>>>> http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
>>>> - I missed some comments from Kazu sent on the LVA v1 patch when I sent
>>>> out the v2. So, addressing them now in v3.
>>>> - Also added a patch that adds a tree-wide feature to read
>>>> 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
>>>> 
>>>> Changes since v1:
>>>> ----------------
>>>> - v1 was sent as two separate patches:
>>>> http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
>>>> (ARMv8.2-LPA)
>>>> http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
>>>> (ARMv8.2-LVA)
>>>> - v2 combined the two in a single patchset and also addresses Kazu's
>>>> review comments.
>>>> 
>>>> This patchset adds support for ARMv8.2 extensions in makedumpfile code.
>>>> I cover the following four cases with this patchset:
>>>> - 48-bit kernel VA + 52-bit PA (LPA)
>>>> - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
>>>> - 48-bit kernel VA + 52-bit user-space VA (LVA)
>>>> - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
>>>> 
>>>> This has been tested for the following use cases:
>>>> 1. Creating a dumpfile using /proc/vmcore,
>>>> 2. Creating a dumpfile using /proc/kcore, and
>>>> 3. Post-processing a vmcore.
>>>> 
>>>> I have tested this patchset on the following platforms, with kernels
>>>> which support/do-not-support ARMv8.2 features:
>>>> 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
>>>>  ampere-osprey.
>>>> 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
>>>>  simulation model).
>>>> 
>>>> Also a preparation patch has been added in this patchset which adds a
>>>> common feature for archs (except arm64, for which similar support is
>>>> added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
>>>> vmcoreinfo (if available).
>>>> 
>>>> I recently posted two kernel patches (see [0] and [1]) which append
>>>> 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
>>>> code, so that user-space code can benefit from the same.
>>>> 
>>>> This patchset ensures backward compatibility for kernel versions in
>>>> which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
>>>> vmcoreinfo.
>>>> 
>>>> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
>>>> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
>>>> 
>>>> Cc: John Donnelly <john.p.donnelly@oracle.com>
>>>> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
>>>> Cc: kexec@lists.infradead.org
>>>> 
>>>> Bhupesh Sharma (4):
>>>> tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
>>>> makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
>>>> makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
>>>>   support)
>>>> makedumpfile: Mark --mem-usage option unsupported for arm64
>>>> 
>>>> arch/arm.c     |   8 +-
>>>> arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
>>>> arch/ia64.c    |   7 +-
>>>> arch/ppc.c     |   8 +-
>>>> arch/ppc64.c   |  49 ++++---
>>>> arch/s390x.c   |  29 ++--
>>>> arch/sparc64.c |   9 +-
>>>> arch/x86.c     |  34 +++--
>>>> arch/x86_64.c  |  27 ++--
>>>> makedumpfile.c |   7 +
>>>> makedumpfile.h |   3 +-
>>>> 11 files changed, 439 insertions(+), 180 deletions(-)
>>>> 
>>>> --
>>> 
>>> Tested this patch-set on Marvell's TX2 platform on top
>>> commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
>>> (devel branch)
>>> 
>>> Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
>>> 
>>> —p
>> 
>> 
>>   Hi,
>> 
>>   I tested this on an ARMv8.1 platform with a 5.4-rc4 kernel and it fails:
>> 
>> 
>> 
>> kdump: saving vmcore-dmesg.txt
>> kdump: saving vmcore-dmesg.txt complete
>> kdump: saving vmcore
>> sadump: unsuppor     phys_start         phys_end       virt_start         virt_end
>> LOAD[ 0]         92a80000         95040000 ffff800010080000 ffff800012640000
>> LOAD[ 1]         90000000         92000000 ffffc00010000000 ffffc00012000000
>> LOAD[ 2]         928c0000         dfe00000 ffffc000128c0000 ffffc0005fe00000
>> LOAD[ 3]         ffe00000         fffa0000 ffffc0007fe00000 ffffc0007ffa0000
>> LOAD[ 4]        880000000       1000000000 ffffc00800000000 ffffc00f80000000
>> LOAD[ 5]       8800000000       bff7010000 ffffc08780000000 ffffc0bf77010000
>> LOAD[ 6]       bff7040000       bff7740000 ffffc0bf77040000 ffffc0bf77740000
>> LOAD[ 7]       bff7770000       bff8020000 ffffc0bf77770000 ffffc0bf78020000
>> LOAD[ 8]       bff8050000       bff8070000 ffffc0bf78050000 ffffc0bf78070000
>> LOAD[ 9]       bff80d0000       bff8270000 ffffc0bf780d0000 ffffc0bf78270000
>> LOAD[10]       bff8280000       bff83d0000 ffffc0bf78280000 ffffc0bf783d0000
>> LOAD[11]       bff8870000       bffc1a0000 ffffc0bf78870000 ffffc0bf7c1a0000
>> LOAD[12]       bffc1c0000       bffc1d0000 ffffc0bf7c1c0000 ffffc0bf7c1d0000
>> LOAD[13]       bffe210000       bfffd10000 ffffc0bf7e210000 ffffc0bf7fd10000
>> LOAD[14]       bfffd40000       bfffd50000 ffffc0bf7fd40000 ffffc0bf7fd50000
>> LOAD[15]       bfffe00000       c000000000 ffffc0bf7fe00000 ffffc0bf80000000
>> Linux kdump
>> VMCOREINFO   :
>>  OSRELEASE=5.4.0-0
>>  PAGESIZE=65536
>> page_size    : 65536
>>  SYMBOL(init_uts_ns)=ffff800011ac5ca8
>>  SYMBOL(node_online_map)=ffff800011abd490
>>  SYMBOL(swapper_pg_dir)=ffff800011340000
>>  SYMBOL(_stext)=ffff800010081000
>>  SYMBOL(vmap_area_list)=ffff800011b89898
>>  SYMBOL(mem_section)=ffff00bf7be7e300
>>  LENGTH(mem_section)=64
>>  SIZE(mem_section)=16
>>  OFFSET(mem_section.section_mem_map)=0
>>  SIZE(page)=64
>>  SIZE(pglist_data)=6912
>>  SIZE(zone)=1920
>>  SIZE(free_area)=104
>>  SIZE(list_head)=16
>>  SIZE(nodemask_t)=8
>>  OFFSET(page.flags)=0
>>  OFFSET(page._refcount)=52
>>  OFFSET(page.mapping)=24
>>  OFFSET(page.lru)=8
>>  OFFSET(page._mapcount)=48
>>  OFFSET(page.private)=40
>>  OFFSET(page.compound_dtor)=16
>>  OFFSET(page.compound_order)=17
>>  OFFSET(page.compound_head)=8
>>  OFFSET(pglist_data.node_zones)=0
>>  OFFSET(pglist_data.nr_zones)=6176
>>  OFFSET(pglist_data.node_start_pfn)=6184
>>  OFFSET(pglist_data.node_spanned_pages)=6200
>>  OFFSET(pglist_data.node_id)=6208
>>  OFFSET(zone.free_area)=192
>>  OFFSET(zone.vm_stat)=1728
>>  OFFSET(zone.spanned_pages)=104
>>  OFFSET(free_area.free_list)=0
>>  OFFSET(list_head.next)=0
>>  OFFSET(list_head.prev)=8
>>  OFFSET(vmap_are14
>>  SYMBOL(logt_idx)=ffff800011ed7294
>>  SYMBOL(clear_idx)=ffff800011ed4ce0
>> og)=16
>>  OFFSET(printk_log.ts_nsec)=0
>>  OFFSET(printk_log.len)=8
>>  OFFSET(printk_log.text_len)=10
>>  OFFSET(printk_log.dict_len)=12
>>  LENGTH(free_area.free_list)=6
>>  NUMBER(NR_FREE_PAGES)=0
>>  NUMBER(PG_lru)=4
>>  NUMBER(PG_private)=13
>>  NUMBER(PG_swapcache)=10
>>  NUMBER(PG_swapbacked)=19
>>  NUMBER(PG_slab)=9
>>  NUMBER(PG_hwpoison)=22
>>  NUMBER(PG_head_mask)=65536
>>  NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
>>  NUMBER(HUGETLB_PAGE_DTOR)=2
>>  NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
>>  NUMBER(VA_BITS)=52
>>  NUMBER(kimage_voffset)=0xffff7fff7d600000
>>  NUMBER(PHYS_OFFSET)=0x80000000
>>  KERNELOFFSET=0
>>  CRASHTIME=1574096441
>> 
>> phys_base    : 80000000 (vmcoreinfo)
>> 
>> max_mapnr    : c00000
>> There is enough free memory to be done in one cycle.
>> 
>> Buffer size for the cyclic mode: 3145728
>> va_bits      : 47
>> page_offset  : ffffc00000000000
>> calculate_plat_config: Parm64: Can't detd
>> [FAILED] Failed to start Kdump Vmcore Save Service.
>> 
>> 
>> < reboot >
>> 
>> 
>> CAN YOU ADD A VERSION BANNER TO THE MAKEDUMPFILE SO WE CAN BE SURE OF WHAT IS BEING USED WHEN IT STARTS ?
> 
> It will not work with a default vanilla (upstream) kernel, as you need to
> apply the patches which export TCR_EL1.T1SZ and 'MAX_PHYSMEM_BITS' in
> vmcoreinfo (see [0] and [1] for details).
> 
> I mentioned the same in the cover letter (see:
> <http://lists.infradead.org/pipermail/kexec/2019-November/023963.html>)
> 
> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> 
> Regards,
> Bhupesh
> 
> 


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-18 19:12       ` John Donnelly
@ 2019-11-18 20:00         ` John Donnelly
  0 siblings, 0 replies; 34+ messages in thread
From: John Donnelly @ 2019-11-18 20:00 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA




> On Nov 18, 2019, at 1:12 PM, John Donnelly <john.p.donnelly@oracle.com> wrote:
> 
> I will update and test a new kernel.
> 
> 
> 
> 
>> On Nov 18, 2019, at 1:01 PM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
>> 
>> Hi John,
>> 
>> On Mon, Nov 18, 2019 at 10:41 PM John Donnelly
>> <john.p.donnelly@oracle.com> wrote:
>>> 
>>> Hi,
>>> 
>>> See below .
>>> 
>>>> On Nov 17, 2019, at 11:12 PM, Prabhakar Kushwaha <prabhakar.pkin@gmail.com> wrote:
>>>> 
>>>> Re-sending in plain text mode.
>>>> 
>>>> On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
>>>>> 
>>>>> Changes since v3:
>>>>> ----------------
>>>>> - v3 can be seen here:
>>>>> http://lists.infradead.org/pipermail/kexec/2019-March/022534.html
>>>>> - Added a new patch (via [PATCH 4/4]) which marks '--mem-usage' option as
>>>>> unsupported for arm64 architecture. With the newer arm64 kernels
>>>>> supporting 48-bit/52-bit VA address spaces and keeping a single
>>>>> binary for supporting the same, the address of
>>>>> kernel symbols like _stext, which could earlier be used to determine the
>>>>> VA_BITS value, can no longer be used to determine whether VA_BITS is set to 48
>>>>> or 52 in the kernel space. Hence for now, it makes sense to mark
>>>>> '--mem-usage' option as unsupported for arm64 architecture until
>>>>> we have more clarity from arm64 kernel maintainers on how to manage
>>>>> the same in future kernel/makedumpfile versions.
>>>>> 
>>>>> Changes since v2:
>>>>> ----------------
>>>>> - v2 can be seen here:
>>>>> http://lists.infradead.org/pipermail/kexec/2019-February/022456.html
>>>>> - I missed some comments from Kazu sent on the LVA v1 patch when I sent
>>>>> out the v2. So, addressing them now in v3.
>>>>> - Also added a patch that adds a tree-wide feature to read
>>>>> 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
>>>>> 
>>>>> Changes since v1:
>>>>> ----------------
>>>>> - v1 was sent as two separate patches:
>>>>> http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
>>>>> (ARMv8.2-LPA)
>>>>> http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
>>>>> (ARMv8.2-LVA)
>>>>> - v2 combined the two in a single patchset and also addresses Kazu's
>>>>> review comments.
>>>>> 
>>>>> This patchset adds support for ARMv8.2 extensions in makedumpfile code.
>>>>> I cover the following cases with this patchset:
>>>>> - 48-bit kernel VA + 52-bit PA (LPA)
>>>>> - 52-bit kernel VA (LVA) + 52-bit PA (LPA)
>>>>> - 48-bit kernel VA + 52-bit user-space VA (LVA)
>>>>> - 52-bit kernel VA + 52-bit user-space VA (Full LVA)
>>>>> 
>>>>> This has been tested for the following user-cases:
>>>>> 1. Creating a dumpfile using /proc/vmcore,
>>>>> 2. Creating a dumpfile using /proc/kcore, and
>>>>> 3. Post-processing a vmcore.
>>>>> 
>>>>> I have tested this patchset on the following platforms, with kernels
>>>>> which support/do-not-support ARMv8.2 features:
>>>>> 1. CPUs which don't support ARMv8.2 features, e.g. qualcomm-amberwing,
>>>>> ampere-osprey.
>>>>> 2. Prototype models which support ARMv8.2 extensions (e.g. ARMv8 FVP
>>>>> simulation model).
>>>>> 
>>>>> Also a preparation patch has been added in this patchset which adds a
>>>>> common feature for archs (except arm64, for which similar support is
>>>>> added via subsequent patch) to retrieve 'MAX_PHYSMEM_BITS' from
>>>>> vmcoreinfo (if available).
>>>>> 
>>>>> I recently posted two kernel patches (see [0] and [1]) which append
>>>>> 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' to vmcoreinfo in the kernel
>>>>> code, so that user-space code can benefit from the same.
>>>>> 
>>>>> This patchset ensures backward compatibility for kernel versions in
>>>>> which 'TCR_EL1.T1SZ' and 'MAX_PHYSMEM_BITS' are not available in
>>>>> vmcoreinfo.
>>>>> 
>>>>> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
>>>>> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
>>>>> 
>>>>> Cc: John Donnelly <john.p.donnelly@oracle.com>
>>>>> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
>>>>> Cc: kexec@lists.infradead.org
>>>>> 
>>>>> Bhupesh Sharma (4):
>>>>> tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
>>>>> makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
>>>>> makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA
>>>>>  support)
>>>>> makedumpfile: Mark --mem-usage option unsupported for arm64
>>>>> 
>>>>> arch/arm.c     |   8 +-
>>>>> arch/arm64.c   | 438 ++++++++++++++++++++++++++++++++++++++++++---------------
>>>>> arch/ia64.c    |   7 +-
>>>>> arch/ppc.c     |   8 +-
>>>>> arch/ppc64.c   |  49 ++++---
>>>>> arch/s390x.c   |  29 ++--
>>>>> arch/sparc64.c |   9 +-
>>>>> arch/x86.c     |  34 +++--
>>>>> arch/x86_64.c  |  27 ++--
>>>>> makedumpfile.c |   7 +
>>>>> makedumpfile.h |   3 +-
>>>>> 11 files changed, 439 insertions(+), 180 deletions(-)
>>>>> 
>>>>> --
>>>> 
>>>> Tested this patch-set on Marvell's TX2 platform on top
>>>> commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
>>>> (devel branch)
>>>> 
>>>> Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
>>>> 
>>>> —p
>>> 
>>> 
>>>  Hi ,
>>> 
>>>  I tested this on an ARMv8.1 platform with a 5.4-rc4 kernel and it fails:
>>> 
>>> 
>>> 
>>> kdump: saving vmcore-dmesg.txt
>>> kdump: saving vmcore-dmesg.txt complete
>>> kdump: saving vmcore
>>> — snip: same LOAD map and VMCOREINFO dump as quoted earlier in the thread —
>>> va_bits      : 47
>>> page_offset  : ffffc00000000000
>>> calculate_plat_config: Parm64: Can't detd
>>> [FAILED] Failed to start Kdump Vmcore Save Service.
>>> 
>>> 
>>> < reboot >
>>> 
>>> 
>>> CAN YOU ADD A VERSION BANNER TO THE MAKEDUMPFILE SO WE CAN BE SURE OF WHAT IS BEING USED WHEN IT STARTS ?
>> 
>> It will not work with a default vanilla (upstream) kernel, as you need to
>> apply the patches which export TCR_EL1.T1SZ and 'MAX_PHYSMEM_BITS' in
>> vmcoreinfo (see [0] and [1] for details).
>> 
>> I mentioned the same in the cover letter (see:
>> <http://lists.infradead.org/pipermail/kexec/2019-November/023963.html>)
>> 
>> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
>> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
>> 
>> Regards,
>> Bhupesh
>> 



Hi,


Has anyone verified that the crash CLI works with 5.4.0? Or are you simply relying on getting a vmcore file? Are there dependencies on the crash CLI?



— snip —   





^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-18 19:01     ` Bhupesh Sharma
  2019-11-18 19:12       ` John Donnelly
@ 2019-11-20 16:33       ` John Donnelly
  2019-11-21 16:32         ` Bhupesh Sharma
  2019-12-05 20:59         ` Kazuhito Hagio
  1 sibling, 2 replies; 34+ messages in thread
From: John Donnelly @ 2019-11-20 16:33 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA

Hi,

  Recent test below 


> On Nov 18, 2019, at 1:01 PM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
> 
> Hi John,
> 
> On Mon, Nov 18, 2019 at 10:41 PM John Donnelly
> <john.p.donnelly@oracle.com> wrote:
>> 
>> Hi,
>> 
>> See below .
>> 
>>> On Nov 17, 2019, at 11:12 PM, Prabhakar Kushwaha <prabhakar.pkin@gmail.com> wrote:
>>> 
>>> Re-sending in plain text mode.
>>> 
>>> On Tue, Nov 12, 2019 at 4:39 PM Bhupesh Sharma <bhsharma@redhat.com> wrote:
>>>> 
>>>> — snip: cover letter, quoted in full earlier in the thread —
>>> 
>>> Tested this patch-set on Marvell's TX2 platform on top
>>> commit(82e6cce2219a) of https://git.code.sf.net/p/makedumpfile/code
>>> (devel branch)
>>> 
>>> Tested-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
>>> 
>>> —p
>> 
>> 
>>   Hi ,
>> 
>>   I tested this on an ARMv8.1 platform with a 5.4-rc4 kernel and it fails:
>> 
>> 
>> 
>> kdump: saving vmcore-dmesg.txt
>> kdump: saving vmcore-dmesg.txt complete
>> kdump: saving vmcore
>> — snip: same LOAD map and VMCOREINFO dump as quoted earlier in the thread —
>> va_bits      : 47
>> page_offset  : ffffc00000000000
>> calculate_plat_config: Parm64: Can't detd
>> [FAILED] Failed to start Kdump Vmcore Save Service.
>> 
>> 
>> < reboot >
>> 
>> 
>> CAN YOU ADD A VERSION BANNER TO THE MAKEDUMPFILE SO WE CAN BE SURE OF WHAT IS BEING USED WHEN IT STARTS ?
> 
> It will not work with a default vanilla (upstream) kernel, as you need to
> apply the patches which export TCR_EL1.T1SZ and 'MAX_PHYSMEM_BITS' in
> vmcoreinfo (see [0] and [1] for details).
> 
> I mentioned the same in the cover letter (see:
> <http://lists.infradead.org/pipermail/kexec/2019-November/023963.html>)
> 
> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> [1]. http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> 
> Regards,
> Bhupesh
> 

This is your makedumpfile pulled from SourceForge.

It would be helpful if you bumped the VERSION and DATE so we can be certain we are using the correct pieces.




   kdump: saving vmcore
makedumpfile 1.6.6, 27 Jun 2019.
sadump: unsupported architecture
               phys_start         phys_end       virt_start         virt_end
LOAD[ 0]         92a80000         94fe0000 ffff800010080000 ffff8000125e0000
LOAD[ 1]         90000000         92000000 ffffc00010000000 ffffc00012000000
LOAD[ 2]         928c0000         dfe00000 ffffc000128c0000 ffffc0005fe00000
LOAD[ 3]         ffe00000         fffa0000 ffffc0007fe00000 ffffc0007ffa0000
LOAD[ 4]        880000000       1000000000 ffffc00800000000 ffffc00f80000000
LOAD[ 5]       8800000000       bff7030000 ffffc08780000000 ffffc0bf77030000
LOAD[ 6]       bff7060000       bff72b0000 ffffc0bf77060000 ffffc0bf772b0000
LOAD[ 7]       bff72f0000       bff8030000 ffffc0bf772f0000 ffffc0bf78030000
LOAD[ 8]       bff8050000       bff8070000 ffffc0bf78050000 ffffc0bf78070000
LOAD[ 9]       bff80d0000       bff8270000 ffffc0bf780d0000 ffffc0bf78270000
LOAD[10]       bff8280000       bff83d0000 ffffc0bf78280000 ffffc0bf783d0000
LOAD[11]       bff8870000       bffc1a0000 ffffc0bf78870000 ffffc0bf7c1a0000
LOAD[12]       bffc1c0000       bffc1d0000 ffffc0bf7c1c0000 ffffc0bf7c1d0000
LOAD[13]       bffe210000       bfffd10000 ffffc0bf7e210000 ffffc0bf7fd10000
LOAD[14]       bfffd40000       bfffd50000 ffffc0bf7fd40000 ffffc0bf7fd50000
LOAD[15]       bfffe00000       c000000000 ffffc0bf7fe00000 ffffc0bf80000000
Linux kdump
VMCOREINFO   :
  OSRELEASE=5.4.0-rc8
  PAGESIZE=65536
page_size    : 65536
  SYMBOL(init_uts_ns)=ffff800011a65ca8
  SYMBOL(node_online_map)=ffff800011a5d490
  SYMBOL(swapper_pg_dir)=ffff8000112f0000
  SYMBOL(_stext)=ffff800010081000
  SYMBOL(vmap_area_list)=ffff800011b29a98
  SYMBOL(mem_section)=ffff00bf7be7e300
  LENGTH(mem_section)=64
  SIZE(mem_section)=16
  OFFSET(mem_section.section_mem_map)=0
  NUMBER(MAX_PHYSMEM_BITS)=48
  SIZE(page)=64
  SIZE(pglist_data)=6912
  SIZE(zone)=1920
  SIZE(free_area)=104
  SIZE(list_head)=16
  SIZE(nodemask_t)=8
  OFFSET(page.flags)=0
  OFFSET(page._refcount)=52
  OFFSET(page.mapping)=24
  OFFSET(page.lru)=8
  OFFSET(page._mapcount)=48
  OFFSET(page.private)=40
  OFFSET(page.compound_dtor)=16
  OFFSET(page.compound_order)=17
  OFFSET(page.compound_head)=8
  OFFSET(pglist_data.node_zones)=0
  OFFSET(pglist_data.nr_zones)=6176
  OFFSET(pglist_data.node_start_pfn)=6184
  OFFSET(pglist_data.node_spanned_pages)=6200
  OFFSET(pglist_data.node_id)=6208
  OFFSET(zone.free_area)=192
  OFFSET(zone.vm_stat)=1728
  OFFSET(zone.spanned_pages)=104
  OFFSET(free_area.free_list)=0
  OFFSET(list_head.next)=0
  OFFSET(list_head.prev)=8
  OFFSET(vmap_area.va_start)=0
  OFFSET(vmap_area.list)=40
  LENGTH(zone.free_area)=14
  SYMBOL(log_buf)=ffff800011ada808
  SYMBOL(log_buf_len)=ffff800011ada810
  SYMBOL(log_first_idx)=ffff800011e772d4
  SYMBOL(clear_idx)=ffff800011e74d20
  SYMBOL(log_next_idx)=ffff800011e772e0
  SIZE(printk_log)=16
  OFFSET(printk_log.ts_nsec)=0
  OFFSET(printk_log.len)=8
  OFFSET(printk_log.text_len)=10
  OFFSET(printk_log.dict_len)=12
  LENGTH(free_area.free_list)=6
  NUMBER(NR_FREE_PAGES)=0
  NUMBER(PG_lru)=4
  NUMBER(PG_private)=13
  NUMBER(PG_swapcache)=10
  NUMBER(PG_swapbacked)=19
  NUMBER(PG_slab)=9
  NUMBER(PG_hwpoison)=22
  NUMBER(PG_head_mask)=65536
  NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
  NUMBER(HUGETLB_PAGE_DTOR)=2
  NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
  NUMBER(VA_BITS)=48
  NUMBER(kimage_voffset)=0xffff7fff7d600000
  NUMBER(PHYS_OFFSET)=0x80000000
  NUMBER(tcr_el1_t1sz)=0x10
  KERNELOFFSET=0
  CRASHTIME=1574266958

phys_base    : 80000000 (vmcoreinfo)

max_mapnr    : c00000
There is enough free memory to be done in one cycle.

Buffer size for the cyclic mode: 3145728
va_bits      : 47
page_offset  : ffffc00000000000
kdump: saving vmcore failed



================


— kernel patch applied to 5.4.0-rc8 



vabits_actual variable on arm64 indicates the actual VA space size,
and allows a single binary to support both 48-bit and 52-bit VA
spaces.

If the ARMv8.2-LVA optional feature is present, and we are running
with a 64KB page size; then it is possible to use 52-bits of address
space for both userspace and kernel addresses. However, any kernel
binary that supports 52-bit must also be able to fall back to 48-bit
at early boot time if the hardware feature is not present.

Since TCR_EL1.T1SZ indicates the size offset of the memory region
addressed by TTBR1_EL1 (and hence can be used for determining the
vabits_actual value) it makes more sense to export the same in
vmcoreinfo rather than vabits_actual variable, as the name of the
variable can change in future kernel versions, but the architectural
constructs like TCR_EL1.T1SZ can be used better to indicate intended
specific fields to user-space.

User-space utilities like makedumpfile and crash-utility, need to
read/write this value from/to vmcoreinfo for determining if a virtual
address lies in the linear map range.

The user-space computation for determining whether an address lies in
the linear map range is the same as we have in kernel-space:

  #define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual -
1)))

Copied from kexec working group

Signed-off-by: John Donnelly <john.p.donnelly@oracle.com>
---
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 arch/arm64/kernel/crash_core.c         | 10 ++++++++++
 kernel/crash_core.c                    |  1 +
 3 files changed, 12 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 3df60f97da1f..a0f789fa25f3 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -215,6 +215,7 @@
 #define TCR_TxSZ(x)		(TCR_T0SZ(x) | TCR_T1SZ(x))
 #define TCR_TxSZ_WIDTH		6
 #define TCR_T0SZ_MASK		(((UL(1) << TCR_TxSZ_WIDTH) - 1) << TCR_T0SZ_OFFSET)
+#define TCR_T1SZ_MASK		(((UL(1) << TCR_TxSZ_WIDTH) - 1) << TCR_T1SZ_OFFSET)
 
 #define TCR_EPD0_SHIFT		7
 #define TCR_EPD0_MASK		(UL(1) << TCR_EPD0_SHIFT)
diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
index ca4c3e12d8c5..f7027142030f 100644
--- a/arch/arm64/kernel/crash_core.c
+++ b/arch/arm64/kernel/crash_core.c
@@ -7,6 +7,14 @@
 #include <linux/crash_core.h>
 #include <asm/memory.h>
 
+static inline u64 get_tcr_el1_t1sz(void);
+
+static inline u64 get_tcr_el1_t1sz(void)
+{
+	return (read_sysreg(tcr_el1) & TCR_T1SZ_MASK) >> TCR_T1SZ_OFFSET;
+}
+
+
 void arch_crash_save_vmcoreinfo(void)
 {
 	VMCOREINFO_NUMBER(VA_BITS);
@@ -15,5 +23,7 @@ void arch_crash_save_vmcoreinfo(void)
 						kimage_voffset);
 	vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
 						PHYS_OFFSET);
+	vmcoreinfo_append_str("NUMBER(tcr_el1_t1sz)=0x%llx\n",
+						get_tcr_el1_t1sz());
 	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
 }
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index f0061fec74df..157d0c2ec277 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -469,6 +469,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
 	VMCOREINFO_STRUCT_SIZE(mem_section);
 	VMCOREINFO_OFFSET(mem_section, section_mem_map);
+	VMCOREINFO_NUMBER(MAX_PHYSMEM_BITS);
 #endif
 	VMCOREINFO_STRUCT_SIZE(page);
 	VMCOREINFO_STRUCT_SIZE(pglist_data);
-- 
2.20.1




_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-20 16:33       ` John Donnelly
@ 2019-11-21 16:32         ` Bhupesh Sharma
  2019-11-21 16:59           ` John Donnelly
  2019-12-05 20:59         ` Kazuhito Hagio
  1 sibling, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-11-21 16:32 UTC (permalink / raw)
  To: John Donnelly
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA

> On Wed, Nov 20, 2019 at 10:03 PM John Donnelly <john.p.donnelly@oracle.com> wrote:
>
> Hi,
>
>   Recent test below
>  This is your makedumpfile pulled from sourceforge .

Do you mean github? I don't remember pushing anything to sourceforge.
Please share the exact branch name and the source URL for the
makedumpfile you are using.

> It would be helpful if you bumped the VERSION and DATE to be certain we are using the correct pieces .

You can print the makedumpfile version from your scripts; it tells you
which makedumpfile version you are running. Note that this indicates
the latest released version, not the development branch. The
development branch is where changes under test (like this one) are
stabilized, whereas a release bumps the VERSION number and records the
DATE on which the release was made.

# makedumpfile -v
makedumpfile: version 1.6.6 (released on 27 Jun 2019)
lzo    enabled

> kdump: saving vmcore
> makedumpfile 1.6.6, 27 Jun 2019.
> sadump: unsupported architecture
>               phys_start         phys_end       virt_start         virt_end
> LOAD[ 0]         92a80000         94fe0000 ffff800010080000 ffff8000125e0000
> LOAD[ 1]         90000000         92000000 ffffc00010000000 ffffc00012000000
> LOAD[ 2]         928c0000         dfe00000 ffffc000128c0000 ffffc0005fe00000
> LOAD[ 3]         ffe00000         fffa0000 ffffc0007fe00000 ffffc0007ffa0000
> LOAD[ 4]        880000000       1000000000 ffffc00800000000 ffffc00f80000000
> LOAD[ 5]       8800000000       bff7030000 ffffc08780000000 ffffc0bf77030000
> LOAD[ 6]       bff7060000       bff72b0000 ffffc0bf77060000 ffffc0bf772b0000
> LOAD[ 7]       bff72f0000       bff8030000 ffffc0bf772f0000 ffffc0bf78030000
> LOAD[ 8]       bff8050000       bff8070000 ffffc0bf78050000 ffffc0bf78070000
> LOAD[ 9]       bff80d0000       bff8270000 ffffc0bf780d0000 ffffc0bf78270000
> LOAD[10]       bff8280000       bff83d0000 ffffc0bf78280000 ffffc0bf783d0000
> LOAD[11]       bff8870000       bffc1a0000 ffffc0bf78870000 ffffc0bf7c1a0000
> LOAD[12]       bffc1c0000       bffc1d0000 ffffc0bf7c1c0000 ffffc0bf7c1d0000
> LOAD[13]       bffe210000       bfffd10000 ffffc0bf7e210000 ffffc0bf7fd10000
> LOAD[14]       bfffd40000       bfffd50000 ffffc0bf7fd40000 ffffc0bf7fd50000
> LOAD[15]       bfffe00000       c000000000 ffffc0bf7fe00000 ffffc0bf80000000
> Linux kdump
> VMCOREINFO   :
  OSRELEASE=5.4.0-rc8
  PAGESIZE=65536
> page_size    : 65536
  SYMBOL(init_uts_ns)=ffff800011a65ca8
  SYMBOL(node_online_map)=ffff800011a5d490
  SYMBOL(swapper_pg_dir)=ffff8000112f0000
  SYMBOL(_stext)=ffff800010081000
  SYMBOL(vmap_area_list)=ffff800011b29a98
  SYMBOL(mem_section)=ffff00bf7be7e300
  LENGTH(mem_section)=64
  SIZE(mem_section)=16
  OFFSET(mem_section.section_mem_map)=0
  NUMBER(MAX_PHYSMEM_BITS)=48
  OFFSET(vmap_area.va_start)=0
  OFFSET(vmap_area.list)=40
  LENGTH(zone.free_area)=14
  SYMBOL(log_buf)=ffff800011ada808
  SYMBOL(log_buf_len)=ffff800011ada810
  SYMBOL(log_first_idx)=ffff800011e772d4
  SYMBOL(clear_idx)=ffff800011e74d20
  SYMBOL(log_next_idx)=ffff800011e772e0
  SIZE(printk_log)=16
  OFFSET(printk_log.ts_nsec)=0
  OFFSET(printk_log.len)=8
  OFFSET(printk_log.text_len)=10
  OFFSET(printk_log.dict_len)=12
  LENGTH(free_area.free_list)=6
  NUMBER(NR_FREE_PAGES)=0
  NUMBER(PG_lru)=4
  NUMBER(PG_private)=13
  NUMBER(PG_swapcache)=10
  NUMBER(PG_swapbacked)=19
  NUMBER(PG_slab)=9
  NUMBER(PG_hwpoison)=22
  NUMBER(PG_head_mask)=65536
  NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
  NUMBER(HUGETLB_PAGE_DTOR)=2
  NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
  NUMBER(VA_BITS)=48
  NUMBER(kimage_voffset)=0xffff7fff7d600000
  NUMBER(PHYS_OFFSET)=0x80000000
  NUMBER(tcr_el1_t1sz)=0x10
  KERNELOFFSET=0
  CRASHTIME=1574266958

> phys_base    : 80000000 (vmcoreinfo)

> max_mapnr    : c00000
> There is enough free memory to be done in one cycle.

> Buffer size for the cyclic mode: 3145728
> va_bits      : 47
> page_offset  : ffffc00000000000
> kdump: saving vmcore failed

You again seem to be using an old/incorrect version of makedumpfile.
As you can see from [0] and [1], the newer makedumpfile patches I
posted print where va_bits is derived from - the _stext symbol or
vmcoreinfo.

Since you are running a kdump test, it should print something like
this for va_bits if you have the correct makedumpfile changes compiled
and installed (via 'make install') - note that the source from which
va_bits is determined is printed in brackets:
phys_base    : 80000000 (vmcoreinfo)

max_mapnr    : 97fd00
There is enough free memory to be done in one cycle.

Buffer size for the cyclic mode: 2490176
va_bits        : 48 (vmcoreinfo)
page_offset    : ffff000000000000 (approximation)
kimage_voffset   : fffeffff8fc00000
max_physmem_bits : 52
section_size_bits: 30

Regards,
Bhupesh

[0]. <https://github.com/bhupesh-sharma/makedumpfile/blob/52-bit-va-support-via-vmcore-upstream-v4/arch/arm64.c#L468>
[1]. <https://github.com/bhupesh-sharma/makedumpfile/blob/52-bit-va-support-via-vmcore-upstream-v4/arch/arm64.c#L511>




* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-21 16:32         ` Bhupesh Sharma
@ 2019-11-21 16:59           ` John Donnelly
  2019-11-21 19:20             ` John Donnelly
  0 siblings, 1 reply; 34+ messages in thread
From: John Donnelly @ 2019-11-21 16:59 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA



> On Nov 21, 2019, at 10:32 AM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
> 
>> On Wed, Nov 20, 2019 at 10:03 PM John Donnelly <john.p.donnelly@oracle.com> wrote:
>> 
>> Hi,
>> 
>>  Recent test below
>> This is your makedumpfile pulled from sourceforge .
> 
> Do you mean github? I don't remember pushing anything to sourceforge.
> Please share the exact branch name and the source URL for the
> makedumpfile you are using

Hi. You are correct - GitHub. I used the URL you posted below; I do not see the arch/arm64.c changes in the zip version I downloaded.

I am not a GUI/GitLab user. Can you please send a tarball copy of your working makedumpfile CLI via email and I will verify that it works.





> 
>> It would be helpful if you bumped the VERSION and DATE to be certain we are using the correct pieces .
> 
> You can print makedumpfile version in your scriptware. It lets you
> know the latest makedumpfile version. Note that this indicates the
> latest released version and not the development branch. The
> development branch is for things under test (like this change) and
> being stabilized whereas the released version contains a bump to a new
> VERSION number and DATE at which a release is made.
> 
> # makedumpfile -v
> makedumpfile: version 1.6.6 (released on 27 Jun 2019)
> lzo    enabled
> 
>> kdump: saving vmcore
>> makedumpfile 1.6.6, 27 Jun 2019.
>> sadump: unsupported architecture
>>              phys_start         phys_end       virt_start         virt_end
>> LOAD[ 0]         92a80000         94fe0000 ffff800010080000 ffff8000125e0000
>> LOAD[ 1]         90000000         92000000 ffffc00010000000 ffffc00012000000
>> LOAD[ 2]         928c0000         dfe00000 ffffc000128c0000 ffffc0005fe00000
>> LOAD[ 3]         ffe00000         fffa0000 ffffc0007fe00000 ffffc0007ffa0000
>> LOAD[ 4]        880000000       1000000000 ffffc00800000000 ffffc00f80000000
>> LOAD[ 5]       8800000000       bff7030000 ffffc08780000000 ffffc0bf77030000
>> LOAD[ 6]       bff7060000       bff72b0000 ffffc0bf77060000 ffffc0bf772b0000
>> LOAD[ 7]       bff72f0000       bff8030000 ffffc0bf772f0000 ffffc0bf78030000
>> LOAD[ 8]       bff8050000       bff8070000 ffffc0bf78050000 ffffc0bf78070000
>> LOAD[ 9]       bff80d0000       bff8270000 ffffc0bf780d0000 ffffc0bf78270000
>> LOAD[10]       bff8280000       bff83d0000 ffffc0bf78280000 ffffc0bf783d0000
>> LOAD[11]       bff8870000       bffc1a0000 ffffc0bf78870000 ffffc0bf7c1a0000
>> LOAD[12]       bffc1c0000       bffc1d0000 ffffc0bf7c1c0000 ffffc0bf7c1d0000
>> LOAD[13]       bffe210000       bfffd10000 ffffc0bf7e210000 ffffc0bf7fd10000
>> LOAD[14]       bfffd40000       bfffd50000 ffffc0bf7fd40000 ffffc0bf7fd50000
>> LOAD[15]       bfffe00000       c000000000 ffffc0bf7fe00000 ffffc0bf80000000
>> Linux kdump
>> VMCOREINFO   :
>  OSRELEASE=5.4.0-rc8
>  PAGESIZE=65536
>> page_size    : 65536
>  SYMBOL(init_uts_ns)=ffff800011a65ca8
>  SYMBOL(node_online_map)=ffff800011a5d490
>  SYMBOL(swapper_pg_dir)=ffff8000112f0000
>  SYMBOL(_stext)=ffff800010081000
>  SYMBOL(vmap_area_list)=ffff800011b29a98
>  SYMBOL(mem_section)=ffff00bf7be7e300
>  LENGTH(mem_section)=64
>  SIZE(mem_section)=16
>  OFFSET(mem_section.section_mem_map)=0
>  NUMBER(MAX_PHYSMEM_BITS)=48
>  OFFSET(vmap_area.va_start)=0
>  OFFSET(vmap_area.list)=40
>  LENGTH(zone.free_area)=14
>  SYMBOL(log_buf)=ffff800011ada808
>  SYMBOL(log_buf_len)=ffff800011ada810
>  SYMBOL(log_first_idx)=ffff800011e772d4
>  SYMBOL(clear_idx)=ffff800011e74d20
>  SYMBOL(log_next_idx)=ffff800011e772e0
>  SIZE(printk_log)=16
>  OFFSET(printk_log.ts_nsec)=0
>  OFFSET(printk_log.len)=8
>  OFFSET(printk_log.text_len)=10
>  OFFSET(printk_log.dict_len)=12
>  LENGTH(free_area.free_list)=6
>  NUMBER(NR_FREE_PAGES)=0
>  NUMBER(PG_lru)=4
>  NUMBER(PG_private)=13
>  NUMBER(PG_swapcache)=10
>  NUMBER(PG_swapbacked)=19
>  NUMBER(PG_slab)=9
>  NUMBER(PG_hwpoison)=22
>  NUMBER(PG_head_mask)=65536
>  NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
>  NUMBER(HUGETLB_PAGE_DTOR)=2
>  NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
>  NUMBER(VA_BITS)=48
>  NUMBER(kimage_voffset)=0xffff7fff7d600000
>  NUMBER(PHYS_OFFSET)=0x80000000
>  NUMBER(tcr_el1_t1sz)=0x10
>  KERNELOFFSET=0
>  CRASHTIME=1574266958
> 
>> phys_base    : 80000000 (vmcoreinfo)
> 
>> max_mapnr    : c00000
>> There is enough free memory to be done in one cycle.
> 
>> Buffer size for the cyclic mode: 3145728
>> va_bits      : 47
>> page_offset  : ffffc00000000000
>> kdump: saving vmcore failed
> 
> You again seem to be using an old/incorrect version of makedumpfile.
> As you can see here from [0] and [1] the newer makedumpfile patches I
> posted print where the va_bits are derived from - _stext symbol or
> vmcoreinfo.
> 
> Since you are running a kdump test, it should print something like
> this for va_bits if you have the correct makedumpfile changes compiled
> in and installed (via make install) - notice the source from where
> va_bits is determined properly is printed in brackets:
> phys_base    : 80000000 (vmcoreinfo)
> 
> max_mapnr    : 97fd00
> There is enough free memory to be done in one cycle.
> 
> Buffer size for the cyclic mode: 2490176
> va_bits        : 48 (vmcoreinfo)
> page_offset    : ffff000000000000 (approximation)
> kimage_voffset   : fffeffff8fc00000
> max_physmem_bits : 52
> section_size_bits: 30
> 
> Regards,
> Bhupesh
> 
> [0]. <https://github.com/bhupesh-sharma/makedumpfile/blob/52-bit-va-support-via-vmcore-upstream-v4/arch/arm64.c#L468>
> [1]. <https://github.com/bhupesh-sharma/makedumpfile/blob/52-bit-va-support-via-vmcore-upstream-v4/arch/arm64.c#L511>
> 
> 




* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-21 16:59           ` John Donnelly
@ 2019-11-21 19:20             ` John Donnelly
  2019-11-21 21:52               ` John Donnelly
  0 siblings, 1 reply; 34+ messages in thread
From: John Donnelly @ 2019-11-21 19:20 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA



> On Nov 21, 2019, at 10:59 AM, John Donnelly <john.p.donnelly@oracle.com> wrote:
> 
> 
> 
>> On Nov 21, 2019, at 10:32 AM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
>> 
>>> On Wed, Nov 20, 2019 at 10:03 PM John Donnelly <john.p.donnelly@oracle.com> wrote:
>>> 
>>> Hi,
>>> 
>>> Recent test below
>>> This is your makedumpfile pulled from sourceforge .
>> 
>> Do you mean github? I don't remember pushing anything to sourceforge.
>> Please share the exact branch name and the source URL for the
>> makedumpfile you are using
> 
> Hi,   You are correct -  GitHub -    I used your url posted below ; I do not see the arch/arm64.c changes in the zip  version I downloaded . 
> 
> I am not a GUI/gitlab user. Can you please send a  tarball copy of your working makedumpfile   CLI  via email and I will verify it works.
> 


Hi,

I was able to fork and clone your work area.

I can see makedumpfile works now!

Fantastic; thank you for your patience!








* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-21 19:20             ` John Donnelly
@ 2019-11-21 21:52               ` John Donnelly
  2019-11-22 12:30                 ` John Donnelly
  0 siblings, 1 reply; 34+ messages in thread
From: John Donnelly @ 2019-11-21 21:52 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA



> On Nov 21, 2019, at 1:20 PM, John Donnelly <john.p.donnelly@oracle.com> wrote:
> 
> 
> 
>> On Nov 21, 2019, at 10:59 AM, John Donnelly <john.p.donnelly@oracle.com> wrote:
>> 
>> 
>> 
>>> On Nov 21, 2019, at 10:32 AM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
>>> 
>>>> On Wed, Nov 20, 2019 at 10:03 PM John Donnelly <john.p.donnelly@oracle.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> Recent test below
>>>> This is your makedumpfile pulled from sourceforge .
>>> 
>>> Do you mean github? I don't remember pushing anything to sourceforge.
>>> Please share the exact branch name and the source URL for the
>>> makedumpfile you are using
>> 
>> Hi,   You are correct -  GitHub -    I used your url posted below ; I do not see the arch/arm64.c changes in the zip  version I downloaded . 
>> 
>> I am not a GUI/gitlab user. Can you please send a  tarball copy of your working makedumpfile   CLI  via email and I will verify it works.
>> 
> 
> 
>  Hi 
> 
> 
>   I was able to fork and clone your work area .
> 
> I can see makedumpfile works now ! 
> 
> Fantastic ;;  Thank you for your patience !
> 



I did some basic unit tests.

This patch for makedumpfile ONLY WORKS on the 5.4.0-rc8 kernel.

It does not work with the previous 4.18 kernel.

Is this supposed to be backwards compatible?





> 
> 
> 
> 
> 




* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-21 21:52               ` John Donnelly
@ 2019-11-22 12:30                 ` John Donnelly
  2019-11-22 14:22                   ` John Donnelly
  0 siblings, 1 reply; 34+ messages in thread
From: John Donnelly @ 2019-11-22 12:30 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA



> On Nov 21, 2019, at 3:52 PM, John Donnelly <john.p.donnelly@oracle.com> wrote:
> 
> 
> 
>> On Nov 21, 2019, at 1:20 PM, John Donnelly <john.p.donnelly@oracle.com> wrote:
>> 
>> 
>> 
>>> On Nov 21, 2019, at 10:59 AM, John Donnelly <john.p.donnelly@oracle.com> wrote:
>>> 
>>> 
>>> 
>>>> On Nov 21, 2019, at 10:32 AM, Bhupesh Sharma <bhsharma@redhat.com> wrote:
>>>> 
>>>>> On Wed, Nov 20, 2019 at 10:03 PM John Donnelly <john.p.donnelly@oracle.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> Recent test below
>>>>> This is your makedumpfile pulled from sourceforge .
>>>> 
>>>> Do you mean github? I don't remember pushing anything to sourceforge.
>>>> Please share the exact branch name and the source URL for the
>>>> makedumpfile you are using
>>> 
>>> Hi,   You are correct -  GitHub -    I used your url posted below ; I do not see the arch/arm64.c changes in the zip  version I downloaded . 
>>> 
>>> I am not a GUI/gitlab user. Can you please send a  tarball copy of your working makedumpfile   CLI  via email and I will verify it works.
>>> 
>> 
>> 
>> Hi 
>> 
>> 
>>  I was able to fork and clone your work area .
>> 
>> I can see makedumpfile works now ! 
>> 
>> Fantastic ;;  Thank you for your patience !
>> 
> 
> 
> 
>   I did some basic unit tests. 
> 
>   This patch for  makedumpfile  ONLY WORKS on 5.4.0-rc8 kernel. 
> 
>  It does not work with a previous 4.18 kernel. 
> 
> Is this suppose to be backwards compatible  ?
> 
> 



Debug output from your makedumpfile run on the 4.18 kernel:



kdump: saving vmcore
sadump: unsupported architecture
               phys_start         phys_end       virt_start         virt_end
LOAD[ 0]         90080000         91f50000 ffff000010080000 ffff000011f50000
LOAD[ 1]         90000000         92000000 ffff800010000000 ffff800012000000
LOAD[ 2]         928c0000         e3e00000 ffff8000128c0000 ffff800063e00000
LOAD[ 3]         ffe00000         fffa0000 ffff80007fe00000 ffff80007ffa0000
LOAD[ 4]        880000000       1000000000 ffff800800000000 ffff800f80000000
LOAD[ 5]       8800000000       bff7030000 ffff808780000000 ffff80bf77030000
LOAD[ 6]       bff7060000       bff72b0000 ffff80bf77060000 ffff80bf772b0000
LOAD[ 7]       bff72f0000       bff8030000 ffff80bf772f0000 ffff80bf78030000
LOAD[ 8]       bff8050000       bff8070000 ffff80bf78050000 ffff80bf78070000
LOAD[ 9]       bff80d0000       bff8270000 ffff80bf780d0000 ffff80bf78270000
LOAD[10]       bff8280000       bff83d0000 ffff80bf78280000 ffff80bf783d0000
LOAD[11]       bff8870000       bffc1a0000 ffff80bf78870000 ffff80bf7c1a0000
LOAD[12]       bffc1c0000       bffc1d0000 ffff80bf7c1c0000 ffff80bf7c1d0000
LOAD[13]       bffe210000       bfffd10000 ffff80bf7e210000 ffff80bf7fd10000
LOAD[14]       bfffd40000       bfffd50000 ffff80bf7fd40000 ffff80bf7fd50000
LOAD[15]       bfffe00000       c000000000 ffff80bf7fe00000 ffff80bf80000000
Linux kdump
VMCOREINFO   :
  OSRELEASE=4.18.0-147.el8.aarch64   <<----  4.18 kernel
  PAGESIZE=65536
page_size    : 65536
  SYMBOL(init_uts_ns)=ffff0000114657a8
  SYMBOL(node_online_map)=ffff00001145d320
  SYMBOL(swapper_pg_dir)=ffff000010fa0000
  SYMBOL(_stext)=ffff000010081000
  SYMBOL(vmap_area_list)=ffff0000114ea220
  SYMBOL(mem_section)=ffff80bf7be7c600
  LENGTH(mem_section)=1024
  SIZE(mem_section)=16
  OFFSET(mem_section.section_mem_map)=0
  SIZE(page)=64
  SIZE(pglist_data)=6656
  SIZE(zone)=1728
  SIZE(free_area)=88
  SIZE(list_head)=16
  SIZE(nodemask_t)=8
  OFFSET(page.flags)=0
  OFFSET(page._refcount)=52
  OFFSET(page.mapping)=24
  OFFSET(page.lru)=8
  OFFSET(page._mapcount)=48
  OFFSET(page.private)=40
  OFFSET(page.compound_dtor)=16
  OFFSET(page.compound_order)=17
  OFFSET(page.compound_head)=8
  OFFSET(pglist_data.node_zones)=0
  OFFSET(pglist_data.nr_zones)=5984
  OFFSET(pglist_data.node_start_pfn)=5992
  OFFSET(pglist_data.node_spanned_pages)=6008
  OFFSET(pglist_data.node_id)=6016
  OFFSET(zone.free_area)=192
  OFFSET(zone.vm_stat)=1552
  OFFSET(zone.spanned_pages)=96
  OFFSET(free_area.free_list)=0
  OFFSET(list_head.next)=0
  OFFSET(list_head.prev)=8
  OFFSET(vmap_area.va_start)=0
  OFFSET(vmap_area.list)=48
  LENGTH(zone.free_area)=14
  SYMBOL(log_buf)=ffff00001149f670
  SYMBOL(log_buf_len)=ffff00001149f668
  SYMBOL(log_first_idx)=ffff000011cc574c
  SYMBOL(clear_idx)=ffff000011cc5758
  SYMBOL(log_next_idx)=ffff000011cc5748
  SIZE(printk_log)=16
  OFFSET(printk_log.ts_nsec)=0
  OFFSET(printk_log.len)=8
  OFFSET(printk_log.text_len)=10
  OFFSET(printk_log.dict_len)=12
  LENGTH(free_area.free_list)=5
  NUMBER(NR_FREE_PAGES)=0
  NUMBER(PG_lru)=5
  NUMBER(PG_private)=12
  NUMBER(PG_swapcache)=9
  NUMBER(PG_swapbacked)=18
  NUMBER(PG_slab)=8
  NUMBER(PG_hwpoison)=21
  NUMBER(PG_head_mask)=32768
  NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
  NUMBER(HUGETLB_PAGE_DTOR)=2
  NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
  NUMBER(VA_BITS)=48
  NUMBER(MAX_PHYSMEM_BITS)=52
  NUMBER(MAX_USER_VA_BITS)=52
  NUMBER(kimage_voffset)=0xfffeffff80000000
  NUMBER(PHYS_OFFSET)=0x80000000
  KERNELOFFSET=0
  CRASHTIME=1574425559

phys_base    : 80000000 (vmcoreinfo)

max_mapnr    : c00000
There is enough free memory to be done in one cycle.

Buffer size for the cyclic mode: 3145728
va_bits        : 48 (vmcoreinfo)
page_offset    : ffff000000000000 (approximation)
kimage_voffset   : fffeffff80000000
max_physmem_bits : 52
section_size_bits: 30
kdump: saving vmcore failed








* Re: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-22 12:30                 ` John Donnelly
@ 2019-11-22 14:22                   ` John Donnelly
  0 siblings, 0 replies; 34+ messages in thread
From: John Donnelly @ 2019-11-22 14:22 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Prabhakar Kushwaha, Ganapatrao Prabhakerrao Kulkarni,
	kexec mailing list, Kazuhito Hagio, Prabhakar Kushwaha,
	Bhupesh SHARMA



Hi Bhupesh,

I recall seeing a reference saying that modifications are also needed to the crash CLI to support 5.4.0-rc with your kernel patches cited here.

Where would I find that? I don't see crash on GitLab.


>>> 
>>> 
>>> Hi 
>>> 
>>> 
>>> I was able to fork and clone your work area .
>>> 
>>> I can see makedumpfile works now ! 
>>> 
>>> Fantastic ;;  Thank you for your patience !
>>> 
>> 




* RE: [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
  2019-11-12 11:08 ` [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available) Bhupesh Sharma
@ 2019-12-04 17:34   ` Kazuhito Hagio
  2019-12-05 18:17     ` Bhupesh Sharma
  0 siblings, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-04 17:34 UTC (permalink / raw)
  To: Bhupesh Sharma, kexec; +Cc: John Donnelly, bhupesh.linux

Hi Bhupesh,

Sorry for the late reply.

> -----Original Message-----
> This patch adds a common feature for archs (except arm64, for which
> similar support is added via subsequent patch) to retrieve
> 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).

We already have the calibrate_machdep_info() function, which sets
info->max_physmem_bits from vmcoreinfo, so practically we don't need
to add this patch for the benefit.

Thanks,
Kazu

> 
> I recently posted a kernel patch (see [0]) which appends
> 'MAX_PHYSMEM_BITS' to vmcoreinfo in the core code itself rather than
> in arch-specific code, so that user-space code can also benefit from
> this addition to the vmcoreinfo and use it as a standard way of
> determining 'SECTIONS_SHIFT' value in 'makedumpfile' utility.
> 
> This patch ensures backward compatibility for kernel versions in which
> 'MAX_PHYSMEM_BITS' is not available in vmcoreinfo.
> 
> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> 
> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> Cc: John Donnelly <john.p.donnelly@oracle.com>
> Cc: kexec@lists.infradead.org
> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> ---
>  arch/arm.c     |  8 +++++++-
>  arch/ia64.c    |  7 ++++++-
>  arch/ppc.c     |  8 +++++++-
>  arch/ppc64.c   | 49 ++++++++++++++++++++++++++++---------------------
>  arch/s390x.c   | 29 ++++++++++++++++++-----------
>  arch/sparc64.c |  9 +++++++--
>  arch/x86.c     | 34 ++++++++++++++++++++--------------
>  arch/x86_64.c  | 27 ++++++++++++++++-----------
>  8 files changed, 109 insertions(+), 62 deletions(-)
> 
> diff --git a/arch/arm.c b/arch/arm.c
> index af7442ac70bf..33536fc4dfc9 100644
> --- a/arch/arm.c
> +++ b/arch/arm.c
> @@ -81,7 +81,13 @@ int
>  get_machdep_info_arm(void)
>  {
>  	info->page_offset = SYMBOL(_stext) & 0xffff0000UL;
> -	info->max_physmem_bits = _MAX_PHYSMEM_BITS;
> +
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +	else
> +		info->max_physmem_bits = _MAX_PHYSMEM_BITS;
> +
>  	info->kernel_start = SYMBOL(_stext);
>  	info->section_size_bits = _SECTION_SIZE_BITS;
> 
> diff --git a/arch/ia64.c b/arch/ia64.c
> index 6c33cc7c8288..fb44dda47172 100644
> --- a/arch/ia64.c
> +++ b/arch/ia64.c
> @@ -85,7 +85,12 @@ get_machdep_info_ia64(void)
>  	}
> 
>  	info->section_size_bits = _SECTION_SIZE_BITS;
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> +
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +	else
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> 
>  	return TRUE;
>  }
> diff --git a/arch/ppc.c b/arch/ppc.c
> index 37c6a3b60cd3..ed9447427a30 100644
> --- a/arch/ppc.c
> +++ b/arch/ppc.c
> @@ -31,7 +31,13 @@ get_machdep_info_ppc(void)
>  	unsigned long vmlist, vmap_area_list, vmalloc_start;
> 
>  	info->section_size_bits = _SECTION_SIZE_BITS;
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> +
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +	else
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> +
>  	info->page_offset = __PAGE_OFFSET;
> 
>  	if (SYMBOL(_stext) != NOT_FOUND_SYMBOL)
> diff --git a/arch/ppc64.c b/arch/ppc64.c
> index 9d8f2525f608..a3984eebdced 100644
> --- a/arch/ppc64.c
> +++ b/arch/ppc64.c
> @@ -466,30 +466,37 @@ int
>  set_ppc64_max_physmem_bits(void)
>  {
>  	long array_len = ARRAY_LENGTH(mem_section);
> -	/*
> -	 * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
> -	 * newer kernels 3.7 onwards uses 46 bits.
> -	 */
> -
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> -	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> -		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> -		return TRUE;
> -
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
> -	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> -		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> -		return TRUE;
> 
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
> -	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> -		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
>  		return TRUE;
> +	} else {
> +		/*
> +		 * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
> +		 * newer kernels 3.7 onwards uses 46 bits.
> +		 */
> 
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
> -	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> -		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> -		return TRUE;
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> +		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> +				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +			return TRUE;
> +
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
> +		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> +				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +			return TRUE;
> +
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
> +		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> +				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +			return TRUE;
> +
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
> +		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> +				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +			return TRUE;
> +	}
> 
>  	return FALSE;
>  }
> diff --git a/arch/s390x.c b/arch/s390x.c
> index bf9d58e54fb7..4d17a783e5bd 100644
> --- a/arch/s390x.c
> +++ b/arch/s390x.c
> @@ -63,20 +63,27 @@ int
>  set_s390x_max_physmem_bits(void)
>  {
>  	long array_len = ARRAY_LENGTH(mem_section);
> -	/*
> -	 * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
> -	 * newer kernels uses 46 bits.
> -	 */
> 
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> -	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> -		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
>  		return TRUE;
> +	} else {
> +		/*
> +		 * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
> +		 * newer kernels uses 46 bits.
> +		 */
> 
> -	info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
> -	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> -		|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> -		return TRUE;
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> +		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> +				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +			return TRUE;
> +
> +		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
> +		if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> +				|| (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> +			return TRUE;
> +	}
> 
>  	return FALSE;
>  }
> diff --git a/arch/sparc64.c b/arch/sparc64.c
> index 1cfaa854ce6d..b93a05bdfe59 100644
> --- a/arch/sparc64.c
> +++ b/arch/sparc64.c
> @@ -25,10 +25,15 @@ int get_versiondep_info_sparc64(void)
>  {
>  	info->section_size_bits = _SECTION_SIZE_BITS;
> 
> -	if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +	else if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
>  		info->max_physmem_bits = _MAX_PHYSMEM_BITS_L4;
> -	else {
> +	else
>  		info->max_physmem_bits = _MAX_PHYSMEM_BITS_L3;
> +
> +	if (info->kernel_version < KERNEL_VERSION(3, 8, 13)) {
>  		info->flag_vmemmap = TRUE;
>  		info->vmemmap_start = VMEMMAP_BASE_SPARC64;
>  		info->vmemmap_end = VMEMMAP_BASE_SPARC64 +
> diff --git a/arch/x86.c b/arch/x86.c
> index 3fdae93084b8..f1b43d4c8179 100644
> --- a/arch/x86.c
> +++ b/arch/x86.c
> @@ -72,21 +72,27 @@ get_machdep_info_x86(void)
>  {
>  	unsigned long vmlist, vmap_area_list, vmalloc_start;
> 
> -	/* PAE */
> -	if ((vt.mem_flags & MEMORY_X86_PAE)
> -	    || ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
> -	      && (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
> -	      && ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
> -	      == 512)) {
> -		DEBUG_MSG("\n");
> -		DEBUG_MSG("PAE          : ON\n");
> -		vt.mem_flags |= MEMORY_X86_PAE;
> -		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
> -	} else {
> -		DEBUG_MSG("\n");
> -		DEBUG_MSG("PAE          : OFF\n");
> -		info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +	else {
> +		/* PAE */
> +		if ((vt.mem_flags & MEMORY_X86_PAE)
> +				|| ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
> +					&& (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
> +					&& ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
> +					== 512)) {
> +			DEBUG_MSG("\n");
> +			DEBUG_MSG("PAE          : ON\n");
> +			vt.mem_flags |= MEMORY_X86_PAE;
> +			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
> +		} else {
> +			DEBUG_MSG("\n");
> +			DEBUG_MSG("PAE          : OFF\n");
> +			info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> +		}
>  	}
> +
>  	info->page_offset = __PAGE_OFFSET;
> 
>  	if (SYMBOL(_stext) == NOT_FOUND_SYMBOL) {
> diff --git a/arch/x86_64.c b/arch/x86_64.c
> index 876644f932be..eff90307552c 100644
> --- a/arch/x86_64.c
> +++ b/arch/x86_64.c
> @@ -268,17 +268,22 @@ get_machdep_info_x86_64(void)
>  int
>  get_versiondep_info_x86_64(void)
>  {
> -	/*
> -	 * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
> -	 */
> -	if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
> -		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
> -	else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
> -		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
> -	else if(check_5level_paging())
> -		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
> -	else
> -		info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
> +	/* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +	} else {
> +		/*
> +		 * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
> +		 */
> +		if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
> +			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
> +		else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
> +			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
> +		else if(check_5level_paging())
> +			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
> +		else
> +			info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
> +	}
> 
>  	if (!get_page_offset_x86_64())
>  		return FALSE;
> --
> 2.7.4
> 
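Abstracted from the per-arch hunks above, the fallback logic is the same everywhere: prefer the runtime value exported via vmcoreinfo, and only fall back to the compile-time constant on older kernels. A minimal standalone sketch (the names below are simplified stand-ins for makedumpfile's `NUMBER()`/`NOT_FOUND_NUMBER` machinery, not its actual API):

```c
#include <assert.h>

/* Simplified stand-in for makedumpfile's marker for a key that is
 * absent from the dump's vmcoreinfo. */
#define NOT_FOUND_NUMBER	(-1L)

/* Prefer the value the kernel exported in vmcoreinfo; fall back to
 * the compile-time default only when the key is missing. */
long resolve_max_physmem_bits(long vmcoreinfo_value, long compile_time_default)
{
	if (vmcoreinfo_value != NOT_FOUND_NUMBER)
		return vmcoreinfo_value;
	return compile_time_default;
}
```

For example, `resolve_max_physmem_bits(52, 48)` yields 52 on a kernel that exports MAX_PHYSMEM_BITS, while `resolve_max_physmem_bits(NOT_FOUND_NUMBER, 48)` falls back to 48.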



_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


* RE: [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  2019-11-12 11:08 ` [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
@ 2019-12-04 17:36   ` Kazuhito Hagio
  2019-12-05 18:21     ` Bhupesh Sharma
  0 siblings, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-04 17:36 UTC (permalink / raw)
  To: Bhupesh Sharma, kexec; +Cc: John Donnelly, bhupesh.linux

> -----Original Message-----
> The ARMv8.2-LPA architecture extension (if available on the underlying
> hardware) can support 52-bit physical addresses, while the kernel
> virtual addresses remain 48-bit.
> 
> Make sure that we read the 52-bit PA capability from the
> 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo), change the
> __pte_to_phys() mask values accordingly, and adjust the page-table
> walk to match.
> 
> Also make sure that this works well on the existing 48-bit PA
> platforms and in environments which use newer kernels with 52-bit PA
> support but hardware which is not ARMv8.2-LPA compliant.
> 
> I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to
> vmcoreinfo for arm64 (see [0]).
> 
> This patch is in accordance with the ARMv8 Architecture Reference
> Manual, version D.a.
> 
> [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> 
> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> Cc: John Donnelly <john.p.donnelly@oracle.com>
> Cc: kexec@lists.infradead.org
> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> ---
>  arch/arm64.c | 292 +++++++++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 204 insertions(+), 88 deletions(-)
> 
> diff --git a/arch/arm64.c b/arch/arm64.c
> index 3516b340adfd..ecb19139e178 100644
> --- a/arch/arm64.c
> +++ b/arch/arm64.c
> @@ -39,72 +39,184 @@ typedef struct {
>  	unsigned long pte;
>  } pte_t;
> 

> +#define __pte(x)	((pte_t) { (x) } )
> +#define __pmd(x)	((pmd_t) { (x) } )
> +#define __pud(x)	((pud_t) { (x) } )
> +#define __pgd(x)	((pgd_t) { (x) } )

Is it possible to remove these macros?

> +
> +static int lpa_52_bit_support_available;
>  static int pgtable_level;
>  static int va_bits;
>  static unsigned long kimage_voffset;
> 
> -#define SZ_4K			(4 * 1024)
> -#define SZ_16K			(16 * 1024)
> -#define SZ_64K			(64 * 1024)
> -#define SZ_128M			(128 * 1024 * 1024)
> +#define SZ_4K			4096
> +#define SZ_16K			16384
> +#define SZ_64K			65536
> 
> -#define PAGE_OFFSET_36 ((0xffffffffffffffffUL) << 36)
> -#define PAGE_OFFSET_39 ((0xffffffffffffffffUL) << 39)
> -#define PAGE_OFFSET_42 ((0xffffffffffffffffUL) << 42)
> -#define PAGE_OFFSET_47 ((0xffffffffffffffffUL) << 47)
> -#define PAGE_OFFSET_48 ((0xffffffffffffffffUL) << 48)
> +#define PAGE_OFFSET_36		((0xffffffffffffffffUL) << 36)
> +#define PAGE_OFFSET_39		((0xffffffffffffffffUL) << 39)
> +#define PAGE_OFFSET_42		((0xffffffffffffffffUL) << 42)
> +#define PAGE_OFFSET_47		((0xffffffffffffffffUL) << 47)
> +#define PAGE_OFFSET_48		((0xffffffffffffffffUL) << 48)
> +#define PAGE_OFFSET_52		((0xffffffffffffffffUL) << 52)
> 
>  #define pgd_val(x)		((x).pgd)
>  #define pud_val(x)		(pgd_val((x).pgd))
>  #define pmd_val(x)		(pud_val((x).pud))
>  #define pte_val(x)		((x).pte)
> 
> -#define PAGE_MASK		(~(PAGESIZE() - 1))
> -#define PGDIR_SHIFT		((PAGESHIFT() - 3) * pgtable_level + 3)
> -#define PTRS_PER_PGD		(1 << (va_bits - PGDIR_SHIFT))
> -#define PUD_SHIFT		get_pud_shift_arm64()
> -#define PUD_SIZE		(1UL << PUD_SHIFT)
> -#define PUD_MASK		(~(PUD_SIZE - 1))
> -#define PTRS_PER_PTE		(1 << (PAGESHIFT() - 3))
> -#define PTRS_PER_PUD		PTRS_PER_PTE
> -#define PMD_SHIFT		((PAGESHIFT() - 3) * 2 + 3)
> -#define PMD_SIZE		(1UL << PMD_SHIFT)
> -#define PMD_MASK		(~(PMD_SIZE - 1))

> +/* See 'include/uapi/linux/const.h' for definitions below */
> +#define __AC(X,Y)	(X##Y)
> +#define _AC(X,Y)	__AC(X,Y)
> +#define _AT(T,X)	((T)(X))
> +
> +/* See 'include/asm/pgtable-types.h' for definitions below */
> +typedef unsigned long pteval_t;
> +typedef unsigned long pmdval_t;
> +typedef unsigned long pudval_t;
> +typedef unsigned long pgdval_t;

Is it possible to remove these macros/typedefs as well?
I don't think they make the code easier to read.

Thanks,
Kazu

> +
> +#define PAGE_SHIFT	PAGESHIFT()
> +
> +/* See 'arch/arm64/include/asm/pgtable-hwdef.h' for definitions below */
> +
> +#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n)	((PAGE_SHIFT - 3) * (4 - (n)) + 3)
> +
> +#define PTRS_PER_PTE		(1 << (PAGE_SHIFT - 3))
> +
> +/*
> + * PMD_SHIFT determines the size a level 2 page table entry can map.
> + */
> +#define PMD_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
> +#define PMD_SIZE		(_AC(1, UL) << PMD_SHIFT)
> +#define PMD_MASK		(~(PMD_SIZE-1))
>  #define PTRS_PER_PMD		PTRS_PER_PTE
> 
> -#define PAGE_PRESENT		(1 << 0)
> +/*
> + * PUD_SHIFT determines the size a level 1 page table entry can map.
> + */
> +#define PUD_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
> +#define PUD_SIZE		(_AC(1, UL) << PUD_SHIFT)
> +#define PUD_MASK		(~(PUD_SIZE-1))
> +#define PTRS_PER_PUD		PTRS_PER_PTE
> +
> +/*
> + * PGDIR_SHIFT determines the size a top-level page table entry can map
> + * (depending on the configuration, this level can be 0, 1 or 2).
> + */
> +#define PGDIR_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (pgtable_level))
> +#define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
> +#define PGDIR_MASK		(~(PGDIR_SIZE-1))
> +#define PTRS_PER_PGD		(1 << ((va_bits) - PGDIR_SHIFT))
> +
> +/*
> + * Section address mask and size definitions.
> + */
>  #define SECTIONS_SIZE_BITS	30
> -/* Highest possible physical address supported */
> -#define PHYS_MASK_SHIFT		48
> -#define PHYS_MASK		((1UL << PHYS_MASK_SHIFT) - 1)
> +
>  /*
> - * Remove the highest order bits that are not a part of the
> - * physical address in a section
> + * Hardware page table definitions.
> + *
> + * Level 1 descriptor (PUD).
>   */
> -#define PMD_SECTION_MASK	((1UL << 40) - 1)
> +#define PUD_TYPE_TABLE		(_AT(pudval_t, 3) << 0)
> +#define PUD_TABLE_BIT		(_AT(pudval_t, 1) << 1)
> +#define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
> +#define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
> 
> -#define PMD_TYPE_MASK		3
> -#define PMD_TYPE_SECT		1
> -#define PMD_TYPE_TABLE		3
> +/*
> + * Level 2 descriptor (PMD).
> + */
> +#define PMD_TYPE_MASK		(_AT(pmdval_t, 3) << 0)
> +#define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)
> +#define PMD_TYPE_TABLE		(_AT(pmdval_t, 3) << 0)
> +#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
> +#define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)
> +
> +/*
> + * Level 3 descriptor (PTE).
> + */
> +#define PTE_ADDR_LOW		(((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
> +#define PTE_ADDR_HIGH		(_AT(pteval_t, 0xf) << 12)
> +
> +static inline unsigned long
> +get_pte_addr_mask_arm64(void)
> +{
> +	if (lpa_52_bit_support_available)
> +		return (PTE_ADDR_LOW | PTE_ADDR_HIGH);
> +	else
> +		return PTE_ADDR_LOW;
> +}
> +
> +#define PTE_ADDR_MASK		get_pte_addr_mask_arm64()
> 
> -#define PUD_TYPE_MASK		3
> -#define PUD_TYPE_SECT		1
> -#define PUD_TYPE_TABLE		3
> +#define PAGE_MASK		(~(PAGESIZE() - 1))
> +#define PAGE_PRESENT		(1 << 0)
> 
> +/* Helper API to convert between a physical address and its placement
> + * in a page table entry, taking care of 52-bit addresses.
> + */
> +static inline unsigned long
> +__pte_to_phys(pte_t pte)
> +{
> +	if (lpa_52_bit_support_available)
> +		return ((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36));
> +	else
> +		return (pte_val(pte) & PTE_ADDR_MASK);
> +}
> +
> +/* Find an entry in a page-table-directory */
>  #define pgd_index(vaddr) 		(((vaddr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> -#define pgd_offset(pgdir, vaddr)	((pgd_t *)(pgdir) + pgd_index(vaddr))
> 
> -#define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> -#define pmd_page_paddr(pmd)		(pmd_val(pmd) & PHYS_MASK & (int32_t)PAGE_MASK)
> -#define pte_offset(dir, vaddr) 		((pte_t*)pmd_page_paddr((*dir)) + pte_index(vaddr))
> +static inline pte_t
> +pgd_pte(pgd_t pgd)
> +{
> +	return __pte(pgd_val(pgd));
> +}
> 
> -#define pmd_index(vaddr)		(((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
> -#define pud_page_paddr(pud)		(pud_val(pud) & PHYS_MASK & (int32_t)PAGE_MASK)
> -#define pmd_offset_pgtbl_lvl_2(pud, vaddr) ((pmd_t *)pud)
> -#define pmd_offset_pgtbl_lvl_3(pud, vaddr) ((pmd_t *)pud_page_paddr((*pud)) + pmd_index(vaddr))
> +#define __pgd_to_phys(pgd)		__pte_to_phys(pgd_pte(pgd))
> +#define pgd_offset(pgd, vaddr)		((pgd_t *)(pgd) + pgd_index(vaddr))
> +
> +static inline pte_t pud_pte(pud_t pud)
> +{
> +	return __pte(pud_val(pud));
> +}
> 
> +static inline unsigned long
> +pgd_page_paddr(pgd_t pgd)
> +{
> +	return __pgd_to_phys(pgd);
> +}
> +
> +/* Find an entry in the first-level page table. */
>  #define pud_index(vaddr)		(((vaddr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
> -#define pgd_page_paddr(pgd)		(pgd_val(pgd) & PHYS_MASK & (int32_t)PAGE_MASK)
> +#define __pud_to_phys(pud)		__pte_to_phys(pud_pte(pud))
> +
> +static inline unsigned long
> +pud_page_paddr(pud_t pud)
> +{
> +	return __pud_to_phys(pud);
> +}
> +
> +/* Find an entry in the second-level page table. */
> +#define pmd_index(vaddr)		(((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
> +
> +static inline pte_t pmd_pte(pmd_t pmd)
> +{
> +	return __pte(pmd_val(pmd));
> +}
> +
> +#define __pmd_to_phys(pmd)		__pte_to_phys(pmd_pte(pmd))
> +
> +static inline unsigned long
> +pmd_page_paddr(pmd_t pmd)
> +{
> +	return __pmd_to_phys(pmd);
> +}
> +
> +/* Find an entry in the third-level page table. */
> +#define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> +#define pte_offset(dir, vaddr) 		(pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
> 
>  static unsigned long long
>  __pa(unsigned long vaddr)
> @@ -116,32 +228,22 @@ __pa(unsigned long vaddr)
>  		return (vaddr - kimage_voffset);
>  }
> 
> -static int
> -get_pud_shift_arm64(void)
> +static pud_t *
> +pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
>  {
> -	if (pgtable_level == 4)
> -		return ((PAGESHIFT() - 3) * 3 + 3);
> +	if (pgtable_level > 3)
> +		return (pud_t *)(pgd_page_paddr(*pgdv) + pud_index(vaddr) * sizeof(pud_t));
>  	else
> -		return PGDIR_SHIFT;
> +		return (pud_t *)(pgda);
>  }
> 
>  static pmd_t *
>  pmd_offset(pud_t *puda, pud_t *pudv, unsigned long vaddr)
>  {
> -	if (pgtable_level == 2) {
> -		return pmd_offset_pgtbl_lvl_2(puda, vaddr);
> -	} else {
> -		return pmd_offset_pgtbl_lvl_3(pudv, vaddr);
> -	}
> -}
> -
> -static pud_t *
> -pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
> -{
> -	if (pgtable_level == 4)
> -		return ((pud_t *)pgd_page_paddr((*pgdv)) + pud_index(vaddr));
> +	if (pgtable_level > 2)
> +		return (pmd_t *)(pud_page_paddr(*pudv) + pmd_index(vaddr) * sizeof(pmd_t));
>  	else
> -		return (pud_t *)(pgda);
> +		return (pmd_t*)(puda);
>  }
> 
>  static int calculate_plat_config(void)
> @@ -307,6 +409,14 @@ get_stext_symbol(void)
>  int
>  get_machdep_info_arm64(void)
>  {
> +	/* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> +		if (info->max_physmem_bits == 52)
> +			lpa_52_bit_support_available = 1;
> +	} else
> +		info->max_physmem_bits = 48;
> +
>  	/* Check if va_bits is still not initialized. If still 0, call
>  	 * get_versiondep_info() to initialize the same.
>  	 */
> @@ -319,12 +429,11 @@ get_machdep_info_arm64(void)
>  	}
> 
>  	kimage_voffset = NUMBER(kimage_voffset);
> -	info->max_physmem_bits = PHYS_MASK_SHIFT;
>  	info->section_size_bits = SECTIONS_SIZE_BITS;
> 
>  	DEBUG_MSG("kimage_voffset   : %lx\n", kimage_voffset);
> -	DEBUG_MSG("max_physmem_bits : %lx\n", info->max_physmem_bits);
> -	DEBUG_MSG("section_size_bits: %lx\n", info->section_size_bits);
> +	DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits);
> +	DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits);
> 
>  	return TRUE;
>  }
> @@ -382,6 +491,19 @@ get_versiondep_info_arm64(void)
>  	return TRUE;
>  }
> 
> +/* 1GB section for Page Table level = 4 and Page Size = 4KB */
> +static int
> +is_pud_sect(pud_t pud)
> +{
> +	return ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT);
> +}
> +
> +static int
> +is_pmd_sect(pmd_t pmd)
> +{
> +	return ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT);
> +}
> +
>  /*
>   * vaddr_to_paddr_arm64() - translate arbitrary virtual address to physical
>   * @vaddr: virtual address to translate
> @@ -419,10 +541,9 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
>  		return NOT_PADDR;
>  	}
> 
> -	if ((pud_val(pudv) & PUD_TYPE_MASK) == PUD_TYPE_SECT) {
> -		/* 1GB section for Page Table level = 4 and Page Size = 4KB */
> -		paddr = (pud_val(pudv) & (PUD_MASK & PMD_SECTION_MASK))
> -					+ (vaddr & (PUD_SIZE - 1));
> +	if (is_pud_sect(pudv)) {
> +		paddr = (pud_page_paddr(pudv) & PUD_MASK) +
> +				(vaddr & (PUD_SIZE - 1));
>  		return paddr;
>  	}
> 
> @@ -432,29 +553,24 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
>  		return NOT_PADDR;
>  	}
> 
> -	switch (pmd_val(pmdv) & PMD_TYPE_MASK) {
> -	case PMD_TYPE_TABLE:
> -		ptea = pte_offset(&pmdv, vaddr);
> -		/* 64k page */
> -		if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
> -			ERRMSG("Can't read pte\n");
> -			return NOT_PADDR;
> -		}
> +	if (is_pmd_sect(pmdv)) {
> +		paddr = (pmd_page_paddr(pmdv) & PMD_MASK) +
> +				(vaddr & (PMD_SIZE - 1));
> +		return paddr;
> +	}
> 
> -		if (!(pte_val(ptev) & PAGE_PRESENT)) {
> -			ERRMSG("Can't get a valid pte.\n");
> -			return NOT_PADDR;
> -		} else {
> +	ptea = (pte_t *)pte_offset(&pmdv, vaddr);
> +	if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
> +		ERRMSG("Can't read pte\n");
> +		return NOT_PADDR;
> +	}
> 
> -			paddr = (PAGEBASE(pte_val(ptev)) & PHYS_MASK)
> -					+ (vaddr & (PAGESIZE() - 1));
> -		}
> -		break;
> -	case PMD_TYPE_SECT:
> -		/* 512MB section for Page Table level = 3 and Page Size = 64KB*/
> -		paddr = (pmd_val(pmdv) & (PMD_MASK & PMD_SECTION_MASK))
> -					+ (vaddr & (PMD_SIZE - 1));
> -		break;
> +	if (!(pte_val(ptev) & PAGE_PRESENT)) {
> +		ERRMSG("Can't get a valid pte.\n");
> +		return NOT_PADDR;
> +	} else {
> +		paddr = __pte_to_phys(ptev) +
> +				(vaddr & (PAGESIZE() - 1));
>  	}
> 
>  	return paddr;
> --
> 2.7.4
> 
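The 52-bit PA recombination in `__pte_to_phys()` above splits a physical address across two PTE fields: bits [47:PAGE_SHIFT] stay in their usual position, while bits [51:48] are carried in PTE bits [15:12] and must be shifted up by 36. A standalone sketch of that recombination, assuming 64 KB pages (PAGE_SHIFT = 16) and using simplified names rather than makedumpfile's macros:

```c
#include <stdint.h>
#include <assert.h>

#define SKETCH_PAGE_SHIFT	16	/* 64 KB pages; an assumption for this sketch */

/* PA bits [47:PAGE_SHIFT] live in the same PTE bit positions. */
#define SKETCH_PTE_ADDR_LOW	(((1UL << (48 - SKETCH_PAGE_SHIFT)) - 1) << SKETCH_PAGE_SHIFT)
/* PA bits [51:48] are folded into PTE bits [15:12]. */
#define SKETCH_PTE_ADDR_HIGH	(0xfUL << 12)

uint64_t sketch_pte_to_phys(uint64_t pte, int lpa_52)
{
	if (lpa_52)
		return (pte & SKETCH_PTE_ADDR_LOW) |
		       ((pte & SKETCH_PTE_ADDR_HIGH) << 36);
	/* Without LPA, only the low field carries address bits. */
	return pte & SKETCH_PTE_ADDR_LOW;
}
```

With LPA enabled, a PTE with bit 12 set contributes PA bit 48; without LPA, bits [15:12] are ignored as address bits.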





* RE: [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support)
  2019-11-12 11:08 ` [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support) Bhupesh Sharma
@ 2019-12-04 17:45   ` Kazuhito Hagio
  2019-12-05 15:29     ` Kazuhito Hagio
  0 siblings, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-04 17:45 UTC (permalink / raw)
  To: Bhupesh Sharma, kexec; +Cc: John Donnelly, bhupesh.linux

> -----Original Message-----
> With the ARMv8.2-LVA architecture extension available, arm64 hardware
> which supports this extension can support up to 52-bit virtual
> addresses. This is especially useful for having a 52-bit user-space
> virtual address space while the kernel can still retain 48-bit/52-bit
> virtual addressing.
> 
> Since, at the moment, support for this extension is enabled in the
> kernel via a CONFIG flag (CONFIG_ARM64_VA_BITS_52), there is no
> clear mechanism in user-space to determine this CONFIG flag value
> and use it to determine the kernel-space VA address range values.
> 
> 'makedumpfile' can instead use 'TCR_EL1.T1SZ' value from vmcoreinfo
> which indicates the size offset of the memory region addressed by
> TTBR1_EL1 (and hence can be used for determining the
> vabits_actual value).
> 
> The user-space computation for determining whether an address lies in
> the linear map range is the same as we have in kernel-space:
> 
>   #define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
> 
> I have sent a kernel patch upstream to add 'TCR_EL1.T1SZ' to
> vmcoreinfo for arm64 (see [0]).
> 
> This patch is in accordance with the ARMv8 Architecture Reference
> Manual, version D.a.
> 
> Note that with these changes the '--mem-usage' option will not work
> properly for arm64 (a subsequent patch in this series addresses this)
> and a discussion is ongoing with the arm64 maintainers to find a way
> out (via standard kernel symbols such as _stext).
> 
> [0].http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> 
> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> Cc: John Donnelly <john.p.donnelly@oracle.com>
> Cc: kexec@lists.infradead.org
> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> ---
>  arch/arm64.c   | 148 +++++++++++++++++++++++++++++++++++++++++++++------------
>  makedumpfile.c |   2 +
>  makedumpfile.h |   3 +-
>  3 files changed, 122 insertions(+), 31 deletions(-)
> 
> diff --git a/arch/arm64.c b/arch/arm64.c
> index ecb19139e178..094d73b8a60f 100644
> --- a/arch/arm64.c
> +++ b/arch/arm64.c
> @@ -47,6 +47,7 @@ typedef struct {
>  static int lpa_52_bit_support_available;
>  static int pgtable_level;
>  static int va_bits;
> +static int vabits_actual;
>  static unsigned long kimage_voffset;
> 
>  #define SZ_4K			4096
> @@ -218,12 +219,19 @@ pmd_page_paddr(pmd_t pmd)
>  #define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
>  #define pte_offset(dir, vaddr) 		(pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
> 
> +/*
> + * The linear kernel range starts at the bottom of the virtual address
> + * space. Testing the top bit for the start of the region is a
> + * sufficient check and avoids having to worry about the tag.
> + */
> +#define is_linear_addr(addr)	(!(((unsigned long)addr) & (1UL << (vabits_actual - 1))))

Does this check cover 5.3 or earlier kernels?
Is there no case in which vabits_actual is zero?
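For reference, the two computations under discussion can be sketched in isolation (simplified names, not makedumpfile's actual helpers): the kernel VA width is derived as 64 - TCR_EL1.T1SZ, and the linear-map test checks the top bit of the kernel VA range, which is clear for linear-map addresses and set for kimage-range addresses.

```c
#include <stdint.h>
#include <assert.h>

/* vabits_actual is derived from TCR_EL1.T1SZ, the "size offset" of
 * the region addressed via TTBR1_EL1: VA width = 64 - T1SZ. */
int sketch_vabits_actual(uint64_t tcr_el1_t1sz)
{
	return 64 - (int)tcr_el1_t1sz;
}

/* The linear map occupies the bottom half of the kernel VA space,
 * so an address is a linear-map address iff its top VA bit is clear. */
int sketch_is_linear_addr(uint64_t addr, int vabits)
{
	return !(addr & (1UL << (vabits - 1)));
}
```

For a 48-bit VA space (T1SZ = 16), 0xffff000000000000 tests as a linear-map address while 0xffff800000000000 does not.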

> +
>  static unsigned long long
>  __pa(unsigned long vaddr)
>  {
>  	if (kimage_voffset == NOT_FOUND_NUMBER ||
> -			(vaddr >= PAGE_OFFSET))
> -		return (vaddr - PAGE_OFFSET + info->phys_base);
> +			is_linear_addr(vaddr))
> +		return (vaddr + info->phys_base - PAGE_OFFSET);
>  	else
>  		return (vaddr - kimage_voffset);
>  }
> @@ -253,6 +261,7 @@ static int calculate_plat_config(void)
>  			(PAGESIZE() == SZ_64K && va_bits == 42)) {
>  		pgtable_level = 2;
>  	} else if ((PAGESIZE() == SZ_64K && va_bits == 48) ||
> +			(PAGESIZE() == SZ_64K && va_bits == 52) ||
>  			(PAGESIZE() == SZ_4K && va_bits == 39) ||
>  			(PAGESIZE() == SZ_16K && va_bits == 47)) {
>  		pgtable_level = 3;
> @@ -287,6 +296,16 @@ get_phys_base_arm64(void)
>  		return TRUE;
>  	}
> 
> +	/* If both vabits_actual and va_bits are now initialized, always
> +	 * prefer vabits_actual over va_bits to calculate PAGE_OFFSET
> +	 * value.
> +	 */
> +	if (vabits_actual && va_bits && vabits_actual != va_bits) {
> +		info->page_offset = (-(1UL << vabits_actual));
> +		DEBUG_MSG("page_offset    : %lx (via vabits_actual)\n",
> +				info->page_offset);
> +	}
> +

Is this for --mem-usage?
If so, let's drop from this patch and think about it later because
some additional base functions will be needed for the option, I think.

>  	if (get_num_pt_loads() && PAGE_OFFSET) {
>  		for (i = 0;
>  		    get_pt_load(i, &phys_start, NULL, &virt_start, NULL);
> @@ -406,6 +425,73 @@ get_stext_symbol(void)
>  	return(found ? kallsym : FALSE);
>  }
> 
> +static int
> +get_va_bits_from_stext_arm64(void)
> +{
> +	ulong _stext;
> +
> +	_stext = get_stext_symbol();
> +	if (!_stext) {
> +		ERRMSG("Can't get the symbol of _stext.\n");
> +		return FALSE;
> +	}
> +
> +	/* Derive va_bits as per arch/arm64/Kconfig. Note that this is a
> +	 * best case approximation at the moment, as there can be
> +	 * inconsistencies in this calculation (for e.g., for
> +	 * 52-bit kernel VA case, even the 48th bit might be set in
> +	 * the _stext symbol).
> +	 *
> +	 * So, we need to rely on the actual VA_BITS symbol in the
> +	 * vmcoreinfo for a accurate value.
> +	 *
> +	 * TODO: Improve this further once there is a closure with arm64
> +	 * kernel maintainers on the same.
> +	 */
> +	if ((_stext & PAGE_OFFSET_52) == PAGE_OFFSET_52) {
> +		va_bits = 52;
> +	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> +		va_bits = 48;
> +	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> +		va_bits = 47;
> +	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> +		va_bits = 42;
> +	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> +		va_bits = 39;
> +	} else if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> +		va_bits = 36;
> +	} else {
> +		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> +		return FALSE;
> +	}
> +
> +	DEBUG_MSG("va_bits    : %d (_stext) (approximation)\n", va_bits);
> +
> +	return TRUE;
> +}
> +
> +static void
> +get_page_offset_arm64(void)
> +{
> +	/* Check if 'vabits_actual' is initialized yet.
> +	 * If not, our best bet is to use 'va_bits' to calculate
> +	 * the PAGE_OFFSET value, otherwise use 'vabits_actual'
> +	 * for the same.
> +	 *
> +	 * See arch/arm64/include/asm/memory.h for more details.
> +	 */
> +	if (!vabits_actual) {
> +		info->page_offset = (-(1UL << va_bits));
> +		DEBUG_MSG("page_offset    : %lx (approximation)\n",
> +					info->page_offset);
> +	} else {
> +		info->page_offset = (-(1UL << vabits_actual));
> +		DEBUG_MSG("page_offset    : %lx (accurate)\n",
> +					info->page_offset);
> +	}

Does this support 5.3 or earlier kernels?

Thanks,
Kazu
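The PAGE_OFFSET computation in `get_page_offset_arm64()` above reduces to a single two's-complement expression, whichever VA width is used; a minimal sketch with a simplified name:

```c
#include <stdint.h>
#include <assert.h>

/* PAGE_OFFSET is the start of the linear map: the lowest address
 * whose top `vabits` bits are all ones, i.e. -(1 << vabits) in
 * two's complement (cf. arch/arm64/include/asm/memory.h). */
uint64_t sketch_page_offset(int vabits)
{
	return -(1UL << vabits);
}
```

This gives 0xffff000000000000 for 48-bit VA and 0xfff0000000000000 for 52-bit VA, matching the `(-(1UL << va_bits))` and `(-(1UL << vabits_actual))` expressions in the patch.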

> +
> +}
> +
>  int
>  get_machdep_info_arm64(void)
>  {
> @@ -420,8 +506,33 @@ get_machdep_info_arm64(void)
>  	/* Check if va_bits is still not initialized. If still 0, call
>  	 * get_versiondep_info() to initialize the same.
>  	 */
> +	if (NUMBER(VA_BITS) != NOT_FOUND_NUMBER) {
> +		va_bits = NUMBER(VA_BITS);
> +		DEBUG_MSG("va_bits        : %d (vmcoreinfo)\n",
> +				va_bits);
> +	}
> +
> +	/* Check if va_bits is still not initialized. If still 0, call
> +	 * get_versiondep_info() to initialize the same from _stext
> +	 * symbol.
> +	 */
>  	if (!va_bits)
> -		get_versiondep_info_arm64();
> +		if (get_va_bits_from_stext_arm64() == FALSE)
> +			return FALSE;
> +
> +	get_page_offset_arm64();
> +
> +	/* See TCR_EL1, Translation Control Register (EL1) register
> +	 * description in the ARMv8 Architecture Reference Manual.
> +	 * Basically, we can use the TCR_EL1.T1SZ
> +	 * value to determine the virtual addressing range supported
> +	 * in the kernel-space (i.e. vabits_actual).
> +	 */
> +	if (NUMBER(tcr_el1_t1sz) != NOT_FOUND_NUMBER) {
> +		vabits_actual = 64 - NUMBER(tcr_el1_t1sz);
> +		DEBUG_MSG("vabits_actual  : %d (vmcoreinfo)\n",
> +				vabits_actual);
> +	}
> 
>  	if (!calculate_plat_config()) {
>  		ERRMSG("Can't determine platform config values\n");
> @@ -459,34 +570,11 @@ get_xen_info_arm64(void)
>  int
>  get_versiondep_info_arm64(void)
>  {
> -	ulong _stext;
> -
> -	_stext = get_stext_symbol();
> -	if (!_stext) {
> -		ERRMSG("Can't get the symbol of _stext.\n");
> -		return FALSE;
> -	}
> -
> -	/* Derive va_bits as per arch/arm64/Kconfig */
> -	if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> -		va_bits = 36;
> -	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> -		va_bits = 39;
> -	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> -		va_bits = 42;
> -	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> -		va_bits = 47;
> -	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> -		va_bits = 48;
> -	} else {
> -		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> -		return FALSE;
> -	}
> -
> -	info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> +	if (!va_bits)
> +		if (get_va_bits_from_stext_arm64() == FALSE)
> +			return FALSE;
> 
> -	DEBUG_MSG("va_bits      : %d\n", va_bits);
> -	DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
> +	get_page_offset_arm64();
> 
>  	return TRUE;
>  }
> diff --git a/makedumpfile.c b/makedumpfile.c
> index 4a000112ba59..baf559e4d74e 100644
> --- a/makedumpfile.c
> +++ b/makedumpfile.c
> @@ -2314,6 +2314,7 @@ write_vmcoreinfo_data(void)
>  	WRITE_NUMBER("HUGETLB_PAGE_DTOR", HUGETLB_PAGE_DTOR);
>  #ifdef __aarch64__
>  	WRITE_NUMBER("VA_BITS", VA_BITS);
> +	WRITE_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
>  	WRITE_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
>  	WRITE_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
>  #endif
> @@ -2720,6 +2721,7 @@ read_vmcoreinfo(void)
>  	READ_NUMBER("KERNEL_IMAGE_SIZE", KERNEL_IMAGE_SIZE);
>  #ifdef __aarch64__
>  	READ_NUMBER("VA_BITS", VA_BITS);
> +	READ_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
>  	READ_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
>  	READ_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
>  #endif
> diff --git a/makedumpfile.h b/makedumpfile.h
> index ac11e906b5b7..7eab6507c8df 100644
> --- a/makedumpfile.h
> +++ b/makedumpfile.h
> @@ -974,7 +974,7 @@ int get_versiondep_info_arm64(void);
>  int get_xen_basic_info_arm64(void);
>  int get_xen_info_arm64(void);
>  unsigned long get_kaslr_offset_arm64(unsigned long vaddr);
> -#define paddr_to_vaddr_arm64(X) (((X) - info->phys_base) | PAGE_OFFSET)
> +#define paddr_to_vaddr_arm64(X) (((X) - (info->phys_base - PAGE_OFFSET)))
> 
>  #define find_vmemmap()		stub_false()
>  #define vaddr_to_paddr(X)	vaddr_to_paddr_arm64(X)
> @@ -1937,6 +1937,7 @@ struct number_table {
>  	long	KERNEL_IMAGE_SIZE;
>  #ifdef __aarch64__
>  	long 	VA_BITS;
> +	unsigned long	tcr_el1_t1sz;
>  	unsigned long	PHYS_OFFSET;
>  	unsigned long	kimage_voffset;
>  #endif
> --
> 2.7.4
> 



_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64
  2019-11-12 11:08 ` [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64 Bhupesh Sharma
@ 2019-12-04 17:49   ` Kazuhito Hagio
  2019-12-05 18:24     ` Bhupesh Sharma
  0 siblings, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-04 17:49 UTC (permalink / raw)
  To: Bhupesh Sharma, kexec; +Cc: John Donnelly, bhupesh.linux

> -----Original Message-----
> This patch marks '--mem-usage' option as unsupported for arm64
> architecture.
> 
> With newer arm64 kernels supporting 48-bit/52-bit VA address spaces
> via a single binary, the address of kernel symbols like _stext, which
> could earlier be used to determine the VA_BITS value, can no longer be
> used to determine whether VA_BITS is set to 48 or 52 in the kernel
> space.

The --mem-usage option works with older arm64 kernels, so we should not
mark it unsupported for all arm64 kernels.

(If we use ELF note vmcoreinfo in kcore, is it possible to support the
option?  Let's think about it later..)

Thanks,
Kazu

> 
> Hence for now, it makes sense to mark the '--mem-usage' option as
> unsupported for the arm64 architecture until we have more clarity from
> the arm64 kernel maintainers on how to handle this in future
> kernel/makedumpfile versions.
> 
> Cc: John Donnelly <john.p.donnelly@oracle.com>
> Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> Cc: kexec@lists.infradead.org
> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> ---
>  makedumpfile.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/makedumpfile.c b/makedumpfile.c
> index baf559e4d74e..ae60466a1e9c 100644
> --- a/makedumpfile.c
> +++ b/makedumpfile.c
> @@ -11564,6 +11564,11 @@ main(int argc, char *argv[])
>  		MSG("\n");
>  		MSG("The dmesg log is saved to %s.\n", info->name_dumpfile);
>  	} else if (info->flag_mem_usage) {
> +#ifdef __aarch64__
> +		MSG("mem-usage not supported for arm64 architecture.\n");
> +		goto out;
> +#endif
> +
>  		if (!check_param_for_creating_dumpfile(argc, argv)) {
>  			MSG("Commandline parameter is invalid.\n");
>  			MSG("Try `makedumpfile --help' for more information.\n");
> --
> 2.7.4
> 





* RE: [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support)
  2019-12-04 17:45   ` Kazuhito Hagio
@ 2019-12-05 15:29     ` Kazuhito Hagio
  2019-12-05 18:05       ` Bhupesh Sharma
  0 siblings, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-05 15:29 UTC (permalink / raw)
  To: Bhupesh Sharma, kexec; +Cc: John Donnelly, bhupesh.linux

> -----Original Message-----
> > -----Original Message-----
> > With ARMv8.2-LVA architecture extension availability, arm64 hardware
> > which supports this extension can support up to 52-bit virtual
> > addresses. It is especially useful for having a 52-bit user-space virtual
> > address space while the kernel can still retain 48-bit/52-bit virtual
> > addressing.
> >
> > Since at the moment we enable support for this extension in the
> > kernel via a CONFIG flag (CONFIG_ARM64_VA_BITS_52), there are
> > no clear mechanisms in user-space to determine this CONFIG
> > flag value and use it to determine the kernel-space VA address range
> > values.
> >
> > 'makedumpfile' can instead use 'TCR_EL1.T1SZ' value from vmcoreinfo
> > which indicates the size offset of the memory region addressed by
> > TTBR1_EL1 (and hence can be used for determining the
> > vabits_actual value).
> >
> > The user-space computation for determining whether an address lies in
> > the linear map range is the same as we have in kernel-space:
> >
> >   #define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
> >
> > I have sent a kernel patch upstream to add 'TCR_EL1.T1SZ' to
> > vmcoreinfo for arm64 (see [0]).
> >
> > This patch is in accordance with ARMv8 Architecture Reference Manual
> > version D.a
> >
> > Note that with these changes the '--mem-usage' option will not work
> > properly for arm64 (a subsequent patch in this series will address
> > this) and there is an ongoing discussion with the arm64 maintainers to
> > find a way out (via standard kernel symbols like _stext).
> >
> > [0].http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> >
> > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > Cc: kexec@lists.infradead.org
> > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > ---
> >  arch/arm64.c   | 148 +++++++++++++++++++++++++++++++++++++++++++++------------
> >  makedumpfile.c |   2 +
> >  makedumpfile.h |   3 +-
> >  3 files changed, 122 insertions(+), 31 deletions(-)
> >
> > diff --git a/arch/arm64.c b/arch/arm64.c
> > index ecb19139e178..094d73b8a60f 100644
> > --- a/arch/arm64.c
> > +++ b/arch/arm64.c
> > @@ -47,6 +47,7 @@ typedef struct {
> >  static int lpa_52_bit_support_available;
> >  static int pgtable_level;
> >  static int va_bits;
> > +static int vabits_actual;
> >  static unsigned long kimage_voffset;
> >
> >  #define SZ_4K			4096
> > @@ -218,12 +219,19 @@ pmd_page_paddr(pmd_t pmd)
> >  #define pte_index(vaddr) 		(((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> >  #define pte_offset(dir, vaddr) 		(pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
> >
> > +/*
> > + * The linear kernel range starts at the bottom of the virtual address
> > + * space. Testing the top bit for the start of the region is a
> > + * sufficient check and avoids having to worry about the tag.
> > + */
> > +#define is_linear_addr(addr)	(!(((unsigned long)addr) & (1UL << (vabits_actual - 1))))
> 
> Does this check cover 5.3 or earlier kernels?
> There is no case that vabits_actual is zero?

As you know, 14c127c957c1 ("arm64: mm: Flip kernel VA space") changed
the check for linear address:

-#define __is_lm_address(addr)  (!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)  (!((addr) & BIT(VA_BITS - 1)))

so if we use the same check as the kernel has, I think we will need the
former one to support earlier kernels.

> 
> > +
> >  static unsigned long long
> >  __pa(unsigned long vaddr)
> >  {
> >  	if (kimage_voffset == NOT_FOUND_NUMBER ||
> > -			(vaddr >= PAGE_OFFSET))
> > -		return (vaddr - PAGE_OFFSET + info->phys_base);
> > +			is_linear_addr(vaddr))
> > +		return (vaddr + info->phys_base - PAGE_OFFSET);
> >  	else
> >  		return (vaddr - kimage_voffset);
> >  }
> > @@ -253,6 +261,7 @@ static int calculate_plat_config(void)
> >  			(PAGESIZE() == SZ_64K && va_bits == 42)) {
> >  		pgtable_level = 2;
> >  	} else if ((PAGESIZE() == SZ_64K && va_bits == 48) ||
> > +			(PAGESIZE() == SZ_64K && va_bits == 52) ||
> >  			(PAGESIZE() == SZ_4K && va_bits == 39) ||
> >  			(PAGESIZE() == SZ_16K && va_bits == 47)) {
> >  		pgtable_level = 3;
> > @@ -287,6 +296,16 @@ get_phys_base_arm64(void)
> >  		return TRUE;
> >  	}
> >
> > +	/* If both vabits_actual and va_bits are now initialized, always
> > +	 * prefer vabits_actual over va_bits to calculate PAGE_OFFSET
> > +	 * value.
> > +	 */
> > +	if (vabits_actual && va_bits && vabits_actual != va_bits) {
> > +		info->page_offset = (-(1UL << vabits_actual));
> > +		DEBUG_MSG("page_offset    : %lx (via vabits_actual)\n",
> > +				info->page_offset);
> > +	}
> > +
> 
> Is this for --mem-usage?
> If so, let's drop from this patch and think about it later because
> some additional base functions will be needed for the option, I think.
> 
> >  	if (get_num_pt_loads() && PAGE_OFFSET) {
> >  		for (i = 0;
> >  		    get_pt_load(i, &phys_start, NULL, &virt_start, NULL);
> > @@ -406,6 +425,73 @@ get_stext_symbol(void)
> >  	return(found ? kallsym : FALSE);
> >  }
> >
> > +static int
> > +get_va_bits_from_stext_arm64(void)
> > +{
> > +	ulong _stext;
> > +
> > +	_stext = get_stext_symbol();
> > +	if (!_stext) {
> > +		ERRMSG("Can't get the symbol of _stext.\n");
> > +		return FALSE;
> > +	}
> > +
> > +	/* Derive va_bits as per arch/arm64/Kconfig. Note that this is a
> > +	 * best case approximation at the moment, as there can be
> > +	 * inconsistencies in this calculation (e.g., for
> > +	 * 52-bit kernel VA case, even the 48th bit might be set in
> > +	 * the _stext symbol).
> > +	 *
> > +	 * So, we need to rely on the actual VA_BITS symbol in the
> > +	 * vmcoreinfo for an accurate value.
> > +	 *
> > +	 * TODO: Improve this further once there is a closure with arm64
> > +	 * kernel maintainers on the same.
> > +	 */
> > +	if ((_stext & PAGE_OFFSET_52) == PAGE_OFFSET_52) {
> > +		va_bits = 52;
> > +	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> > +		va_bits = 48;
> > +	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> > +		va_bits = 47;
> > +	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> > +		va_bits = 42;
> > +	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> > +		va_bits = 39;
> > +	} else if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> > +		va_bits = 36;
> > +	} else {
> > +		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> > +		return FALSE;
> > +	}
> > +
> > +	DEBUG_MSG("va_bits    : %d (_stext) (approximation)\n", va_bits);
> > +
> > +	return TRUE;
> > +}
> > +
> > +static void
> > +get_page_offset_arm64(void)
> > +{
> > +	/* Check if 'vabits_actual' is initialized yet.
> > +	 * If not, our best bet is to use 'va_bits' to calculate
> > +	 * the PAGE_OFFSET value, otherwise use 'vabits_actual'
> > +	 * for the same.
> > +	 *
> > +	 * See arch/arm64/include/asm/memory.h for more details.
> > +	 */
> > +	if (!vabits_actual) {
> > +		info->page_offset = (-(1UL << va_bits));
> > +		DEBUG_MSG("page_offset    : %lx (approximation)\n",
> > +					info->page_offset);
> > +	} else {
> > +		info->page_offset = (-(1UL << vabits_actual));
> > +		DEBUG_MSG("page_offset    : %lx (accurate)\n",
> > +					info->page_offset);
> > +	}
> 
> Does this support 5.3 or earlier kernels?

Because I didn't see the old page_offset calculation below in this patch:

> > -	info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);

I was thinking that if there is a NUMBER(tcr_el1_t1sz) in vmcoreinfo,
we assume the kernel has the 'flipped' VA space.  And if there is no
NUMBER(tcr_el1_t1sz), then older 'non-flipped' VA [1].

This might be a bit fragile against backport, but it requires less
vmcoreinfo, and older kernels don't need NUMBER(tcr_el1_t1sz).
(they might need NUMBER(MAX_USER_VA_BITS) like RHEL8 though.)

What do you think?

[1] https://github.com/k-hagio/makedumpfile/commit/fd9d86ea05b38e9edbb8c0ac3ebd612d5d485df3#diff-73f1cf659e8099a2f3a94f38063f97ecR400
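
That heuristic could look roughly like the following sketch. It is
purely illustrative: `NOT_FOUND_NUMBER` stands in for makedumpfile's
"not present in vmcoreinfo" sentinel, and the function signature is an
assumption, not the actual patch code.

```c
#define NOT_FOUND_NUMBER	((unsigned long)-1)

/* If NUMBER(tcr_el1_t1sz) is present in vmcoreinfo, assume the flipped
 * VA space and derive vabits_actual = 64 - T1SZ; otherwise fall back to
 * the older, non-flipped PAGE_OFFSET formula based on va_bits. */
static unsigned long page_offset_arm64(unsigned long tcr_el1_t1sz, int va_bits)
{
	if (tcr_el1_t1sz != NOT_FOUND_NUMBER) {
		int vabits_actual = 64 - (int)tcr_el1_t1sz;
		return -(1UL << vabits_actual);		/* flipped (5.4+) */
	}
	return 0xffffffffffffffffUL << (va_bits - 1);	/* non-flipped */
}
```

For example, T1SZ = 16 gives vabits_actual = 48 and PAGE_OFFSET =
0xffff000000000000, whereas a missing NUMBER(tcr_el1_t1sz) with
va_bits = 48 falls back to the pre-flip 0xffff800000000000.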

Thanks,
Kazu


> 
> Thanks,
> Kazu
> 
> > +
> > +}
> > +
> >  int
> >  get_machdep_info_arm64(void)
> >  {
> > @@ -420,8 +506,33 @@ get_machdep_info_arm64(void)
> >  	/* Check if va_bits is still not initialized. If still 0, call
> >  	 * get_versiondep_info() to initialize the same.
> >  	 */
> > +	if (NUMBER(VA_BITS) != NOT_FOUND_NUMBER) {
> > +		va_bits = NUMBER(VA_BITS);
> > +		DEBUG_MSG("va_bits        : %d (vmcoreinfo)\n",
> > +				va_bits);
> > +	}
> > +
> > +	/* Check if va_bits is still not initialized. If still 0, call
> > +	 * get_versiondep_info() to initialize the same from _stext
> > +	 * symbol.
> > +	 */
> >  	if (!va_bits)
> > -		get_versiondep_info_arm64();
> > +		if (get_va_bits_from_stext_arm64() == FALSE)
> > +			return FALSE;
> > +
> > +	get_page_offset_arm64();
> > +
> > +	/* See TCR_EL1, Translation Control Register (EL1) register
> > +	 * description in the ARMv8 Architecture Reference Manual.
> > +	 * Basically, we can use the TCR_EL1.T1SZ
> > +	 * value to determine the virtual addressing range supported
> > +	 * in the kernel-space (i.e. vabits_actual).
> > +	 */
> > +	if (NUMBER(tcr_el1_t1sz) != NOT_FOUND_NUMBER) {
> > +		vabits_actual = 64 - NUMBER(tcr_el1_t1sz);
> > +		DEBUG_MSG("vabits_actual  : %d (vmcoreinfo)\n",
> > +				vabits_actual);
> > +	}
> >
> >  	if (!calculate_plat_config()) {
> >  		ERRMSG("Can't determine platform config values\n");
> > @@ -459,34 +570,11 @@ get_xen_info_arm64(void)
> >  int
> >  get_versiondep_info_arm64(void)
> >  {
> > -	ulong _stext;
> > -
> > -	_stext = get_stext_symbol();
> > -	if (!_stext) {
> > -		ERRMSG("Can't get the symbol of _stext.\n");
> > -		return FALSE;
> > -	}
> > -
> > -	/* Derive va_bits as per arch/arm64/Kconfig */
> > -	if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> > -		va_bits = 36;
> > -	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> > -		va_bits = 39;
> > -	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> > -		va_bits = 42;
> > -	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> > -		va_bits = 47;
> > -	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> > -		va_bits = 48;
> > -	} else {
> > -		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> > -		return FALSE;
> > -	}
> > -
> > -	info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> > +	if (!va_bits)
> > +		if (get_va_bits_from_stext_arm64() == FALSE)
> > +			return FALSE;
> >
> > -	DEBUG_MSG("va_bits      : %d\n", va_bits);
> > -	DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
> > +	get_page_offset_arm64();
> >
> >  	return TRUE;
> >  }
> > diff --git a/makedumpfile.c b/makedumpfile.c
> > index 4a000112ba59..baf559e4d74e 100644
> > --- a/makedumpfile.c
> > +++ b/makedumpfile.c
> > @@ -2314,6 +2314,7 @@ write_vmcoreinfo_data(void)
> >  	WRITE_NUMBER("HUGETLB_PAGE_DTOR", HUGETLB_PAGE_DTOR);
> >  #ifdef __aarch64__
> >  	WRITE_NUMBER("VA_BITS", VA_BITS);
> > +	WRITE_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
> >  	WRITE_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
> >  	WRITE_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
> >  #endif
> > @@ -2720,6 +2721,7 @@ read_vmcoreinfo(void)
> >  	READ_NUMBER("KERNEL_IMAGE_SIZE", KERNEL_IMAGE_SIZE);
> >  #ifdef __aarch64__
> >  	READ_NUMBER("VA_BITS", VA_BITS);
> > +	READ_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
> >  	READ_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
> >  	READ_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
> >  #endif
> > diff --git a/makedumpfile.h b/makedumpfile.h
> > index ac11e906b5b7..7eab6507c8df 100644
> > --- a/makedumpfile.h
> > +++ b/makedumpfile.h
> > @@ -974,7 +974,7 @@ int get_versiondep_info_arm64(void);
> >  int get_xen_basic_info_arm64(void);
> >  int get_xen_info_arm64(void);
> >  unsigned long get_kaslr_offset_arm64(unsigned long vaddr);
> > -#define paddr_to_vaddr_arm64(X) (((X) - info->phys_base) | PAGE_OFFSET)
> > +#define paddr_to_vaddr_arm64(X) (((X) - (info->phys_base - PAGE_OFFSET)))
> >
> >  #define find_vmemmap()		stub_false()
> >  #define vaddr_to_paddr(X)	vaddr_to_paddr_arm64(X)
> > @@ -1937,6 +1937,7 @@ struct number_table {
> >  	long	KERNEL_IMAGE_SIZE;
> >  #ifdef __aarch64__
> >  	long 	VA_BITS;
> > +	unsigned long	tcr_el1_t1sz;
> >  	unsigned long	PHYS_OFFSET;
> >  	unsigned long	kimage_voffset;
> >  #endif
> > --
> > 2.7.4
> >





* Re: [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support)
  2019-12-05 15:29     ` Kazuhito Hagio
@ 2019-12-05 18:05       ` Bhupesh Sharma
  2019-12-05 20:49         ` Kazuhito Hagio
  0 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-12-05 18:05 UTC (permalink / raw)
  To: Kazuhito Hagio; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Kazu,

On Thu, Dec 5, 2019 at 9:00 PM Kazuhito Hagio <k-hagio@ab.jp.nec.com> wrote:
>
> > -----Original Message-----
> > > -----Original Message-----
> > > With ARMv8.2-LVA architecture extension availability, arm64 hardware
> > > which supports this extension can support up to 52-bit virtual
> > > addresses. It is especially useful for having a 52-bit user-space virtual
> > > address space while the kernel can still retain 48-bit/52-bit virtual
> > > addressing.
> > >
> > > Since at the moment we enable support for this extension in the
> > > kernel via a CONFIG flag (CONFIG_ARM64_VA_BITS_52), there are
> > > no clear mechanisms in user-space to determine this CONFIG
> > > flag value and use it to determine the kernel-space VA address range
> > > values.
> > >
> > > 'makedumpfile' can instead use 'TCR_EL1.T1SZ' value from vmcoreinfo
> > > which indicates the size offset of the memory region addressed by
> > > TTBR1_EL1 (and hence can be used for determining the
> > > vabits_actual value).
> > >
> > > The user-space computation for determining whether an address lies in
> > > the linear map range is the same as we have in kernel-space:
> > >
> > >   #define __is_lm_address(addr)     (!(((u64)addr) & BIT(vabits_actual - 1)))
> > >
> > > I have sent a kernel patch upstream to add 'TCR_EL1.T1SZ' to
> > > vmcoreinfo for arm64 (see [0]).
> > >
> > > This patch is in accordance with ARMv8 Architecture Reference Manual
> > > version D.a
> > >
> > > Note that with these changes the '--mem-usage' option will not work
> > > properly for arm64 (a subsequent patch in this series will address
> > > this) and there is an ongoing discussion with the arm64 maintainers to
> > > find a way out (via standard kernel symbols like _stext).
> > >
> > > [0].http://lists.infradead.org/pipermail/kexec/2019-November/023962.html
> > >
> > > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > > Cc: kexec@lists.infradead.org
> > > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > > ---
> > >  arch/arm64.c   | 148 +++++++++++++++++++++++++++++++++++++++++++++------------
> > >  makedumpfile.c |   2 +
> > >  makedumpfile.h |   3 +-
> > >  3 files changed, 122 insertions(+), 31 deletions(-)
> > >
> > > diff --git a/arch/arm64.c b/arch/arm64.c
> > > index ecb19139e178..094d73b8a60f 100644
> > > --- a/arch/arm64.c
> > > +++ b/arch/arm64.c
> > > @@ -47,6 +47,7 @@ typedef struct {
> > >  static int lpa_52_bit_support_available;
> > >  static int pgtable_level;
> > >  static int va_bits;
> > > +static int vabits_actual;
> > >  static unsigned long kimage_voffset;
> > >
> > >  #define SZ_4K                      4096
> > > @@ -218,12 +219,19 @@ pmd_page_paddr(pmd_t pmd)
> > >  #define pte_index(vaddr)           (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> > >  #define pte_offset(dir, vaddr)             (pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
> > >
> > > +/*
> > > + * The linear kernel range starts at the bottom of the virtual address
> > > + * space. Testing the top bit for the start of the region is a
> > > + * sufficient check and avoids having to worry about the tag.
> > > + */
> > > +#define is_linear_addr(addr)       (!(((unsigned long)addr) & (1UL << (vabits_actual - 1))))
> >
> > Does this check cover 5.3 or earlier kernels?
> > There is no case that vabits_actual is zero?

We can set vabits_actual to va_bits for older kernels. That shouldn't
be a big change.
Will add it in v5. See more below ...
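
That fallback could be as small as the following sketch (the globals
mirror the patch's names, but the helper itself is hypothetical):

```c
static int va_bits;		/* from NUMBER(VA_BITS) or the _stext heuristic */
static int vabits_actual;	/* from NUMBER(tcr_el1_t1sz) when available */

/* For kernels without NUMBER(tcr_el1_t1sz) in vmcoreinfo (5.3 and
 * earlier), default vabits_actual to va_bits so the rest of the code
 * can use vabits_actual unconditionally. */
static int resolve_vabits_actual(void)
{
	if (!vabits_actual)
		vabits_actual = va_bits;
	return vabits_actual;
}
```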

> As you know, 14c127c957c1 ("arm64: mm: Flip kernel VA space") changed
> the check for linear address:
>
> -#define __is_lm_address(addr)  (!!((addr) & BIT(VA_BITS - 1)))
> +#define __is_lm_address(addr)  (!((addr) & BIT(VA_BITS - 1)))
>
> so if we use the same check as kernel has, I think we will need the
> former one to support earlier kernels.

See above, we can use va_bits where vabits_actual is not present.

> > > +
> > >  static unsigned long long
> > >  __pa(unsigned long vaddr)
> > >  {
> > >     if (kimage_voffset == NOT_FOUND_NUMBER ||
> > > -                   (vaddr >= PAGE_OFFSET))
> > > -           return (vaddr - PAGE_OFFSET + info->phys_base);
> > > +                   is_linear_addr(vaddr))
> > > +           return (vaddr + info->phys_base - PAGE_OFFSET);
> > >     else
> > >             return (vaddr - kimage_voffset);
> > >  }
> > > @@ -253,6 +261,7 @@ static int calculate_plat_config(void)
> > >                     (PAGESIZE() == SZ_64K && va_bits == 42)) {
> > >             pgtable_level = 2;
> > >     } else if ((PAGESIZE() == SZ_64K && va_bits == 48) ||
> > > +                   (PAGESIZE() == SZ_64K && va_bits == 52) ||
> > >                     (PAGESIZE() == SZ_4K && va_bits == 39) ||
> > >                     (PAGESIZE() == SZ_16K && va_bits == 47)) {
> > >             pgtable_level = 3;
> > > @@ -287,6 +296,16 @@ get_phys_base_arm64(void)
> > >             return TRUE;
> > >     }
> > >
> > > +   /* If both vabits_actual and va_bits are now initialized, always
> > > +    * prefer vabits_actual over va_bits to calculate PAGE_OFFSET
> > > +    * value.
> > > +    */
> > > +   if (vabits_actual && va_bits && vabits_actual != va_bits) {
> > > +           info->page_offset = (-(1UL << vabits_actual));
> > > +           DEBUG_MSG("page_offset    : %lx (via vabits_actual)\n",
> > > +                           info->page_offset);
> > > +   }
> > > +
> >
> > Is this for --mem-usage?
> > If so, let's drop from this patch and think about it later because
> > some additional base functions will be needed for the option, I think.

Ok.

> > >     if (get_num_pt_loads() && PAGE_OFFSET) {
> > >             for (i = 0;
> > >                 get_pt_load(i, &phys_start, NULL, &virt_start, NULL);
> > > @@ -406,6 +425,73 @@ get_stext_symbol(void)
> > >     return(found ? kallsym : FALSE);
> > >  }
> > >
> > > +static int
> > > +get_va_bits_from_stext_arm64(void)
> > > +{
> > > +   ulong _stext;
> > > +
> > > +   _stext = get_stext_symbol();
> > > +   if (!_stext) {
> > > +           ERRMSG("Can't get the symbol of _stext.\n");
> > > +           return FALSE;
> > > +   }
> > > +
> > > +   /* Derive va_bits as per arch/arm64/Kconfig. Note that this is a
> > > +    * best case approximation at the moment, as there can be
> > > +    * inconsistencies in this calculation (e.g., for
> > > +    * 52-bit kernel VA case, even the 48th bit might be set in
> > > +    * the _stext symbol).
> > > +    *
> > > +    * So, we need to rely on the actual VA_BITS symbol in the
> > > +    * vmcoreinfo for an accurate value.
> > > +    *
> > > +    * TODO: Improve this further once there is a closure with arm64
> > > +    * kernel maintainers on the same.
> > > +    */
> > > +   if ((_stext & PAGE_OFFSET_52) == PAGE_OFFSET_52) {
> > > +           va_bits = 52;
> > > +   } else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> > > +           va_bits = 48;
> > > +   } else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> > > +           va_bits = 47;
> > > +   } else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> > > +           va_bits = 42;
> > > +   } else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> > > +           va_bits = 39;
> > > +   } else if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> > > +           va_bits = 36;
> > > +   } else {
> > > +           ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> > > +           return FALSE;
> > > +   }
> > > +
> > > +   DEBUG_MSG("va_bits    : %d (_stext) (approximation)\n", va_bits);
> > > +
> > > +   return TRUE;
> > > +}
> > > +
> > > +static void
> > > +get_page_offset_arm64(void)
> > > +{
> > > +   /* Check if 'vabits_actual' is initialized yet.
> > > +    * If not, our best bet is to use 'va_bits' to calculate
> > > +    * the PAGE_OFFSET value, otherwise use 'vabits_actual'
> > > +    * for the same.
> > > +    *
> > > +    * See arch/arm64/include/asm/memory.h for more details.
> > > +    */
> > > +   if (!vabits_actual) {
> > > +           info->page_offset = (-(1UL << va_bits));
> > > +           DEBUG_MSG("page_offset    : %lx (approximation)\n",
> > > +                                   info->page_offset);
> > > +   } else {
> > > +           info->page_offset = (-(1UL << vabits_actual));
> > > +           DEBUG_MSG("page_offset    : %lx (accurate)\n",
> > > +                                   info->page_offset);
> > > +   }
> >
> > Does this support 5.3 or earlier kernels?
>
> Because I didn't see the old page_offset calculation below in this patch:
>
> > > -   info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
>
> I was thinking that if there is a NUMBER(tcr_el1_t1sz) in vmcoreinfo,
> we assume the kernel has the 'flipped' VA space.  And if there is no
> NUMBER(tcr_el1_t1sz), then older 'non-flipped' VA [1].

Yes, vabits_actual will not be found in such a case, so we can use
va_bits instead; similarly, the helper routines for the page table
walk and the page_offset calculation may need modification.
Will fix in v5.
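
For illustration, a __pa()-style helper for the flipped layout might
look like this sketch; all globals and example values here are
hypothetical, chosen only to show the two translation paths:

```c
static int vabits_actual = 48;	/* 64 - TCR_EL1.T1SZ, or va_bits fallback */
static unsigned long page_offset = 0xffff000000000000UL; /* -(1UL << 48) */
static unsigned long phys_base = 0x80000000UL;		/* example PHYS_OFFSET */
static unsigned long kimage_voffset = 0xfffeffff40000000UL; /* example */

/* Flipped layout: the linear map sits in the bottom half of the
 * kernel VA space, so the top VA bit is clear for linear addresses. */
static int is_linear_addr(unsigned long vaddr)
{
	return !(vaddr & (1UL << (vabits_actual - 1)));
}

static unsigned long va_to_pa(unsigned long vaddr)
{
	if (is_linear_addr(vaddr))
		return vaddr + phys_base - page_offset;	/* linear map */
	return vaddr - kimage_voffset;			/* kernel image */
}
```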

> This might be a bit fragile against backport, but it requires less
> vmcoreinfo, and older kernels don't need NUMBER(tcr_el1_t1sz).
> (they might need NUMBER(MAX_USER_VA_BITS) like RHEL8 though.)

I think since this is an upstream fix, we should look at a generic fix
(not restricted to RHEL, which can anyway contain out-of-tree fixes).
I will send a v5 shortly with the suggested fixes.

Regards,
Bhupesh

> What do you think?
>
> [1] https://github.com/k-hagio/makedumpfile/commit/fd9d86ea05b38e9edbb8c0ac3ebd612d5d485df3#diff-73f1cf659e8099a2f3a94f38063f97ecR400
>
> Thanks,
> Kazu
>
>
> >
> > Thanks,
> > Kazu
> >
> > > +
> > > +}
> > > +
> > >  int
> > >  get_machdep_info_arm64(void)
> > >  {
> > > @@ -420,8 +506,33 @@ get_machdep_info_arm64(void)
> > >     /* Check if va_bits is still not initialized. If still 0, call
> > >      * get_versiondep_info() to initialize the same.
> > >      */
> > > +   if (NUMBER(VA_BITS) != NOT_FOUND_NUMBER) {
> > > +           va_bits = NUMBER(VA_BITS);
> > > +           DEBUG_MSG("va_bits        : %d (vmcoreinfo)\n",
> > > +                           va_bits);
> > > +   }
> > > +
> > > +   /* Check if va_bits is still not initialized. If still 0, call
> > > +    * get_versiondep_info() to initialize the same from _stext
> > > +    * symbol.
> > > +    */
> > >     if (!va_bits)
> > > -           get_versiondep_info_arm64();
> > > +           if (get_va_bits_from_stext_arm64() == FALSE)
> > > +                   return FALSE;
> > > +
> > > +   get_page_offset_arm64();
> > > +
> > > +   /* See TCR_EL1, Translation Control Register (EL1) register
> > > +    * description in the ARMv8 Architecture Reference Manual.
> > > +    * Basically, we can use the TCR_EL1.T1SZ
> > > +    * value to determine the virtual addressing range supported
> > > +    * in the kernel-space (i.e. vabits_actual).
> > > +    */
> > > +   if (NUMBER(tcr_el1_t1sz) != NOT_FOUND_NUMBER) {
> > > +           vabits_actual = 64 - NUMBER(tcr_el1_t1sz);
> > > +           DEBUG_MSG("vabits_actual  : %d (vmcoreinfo)\n",
> > > +                           vabits_actual);
> > > +   }
> > >
> > >     if (!calculate_plat_config()) {
> > >             ERRMSG("Can't determine platform config values\n");
> > > @@ -459,34 +570,11 @@ get_xen_info_arm64(void)
> > >  int
> > >  get_versiondep_info_arm64(void)
> > >  {
> > > -   ulong _stext;
> > > -
> > > -   _stext = get_stext_symbol();
> > > -   if (!_stext) {
> > > -           ERRMSG("Can't get the symbol of _stext.\n");
> > > -           return FALSE;
> > > -   }
> > > -
> > > -   /* Derive va_bits as per arch/arm64/Kconfig */
> > > -   if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> > > -           va_bits = 36;
> > > -   } else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> > > -           va_bits = 39;
> > > -   } else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> > > -           va_bits = 42;
> > > -   } else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> > > -           va_bits = 47;
> > > -   } else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> > > -           va_bits = 48;
> > > -   } else {
> > > -           ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> > > -           return FALSE;
> > > -   }
> > > -
> > > -   info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> > > +   if (!va_bits)
> > > +           if (get_va_bits_from_stext_arm64() == FALSE)
> > > +                   return FALSE;
> > >
> > > -   DEBUG_MSG("va_bits      : %d\n", va_bits);
> > > -   DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
> > > +   get_page_offset_arm64();
> > >
> > >     return TRUE;
> > >  }
> > > diff --git a/makedumpfile.c b/makedumpfile.c
> > > index 4a000112ba59..baf559e4d74e 100644
> > > --- a/makedumpfile.c
> > > +++ b/makedumpfile.c
> > > @@ -2314,6 +2314,7 @@ write_vmcoreinfo_data(void)
> > >     WRITE_NUMBER("HUGETLB_PAGE_DTOR", HUGETLB_PAGE_DTOR);
> > >  #ifdef __aarch64__
> > >     WRITE_NUMBER("VA_BITS", VA_BITS);
> > > +   WRITE_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
> > >     WRITE_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
> > >     WRITE_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
> > >  #endif
> > > @@ -2720,6 +2721,7 @@ read_vmcoreinfo(void)
> > >     READ_NUMBER("KERNEL_IMAGE_SIZE", KERNEL_IMAGE_SIZE);
> > >  #ifdef __aarch64__
> > >     READ_NUMBER("VA_BITS", VA_BITS);
> > > +   READ_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
> > >     READ_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
> > >     READ_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
> > >  #endif
> > > diff --git a/makedumpfile.h b/makedumpfile.h
> > > index ac11e906b5b7..7eab6507c8df 100644
> > > --- a/makedumpfile.h
> > > +++ b/makedumpfile.h
> > > @@ -974,7 +974,7 @@ int get_versiondep_info_arm64(void);
> > >  int get_xen_basic_info_arm64(void);
> > >  int get_xen_info_arm64(void);
> > >  unsigned long get_kaslr_offset_arm64(unsigned long vaddr);
> > > -#define paddr_to_vaddr_arm64(X) (((X) - info->phys_base) | PAGE_OFFSET)
> > > +#define paddr_to_vaddr_arm64(X) (((X) - (info->phys_base - PAGE_OFFSET)))
> > >
> > >  #define find_vmemmap()             stub_false()
> > >  #define vaddr_to_paddr(X)  vaddr_to_paddr_arm64(X)
> > > @@ -1937,6 +1937,7 @@ struct number_table {
> > >     long    KERNEL_IMAGE_SIZE;
> > >  #ifdef __aarch64__
> > >     long    VA_BITS;
> > > +   unsigned long   tcr_el1_t1sz;
> > >     unsigned long   PHYS_OFFSET;
> > >     unsigned long   kimage_voffset;
> > >  #endif
> > > --
> > > 2.7.4
> > >


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
  2019-12-04 17:34   ` Kazuhito Hagio
@ 2019-12-05 18:17     ` Bhupesh Sharma
  2019-12-05 20:41       ` Kazuhito Hagio
  0 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-12-05 18:17 UTC (permalink / raw)
  To: Kazuhito Hagio; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Kazu,

On Wed, Dec 4, 2019 at 11:05 PM Kazuhito Hagio <k-hagio@ab.jp.nec.com> wrote:
>
> Hi Bhupesh,
>
> Sorry for the late reply.

No problem.

> > -----Original Message-----
> > This patch adds a common feature for archs (except arm64, for which
> > similar support is added via subsequent patch) to retrieve
> > 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
>
> We already have the calibrate_machdep_info() function, which sets
> info->max_physmem_bits from vmcoreinfo, so practically we don't need
> to add this patch for the benefit.

Since other user-space tools like crash use the 'MAX_PHYSMEM_BITS' value
as well, it was agreed with the arm64 maintainers that the cleaner
approach is to export it in vmcoreinfo, rather than have each user-space
tool determine the same value by a different method.

Take the PPC makedumpfile implementation as an example. It uses the
following complex method of determining 'info->max_physmem_bits':
int
set_ppc64_max_physmem_bits(void)
{
    long array_len = ARRAY_LENGTH(mem_section);
    /*
     * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
     * newer kernels 3.7 onwards uses 46 bits.
     */

    info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
    if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
        || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
        return TRUE;

    info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
    if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
        || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
        return TRUE;

    info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
    if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
        || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
        return TRUE;

    info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
    if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
        || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
        return TRUE;

    return FALSE;
}

This needs modification (i.e. the introduction of yet another
_MAX_PHYSMEM_BITS_x_y macro) every time the value changes in a newer
kernel version.

I think this makes the code error-prone and hard to read. It's much
better to replace it with:

    /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
    if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
            info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
            return TRUE;
    } else {
            ...
    }

I think it will reduce future reworks (as per kernel versions) and
also reduce issues while backporting makedumpfile to older kernels.

What do you think?

Regards,
Bhupesh
> > I recently posted a kernel patch (see [0]) which appends
> > 'MAX_PHYSMEM_BITS' to vmcoreinfo in the core code itself rather than
> > in arch-specific code, so that user-space code can also benefit from
> > this addition to the vmcoreinfo and use it as a standard way of
> > determining 'SECTIONS_SHIFT' value in 'makedumpfile' utility.
> >
> > This patch ensures backward compatibility for kernel versions in which
> > 'MAX_PHYSMEM_BITS' is not available in vmcoreinfo.
> >
> > [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> >
> > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > Cc: kexec@lists.infradead.org
> > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > ---
> >  arch/arm.c     |  8 +++++++-
> >  arch/ia64.c    |  7 ++++++-
> >  arch/ppc.c     |  8 +++++++-
> >  arch/ppc64.c   | 49 ++++++++++++++++++++++++++++---------------------
> >  arch/s390x.c   | 29 ++++++++++++++++++-----------
> >  arch/sparc64.c |  9 +++++++--
> >  arch/x86.c     | 34 ++++++++++++++++++++--------------
> >  arch/x86_64.c  | 27 ++++++++++++++++-----------
> >  8 files changed, 109 insertions(+), 62 deletions(-)
> >
> > diff --git a/arch/arm.c b/arch/arm.c
> > index af7442ac70bf..33536fc4dfc9 100644
> > --- a/arch/arm.c
> > +++ b/arch/arm.c
> > @@ -81,7 +81,13 @@ int
> >  get_machdep_info_arm(void)
> >  {
> >       info->page_offset = SYMBOL(_stext) & 0xffff0000UL;
> > -     info->max_physmem_bits = _MAX_PHYSMEM_BITS;
> > +
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +     else
> > +             info->max_physmem_bits = _MAX_PHYSMEM_BITS;
> > +
> >       info->kernel_start = SYMBOL(_stext);
> >       info->section_size_bits = _SECTION_SIZE_BITS;
> >
> > diff --git a/arch/ia64.c b/arch/ia64.c
> > index 6c33cc7c8288..fb44dda47172 100644
> > --- a/arch/ia64.c
> > +++ b/arch/ia64.c
> > @@ -85,7 +85,12 @@ get_machdep_info_ia64(void)
> >       }
> >
> >       info->section_size_bits = _SECTION_SIZE_BITS;
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > +
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +     else
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> >
> >       return TRUE;
> >  }
> > diff --git a/arch/ppc.c b/arch/ppc.c
> > index 37c6a3b60cd3..ed9447427a30 100644
> > --- a/arch/ppc.c
> > +++ b/arch/ppc.c
> > @@ -31,7 +31,13 @@ get_machdep_info_ppc(void)
> >       unsigned long vmlist, vmap_area_list, vmalloc_start;
> >
> >       info->section_size_bits = _SECTION_SIZE_BITS;
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > +
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +     else
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > +
> >       info->page_offset = __PAGE_OFFSET;
> >
> >       if (SYMBOL(_stext) != NOT_FOUND_SYMBOL)
> > diff --git a/arch/ppc64.c b/arch/ppc64.c
> > index 9d8f2525f608..a3984eebdced 100644
> > --- a/arch/ppc64.c
> > +++ b/arch/ppc64.c
> > @@ -466,30 +466,37 @@ int
> >  set_ppc64_max_physmem_bits(void)
> >  {
> >       long array_len = ARRAY_LENGTH(mem_section);
> > -     /*
> > -      * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > -      * newer kernels 3.7 onwards uses 46 bits.
> > -      */
> > -
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > -             return TRUE;
> > -
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
> > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > -             return TRUE;
> >
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
> > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> >               return TRUE;
> > +     } else {
> > +             /*
> > +              * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > +              * newer kernels 3.7 onwards uses 46 bits.
> > +              */
> >
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
> > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > -             return TRUE;
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +                     return TRUE;
> > +
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
> > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +                     return TRUE;
> > +
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
> > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +                     return TRUE;
> > +
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
> > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +                     return TRUE;
> > +     }
> >
> >       return FALSE;
> >  }
> > diff --git a/arch/s390x.c b/arch/s390x.c
> > index bf9d58e54fb7..4d17a783e5bd 100644
> > --- a/arch/s390x.c
> > +++ b/arch/s390x.c
> > @@ -63,20 +63,27 @@ int
> >  set_s390x_max_physmem_bits(void)
> >  {
> >       long array_len = ARRAY_LENGTH(mem_section);
> > -     /*
> > -      * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > -      * newer kernels uses 46 bits.
> > -      */
> >
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> >               return TRUE;
> > +     } else {
> > +             /*
> > +              * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > +              * newer kernels uses 46 bits.
> > +              */
> >
> > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
> > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > -             return TRUE;
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +                     return TRUE;
> > +
> > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
> > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +                     return TRUE;
> > +     }
> >
> >       return FALSE;
> >  }
> > diff --git a/arch/sparc64.c b/arch/sparc64.c
> > index 1cfaa854ce6d..b93a05bdfe59 100644
> > --- a/arch/sparc64.c
> > +++ b/arch/sparc64.c
> > @@ -25,10 +25,15 @@ int get_versiondep_info_sparc64(void)
> >  {
> >       info->section_size_bits = _SECTION_SIZE_BITS;
> >
> > -     if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +     else if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
> >               info->max_physmem_bits = _MAX_PHYSMEM_BITS_L4;
> > -     else {
> > +     else
> >               info->max_physmem_bits = _MAX_PHYSMEM_BITS_L3;
> > +
> > +     if (info->kernel_version < KERNEL_VERSION(3, 8, 13)) {
> >               info->flag_vmemmap = TRUE;
> >               info->vmemmap_start = VMEMMAP_BASE_SPARC64;
> >               info->vmemmap_end = VMEMMAP_BASE_SPARC64 +
> > diff --git a/arch/x86.c b/arch/x86.c
> > index 3fdae93084b8..f1b43d4c8179 100644
> > --- a/arch/x86.c
> > +++ b/arch/x86.c
> > @@ -72,21 +72,27 @@ get_machdep_info_x86(void)
> >  {
> >       unsigned long vmlist, vmap_area_list, vmalloc_start;
> >
> > -     /* PAE */
> > -     if ((vt.mem_flags & MEMORY_X86_PAE)
> > -         || ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
> > -           && (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
> > -           && ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
> > -           == 512)) {
> > -             DEBUG_MSG("\n");
> > -             DEBUG_MSG("PAE          : ON\n");
> > -             vt.mem_flags |= MEMORY_X86_PAE;
> > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
> > -     } else {
> > -             DEBUG_MSG("\n");
> > -             DEBUG_MSG("PAE          : OFF\n");
> > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +     else {
> > +             /* PAE */
> > +             if ((vt.mem_flags & MEMORY_X86_PAE)
> > +                             || ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
> > +                                     && (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
> > +                                     && ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
> > +                                     == 512)) {
> > +                     DEBUG_MSG("\n");
> > +                     DEBUG_MSG("PAE          : ON\n");
> > +                     vt.mem_flags |= MEMORY_X86_PAE;
> > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
> > +             } else {
> > +                     DEBUG_MSG("\n");
> > +                     DEBUG_MSG("PAE          : OFF\n");
> > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > +             }
> >       }
> > +
> >       info->page_offset = __PAGE_OFFSET;
> >
> >       if (SYMBOL(_stext) == NOT_FOUND_SYMBOL) {
> > diff --git a/arch/x86_64.c b/arch/x86_64.c
> > index 876644f932be..eff90307552c 100644
> > --- a/arch/x86_64.c
> > +++ b/arch/x86_64.c
> > @@ -268,17 +268,22 @@ get_machdep_info_x86_64(void)
> >  int
> >  get_versiondep_info_x86_64(void)
> >  {
> > -     /*
> > -      * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
> > -      */
> > -     if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
> > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
> > -     else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
> > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
> > -     else if(check_5level_paging())
> > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
> > -     else
> > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
> > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +     } else {
> > +             /*
> > +              * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
> > +              */
> > +             if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
> > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
> > +             else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
> > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
> > +             else if(check_5level_paging())
> > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
> > +             else
> > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
> > +     }
> >
> >       if (!get_page_offset_x86_64())
> >               return FALSE;
> > --
> > 2.7.4
> >


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  2019-12-04 17:36   ` Kazuhito Hagio
@ 2019-12-05 18:21     ` Bhupesh Sharma
  2019-12-05 20:45       ` Kazuhito Hagio
  0 siblings, 1 reply; 34+ messages in thread
From: Bhupesh Sharma @ 2019-12-05 18:21 UTC (permalink / raw)
  To: Kazuhito Hagio; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Kazu,

On Wed, Dec 4, 2019 at 11:07 PM Kazuhito Hagio <k-hagio@ab.jp.nec.com> wrote:
>
> > -----Original Message-----
> > ARMv8.2-LPA architecture extension (if available on underlying hardware)
> > can support 52-bit physical addresses, while the kernel virtual
> > addresses remain 48-bit.
> >
> > Make sure that we read the 52-bit PA address capability from
> > 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo) and
> > accordingly change the pte_to_phy() mask values and also traverse
> > the page-table walk accordingly.
> >
> > Also make sure that it works well for the existing 48-bit PA address
> > platforms and also on environments which use newer kernels with 52-bit
> > PA support but hardware which is not ARM8.2-LPA compliant.
> >
> > I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to
> > vmcoreinfo for arm64 (see [0]).
> >
> > This patch is in accordance with ARMv8 Architecture Reference Manual
> > version D.a
> >
> > [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> >
> > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > Cc: kexec@lists.infradead.org
> > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > ---
> >  arch/arm64.c | 292 +++++++++++++++++++++++++++++++++++++++++------------------
> >  1 file changed, 204 insertions(+), 88 deletions(-)
> >
> > diff --git a/arch/arm64.c b/arch/arm64.c
> > index 3516b340adfd..ecb19139e178 100644
> > --- a/arch/arm64.c
> > +++ b/arch/arm64.c
> > @@ -39,72 +39,184 @@ typedef struct {
> >       unsigned long pte;
> >  } pte_t;
> >
>
> > +#define __pte(x)     ((pte_t) { (x) } )
> > +#define __pmd(x)     ((pmd_t) { (x) } )
> > +#define __pud(x)     ((pud_t) { (x) } )
> > +#define __pgd(x)     ((pgd_t) { (x) } )
>
> Is it possible to remove these macros?

Ok, will fix in v5.

> > +
> > +static int lpa_52_bit_support_available;
> >  static int pgtable_level;
> >  static int va_bits;
> >  static unsigned long kimage_voffset;
> >
> > -#define SZ_4K                        (4 * 1024)
> > -#define SZ_16K                       (16 * 1024)
> > -#define SZ_64K                       (64 * 1024)
> > -#define SZ_128M                      (128 * 1024 * 1024)
> > +#define SZ_4K                        4096
> > +#define SZ_16K                       16384
> > +#define SZ_64K                       65536
> >
> > -#define PAGE_OFFSET_36 ((0xffffffffffffffffUL) << 36)
> > -#define PAGE_OFFSET_39 ((0xffffffffffffffffUL) << 39)
> > -#define PAGE_OFFSET_42 ((0xffffffffffffffffUL) << 42)
> > -#define PAGE_OFFSET_47 ((0xffffffffffffffffUL) << 47)
> > -#define PAGE_OFFSET_48 ((0xffffffffffffffffUL) << 48)
> > +#define PAGE_OFFSET_36               ((0xffffffffffffffffUL) << 36)
> > +#define PAGE_OFFSET_39               ((0xffffffffffffffffUL) << 39)
> > +#define PAGE_OFFSET_42               ((0xffffffffffffffffUL) << 42)
> > +#define PAGE_OFFSET_47               ((0xffffffffffffffffUL) << 47)
> > +#define PAGE_OFFSET_48               ((0xffffffffffffffffUL) << 48)
> > +#define PAGE_OFFSET_52               ((0xffffffffffffffffUL) << 52)
> >
> >  #define pgd_val(x)           ((x).pgd)
> >  #define pud_val(x)           (pgd_val((x).pgd))
> >  #define pmd_val(x)           (pud_val((x).pud))
> >  #define pte_val(x)           ((x).pte)
> >
> > -#define PAGE_MASK            (~(PAGESIZE() - 1))
> > -#define PGDIR_SHIFT          ((PAGESHIFT() - 3) * pgtable_level + 3)
> > -#define PTRS_PER_PGD         (1 << (va_bits - PGDIR_SHIFT))
> > -#define PUD_SHIFT            get_pud_shift_arm64()
> > -#define PUD_SIZE             (1UL << PUD_SHIFT)
> > -#define PUD_MASK             (~(PUD_SIZE - 1))
> > -#define PTRS_PER_PTE         (1 << (PAGESHIFT() - 3))
> > -#define PTRS_PER_PUD         PTRS_PER_PTE
> > -#define PMD_SHIFT            ((PAGESHIFT() - 3) * 2 + 3)
> > -#define PMD_SIZE             (1UL << PMD_SHIFT)
> > -#define PMD_MASK             (~(PMD_SIZE - 1))
>
> > +/* See 'include/uapi/linux/const.h' for definitions below */
> > +#define __AC(X,Y)    (X##Y)
> > +#define _AC(X,Y)     __AC(X,Y)
> > +#define _AT(T,X)     ((T)(X))
> > +
> > +/* See 'include/asm/pgtable-types.h' for definitions below */
> > +typedef unsigned long pteval_t;
> > +typedef unsigned long pmdval_t;
> > +typedef unsigned long pudval_t;
> > +typedef unsigned long pgdval_t;
>
> Is it possible to remove these macros/typedefs as well?
> I don't think they make the code easier to read..

Ok. The idea behind it was to keep the makedumpfile (user-space)
page-table-walk code as close to the Linux kernel's as possible, so that
if we identify an issue in the user-space code, it is easier to
correlate it with the corresponding kernel code.

I will try to see how this can be addressed while keeping the code
easy to read and understand.

Regards,
Bhupesh

> > +
> > +#define PAGE_SHIFT   PAGESHIFT()
> > +
> > +/* See 'arch/arm64/include/asm/pgtable-hwdef.h' for definitions below */
> > +
> > +#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n)      ((PAGE_SHIFT - 3) * (4 - (n)) + 3)
> > +
> > +#define PTRS_PER_PTE         (1 << (PAGE_SHIFT - 3))
> > +
> > +/*
> > + * PMD_SHIFT determines the size a level 2 page table entry can map.
> > + */
> > +#define PMD_SHIFT            ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
> > +#define PMD_SIZE             (_AC(1, UL) << PMD_SHIFT)
> > +#define PMD_MASK             (~(PMD_SIZE-1))
> >  #define PTRS_PER_PMD         PTRS_PER_PTE
> >
> > -#define PAGE_PRESENT         (1 << 0)
> > +/*
> > + * PUD_SHIFT determines the size a level 1 page table entry can map.
> > + */
> > +#define PUD_SHIFT            ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
> > +#define PUD_SIZE             (_AC(1, UL) << PUD_SHIFT)
> > +#define PUD_MASK             (~(PUD_SIZE-1))
> > +#define PTRS_PER_PUD         PTRS_PER_PTE
> > +
> > +/*
> > + * PGDIR_SHIFT determines the size a top-level page table entry can map
> > + * (depending on the configuration, this level can be 0, 1 or 2).
> > + */
> > +#define PGDIR_SHIFT          ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (pgtable_level))
> > +#define PGDIR_SIZE           (_AC(1, UL) << PGDIR_SHIFT)
> > +#define PGDIR_MASK           (~(PGDIR_SIZE-1))
> > +#define PTRS_PER_PGD         (1 << ((va_bits) - PGDIR_SHIFT))
> > +
> > +/*
> > + * Section address mask and size definitions.
> > + */
> >  #define SECTIONS_SIZE_BITS   30
> > -/* Highest possible physical address supported */
> > -#define PHYS_MASK_SHIFT              48
> > -#define PHYS_MASK            ((1UL << PHYS_MASK_SHIFT) - 1)
> > +
> >  /*
> > - * Remove the highest order bits that are not a part of the
> > - * physical address in a section
> > + * Hardware page table definitions.
> > + *
> > + * Level 1 descriptor (PUD).
> >   */
> > -#define PMD_SECTION_MASK     ((1UL << 40) - 1)
> > +#define PUD_TYPE_TABLE               (_AT(pudval_t, 3) << 0)
> > +#define PUD_TABLE_BIT                (_AT(pudval_t, 1) << 1)
> > +#define PUD_TYPE_MASK                (_AT(pudval_t, 3) << 0)
> > +#define PUD_TYPE_SECT                (_AT(pudval_t, 1) << 0)
> >
> > -#define PMD_TYPE_MASK                3
> > -#define PMD_TYPE_SECT                1
> > -#define PMD_TYPE_TABLE               3
> > +/*
> > + * Level 2 descriptor (PMD).
> > + */
> > +#define PMD_TYPE_MASK                (_AT(pmdval_t, 3) << 0)
> > +#define PMD_TYPE_FAULT               (_AT(pmdval_t, 0) << 0)
> > +#define PMD_TYPE_TABLE               (_AT(pmdval_t, 3) << 0)
> > +#define PMD_TYPE_SECT                (_AT(pmdval_t, 1) << 0)
> > +#define PMD_TABLE_BIT                (_AT(pmdval_t, 1) << 1)
> > +
> > +/*
> > + * Level 3 descriptor (PTE).
> > + */
> > +#define PTE_ADDR_LOW         (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
> > +#define PTE_ADDR_HIGH                (_AT(pteval_t, 0xf) << 12)
> > +
> > +static inline unsigned long
> > +get_pte_addr_mask_arm64(void)
> > +{
> > +     if (lpa_52_bit_support_available)
> > +             return (PTE_ADDR_LOW | PTE_ADDR_HIGH);
> > +     else
> > +             return PTE_ADDR_LOW;
> > +}
> > +
> > +#define PTE_ADDR_MASK                get_pte_addr_mask_arm64()
> >
> > -#define PUD_TYPE_MASK                3
> > -#define PUD_TYPE_SECT                1
> > -#define PUD_TYPE_TABLE               3
> > +#define PAGE_MASK            (~(PAGESIZE() - 1))
> > +#define PAGE_PRESENT         (1 << 0)
> >
> > +/* Helper API to convert between a physical address and its placement
> > + * in a page table entry, taking care of 52-bit addresses.
> > + */
> > +static inline unsigned long
> > +__pte_to_phys(pte_t pte)
> > +{
> > +     if (lpa_52_bit_support_available)
> > +             return ((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36));
> > +     else
> > +             return (pte_val(pte) & PTE_ADDR_MASK);
> > +}
> > +
> > +/* Find an entry in a page-table-directory */
> >  #define pgd_index(vaddr)             (((vaddr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> > -#define pgd_offset(pgdir, vaddr)     ((pgd_t *)(pgdir) + pgd_index(vaddr))
> >
> > -#define pte_index(vaddr)             (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> > -#define pmd_page_paddr(pmd)          (pmd_val(pmd) & PHYS_MASK & (int32_t)PAGE_MASK)
> > -#define pte_offset(dir, vaddr)               ((pte_t*)pmd_page_paddr((*dir)) + pte_index(vaddr))
> > +static inline pte_t
> > +pgd_pte(pgd_t pgd)
> > +{
> > +     return __pte(pgd_val(pgd));
> > +}
> >
> > -#define pmd_index(vaddr)             (((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
> > -#define pud_page_paddr(pud)          (pud_val(pud) & PHYS_MASK & (int32_t)PAGE_MASK)
> > -#define pmd_offset_pgtbl_lvl_2(pud, vaddr) ((pmd_t *)pud)
> > -#define pmd_offset_pgtbl_lvl_3(pud, vaddr) ((pmd_t *)pud_page_paddr((*pud)) + pmd_index(vaddr))
> > +#define __pgd_to_phys(pgd)           __pte_to_phys(pgd_pte(pgd))
> > +#define pgd_offset(pgd, vaddr)               ((pgd_t *)(pgd) + pgd_index(vaddr))
> > +
> > +static inline pte_t pud_pte(pud_t pud)
> > +{
> > +     return __pte(pud_val(pud));
> > +}
> >
> > +static inline unsigned long
> > +pgd_page_paddr(pgd_t pgd)
> > +{
> > +     return __pgd_to_phys(pgd);
> > +}
> > +
> > +/* Find an entry in the first-level page table. */
> >  #define pud_index(vaddr)             (((vaddr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
> > -#define pgd_page_paddr(pgd)          (pgd_val(pgd) & PHYS_MASK & (int32_t)PAGE_MASK)
> > +#define __pud_to_phys(pud)           __pte_to_phys(pud_pte(pud))
> > +
> > +static inline unsigned long
> > +pud_page_paddr(pud_t pud)
> > +{
> > +     return __pud_to_phys(pud);
> > +}
> > +
> > +/* Find an entry in the second-level page table. */
> > +#define pmd_index(vaddr)             (((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
> > +
> > +static inline pte_t pmd_pte(pmd_t pmd)
> > +{
> > +     return __pte(pmd_val(pmd));
> > +}
> > +
> > +#define __pmd_to_phys(pmd)           __pte_to_phys(pmd_pte(pmd))
> > +
> > +static inline unsigned long
> > +pmd_page_paddr(pmd_t pmd)
> > +{
> > +     return __pmd_to_phys(pmd);
> > +}
> > +
> > +/* Find an entry in the third-level page table. */
> > +#define pte_index(vaddr)             (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> > +#define pte_offset(dir, vaddr)               (pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
> >
> >  static unsigned long long
> >  __pa(unsigned long vaddr)
> > @@ -116,32 +228,22 @@ __pa(unsigned long vaddr)
> >               return (vaddr - kimage_voffset);
> >  }
> >
> > -static int
> > -get_pud_shift_arm64(void)
> > +static pud_t *
> > +pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
> >  {
> > -     if (pgtable_level == 4)
> > -             return ((PAGESHIFT() - 3) * 3 + 3);
> > +     if (pgtable_level > 3)
> > +             return (pud_t *)(pgd_page_paddr(*pgdv) + pud_index(vaddr) * sizeof(pud_t));
> >       else
> > -             return PGDIR_SHIFT;
> > +             return (pud_t *)(pgda);
> >  }
> >
> >  static pmd_t *
> >  pmd_offset(pud_t *puda, pud_t *pudv, unsigned long vaddr)
> >  {
> > -     if (pgtable_level == 2) {
> > -             return pmd_offset_pgtbl_lvl_2(puda, vaddr);
> > -     } else {
> > -             return pmd_offset_pgtbl_lvl_3(pudv, vaddr);
> > -     }
> > -}
> > -
> > -static pud_t *
> > -pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
> > -{
> > -     if (pgtable_level == 4)
> > -             return ((pud_t *)pgd_page_paddr((*pgdv)) + pud_index(vaddr));
> > +     if (pgtable_level > 2)
> > +             return (pmd_t *)(pud_page_paddr(*pudv) + pmd_index(vaddr) * sizeof(pmd_t));
> >       else
> > -             return (pud_t *)(pgda);
> > +             return (pmd_t*)(puda);
> >  }
> >
> >  static int calculate_plat_config(void)
> > @@ -307,6 +409,14 @@ get_stext_symbol(void)
> >  int
> >  get_machdep_info_arm64(void)
> >  {
> > +     /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
> > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +             if (info->max_physmem_bits == 52)
> > +                     lpa_52_bit_support_available = 1;
> > +     } else
> > +             info->max_physmem_bits = 48;
> > +
> >       /* Check if va_bits is still not initialized. If still 0, call
> >        * get_versiondep_info() to initialize the same.
> >        */
> > @@ -319,12 +429,11 @@ get_machdep_info_arm64(void)
> >       }
> >
> >       kimage_voffset = NUMBER(kimage_voffset);
> > -     info->max_physmem_bits = PHYS_MASK_SHIFT;
> >       info->section_size_bits = SECTIONS_SIZE_BITS;
> >
> >       DEBUG_MSG("kimage_voffset   : %lx\n", kimage_voffset);
> > -     DEBUG_MSG("max_physmem_bits : %lx\n", info->max_physmem_bits);
> > -     DEBUG_MSG("section_size_bits: %lx\n", info->section_size_bits);
> > +     DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits);
> > +     DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits);
> >
> >       return TRUE;
> >  }
> > @@ -382,6 +491,19 @@ get_versiondep_info_arm64(void)
> >       return TRUE;
> >  }
> >
> > +/* 1GB section for Page Table level = 4 and Page Size = 4KB */
> > +static int
> > +is_pud_sect(pud_t pud)
> > +{
> > +     return ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT);
> > +}
> > +
> > +static int
> > +is_pmd_sect(pmd_t pmd)
> > +{
> > +     return ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT);
> > +}
> > +
> >  /*
> >   * vaddr_to_paddr_arm64() - translate arbitrary virtual address to physical
> >   * @vaddr: virtual address to translate
> > @@ -419,10 +541,9 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
> >               return NOT_PADDR;
> >       }
> >
> > -     if ((pud_val(pudv) & PUD_TYPE_MASK) == PUD_TYPE_SECT) {
> > -             /* 1GB section for Page Table level = 4 and Page Size = 4KB */
> > -             paddr = (pud_val(pudv) & (PUD_MASK & PMD_SECTION_MASK))
> > -                                     + (vaddr & (PUD_SIZE - 1));
> > +     if (is_pud_sect(pudv)) {
> > +             paddr = (pud_page_paddr(pudv) & PUD_MASK) +
> > +                             (vaddr & (PUD_SIZE - 1));
> >               return paddr;
> >       }
> >
> > @@ -432,29 +553,24 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
> >               return NOT_PADDR;
> >       }
> >
> > -     switch (pmd_val(pmdv) & PMD_TYPE_MASK) {
> > -     case PMD_TYPE_TABLE:
> > -             ptea = pte_offset(&pmdv, vaddr);
> > -             /* 64k page */
> > -             if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
> > -                     ERRMSG("Can't read pte\n");
> > -                     return NOT_PADDR;
> > -             }
> > +     if (is_pmd_sect(pmdv)) {
> > +             paddr = (pmd_page_paddr(pmdv) & PMD_MASK) +
> > +                             (vaddr & (PMD_SIZE - 1));
> > +             return paddr;
> > +     }
> >
> > -             if (!(pte_val(ptev) & PAGE_PRESENT)) {
> > -                     ERRMSG("Can't get a valid pte.\n");
> > -                     return NOT_PADDR;
> > -             } else {
> > +     ptea = (pte_t *)pte_offset(&pmdv, vaddr);
> > +     if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
> > +             ERRMSG("Can't read pte\n");
> > +             return NOT_PADDR;
> > +     }
> >
> > -                     paddr = (PAGEBASE(pte_val(ptev)) & PHYS_MASK)
> > -                                     + (vaddr & (PAGESIZE() - 1));
> > -             }
> > -             break;
> > -     case PMD_TYPE_SECT:
> > -             /* 512MB section for Page Table level = 3 and Page Size = 64KB*/
> > -             paddr = (pmd_val(pmdv) & (PMD_MASK & PMD_SECTION_MASK))
> > -                                     + (vaddr & (PMD_SIZE - 1));
> > -             break;
> > +     if (!(pte_val(ptev) & PAGE_PRESENT)) {
> > +             ERRMSG("Can't get a valid pte.\n");
> > +             return NOT_PADDR;
> > +     } else {
> > +             paddr = __pte_to_phys(ptev) +
> > +                             (vaddr & (PAGESIZE() - 1));
> >       }
> >
> >       return paddr;
> > --
> > 2.7.4
> >
>
>
>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
>



* Re: [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64
  2019-12-04 17:49   ` Kazuhito Hagio
@ 2019-12-05 18:24     ` Bhupesh Sharma
  0 siblings, 0 replies; 34+ messages in thread
From: Bhupesh Sharma @ 2019-12-05 18:24 UTC (permalink / raw)
  To: Kazuhito Hagio; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Kazu,

On Wed, Dec 4, 2019 at 11:20 PM Kazuhito Hagio <k-hagio@ab.jp.nec.com> wrote:
>
> > -----Original Message-----
> > This patch marks '--mem-usage' option as unsupported for arm64
> > architecture.
> >
> > With the newer arm64 kernels supporting 48-bit/52-bit VA address spaces
> > and keeping a single binary for supporting the same, the address of
> > kernel symbols like _stext, which could earlier be used to determine
> > the VA_BITS value, can no longer be used to determine whether VA_BITS
> > is set to 48 or 52 in the kernel space.
>
> The --mem-usage option works with older arm64 kernels, so we should not
> mark it unsupported for all arm64 kernels.
>
> (If we use ELF note vmcoreinfo in kcore, is it possible to support the
> option?  Let's think about it later..)

Ok, I am in the process of discussing this with the arm64 maintainers
in detail, as the _stext symbol address can no longer be used to
distinguish between 48-bit and 52-bit kernel VA space configurations.

Other user-space utilities like 'kexec-tools' face a similar problem
with the 52-bit change (vmcore-dmesg stops working).

I am currently caught up with another high priority issue. Will come
back with more thoughts on this in a couple of days.

Thanks,
Bhupesh

> > Hence for now, it makes sense to mark '--mem-usage' option as
> > unsupported for arm64 architecture until we have more clarity from arm64
> > kernel maintainers on how to manage the same in future
> > kernel/makedumpfile versions.
> >
> > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > Cc: kexec@lists.infradead.org
> > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > ---
> >  makedumpfile.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/makedumpfile.c b/makedumpfile.c
> > index baf559e4d74e..ae60466a1e9c 100644
> > --- a/makedumpfile.c
> > +++ b/makedumpfile.c
> > @@ -11564,6 +11564,11 @@ main(int argc, char *argv[])
> >               MSG("\n");
> >               MSG("The dmesg log is saved to %s.\n", info->name_dumpfile);
> >       } else if (info->flag_mem_usage) {
> > +#ifdef __aarch64__
> > +             MSG("mem-usage not supported for arm64 architecture.\n");
> > +             goto out;
> > +#endif
> > +
> >               if (!check_param_for_creating_dumpfile(argc, argv)) {
> >                       MSG("Commandline parameter is invalid.\n");
> >                       MSG("Try `makedumpfile --help' for more information.\n");
> > --
> > 2.7.4
> >



* RE: [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available)
  2019-12-05 18:17     ` Bhupesh Sharma
@ 2019-12-05 20:41       ` Kazuhito Hagio
  0 siblings, 0 replies; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-05 20:41 UTC (permalink / raw)
  To: Bhupesh Sharma; +Cc: John Donnelly, bhupesh.linux, kexec

Hi Bhupesh,

> -----Original Message-----
> > > -----Original Message-----
> > > This patch adds a common feature for archs (except arm64, for which
> > > similar support is added via subsequent patch) to retrieve
> > > 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available).
> >
> > We already have the calibrate_machdep_info() function, which sets
> > info->max_physmem_bits from vmcoreinfo, so practically we don't need
> > this patch to get that benefit.

I meant that we already have an arch-independent setter for info->max_physmem_bits:

 3714 int
 3715 calibrate_machdep_info(void)
 3716 {
 3717         if (NUMBER(MAX_PHYSMEM_BITS) > 0)
 3718                 info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
 3719 
 3720         if (NUMBER(SECTION_SIZE_BITS) > 0)
 3721                 info->section_size_bits = NUMBER(SECTION_SIZE_BITS);
 3722 
 3723         return TRUE;
 3724 }

so if NUMBER(MAX_PHYSMEM_BITS) appears, it is automatically used in makedumpfile
without this patch 1/4.

Thanks,
Kazu

> 
> Since other user-space tools like crash use the 'MAX_PHYSMEM_BITS'
> value as well, it was agreed with the arm64 maintainers that it would
> be a good approach to export it in vmcoreinfo rather than use
> different methods to determine it in user-space.
> 
> Take the PPC makedumpfile implementation, for example. It uses the
> following complex method of determining 'info->max_physmem_bits':
> int
> set_ppc64_max_physmem_bits(void)
> {
>     long array_len = ARRAY_LENGTH(mem_section);
>     /*
>      * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
>      * newer kernels 3.7 onwards uses 46 bits.
>      */
> 
>     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
>     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
>         || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
>         return TRUE;
> 
>     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
>     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
>         || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
>         return TRUE;
> 
>     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
>     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
>         || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
>         return TRUE;
> 
>     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
>     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
>         || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
>         return TRUE;
> 
>     return FALSE;
> }
> 
> This needs modification and the introduction of another
> _MAX_PHYSMEM_BITS_x_y macro whenever the value changes in a newer
> kernel version.
> 
> I think this makes the code error-prone and hard to read. It's much
> better to replace it with:
> /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
>     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
>         info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
>         return TRUE;
> } else {
> ..
> }
> 
> I think it will reduce future rework (as kernel versions change) and
> also reduce issues when backporting makedumpfile to older kernels.
> 
> What do you think?
> 
> Regards,
> Bhupesh
> > > I recently posted a kernel patch (see [0]) which appends
> > > 'MAX_PHYSMEM_BITS' to vmcoreinfo in the core code itself rather than
> > > in arch-specific code, so that user-space code can also benefit from
> > > this addition to the vmcoreinfo and use it as a standard way of
> > > determining 'SECTIONS_SHIFT' value in 'makedumpfile' utility.
> > >
> > > This patch ensures backward compatibility for kernel versions in which
> > > 'MAX_PHYSMEM_BITS' is not available in vmcoreinfo.
> > >
> > > [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> > >
> > > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > > Cc: kexec@lists.infradead.org
> > > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > > ---
> > >  arch/arm.c     |  8 +++++++-
> > >  arch/ia64.c    |  7 ++++++-
> > >  arch/ppc.c     |  8 +++++++-
> > >  arch/ppc64.c   | 49 ++++++++++++++++++++++++++++---------------------
> > >  arch/s390x.c   | 29 ++++++++++++++++++-----------
> > >  arch/sparc64.c |  9 +++++++--
> > >  arch/x86.c     | 34 ++++++++++++++++++++--------------
> > >  arch/x86_64.c  | 27 ++++++++++++++++-----------
> > >  8 files changed, 109 insertions(+), 62 deletions(-)
> > >
> > > diff --git a/arch/arm.c b/arch/arm.c
> > > index af7442ac70bf..33536fc4dfc9 100644
> > > --- a/arch/arm.c
> > > +++ b/arch/arm.c
> > > @@ -81,7 +81,13 @@ int
> > >  get_machdep_info_arm(void)
> > >  {
> > >       info->page_offset = SYMBOL(_stext) & 0xffff0000UL;
> > > -     info->max_physmem_bits = _MAX_PHYSMEM_BITS;
> > > +
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +     else
> > > +             info->max_physmem_bits = _MAX_PHYSMEM_BITS;
> > > +
> > >       info->kernel_start = SYMBOL(_stext);
> > >       info->section_size_bits = _SECTION_SIZE_BITS;
> > >
> > > diff --git a/arch/ia64.c b/arch/ia64.c
> > > index 6c33cc7c8288..fb44dda47172 100644
> > > --- a/arch/ia64.c
> > > +++ b/arch/ia64.c
> > > @@ -85,7 +85,12 @@ get_machdep_info_ia64(void)
> > >       }
> > >
> > >       info->section_size_bits = _SECTION_SIZE_BITS;
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > > +
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +     else
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > >
> > >       return TRUE;
> > >  }
> > > diff --git a/arch/ppc.c b/arch/ppc.c
> > > index 37c6a3b60cd3..ed9447427a30 100644
> > > --- a/arch/ppc.c
> > > +++ b/arch/ppc.c
> > > @@ -31,7 +31,13 @@ get_machdep_info_ppc(void)
> > >       unsigned long vmlist, vmap_area_list, vmalloc_start;
> > >
> > >       info->section_size_bits = _SECTION_SIZE_BITS;
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > > +
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +     else
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > > +
> > >       info->page_offset = __PAGE_OFFSET;
> > >
> > >       if (SYMBOL(_stext) != NOT_FOUND_SYMBOL)
> > > diff --git a/arch/ppc64.c b/arch/ppc64.c
> > > index 9d8f2525f608..a3984eebdced 100644
> > > --- a/arch/ppc64.c
> > > +++ b/arch/ppc64.c
> > > @@ -466,30 +466,37 @@ int
> > >  set_ppc64_max_physmem_bits(void)
> > >  {
> > >       long array_len = ARRAY_LENGTH(mem_section);
> > > -     /*
> > > -      * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > > -      * newer kernels 3.7 onwards uses 46 bits.
> > > -      */
> > > -
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > -             return TRUE;
> > > -
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
> > > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > -             return TRUE;
> > >
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
> > > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > >               return TRUE;
> > > +     } else {
> > > +             /*
> > > +              * The older ppc64 kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > > +              * newer kernels 3.7 onwards uses 46 bits.
> > > +              */
> > >
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
> > > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > -             return TRUE;
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +                     return TRUE;
> > > +
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_7;
> > > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +                     return TRUE;
> > > +
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_19;
> > > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +                     return TRUE;
> > > +
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_4_20;
> > > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +                     return TRUE;
> > > +     }
> > >
> > >       return FALSE;
> > >  }
> > > diff --git a/arch/s390x.c b/arch/s390x.c
> > > index bf9d58e54fb7..4d17a783e5bd 100644
> > > --- a/arch/s390x.c
> > > +++ b/arch/s390x.c
> > > @@ -63,20 +63,27 @@ int
> > >  set_s390x_max_physmem_bits(void)
> > >  {
> > >       long array_len = ARRAY_LENGTH(mem_section);
> > > -     /*
> > > -      * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > > -      * newer kernels uses 46 bits.
> > > -      */
> > >
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > >               return TRUE;
> > > +     } else {
> > > +             /*
> > > +              * The older s390x kernels uses _MAX_PHYSMEM_BITS as 42 and the
> > > +              * newer kernels uses 46 bits.
> > > +              */
> > >
> > > -     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
> > > -     if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > -             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > -             return TRUE;
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG ;
> > > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +                     return TRUE;
> > > +
> > > +             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_3_3;
> > > +             if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > > +                             || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > > +                     return TRUE;
> > > +     }
> > >
> > >       return FALSE;
> > >  }
> > > diff --git a/arch/sparc64.c b/arch/sparc64.c
> > > index 1cfaa854ce6d..b93a05bdfe59 100644
> > > --- a/arch/sparc64.c
> > > +++ b/arch/sparc64.c
> > > @@ -25,10 +25,15 @@ int get_versiondep_info_sparc64(void)
> > >  {
> > >       info->section_size_bits = _SECTION_SIZE_BITS;
> > >
> > > -     if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +     else if (info->kernel_version >= KERNEL_VERSION(3, 8, 13))
> > >               info->max_physmem_bits = _MAX_PHYSMEM_BITS_L4;
> > > -     else {
> > > +     else
> > >               info->max_physmem_bits = _MAX_PHYSMEM_BITS_L3;
> > > +
> > > +     if (info->kernel_version < KERNEL_VERSION(3, 8, 13)) {
> > >               info->flag_vmemmap = TRUE;
> > >               info->vmemmap_start = VMEMMAP_BASE_SPARC64;
> > >               info->vmemmap_end = VMEMMAP_BASE_SPARC64 +
> > > diff --git a/arch/x86.c b/arch/x86.c
> > > index 3fdae93084b8..f1b43d4c8179 100644
> > > --- a/arch/x86.c
> > > +++ b/arch/x86.c
> > > @@ -72,21 +72,27 @@ get_machdep_info_x86(void)
> > >  {
> > >       unsigned long vmlist, vmap_area_list, vmalloc_start;
> > >
> > > -     /* PAE */
> > > -     if ((vt.mem_flags & MEMORY_X86_PAE)
> > > -         || ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
> > > -           && (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
> > > -           && ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
> > > -           == 512)) {
> > > -             DEBUG_MSG("\n");
> > > -             DEBUG_MSG("PAE          : ON\n");
> > > -             vt.mem_flags |= MEMORY_X86_PAE;
> > > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
> > > -     } else {
> > > -             DEBUG_MSG("\n");
> > > -             DEBUG_MSG("PAE          : OFF\n");
> > > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER)
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +     else {
> > > +             /* PAE */
> > > +             if ((vt.mem_flags & MEMORY_X86_PAE)
> > > +                             || ((SYMBOL(pkmap_count) != NOT_FOUND_SYMBOL)
> > > +                                     && (SYMBOL(pkmap_count_next) != NOT_FOUND_SYMBOL)
> > > +                                     &&
> ((SYMBOL(pkmap_count_next)-SYMBOL(pkmap_count))/sizeof(int))
> > > +                                     == 512)) {
> > > +                     DEBUG_MSG("\n");
> > > +                     DEBUG_MSG("PAE          : ON\n");
> > > +                     vt.mem_flags |= MEMORY_X86_PAE;
> > > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_PAE;
> > > +             } else {
> > > +                     DEBUG_MSG("\n");
> > > +                     DEBUG_MSG("PAE          : OFF\n");
> > > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS;
> > > +             }
> > >       }
> > > +
> > >       info->page_offset = __PAGE_OFFSET;
> > >
> > >       if (SYMBOL(_stext) == NOT_FOUND_SYMBOL) {
> > > diff --git a/arch/x86_64.c b/arch/x86_64.c
> > > index 876644f932be..eff90307552c 100644
> > > --- a/arch/x86_64.c
> > > +++ b/arch/x86_64.c
> > > @@ -268,17 +268,22 @@ get_machdep_info_x86_64(void)
> > >  int
> > >  get_versiondep_info_x86_64(void)
> > >  {
> > > -     /*
> > > -      * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
> > > -      */
> > > -     if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
> > > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
> > > -     else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
> > > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
> > > -     else if(check_5level_paging())
> > > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
> > > -     else
> > > -             info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
> > > +     /* Check if we can get MAX_PHYSMEM_BITS from vmcoreinfo */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +     } else {
> > > +             /*
> > > +              * On linux-2.6.26, MAX_PHYSMEM_BITS is changed to 44 from 40.
> > > +              */
> > > +             if (info->kernel_version < KERNEL_VERSION(2, 6, 26))
> > > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_ORIG;
> > > +             else if (info->kernel_version < KERNEL_VERSION(2, 6, 31))
> > > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_26;
> > > +             else if(check_5level_paging())
> > > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_5LEVEL;
> > > +             else
> > > +                     info->max_physmem_bits  = _MAX_PHYSMEM_BITS_2_6_31;
> > > +     }
> > >
> > >       if (!get_page_offset_x86_64())
> > >               return FALSE;
> > > --
> > > 2.7.4
> > >
> 


* RE: [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  2019-12-05 18:21     ` Bhupesh Sharma
@ 2019-12-05 20:45       ` Kazuhito Hagio
  0 siblings, 0 replies; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-05 20:45 UTC (permalink / raw)
  To: Bhupesh Sharma; +Cc: John Donnelly, bhupesh.linux, kexec

> -----Original Message-----
> Hi Kazu,
> 
> On Wed, Dec 4, 2019 at 11:07 PM Kazuhito Hagio <k-hagio@ab.jp.nec.com> wrote:
> >
> > > -----Original Message-----
> > > ARMv8.2-LPA architecture extension (if available on underlying hardware)
> > > can support 52-bit physical addresses, while the kernel virtual
> > > addresses remain 48-bit.
> > >
> > > Make sure that we read the 52-bit PA address capability from
> > > 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo) and
> > > accordingly change the pte_to_phy() mask values and also traverse
> > > the page-table walk accordingly.
> > >
> > > Also make sure that it works well for the existing 48-bit PA address
> > > platforms and also on environments which use newer kernels with 52-bit
> > > PA support but hardware which is not ARM8.2-LPA compliant.
> > >
> > > I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to
> > > vmcoreinfo for arm64 (see [0]).
> > >
> > > This patch is in accordance with ARMv8 Architecture Reference Manual
> > > version D.a
> > >
> > > [0]. http://lists.infradead.org/pipermail/kexec/2019-November/023960.html
> > >
> > > Cc: Kazuhito Hagio <k-hagio@ab.jp.nec.com>
> > > Cc: John Donnelly <john.p.donnelly@oracle.com>
> > > Cc: kexec@lists.infradead.org
> > > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
> > > ---
> > >  arch/arm64.c | 292 +++++++++++++++++++++++++++++++++++++++++------------------
> > >  1 file changed, 204 insertions(+), 88 deletions(-)
> > >
> > > diff --git a/arch/arm64.c b/arch/arm64.c
> > > index 3516b340adfd..ecb19139e178 100644
> > > --- a/arch/arm64.c
> > > +++ b/arch/arm64.c
> > > @@ -39,72 +39,184 @@ typedef struct {
> > >       unsigned long pte;
> > >  } pte_t;
> > >
> >
> > > +#define __pte(x)     ((pte_t) { (x) } )
> > > +#define __pmd(x)     ((pmd_t) { (x) } )
> > > +#define __pud(x)     ((pud_t) { (x) } )
> > > +#define __pgd(x)     ((pgd_t) { (x) } )
> >
> > Is it possible to remove these macros?
> 
> Ok, will fix in v5.
> 
> > > +
> > > +static int lpa_52_bit_support_available;
> > >  static int pgtable_level;
> > >  static int va_bits;
> > >  static unsigned long kimage_voffset;
> > >
> > > -#define SZ_4K                        (4 * 1024)
> > > -#define SZ_16K                       (16 * 1024)
> > > -#define SZ_64K                       (64 * 1024)
> > > -#define SZ_128M                      (128 * 1024 * 1024)
> > > +#define SZ_4K                        4096
> > > +#define SZ_16K                       16384
> > > +#define SZ_64K                       65536
> > >
> > > -#define PAGE_OFFSET_36 ((0xffffffffffffffffUL) << 36)
> > > -#define PAGE_OFFSET_39 ((0xffffffffffffffffUL) << 39)
> > > -#define PAGE_OFFSET_42 ((0xffffffffffffffffUL) << 42)
> > > -#define PAGE_OFFSET_47 ((0xffffffffffffffffUL) << 47)
> > > -#define PAGE_OFFSET_48 ((0xffffffffffffffffUL) << 48)
> > > +#define PAGE_OFFSET_36               ((0xffffffffffffffffUL) << 36)
> > > +#define PAGE_OFFSET_39               ((0xffffffffffffffffUL) << 39)
> > > +#define PAGE_OFFSET_42               ((0xffffffffffffffffUL) << 42)
> > > +#define PAGE_OFFSET_47               ((0xffffffffffffffffUL) << 47)
> > > +#define PAGE_OFFSET_48               ((0xffffffffffffffffUL) << 48)
> > > +#define PAGE_OFFSET_52               ((0xffffffffffffffffUL) << 52)
> > >
> > >  #define pgd_val(x)           ((x).pgd)
> > >  #define pud_val(x)           (pgd_val((x).pgd))
> > >  #define pmd_val(x)           (pud_val((x).pud))
> > >  #define pte_val(x)           ((x).pte)
> > >
> > > -#define PAGE_MASK            (~(PAGESIZE() - 1))
> > > -#define PGDIR_SHIFT          ((PAGESHIFT() - 3) * pgtable_level + 3)
> > > -#define PTRS_PER_PGD         (1 << (va_bits - PGDIR_SHIFT))
> > > -#define PUD_SHIFT            get_pud_shift_arm64()
> > > -#define PUD_SIZE             (1UL << PUD_SHIFT)
> > > -#define PUD_MASK             (~(PUD_SIZE - 1))
> > > -#define PTRS_PER_PTE         (1 << (PAGESHIFT() - 3))
> > > -#define PTRS_PER_PUD         PTRS_PER_PTE
> > > -#define PMD_SHIFT            ((PAGESHIFT() - 3) * 2 + 3)
> > > -#define PMD_SIZE             (1UL << PMD_SHIFT)
> > > -#define PMD_MASK             (~(PMD_SIZE - 1))
> >
> > > +/* See 'include/uapi/linux/const.h' for definitions below */
> > > +#define __AC(X,Y)    (X##Y)
> > > +#define _AC(X,Y)     __AC(X,Y)
> > > +#define _AT(T,X)     ((T)(X))
> > > +
> > > +/* See 'include/asm/pgtable-types.h' for definitions below */
> > > +typedef unsigned long pteval_t;
> > > +typedef unsigned long pmdval_t;
> > > +typedef unsigned long pudval_t;
> > > +typedef unsigned long pgdval_t;
> >
> > Is it possible to remove these macros/typedefs as well?
> > I don't think they make the code easier to read..
> 
> Ok. The idea behind it was to keep the makedumpfile (user-space)
> page-table-walk code as similar to the Linux kernel as possible, so
> that if we identify an issue in the user-space code, it is easier to
> correlate it with the corresponding kernel code.
> 
> I will try to see how this can be addressed while making the code
> easier to read/understand.

Thank you for understanding.

Yes, keeping makedumpfile similar to the kernel is one of the ways to
make it easier to understand.  But we have to take backward
compatibility into account, and kernel code changes over time.  So I
think keeping the code itself simple and easy to read/understand is
also important.

Thanks,
Kazu

> 
> Regards,
> Bhupesh
> 
> > > +
> > > +#define PAGE_SHIFT   PAGESHIFT()
> > > +
> > > +/* See 'arch/arm64/include/asm/pgtable-hwdef.h' for definitions below */
> > > +
> > > +#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n)      ((PAGE_SHIFT - 3) * (4 - (n)) + 3)
> > > +
> > > +#define PTRS_PER_PTE         (1 << (PAGE_SHIFT - 3))
> > > +
> > > +/*
> > > + * PMD_SHIFT determines the size a level 2 page table entry can map.
> > > + */
> > > +#define PMD_SHIFT            ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
> > > +#define PMD_SIZE             (_AC(1, UL) << PMD_SHIFT)
> > > +#define PMD_MASK             (~(PMD_SIZE-1))
> > >  #define PTRS_PER_PMD         PTRS_PER_PTE
> > >
> > > -#define PAGE_PRESENT         (1 << 0)
> > > +/*
> > > + * PUD_SHIFT determines the size a level 1 page table entry can map.
> > > + */
> > > +#define PUD_SHIFT            ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
> > > +#define PUD_SIZE             (_AC(1, UL) << PUD_SHIFT)
> > > +#define PUD_MASK             (~(PUD_SIZE-1))
> > > +#define PTRS_PER_PUD         PTRS_PER_PTE
> > > +
> > > +/*
> > > + * PGDIR_SHIFT determines the size a top-level page table entry can map
> > > + * (depending on the configuration, this level can be 0, 1 or 2).
> > > + */
> > > +#define PGDIR_SHIFT          ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (pgtable_level))
> > > +#define PGDIR_SIZE           (_AC(1, UL) << PGDIR_SHIFT)
> > > +#define PGDIR_MASK           (~(PGDIR_SIZE-1))
> > > +#define PTRS_PER_PGD         (1 << ((va_bits) - PGDIR_SHIFT))
> > > +
> > > +/*
> > > + * Section address mask and size definitions.
> > > + */
> > >  #define SECTIONS_SIZE_BITS   30
> > > -/* Highest possible physical address supported */
> > > -#define PHYS_MASK_SHIFT              48
> > > -#define PHYS_MASK            ((1UL << PHYS_MASK_SHIFT) - 1)
> > > +
> > >  /*
> > > - * Remove the highest order bits that are not a part of the
> > > - * physical address in a section
> > > + * Hardware page table definitions.
> > > + *
> > > + * Level 1 descriptor (PUD).
> > >   */
> > > -#define PMD_SECTION_MASK     ((1UL << 40) - 1)
> > > +#define PUD_TYPE_TABLE               (_AT(pudval_t, 3) << 0)
> > > +#define PUD_TABLE_BIT                (_AT(pudval_t, 1) << 1)
> > > +#define PUD_TYPE_MASK                (_AT(pudval_t, 3) << 0)
> > > +#define PUD_TYPE_SECT                (_AT(pudval_t, 1) << 0)
> > >
> > > -#define PMD_TYPE_MASK                3
> > > -#define PMD_TYPE_SECT                1
> > > -#define PMD_TYPE_TABLE               3
> > > +/*
> > > + * Level 2 descriptor (PMD).
> > > + */
> > > +#define PMD_TYPE_MASK                (_AT(pmdval_t, 3) << 0)
> > > +#define PMD_TYPE_FAULT               (_AT(pmdval_t, 0) << 0)
> > > +#define PMD_TYPE_TABLE               (_AT(pmdval_t, 3) << 0)
> > > +#define PMD_TYPE_SECT                (_AT(pmdval_t, 1) << 0)
> > > +#define PMD_TABLE_BIT                (_AT(pmdval_t, 1) << 1)
> > > +
> > > +/*
> > > + * Level 3 descriptor (PTE).
> > > + */
> > > +#define PTE_ADDR_LOW         (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
> > > +#define PTE_ADDR_HIGH                (_AT(pteval_t, 0xf) << 12)
> > > +
> > > +static inline unsigned long
> > > +get_pte_addr_mask_arm64(void)
> > > +{
> > > +     if (lpa_52_bit_support_available)
> > > +             return (PTE_ADDR_LOW | PTE_ADDR_HIGH);
> > > +     else
> > > +             return PTE_ADDR_LOW;
> > > +}
> > > +
> > > +#define PTE_ADDR_MASK                get_pte_addr_mask_arm64()
> > >
> > > -#define PUD_TYPE_MASK                3
> > > -#define PUD_TYPE_SECT                1
> > > -#define PUD_TYPE_TABLE               3
> > > +#define PAGE_MASK            (~(PAGESIZE() - 1))
> > > +#define PAGE_PRESENT         (1 << 0)
> > >
> > > +/* Helper API to convert between a physical address and its placement
> > > + * in a page table entry, taking care of 52-bit addresses.
> > > + */
> > > +static inline unsigned long
> > > +__pte_to_phys(pte_t pte)
> > > +{
> > > +     if (lpa_52_bit_support_available)
> > > +             return ((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36));
> > > +     else
> > > +             return (pte_val(pte) & PTE_ADDR_MASK);
> > > +}
> > > +
> > > +/* Find an entry in a page-table-directory */
> > >  #define pgd_index(vaddr)             (((vaddr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> > > -#define pgd_offset(pgdir, vaddr)     ((pgd_t *)(pgdir) + pgd_index(vaddr))
> > >
> > > -#define pte_index(vaddr)             (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> > > -#define pmd_page_paddr(pmd)          (pmd_val(pmd) & PHYS_MASK & (int32_t)PAGE_MASK)
> > > -#define pte_offset(dir, vaddr)               ((pte_t*)pmd_page_paddr((*dir)) + pte_index(vaddr))
> > > +static inline pte_t
> > > +pgd_pte(pgd_t pgd)
> > > +{
> > > +     return __pte(pgd_val(pgd));
> > > +}
> > >
> > > -#define pmd_index(vaddr)             (((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
> > > -#define pud_page_paddr(pud)          (pud_val(pud) & PHYS_MASK & (int32_t)PAGE_MASK)
> > > -#define pmd_offset_pgtbl_lvl_2(pud, vaddr) ((pmd_t *)pud)
> > > -#define pmd_offset_pgtbl_lvl_3(pud, vaddr) ((pmd_t *)pud_page_paddr((*pud)) + pmd_index(vaddr))
> > > +#define __pgd_to_phys(pgd)           __pte_to_phys(pgd_pte(pgd))
> > > +#define pgd_offset(pgd, vaddr)               ((pgd_t *)(pgd) + pgd_index(vaddr))
> > > +
> > > +static inline pte_t pud_pte(pud_t pud)
> > > +{
> > > +     return __pte(pud_val(pud));
> > > +}
> > >
> > > +static inline unsigned long
> > > +pgd_page_paddr(pgd_t pgd)
> > > +{
> > > +     return __pgd_to_phys(pgd);
> > > +}
> > > +
> > > +/* Find an entry in the first-level page table. */
> > >  #define pud_index(vaddr)             (((vaddr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
> > > -#define pgd_page_paddr(pgd)          (pgd_val(pgd) & PHYS_MASK & (int32_t)PAGE_MASK)
> > > +#define __pud_to_phys(pud)           __pte_to_phys(pud_pte(pud))
> > > +
> > > +static inline unsigned long
> > > +pud_page_paddr(pud_t pud)
> > > +{
> > > +     return __pud_to_phys(pud);
> > > +}
> > > +
> > > +/* Find an entry in the second-level page table. */
> > > +#define pmd_index(vaddr)             (((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
> > > +
> > > +static inline pte_t pmd_pte(pmd_t pmd)
> > > +{
> > > +     return __pte(pmd_val(pmd));
> > > +}
> > > +
> > > +#define __pmd_to_phys(pmd)           __pte_to_phys(pmd_pte(pmd))
> > > +
> > > +static inline unsigned long
> > > +pmd_page_paddr(pmd_t pmd)
> > > +{
> > > +     return __pmd_to_phys(pmd);
> > > +}
> > > +
> > > +/* Find an entry in the third-level page table. */
> > > +#define pte_index(vaddr)             (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1))
> > > > +#define pte_offset(dir, vaddr)               (pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t))
> > >
> > >  static unsigned long long
> > >  __pa(unsigned long vaddr)
> > > @@ -116,32 +228,22 @@ __pa(unsigned long vaddr)
> > >               return (vaddr - kimage_voffset);
> > >  }
> > >
> > > -static int
> > > -get_pud_shift_arm64(void)
> > > +static pud_t *
> > > +pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
> > >  {
> > > -     if (pgtable_level == 4)
> > > -             return ((PAGESHIFT() - 3) * 3 + 3);
> > > +     if (pgtable_level > 3)
> > > +             return (pud_t *)(pgd_page_paddr(*pgdv) + pud_index(vaddr) * sizeof(pud_t));
> > >       else
> > > -             return PGDIR_SHIFT;
> > > +             return (pud_t *)(pgda);
> > >  }
> > >
> > >  static pmd_t *
> > >  pmd_offset(pud_t *puda, pud_t *pudv, unsigned long vaddr)
> > >  {
> > > -     if (pgtable_level == 2) {
> > > -             return pmd_offset_pgtbl_lvl_2(puda, vaddr);
> > > -     } else {
> > > -             return pmd_offset_pgtbl_lvl_3(pudv, vaddr);
> > > -     }
> > > -}
> > > -
> > > -static pud_t *
> > > -pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr)
> > > -{
> > > -     if (pgtable_level == 4)
> > > -             return ((pud_t *)pgd_page_paddr((*pgdv)) + pud_index(vaddr));
> > > +     if (pgtable_level > 2)
> > > +             return (pmd_t *)(pud_page_paddr(*pudv) + pmd_index(vaddr) * sizeof(pmd_t));
> > >       else
> > > -             return (pud_t *)(pgda);
> > > +             return (pmd_t*)(puda);
> > >  }
> > >
> > >  static int calculate_plat_config(void)
> > > @@ -307,6 +409,14 @@ get_stext_symbol(void)
> > >  int
> > >  get_machdep_info_arm64(void)
> > >  {
> > > +     /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
> > > +     if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > > +             info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > > +             if (info->max_physmem_bits == 52)
> > > +                     lpa_52_bit_support_available = 1;
> > > +     } else
> > > +             info->max_physmem_bits = 48;
> > > +
> > >       /* Check if va_bits is still not initialized. If still 0, call
> > >        * get_versiondep_info() to initialize the same.
> > >        */
> > > @@ -319,12 +429,11 @@ get_machdep_info_arm64(void)
> > >       }
> > >
> > >       kimage_voffset = NUMBER(kimage_voffset);
> > > -     info->max_physmem_bits = PHYS_MASK_SHIFT;
> > >       info->section_size_bits = SECTIONS_SIZE_BITS;
> > >
> > >       DEBUG_MSG("kimage_voffset   : %lx\n", kimage_voffset);
> > > -     DEBUG_MSG("max_physmem_bits : %lx\n", info->max_physmem_bits);
> > > -     DEBUG_MSG("section_size_bits: %lx\n", info->section_size_bits);
> > > +     DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits);
> > > +     DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits);
> > >
> > >       return TRUE;
> > >  }
> > > @@ -382,6 +491,19 @@ get_versiondep_info_arm64(void)
> > >       return TRUE;
> > >  }
> > >
> > > +/* 1GB section for Page Table level = 4 and Page Size = 4KB */
> > > +static int
> > > +is_pud_sect(pud_t pud)
> > > +{
> > > +     return ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT);
> > > +}
> > > +
> > > +static int
> > > +is_pmd_sect(pmd_t pmd)
> > > +{
> > > +     return ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT);
> > > +}
> > > +
> > >  /*
> > >   * vaddr_to_paddr_arm64() - translate arbitrary virtual address to physical
> > >   * @vaddr: virtual address to translate
> > > @@ -419,10 +541,9 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
> > >               return NOT_PADDR;
> > >       }
> > >
> > > -     if ((pud_val(pudv) & PUD_TYPE_MASK) == PUD_TYPE_SECT) {
> > > -             /* 1GB section for Page Table level = 4 and Page Size = 4KB */
> > > -             paddr = (pud_val(pudv) & (PUD_MASK & PMD_SECTION_MASK))
> > > -                                     + (vaddr & (PUD_SIZE - 1));
> > > +     if (is_pud_sect(pudv)) {
> > > +             paddr = (pud_page_paddr(pudv) & PUD_MASK) +
> > > +                             (vaddr & (PUD_SIZE - 1));
> > >               return paddr;
> > >       }
> > >
> > > @@ -432,29 +553,24 @@ vaddr_to_paddr_arm64(unsigned long vaddr)
> > >               return NOT_PADDR;
> > >       }
> > >
> > > -     switch (pmd_val(pmdv) & PMD_TYPE_MASK) {
> > > -     case PMD_TYPE_TABLE:
> > > -             ptea = pte_offset(&pmdv, vaddr);
> > > -             /* 64k page */
> > > -             if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
> > > -                     ERRMSG("Can't read pte\n");
> > > -                     return NOT_PADDR;
> > > -             }
> > > +     if (is_pmd_sect(pmdv)) {
> > > +             paddr = (pmd_page_paddr(pmdv) & PMD_MASK) +
> > > +                             (vaddr & (PMD_SIZE - 1));
> > > +             return paddr;
> > > +     }
> > >
> > > -             if (!(pte_val(ptev) & PAGE_PRESENT)) {
> > > -                     ERRMSG("Can't get a valid pte.\n");
> > > -                     return NOT_PADDR;
> > > -             } else {
> > > +     ptea = (pte_t *)pte_offset(&pmdv, vaddr);
> > > +     if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) {
> > > +             ERRMSG("Can't read pte\n");
> > > +             return NOT_PADDR;
> > > +     }
> > >
> > > -                     paddr = (PAGEBASE(pte_val(ptev)) & PHYS_MASK)
> > > -                                     + (vaddr & (PAGESIZE() - 1));
> > > -             }
> > > -             break;
> > > -     case PMD_TYPE_SECT:
> > > -             /* 512MB section for Page Table level = 3 and Page Size = 64KB*/
> > > -             paddr = (pmd_val(pmdv) & (PMD_MASK & PMD_SECTION_MASK))
> > > -                                     + (vaddr & (PMD_SIZE - 1));
> > > -             break;
> > > +     if (!(pte_val(ptev) & PAGE_PRESENT)) {
> > > +             ERRMSG("Can't get a valid pte.\n");
> > > +             return NOT_PADDR;
> > > +     } else {
> > > +             paddr = __pte_to_phys(ptev) +
> > > +                             (vaddr & (PAGESIZE() - 1));
> > >       }
> > >
> > >       return paddr;
> > > --
> > > 2.7.4
> > >
> >
> >
> >
> > _______________________________________________
> > kexec mailing list
> > kexec@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/kexec
> >
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support)
  2019-12-05 18:05       ` Bhupesh Sharma
@ 2019-12-05 20:49         ` Kazuhito Hagio
  0 siblings, 0 replies; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-05 20:49 UTC (permalink / raw)
  To: Bhupesh Sharma; +Cc: John Donnelly, bhupesh.linux, kexec

> -----Original Message-----
> > > > +/*
> > > > + * The linear kernel range starts at the bottom of the virtual address
> > > > + * space. Testing the top bit for the start of the region is a
> > > > + * sufficient check and avoids having to worry about the tag.
> > > > + */
> > > > +#define is_linear_addr(addr)       (!(((unsigned long)addr) & (1UL << (vabits_actual - 1))))
> > >
> > > Does this check cover 5.3 or earlier kernels?
> > > There is no case that vabits_actual is zero?
> 
> We can set vabits_actual to va_bits for older kernels. That shouldn't
> be a big change.
> Will add it in v5. See more below ...
> 
> > As you know, 14c127c957c1 ("arm64: mm: Flip kernel VA space") changed
> > the check for linear address:
> >
> > -#define __is_lm_address(addr)  (!!((addr) & BIT(VA_BITS - 1)))
> > +#define __is_lm_address(addr)  (!((addr) & BIT(VA_BITS - 1)))
> >
> > so if we use the same check as kernel has, I think we will need the
> > former one to support earlier kernels.
> 
> See above, we can use va_bits where vabits_actual is not present.

Yes, but that is not the point I wanted to make here.

The problem is that, even if we set vabits_actual to va_bits, we cannot
determine whether an address is in the linear map range with a single
macro for both the 5.3 and 5.4 kernels.

Because the bit value to be checked by the macro changed:

5.3 VA_BITS=48
  linear map : 0xffff800000000000 to 0xffffffffffffffff
5.4 VA_BITS=48
  linear map : 0xffff000000000000 to 0xffff7fffffffffff

or I missed something?

Thanks,
Kazu

> 
> > > > +
> > > >  static unsigned long long
> > > >  __pa(unsigned long vaddr)
> > > >  {
> > > >     if (kimage_voffset == NOT_FOUND_NUMBER ||
> > > > -                   (vaddr >= PAGE_OFFSET))
> > > > -           return (vaddr - PAGE_OFFSET + info->phys_base);
> > > > +                   is_linear_addr(vaddr))
> > > > +           return (vaddr + info->phys_base - PAGE_OFFSET);
> > > >     else
> > > >             return (vaddr - kimage_voffset);
> > > >  }
> > > > @@ -253,6 +261,7 @@ static int calculate_plat_config(void)
> > > >                     (PAGESIZE() == SZ_64K && va_bits == 42)) {
> > > >             pgtable_level = 2;
> > > >     } else if ((PAGESIZE() == SZ_64K && va_bits == 48) ||
> > > > +                   (PAGESIZE() == SZ_64K && va_bits == 52) ||
> > > >                     (PAGESIZE() == SZ_4K && va_bits == 39) ||
> > > >                     (PAGESIZE() == SZ_16K && va_bits == 47)) {
> > > >             pgtable_level = 3;
> > > > @@ -287,6 +296,16 @@ get_phys_base_arm64(void)
> > > >             return TRUE;
> > > >     }
> > > >
> > > > +   /* If both vabits_actual and va_bits are now initialized, always
> > > > +    * prefer vabits_actual over va_bits to calculate PAGE_OFFSET
> > > > +    * value.
> > > > +    */
> > > > +   if (vabits_actual && va_bits && vabits_actual != va_bits) {
> > > > +           info->page_offset = (-(1UL << vabits_actual));
> > > > +           DEBUG_MSG("page_offset    : %lx (via vabits_actual)\n",
> > > > +                           info->page_offset);
> > > > +   }
> > > > +
> > >
> > > Is this for --mem-usage?
> > > If so, let's drop from this patch and think about it later because
> > > some additional base functions will be needed for the option, I think.
> 
> Ok.
> 
> > > >     if (get_num_pt_loads() && PAGE_OFFSET) {
> > > >             for (i = 0;
> > > >                 get_pt_load(i, &phys_start, NULL, &virt_start, NULL);
> > > > @@ -406,6 +425,73 @@ get_stext_symbol(void)
> > > >     return(found ? kallsym : FALSE);
> > > >  }
> > > >
> > > > +static int
> > > > +get_va_bits_from_stext_arm64(void)
> > > > +{
> > > > +   ulong _stext;
> > > > +
> > > > +   _stext = get_stext_symbol();
> > > > +   if (!_stext) {
> > > > +           ERRMSG("Can't get the symbol of _stext.\n");
> > > > +           return FALSE;
> > > > +   }
> > > > +
> > > > +   /* Derive va_bits as per arch/arm64/Kconfig. Note that this is a
> > > > +    * best case approximation at the moment, as there can be
> > > > +    * inconsistencies in this calculation (e.g., in the
> > > > +    * 52-bit kernel VA case, even the 48th bit might be set in
> > > > +    * the _stext symbol).
> > > > +    *
> > > > +    * So, we need to rely on the actual VA_BITS symbol in the
> > > > +    * vmcoreinfo for an accurate value.
> > > > +    *
> > > > +    * TODO: Improve this further once there is a closure with arm64
> > > > +    * kernel maintainers on the same.
> > > > +    */
> > > > +   if ((_stext & PAGE_OFFSET_52) == PAGE_OFFSET_52) {
> > > > +           va_bits = 52;
> > > > +   } else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> > > > +           va_bits = 48;
> > > > +   } else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> > > > +           va_bits = 47;
> > > > +   } else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> > > > +           va_bits = 42;
> > > > +   } else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> > > > +           va_bits = 39;
> > > > +   } else if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> > > > +           va_bits = 36;
> > > > +   } else {
> > > > +           ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> > > > +           return FALSE;
> > > > +   }
> > > > +
> > > > +   DEBUG_MSG("va_bits    : %d (_stext) (approximation)\n", va_bits);
> > > > +
> > > > +   return TRUE;
> > > > +}
> > > > +
> > > > +static void
> > > > +get_page_offset_arm64(void)
> > > > +{
> > > > +   /* Check if 'vabits_actual' is initialized yet.
> > > > +    * If not, our best bet is to use 'va_bits' to calculate
> > > > +    * the PAGE_OFFSET value, otherwise use 'vabits_actual'
> > > > +    * for the same.
> > > > +    *
> > > > +    * See arch/arm64/include/asm/memory.h for more details.
> > > > +    */
> > > > +   if (!vabits_actual) {
> > > > +           info->page_offset = (-(1UL << va_bits));
> > > > +           DEBUG_MSG("page_offset    : %lx (approximation)\n",
> > > > +                                   info->page_offset);
> > > > +   } else {
> > > > +           info->page_offset = (-(1UL << vabits_actual));
> > > > +           DEBUG_MSG("page_offset    : %lx (accurate)\n",
> > > > +                                   info->page_offset);
> > > > +   }
> > >
> > > Does this support 5.3 or earlier kernels?
> >
> > Because I didn't see the old page_offset calculation below in this patch:
> >
> > > > -   info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> >
> > I was thinking that if there is a NUMBER(tcr_el1_t1sz) in vmcoreinfo,
> > we assume the kernel has the 'flipped' VA space.  And if there is no
> > NUMBER(tcr_el1_t1sz), then older 'non-flipped' VA [1].
> 
> Yes, vabits_actual will not be found in such a case, so we can fall
> back to va_bits there; similarly, the helper routines for the page
> table walk and the page_offset calculation may need modification.
> Will fix in v5.
> 
> > This might be a bit fragile against backport, but it requires less
> > vmcoreinfo, and older kernels don't need NUMBER(tcr_el1_t1sz).
> > (they might need NUMBER(MAX_USER_VA_BITS) like RHEL8 though.)
> 
> I think since this is an upstream fix, we should look at a generic fix
> (not restricted to RHEL, which anyway can contain out of tree fixes).
> I will send a v5 shortly with the suggested fixes.
> 
> Regards,
> Bhupesh
> 
> > What do you think?
> >
> > [1] https://github.com/k-hagio/makedumpfile/commit/fd9d86ea05b38e9edbb8c0ac3ebd612d5d485df3#diff-73f1cf659e8099a2f3a94f38063f97ecR400
> >
> > Thanks,
> > Kazu
> >
> >
> > >
> > > Thanks,
> > > Kazu
> > >
> > > > +
> > > > +}
> > > > +
> > > >  int
> > > >  get_machdep_info_arm64(void)
> > > >  {
> > > > @@ -420,8 +506,33 @@ get_machdep_info_arm64(void)
> > > >     /* Check if va_bits is still not initialized. If still 0, call
> > > >      * get_versiondep_info() to initialize the same.
> > > >      */
> > > > +   if (NUMBER(VA_BITS) != NOT_FOUND_NUMBER) {
> > > > +           va_bits = NUMBER(VA_BITS);
> > > > +           DEBUG_MSG("va_bits        : %d (vmcoreinfo)\n",
> > > > +                           va_bits);
> > > > +   }
> > > > +
> > > > +   /* Check if va_bits is still not initialized. If still 0, call
> > > > +    * get_versiondep_info() to initialize the same from _stext
> > > > +    * symbol.
> > > > +    */
> > > >     if (!va_bits)
> > > > -           get_versiondep_info_arm64();
> > > > +           if (get_va_bits_from_stext_arm64() == FALSE)
> > > > +                   return FALSE;
> > > > +
> > > > +   get_page_offset_arm64();
> > > > +
> > > > +   /* See TCR_EL1, Translation Control Register (EL1) register
> > > > +    * description in the ARMv8 Architecture Reference Manual.
> > > > +    * Basically, we can use the TCR_EL1.T1SZ
> > > > +    * value to determine the virtual addressing range supported
> > > > +    * in the kernel-space (i.e. vabits_actual).
> > > > +    */
> > > > +   if (NUMBER(tcr_el1_t1sz) != NOT_FOUND_NUMBER) {
> > > > +           vabits_actual = 64 - NUMBER(tcr_el1_t1sz);
> > > > +           DEBUG_MSG("vabits_actual  : %d (vmcoreinfo)\n",
> > > > +                           vabits_actual);
> > > > +   }
> > > >
> > > >     if (!calculate_plat_config()) {
> > > >             ERRMSG("Can't determine platform config values\n");
> > > > @@ -459,34 +570,11 @@ get_xen_info_arm64(void)
> > > >  int
> > > >  get_versiondep_info_arm64(void)
> > > >  {
> > > > -   ulong _stext;
> > > > -
> > > > -   _stext = get_stext_symbol();
> > > > -   if (!_stext) {
> > > > -           ERRMSG("Can't get the symbol of _stext.\n");
> > > > -           return FALSE;
> > > > -   }
> > > > -
> > > > -   /* Derive va_bits as per arch/arm64/Kconfig */
> > > > -   if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
> > > > -           va_bits = 36;
> > > > -   } else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
> > > > -           va_bits = 39;
> > > > -   } else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
> > > > -           va_bits = 42;
> > > > -   } else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
> > > > -           va_bits = 47;
> > > > -   } else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
> > > > -           va_bits = 48;
> > > > -   } else {
> > > > -           ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
> > > > -           return FALSE;
> > > > -   }
> > > > -
> > > > -   info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> > > > +   if (!va_bits)
> > > > +           if (get_va_bits_from_stext_arm64() == FALSE)
> > > > +                   return FALSE;
> > > >
> > > > -   DEBUG_MSG("va_bits      : %d\n", va_bits);
> > > > -   DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
> > > > +   get_page_offset_arm64();
> > > >
> > > >     return TRUE;
> > > >  }
> > > > diff --git a/makedumpfile.c b/makedumpfile.c
> > > > index 4a000112ba59..baf559e4d74e 100644
> > > > --- a/makedumpfile.c
> > > > +++ b/makedumpfile.c
> > > > @@ -2314,6 +2314,7 @@ write_vmcoreinfo_data(void)
> > > >     WRITE_NUMBER("HUGETLB_PAGE_DTOR", HUGETLB_PAGE_DTOR);
> > > >  #ifdef __aarch64__
> > > >     WRITE_NUMBER("VA_BITS", VA_BITS);
> > > > +   WRITE_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
> > > >     WRITE_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
> > > >     WRITE_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
> > > >  #endif
> > > > @@ -2720,6 +2721,7 @@ read_vmcoreinfo(void)
> > > >     READ_NUMBER("KERNEL_IMAGE_SIZE", KERNEL_IMAGE_SIZE);
> > > >  #ifdef __aarch64__
> > > >     READ_NUMBER("VA_BITS", VA_BITS);
> > > > +   READ_NUMBER_UNSIGNED("tcr_el1_t1sz", tcr_el1_t1sz);
> > > >     READ_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
> > > >     READ_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
> > > >  #endif
> > > > diff --git a/makedumpfile.h b/makedumpfile.h
> > > > index ac11e906b5b7..7eab6507c8df 100644
> > > > --- a/makedumpfile.h
> > > > +++ b/makedumpfile.h
> > > > @@ -974,7 +974,7 @@ int get_versiondep_info_arm64(void);
> > > >  int get_xen_basic_info_arm64(void);
> > > >  int get_xen_info_arm64(void);
> > > >  unsigned long get_kaslr_offset_arm64(unsigned long vaddr);
> > > > -#define paddr_to_vaddr_arm64(X) (((X) - info->phys_base) | PAGE_OFFSET)
> > > > +#define paddr_to_vaddr_arm64(X) (((X) - (info->phys_base - PAGE_OFFSET)))
> > > >
> > > >  #define find_vmemmap()             stub_false()
> > > >  #define vaddr_to_paddr(X)  vaddr_to_paddr_arm64(X)
> > > > @@ -1937,6 +1937,7 @@ struct number_table {
> > > >     long    KERNEL_IMAGE_SIZE;
> > > >  #ifdef __aarch64__
> > > >     long    VA_BITS;
> > > > +   unsigned long   tcr_el1_t1sz;
> > > >     unsigned long   PHYS_OFFSET;
> > > >     unsigned long   kimage_voffset;
> > > >  #endif
> > > > --
> > > > 2.7.4
> > > >
> >
> >
> >
> 



* RE: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-11-20 16:33       ` John Donnelly
  2019-11-21 16:32         ` Bhupesh Sharma
@ 2019-12-05 20:59         ` Kazuhito Hagio
  2019-12-10 14:50           ` Kazuhito Hagio
  1 sibling, 1 reply; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-05 20:59 UTC (permalink / raw)
  To: John Donnelly; +Cc: Bhupesh Sharma, Bhupesh SHARMA, kexec mailing list

> -----Original Message-----
> This is your makedumpfile pulled from SourceForge.
> 
> It would be helpful if you bumped the VERSION and DATE to be certain we are using the correct pieces.

Good suggestion.

I wanted the command line that executed makedumpfile in debug message
as well, so I'll think about adding them together.

Thanks,
Kazu


* RE: [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions
  2019-12-05 20:59         ` Kazuhito Hagio
@ 2019-12-10 14:50           ` Kazuhito Hagio
  0 siblings, 0 replies; 34+ messages in thread
From: Kazuhito Hagio @ 2019-12-10 14:50 UTC (permalink / raw)
  To: kexec mailing list; +Cc: John Donnelly, Bhupesh Sharma, Bhupesh SHARMA

> -----Original Message-----
> > -----Original Message-----
> > This is your makedumpfile pulled from SourceForge.
> >
> > It would be helpful if you bumped the VERSION and DATE to be certain we are using the correct pieces.
> 
> Good suggestion.
> 
> I wanted the command line that executed makedumpfile in debug message
> as well, so I'll think about adding them together.

Done.
https://sourceforge.net/p/makedumpfile/code/ci/180a3958c30d95cb1d8e8c341baaf267f7eaef89/

Thanks,
Kazu





end of thread, other threads:[~2019-12-10 14:51 UTC | newest]

Thread overview: 34+ messages
2019-11-12 11:08 [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
2019-11-12 11:08 ` [PATCH v4 1/4] tree-wide: Retrieve 'MAX_PHYSMEM_BITS' from vmcoreinfo (if available) Bhupesh Sharma
2019-12-04 17:34   ` Kazuhito Hagio
2019-12-05 18:17     ` Bhupesh Sharma
2019-12-05 20:41       ` Kazuhito Hagio
2019-11-12 11:08 ` [PATCH v4 2/4] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
2019-12-04 17:36   ` Kazuhito Hagio
2019-12-05 18:21     ` Bhupesh Sharma
2019-12-05 20:45       ` Kazuhito Hagio
2019-11-12 11:08 ` [PATCH v4 3/4] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit kernel VA support) Bhupesh Sharma
2019-12-04 17:45   ` Kazuhito Hagio
2019-12-05 15:29     ` Kazuhito Hagio
2019-12-05 18:05       ` Bhupesh Sharma
2019-12-05 20:49         ` Kazuhito Hagio
2019-11-12 11:08 ` [PATCH v4 4/4] makedumpfile: Mark --mem-usage option unsupported for arm64 Bhupesh Sharma
2019-12-04 17:49   ` Kazuhito Hagio
2019-12-05 18:24     ` Bhupesh Sharma
2019-11-13 21:59 ` [PATCH v4 0/4] makedumpfile/arm64: Add support for ARMv8.2 extensions Kazuhito Hagio
2019-11-14 19:10   ` Bhupesh Sharma
2019-11-18  5:12 ` Prabhakar Kushwaha
2019-11-18 17:11   ` John Donnelly
2019-11-18 19:01     ` Bhupesh Sharma
2019-11-18 19:12       ` John Donnelly
2019-11-18 20:00         ` John Donnelly
2019-11-20 16:33       ` John Donnelly
2019-11-21 16:32         ` Bhupesh Sharma
2019-11-21 16:59           ` John Donnelly
2019-11-21 19:20             ` John Donnelly
2019-11-21 21:52               ` John Donnelly
2019-11-22 12:30                 ` John Donnelly
2019-11-22 14:22                   ` John Donnelly
2019-12-05 20:59         ` Kazuhito Hagio
2019-12-10 14:50           ` Kazuhito Hagio
2019-11-18 18:56   ` Bhupesh Sharma
