* [PATCHv4 1/4] arm64: make phys_offset signed
2022-01-18 7:48 [PATCHv4 0/4] arm64: make phys_to_virt() correct Pingfan Liu
@ 2022-01-18 7:48 ` Pingfan Liu
2022-01-20 18:09 ` Philipp Rudo
2022-01-18 7:48 ` [PATCHv4 2/4] arm64/crashdump: unify routine to get page_offset Pingfan Liu
` (3 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Pingfan Liu @ 2022-01-18 7:48 UTC (permalink / raw)
To: kexec
After kernel commit 7bc1a0f9e176 ("arm64: mm: use single quantity to
represent the PA to VA translation"), phys_offset can be negative when
running a 52-bit kernel on 48-bit hardware.
So change phys_offset from unsigned to signed.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Kairui Song <kasong@tencent.com>
Cc: Simon Horman <horms@verge.net.au>
Cc: Philipp Rudo <prudo@redhat.com>
To: kexec@lists.infradead.org
---
kexec/arch/arm64/kexec-arm64.c | 12 ++++++------
kexec/arch/arm64/kexec-arm64.h | 2 +-
util_lib/elf_info.c | 2 +-
util_lib/include/elf_info.h | 2 +-
4 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
index 6f572ed..c6c67e8 100644
--- a/kexec/arch/arm64/kexec-arm64.c
+++ b/kexec/arch/arm64/kexec-arm64.c
@@ -859,7 +859,7 @@ void add_segment(struct kexec_info *info, const void *buf, size_t bufsz,
add_segment_phys_virt(info, buf, bufsz, base, memsz, 1);
}
-static inline void set_phys_offset(uint64_t v, char *set_method)
+static inline void set_phys_offset(int64_t v, char *set_method)
{
if (arm64_mem.phys_offset == arm64_mem_ngv
|| v < arm64_mem.phys_offset) {
@@ -928,7 +928,7 @@ static int get_page_offset(void)
* from VMCOREINFO note inside 'kcore'.
*/
-static int get_phys_offset_from_vmcoreinfo_pt_note(unsigned long *phys_offset)
+static int get_phys_offset_from_vmcoreinfo_pt_note(long *phys_offset)
{
int fd, ret = 0;
@@ -948,7 +948,7 @@ static int get_phys_offset_from_vmcoreinfo_pt_note(unsigned long *phys_offset)
* from PT_LOADs inside 'kcore'.
*/
-int get_phys_base_from_pt_load(unsigned long *phys_offset)
+int get_phys_base_from_pt_load(long *phys_offset)
{
int i, fd, ret;
unsigned long long phys_start;
@@ -997,7 +997,7 @@ static bool to_be_excluded(char *str)
int get_memory_ranges(struct memory_range **range, int *ranges,
unsigned long kexec_flags)
{
- unsigned long phys_offset = UINT64_MAX;
+ long phys_offset = -1;
FILE *fp;
const char *iomem = proc_iomem();
char line[MAX_LINE], *str;
@@ -1019,7 +1019,7 @@ int get_memory_ranges(struct memory_range **range, int *ranges,
*/
ret = get_phys_offset_from_vmcoreinfo_pt_note(&phys_offset);
if (!ret) {
- if (phys_offset != UINT64_MAX)
+ if (phys_offset != -1)
set_phys_offset(phys_offset,
"vmcoreinfo pt_note");
} else {
@@ -1031,7 +1031,7 @@ int get_memory_ranges(struct memory_range **range, int *ranges,
*/
ret = get_phys_base_from_pt_load(&phys_offset);
if (!ret)
- if (phys_offset != UINT64_MAX)
+ if (phys_offset != -1)
set_phys_offset(phys_offset,
"pt_load");
}
diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
index ed447ac..1844b67 100644
--- a/kexec/arch/arm64/kexec-arm64.h
+++ b/kexec/arch/arm64/kexec-arm64.h
@@ -58,7 +58,7 @@ extern off_t initrd_size;
*/
struct arm64_mem {
- uint64_t phys_offset;
+ long phys_offset;
uint64_t text_offset;
uint64_t image_size;
uint64_t vp_offset;
diff --git a/util_lib/elf_info.c b/util_lib/elf_info.c
index 51d8b92..5574c7f 100644
--- a/util_lib/elf_info.c
+++ b/util_lib/elf_info.c
@@ -1236,7 +1236,7 @@ int read_elf(int fd)
return 0;
}
-int read_phys_offset_elf_kcore(int fd, unsigned long *phys_off)
+int read_phys_offset_elf_kcore(int fd, long *phys_off)
{
int ret;
diff --git a/util_lib/include/elf_info.h b/util_lib/include/elf_info.h
index 4bc9279..f550d86 100644
--- a/util_lib/include/elf_info.h
+++ b/util_lib/include/elf_info.h
@@ -28,7 +28,7 @@ int get_pt_load(int idx,
unsigned long long *phys_end,
unsigned long long *virt_start,
unsigned long long *virt_end);
-int read_phys_offset_elf_kcore(int fd, unsigned long *phys_off);
+int read_phys_offset_elf_kcore(int fd, long *phys_off);
int read_elf(int fd);
void dump_dmesg(int fd, void (*handler)(char*, unsigned int));
--
2.31.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCHv4 1/4] arm64: make phys_offset signed
2022-01-18 7:48 ` [PATCHv4 1/4] arm64: make phys_offset signed Pingfan Liu
@ 2022-01-20 18:09 ` Philipp Rudo
2022-01-21 1:38 ` Pingfan Liu
0 siblings, 1 reply; 14+ messages in thread
From: Philipp Rudo @ 2022-01-20 18:09 UTC (permalink / raw)
To: kexec
Hi Pingfan,
On Tue, 18 Jan 2022 15:48:09 +0800
Pingfan Liu <piliu@redhat.com> wrote:
> After kernel commit 7bc1a0f9e176 ("arm64: mm: use single quantity to
> represent the PA to VA translation"), phys_offset can be negative if
> running 52-bits kernel on 48-bits hardware.
>
> So changing phys_offset from unsigned to signed.
>
> Signed-off-by: Pingfan Liu <piliu@redhat.com>
> Cc: Kairui Song <kasong@tencent.com>
> Cc: Simon Horman <horms@verge.net.au>
> Cc: Philipp Rudo <prudo@redhat.com>
> To: kexec@lists.infradead.org
> ---
> kexec/arch/arm64/kexec-arm64.c | 12 ++++++------
> kexec/arch/arm64/kexec-arm64.h | 2 +-
> util_lib/elf_info.c | 2 +-
> util_lib/include/elf_info.h | 2 +-
> 4 files changed, 9 insertions(+), 9 deletions(-)
>
[...]
> diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
> index ed447ac..1844b67 100644
> --- a/kexec/arch/arm64/kexec-arm64.h
> +++ b/kexec/arch/arm64/kexec-arm64.h
> @@ -58,7 +58,7 @@ extern off_t initrd_size;
> */
>
> struct arm64_mem {
> - uint64_t phys_offset;
> + long phys_offset;
I think this one should be int64_t as well.
Other than that
Reviewed-by: Philipp Rudo <prudo@redhat.com>
> uint64_t text_offset;
> uint64_t image_size;
> uint64_t vp_offset;
* [PATCHv4 1/4] arm64: make phys_offset signed
2022-01-20 18:09 ` Philipp Rudo
@ 2022-01-21 1:38 ` Pingfan Liu
2022-01-24 8:58 ` Simon Horman
0 siblings, 1 reply; 14+ messages in thread
From: Pingfan Liu @ 2022-01-21 1:38 UTC (permalink / raw)
To: kexec
On Fri, Jan 21, 2022 at 2:09 AM Philipp Rudo <prudo@redhat.com> wrote:
>
> Hi Pingfan,
>
> On Tue, 18 Jan 2022 15:48:09 +0800
> Pingfan Liu <piliu@redhat.com> wrote:
>
> > After kernel commit 7bc1a0f9e176 ("arm64: mm: use single quantity to
> > represent the PA to VA translation"), phys_offset can be negative if
> > running 52-bits kernel on 48-bits hardware.
> >
> > So changing phys_offset from unsigned to signed.
> >
> > Signed-off-by: Pingfan Liu <piliu@redhat.com>
> > Cc: Kairui Song <kasong@tencent.com>
> > Cc: Simon Horman <horms@verge.net.au>
> > Cc: Philipp Rudo <prudo@redhat.com>
> > To: kexec@lists.infradead.org
> > ---
> > kexec/arch/arm64/kexec-arm64.c | 12 ++++++------
> > kexec/arch/arm64/kexec-arm64.h | 2 +-
> > util_lib/elf_info.c | 2 +-
> > util_lib/include/elf_info.h | 2 +-
> > 4 files changed, 9 insertions(+), 9 deletions(-)
> >
>
> [...]
>
> > diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
> > index ed447ac..1844b67 100644
> > --- a/kexec/arch/arm64/kexec-arm64.h
> > +++ b/kexec/arch/arm64/kexec-arm64.h
> > @@ -58,7 +58,7 @@ extern off_t initrd_size;
> > */
> >
> > struct arm64_mem {
> > - uint64_t phys_offset;
> > + long phys_offset;
>
> I think this one should be int64_t as well.
>
Yes, you are right. Thanks for your careful review.
@Simon, could you help correct it, or would you prefer a v5 from me to fix it?
Thanks,
Pingfan
* [PATCHv4 1/4] arm64: make phys_offset signed
2022-01-21 1:38 ` Pingfan Liu
@ 2022-01-24 8:58 ` Simon Horman
0 siblings, 0 replies; 14+ messages in thread
From: Simon Horman @ 2022-01-24 8:58 UTC (permalink / raw)
To: kexec
On Fri, Jan 21, 2022 at 09:38:49AM +0800, Pingfan Liu wrote:
> On Fri, Jan 21, 2022 at 2:09 AM Philipp Rudo <prudo@redhat.com> wrote:
> >
> > Hi Pingfan,
> >
> > On Tue, 18 Jan 2022 15:48:09 +0800
> > Pingfan Liu <piliu@redhat.com> wrote:
> >
> > > After kernel commit 7bc1a0f9e176 ("arm64: mm: use single quantity to
> > > represent the PA to VA translation"), phys_offset can be negative if
> > > running 52-bits kernel on 48-bits hardware.
> > >
> > > So changing phys_offset from unsigned to signed.
> > >
> > > Signed-off-by: Pingfan Liu <piliu@redhat.com>
> > > Cc: Kairui Song <kasong@tencent.com>
> > > Cc: Simon Horman <horms@verge.net.au>
> > > Cc: Philipp Rudo <prudo@redhat.com>
> > > To: kexec@lists.infradead.org
> > > ---
> > > kexec/arch/arm64/kexec-arm64.c | 12 ++++++------
> > > kexec/arch/arm64/kexec-arm64.h | 2 +-
> > > util_lib/elf_info.c | 2 +-
> > > util_lib/include/elf_info.h | 2 +-
> > > 4 files changed, 9 insertions(+), 9 deletions(-)
> > >
> >
> > [...]
> >
> > > diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
> > > index ed447ac..1844b67 100644
> > > --- a/kexec/arch/arm64/kexec-arm64.h
> > > +++ b/kexec/arch/arm64/kexec-arm64.h
> > > @@ -58,7 +58,7 @@ extern off_t initrd_size;
> > > */
> > >
> > > struct arm64_mem {
> > > - uint64_t phys_offset;
> > > + long phys_offset;
> >
> > I think this one should be int64_t as well.
> >
> Yes, you are right. Thanks for your careful review.
>
> @Simon, could you help to correct it or prefer my V5 to fix it.
I think I can fix this while applying this patchset :)
* [PATCHv4 2/4] arm64/crashdump: unify routine to get page_offset
2022-01-18 7:48 [PATCHv4 0/4] arm64: make phys_to_virt() correct Pingfan Liu
2022-01-18 7:48 ` [PATCHv4 1/4] arm64: make phys_offset signed Pingfan Liu
@ 2022-01-18 7:48 ` Pingfan Liu
2022-01-20 18:09 ` Philipp Rudo
2022-01-18 7:48 ` [PATCHv4 3/4] arm64: read VA_BITS from kcore for 52-bits VA kernel Pingfan Liu
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Pingfan Liu @ 2022-01-18 7:48 UTC (permalink / raw)
To: kexec
There are two funcs to get page_offset:
get_kernel_page_offset()
get_page_offset()
Since get_kernel_page_offset() does not follow the kernel's formula,
remove it. Unify the two in order to introduce 52-bit VA kernel support
more easily in the coming patch.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Kairui Song <kasong@tencent.com>
Cc: Simon Horman <horms@verge.net.au>
Cc: Philipp Rudo <prudo@redhat.com>
To: kexec@lists.infradead.org
---
kexec/arch/arm64/crashdump-arm64.c | 23 +----------------------
kexec/arch/arm64/kexec-arm64.c | 8 ++++----
kexec/arch/arm64/kexec-arm64.h | 1 +
3 files changed, 6 insertions(+), 26 deletions(-)
diff --git a/kexec/arch/arm64/crashdump-arm64.c b/kexec/arch/arm64/crashdump-arm64.c
index a02019a..0a8d44c 100644
--- a/kexec/arch/arm64/crashdump-arm64.c
+++ b/kexec/arch/arm64/crashdump-arm64.c
@@ -46,27 +46,6 @@ static struct crash_elf_info elf_info = {
.machine = EM_AARCH64,
};
-/*
- * Note: The returned value is correct only if !CONFIG_RANDOMIZE_BASE.
- */
-static uint64_t get_kernel_page_offset(void)
-{
- int i;
-
- if (elf_info.kern_vaddr_start == UINT64_MAX)
- return UINT64_MAX;
-
- /* Current max virtual memory range is 48-bits. */
- for (i = 48; i > 0; i--)
- if (!(elf_info.kern_vaddr_start & (1UL << i)))
- break;
-
- if (i <= 0)
- return UINT64_MAX;
- else
- return UINT64_MAX << i;
-}
-
/*
* iomem_range_callback() - callback called for each iomem region
* @data: not used
@@ -198,7 +177,7 @@ int load_crashdump_segments(struct kexec_info *info)
if (err)
return EFAILED;
- elf_info.page_offset = get_kernel_page_offset();
+ get_page_offset(&elf_info.page_offset);
dbgprintf("%s: page_offset: %016llx\n", __func__,
elf_info.page_offset);
diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
index c6c67e8..33cc258 100644
--- a/kexec/arch/arm64/kexec-arm64.c
+++ b/kexec/arch/arm64/kexec-arm64.c
@@ -909,7 +909,7 @@ static int get_va_bits(void)
* get_page_offset - Helper for getting PAGE_OFFSET
*/
-static int get_page_offset(void)
+int get_page_offset(unsigned long *page_offset)
{
int ret;
@@ -917,8 +917,8 @@ static int get_page_offset(void)
if (ret < 0)
return ret;
- page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
- dbgprintf("page_offset : %lx\n", page_offset);
+ *page_offset = UINT64_MAX << (va_bits - 1);
+ dbgprintf("page_offset : %lx\n", *page_offset);
return 0;
}
@@ -954,7 +954,7 @@ int get_phys_base_from_pt_load(long *phys_offset)
unsigned long long phys_start;
unsigned long long virt_start;
- ret = get_page_offset();
+ ret = get_page_offset(&page_offset);
if (ret < 0)
return ret;
diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
index 1844b67..ed99d9d 100644
--- a/kexec/arch/arm64/kexec-arm64.h
+++ b/kexec/arch/arm64/kexec-arm64.h
@@ -69,6 +69,7 @@ extern struct arm64_mem arm64_mem;
uint64_t get_phys_offset(void);
uint64_t get_vp_offset(void);
+int get_page_offset(unsigned long *offset);
static inline void reset_vp_offset(void)
{
--
2.31.1
* [PATCHv4 2/4] arm64/crashdump: unify routine to get page_offset
2022-01-18 7:48 ` [PATCHv4 2/4] arm64/crashdump: unify routine to get page_offset Pingfan Liu
@ 2022-01-20 18:09 ` Philipp Rudo
0 siblings, 0 replies; 14+ messages in thread
From: Philipp Rudo @ 2022-01-20 18:09 UTC (permalink / raw)
To: kexec
Hi Pingfan,
On Tue, 18 Jan 2022 15:48:10 +0800
Pingfan Liu <piliu@redhat.com> wrote:
> There are two funcs to get page_offset:
> get_kernel_page_offset()
> get_page_offset()
>
> Since get_kernel_page_offset() does not observe the kernel formula, and
> remove it. Unify them in order to introduce 52-bits VA kernel more
> easily in the coming patch.
>
> Signed-off-by: Pingfan Liu <piliu@redhat.com>
> Cc: Kairui Song <kasong@tencent.com>
> Cc: Simon Horman <horms@verge.net.au>
> Cc: Philipp Rudo <prudo@redhat.com>
> To: kexec@lists.infradead.org
looks good
Reviewed-by: Philipp Rudo <prudo@redhat.com>
> ---
> kexec/arch/arm64/crashdump-arm64.c | 23 +----------------------
> kexec/arch/arm64/kexec-arm64.c | 8 ++++----
> kexec/arch/arm64/kexec-arm64.h | 1 +
> 3 files changed, 6 insertions(+), 26 deletions(-)
>
> diff --git a/kexec/arch/arm64/crashdump-arm64.c b/kexec/arch/arm64/crashdump-arm64.c
> index a02019a..0a8d44c 100644
> --- a/kexec/arch/arm64/crashdump-arm64.c
> +++ b/kexec/arch/arm64/crashdump-arm64.c
> @@ -46,27 +46,6 @@ static struct crash_elf_info elf_info = {
> .machine = EM_AARCH64,
> };
>
> -/*
> - * Note: The returned value is correct only if !CONFIG_RANDOMIZE_BASE.
> - */
> -static uint64_t get_kernel_page_offset(void)
> -{
> - int i;
> -
> - if (elf_info.kern_vaddr_start == UINT64_MAX)
> - return UINT64_MAX;
> -
> - /* Current max virtual memory range is 48-bits. */
> - for (i = 48; i > 0; i--)
> - if (!(elf_info.kern_vaddr_start & (1UL << i)))
> - break;
> -
> - if (i <= 0)
> - return UINT64_MAX;
> - else
> - return UINT64_MAX << i;
> -}
> -
> /*
> * iomem_range_callback() - callback called for each iomem region
> * @data: not used
> @@ -198,7 +177,7 @@ int load_crashdump_segments(struct kexec_info *info)
> if (err)
> return EFAILED;
>
> - elf_info.page_offset = get_kernel_page_offset();
> + get_page_offset(&elf_info.page_offset);
> dbgprintf("%s: page_offset: %016llx\n", __func__,
> elf_info.page_offset);
>
> diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
> index c6c67e8..33cc258 100644
> --- a/kexec/arch/arm64/kexec-arm64.c
> +++ b/kexec/arch/arm64/kexec-arm64.c
> @@ -909,7 +909,7 @@ static int get_va_bits(void)
> * get_page_offset - Helper for getting PAGE_OFFSET
> */
>
> -static int get_page_offset(void)
> +int get_page_offset(unsigned long *page_offset)
> {
> int ret;
>
> @@ -917,8 +917,8 @@ static int get_page_offset(void)
> if (ret < 0)
> return ret;
>
> - page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> - dbgprintf("page_offset : %lx\n", page_offset);
> + *page_offset = UINT64_MAX << (va_bits - 1);
> + dbgprintf("page_offset : %lx\n", *page_offset);
>
> return 0;
> }
> @@ -954,7 +954,7 @@ int get_phys_base_from_pt_load(long *phys_offset)
> unsigned long long phys_start;
> unsigned long long virt_start;
>
> - ret = get_page_offset();
> + ret = get_page_offset(&page_offset);
> if (ret < 0)
> return ret;
>
> diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
> index 1844b67..ed99d9d 100644
> --- a/kexec/arch/arm64/kexec-arm64.h
> +++ b/kexec/arch/arm64/kexec-arm64.h
> @@ -69,6 +69,7 @@ extern struct arm64_mem arm64_mem;
>
> uint64_t get_phys_offset(void);
> uint64_t get_vp_offset(void);
> +int get_page_offset(unsigned long *offset);
>
> static inline void reset_vp_offset(void)
> {
* [PATCHv4 3/4] arm64: read VA_BITS from kcore for 52-bits VA kernel
2022-01-18 7:48 [PATCHv4 0/4] arm64: make phys_to_virt() correct Pingfan Liu
2022-01-18 7:48 ` [PATCHv4 1/4] arm64: make phys_offset signed Pingfan Liu
2022-01-18 7:48 ` [PATCHv4 2/4] arm64/crashdump: unify routine to get page_offset Pingfan Liu
@ 2022-01-18 7:48 ` Pingfan Liu
2022-01-20 18:09 ` Philipp Rudo
2022-01-18 7:48 ` [PATCHv4 4/4] arm64: fix PAGE_OFFSET calc for flipped mm Pingfan Liu
2022-01-24 9:10 ` [PATCHv4 0/4] arm64: make phys_to_virt() correct Simon Horman
4 siblings, 1 reply; 14+ messages in thread
From: Pingfan Liu @ 2022-01-18 7:48 UTC (permalink / raw)
To: kexec
phys_to_virt() calculates the virtual address. As an important factor,
page_offset is expected to be accurate.
Since the arm64 kernel exposes va_bits through vmcoreinfo, use it.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Kairui Song <kasong@tencent.com>
Cc: Simon Horman <horms@verge.net.au>
Cc: Philipp Rudo <prudo@redhat.com>
To: kexec@lists.infradead.org
---
kexec/arch/arm64/kexec-arm64.c | 34 ++++++++++++++++++++++++++++++----
util_lib/elf_info.c | 5 +++++
util_lib/include/elf_info.h | 1 +
3 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
index 33cc258..793799b 100644
--- a/kexec/arch/arm64/kexec-arm64.c
+++ b/kexec/arch/arm64/kexec-arm64.c
@@ -54,7 +54,7 @@
static bool try_read_phys_offset_from_kcore = false;
/* Machine specific details. */
-static int va_bits;
+static int va_bits = -1;
static unsigned long page_offset;
/* Global varables the core kexec routines expect. */
@@ -876,7 +876,18 @@ static inline void set_phys_offset(int64_t v, char *set_method)
static int get_va_bits(void)
{
- unsigned long long stext_sym_addr = get_kernel_sym("_stext");
+ unsigned long long stext_sym_addr;
+
+ /*
+ * if already got from kcore
+ */
+ if (va_bits != -1)
+ goto out;
+
+
+ /* For kernel older than v4.19 */
+ fprintf(stderr, "Warning, can't get the VA_BITS from kcore\n");
+ stext_sym_addr = get_kernel_sym("_stext");
if (stext_sym_addr == 0) {
fprintf(stderr, "Can't get the symbol of _stext.\n");
@@ -900,6 +911,7 @@ static int get_va_bits(void)
return -1;
}
+out:
dbgprintf("va_bits : %d\n", va_bits);
return 0;
@@ -917,14 +929,27 @@ int get_page_offset(unsigned long *page_offset)
if (ret < 0)
return ret;
- *page_offset = UINT64_MAX << (va_bits - 1);
+ if (va_bits < 52)
+ *page_offset = UINT64_MAX << (va_bits - 1);
+ else
+ *page_offset = UINT64_MAX << va_bits;
+
dbgprintf("page_offset : %lx\n", *page_offset);
return 0;
}
+static void arm64_scan_vmcoreinfo(char *pos)
+{
+ const char *str;
+
+ str = "NUMBER(VA_BITS)=";
+ if (memcmp(str, pos, strlen(str)) == 0)
+ va_bits = strtoul(pos + strlen(str), NULL, 10);
+}
+
/**
- * get_phys_offset_from_vmcoreinfo_pt_note - Helper for getting PHYS_OFFSET
+ * get_phys_offset_from_vmcoreinfo_pt_note - Helper for getting PHYS_OFFSET (and va_bits)
* from VMCOREINFO note inside 'kcore'.
*/
@@ -937,6 +962,7 @@ static int get_phys_offset_from_vmcoreinfo_pt_note(long *phys_offset)
return EFAILED;
}
+ arch_scan_vmcoreinfo = arm64_scan_vmcoreinfo;
ret = read_phys_offset_elf_kcore(fd, phys_offset);
close(fd);
diff --git a/util_lib/elf_info.c b/util_lib/elf_info.c
index 5574c7f..d252eff 100644
--- a/util_lib/elf_info.c
+++ b/util_lib/elf_info.c
@@ -310,6 +310,8 @@ int get_pt_load(int idx,
#define NOT_FOUND_LONG_VALUE (-1)
+void (*arch_scan_vmcoreinfo)(char *pos);
+
void scan_vmcoreinfo(char *start, size_t size)
{
char *last = start + size - 1;
@@ -551,6 +553,9 @@ void scan_vmcoreinfo(char *start, size_t size)
}
}
+ if (arch_scan_vmcoreinfo != NULL)
+ (*arch_scan_vmcoreinfo)(pos);
+
if (last_line)
break;
}
diff --git a/util_lib/include/elf_info.h b/util_lib/include/elf_info.h
index f550d86..fdf4c3d 100644
--- a/util_lib/include/elf_info.h
+++ b/util_lib/include/elf_info.h
@@ -31,5 +31,6 @@ int get_pt_load(int idx,
int read_phys_offset_elf_kcore(int fd, long *phys_off);
int read_elf(int fd);
void dump_dmesg(int fd, void (*handler)(char*, unsigned int));
+extern void (*arch_scan_vmcoreinfo)(char *pos);
#endif /* ELF_INFO_H */
--
2.31.1
* [PATCHv4 3/4] arm64: read VA_BITS from kcore for 52-bits VA kernel
2022-01-18 7:48 ` [PATCHv4 3/4] arm64: read VA_BITS from kcore for 52-bits VA kernel Pingfan Liu
@ 2022-01-20 18:09 ` Philipp Rudo
0 siblings, 0 replies; 14+ messages in thread
From: Philipp Rudo @ 2022-01-20 18:09 UTC (permalink / raw)
To: kexec
Hi Pingfan,
On Tue, 18 Jan 2022 15:48:11 +0800
Pingfan Liu <piliu@redhat.com> wrote:
> phys_to_virt() calculates virtual address. As a important factor,
> page_offset is excepted to be accurate.
>
> Since arm64 kernel exposes va_bits through vmcore, using it.
>
> Signed-off-by: Pingfan Liu <piliu@redhat.com>
> Cc: Kairui Song <kasong@tencent.com>
> Cc: Simon Horman <horms@verge.net.au>
> Cc: Philipp Rudo <prudo@redhat.com>
> To: kexec@lists.infradead.org
looks good
Reviewed-by: Philipp Rudo <prudo@redhat.com>
> ---
> kexec/arch/arm64/kexec-arm64.c | 34 ++++++++++++++++++++++++++++++----
> util_lib/elf_info.c | 5 +++++
> util_lib/include/elf_info.h | 1 +
> 3 files changed, 36 insertions(+), 4 deletions(-)
>
> diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
> index 33cc258..793799b 100644
> --- a/kexec/arch/arm64/kexec-arm64.c
> +++ b/kexec/arch/arm64/kexec-arm64.c
> @@ -54,7 +54,7 @@
> static bool try_read_phys_offset_from_kcore = false;
>
> /* Machine specific details. */
> -static int va_bits;
> +static int va_bits = -1;
> static unsigned long page_offset;
>
> /* Global varables the core kexec routines expect. */
> @@ -876,7 +876,18 @@ static inline void set_phys_offset(int64_t v, char *set_method)
>
> static int get_va_bits(void)
> {
> - unsigned long long stext_sym_addr = get_kernel_sym("_stext");
> + unsigned long long stext_sym_addr;
> +
> + /*
> + * if already got from kcore
> + */
> + if (va_bits != -1)
> + goto out;
> +
> +
> + /* For kernel older than v4.19 */
> + fprintf(stderr, "Warning, can't get the VA_BITS from kcore\n");
> + stext_sym_addr = get_kernel_sym("_stext");
>
> if (stext_sym_addr == 0) {
> fprintf(stderr, "Can't get the symbol of _stext.\n");
> @@ -900,6 +911,7 @@ static int get_va_bits(void)
> return -1;
> }
>
> +out:
> dbgprintf("va_bits : %d\n", va_bits);
>
> return 0;
> @@ -917,14 +929,27 @@ int get_page_offset(unsigned long *page_offset)
> if (ret < 0)
> return ret;
>
> - *page_offset = UINT64_MAX << (va_bits - 1);
> + if (va_bits < 52)
> + *page_offset = UINT64_MAX << (va_bits - 1);
> + else
> + *page_offset = UINT64_MAX << va_bits;
> +
> dbgprintf("page_offset : %lx\n", *page_offset);
>
> return 0;
> }
>
> +static void arm64_scan_vmcoreinfo(char *pos)
> +{
> + const char *str;
> +
> + str = "NUMBER(VA_BITS)=";
> + if (memcmp(str, pos, strlen(str)) == 0)
> + va_bits = strtoul(pos + strlen(str), NULL, 10);
> +}
> +
> /**
> - * get_phys_offset_from_vmcoreinfo_pt_note - Helper for getting PHYS_OFFSET
> + * get_phys_offset_from_vmcoreinfo_pt_note - Helper for getting PHYS_OFFSET (and va_bits)
> * from VMCOREINFO note inside 'kcore'.
> */
>
> @@ -937,6 +962,7 @@ static int get_phys_offset_from_vmcoreinfo_pt_note(long *phys_offset)
> return EFAILED;
> }
>
> + arch_scan_vmcoreinfo = arm64_scan_vmcoreinfo;
> ret = read_phys_offset_elf_kcore(fd, phys_offset);
>
> close(fd);
> diff --git a/util_lib/elf_info.c b/util_lib/elf_info.c
> index 5574c7f..d252eff 100644
> --- a/util_lib/elf_info.c
> +++ b/util_lib/elf_info.c
> @@ -310,6 +310,8 @@ int get_pt_load(int idx,
>
> #define NOT_FOUND_LONG_VALUE (-1)
>
> +void (*arch_scan_vmcoreinfo)(char *pos);
> +
> void scan_vmcoreinfo(char *start, size_t size)
> {
> char *last = start + size - 1;
> @@ -551,6 +553,9 @@ void scan_vmcoreinfo(char *start, size_t size)
> }
> }
>
> + if (arch_scan_vmcoreinfo != NULL)
> + (*arch_scan_vmcoreinfo)(pos);
> +
> if (last_line)
> break;
> }
> diff --git a/util_lib/include/elf_info.h b/util_lib/include/elf_info.h
> index f550d86..fdf4c3d 100644
> --- a/util_lib/include/elf_info.h
> +++ b/util_lib/include/elf_info.h
> @@ -31,5 +31,6 @@ int get_pt_load(int idx,
> int read_phys_offset_elf_kcore(int fd, long *phys_off);
> int read_elf(int fd);
> void dump_dmesg(int fd, void (*handler)(char*, unsigned int));
> +extern void (*arch_scan_vmcoreinfo)(char *pos);
>
> #endif /* ELF_INFO_H */
* [PATCHv4 4/4] arm64: fix PAGE_OFFSET calc for flipped mm
2022-01-18 7:48 [PATCHv4 0/4] arm64: make phys_to_virt() correct Pingfan Liu
` (2 preceding siblings ...)
2022-01-18 7:48 ` [PATCHv4 3/4] arm64: read VA_BITS from kcore for 52-bits VA kernel Pingfan Liu
@ 2022-01-18 7:48 ` Pingfan Liu
2022-01-20 18:08 ` Philipp Rudo
2022-01-24 9:10 ` [PATCHv4 0/4] arm64: make phys_to_virt() correct Simon Horman
4 siblings, 1 reply; 14+ messages in thread
From: Pingfan Liu @ 2022-01-18 7:48 UTC (permalink / raw)
To: kexec
From: Kairui Song <kasong@tencent.com>
Since kernel commit 14c127c957c1 ('arm64: mm: Flip kernel VA space'),
the memory layout on arm64 has changed, and kexec-tools can no longer
get the right PAGE_OFFSET based on the _text symbol.
Prior to that, the kimage (_text) lays above PAGE_END with this layout:
0 -> VA_START : Userspace
VA_START -> VA_START + 256M : BPF JIT, Modules
VA_START + 256M -> PAGE_OFFSET - (~GB misc) : Vmalloc (KERNEL _text HERE)
PAGE_OFFSET -> ... : * Linear map *
And here we have:
VA_START = -1UL << VA_BITS
PAGE_OFFSET = -1UL << (VA_BITS - 1)
_text < -1UL << (VA_BITS - 1)
The kernel image lies somewhere between VA_START and PAGE_OFFSET, so we
can calculate VA_BITS by finding the highest unset bit of the _text symbol
address, and shift one bit less than VA_BITS to get the page offset. This
works as long as KASLR doesn't put the kernel at too high a location
(which is commented inline).
And after that commit, the kernel layout has changed:
0 -> PAGE_OFFSET : Userspace
PAGE_OFFSET -> PAGE_END : * Linear map *
PAGE_END -> PAGE_END + 128M : bpf jit region
PAGE_END + 128M -> PAGE_END + 256MB : modules
PAGE_END + 256M -> ... : vmalloc (KERNEL _text HERE)
Here we have:
PAGE_OFFSET = -1UL << VA_BITS
PAGE_END = -1UL << (VA_BITS - 1)
_text > -1UL << (VA_BITS - 1)
The kernel image now lies above PAGE_END, so we have to shift one more bit
to get VA_BITS, and shift by exactly VA_BITS for PAGE_OFFSET.
We can simply check whether "_text > -1UL << (VA_BITS - 1)" is true to
judge which layout is being used and shift the page offset accordingly.
Signed-off-by: Kairui Song <kasong@tencent.com>
(rebased and stripped by Pingfan)
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Simon Horman <horms@verge.net.au>
Cc: Philipp Rudo <prudo@redhat.com>
To: kexec@lists.infradead.org
---
kexec/arch/arm64/kexec-arm64.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
index 793799b..ce7a5bb 100644
--- a/kexec/arch/arm64/kexec-arm64.c
+++ b/kexec/arch/arm64/kexec-arm64.c
@@ -923,13 +923,25 @@ out:
int get_page_offset(unsigned long *page_offset)
{
+ unsigned long long text_sym_addr, kernel_va_mid;
int ret;
+ text_sym_addr = get_kernel_sym("_text");
+ if (text_sym_addr == 0) {
+ fprintf(stderr, "Can't get the symbol of _text to calculate page_offset.\n");
+ return -1;
+ }
+
ret = get_va_bits();
if (ret < 0)
return ret;
- if (va_bits < 52)
+ /* Since kernel 5.4, kernel image is put above
+ * UINT64_MAX << (va_bits - 1)
+ */
+ kernel_va_mid = UINT64_MAX << (va_bits - 1);
+ /* older kernel */
+ if (text_sym_addr < kernel_va_mid)
*page_offset = UINT64_MAX << (va_bits - 1);
else
*page_offset = UINT64_MAX << va_bits;
--
2.31.1
* [PATCHv4 4/4] arm64: fix PAGE_OFFSET calc for flipped mm
2022-01-18 7:48 ` [PATCHv4 4/4] arm64: fix PAGE_OFFSET calc for flipped mm Pingfan Liu
@ 2022-01-20 18:08 ` Philipp Rudo
2022-01-21 1:36 ` Pingfan Liu
0 siblings, 1 reply; 14+ messages in thread
From: Philipp Rudo @ 2022-01-20 18:08 UTC (permalink / raw)
To: kexec
Hi Pingfan,
On Tue, 18 Jan 2022 15:48:12 +0800
Pingfan Liu <piliu@redhat.com> wrote:
> From: Kairui Song <kasong@tencent.com>
>
> Since kernel commit 14c127c957c1 ('arm64: mm: Flip kernel VA space'),
> the memory layout on arm64 have changed, and kexec-tools can no longer
> get the the right PAGE_OFFSET based on _text symbol.
>
> Prior to that, the kimage (_text) lays above PAGE_END with this layout:
> 0 -> VA_START : Usespace
> VA_START -> VA_START + 256M : BPF JIT, Modules
> VA_START + 256M -> PAGE_OFFSET - (~GB misc) : Vmalloc (KERNEL _text HERE)
> PAGE_OFFSET -> ... : * Linear map *
>
> And here we have:
> VA_START = -1UL << VA_BITS
> PAGE_OFFSET = -1UL << (VA_BITS - 1)
> _text < -1UL << (VA_BITS - 1)
>
> Kernel image lays somewhere between VA_START and PAGE_OFFSET, so we just
> calc VA_BITS by getting the highest unset bit of _text symbol address,
> and shift one less bit of VA_BITS to get page offset. This works as long
> as KASLR don't put kernel in a too high location (which is commented inline).
>
> And after that commit, kernel layout have changed:
> 0 -> PAGE_OFFSET : Userspace
> PAGE_OFFSET -> PAGE_END : * Linear map *
> PAGE_END -> PAGE_END + 128M : bpf jit region
> PAGE_END + 128M -> PAGE_END + 256MB : modules
> PAGE_END + 256M -> ... : vmalloc (KERNEL _text HERE)
>
> Here we have:
> PAGE_OFFSET = -1UL << VA_BITS
> PAGE_END = -1UL << (VA_BITS - 1)
> _text > -1UL << (VA_BITS - 1)
>
> Kernel image now lays above PAGE_END, so we have to shift one more bit to
> get the VA_BITS, and shift the exact VA_BITS for PAGE_OFFSET.
>
> We can simply check if "_text > -1UL << (VA_BITS - 1)" is true to judge
> which layout is being used and shift the page offset occordingly.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> (rebased and stripped by Pingfan )
> Signed-off-by: Pingfan Liu <piliu@redhat.com>
> Cc: Simon Horman <horms@verge.net.au>
> Cc: Philipp Rudo <prudo@redhat.com>
> To: kexec@lists.infradead.org
> ---
> kexec/arch/arm64/kexec-arm64.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
> index 793799b..ce7a5bb 100644
> --- a/kexec/arch/arm64/kexec-arm64.c
> +++ b/kexec/arch/arm64/kexec-arm64.c
> @@ -923,13 +923,25 @@ out:
>
> int get_page_offset(unsigned long *page_offset)
> {
> + unsigned long long text_sym_addr, kernel_va_mid;
> int ret;
>
> + text_sym_addr = get_kernel_sym("_text");
> + if (text_sym_addr == 0) {
> + fprintf(stderr, "Can't get the symbol of _text to calculate page_offset.\n");
> + return -1;
> + }
> +
> ret = get_va_bits();
> if (ret < 0)
> return ret;
>
> - if (va_bits < 52)
> + /* Since kernel 5.4, kernel image is put above
> + * UINT64_MAX << (va_bits - 1)
> + */
> + kernel_va_mid = UINT64_MAX << (va_bits - 1);
> + /* older kernel */
> + if (text_sym_addr < kernel_va_mid)
> *page_offset = UINT64_MAX << (va_bits - 1);
> else
> *page_offset = UINT64_MAX << va_bits;
I would drop the kernel_va_mid and simply use
*page_offset = UINT64_MAX << (va_bits - 1)
if (text_sym_addr > *page_offset)
*page_offset = UINT64_MAX << va_bits
but that's more a matter of taste.
Reviewed-by: Philipp Rudo <prudo@redhat.com>
* [PATCHv4 4/4] arm64: fix PAGE_OFFSET calc for flipped mm
2022-01-20 18:08 ` Philipp Rudo
@ 2022-01-21 1:36 ` Pingfan Liu
0 siblings, 0 replies; 14+ messages in thread
From: Pingfan Liu @ 2022-01-21 1:36 UTC (permalink / raw)
To: kexec
On Fri, Jan 21, 2022 at 2:09 AM Philipp Rudo <prudo@redhat.com> wrote:
>
> Hi Pingfan,
>
> On Tue, 18 Jan 2022 15:48:12 +0800
> Pingfan Liu <piliu@redhat.com> wrote:
>
> > From: Kairui Song <kasong@tencent.com>
> >
> > Since kernel commit 14c127c957c1 ('arm64: mm: Flip kernel VA space'),
> > the memory layout on arm64 has changed, and kexec-tools can no longer
> > get the right PAGE_OFFSET based on the _text symbol.
> >
> > Prior to that, the kimage (_text) lies below PAGE_OFFSET, with this layout:
> > 0 -> VA_START : Userspace
> > VA_START -> VA_START + 256M : BPF JIT, Modules
> > VA_START + 256M -> PAGE_OFFSET - (~GB misc) : Vmalloc (KERNEL _text HERE)
> > PAGE_OFFSET -> ... : * Linear map *
> >
> > And here we have:
> > VA_START = -1UL << VA_BITS
> > PAGE_OFFSET = -1UL << (VA_BITS - 1)
> > _text < -1UL << (VA_BITS - 1)
> >
> > Kernel image lies somewhere between VA_START and PAGE_OFFSET, so we can
> > calculate VA_BITS from the highest unset bit of the _text symbol address,
> > and shift by one bit less than VA_BITS to get the page offset. This works
> > as long as KASLR doesn't put the kernel at too high a location (as
> > commented inline).
> >
> > And after that commit, the kernel layout has changed:
> > 0 -> PAGE_OFFSET : Userspace
> > PAGE_OFFSET -> PAGE_END : * Linear map *
> > PAGE_END -> PAGE_END + 128M : bpf jit region
> > PAGE_END + 128M -> PAGE_END + 256MB : modules
> > PAGE_END + 256M -> ... : vmalloc (KERNEL _text HERE)
> >
> > Here we have:
> > PAGE_OFFSET = -1UL << VA_BITS
> > PAGE_END = -1UL << (VA_BITS - 1)
> > _text > -1UL << (VA_BITS - 1)
> >
> > Kernel image now lies above PAGE_END, so we have to shift one more bit to
> > get VA_BITS, and shift by the exact VA_BITS for PAGE_OFFSET.
> >
> > We can simply check if "_text > -1UL << (VA_BITS - 1)" is true to judge
> > which layout is being used and shift the page offset accordingly.
> >
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > (rebased and stripped by Pingfan)
> > Signed-off-by: Pingfan Liu <piliu@redhat.com>
> > Cc: Simon Horman <horms@verge.net.au>
> > Cc: Philipp Rudo <prudo@redhat.com>
> > To: kexec@lists.infradead.org
> > ---
> > kexec/arch/arm64/kexec-arm64.c | 14 +++++++++++++-
> > 1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
> > index 793799b..ce7a5bb 100644
> > --- a/kexec/arch/arm64/kexec-arm64.c
> > +++ b/kexec/arch/arm64/kexec-arm64.c
> > @@ -923,13 +923,25 @@ out:
> >
> > int get_page_offset(unsigned long *page_offset)
> > {
> > + unsigned long long text_sym_addr, kernel_va_mid;
> > int ret;
> >
> > + text_sym_addr = get_kernel_sym("_text");
> > + if (text_sym_addr == 0) {
> > + fprintf(stderr, "Can't get the symbol of _text to calculate page_offset.\n");
> > + return -1;
> > + }
> > +
> > ret = get_va_bits();
> > if (ret < 0)
> > return ret;
> >
> > - if (va_bits < 52)
> > + /* Since kernel 5.4, kernel image is put above
> > + * UINT64_MAX << (va_bits - 1)
> > + */
> > + kernel_va_mid = UINT64_MAX << (va_bits - 1);
> > + /* older kernel */
> > + if (text_sym_addr < kernel_va_mid)
> > *page_offset = UINT64_MAX << (va_bits - 1);
> > else
> > *page_offset = UINT64_MAX << va_bits;
>
> I would drop the kernel_va_mid and simply use
>
> *page_offset = UINT64_MAX << (va_bits - 1);
> if (text_sym_addr > *page_offset)
> 	*page_offset = UINT64_MAX << va_bits;
>
> but that's more a matter of taste.
>
Ah, I kept kernel_va_mid deliberately, to illustrate the purpose.
> Reviewed-by: Philipp Rudo <prudo@redhat.com>
>
Thanks for your review.
Regards,
Pingfan
* [PATCHv4 0/4] arm64: make phys_to_virt() correct
2022-01-18 7:48 [PATCHv4 0/4] arm64: make phys_to_virt() correct Pingfan Liu
` (3 preceding siblings ...)
2022-01-18 7:48 ` [PATCHv4 4/4] arm64: fix PAGE_OFFSET calc for flipped mm Pingfan Liu
@ 2022-01-24 9:10 ` Simon Horman
2022-01-24 9:53 ` Pingfan Liu
4 siblings, 1 reply; 14+ messages in thread
From: Simon Horman @ 2022-01-24 9:10 UTC (permalink / raw)
To: kexec
On Tue, Jan 18, 2022 at 03:48:08PM +0800, Pingfan Liu wrote:
> Currently phys_to_virt() does not work well on 52-bit VA arm64 kernels.
> One issue comes from phys_offset not being signed;
> the other from a wrong page_offset.
>
> v3 -> v4:
> address the data type in [1/4] and [2/4]
>
> v2 -> v3:
> Discussed with Kairui off-list: the flipped mm layout on 48-bit VA kernels
> could not be handled yet. So introduce [4/4], which judges whether the
> kernel uses the flipped mm layout and adopts a different formula
> accordingly.
>
> As for [1-3/4], they are the same as [1-3/3] in V2.
>
> v1 -> v2
> Fix broken patch [2/3] in v1
> Move arch_scan_vmcoreinfo declaration to util_lib/include/elf_info.h
> Use UINT64_MAX instead of 0xffffffffffffffff
>
>
> Cc: Kairui Song <kasong@tencent.com>
> Cc: Simon Horman <horms@verge.net.au>
> Cc: Philipp Rudo <prudo@redhat.com>
> To: kexec@lists.infradead.org
Thanks, series applied.
* [PATCHv4 0/4] arm64: make phys_to_virt() correct
2022-01-24 9:10 ` [PATCHv4 0/4] arm64: make phys_to_virt() correct Simon Horman
@ 2022-01-24 9:53 ` Pingfan Liu
0 siblings, 0 replies; 14+ messages in thread
From: Pingfan Liu @ 2022-01-24 9:53 UTC (permalink / raw)
To: kexec
On Mon, Jan 24, 2022 at 5:10 PM Simon Horman <horms@verge.net.au> wrote:
>
> On Tue, Jan 18, 2022 at 03:48:08PM +0800, Pingfan Liu wrote:
> > Currently phys_to_virt() does not work well on 52-bit VA arm64 kernels.
> > One issue comes from phys_offset not being signed;
> > the other from a wrong page_offset.
> >
> > v3 -> v4:
> > address the data type in [1/4] and [2/4]
> >
> > v2 -> v3:
> > Discussed with Kairui off-list: the flipped mm layout on 48-bit VA kernels
> > could not be handled yet. So introduce [4/4], which judges whether the
> > kernel uses the flipped mm layout and adopts a different formula
> > accordingly.
> >
> > As for [1-3/4], they are the same as [1-3/3] in V2.
> >
> > v1 -> v2
> > Fix broken patch [2/3] in v1
> > Move arch_scan_vmcoreinfo declaration to util_lib/include/elf_info.h
> > Use UINT64_MAX instead of 0xffffffffffffffff
> >
> >
> > Cc: Kairui Song <kasong@tencent.com>
> > Cc: Simon Horman <horms@verge.net.au>
> > Cc: Philipp Rudo <prudo@redhat.com>
> > To: kexec@lists.infradead.org
>
> Thanks, series applied.
>
Hi Simon and Philipp, I appreciate the help and the patient review.
Best Regards,
Pingfan