All of lore.kernel.org
* [PATCH] kexec: Update vmcoreinfo after crash happened
@ 2017-03-16 12:16 ` Xunlei Pang
  0 siblings, 0 replies; 20+ messages in thread
From: Xunlei Pang @ 2017-03-16 12:16 UTC (permalink / raw)
  To: linux-kernel, kexec
  Cc: akpm, Eric Biederman, Dave Young, Baoquan He, Xunlei Pang

Currently, vmcoreinfo data is updated once at boot time in a
subsys_initcall(), so it is at risk of being corrupted by buggy
code while the system is running.

As a result, the dumped vmcore will contain wrong vmcoreinfo. Later,
when the "crash" utility parses such a vmcore, it will probably fail
with a "Segmentation fault".

Based on the fact that each vmcoreinfo value stays invariant once the
kernel has booted, we can safely move all the vmcoreinfo operations into
crash_save_vmcoreinfo(), which is called after a crash has happened. This
way, the correctness of the vmcoreinfo data is always guaranteed.

Signed-off-by: Xunlei Pang <xlpang@redhat.com>
---
 kernel/kexec_core.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index bfe62d5..1bfdd96 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1367,12 +1367,6 @@ static void update_vmcoreinfo_note(void)
 	final_note(buf);
 }
 
-void crash_save_vmcoreinfo(void)
-{
-	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
-	update_vmcoreinfo_note();
-}
-
 void vmcoreinfo_append_str(const char *fmt, ...)
 {
 	va_list args;
@@ -1402,7 +1396,7 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
 	return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
 }
 
-static int __init crash_save_vmcoreinfo_init(void)
+void crash_save_vmcoreinfo(void)
 {
 	VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
 	VMCOREINFO_PAGESIZE(PAGE_SIZE);
@@ -1474,13 +1468,11 @@ static int __init crash_save_vmcoreinfo_init(void)
 #endif
 
 	arch_crash_save_vmcoreinfo();
-	update_vmcoreinfo_note();
+	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
 
-	return 0;
+	update_vmcoreinfo_note();
 }
 
-subsys_initcall(crash_save_vmcoreinfo_init);
-
 /*
  * Move into place and start executing a preloaded standalone
  * executable.  If nothing was preloaded return an error.
-- 
1.8.3.1


* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-16 12:16 ` Xunlei Pang
@ 2017-03-16 12:27   ` Baoquan He
  -1 siblings, 0 replies; 20+ messages in thread
From: Baoquan He @ 2017-03-16 12:27 UTC (permalink / raw)
  To: Xunlei Pang; +Cc: linux-kernel, kexec, akpm, Eric Biederman, Dave Young

Hi Xunlei,

Did you actually see this happen? Because the vmcore size estimation
feature, namely the --mem-usage option of makedumpfile, depends on the
vmcoreinfo of the 1st kernel, your change will break it.

If not, it may not be a good idea to change this.

Baoquan

On 03/16/17 at 08:16pm, Xunlei Pang wrote:
> Currently vmcoreinfo data is updated at boot time subsys_initcall(),
> it has the risk of being modified by some wrong code during system
> is running.
> 
> As a result, vmcore dumped will contain the wrong vmcoreinfo. Later on,
> when using "crash" utility to parse this vmcore, we probably will get
> "Segmentation fault".
> 
> Based on the fact that the value of each vmcoreinfo stays invariable
> once kernel boots up, we safely move all the vmcoreinfo operations into
> crash_save_vmcoreinfo() which is called after crash happened. In this
> way, vmcoreinfo data correctness is always guaranteed.
> 
> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
> ---
>  kernel/kexec_core.c | 14 +++-----------
>  1 file changed, 3 insertions(+), 11 deletions(-)
> 
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index bfe62d5..1bfdd96 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -1367,12 +1367,6 @@ static void update_vmcoreinfo_note(void)
>  	final_note(buf);
>  }
>  
> -void crash_save_vmcoreinfo(void)
> -{
> -	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
> -	update_vmcoreinfo_note();
> -}
> -
>  void vmcoreinfo_append_str(const char *fmt, ...)
>  {
>  	va_list args;
> @@ -1402,7 +1396,7 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
>  	return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
>  }
>  
> -static int __init crash_save_vmcoreinfo_init(void)
> +void crash_save_vmcoreinfo(void)
>  {
>  	VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
>  	VMCOREINFO_PAGESIZE(PAGE_SIZE);
> @@ -1474,13 +1468,11 @@ static int __init crash_save_vmcoreinfo_init(void)
>  #endif
>  
>  	arch_crash_save_vmcoreinfo();
> -	update_vmcoreinfo_note();
> +	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
>  
> -	return 0;
> +	update_vmcoreinfo_note();
>  }
>  
> -subsys_initcall(crash_save_vmcoreinfo_init);
> -
>  /*
>   * Move into place and start executing a preloaded standalone
>   * executable.  If nothing was preloaded return an error.
> -- 
> 1.8.3.1
> 


* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
@ 2017-03-16 12:27   ` Baoquan He
  0 siblings, 0 replies; 20+ messages in thread
From: Baoquan He @ 2017-03-16 12:27 UTC (permalink / raw)
  To: Xunlei Pang; +Cc: Dave Young, akpm, kexec, linux-kernel, Eric Biederman

Hi Xunlei,

Did you really see this ever happened? Because the vmcore size estimate
feature, namely --mem-usage option of makedumpfile, depends on the
vmcoreinfo in 1st kernel, your change will break it.

If not, it could be not good to change that.

Baoquan

On 03/16/17 at 08:16pm, Xunlei Pang wrote:
> Currently vmcoreinfo data is updated at boot time subsys_initcall(),
> it has the risk of being modified by some wrong code during system
> is running.
> 
> As a result, vmcore dumped will contain the wrong vmcoreinfo. Later on,
> when using "crash" utility to parse this vmcore, we probably will get
> "Segmentation fault".
> 
> Based on the fact that the value of each vmcoreinfo stays invariable
> once kernel boots up, we safely move all the vmcoreinfo operations into
> crash_save_vmcoreinfo() which is called after crash happened. In this
> way, vmcoreinfo data correctness is always guaranteed.
> 
> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
> ---
>  kernel/kexec_core.c | 14 +++-----------
>  1 file changed, 3 insertions(+), 11 deletions(-)
> 
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index bfe62d5..1bfdd96 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -1367,12 +1367,6 @@ static void update_vmcoreinfo_note(void)
>  	final_note(buf);
>  }
>  
> -void crash_save_vmcoreinfo(void)
> -{
> -	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
> -	update_vmcoreinfo_note();
> -}
> -
>  void vmcoreinfo_append_str(const char *fmt, ...)
>  {
>  	va_list args;
> @@ -1402,7 +1396,7 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
>  	return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
>  }
>  
> -static int __init crash_save_vmcoreinfo_init(void)
> +void crash_save_vmcoreinfo(void)
>  {
>  	VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
>  	VMCOREINFO_PAGESIZE(PAGE_SIZE);
> @@ -1474,13 +1468,11 @@ static int __init crash_save_vmcoreinfo_init(void)
>  #endif
>  
>  	arch_crash_save_vmcoreinfo();
> -	update_vmcoreinfo_note();
> +	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
>  
> -	return 0;
> +	update_vmcoreinfo_note();
>  }
>  
> -subsys_initcall(crash_save_vmcoreinfo_init);
> -
>  /*
>   * Move into place and start executing a preloaded standalone
>   * executable.  If nothing was preloaded return an error.
> -- 
> 1.8.3.1
> 

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-16 12:27   ` Baoquan He
@ 2017-03-16 12:36     ` Xunlei Pang
  -1 siblings, 0 replies; 20+ messages in thread
From: Xunlei Pang @ 2017-03-16 12:36 UTC (permalink / raw)
  To: Baoquan He, Xunlei Pang
  Cc: linux-kernel, kexec, akpm, Eric Biederman, Dave Young

On 03/16/2017 at 08:27 PM, Baoquan He wrote:
> Hi Xunlei,
>
> Did you really see this ever happened? Because the vmcore size estimate
> feature, namely --mem-usage option of makedumpfile, depends on the
> vmcoreinfo in 1st kernel, your change will break it.

Hi Baoquan,

I can reproduce it using a kernel module that modifies the vmcoreinfo,
so it is a problem that can actually happen.

> If not, it could be not good to change that.

That's a good point. Then I guess we can keep crash_save_vmcoreinfo_init()
and store all the vmcoreinfo again after a crash. What do you think?

Regards,
Xunlei

>
> Baoquan
>
> On 03/16/17 at 08:16pm, Xunlei Pang wrote:
>> Currently vmcoreinfo data is updated at boot time subsys_initcall(),
>> it has the risk of being modified by some wrong code during system
>> is running.
>>
>> As a result, vmcore dumped will contain the wrong vmcoreinfo. Later on,
>> when using "crash" utility to parse this vmcore, we probably will get
>> "Segmentation fault".
>>
>> Based on the fact that the value of each vmcoreinfo stays invariable
>> once kernel boots up, we safely move all the vmcoreinfo operations into
>> crash_save_vmcoreinfo() which is called after crash happened. In this
>> way, vmcoreinfo data correctness is always guaranteed.
>>
>> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
>> ---
>>  kernel/kexec_core.c | 14 +++-----------
>>  1 file changed, 3 insertions(+), 11 deletions(-)
>>
>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>> index bfe62d5..1bfdd96 100644
>> --- a/kernel/kexec_core.c
>> +++ b/kernel/kexec_core.c
>> @@ -1367,12 +1367,6 @@ static void update_vmcoreinfo_note(void)
>>  	final_note(buf);
>>  }
>>  
>> -void crash_save_vmcoreinfo(void)
>> -{
>> -	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
>> -	update_vmcoreinfo_note();
>> -}
>> -
>>  void vmcoreinfo_append_str(const char *fmt, ...)
>>  {
>>  	va_list args;
>> @@ -1402,7 +1396,7 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
>>  	return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
>>  }
>>  
>> -static int __init crash_save_vmcoreinfo_init(void)
>> +void crash_save_vmcoreinfo(void)
>>  {
>>  	VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
>>  	VMCOREINFO_PAGESIZE(PAGE_SIZE);
>> @@ -1474,13 +1468,11 @@ static int __init crash_save_vmcoreinfo_init(void)
>>  #endif
>>  
>>  	arch_crash_save_vmcoreinfo();
>> -	update_vmcoreinfo_note();
>> +	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
>>  
>> -	return 0;
>> +	update_vmcoreinfo_note();
>>  }
>>  
>> -subsys_initcall(crash_save_vmcoreinfo_init);
>> -
>>  /*
>>   * Move into place and start executing a preloaded standalone
>>   * executable.  If nothing was preloaded return an error.
>> -- 
>> 1.8.3.1
>>


* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-16 12:36     ` Xunlei Pang
@ 2017-03-16 13:18       ` Baoquan He
  -1 siblings, 0 replies; 20+ messages in thread
From: Baoquan He @ 2017-03-16 13:18 UTC (permalink / raw)
  To: xlpang; +Cc: linux-kernel, kexec, akpm, Eric Biederman, Dave Young

On 03/16/17 at 08:36pm, Xunlei Pang wrote:
> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
> > Hi Xunlei,
> >
> > Did you really see this ever happened? Because the vmcore size estimate
> > feature, namely --mem-usage option of makedumpfile, depends on the
> > vmcoreinfo in 1st kernel, your change will break it.
> 
> Hi Baoquan,
> 
> I can reproduce it using a kernel module which modifies the vmcoreinfo,
> so it's a problem can actually happen.
> 
> > If not, it could be not good to change that.
> 
> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
> and store again all the vmcoreinfo after crash. What do you think?

Well, then makedumpfile will still segfault when the following command
is executed in the 1st kernel, if the corruption existed:
	makedumpfile --mem-usage /proc/kcore

So we still have to face that problem and fix it. vmcoreinfo_note is in
the kernel data area; how does a module intrude into this area? And can
we fix the module code instead?


> 
> >
> > Baoquan
> >
> > On 03/16/17 at 08:16pm, Xunlei Pang wrote:
> >> Currently vmcoreinfo data is updated at boot time subsys_initcall(),
> >> it has the risk of being modified by some wrong code during system
> >> is running.
> >>
> >> As a result, vmcore dumped will contain the wrong vmcoreinfo. Later on,
> >> when using "crash" utility to parse this vmcore, we probably will get
> >> "Segmentation fault".
> >>
> >> Based on the fact that the value of each vmcoreinfo stays invariable
> >> once kernel boots up, we safely move all the vmcoreinfo operations into
> >> crash_save_vmcoreinfo() which is called after crash happened. In this
> >> way, vmcoreinfo data correctness is always guaranteed.
> >>
> >> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
> >> ---
> >>  kernel/kexec_core.c | 14 +++-----------
> >>  1 file changed, 3 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> >> index bfe62d5..1bfdd96 100644
> >> --- a/kernel/kexec_core.c
> >> +++ b/kernel/kexec_core.c
> >> @@ -1367,12 +1367,6 @@ static void update_vmcoreinfo_note(void)
> >>  	final_note(buf);
> >>  }
> >>  
> >> -void crash_save_vmcoreinfo(void)
> >> -{
> >> -	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
> >> -	update_vmcoreinfo_note();
> >> -}
> >> -
> >>  void vmcoreinfo_append_str(const char *fmt, ...)
> >>  {
> >>  	va_list args;
> >> @@ -1402,7 +1396,7 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
> >>  	return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
> >>  }
> >>  
> >> -static int __init crash_save_vmcoreinfo_init(void)
> >> +void crash_save_vmcoreinfo(void)
> >>  {
> >>  	VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
> >>  	VMCOREINFO_PAGESIZE(PAGE_SIZE);
> >> @@ -1474,13 +1468,11 @@ static int __init crash_save_vmcoreinfo_init(void)
> >>  #endif
> >>  
> >>  	arch_crash_save_vmcoreinfo();
> >> -	update_vmcoreinfo_note();
> >> +	vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
> >>  
> >> -	return 0;
> >> +	update_vmcoreinfo_note();
> >>  }
> >>  
> >> -subsys_initcall(crash_save_vmcoreinfo_init);
> >> -
> >>  /*
> >>   * Move into place and start executing a preloaded standalone
> >>   * executable.  If nothing was preloaded return an error.
> >> -- 
> >> 1.8.3.1
> >>
> 


* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-16 13:18       ` Baoquan He
@ 2017-03-16 13:40         ` Xunlei Pang
  -1 siblings, 0 replies; 20+ messages in thread
From: Xunlei Pang @ 2017-03-16 13:40 UTC (permalink / raw)
  To: Baoquan He, xlpang; +Cc: linux-kernel, kexec, akpm, Eric Biederman, Dave Young

On 03/16/2017 at 09:18 PM, Baoquan He wrote:
> On 03/16/17 at 08:36pm, Xunlei Pang wrote:
>> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
>>> Hi Xunlei,
>>>
>>> Did you really see this ever happened? Because the vmcore size estimate
>>> feature, namely --mem-usage option of makedumpfile, depends on the
>>> vmcoreinfo in 1st kernel, your change will break it.
>> Hi Baoquan,
>>
>> I can reproduce it using a kernel module which modifies the vmcoreinfo,
>> so it's a problem can actually happen.
>>
>>> If not, it could be not good to change that.
>> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
>> and store again all the vmcoreinfo after crash. What do you think?
> Well, then it will make makedumpfile segfault happen too when execute
> below command in 1st kernel if it existed:
> 	makedumpfile --mem-usage /proc/kcore

Yes, if the initial vmcoreinfo data was modified before running
"makedumpfile --mem-usage", that could happen; after all, something is
already going wrong with the system. That is also why we deploy the kdump
service at the very beginning, when the system still has a low probability
of having gone wrong.

But we have to guarantee that the kdump vmcore can be generated correctly
whenever possible.

>
> So we still need to face that problem and need fix it. vmcoreinfo_note
> is in kernel data area, how does module intrude into this area? And can
> we fix the module code?
>

Bugs always exist in products; we cannot foresee every failure and fix
all the errors in advance, which is exactly why we need kdump.

I think the following update should guarantee correct vmcoreinfo for kdump.

---
 kernel/kexec_core.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index bfe62d5..0f7b328 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1367,12 +1367,6 @@ static void update_vmcoreinfo_note(void)
     final_note(buf);
 }
 
-void crash_save_vmcoreinfo(void)
-{
-    vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
-    update_vmcoreinfo_note();
-}
-
 void vmcoreinfo_append_str(const char *fmt, ...)
 {
     va_list args;
@@ -1402,7 +1396,7 @@ phys_addr_t __weak paddr_vmcoreinfo_note(void)
     return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
 }
 
-static int __init crash_save_vmcoreinfo_init(void)
+static void do_crash_save_vmcoreinfo_init(void)
 {
     VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
     VMCOREINFO_PAGESIZE(PAGE_SIZE);
@@ -1474,6 +1468,20 @@ static int __init crash_save_vmcoreinfo_init(void)
 #endif
 
     arch_crash_save_vmcoreinfo();
+}
+
+void crash_save_vmcoreinfo(void)
+{
+    /* Save again to protect vmcoreinfo from being modified */
+    vmcoreinfo_size = 0;
+    do_crash_save_vmcoreinfo_init();
+    vmcoreinfo_append_str("CRASHTIME=%ld\n", get_seconds());
+    update_vmcoreinfo_note();
+}
+
+static int __init crash_save_vmcoreinfo_init(void)
+{
+    do_crash_save_vmcoreinfo_init();
     update_vmcoreinfo_note();
 
     return 0;
-- 
1.8.3.1


* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-16 13:40         ` Xunlei Pang
@ 2017-03-18 18:23           ` Petr Tesarik
  -1 siblings, 0 replies; 20+ messages in thread
From: Petr Tesarik @ 2017-03-18 18:23 UTC (permalink / raw)
  To: Xunlei Pang
  Cc: Baoquan He, xlpang, Dave Young, akpm, kexec, linux-kernel,
	Eric Biederman

On Thu, 16 Mar 2017 21:40:58 +0800
Xunlei Pang <xlpang@redhat.com> wrote:

> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
> > On 03/16/17 at 08:36pm, Xunlei Pang wrote:
> >> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
> >>> Hi Xunlei,
> >>>
> >>> Did you really see this ever happened? Because the vmcore size estimate
> >>> feature, namely --mem-usage option of makedumpfile, depends on the
> >>> vmcoreinfo in 1st kernel, your change will break it.
> >> Hi Baoquan,
> >>
> >> I can reproduce it using a kernel module which modifies the vmcoreinfo,
> >> so it's a problem can actually happen.
> >>
> >>> If not, it could be not good to change that.
> >> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
> >> and store again all the vmcoreinfo after crash. What do you think?
> > Well, then it will make makedumpfile segfault happen too when execute
> > below command in 1st kernel if it existed:
> > 	makedumpfile --mem-usage /proc/kcore
> 
> Yes, if the initial vmcoreinfo data was modified before "makedumpfile --mem-usage", it might happen,
> after all the system is going something wrong. And that's why we deploy kdump service at the very
> beginning when the system has a low possibility of going wrong.
> 
> But we have to guarantee kdump vmcore can be generated correctly as possible as it can.
> 
> >
> > So we still need to face that problem and need fix it. vmcoreinfo_note
> > is in kernel data area, how does module intrude into this area? And can
> > we fix the module code?
> >
> 
> Bugs always exist in products, we can't know what will happen and fix all the errors,
> that's why we need kdump.
> 
> I think the following update should guarantee the correct vmcoreinfo for kdump.

I'm still not convinced. I would probably have more trust in a clean
kernel (right after boot) than in a kernel that has already crashed
(presumably because of a serious bug). How can reliability be improved
by running more code in an unsafe environment?

If some code overwrites reserved areas (such as vmcoreinfo), then it's
seriously buggy. And in my opinion, it is more difficult to identify
such bugs if they are masked by re-initializing vmcoreinfo after crash.
In fact, if makedumpfile in the kexec'ed kernel complains that it
didn't find valid VMCOREINFO content, that's already a hint.

As a side note, if you're debugging a vmcoreinfo corruption, it's
possible to use a standalone VMCOREINFO file with makedumpfile, so you
can pre-generate it and save it in the kdump initrd.
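
For reference, that workflow might look roughly like the following, using makedumpfile's -g/-i options; all paths below are placeholders:

```shell
# Pre-generate a standalone VMCOREINFO file from the matching vmlinux
# (debuginfo path is an example):
makedumpfile -g vmcoreinfo.txt -x /usr/lib/debug/lib/modules/$(uname -r)/vmlinux

# Later, in the kdump environment, point makedumpfile at the saved file
# instead of the (possibly corrupted) in-kernel note:
makedumpfile -i vmcoreinfo.txt -d 31 /proc/vmcore /var/crash/dumpfile
```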

In short, I don't see a compelling case for this change.

Just my two cents,
Petr T

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-18 18:23           ` Petr Tesarik
@ 2017-03-20  2:17             ` Xunlei Pang
  -1 siblings, 0 replies; 20+ messages in thread
From: Xunlei Pang @ 2017-03-20  2:17 UTC (permalink / raw)
  To: Petr Tesarik
  Cc: Baoquan He, kexec, xlpang, linux-kernel, Eric Biederman, akpm,
	Dave Young

On 03/19/2017 at 02:23 AM, Petr Tesarik wrote:
> On Thu, 16 Mar 2017 21:40:58 +0800
> Xunlei Pang <xpang@redhat.com> wrote:
>
>> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
>>> On 03/16/17 at 08:36pm, Xunlei Pang wrote:
>>>> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
>>>>> Hi Xunlei,
>>>>>
>>>>> Did you really see this ever happened? Because the vmcore size estimate
>>>>> feature, namely --mem-usage option of makedumpfile, depends on the
>>>>> vmcoreinfo in 1st kernel, your change will break it.
>>>> Hi Baoquan,
>>>>
>>>> I can reproduce it using a kernel module which modifies the vmcoreinfo,
>>>> so it's a problem can actually happen.
>>>>
>>>>> If not, it could be not good to change that.
>>>> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
>>>> and store again all the vmcoreinfo after crash. What do you think?
>>> Well, then it will make makedumpfile segfault happen too when execute
>>> below command in 1st kernel if it existed:
>>> 	makedumpfile --mem-usage /proc/kcore
>> Yes, if the initial vmcoreinfo data was modified before "makedumpfile --mem-usage", it might happen,
>> after all the system is going something wrong. And that's why we deploy kdump service at the very
>> beginning when the system has a low possibility of going wrong.
>>
>> But we have to guarantee kdump vmcore can be generated correctly as possible as it can.
>>
>>> So we still need to face that problem and need fix it. vmcoreinfo_note
>>> is in kernel data area, how does module intrude into this area? And can
>>> we fix the module code?
>>>
>> Bugs always exist in products, we can't know what will happen and fix all the errors,
>> that's why we need kdump.
>>
>> I think the following update should guarantee the correct vmcoreinfo for kdump.
> I'm still not convinced. I would probably have more trust in a clean
> kernel (after boot) than a kernel that has already crashed (presumably
> because of a serious bug). How can be reliability improved by running
> more code in unsafe environment?

Correct, I realized that, so I used crc32 to protect the original data;
but since Eric suggested a more reasonable approach, I will try that later.

>
> If some code overwrites reserved areas (such as vmcoreinfo), then it's
> seriously buggy. And in my opinion, it is more difficult to identify
> such bugs if they are masked by re-initializing vmcoreinfo after crash.
> In fact, if makedumpfile in the kexec'ed kernel complains that it
> didn't find valid VMCOREINFO content, that's already a hint.
>
> As a side note, if you're debugging a vmcoreinfo corruption, it's
> possible to use a standalone VMCOREINFO file with makedumpfile, so you
> can pre-generate it and save it in the kdump initrd.
>
> In short, I don't see a compelling case for this change.

E.g.: 1) buggy code overwrites vmcoreinfo_data; 2) it later crashes the
system; 3) kdump is triggered, and then we will obviously fail to recognize
the crash context correctly due to the corrupted vmcoreinfo.  Everyone
will be confused when hitting such an unfortunate customer-side issue.

Although it's a corner case, if it's easy to fix then I think we'd better do it.

Right now, except for vmcoreinfo, all the crash data is well protected
(including the cpu notes, which are fully updated in the crash path, so
their correctness is guaranteed).

Regards,
Xunlei

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-20  2:17             ` Xunlei Pang
@ 2017-03-20 13:04               ` Petr Tesarik
  -1 siblings, 0 replies; 20+ messages in thread
From: Petr Tesarik @ 2017-03-20 13:04 UTC (permalink / raw)
  To: Xunlei Pang
  Cc: Baoquan He, kexec, xlpang, linux-kernel, Eric Biederman, akpm,
	Dave Young

On Mon, 20 Mar 2017 10:17:42 +0800
Xunlei Pang <xpang@redhat.com> wrote:

> On 03/19/2017 at 02:23 AM, Petr Tesarik wrote:
> > On Thu, 16 Mar 2017 21:40:58 +0800
> > Xunlei Pang <xpang@redhat.com> wrote:
> >
> >> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
> >>> On 03/16/17 at 08:36pm, Xunlei Pang wrote:
> >>>> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
> >>>>> Hi Xunlei,
> >>>>>
> >>>>> Did you really see this ever happened? Because the vmcore size estimate
> >>>>> feature, namely --mem-usage option of makedumpfile, depends on the
> >>>>> vmcoreinfo in 1st kernel, your change will break it.
> >>>> Hi Baoquan,
> >>>>
> >>>> I can reproduce it using a kernel module which modifies the vmcoreinfo,
> >>>> so it's a problem can actually happen.
> >>>>
> >>>>> If not, it could be not good to change that.
> >>>> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
> >>>> and store again all the vmcoreinfo after crash. What do you think?
> >>> Well, then it will make makedumpfile segfault happen too when execute
> >>> below command in 1st kernel if it existed:
> >>> 	makedumpfile --mem-usage /proc/kcore
> >> Yes, if the initial vmcoreinfo data was modified before "makedumpfile --mem-usage", it might happen,
> >> after all the system is going something wrong. And that's why we deploy kdump service at the very
> >> beginning when the system has a low possibility of going wrong.
> >>
> >> But we have to guarantee kdump vmcore can be generated correctly as possible as it can.
> >>
> >>> So we still need to face that problem and need fix it. vmcoreinfo_note
> >>> is in kernel data area, how does module intrude into this area? And can
> >>> we fix the module code?
> >>>
> >> Bugs always exist in products, we can't know what will happen and fix all the errors,
> >> that's why we need kdump.
> >>
> >> I think the following update should guarantee the correct vmcoreinfo for kdump.
> > I'm still not convinced. I would probably have more trust in a clean
> > kernel (after boot) than a kernel that has already crashed (presumably
> > because of a serious bug). How can be reliability improved by running
> > more code in unsafe environment?
> 
> Correct, I realized that, so used crc32 to protect the original data,
> but since Eric left a more reasonable idea, I will try that later.
> 
> >
> > If some code overwrites reserved areas (such as vmcoreinfo), then it's
> > seriously buggy. And in my opinion, it is more difficult to identify
> > such bugs if they are masked by re-initializing vmcoreinfo after crash.
> > In fact, if makedumpfile in the kexec'ed kernel complains that it
> > didn't find valid VMCOREINFO content, that's already a hint.
> >
> > As a side note, if you're debugging a vmcoreinfo corruption, it's
> > possible to use a standalone VMCOREINFO file with makedumpfile, so you
> > can pre-generate it and save it in the kdump initrd.
> >
> > In short, I don't see a compelling case for this change.
> 
> E.g. 1) wrong code overwrites vmcoreinfo_data; 2) further crashes the
> system; 3) trigger kdump, then we obviously will fail to recognize the
> crash context correctly due to the corrupted vmcoreinfo.  Everyone
> will get confused if met such unfortunate customer-side issue.
> 
> Although it's corner case, if it's easy to fix, then I think we better do it.
> 
> Now except for vmcoreinfo, all the crash data is well protected (including
> cpu note which is fully updated in the crash path, thus its correctness is
> guaranteed).

Hm, I think we shouldn't combine the two things.

Protecting VMCOREINFO with SHA (just as the other information passed to
the secondary kernel) sounds right to me. Re-creating the info while
the kernel is already crashing does not sound particularly good.

Yes, your patch may help in some scenarios, but in general it also
increases the amount of code that must reliably work in a crashed
environment. I can still recall why the LKCD approach (save the dump
directly from the crashed kernel) was abandoned...

Apart from that, there's a lot of other information that might be corrupted
(e.g. the purgatory code, the elfcorehdr, the secondary kernel, or the initrd).

Why is this VMCOREINFO so special?

Regards,
Petr Tesarik

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-20 13:04               ` Petr Tesarik
@ 2017-03-20 19:15                 ` Eric W. Biederman
  -1 siblings, 0 replies; 20+ messages in thread
From: Eric W. Biederman @ 2017-03-20 19:15 UTC (permalink / raw)
  To: Petr Tesarik
  Cc: Xunlei Pang, Baoquan He, kexec, xlpang, linux-kernel, akpm, Dave Young

Petr Tesarik <ptesarik@suse.cz> writes:

> On Mon, 20 Mar 2017 10:17:42 +0800
> Xunlei Pang <xpang@redhat.com> wrote:
>
>> On 03/19/2017 at 02:23 AM, Petr Tesarik wrote:
>> > On Thu, 16 Mar 2017 21:40:58 +0800
>> > Xunlei Pang <xpang@redhat.com> wrote:
>> >
>> >> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
>> >>> On 03/16/17 at 08:36pm, Xunlei Pang wrote:
>> >>>> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
>> >>>>> Hi Xunlei,
>> >>>>>
>> >>>>> Did you really see this ever happened? Because the vmcore size estimate
>> >>>>> feature, namely --mem-usage option of makedumpfile, depends on the
>> >>>>> vmcoreinfo in 1st kernel, your change will break it.
>> >>>> Hi Baoquan,
>> >>>>
>> >>>> I can reproduce it using a kernel module which modifies the vmcoreinfo,
>> >>>> so it's a problem can actually happen.
>> >>>>
>> >>>>> If not, it could be not good to change that.
>> >>>> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
>> >>>> and store again all the vmcoreinfo after crash. What do you think?
>> >>> Well, then it will make makedumpfile segfault happen too when execute
>> >>> below command in 1st kernel if it existed:
>> >>> 	makedumpfile --mem-usage /proc/kcore
>> >> Yes, if the initial vmcoreinfo data was modified before "makedumpfile --mem-usage", it might happen,
>> >> after all the system is going something wrong. And that's why we deploy kdump service at the very
>> >> beginning when the system has a low possibility of going wrong.
>> >>
>> >> But we have to guarantee kdump vmcore can be generated correctly as possible as it can.
>> >>
>> >>> So we still need to face that problem and need fix it. vmcoreinfo_note
>> >>> is in kernel data area, how does module intrude into this area? And can
>> >>> we fix the module code?
>> >>>
>> >> Bugs always exist in products, we can't know what will happen and fix all the errors,
>> >> that's why we need kdump.
>> >>
>> >> I think the following update should guarantee the correct vmcoreinfo for kdump.
>> > I'm still not convinced. I would probably have more trust in a clean
>> > kernel (after boot) than a kernel that has already crashed (presumably
>> > because of a serious bug). How can be reliability improved by running
>> > more code in unsafe environment?
>> 
>> Correct, I realized that, so used crc32 to protect the original data,
>> but since Eric left a more reasonable idea, I will try that later.
>> 
>> >
>> > If some code overwrites reserved areas (such as vmcoreinfo), then it's
>> > seriously buggy. And in my opinion, it is more difficult to identify
>> > such bugs if they are masked by re-initializing vmcoreinfo after crash.
>> > In fact, if makedumpfile in the kexec'ed kernel complains that it
>> > didn't find valid VMCOREINFO content, that's already a hint.
>> >
>> > As a side note, if you're debugging a vmcoreinfo corruption, it's
>> > possible to use a standalone VMCOREINFO file with makedumpfile, so you
>> > can pre-generate it and save it in the kdump initrd.
>> >
>> > In short, I don't see a compelling case for this change.
>> 
>> E.g. 1) wrong code overwrites vmcoreinfo_data; 2) further crashes the
>> system; 3) trigger kdump, then we obviously will fail to recognize the
>> crash context correctly due to the corrupted vmcoreinfo.  Everyone
>> will get confused if met such unfortunate customer-side issue.
>> 
>> Although it's corner case, if it's easy to fix, then I think we better do it.
>> 
>> Now except for vmcoreinfo, all the crash data is well protected (including
>> cpu note which is fully updated in the crash path, thus its correctness is
>> guaranteed).
>
> Hm, I think we shouldn't combine the two things.
>
> Protecting VMCOREINFO with SHA (just as the other information passed to
> the secondary kernel) sounds right to me. Re-creating the info while
> the kernel is already crashing does not sound particularly good.
>
> Yes, your patch may help in some scenarios, but in general it also
> increases the amount of code that must reliably work in a crashed
> environment. I can still recall why the LKCD approach (save the dump
> directly from the crashed kernel) was abandoned...
>
> Apart, there's a lot of other information that might be corrupted (e.g.
> the purgatory code, elfcorehdr, secondary kernel, or the initrd).
>
> Why is this VMCOREINFO so special?

Petr, I generally agree with you.  We need to minimise what happens after
a panic.

I don't know if you saw my comment on the v2 of this patchset, but the
core issue I saw with VMCOREINFO is that the data appears to be stored
in the .bss (of the kernel that will crash) instead of in the pages
managed by kexec.  Which means they are not protected like other pages.

That is the issue I have asked to be addressed first.

I also agree that we very much want to minimise what happens during a
crash, which I suspect means that we will want to remove the CRASHTIME
variable from VMCOREINFO.  I may be wrong, but I think that is probably
enough to make the strings constant.

Eric

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
@ 2017-03-20 19:15                 ` Eric W. Biederman
  0 siblings, 0 replies; 20+ messages in thread
From: Eric W. Biederman @ 2017-03-20 19:15 UTC (permalink / raw)
  To: Petr Tesarik
  Cc: Baoquan He, Xunlei Pang, xlpang, linux-kernel, akpm, Dave Young, kexec

Petr Tesarik <ptesarik@suse.cz> writes:

> On Mon, 20 Mar 2017 10:17:42 +0800
> Xunlei Pang <xpang@redhat.com> wrote:
>
>> On 03/19/2017 at 02:23 AM, Petr Tesarik wrote:
>> > On Thu, 16 Mar 2017 21:40:58 +0800
>> > Xunlei Pang <xpang@redhat.com> wrote:
>> >
>> >> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
>> >>> On 03/16/17 at 08:36pm, Xunlei Pang wrote:
>> >>>> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
>> >>>>> Hi Xunlei,
>> >>>>>
>> >>>>> Did you really see this ever happened? Because the vmcore size estimate
>> >>>>> feature, namely --mem-usage option of makedumpfile, depends on the
>> >>>>> vmcoreinfo in 1st kernel, your change will break it.
>> >>>> Hi Baoquan,
>> >>>>
>> >>>> I can reproduce it using a kernel module which modifies the vmcoreinfo,
>> >>>> so it's a problem can actually happen.
>> >>>>
>> >>>>> If not, it could be not good to change that.
>> >>>> That's a good point, then I guess we can keep the crash_save_vmcoreinfo_init(),
>> >>>> and store again all the vmcoreinfo after crash. What do you think?
>> >>> Well, then it will make makedumpfile segfault happen too when execute
>> >>> below command in 1st kernel if it existed:
>> >>> 	makedumpfile --mem-usage /proc/kcore
>> >> Yes, if the initial vmcoreinfo data was modified before "makedumpfile --mem-usage", it might happen,
>> >> after all the system is going something wrong. And that's why we deploy kdump service at the very
>> >> beginning when the system has a low possibility of going wrong.
>> >>
>> >> But we have to guarantee kdump vmcore can be generated correctly as possible as it can.
>> >>
>> >>> So we still need to face that problem and need fix it. vmcoreinfo_note
>> >>> is in kernel data area, how does module intrude into this area? And can
>> >>> we fix the module code?
>> >>>
>> >> Bugs always exist in products, we can't know what will happen and fix all the errors,
>> >> that's why we need kdump.
>> >>
>> >> I think the following update should guarantee the correct vmcoreinfo for kdump.
>> > I'm still not convinced. I would probably have more trust in a clean
>> > kernel (after boot) than a kernel that has already crashed (presumably
>> > because of a serious bug). How can be reliability improved by running
>> > more code in unsafe environment?
>> 
>> Correct, I realized that, so used crc32 to protect the original data,
>> but since Eric left a more reasonable idea, I will try that later.
>> 
>> >
>> > If some code overwrites reserved areas (such as vmcoreinfo), then it's
>> > seriously buggy. And in my opinion, it is more difficult to identify
>> > such bugs if they are masked by re-initializing vmcoreinfo after crash.
>> > In fact, if makedumpfile in the kexec'ed kernel complains that it
>> > didn't find valid VMCOREINFO content, that's already a hint.
>> >
>> > As a side note, if you're debugging a vmcoreinfo corruption, it's
>> > possible to use a standalone VMCOREINFO file with makedumpfile, so you
>> > can pre-generate it and save it in the kdump initrd.
>> >
>> > In short, I don't see a compelling case for this change.
>> 
>> E.g. 1) wrong code overwrites vmcoreinfo_data; 2) further crashes the
>> system; 3) trigger kdump, then we obviously will fail to recognize the
>> crash context correctly due to the corrupted vmcoreinfo.  Everyone
>> will get confused if met such unfortunate customer-side issue.
>> 
>> Although it's corner case, if it's easy to fix, then I think we better do it.
>> 
>> Now except for vmcoreinfo, all the crash data is well protected (including
>> cpu note which is fully updated in the crash path, thus its correctness is
>> guaranteed).
>
> Hm, I think we shouldn't combine the two things.
>
> Protecting VMCOREINFO with SHA (just as the other information passed to
> the secondary kernel) sounds right to me. Re-creating the info while
> the kernel is already crashing does not sound particularly good.
>
> Yes, your patch may help in some scenarios, but in general it also
> increases the amount of code that must reliably work in a crashed
> environment. I can still recall why the LKCD approach (save the dump
> directly from the crashed kernel) was abandoned...
>
> Apart, there's a lot of other information that might be corrupted (e.g.
> the purgatory code, elfcorehdr, secondary kernel, or the initrd).
>
> Why is this VMCOREINFO so special?

Petr, I generally agree with you.  We need to minimise what happens after
a panic.

I don't know if you saw my comment on the v2 of this patchset, but the
core issue I saw with VMCOREINFO is that the data appears to be stored
in the .bss (of the kernel that will crash) instead of in the pages
managed by kexec, which means it is not protected like the other pages.

That is the issue I have asked to be addressed first.

I also agree that we very much want to minimise what happens during a
crash, which I suspect means that we will want to remove the CRASHTIME
variable from VMCOREINFO.  I may be wrong, but I think that is probably
enough to make the strings a constant.

Eric

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


* Re: [PATCH] kexec: Update vmcoreinfo after crash happened
  2017-03-20 13:04               ` Petr Tesarik
@ 2017-03-21  2:05                 ` Xunlei Pang
  0 siblings, 0 replies; 20+ messages in thread
From: Xunlei Pang @ 2017-03-21  2:05 UTC (permalink / raw)
  To: Petr Tesarik
  Cc: Baoquan He, kexec, xlpang, linux-kernel, Eric Biederman, akpm,
	Dave Young

On 03/20/2017 at 09:04 PM, Petr Tesarik wrote:
> On Mon, 20 Mar 2017 10:17:42 +0800
> Xunlei Pang <xpang@redhat.com> wrote:
>
>> On 03/19/2017 at 02:23 AM, Petr Tesarik wrote:
>>> On Thu, 16 Mar 2017 21:40:58 +0800
>>> Xunlei Pang <xpang@redhat.com> wrote:
>>>
>>>> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
>>>>> On 03/16/17 at 08:36pm, Xunlei Pang wrote:
>>>>>> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
>>>>>>> Hi Xunlei,
>>>>>>>
>>>>>>> Did you ever really see this happen? Because the vmcore size estimate
>>>>>>> feature, namely the --mem-usage option of makedumpfile, depends on the
>>>>>>> vmcoreinfo in the 1st kernel, your change will break it.
>>>>>> Hi Baoquan,
>>>>>>
>>>>>> I can reproduce it using a kernel module which modifies the vmcoreinfo,
>>>>>> so it's a problem that can actually happen.
>>>>>>
>>>>>>> If not, it might not be good to change that.
>>>>>> That's a good point; then I guess we can keep crash_save_vmcoreinfo_init(),
>>>>>> and store all the vmcoreinfo again after a crash. What do you think?
>>>>> Well, then the makedumpfile segfault will also happen when executing the
>>>>> command below in the 1st kernel, if the corruption exists:
>>>>> 	makedumpfile --mem-usage /proc/kcore
>>>> Yes, if the initial vmcoreinfo data was modified before "makedumpfile
>>>> --mem-usage", that might happen; after all, something is already going
>>>> wrong with the system. That's why we deploy the kdump service at the very
>>>> beginning, when the system has a low possibility of going wrong.
>>>>
>>>> But we have to guarantee that the kdump vmcore can be generated as
>>>> correctly as possible.
>>>>
>>>>> So we still need to face that problem and fix it. vmcoreinfo_note
>>>>> is in the kernel data area; how does a module intrude into this area?
>>>>> And can we fix the module code?
>>>>>
>>>> Bugs always exist in products; we can't foresee every failure and fix all
>>>> the errors in advance, and that's why we need kdump.
>>>>
>>>> I think the following update should guarantee the correct vmcoreinfo for kdump.
>>> I'm still not convinced. I would probably have more trust in a clean
>>> kernel (after boot) than in a kernel that has already crashed (presumably
>>> because of a serious bug). How can reliability be improved by running
>>> more code in an unsafe environment?
>> Correct, I realized that, so I used crc32 to protect the original data,
>> but since Eric left a more reasonable idea, I will try that later.
>>
>>> If some code overwrites reserved areas (such as vmcoreinfo), then it's
>>> seriously buggy. And in my opinion, it is more difficult to identify
>>> such bugs if they are masked by re-initializing vmcoreinfo after crash.
>>> In fact, if makedumpfile in the kexec'ed kernel complains that it
>>> didn't find valid VMCOREINFO content, that's already a hint.
>>>
>>> As a side note, if you're debugging a vmcoreinfo corruption, it's
>>> possible to use a standalone VMCOREINFO file with makedumpfile, so you
>>> can pre-generate it and save it in the kdump initrd.
>>>
>>> In short, I don't see a compelling case for this change.
>> E.g.: 1) wrong code overwrites vmcoreinfo_data; 2) the system further
>> crashes; 3) kdump is triggered, and then we will obviously fail to
>> recognize the crash context correctly due to the corrupted vmcoreinfo.
>> Everyone will get confused when meeting such an unfortunate
>> customer-side issue.
>>
>> Although it's a corner case, if it's easy to fix, then I think we had better do it.
>>
>> Now, except for vmcoreinfo, all the crash data is well protected (including
>> the cpu notes, which are fully updated in the crash path, so their
>> correctness is guaranteed).
> Hm, I think we shouldn't combine the two things.
>
> Protecting VMCOREINFO with SHA (just as the other information passed to
> the secondary kernel) sounds right to me. Re-creating the info while
> the kernel is already crashing does not sound particularly good.
>
> Yes, your patch may help in some scenarios, but in general it also
> increases the amount of code that must reliably work in a crashed
> environment. I can still recall why the LKCD approach (save the dump
> directly from the crashed kernel) was abandoned...

Agreed on this point; there is nearly no extra code added to the crash path
in v3, maybe you can have a quick look.

>
> Apart, there's a lot of other information that might be corrupted (e.g.
> the purgatory code, elfcorehdr, secondary kernel, or the initrd).

Those are located in the crash memory, so they can be protected by either
SHA or the arch_kexec_protect_crashkres() mechanism (if implemented).

>
> Why is this VMCOREINFO so special?

It is also a chunk passed to the 2nd kernel, like the above-mentioned
information, so we had better treat it the same way.

Regards,
Xunlei



end of thread, other threads:[~2017-03-21  2:02 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-16 12:16 [PATCH] kexec: Update vmcoreinfo after crash happened Xunlei Pang
2017-03-16 12:27 ` Baoquan He
2017-03-16 12:36   ` Xunlei Pang
2017-03-16 13:18     ` Baoquan He
2017-03-16 13:40       ` Xunlei Pang
2017-03-18 18:23         ` Petr Tesarik
2017-03-20  2:17           ` Xunlei Pang
2017-03-20 13:04             ` Petr Tesarik
2017-03-20 19:15               ` Eric W. Biederman
2017-03-21  2:05               ` Xunlei Pang
