* [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: HATAYAMA Daisuke @ 2013-11-28  8:48 UTC
  To: Vivek Goyal
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

Hello Vivek,

Here is a patch for the mmap() failure on /proc/vmcore.
Could you try it on the problematic system?

This patch doesn't copy partial pages into the 2nd kernel; it only
prepares vmcore objects for the respective partial pages so that
remap_pfn_range() is invoked on each partial page individually.
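
To illustrate, each PT_LOAD range [p_offset, p_offset + p_memsz) gets
decomposed into at most three vmcore objects: a leading partial page,
a page-aligned middle chunk, and a trailing partial page. Here is a
small stand-alone user-space sketch of the same arithmetic (only an
illustration, not the kernel code itself; PAGE_SIZE is assumed to be
4096):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

static uint64_t page_up(uint64_t x)   { return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1); }
static uint64_t page_down(uint64_t x) { return x & ~(PAGE_SIZE - 1); }

/* Print the vmcore objects created for the range [start, end). */
static void split(uint64_t start, uint64_t end)
{
	uint64_t start_up = page_up(start), start_down = page_down(start);
	uint64_t end_up = page_up(end), end_down = page_down(end);

	if (start != start_down)	/* leading partial page */
		printf("head: paddr=%#llx size=%llu\n",
		       (unsigned long long)start_down, PAGE_SIZE);
	if (start_up < end_down)	/* fully page-aligned middle chunk */
		printf("mid:  paddr=%#llx size=%llu\n",
		       (unsigned long long)start_up,
		       (unsigned long long)(end_down - start_up));
	if (end != end_up && end_down >= start_up)	/* trailing partial page */
		printf("tail: paddr=%#llx size=%llu\n",
		       (unsigned long long)end_down, PAGE_SIZE);
}

int main(void)
{
	split(0x1000, 0x5000);	/* aligned range: one middle chunk only */
	split(0x1800, 0x5400);	/* unaligned range: head, middle and tail */
	return 0;
}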

From c83dddd23be2a2972dcb3f252598c39abfa23078 Mon Sep 17 00:00:00 2001
From: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Date: Thu, 28 Nov 2013 14:51:22 +0900
Subject: [PATCH] vmcore: call remap_pfn_range() separately for respective
 partial pages

According to the report by Vivek in
https://lkml.org/lkml/2013/11/13/439, on some specific systems some
of the System RAM ranges don't end at a page boundary, and the later
part of the same page is used for some kind of ACPI data. As a
result, remap_pfn_range() on such a partial page fails if the mapping
range covers the boundary between the System RAM part and the ACPI
data part of the page, because track_pfn_remap() detects different
cache types for the two parts.

To resolve the issue, call remap_pfn_range() separately for each
partial page, rather than for a run of multiple consecutive pages
that doesn't start or end on a page boundary, by creating vmcore
objects for the individual partial pages.

This patch does not change the shape of /proc/vmcore visible from user-land.

Reported-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
---
 fs/proc/vmcore.c | 108 ++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 84 insertions(+), 24 deletions(-)

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 9100d69..e396a1d 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -816,26 +816,56 @@ static int __init process_ptload_program_headers_elf64(char *elfptr,
 	vmcore_off = elfsz + elfnotes_sz;
 
 	for (i = 0; i < ehdr_ptr->e_phnum; i++, phdr_ptr++) {
-		u64 paddr, start, end, size;
+		u64 start, end, size, rest;
+		u64 start_up, start_down, end_up, end_down;
 
 		if (phdr_ptr->p_type != PT_LOAD)
 			continue;
 
-		paddr = phdr_ptr->p_offset;
-		start = rounddown(paddr, PAGE_SIZE);
-		end = roundup(paddr + phdr_ptr->p_memsz, PAGE_SIZE);
-		size = end - start;
+		start = phdr_ptr->p_offset;
+		start_up = roundup(start, PAGE_SIZE);
+		start_down = rounddown(start, PAGE_SIZE);
+		end = phdr_ptr->p_offset + phdr_ptr->p_memsz;
+		end_up = roundup(end, PAGE_SIZE);
+		end_down = rounddown(end, PAGE_SIZE);
+		size = end_up - start_down;
+		rest = phdr_ptr->p_memsz;
+
+		if (!PAGE_ALIGNED(start)) {
+			new = get_new_element();
+			if (!new)
+				return -ENOMEM;
+			new->paddr = start_down;
+			new->size = PAGE_SIZE;
+			list_add_tail(&new->list, vc_list);
+			rest -= min(start_up, end) - start;
+		}
 
 		/* Add this contiguous chunk of memory to vmcore list.*/
-		new = get_new_element();
-		if (!new)
-			return -ENOMEM;
-		new->paddr = start;
-		new->size = size;
-		list_add_tail(&new->list, vc_list);
+		if (rest > 0 && start_up < end_down) {
+			new = get_new_element();
+			if (!new)
+				return -ENOMEM;
+			new->paddr = start_up;
+			new->size = end_down - start_up;
+			list_add_tail(&new->list, vc_list);
+			rest -= end_down - start_up;
+		}
+
+		if (rest > 0) {
+			new = get_new_element();
+			if (!new)
+				return -ENOMEM;
+			new->paddr = end_down;
+			new->size = PAGE_SIZE;
+			list_add_tail(&new->list, vc_list);
+			rest -= end - end_down;
+		}
+
+		WARN_ON(rest > 0);
 
 		/* Update the program header offset. */
-		phdr_ptr->p_offset = vmcore_off + (paddr - start);
+		phdr_ptr->p_offset = vmcore_off + (start - start_down);
 		vmcore_off = vmcore_off + size;
 	}
 	return 0;
@@ -859,26 +889,56 @@ static int __init process_ptload_program_headers_elf32(char *elfptr,
 	vmcore_off = elfsz + elfnotes_sz;
 
 	for (i = 0; i < ehdr_ptr->e_phnum; i++, phdr_ptr++) {
-		u64 paddr, start, end, size;
+		u64 start, end, size, rest;
+		u64 start_up, start_down, end_up, end_down;
 
 		if (phdr_ptr->p_type != PT_LOAD)
 			continue;
 
-		paddr = phdr_ptr->p_offset;
-		start = rounddown(paddr, PAGE_SIZE);
-		end = roundup(paddr + phdr_ptr->p_memsz, PAGE_SIZE);
-		size = end - start;
+		start = phdr_ptr->p_offset;
+		start_up = roundup(start, PAGE_SIZE);
+		start_down = rounddown(start, PAGE_SIZE);
+		end = phdr_ptr->p_offset + phdr_ptr->p_memsz;
+		end_up = roundup(end, PAGE_SIZE);
+		end_down = rounddown(end, PAGE_SIZE);
+		rest = phdr_ptr->p_memsz;
+		size = end_up - start_down;
+
+		if (!PAGE_ALIGNED(start)) {
+			new = get_new_element();
+			if (!new)
+				return -ENOMEM;
+			new->paddr = start_down;
+			new->size = PAGE_SIZE;
+			list_add_tail(&new->list, vc_list);
+			rest -= min(start_up, end) - start;
+		}
 
 		/* Add this contiguous chunk of memory to vmcore list.*/
-		new = get_new_element();
-		if (!new)
-			return -ENOMEM;
-		new->paddr = start;
-		new->size = size;
-		list_add_tail(&new->list, vc_list);
+		if (rest > 0 && start_up < end_down) {
+			new = get_new_element();
+			if (!new)
+				return -ENOMEM;
+			new->paddr = start_up;
+			new->size = end_down - start_up;
+			list_add_tail(&new->list, vc_list);
+			rest -= end_down - start_up;
+		}
+
+		if (rest > 0) {
+			new = get_new_element();
+			if (!new)
+				return -ENOMEM;
+			new->paddr = end_down;
+			new->size = PAGE_SIZE;
+			list_add_tail(&new->list, vc_list);
+			rest -= end - end_down;
+		}
+
+		WARN_ON(rest > 0);
 
 		/* Update the program header offset */
-		phdr_ptr->p_offset = vmcore_off + (paddr - start);
+		phdr_ptr->p_offset = vmcore_off + (start - start_down);
 		vmcore_off = vmcore_off + size;
 	}
 	return 0;
-- 
1.8.3.1

-- 
Thanks.
HATAYAMA, Daisuke



* Re: [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: Vivek Goyal @ 2013-12-02 15:27 UTC
  To: HATAYAMA Daisuke
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

On Thu, Nov 28, 2013 at 05:48:02PM +0900, HATAYAMA Daisuke wrote:
> Hello Vivek,
> 
> Here is a patch for the mmap() failure on /proc/vmcore.
> Could you try it on the problematic system?
>
> This patch doesn't copy partial pages into the 2nd kernel; it only
> prepares vmcore objects for the respective partial pages so that
> remap_pfn_range() is invoked on each partial page individually.

Hi Hatayama,

Thanks for the patch. OK, I see that the partial pages will be put
into separate calls to remap_oldmem_pfn_range(), and this time it
should succeed.

I am wondering what you think about your old approach of copying
only the relevant old memory into a new kernel page in the new
kernel. I feel a little uncomfortable with the idea of rounding down
the start and rounding up the end to page boundaries and then
accessing the full page through the oldmem interface. A safer
approach might be to allocate a page in the new kernel, read *only*
those bytes reported by the ELF header, and fill the rest of the
page with zeros.
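
Something along these lines, perhaps (just a rough sketch of what I
mean; copy_partial_page() is a made-up name, and I am assuming the
existing read_from_oldmem() helper in fs/proc/vmcore.c can be reused
for the copy):

/*
 * Copy the System-RAM part of a partial page, [paddr, paddr + size),
 * into a freshly allocated page in the new kernel.  Bytes outside
 * that range stay zero.  Returns the new page's virtual address,
 * or NULL on failure.
 */
static void * __init copy_partial_page(u64 paddr, size_t size)
{
	u64 offset = paddr & (PAGE_SIZE - 1);	/* offset within the page */
	u64 pos = paddr;
	char *buf;

	buf = (char *)get_zeroed_page(GFP_KERNEL);	/* holes read back as zeros */
	if (!buf)
		return NULL;

	if (read_from_oldmem(buf + offset, size, &pos, 0) < 0) {
		free_page((unsigned long)buf);
		return NULL;
	}
	return buf;
}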

Thanks
Vivek



* Re: [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: HATAYAMA Daisuke @ 2013-12-03  1:18 UTC
  To: Vivek Goyal
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

(2013/12/03 0:27), Vivek Goyal wrote:
> [..]
>
> I am wondering what you think about your old approach of copying
> only the relevant old memory into a new kernel page in the new
> kernel. I feel a little uncomfortable with the idea of rounding down
> the start and rounding up the end to page boundaries and then
> accessing the full page through the oldmem interface. A safer
> approach might be to allocate a page in the new kernel, read *only*
> those bytes reported by the ELF header, and fill the rest of the
> page with zeros.

Even if we copy partial pages into the 2nd kernel, we still need to
use ioremap() on them once, and ioremap() of a single page is
essentially the same as remap_pfn_range() of a single page. There
seems to be no difference in safety between them.
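
For reference, reading old memory on x86_64 already goes through a
full-page mapping; from my reading of arch/x86/kernel/crash_dump_64.c,
copy_oldmem_page() works roughly as follows (simplified, error paths
trimmed):

ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
			 unsigned long offset, int userbuf)
{
	void *vaddr;

	if (!csize)
		return 0;

	/* Map the whole old-memory page, cached. */
	vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
	if (!vaddr)
		return -ENOMEM;

	/* Copy only the requested bytes out of the mapped page. */
	if (userbuf) {
		if (copy_to_user(buf, vaddr + offset, csize)) {
			iounmap(vaddr);
			return -EFAULT;
		}
	} else
		memcpy(buf, vaddr + offset, csize);

	iounmap(vaddr);
	return csize;
}

So the copy path, too, establishes a mapping for the full page before
touching any byte in it.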

Also, the current /proc/vmcore shows user-land tools a shape with
holes that are not filled with zeros, both for read() and for mmap().
If we adopt the copying approach without reading the data in the
holes, the shape of /proc/vmcore changes again.

-- 
Thanks.
HATAYAMA, Daisuke



* Re: [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: HATAYAMA Daisuke @ 2013-12-03  5:16 UTC
  To: Vivek Goyal
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

(2013/12/03 10:18), HATAYAMA Daisuke wrote:
> [..]
>
> Even if we copy partial pages into the 2nd kernel, we still need to
> use ioremap() on them once, and ioremap() of a single page is
> essentially the same as remap_pfn_range() of a single page. There
> seems to be no difference in safety between them.
>

I suspected some kind of pre-fetching could happen as soon as the
page table is created. But that is common to both of the cases above.
Then, as you say, it would be safer to read less data from the
non-System-RAM area. Copying seems better in our case.

Another concern with reading data from partial pages is the
possibility of undesirable hardware prefetch into the non-System-RAM
area. Would it be better to disable that?

> Also, the current /proc/vmcore shows user-land tools a shape with
> holes that are not filled with zeros, both for read() and for mmap().
> If we adopt the copying approach without reading the data in the
> holes, the shape of /proc/vmcore changes again.
>

So, the next patch will fill the data in the holes with zeros.

BTW, we now have the page cache interface implemented by Michael
Holzheu, but we have yet to use it on x86 because we've never needed
it so far. It would be natural to use it to read partial pages on
demand, but I also partly think it's not the proper time to start
using a new mechanism that needs more testing. What do you think?
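
(For reference, the on-demand path I mean is the fault handler; from
my reading of fs/proc/vmcore.c it works roughly as below, simplified,
with error handling and the s390-only guards trimmed:

static int mmap_vmcore_fault(struct vm_area_struct *vma,
			     struct vm_fault *vmf)
{
	struct address_space *mapping = vma->vm_file->f_mapping;
	pgoff_t index = vmf->pgoff;
	struct page *page;
	loff_t offset;
	int rc;

	/* Look up the page in the page cache, allocating it if needed. */
	page = find_or_create_page(mapping, index, GFP_KERNEL);
	if (!page)
		return VM_FAULT_OOM;

	if (!PageUptodate(page)) {
		/* First touch: fill the page from the dump on demand. */
		offset = (loff_t)index << PAGE_CACHE_SHIFT;
		rc = __read_vmcore(page_address(page), PAGE_SIZE, &offset, 0);
		if (rc < 0) {
			unlock_page(page);
			page_cache_release(page);
			return VM_FAULT_SIGBUS;
		}
		SetPageUptodate(page);
	}
	unlock_page(page);
	vmf->page = page;
	return 0;
}

So partial pages would be read and cached only when user-space first
touches them.)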

-- 
Thanks.
HATAYAMA, Daisuke



* Re: [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: Vivek Goyal @ 2013-12-03 15:03 UTC
  To: HATAYAMA Daisuke
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

On Tue, Dec 03, 2013 at 10:18:16AM +0900, HATAYAMA Daisuke wrote:
> [..]
>
> Even if we copy partial pages into the 2nd kernel, we still need to
> use ioremap() on them once, and ioremap() of a single page is
> essentially the same as remap_pfn_range() of a single page. There
> seems to be no difference in safety between them.

Hmm, that's a good point. So either way we will map the full page and
read parts of it.

> 
> Also, the current /proc/vmcore shows user-land tools a shape with
> holes that are not filled with zeros, both for read() and for mmap().
> If we adopt the copying approach without reading the data in the
> holes, the shape of /proc/vmcore changes again.

I would not worry about this, as the contents of those holes are
undefined. If we replace undefined data with zeros, it should not
break any application.

Thanks
Vivek


* Re: [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: Vivek Goyal @ 2013-12-03 15:12 UTC
  To: HATAYAMA Daisuke
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

On Tue, Dec 03, 2013 at 02:16:35PM +0900, HATAYAMA Daisuke wrote:

[..]
> > Even if we copy partial pages into the 2nd kernel, we still need to
> > use ioremap() on them once, and ioremap() of a single page is
> > essentially the same as remap_pfn_range() of a single page. There
> > seems to be no difference in safety between them.
> >
> 
> I suspected some kind of pre-fetching could happen as soon as the
> page table is created. But that is common to both of the cases above.
> Then, as you say, it would be safer to read less data from the
> non-System-RAM area. Copying seems better in our case.

If we map the page and read *only* those bytes advertised by the ELF
headers, filling the rest of the bytes with zeros, that seems like the
right thing to do. We would only be accessing what the ELF headers
export, not trying to read outside that range.

> 
> Another concern with reading data from partial pages is the
> possibility of undesirable hardware prefetch into the non-System-RAM
> area. Would it be better to disable that?

Maybe, if it becomes an issue. I think we can fix it when we actually
run into it.

> 
> > Also, the current /proc/vmcore shows user-land tools a shape with
> > holes that are not filled with zeros, both for read() and for mmap().
> > If we adopt the copying approach without reading the data in the
> > holes, the shape of /proc/vmcore changes again.
> >
> 
> So, the next patch will fill the data in the holes with zeros.
> 
> BTW, we now have the page cache interface implemented by Michael
> Holzheu, but we have yet to use it on x86 because we've never needed
> it so far. It would be natural to use it to read partial pages on
> demand, but I also partly think it's not the proper time to start
> using a new mechanism that needs more testing. What do you think?

Do we gain anything significant by using that interface? To me it
looks like it would just delay creation of the mappings for the
partial pages. And it does not save us any memory in the second
kernel, does it?

I would think that in the first pass we should keep it simple: copy
the partial pages into the second kernel's memory, read only the data
exported by the ELF headers, fill the rest of the page with zeros,
and adjust the /proc/vmcore ELF headers accordingly. That should do
it.

Thanks
Vivek


* Re: [PATCH] vmcore: call remap_pfn_range() separately for respective partial pages
From: HATAYAMA Daisuke @ 2013-12-04  9:05 UTC
  To: Vivek Goyal
  Cc: Eric W. Biederman, Atsushi Kumagai, Linux Kernel Mailing List, kexec

(2013/12/04 0:12), Vivek Goyal wrote:
> On Tue, Dec 03, 2013 at 02:16:35PM +0900, HATAYAMA Daisuke wrote:
>
> [..]
>>> [..]
>>
>> I suspected some kind of pre-fetching could happen as soon as the
>> page table is created. But that is common to both of the cases above.
>> Then, as you say, it would be safer to read less data from the
>> non-System-RAM area. Copying seems better in our case.
>
> If we map the page and read *only* those bytes advertised by the ELF
> headers, filling the rest of the bytes with zeros, that seems like the
> right thing to do. We would only be accessing what the ELF headers
> export, not trying to read outside that range.
>
>>
>> Another concern with reading data from partial pages is the
>> possibility of undesirable hardware prefetch into the non-System-RAM
>> area. Would it be better to disable that?
>
> Maybe, if it becomes an issue. I think we can fix it when we actually
> run into it.
>

I see.

>>> [..]
>>
>> So, the next patch will fill the data in the holes with zeros.
>>
>> BTW, we now have the page cache interface implemented by Michael
>> Holzheu, but we have yet to use it on x86 because we've never needed
>> it so far. It would be natural to use it to read partial pages on
>> demand, but I also partly think it's not the proper time to start
>> using a new mechanism that needs more testing. What do you think?
>
> Do we gain anything significant by using that interface? To me it
> looks like it would just delay creation of the mappings for the
> partial pages. And it does not save us any memory in the second
> kernel, does it?
>

The amount of memory for the partial pages seems not so problematic.
No, the number of partial pages is at most the number of System RAM
entries * 2, and I guess the number of entries doesn't exceed 100 even
in the worst case. That is at most 100 * 2 pages * 4 KiB = 800 KiB,
so less than 1 MiB even in the worst case.

The mechanism would be useful for platforms that have a large amount
of note segment data. But the per-cpu note segment on x86 is
relatively small, so the platforms where it matters are the ones with
a very large number of cpus, and those are still limited. Other archs
that have a large per-cpu note segment would already want the feature,
just as Michael Holzheu explained previously for some of their s390
platforms.

> I would think that in the first pass we should keep it simple: copy
> the partial pages into the second kernel's memory, read only the data
> exported by the ELF headers, fill the rest of the page with zeros,
> and adjust the /proc/vmcore ELF headers accordingly. That should do
> it.
>

Yes, I also want to keep it simple. I'll post a new patch tomorrow.

-- 
Thanks.
HATAYAMA, Daisuke


