linux-kernel.vger.kernel.org archive mirror
* [Patch] 4/5 in support of hot-add memory x86_64 fix kernel mapping code
@ 2006-07-29  2:52 keith mannthey
  2006-07-29 16:32 ` [discuss] " Andi Kleen
  0 siblings, 1 reply; 3+ messages in thread
From: keith mannthey @ 2006-07-29  2:52 UTC (permalink / raw)
  To: lkml; +Cc: lhms-devel, discuss, Andi Kleen, andrew, kame, dave hansen, konrad

[-- Attachment #1: Type: text/plain, Size: 538 bytes --]

Hello All,
  
  phys_pud_init is broken when using it at runtime with some offsets.
It currently only maps one pud entry worth of pages while trampling any
mappings that may have existed on the pmd_page :( 

This patch makes it safe to map addresses that are not pmd_page
aligned and makes the code generally more robust.

This is a critical patch for hot-add and x86_64. 

Thanks to Dave Hansen for input on the look and feel of this patch and
its constructs.

This was built against 2.6.18-rc2.

Signed-off-by:  Keith Mannthey <kmannth@us.ibm.com>

[-- Attachment #2: patch-2.6.18-rc2-mapping-fix --]
[-- Type: text/x-patch, Size: 2578 bytes --]

diff -urN orig/arch/x86_64/mm/init.c work/arch/x86_64/mm/init.c
--- orig/arch/x86_64/mm/init.c	2006-07-28 13:57:35.000000000 -0400
+++ work/arch/x86_64/mm/init.c	2006-07-28 14:38:53.000000000 -0400
@@ -252,9 +252,11 @@
 static void __meminit
 phys_pmd_init(pmd_t *pmd, unsigned long address, unsigned long end)
 {
-	int i;
+	int i = pmd_index(address);
 
-	for (i = 0; i < PTRS_PER_PMD; pmd++, i++, address += PMD_SIZE) {
+	pmd = pmd + i;
+
+	for (; i < PTRS_PER_PMD; pmd++, i++, address += PMD_SIZE) {
 		unsigned long entry;
 
 		if (address >= end) {
@@ -263,6 +265,11 @@
 					set_pmd(pmd, __pmd(0));
 			break;
 		}
+		
+		if (pmd_val(*pmd)) {
+			printk (KERN_ERR "%s trying to trample pte entry \
+				%08lx@%08lx\n",__func__,pmd_val(*pmd),address);
+		}
 		entry = _PAGE_NX|_PAGE_PSE|_KERNPG_TABLE|_PAGE_GLOBAL|address;
 		entry &= __supported_pte_mask;
 		set_pmd(pmd, __pmd(entry));
@@ -272,45 +279,42 @@
 static void __meminit
 phys_pmd_update(pud_t *pud, unsigned long address, unsigned long end)
 {
-	pmd_t *pmd = pmd_offset(pud, (unsigned long)__va(address));
-
-	if (pmd_none(*pmd)) {
-		spin_lock(&init_mm.page_table_lock);
-		phys_pmd_init(pmd, address, end);
-		spin_unlock(&init_mm.page_table_lock);
-		__flush_tlb_all();
-	}
+	pmd_t *pmd = pmd_offset(pud,0);
+	spin_lock(&init_mm.page_table_lock);
+	phys_pmd_init(pmd, address, end);
+	spin_unlock(&init_mm.page_table_lock);
+	__flush_tlb_all();
 }
 
 static void __meminit phys_pud_init(pud_t *pud, unsigned long address, unsigned long end)
 { 
-	long i = pud_index(address);
+	int i = pud_index(address);
 
 	pud = pud + i;
 
-	if (after_bootmem && pud_val(*pud)) {
-		phys_pmd_update(pud, address, end);
-		return;
-	}
-
-	for (; i < PTRS_PER_PUD; pud++, i++) {
+	for (; i < PTRS_PER_PUD; pud++, i++, \
+			address = (address & PUD_MASK) + PUD_SIZE ) {
 		int map; 
-		unsigned long paddr, pmd_phys;
+		unsigned long pmd_phys;
 		pmd_t *pmd;
 
-		paddr = (address & PGDIR_MASK) + i*PUD_SIZE;
-		if (paddr >= end)
+		if (address >= end)
 			break;
 
-		if (!after_bootmem && !e820_any_mapped(paddr, paddr+PUD_SIZE, 0)) {
+		if (!after_bootmem && !e820_any_mapped(address, address+PUD_SIZE, 0)) {
 			set_pud(pud, __pud(0)); 
 			continue;
 		} 
 
+		if (pud_val(*pud)) {
+			phys_pmd_update(pud, address, end);
+			continue;
+		}
+
 		pmd = alloc_low_page(&map, &pmd_phys);
 		spin_lock(&init_mm.page_table_lock);
 		set_pud(pud, __pud(pmd_phys | _KERNPG_TABLE));
-		phys_pmd_init(pmd, paddr, end);
+		phys_pmd_init(pmd, address, end);
 		spin_unlock(&init_mm.page_table_lock);
 		unmap_low_page(map);
 	}


* Re: [discuss] [Patch] 4/5 in support of hot-add memory x86_64 fix kernel mapping code
  2006-07-29  2:52 [Patch] 4/5 in support of hot-add memory x86_64 fix kernel mapping code keith mannthey
@ 2006-07-29 16:32 ` Andi Kleen
  2006-07-29 20:11   ` keith mannthey
  0 siblings, 1 reply; 3+ messages in thread
From: Andi Kleen @ 2006-07-29 16:32 UTC (permalink / raw)
  To: discuss, kmannth; +Cc: lkml, lhms-devel, andrew, kame, dave hansen, konrad

On Saturday 29 July 2006 04:52, keith mannthey wrote:
> Hello All,
>
>   phys_pud_init is broken when using it at runtime with some offsets.
> It currently only maps one pud entry worth of pages while trampling any
> mappings that may have existed on the pmd_page :(

To print x86-64 ptes you need a %016lx (or just %lx) 

it would be cleaner to recompute pmd inside the loop based on i
and use a standard for() 

It is unclear why you hardcode 0 as the address in phys_pmd_update
when a real address is passed in.

-Andi



* Re: [discuss] [Patch] 4/5 in support of hot-add memory x86_64 fix kernel mapping code
  2006-07-29 16:32 ` [discuss] " Andi Kleen
@ 2006-07-29 20:11   ` keith mannthey
  0 siblings, 0 replies; 3+ messages in thread
From: keith mannthey @ 2006-07-29 20:11 UTC (permalink / raw)
  To: Andi Kleen; +Cc: discuss, lkml, lhms-devel, andrew, kame, dave hansen, konrad

On Sat, 2006-07-29 at 18:32 +0200, Andi Kleen wrote:
> On Saturday 29 July 2006 04:52, keith mannthey wrote:
> > Hello All,
> >
> >   phys_pud_init is broken when using it at runtime with some offsets.
> > It currently only maps one pud entry worth of pages while trampling any
> > mappings that may have existed on the pmd_page :(
> 
> To print x86-64 ptes you need a %016lx (or just %lx) 

Thanks. 
> it would be cleaner to recompute pmd inside the loop based on i
> and use a standard for() 

Sure, I can do away with the pmd++ and pud++ in the for loops.
> 
> It is unclear why you hardcode 0 as address in phys_pmd_update
> when a real address is passed in

When the pud is already set there are 2 options.

1. You need to initialize pmds at the start of the pmd_page. 
2. You need to initialize pmds starting at some offset of the pmd_page.

When calling phys_pmd_init you need to pass it the start of the pmd_page,
not some random pmd within the page.
pmd = alloc_low_page(&map, &pmd_phys);
always gives us the start of the pmd_page.  I keep the same idea for the
update path as well.
pmd_offset(pud,0) does just this. Maybe there is a better macro to use?

Things could be different in terms of the flexibility of what you pass
in (more changes would be involved), but this set of changes seemed to
be minimal tampering with the current code.

Also, in general, is there some reason the kernel mapping code is arch
specific?  This all seems to be pretty generic.

Thanks,
  Keith 


