* [RFC][PATCH 0/6] kcore clean up and enhance. v3
@ 2009-07-24  8:08 KAMEZAWA Hiroyuki
  2009-07-24  8:10 ` [RFC][PATCH 1/6] kcore: clean up to use generic list ops KAMEZAWA Hiroyuki
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: xiyou.wangcong, akpm, ralf, benh, lethal

Hi, back to RFC again. Two patches have been added.

Currently, /proc/kcore is not aware of the physical memory layout and has no
hooks for memory hotplug. I'm trying to fix that, and in doing so I ended up
writing several clean-up patches as well ;)

Currently, /proc/kcore has per-arch hooks for x86-32/64, sh, mips, ia64, and
powerpc-32/64. I know the x86-32/64 and ia64 memory layouts well, but I'm
not sure about the others, so I've CCed the maintainers; please let me know
if you have any concerns.
After this series, most of the arch-specific code can be removed.
(x86-64 and 64-bit mips seem to need something special.)

Patches
[1/6] kcore clean up/ use generic list ops
[2/6] kcore clean up/ add kclist type.
[3/6] kcore clean up/ unify per-arch vmalloc kclist_add()
[4/6] kcore clean up/ unify per-arch text range kclist_add()
[5/6] kcore enhance / use precise physical memory check and support hotplug.
[6/6] generic: rewrite walk_memory_resource() as walk_system_ram_range().

Any comments are welcome.
I'm sorry that I may not be able to reply quickly.

Thanks,
-Kame



* [RFC][PATCH 1/6] kcore: clean up to use generic list ops
  2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
@ 2009-07-24  8:10 ` KAMEZAWA Hiroyuki
  2009-07-24  8:11 ` [RFC][PATCH 2/6] kcore : add type attribute to kclist KAMEZAWA Hiroyuki
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:10 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

/proc/kcore uses its own list handling code. It's better to use the
generic list operations.

Also, read_kcore() uses the name "m" for two different kinds of entry:
  - kcore entries
  - vmalloc entries
This patch renames the vmalloc one to "vms", avoiding confusion.

No changes in logic; just clean-up.

Changelog v1->v3
 - no changes.

Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 fs/proc/kcore.c         |   41 ++++++++++++++++++++++-------------------
 include/linux/proc_fs.h |    2 +-
 2 files changed, 23 insertions(+), 20 deletions(-)

Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
@@ -20,6 +20,7 @@
 #include <linux/init.h>
 #include <asm/uaccess.h>
 #include <asm/io.h>
+#include <linux/list.h>
 
 #define CORE_STR "CORE"
 
@@ -57,7 +58,7 @@ struct memelfnote
 	void *data;
 };
 
-static struct kcore_list *kclist;
+static LIST_HEAD(kclist_head);
 static DEFINE_RWLOCK(kclist_lock);
 
 void
@@ -67,8 +68,7 @@ kclist_add(struct kcore_list *new, void 
 	new->size = size;
 
 	write_lock(&kclist_lock);
-	new->next = kclist;
-	kclist = new;
+	list_add_tail(&new->list, &kclist_head);
 	write_unlock(&kclist_lock);
 }
 
@@ -80,7 +80,7 @@ static size_t get_kcore_size(int *nphdr,
 	*nphdr = 1; /* PT_NOTE */
 	size = 0;
 
-	for (m=kclist; m; m=m->next) {
+	list_for_each_entry(m, &kclist_head, list) {
 		try = kc_vaddr_to_offset((size_t)m->addr + m->size);
 		if (try > size)
 			size = try;
@@ -192,7 +192,7 @@ static void elf_kcore_store_hdr(char *bu
 	nhdr->p_align	= 0;
 
 	/* setup ELF PT_LOAD program header for every area */
-	for (m=kclist; m; m=m->next) {
+	list_for_each_entry(m, &kclist_head, list) {
 		phdr = (struct elf_phdr *) bufp;
 		bufp += sizeof(struct elf_phdr);
 		offset += sizeof(struct elf_phdr);
@@ -317,7 +317,7 @@ read_kcore(struct file *file, char __use
 		struct kcore_list *m;
 
 		read_lock(&kclist_lock);
-		for (m=kclist; m; m=m->next) {
+		list_for_each_entry(m, &kclist_head, list) {
 			if (start >= m->addr && start < (m->addr+m->size))
 				break;
 		}
@@ -328,7 +328,7 @@ read_kcore(struct file *file, char __use
 				return -EFAULT;
 		} else if (is_vmalloc_addr((void *)start)) {
 			char * elf_buf;
-			struct vm_struct *m;
+			struct vm_struct *vms;
 			unsigned long curstart = start;
 			unsigned long cursize = tsz;
 
@@ -337,29 +337,32 @@ read_kcore(struct file *file, char __use
 				return -ENOMEM;
 
 			read_lock(&vmlist_lock);
-			for (m=vmlist; m && cursize; m=m->next) {
+			for (vms = vmlist; vms && cursize; vms = vms->next) {
 				unsigned long vmstart;
 				unsigned long vmsize;
-				unsigned long msize = m->size - PAGE_SIZE;
+				unsigned long msize = vms->size - PAGE_SIZE;
+				unsigned long curend, vmend;
 
-				if (((unsigned long)m->addr + msize) < 
+				if (((unsigned long)vms->addr + msize) < 
 								curstart)
 					continue;
-				if ((unsigned long)m->addr > (curstart + 
+				if ((unsigned long)vms->addr > (curstart + 
 								cursize))
 					break;
-				vmstart = (curstart < (unsigned long)m->addr ? 
-					(unsigned long)m->addr : curstart);
-				if (((unsigned long)m->addr + msize) > 
-							(curstart + cursize))
-					vmsize = curstart + cursize - vmstart;
+				if (curstart < (unsigned long)vms->addr)
+					vmstart = (unsigned long)vms->addr;
 				else
-					vmsize = (unsigned long)m->addr + 
-							msize - vmstart;
+					vmstart = curstart;
+				curend = curstart + cursize;
+				vmend = (unsigned long)vms->addr + msize;
+				if (vmend > curend)
+					vmsize = curend - vmstart;
+				else
+					vmsize = vmend - vmstart;
 				curstart = vmstart + vmsize;
 				cursize -= vmsize;
 				/* don't dump ioremap'd stuff! (TA) */
-				if (m->flags & VM_IOREMAP)
+				if (vms->flags & VM_IOREMAP)
 					continue;
 				memcpy(elf_buf + (vmstart - start),
 					(char *)vmstart, vmsize);
Index: mmotm-2.6.31-Jul16/include/linux/proc_fs.h
===================================================================
--- mmotm-2.6.31-Jul16.orig/include/linux/proc_fs.h
+++ mmotm-2.6.31-Jul16/include/linux/proc_fs.h
@@ -79,7 +79,7 @@ struct proc_dir_entry {
 };
 
 struct kcore_list {
-	struct kcore_list *next;
+	struct list_head list;
 	unsigned long addr;
 	size_t size;
 };



* [RFC][PATCH 2/6] kcore : add type attribute to kclist
  2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
  2009-07-24  8:10 ` [RFC][PATCH 1/6] kcore: clean up to use generic list ops KAMEZAWA Hiroyuki
@ 2009-07-24  8:11 ` KAMEZAWA Hiroyuki
  2009-07-24  8:13 ` [RFC][PATCH 3/6] kcore: unify vmalloc range entry KAMEZAWA Hiroyuki
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:11 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Currently, kclist_add() takes only a start address and a size as its
arguments. To make kclists dynamically reconfigurable, it's necessary
to know which kclists describe System RAM and which do not.

This patch adds the following kclist types:
  KCORE_RAM
  KCORE_VMALLOC
  KCORE_TEXT
  KCORE_OTHER

The regions of type KCORE_RAM will be dynamically updated at memory
hotplug; this is used in a later patch.

Changelog v1->v3
 - no changes.

Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 arch/ia64/mm/init.c       |    7 ++++---
 arch/mips/mm/init.c       |    7 ++++---
 arch/powerpc/mm/init_32.c |    4 ++--
 arch/powerpc/mm/init_64.c |    5 +++--
 arch/sh/mm/init.c         |    4 ++--
 arch/x86/mm/init_32.c     |    4 ++--
 arch/x86/mm/init_64.c     |   11 ++++++-----
 fs/proc/kcore.c           |    3 ++-
 include/linux/proc_fs.h   |   13 +++++++++++--
 9 files changed, 36 insertions(+), 22 deletions(-)

Index: mmotm-2.6.31-Jul16/include/linux/proc_fs.h
===================================================================
--- mmotm-2.6.31-Jul16.orig/include/linux/proc_fs.h
+++ mmotm-2.6.31-Jul16/include/linux/proc_fs.h
@@ -78,10 +78,18 @@ struct proc_dir_entry {
 	struct list_head pde_openers;	/* who did ->open, but not ->release */
 };
 
+enum kcore_type {
+	KCORE_TEXT,
+	KCORE_VMALLOC,
+	KCORE_RAM,
+	KCORE_OTHER,
+};
+
 struct kcore_list {
 	struct list_head list;
 	unsigned long addr;
 	size_t size;
+	int type;
 };
 
 struct vmcore {
@@ -233,11 +241,12 @@ static inline void dup_mm_exe_file(struc
 #endif /* CONFIG_PROC_FS */
 
 #if !defined(CONFIG_PROC_KCORE)
-static inline void kclist_add(struct kcore_list *new, void *addr, size_t size)
+static inline void
+kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
 {
 }
 #else
-extern void kclist_add(struct kcore_list *, void *, size_t);
+extern void kclist_add(struct kcore_list *, void *, size_t, int type);
 #endif
 
 union proc_op {
Index: mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/ia64/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
@@ -639,9 +639,10 @@ mem_init (void)
 
 	high_memory = __va(max_low_pfn * PAGE_SIZE);
 
-	kclist_add(&kcore_mem, __va(0), max_low_pfn * PAGE_SIZE);
-	kclist_add(&kcore_vmem, (void *)VMALLOC_START, VMALLOC_END-VMALLOC_START);
-	kclist_add(&kcore_kernel, _stext, _end - _stext);
+	kclist_add(&kcore_mem, __va(0), max_low_pfn * PAGE_SIZE, KCORE_RAM);
+	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
+			VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
+	kclist_add(&kcore_kernel, _stext, _end - _stext, KCORE_TEXT);
 
 	for_each_online_pgdat(pgdat)
 		if (pgdat->bdata->node_bootmem_map)
Index: mmotm-2.6.31-Jul16/arch/mips/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/mips/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/mips/mm/init.c
@@ -409,11 +409,12 @@ void __init mem_init(void)
 	if ((unsigned long) &_text > (unsigned long) CKSEG0)
 		/* The -4 is a hack so that user tools don't have to handle
 		   the overflow.  */
-		kclist_add(&kcore_kseg0, (void *) CKSEG0, 0x80000000 - 4);
+		kclist_add(&kcore_kseg0, (void *) CKSEG0,
+				0x80000000 - 4, KCORE_TEXT);
 #endif
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
+	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END-VMALLOC_START);
+		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
 	       "%ldk reserved, %ldk data, %ldk init, %ldk highmem)\n",
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_32.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
@@ -270,11 +270,11 @@ static int __init setup_kcore(void)
 						size);
 		}
 
-		kclist_add(kcore_mem, __va(base), size);
+		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
 	}
 
 	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
-		VMALLOC_END-VMALLOC_START);
+		VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 
 	return 0;
 }
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
@@ -128,10 +128,11 @@ static int __init setup_kcore(void)
 		if (!kcore_mem)
 			panic("%s: kmalloc failed\n", __func__);
 
-		kclist_add(kcore_mem, __va(base), size);
+		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
 	}
 
-	kclist_add(&kcore_vmem, (void *)VMALLOC_START, VMALLOC_END-VMALLOC_START);
+	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
+		VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 
 	return 0;
 }
Index: mmotm-2.6.31-Jul16/arch/sh/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/sh/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/sh/mm/init.c
@@ -218,9 +218,9 @@ void __init mem_init(void)
 	datasize =  (unsigned long) &_edata - (unsigned long) &_etext;
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
+	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END - VMALLOC_START);
+		   VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
 	       "%dk data, %dk init)\n",
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_32.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
@@ -886,9 +886,9 @@ void __init mem_init(void)
 	datasize =  (unsigned long) &_edata - (unsigned long) &_etext;
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
+	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END-VMALLOC_START);
+		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
 			"%dk reserved, %dk data, %dk init, %ldk highmem)\n",
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
@@ -677,13 +677,14 @@ void __init mem_init(void)
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
 	/* Register memory areas for /proc/kcore */
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
+	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END-VMALLOC_START);
-	kclist_add(&kcore_kernel, &_stext, _end - _stext);
-	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN);
+		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
+	kclist_add(&kcore_kernel, &_stext, _end - _stext, KCORE_TEXT);
+	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN,
+			KCORE_OTHER);
 	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_START,
-				 VSYSCALL_END - VSYSCALL_START);
+			 VSYSCALL_END - VSYSCALL_START, KCORE_OTHER);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
 			 "%ldk absent, %ldk reserved, %ldk data, %ldk init)\n",
Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
@@ -62,10 +62,11 @@ static LIST_HEAD(kclist_head);
 static DEFINE_RWLOCK(kclist_lock);
 
 void
-kclist_add(struct kcore_list *new, void *addr, size_t size)
+kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
 {
 	new->addr = (unsigned long)addr;
 	new->size = size;
+	new->type = type;
 
 	write_lock(&kclist_lock);
 	list_add_tail(&new->list, &kclist_head);



* [RFC][PATCH 3/6] kcore: unify vmalloc range entry
  2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
  2009-07-24  8:10 ` [RFC][PATCH 1/6] kcore: clean up to use generic list ops KAMEZAWA Hiroyuki
  2009-07-24  8:11 ` [RFC][PATCH 2/6] kcore : add type attribute to kclist KAMEZAWA Hiroyuki
@ 2009-07-24  8:13 ` KAMEZAWA Hiroyuki
  2009-07-28 10:05   ` Amerigo Wang
  2009-07-24  8:15 ` [RFC][PATCH 4/6] kcore: kcore unify text " KAMEZAWA Hiroyuki
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:13 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range, [VMALLOC_START...VMALLOC_END).
This patch unifies the registration.
Note: /proc/kcore depends on CONFIG_MMU.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 arch/ia64/mm/init.c       |    7 +------
 arch/mips/mm/init.c       |    4 +---
 arch/powerpc/mm/init_32.c |    4 ----
 arch/powerpc/mm/init_64.c |    4 ----
 arch/sh/mm/init.c         |    4 +---
 arch/x86/mm/init_32.c     |    4 +---
 arch/x86/mm/init_64.c     |    4 +---
 fs/proc/kcore.c           |    5 +++++
 8 files changed, 10 insertions(+), 26 deletions(-)

Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
@@ -406,9 +406,14 @@ read_kcore(struct file *file, char __use
 	return acc;
 }
 
+static struct kcore_list kcore_vmalloc;
+
 static int __init proc_kcore_init(void)
 {
 	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &proc_kcore_operations);
+
+	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
+		VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
 	return 0;
 }
 module_init(proc_kcore_init);
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_32.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
@@ -857,7 +857,7 @@ static void __init test_wp_bit(void)
 	}
 }
 
-static struct kcore_list kcore_mem, kcore_vmalloc;
+static struct kcore_list kcore_mem;
 
 void __init mem_init(void)
 {
@@ -887,8 +887,6 @@ void __init mem_init(void)
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
 			"%dk reserved, %dk data, %dk init, %ldk highmem)\n",
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
@@ -647,7 +647,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to
 
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
-static struct kcore_list kcore_mem, kcore_vmalloc, kcore_kernel,
+static struct kcore_list kcore_mem, kcore_kernel,
 			 kcore_modules, kcore_vsyscall;
 
 void __init mem_init(void)
@@ -678,8 +678,6 @@ void __init mem_init(void)
 
 	/* Register memory areas for /proc/kcore */
 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 	kclist_add(&kcore_kernel, &_stext, _end - _stext, KCORE_TEXT);
 	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN,
 			KCORE_OTHER);
Index: mmotm-2.6.31-Jul16/arch/mips/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/mips/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/mips/mm/init.c
@@ -352,7 +352,7 @@ void __init paging_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
-static struct kcore_list kcore_mem, kcore_vmalloc;
+static struct kcore_list kcore_mem;
 #ifdef CONFIG_64BIT
 static struct kcore_list kcore_kseg0;
 #endif
@@ -413,8 +413,6 @@ void __init mem_init(void)
 				0x80000000 - 4, KCORE_TEXT);
 #endif
 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
 	       "%ldk reserved, %ldk data, %ldk init, %ldk highmem)\n",
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_32.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
@@ -245,7 +245,6 @@ void free_initrd_mem(unsigned long start
 #endif
 
 #ifdef CONFIG_PROC_KCORE
-static struct kcore_list kcore_vmem;
 
 static int __init setup_kcore(void)
 {
@@ -273,9 +272,6 @@ static int __init setup_kcore(void)
 		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
 	}
 
-	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
-		VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
-
 	return 0;
 }
 module_init(setup_kcore);
Index: mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/ia64/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
@@ -617,7 +617,7 @@ mem_init (void)
 	long reserved_pages, codesize, datasize, initsize;
 	pg_data_t *pgdat;
 	int i;
-	static struct kcore_list kcore_mem, kcore_vmem, kcore_kernel;
+	static struct kcore_list kcore_kernel;
 
 	BUG_ON(PTRS_PER_PGD * sizeof(pgd_t) != PAGE_SIZE);
 	BUG_ON(PTRS_PER_PMD * sizeof(pmd_t) != PAGE_SIZE);
@@ -636,12 +636,7 @@ mem_init (void)
 	BUG_ON(!mem_map);
 	max_mapnr = max_low_pfn;
 #endif
-
 	high_memory = __va(max_low_pfn * PAGE_SIZE);
-
-	kclist_add(&kcore_mem, __va(0), max_low_pfn * PAGE_SIZE, KCORE_RAM);
-	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
-			VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
 	kclist_add(&kcore_kernel, _stext, _end - _stext, KCORE_TEXT);
 
 	for_each_online_pgdat(pgdat)
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
@@ -110,7 +110,6 @@ void free_initrd_mem(unsigned long start
 #endif
 
 #ifdef CONFIG_PROC_KCORE
-static struct kcore_list kcore_vmem;
 
 static int __init setup_kcore(void)
 {
@@ -131,9 +130,6 @@ static int __init setup_kcore(void)
 		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
 	}
 
-	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
-		VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
-
 	return 0;
 }
 module_init(setup_kcore);
Index: mmotm-2.6.31-Jul16/arch/sh/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/sh/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/sh/mm/init.c
@@ -181,7 +181,7 @@ void __init paging_init(void)
 	set_fixmap_nocache(FIX_UNCACHED, __pa(&__uncached_start));
 }
 
-static struct kcore_list kcore_mem, kcore_vmalloc;
+static struct kcore_list kcore_mem;
 
 void __init mem_init(void)
 {
@@ -219,8 +219,6 @@ void __init mem_init(void)
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
-		   VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
 	       "%dk data, %dk init)\n",



* [RFC][PATCH 4/6] kcore: kcore unify text range entry
  2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
                   ` (2 preceding siblings ...)
  2009-07-24  8:13 ` [RFC][PATCH 3/6] kcore: unify vmalloc range entry KAMEZAWA Hiroyuki
@ 2009-07-24  8:15 ` KAMEZAWA Hiroyuki
  2009-07-28 10:10   ` Amerigo Wang
  2009-07-24  8:19 ` [RFC][PATCH 5/6] kcore: check physical memory range in correct way KAMEZAWA Hiroyuki
  2009-07-24  8:22 ` [RFC][PATCH 6/6] kcore: walk_system_ram_range() KAMEZAWA Hiroyuki
  5 siblings, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:15 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Some 64-bit arches have a special segment for mapping kernel text. It
should be entered in /proc/kcore in addition to the direct-linear-map and
vmalloc areas. This patch unifies the KCORE_TEXT entries scattered across
x86 and ia64.

I'm not familiar with the other arches (mips keeps its own entry even
after this patch). If the range [_stext ... _end) is a valid text/data
area and is not within the direct-map/vmalloc areas, then defining
CONFIG_ARCH_PROC_KCORE_TEXT is all an arch needs to do.

Note: I left mips-64 as it is for now.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
Index: mmotm-2.6.31-Jul16/arch/x86/Kconfig
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/Kconfig
+++ mmotm-2.6.31-Jul16/arch/x86/Kconfig
@@ -1244,6 +1244,10 @@ config ARCH_MEMORY_PROBE
 	def_bool X86_64
 	depends on MEMORY_HOTPLUG
 
+config ARCH_PROC_KCORE_TEXT
+	def_bool y
+	depends on X86_64 && PROC_KCORE
+
 config ILLEGAL_POINTER_VALUE
        hex
        default 0 if X86_32
Index: mmotm-2.6.31-Jul16/arch/ia64/Kconfig
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/ia64/Kconfig
+++ mmotm-2.6.31-Jul16/arch/ia64/Kconfig
@@ -496,6 +496,10 @@ config HAVE_ARCH_NODEDATA_EXTENSION
 	def_bool y
 	depends on NUMA
 
+config ARCH_PROC_KCORE_TEXT
+	def_bool y
+	depends on PROC_KCORE
+
 config IA32_SUPPORT
 	bool "Support for Linux/x86 binaries"
 	help
Index: mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/ia64/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
@@ -617,7 +617,6 @@ mem_init (void)
 	long reserved_pages, codesize, datasize, initsize;
 	pg_data_t *pgdat;
 	int i;
-	static struct kcore_list kcore_kernel;
 
 	BUG_ON(PTRS_PER_PGD * sizeof(pgd_t) != PAGE_SIZE);
 	BUG_ON(PTRS_PER_PMD * sizeof(pmd_t) != PAGE_SIZE);
@@ -637,7 +636,6 @@ mem_init (void)
 	max_mapnr = max_low_pfn;
 #endif
 	high_memory = __va(max_low_pfn * PAGE_SIZE);
-	kclist_add(&kcore_kernel, _stext, _end - _stext, KCORE_TEXT);
 
 	for_each_online_pgdat(pgdat)
 		if (pgdat->bdata->node_bootmem_map)
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
@@ -647,8 +647,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to
 
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
-static struct kcore_list kcore_mem, kcore_kernel,
-			 kcore_modules, kcore_vsyscall;
+static struct kcore_list kcore_mem, kcore_modules, kcore_vsyscall;
 
 void __init mem_init(void)
 {
@@ -678,7 +677,6 @@ void __init mem_init(void)
 
 	/* Register memory areas for /proc/kcore */
 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-	kclist_add(&kcore_kernel, &_stext, _end - _stext, KCORE_TEXT);
 	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN,
 			KCORE_OTHER);
 	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_START,
Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
@@ -21,6 +21,7 @@
 #include <asm/uaccess.h>
 #include <asm/io.h>
 #include <linux/list.h>
+#include <asm/sections.h>
 
 #define CORE_STR "CORE"
 
@@ -408,10 +409,26 @@ read_kcore(struct file *file, char __use
 
 static struct kcore_list kcore_vmalloc;
 
+#ifdef CONFIG_ARCH_PROC_KCORE_TEXT
+static struct kcore_list kcore_text;
+/*
+ * If defined, special segment is used for mapping kernel text instead of
+ * direct-map area. We need to create special TEXT section.
+ */
+static void __init proc_kcore_text_init(void)
+{
+	kclist_add(&kcore_text, _stext, _end - _stext, KCORE_TEXT);
+}
+#else
+static void __init proc_kcore_text_init(void)
+{
+}
+#endif
+
 static int __init proc_kcore_init(void)
 {
 	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &proc_kcore_operations);
-
+	proc_kcore_text_init();
 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
 		VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
 	return 0;



* [RFC][PATCH 5/6] kcore: check physical memory range in correct way.
  2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
                   ` (3 preceding siblings ...)
  2009-07-24  8:15 ` [RFC][PATCH 4/6] kcore: kcore unify text " KAMEZAWA Hiroyuki
@ 2009-07-24  8:19 ` KAMEZAWA Hiroyuki
  2009-07-28 10:24   ` Amerigo Wang
  2009-07-24  8:22 ` [RFC][PATCH 6/6] kcore: walk_system_ram_range() KAMEZAWA Hiroyuki
  5 siblings, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:19 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

For /proc/kcore, each arch registers its memory ranges by kclist_add().
Usually,
	- ranges of physical memory
	- the vmalloc area
	- text, etc...
are registered, but the "range of physical memory" entries have some problems.

They are not updated at memory hotplug, and they tend to include
unnecessary memory holes. Meanwhile, /proc/iomem (kernel/resource.c)
already contains the required physical memory range information, and
it is properly updated at memory hotplug. So it's better to avoid
duplicating that information in kcore's own code, and instead to
rebuild the kclists for physical memory based on /proc/iomem.

Note: IIUC, /proc/iomem information is used for kdump.

Changelog: v2 -> v3
 - fixed the HIGHMEM code (at least, no compile errors now).
 - enhanced the sanity checks in the !HIGHMEM code (see kclist_add_private()).
 - after this, x86-32, ia64, sh, and powerpc have no private kclist code;
   x86-64 and mips still have some.

Changelog: v1 -> v2
 - removed the -EBUSY at memory hotplug in read_kcore()
   (continuing to read is not a problem in general).
 - fixed the initial value of kcore_need_update to be 1.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 arch/mips/mm/init.c            |    2 
 arch/powerpc/mm/init_32.c      |   32 -------
 arch/powerpc/mm/init_64.c      |   26 ------
 arch/sh/mm/init.c              |    4 
 arch/x86/mm/init_32.c          |    4 
 arch/x86/mm/init_64.c          |    3 
 fs/proc/kcore.c                |  174 ++++++++++++++++++++++++++++++++++++++---
 include/linux/ioport.h         |    8 +
 include/linux/memory_hotplug.h |    7 -
 kernel/resource.c              |    2 
 10 files changed, 172 insertions(+), 90 deletions(-)

Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
@@ -21,6 +21,9 @@
 #include <asm/uaccess.h>
 #include <asm/io.h>
 #include <linux/list.h>
+#include <linux/ioport.h>
+#include <linux/mm.h>
+#include <linux/memory.h>
 #include <asm/sections.h>
 
 #define CORE_STR "CORE"
@@ -31,17 +34,6 @@
 
 static struct proc_dir_entry *proc_root_kcore;
 
-static int open_kcore(struct inode * inode, struct file * filp)
-{
-	return capable(CAP_SYS_RAWIO) ? 0 : -EPERM;
-}
-
-static ssize_t read_kcore(struct file *, char __user *, size_t, loff_t *);
-
-static const struct file_operations proc_kcore_operations = {
-	.read		= read_kcore,
-	.open		= open_kcore,
-};
 
 #ifndef kc_vaddr_to_offset
 #define	kc_vaddr_to_offset(v) ((v) - PAGE_OFFSET)
@@ -61,6 +53,7 @@ struct memelfnote
 
 static LIST_HEAD(kclist_head);
 static DEFINE_RWLOCK(kclist_lock);
+static int kcore_need_update = 1;
 
 void
 kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
@@ -99,6 +92,122 @@ static size_t get_kcore_size(int *nphdr,
 	return size + *elf_buflen;
 }
 
+static void free_kclist_ents(struct list_head *head)
+{
+	struct kcore_list *tmp, *pos;
+
+	list_for_each_entry_safe(pos, tmp, head, list) {
+		list_del(&pos->list);
+		kfree(pos);
+	}
+}
+/*
+ * Replace all KCORE_RAM information with passed list.
+ */
+static void __kcore_update_ram(struct list_head *list)
+{
+	struct kcore_list *tmp, *pos;
+	LIST_HEAD(garbage);
+
+	write_lock(&kclist_lock);
+	if (kcore_need_update) {
+		list_for_each_entry_safe(pos, tmp, &kclist_head, list) {
+			if (pos->type == KCORE_RAM)
+				list_move(&pos->list, &garbage);
+		}
+		list_splice(list, &kclist_head);
+	} else
+		list_splice(list, &garbage);
+	kcore_need_update = 0;
+	write_unlock(&kclist_lock);
+
+	free_kclist_ents(&garbage);
+}
+
+
+#ifdef CONFIG_HIGHMEM
+/*
+ * With CONFIG_HIGHMEM, we can register [0...max_low_pfn) as one continuous
+ * range: holes in lowmem are not as big as in the !HIGHMEM case.
+ * (HIGHMEM is special because part of memory is _invisible_ to the kernel.)
+ */
+static int kcore_update_ram(void)
+{
+	LIST_HEAD(head);
+	struct kcore_list *ent;
+	int ret = 0;
+
+	ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+	if (!ent)
+		return -ENOMEM;
+	ent->addr = (unsigned long)__va(0);
+	ent->size = max_low_pfn << PAGE_SHIFT;
+	ent->type = KCORE_RAM;
+	list_add(&ent->list, &head);
+	__kcore_update_ram(&head);
+	return ret;
+}
+
+#else /* !CONFIG_HIGHMEM */
+
+static int
+kclist_add_private(unsigned long pfn, unsigned long nr_pages, void *arg)
+{
+	struct list_head *head = (struct list_head *)arg;
+	struct kcore_list *ent;
+
+	ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+	if (!ent)
+		return -ENOMEM;
+	ent->addr = (unsigned long)__va((pfn << PAGE_SHIFT));
+	ent->size = nr_pages << PAGE_SHIFT;
+
+	/* Sanity check: this can happen on a 32bit arch... maybe */
+	if (ent->addr < (unsigned long) __va(0))
+		goto free_out;
+
+	/* cut off the not-mapped area; taken from ppc-32 code */
+	if (ULONG_MAX - ent->addr < ent->size)
+		ent->size = ULONG_MAX - ent->addr;
+
+	/* cut when vmalloc() area is higher than direct-map area */
+	if (VMALLOC_START > __va(0)) {
+		if (ent->addr > VMALLOC_START)
+			goto free_out;
+		if (VMALLOC_START - ent->addr < ent->size)
+			ent->size = VMALLOC_START - ent->addr;
+	}
+
+	ent->type = KCORE_RAM;
+	list_add(&ent->list, head);
+	return 0;
+free_out:
+	kfree(ent);
+	return 0;
+}
+
+static int kcore_update_ram(void)
+{
+	int nid, ret;
+	unsigned long end_pfn;
+	LIST_HEAD(head);
+
+	/* Not initialized yet... update now */
+	/* find out "max pfn" */
+	end_pfn = 0;
+	for_each_node_state(nid, N_HIGH_MEMORY)
+		if (end_pfn < node_end_pfn(nid))
+			end_pfn = node_end_pfn(nid);
+	/* scan 0 to max_pfn */
+	ret = walk_memory_resource(0, end_pfn, &head, kclist_add_private);
+	if (ret) {
+		free_kclist_ents(&head);
+		return -ENOMEM;
+	}
+	__kcore_update_ram(&head);
+	return ret;
+}
+#endif /* CONFIG_HIGHMEM */
 
 /*****************************************************************************/
 /*
@@ -407,6 +516,39 @@ read_kcore(struct file *file, char __use
 	return acc;
 }
 
+
+static int open_kcore(struct inode * inode, struct file *filp)
+{
+	if (!capable(CAP_SYS_RAWIO))
+		return -EPERM;
+	if (kcore_need_update)
+		kcore_update_ram();
+	return 0;
+}
+
+
+static const struct file_operations proc_kcore_operations = {
+	.read		= read_kcore,
+	.open		= open_kcore,
+};
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+/* just remember that we have to update kcore */
+static int __meminit kcore_callback(struct notifier_block *self,
+				    unsigned long action, void *arg)
+{
+	switch (action) {
+	case MEM_ONLINE:
+	case MEM_OFFLINE:
+		write_lock(&kclist_lock);
+		kcore_need_update = 1;
+		write_unlock(&kclist_lock);
+	}
+	return NOTIFY_OK;
+}
+#endif
+
+
 static struct kcore_list kcore_vmalloc;
 
 #ifdef CONFIG_ARCH_PROC_KCORE_TEXT
@@ -427,10 +569,18 @@ static void __init proc_kcore_text_init(
 
 static int __init proc_kcore_init(void)
 {
-	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &proc_kcore_operations);
+	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL,
+				      &proc_kcore_operations);
+	/* Store text area if it's special */
 	proc_kcore_text_init();
+	/* Store vmalloc area */
 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
 		VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
+	/* Store direct-map area from physical memory map */
+	kcore_update_ram();
+	hotplug_memory_notifier(kcore_callback, 0);
+	/* Other special areas, e.g. the module area, are arch specific. */
+
 	return 0;
 }
 module_init(proc_kcore_init);
Index: mmotm-2.6.31-Jul16/include/linux/ioport.h
===================================================================
--- mmotm-2.6.31-Jul16.orig/include/linux/ioport.h
+++ mmotm-2.6.31-Jul16/include/linux/ioport.h
@@ -186,5 +186,13 @@ extern void __devm_release_region(struct
 extern int iomem_map_sanity_check(resource_size_t addr, unsigned long size);
 extern int iomem_is_exclusive(u64 addr);
 
+/*
+ * Walk through all SYSTEM_RAM which is registered as resource.
+ * arg is (start_pfn, nr_pages, private_arg_pointer)
+ */
+extern int walk_memory_resource(unsigned long start_pfn,
+			unsigned long nr_pages, void *arg,
+			int (*func)(unsigned long, unsigned long, void *));
+
 #endif /* __ASSEMBLY__ */
 #endif	/* _LINUX_IOPORT_H */
Index: mmotm-2.6.31-Jul16/include/linux/memory_hotplug.h
===================================================================
--- mmotm-2.6.31-Jul16.orig/include/linux/memory_hotplug.h
+++ mmotm-2.6.31-Jul16/include/linux/memory_hotplug.h
@@ -191,13 +191,6 @@ static inline void register_page_bootmem
 
 #endif /* ! CONFIG_MEMORY_HOTPLUG */
 
-/*
- * Walk through all memory which is registered as resource.
- * arg is (start_pfn, nr_pages, private_arg_pointer)
- */
-extern int walk_memory_resource(unsigned long start_pfn,
-			unsigned long nr_pages, void *arg,
-			int (*func)(unsigned long, unsigned long, void *));
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
 
Index: mmotm-2.6.31-Jul16/kernel/resource.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/kernel/resource.c
+++ mmotm-2.6.31-Jul16/kernel/resource.c
@@ -234,7 +234,7 @@ int release_resource(struct resource *ol
 
 EXPORT_SYMBOL(release_resource);
 
-#if defined(CONFIG_MEMORY_HOTPLUG) && !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
+#if !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
 /*
  * Finds the lowest memory reosurce exists within [res->start.res->end)
  * the caller must specify res->start, res->end, res->flags.
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_32.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
@@ -857,8 +857,6 @@ static void __init test_wp_bit(void)
 	}
 }
 
-static struct kcore_list kcore_mem;
-
 void __init mem_init(void)
 {
 	int codesize, reservedpages, datasize, initsize;
@@ -886,8 +884,6 @@ void __init mem_init(void)
 	datasize =  (unsigned long) &_edata - (unsigned long) &_etext;
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-
 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
 			"%dk reserved, %dk data, %dk init, %ldk highmem)\n",
 		(unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
@@ -647,7 +647,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to
 
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
-static struct kcore_list kcore_mem, kcore_modules, kcore_vsyscall;
+static struct kcore_list kcore_modules, kcore_vsyscall;
 
 void __init mem_init(void)
 {
@@ -676,7 +676,6 @@ void __init mem_init(void)
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
 	/* Register memory areas for /proc/kcore */
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
 	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN,
 			KCORE_OTHER);
 	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_START,
Index: mmotm-2.6.31-Jul16/arch/mips/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/mips/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/mips/mm/init.c
@@ -352,7 +352,6 @@ void __init paging_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
-static struct kcore_list kcore_mem;
 #ifdef CONFIG_64BIT
 static struct kcore_list kcore_kseg0;
 #endif
@@ -412,7 +411,6 @@ void __init mem_init(void)
 		kclist_add(&kcore_kseg0, (void *) CKSEG0,
 				0x80000000 - 4, KCORE_TEXT);
 #endif
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
 
 	printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
 	       "%ldk reserved, %ldk data, %ldk init, %ldk highmem)\n",
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_32.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
@@ -244,35 +244,3 @@ void free_initrd_mem(unsigned long start
 }
 #endif
 
-#ifdef CONFIG_PROC_KCORE
-
-static int __init setup_kcore(void)
-{
-	int i;
-
-	for (i = 0; i < lmb.memory.cnt; i++) {
-		unsigned long base;
-		unsigned long size;
-		struct kcore_list *kcore_mem;
-
-		base = lmb.memory.region[i].base;
-		size = lmb.memory.region[i].size;
-
-		kcore_mem = kmalloc(sizeof(struct kcore_list), GFP_ATOMIC);
-		if (!kcore_mem)
-			panic("%s: kmalloc failed\n", __func__);
-
-		/* must stay under 32 bits */
-		if ( 0xfffffffful - (unsigned long)__va(base) < size) {
-			size = 0xfffffffful - (unsigned long)(__va(base));
-			printk(KERN_DEBUG "setup_kcore: restrict size=%lx\n",
-						size);
-		}
-
-		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
-	}
-
-	return 0;
-}
-module_init(setup_kcore);
-#endif
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_64.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
@@ -109,32 +109,6 @@ void free_initrd_mem(unsigned long start
 }
 #endif
 
-#ifdef CONFIG_PROC_KCORE
-
-static int __init setup_kcore(void)
-{
-	int i;
-
-	for (i=0; i < lmb.memory.cnt; i++) {
-		unsigned long base, size;
-		struct kcore_list *kcore_mem;
-
-		base = lmb.memory.region[i].base;
-		size = lmb.memory.region[i].size;
-
-		/* GFP_ATOMIC to avoid might_sleep warnings during boot */
-		kcore_mem = kmalloc(sizeof(struct kcore_list), GFP_ATOMIC);
-		if (!kcore_mem)
-			panic("%s: kmalloc failed\n", __func__);
-
-		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
-	}
-
-	return 0;
-}
-module_init(setup_kcore);
-#endif
-
 static void pgd_ctor(void *addr)
 {
 	memset(addr, 0, PGD_TABLE_SIZE);
Index: mmotm-2.6.31-Jul16/arch/sh/mm/init.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/sh/mm/init.c
+++ mmotm-2.6.31-Jul16/arch/sh/mm/init.c
@@ -181,8 +181,6 @@ void __init paging_init(void)
 	set_fixmap_nocache(FIX_UNCACHED, __pa(&__uncached_start));
 }
 
-static struct kcore_list kcore_mem;
-
 void __init mem_init(void)
 {
 	int codesize, datasize, initsize;
@@ -218,8 +216,6 @@ void __init mem_init(void)
 	datasize =  (unsigned long) &_edata - (unsigned long) &_etext;
 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
 
-	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
-
 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
 	       "%dk data, %dk init)\n",
 		(unsigned long) nr_free_pages() << (PAGE_SHIFT-10),


^ permalink raw reply	[flat|nested] 12+ messages in thread
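
The __kcore_update_ram() flow in the patch above — move every existing KCORE_RAM entry to a garbage list under the lock, splice in the freshly built list, then free the garbage after unlocking — can be sketched with a minimal userspace singly-linked list. This is only an illustration: `struct kent`, `mk()` and `replace_ram()` are made-up names rather than the kernel's API, and the locking is omitted.

```c
#include <assert.h>
#include <stdlib.h>

enum ktype { KCORE_RAM, KCORE_VMALLOC, KCORE_TEXT };

struct kent {
	enum ktype type;
	struct kent *next;
};

static struct kent *mk(enum ktype type, struct kent *next)
{
	struct kent *e = malloc(sizeof(*e));

	e->type = type;
	e->next = next;
	return e;
}

/*
 * Free every KCORE_RAM entry (the "garbage" the kernel frees after
 * dropping the lock) and splice in the replacement list, mirroring
 * __kcore_update_ram()'s list_move()/list_splice() flow.
 */
static struct kent *replace_ram(struct kent *head, struct kent *repl)
{
	struct kent *keep = NULL, **tail = &keep;

	while (head) {
		struct kent *next = head->next;

		if (head->type == KCORE_RAM) {
			free(head);
		} else {
			*tail = head;
			tail = &head->next;
			head->next = NULL;
		}
		head = next;
	}
	*tail = repl;	/* surviving entries first, new RAM entries last */
	return keep;
}
```

Non-RAM entries keep their relative order, just as list_move() of only the KCORE_RAM nodes leaves the rest of kclist_head untouched.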

* [RFC][PATCH 6/6] kcore: walk_system_ram_range()
  2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
                   ` (4 preceding siblings ...)
  2009-07-24  8:19 ` [RFC][PATCH 5/6] kcore: check physical memory range in correct way KAMEZAWA Hiroyuki
@ 2009-07-24  8:22 ` KAMEZAWA Hiroyuki
  5 siblings, 0 replies; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-24  8:22 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Originally, walk_memory_resource() was introduced to traverse all memory
of "System RAM" for detecting memory hotplug/unplug ranges.
For that, the flags IORESOURCE_MEM|IORESOURCE_BUSY were checked, and this
was enough for memory hotplug.

But for another purpose, /proc/kcore, this may include some firmware
areas marked as IORESOURCE_BUSY | IORESOURCE_MEM. This patch makes the
check stricter, so that only "System RAM" is found.

Note: PPC keeps its own walk_memory_resource(), which walks through
ppc's lmb information. Because the old kclist_add() was called per lmb,
this patch ultimately makes no difference in behavior there.
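
The stricter check described above — matching the resource name in addition to the flags — can be sketched in plain userspace C. The flag values and `struct res` below are illustrative stand-ins, not the kernel's definitions:

```c
#include <assert.h>
#include <string.h>

/* Illustrative flag values; the kernel's real ones live in ioport.h. */
#define IORESOURCE_MEM	0x00000200
#define IORESOURCE_BUSY	0x80000000

struct res {
	const char *name;
	unsigned long flags;
};

/*
 * Strict "System RAM" test: flags alone no longer suffice, because
 * firmware areas can also be IORESOURCE_MEM | IORESOURCE_BUSY, so the
 * resource name must match as well.
 */
static int is_system_ram(const struct res *p)
{
	return p->flags == (IORESOURCE_MEM | IORESOURCE_BUSY) &&
	       strcmp(p->name, "System RAM") == 0;
}
```

A busy firmware region with the same flags but a different name is now skipped, which is exactly what the `strcmp(p->name, name)` line added to find_next_system_ram() does.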

Changelog v2->v3
 - walk_memory_resource() is renamed to walk_system_ram_range()
Changelog v2:
 - new patch in v2.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
Index: mmotm-2.6.31-Jul16/kernel/resource.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/kernel/resource.c
+++ mmotm-2.6.31-Jul16/kernel/resource.c
@@ -237,10 +237,10 @@ EXPORT_SYMBOL(release_resource);
 #if !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
 /*
  * Finds the lowest memory reosurce exists within [res->start.res->end)
- * the caller must specify res->start, res->end, res->flags.
+ * the caller must specify res->start, res->end, res->flags and "name".
  * If found, returns 0, res is overwritten, if not found, returns -1.
  */
-static int find_next_system_ram(struct resource *res)
+static int find_next_system_ram(struct resource *res, char *name)
 {
 	resource_size_t start, end;
 	struct resource *p;
@@ -256,6 +256,8 @@ static int find_next_system_ram(struct r
 		/* system ram is just marked as IORESOURCE_MEM */
 		if (p->flags != res->flags)
 			continue;
+		if (name && strcmp(p->name, name))
+			continue;
 		if (p->start > end) {
 			p = NULL;
 			break;
@@ -273,19 +275,26 @@ static int find_next_system_ram(struct r
 		res->end = p->end;
 	return 0;
 }
-int
-walk_memory_resource(unsigned long start_pfn, unsigned long nr_pages, void *arg,
-			int (*func)(unsigned long, unsigned long, void *))
+
+/*
+ * This function calls the callback against all memory ranges of "System
+ * RAM", i.e. those marked as IORESOURCE_MEM and IORESOURCE_BUSY.
+ * For now, this function only handles "System RAM".
+ */
+int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
+		void *arg, int (*func)(unsigned long, unsigned long, void *))
 {
 	struct resource res;
 	unsigned long pfn, len;
 	u64 orig_end;
 	int ret = -1;
+
 	res.start = (u64) start_pfn << PAGE_SHIFT;
 	res.end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
 	res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
 	orig_end = res.end;
-	while ((res.start < res.end) && (find_next_system_ram(&res) >= 0)) {
+	while ((res.start < res.end) &&
+		(find_next_system_ram(&res, "System RAM") >= 0)) {
 		pfn = (unsigned long)(res.start >> PAGE_SHIFT);
 		len = (unsigned long)((res.end + 1 - res.start) >> PAGE_SHIFT);
 		ret = (*func)(pfn, len, arg);
Index: mmotm-2.6.31-Jul16/include/linux/ioport.h
===================================================================
--- mmotm-2.6.31-Jul16.orig/include/linux/ioport.h
+++ mmotm-2.6.31-Jul16/include/linux/ioport.h
@@ -190,7 +190,7 @@ extern int iomem_is_exclusive(u64 addr);
  * Walk through all SYSTEM_RAM which is registered as resource.
  * arg is (start_pfn, nr_pages, private_arg_pointer)
  */
-extern int walk_memory_resource(unsigned long start_pfn,
+extern int walk_system_ram_range(unsigned long start_pfn,
 			unsigned long nr_pages, void *arg,
 			int (*func)(unsigned long, unsigned long, void *));
 
Index: mmotm-2.6.31-Jul16/mm/memory_hotplug.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/mm/memory_hotplug.c
+++ mmotm-2.6.31-Jul16/mm/memory_hotplug.c
@@ -410,7 +410,7 @@ int online_pages(unsigned long pfn, unsi
 	if (!populated_zone(zone))
 		need_zonelists_rebuild = 1;
 
-	ret = walk_memory_resource(pfn, nr_pages, &onlined_pages,
+	ret = walk_system_ram_range(pfn, nr_pages, &onlined_pages,
 		online_pages_range);
 	if (ret) {
 		printk(KERN_DEBUG "online_pages %lx at %lx failed\n",
@@ -702,7 +702,7 @@ offline_isolated_pages_cb(unsigned long 
 static void
 offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	walk_memory_resource(start_pfn, end_pfn - start_pfn, NULL,
+	walk_system_ram_range(start_pfn, end_pfn - start_pfn, NULL,
 				offline_isolated_pages_cb);
 }
 
@@ -728,7 +728,7 @@ check_pages_isolated(unsigned long start
 	long offlined = 0;
 	int ret;
 
-	ret = walk_memory_resource(start_pfn, end_pfn - start_pfn, &offlined,
+	ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn, &offlined,
 			check_pages_isolated_cb);
 	if (ret < 0)
 		offlined = (long)ret;
Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/mem.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/mem.c
+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/mem.c
@@ -143,8 +143,8 @@ int arch_add_memory(int nid, u64 start, 
  * memory regions, find holes and callback for contiguous regions.
  */
 int
-walk_memory_resource(unsigned long start_pfn, unsigned long nr_pages, void *arg,
-			int (*func)(unsigned long, unsigned long, void *))
+walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
+		void *arg, int (*func)(unsigned long, unsigned long, void *))
 {
 	struct lmb_property res;
 	unsigned long pfn, len;
Index: mmotm-2.6.31-Jul16/drivers/net/ehea/ehea_qmr.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/drivers/net/ehea/ehea_qmr.c
+++ mmotm-2.6.31-Jul16/drivers/net/ehea/ehea_qmr.c
@@ -751,7 +751,7 @@ int ehea_create_busmap(void)
 
 	mutex_lock(&ehea_busmap_mutex);
 	ehea_mr_len = 0;
-	ret = walk_memory_resource(0, 1ULL << MAX_PHYSMEM_BITS, NULL,
+	ret = walk_system_ram_range(0, 1ULL << MAX_PHYSMEM_BITS, NULL,
 				   ehea_create_busmap_callback);
 	mutex_unlock(&ehea_busmap_mutex);
 	return ret;
Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
===================================================================
--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
@@ -199,7 +199,7 @@ static int kcore_update_ram(void)
 		if (end_pfn < node_end_pfn(nid))
 			end_pfn = node_end_pfn(nid);
 	/* scan 0 to max_pfn */
-	ret = walk_memory_resource(0, end_pfn, &head, kclist_add_private);
+	ret = walk_system_ram_range(0, end_pfn, &head, kclist_add_private);
 	if (ret) {
 		free_kclist_ents(&head);
 		return -ENOMEM;



* Re: [RFC][PATCH 3/6] kcore: unify vmalloc range entry
  2009-07-24  8:13 ` [RFC][PATCH 3/6] kcore: unify vmalloc range entry KAMEZAWA Hiroyuki
@ 2009-07-28 10:05   ` Amerigo Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Amerigo Wang @ 2009-07-28 10:05 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

On Fri, Jul 24, 2009 at 05:13:18PM +0900, KAMEZAWA Hiroyuki wrote:
>From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
>For /proc/kcore, vmalloc areas are registered per arch, but all of
>them register the same range, [VMALLOC_START...VMALLOC_END).
>This patch unifies them.
>Note: /proc/kcore depends on CONFIG_MMU.
>
>Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Looks good.

Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>

Thanks!

>---
> arch/ia64/mm/init.c       |    7 +------
> arch/mips/mm/init.c       |    4 +---
> arch/powerpc/mm/init_32.c |    4 ----
> arch/powerpc/mm/init_64.c |    4 ----
> arch/sh/mm/init.c         |    4 +---
> arch/x86/mm/init_32.c     |    4 +---
> arch/x86/mm/init_64.c     |    4 +---
> fs/proc/kcore.c           |    5 +++++
> 8 files changed, 10 insertions(+), 26 deletions(-)
>
>Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
>+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
>@@ -406,9 +406,14 @@ read_kcore(struct file *file, char __use
> 	return acc;
> }
> 
>+static struct kcore_list kcore_vmalloc;
>+
> static int __init proc_kcore_init(void)
> {
> 	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &proc_kcore_operations);
>+
>+	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
>+		VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
> 	return 0;
> }
> module_init(proc_kcore_init);
>Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_32.c
>+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_32.c
>@@ -857,7 +857,7 @@ static void __init test_wp_bit(void)
> 	}
> }
> 
>-static struct kcore_list kcore_mem, kcore_vmalloc;
>+static struct kcore_list kcore_mem;
> 
> void __init mem_init(void)
> {
>@@ -887,8 +887,6 @@ void __init mem_init(void)
> 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
> 
> 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
>-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
>-		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
> 
> 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
> 			"%dk reserved, %dk data, %dk init, %ldk highmem)\n",
>Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_64.c
>+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
>@@ -647,7 +647,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to
> 
> #endif /* CONFIG_MEMORY_HOTPLUG */
> 
>-static struct kcore_list kcore_mem, kcore_vmalloc, kcore_kernel,
>+static struct kcore_list kcore_mem, kcore_kernel,
> 			 kcore_modules, kcore_vsyscall;
> 
> void __init mem_init(void)
>@@ -678,8 +678,6 @@ void __init mem_init(void)
> 
> 	/* Register memory areas for /proc/kcore */
> 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
>-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
>-		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
> 	kclist_add(&kcore_kernel, &_stext, _end - _stext, KCORE_TEXT);
> 	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN,
> 			KCORE_OTHER);
>Index: mmotm-2.6.31-Jul16/arch/mips/mm/init.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/mips/mm/init.c
>+++ mmotm-2.6.31-Jul16/arch/mips/mm/init.c
>@@ -352,7 +352,7 @@ void __init paging_init(void)
> 	free_area_init_nodes(max_zone_pfns);
> }
> 
>-static struct kcore_list kcore_mem, kcore_vmalloc;
>+static struct kcore_list kcore_mem;
> #ifdef CONFIG_64BIT
> static struct kcore_list kcore_kseg0;
> #endif
>@@ -413,8 +413,6 @@ void __init mem_init(void)
> 				0x80000000 - 4, KCORE_TEXT);
> #endif
> 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
>-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
>-		   VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
> 
> 	printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
> 	       "%ldk reserved, %ldk data, %ldk init, %ldk highmem)\n",
>Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_32.c
>+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_32.c
>@@ -245,7 +245,6 @@ void free_initrd_mem(unsigned long start
> #endif
> 
> #ifdef CONFIG_PROC_KCORE
>-static struct kcore_list kcore_vmem;
> 
> static int __init setup_kcore(void)
> {
>@@ -273,9 +272,6 @@ static int __init setup_kcore(void)
> 		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
> 	}
> 
>-	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
>-		VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
>-
> 	return 0;
> }
> module_init(setup_kcore);
>Index: mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/ia64/mm/init.c
>+++ mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
>@@ -617,7 +617,7 @@ mem_init (void)
> 	long reserved_pages, codesize, datasize, initsize;
> 	pg_data_t *pgdat;
> 	int i;
>-	static struct kcore_list kcore_mem, kcore_vmem, kcore_kernel;
>+	static struct kcore_list kcore_kernel;
> 
> 	BUG_ON(PTRS_PER_PGD * sizeof(pgd_t) != PAGE_SIZE);
> 	BUG_ON(PTRS_PER_PMD * sizeof(pmd_t) != PAGE_SIZE);
>@@ -636,12 +636,7 @@ mem_init (void)
> 	BUG_ON(!mem_map);
> 	max_mapnr = max_low_pfn;
> #endif
>-
> 	high_memory = __va(max_low_pfn * PAGE_SIZE);
>-
>-	kclist_add(&kcore_mem, __va(0), max_low_pfn * PAGE_SIZE, KCORE_RAM);
>-	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
>-			VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
> 	kclist_add(&kcore_kernel, _stext, _end - _stext, KCORE_TEXT);
> 
> 	for_each_online_pgdat(pgdat)
>Index: mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/powerpc/mm/init_64.c
>+++ mmotm-2.6.31-Jul16/arch/powerpc/mm/init_64.c
>@@ -110,7 +110,6 @@ void free_initrd_mem(unsigned long start
> #endif
> 
> #ifdef CONFIG_PROC_KCORE
>-static struct kcore_list kcore_vmem;
> 
> static int __init setup_kcore(void)
> {
>@@ -131,9 +130,6 @@ static int __init setup_kcore(void)
> 		kclist_add(kcore_mem, __va(base), size, KCORE_RAM);
> 	}
> 
>-	kclist_add(&kcore_vmem, (void *)VMALLOC_START,
>-		VMALLOC_END-VMALLOC_START, KCORE_VMALLOC);
>-
> 	return 0;
> }
> module_init(setup_kcore);
>Index: mmotm-2.6.31-Jul16/arch/sh/mm/init.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/sh/mm/init.c
>+++ mmotm-2.6.31-Jul16/arch/sh/mm/init.c
>@@ -181,7 +181,7 @@ void __init paging_init(void)
> 	set_fixmap_nocache(FIX_UNCACHED, __pa(&__uncached_start));
> }
> 
>-static struct kcore_list kcore_mem, kcore_vmalloc;
>+static struct kcore_list kcore_mem;
> 
> void __init mem_init(void)
> {
>@@ -219,8 +219,6 @@ void __init mem_init(void)
> 	initsize =  (unsigned long) &__init_end - (unsigned long) &__init_begin;
> 
> 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
>-	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
>-		   VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
> 
> 	printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
> 	       "%dk data, %dk init)\n",
>


* Re: [RFC][PATCH 4/6] kcore: kcore unify text range entry
  2009-07-24  8:15 ` [RFC][PATCH 4/6] kcore: kcore unify text " KAMEZAWA Hiroyuki
@ 2009-07-28 10:10   ` Amerigo Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Amerigo Wang @ 2009-07-28 10:10 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

On Fri, Jul 24, 2009 at 05:15:22PM +0900, KAMEZAWA Hiroyuki wrote:
>From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
>Some 64bit arches have a special segment for mapping kernel text. It should
>be added to /proc/kcore in addition to the direct-linear-map and vmalloc
>areas. This patch unifies the KCORE_TEXT entries scattered under x86 and
>ia64.
>
>I'm not familiar with other archs (mips keeps its own even after this patch).
>If the range [_stext ... _end) is a valid area of text/data and it's not in
>the direct-map/vmalloc area, defining CONFIG_ARCH_PROC_KCORE_TEXT is the
>only thing an arch needs to do.
>
>Note: I left mips-64 as it is for now.
>
>Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


Excellent.

Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>

>---
>Index: mmotm-2.6.31-Jul16/arch/x86/Kconfig
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/x86/Kconfig
>+++ mmotm-2.6.31-Jul16/arch/x86/Kconfig
>@@ -1244,6 +1244,10 @@ config ARCH_MEMORY_PROBE
> 	def_bool X86_64
> 	depends on MEMORY_HOTPLUG
> 
>+config ARCH_PROC_KCORE_TEXT
>+	def_bool y
>+	depends on X86_64 && PROC_KCORE
>+
> config ILLEGAL_POINTER_VALUE
>        hex
>        default 0 if X86_32
>Index: mmotm-2.6.31-Jul16/arch/ia64/Kconfig
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/ia64/Kconfig
>+++ mmotm-2.6.31-Jul16/arch/ia64/Kconfig
>@@ -496,6 +496,10 @@ config HAVE_ARCH_NODEDATA_EXTENSION
> 	def_bool y
> 	depends on NUMA
> 
>+config ARCH_PROC_KCORE_TEXT
>+	def_bool y
>+	depends on PROC_KCORE
>+
> config IA32_SUPPORT
> 	bool "Support for Linux/x86 binaries"
> 	help
>Index: mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/ia64/mm/init.c
>+++ mmotm-2.6.31-Jul16/arch/ia64/mm/init.c
>@@ -617,7 +617,6 @@ mem_init (void)
> 	long reserved_pages, codesize, datasize, initsize;
> 	pg_data_t *pgdat;
> 	int i;
>-	static struct kcore_list kcore_kernel;
> 
> 	BUG_ON(PTRS_PER_PGD * sizeof(pgd_t) != PAGE_SIZE);
> 	BUG_ON(PTRS_PER_PMD * sizeof(pmd_t) != PAGE_SIZE);
>@@ -637,7 +636,6 @@ mem_init (void)
> 	max_mapnr = max_low_pfn;
> #endif
> 	high_memory = __va(max_low_pfn * PAGE_SIZE);
>-	kclist_add(&kcore_kernel, _stext, _end - _stext, KCORE_TEXT);
> 
> 	for_each_online_pgdat(pgdat)
> 		if (pgdat->bdata->node_bootmem_map)
>Index: mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/arch/x86/mm/init_64.c
>+++ mmotm-2.6.31-Jul16/arch/x86/mm/init_64.c
>@@ -647,8 +647,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to
> 
> #endif /* CONFIG_MEMORY_HOTPLUG */
> 
>-static struct kcore_list kcore_mem, kcore_kernel,
>-			 kcore_modules, kcore_vsyscall;
>+static struct kcore_list kcore_mem, kcore_modules, kcore_vsyscall;
> 
> void __init mem_init(void)
> {
>@@ -678,7 +677,6 @@ void __init mem_init(void)
> 
> 	/* Register memory areas for /proc/kcore */
> 	kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT, KCORE_RAM);
>-	kclist_add(&kcore_kernel, &_stext, _end - _stext, KCORE_TEXT);
> 	kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN,
> 			KCORE_OTHER);
> 	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_START,
>Index: mmotm-2.6.31-Jul16/fs/proc/kcore.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/fs/proc/kcore.c
>+++ mmotm-2.6.31-Jul16/fs/proc/kcore.c
>@@ -21,6 +21,7 @@
> #include <asm/uaccess.h>
> #include <asm/io.h>
> #include <linux/list.h>
>+#include <asm/sections.h>
> 
> #define CORE_STR "CORE"
> 
>@@ -408,10 +409,26 @@ read_kcore(struct file *file, char __use
> 
> static struct kcore_list kcore_vmalloc;
> 
>+#ifdef CONFIG_ARCH_PROC_KCORE_TEXT
>+static struct kcore_list kcore_text;
>+/*
>+ * If defined, special segment is used for mapping kernel text instead of
>+ * direct-map area. We need to create special TEXT section.
>+ */
>+static void __init proc_kcore_text_init(void)
>+{
>+	kclist_add(&kcore_text, _stext, _end - _stext, KCORE_TEXT);
>+}
>+#else
>+static void __init proc_kcore_text_init(void)
>+{
>+}
>+#endif
>+
> static int __init proc_kcore_init(void)
> {
> 	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &proc_kcore_operations);
>-
>+	proc_kcore_text_init();
> 	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
> 		VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
> 	return 0;
>
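
The CONFIG_ARCH_PROC_KCORE_TEXT pattern quoted above — a real initializer when the option is set, an empty stub otherwise, so the caller needs no #ifdef at the call site — can be sketched in plain C. The `#define` and the registration flag below are illustrative stand-ins (a Kconfig `def_bool` sets the real macro; `kclist_add()` does the real work):

```c
#include <assert.h>

/*
 * Pretend this arch selected the option; delete the #define to get the
 * empty stub. The flag below merely stands in for kclist_add().
 */
#define CONFIG_ARCH_PROC_KCORE_TEXT

static int kcore_text_registered;

#ifdef CONFIG_ARCH_PROC_KCORE_TEXT
static void proc_kcore_text_init(void)
{
	kcore_text_registered = 1;	/* would be kclist_add(&kcore_text, ...) */
}
#else
static void proc_kcore_text_init(void)
{
	/* no special text segment on this arch: nothing to register */
}
#endif
```

Either way, proc_kcore_init() calls proc_kcore_text_init() unconditionally, which is what lets the arch-specific kclist_add(KCORE_TEXT) calls be deleted.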


* Re: [RFC][PATCH 5/6] kcore: check physical memory range in correct way.
  2009-07-24  8:19 ` [RFC][PATCH 5/6] kcore: check physical memory range in correct way KAMEZAWA Hiroyuki
@ 2009-07-28 10:24   ` Amerigo Wang
  2009-07-28 23:58     ` KAMEZAWA Hiroyuki
  0 siblings, 1 reply; 12+ messages in thread
From: Amerigo Wang @ 2009-07-28 10:24 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: linux-kernel, xiyou.wangcong, akpm, ralf, benh, lethal

On Fri, Jul 24, 2009 at 05:19:27PM +0900, KAMEZAWA Hiroyuki wrote:
>From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
>For /proc/kcore, each arch registers its memory range by kclist_add().
>Usually,
>	- range of physical memory
>	- range of vmalloc area
>	- text, etc...
>are registered, but "range of physical memory" has some troubles.
>
>It isn't updated at memory hotplug and it tends to include
>unnecessary memory holes. Now, /proc/iomem (kernel/resource.c)
>includes the required physical memory range information and is
>properly updated at memory hotplug. So it's better to avoid
>duplicating that information in kcore's own code and instead
>rebuild the kclist for physical memory based on /proc/iomem.
>
>Note: IIUC, /proc/iomem information is used for kdump.
>
>Changelog: v2 -> v3
> - fixed HIGHMEM code (at least, no compile error).
> - enhanced the sanity check in !HIGHMEM code (see kclist_add_private()).
> - after this, x86-32, ia64, sh and powerpc have no private kclist code;
>   x86-64 and mips still have some.


<snip>



>Index: mmotm-2.6.31-Jul16/include/linux/ioport.h
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/include/linux/ioport.h
>+++ mmotm-2.6.31-Jul16/include/linux/ioport.h
>@@ -186,5 +186,13 @@ extern void __devm_release_region(struct
> extern int iomem_map_sanity_check(resource_size_t addr, unsigned long size);
> extern int iomem_is_exclusive(u64 addr);
> 
>+/*
>+ * Walk through all SYSTEM_RAM which is registered as resource.
>+ * arg is (start_pfn, nr_pages, private_arg_pointer)
>+ */
>+extern int walk_memory_resource(unsigned long start_pfn,
>+			unsigned long nr_pages, void *arg,
>+			int (*func)(unsigned long, unsigned long, void *));
>+
> #endif /* __ASSEMBLY__ */
> #endif	/* _LINUX_IOPORT_H */
>Index: mmotm-2.6.31-Jul16/include/linux/memory_hotplug.h
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/include/linux/memory_hotplug.h
>+++ mmotm-2.6.31-Jul16/include/linux/memory_hotplug.h
>@@ -191,13 +191,6 @@ static inline void register_page_bootmem
> 
> #endif /* ! CONFIG_MEMORY_HOTPLUG */
> 
>-/*
>- * Walk through all memory which is registered as resource.
>- * arg is (start_pfn, nr_pages, private_arg_pointer)
>- */
>-extern int walk_memory_resource(unsigned long start_pfn,
>-			unsigned long nr_pages, void *arg,
>-			int (*func)(unsigned long, unsigned long, void *));


Why moving it? :)

> 
> #ifdef CONFIG_MEMORY_HOTREMOVE
> 
>Index: mmotm-2.6.31-Jul16/kernel/resource.c
>===================================================================
>--- mmotm-2.6.31-Jul16.orig/kernel/resource.c
>+++ mmotm-2.6.31-Jul16/kernel/resource.c
>@@ -234,7 +234,7 @@ int release_resource(struct resource *ol
> 
> EXPORT_SYMBOL(release_resource);
> 
>-#if defined(CONFIG_MEMORY_HOTPLUG) && !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
>+#if !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
> /*
>  * Finds the lowest memory reosurce exists within [res->start.res->end)
>  * the caller must specify res->start, res->end, res->flags.


Shouldn't this part be in patch 6/6 instead of this one?

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC][PATCH 5/6] kcore: check physical memory range in correct way.
  2009-07-28 10:24   ` Amerigo Wang
@ 2009-07-28 23:58     ` KAMEZAWA Hiroyuki
  2009-07-29  8:50       ` Amerigo Wang
  0 siblings, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2009-07-28 23:58 UTC (permalink / raw)
  To: Amerigo Wang; +Cc: linux-kernel, akpm, ralf, benh, lethal

On Tue, 28 Jul 2009 18:24:11 +0800
Amerigo Wang <xiyou.wangcong@gmail.com> wrote:

> On Fri, Jul 24, 2009 at 05:19:27PM +0900, KAMEZAWA Hiroyuki wrote:
> 
> <snip>
> 
> 
> 
> >Index: mmotm-2.6.31-Jul16/include/linux/ioport.h
> >===================================================================
> >--- mmotm-2.6.31-Jul16.orig/include/linux/ioport.h
> >+++ mmotm-2.6.31-Jul16/include/linux/ioport.h
> >@@ -186,5 +186,13 @@ extern void __devm_release_region(struct
> > extern int iomem_map_sanity_check(resource_size_t addr, unsigned long size);
> > extern int iomem_is_exclusive(u64 addr);
> > 
> >+/*
> >+ * Walk through all SYSTEM_RAM which is registered as resource.
> >+ * arg is (start_pfn, nr_pages, private_arg_pointer)
> >+ */
> >+extern int walk_memory_resource(unsigned long start_pfn,
> >+			unsigned long nr_pages, void *arg,
> >+			int (*func)(unsigned long, unsigned long, void *));
> >+
> > #endif /* __ASSEMBLY__ */
> > #endif	/* _LINUX_IOPORT_H */
> >Index: mmotm-2.6.31-Jul16/include/linux/memory_hotplug.h
> >===================================================================
> >--- mmotm-2.6.31-Jul16.orig/include/linux/memory_hotplug.h
> >+++ mmotm-2.6.31-Jul16/include/linux/memory_hotplug.h
> >@@ -191,13 +191,6 @@ static inline void register_page_bootmem
> > 
> > #endif /* ! CONFIG_MEMORY_HOTPLUG */
> > 
> >-/*
> >- * Walk through all memory which is registered as resource.
> >- * arg is (start_pfn, nr_pages, private_arg_pointer)
> >- */
> >-extern int walk_memory_resource(unsigned long start_pfn,
> >-			unsigned long nr_pages, void *arg,
> >-			int (*func)(unsigned long, unsigned long, void *));
> 
> 
> Why moving it? :)
> 
Ah, this declaration is in memory_hotplug.h because it was only for memory
hotplug. For generic use, it's better to move it to ioport.h alongside the
other resource-related ops, I think.



> > 
> > #ifdef CONFIG_MEMORY_HOTREMOVE
> > 
> >Index: mmotm-2.6.31-Jul16/kernel/resource.c
> >===================================================================
> >--- mmotm-2.6.31-Jul16.orig/kernel/resource.c
> >+++ mmotm-2.6.31-Jul16/kernel/resource.c
> >@@ -234,7 +234,7 @@ int release_resource(struct resource *ol
> > 
> > EXPORT_SYMBOL(release_resource);
> > 
> >-#if defined(CONFIG_MEMORY_HOTPLUG) && !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
> >+#if !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
> > /*
> >  * Finds the lowest memory reosurce exists within [res->start.res->end)
> >  * the caller must specify res->start, res->end, res->flags.
> 
> 
> Shouldn't this part be in patch 6/6 instead of this one?
> 
Hmm, ok, I'll reorder 5/6 and 6/6 and define walk_system_ram_range() before
this patch.

Thanks,
-Kame


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC][PATCH 5/6] kcore: check physical memory range in correct way.
  2009-07-28 23:58     ` KAMEZAWA Hiroyuki
@ 2009-07-29  8:50       ` Amerigo Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Amerigo Wang @ 2009-07-29  8:50 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: Amerigo Wang, linux-kernel, akpm, ralf, benh, lethal

On Wed, Jul 29, 2009 at 08:58:10AM +0900, KAMEZAWA Hiroyuki wrote:
>On Tue, 28 Jul 2009 18:24:11 +0800
>Amerigo Wang <xiyou.wangcong@gmail.com> wrote:
>
>> On Fri, Jul 24, 2009 at 05:19:27PM +0900, KAMEZAWA Hiroyuki wrote:
>> 
>> <snip>
>> 
>> Why moving it? :)
>> 
>Ah, this declaration is in memory_hotplug.h because it was only for memory
>hotplug. For generic use, it's better to move it to ioport.h alongside the
>other resource-related ops, I think.
>


OK, it would be better if you put this in your changelog. ;)


>
>
>> <snip>
>> 
>> 
>> Shouldn't this part be in patch 6/6 instead of this one?
>> 
>Hmm, ok, I'll reorder 5/6 and 6/6 and define walk_system_ram_range() before
>this patch.

Thank you for keeping working on this! :)

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2009-07-29  8:47 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-07-24  8:08 [RFC][PATCH 0/6] kcore clean up and enhance. v3 KAMEZAWA Hiroyuki
2009-07-24  8:10 ` [RFC][PATCH 1/6] kcore: clean up to use generic list ops KAMEZAWA Hiroyuki
2009-07-24  8:11 ` [RFC][PATCH 2/6] kcore : add type attribute to kclist KAMEZAWA Hiroyuki
2009-07-24  8:13 ` [RFC][PATCH 3/6] kcore: unify vmalloc range entry KAMEZAWA Hiroyuki
2009-07-28 10:05   ` Amerigo Wang
2009-07-24  8:15 ` [RFC][PATCH 4/6] kcore: kcore unify text " KAMEZAWA Hiroyuki
2009-07-28 10:10   ` Amerigo Wang
2009-07-24  8:19 ` [RFC][PATCH 5/6] kcore: check physical memory range in correct way KAMEZAWA Hiroyuki
2009-07-28 10:24   ` Amerigo Wang
2009-07-28 23:58     ` KAMEZAWA Hiroyuki
2009-07-29  8:50       ` Amerigo Wang
2009-07-24  8:22 ` [RFC][PATCH 6/6] kcore: walk_system_ram_range() KAMEZAWA Hiroyuki
