linux-kernel.vger.kernel.org archive mirror
* [Patch:000/004] wait_table and zonelist initializing for memory hotadd
@ 2006-04-05 10:57 Yasunori Goto
  2006-04-05 11:01 ` [Patch:001/004] wait_table and zonelist initializing for memory hotadd (change to meminit for build_zonelist) Yasunori Goto
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Yasunori Goto @ 2006-04-05 10:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel ML, linux-mm, Yasunori Goto

Hi.

These patches are split out from v4 of the new node addition set;
I picked them out because v4 had grown into rather too many patches.
They are also useful on their own whenever a new zone becomes available:
when an empty zone becomes populated, its wait_table must be initialized
and the zonelists must be updated, so they make a sensible group to post
together.

  ex) x86-64 is a good example of new zone addition.
      - The system boots with memory below the 4G address only,
        so all of it is in ZONE_DMA32.
      - Memory above 4G is then hot-added and becomes ZONE_NORMAL,
        but ZONE_NORMAL's wait_table has not been initialized at
        that point.

This patch is for 2.6.17-rc1-mm1.

Please apply.

----------------------------
Changes since v4 of the hot-add set:
  - updated for 2.6.17-rc1-mm1.
  - changed the wait_table allocation from kmalloc() to vmalloc();
    vmalloc() is sufficient for it.

The v4 posting is here:
<description>
http://marc.theaimsgroup.com/?l=linux-mm&w=2&r=1&s=memory+hotplug+node+v.4&q=b
<patches>
http://marc.theaimsgroup.com/?l=linux-mm&w=2&r=1&s=memory+hotplug+node+v.4.&q=b



-- 
Yasunori Goto 




* [Patch:001/004] wait_table and zonelist initializing for memory hotadd (change to meminit for build_zonelist)
  2006-04-05 10:57 [Patch:000/004] wait_table and zonelist initializing for memory hotadd Yasunori Goto
@ 2006-04-05 11:01 ` Yasunori Goto
  2006-04-05 11:01 ` [Patch:002/004] wait_table and zonelist initializing for memory hotadd (add return code for init_current_empty_zone) Yasunori Goto
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Yasunori Goto @ 2006-04-05 11:01 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel ML, linux-mm


This patch changes the definitions of some functions and data
from __init to __meminit, so that they remain available after boot
and can be used by the hot-add code.
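For context, the effect of the annotation can be sketched roughly as in include/linux/init.h of that era (an assumption; simplified here): with memory hotplug enabled, the __meminit variants expand to nothing, so the annotated code and data stay resident after boot instead of being discarded with init memory.

```c
/*
 * Simplified sketch (assumption: modeled on include/linux/init.h,
 * circa 2.6.16/2.6.17).  __init code lives in a section that is
 * discarded once boot finishes; with CONFIG_MEMORY_HOTPLUG the
 * __meminit variants expand to nothing, so the annotated functions
 * and data stay resident and the hot-add path can still call them.
 */
#ifdef CONFIG_MEMORY_HOTPLUG
#define __meminit                       /* kept after boot */
#define __meminitdata
#else
#define __meminit       __init          /* discarded with init memory */
#define __meminitdata   __initdata
#endif
```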

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>

 include/linux/bootmem.h |    4 ++--
 mm/page_alloc.c         |   18 +++++++++---------
 2 files changed, 11 insertions(+), 11 deletions(-)

Index: pgdat10/mm/page_alloc.c
===================================================================
--- pgdat10.orig/mm/page_alloc.c	2006-04-05 16:04:03.000000000 +0900
+++ pgdat10/mm/page_alloc.c	2006-04-05 16:04:12.000000000 +0900
@@ -81,8 +81,8 @@ EXPORT_SYMBOL(zone_table);
 static char *zone_names[MAX_NR_ZONES] = { "DMA", "DMA32", "Normal", "HighMem" };
 int min_free_kbytes = 1024;
 
-unsigned long __initdata nr_kernel_pages;
-unsigned long __initdata nr_all_pages;
+unsigned long __meminitdata nr_kernel_pages;
+unsigned long __meminitdata nr_all_pages;
 
 #ifdef CONFIG_DEBUG_VM
 static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
@@ -1575,7 +1575,7 @@ void show_free_areas(void)
  *
  * Add all populated zones of a node to the zonelist.
  */
-static int __init build_zonelists_node(pg_data_t *pgdat,
+static int __meminit build_zonelists_node(pg_data_t *pgdat,
 			struct zonelist *zonelist, int nr_zones, int zone_type)
 {
 	struct zone *zone;
@@ -1611,7 +1611,7 @@ static inline int highest_zone(int zone_
 
 #ifdef CONFIG_NUMA
 #define MAX_NODE_LOAD (num_online_nodes())
-static int __initdata node_load[MAX_NUMNODES];
+static int __meminitdata node_load[MAX_NUMNODES];
 /**
  * find_next_best_node - find the next node that should appear in a given node's fallback list
  * @node: node whose fallback list we're appending
@@ -1626,7 +1626,7 @@ static int __initdata node_load[MAX_NUMN
  * on them otherwise.
  * It returns -1 if no node is found.
  */
-static int __init find_next_best_node(int node, nodemask_t *used_node_mask)
+static int __meminit find_next_best_node(int node, nodemask_t *used_node_mask)
 {
 	int n, val;
 	int min_val = INT_MAX;
@@ -1672,7 +1672,7 @@ static int __init find_next_best_node(in
 	return best_node;
 }
 
-static void __init build_zonelists(pg_data_t *pgdat)
+static void __meminit build_zonelists(pg_data_t *pgdat)
 {
 	int i, j, k, node, local_node;
 	int prev_node, load;
@@ -1724,7 +1724,7 @@ static void __init build_zonelists(pg_da
 
 #else	/* CONFIG_NUMA */
 
-static void __init build_zonelists(pg_data_t *pgdat)
+static void __meminit build_zonelists(pg_data_t *pgdat)
 {
 	int i, j, k, node, local_node;
 
@@ -2130,7 +2130,7 @@ static __meminit void init_currently_emp
  *   - mark all memory queues empty
  *   - clear the memory bitmaps
  */
-static void __init free_area_init_core(struct pglist_data *pgdat,
+static void __meminit free_area_init_core(struct pglist_data *pgdat,
 		unsigned long *zones_size, unsigned long *zholes_size)
 {
 	unsigned long j;
@@ -2210,7 +2210,7 @@ static void __init alloc_node_mem_map(st
 #endif /* CONFIG_FLAT_NODE_MEM_MAP */
 }
 
-void __init free_area_init_node(int nid, struct pglist_data *pgdat,
+void __meminit free_area_init_node(int nid, struct pglist_data *pgdat,
 		unsigned long *zones_size, unsigned long node_start_pfn,
 		unsigned long *zholes_size)
 {
Index: pgdat10/include/linux/bootmem.h
===================================================================
--- pgdat10.orig/include/linux/bootmem.h	2006-04-05 16:04:03.000000000 +0900
+++ pgdat10/include/linux/bootmem.h	2006-04-05 16:04:12.000000000 +0900
@@ -91,8 +91,8 @@ static inline void *alloc_remap(int nid,
 }
 #endif
 
-extern unsigned long __initdata nr_kernel_pages;
-extern unsigned long __initdata nr_all_pages;
+extern unsigned long nr_kernel_pages;
+extern unsigned long nr_all_pages;
 
 extern void *__init alloc_large_system_hash(const char *tablename,
 					    unsigned long bucketsize,

-- 
Yasunori Goto 




* [Patch:002/004] wait_table and zonelist initializing for memory hotadd (add return code for init_current_empty_zone)
  2006-04-05 10:57 [Patch:000/004] wait_table and zonelist initializing for memory hotadd Yasunori Goto
  2006-04-05 11:01 ` [Patch:001/004] wait_table and zonelist initializing for memory hotadd (change to meminit for build_zonelist) Yasunori Goto
@ 2006-04-05 11:01 ` Yasunori Goto
  2006-04-05 11:01 ` [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization) Yasunori Goto
  2006-04-05 11:01 ` [Patch:004/004] wait_table and zonelist initializing for memory hotadd (update zonelists) Yasunori Goto
  3 siblings, 0 replies; 8+ messages in thread
From: Yasunori Goto @ 2006-04-05 11:01 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel ML, linux-mm

When __add_zone() is called against an empty (not yet populated) zone,
we have to initialize a zone that was not initialized at boot time.
But init_currently_empty_zone() may fail because it allocates the
wait table, so this patch makes its error code propagate to the callers.

The changes to the wait_table itself are in the next patch.


Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>

 include/linux/mmzone.h |    3 +++
 mm/memory_hotplug.c    |   15 +++++++++++++--
 mm/page_alloc.c        |   11 ++++++++---
 3 files changed, 24 insertions(+), 5 deletions(-)

Index: pgdat10/mm/page_alloc.c
===================================================================
--- pgdat10.orig/mm/page_alloc.c	2006-03-31 14:43:33.000000000 +0900
+++ pgdat10/mm/page_alloc.c	2006-03-31 15:50:08.000000000 +0900
@@ -2109,8 +2109,9 @@ static __meminit void zone_pcp_init(stru
 			zone->name, zone->present_pages, batch);
 }
 
-static __meminit void init_currently_empty_zone(struct zone *zone,
-		unsigned long zone_start_pfn, unsigned long size)
+__meminit int init_currently_empty_zone(struct zone *zone,
+					unsigned long zone_start_pfn,
+					unsigned long size)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
 
@@ -2122,6 +2123,8 @@ static __meminit void init_currently_emp
 	memmap_init(size, pgdat->node_id, zone_idx(zone), zone_start_pfn);
 
 	zone_init_free_lists(pgdat, zone, zone->spanned_pages);
+
+	return 0;
 }
 
 /*
@@ -2136,6 +2139,7 @@ static void __init free_area_init_core(s
 	unsigned long j;
 	int nid = pgdat->node_id;
 	unsigned long zone_start_pfn = pgdat->node_start_pfn;
+	int ret;
 
 	pgdat_resize_init(pgdat);
 	pgdat->nr_zones = 0;
@@ -2177,7 +2181,8 @@ static void __init free_area_init_core(s
 			continue;
 
 		zonetable_add(zone, nid, j, zone_start_pfn, size);
-		init_currently_empty_zone(zone, zone_start_pfn, size);
+		ret = init_currently_empty_zone(zone, zone_start_pfn, size);
+		BUG_ON(ret);
 		zone_start_pfn += size;
 	}
 }
Index: pgdat10/mm/memory_hotplug.c
===================================================================
--- pgdat10.orig/mm/memory_hotplug.c	2006-03-22 17:25:06.000000000 +0900
+++ pgdat10/mm/memory_hotplug.c	2006-03-31 15:50:08.000000000 +0900
@@ -26,7 +26,7 @@
 
 extern void zonetable_add(struct zone *zone, int nid, int zid, unsigned long pfn,
 			  unsigned long size);
-static void __add_zone(struct zone *zone, unsigned long phys_start_pfn)
+static int __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	int nr_pages = PAGES_PER_SECTION;
@@ -34,8 +34,15 @@ static void __add_zone(struct zone *zone
 	int zone_type;
 
 	zone_type = zone - pgdat->node_zones;
+	if (!populated_zone(zone)) {
+		int ret = 0;
+		ret = init_currently_empty_zone(zone, phys_start_pfn, nr_pages);
+		if (ret < 0)
+			return ret;
+	}
 	memmap_init_zone(nr_pages, nid, zone_type, phys_start_pfn);
 	zonetable_add(zone, nid, zone_type, phys_start_pfn, nr_pages);
+	return 0;
 }
 
 extern int sparse_add_one_section(struct zone *zone, unsigned long start_pfn,
@@ -50,7 +57,11 @@ static int __add_section(struct zone *zo
 	if (ret < 0)
 		return ret;
 
-	__add_zone(zone, phys_start_pfn);
+	ret = __add_zone(zone, phys_start_pfn);
+
+	if (ret < 0)
+		return ret;
+
 	return register_new_memory(__pfn_to_section(phys_start_pfn));
 }
 
Index: pgdat10/include/linux/mmzone.h
===================================================================
--- pgdat10.orig/include/linux/mmzone.h	2006-03-31 14:43:32.000000000 +0900
+++ pgdat10/include/linux/mmzone.h	2006-03-31 15:50:08.000000000 +0900
@@ -332,6 +332,9 @@ void wakeup_kswapd(struct zone *zone, in
 int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 		int classzone_idx, int alloc_flags);
 
+extern int init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
+				     unsigned long size);
+
 #ifdef CONFIG_HAVE_MEMORY_PRESENT
 void memory_present(int nid, unsigned long start, unsigned long end);
 #else

-- 
Yasunori Goto 




* [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization)
  2006-04-05 10:57 [Patch:000/004] wait_table and zonelist initializing for memory hotadd Yasunori Goto
  2006-04-05 11:01 ` [Patch:001/004] wait_table and zonelist initializing for memory hotadd (change to meminit for build_zonelist) Yasunori Goto
  2006-04-05 11:01 ` [Patch:002/004] wait_table and zonelist initializing for memory hotadd (add return code for init_current_empty_zone) Yasunori Goto
@ 2006-04-05 11:01 ` Yasunori Goto
  2006-04-06 22:05   ` Dave Hansen
  2006-04-05 11:01 ` [Patch:004/004] wait_table and zonelist initializing for memory hotadd (update zonelists) Yasunori Goto
  3 siblings, 1 reply; 8+ messages in thread
From: Yasunori Goto @ 2006-04-05 11:01 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel ML, linux-mm


The wait_table is sized according to the zone size at boot time.
But when memory hotplug is enabled, we cannot know the maximum zone
size in advance, since it can change, and resizing the wait_table
later is hard.

So the kernel now allocates and initializes the wait_table at its
maximum size.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>

 mm/page_alloc.c |   45 +++++++++++++++++++++++++++++++++++++++------
 1 files changed, 39 insertions(+), 6 deletions(-)

Index: pgdat10/mm/page_alloc.c
===================================================================
--- pgdat10.orig/mm/page_alloc.c	2006-04-05 16:04:22.000000000 +0900
+++ pgdat10/mm/page_alloc.c	2006-04-05 16:10:17.000000000 +0900
@@ -1785,6 +1785,7 @@ void __init build_all_zonelists(void)
  */
 #define PAGES_PER_WAITQUEUE	256
 
+#ifdef CONFIG_MEMORY_HOTPLUG
 static inline unsigned long wait_table_size(unsigned long pages)
 {
 	unsigned long size = 1;
@@ -1803,6 +1804,17 @@ static inline unsigned long wait_table_s
 
 	return max(size, 4UL);
 }
+#else
+/*
+ * XXX: Because the zone size might be changed by hot-add,
+ *      it is hard to determine a suitable size for wait_table
+ *      in the traditional way.  So, we use the maximum size for now.
+ */
+static inline unsigned long wait_table_size(unsigned long pages)
+{
+	return 4096UL;
+}
+#endif
 
 /*
  * This is an integer logarithm so that shifts can be used later
@@ -2071,10 +2083,11 @@ void __init setup_per_cpu_pageset(void)
 #endif
 
 static __meminit
-void zone_wait_table_init(struct zone *zone, unsigned long zone_size_pages)
+int zone_wait_table_init(struct zone *zone, unsigned long zone_size_pages)
 {
 	int i;
 	struct pglist_data *pgdat = zone->zone_pgdat;
+	size_t alloc_size;
 
 	/*
 	 * The per-page waitqueue mechanism uses hashed waitqueues
@@ -2082,12 +2095,30 @@ void zone_wait_table_init(struct zone *z
 	 */
 	zone->wait_table_size = wait_table_size(zone_size_pages);
 	zone->wait_table_bits =	wait_table_bits(zone->wait_table_size);
-	zone->wait_table = (wait_queue_head_t *)
-		alloc_bootmem_node(pgdat, zone->wait_table_size
-					* sizeof(wait_queue_head_t));
+	alloc_size = zone->wait_table_size * sizeof(wait_queue_head_t);
+
+	if (system_state == SYSTEM_BOOTING) {
+		zone->wait_table = (wait_queue_head_t *)
+			alloc_bootmem_node(pgdat, alloc_size);
+	} else {
+		/*
+		 * XXX: This path is taken when a zone whose size was 0
+		 *      gains memory via hot-add.  But it may also be a
+		 *      whole new node that is being hot-added, and in
+		 *      that case vmalloc() cannot use the new node's
+		 *      memory, because this wait_table must already be
+		 *      initialized before the new node can be used.
+		 *      Using the new node's own memory here will need
+		 *      further work.
+		 */
+		zone->wait_table = (wait_queue_head_t *)vmalloc(alloc_size);
+	}
+	if (!zone->wait_table)
+		return -ENOMEM;
 
 	for(i = 0; i < zone->wait_table_size; ++i)
 		init_waitqueue_head(zone->wait_table + i);
+	return 0;
 }
 
 static __meminit void zone_pcp_init(struct zone *zone)
@@ -2114,8 +2145,10 @@ __meminit int init_currently_empty_zone(
 					unsigned long size)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
-
-	zone_wait_table_init(zone, size);
+	int ret;
+	ret = zone_wait_table_init(zone, size);
+	if (ret)
+		return ret;
 	pgdat->nr_zones = zone_idx(zone) + 1;
 
 	zone->zone_start_pfn = zone_start_pfn;

-- 
Yasunori Goto 




* [Patch:004/004] wait_table and zonelist initializing for memory hotadd (update zonelists)
  2006-04-05 10:57 [Patch:000/004] wait_table and zonelist initializing for memory hotadd Yasunori Goto
                   ` (2 preceding siblings ...)
  2006-04-05 11:01 ` [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization) Yasunori Goto
@ 2006-04-05 11:01 ` Yasunori Goto
  3 siblings, 0 replies; 8+ messages in thread
From: Yasunori Goto @ 2006-04-05 11:01 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel ML, linux-mm

In the current code, zonelists are built once at boot and never
modified. But memory hotplug can add a new zone or pgdat, so they
must be updated.

This patch modifies build_all_zonelists() so that it can reconfigure
the pgdats' zonelists after boot.

To update them safely, the patch uses stop_machine_run(), which keeps
the other CPUs from touching the zonelists while they are rebuilt.

In the previous version (V2), the kernel updated them right after zone
initialization. But at that point the new zone's present_pages is
still 0, because online_page() has not been called yet, and
build_zonelists() checks present_pages to find populated zones.
That was too early, so the rebuild now happens after online_pages().

Signed-off-by: Yasunori Goto     <y-goto@jp.fujitsu.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

 mm/memory_hotplug.c |   12 ++++++++++++
 mm/page_alloc.c     |   26 +++++++++++++++++++++-----
 2 files changed, 33 insertions(+), 5 deletions(-)

Index: pgdat10/mm/page_alloc.c
===================================================================
--- pgdat10.orig/mm/page_alloc.c	2006-04-04 20:42:51.000000000 +0900
+++ pgdat10/mm/page_alloc.c	2006-04-04 20:42:52.000000000 +0900
@@ -37,6 +37,7 @@
 #include <linux/nodemask.h>
 #include <linux/vmalloc.h>
 #include <linux/mempolicy.h>
+#include <linux/stop_machine.h>
 
 #include <asm/tlbflush.h>
 #include "internal.h"
@@ -1762,14 +1763,29 @@ static void __init build_zonelists(pg_da
 
 #endif	/* CONFIG_NUMA */
 
-void __init build_all_zonelists(void)
+/* return values int ....just for stop_machine_run() */
+static int __meminit __build_all_zonelists(void *dummy)
 {
-	int i;
+	int nid;
+	for_each_online_node(nid)
+		build_zonelists(NODE_DATA(nid));
+	return 0;
+}
+
+void __meminit build_all_zonelists(void)
+{
+	if (system_state == SYSTEM_BOOTING) {
+		__build_all_zonelists(0);
+		cpuset_init_current_mems_allowed();
+	} else {
+		/* we have to stop all cpus to guarantee there is no user
+		   of zonelist */
+		stop_machine_run(__build_all_zonelists, NULL, NR_CPUS);
+		/* cpuset refresh routine should be here */
+	}
 
-	for_each_online_node(i)
-		build_zonelists(NODE_DATA(i));
 	printk("Built %i zonelists\n", num_online_nodes());
-	cpuset_init_current_mems_allowed();
+
 }
 
 /*
Index: pgdat10/mm/memory_hotplug.c
===================================================================
--- pgdat10.orig/mm/memory_hotplug.c	2006-04-04 20:42:49.000000000 +0900
+++ pgdat10/mm/memory_hotplug.c	2006-04-04 20:42:52.000000000 +0900
@@ -123,6 +123,7 @@ int online_pages(unsigned long pfn, unsi
 	unsigned long flags;
 	unsigned long onlined_pages = 0;
 	struct zone *zone;
+	int need_zonelists_rebuild = 0;
 
 	/*
 	 * This doesn't need a lock to do pfn_to_page().
@@ -135,6 +136,14 @@ int online_pages(unsigned long pfn, unsi
 	grow_pgdat_span(zone->zone_pgdat, pfn, pfn + nr_pages);
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 
+	/*
+	 * If this zone is not populated, then it is not in zonelist.
+	 * This means the page allocator ignores this zone.
+	 * So, zonelist must be updated after online.
+	 */
+	if (!populated_zone(zone))
+		need_zonelists_rebuild = 1;
+
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page = pfn_to_page(pfn + i);
 		online_page(page);
@@ -145,5 +154,8 @@ int online_pages(unsigned long pfn, unsi
 
 	setup_per_zone_pages_min();
 
+	if (need_zonelists_rebuild)
+		build_all_zonelists();
+
 	return 0;
 }

-- 
Yasunori Goto 




* Re: [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization)
  2006-04-05 11:01 ` [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization) Yasunori Goto
@ 2006-04-06 22:05   ` Dave Hansen
  2006-04-07  3:10     ` Yasunori Goto
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Hansen @ 2006-04-06 22:05 UTC (permalink / raw)
  To: Yasunori Goto; +Cc: Andrew Morton, Linux Kernel ML, linux-mm

On Wed, 2006-04-05 at 20:01 +0900, Yasunori Goto wrote:
> 
> +#ifdef CONFIG_MEMORY_HOTPLUG
>  static inline unsigned long wait_table_size(unsigned long pages)
>  {
>         unsigned long size = 1;
> @@ -1803,6 +1804,17 @@ static inline unsigned long wait_table_s
>  
>         return max(size, 4UL);
>  }
> +#else
> +/*
> + * XXX: Because zone size might be changed by hot-add,
> + *      It is hard to determin suitable size for wait_table as
> traditional.
> + *      So, we use maximum size now.
> + */
> +static inline unsigned long wait_table_size(unsigned long pages)
> +{
> +       return 4096UL;
> +}
> +#endif 

Sorry for the slow response.  My IBM email is temporarily dead.

Couple of things.  

First, is there anything useful that prepending UL to the constants does
to the functions?  It ends up looking a little messy to me.

Also, I thought you were going to put a big fat comment on there about
doing it correctly in the future.  It would also be nice to quantify the
wasted space in terms of bytes, just so that people get a feel for it.

Oh, and wait_table_size() needs a unit.  wait_table_size_bytes() sounds
like a winner to me.

-- Dave



* Re: [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization)
  2006-04-06 22:05   ` Dave Hansen
@ 2006-04-07  3:10     ` Yasunori Goto
  2006-04-07  3:12       ` Dave Hansen
  0 siblings, 1 reply; 8+ messages in thread
From: Yasunori Goto @ 2006-04-07  3:10 UTC (permalink / raw)
  To: Dave Hansen; +Cc: Andrew Morton, Linux Kernel ML, linux-mm

> On Wed, 2006-04-05 at 20:01 +0900, Yasunori Goto wrote:
> > 
> > +#ifdef CONFIG_MEMORY_HOTPLUG
> >  static inline unsigned long wait_table_size(unsigned long pages)
> >  {
> >         unsigned long size = 1;
> > @@ -1803,6 +1804,17 @@ static inline unsigned long wait_table_s
> >  
> >         return max(size, 4UL);
> >  }
> > +#else
> > +/*
> > + * XXX: Because zone size might be changed by hot-add,
> > + *      It is hard to determin suitable size for wait_table as
> > traditional.
> > + *      So, we use maximum size now.
> > + */
> > +static inline unsigned long wait_table_size(unsigned long pages)
> > +{
> > +       return 4096UL;
> > +}
> > +#endif 
> 
> Sorry for the slow response.  My IBM email is temporarily dead.
> 
> Couple of things.  
> 
> First, is there anything useful that prepending UL to the constants does
> to the functions?  It ends up looking a little messy to me.

I wanted to show that this is the maximum size of the original
wait_table_size(); the original one uses 4096UL for it.

> Also, I thought you were going to put a big fat comment on there about
> doing it correctly in the future.  It would also be nice to quantify the
> wasted space in terms of bytes, just so that people get a feel for it.

Hmmm. Ok.

> Oh, and wait_table_size() needs a unit.  wait_table_size_bytes() sounds
> like a winner to me.

This size is not in bytes; it is the number of hash table entries.
So wait_table_hash_size() or wait_table_entry_size() might be better.

Thanks.

-- 
Yasunori Goto 




* Re: [Patch:003/004] wait_table and zonelist initializing for memory hotadd (wait_table initialization)
  2006-04-07  3:10     ` Yasunori Goto
@ 2006-04-07  3:12       ` Dave Hansen
  0 siblings, 0 replies; 8+ messages in thread
From: Dave Hansen @ 2006-04-07  3:12 UTC (permalink / raw)
  To: Yasunori Goto; +Cc: Andrew Morton, Linux Kernel ML, linux-mm

On Fri, 2006-04-07 at 12:10 +0900, Yasunori Goto wrote:
> 
> This size doesn't mean bytes. It is hash table entry size.
> So, wait_table_hash_size() or wait_table_entry_size() might be better.

wait_table_hash_nr_entries() is pretty obvious, although a bit long.

-- Dave


