From: Michal Hocko <mhocko@kernel.org>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>, Vlastimil Babka <vbabka@suse.cz>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Reza Arbab <arbab@linux.vnet.ibm.com>,
	Yasuaki Ishimatsu <yasu.isimatu@gmail.com>,
	Tang Chen <tangchen@cn.fujitsu.com>,
	qiuxishi@huawei.com, Kani Toshimitsu <toshi.kani@hpe.com>,
	slaoub@gmail.com, Joonsoo Kim <js1304@gmail.com>,
	Andi Kleen <ak@linux.intel.com>,
	David Rientjes <rientjes@google.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 6/6] mm, memory_hotplug: remove unused cruft after memory hotplug rework
Date: Fri, 31 Mar 2017 09:46:49 +0200	[thread overview]
Message-ID: <20170331074648.GH27098@dhcp22.suse.cz> (raw)
In-Reply-To: <20170330115454.32154-7-mhocko@kernel.org>

Rebased on top of the previous changes in this series:
---
From dfb7cab2783d2175ad8355ce65943a28e43d8700 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Wed, 29 Mar 2017 18:08:51 +0200
Subject: [PATCH] mm, memory_hotplug: remove unused cruft after memory hotplug
 rework

arch_add_memory no longer needs the for_device parameter because
devm_memremap_pages already does everything it needs by itself.

zone_for_memory no longer has any users, and neither does the whole zone
shifting infrastructure, so drop them as well.
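
For illustration only (not part of the diff below), a minimal sketch of
what a hot-add caller looks like with the simplified interface. It follows
the prototypes declared in memory_hotplug.h in this series; the function
name is made up and the ZONE_DEVICE target is merely an example of the
caller picking the zone itself:

	/*
	 * Illustrative sketch only: arch_add_memory() now just creates the
	 * sections/memmap, and the caller decides which zone the pfn range
	 * ends up in, e.g. ZONE_DEVICE in devm_memremap_pages().
	 */
	static int example_hotadd(int nid, u64 start, u64 size)
	{
		unsigned long start_pfn = start >> PAGE_SHIFT;
		unsigned long nr_pages = size >> PAGE_SHIFT;
		int ret;

		mem_hotplug_begin();
		ret = arch_add_memory(nid, start, size);	/* no for_device flag */
		if (!ret)
			move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
					       start_pfn, nr_pages);
		mem_hotplug_done();
		return ret;
	}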

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 arch/ia64/mm/init.c            |   2 +-
 arch/powerpc/mm/mem.c          |   2 +-
 arch/s390/mm/init.c            |   2 +-
 arch/sh/mm/init.c              |   3 +-
 arch/x86/mm/init_32.c          |   2 +-
 arch/x86/mm/init_64.c          |   2 +-
 include/linux/memory_hotplug.h |   4 +-
 kernel/memremap.c              |   2 +-
 mm/memory_hotplug.c            | 209 +----------------------------------------
 9 files changed, 9 insertions(+), 219 deletions(-)

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 0fb7f3946785..6ebb570f1eae 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -645,7 +645,7 @@ mem_init (void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
+int arch_add_memory(int nid, u64 start, u64 size)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index db1369a7f69f..707a4146938b 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -126,7 +126,7 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end)
 	return -ENODEV;
 }
 
-int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
+int arch_add_memory(int nid, u64 start, u64 size)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 9e8c515ee29f..7d7591b63b57 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -153,7 +153,7 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
 #endif
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
+int arch_add_memory(int nid, u64 start, u64 size)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long size_pages = PFN_DOWN(size);
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 95261b66bcf3..4e4afa0ab3c3 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -485,13 +485,12 @@ void free_initrd_mem(unsigned long start, unsigned long end)
 #endif
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
+int arch_add_memory(int nid, u64 start, u64 size)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
-
 	/* We only have ZONE_NORMAL, so this is easy.. */
 	ret = __add_pages(nid, start_pfn, nr_pages);
 	if (unlikely(ret))
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 5de79aa7d6ce..b389f2ec75f9 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -816,7 +816,7 @@ void __init mem_init(void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
+int arch_add_memory(int nid, u64 start, u64 size)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bc53f24e6703..20f575333b08 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -637,7 +637,7 @@ static void  update_end_of_memory_vars(u64 start, u64 size)
 	}
 }
 
-int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
+int arch_add_memory(int nid, u64 start, u64 size)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 63577ce57028..d5adb7c468d1 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -274,9 +274,7 @@ extern int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn,
 		void *arg, int (*func)(struct memory_block *, void *));
 extern int add_memory(int nid, u64 start, u64 size);
 extern int add_memory_resource(int nid, struct resource *resource, bool online);
-extern int zone_for_memory(int nid, u64 start, u64 size, int zone_default,
-		bool for_device);
-extern int arch_add_memory(int nid, u64 start, u64 size, bool for_device);
+extern int arch_add_memory(int nid, u64 start, u64 size);
 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 		unsigned long nr_pages);
 extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 1b137649cb82..808f7974918a 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -366,7 +366,7 @@ void *devm_memremap_pages(struct device *dev, struct resource *res,
 
 	lock_device_hotplug();
 	mem_hotplug_begin();
-	error = arch_add_memory(nid, align_start, align_size, true);
+	error = arch_add_memory(nid, align_start, align_size);
 	if (!error)
 		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
 				align_start, align_size);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3a8b24e47a2b..b9dc1c4e26c3 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -295,180 +295,6 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
 }
 #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
 
-static void __meminit grow_zone_span(struct zone *zone, unsigned long start_pfn,
-				     unsigned long end_pfn)
-{
-	unsigned long old_zone_end_pfn;
-
-	zone_span_writelock(zone);
-
-	old_zone_end_pfn = zone_end_pfn(zone);
-	if (zone_is_empty(zone) || start_pfn < zone->zone_start_pfn)
-		zone->zone_start_pfn = start_pfn;
-
-	zone->spanned_pages = max(old_zone_end_pfn, end_pfn) -
-				zone->zone_start_pfn;
-
-	zone_span_writeunlock(zone);
-}
-
-static void resize_zone(struct zone *zone, unsigned long start_pfn,
-		unsigned long end_pfn)
-{
-	zone_span_writelock(zone);
-
-	if (end_pfn - start_pfn) {
-		zone->zone_start_pfn = start_pfn;
-		zone->spanned_pages = end_pfn - start_pfn;
-	} else {
-		/*
-		 * make it consist as free_area_init_core(),
-		 * if spanned_pages = 0, then keep start_pfn = 0
-		 */
-		zone->zone_start_pfn = 0;
-		zone->spanned_pages = 0;
-	}
-
-	zone_span_writeunlock(zone);
-}
-
-static void fix_zone_id(struct zone *zone, unsigned long start_pfn,
-		unsigned long end_pfn)
-{
-	enum zone_type zid = zone_idx(zone);
-	int nid = zone->zone_pgdat->node_id;
-	unsigned long pfn;
-
-	for (pfn = start_pfn; pfn < end_pfn; pfn++)
-		set_page_links(pfn_to_page(pfn), zid, nid, pfn);
-}
-
-static void __ref ensure_zone_is_initialized(struct zone *zone,
-			unsigned long start_pfn, unsigned long num_pages)
-{
-	if (zone_is_empty(zone))
-		init_currently_empty_zone(zone, start_pfn, num_pages);
-}
-
-static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
-		unsigned long start_pfn, unsigned long end_pfn)
-{
-	unsigned long flags;
-	unsigned long z1_start_pfn;
-
-	ensure_zone_is_initialized(z1, start_pfn, end_pfn - start_pfn);
-
-	pgdat_resize_lock(z1->zone_pgdat, &flags);
-
-	/* can't move pfns which are higher than @z2 */
-	if (end_pfn > zone_end_pfn(z2))
-		goto out_fail;
-	/* the move out part must be at the left most of @z2 */
-	if (start_pfn > z2->zone_start_pfn)
-		goto out_fail;
-	/* must included/overlap */
-	if (end_pfn <= z2->zone_start_pfn)
-		goto out_fail;
-
-	/* use start_pfn for z1's start_pfn if z1 is empty */
-	if (!zone_is_empty(z1))
-		z1_start_pfn = z1->zone_start_pfn;
-	else
-		z1_start_pfn = start_pfn;
-
-	resize_zone(z1, z1_start_pfn, end_pfn);
-	resize_zone(z2, end_pfn, zone_end_pfn(z2));
-
-	pgdat_resize_unlock(z1->zone_pgdat, &flags);
-
-	fix_zone_id(z1, start_pfn, end_pfn);
-
-	return 0;
-out_fail:
-	pgdat_resize_unlock(z1->zone_pgdat, &flags);
-	return -1;
-}
-
-static int __meminit move_pfn_range_right(struct zone *z1, struct zone *z2,
-		unsigned long start_pfn, unsigned long end_pfn)
-{
-	unsigned long flags;
-	unsigned long z2_end_pfn;
-
-	ensure_zone_is_initialized(z2, start_pfn, end_pfn - start_pfn);
-
-	pgdat_resize_lock(z1->zone_pgdat, &flags);
-
-	/* can't move pfns which are lower than @z1 */
-	if (z1->zone_start_pfn > start_pfn)
-		goto out_fail;
-	/* the move out part mast at the right most of @z1 */
-	if (zone_end_pfn(z1) >  end_pfn)
-		goto out_fail;
-	/* must included/overlap */
-	if (start_pfn >= zone_end_pfn(z1))
-		goto out_fail;
-
-	/* use end_pfn for z2's end_pfn if z2 is empty */
-	if (!zone_is_empty(z2))
-		z2_end_pfn = zone_end_pfn(z2);
-	else
-		z2_end_pfn = end_pfn;
-
-	resize_zone(z1, z1->zone_start_pfn, start_pfn);
-	resize_zone(z2, start_pfn, z2_end_pfn);
-
-	pgdat_resize_unlock(z1->zone_pgdat, &flags);
-
-	fix_zone_id(z2, start_pfn, end_pfn);
-
-	return 0;
-out_fail:
-	pgdat_resize_unlock(z1->zone_pgdat, &flags);
-	return -1;
-}
-
-static void __meminit grow_pgdat_span(struct pglist_data *pgdat, unsigned long start_pfn,
-				      unsigned long end_pfn)
-{
-	unsigned long old_pgdat_end_pfn = pgdat_end_pfn(pgdat);
-
-	if (!pgdat->node_spanned_pages || start_pfn < pgdat->node_start_pfn)
-		pgdat->node_start_pfn = start_pfn;
-
-	pgdat->node_spanned_pages = max(old_pgdat_end_pfn, end_pfn) -
-					pgdat->node_start_pfn;
-}
-
-static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
-{
-	struct pglist_data *pgdat = zone->zone_pgdat;
-	int nr_pages = PAGES_PER_SECTION;
-	int nid = pgdat->node_id;
-	int zone_type;
-	unsigned long flags, pfn;
-
-	zone_type = zone - pgdat->node_zones;
-	ensure_zone_is_initialized(zone, phys_start_pfn, nr_pages);
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	grow_zone_span(zone, phys_start_pfn, phys_start_pfn + nr_pages);
-	grow_pgdat_span(zone->zone_pgdat, phys_start_pfn,
-			phys_start_pfn + nr_pages);
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
-	memmap_init_zone(nr_pages, nid, zone_type,
-			 phys_start_pfn, MEMMAP_HOTPLUG);
-
-	/* online_page_range is called later and expects pages reserved */
-	for (pfn = phys_start_pfn; pfn < phys_start_pfn + nr_pages; pfn++) {
-		if (!pfn_valid(pfn))
-			continue;
-
-		SetPageReserved(pfn_to_page(pfn));
-	}
-	return 0;
-}
-
 static int __meminit __add_section(int nid, unsigned long phys_start_pfn)
 {
 	int ret;
@@ -1332,39 +1158,6 @@ static int check_hotplug_memory_range(u64 start, u64 size)
 	return 0;
 }
 
-/*
- * If movable zone has already been setup, newly added memory should be check.
- * If its address is higher than movable zone, it should be added as movable.
- * Without this check, movable zone may overlap with other zone.
- */
-static int should_add_memory_movable(int nid, u64 start, u64 size)
-{
-	unsigned long start_pfn = start >> PAGE_SHIFT;
-	pg_data_t *pgdat = NODE_DATA(nid);
-	struct zone *movable_zone = pgdat->node_zones + ZONE_MOVABLE;
-
-	if (zone_is_empty(movable_zone))
-		return 0;
-
-	if (movable_zone->zone_start_pfn <= start_pfn)
-		return 1;
-
-	return 0;
-}
-
-int zone_for_memory(int nid, u64 start, u64 size, int zone_default,
-		bool for_device)
-{
-#ifdef CONFIG_ZONE_DEVICE
-	if (for_device)
-		return ZONE_DEVICE;
-#endif
-	if (should_add_memory_movable(nid, start, size))
-		return ZONE_MOVABLE;
-
-	return zone_default;
-}
-
 static int online_memory_block(struct memory_block *mem, void *arg)
 {
 	return device_online(&mem->dev);
@@ -1410,7 +1203,7 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online)
 	}
 
 	/* call arch's memory hotadd */
-	ret = arch_add_memory(nid, start, size, false);
+	ret = arch_add_memory(nid, start, size);
 
 	if (ret < 0)
 		goto error;
-- 
2.11.0

-- 
Michal Hocko
SUSE Labs


