linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/13] Swap-over-NBD without deadlocking
@ 2011-04-26  7:36 Mel Gorman
  2011-04-26  7:36 ` [PATCH 01/13] mm: Serialize access to min_free_kbytes Mel Gorman
                   ` (14 more replies)
  0 siblings, 15 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Changelog since V1
  o Rebase on top of mmotm
  o Use atomic_t for memalloc_socks		(David Miller)
  o Remove use of sk_memalloc_socks in vmscan	(Neil Brown)
  o Check throttle within prepare_to_wait	(Neil Brown)
  o Add statistics on throttling instead of printk

Swapping over NBD is something that is technically possible but not
often advised. While there are a number of guides on the internet
on how to configure it and nbd-client supports a -swap switch to
"prevent deadlocks", the fact of the matter is that a machine using
NBD for swap can lock up within minutes if swap is used intensively.

The problem is that network block devices do not use mempools like
normal block devices do. As the host cannot control where they receive
packets from, they cannot reliably work out in advance how much memory
they might need.
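The mempool guarantee that normal block drivers rely on can be sketched
in userspace as follows. This is a toy model only (hypothetical
toy_mempool names; the real kernel API is mempool_create() and
mempool_alloc()): a reserve of buffers sized at init time that
allocation falls back to when the normal allocator fails. It is exactly
this advance sizing that NBD cannot do, because incoming network
traffic is unbounded.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy model of a mempool: a block driver preallocates min_nr buffers
 * at init time so writeback can always make forward progress even when
 * the normal allocator fails under memory pressure. Names are
 * illustrative, not the kernel API. */
#define TOY_POOL_MAX 8

struct toy_mempool {
	void *reserve[TOY_POOL_MAX];
	int curr_nr;		/* buffers currently held in reserve */
};

static struct toy_mempool *toy_mempool_create(int min_nr, size_t size)
{
	struct toy_mempool *pool;
	int i;

	if (min_nr > TOY_POOL_MAX)
		return NULL;
	pool = calloc(1, sizeof(*pool));
	if (!pool)
		return NULL;
	for (i = 0; i < min_nr; i++) {
		pool->reserve[i] = malloc(size);
		if (!pool->reserve[i])
			return NULL;
	}
	pool->curr_nr = min_nr;
	return pool;
}

/* 'allocator_failed' stands in for the normal allocation path failing
 * under memory pressure. */
static void *toy_mempool_alloc(struct toy_mempool *pool, size_t size,
			       int allocator_failed)
{
	if (!allocator_failed)
		return malloc(size);
	/* Fall back to the preallocated reserve */
	if (pool->curr_nr > 0)
		return pool->reserve[--pool->curr_nr];
	return NULL;	/* the real mempool_alloc() would sleep and retry */
}

static void toy_mempool_free(struct toy_mempool *pool, void *element)
{
	/* Refill the reserve first; only then release to the allocator */
	if (pool->curr_nr < TOY_POOL_MAX)
		pool->reserve[pool->curr_nr++] = element;
	else
		free(element);
}
```

The point is that min_nr must be chosen up front; a network block
device has no bound it can choose in advance.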

Some years ago, Peter Zijlstra developed a series of patches
supporting swap over NFS that some distributions are carrying in
their kernels. This patch series borrows very heavily from Peter's work
to support swapping over NBD (the relatively straightforward case)
and uses throttling instead of dynamically resized memory reserves
so the series is not too unwieldy for review.

Patch 1 serialises access to min_free_kbytes. It's not strictly needed
	by this series but as the series cares about watermarks in
	general, it's a harmless fix. It could be merged independently.

Patch 2 adds knowledge of the PFMEMALLOC reserves to SLAB and SLUB to
	preserve access to pages allocated under low memory situations
	to callers that are freeing memory.

Patch 3 introduces __GFP_MEMALLOC to allow access to the PFMEMALLOC
	reserves without setting PFMEMALLOC.

Patch 4 opens the possibility for softirqs to use PFMEMALLOC reserves
	for later use by network packet processing.

Patch 5 ignores memory policies when ALLOC_NO_WATERMARKS is set.

Patches 6-9 allow network processing to use PFMEMALLOC reserves when
	the socket has been marked as being used by the VM to clean
	pages. If packets are received and stored in pages that were
	allocated under low-memory situations and are unrelated to
	the VM, the packets are dropped.

Patch 10 is a micro-optimisation to avoid a function call in the
	common case.

Patch 11 tags NBD sockets as being SOCK_MEMALLOC so they can use
	PFMEMALLOC if necessary.

Patch 12 notes that it is still possible for the PFMEMALLOC reserve
	to be depleted. To prevent this, direct reclaimers get
	throttled on a waitqueue if 50% of the PFMEMALLOC reserves are
	depleted.  It is expected that kswapd and the direct reclaimers
	already running will clean enough pages for the low watermark
	to be reached, at which point the throttled processes are woken.

Patch 13 adds a statistic to track how often processes get throttled.
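A minimal sketch of the patch 12 throttle decision, using illustrative
names and the 50% threshold from the description above (not the actual
kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the throttle test: a direct reclaimer is put
 * to sleep once free pages fall below half of the min watermark, i.e.
 * once 50% of the PFMEMALLOC reserve has been consumed. */
static bool should_throttle(unsigned long free_pages,
			    unsigned long min_wmark_pages)
{
	return free_pages < min_wmark_pages / 2;
}

/* Throttled tasks are woken once kswapd and the reclaimers already
 * running have restored the low watermark. */
static bool should_wake(unsigned long free_pages,
			unsigned long low_wmark_pages)
{
	return free_pages >= low_wmark_pages;
}
```

Direct reclaimers sleeping on the waitqueue would re-check the
throttle condition under prepare_to_wait() and be woken once the
wake condition holds.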

Some basic performance testing was run using kernel builds, netperf
on loopback for UDP and TCP, hackbench (pipes and sockets), iozone
and sysbench. Each of them was expected to use the sl*b allocators
reasonably heavily but there did not appear to be significant
performance variances. Here are the results from netperf using
slab as an example

NETPERF UDP
                   netperf-udp       udp-swapnbd
                  vanilla-slab        v1r17-slab
      64   178.06 ( 0.00%)*   189.46 ( 6.02%) 
             1.02%             1.00%        
     128   355.06 ( 0.00%)    370.75 ( 4.23%) 
     256   662.47 ( 0.00%)    721.62 ( 8.20%) 
    1024  2229.39 ( 0.00%)   2567.04 (13.15%) 
    2048  3974.20 ( 0.00%)   4114.70 ( 3.41%) 
    3312  5619.89 ( 0.00%)   5800.09 ( 3.11%) 
    4096  6460.45 ( 0.00%)   6702.45 ( 3.61%) 
    8192  9580.24 ( 0.00%)   9927.97 ( 3.50%) 
   16384 13259.14 ( 0.00%)  13493.88 ( 1.74%) 
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       2960.17   2540.14
Total Elapsed Time (seconds)               3554.10   3050.10

NETPERF TCP
                   netperf-tcp       tcp-swapnbd
                  vanilla-slab        v1r17-slab
      64  1230.29 ( 0.00%)   1273.17 ( 3.37%) 
     128  2309.97 ( 0.00%)   2375.22 ( 2.75%) 
     256  3659.32 ( 0.00%)   3704.87 ( 1.23%) 
    1024  7267.80 ( 0.00%)   7251.02 (-0.23%) 
    2048  8358.26 ( 0.00%)   8204.74 (-1.87%) 
    3312  8631.07 ( 0.00%)   8637.62 ( 0.08%) 
    4096  8770.95 ( 0.00%)   8704.08 (-0.77%) 
    8192  9749.33 ( 0.00%)   9769.06 ( 0.20%) 
   16384 11151.71 ( 0.00%)  11135.32 (-0.15%) 
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       1245.04   1619.89
Total Elapsed Time (seconds)               1250.66   1622.18

Here is the equivalent test for SLUB

NETPERF UDP
                   netperf-udp       udp-swapnbd
                  vanilla-slub        v1r17-slub
      64   180.83 ( 0.00%)    183.68 ( 1.55%) 
     128   357.29 ( 0.00%)    367.11 ( 2.67%) 
     256   679.64 ( 0.00%)*   724.03 ( 6.13%) 
             1.15%             1.00%        
    1024  2343.40 ( 0.00%)*  2610.63 (10.24%) 
             1.68%             1.00%        
    2048  3971.53 ( 0.00%)   4102.21 ( 3.19%)*
             1.00%             1.40%        
    3312  5677.04 ( 0.00%)   5748.69 ( 1.25%) 
    4096  6436.75 ( 0.00%)   6549.41 ( 1.72%) 
    8192  9698.56 ( 0.00%)   9808.84 ( 1.12%) 
   16384 13337.06 ( 0.00%)  13404.38 ( 0.50%) 
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       2880.15   2180.13
Total Elapsed Time (seconds)               3458.10   2618.09

NETPERF TCP
                   netperf-tcp       tcp-swapnbd
                  vanilla-slub        v1r17-slub
      64  1256.79 ( 0.00%)   1287.32 ( 2.37%) 
     128  2308.71 ( 0.00%)   2371.09 ( 2.63%) 
     256  3672.03 ( 0.00%)   3771.05 ( 2.63%) 
    1024  7245.08 ( 0.00%)   7261.60 ( 0.23%) 
    2048  8315.17 ( 0.00%)   8244.14 (-0.86%) 
    3312  8611.43 ( 0.00%)   8616.90 ( 0.06%) 
    4096  8711.64 ( 0.00%)   8695.97 (-0.18%) 
    8192  9795.71 ( 0.00%)   9774.11 (-0.22%) 
   16384 11145.48 ( 0.00%)  11225.70 ( 0.71%) 
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       1345.05   1425.06
Total Elapsed Time (seconds)               1350.61   1430.66

Time to completion varied a lot, but this can happen with netperf as
it retries until results fall within a sufficiently high confidence
interval. I wouldn't read too much into the performance gains of
netperf-udp as it can sometimes be affected by code simply shuffling
around for whatever reason.

For testing swap-over-NBD, a machine was booted with 2G of RAM with a
swapfile backed by NBD. 16*NUM_CPU processes were started that create
anonymous memory mappings and read them linearly in a loop. The total
size of the mappings was 4*PHYSICAL_MEMORY to use swap heavily under
memory pressure. Without the patches, the machine locks up within
minutes; with them applied, the test runs to completion.

Comments?

 drivers/block/nbd.c           |    7 +-
 include/linux/gfp.h           |    7 +-
 include/linux/mm_types.h      |    8 ++
 include/linux/mmzone.h        |    1 +
 include/linux/sched.h         |    7 ++
 include/linux/skbuff.h        |   19 +++-
 include/linux/slub_def.h      |    1 +
 include/linux/vm_event_item.h |    1 +
 include/net/sock.h            |   19 ++++
 kernel/softirq.c              |    3 +
 mm/page_alloc.c               |   57 ++++++++--
 mm/slab.c                     |  240 +++++++++++++++++++++++++++++++++++------
 mm/slub.c                     |   35 +++++-
 mm/vmscan.c                   |   58 ++++++++++
 mm/vmstat.c                   |    1 +
 net/core/dev.c                |   52 ++++++++-
 net/core/filter.c             |    8 ++
 net/core/skbuff.c             |   95 ++++++++++++++---
 net/core/sock.c               |   42 +++++++
 net/ipv4/tcp.c                |    3 +-
 net/ipv4/tcp_output.c         |   13 ++-
 net/ipv6/tcp_ipv6.c           |   12 ++-
 22 files changed, 603 insertions(+), 86 deletions(-)

-- 
1.7.3.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH 01/13] mm: Serialize access to min_free_kbytes
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages Mel Gorman
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

There is a race between the min_free_kbytes sysctl, memory hotplug
and transparent hugepage support enablement.  Memory hotplug uses a
zonelists_mutex to avoid a race when building zonelists. Reuse it to
serialise watermark updates.

[a.p.zijlstra@chello.nl: Older patch fixed the race with spinlock]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/page_alloc.c |   23 +++++++++++++++--------
 1 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1d5c189..a93013a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4990,14 +4990,7 @@ static void setup_per_zone_lowmem_reserve(void)
 	calculate_totalreserve_pages();
 }
 
-/**
- * setup_per_zone_wmarks - called when min_free_kbytes changes
- * or when memory is hot-{added|removed}
- *
- * Ensures that the watermark[min,low,high] values for each zone are set
- * correctly with respect to min_free_kbytes.
- */
-void setup_per_zone_wmarks(void)
+static void __setup_per_zone_wmarks(void)
 {
 	unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
 	unsigned long lowmem_pages = 0;
@@ -5052,6 +5045,20 @@ void setup_per_zone_wmarks(void)
 	calculate_totalreserve_pages();
 }
 
+/**
+ * setup_per_zone_wmarks - called when min_free_kbytes changes
+ * or when memory is hot-{added|removed}
+ *
+ * Ensures that the watermark[min,low,high] values for each zone are set
+ * correctly with respect to min_free_kbytes.
+ */
+void setup_per_zone_wmarks(void)
+{
+	mutex_lock(&zonelists_mutex);
+	__setup_per_zone_wmarks();
+	mutex_unlock(&zonelists_mutex);
+}
+
 /*
  * The inactive anon list should be small enough that the VM never has to
  * do too much work, but large enough that each inactive page has a chance
-- 
1.7.3.4



* [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
  2011-04-26  7:36 ` [PATCH 01/13] mm: Serialize access to min_free_kbytes Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26 11:15   ` NeilBrown
  2011-04-26 11:37   ` NeilBrown
  2011-04-26  7:36 ` [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves Mel Gorman
                   ` (12 subsequent siblings)
  14 siblings, 2 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Allocations of pages below the min watermark run a risk of the machine
hanging due to lack of memory.  To prevent this, only callers who
have PF_MEMALLOC or TIF_MEMDIE set and not processing an interrupt are
allowed to allocate with ALLOC_NO_WATERMARKS. Once they are allocated
to a slab though, nothing prevents other callers consuming free objects
within those slabs. This patch limits access to slab pages that were
allocated from the PFMEMALLOC reserves.

Pages allocated from the reserve are returned with page->pfmemalloc
set and it's up to the caller to determine how the page should be
protected.  SLAB restricts access to any page with page->pfmemalloc set
to callers which are known to be able to access the PFMEMALLOC reserve. If
one is not available, an attempt is made to allocate a new page rather
than use a reserve. SLUB is a bit more relaxed in that it only records
if the current per-CPU page was allocated from PFMEMALLOC reserve and
uses another partial slab if the caller does not have the necessary
GFP or process flags. This was found to be sufficient in tests to
avoid hangs due to SLUB generally maintaining smaller lists than SLAB.

In low-memory conditions it does mean that !PFMEMALLOC allocators
can fail a slab allocation even though free objects are available
because they are being preserved for callers that are freeing pages.
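The SLAB side tracks which cached objects came from PFMEMALLOC slabs by
tagging the low bit of the object pointer (slab objects are at least
word-aligned, so bit 0 is otherwise always clear). The patch's helpers
can be restated standalone as:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 0 of a word-aligned object pointer carries the "object belongs
 * to a PFMEMALLOC slab" tag; standalone restatement of the
 * SLAB_OBJ_PFMEMALLOC helpers from the patch. */
#define SLAB_OBJ_PFMEMALLOC 1UL

static inline int is_obj_pfmemalloc(void *objp)
{
	return ((uintptr_t)objp & SLAB_OBJ_PFMEMALLOC) != 0;
}

static inline void set_obj_pfmemalloc(void **objp)
{
	*objp = (void *)((uintptr_t)*objp | SLAB_OBJ_PFMEMALLOC);
}

static inline void clear_obj_pfmemalloc(void **objp)
{
	*objp = (void *)((uintptr_t)*objp & ~SLAB_OBJ_PFMEMALLOC);
}
```

Because the tag lives in array_cache entries rather than in the object
itself, ac_get_obj() must clear it before handing the pointer out, and
free_block() must clear it before using the pointer to look up the slab.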

[a.p.zijlstra@chello.nl: Original implementation]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/mm_types.h |    8 ++
 include/linux/slub_def.h |    1 +
 mm/internal.h            |    3 +
 mm/page_alloc.c          |   27 +++++-
 mm/slab.c                |  216 +++++++++++++++++++++++++++++++++++++++-------
 mm/slub.c                |   35 ++++++--
 6 files changed, 246 insertions(+), 44 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ca01ab2..5630d27 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -71,6 +71,14 @@ struct page {
 	union {
 		pgoff_t index;		/* Our offset within mapping. */
 		void *freelist;		/* SLUB: freelist req. slab lock */
+		bool pfmemalloc;	/* If set by the page allocator,
+					 * ALLOC_PFMEMALLOC was set and the
+					 * low watermark was not met implying
+					 * that the system is under some
+					 * pressure. The caller should try to
+					 * ensure this page is only used to
+					 * free other pages.
+					 */
 	};
 	struct list_head lru;		/* Pageout list, eg. active_list
 					 * protected by zone->lru_lock !
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 45ca123..639aace 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -42,6 +42,7 @@ struct kmem_cache_cpu {
 #endif
 	struct page *page;	/* The slab from which we are allocating */
 	int node;		/* The node of the page (or -1 for debug) */
+	bool pfmemalloc;	/* Slab page had pfmemalloc set */
 #ifdef CONFIG_SLUB_STATS
 	unsigned stat[NR_SLUB_STAT_ITEMS];
 #endif
diff --git a/mm/internal.h b/mm/internal.h
index d071d380..a520f3b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -193,6 +193,9 @@ static inline struct page *mem_map_next(struct page *iter,
 #define __paginginit __init
 #endif
 
+/* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
+bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
+
 /* Memory initialisation debug and verification */
 enum mminit_level {
 	MMINIT_WARNING,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a93013a..5977f7b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -654,6 +654,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 	trace_mm_page_free_direct(page, order);
 	kmemcheck_free_shadow(page, order);
 
+	page->pfmemalloc = false;
 	if (PageAnon(page))
 		page->mapping = NULL;
 	for (i = 0; i < (1 << order); i++)
@@ -1172,6 +1173,7 @@ void free_hot_cold_page(struct page *page, int cold)
 
 	migratetype = get_pageblock_migratetype(page);
 	set_page_private(page, migratetype);
+	page->pfmemalloc = false;
 	local_irq_save(flags);
 	if (unlikely(wasMlocked))
 		free_page_mlock(page);
@@ -1365,6 +1367,7 @@ failed:
 #define ALLOC_HARDER		0x10 /* try to alloc harder */
 #define ALLOC_HIGH		0x20 /* __GFP_HIGH set */
 #define ALLOC_CPUSET		0x40 /* check for correct cpuset */
+#define ALLOC_PFMEMALLOC	0x80 /* Caller has PF_MEMALLOC set */
 
 #ifdef CONFIG_FAIL_PAGE_ALLOC
 
@@ -2000,16 +2003,22 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;
 
-	if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
-		if (!in_interrupt() &&
-		    ((current->flags & PF_MEMALLOC) ||
-		     unlikely(test_thread_flag(TIF_MEMDIE))))
+	if ((current->flags & PF_MEMALLOC) ||
+			unlikely(test_thread_flag(TIF_MEMDIE))) {
+		alloc_flags |= ALLOC_PFMEMALLOC;
+
+		if (likely(!(gfp_mask & __GFP_NOMEMALLOC)) && !in_interrupt())
 			alloc_flags |= ALLOC_NO_WATERMARKS;
 	}
 
 	return alloc_flags;
 }
 
+bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
+{
+	return gfp_to_alloc_flags(gfp_mask) & ALLOC_PFMEMALLOC;
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
@@ -2202,8 +2211,16 @@ nopage:
 got_pg:
 	if (kmemcheck_enabled)
 		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
-	return page;
 
+	/*
+	 * page->pfmemalloc is set when the caller had PFMEMALLOC set or has
+	 * been OOM killed. The expectation is that the caller is taking
+	 * steps that will free more memory. The caller should avoid the
+	 * page being used for !PFMEMALLOC purposes.
+	 */
+	page->pfmemalloc = (alloc_flags & ALLOC_PFMEMALLOC);
+
+	return page;
 }
 
 /*
diff --git a/mm/slab.c b/mm/slab.c
index 46a9c16..a11c5fd 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -120,6 +120,8 @@
 #include	<asm/tlbflush.h>
 #include	<asm/page.h>
 
+#include	"internal.h"
+
 /*
  * DEBUG	- 1 for kmem_cache_create() to honour; SLAB_RED_ZONE & SLAB_POISON.
  *		  0 for faster, smaller code (especially in the critical paths).
@@ -226,6 +228,7 @@ struct slab {
 			unsigned int inuse;	/* num of objs active in slab */
 			kmem_bufctl_t free;
 			unsigned short nodeid;
+			bool pfmemalloc;	/* Slab had pfmemalloc set */
 		};
 		struct slab_rcu __slab_cover_slab_rcu;
 	};
@@ -247,15 +250,37 @@ struct array_cache {
 	unsigned int avail;
 	unsigned int limit;
 	unsigned int batchcount;
-	unsigned int touched;
+	bool touched;
+	bool pfmemalloc;
 	spinlock_t lock;
 	void *entry[];	/*
 			 * Must have this definition in here for the proper
 			 * alignment of array_cache. Also simplifies accessing
 			 * the entries.
+			 *
+			 * Entries should not be directly dereferenced as
+			 * entries belonging to slabs marked pfmemalloc will
+			 * have the lower bits set SLAB_OBJ_PFMEMALLOC
 			 */
 };
 
+#define SLAB_OBJ_PFMEMALLOC	1
+static inline bool is_obj_pfmemalloc(void *objp)
+{
+	return (unsigned long)objp & SLAB_OBJ_PFMEMALLOC;
+}
+
+static inline void set_obj_pfmemalloc(void **objp)
+{
+	*objp = (void *)((unsigned long)*objp | SLAB_OBJ_PFMEMALLOC);
+	return;
+}
+
+static inline void clear_obj_pfmemalloc(void **objp)
+{
+	*objp = (void *)((unsigned long)*objp & ~SLAB_OBJ_PFMEMALLOC);
+}
+
 /*
  * bootstrap: The caches do not work without cpuarrays anymore, but the
  * cpuarrays are allocated from the generic caches...
@@ -888,12 +913,100 @@ static struct array_cache *alloc_arraycache(int node, int entries,
 		nc->avail = 0;
 		nc->limit = entries;
 		nc->batchcount = batchcount;
-		nc->touched = 0;
+		nc->touched = false;
 		spin_lock_init(&nc->lock);
 	}
 	return nc;
 }
 
+/* Clears ac->pfmemalloc if no slabs have pfmemalloc set */
+static void check_ac_pfmemalloc(struct kmem_cache *cachep,
+						struct array_cache *ac)
+{
+	struct kmem_list3 *l3 = cachep->nodelists[numa_mem_id()];
+	struct slab *slabp;
+
+	if (!ac->pfmemalloc)
+		return;
+
+	list_for_each_entry(slabp, &l3->slabs_full, list)
+		if (slabp->pfmemalloc)
+			return;
+
+	list_for_each_entry(slabp, &l3->slabs_partial, list)
+		if (slabp->pfmemalloc)
+			return;
+
+	list_for_each_entry(slabp, &l3->slabs_free, list)
+		if (slabp->pfmemalloc)
+			return;
+
+	ac->pfmemalloc = false;
+}
+
+static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
+						gfp_t flags, bool force_refill)
+{
+	int i;
+	void *objp = ac->entry[--ac->avail];
+
+	/* Ensure the caller is allowed to use objects from PFMEMALLOC slab */
+	if (unlikely(is_obj_pfmemalloc(objp))) {
+		struct kmem_list3 *l3;
+
+		if (gfp_pfmemalloc_allowed(flags)) {
+			clear_obj_pfmemalloc(&objp);
+			return objp;
+		}
+
+		/* The caller cannot use PFMEMALLOC objects, find another one */
+		for (i = 1; i < ac->avail; i++) {
+			/* If a !PFMEMALLOC object is found, swap them */
+			if (!is_obj_pfmemalloc(ac->entry[i])) {
+				objp = ac->entry[i];
+				ac->entry[i] = ac->entry[ac->avail];
+				ac->entry[ac->avail] = objp;
+				return objp;
+			}
+		}
+
+		/*
+		 * If there are empty slabs on the free list and we are
+		 * being forced to refill the cache, mark this one !pfmemalloc
+		 */
+		l3 = cachep->nodelists[numa_mem_id()];
+		if (!list_empty(&l3->slabs_free) && force_refill) {
+			struct slab *slabp = virt_to_slab(objp);
+			slabp->pfmemalloc = false;
+			clear_obj_pfmemalloc(&objp);
+			check_ac_pfmemalloc(cachep, ac);
+			return objp;
+		}
+
+		/* No !PFMEMALLOC objects available */
+		ac->avail++;
+		objp = NULL;
+	}
+
+	return objp;
+}
+
+static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+								void *objp)
+{
+	struct slab *slabp;
+
+	/* If there are pfmemalloc slabs, check if the object is part of one */
+	if (unlikely(ac->pfmemalloc)) {
+		slabp = virt_to_slab(objp);
+
+		if (slabp->pfmemalloc)
+			set_obj_pfmemalloc(&objp);
+	}
+
+	ac->entry[ac->avail++] = objp;
+}
+
 /*
  * Transfer objects in one arraycache to another.
  * Locking must be handled by the caller.
@@ -1070,7 +1183,7 @@ static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
 			STATS_INC_ACOVERFLOW(cachep);
 			__drain_alien_cache(cachep, alien, nodeid);
 		}
-		alien->entry[alien->avail++] = objp;
+		ac_put_obj(cachep, alien, objp);
 		spin_unlock(&alien->lock);
 	} else {
 		spin_lock(&(cachep->nodelists[nodeid])->list_lock);
@@ -1677,7 +1790,8 @@ __initcall(cpucache_init);
  * did not request dmaable memory, we might get it, but that
  * would be relatively rare and ignorable.
  */
-static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
+static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+		bool *pfmemalloc)
 {
 	struct page *page;
 	int nr_pages;
@@ -1698,6 +1812,7 @@ static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 	page = alloc_pages_exact_node(nodeid, flags | __GFP_NOTRACK, cachep->gfporder);
 	if (!page)
 		return NULL;
+	*pfmemalloc = page->pfmemalloc;
 
 	nr_pages = (1 << cachep->gfporder);
 	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
@@ -2130,7 +2245,7 @@ static int __init_refok setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
 	cpu_cache_get(cachep)->avail = 0;
 	cpu_cache_get(cachep)->limit = BOOT_CPUCACHE_ENTRIES;
 	cpu_cache_get(cachep)->batchcount = 1;
-	cpu_cache_get(cachep)->touched = 0;
+	cpu_cache_get(cachep)->touched = false;
 	cachep->batchcount = 1;
 	cachep->limit = BOOT_CPUCACHE_ENTRIES;
 	return 0;
@@ -2677,6 +2792,7 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
 	slabp->s_mem = objp + colour_off;
 	slabp->nodeid = nodeid;
 	slabp->free = 0;
+	slabp->pfmemalloc = false;
 	return slabp;
 }
 
@@ -2808,7 +2924,7 @@ static void slab_map_pages(struct kmem_cache *cache, struct slab *slab,
  * kmem_cache_alloc() when there are no active objs left in a cache.
  */
 static int cache_grow(struct kmem_cache *cachep,
-		gfp_t flags, int nodeid, void *objp)
+		gfp_t flags, int nodeid, void *objp, bool pfmemalloc)
 {
 	struct slab *slabp;
 	size_t offset;
@@ -2852,7 +2968,7 @@ static int cache_grow(struct kmem_cache *cachep,
 	 * 'nodeid'.
 	 */
 	if (!objp)
-		objp = kmem_getpages(cachep, local_flags, nodeid);
+		objp = kmem_getpages(cachep, local_flags, nodeid, &pfmemalloc);
 	if (!objp)
 		goto failed;
 
@@ -2862,6 +2978,13 @@ static int cache_grow(struct kmem_cache *cachep,
 	if (!slabp)
 		goto opps1;
 
+	/* Record if ALLOC_PFMEMALLOC was set when allocating the slab */
+	if (pfmemalloc) {
+		struct array_cache *ac = cpu_cache_get(cachep);
+		slabp->pfmemalloc = true;
+		ac->pfmemalloc = true;
+	}
+
 	slab_map_pages(cachep, slabp, objp);
 
 	cache_init_objs(cachep, slabp);
@@ -3003,16 +3126,19 @@ bad:
 #define check_slabp(x,y) do { } while(0)
 #endif
 
-static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
+static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags,
+							bool force_refill)
 {
 	int batchcount;
 	struct kmem_list3 *l3;
 	struct array_cache *ac;
 	int node;
 
-retry:
 	check_irq_off();
 	node = numa_mem_id();
+	if (unlikely(force_refill))
+		goto force_grow;
+retry:
 	ac = cpu_cache_get(cachep);
 	batchcount = ac->batchcount;
 	if (!ac->touched && batchcount > BATCHREFILL_LIMIT) {
@@ -3030,7 +3156,7 @@ retry:
 
 	/* See if we can refill from the shared array */
 	if (l3->shared && transfer_objects(ac, l3->shared, batchcount)) {
-		l3->shared->touched = 1;
+		l3->shared->touched = true;
 		goto alloc_done;
 	}
 
@@ -3062,8 +3188,8 @@ retry:
 			STATS_INC_ACTIVE(cachep);
 			STATS_SET_HIGH(cachep);
 
-			ac->entry[ac->avail++] = slab_get_obj(cachep, slabp,
-							    node);
+			ac_put_obj(cachep, ac, slab_get_obj(cachep, slabp,
+									node));
 		}
 		check_slabp(cachep, slabp);
 
@@ -3082,18 +3208,25 @@ alloc_done:
 
 	if (unlikely(!ac->avail)) {
 		int x;
-		x = cache_grow(cachep, flags | GFP_THISNODE, node, NULL);
+force_grow:
+		x = cache_grow(cachep, flags | GFP_THISNODE, node, NULL, false);
 
 		/* cache_grow can reenable interrupts, then ac could change. */
 		ac = cpu_cache_get(cachep);
-		if (!x && ac->avail == 0)	/* no objects in sight? abort */
+
+		/* no objects in sight? abort */
+		if (!x && (ac->avail == 0 || force_refill))
 			return NULL;
 
-		if (!ac->avail)		/* objects refilled by interrupt? */
+		/* objects refilled by interrupt? */
+		if (!ac->avail) {
+			node = numa_node_id();
 			goto retry;
+		}
 	}
-	ac->touched = 1;
-	return ac->entry[--ac->avail];
+	ac->touched = true;
+
+	return ac_get_obj(cachep, ac, flags, force_refill);
 }
 
 static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
@@ -3176,23 +3309,35 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
 	void *objp;
 	struct array_cache *ac;
+	bool force_refill = false;
 
 	check_irq_off();
 
 	ac = cpu_cache_get(cachep);
 	if (likely(ac->avail)) {
-		STATS_INC_ALLOCHIT(cachep);
-		ac->touched = 1;
-		objp = ac->entry[--ac->avail];
-	} else {
-		STATS_INC_ALLOCMISS(cachep);
-		objp = cache_alloc_refill(cachep, flags);
+		ac->touched = true;
+		objp = ac_get_obj(cachep, ac, flags, false);
+
 		/*
-		 * the 'ac' may be updated by cache_alloc_refill(),
-		 * and kmemleak_erase() requires its correct value.
+		 * Allow for the possibility all avail objects are not allowed
+		 * by the current flags
 		 */
-		ac = cpu_cache_get(cachep);
+		if (objp) {
+			STATS_INC_ALLOCHIT(cachep);
+			goto out;
+		}
+		force_refill = true;
 	}
+
+	STATS_INC_ALLOCMISS(cachep);
+	objp = cache_alloc_refill(cachep, flags, force_refill);
+	/*
+	 * the 'ac' may be updated by cache_alloc_refill(),
+	 * and kmemleak_erase() requires its correct value.
+	 */
+	ac = cpu_cache_get(cachep);
+
+out:
 	/*
 	 * To avoid a false negative, if an object that is in one of the
 	 * per-CPU caches is leaked, we need to make sure kmemleak doesn't
@@ -3245,6 +3390,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 	enum zone_type high_zoneidx = gfp_zone(flags);
 	void *obj = NULL;
 	int nid;
+	bool pfmemalloc;
 
 	if (flags & __GFP_THISNODE)
 		return NULL;
@@ -3281,7 +3427,8 @@ retry:
 		if (local_flags & __GFP_WAIT)
 			local_irq_enable();
 		kmem_flagcheck(cache, flags);
-		obj = kmem_getpages(cache, local_flags, numa_mem_id());
+		obj = kmem_getpages(cache, local_flags, numa_mem_id(),
+							&pfmemalloc);
 		if (local_flags & __GFP_WAIT)
 			local_irq_disable();
 		if (obj) {
@@ -3289,7 +3436,7 @@ retry:
 			 * Insert into the appropriate per node queues
 			 */
 			nid = page_to_nid(virt_to_page(obj));
-			if (cache_grow(cache, flags, nid, obj)) {
+			if (cache_grow(cache, flags, nid, obj, pfmemalloc)) {
 				obj = ____cache_alloc_node(cache,
 					flags | GFP_THISNODE, nid);
 				if (!obj)
@@ -3361,7 +3508,7 @@ retry:
 
 must_grow:
 	spin_unlock(&l3->list_lock);
-	x = cache_grow(cachep, flags | GFP_THISNODE, nodeid, NULL);
+	x = cache_grow(cachep, flags | GFP_THISNODE, nodeid, NULL, false);
 	if (x)
 		goto retry;
 
@@ -3511,9 +3658,12 @@ static void free_block(struct kmem_cache *cachep, void **objpp, int nr_objects,
 	struct kmem_list3 *l3;
 
 	for (i = 0; i < nr_objects; i++) {
-		void *objp = objpp[i];
+		void *objp;
 		struct slab *slabp;
 
+		clear_obj_pfmemalloc(&objpp[i]);
+		objp = objpp[i];
+
 		slabp = virt_to_slab(objp);
 		l3 = cachep->nodelists[node];
 		list_del(&slabp->list);
@@ -3625,12 +3775,12 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp)
 
 	if (likely(ac->avail < ac->limit)) {
 		STATS_INC_FREEHIT(cachep);
-		ac->entry[ac->avail++] = objp;
+		ac_put_obj(cachep, ac, objp);
 		return;
 	} else {
 		STATS_INC_FREEMISS(cachep);
 		cache_flusharray(cachep, ac);
-		ac->entry[ac->avail++] = objp;
+		ac_put_obj(cachep, ac, objp);
 	}
 }
 
@@ -4056,7 +4206,7 @@ static void drain_array(struct kmem_cache *cachep, struct kmem_list3 *l3,
 	if (!ac || !ac->avail)
 		return;
 	if (ac->touched && !force) {
-		ac->touched = 0;
+		ac->touched = false;
 	} else {
 		spin_lock_irq(&l3->list_lock);
 		if (ac->avail) {
diff --git a/mm/slub.c b/mm/slub.c
index df77f78..6707d2e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -30,6 +30,8 @@
 
 #include <trace/events/kmem.h>
 
+#include "internal.h"
+
 /*
  * Lock order:
  *   1. slab_lock(page)
@@ -1219,7 +1221,8 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 		s->ctor(object);
 }
 
-static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
+static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node,
+							bool *pfmemalloc)
 {
 	struct page *page;
 	void *start;
@@ -1234,6 +1237,7 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 		goto out;
 
 	inc_slabs_node(s, page_to_nid(page), page->objects);
+	*pfmemalloc = page->pfmemalloc;
 	page->slab = s;
 	page->flags |= 1 << PG_slab;
 
@@ -1757,6 +1761,16 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 	}
 }
 
+#define SLAB_PAGE_PFMEMALLOC 1
+
+static inline bool pfmemalloc_match(struct kmem_cache_cpu *c, gfp_t gfpflags)
+{
+	if (unlikely(c->pfmemalloc))
+		return gfp_pfmemalloc_allowed(gfpflags);
+
+	return true;
+}
+
 /*
  * Slow path. The lockless freelist is empty or we need to perform
  * debugging duties.
@@ -1780,6 +1794,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 {
 	void **object;
 	struct page *new;
+	bool pfmemalloc = false;
 #ifdef CONFIG_CMPXCHG_LOCAL
 	unsigned long flags;
 
@@ -1801,7 +1816,13 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto new_slab;
 
 	slab_lock(c->page);
-	if (unlikely(!node_match(c, node)))
+
+	/*
+	 * By rights, we should be searching for a slab page that was
+	 * PFMEMALLOC but right now, we are losing the pfmemalloc
+	 * information when the page leaves the per-cpu allocator
+	 */
+	if (unlikely(!pfmemalloc_match(c, gfpflags) || !node_match(c, node)))
 		goto another_slab;
 
 	stat(s, ALLOC_REFILL);
@@ -1841,7 +1862,7 @@ new_slab:
 	if (gfpflags & __GFP_WAIT)
 		local_irq_enable();
 
-	new = new_slab(s, gfpflags, node);
+	new = new_slab(s, gfpflags, node, &pfmemalloc);
 
 	if (gfpflags & __GFP_WAIT)
 		local_irq_disable();
@@ -1854,6 +1875,7 @@ new_slab:
 		slab_lock(new);
 		__SetPageSlubFrozen(new);
 		c->page = new;
+		c->pfmemalloc = pfmemalloc;
 		goto load_freelist;
 	}
 	if (!(gfpflags & __GFP_NOWARN) && printk_ratelimit())
@@ -1922,8 +1944,8 @@ redo:
 #endif
 
 	object = c->freelist;
-	if (unlikely(!object || !node_match(c, node)))
-
+	if (unlikely(!object || !node_match(c, node) ||
+					!pfmemalloc_match(c, gfpflags)))
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 
 	else {
@@ -2389,10 +2411,11 @@ static void early_kmem_cache_node_alloc(int node)
 	struct page *page;
 	struct kmem_cache_node *n;
 	unsigned long flags;
+	bool pfmemalloc;	/* Ignore this early in boot */
 
 	BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node));
 
-	page = new_slab(kmem_cache_node, GFP_NOWAIT, node);
+	page = new_slab(kmem_cache_node, GFP_NOWAIT, node, &pfmemalloc);
 
 	BUG_ON(!page);
 	if (page_to_nid(page) != node) {
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
  2011-04-26  7:36 ` [PATCH 01/13] mm: Serialize access to min_free_kbytes Mel Gorman
  2011-04-26  7:36 ` [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  9:49   ` NeilBrown
  2011-04-26  7:36 ` [PATCH 04/13] mm: allow PF_MEMALLOC from softirq context Mel Gorman
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

__GFP_MEMALLOC allows an allocation to disregard the watermarks, much
like PF_MEMALLOC does. It passes the memalloc state along in
object-related allocation flags, such as sk->sk_allocation, rather
than in task-related flags. This removes the need for ALLOC_PFMEMALLOC:
callers using __GFP_MEMALLOC get the ALLOC_NO_WATERMARKS flag, which
is now enough to identify allocations related to page reclaim.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/gfp.h      |    4 +++-
 include/linux/mm_types.h |    2 +-
 mm/page_alloc.c          |   14 ++++++--------
 mm/slab.c                |    2 +-
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index bfb8f93..4e011e7 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -23,6 +23,7 @@ struct vm_area_struct;
 #define ___GFP_REPEAT		0x400u
 #define ___GFP_NOFAIL		0x800u
 #define ___GFP_NORETRY		0x1000u
+#define ___GFP_MEMALLOC		0x2000u
 #define ___GFP_COMP		0x4000u
 #define ___GFP_ZERO		0x8000u
 #define ___GFP_NOMEMALLOC	0x10000u
@@ -75,6 +76,7 @@ struct vm_area_struct;
 #define __GFP_REPEAT	((__force gfp_t)___GFP_REPEAT)	/* See above */
 #define __GFP_NOFAIL	((__force gfp_t)___GFP_NOFAIL)	/* See above */
 #define __GFP_NORETRY	((__force gfp_t)___GFP_NORETRY) /* See above */
+#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)/* Allow access to emergency reserves */
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)	/* Add compound page metadata */
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)	/* Return zeroed page on success */
 #define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) /* Don't use emergency reserves */
@@ -127,7 +129,7 @@ struct vm_area_struct;
 /* Control page allocator reclaim behavior */
 #define GFP_RECLAIM_MASK (__GFP_WAIT|__GFP_HIGH|__GFP_IO|__GFP_FS|\
 			__GFP_NOWARN|__GFP_REPEAT|__GFP_NOFAIL|\
-			__GFP_NORETRY|__GFP_NOMEMALLOC)
+			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC)
 
 /* Control slab gfp mask during early boot */
 #define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_WAIT|__GFP_IO|__GFP_FS))
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5630d27..b890a0b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -72,7 +72,7 @@ struct page {
 		pgoff_t index;		/* Our offset within mapping. */
 		void *freelist;		/* SLUB: freelist req. slab lock */
 		bool pfmemalloc;	/* If set by the page allocator,
-					 * ALLOC_PFMEMALLOC was set and the
+					 * ALLOC_NO_WATERMARKS was set and the
 					 * low watermark was not met implying
 					 * that the system is under some
 					 * pressure. The caller should try
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5977f7b..dcc6337 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1367,7 +1367,6 @@ failed:
 #define ALLOC_HARDER		0x10 /* try to alloc harder */
 #define ALLOC_HIGH		0x20 /* __GFP_HIGH set */
 #define ALLOC_CPUSET		0x40 /* check for correct cpuset */
-#define ALLOC_PFMEMALLOC	0x80 /* Caller has PF_MEMALLOC set */
 
 #ifdef CONFIG_FAIL_PAGE_ALLOC
 
@@ -2003,11 +2002,10 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;
 
-	if ((current->flags & PF_MEMALLOC) ||
-			unlikely(test_thread_flag(TIF_MEMDIE))) {
-		alloc_flags |= ALLOC_PFMEMALLOC;
-
-		if (likely(!(gfp_mask & __GFP_NOMEMALLOC)) && !in_interrupt())
+	if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
+		if (gfp_mask & __GFP_MEMALLOC)
+			alloc_flags |= ALLOC_NO_WATERMARKS;
+		else if (likely(!(gfp_mask & __GFP_NOMEMALLOC)) && !in_interrupt())
 			alloc_flags |= ALLOC_NO_WATERMARKS;
 	}
 
@@ -2016,7 +2014,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 
 bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 {
-	return gfp_to_alloc_flags(gfp_mask) & ALLOC_PFMEMALLOC;
+	return gfp_to_alloc_flags(gfp_mask) & ALLOC_NO_WATERMARKS;
 }
 
 static inline struct page *
@@ -2218,7 +2216,7 @@ got_pg:
 	 * steps that will free more memory. The caller should avoid the
 	 * page being used for !PFMEMALLOC purposes.
 	 */
-	page->pfmemalloc = (alloc_flags & ALLOC_PFMEMALLOC);
+	page->pfmemalloc = (alloc_flags & ALLOC_NO_WATERMARKS);
 
 	return page;
 }
diff --git a/mm/slab.c b/mm/slab.c
index a11c5fd..1925023 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2978,7 +2978,7 @@ static int cache_grow(struct kmem_cache *cachep,
 	if (!slabp)
 		goto opps1;
 
-	/* Record if ALLOC_PFMEMALLOC was set when allocating the slab */
+	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
 	if (pfmemalloc) {
 		struct array_cache *ac = cpu_cache_get(cachep);
 		slabp->pfmemalloc = true;
-- 
1.7.3.4



* [PATCH 04/13] mm: allow PF_MEMALLOC from softirq context
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (2 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 05/13] mm: Ignore mempolicies when using ALLOC_NO_WATERMARK Mel Gorman
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

This is needed to allow network softirq packet processing to make use
of PF_MEMALLOC.

Currently softirq context cannot use PF_MEMALLOC because it is not
associated with a task and therefore has no task flags to fiddle with;
thus the gfp-to-alloc-flags mapping ignores the task flags when in
interrupt (hard or soft) context.

Allowing softirqs to make use of PF_MEMALLOC therefore requires some trickery.
We basically borrow the task flags from whatever process happens to be
preempted by the softirq.

So we modify the gfp to alloc flags mapping to not exclude task flags in
softirq context, and modify the softirq code to save, clear and restore
the PF_MEMALLOC flag.

The save and clear ensure the preempted task's PF_MEMALLOC flag does
not leak into the softirq. The restore ensures a softirq's PF_MEMALLOC
flag cannot leak back into the preempted process.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/sched.h |    7 +++++++
 kernel/softirq.c      |    3 +++
 mm/page_alloc.c       |    5 ++++-
 3 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 3f7d3f9..e87bb68 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1822,6 +1822,13 @@ static inline void rcu_copy_process(struct task_struct *p)
 
 #endif
 
+static inline void tsk_restore_flags(struct task_struct *p,
+				     unsigned long pflags, unsigned long mask)
+{
+	p->flags &= ~mask;
+	p->flags |= pflags & mask;
+}
+
 #ifdef CONFIG_SMP
 extern int set_cpus_allowed_ptr(struct task_struct *p,
 				const struct cpumask *new_mask);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 1396017..2817c27 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -210,6 +210,8 @@ asmlinkage void __do_softirq(void)
 	__u32 pending;
 	int max_restart = MAX_SOFTIRQ_RESTART;
 	int cpu;
+	unsigned long pflags = current->flags;
+	current->flags &= ~PF_MEMALLOC;
 
 	pending = local_softirq_pending();
 	account_system_vtime(current);
@@ -265,6 +267,7 @@ restart:
 
 	account_system_vtime(current);
 	__local_bh_enable(SOFTIRQ_OFFSET);
+	tsk_restore_flags(current, pflags, PF_MEMALLOC);
 }
 
 #ifndef __ARCH_HAS_DO_SOFTIRQ
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dcc6337..457c701 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2005,7 +2005,10 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
 		if (gfp_mask & __GFP_MEMALLOC)
 			alloc_flags |= ALLOC_NO_WATERMARKS;
-		else if (likely(!(gfp_mask & __GFP_NOMEMALLOC)) && !in_interrupt())
+		else if (!in_irq() && (current->flags & PF_MEMALLOC))
+			alloc_flags |= ALLOC_NO_WATERMARKS;
+		else if (!in_interrupt() &&
+				unlikely(test_thread_flag(TIF_MEMDIE)))
 			alloc_flags |= ALLOC_NO_WATERMARKS;
 	}
 
-- 
1.7.3.4



* [PATCH 05/13] mm: Ignore mempolicies when using ALLOC_NO_WATERMARK
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (3 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 04/13] mm: allow PF_MEMALLOC from softirq context Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 06/13] net: Introduce sk_allocation() to allow addition of GFP flags depending on the individual socket Mel Gorman
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

The reserve is proportionally distributed over all !highmem zones in the
system. So we need to allow an emergency allocation access to all zones.
In order to do that we need to break out of any mempolicy boundaries we
might have.

In my opinion that does not break mempolicies as those are
user-oriented and not system-oriented. That is, system allocations are
not guaranteed to be within mempolicy boundaries. For instance, IRQs
do not even have a mempolicy.

So breaking out of mempolicy boundaries for 'rare' emergency
allocations, which are always system allocations (as opposed to user),
is ok.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/page_alloc.c |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 457c701..c0c42ce 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2085,6 +2085,13 @@ restart:
 rebalance:
 	/* Allocate without watermarks if the context allows */
 	if (alloc_flags & ALLOC_NO_WATERMARKS) {
+		/*
+		 * Ignore mempolicies if ALLOC_NO_WATERMARKS on the grounds
+		 * the allocation is high priority and these types of
+		 * allocation are system rather than user oriented
+		 */
+		zonelist = node_zonelist(numa_node_id(), gfp_mask);
+
 		page = __alloc_pages_high_priority(gfp_mask, order,
 				zonelist, high_zoneidx, nodemask,
 				preferred_zone, migratetype);
-- 
1.7.3.4



* [PATCH 06/13] net: Introduce sk_allocation() to allow addition of GFP flags depending on the individual socket
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (4 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 05/13] mm: Ignore mempolicies when using ALLOC_NO_WATERMARK Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 07/13] netvm: Allow the use of __GFP_MEMALLOC by specific sockets Mel Gorman
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Introduce sk_allocation(); this function allows socket-specific flags
to be injected into each socket-related allocation. It is only used on
allocation paths that may be required for writing pages back to
network storage.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/net/sock.h    |    5 +++++
 net/ipv4/tcp.c        |    3 ++-
 net/ipv4/tcp_output.c |   13 +++++++------
 net/ipv6/tcp_ipv6.c   |   12 +++++++++---
 4 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index f2046e4..e89c38f 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -585,6 +585,11 @@ static inline int sock_flag(struct sock *sk, enum sock_flags flag)
 	return test_bit(flag, &sk->sk_flags);
 }
 
+static inline gfp_t sk_allocation(struct sock *sk, gfp_t gfp_mask)
+{
+	return gfp_mask;
+}
+
 static inline void sk_acceptq_removed(struct sock *sk)
 {
 	sk->sk_ack_backlog--;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 054a59d..8c1a9d5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -698,7 +698,8 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
 	/* The TCP header must be at least 32-bit aligned.  */
 	size = ALIGN(size, 4);
 
-	skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
+	skb = alloc_skb_fclone(size + sk->sk_prot->max_header,
+			       sk_allocation(sk, gfp));
 	if (skb) {
 		if (sk_wmem_schedule(sk, skb->truesize)) {
 			/*
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 17388c7..6157ced 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2324,7 +2324,7 @@ void tcp_send_fin(struct sock *sk)
 		/* Socket is locked, keep trying until memory is available. */
 		for (;;) {
 			skb = alloc_skb_fclone(MAX_TCP_HEADER,
-					       sk->sk_allocation);
+					       sk_allocation(sk, GFP_KERNEL));
 			if (skb)
 				break;
 			yield();
@@ -2350,7 +2350,7 @@ void tcp_send_active_reset(struct sock *sk, gfp_t priority)
 	struct sk_buff *skb;
 
 	/* NOTE: No TCP options attached and we never retransmit this. */
-	skb = alloc_skb(MAX_TCP_HEADER, priority);
+	skb = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, priority));
 	if (!skb) {
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTFAILED);
 		return;
@@ -2423,7 +2423,8 @@ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst,
 
 	if (cvp != NULL && cvp->s_data_constant && cvp->s_data_desired)
 		s_data_desired = cvp->s_data_desired;
-	skb = sock_wmalloc(sk, MAX_TCP_HEADER + 15 + s_data_desired, 1, GFP_ATOMIC);
+	skb = sock_wmalloc(sk, MAX_TCP_HEADER + 15 + s_data_desired, 1,
+					sk_allocation(sk, GFP_ATOMIC));
 	if (skb == NULL)
 		return NULL;
 
@@ -2719,7 +2720,7 @@ void tcp_send_ack(struct sock *sk)
 	 * tcp_transmit_skb() will set the ownership to this
 	 * sock.
 	 */
-	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
+	buff = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, GFP_ATOMIC));
 	if (buff == NULL) {
 		inet_csk_schedule_ack(sk);
 		inet_csk(sk)->icsk_ack.ato = TCP_ATO_MIN;
@@ -2734,7 +2735,7 @@ void tcp_send_ack(struct sock *sk)
 
 	/* Send it off, this clears delayed acks for us. */
 	TCP_SKB_CB(buff)->when = tcp_time_stamp;
-	tcp_transmit_skb(sk, buff, 0, GFP_ATOMIC);
+	tcp_transmit_skb(sk, buff, 0, sk_allocation(sk, GFP_ATOMIC));
 }
 
 /* This routine sends a packet with an out of date sequence
@@ -2754,7 +2755,7 @@ static int tcp_xmit_probe_skb(struct sock *sk, int urgent)
 	struct sk_buff *skb;
 
 	/* We don't queue it, tcp_transmit_skb() sets ownership. */
-	skb = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
+	skb = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, GFP_ATOMIC));
 	if (skb == NULL)
 		return -1;
 
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 2f6c671..4c11638 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -597,7 +597,8 @@ static int tcp_v6_md5_do_add(struct sock *sk, struct in6_addr *peer,
 	} else {
 		/* reallocate new list if current one is full. */
 		if (!tp->md5sig_info) {
-			tp->md5sig_info = kzalloc(sizeof(*tp->md5sig_info), GFP_ATOMIC);
+			tp->md5sig_info = kzalloc(sizeof(*tp->md5sig_info),
+					sk_allocation(sk, GFP_ATOMIC));
 			if (!tp->md5sig_info) {
 				kfree(newkey);
 				return -ENOMEM;
@@ -610,7 +611,8 @@ static int tcp_v6_md5_do_add(struct sock *sk, struct in6_addr *peer,
 		}
 		if (tp->md5sig_info->alloced6 == tp->md5sig_info->entries6) {
 			keys = kmalloc((sizeof (tp->md5sig_info->keys6[0]) *
-				       (tp->md5sig_info->entries6 + 1)), GFP_ATOMIC);
+				       (tp->md5sig_info->entries6 + 1)),
+				       sk_allocation(sk, GFP_ATOMIC));
 
 			if (!keys) {
 				tcp_free_md5sig_pool();
@@ -734,7 +736,8 @@ static int tcp_v6_parse_md5_keys (struct sock *sk, char __user *optval,
 		struct tcp_sock *tp = tcp_sk(sk);
 		struct tcp_md5sig_info *p;
 
-		p = kzalloc(sizeof(struct tcp_md5sig_info), GFP_KERNEL);
+		p = kzalloc(sizeof(struct tcp_md5sig_info),
+				   sk_allocation(sk, GFP_KERNEL));
 		if (!p)
 			return -ENOMEM;
 
@@ -1084,6 +1087,7 @@ static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb)
 	struct tcphdr *th = tcp_hdr(skb);
 	u32 seq = 0, ack_seq = 0;
 	struct tcp_md5sig_key *key = NULL;
+	gfp_t gfp_mask = GFP_ATOMIC;
 
 	if (th->rst)
 		return;
@@ -1095,6 +1099,8 @@ static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb)
 	if (sk)
 		key = tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->daddr);
 #endif
+	if (sk)
+		gfp_mask = sk_allocation(sk, gfp_mask);
 
 	if (th->ack)
 		seq = ntohl(th->ack_seq);
-- 
1.7.3.4



* [PATCH 07/13] netvm: Allow the use of __GFP_MEMALLOC by specific sockets
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (5 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 06/13] net: Introduce sk_allocation() to allow addition of GFP flags depending on the individual socket Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 08/13] netvm: Allow skb allocation to use PFMEMALLOC reserves Mel Gorman
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Allow specific sockets to be tagged SOCK_MEMALLOC and use __GFP_MEMALLOC
for their allocations. These sockets will be able to go below watermarks
and allocate from the emergency reserve. Such sockets are to be used
to service the VM (in other words, to swap over). They must be handled
kernel-side; exposing such a socket to user-space is a bug.

There is a risk that the reserves will be depleted, so for now the
administrator is responsible for increasing min_free_kbytes as
necessary to prevent deadlocks in their workloads.

[a.p.zijlstra@chello.nl: Original patches]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/net/sock.h |    5 ++++-
 net/core/sock.c    |   22 ++++++++++++++++++++++
 2 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index e89c38f..046bc97 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -554,6 +554,7 @@ enum sock_flags {
 	SOCK_RCVTSTAMPNS, /* %SO_TIMESTAMPNS setting */
 	SOCK_LOCALROUTE, /* route locally only, %SO_DONTROUTE setting */
 	SOCK_QUEUE_SHRUNK, /* write queue has been shrunk recently */
+	SOCK_MEMALLOC, /* VM depends on this socket for swapping */
 	SOCK_TIMESTAMPING_TX_HARDWARE,  /* %SOF_TIMESTAMPING_TX_HARDWARE */
 	SOCK_TIMESTAMPING_TX_SOFTWARE,  /* %SOF_TIMESTAMPING_TX_SOFTWARE */
 	SOCK_TIMESTAMPING_RX_HARDWARE,  /* %SOF_TIMESTAMPING_RX_HARDWARE */
@@ -587,7 +588,7 @@ static inline int sock_flag(struct sock *sk, enum sock_flags flag)
 
 static inline gfp_t sk_allocation(struct sock *sk, gfp_t gfp_mask)
 {
-	return gfp_mask;
+	return gfp_mask | (sk->sk_allocation & __GFP_MEMALLOC);
 }
 
 static inline void sk_acceptq_removed(struct sock *sk)
@@ -717,6 +718,8 @@ extern int sk_stream_wait_memory(struct sock *sk, long *timeo_p);
 extern void sk_stream_wait_close(struct sock *sk, long timeo_p);
 extern int sk_stream_error(struct sock *sk, int flags, int err);
 extern void sk_stream_kill_queues(struct sock *sk);
+extern void sk_set_memalloc(struct sock *sk);
+extern void sk_clear_memalloc(struct sock *sk);
 
 extern int sk_wait_data(struct sock *sk, long *timeo);
 
diff --git a/net/core/sock.c b/net/core/sock.c
index 6e81978..c685eda 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -219,6 +219,28 @@ __u32 sysctl_rmem_default __read_mostly = SK_RMEM_MAX;
 int sysctl_optmem_max __read_mostly = sizeof(unsigned long)*(2*UIO_MAXIOV+512);
 EXPORT_SYMBOL(sysctl_optmem_max);
 
+/**
+ * sk_set_memalloc - sets %SOCK_MEMALLOC
+ * @sk: socket to set it on
+ *
+ * Set %SOCK_MEMALLOC on a socket for access to emergency reserves.
+ * It's the responsibility of the admin to adjust min_free_kbytes
+ * to meet the requirements
+ */
+void sk_set_memalloc(struct sock *sk)
+{
+	sock_set_flag(sk, SOCK_MEMALLOC);
+	sk->sk_allocation |= __GFP_MEMALLOC;
+}
+EXPORT_SYMBOL_GPL(sk_set_memalloc);
+
+void sk_clear_memalloc(struct sock *sk)
+{
+	sock_reset_flag(sk, SOCK_MEMALLOC);
+	sk->sk_allocation &= ~__GFP_MEMALLOC;
+}
+EXPORT_SYMBOL_GPL(sk_clear_memalloc);
+
 #if defined(CONFIG_CGROUPS) && !defined(CONFIG_NET_CLS_CGROUP)
 int net_cls_subsys_id = -1;
 EXPORT_SYMBOL_GPL(net_cls_subsys_id);
-- 
1.7.3.4



* [PATCH 08/13] netvm: Allow skb allocation to use PFMEMALLOC reserves
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (6 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 07/13] netvm: Allow the use of __GFP_MEMALLOC by specific sockets Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing Mel Gorman
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Change the skb allocation API to indicate RX usage and use this to fall back
to the PFMEMALLOC reserve when needed. SKBs allocated from the reserve are
tagged in skb->pfmemalloc. If an SKB is allocated from the reserve and
the socket is later found to be unrelated to page reclaim, the packet is
dropped so that the memory remains available for page reclaim. Network
protocols are expected to recover from this packet loss.

[a.p.zijlstra@chello.nl: Ideas taken from various patches]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/gfp.h    |    3 ++
 include/linux/skbuff.h |   19 ++++++++--
 include/net/sock.h     |    6 +++
 mm/internal.h          |    3 --
 net/core/filter.c      |    8 ++++
 net/core/skbuff.c      |   95 ++++++++++++++++++++++++++++++++++++++++--------
 net/core/sock.c        |    4 ++
 7 files changed, 116 insertions(+), 22 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 4e011e7..9bfd09f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -376,6 +376,9 @@ void drain_local_pages(void *dummy);
 
 extern gfp_t gfp_allowed_mask;
 
+/* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
+bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
+
 extern void pm_restrict_gfp_mask(void);
 extern void pm_restore_gfp_mask(void);
 
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d0ae90a..e08e2ce 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -396,6 +396,7 @@ struct sk_buff {
 #ifdef CONFIG_IPV6_NDISC_NODETYPE
 	__u8			ndisc_nodetype:2;
 #endif
+	__u8			pfmemalloc:1;
 	__u8			ooo_okay:1;
 	kmemcheck_bitfield_end(flags2);
 
@@ -434,6 +435,15 @@ struct sk_buff {
 
 #include <asm/system.h>
 
+#define SKB_ALLOC_FCLONE	0x01
+#define SKB_ALLOC_RX		0x02
+
+/* Returns true if the skb was allocated from PFMEMALLOC reserves */
+static inline bool skb_pfmemalloc(struct sk_buff *skb)
+{
+	return unlikely(skb->pfmemalloc);
+}
+
 /*
  * skb might have a dst pointer attached, refcounted or not.
  * _skb_refdst low order bit is set if refcount was _not_ taken
@@ -491,7 +501,7 @@ extern void kfree_skb(struct sk_buff *skb);
 extern void consume_skb(struct sk_buff *skb);
 extern void	       __kfree_skb(struct sk_buff *skb);
 extern struct sk_buff *__alloc_skb(unsigned int size,
-				   gfp_t priority, int fclone, int node);
+				   gfp_t priority, int flags, int node);
 static inline struct sk_buff *alloc_skb(unsigned int size,
 					gfp_t priority)
 {
@@ -501,7 +511,7 @@ static inline struct sk_buff *alloc_skb(unsigned int size,
 static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
 					       gfp_t priority)
 {
-	return __alloc_skb(size, priority, 1, NUMA_NO_NODE);
+	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
 }
 
 extern bool skb_recycle_check(struct sk_buff *skb, int skb_size);
@@ -1527,7 +1537,8 @@ static inline void __skb_queue_purge(struct sk_buff_head *list)
 static inline struct sk_buff *__dev_alloc_skb(unsigned int length,
 					      gfp_t gfp_mask)
 {
-	struct sk_buff *skb = alloc_skb(length + NET_SKB_PAD, gfp_mask);
+	struct sk_buff *skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask,
+						SKB_ALLOC_RX, NUMA_NO_NODE);
 	if (likely(skb))
 		skb_reserve(skb, NET_SKB_PAD);
 	return skb;
@@ -1578,7 +1589,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
  */
 static inline struct page *__netdev_alloc_page(struct net_device *dev, gfp_t gfp_mask)
 {
-	return alloc_pages_node(NUMA_NO_NODE, gfp_mask, 0);
+	return alloc_pages_node(NUMA_NO_NODE, gfp_mask | __GFP_MEMALLOC, 0);
 }
 
 /**
diff --git a/include/net/sock.h b/include/net/sock.h
index 046bc97..e3aaa88 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -586,6 +586,12 @@ static inline int sock_flag(struct sock *sk, enum sock_flags flag)
 	return test_bit(flag, &sk->sk_flags);
 }
 
+extern atomic_t memalloc_socks;
+static inline int sk_memalloc_socks(void)
+{
+	return atomic_read(&memalloc_socks);
+}
+
 static inline gfp_t sk_allocation(struct sock *sk, gfp_t gfp_mask)
 {
 	return gfp_mask | (sk->sk_allocation & __GFP_MEMALLOC);
diff --git a/mm/internal.h b/mm/internal.h
index a520f3b..d071d380 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -193,9 +193,6 @@ static inline struct page *mem_map_next(struct page *iter,
 #define __paginginit __init
 #endif
 
-/* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
-bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
-
 /* Memory initialisation debug and verification */
 enum mminit_level {
 	MMINIT_WARNING,
diff --git a/net/core/filter.c b/net/core/filter.c
index afb8afb..eac4cc5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -138,6 +138,14 @@ int sk_filter(struct sock *sk, struct sk_buff *skb)
 	int err;
 	struct sk_filter *filter;
 
+	/*
+	 * If the skb was allocated from pfmemalloc reserves, only
+	 * allow SOCK_MEMALLOC sockets to use it as this socket is
+	 * helping free memory
+	 */
+	if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
+		return -ENOMEM;
+
 	err = security_sock_rcv_skb(sk, skb);
 	if (err)
 		return err;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7ebeed0..c90074d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -146,6 +146,43 @@ static void skb_under_panic(struct sk_buff *skb, int sz, void *here)
 	BUG();
 }
 
+
+/*
+ * kmalloc_reserve is a wrapper around kmalloc_node_track_caller that tells
+ * the caller if emergency pfmemalloc reserves are being used. If it is and
+ * the socket is later found to be SOCK_MEMALLOC then PFMEMALLOC reserves
+ * may be used. Otherwise, the packet data may be discarded until enough
+ * memory is free
+ */
+#define kmalloc_reserve(size, gfp, node, pfmemalloc) \
+	 __kmalloc_reserve(size, gfp, node, _RET_IP_, pfmemalloc)
+void *__kmalloc_reserve(size_t size, gfp_t flags, int node, unsigned long ip,
+			 bool *pfmemalloc)
+{
+	void *obj;
+	bool ret_pfmemalloc = false;
+
+	/*
+	 * Try a regular allocation; if that fails and we're not entitled
+	 * to the reserves, fail.
+	 */
+	obj = kmalloc_node_track_caller(size,
+				flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
+				node);
+	if (obj || !(gfp_pfmemalloc_allowed(flags)))
+		goto out;
+
+	/* Try again but now we are using pfmemalloc reserves */
+	ret_pfmemalloc = true;
+	obj = kmalloc_node_track_caller(size, flags, node);
+
+out:
+	if (pfmemalloc)
+		*pfmemalloc = ret_pfmemalloc;
+
+	return obj;
+}
+
 /* 	Allocate a new skbuff. We do this ourselves so we can fill in a few
  *	'private' fields and also do memory statistics to find all the
  *	[BEEP] leaks.
@@ -156,8 +193,10 @@ static void skb_under_panic(struct sk_buff *skb, int sz, void *here)
  *	__alloc_skb	-	allocate a network buffer
  *	@size: size to allocate
  *	@gfp_mask: allocation mask
- *	@fclone: allocate from fclone cache instead of head cache
- *		and allocate a cloned (child) skb
+ *	@flags: If SKB_ALLOC_FCLONE is set, allocate from fclone cache
+ *		instead of head cache and allocate a cloned (child) skb.
+ *		If SKB_ALLOC_RX is set, __GFP_MEMALLOC will be used for
+ *		allocations in case the data is required for writeback
  *	@node: numa node to allocate memory on
  *
  *	Allocate a new &sk_buff. The returned buffer has no headroom and a
@@ -168,14 +207,19 @@ static void skb_under_panic(struct sk_buff *skb, int sz, void *here)
  *	%GFP_ATOMIC.
  */
 struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
-			    int fclone, int node)
+			    int flags, int node)
 {
 	struct kmem_cache *cache;
 	struct skb_shared_info *shinfo;
 	struct sk_buff *skb;
 	u8 *data;
+	bool pfmemalloc;
+
+	cache = (flags & SKB_ALLOC_FCLONE)
+		? skbuff_fclone_cache : skbuff_head_cache;
 
-	cache = fclone ? skbuff_fclone_cache : skbuff_head_cache;
+	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
+		gfp_mask |= __GFP_MEMALLOC;
 
 	/* Get the HEAD */
 	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
@@ -184,8 +228,8 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	prefetchw(skb);
 
 	size = SKB_DATA_ALIGN(size);
-	data = kmalloc_node_track_caller(size + sizeof(struct skb_shared_info),
-			gfp_mask, node);
+	data = kmalloc_reserve(size + sizeof(struct skb_shared_info),
+			gfp_mask, node, &pfmemalloc);
 	if (!data)
 		goto nodata;
 	prefetchw(data + size);
@@ -196,6 +240,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	 * the tail pointer in struct sk_buff!
 	 */
 	memset(skb, 0, offsetof(struct sk_buff, tail));
+	skb->pfmemalloc = pfmemalloc;
 	skb->truesize = size + sizeof(struct sk_buff);
 	atomic_set(&skb->users, 1);
 	skb->head = data;
@@ -212,7 +257,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	atomic_set(&shinfo->dataref, 1);
 	kmemcheck_annotate_variable(shinfo->destructor_arg);
 
-	if (fclone) {
+	if (flags & SKB_ALLOC_FCLONE) {
 		struct sk_buff *child = skb + 1;
 		atomic_t *fclone_ref = (atomic_t *) (child + 1);
 
@@ -222,6 +267,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 		atomic_set(fclone_ref, 1);
 
 		child->fclone = SKB_FCLONE_UNAVAILABLE;
+		child->pfmemalloc = pfmemalloc;
 	}
 out:
 	return skb;
@@ -250,7 +296,8 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
 {
 	struct sk_buff *skb;
 
-	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, NUMA_NO_NODE);
+	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask,
+						SKB_ALLOC_RX, NUMA_NO_NODE);
 	if (likely(skb)) {
 		skb_reserve(skb, NET_SKB_PAD);
 		skb->dev = dev;
@@ -526,6 +573,7 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 #if defined(CONFIG_IP_VS) || defined(CONFIG_IP_VS_MODULE)
 	new->ipvs_property	= old->ipvs_property;
 #endif
+	new->pfmemalloc		= old->pfmemalloc;
 	new->protocol		= old->protocol;
 	new->mark		= old->mark;
 	new->skb_iif		= old->skb_iif;
@@ -620,6 +668,9 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
 		n->fclone = SKB_FCLONE_CLONE;
 		atomic_inc(fclone_ref);
 	} else {
+		if (skb_pfmemalloc(skb))
+			gfp_mask |= __GFP_MEMALLOC;
+
 		n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
 		if (!n)
 			return NULL;
@@ -656,6 +707,13 @@ static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 	skb_shinfo(new)->gso_type = skb_shinfo(old)->gso_type;
 }
 
+static inline int skb_alloc_rx_flag(const struct sk_buff *skb)
+{
+	if (skb_pfmemalloc((struct sk_buff *)skb))
+		return SKB_ALLOC_RX;
+	return 0;
+}
+
 /**
  *	skb_copy	-	create private copy of an sk_buff
  *	@skb: buffer to copy
@@ -677,7 +735,8 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
 {
 	int headerlen = skb_headroom(skb);
 	unsigned int size = (skb_end_pointer(skb) - skb->head) + skb->data_len;
-	struct sk_buff *n = alloc_skb(size, gfp_mask);
+	struct sk_buff *n = __alloc_skb(size, gfp_mask,
+					skb_alloc_rx_flag(skb), NUMA_NO_NODE);
 
 	if (!n)
 		return NULL;
@@ -711,7 +770,8 @@ EXPORT_SYMBOL(skb_copy);
 struct sk_buff *pskb_copy(struct sk_buff *skb, gfp_t gfp_mask)
 {
 	unsigned int size = skb_end_pointer(skb) - skb->head;
-	struct sk_buff *n = alloc_skb(size, gfp_mask);
+	struct sk_buff *n = __alloc_skb(size, gfp_mask,
+					skb_alloc_rx_flag(skb), NUMA_NO_NODE);
 
 	if (!n)
 		goto out;
@@ -802,7 +862,10 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 		goto adjust_others;
 	}
 
-	data = kmalloc(size + sizeof(struct skb_shared_info), gfp_mask);
+	if (skb_pfmemalloc(skb))
+		gfp_mask |= __GFP_MEMALLOC;
+	data = kmalloc_reserve(size + sizeof(struct skb_shared_info), gfp_mask,
+			NUMA_NO_NODE, NULL);
 	if (!data)
 		goto nodata;
 
@@ -903,8 +966,9 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
 	/*
 	 *	Allocate the copy buffer
 	 */
-	struct sk_buff *n = alloc_skb(newheadroom + skb->len + newtailroom,
-				      gfp_mask);
+	struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom,
+				      gfp_mask, skb_alloc_rx_flag(skb),
+				      NUMA_NO_NODE);
 	int oldheadroom = skb_headroom(skb);
 	int head_copy_len, head_copy_off;
 	int off;
@@ -2551,8 +2615,9 @@ struct sk_buff *skb_segment(struct sk_buff *skb, u32 features)
 			skb_release_head_state(nskb);
 			__skb_push(nskb, doffset);
 		} else {
-			nskb = alloc_skb(hsize + doffset + headroom,
-					 GFP_ATOMIC);
+			nskb = __alloc_skb(hsize + doffset + headroom,
+					 GFP_ATOMIC, skb_alloc_rx_flag(skb),
+					 NUMA_NO_NODE);
 
 			if (unlikely(!nskb))
 				goto err;
diff --git a/net/core/sock.c b/net/core/sock.c
index c685eda..8308609 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -219,6 +219,8 @@ __u32 sysctl_rmem_default __read_mostly = SK_RMEM_MAX;
 int sysctl_optmem_max __read_mostly = sizeof(unsigned long)*(2*UIO_MAXIOV+512);
 EXPORT_SYMBOL(sysctl_optmem_max);
 
+atomic_t memalloc_socks __read_mostly;
+
 /**
  * sk_set_memalloc - sets %SOCK_MEMALLOC
  * @sk: socket to set it on
@@ -231,6 +233,7 @@ void sk_set_memalloc(struct sock *sk)
 {
 	sock_set_flag(sk, SOCK_MEMALLOC);
 	sk->sk_allocation |= __GFP_MEMALLOC;
+	atomic_inc(&memalloc_socks);
 }
 EXPORT_SYMBOL_GPL(sk_set_memalloc);
 
@@ -238,6 +241,7 @@ void sk_clear_memalloc(struct sock *sk)
 {
 	sock_reset_flag(sk, SOCK_MEMALLOC);
 	sk->sk_allocation &= ~__GFP_MEMALLOC;
+	atomic_dec(&memalloc_socks);
 }
 EXPORT_SYMBOL_GPL(sk_clear_memalloc);
 
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (7 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 08/13] netvm: Allow skb allocation to use PFMEMALLOC reserves Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26 12:21   ` NeilBrown
  2011-04-26  7:36 ` [PATCH 10/13] mm: Micro-optimise slab to avoid a function call Mel Gorman
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

To make sure pfmemalloc packets receive all the memory they need to proceed,
ensure processing of pfmemalloc SKBs happens under PF_MEMALLOC. This is
limited to a subset of protocols that are expected to be used for writing
to swap. Taps are not allowed to use PF_MEMALLOC as they are expected to
communicate with userspace processes, which could be paged out.

[a.p.zijlstra@chello.nl: Ideas taken from various patches]
[jslaby@suse.cz: Lock imbalance fix]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/net/sock.h |    5 +++++
 net/core/dev.c     |   52 ++++++++++++++++++++++++++++++++++++++++++++++++----
 net/core/sock.c    |   16 ++++++++++++++++
 3 files changed, 69 insertions(+), 4 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index e3aaa88..e928880 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -668,8 +668,13 @@ static inline __must_check int sk_add_backlog(struct sock *sk, struct sk_buff *s
 	return 0;
 }
 
+extern int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb);
+
 static inline int sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
 {
+	if (skb_pfmemalloc(skb))
+		return __sk_backlog_rcv(sk, skb);
+
 	return sk->sk_backlog_rcv(sk, skb);
 }
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 3871bf6..2d79a20 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3095,6 +3095,27 @@ static void vlan_on_bond_hook(struct sk_buff *skb)
 	}
 }
 
+/*
+ * Limit which protocols can use the PFMEMALLOC reserves to those that are
+ * expected to be used for communication with swap.
+ */
+static bool skb_pfmemalloc_protocol(struct sk_buff *skb)
+{
+	if (skb_pfmemalloc(skb))
+		switch (skb->protocol) {
+		case __constant_htons(ETH_P_ARP):
+		case __constant_htons(ETH_P_IP):
+		case __constant_htons(ETH_P_IPV6):
+		case __constant_htons(ETH_P_8021Q):
+			break;
+
+		default:
+			return false;
+		}
+
+	return true;
+}
+
 static int __netif_receive_skb(struct sk_buff *skb)
 {
 	struct packet_type *ptype, *pt_prev;
@@ -3104,15 +3125,28 @@ static int __netif_receive_skb(struct sk_buff *skb)
 	bool deliver_exact = false;
 	int ret = NET_RX_DROP;
 	__be16 type;
+	unsigned long pflags = current->flags;
 
 	if (!netdev_tstamp_prequeue)
 		net_timestamp_check(skb);
 
 	trace_netif_receive_skb(skb);
 
+	/*
+	 * PFMEMALLOC skbs are special, they should
+	 * - be delivered to SOCK_MEMALLOC sockets only
+	 * - stay away from userspace
+	 * - have bounded memory usage
+	 *
+	 * Use PF_MEMALLOC as this saves us from propagating the allocation
+	 * context down to all allocation sites.
+	 */
+	if (skb_pfmemalloc(skb))
+		current->flags |= PF_MEMALLOC;
+
 	/* if we've gotten here through NAPI, check netpoll */
 	if (netpoll_receive_skb(skb))
-		return NET_RX_DROP;
+		goto out;
 
 	if (!skb->skb_iif)
 		skb->skb_iif = skb->dev->ifindex;
@@ -3143,6 +3177,9 @@ another_round:
 	}
 #endif
 
+	if (skb_pfmemalloc(skb))
+		goto skip_taps;
+
 	list_for_each_entry_rcu(ptype, &ptype_all, list) {
 		if (!ptype->dev || ptype->dev == skb->dev) {
 			if (pt_prev)
@@ -3151,13 +3188,17 @@ another_round:
 		}
 	}
 
+skip_taps:
 #ifdef CONFIG_NET_CLS_ACT
 	skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
 	if (!skb)
-		goto out;
+		goto unlock;
 ncls:
 #endif
 
+	if (!skb_pfmemalloc_protocol(skb))
+		goto drop;
+
 	rx_handler = rcu_dereference(skb->dev->rx_handler);
 	if (rx_handler) {
 		if (pt_prev) {
@@ -3166,7 +3207,7 @@ ncls:
 		}
 		switch (rx_handler(&skb)) {
 		case RX_HANDLER_CONSUMED:
-			goto out;
+			goto unlock;
 		case RX_HANDLER_ANOTHER:
 			goto another_round;
 		case RX_HANDLER_EXACT:
@@ -3210,6 +3251,7 @@ ncls:
 	if (pt_prev) {
 		ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
 	} else {
+drop:
 		atomic_long_inc(&skb->dev->rx_dropped);
 		kfree_skb(skb);
 		/* Jamal, now you will not able to escape explaining
@@ -3218,8 +3260,10 @@ ncls:
 		ret = NET_RX_DROP;
 	}
 
-out:
+unlock:
 	rcu_read_unlock();
+out:
+	tsk_restore_flags(current, pflags, PF_MEMALLOC);
 	return ret;
 }
 
diff --git a/net/core/sock.c b/net/core/sock.c
index 8308609..ac36807 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -245,6 +245,22 @@ void sk_clear_memalloc(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_clear_memalloc);
 
+int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+{
+	int ret;
+	unsigned long pflags = current->flags;
+
+	/* these should have been dropped before queueing */
+	BUG_ON(!sock_flag(sk, SOCK_MEMALLOC));
+
+	current->flags |= PF_MEMALLOC;
+	ret = sk->sk_backlog_rcv(sk, skb);
+	tsk_restore_flags(current, pflags, PF_MEMALLOC);
+
+	return ret;
+}
+EXPORT_SYMBOL(__sk_backlog_rcv);
+
 #if defined(CONFIG_CGROUPS) && !defined(CONFIG_NET_CLS_CGROUP)
 int net_cls_subsys_id = -1;
 EXPORT_SYMBOL_GPL(net_cls_subsys_id);
-- 
1.7.3.4



* [PATCH 10/13] mm: Micro-optimise slab to avoid a function call
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (8 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 11/13] nbd: Set SOCK_MEMALLOC for access to PFMEMALLOC reserves Mel Gorman
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Getting and putting objects in SLAB currently requires a function call,
but the bulk of that work relates to the PFMEMALLOC reserves, which are
only in use when network-backed storage is critical. Use an inline
function to determine whether the out-of-line call is required.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/slab.c |   28 ++++++++++++++++++++++++++--
 1 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 1925023..3163d31 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -116,6 +116,8 @@
 #include	<linux/kmemcheck.h>
 #include	<linux/memory.h>
 
+#include	<net/sock.h>
+
 #include	<asm/cacheflush.h>
 #include	<asm/tlbflush.h>
 #include	<asm/page.h>
@@ -944,7 +946,7 @@ static void check_ac_pfmemalloc(struct kmem_cache *cachep,
 	ac->pfmemalloc = false;
 }
 
-static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
+static void *__ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
 						gfp_t flags, bool force_refill)
 {
 	int i;
@@ -991,7 +993,20 @@ static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
 	return objp;
 }
 
-static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+static inline void *ac_get_obj(struct kmem_cache *cachep,
+			struct array_cache *ac, gfp_t flags, bool force_refill)
+{
+	void *objp;
+
+	if (unlikely(sk_memalloc_socks()))
+		objp = __ac_get_obj(cachep, ac, flags, force_refill);
+	else
+		objp = ac->entry[--ac->avail];
+
+	return objp;
+}
+
+static void *__ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
 								void *objp)
 {
 	struct slab *slabp;
@@ -1004,6 +1019,15 @@ static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
 			set_obj_pfmemalloc(&objp);
 	}
 
+	return objp;
+}
+
+static inline void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+								void *objp)
+{
+	if (unlikely(sk_memalloc_socks()))
+		objp = __ac_put_obj(cachep, ac, objp);
+
 	ac->entry[ac->avail++] = objp;
 }
 
-- 
1.7.3.4



* [PATCH 11/13] nbd: Set SOCK_MEMALLOC for access to PFMEMALLOC reserves
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (9 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 10/13] mm: Micro-optimise slab to avoid a function call Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26  7:36 ` [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage Mel Gorman
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Set SOCK_MEMALLOC on the NBD socket to allow access to PFMEMALLOC
reserves so that pages backed by NBD, particularly if swap related,
can be cleaned to prevent the machine deadlocking. It is still
possible for the PFMEMALLOC reserves to be depleted, resulting in
deadlock, but the administrator can mitigate this by increasing
min_free_kbytes.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 drivers/block/nbd.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index e6fc716..322cef8 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -156,6 +156,7 @@ static int sock_xmit(struct nbd_device *lo, int send, void *buf, int size,
 	struct msghdr msg;
 	struct kvec iov;
 	sigset_t blocked, oldset;
+	unsigned long pflags = current->flags;
 
 	if (unlikely(!sock)) {
 		printk(KERN_ERR "%s: Attempted %s on closed socket in sock_xmit\n",
@@ -168,8 +169,9 @@ static int sock_xmit(struct nbd_device *lo, int send, void *buf, int size,
 	siginitsetinv(&blocked, sigmask(SIGKILL));
 	sigprocmask(SIG_SETMASK, &blocked, &oldset);
 
+	current->flags |= PF_MEMALLOC;
 	do {
-		sock->sk->sk_allocation = GFP_NOIO;
+		sock->sk->sk_allocation = GFP_NOIO | __GFP_MEMALLOC;
 		iov.iov_base = buf;
 		iov.iov_len = size;
 		msg.msg_name = NULL;
@@ -214,6 +216,7 @@ static int sock_xmit(struct nbd_device *lo, int send, void *buf, int size,
 	} while (size > 0);
 
 	sigprocmask(SIG_SETMASK, &oldset, NULL);
+	tsk_restore_flags(current, pflags, PF_MEMALLOC);
 
 	return result;
 }
@@ -404,6 +407,8 @@ static int nbd_do_it(struct nbd_device *lo)
 
 	BUG_ON(lo->magic != LO_MAGIC);
 
+	sk_set_memalloc(lo->sock->sk);
+
 	lo->pid = current->pid;
 	ret = sysfs_create_file(&disk_to_dev(lo->disk)->kobj, &pid_attr.attr);
 	if (ret) {
-- 
1.7.3.4



* [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (10 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 11/13] nbd: Set SOCK_MEMALLOC for access to PFMEMALLOC reserves Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26 12:30   ` NeilBrown
  2011-04-26  7:36 ` [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled Mel Gorman
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

If swap is backed by network storage such as NBD, there is a risk that a
large number of reclaimers can hang the system by consuming all the
PF_MEMALLOC reserves. To avoid these hangs, the administrator must tune
min_free_kbytes in advance. This patch throttles direct reclaimers once
half the PF_MEMALLOC reserves are in use, as the system is then at risk
of hanging.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/mmzone.h |    1 +
 mm/page_alloc.c        |    1 +
 mm/vmscan.c            |   57 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+), 0 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e56f835..8e5c627 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -638,6 +638,7 @@ typedef struct pglist_data {
 					     range, including holes */
 	int node_id;
 	wait_queue_head_t kswapd_wait;
+	wait_queue_head_t pfmemalloc_wait;
 	struct task_struct *kswapd;
 	int kswapd_max_order;
 	enum zone_type classzone_idx;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c0c42ce..13fc246 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4225,6 +4225,7 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 	pgdat_resize_init(pgdat);
 	pgdat->nr_zones = 0;
 	init_waitqueue_head(&pgdat->kswapd_wait);
+	init_waitqueue_head(&pgdat->pfmemalloc_wait);
 	pgdat->kswapd_max_order = 0;
 	pgdat_page_cgroup_init(pgdat);
 	
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b3a569f..8b6da2b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2117,6 +2117,55 @@ out:
 	return 0;
 }
 
+static bool pfmemalloc_watermark_ok(pg_data_t *pgdat, int high_zoneidx)
+{
+	struct zone *zone;
+	unsigned long pfmemalloc_reserve = 0;
+	unsigned long free_pages = 0;
+	int i;
+
+	for (i = 0; i <= high_zoneidx; i++) {
+		zone = &pgdat->node_zones[i];
+		pfmemalloc_reserve += min_wmark_pages(zone);
+		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+	}
+
+	return free_pages > pfmemalloc_reserve / 2;
+}
+
+/*
+ * Throttle direct reclaimers if storage is backed by the network
+ * and the PFMEMALLOC reserve for the preferred node is getting dangerously
+ * depleted. kswapd will continue to make progress and wake the processes
+ * when the low watermark is reached.
+ */
+static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
+					nodemask_t *nodemask)
+{
+	struct zone *zone;
+	int high_zoneidx = gfp_zone(gfp_mask);
+	DEFINE_WAIT(wait);
+
+	/* Check if the pfmemalloc reserves are ok */
+	first_zones_zonelist(zonelist, high_zoneidx, NULL, &zone);
+	prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
+							TASK_INTERRUPTIBLE);
+	if (pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx))
+		goto out;
+
+	/* Throttle */
+	do {
+		schedule();
+		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
+		prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
+							TASK_INTERRUPTIBLE);
+	} while (!pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx) &&
+			!fatal_signal_pending(current));
+
+out:
+	finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
+}
+
 unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 				gfp_t gfp_mask, nodemask_t *nodemask)
 {
@@ -2133,6 +2182,8 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 		.nodemask = nodemask,
 	};
 
+	throttle_direct_reclaim(gfp_mask, zonelist, nodemask);
+
 	trace_mm_vmscan_direct_reclaim_begin(order,
 				sc.may_writepage,
 				gfp_mask);
@@ -2488,6 +2539,12 @@ loop_again:
 			}
 
 		}
+
+		/* Wake throttled direct reclaimers if low watermark is met */
+		if (waitqueue_active(&pgdat->pfmemalloc_wait) &&
+				pfmemalloc_watermark_ok(pgdat, MAX_NR_ZONES - 1))
+			wake_up_interruptible(&pgdat->pfmemalloc_wait);
+
 		if (all_zones_ok || (order && pgdat_balanced(pgdat, balanced, *classzone_idx)))
 			break;		/* kswapd: all done */
 		/*
-- 
1.7.3.4



* [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (11 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage Mel Gorman
@ 2011-04-26  7:36 ` Mel Gorman
  2011-04-26 12:35   ` NeilBrown
  2011-04-26 14:23 ` [PATCH 00/13] Swap-over-NBD without deadlocking Peter Zijlstra
  2011-04-28 13:31 ` Pavel Machek
  14 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26  7:36 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Under significant pressure when writing back to network-backed storage,
direct reclaimers may get throttled. This is expected to be a
short-lived event: the processes are woken up again, but they do stall
in the meantime. This patch counts how many times such stalling occurs.
It is then up to the administrator whether to reduce these stalls by
increasing min_free_kbytes.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/vm_event_item.h |    1 +
 mm/vmscan.c                   |    1 +
 mm/vmstat.c                   |    1 +
 3 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 03b90cdc..652e5f3 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -29,6 +29,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		FOR_ALL_ZONES(PGSTEAL),
 		FOR_ALL_ZONES(PGSCAN_KSWAPD),
 		FOR_ALL_ZONES(PGSCAN_DIRECT),
+		PGSCAN_DIRECT_THROTTLE,
 #ifdef CONFIG_NUMA
 		PGSCAN_ZONE_RECLAIM_FAILED,
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8b6da2b..e88138b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2154,6 +2154,7 @@ static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
 		goto out;
 
 	/* Throttle */
+	count_vm_event(PGSCAN_DIRECT_THROTTLE);
 	do {
 		schedule();
 		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index a2b7344..5725387 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -911,6 +911,7 @@ const char * const vmstat_text[] = {
 	TEXTS_FOR_ZONES("pgsteal")
 	TEXTS_FOR_ZONES("pgscan_kswapd")
 	TEXTS_FOR_ZONES("pgscan_direct")
+	"pgscan_direct_throttle",
 
 #ifdef CONFIG_NUMA
 	"zone_reclaim_failed",
-- 
1.7.3.4



* Re: [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves
  2011-04-26  7:36 ` [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves Mel Gorman
@ 2011-04-26  9:49   ` NeilBrown
  2011-04-26 10:36     ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26  9:49 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 08:36:44 +0100 Mel Gorman <mgorman@suse.de> wrote:

> __GFP_MEMALLOC will allow the allocation to disregard the watermarks,
> much like PF_MEMALLOC. It allows one to pass along the memalloc state in
> object related allocation flags as opposed to task related flags, such
> as sk->sk_allocation. This removes the need for ALLOC_PFMEMALLOC as
> callers using __GFP_MEMALLOC can get the ALLOC_NO_WATERMARK flag which
> is now enough to identify allocations related to page reclaim.
> 
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> ---
>  include/linux/gfp.h      |    4 +++-
>  include/linux/mm_types.h |    2 +-
>  mm/page_alloc.c          |   14 ++++++--------
>  mm/slab.c                |    2 +-
>  4 files changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index bfb8f93..4e011e7 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -23,6 +23,7 @@ struct vm_area_struct;
>  #define ___GFP_REPEAT		0x400u
>  #define ___GFP_NOFAIL		0x800u
>  #define ___GFP_NORETRY		0x1000u
> +#define ___GFP_MEMALLOC		0x2000u
>  #define ___GFP_COMP		0x4000u
>  #define ___GFP_ZERO		0x8000u
>  #define ___GFP_NOMEMALLOC	0x10000u
> @@ -75,6 +76,7 @@ struct vm_area_struct;
>  #define __GFP_REPEAT	((__force gfp_t)___GFP_REPEAT)	/* See above */
>  #define __GFP_NOFAIL	((__force gfp_t)___GFP_NOFAIL)	/* See above */
>  #define __GFP_NORETRY	((__force gfp_t)___GFP_NORETRY) /* See above */
> +#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)/* Allow access to emergency reserves */
>  #define __GFP_COMP	((__force gfp_t)___GFP_COMP)	/* Add compound page metadata */
>  #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)	/* Return zeroed page on success */
>  #define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) /* Don't use emergency reserves */

Having both "MEMALLOC" and  "NOMEMALLOC" seems ... unfortunate.

It appears that NOMEMALLOC overrides MEMALLOC.  It might be good to document
this:

> +#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)/* Allow access to emergency reserves
                                                                   unless __GFP_NOMEMALLOC is set*/

>  #define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) /* Don't use emergency reserves
                                                                  Overrides __GFP_MEMALLOC */

I suspect that it is never valid to set both.  So NOMEMALLOC is really
NO_PF_MEMALLOC, but making that change is probably just noise.

Maybe a
   WARN_ON((gfp_mask & __GFP_MEMALLOC) && (gfp_mask & __GFP_NOMEMALLOC));
might be wise?

NeilBrown.


* Re: [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves
  2011-04-26  9:49   ` NeilBrown
@ 2011-04-26 10:36     ` Mel Gorman
  2011-04-26 10:53       ` NeilBrown
  0 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 10:36 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, Apr 26, 2011 at 07:49:47PM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 08:36:44 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > __GFP_MEMALLOC will allow the allocation to disregard the watermarks,
> > much like PF_MEMALLOC. It allows one to pass along the memalloc state in
> > object related allocation flags as opposed to task related flags, such
> > as sk->sk_allocation. This removes the need for ALLOC_PFMEMALLOC as
> > callers using __GFP_MEMALLOC can get the ALLOC_NO_WATERMARK flag which
> > is now enough to identify allocations related to page reclaim.
> > 
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > ---
> >  include/linux/gfp.h      |    4 +++-
> >  include/linux/mm_types.h |    2 +-
> >  mm/page_alloc.c          |   14 ++++++--------
> >  mm/slab.c                |    2 +-
> >  4 files changed, 11 insertions(+), 11 deletions(-)
> > 
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index bfb8f93..4e011e7 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -23,6 +23,7 @@ struct vm_area_struct;
> >  #define ___GFP_REPEAT		0x400u
> >  #define ___GFP_NOFAIL		0x800u
> >  #define ___GFP_NORETRY		0x1000u
> > +#define ___GFP_MEMALLOC		0x2000u
> >  #define ___GFP_COMP		0x4000u
> >  #define ___GFP_ZERO		0x8000u
> >  #define ___GFP_NOMEMALLOC	0x10000u
> > @@ -75,6 +76,7 @@ struct vm_area_struct;
> >  #define __GFP_REPEAT	((__force gfp_t)___GFP_REPEAT)	/* See above */
> >  #define __GFP_NOFAIL	((__force gfp_t)___GFP_NOFAIL)	/* See above */
> >  #define __GFP_NORETRY	((__force gfp_t)___GFP_NORETRY) /* See above */
> > +#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)/* Allow access to emergency reserves */
> >  #define __GFP_COMP	((__force gfp_t)___GFP_COMP)	/* Add compound page metadata */
> >  #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)	/* Return zeroed page on success */
> >  #define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) /* Don't use emergency reserves */
> 
> Having both "MEMALLOC" and  "NOMEMALLOC" seems ... unfortunate.
> 
> It appears that NOMEMALLOC over-rides MEMALLOC.  It might be good to document
> this 
> 

I can document it. Right now, a better name than NOMEMALLOC does not
spring to mind.

> > +#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)/* Allow access to emergency reserves
>                                                                    unless __GFP_NOMEMALLOC is set*/
> 
> >  #define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) /* Don't use emergency reserves
>                                                                   Overrides __GFP_MEMALLOC */
> 
> I suspect that it is never valid to set both.  So NOMEMALLOC is really
> NO_PF_MEMALLOC, but making that change is probably just noise.
> 
> Maybe a
>    WARN_ON((gfp_mask & __GFP_MEMALLOC) && (gfp_mask & __GFP_NOMEMALLOC));
> might be wise?
> 

Both MEMALLOC and NOMEMALLOC are related to the PFMEMALLOC reserves, so
it's reasonable for them to have similar names. That warning would also
trigger in practice, because the combination of flags does legitimately
occur.

Consider for example

any interrupt
  -> __netdev_alloc_skb		(mask == GFP_ATOMIC)
    -> __alloc_skb		(mask == GFP_ATOMIC)
       if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
               gfp_mask |= __GFP_MEMALLOC;
				(mask == GFP_ATOMIC|__GFP_NOMEMALLOC)
      -> __kmalloc_reserve
		First attempt tries to avoid reserves so adds __GFP_MEMALLOC
				(mask == GFP_ATOMIC|__GFP_NOMEMALLOC|__GFP_MEMALLOC)

You're right in that __GFP_NOMEMALLOC overrides __GFP_MEMALLOC so that
could do with a note.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves
  2011-04-26 10:36     ` Mel Gorman
@ 2011-04-26 10:53       ` NeilBrown
  2011-04-26 14:00         ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 10:53 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 11:36:46 +0100 Mel Gorman <mgorman@suse.de> wrote:

> On Tue, Apr 26, 2011 at 07:49:47PM +1000, NeilBrown wrote:

> > Maybe a
> >    WARN_ON((gfp_mask & __GFP_MEMALLOC) && (gfp_mask & __GFP_NOMEMALLOC));
> > might be wise?
> > 
> 
> Both MEMALLOC and NOMEMALLOC are related to PFMEMALLOC reserves so
> it's reasonable for them to have similar names. This warning will
> also trigger because it's a combination of flags that does happen.
> 
> Consider for example
> 
> any interrupt
>   -> __netdev_alloc_skb		(mask == GFP_ATOMIC)
>     -> __alloc_skb		(mask == GFP_ATOMIC)
>        if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
>                gfp_mask |= __GFP_MEMALLOC;
> 				(mask == GFP_ATOMIC|__GFP_NOMEMALLOC)
>       -> __kmalloc_reserve
> 		First attempt tries to avoid reserves so adds __GFP_MEMALLOC
> 				(mask == GFP_ATOMIC|__GFP_NOMEMALLOC|__GFP_MEMALLOC)
> 

You have the "NO"s mixed up a bit which confused me for a while :-)
But I see your point - I guess the WARN_ON isn't really needed.


> You're right in that __GFP_NOMEMALLOC overrides __GFP_MEMALLOC so that
> could do with a note.
> 

Thanks,

NeilBrown


* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26  7:36 ` [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages Mel Gorman
@ 2011-04-26 11:15   ` NeilBrown
  2011-04-26 11:33     ` Mel Gorman
  2011-04-26 11:37   ` NeilBrown
  1 sibling, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 11:15 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:

> +bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> +{
> +	return gfp_to_alloc_flags(gfp_mask) & ALLOC_PFMEMALLOC;
> +}
> +
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	struct zonelist *zonelist, enum zone_type high_zoneidx,
> @@ -2202,8 +2211,16 @@ nopage:
>  got_pg:
>  	if (kmemcheck_enabled)
>  		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
> -	return page;
>  
> +	/*
> > +	 * page->pfmemalloc is set when the caller had PFMEMALLOC set or has
> +	 * been OOM killed. The expectation is that the caller is taking
> +	 * steps that will free more memory. The caller should avoid the
> +	 * page being used for !PFMEMALLOC purposes.
> +	 */
> +	page->pfmemalloc = (alloc_flags & ALLOC_PFMEMALLOC);
> +
> +	return page;

Linus doesn't seem to be a fan of this construct:
   https://lkml.org/lkml/2011/4/1/255

pfmemalloc is a bool, and the value on the right is either 0 or 0x1000.

If bool happens to be typedefed to 'char' or even 'short', pfmemalloc would
always be set to 0.
Ditto for the gfp_pfmemalloc_allowed function.

Prefixing with '!!' would make it safe.

NeilBrown


* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26 11:15   ` NeilBrown
@ 2011-04-26 11:33     ` Mel Gorman
  2011-04-26 12:05       ` NeilBrown
  0 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 11:33 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 2011-04-26 at 21:15 +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > +bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> > +{
> > +	return gfp_to_alloc_flags(gfp_mask) & ALLOC_PFMEMALLOC;
> > +}
> > +
> >  static inline struct page *
> >  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	struct zonelist *zonelist, enum zone_type high_zoneidx,
> > @@ -2202,8 +2211,16 @@ nopage:
> >  got_pg:
> >  	if (kmemcheck_enabled)
> >  		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
> > -	return page;
> >  
> > +	/*
> > +	 * page->pfmemalloc is set when the caller had PFMEMALLOC set or has
> > +	 * been OOM killed. The expectation is that the caller is taking
> > +	 * steps that will free more memory. The caller should avoid the
> > +	 * page being used for !PFMEMALLOC purposes.
> > +	 */
> > +	page->pfmemalloc = (alloc_flags & ALLOC_PFMEMALLOC);
> > +
> > +	return page;
> 
> Linus doesn't seem to be a fan of this construct:
>    https://lkml.org/lkml/2011/4/1/255
> 

There is confusion around this topic. Andrew prefers bool for true/false
values and it's self-documenting. I tend to prefer it myself for
readability and there is a slow conversion in the VM from
ints-used-as-bools to bools and my understanding of bool is that any
non-zero value will be treated as true (just as it is for int).

> pfmemalloc is a bool, and the value on the right is either 0 or 0x1000.
> 
> If bool happens to be typedefed to 'char' or even 'short', pfmemalloc would
> always be set to 0.

It is typedefed as _Bool though, which I thought was able to handle the
conversion appropriately - or is that wrong?

> Ditto for the gfp_pfmemalloc_allowed function.
> 
> Prefixing with '!!' would make it safe.
> 

Will do that to avoid any oddities. Thanks

-- 
Mel Gorman
SUSE Labs



* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26  7:36 ` [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages Mel Gorman
  2011-04-26 11:15   ` NeilBrown
@ 2011-04-26 11:37   ` NeilBrown
  2011-04-26 13:59     ` Mel Gorman
  1 sibling, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 11:37 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:

> +		/*
> +		 * If there are full empty slabs and we were not forced to
> +		 * allocate a slab, mark this one !pfmemalloc
> +		 */
> +		l3 = cachep->nodelists[numa_mem_id()];
> +		if (!list_empty(&l3->slabs_free) && force_refill) {
> +			struct slab *slabp = virt_to_slab(objp);
> +			slabp->pfmemalloc = false;
> +			clear_obj_pfmemalloc(&objp);
> +			check_ac_pfmemalloc(cachep, ac);
> +			return objp;
> +		}

The comment doesn't match the code.  I think you need to remove the words
"full" and "not" assuming the code is correct which it probably is...

But the code seems to be much more complex than Peter's original, and I don't
see the gain.

Peter's code had only one 'reserved' flag for each kmem_cache.  You seem to
have one for every slab.  I don't see the point.
It is true that yours is in some sense more fair - but I'm not sure the
complexity is worth it.

Was there some particular reason you made the change?

NeilBrown


* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26 11:33     ` Mel Gorman
@ 2011-04-26 12:05       ` NeilBrown
  0 siblings, 0 replies; 41+ messages in thread
From: NeilBrown @ 2011-04-26 12:05 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 12:33:48 +0100 Mel Gorman <mgorman@suse.de> wrote:

> On Tue, 2011-04-26 at 21:15 +1000, NeilBrown wrote:
> > On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:
> > 
> > > +bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> > > +{
> > > +	return gfp_to_alloc_flags(gfp_mask) & ALLOC_PFMEMALLOC;
> > > +}
> > > +
> > >  static inline struct page *
> > >  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > >  	struct zonelist *zonelist, enum zone_type high_zoneidx,
> > > @@ -2202,8 +2211,16 @@ nopage:
> > >  got_pg:
> > >  	if (kmemcheck_enabled)
> > >  		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
> > > -	return page;
> > >  
> > > +	/*
> > > +	 * page->pfmemalloc is set when the caller had PFMEMALLOC set or has
> > > +	 * been OOM killed. The expectation is that the caller is taking
> > > +	 * steps that will free more memory. The caller should avoid the
> > > +	 * page being used for !PFMEMALLOC purposes.
> > > +	 */
> > > +	page->pfmemalloc = (alloc_flags & ALLOC_PFMEMALLOC);
> > > +
> > > +	return page;
> > 
> > Linus doesn't seem to be a fan of this construct:
> >    https://lkml.org/lkml/2011/4/1/255
> > 
> 
> There is confusion around this topic. Andrew prefers bool for true/false
> values and it's self-documenting. I tend to prefer it myself for
> readability and there is a slow conversion in the VM from
> ints-used-as-bools to bools and my understanding of bool is that any
> non-zero value will be treated as true (just as it is for int).
> 
> > pfmemalloc is a bool, and the value on the right is either 0 or 0x1000.
> > 
> > If bool happens to be typedefed to 'char' or even 'short', pfmemalloc would
> > always be set to 0.
> 
> It is typedefed as _Bool though, which I thought was able to handle the
> conversion appropriately - or is that wrong?

Yes, I too believe that _Bool does the right thing, so this particular usage
does happen to be safe in the kernel.  But there is a long tradition of
'bool' being typedefed to an int type so the usage can look wrong.  So maybe
it is best avoided.

I have no strong feeling either way - I just thought I would highlight it.
In general, I like bool, but I don't like automatic casts (I often use
  if (pointer != NULL) ...
because I think it reads better).

Thanks,
NeilBrown



> 
> > Ditto for the gfp_pfmemalloc_allowed function.
> > 
> > Prefixing with '!!' would make it safe.
> > 
> 
> Will do that to avoid any oddities. Thanks
> 



* Re: [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing
  2011-04-26  7:36 ` [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing Mel Gorman
@ 2011-04-26 12:21   ` NeilBrown
  2011-04-26 14:10     ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 12:21 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 08:36:50 +0100 Mel Gorman <mgorman@suse.de> wrote:

> diff --git a/net/core/dev.c b/net/core/dev.c
> index 3871bf6..2d79a20 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -3095,6 +3095,27 @@ static void vlan_on_bond_hook(struct sk_buff *skb)
>  	}
>  }
>  
> +/*
> + * Limit which protocols can use the PFMEMALLOC reserves to those that are
> + * expected to be used for communication with swap.
> + */
> +static bool skb_pfmemalloc_protocol(struct sk_buff *skb)
> +{
> +	if (skb_pfmemalloc(skb))
> +		switch (skb->protocol) {
> +		case __constant_htons(ETH_P_ARP):
> +		case __constant_htons(ETH_P_IP):
> +		case __constant_htons(ETH_P_IPV6):
> +		case __constant_htons(ETH_P_8021Q):
> +			break;
> +
> +		default:
> +			return false;
> +		}
> +
> +	return true;
> +}

This sort of thing really bugs me :-)
Neither the comment nor the function name actually describe what the function
is doing.  The function is checking *2* things.
   is_pfmemalloc_skb_or_pfmemalloc_protocol()
might be a more correct name, but is too verbose.

I would prefer the skb_pfmemalloc test were removed from here and ....

> +	if (!skb_pfmemalloc_protocol(skb))
> +		goto drop;
> +

...added here so this becomes:

      if (!skb_pfmemalloc(skb) && !skb_pfmemalloc_protocol(skb))
                goto drop;

which actually makes sense.

Thanks,
NeilBrown



* Re: [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage
  2011-04-26  7:36 ` [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage Mel Gorman
@ 2011-04-26 12:30   ` NeilBrown
  2011-04-26 14:26     ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 12:30 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 08:36:53 +0100 Mel Gorman <mgorman@suse.de> wrote:


> +/*
> + * Throttle direct reclaimers if backing storage is backed by the network
> + * and the PFMEMALLOC reserve for the preferred node is getting dangerously
> + * depleted. kswapd will continue to make progress and wake the processes
> + * when the low watermark is reached
> + */
> +static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
> +					nodemask_t *nodemask)
> +{
> +	struct zone *zone;
> +	int high_zoneidx = gfp_zone(gfp_mask);
> +	DEFINE_WAIT(wait);
> +
> +	/* Check if the pfmemalloc reserves are ok */
> +	first_zones_zonelist(zonelist, high_zoneidx, NULL, &zone);
> +	prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> +							TASK_INTERRUPTIBLE);
> +	if (pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx))
> +		goto out;
> +
> +	/* Throttle */
> +	do {
> +		schedule();
> +		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> +		prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> +							TASK_INTERRUPTIBLE);
> +	} while (!pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx) &&
> +			!fatal_signal_pending(current));
> +
> +out:
> +	finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> +}

You are doing an interruptible wait, but only checking for fatal signals.
So if a non-fatal signal arrives, you will busy-wait.

So I suspect you want TASK_KILLABLE, so just use:

    wait_event_killable(zone->zone_pgdat->pfmemalloc_wait,
                        pfmemalloc_watermark_ok(zone->zone_pgdat,
                                                high_zoneidx));

(You also have an extraneous call to finish_wait)

NeilBrown




* Re: [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled
  2011-04-26  7:36 ` [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled Mel Gorman
@ 2011-04-26 12:35   ` NeilBrown
  2011-04-26 14:26     ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 12:35 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 08:36:54 +0100 Mel Gorman <mgorman@suse.de> wrote:

> Under significant pressure when writing back to network-backed storage,
> direct reclaimers may get throttled. This is expected to be a
> short-lived event and the processes get woken up again, but they do
> stall. This patch counts how many times such stalling occurs. It's
> up to the administrator whether to reduce these stalls by increasing
> min_free_kbytes.
> 
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> ---
>  include/linux/vm_event_item.h |    1 +
>  mm/vmscan.c                   |    1 +
>  mm/vmstat.c                   |    1 +
>  3 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index 03b90cdc..652e5f3 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -29,6 +29,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  		FOR_ALL_ZONES(PGSTEAL),
>  		FOR_ALL_ZONES(PGSCAN_KSWAPD),
>  		FOR_ALL_ZONES(PGSCAN_DIRECT),
> +		PGSCAN_DIRECT_THROTTLE,
>  #ifdef CONFIG_NUMA
>  		PGSCAN_ZONE_RECLAIM_FAILED,
>  #endif
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 8b6da2b..e88138b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2154,6 +2154,7 @@ static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
>  		goto out;
>  
>  	/* Throttle */
> +	count_vm_event(PGSCAN_DIRECT_THROTTLE);
>  	do {
>  		schedule();
>  		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index a2b7344..5725387 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -911,6 +911,7 @@ const char * const vmstat_text[] = {
>  	TEXTS_FOR_ZONES("pgsteal")
>  	TEXTS_FOR_ZONES("pgscan_kswapd")
>  	TEXTS_FOR_ZONES("pgscan_direct")
> +	"pgscan_direct_throttle",
>  
>  #ifdef CONFIG_NUMA
>  	"zone_reclaim_failed",

I like this approach.  Make the information available, but don't make a fuss
about it.

Actually, I like the whole series - I'm really having to dig deep to find
anything to complain about :-)

Feel free to put
   Reviewed-by: NeilBrown <neilb@suse.de>
against anything that I haven't commented on.

Thanks,
NeilBrown


* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26 11:37   ` NeilBrown
@ 2011-04-26 13:59     ` Mel Gorman
  2011-04-27 23:21       ` NeilBrown
  0 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 13:59 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, Apr 26, 2011 at 09:37:58PM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > +		/*
> > +		 * If there are full empty slabs and we were not forced to
> > +		 * allocate a slab, mark this one !pfmemalloc
> > +		 */
> > +		l3 = cachep->nodelists[numa_mem_id()];
> > +		if (!list_empty(&l3->slabs_free) && force_refill) {
> > +			struct slab *slabp = virt_to_slab(objp);
> > +			slabp->pfmemalloc = false;
> > +			clear_obj_pfmemalloc(&objp);
> > +			check_ac_pfmemalloc(cachep, ac);
> > +			return objp;
> > +		}
> 
> The comment doesn't match the code.  I think you need to remove the words
> "full" and "not" assuming the code is correct which it probably is...
> 

I'll fix up the comment, you're right, it's confusing.

> But the code seems to be much more complex than Peter's original, and I don't
> see the gain.
> 

You're right, it is more complex.

> Peter's code had only one 'reserved' flag for each kmem_cache. 

The reserve was set in a per-cpu structure so there was a "lag" time
before that information was available to other CPUs. Fine on smaller
machines but a bit more of a problem today. 

> You seem to
> have one for every slab.  I don't see the point.
> It is true that yours is in some sense more fair - but I'm not sure the
> complexity is worth it.
> 

More fairness was one of the objectives.

> Was there some particular reason you made the change?
> 

This version survives under considerably more stress than Peter's
original version did without requiring the additional complexity of
memory reserves.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves
  2011-04-26 10:53       ` NeilBrown
@ 2011-04-26 14:00         ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 14:00 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, Apr 26, 2011 at 08:53:30PM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 11:36:46 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > On Tue, Apr 26, 2011 at 07:49:47PM +1000, NeilBrown wrote:
> 
> > > Maybe a
> > >    WARN_ON((gfp_mask & __GFP_MEMALLOC) && (gfp_mask & __GFP_NOMEMALLOC));
> > > might be wise?
> > > 
> > 
> > Both MEMALLOC and NOMEMALLOC are related to PFMEMALLOC reserves so
> > it's reasonable for them to have similar names. This warning will
> > also trigger because it's a combination of flags that does happen.
> > 
> > Consider for example
> > 
> > any interrupt
> >   -> __netdev_alloc_skb		(mask == GFP_ATOMIC)
> >     -> __alloc_skb		(mask == GFP_ATOMIC)
> >        if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
> >                gfp_mask |= __GFP_MEMALLOC;
> > 				(mask == GFP_ATOMIC|__GFP_NOMEMALLOC)
> >       -> __kmalloc_reserve
> > 		First attempt tries to avoid reserves so adds __GFP_MEMALLOC
> > 				(mask == GFP_ATOMIC|__GFP_NOMEMALLOC|__GFP_MEMALLOC)
> > 
> 
> You have the "NO"s mixed up a bit which confused me for a while :-)
> But I see your point - I guess the WARN_ON isn't really needed.
> 

Bah, that was unhelpful on my part. I'm glad you saw the point anyway.
Sorry about that.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing
  2011-04-26 12:21   ` NeilBrown
@ 2011-04-26 14:10     ` Mel Gorman
  2011-04-26 23:22       ` NeilBrown
  0 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 14:10 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, Apr 26, 2011 at 10:21:57PM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 08:36:50 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 3871bf6..2d79a20 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -3095,6 +3095,27 @@ static void vlan_on_bond_hook(struct sk_buff *skb)
> >  	}
> >  }
> >  
> > +/*
> > + * Limit which protocols can use the PFMEMALLOC reserves to those that are
> > + * expected to be used for communication with swap.
> > + */
> > +static bool skb_pfmemalloc_protocol(struct sk_buff *skb)
> > +{
> > +	if (skb_pfmemalloc(skb))
> > +		switch (skb->protocol) {
> > +		case __constant_htons(ETH_P_ARP):
> > +		case __constant_htons(ETH_P_IP):
> > +		case __constant_htons(ETH_P_IPV6):
> > +		case __constant_htons(ETH_P_8021Q):
> > +			break;
> > +
> > +		default:
> > +			return false;
> > +		}
> > +
> > +	return true;
> > +}
> 
> This sort of thing really bugs me :-)
> Neither the comment nor the function name actually describe what the function
> is doing.  The function is checking *2* things.
>    is_pfmemalloc_skb_or_pfmemalloc_protocol()
> might be a more correct name, but is too verbose.
> 
> I would prefer the skb_pfmemalloc test were removed from here and ....
> 
> > +	if (!skb_pfmemalloc_protocol(skb))
> > +		goto drop;
> > +
> 
> ...added here so this becomes:
> 
>       if (!skb_pfmemalloc(skb) && !skb_pfmemalloc_protocol(skb))
>                 goto drop;
> 
> which actually makes sense.
> 

Moving the check is neater but that check should be

	if (skb_pfmemalloc(skb) && !skb_pfmemalloc_protocol(skb))

? It's only if the skb was allocated from emergency reserves that we
need to consider dropping it to make way for other packets to be
received.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 00/13] Swap-over-NBD without deadlocking
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (12 preceding siblings ...)
  2011-04-26  7:36 ` [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled Mel Gorman
@ 2011-04-26 14:23 ` Peter Zijlstra
  2011-04-26 14:46   ` Mel Gorman
  2011-04-28 13:31 ` Pavel Machek
  14 siblings, 1 reply; 41+ messages in thread
From: Peter Zijlstra @ 2011-04-26 14:23 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown

On Tue, 2011-04-26 at 08:36 +0100, Mel Gorman wrote:
> Comments?

Last time I brought up the whole swap over network bits I was pointed
towards the generic skb recycling work:

  http://lwn.net/Articles/332037/

as a means to pre-allocate memory, and it was suggested to simply pin
the few route-cache entries required to route these packets and
dis-allow swap packets to be fragmented (these last two avoid lots of
funny allocation cases in the network stack).




* Re: [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage
  2011-04-26 12:30   ` NeilBrown
@ 2011-04-26 14:26     ` Mel Gorman
  2011-04-26 23:18       ` NeilBrown
  0 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 14:26 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, Apr 26, 2011 at 10:30:59PM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 08:36:53 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> 
> > +/*
> > + * Throttle direct reclaimers if backing storage is backed by the network
> > + * and the PFMEMALLOC reserve for the preferred node is getting dangerously
> > + * depleted. kswapd will continue to make progress and wake the processes
> > + * when the low watermark is reached
> > + */
> > +static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
> > +					nodemask_t *nodemask)
> > +{
> > +	struct zone *zone;
> > +	int high_zoneidx = gfp_zone(gfp_mask);
> > +	DEFINE_WAIT(wait);
> > +
> > +	/* Check if the pfmemalloc reserves are ok */
> > +	first_zones_zonelist(zonelist, high_zoneidx, NULL, &zone);
> > +	prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> > +							TASK_INTERRUPTIBLE);
> > +	if (pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx))
> > +		goto out;
> > +
> > +	/* Throttle */
> > +	do {
> > +		schedule();
> > +		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > +		prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> > +							TASK_INTERRUPTIBLE);
> > +	} while (!pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx) &&
> > +			!fatal_signal_pending(current));
> > +
> > +out:
> > +	finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > +}
> 
> You are doing an interruptible wait, but only checking for fatal signals.
> So if a non-fatal signal arrives, you will busy-wait.
> 
> So I suspect you want TASK_KILLABLE, so just use:
> 
>     wait_event_killable(zone->zone_pgdat->pfmemalloc_wait,
>                         pfmemalloc_watermark_ok(zone->zone_pgdat,
>                                                 high_zoneidx));
> 

Well, if a normal signal arrives, we do not necessarily want the
process to enter reclaim. For fatal signals, I allow it to continue
because it's not likely to be putting the system under more pressure
if it's exiting.

> (You also have an extraneous call to finish_wait)
> 

Which one? I'm not seeing a flow where finish_wait gets called twice
without a prepare_to_wait in between. 

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled
  2011-04-26 12:35   ` NeilBrown
@ 2011-04-26 14:26     ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 14:26 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, Apr 26, 2011 at 10:35:10PM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 08:36:54 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > Under significant pressure when writing back to network-backed storage,
> > direct reclaimers may get throttled. This is expected to be a
> > short-lived event and the processes get woken up again, but they do
> > stall. This patch counts how many times such stalling occurs. It's
> > up to the administrator whether to reduce these stalls by increasing
> > min_free_kbytes.
> > 
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > ---
> >  include/linux/vm_event_item.h |    1 +
> >  mm/vmscan.c                   |    1 +
> >  mm/vmstat.c                   |    1 +
> >  3 files changed, 3 insertions(+), 0 deletions(-)
> > 
> > diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> > index 03b90cdc..652e5f3 100644
> > --- a/include/linux/vm_event_item.h
> > +++ b/include/linux/vm_event_item.h
> > @@ -29,6 +29,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> >  		FOR_ALL_ZONES(PGSTEAL),
> >  		FOR_ALL_ZONES(PGSCAN_KSWAPD),
> >  		FOR_ALL_ZONES(PGSCAN_DIRECT),
> > +		PGSCAN_DIRECT_THROTTLE,
> >  #ifdef CONFIG_NUMA
> >  		PGSCAN_ZONE_RECLAIM_FAILED,
> >  #endif
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 8b6da2b..e88138b 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2154,6 +2154,7 @@ static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
> >  		goto out;
> >  
> >  	/* Throttle */
> > +	count_vm_event(PGSCAN_DIRECT_THROTTLE);
> >  	do {
> >  		schedule();
> >  		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index a2b7344..5725387 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -911,6 +911,7 @@ const char * const vmstat_text[] = {
> >  	TEXTS_FOR_ZONES("pgsteal")
> >  	TEXTS_FOR_ZONES("pgscan_kswapd")
> >  	TEXTS_FOR_ZONES("pgscan_direct")
> > +	"pgscan_direct_throttle",
> >  
> >  #ifdef CONFIG_NUMA
> >  	"zone_reclaim_failed",
> 
> I like this approach.  Make the information available, but don't make a fuss
> about it.
> 
> Actually, I like the whole series - I'm really having to dig deep to find
> anything to complain about :-)
> 
> Feel free to put
>    Reviewed-by: NeilBrown <neilb@suse.de>
> against anything that I haven't commented on.
> 

Thanks very much!

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 00/13] Swap-over-NBD without deadlocking
  2011-04-26 14:23 ` [PATCH 00/13] Swap-over-NBD without deadlocking Peter Zijlstra
@ 2011-04-26 14:46   ` Mel Gorman
  2011-04-26 14:50     ` Peter Zijlstra
  0 siblings, 1 reply; 41+ messages in thread
From: Mel Gorman @ 2011-04-26 14:46 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown

On Tue, Apr 26, 2011 at 04:23:05PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-04-26 at 08:36 +0100, Mel Gorman wrote:
> > Comments?
> 
> Last time I brought up the whole swap over network bits I was pointed
> towards the generic skb recycling work:
> 
>   http://lwn.net/Articles/332037/
> 
> as a means to pre-allocate memory,

I'd taken note of this to take a much closer look if it turned
out reservations were necessary and to find out what happened with
these patches. So far, bigger reservations have *not* been required
but I agree recycling SKBs may be a better alternative than large
reservations or preallocations if they are necessary.

>  and it was suggested to simply pin
> the few route-cache entries required to route these packets and
> dis-allow swap packets to be fragmented (these last two avoid lots of
> funny allocation cases in the network stack).
> 

I did find that only a few route-cache entries should be required. In
the original patches I worked with, there was a reservation for the
maximum possible number of route-cache entries. I thought this was
overkill and instead reserved 1-per-active-swapfile-backed-by-NFS.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 00/13] Swap-over-NBD without deadlocking
  2011-04-26 14:46   ` Mel Gorman
@ 2011-04-26 14:50     ` Peter Zijlstra
  2011-04-27  8:43       ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: Peter Zijlstra @ 2011-04-26 14:50 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown

On Tue, 2011-04-26 at 15:46 +0100, Mel Gorman wrote:
> 
> I did find that only a few route-cache entries should be required. In
> the original patches I worked with, there was a reservation for the
> maximum possible number of route-cache entries. I thought this was
> overkill and instead reserved 1-per-active-swapfile-backed-by-NFS.

Right, so the thing I was worried about was a route-cache poison attack
where someone would spam the machine such that it would create a lot of
route cache entries and might flush the one we needed just as we needed
it.

Pinning the one entry we need would solve that (if possible).


* Re: [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage
  2011-04-26 14:26     ` Mel Gorman
@ 2011-04-26 23:18       ` NeilBrown
  2011-04-27  8:36         ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-26 23:18 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 15:26:24 +0100 Mel Gorman <mgorman@suse.de> wrote:

> On Tue, Apr 26, 2011 at 10:30:59PM +1000, NeilBrown wrote:
> > On Tue, 26 Apr 2011 08:36:53 +0100 Mel Gorman <mgorman@suse.de> wrote:
> > 
> > 
> > > +/*
> > > + * Throttle direct reclaimers if backing storage is backed by the network
> > > + * and the PFMEMALLOC reserve for the preferred node is getting dangerously
> > > + * depleted. kswapd will continue to make progress and wake the processes
> > > + * when the low watermark is reached
> > > + */
> > > +static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
> > > +					nodemask_t *nodemask)
> > > +{
> > > +	struct zone *zone;
> > > +	int high_zoneidx = gfp_zone(gfp_mask);
> > > +	DEFINE_WAIT(wait);
> > > +
> > > +	/* Check if the pfmemalloc reserves are ok */
> > > +	first_zones_zonelist(zonelist, high_zoneidx, NULL, &zone);
> > > +	prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> > > +							TASK_INTERRUPTIBLE);
> > > +	if (pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx))
> > > +		goto out;
> > > +
> > > +	/* Throttle */
> > > +	do {
> > > +		schedule();
> > > +		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > > +		prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> > > +							TASK_INTERRUPTIBLE);
> > > +	} while (!pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx) &&
> > > +			!fatal_signal_pending(current));
> > > +
> > > +out:
> > > +	finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > > +}
> > 
> > You are doing an interruptible wait, but only checking for fatal signals.
> > So if a non-fatal signal arrives, you will busy-wait.
> > 
> > So I suspect you want TASK_KILLABLE, so just use:
> > 
> >     wait_event_killable(zone->zone_pgdat->pfmemalloc_wait,
> >                         pfmemalloc_watermark_ok(zone->zone_pgdat,
> >                                                 high_zoneidx));
> > 
> 
> Well, if a normal signal arrives, we do not necessarily want the
> process to enter reclaim. For fatal signals, I allow it to continue
> because it's not likely to be putting the system under more pressure
> if it's exiting.

Yep, I understand that and it doesn't seem unreasonable.

However I don't think the code implements that correctly.

If you get a non-fatal signal, schedule will exit immediately (because of the
TASK_INTERRUPTIBLE setting) and the 'while' clause will succeed because the
signal is not fatal, so it will loop around and try to schedule again, which
will again exit immediately - busy loop.

> 
> > (You also have an extraneous call to finish_wait)
> > 
> 
> Which one? I'm not seeing a flow where finish_wait gets called twice
> without a prepare_to_wait in between. 
> 

You don't need to call finish_wait immediately before prepare_to_wait.

It really is best to just use the appropriate 'wait_event*' macro....

NeilBrown


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing
  2011-04-26 14:10     ` Mel Gorman
@ 2011-04-26 23:22       ` NeilBrown
  0 siblings, 0 replies; 41+ messages in thread
From: NeilBrown @ 2011-04-26 23:22 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 15:10:48 +0100 Mel Gorman <mgorman@suse.de> wrote:

> On Tue, Apr 26, 2011 at 10:21:57PM +1000, NeilBrown wrote:
> > On Tue, 26 Apr 2011 08:36:50 +0100 Mel Gorman <mgorman@suse.de> wrote:
> > 
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index 3871bf6..2d79a20 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -3095,6 +3095,27 @@ static void vlan_on_bond_hook(struct sk_buff *skb)
> > >  	}
> > >  }
> > >  
> > > +/*
> > > + * Limit which protocols can use the PFMEMALLOC reserves to those that are
> > > + * expected to be used for communication with swap.
> > > + */
> > > +static bool skb_pfmemalloc_protocol(struct sk_buff *skb)
> > > +{
> > > +	if (skb_pfmemalloc(skb))
> > > +		switch (skb->protocol) {
> > > +		case __constant_htons(ETH_P_ARP):
> > > +		case __constant_htons(ETH_P_IP):
> > > +		case __constant_htons(ETH_P_IPV6):
> > > +		case __constant_htons(ETH_P_8021Q):
> > > +			break;
> > > +
> > > +		default:
> > > +			return false;
> > > +		}
> > > +
> > > +	return true;
> > > +}
> > 
> > This sort of thing really bugs me :-)
> > Neither the comment nor the function name actually describe what the function
> > is doing.  The function is checking *2* things.
> >    is_pfmemalloc_skb_or_pfmemalloc_protocol()
> > might be a more correct name, but is too verbose.
> > 
> > I would prefer the skb_pfmemalloc test were removed from here and ....
> > 
> > > +	if (!skb_pfmemalloc_protocol(skb))
> > > +		goto drop;
> > > +
> > 
> > ...added here so this becomes:
> > 
> >       if (!skb_pfmemalloc(skb) && !skb_pfmemalloc_protocol(skb))
> >                 goto drop;
> > 
> > which actually makes sense.
> > 
> 
> Moving the check is neater but that check should be
> 
> 	if (skb_pfmemalloc(skb) && !skb_pfmemalloc_protocol(skb))
> 
> ? It's only if the skb was allocated from emergency reserves that we
> need to consider dropping it to make way for other packets to be
> received.
> 

Correct.  I got my Boolean algebra all confused. Sorry 'bout that.

NeilBrown

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage
  2011-04-26 23:18       ` NeilBrown
@ 2011-04-27  8:36         ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-27  8:36 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Wed, Apr 27, 2011 at 09:18:11AM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 15:26:24 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > On Tue, Apr 26, 2011 at 10:30:59PM +1000, NeilBrown wrote:
> > > On Tue, 26 Apr 2011 08:36:53 +0100 Mel Gorman <mgorman@suse.de> wrote:
> > > 
> > > 
> > > > +/*
> > > > + * Throttle direct reclaimers if backing storage is backed by the network
> > > > + * and the PFMEMALLOC reserve for the preferred node is getting dangerously
> > > > + * depleted. kswapd will continue to make progress and wake the processes
> > > > + * when the low watermark is reached
> > > > + */
> > > > +static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
> > > > +					nodemask_t *nodemask)
> > > > +{
> > > > +	struct zone *zone;
> > > > +	int high_zoneidx = gfp_zone(gfp_mask);
> > > > +	DEFINE_WAIT(wait);
> > > > +
> > > > +	/* Check if the pfmemalloc reserves are ok */
> > > > +	first_zones_zonelist(zonelist, high_zoneidx, NULL, &zone);
> > > > +	prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> > > > +							TASK_INTERRUPTIBLE);
> > > > +	if (pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx))
> > > > +		goto out;
> > > > +
> > > > +	/* Throttle */
> > > > +	do {
> > > > +		schedule();
> > > > +		finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > > > +		prepare_to_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait,
> > > > +							TASK_INTERRUPTIBLE);
> > > > +	} while (!pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx) &&
> > > > +			!fatal_signal_pending(current));
> > > > +
> > > > +out:
> > > > +	finish_wait(&zone->zone_pgdat->pfmemalloc_wait, &wait);
> > > > +}
> > > 
> > > You are doing an interruptible wait, but only checking for fatal signals.
> > > So if a non-fatal signal arrives, you will busy-wait.
> > > 
> > > So I suspect you want TASK_KILLABLE, so just use:
> > > 
> > >     wait_event_killable(zone->zone_pgdat->pfmemalloc_wait,
> > >                         pfmemalloc_watermark_ok(zone->zone_pgdat,
> > >                                                 high_zoneidx));
> > > 
> > 
> > Well, if a normal signal arrives, we do not necessarily want the
> > process to enter reclaim. For fatal signals, I allow it to continue
> > because it's not likely to be putting the system under more pressure
> > if it's exiting.
> 
> Yep, I understand that and it doesn't seem unreasonable.
> 
> However I don't think the code implements that correctly.
> 
> If you get a non-fatal signal, schedule will exit immediately (because of the
> TASK_INTERRUPTIBLE setting) and the 'while' clause will succeed because the
> signal is not fatal, so it will loop around and try to schedule again, which
> will again exit immediately - busy loop.
> 

Ah, I see. Once again, well spotted.

> > 
> > > (You also have an extraneous call to finish_wait)
> > > 
> > 
> > Which one? I'm not seeing a flow where finish_wait gets called twice
> > without a prepare_to_wait in between. 
> > 
> 
> You don't need to call finish_wait immediately before prepare_to_wait.
> 
> It really is best to just use the appropriate 'wait_event*' macro....
> 

wait_event_interruptible it is. Thanks

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 00/13] Swap-over-NBD without deadlocking
  2011-04-26 14:50     ` Peter Zijlstra
@ 2011-04-27  8:43       ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-27  8:43 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown

On Tue, Apr 26, 2011 at 04:50:49PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-04-26 at 15:46 +0100, Mel Gorman wrote:
> > 
> > I did find that only a few route-cache entries should be required. In
> > the original patches I worked with, there was a reservation for the
> > maximum possible number of route-cache entries. I thought this was
> > overkill and instead reserved 1-per-active-swapfile-backed-by-NFS.
> 
> Right, so the thing I was worried about was a route-cache poison attack
> where someone would spam the machine such that it would create a lot of
> route cache entries and might flush the one we needed just as we needed
> it.
> 
> Pinning the one entry we need would solve that (if possible).

That is a possibility all right, nice thoughts there. Ok, as I do
not want this series to grow to the point where it is unreviewable,
I'll mark pinning the routing cache entry for a follow-on series.
In this series, the throttling logic should allow a new routing cache
entry to be allocated by kswapd as it's immune to the throttle.

Thanks.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-26 13:59     ` Mel Gorman
@ 2011-04-27 23:21       ` NeilBrown
  2011-04-28  9:46         ` Mel Gorman
  0 siblings, 1 reply; 41+ messages in thread
From: NeilBrown @ 2011-04-27 23:21 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Tue, 26 Apr 2011 14:59:40 +0100 Mel Gorman <mgorman@suse.de> wrote:

> On Tue, Apr 26, 2011 at 09:37:58PM +1000, NeilBrown wrote:
> > On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:
> > 
> > > +		/*
> > > +		 * If there are full empty slabs and we were not forced to
> > > +		 * allocate a slab, mark this one !pfmemalloc
> > > +		 */
> > > +		l3 = cachep->nodelists[numa_mem_id()];
> > > +		if (!list_empty(&l3->slabs_free) && force_refill) {
> > > +			struct slab *slabp = virt_to_slab(objp);
> > > +			slabp->pfmemalloc = false;
> > > +			clear_obj_pfmemalloc(&objp);
> > > +			check_ac_pfmemalloc(cachep, ac);
> > > +			return objp;
> > > +		}
> > 
> > The comment doesn't match the code.  I think you need to remove the words
> > "full" and "not" assuming the code is correct which it probably is...
> > 
> 
> I'll fix up the comment, you're right, it's confusing.
> 
> > But the code seems to be much more complex than Peter's original, and I don't
> > see the gain.
> > 
> 
> You're right, it is more complex.
> 
> > Peter's code had only one 'reserved' flag for each kmem_cache. 
> 
> The reserve was set in a per-cpu structure so there was a "lag" time
> before that information was available to other CPUs. Fine on smaller
> machines but a bit more of a problem today. 
> 
> > You seem to
> > have one for every slab.  I don't see the point.
> > It is true that yours is in some sense more fair - but I'm not sure the
> > complexity is worth it.
> > 
> 
> More fairness was one of the objects.
> 
> > Was there some particular reason you made the change?
> > 
> 
> This version survives under considerably more stress than Peter's
> original version did without requiring the additional complexity of
> memory reserves.
> 

That is certainly a very compelling argument .... but I still don't get why.
I'm sorry if I'm being dense, but I still don't see why the complexity buys
us better stability and I really would like to understand.

You don't seem to need the same complexity for SLUB with the justification
of "SLUB generally maintaining smaller lists than SLAB".

Presumably these are per-CPU lists of free objects or slabs?  If the things
on those lists could be used by anyone, long lists couldn't hurt.
So the problem must be that the lists get long while the array_cache is still
marked as 'pfmemalloc' (or 'reserve' in Peter's patches).

Is that the problem?  That reserve memory gets locked up in SLAB freelists?
If so - would that be more easily addressed by effectively reducing the
'batching' when the array_cache had dipped into reserves, so slabs are
returned to the VM more promptly?

Probably related, now that you've fixed the comment here (thanks):

+		/*
+		 * If there are empty slabs on the slabs_free list and we are
+		 * being forced to refill the cache, mark this one !pfmemalloc.
+		 */
+		l3 = cachep->nodelists[numa_mem_id()];
+		if (!list_empty(&l3->slabs_free) && force_refill) {
+			struct slab *slabp = virt_to_slab(objp);
+			slabp->pfmemalloc = false;
+			clear_obj_pfmemalloc(&objp);
+			check_ac_pfmemalloc(cachep, ac);
+			return objp;
+		}

I'm trying to understand it...
The context is that a non-MEMALLOC allocation is happening and everything on
the free lists is reserved for MEMALLOC allocations.
So if in that case there is a completely free slab on the free list we decide
that it is OK to mark the current slab as non-MEMALLOC.
The logic seems to be that we could just release that free slab to the VM,
then an alloc_page would be able to get it back.  But if we are still well
below the reserve watermark, then there might be some other allocation that
is more deserving of the page and we shouldn't just assume we can take
it without actually calling into alloc_pages to check that we are no longer
running on reserves.

So this looks like an optimisation that is wrong.



BTW, 


+	/* Record if ALLOC_PFMEMALLOC was set when allocating the slab */
+	if (pfmemalloc) {
+		struct array_cache *ac = cpu_cache_get(cachep);
+		slabp->pfmemalloc = true;
+		ac->pfmemalloc = 1;
+	}
+

I think that "= 1"  should be "= true".  :-)

NeilBrown

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages
  2011-04-27 23:21       ` NeilBrown
@ 2011-04-28  9:46         ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-28  9:46 UTC (permalink / raw)
  To: NeilBrown; +Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Peter Zijlstra

On Thu, Apr 28, 2011 at 09:21:10AM +1000, NeilBrown wrote:
> On Tue, 26 Apr 2011 14:59:40 +0100 Mel Gorman <mgorman@suse.de> wrote:
> 
> > On Tue, Apr 26, 2011 at 09:37:58PM +1000, NeilBrown wrote:
> > > On Tue, 26 Apr 2011 08:36:43 +0100 Mel Gorman <mgorman@suse.de> wrote:
> > > 
> > > > +		/*
> > > > +		 * If there are full empty slabs and we were not forced to
> > > > +		 * allocate a slab, mark this one !pfmemalloc
> > > > +		 */
> > > > +		l3 = cachep->nodelists[numa_mem_id()];
> > > > +		if (!list_empty(&l3->slabs_free) && force_refill) {
> > > > +			struct slab *slabp = virt_to_slab(objp);
> > > > +			slabp->pfmemalloc = false;
> > > > +			clear_obj_pfmemalloc(&objp);
> > > > +			check_ac_pfmemalloc(cachep, ac);
> > > > +			return objp;
> > > > +		}
> > > 
> > > The comment doesn't match the code.  I think you need to remove the words
> > > "full" and "not" assuming the code is correct which it probably is...
> > > 
> > 
> > I'll fix up the comment, you're right, it's confusing.
> > 
> > > But the code seems to be much more complex than Peter's original, and I don't
> > > see the gain.
> > > 
> > 
> > You're right, it is more complex.
> > 
> > > Peter's code had only one 'reserved' flag for each kmem_cache. 
> > 
> > The reserve was set in a per-cpu structure so there was a "lag" time
> > before that information was available to other CPUs. Fine on smaller
> > machines but a bit more of a problem today. 
> > 
> > > You seem to
> > > have one for every slab.  I don't see the point.
> > > It is true that yours is in some sense more fair - but I'm not sure the
> > > complexity is worth it.
> > > 
> > 
> > More fairness was one of the objects.
> > 
> > > Was there some particular reason you made the change?
> > > 
> > 
> > This version survives under considerably more stress than Peter's
> > original version did without requiring the additional complexity of
> > memory reserves.
> > 
> 
> That is certainly a very compelling argument .... but I still don't get why.
> I'm sorry if I'm being dense, but I still don't see why the complexity buys
> us better stability and I really would like to understand.
> 
> You don't seem to need the same complexity for SLUB with the justification
> of "SLUB generally maintaining smaller lists than SLAB".
> 

It is an educated guess that the length of the lists was what was
relevant. Even without these patches, SLUB is harder to lock up (minutes
rather than seconds to halt the machine) than SLAB and I assumed it
was because there were fewer pages pinned on per-CPU lists with SLUB.

> Presumably these are per-CPU lists of free objects or slabs? 

The per-cpu lists are of objects (the entry[] array in struct
array_cache). The slab management structure is looked up much less
frequently (when a block of objects is being freed, for example).

> If the things
> on those lists could be used by anyone long lists couldn't hurt.

Long lists can hurt in a few ways but I believe the two relevant reasons
for this series are:

1. A remote CPU could be holding the object on its free list
2. Multiple unnecessary caches could be pinning free memory with the
   shrinkers not triggering because everything is waiting on IO to complete

> So the problem must be that the lists get long while the array_cache is still
> marked as 'pfmemalloc'

By marking the array_cache pfmemalloc we can have objects on the list
that are a mix of pfmemalloc and !pfmemalloc objects with very coarse
control over who is accessing them.

For example, in Peter's patches, CPU A could allocate from pfmemalloc
reserves and mark its array_cache appropriately. CPU B could be freeing
the objects but not have its array_cache marked. PFMEMALLOC objects
are now available for !PFMEMALLOC uses on that CPU and we dip further
into our reserves. This was managed by the memory reservation patches
which meant that dipping further into the reserves was not that much
of a problem.

> (or 'reserve' in Peter's patches).
> 
> Is that the problem?  That reserve memory gets locked up in SLAB freelists?
> If so - would that be more easily addressed by effectively reducing the
> 'batching' when the array_cache had dipped into reserves, so slabs are
> returned to the VM more promptly?
> 

Pinning objects on long lists is one problem but I don't think it's
the most important problem. Indications were that the big problem
was insufficient control over who was accessing objects belonging to
slab pages allocated from the pfmemalloc reserves.

> Probably related, now that you've fixed the comment here (thanks):
> 
> +		/*
> +		 * If there are empty slabs on the slabs_free list and we are
> +		 * being forced to refill the cache, mark this one !pfmemalloc.
> +		 */
> +		l3 = cachep->nodelists[numa_mem_id()];
> +		if (!list_empty(&l3->slabs_free) && force_refill) {
> +			struct slab *slabp = virt_to_slab(objp);
> +			slabp->pfmemalloc = false;
> +			clear_obj_pfmemalloc(&objp);
> +			check_ac_pfmemalloc(cachep, ac);
> +			return objp;
> +		}
> 
> I'm trying to understand it...

Thanks :)

> The context is that a non-MEMALLOC allocation is happening and everything on
> the free lists is reserved for MEMALLOC allocations.

Yep.

> So if in that case there is a completely free slab on the free list we decide
> that it is OK to mark the current slab as non-MEMALLOC.

Yep.

> The logic seems to be that we could just release that free slab to the VM,
> then an alloc_page would be able to get it back. 

We could; it'd be slower, and there would need to be some sort of
retry path in a hotter code path to catch the situation, but we could.

> But if we are still well
> below the reserve watermark, then there might be some other allocation that
> is more deserving of the page and we shouldn't just assume we can take
> it without actually calling into alloc_pages to check that we are no longer
> running on reserves.
> 
> So this looks like an optimisation that is wrong.

The problem that is being addressed here is that pfmemalloc slabs
have to be made available for general use at some point or slabs can
be artificially large and waste memory. I didn't do a free and retry
path because it'd be more expensive and I wanted to avoid hurting
the common slab paths.

The assumption is that if there are free slab pages while there are
pfmemalloc slabs in use then we cannot be under that much pressure
and the free slab page is safe to use. If we go well below the reserve
watermark as you are concerned about, the throttle logic will trigger
and we'll at least identify that the situation occurred without the
system crashing.

> BTW, 
> 
> 
> +	/* Record if ALLOC_PFMEMALLOC was set when allocating the slab */
> +	if (pfmemalloc) {
> +		struct array_cache *ac = cpu_cache_get(cachep);
> +		slabp->pfmemalloc = true;
> +		ac->pfmemalloc = 1;
> +	}
> +
> 
> I think that "= 1"  should be "= true".  :-)
> 

/me slaps self

Thanks

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 00/13] Swap-over-NBD without deadlocking
  2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
                   ` (13 preceding siblings ...)
  2011-04-26 14:23 ` [PATCH 00/13] Swap-over-NBD without deadlocking Peter Zijlstra
@ 2011-04-28 13:31 ` Pavel Machek
  2011-04-28 13:42   ` Mel Gorman
  14 siblings, 1 reply; 41+ messages in thread
From: Pavel Machek @ 2011-04-28 13:31 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown, Peter Zijlstra

Hi!


> For testing swap-over-NBD, a machine was booted with 2G of RAM with a
> swapfile backed by NBD. 16*NUM_CPU processes were started that create
> anonymous memory mappings and read them linearly in a loop. The total
> size of the mappings were 4*PHYSICAL_MEMORY to use swap heavily under
> memory pressure. Without the patches, the machine locks up within
> minutes and runs to completion with them applied.
> 
> Comments?

Nice!

It is easy to see why swapping needs these fixes, but... dirty memory
writeout is used for memory clearing, too. Are the same changes necessary
to make that safe?

(Perhaps raise 'max dirty %' for testing?)
								Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 00/13] Swap-over-NBD without deadlocking
  2011-04-28 13:31 ` Pavel Machek
@ 2011-04-28 13:42   ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-28 13:42 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown, Peter Zijlstra

On Thu, Apr 28, 2011 at 03:31:55PM +0200, Pavel Machek wrote:
> Hi!
> 
> 
> > For testing swap-over-NBD, a machine was booted with 2G of RAM with a
> > swapfile backed by NBD. 16*NUM_CPU processes were started that create
> > anonymous memory mappings and read them linearly in a loop. The total
> > size of the mappings were 4*PHYSICAL_MEMORY to use swap heavily under
> > memory pressure. Without the patches, the machine locks up within
> > minutes and runs to completion with them applied.
> > 
> > Comments?
> 
> Nice!
> 
> It is easy to see why swapping needs these fixes, but... dirty memory
> writeout is used for memory clearing, too. Are the same changes necessary
> to make that safe?
> 

Dirty page limiting covers the MAP_SHARED cases, which are already
throttled appropriately.

> (Perhaps raise 'max dirty %' for testing?)

Stress testing passed for dirty ratios of 40% at least. Maybe it would
cause issues when raised to nearly 100% but I don't think that is a
particularly interesting use case.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled
  2011-04-27 16:07 [PATCH 00/13] Swap-over-NBD without deadlocking v3 Mel Gorman
@ 2011-04-27 16:08 ` Mel Gorman
  0 siblings, 0 replies; 41+ messages in thread
From: Mel Gorman @ 2011-04-27 16:08 UTC (permalink / raw)
  To: Linux-MM, Linux-Netdev
  Cc: LKML, David Miller, Neil Brown, Peter Zijlstra, Mel Gorman

Under significant pressure when writing back to network-backed storage,
direct reclaimers may get throttled. This is expected to be a
short-lived event and the processes get woken up again, but they do
get stalled. This patch counts how many times such stalling occurs. It's
up to the administrator whether to reduce these stalls by increasing
min_free_kbytes.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/vm_event_item.h |    1 +
 mm/vmscan.c                   |    1 +
 mm/vmstat.c                   |    1 +
 3 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 03b90cdc..652e5f3 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -29,6 +29,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		FOR_ALL_ZONES(PGSTEAL),
 		FOR_ALL_ZONES(PGSCAN_KSWAPD),
 		FOR_ALL_ZONES(PGSCAN_DIRECT),
+		PGSCAN_DIRECT_THROTTLE,
 #ifdef CONFIG_NUMA
 		PGSCAN_ZONE_RECLAIM_FAILED,
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7f1099f..1b042d8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2152,6 +2152,7 @@ static void throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
 		return;
 
 	/* Throttle */
+	count_vm_event(PGSCAN_DIRECT_THROTTLE);
 	wait_event_interruptible(zone->zone_pgdat->pfmemalloc_wait,
 		pfmemalloc_watermark_ok(zone->zone_pgdat, high_zoneidx));
 }
diff --git a/mm/vmstat.c b/mm/vmstat.c
index a2b7344..5725387 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -911,6 +911,7 @@ const char * const vmstat_text[] = {
 	TEXTS_FOR_ZONES("pgsteal")
 	TEXTS_FOR_ZONES("pgscan_kswapd")
 	TEXTS_FOR_ZONES("pgscan_direct")
+	"pgscan_direct_throttle",
 
 #ifdef CONFIG_NUMA
 	"zone_reclaim_failed",
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2011-04-28 13:42 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-04-26  7:36 [PATCH 00/13] Swap-over-NBD without deadlocking Mel Gorman
2011-04-26  7:36 ` [PATCH 01/13] mm: Serialize access to min_free_kbytes Mel Gorman
2011-04-26  7:36 ` [PATCH 02/13] mm: sl[au]b: Add knowledge of PFMEMALLOC reserve pages Mel Gorman
2011-04-26 11:15   ` NeilBrown
2011-04-26 11:33     ` Mel Gorman
2011-04-26 12:05       ` NeilBrown
2011-04-26 11:37   ` NeilBrown
2011-04-26 13:59     ` Mel Gorman
2011-04-27 23:21       ` NeilBrown
2011-04-28  9:46         ` Mel Gorman
2011-04-26  7:36 ` [PATCH 03/13] mm: Introduce __GFP_MEMALLOC to allow access to emergency reserves Mel Gorman
2011-04-26  9:49   ` NeilBrown
2011-04-26 10:36     ` Mel Gorman
2011-04-26 10:53       ` NeilBrown
2011-04-26 14:00         ` Mel Gorman
2011-04-26  7:36 ` [PATCH 04/13] mm: allow PF_MEMALLOC from softirq context Mel Gorman
2011-04-26  7:36 ` [PATCH 05/13] mm: Ignore mempolicies when using ALLOC_NO_WATERMARK Mel Gorman
2011-04-26  7:36 ` [PATCH 06/13] net: Introduce sk_allocation() to allow addition of GFP flags depending on the individual socket Mel Gorman
2011-04-26  7:36 ` [PATCH 07/13] netvm: Allow the use of __GFP_MEMALLOC by specific sockets Mel Gorman
2011-04-26  7:36 ` [PATCH 08/13] netvm: Allow skb allocation to use PFMEMALLOC reserves Mel Gorman
2011-04-26  7:36 ` [PATCH 09/13] netvm: Set PF_MEMALLOC as appropriate during SKB processing Mel Gorman
2011-04-26 12:21   ` NeilBrown
2011-04-26 14:10     ` Mel Gorman
2011-04-26 23:22       ` NeilBrown
2011-04-26  7:36 ` [PATCH 10/13] mm: Micro-optimise slab to avoid a function call Mel Gorman
2011-04-26  7:36 ` [PATCH 11/13] nbd: Set SOCK_MEMALLOC for access to PFMEMALLOC reserves Mel Gorman
2011-04-26  7:36 ` [PATCH 12/13] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage Mel Gorman
2011-04-26 12:30   ` NeilBrown
2011-04-26 14:26     ` Mel Gorman
2011-04-26 23:18       ` NeilBrown
2011-04-27  8:36         ` Mel Gorman
2011-04-26  7:36 ` [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled Mel Gorman
2011-04-26 12:35   ` NeilBrown
2011-04-26 14:26     ` Mel Gorman
2011-04-26 14:23 ` [PATCH 00/13] Swap-over-NBD without deadlocking Peter Zijlstra
2011-04-26 14:46   ` Mel Gorman
2011-04-26 14:50     ` Peter Zijlstra
2011-04-27  8:43       ` Mel Gorman
2011-04-28 13:31 ` Pavel Machek
2011-04-28 13:42   ` Mel Gorman
2011-04-27 16:07 [PATCH 00/13] Swap-over-NBD without deadlocking v3 Mel Gorman
2011-04-27 16:08 ` [PATCH 13/13] mm: Account for the number of times direct reclaimers get throttled Mel Gorman

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).