From: Michal Hocko <mhocko@kernel.org>
To: NeilBrown <neilb@suse.com>
Cc: linux-mm@kvack.org, Mikulas Patocka <mpatocka@redhat.com>,
	Ondrej Kozina <okozina@redhat.com>,
	David Rientjes <rientjes@google.com>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	Mel Gorman <mgorman@suse.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	dm-devel@redhat.com, Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [RFC PATCH 2/2] mm, mempool: do not throttle PF_LESS_THROTTLE tasks
Date: Mon, 25 Jul 2016 21:23:45 +0200
Message-ID: <20160725192344.GD2166@dhcp22.suse.cz>
In-Reply-To: <20160725083247.GD9401@dhcp22.suse.cz>

[CC Marcelo who might remember other details about the loads which made
 him add this code - see the patch changelog for more context]

On Mon 25-07-16 10:32:47, Michal Hocko wrote:
> On Sat 23-07-16 10:12:24, NeilBrown wrote:
[...]
> > So I wonder what throttle_vm_writeout() really achieves these days.  Is
> > it just a bandaid that no-one is brave enough to remove?
> 
> Maybe yes. It is sitting there quietly and you do not know about it
> until it bites. Like in this particular case.

So I was playing with this today and tried to provoke throttle_vm_writeout
but couldn't hit that path with my pretty much default IO stack. I
probably need a more complex IO setup like dm-crypt or something else that
basically has to double buffer every page in the writeout for some
time.

Anyway I believe that throttle_vm_writeout is just a relic from the
past which has survived many other changes in the reclaim path. I
fully realize my testing is quite limited and I would really appreciate it
if Mikulas could retest with his more complex IO setups, but let me
post a patch with the changelog so that we can at least reason about the
justification. In principle the reclaim path should have sufficient
throttling already, and if that is not the case then we should
consolidate the remaining throttling points rather than keep yet another one.

Thoughts?
---
From 0d950d64e3c59061f7cca71fe5877d4e430499c9 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Mon, 25 Jul 2016 14:18:54 +0200
Subject: [PATCH] mm, vmscan: get rid of throttle_vm_writeout

throttle_vm_writeout was introduced back in 2005 to fix OOMs caused
by excessive pageout activity during reclaim. Too many pages could
be put under writeback, leaving the LRUs full of unreclaimable pages
until the IO completed, and in turn the OOM killer could be invoked.

There have been some important changes in the reclaim path since then,
though. Writers are throttled by balance_dirty_pages when initiating
buffered IO, and later, under memory pressure, direct reclaim is
throttled by wait_iff_congested if the node is considered congested by
dirty pages on the LRUs and the underlying bdi is congested by the
queued IO. Kswapd is throttled as well if it encounters pages marked
for immediate reclaim or under writeback, which signals that there are
too many pages under writeback already. Another important aspect is
that we do not issue any IO from the direct reclaim context anymore.
Under a heavy parallel load this could queue a lot of IO which would be
very scattered and thus inefficient, which would just make the problem
worse.
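
Just to illustrate where the first of those throttling points sits, here
is a rough sketch of the buffered write path (not the actual mm/filemap.c
code; copy_one_chunk is a made-up placeholder for the ->write_begin/copy/
->write_end sequence): every chunk copied into the pagecache is followed
by balance_dirty_pages_ratelimited(), which puts the writer to sleep once
it dirties pages faster than the backing device can write them out.

static ssize_t buffered_write_sketch(struct file *file, struct iov_iter *i)
{
	struct address_space *mapping = file->f_mapping;
	ssize_t written = 0;

	while (iov_iter_count(i)) {
		/* ->write_begin(), copy from userspace, ->write_end() */
		written += copy_one_chunk(file, i);

		/* throttle the writer against the dirty limits */
		balance_dirty_pages_ratelimited(mapping);
	}

	return written;
}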

These three mechanisms should throttle and keep the amount of IO in a
steady state even under heavy IO and memory pressure, so yet another
throttling point doesn't really seem helpful. Quite the contrary, Mikulas
Patocka has reported that swap backed by dm-crypt doesn't work properly
because the swapout IO cannot make sufficient progress: the writeout
path depends on the dm_crypt worker, which has to allocate memory to
perform the encryption. In order to guarantee forward progress it relies
on the mempool allocator. mempool_alloc(), however, prefers to use
the underlying (usually page) allocator before it grabs objects from
the pool. Such an allocation can dive into the memory reclaim and
consequently into throttle_vm_writeout. If there are too many dirty pages
or pages under writeback it will get throttled even though it is in fact
the flusher meant to clear those pending pages.
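
For illustration only, the ordering described above is roughly the
following (a simplified sketch, not the actual mm/mempool.c
implementation; locking and the wait-for-free path are omitted):

void *mempool_alloc_sketch(mempool_t *pool, gfp_t gfp_mask)
{
	void *element;

	/*
	 * Try the underlying allocator first - this can enter direct
	 * reclaim and end up sleeping in throttle_vm_writeout().
	 */
	element = pool->alloc(gfp_mask, pool->pool_data);
	if (element)
		return element;

	/* Only then fall back to the preallocated reserve. */
	spin_lock(&pool->lock);
	if (pool->curr_nr)
		element = pool->elements[--pool->curr_nr];
	spin_unlock(&pool->lock);

	/*
	 * If the reserve is empty as well, the real code waits for an
	 * element to be freed back into the pool and retries (omitted).
	 */
	return element;
}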

[  345.352536] kworker/u4:0    D ffff88003df7f438 10488     6      2	0x00000000
[  345.352536] Workqueue: kcryptd kcryptd_crypt [dm_crypt]
[  345.352536]  ffff88003df7f438 ffff88003e5d0380 ffff88003e5d0380 ffff88003e5d8e80
[  345.352536]  ffff88003dfb3240 ffff88003df73240 ffff88003df80000 ffff88003df7f470
[  345.352536]  ffff88003e5d0380 ffff88003e5d0380 ffff88003df7f828 ffff88003df7f450
[  345.352536] Call Trace:
[  345.352536]  [<ffffffff818d466c>] schedule+0x3c/0x90
[  345.352536]  [<ffffffff818d96a8>] schedule_timeout+0x1d8/0x360
[  345.352536]  [<ffffffff81135e40>] ? detach_if_pending+0x1c0/0x1c0
[  345.352536]  [<ffffffff811407c3>] ? ktime_get+0xb3/0x150
[  345.352536]  [<ffffffff811958cf>] ? __delayacct_blkio_start+0x1f/0x30
[  345.352536]  [<ffffffff818d39e4>] io_schedule_timeout+0xa4/0x110
[  345.352536]  [<ffffffff8121d886>] congestion_wait+0x86/0x1f0
[  345.352536]  [<ffffffff810fdf40>] ? prepare_to_wait_event+0xf0/0xf0
[  345.352536]  [<ffffffff812061d4>] throttle_vm_writeout+0x44/0xd0
[  345.352536]  [<ffffffff81211533>] shrink_zone_memcg+0x613/0x720
[  345.352536]  [<ffffffff81211720>] shrink_zone+0xe0/0x300
[  345.352536]  [<ffffffff81211aed>] do_try_to_free_pages+0x1ad/0x450
[  345.352536]  [<ffffffff81211e7f>] try_to_free_pages+0xef/0x300
[  345.352536]  [<ffffffff811fef19>] __alloc_pages_nodemask+0x879/0x1210
[  345.352536]  [<ffffffff810e8080>] ? sched_clock_cpu+0x90/0xc0
[  345.352536]  [<ffffffff8125a8d1>] alloc_pages_current+0xa1/0x1f0
[  345.352536]  [<ffffffff81265ef5>] ? new_slab+0x3f5/0x6a0
[  345.352536]  [<ffffffff81265dd7>] new_slab+0x2d7/0x6a0
[  345.352536]  [<ffffffff810e7f87>] ? sched_clock_local+0x17/0x80
[  345.352536]  [<ffffffff812678cb>] ___slab_alloc+0x3fb/0x5c0
[  345.352536]  [<ffffffff811f71bd>] ? mempool_alloc_slab+0x1d/0x30
[  345.352536]  [<ffffffff810e7f87>] ? sched_clock_local+0x17/0x80
[  345.352536]  [<ffffffff811f71bd>] ? mempool_alloc_slab+0x1d/0x30
[  345.352536]  [<ffffffff81267ae1>] __slab_alloc+0x51/0x90
[  345.352536]  [<ffffffff811f71bd>] ? mempool_alloc_slab+0x1d/0x30
[  345.352536]  [<ffffffff81267d9b>] kmem_cache_alloc+0x27b/0x310
[  345.352536]  [<ffffffff811f71bd>] mempool_alloc_slab+0x1d/0x30
[  345.352536]  [<ffffffff811f6f11>] mempool_alloc+0x91/0x230
[  345.352536]  [<ffffffff8141a02d>] bio_alloc_bioset+0xbd/0x260
[  345.352536]  [<ffffffffc02f1a54>] kcryptd_crypt+0x114/0x3b0 [dm_crypt]

Let's just drop throttle_vm_writeout altogether. It is not very
helpful anymore.

I have tried to test for a potential writeback IO runaway similar to the
one described in the original patch which introduced this throttling [1]:
a small virtual machine (512MB RAM, 4 CPUs, 2G of swap space and a disk
image on a rather slow NFS mount in sync mode on the host) with 8 parallel
writers each writing 1G worth of data. As soon as the pagecache fills up
and direct reclaim kicks in, I start an anon memory consumer in a loop
(allocating 300M and exiting after populating it) in the background
to make the memory pressure even stronger as well as to disrupt the
steady state of the IO. Direct reclaim is throttled because of the
congestion, and kswapd hits congestion_wait due to nr_immediate, but
throttle_vm_writeout never triggers the sleep throughout the test.
Dirty+writeback stay close to nr_dirty_threshold with some fluctuations
caused by the anon consumer.
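
For reference, the anon memory consumer can be approximated by something
like the following (a minimal userspace sketch, not the exact program
used for the test):

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHUNK	(300UL << 20)	/* ~300M of anonymous memory */

int main(void)
{
	for (;;) {
		pid_t pid = fork();

		if (pid == 0) {
			char *buf = malloc(CHUNK);

			/* populate it so the pages are really allocated */
			if (buf)
				memset(buf, 1, CHUNK);
			_exit(0);	/* exiting frees everything */
		}
		if (pid < 0)
			break;
		waitpid(pid, NULL, 0);
	}
	return 0;
}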

[1] https://www2.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.9-rc1/2.6.9-rc1-mm3/broken-out/vm-pageout-throttling.patch
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/writeback.h |  1 -
 mm/page-writeback.c       | 30 ------------------------------
 mm/vmscan.c               |  2 --
 3 files changed, 33 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 44b4422ae57f..f67a992cdf89 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -319,7 +319,6 @@ void laptop_mode_timer_fn(unsigned long data);
 #else
 static inline void laptop_sync_completion(void) { }
 #endif
-void throttle_vm_writeout(gfp_t gfp_mask);
 bool node_dirty_ok(struct pglist_data *pgdat);
 int wb_domain_init(struct wb_domain *dom, gfp_t gfp);
 #ifdef CONFIG_CGROUP_WRITEBACK
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index b82303a9e67d..2828d6ca1451 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1962,36 +1962,6 @@ bool wb_over_bg_thresh(struct bdi_writeback *wb)
 	return false;
 }
 
-void throttle_vm_writeout(gfp_t gfp_mask)
-{
-	unsigned long background_thresh;
-	unsigned long dirty_thresh;
-
-        for ( ; ; ) {
-		global_dirty_limits(&background_thresh, &dirty_thresh);
-		dirty_thresh = hard_dirty_limit(&global_wb_domain, dirty_thresh);
-
-                /*
-                 * Boost the allowable dirty threshold a bit for page
-                 * allocators so they don't get DoS'ed by heavy writers
-                 */
-                dirty_thresh += dirty_thresh / 10;      /* wheeee... */
-
-                if (global_node_page_state(NR_UNSTABLE_NFS) +
-			global_node_page_state(NR_WRITEBACK) <= dirty_thresh)
-                        	break;
-                congestion_wait(BLK_RW_ASYNC, HZ/10);
-
-		/*
-		 * The caller might hold locks which can prevent IO completion
-		 * or progress in the filesystem.  So we cannot just sit here
-		 * waiting for IO to complete.
-		 */
-		if ((gfp_mask & (__GFP_FS|__GFP_IO)) != (__GFP_FS|__GFP_IO))
-			break;
-        }
-}
-
 /*
  * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
  */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0294ab34f475..0f35ed30e35b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2410,8 +2410,6 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	if (inactive_list_is_low(lruvec, false, sc))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
-
-	throttle_vm_writeout(sc->gfp_mask);
 }
 
 /* Use reclaim/compaction for costly allocs or under memory pressure */
-- 
2.8.1

-- 
Michal Hocko
SUSE Labs
