From: Michal Hocko <mhocko@suse.cz>
To: Johannes Weiner <jweiner@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>,
	Christoph Hellwig <hch@infradead.org>,
	Dave Chinner <david@fromorbit.com>,
	Wu Fengguang <fengguang.wu@intel.com>, Jan Kara <jack@suse.cz>,
	Rik van Riel <riel@redhat.com>,
	Minchan Kim <minchan.kim@gmail.com>,
	Chris Mason <chris.mason@oracle.com>,
	"Theodore Ts'o" <tytso@mit.edu>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Shaohua Li <shaohua.li@intel.com>,
	xfs@oss.sgi.com, linux-btrfs@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 1/5] mm: exclude reserved pages from dirtyable memory
Date: Fri, 30 Sep 2011 15:53:14 +0200	[thread overview]
Message-ID: <20110930135314.GA869@tiehlicka.suse.cz> (raw)
In-Reply-To: <1317367044-475-2-git-send-email-jweiner@redhat.com>

On Fri 30-09-11 09:17:20, Johannes Weiner wrote:
> The amount of dirtyable pages should not include the full number of
> free pages: there is a number of reserved pages that the page
> allocator and kswapd always try to keep free.
> 
> The closer (reclaimable pages - dirty pages) is to the number of
> reserved pages, the more likely it becomes for reclaim to run into
> dirty pages:
> 
>        +----------+ ---
>        |   anon   |  |
>        +----------+  |
>        |          |  |
>        |          |  -- dirty limit new    -- flusher new
>        |   file   |  |                     |
>        |          |  |                     |
>        |          |  -- dirty limit old    -- flusher old
>        |          |                        |
>        +----------+                       --- reclaim
>        | reserved |
>        +----------+
>        |  kernel  |
>        +----------+
> 
> This patch introduces a per-zone dirty reserve that takes both the
> lowmem reserve as well as the high watermark of the zone into account,
> and a global sum of those per-zone values that is subtracted from the
> global amount of dirtyable pages.  The lowmem reserve is unavailable
> to page cache allocations and kswapd tries to keep the high watermark
> free.  We don't want to end up in a situation where reclaim has to
> clean pages in order to balance zones.
> 
> Not treating reserved pages as dirtyable on a global level is only a
> conceptual fix.  In reality, dirty pages are not distributed equally
> across zones and reclaim runs into dirty pages on a regular basis.
> 
> But it is important to get this right before tackling the problem on a
> per-zone level, where the distance between reclaim and the dirty pages
> is mostly much smaller in absolute numbers.
> 
> Signed-off-by: Johannes Weiner <jweiner@redhat.com>
> Reviewed-by: Rik van Riel <riel@redhat.com>

Makes sense.
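Just to make sure I read it right, the net effect can be sketched as
follows (simplified pseudo-C for illustration; max_lowmem_reserve() is
shorthand for the existing loop over zone->lowmem_reserve[], not an
actual kernel helper):

	/*
	 * Per zone, in calculate_totalreserve_pages(): pages that the
	 * page allocator and kswapd always keep free must not be
	 * treated as dirtyable.
	 */
	reserve = max_lowmem_reserve(zone) + high_wmark_pages(zone);
	reserve = min(reserve, zone->present_pages);
	zone->dirty_balance_reserve = reserve;

	/*
	 * Globally, in determine_dirtyable_memory(): subtract the sum
	 * of those per-zone reserves from free plus reclaimable pages.
	 */
	dirtyable = global_page_state(NR_FREE_PAGES) +
		    global_reclaimable_pages() -
		    dirty_balance_reserve;
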
Reviewed-by: Michal Hocko <mhocko@suse.cz>

> ---
>  include/linux/mmzone.h |    6 ++++++
>  include/linux/swap.h   |    1 +
>  mm/page-writeback.c    |    6 ++++--
>  mm/page_alloc.c        |   19 +++++++++++++++++++
>  4 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 1ed4116..37a61e7 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -317,6 +317,12 @@ struct zone {
>  	 */
>  	unsigned long		lowmem_reserve[MAX_NR_ZONES];
>  
> +	/*
> +	 * This is a per-zone reserve of pages that should not be
> +	 * considered dirtyable memory.
> +	 */
> +	unsigned long		dirty_balance_reserve;
> +
>  #ifdef CONFIG_NUMA
>  	int node;
>  	/*
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 3808f10..5e70f65 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -209,6 +209,7 @@ struct swap_list_t {
>  /* linux/mm/page_alloc.c */
>  extern unsigned long totalram_pages;
>  extern unsigned long totalreserve_pages;
> +extern unsigned long dirty_balance_reserve;
>  extern unsigned int nr_free_buffer_pages(void);
>  extern unsigned int nr_free_pagecache_pages(void);
>  
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index da6d263..c8acf8a 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -170,7 +170,8 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
>  			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
>  
>  		x += zone_page_state(z, NR_FREE_PAGES) +
> -		     zone_reclaimable_pages(z);
> +		     zone_reclaimable_pages(z) -
> +		     z->dirty_balance_reserve;
>  	}
>  	/*
>  	 * Make sure that the number of highmem pages is never larger
> @@ -194,7 +195,8 @@ static unsigned long determine_dirtyable_memory(void)
>  {
>  	unsigned long x;
>  
> -	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
> +	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() -
> +	    dirty_balance_reserve;
>  
>  	if (!vm_highmem_is_dirtyable)
>  		x -= highmem_dirtyable_memory(x);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1dba05e..f8cba89 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -96,6 +96,14 @@ EXPORT_SYMBOL(node_states);
>  
>  unsigned long totalram_pages __read_mostly;
>  unsigned long totalreserve_pages __read_mostly;
> +/*
> + * When calculating the number of globally allowed dirty pages, there
> + * is a certain number of per-zone reserves that should not be
> + * considered dirtyable memory.  This is the sum of those reserves
> + * over all existing zones that contribute dirtyable memory.
> + */
> +unsigned long dirty_balance_reserve __read_mostly;
> +
>  int percpu_pagelist_fraction;
>  gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
>  
> @@ -5076,8 +5084,19 @@ static void calculate_totalreserve_pages(void)
>  			if (max > zone->present_pages)
>  				max = zone->present_pages;
>  			reserve_pages += max;
> +			/*
> +			 * Lowmem reserves are not available to
> +			 * GFP_HIGHUSER page cache allocations and
> +			 * kswapd tries to balance zones to their high
> +			 * watermark.  As a result, neither should be
> +			 * regarded as dirtyable memory, to prevent a
> +			 * situation where reclaim has to clean pages
> +			 * in order to balance the zones.
> +			 */
> +			zone->dirty_balance_reserve = max;
>  		}
>  	}
> +	dirty_balance_reserve = reserve_pages;
>  	totalreserve_pages = reserve_pages;
>  }
>  
> -- 
> 1.7.6.2
> 

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

Thread overview: 68+ messages
2011-09-30  7:17 [patch 0/5] per-zone dirty limits v3 Johannes Weiner
2011-09-30  7:17 ` [patch 1/5] mm: exclude reserved pages from dirtyable memory Johannes Weiner
2011-09-30 13:53   ` Michal Hocko [this message]
2011-10-01  7:10   ` Minchan Kim
2011-10-03 11:22   ` Mel Gorman
2011-09-30  7:17 ` [patch 2/5] mm: writeback: cleanups in preparation for per-zone dirty limits Johannes Weiner
2011-09-30 13:56   ` Michal Hocko
2011-09-30  7:17 ` [patch 3/5] mm: try to distribute dirty pages fairly across zones Johannes Weiner
2011-09-30  7:35   ` Pekka Enberg
2011-09-30  8:55     ` Johannes Weiner
2011-09-30 14:28   ` Michal Hocko
2011-10-28 20:18     ` Wu Fengguang
2011-10-31 11:33       ` Wu Fengguang
2011-11-01 10:55         ` Johannes Weiner
     [not found]     ` <20111027155618.GA25524@localhost>
     [not found]       ` <20111027161359.GA1319@redhat.com>
     [not found]         ` <20111027204743.GA19343@localhost>
     [not found]           ` <20111027221258.GA22869@localhost>
     [not found]             ` <20111027231933.GB1319@redhat.com>
2011-10-28 20:39               ` Wu Fengguang
2011-11-01 10:52                 ` Johannes Weiner
2011-09-30  7:17 ` [patch 4/5] mm: filemap: pass __GFP_WRITE from grab_cache_page_write_begin() Johannes Weiner
2011-09-30 14:41   ` Michal Hocko
2011-09-30  7:17 ` [patch 5/5] Btrfs: pass __GFP_WRITE for buffered write page allocations Johannes Weiner
2011-10-03 11:25   ` Mel Gorman
2011-11-23 13:34 [patch 0/5] mm: per-zone dirty limits v3-resend Johannes Weiner
2011-11-23 13:34 ` [patch 1/5] mm: exclude reserved pages from dirtyable memory Johannes Weiner
2011-11-30  0:20   ` Andrew Morton
2011-12-07 13:58     ` Johannes Weiner
