linux-kernel.vger.kernel.org archive mirror
From: Michal Hocko <mhocko@suse.cz>
To: Mel Gorman <mgorman@suse.de>
Cc: Linux-MM <linux-mm@kvack.org>, Jiri Slaby <jslaby@suse.cz>,
	Valdis Kletnieks <Valdis.Kletnieks@vt.edu>,
	Rik van Riel <riel@redhat.com>,
	Zlatko Calusic <zcalusic@bitsync.net>,
	Johannes Weiner <hannes@cmpxchg.org>,
	dormando <dormando@rydia.net>,
	Satoru Moriya <satoru.moriya@hds.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 07/10 -v2r1] mm: vmscan: Block kswapd if it is encountering pages under writeback
Date: Thu, 21 Mar 2013 17:32:27 +0100	[thread overview]
Message-ID: <20130321163227.GT6094@dhcp22.suse.cz> (raw)
In-Reply-To: <1363525456-10448-8-git-send-email-mgorman@suse.de>

Here is what you have in your mm-vmscan-limit-reclaim-v2r1 branch:
> commit 0dae7d4be56e6a7fe3f128284679f5efc0cc2383
> Author: Mel Gorman <mgorman@suse.de>
> Date:   Tue Mar 12 10:33:31 2013 +0000
> 
>     mm: vmscan: Block kswapd if it is encountering pages under writeback
>     
>     Historically, kswapd used to congestion_wait() at higher priorities if it
>     was not making forward progress. This made no sense as the failure to make
>     progress could be completely independent of IO. It was later replaced by
>     wait_iff_congested() and removed entirely by commit 258401a6 (mm: don't
>     wait on congested zones in balance_pgdat()) as it was duplicating logic
>     in shrink_inactive_list().
>     
>     This is problematic. If kswapd encounters many pages under writeback and
>     it continues to scan until it reaches the high watermark then it will
>     quickly skip over the pages under writeback and reclaim clean young
>     pages or push applications out to swap.
>     
>     The use of wait_iff_congested() is not suited to kswapd as it will only
>     stall if the underlying BDI is really congested or a direct reclaimer was
>     unable to write to the underlying BDI. kswapd bypasses the BDI congestion
>     as it sets PF_SWAPWRITE but even if this was taken into account then it
>     would cause direct reclaimers to stall on writeback which is not desirable.
>     
>     This patch sets a ZONE_WRITEBACK flag if direct reclaim or kswapd is
>     encountering too many pages under writeback. If this flag is set and
>     kswapd encounters a PageReclaim page under writeback then it'll assume
>     that the LRU lists are being recycled too quickly before IO can complete
>     and block waiting for some IO to complete.
>     
>     Signed-off-by: Mel Gorman <mgorman@suse.de>

Looks reasonable to me.
Reviewed-by: Michal Hocko <mhocko@suse.cz>

> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index afedd1d..dd0d266 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -499,6 +499,9 @@ typedef enum {
>  					 * many dirty file pages at the tail
>  					 * of the LRU.
>  					 */
> +	ZONE_WRITEBACK,			/* reclaim scanning has recently found
> +					 * many pages under writeback
> +					 */
>  } zone_flags_t;
>  
>  static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
> @@ -526,6 +529,11 @@ static inline int zone_is_reclaim_dirty(const struct zone *zone)
>  	return test_bit(ZONE_TAIL_LRU_DIRTY, &zone->flags);
>  }
>  
> +static inline int zone_is_reclaim_writeback(const struct zone *zone)
> +{
> +	return test_bit(ZONE_WRITEBACK, &zone->flags);
> +}
> +
>  static inline int zone_is_reclaim_locked(const struct zone *zone)
>  {
>  	return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a8b94fa..e87de90 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -723,25 +723,51 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
>  			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
>  
> +		/*
> +		 * If a page at the tail of the LRU is under writeback, there
> +		 * are three cases to consider.
> +		 *
> +		 * 1) If reclaim is encountering an excessive number of pages
> +		 *    under writeback and this page is both under writeback and
> +		 *    PageReclaim then it indicates that pages are being queued
> +		 *    for IO but are being recycled through the LRU before the
> +		 *    IO can complete. In this case, wait on the IO to complete
> +		 *    and then clear the ZONE_WRITEBACK flag to recheck if the
> +		 *    condition exists.
> +		 *
> +		 * 2) Global reclaim encounters a page, or memcg
> +		 *    encounters a page that is not marked for immediate
> +		 *    reclaim, or the caller does not have __GFP_IO. In
> +		 *    this case mark the page for immediate reclaim and
> +		 *
> +		 *    __GFP_IO is checked because a loop driver thread might
> +		 *    enter reclaim, and deadlock if it waits on a page for
> +		 *    which it is needed to do the write (loop masks off
> +		 *    __GFP_IO|__GFP_FS for this reason); but more thought
> +		 *    would probably show more reasons.
> +		 *
> +		 *    Don't require __GFP_FS, since we're not going into the
> +		 *    FS, just waiting on its writeback completion. Worryingly,
> +		 *    ext4, gfs2 and xfs allocate pages with
> +		 *    grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so testing
> +		 *    may_enter_fs here is liable to OOM on them.
> +		 *
> +		 * 3) memcg encounters a page that is not already marked
> +		 *    PageReclaim. memcg does not have any dirty pages
> +		 *    throttling so we could easily OOM just because too many
> +		 *    pages are in writeback and there is nothing else to
> +		 *    reclaim. Wait for the writeback to complete.
> +		 */
>  		if (PageWriteback(page)) {
> -			/*
> -			 * memcg doesn't have any dirty pages throttling so we
> -			 * could easily OOM just because too many pages are in
> -			 * writeback and there is nothing else to reclaim.
> -			 *
> -			 * Check __GFP_IO, certainly because a loop driver
> -			 * thread might enter reclaim, and deadlock if it waits
> -			 * on a page for which it is needed to do the write
> -			 * (loop masks off __GFP_IO|__GFP_FS for this reason);
> -			 * but more thought would probably show more reasons.
> -			 *
> -			 * Don't require __GFP_FS, since we're not going into
> -			 * the FS, just waiting on its writeback completion.
> -			 * Worryingly, ext4 gfs2 and xfs allocate pages with
> -			 * grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so
> -			 * testing may_enter_fs here is liable to OOM on them.
> -			 */
> -			if (global_reclaim(sc) ||
> +			/* Case 1 above */
> +			if (current_is_kswapd() &&
> +			    PageReclaim(page) &&
> +			    zone_is_reclaim_writeback(zone)) {
> +				wait_on_page_writeback(page);
> +				zone_clear_flag(zone, ZONE_WRITEBACK);
> +
> +			/* Case 2 above */
> +			} else if (global_reclaim(sc) ||
>  			    !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
>  				/*
>  				 * This is slightly racy - end_page_writeback()
> @@ -756,9 +782,13 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  				 */
>  				SetPageReclaim(page);
>  				nr_writeback++;
> +
>  				goto keep_locked;
> +
> +			/* Case 3 above */
> +			} else {
> +				wait_on_page_writeback(page);
>  			}
> -			wait_on_page_writeback(page);
>  		}
>  
>  		if (!force_reclaim)
> @@ -1373,8 +1403,10 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  	 *                     isolated page is PageWriteback
>  	 */
>  	if (nr_writeback && nr_writeback >=
> -			(nr_taken >> (DEF_PRIORITY - sc->priority)))
> +			(nr_taken >> (DEF_PRIORITY - sc->priority))) {
>  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
> +		zone_set_flag(zone, ZONE_WRITEBACK);
> +	}
>  
>  	/*
>  	 * Similarly, if many dirty pages are encountered that are not
> @@ -2639,8 +2671,8 @@ static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, long remaining,
>   * kswapd shrinks the zone by the number of pages required to reach
>   * the high watermark.
>   *
> - * Returns true if kswapd scanned at least the requested number of
> - * pages to reclaim.
> + * Returns true if kswapd scanned at least the requested number of pages to
> + * reclaim or if the lack of progress was due to pages under writeback.
>   */
>  static bool kswapd_shrink_zone(struct zone *zone,
>  			       struct scan_control *sc,
> @@ -2663,6 +2695,8 @@ static bool kswapd_shrink_zone(struct zone *zone,
>  	if (nr_slab == 0 && !zone_reclaimable(zone))
>  		zone->all_unreclaimable = 1;
>  
> +	zone_clear_flag(zone, ZONE_WRITEBACK);
> +
>  	return sc->nr_scanned >= sc->nr_to_reclaim;
>  }
 
-- 
Michal Hocko
SUSE Labs

Thread overview: 120+ messages
2013-03-17 13:04 [RFC PATCH 0/8] Reduce system disruption due to kswapd Mel Gorman
2013-03-17 13:04 ` [PATCH 01/10] mm: vmscan: Limit the number of pages kswapd reclaims at each priority Mel Gorman
2013-03-18 23:53   ` Simon Jeons
2013-03-19  9:55     ` Mel Gorman
2013-03-19 10:16       ` Simon Jeons
2013-03-19 10:59         ` Mel Gorman
2013-03-20 16:18   ` Michal Hocko
2013-03-21  0:52     ` Rik van Riel
2013-03-22  0:08       ` Will Huck
2013-03-21  9:47     ` Mel Gorman
2013-03-21 12:59       ` Michal Hocko
2013-03-21  0:51   ` Rik van Riel
2013-03-21 15:57   ` Johannes Weiner
2013-03-21 16:47     ` Mel Gorman
2013-03-22  0:05     ` Will Huck
2013-03-22  3:52       ` Rik van Riel
2013-03-22  3:56         ` Will Huck
2013-03-22  4:59           ` Will Huck
2013-03-22 13:01             ` Rik van Riel
2013-04-05  0:05               ` Will Huck
2013-04-07  7:32                 ` Will Huck
2013-04-07  7:35                 ` Will Huck
2013-04-11  5:54         ` Will Huck
2013-04-11  5:58         ` Will Huck
2013-04-12  5:46           ` Ric Mason
2013-04-12  9:34             ` Mel Gorman
2013-04-12 13:40               ` Rik van Riel
2013-03-25  9:07   ` Michal Hocko
2013-03-25  9:13     ` Jiri Slaby
2013-03-28 22:31       ` Jiri Slaby
2013-03-29  8:22         ` Michal Hocko
2013-03-30 22:07           ` Jiri Slaby
2013-04-02 11:15             ` Mel Gorman
2013-03-17 13:04 ` [PATCH 02/10] mm: vmscan: Obey proportional scanning requirements for kswapd Mel Gorman
2013-03-17 14:39   ` Andi Kleen
2013-03-17 15:08     ` Mel Gorman
2013-03-21  1:10   ` Rik van Riel
2013-03-21  9:54     ` Mel Gorman
2013-03-21 14:01   ` Michal Hocko
2013-03-21 14:31     ` Mel Gorman
2013-03-21 15:07       ` Michal Hocko
2013-03-21 15:34         ` Mel Gorman
2013-03-22  7:54           ` Michal Hocko
2013-03-22  8:37             ` Mel Gorman
2013-03-22 10:04               ` Michal Hocko
2013-03-22 10:47                 ` Michal Hocko
2013-03-21 16:25   ` Johannes Weiner
2013-03-21 18:02     ` Mel Gorman
2013-03-22 16:53       ` Johannes Weiner
2013-03-22 18:25         ` Mel Gorman
2013-03-22 19:09           ` Johannes Weiner
2013-03-22 19:46             ` Mel Gorman
2013-03-17 13:04 ` [PATCH 03/10] mm: vmscan: Flatten kswapd priority loop Mel Gorman
2013-03-17 14:36   ` Andi Kleen
2013-03-17 15:09     ` Mel Gorman
2013-03-18 23:58   ` Simon Jeons
2013-03-19 10:12     ` Mel Gorman
2013-03-19  3:08   ` Simon Jeons
2013-03-19  8:23     ` Michal Hocko
2013-03-19 10:14     ` Mel Gorman
2013-03-19 10:26       ` Simon Jeons
2013-03-19 11:01         ` Mel Gorman
2013-03-21 14:54   ` Michal Hocko
2013-03-21 15:26     ` Mel Gorman
2013-03-21 15:38       ` Michal Hocko
2013-03-17 13:04 ` [PATCH 04/10] mm: vmscan: Decide whether to compact the pgdat based on reclaim progress Mel Gorman
2013-03-18 11:35   ` Hillf Danton
2013-03-19 10:27     ` Mel Gorman
     [not found]   ` <20130318111130.GA7245@hacker.(null)>
2013-03-19 10:19     ` Mel Gorman
2013-03-21 15:32   ` Michal Hocko
2013-03-21 15:47     ` Mel Gorman
2013-03-21 15:50       ` Michal Hocko
2013-03-17 13:04 ` [PATCH 05/10] mm: vmscan: Do not allow kswapd to scan at maximum priority Mel Gorman
2013-03-21  1:20   ` Rik van Riel
2013-03-21 10:12     ` Mel Gorman
2013-03-21 12:30       ` Rik van Riel
2013-03-21 15:48   ` Michal Hocko
2013-03-17 13:04 ` [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority Mel Gorman
2013-03-17 14:42   ` Andi Kleen
2013-03-17 15:11     ` Mel Gorman
2013-03-21 17:53       ` Rik van Riel
2013-03-21 18:15         ` Mel Gorman
2013-03-21 18:21           ` Rik van Riel
     [not found]   ` <20130318110850.GA7144@hacker.(null)>
2013-03-19 10:35     ` Mel Gorman
2013-03-17 13:04 ` [PATCH 07/10] mm: vmscan: Block kswapd if it is encountering pages under writeback Mel Gorman
2013-03-17 14:49   ` Andi Kleen
2013-03-17 15:19     ` Mel Gorman
2013-03-17 15:40       ` Andi Kleen
2013-03-19 11:06         ` Mel Gorman
2013-03-18 11:37   ` Simon Jeons
2013-03-19 10:57     ` Mel Gorman
     [not found]   ` <20130318115827.GB7245@hacker.(null)>
2013-03-19 10:58     ` Mel Gorman
2013-03-21 16:32   ` Michal Hocko [this message]
2013-03-21 18:42   ` Rik van Riel
2013-03-22  8:27     ` Mel Gorman
2013-03-17 13:04 ` [PATCH 08/10] mm: vmscan: Have kswapd shrink slab only once per priority Mel Gorman
2013-03-17 14:53   ` Andi Kleen
2013-03-21 16:47   ` Michal Hocko
2013-03-21 19:47   ` Rik van Riel
2013-04-09  6:53   ` Joonsoo Kim
2013-04-09  8:41     ` Simon Jeons
2013-04-09 11:13     ` Mel Gorman
2013-04-10  1:07       ` Dave Chinner
2013-04-10  5:23         ` Joonsoo Kim
2013-04-11  9:53         ` Mel Gorman
2013-04-10  5:21       ` Joonsoo Kim
2013-04-11 10:01         ` Mel Gorman
2013-04-11 10:29           ` Ric Mason
2013-03-17 13:04 ` [PATCH 09/10] mm: vmscan: Check if kswapd should writepage " Mel Gorman
2013-03-21 16:58   ` Michal Hocko
2013-03-21 18:07     ` Mel Gorman
2013-03-21 19:52   ` Rik van Riel
2013-03-17 13:04 ` [PATCH 10/10] mm: vmscan: Move logic from balance_pgdat() to kswapd_shrink_zone() Mel Gorman
2013-03-17 14:55   ` Andi Kleen
2013-03-17 15:25     ` Mel Gorman
2013-03-21 17:18   ` Michal Hocko
2013-03-21 18:13     ` Mel Gorman
2013-03-22 14:37 ` [RFC PATCH 0/8] Reduce system disruption due to kswapd Mel Gorman
2013-03-24 19:00 ` Jiri Slaby
2013-03-25  8:17   ` Michal Hocko
