From: Neil Brown <neilb@suse.de>
To: Wu Fengguang <fengguang.wu@intel.com>
Cc: Rik van Riel <riel@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Li Shaohua <shaohua.li@intel.com>
Subject: Re: Deadlock possibly caused by too_many_isolated.
Date: Mon, 18 Oct 2010 15:14:59 +1100
Message-ID: <20101018151459.2b443221@notabene>
In-Reply-To: <20100915184434.18e2d933@notabene>

On Wed, 15 Sep 2010 18:44:34 +1000
Neil Brown <neilb@suse.de> wrote:

> On Wed, 15 Sep 2010 16:28:43 +0800
> Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > Neil,
> > 
> > Sorry for the rushed and half-baked ideas this morning...
> > 
> > > @@ -1101,6 +1101,12 @@ static unsigned long shrink_inactive_lis
> > >  	int lumpy_reclaim = 0;
> > >  
> > >  	while (unlikely(too_many_isolated(zone, file, sc))) {
> > > +		if ((sc->gfp_mask & GFP_IOFS) != GFP_IOFS)
> > > +			/* Not allowed to do IO, so mustn't wait
> > > +			 * on processes that might try to do IO
> > > +			 */
> > > +			return SWAP_CLUSTER_MAX;
> > > +
> > 
> > The above patch behaves like this: it returns SWAP_CLUSTER_MAX to
> > make all the upper layers believe "enough pages have been reclaimed".
> > So __alloc_pages_direct_reclaim() sees a non-zero *did_some_progress
> > and goes on to call get_page_from_freelist(). That normally fails
> > because the task didn't really scan the LRU lists. However, it does
> > have a chance to succeed -- with so many processes doing concurrent
> > direct reclaim, it may luckily grab a free page reclaimed by another
> > task. What's more, if it does fail to get a free page, the upper
> > layer __alloc_pages_slowpath() will repeatedly call
> > __alloc_pages_direct_reclaim(). So, sooner or later it will succeed
> > in "stealing" a free page reclaimed by other tasks.
> > 
> > In summary, the patch's behavior for !__GFP_IO/FS allocations is:
> > - won't do any page reclaim
> > - won't fail the page allocation (unexpected)
> > - will wait and steal one free page from others (unreasonable)
> > 
> > So it will address the problem you encountered; however, that is
> > pretty unexpected and illogical behavior, right?
> > 
> > I believe this patch will address the problem equally well.
> > What do you think?
> 
> Thank you for the detailed explanation.  I agree with your reasoning and
> now understand why your patch is sufficient.
> 
> I will get it tested and let you know how that goes.

(Sorry this has taken a month to follow up.)

Testing shows that this patch seems to work.
The test load (essentially kernbench) doesn't deadlock any more, though it
does get bogged down thrashing in swap, so it doesn't make a lot more
progress :-)  I guess that is to be expected.

One observation is that kernbench generated 10%-20% more context switches
with the patch than without.  Is that to be expected?

Do you have plans for sending this patch upstream?

Thanks,
NeilBrown


> 
> Thanks,
> NeilBrown
> 
> 
> > 
> > Thanks,
> > Fengguang
> > ---
> > 
> > mm: Avoid possible deadlock caused by too_many_isolated()
> > 
> > Neil finds that if too_many_isolated() returns true while performing
> > direct reclaim, we can end up waiting for other threads to complete their
> > direct reclaim.  If those threads are allowed to enter the FS or IO to
> > free memory, but this thread is not, then it is possible that those
> > threads will be waiting on this thread and so we get a circular
> > deadlock.
> > 
> > some task enters direct reclaim with GFP_KERNEL
> >   => too_many_isolated() false
> >     => vmscan and run into dirty pages
> >       => pageout()
> >         => take some FS lock
> > 	  => fs/block code does GFP_NOIO allocation
> > 	    => enter direct reclaim again
> > 	      => too_many_isolated() true
> > 		=> waiting for others to progress, however the other
> > 		   tasks may be waiting on that FS lock -- a circular wait
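> > 
> > For reference, the gfp masks involved (as defined in
> > include/linux/gfp.h at the time):
> > 
> > 	#define GFP_NOIO	(__GFP_WAIT)
> > 	#define GFP_NOFS	(__GFP_WAIT | __GFP_IO)
> > 	#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)
> > 
> > so a GFP_NOIO allocation may sleep, but must not recurse into the IO
> > or FS layers -- which is exactly why it cannot help along the reclaim
> > it ends up waiting for.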
> > 
> > The fix is to let !__GFP_IO and !__GFP_FS direct reclaims enjoy a
> > higher priority than normal ones, by granting them a higher throttle
> > threshold.
> > 
> > Now !__GFP_IO/FS reclaims won't wait for __GFP_IO/FS reclaims to
> > progress. They will be blocked only when there are too many concurrent
> > !__GFP_IO/FS reclaims; however, that's very unlikely because IO-less
> > direct reclaims are able to progress much faster, and they won't
> > deadlock each other. The threshold is raised high enough for them that
> > there can be sufficient parallel progress of !__GFP_IO/FS reclaims.
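> > 
> > As a worked example (numbers purely illustrative): with 10000
> > inactive pages on a zone's LRU, __GFP_IO/FS reclaimers start
> > throttling once more than 10000 pages are isolated, while
> > !__GFP_IO/FS reclaimers keep going until more than 80000 pages are
> > isolated -- so in practice they never wait behind the IO-capable
> > ones.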
> > 
> > Reported-by: NeilBrown <neilb@suse.de>
> > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > ---
> >  mm/vmscan.c |    5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > --- linux-next.orig/mm/vmscan.c	2010-09-15 11:58:58.000000000 +0800
> > +++ linux-next/mm/vmscan.c	2010-09-15 15:36:14.000000000 +0800
> > @@ -1141,36 +1141,39 @@ int isolate_lru_page(struct page *page)
> >  	return ret;
> >  }
> >  
> >  /*
> >   * Are there way too many processes in the direct reclaim path already?
> >   */
> >  static int too_many_isolated(struct zone *zone, int file,
> >  		struct scan_control *sc)
> >  {
> >  	unsigned long inactive, isolated;
> > +	int ratio;
> >  
> >  	if (current_is_kswapd())
> >  		return 0;
> >  
> >  	if (!scanning_global_lru(sc))
> >  		return 0;
> >  
> >  	if (file) {
> >  		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
> >  		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
> >  	} else {
> >  		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
> >  		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
> >  	}
> >  
> > -	return isolated > inactive;
> > +	ratio = sc->gfp_mask & (__GFP_IO | __GFP_FS) ? 1 : 8;
> > +
> > +	return isolated > inactive * ratio;
> >  }
> >  
> >  /*
> >   * TODO: Try merging with migrations version of putback_lru_pages
> >   */
> >  static noinline_for_stack void
> >  putback_lru_pages(struct zone *zone, struct scan_control *sc,
> >  				unsigned long nr_anon, unsigned long nr_file,
> >  				struct list_head *page_list)
> >  {
> 

