From: Mel Gorman <mgorman@suse.de>
To: James Bottomley <James.Bottomley@suse.de>
Cc: Jan Kara <jack@suse.cz>,
	colin.king@canonical.com, Chris Mason <chris.mason@oracle.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	linux-ext4 <linux-ext4@vger.kernel.org>,
	mgorman@novell.com
Subject: Re: [BUG] fatal hang untarring 90GB file, possibly writeback related.
Date: Thu, 28 Apr 2011 18:18:26 +0100	[thread overview]
Message-ID: <20110428171826.GZ4658@suse.de> (raw)
In-Reply-To: <1304009778.2598.10.camel@mulgrave.site>

On Thu, Apr 28, 2011 at 11:56:17AM -0500, James Bottomley wrote:
> On Thu, 2011-04-28 at 11:50 -0500, James Bottomley wrote:
> > This is the output of perf record -g -a -f sleep 5
> > 
> > (hopefully the list won't choke)
> 
> Um, this one actually shows kswapd
> 
> James
> 
> ---
> 
> # Events: 6K cycles
> #
> # Overhead      Command        Shared Object                                   Symbol
> # ........  ...........  ...................  .......................................
> #
>     20.41%      kswapd0  [kernel.kallsyms]    [k] shrink_slab
>                 |
>                 --- shrink_slab
>                    |          
>                    |--99.91%-- kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                     --0.09%-- [...]
> 

Ok. I can't see how the patch "mm: vmscan: reclaim order-0 and use
compaction instead of lumpy reclaim" is related unless we are seeing
two problems that happen to manifest in a similar manner.

However, there were a number of changes made to dcache in particular
for 2.6.38. Specifically, things like dentry_kill use trylocks and are
happy to loop around if they fail to acquire anything. See this for
example:

static void try_prune_one_dentry(struct dentry *dentry)
        __releases(dentry->d_lock)
{
        struct dentry *parent;

        parent = dentry_kill(dentry, 0);
        /*
         * If dentry_kill returns NULL, we have nothing more to do.
         * if it returns the same dentry, trylocks failed. In either
         * case, just loop again.
         */
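
Roughly what the dentry_kill side of that looks like (a paraphrased
sketch of the 2.6.38 fs/dcache.c logic, simplified rather than the
verbatim source): if either the inode lock or the parent's d_lock
trylock fails, it backs out and hands the same dentry straight back so
the caller loops:

static struct dentry *dentry_kill(struct dentry *dentry, int ref)
        __releases(dentry->d_lock)
{
        struct inode *inode = dentry->d_inode;
        struct dentry *parent;

        /* Cannot get the inode lock? Back out and make the caller retry. */
        if (inode && !spin_trylock(&inode->i_lock)) {
relock:
                spin_unlock(&dentry->d_lock);
                cpu_relax();
                return dentry;  /* same dentry back: trylocks failed */
        }

        parent = IS_ROOT(dentry) ? NULL : dentry->d_parent;
        /* Cannot get the parent's lock either? Same story. */
        if (parent && !spin_trylock(&parent->d_lock)) {
                if (inode)
                        spin_unlock(&inode->i_lock);
                goto relock;
        }

        /* ... only past this point is the dentry unhooked and freed ... */
}

Under heavy lock contention every pass through this can fail, so the
prune loop spins without freeing a single dentry.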


If this, in combination with many inodes being locked for whatever
reason (writeback locking them maybe?), is causing the shrinker to
return after making zero progress, it could in turn cause kswapd to
loop for longish periods of time in shrink_slab here:

                while (total_scan >= SHRINK_BATCH) {
                        long this_scan = SHRINK_BATCH;
                        int shrink_ret;
                        int nr_before;

                        nr_before = (*shrinker->shrink)(shrinker, 0, gfp_mask);
                        shrink_ret = (*shrinker->shrink)(shrinker, this_scan,
                                                                gfp_mask);
                        if (shrink_ret == -1)
                                break;
                        if (shrink_ret < nr_before)
                                ret += nr_before - shrink_ret;
                        count_vm_events(SLABS_SCANNED, this_scan);
                        total_scan -= this_scan;

                        cond_resched();
                }

That would explain this trace.
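
To illustrate the failure mode concretely (a hypothetical shrinker, not
real kernel code; object_count, next_candidate, object_trylock and
unlock_and_free_object are made-up helpers): if every object is
trylock-contended, the callback consumes its whole batch, frees
nothing, and reports the cache size unchanged, so shrink_slab accounts
zero progress and kswapd comes straight back for another pass:

static int contended_cache_shrink(struct shrinker *shrink, int nr_to_scan,
                                  gfp_t gfp_mask)
{
        if (!nr_to_scan)
                return object_count();  /* big count: looks like lots of work */

        while (nr_to_scan--) {
                struct object *obj = next_candidate();

                /* Contended object: counts as scanned, frees nothing */
                if (!object_trylock(obj))
                        continue;
                unlock_and_free_object(obj);
        }

        /* If every trylock failed, this equals nr_before above */
        return object_count();
}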

>      9.98%      kswapd0  [kernel.kallsyms]    [k] shrink_zone
>                 |
>                 --- shrink_zone
>                    |          
>                    |--99.46%-- kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                    |          
>                     --0.54%-- kthread
>                               kernel_thread_helper
> 
>      7.70%      kswapd0  [kernel.kallsyms]    [k] kswapd
>                 |
>                 --- kswapd
>                     kthread
>                     kernel_thread_helper
> 
>      5.40%      kswapd0  [kernel.kallsyms]    [k] zone_watermark_ok_safe
>                 |
>                 --- zone_watermark_ok_safe
>                    |          
>                    |--72.66%-- kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                    |          
>                    |--20.88%-- sleeping_prematurely.part.12
>                    |          kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                    |          
>                     --6.46%-- kthread
>                               kernel_thread_helper
> 

We are also spending an astonishing amount of time in
sleeping_prematurely, which leads me to believe we are failing to
balance the zones and are continually under the min watermark for one
of the zones. We are never going to sleep because of this check:

                if (total_scanned && (priority < DEF_PRIORITY - 2)) {
                        if (has_under_min_watermark_zone)
                                count_vm_event(KSWAPD_SKIP_CONGESTION_WAIT);
                        else
                                congestion_wait(BLK_RW_ASYNC, HZ/10);
                }

However, I think this is a secondary effect to the failure of shrinkers
to do their work. If slabs were being shrunk, one would expect us to
be getting over the min watermark.
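
As a sketch of why a single stuck zone is enough to keep kswapd awake
(a simplified illustration, not the verbatim 2.6.38 code; the real
check lives in sleeping_prematurely, and node_is_balanced is an
illustrative name): the node only counts as balanced when every zone
passes its watermark test, so one zone pinned under its min watermark
means kswapd never rests:

/* Simplified sketch: is every zone in the node over its watermark? */
static bool node_is_balanced(pg_data_t *pgdat, int order, int classzone_idx)
{
        int i;

        for (i = 0; i <= classzone_idx; i++) {
                struct zone *zone = pgdat->node_zones + i;

                if (!populated_zone(zone))
                        continue;
                /* One failing zone and kswapd keeps reclaiming */
                if (!zone_watermark_ok_safe(zone, order,
                                high_wmark_pages(zone), classzone_idx, 0))
                        return false;
        }
        return true;
}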

>      4.25%      kswapd0  [kernel.kallsyms]    [k] do_raw_spin_lock
>                 |
>                 --- do_raw_spin_lock
>                    |          
>                    |--77.49%-- _raw_spin_lock
>                    |          |          
>                    |          |--51.85%-- mb_cache_shrink_fn
>                    |          |          shrink_slab
>                    |          |          kswapd
>                    |          |          kthread
>                    |          |          kernel_thread_helper
>                    |          |          
>                    |           --48.15%-- mem_cgroup_soft_limit_reclaim
>                    |                     kswapd
>                    |                     kthread
>                    |                     kernel_thread_helper
>                    |          

Way hey, cgroups are also in the mix. How jolly.

Is systemd a common element of the machines hitting this bug by any
chance?

The remaining traces seem to be follow-on damage from three issues:
"shrinkers are bust in some manner", which causes "we are not getting
over the min watermark", with "we are spending lots of time doing
something unspecified but unhelpful in cgroups" as a side-show.

> <SNIP>

-- 
Mel Gorman
SUSE Labs
