linux-kernel.vger.kernel.org archive mirror
From: Paul Menzel <pmenzel@molgen.mpg.de>
To: Michal Hocko <mhocko@kernel.org>
Cc: Donald Buczek <buczek@molgen.mpg.de>,
	dvteam@molgen.mpg.de, linux-xfs@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Josh Triplett <josh@joshtriplett.org>,
	"Christopher S. Aker" <caker@theshore.net>
Subject: Re: INFO: rcu_sched detected stalls on CPUs/tasks with `kswapd` and `mem_cgroup_shrink_node`
Date: Sun, 27 Nov 2016 10:37:24 +0100	[thread overview]
Message-ID: <1480239444.3654.5.camel@molgen.mpg.de> (raw)
In-Reply-To: <918a163c-52af-3c5e-85db-9829cce9ae2b@molgen.mpg.de>

On Thursday, 2016-11-24 at 19:50 +0100, Donald Buczek wrote:
> On 24.11.2016 11:15, Michal Hocko wrote:
> 
> > On Mon 21-11-16 16:35:53, Donald Buczek wrote:
> > [...]

> >> Let me add some information from the reporting site:
> >>
> >> * We've tried the patch from Paul E. McKenney (the one posted Wed, 16 Nov
> >> 2016) and it doesn't shut up the rcu stall warnings.
> >>
> >> * Log file from a boot with the patch applied ( grep kernel
> >> /var/log/messages ) is here :

[…]

> >> * This system is a backup server and walks over thousands of files sometimes
> >> with multiple parallel rsync processes.
> >>
> >> * No rcu_* warnings on that machine with 4.7.2, but with 4.8.4, 4.8.6,
> >> 4.8.8, and now 4.9.0-rc5 + Paul's patch.
> > I assume you haven't tried the Linus 4.8 kernel without any further
> > stable patches? Just to be sure we are not talking about some later
> > regression which found its way to the stable tree.

We tried, and the problem also shows up with the plain 4.8 kernel.

```
$ dmesg
[…]
[77554.135657] INFO: rcu_sched detected stalls on CPUs/tasks:
[77554.135662]  1-...: (222 ticks this GP) idle=7dd/140000000000000/0 softirq=30962751/30962968 fqs=12961
[77554.135663]  (detected by 10, t=60002 jiffies, g=7958132, c=7958131, q=90237)
[77554.135667] Task dump for CPU 1:
[77554.135669] kswapd1         R  running task        0    86      2 0x00000008
[77554.135672]  ffff88080ad87c58 ffff88080ad87c58 ffff88080ad87cf8 ffff88100c1e5200
[77554.135674]  0000000000000003 0000000000000000 ffff88080ad87e60 ffff88080ad87d90
[77554.135675]  ffffffff811345f5 ffff88080ad87da0 ffff88080ad87db0 ffff88100c1e5200
[77554.135677] Call Trace:
[77554.135684]  [<ffffffff811345f5>] ? shrink_node_memcg+0x605/0x870
[77554.135686]  [<ffffffff8113491f>] ? shrink_node+0xbf/0x1c0
[77554.135687]  [<ffffffff81135642>] ? kswapd+0x342/0x6b0
[77554.135689]  [<ffffffff81135300>] ? mem_cgroup_shrink_node+0x150/0x150
[77554.135692]  [<ffffffff81075be4>] ? kthread+0xc4/0xe0
[77554.135695]  [<ffffffff81b2b34f>] ? ret_from_fork+0x1f/0x40
[77554.135696]  [<ffffffff81075b20>] ? kthread_worker_fn+0x160/0x160
[77734.252362] INFO: rcu_sched detected stalls on CPUs/tasks:
[77734.252367]  1-...: (897 ticks this GP) idle=7dd/140000000000000/0 softirq=30962751/30963197 fqs=50466
[77734.252368]  (detected by 0, t=240122 jiffies, g=7958132, c=7958131, q=456322)
[77734.252372] Task dump for CPU 1:
[77734.252373] kswapd1         R  running task        0    86      2 0x00000008
[77734.252376]  ffff88080ad87c58 ffff88080ad87c58 ffff88080ad87cf8 ffff88100c1e5200
[77734.252378]  0000000000000003 0000000000000000 ffff88080ad87e60 ffff88080ad87d90
[77734.252380]  ffffffff811345f5 ffff88080ad87da0 ffff88080ad87db0 ffff88100c1e5200
[77734.252382] Call Trace:
[77734.252388]  [<ffffffff811345f5>] ? shrink_node_memcg+0x605/0x870
[77734.252390]  [<ffffffff8113491f>] ? shrink_node+0xbf/0x1c0
[77734.252391]  [<ffffffff81135642>] ? kswapd+0x342/0x6b0
[77734.252393]  [<ffffffff81135300>] ? mem_cgroup_shrink_node+0x150/0x150
[77734.252396]  [<ffffffff81075be4>] ? kthread+0xc4/0xe0
[77734.252399]  [<ffffffff81b2b34f>] ? ret_from_fork+0x1f/0x40
[77734.252401]  [<ffffffff81075b20>] ? kthread_worker_fn+0x160/0x160
[…]
```
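As an aside, the raw return addresses in a trace like the one above can usually be mapped back to source lines with addr2line against a vmlinux built with debug info. A sketch (the vmlinux path is only an example, not our actual build location):

```shell
# Resolve a return address from the stall backtrace to a source line.
# Requires the vmlinux of the exact running kernel, built with
# CONFIG_DEBUG_INFO=y; adjust the path to your build tree.
# Note: if CONFIG_RANDOMIZE_BASE (KASLR) is enabled, the randomization
# offset must be subtracted from the address first.
addr2line -f -e /usr/src/linux-4.8/vmlinux ffffffff811345f5
```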

> >> * When the backups are actually happening there might be relevant memory
> >> pressure from inode cache and the rsync processes. We saw the oom-killer
> >> kick in on another machine with same hardware and similar (a bit higher)
> >> workload. This other machine also shows a lot of rcu stall warnings since
> >> 4.8.4.
> >>
> >> * We see "rcu_sched detected stalls" also on some other machines since we
> >> switched to 4.8 but not as frequently as on the two backup servers. Usually
> >> there's "shrink_node" and "kswapd" on the top of the stack. Often
> >> "xfs_reclaim_inodes" variants on top of that.
> >
> > I would be interested to see some reclaim tracepoints enabled. Could you
> > try that out? At least mm_shrink_slab_{start,end} and
> > mm_vmscan_lru_shrink_inactive. This should tell us more about how the
> > reclaim behaved.
> 
> We'll try that tomorrow!

Unfortunately, when we looked at `trace` today, the corresponding events
had already been overwritten in the ring buffer. We will keep trying.
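For the record, the plan is to enable the requested tracepoints via tracefs and to enlarge the per-CPU ring buffer so the events survive until we can collect them. A sketch, assuming tracefs is mounted at the usual /sys/kernel/debug/tracing and run as root; the buffer size and output path are arbitrary choices:

```shell
cd /sys/kernel/debug/tracing

# Enlarge the per-CPU ring buffer (value in KiB) so events are not
# overwritten before we get a chance to read them.
echo 65536 > buffer_size_kb

# Enable the reclaim tracepoints Michal asked for.
echo 1 > events/vmscan/mm_shrink_slab_start/enable
echo 1 > events/vmscan/mm_shrink_slab_end/enable
echo 1 > events/vmscan/mm_vmscan_lru_shrink_inactive/enable

# Stream the events to a file; trace_pipe consumes entries as they
# arrive, so nothing is lost to buffer wraparound while it runs.
cat trace_pipe > /tmp/vmscan-trace.log &
```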


Kind regards,

Paul Menzel

