From: riteshh <riteshh@linux.ibm.com>
To: Jan Kara <jack@suse.cz>
Cc: Wen Yang <wenyang@linux.alibaba.com>,
	adilger@dilger.ca, tytso@mit.edu, linux-ext4@vger.kernel.org,
	linux-kernel@vger.kernel.org, baoyou.xie@alibaba-inc.com
Subject: Re: [PATCH] ext4: add a configurable parameter to prevent endless loop in ext4_mb_discard_group_p
Date: Fri, 9 Apr 2021 16:58:09 +0530	[thread overview]
Message-ID: <20210409112809.n4d6kar62zwzr6af@riteshh-domain> (raw)
In-Reply-To: <20210409101811.GB20833@quack2.suse.cz>

On 21/04/09 12:18PM, Jan Kara wrote:
> On Fri 09-04-21 11:17:33, riteshh wrote:
> > On 21/04/09 02:50AM, Wen Yang wrote:
> > > > On Apr 7, 2021, at 5:16 AM, riteshh <riteshh@linux.ibm.com> wrote:
> > > >>
> > > >> On 21/04/07 03:01PM, Wen Yang wrote:
> > > >>> From: Wen Yang <simon.wy@alibaba-inc.com>
> > > >>>
> > > >>> The kworker has occupied 100% of the CPU for several days:
> > > >>> PID USER  PR  NI VIRT RES SHR S  %CPU  %MEM TIME+  COMMAND
> > > >>> 68086 root 20 0  0    0   0   R  100.0 0.0  9718:18 kworker/u64:11
> > > >>>
> > > >>> And the stack obtained through sysrq is as follows:
> > > >>> [20613144.850426] task: ffff8800b5e08000 task.stack: ffffc9001342c000
> > > >>> [20613144.850438] Call Trace:
> > > >>> [20613144.850439]  [<ffffffffa0244209>] ext4_mb_new_blocks+0x429/0x550 [ext4]
> > > >>> [20613144.850439]  [<ffffffffa02389ae>] ext4_ext_map_blocks+0xb5e/0xf30 [ext4]
> > > >>> [20613144.850441]  [<ffffffffa0204b52>] ext4_map_blocks+0x172/0x620 [ext4]
> > > >>> [20613144.850442]  [<ffffffffa0208675>] ext4_writepages+0x7e5/0xf00 [ext4]
> > > >>> [20613144.850443]  [<ffffffff811c487e>] do_writepages+0x1e/0x30
> > > >>> [20613144.850444]  [<ffffffff81280265>] __writeback_single_inode+0x45/0x320
> > > >>> [20613144.850444]  [<ffffffff81280ab2>] writeback_sb_inodes+0x272/0x600
> > > >>> [20613144.850445]  [<ffffffff81280ed2>] __writeback_inodes_wb+0x92/0xc0
> > > >>> [20613144.850445]  [<ffffffff81281238>] wb_writeback+0x268/0x300
> > > >>> [20613144.850446]  [<ffffffff812819f4>] wb_workfn+0xb4/0x380
> > > >>> [20613144.850447]  [<ffffffff810a5dc9>] process_one_work+0x189/0x420
> > > >>> [20613144.850447]  [<ffffffff810a60ae>] worker_thread+0x4e/0x4b0
> > > >>>
> > > >>> The CPU resources of a cloud server are precious, and the server
> > > >>> cannot simply be restarted after running for a long time, so a
> > > >>> configurable parameter is added to break out of this endless loop.
> > > >>
> > > >> Strange; if there is an endless loop here, then I would definitely
> > > >> check whether there is an accounting problem in pa->pa_count.
> > > >> Otherwise busy=1 should not be set every time. The ext4_mb_show_pa()
> > > >> function may help debug this.
> > > >>
> > > >> If so, that means there always exists either a file preallocation or
> > > >> a group preallocation. Maybe that is possible in some use case;
> > > >> others may know of such a use case, if any.
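
For anyone following along, the loop in question looks roughly like the
sketch below. This is a simplified paraphrase of
ext4_mb_discard_group_preallocations() (4.x-era logic, with locking and
error-handling details trimmed), not the exact mainline source:

static int discard_group_preallocations_sketch(struct super_block *sb,
					       ext4_group_t group, int needed)
{
	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
	struct ext4_prealloc_space *pa, *tmp;
	int busy = 0, free = 0;

repeat:
	ext4_lock_group(sb, group);
	list_for_each_entry_safe(pa, tmp, &grp->bb_prealloc_list,
				 pa_group_list) {
		spin_lock(&pa->pa_lock);
		if (atomic_read(&pa->pa_count)) {
			/* pa is still referenced; skip it, but remember
			 * that we saw a busy one */
			spin_unlock(&pa->pa_lock);
			busy = 1;
			continue;
		}
		/* ... otherwise mark the pa deleted and add its
		 * pa_free blocks to 'free' ... */
		spin_unlock(&pa->pa_lock);
	}
	ext4_unlock_group(sb, group);

	/* If we did not free enough but some pa was busy, retry.
	 * If every pa on bb_prealloc_list stays busy forever, this
	 * never terminates -- the 100% CPU loop reported above. */
	if (free < needed && busy) {
		busy = 0;
		cond_resched();
		goto repeat;
	}
	return free;
}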
> > >
> > > > If this code is broken, then it doesn't make sense to me that we would
> > > > leave it in the "run forever" state after the patch, and require a sysfs
> > > > tunable to be set to have a properly working system?
> > >
> > > > Is there anything particularly strange about the workload/system that
> > > > might cause this?  Filesystem is very full, memory is very low, etc?
> > >
> > > Hi Ritesh and Andreas,
> > >
> > > Thank you for your reply. Since we still have a faulty machine
> > > available, we have analyzed it again and found that it is indeed a very
> > > special case:
> > >
> > >
> > > crash> struct ext4_group_info ffff8813bb5f72d0
> > > struct ext4_group_info {
> > >   bb_state = 0,
> > >   bb_free_root = {
> > >     rb_node = 0x0
> > >   },
> > >   bb_first_free = 1681,
> > >   bb_free = 0,
> >
> > Not related to this issue, but the values of the above two fields don't
> > look consistent.
> >
> > >   bb_fragments = 0,
> > >   bb_largest_free_order = -1,
> > >   bb_prealloc_list = {
> > >     next = 0xffff880268291d78,
> > >     prev = 0xffff880268291d78     ---> *** The list is empty
> > >   },
> >
> > Ok. So when you collected the dump this list was empty.
>
> No, it is not empty. It has a single element. Note that the structure is at
> ffff8813bb5f72d0 so the pointers would have to be like ffff8813bb5f7xxx.

Errr, yes, right. So the list is not empty.
But I guess the other arguments discussed in that mail should still be valid.
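
To spell out Jan's point: the kernel's list_empty() only compares the
head's next pointer with the head itself, so a one-element list is easy
to misread in a crash dump. Paraphrasing include/linux/list.h:

/* paraphrase of list_empty() from include/linux/list.h */
static inline int list_empty(const struct list_head *head)
{
	return READ_ONCE(head->next) == head;
}

/*
 * In the dump above the head is embedded in the ext4_group_info at
 * ffff8813bb5f72d0, while next == prev == 0xffff880268291d78.  next is
 * not the head itself, so the list is not empty; and next == prev
 * means it holds exactly one element, i.e. one ext4_prealloc_space
 * linked via its pa_group_list member.  If that single pa keeps a
 * non-zero pa_count, busy=1 gets set on every pass and the discard
 * loop never makes progress.
 */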

-ritesh


Thread overview: 8+ messages
2021-04-08 18:50 [PATCH] ext4: add a configurable parameter to prevent endless loop in ext4_mb_discard_group_p Wen Yang
2021-04-08 22:11 ` Andreas Dilger
2021-04-09  5:47 ` riteshh
2021-04-09 10:18   ` Jan Kara
2021-04-09 11:28     ` riteshh [this message]
2021-04-10 19:45   ` Wen Yang
2021-04-11  4:25     ` Theodore Ts'o
2021-04-15  7:53       ` Wen Yang
