From: Dave Chinner <david@fromorbit.com>
To: John Berthels <john@humyo.com>
Cc: linux-kernel@vger.kernel.org, Nick Gregory <nick@humyo.com>,
	Rob Sanderson <rob@humyo.com>,
	xfs@oss.sgi.com
Subject: Re: PROBLEM + POSS FIX: kernel stack overflow, xfs, many disks, heavy write load, 8k stack, x86-64
Date: Thu, 8 Apr 2010 00:05:23 +1000	[thread overview]
Message-ID: <20100407140523.GJ11036@dastard> (raw)
In-Reply-To: <4BBC6719.7080304@humyo.com>

On Wed, Apr 07, 2010 at 12:06:01PM +0100, John Berthels wrote:
> Hi folks,
> 
> [I'm afraid that I'm not subscribed to the list, please cc: me on
> any reply].
> 
> Problem: kernel.org 2.6.33.2 x86_64 kernel locks up under
> write-heavy I/O load. It is "fixed" by changing THREAD_ORDER to 2.
> 
> Is this an OK long-term solution/should this be needed? As far as I
> can see from searching, there is an expectation that xfs would
> generally work with 8k stacks (THREAD_ORDER 1). We don't have xfs
> stacked over LVM or anything else.
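
[For context: THREAD_ORDER sets the kernel stack size as a power-of-two
number of pages. A sketch of the relationship, assuming the x86_64
defaults from this era (PAGE_SIZE of 4096 and the definition of
THREAD_SIZE in arch/x86/include/asm/page_64_types.h):

```shell
# THREAD_SIZE = PAGE_SIZE << THREAD_ORDER
PAGE_SIZE=4096

THREAD_ORDER=1    # default: order-1 allocation, two contiguous pages
echo "THREAD_ORDER=1 -> $((PAGE_SIZE << THREAD_ORDER))-byte stacks"

THREAD_ORDER=2    # the workaround described above: four contiguous pages
echo "THREAD_ORDER=2 -> $((PAGE_SIZE << THREAD_ORDER))-byte stacks"
```

Note that raising THREAD_ORDER trades stack headroom for order-2
allocations, which are harder to satisfy under memory fragmentation.]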

I'm not seeing stacks deeper than about 5.6k on XFS under heavy write
loads. That's nowhere near blowing an 8k stack, so there must be
something special about what you are doing. Can you post the trace
for the deepest stack recorded so far -
/sys/kernel/debug/tracing/stack_trace should contain it.
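
[A sketch of collecting that report with the ftrace stack tracer; this
assumes a kernel built with CONFIG_STACK_TRACER=y, root access, and
debugfs mounted at /sys/kernel/debug:

```shell
# Enable the stack tracer, which records the deepest kernel stack seen.
echo 1 > /proc/sys/kernel/stack_tracer_enabled

# ...reproduce the write-heavy workload, then inspect the worst case:
cat /sys/kernel/debug/tracing/stack_max_size   # deepest usage, in bytes
cat /sys/kernel/debug/tracing/stack_trace      # per-function breakdown
```

The stack tracer adds overhead to every function entry, so it is
normally left disabled outside of debugging sessions.]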

> Background: We have a cluster of systems with roughly the following
> specs (2GB RAM, 24 (twenty-four) 1TB+ disks, Intel Core2 Duo @
> 2.2GHz).
> 
> Following the addition of three new servers to the cluster, we
> started seeing a high incidence of intermittent lockups (up to
> several times per day for some servers) across both the old and new
> servers. Prior to that, we saw this problem only rarely (perhaps
> once per 3 months).

What is generating the write load?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 43+ messages
2010-04-07 11:06 PROBLEM + POSS FIX: kernel stack overflow, xfs, many disks, heavy write load, 8k stack, x86-64 John Berthels
2010-04-07 14:05 ` Dave Chinner [this message]
2010-04-07 15:57   ` John Berthels
2010-04-07 17:43     ` Eric Sandeen
2010-04-07 23:43     ` Dave Chinner
2010-04-08  3:03       ` Dave Chinner
2010-04-08 12:16         ` John Berthels
2010-04-08 14:47           ` John Berthels
2010-04-08 16:18             ` John Berthels
2010-04-08 23:38             ` Dave Chinner
2010-04-09 11:38               ` Chris Mason
2010-04-09 18:05                 ` Eric Sandeen
2010-04-09 18:11                   ` Chris Mason
2010-04-12  1:01                     ` Dave Chinner
2010-04-13  9:51                 ` John Berthels
2010-04-16 13:41                 ` John Berthels
2010-04-09 13:43               ` John Berthels
