From: Jens Axboe <axboe@suse.de>
To: "Kevin P. Fleming" <kpfleming@backtobasicsmgmt.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Linux-raid maillist <linux-raid@vger.kernel.org>,
	linux-lvm@sistina.com
Subject: Re: Reproducable OOPS with MD RAID-5 on 2.6.0-test11
Date: Tue, 2 Dec 2003 09:27:13 +0100
Message-ID: <20031202082713.GN12211@suse.de>
In-Reply-To: <3FCC0EE0.9010207@backtobasicsmgmt.com>

On Mon, Dec 01 2003, Kevin P. Fleming wrote:
> Jens Axboe wrote:
> 
> >Alright, so no bouncing should be happening. Could you boot with
> >mem=800m (and reproduce) just to rule it out completely?
> 
> Tested with mem=800m, problem still occurs. An additional test was done 

Suspected as much, just wanted to make sure.

> without device-mapper in place, though, and I could not reproduce the 
> problem! I copied > 500MB of stuff to the XFS filesystem created using 
> the entire /dev/md/0 device without a single unusual message. I then 
> unmounted the filesystem and used pvcreate/vgcreate/lvcreate to make a 
> 3G volume on the array, made an XFS filesystem on it, mounted it, and 
> tried copying data over. The oops message came back.
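
For the lvm readers, here is roughly what the two test setups look like;
the device names, volume group/LV names, mount point and data source
below are my guesses, not Kevin's exact commands:

  # Case 1: XFS directly on the MD array -- works
  mkfs.xfs -f /dev/md/0
  mount /dev/md/0 /mnt/test
  cp -a /usr/src /mnt/test            # >500MB copied, no oops
  umount /mnt/test

  # Case 2: XFS on an LVM volume stacked on the same array -- oopses
  pvcreate /dev/md/0
  vgcreate testvg /dev/md/0
  lvcreate -L 3G -n testlv testvg
  mkfs.xfs -f /dev/testvg/testlv
  mount /dev/testvg/testlv /mnt/test
  cp -a /usr/src /mnt/test            # the oops triggers during the copy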

Smells like a bio stacking problem in raid/dm then. I'll take a quick
look and see if anything obvious pops up, otherwise the maintainers of
those areas should take a closer look.
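
To spell out the stacking in the failing case: it is XFS on top of a
device-mapper device (most likely a simple linear mapping for the LV),
on top of the raid5 array, on top of the member disks. Something like
the following shows that layering on the running box, assuming the
device-mapper userspace tools are installed:

  cat /proc/mdstat      # raid5 array state and member disks
  dmsetup table         # how the dm device(s) map onto /dev/md/0
  dmsetup status        # per-target status of the dm devices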

> I'm copying this message to linux-lvm; the original oops message is 
> repeated below for the benefit of those list readers. I've got one more 
> round of testing to do (after the array resyncs itself), which is to try 
> a filesystem other than XFS.

That might be a good idea, although it's not very likely to be an XFS
problem, since the oops happens further down the I/O stack. If it were
an XFS problem, it would trigger just as happily on plain IDE or SCSI.
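
If you do that run, repeating just the LVM case with another filesystem
on the same logical volume should be enough to tell, roughly (using the
same made-up names as in the sketch above):

  mkfs.ext3 /dev/testvg/testlv        # or ext2/reiserfs, anything non-XFS
  mount /dev/testvg/testlv /mnt/test
  cp -a /usr/src /mnt/test            # see whether the oops still triggers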

-- 
Jens Axboe


Thread overview: 20+ messages
2003-12-01 14:06 Reproducable OOPS with MD RAID-5 on 2.6.0-test11 Kevin P. Fleming
2003-12-01 14:11 ` Jens Axboe
2003-12-01 14:15   ` Kevin P. Fleming
2003-12-01 15:51     ` Jens Axboe
2003-12-02  4:02       ` Kevin P. Fleming
2003-12-02  4:15         ` Mike Fedyk
2003-12-02 13:11           ` Kevin P. Fleming
2003-12-02  8:27         ` Jens Axboe [this message]
2003-12-02 10:10           ` Nathan Scott
2003-12-02 13:15             ` Kevin P. Fleming
2003-12-03  3:32             ` Nathan Scott
2003-12-03 17:13               ` Linus Torvalds
2003-12-02 18:23           ` Linus Torvalds
2003-12-04  1:12             ` Simon Kirby
2003-12-04  1:23               ` Linus Torvalds
2003-12-04  4:31                 ` Simon Kirby
2003-12-05  6:55                   ` Theodore Ts'o
2003-12-04 20:53                 ` Herbert Xu
2003-12-04 21:06                   ` Linus Torvalds
2003-12-01 23:06   ` Reproducable OOPS with MD RAID-5 on 2.6.0-test11 - with XFS Neil Brown
