From: "C. Morgan Hamill" <chamill@wesleyan.edu>
To: xfs@oss.sgi.com
Subject: Re: Question regarding XFS on LVM over hardware RAID.
Date: Thu, 20 Feb 2014 13:31:25 -0500	[thread overview]
Message-ID: <20140220183125.29149.64880@al.wesleyan.edu> (raw)
In-Reply-To: <5303E7AC.50903@hardwarefreak.com>

Quoting Stan Hoeppner (2014-02-18 18:07:24)
> Create each LV starting on a stripe boundary.  There will be some
> unallocated space between LVs.  Use the mkfs.xfs -d size= option to
> create your filesystems inside of each LV such that the filesystem total
> size is evenly divisible by the stripe width.  This results in an
> additional small amount of unallocated space within, and at the end of,
> each LV.
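The quoted suggestion boils down to one piece of arithmetic: round the
filesystem size down to a multiple of the full stripe width before handing
it to mkfs.xfs.  A minimal sketch, assuming a 128 KiB chunk and 8 data
spindles; the LV size and device name here are made up for illustration:

```shell
# Assumed geometry: 128 KiB stripe unit (su), 8 data spindles (sw)
su_k=128
sw=8
stripe_k=$((su_k * sw))               # full stripe width: 1024 KiB

# Round a hypothetical LV size down to a stripe-width multiple
lv_k=10000000
fs_k=$(( lv_k / stripe_k * stripe_k ))
echo "$fs_k"                          # prints 9999360

# Not run here (needs root and a real LV): format only that much of it
# mkfs.xfs -d su=${su_k}k,sw=${sw},size=${fs_k}k /dev/vg0/archive
```

The leftover 640 KiB at the tail of the LV simply stays unallocated, as
described above.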

Of course, this occurred to me just after sending the message... ;)

> It's nice if you can line everything up, but when using RAID6 and one or
> two bays for hot spares, one rarely ends up with 8 or 16 data spindles.
> 
> > If not, I'll tweak things to ensure my stripe width is a power of 2.
> 
> That's not possible with 12 data spindles per RAID, not possible with 42
> drives in 3 chassis.  Not without a bunch of idle drives.

The closest I can come is with 4 RAID 6 arrays of 10 disks each, then
striped over:

8 * 128k = 1024k
1024k * 4 = 4096k

Which leaves me with 5 disks unused.  I might be able to live with that
if it makes things work better.  Sounds like I won't have to.
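Spelled out in shell arithmetic (128 KiB per-disk chunk assumed, as in the
rest of this thread):

```shell
chunk_k=128        # per-disk chunk size, KiB
data_disks=8       # a 10-disk RAID6 leaves 8 data spindles
arrays=4
per_array_k=$((chunk_k * data_disks))   # 1024 KiB per RAID6 stripe
total_k=$((per_array_k * arrays))       # 4096 KiB across the RAID0 layer
echo "$per_array_k $total_k"            # prints "1024 4096"
```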


> I still don't understand why you believe you need LVM in the mix, and
> more than one filesystem.

> Backup software is unaware of mount points.  It uses paths just like
> every other program.  The number of XFS filesystems is irrelevant to
> "minimizing the effects of the archive maintenance jobs".  You cannot
> bog down XFS.  You will bog down the drives no matter how many
> filesystems when using RAID60.

A limitation of the software in question is that placing multiple
archive paths onto a single filesystem is a bit ugly: the software does
not let you specify a maximum size for the archive paths, and so will
think all of them are the size of the filesystem.  This isn't an issue
in isolation, but we need to make use of a data-balancing feature the
software has, which will not work if we place multiple archive paths on
a single filesystem.  It's a stupid issue to have, but it is what it is.

> Here is what you should do:
> 
> Format the RAID60 directly with XFS.  Create 3 or 4 directories for
> CrashPlan to use as its "store points".  If you need to expand in the
> future, as I said previously, simply add another 14 drive RAID6 chassis,
> format it directly with XFS, mount it at an appropriate place in the
> directory tree and give that path to CrashPlan.  Does it have a limit on
> the number of "store points"?

Yes, this is what I *want* to do.  There's a limit to the number of
store points, but it's large, so this would work fine if not for the
multiple-stores-on-one-filesystem issue.  Which is frustrating.

The *only* reason for LVM in the middle is to allow some flexibility of
sizing without dealing with the annoyances of the partition table.
I want to intentionally under-provision to start with because we are
using a small corner of this storage for a separate purpose but do not
know precisely how much yet.  LVM lets me leave, say, 10TB empty, until
I know exactly how big things are going to be.
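As a sketch of that under-provisioning (volume group name, device, and
sizes are all hypothetical, and none of this is run here since it needs
root and real devices):

```shell
# pvcreate /dev/sdb && vgcreate vg_archive /dev/sdb
# Create the archive LV but deliberately leave ~10TB of the VG free:
# lvcreate -n archive -L 90T vg_archive
# Later, once the other project's size is known, grow in place:
# lvextend -L +5T vg_archive/archive
# xfs_growfs /srv/archive        # XFS grows online; it cannot shrink
```

The inability of XFS to shrink is exactly why starting small and growing
into the free extents is the safer direction.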

It's a pile of little annoyances, but so it goes with these kinds of things.

It sounds like the little empty spots method will be fine though.

Thanks, yet again, for all your help.
--
Morgan Hamill

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 27+ messages
2014-01-29 14:26 Question regarding XFS on LVM over hardware RAID C. Morgan Hamill
2014-01-29 15:07 ` Eric Sandeen
2014-01-29 19:11   ` C. Morgan Hamill
2014-01-29 23:55     ` Stan Hoeppner
2014-01-30 14:28       ` C. Morgan Hamill
2014-01-30 20:28         ` Dave Chinner
2014-01-31  5:58           ` Stan Hoeppner
2014-01-31 21:14             ` C. Morgan Hamill
2014-02-01 21:06               ` Stan Hoeppner
2014-02-02 21:21                 ` Dave Chinner
2014-02-03 16:12                   ` C. Morgan Hamill
2014-02-03 21:41                     ` Dave Chinner
2014-02-04  8:00                       ` Stan Hoeppner
2014-02-18 19:44                         ` C. Morgan Hamill
2014-02-18 23:07                           ` Stan Hoeppner
2014-02-20 18:31                             ` C. Morgan Hamill [this message]
2014-02-21  3:33                               ` Stan Hoeppner
2014-02-21  8:57                                 ` Emmanuel Florac
2014-02-22  2:21                                   ` Stan Hoeppner
2014-02-25 17:04                                     ` C. Morgan Hamill
2014-02-25 17:17                                       ` Emmanuel Florac
2014-02-25 20:08                                       ` Stan Hoeppner
2014-02-26 14:19                                         ` C. Morgan Hamill
2014-02-26 17:49                                           ` Stan Hoeppner
2014-02-21 19:17                                 ` C. Morgan Hamill
2014-02-03 16:07                 ` C. Morgan Hamill
2014-01-29 22:40   ` Stan Hoeppner
