From: Dave Chinner <david@fromorbit.com>
To: "Ralf Groß" <ralf.gross+xfs@gmail.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: memory requirements for a 400TB fs with reflinks
Date: Tue, 23 Mar 2021 08:50:39 +1100	[thread overview]
Message-ID: <20210322215039.GV63242@dread.disaster.area> (raw)
In-Reply-To: <CANSSxy=d2Tihu8dXUFQmRwYWHNdcGQoSQAkZpePD-8NOV+d5dw@mail.gmail.com>

On Mon, Mar 22, 2021 at 05:50:55PM +0100, Ralf Groß wrote:
> No advice or rule of thumb regarding needed memory for xfs_repair?

People are busy, and you posted on a weekend. Have some patience,
please.

> On Sat, Mar 20, 2021 at 7:01 PM Ralf Groß <ralf.gross+xfs@gmail.com> wrote:
> >
> > Hi,
> >
> > I plan to deploy a couple of Linux (RHEL 8.x) servers as Veeam backup
> > repositories. The base for these might be high-density servers with 58 x
> > 16TB disks in 2x RAID 60, each with its own RAID controller and 28
> > disks. So each RAID 6 has 14 disks, plus 2 global spares.
> >
> > I wonder what memory requirements such a server would have - is there
> > any special requirement for reflinks? I remember that xfs_repair
> > has been a problem in the past, but my experience with this is from 10
> > years ago. Currently I plan to use 192GB of RAM; this would be ideal as
> > it utilizes all 6 memory channels, and 16GB DIMMs are not so expensive.

Filesystem capacity doesn't massively affect repair memory usage
these days.

The amount of metadata and the type of it certainly does, though. I
recently saw a 14TB filesystem require 240GB of RAM to repair
because, as a hardlink based backup farm, it had hundreds of
millions of directories, inodes and hardlinks in it.  Resolving all
those directories and hardlinks took 3 weeks and 240GB of RAM....

I've seen other broken backup server filesystems of similar size
that have had close on 500GB of metadata in them, and repair needs
to index and cross-reference all that metadata. Hence memory demands
can be massive, even in today's terms....

Unfortunately, I haven't seen a broken filesystem containing
extensive production use of reflink at that scale, so I can't really
say what difference that will make to memory usage at this point in
time.

So there's no one answer - the amount of RAM xfs_repair might need
largely depends on what you are storing in the filesystem.
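[One practical way to get a ballpark number for a specific filesystem is
xfs_repair's own dry-run mode: combining -n (no modify) with -m 1 (claim
only 1MB of memory is available) and -vv makes repair print an estimate of
the memory it would actually want, then exit without touching the
filesystem. A sketch - the device path below is hypothetical:]

  # Estimate xfs_repair's memory requirement without modifying anything.
  #   -n   : no-modify dry run
  #   -vv  : verbose enough to report the memory estimate
  #   -m 1 : pretend only 1MB of RAM is usable, forcing the estimate out
  xfs_repair -n -vv -m 1 /dev/mapper/backupvg-repo1

[The reported figure is an approximation of repair's internal index needs
on the filesystem as it is today; a badly damaged filesystem, or one that
has since accumulated far more metadata, can still exceed it.]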

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

  reply	other threads:[~2021-03-22 21:51 UTC|newest]

Thread overview: 6+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-03-20 18:01 memory requirements for a 400TB fs with reflinks Ralf Groß
2021-03-22 16:50 ` Ralf Groß
2021-03-22 21:50   ` Dave Chinner [this message]
2021-03-23  9:39     ` Ralf Groß
2021-03-23 22:02       ` Dave Chinner
2021-03-23  9:31   ` Lucas Stach
