From: Patrick Farrell <paf@cray.com>
To: lustre-devel@lists.lustre.org
Subject: [lustre-devel] Compact layouts
Date: Thu, 22 Nov 2018 02:53:48 +0000	[thread overview]
Message-ID: <MWHPR11MB1488F239E19293AED1B35882CBDB0@MWHPR11MB1488.namprd11.prod.outlook.com> (raw)
In-Reply-To: <MWHPR11MB148817404AF6C0B690EEB770CBDB0@MWHPR11MB1488.namprd11.prod.outlook.com>

By the way, an update:

The 1 MiB xattr limit I mentioned is incorrect.  If you raise the arbitrary stripe count limit in the code, the real limit does appear to be 65532 (which was documented as the theoretical maximum when wide striping was implemented).  However, my VM started hitting soft lockups around 30,000 stripes, so I'm not 100% sure.  Nothing is outright broken, but some areas of the code (understandably) do not scale well to 15x the current upper limit, especially not on a single-node VM.
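For scale, here is a rough size model of the layout xattr, assuming a fixed header plus one fixed-size object entry per stripe.  The 32- and 24-byte constants are my assumptions modeled on the plain v1 layout format, not taken from the tree; check lustre_idl.h for the authoritative struct sizes:

```python
# Rough layout-xattr size model: fixed header + one object entry per stripe.
# Header/entry sizes below are assumptions, not the authoritative on-disk ones.
HEADER_BYTES = 32
ENTRY_BYTES = 24

def layout_xattr_bytes(stripe_count: int) -> int:
    """Approximate layout xattr size for a plain striped file."""
    return HEADER_BYTES + ENTRY_BYTES * stripe_count

print(layout_xattr_bytes(2000))    # today's stripe count limit: ~47 KiB
print(layout_xattr_bytes(65532))   # the documented theoretical max: ~1.5 MiB
```

Under those assumptions a 65532-stripe layout is roughly 1.5 MiB of xattr, which gives a feel for why open on very widely striped files gets expensive.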


- Patrick

________________________________
From: lustre-devel <lustre-devel-bounces@lists.lustre.org> on behalf of Patrick Farrell <paf@cray.com>
Sent: Wednesday, November 21, 2018 8:41:37 PM
To: John Bent
Cc: Lustre Development
Subject: Re: [lustre-devel] Compact layouts

It's an issue, certainly, but as an interim solution, a little compression (which could be limited to layouts over a certain size) is a lot better than sending around large globs of data.  (Which in the case of layouts are A) highly compressible [we suspect], and B) must be sent to every client.)

Also, while I'm a huge fan of Lustre, it is not really designed for the sort of hyper-low latency hardware (basically, persistent memory tech) you're describing.

- Patrick
________________________________
From: John Bent <johnbent@gmail.com>
Sent: Wednesday, November 21, 2018 8:30:49 PM
To: Patrick Farrell
Cc: Andreas Dilger; Lustre Development
Subject: Re: [lustre-devel] Compact layouts

As HW latencies shrink to zero, does it not make you nervous to suggest adding compression into the metadata critical path?

On Nov 21, 2018, at 7:27 PM, Patrick Farrell <paf@cray.com> wrote:


Andreas,


Thanks for the informative reply.


You raise an interesting and nasty point about object movement breaking the compact layout.  It's not possible today to move an individual OST object/stripe, though it's certainly something I've heard people ask for.  So as long as all such operations must address whole components, as is required today, it wouldn't be an issue.


If we did add the ability to swap out an individual OST object/stripe (which would be pretty easy to implement: data copy, layout swap, remove the now-unused object), we could record those modifications as additional "traditional" layout info layered atop the compact layout.  That is, the usual layout format with explicit OST IDs, which, where present, supersedes the relevant part of the compact layout.  This implicitly assumes we don't do a ton of this to any particular layout.
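A minimal sketch of that hybrid scheme, with all names and fields invented for illustration: the compact part is an OST bitmap plus a shared object ID, and a small map of traditional-style overrides wins wherever one is present:

```python
# Illustrative sketch of a compact layout with per-stripe "traditional"
# overrides layered on top.  Every name and field here is hypothetical.
class CompactLayout:
    def __init__(self, ost_bitmap, shared_oid):
        # Stripe order is the sorted order of OST indices set in the bitmap.
        self.osts = sorted(ost_bitmap)
        self.shared_oid = shared_oid
        # stripe number -> (ost_idx, oid); supersedes the compact mapping.
        self.overrides = {}

    def stripe_object(self, stripe_no):
        """Resolve a stripe to its (OST index, object id)."""
        if stripe_no in self.overrides:
            return self.overrides[stripe_no]
        return (self.osts[stripe_no], self.shared_oid)

    def migrate_stripe(self, stripe_no, new_ost, new_oid):
        """Record that one stripe's object was swapped out (the data copy,
        layout swap, and removal of the old object happen elsewhere)."""
        self.overrides[stripe_no] = (new_ost, new_oid)

layout = CompactLayout({0, 3, 7}, shared_oid=0x1234)
print(layout.stripe_object(1))   # resolved from the compact part
layout.migrate_stripe(1, 42, 0x9999)
print(layout.stripe_object(1))   # now resolved from the override
```

The layout stays compact as long as the override map stays small, which matches the assumption above that we don't do a ton of this to any one file.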


But as to reasons, it's a few things.


The primary concern is improving the open performance of very widely striped files, which is your second case: reducing the xattr and RPC size.


The same things that motivate this would also motivate raising the stripe count limit, but my understanding from comments in the code is that 2000 is arbitrary, and the actual maximum could be quite a bit higher.  The first limit I'm aware of (I'm not sure if this is right?) is 1 MiB of extended attribute.  That's a little over 5000 stripes.  (Obviously, a 1 MiB layout is probably a non-starter...)


Your suggestion of gzip is very intriguing.  Ideally, I'd pick something available in the kernel and with good performance.  A bit of experimentation is probably in order if we go that route.  Thanks for the pointer there.  I'd probably start by extracting the binary xattr and seeing how it compresses.


- Patrick

________________________________
From: Andreas Dilger <adilger@whamcloud.com>
Sent: Wednesday, November 21, 2018 5:53:03 PM
To: Patrick Farrell
Cc: Lustre Development
Subject: Re: [lustre-devel] Compact layouts

On Nov 16, 2018, at 11:06, Patrick Farrell <paf@cray.com> wrote:
>
> All,
>
> There is an old idea for reducing the data required to describe file striping by using a bitmap to record which OSTs are in use.  As best I can tell, this was most recently described here:
> http://wiki.lustre.org/Layout_Enhancement_Solution_Architecture#Compact_Layouts_2
>
> I'm curious if this has been pursued any further, or if there's a JIRA or other place that might have more info or be tracking the idea.  I poked around and didn't find anything.
>
> In particular, this comment:
> "with enough data that for each OST index set in the bitmap, a corresponding OST object FID may be computed"
> points at the difficult part of implementing this.
>
> So, before I get too far considering this problem - Is there more out there somewhere?  Hoping to avoid duplicating work!

Patrick,
as you mention above, the tricky part is that there would need to be sequential FID sequence allocation across all of the OSTs.  Then, each of the compact files would allocate/reserve the same OID in each of the sequences so that the mapping could be compact.  I don't think that is insurmountable - we already have a good mechanism for allocating FID sequences to different targets, but it would need to be extended so that compact layouts would allocate sequences from a different range of values from regular layouts.
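To make that mapping concrete, here is one hedged way the "same OID in each sequence" scheme could compute an object FID from just the bitmap and a shared OID.  The fixed sequence-numbering rule and the reserved base value are made-up placeholders, not Lustre's actual FID allocator:

```python
# Hypothetical compact-layout FID computation.  A FID is modeled here as a
# (sequence, oid, version) tuple.  Assumption: each OST is pre-assigned one
# well-known "compact" sequence, derived as COMPACT_SEQ_BASE + ost_index.
# Real sequence allocation in Lustre is dynamic; this rule is illustrative.
COMPACT_SEQ_BASE = 0x400000000  # invented reserved range for compact layouts

def compact_object_fid(ost_index: int, shared_oid: int):
    """FID of the stripe object on a given OST for a compact-layout file."""
    return (COMPACT_SEQ_BASE + ost_index, shared_oid, 0)

# Every OST set in the bitmap maps to a computable FID, so no per-stripe
# entry needs to be stored in the layout itself.
bitmap = {0, 5, 9}
fids = {idx: compact_object_fid(idx, shared_oid=77) for idx in sorted(bitmap)}
print(fids)
```

The point is that the MDT only needs the bitmap and the shared OID; everything else is derivable, which is exactly what makes the sequence reservation the hard prerequisite.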

This would also likely need "OST object create on write" so that there aren't large numbers of unused objects on each OST (one for each OID that isn't used on a particular file).

The other issue is that anything like migrating any single object to another OST (e.g. for mirror resync, tiering, etc) would potentially break the compact layout.

I guess the question is what the need for compact layouts is: to handle more than 2000 stripes, to reduce the xattr/RPC size, or to allow more complex PFL layouts to fit into the layout size limit?

In the past we discussed compressing the layout with gzip, which might be quite effective since large parts of it are zero-filled and repetitive.  This would help the xattr/RPC size, and I think even compact layouts would still be expanded in RAM to allow easier processing.
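A quick way to sanity-check that intuition is to compress a synthetic layout-like blob (repetitive fixed-size entries padded with zeroes) with zlib, which is the deflate implementation readily available in-kernel.  The blob format below is invented for the demo and is not the real lov_mds_md wire format:

```python
import struct
import zlib

# Build a synthetic "layout-like" blob: 2000 fixed-size entries, each a
# 4-byte OST index followed by 20 zero-padding bytes (format invented for
# this demo; real layout entries differ).
entries = [struct.pack("<I20x", ost_idx) for ost_idx in range(2000)]
blob = b"".join(entries)

compressed = zlib.compress(blob, level=6)
print(len(blob), len(compressed))
# Zero-filled, repetitive data like this typically compresses very well.
```

If real extracted xattrs behave anything like this, even cheap deflate levels should shrink wide-striped layouts substantially.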

Cheers, Andreas
---
Andreas Dilger
Principal Lustre Architect
Whamcloud

_______________________________________________
lustre-devel mailing list
lustre-devel@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-devel-lustre.org

Thread overview: 8+ messages
2018-11-16 18:06 [lustre-devel] Compact layouts Patrick Farrell
2018-11-21 23:53 ` Andreas Dilger
2018-11-22  2:27   ` Patrick Farrell
2018-11-22  2:30     ` John Bent
2018-11-22  2:41       ` Patrick Farrell
2018-11-22  2:53         ` Patrick Farrell [this message]
2018-11-22  3:29     ` Andreas Dilger
2018-11-22  6:53       ` George Melikov
