From: Paul Menzel <pmenzel@molgen.mpg.de>
To: Andreas Dilger <adilger@dilger.ca>
Cc: Linux FS-devel Mailing List <linux-fsdevel@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Donald Buczek <buczek@molgen.mpg.de>
Subject: Re: File system for scratch space (in HPC cluster)
Date: Fri, 25 Oct 2019 10:35:34 +0200	[thread overview]
Message-ID: <c09eccc9-c067-c121-e9ae-8e3f32d8c80b@molgen.mpg.de> (raw)
In-Reply-To: <2794A217-0A93-44C1-B0A2-A67504A711F0@dilger.ca>


Dear Andreas,


On 2019-10-24 19:51, Andreas Dilger wrote:
> On Oct 24, 2019, at 4:43 AM, Paul Menzel <pmenzel@molgen.mpg.de> wrote:

>> In our cluster, we offer scratch space for temporary files. As
>> these files are temporary, we have no durability requirements –
>> especially not across a system crash or shutdown. So, for example,
>> no `sync` is needed.
>> 
>> Are there file systems catering to this need? I could not find any,
>> but maybe I missed some options for existing file systems.
> 
> How big do you need the scratch filesystem to be?  Is it local to
> the node or does it need to be shared between nodes?

In this case local.

> If it needs to be large and shared between nodes, then Lustre is 
> typically used for this.  If it is local and relatively small, you 
> could consider using tmpfs backed by swap on an NVMe flash device 
> (M.2 or U.2, Optane if you can afford it) inside the node.
> 
> That way you get RAM-like performance for many files, with a larger 
> capacity than RAM when needed (tmpfs can use swap).
> 
> You might consider mounting a new tmpfs filesystem per job (no 
> formatting is needed for tmpfs) and then unmounting it when the job 
> is done, so that the old files are automatically cleaned up.

That is a good idea, but probably not practical for 10 TB. Out of
curiosity, what is the limit for “relatively small” in your
experience?
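
For illustration, a minimal sketch of the per-job tmpfs approach
Andreas describes, as it could run from a batch system's prolog and
epilog scripts; the mount point and the $JOBID variable are
hypothetical placeholders:

  # Prolog: create a private tmpfs for the job. tmpfs needs no
  # formatting; size= caps how much RAM plus swap the job may use.
  mkdir -p /scratch/$JOBID
  mount -t tmpfs -o size=512G tmpfs /scratch/$JOBID

  # ... job runs, writing temporary files under /scratch/$JOBID ...

  # Epilog: unmounting discards all files at once, so no separate
  # cleanup pass is needed.
  umount /scratch/$JOBID
  rmdir /scratch/$JOBID

Swap space on the NVMe device (set up once with mkswap and swapon)
would give the tmpfs overflow room beyond RAM, as suggested above.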


Kind regards,

Paul




Thread overview: 7+ messages
2019-10-24 10:43 File system for scratch space (in HPC cluster) Paul Menzel
2019-10-24 14:55 ` Theodore Y. Ts'o
2019-10-24 15:01   ` Boaz Harrosh
2019-10-24 20:34     ` Theodore Y. Ts'o
2019-10-25  8:33       ` Paul Menzel
2019-10-24 17:51 ` Andreas Dilger
2019-10-25  8:35   ` Paul Menzel [this message]
