kernelnewbies.kernelnewbies.org archive mirror
From: Mulyadi Santosa <mulyadi.santosa@gmail.com>
To: kernelnewbies <Kernelnewbies@kernelnewbies.org>
Subject: Re: Understanding the locking behavior of msync
Date: Sat, 27 Mar 2021 08:57:35 +0700	[thread overview]
Message-ID: <CAGdaadYp-89S++wvBV8eUiCstgcKkFFwMQq0+0ZsDuFjZtrA6A@mail.gmail.com> (raw)
In-Reply-To: <36a97042-528d-cda6-8817-c91ba73072b2@student.hpi.de>


On Fri, Mar 26, 2021, 22:05 Maximilian Böther <maximilian.boether@student.hpi.de> wrote:

> Hello!
>
> I am investigating an application that writes random data in fixed-size
> chunks (e.g. 4k) to random locations in a large buffer file. I have
> several processes (not threads) doing that, each process has its own
> buffer file assigned.
>
> If I use mmap+msync to write and persist data to disk, I see a
> performance spike at 16 processes and a performance drop with more
> processes (32). The CPU has 32 logical cores in total, and we are
> not CPU-bound.
>
> If I use open+write+fsync, I do not see such a spike; instead,
> performance plateaus (and mmap is slower than open/write).
>
> I've read multiple times [1,2] that both mmap and msync can take locks.
> With VTune, I confirmed that we are indeed spinning on locks, spending
> most of our time in the clear_page_erms and xas_load functions.
>
> However, when reading the source code for msync [3], I cannot tell
> whether these locks are global or per-file. The paper [2] states that
> the locks are on per-file radix trees within the kernel; however,
> since I still observe spinlock contention despite having one file per
> process, I suspect that some locks may be global.
>
> Do you have an explanation for why we see such a spike at 16
> processes with mmap, and any insight into the locking behavior of
> msync?
>
> Thank you!
>
> Best,
> Maximilian Böther
>
> [1]
> https://kb.pmem.io/development/100000025-Why-msync-is-less-optimal-for-persistent-memory/
> - I know it's about PMem, but the lock argument is important
>
> [2] Optimizing Memory-mapped I/O for Fast Storage Devices, Papagiannis
> et al., ATC '20
>
> [3] https://elixir.bootlin.com/linux/latest/source/mm/msync.c
>
> _______________________________________________
> Kernelnewbies mailing list
> Kernelnewbies@kernelnewbies.org
> https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>

Is it NUMA?


Thread overview: 2+ messages
2021-03-24 11:56 Understanding the locking behavior of msync Maximilian Böther
2021-03-27  1:57 ` Mulyadi Santosa [this message]
