From: Philipp Falk <philipp.falk@thinkparq.com>
To: linux-fsdevel@vger.kernel.org
Subject: Re: Throughput drop and high CPU load on fast NVMe drives
Date: Wed, 23 Jun 2021 13:33:23 +0200
Message-ID: <YNMcA2YsOGO7CaiP@xps13>
In-Reply-To: <YNIfq8dCLEu/Wkc0@casper.infradead.org>

* Matthew Wilcox <willy@infradead.org> [210622 19:37]:
> Yes, this is a known issue.  Here's what's happening:
>
>  - The machine hits its low memory watermarks and starts trying to
>    reclaim.  There's one kswapd per node, so both nodes go to work
>    trying to reclaim memory (each kswapd tries to handle the memory
>    attached to its node)
>  - But all the memory is allocated to the same file, so both kswapd
>    instances try to remove the pages from the same file, and necessarily
>    contend on the same spinlock.
>  - The process trying to stream the file is also trying to acquire this
>    spinlock in order to add its newly-allocated pages to the file.
>

Thank you for the detailed explanation. In this benchmark scenario, every
thread (4 per NVMe drive) uses its own file, so there are reads from 64
files in flight at the same time. The individual files are only 20 GiB in
size, so the kswapd instances must be handling memory allocated to multiple
files at once, right?
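
For context, each of those threads is essentially just a sequential buffered
read of its own file, roughly like the sketch below; path and read size are
placeholders rather than our actual job file:

/* Rough equivalent of one fio reader: stream its file through the
 * page cache with plain buffered reads (no O_DIRECT). */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/nvme0/job0.dat"; /* placeholder */
	char *buf = malloc(1 << 20);            /* 1 MiB per read, arbitrary */
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0 || !buf)
		return 1;
	while ((n = read(fd, buf, 1 << 20)) > 0)
		;                               /* data is discarded, we only measure throughput */
	close(fd);
	free(buf);
	return 0;
}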

But both kswapd instances are then probably still contending for the same
spinlocks on several of those files.

> What you can do is force the page cache to only allocate memory from the
> local node.  That means this workload will only use half the memory in
> the machine, but it's a streaming workload, so that shouldn't matter?
>
> The only problem is, I'm not sure what the user interface is to make
> that happen.  Here's what it looks like inside the kernel:

I repeated the benchmark and bound the fio threads to the NUMA nodes their
respective disks are attached to. I also forced the memory allocation to be
local to those NUMA nodes and confirmed that page cache allocation really
only touches half of the memory when only the threads on one NUMA node are
running. I am not sure whether that is enough to ensure that only one kswapd
instance is actively trying to remove pages, though.
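
For completeness, the effect I was going for per thread is roughly the
libnuma sketch below; the node number, and the choice of local allocation
rather than a hard bind, are placeholders for what we actually configured at
the process level:

/* Pin the calling thread to one NUMA node and let its allocations
 * (including page cache pages filled on its behalf) come from that
 * node.  Build with -lnuma.  Node 0 is a placeholder. */
#include <numa.h>

static int bind_to_node(int node)
{
	if (numa_available() < 0)
		return -1;
	if (numa_run_on_node(node) < 0)    /* CPU affinity: this node only */
		return -1;
	numa_set_localalloc();             /* allocate on the node we run on */
	/* numa_set_membind() would be the hard variant that refuses to
	 * spill to the other node instead of just preferring the local one. */
	return 0;
}

int main(void)
{
	if (bind_to_node(0))
		return 1;
	/* ... streaming reads as before ... */
	return 0;
}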

In both cases (only the threads on one NUMA node running, and NUMA-bound
threads on both nodes running) the throughput drop occurred as soon as half
of the memory, respectively all of it, was exhausted.
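
For anyone trying to reproduce the correlation: the per-node fill level can
be watched via /sys/devices/system/node/nodeN/meminfo. A rough sketch,
assuming a two-node machine:

/* Print MemFree for node 0 and node 1 once per second, to see when
 * each node's memory is exhausted. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long node_memfree_kb(int node)
{
	char path[64], line[256];
	long kb = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/devices/system/node/node%d/meminfo", node);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		char *p = strstr(line, "MemFree:");
		if (p) {
			sscanf(p, "MemFree: %ld kB", &kb);
			break;
		}
	}
	fclose(f);
	return kb;
}

int main(void)
{
	for (;;) {
		printf("node0 MemFree %ld kB, node1 MemFree %ld kB\n",
		       node_memfree_kb(0), node_memfree_kb(1));
		sleep(1);
	}
}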

Does that mean it is not the two kswapd threads contending for the locks,
but rather the process itself and the local kswapd? Is there anything else
we could do to improve the situation?
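
Would it make sense for a streaming reader to drop the ranges it has already
consumed itself, e.g. via posix_fadvise(POSIX_FADV_DONTNEED), so that there
is less for kswapd to reclaim in the first place? A rough sketch of what I
mean (chunk size and path are arbitrary):

/* Streaming read that drops each chunk from the page cache once it
 * has been consumed.  Sketch only. */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (64UL << 20)              /* drop in 64 MiB steps */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/nvme0/job0.dat";
	char *buf = malloc(1 << 20);
	int fd = open(path, O_RDONLY);
	off_t done = 0;
	ssize_t n;

	if (fd < 0 || !buf)
		return 1;
	while ((n = read(fd, buf, 1 << 20)) > 0) {
		done += n;
		if (done % CHUNK == 0)
			/* tell the kernel we are finished with what we read */
			posix_fadvise(fd, done - CHUNK, CHUNK, POSIX_FADV_DONTNEED);
	}
	close(fd);
	free(buf);
	return 0;
}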

- Philipp

Thread overview: 3 messages
2021-06-22 17:15 Throughput drop and high CPU load on fast NVMe drives Philipp Falk
2021-06-22 17:36 ` Matthew Wilcox
2021-06-23 11:33   ` Philipp Falk [this message]
