linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Mike Snitzer <snitzer@redhat.com>
To: Damien Le Moal <Damien.LeMoal@wdc.com>
Cc: Ignat Korchagin <ignat@cloudflare.com>,
	"kernel-team@cloudflare.com" <kernel-team@cloudflare.com>,
	"dm-crypt@saout.de" <dm-crypt@saout.de>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>,
	Mikulas Patocka <mpatocka@redhat.com>,
	"agk@redhat.com" <agk@redhat.com>
Subject: Re: [RFC PATCH 0/1] dm-crypt excessive overhead
Date: Wed, 24 Jun 2020 01:22:09 -0400	[thread overview]
Message-ID: <20200624052209.GB23205@redhat.com> (raw)
In-Reply-To: <CY4PR04MB3751EB316BFD5600AFAA6796E7950@CY4PR04MB3751.namprd04.prod.outlook.com>

On Wed, Jun 24 2020 at 12:54am -0400,
Damien Le Moal <Damien.LeMoal@wdc.com> wrote:

> On 2020/06/24 0:23, Mike Snitzer wrote:
> > On Tue, Jun 23 2020 at 11:07am -0400,
> > Ignat Korchagin <ignat@cloudflare.com> wrote:
> > 
> >> Do you think it may be better to break it in two flags: one for read
> >> path and one for write? So, depending on the needs and workflow these
> >> could be enabled independently?
> > 
> > If there is a need to split, then sure.  But I think Damien had a hard
> > requirement that writes had to be inlined but that reads didn't _need_
> > to be for his dm-zoned usecase.  Damien may not yet have assessed the
> > performance implications, of not have reads inlined, as much as you
> > have.
> 
> We did do performance testing :)
> The results are mixed and performance differences between inline vs workqueues
> depend on the workload (IO size, IO queue depth and number of drives being used
> mostly). In many cases, inlining everything does really improve performance as
> Ignat reported.
> 
> In our testing, we used hard drives and so focused mostly on throughput rather
> than command latency. The added workqueue context switch overhead and crypto
> work latency compared to typical HDD IO times is small, and significant only if
> the backend storage as short IO times.
> 
> In the case of HDDs, especially for large IO sizes, inlining crypto work does
> not shine as it prevents an efficient use of CPU resources. This is especially
> true with reads on a large system with many drives connected to a single HBA:
> the softirq context decryption work does not lend itself well to using other
> CPUs that did not receive the HBA IRQ signaling command completions. The test
> results clearly show much higher throughputs using dm-crypt as is.
> 
> On the other hand, inlining crypto work significantly improves workloads of
> small random IOs, even for a large number of disks: removing the overhead of
> context switches allows faster completions, allowing sending more requests to
> the drives more quickly, keeping them busy.
> 
> For SMR, the inlining of write requests is *mandatory* to preserve the issuer
> write sequence, but encryption work being done in the issuer context (writes to
> SMR drives can only be O_DIRECT writes), efficient CPU resource usage can be
> achieved by simply using multiple writer thread/processes, working on different
> zones of different disks. This is a very reasonable model for SMR as writes into
> a single zone have to be done under mutual exclusion to ensure sequentiality.
> 
> For reads, SMR drives are essentially exactly the same as regular disks, so
> as-is or inline are both OK. Based on our performance results, allowing the user
> to have the choice of inlining or not reads based on the target workload would
> be great.
> 
> Of note is that zone append writes (emulated in SCSI, native with NVMe) are not
> subject to the sequential write constraint, so they can also be executed either
> inline or asynchronously.
> 
> > So let's see how Damien's work goes and if he trully doesn't need/want
> > reads to be inlined then 2 flags can be created.
> 
> For SMR, I do not need inline reads, but I do want the user to have the
> possibility of using this setup as that can provide better performance for some
> workloads. I think that splitting the inline flag in 2 is exactly what we want:
> 
> 1) For SMR, the write-inline flag can be automatically turned on when the target
> device is created if the backend device used is a host-managed zoned drive (scsi
> or NVMe ZNS). For reads, it would be the user choice, based on the target workload.
> 2) For regular block devices, write-inline only, read-inline only or both would
> be the user choice, to optimize for their target workload.
> 
> With the split into 2 flags, my SMR support patch becomes very simple.

OK, thanks for all the context.  Was a fun read ;)

SO let's run with splitting into 2 flags.  Ignat would you be up to
tweaking your patch to provide that and post a v2?

An added bonus would be to consolidate your 0/1 and 1/1 patch headers,
and add in the additional answers you provided in this thread to help
others understand the patch (mainly some more detail about why tasklet
is used).

Thanks,
Mike


  reply	other threads:[~2020-06-24  5:22 UTC|newest]

Thread overview: 31+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2020-06-19 16:41 [RFC PATCH 0/1] dm-crypt excessive overhead Ignat Korchagin
2020-06-19 16:41 ` [RFC PATCH 1/1] Add DM_CRYPT_FORCE_INLINE flag to dm-crypt target Ignat Korchagin
2020-06-24  5:04   ` [dm-crypt] " Eric Biggers
2020-06-24  5:21     ` [dm-devel] " Damien Le Moal
2020-06-24  5:27       ` Eric Biggers
2020-06-24  6:46         ` Damien Le Moal
2020-06-24  7:24         ` Damien Le Moal
2020-06-24  7:49     ` Damien Le Moal
2020-06-24  8:24     ` Ignat Korchagin
2020-06-24 16:24       ` Eric Biggers
2020-06-24 17:00         ` Ignat Korchagin
2020-06-24  5:12   ` [dm-devel] " Damien Le Moal
2020-06-19 16:55 ` [RFC PATCH 0/1] dm-crypt excessive overhead Mike Snitzer
2020-06-19 18:39   ` Mikulas Patocka
2020-06-19 19:44     ` Ignat Korchagin
2020-06-20  1:23     ` Herbert Xu
2020-06-20 19:36       ` Mikulas Patocka
2020-06-20 21:02         ` Ignat Korchagin
2020-06-23 15:33       ` Mike Snitzer
2020-06-23 16:24         ` Ignat Korchagin
2020-06-24  0:23           ` Herbert Xu
2020-06-22  0:45   ` [dm-devel] " Damien Le Moal
2020-06-22  7:55     ` Ignat Korchagin
2020-06-22  8:08       ` Damien Le Moal
2020-06-23 15:01     ` Mike Snitzer
2020-06-23 15:07       ` Ignat Korchagin
2020-06-23 15:22         ` Mike Snitzer
2020-06-24  4:54           ` [dm-devel] " Damien Le Moal
2020-06-24  5:22             ` Mike Snitzer [this message]
2020-06-24  8:02               ` Ignat Korchagin
2020-06-24  4:28       ` Damien Le Moal

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20200624052209.GB23205@redhat.com \
    --to=snitzer@redhat.com \
    --cc=Damien.LeMoal@wdc.com \
    --cc=agk@redhat.com \
    --cc=dm-crypt@saout.de \
    --cc=dm-devel@redhat.com \
    --cc=ignat@cloudflare.com \
    --cc=kernel-team@cloudflare.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mpatocka@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).