From: Ming Lei <ming.lei@redhat.com>
To: Keith Busch <kbusch@kernel.org>
Cc: bigeasy@linutronix.de, tglx@linutronix.de,
	Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, hch@lst.de
Subject: Re: [PATCHv3 2/4] nvme/pci: Complete commands from primary handler
Date: Fri, 13 Dec 2019 08:52:35 +0800	[thread overview]
Message-ID: <20191213005235.GA30695@ming.t460p> (raw)
In-Reply-To: <20191212233039.GA69218@C02WT3WMHTD6>

On Thu, Dec 12, 2019 at 04:30:39PM -0700, Keith Busch wrote:
> On Fri, Dec 13, 2019 at 06:55:52AM +0800, Ming Lei wrote:
> > On Thu, Dec 12, 2019 at 10:02:40AM +0900, Keith Busch wrote:
> > > On Wed, Dec 11, 2019 at 04:40:47PM -0800, Sagi Grimberg wrote:
> > > > > Perhaps we can cycle the effective_affinity through the smp_affinity?
> > > > 
> > > > Not sure I follow your thoughts.
> > > 
> > > The only way the nvme driver's interrupt handler can saturate a
> > > cpu is if the smp_affinity has multiple online cpus set.
> > 
> > As Sagi mentioned, there can be 24 drives, each with a 1:1 queue:cpu
> > mapping, so each CPU may have to handle interrupts from 24 drives/queues,
> > and then the CPU may not be able to keep up with those 24 hardware queues.
> 
> If it's 1:1 queue:cpu, then it doesn't matter how many devices you
> have. Processes can't submit commands on that cpu while that cpu
> is servicing device interrupts, but the device can't send interrupts
> if processes can't submit commands.
> 
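
For reference, the two masks in question can both be read from procfs:
smp_affinity(_list) is the set of CPUs the IRQ is allowed to target, while
effective_affinity(_list) is the single CPU that actually receives it on x86.
A minimal userspace sketch (the IRQ number is only a placeholder, pick a real
vector from /proc/interrupts):

/*
 * Print the two affinity masks being discussed for one IRQ.
 * IRQ_NR is a placeholder, not taken from the patch set.
 */
#include <stdio.h>

#define IRQ_NR 42	/* placeholder: use a real nvme vector number */

static void print_mask(const char *name)
{
	char path[128], buf[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/%s", IRQ_NR, name);
	f = fopen(path, "r");
	if (!f || !fgets(buf, sizeof(buf), f))
		printf("%s: <unreadable>\n", name);
	else
		printf("%s: %s", name, buf);	/* buf keeps its newline */
	if (f)
		fclose(f);
}

int main(void)
{
	/* all CPUs the IRQ may target */
	print_mask("smp_affinity_list");
	/* the single CPU currently receiving it (on x86) */
	print_mask("effective_affinity_list");
	return 0;
}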

OK, but I'm still not sure it is a good idea to cycle the effective_affinity
through the smp_affinity (a rough sketch of what that could mean is below):

1) the top half (primary interrupt handler) might still saturate the CPU

2) migrating a task from one CPU to another isn't cheap

3) in theory we only need to do that when the interrupt's target CPU is
saturated, but the thread's affinity has to be changed beforehand, so this
approach may hurt performance when the CPU is never saturated, which I
believe is the usual case.
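
To make that concrete: something like the hypothetical sketch below, which is
not from the patch set (the helper name is mine, and for managed IRQs the core
may well reject or rework such an affinity change):

/*
 * Hypothetical sketch only: move the IRQ's effective target to the next
 * CPU in its allowed (smp_affinity) mask. Error handling is omitted, and
 * deciding when to call this is exactly the open question above.
 */
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/cpumask.h>

static void cycle_irq_effective_cpu(unsigned int irq)
{
	struct irq_data *d = irq_get_irq_data(irq);
	const struct cpumask *allowed, *eff;
	unsigned int cur, next;

	if (!d)
		return;

	allowed = irq_data_get_affinity_mask(d);	/* smp_affinity */
	eff = irq_data_get_effective_affinity_mask(d);	/* current target */
	cur = cpumask_first(eff);

	/* next CPU in the allowed mask, wrapping back to the first one */
	next = cpumask_next(cur, allowed);
	if (next >= nr_cpu_ids)
		next = cpumask_first(allowed);

	if (next != cur)
		irq_set_affinity(irq, cpumask_of(next));
}

Even if something like that worked, the three points above still apply, in
particular that we would only want to rotate when the target CPU is actually
saturated.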


Thanks,
Ming

