From: Thomas Gleixner <tglx@linutronix.de>
To: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Ming Lei <tom.leiming@gmail.com>,
	Sumit Saxena <sumit.saxena@broadcom.com>,
	Ming Lei <ming.lei@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Shivasharan Srikanteshwara
	<shivasharan.srikanteshwara@broadcom.com>,
	linux-block <linux-block@vger.kernel.org>
Subject: RE: Affinity managed interrupts vs non-managed interrupts
Date: Sun, 2 Sep 2018 14:02:30 +0200 (CEST)
Message-ID: <alpine.DEB.2.21.1809021357000.1349@nanos.tec.linutronix.de>
In-Reply-To: <602cee6381b9f435a938bbaf852d07f9@mail.gmail.com>

On Fri, 31 Aug 2018, Kashyap Desai wrote:
> > Ok. I misunderstood the whole thing a bit. So your real issue is that you
> > want to have reply queues which are instantaneous (the per-CPU ones), and
> > then the extra 16 which do batching and are shared over a set of CPUs,
> > right?
> 
> Yes, that is correct. The extra 16 (or whatever the number is) should be
> shared over the set of CPUs of the *local* NUMA node of the PCI device.
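
For illustration only, a minimal sketch of how such a split can be requested
with the existing pci_alloc_irq_vectors_affinity() interface, assuming the
counts above; the function and constant names are made up for the example.
The shared batching queues are reserved as pre_vectors, which are excluded
from the per-CPU affinity spreading, while the remaining vectors are spread
one per CPU by the core:

/* Illustrative sketch only; names and counts are assumptions. */
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/cpumask.h>

#define EXTRA_BATCH_QUEUES	16	/* shared batching reply queues */

static int example_alloc_reply_vectors(struct pci_dev *pdev)
{
	struct irq_affinity desc = {
		/* reserved vectors, excluded from per-CPU spreading */
		.pre_vectors = EXTRA_BATCH_QUEUES,
	};

	/* the remaining vectors are spread one per CPU by the core */
	return pci_alloc_irq_vectors_affinity(pdev,
			EXTRA_BATCH_QUEUES + 1,
			EXTRA_BATCH_QUEUES + num_possible_cpus(),
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &desc);
}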

Why restrict it to the local NUMA node of the device? That doesn't really
make sense if you queue lots of requests from CPUs on a different node.

Why don't you spread these extra interrupts across all nodes and keep the
locality for the request/reply?

That would also allow making them properly managed interrupts: you could
shut down the per-node batching interrupts when all CPUs of that node are
offlined, and you'd avoid the whole affinity hint / irq balancer hackery.
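
A rough sketch of that idea, using a hypothetical helper (no existing API is
implied): each node gets one batching vector whose affinity mask covers
exactly the CPUs of that node, which is also the mask a managed setup would
use to shut the vector down once the node's last CPU goes offline.

/* Rough sketch only; build_per_node_batch_masks() is hypothetical. */
#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

/*
 * Fill one affinity mask per online node, to be applied by the irq core
 * as managed affinities for the per-node batching vectors.
 */
static int build_per_node_batch_masks(struct cpumask *masks, int max_vecs)
{
	int node, nr = 0;

	for_each_online_node(node) {
		if (nr >= max_vecs)
			break;
		cpumask_copy(&masks[nr++], cpumask_of_node(node));
	}
	return nr;	/* number of per-node batching vectors needed */
}

With per-node masks like these, a batching reply still completes on the
node that issued the request, which keeps the request/reply locality
mentioned above.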

Thanks,

	tglx


Thread overview: 49+ messages
     [not found] <eccc46e12890a1d033d9003837012502@mail.gmail.com>
2018-08-29  8:46 ` Affinity managed interrupts vs non-managed interrupts Ming Lei
2018-08-29 10:46   ` Sumit Saxena
2018-08-30 17:15     ` Kashyap Desai
2018-08-31  6:54     ` Ming Lei
2018-08-31  7:50       ` Kashyap Desai
2018-08-31 20:24         ` Thomas Gleixner
2018-08-31 21:49           ` Kashyap Desai
2018-08-31 22:48             ` Thomas Gleixner
2018-08-31 23:37               ` Kashyap Desai
2018-09-02 12:02                 ` Thomas Gleixner [this message]
2018-09-03  5:34                   ` Kashyap Desai
2018-09-03 16:28                     ` Thomas Gleixner
2018-09-04 10:29                       ` Kashyap Desai
2018-09-05  5:46                         ` Dou Liyang
2018-09-05  9:45                           ` Kashyap Desai
2018-09-05 10:38                             ` Thomas Gleixner
2018-09-06 10:14                               ` Dou Liyang
2018-09-06 11:46                                 ` Thomas Gleixner
2018-09-11  9:13                                   ` Christoph Hellwig
2018-09-11  9:38                                     ` Dou Liyang
2018-09-11  9:22               ` Christoph Hellwig
2018-09-03  2:13         ` Ming Lei
2018-09-03  6:10           ` Kashyap Desai
2018-09-03  9:21             ` Ming Lei
2018-09-03  9:50               ` Kashyap Desai
2018-09-11  9:21     ` Christoph Hellwig
2018-09-11  9:54       ` Kashyap Desai
2018-08-28  6:47 Sumit Saxena
