linux-kernel.vger.kernel.org archive mirror
From: Thomas Gleixner <tglx@linutronix.de>
To: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Bjorn Helgaas <helgaas@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org,
	LKML <linux-kernel@vger.kernel.org>,
	linux-pci@vger.kernel.org, Keith Busch <keith.busch@intel.com>
Subject: Re: [PATCH V2 0/4] genirq/affinity: add .calc_sets for improving IRQ allocation & spread
Date: Tue, 12 Feb 2019 14:42:04 +0100 (CET)
Message-ID: <alpine.DEB.2.21.1902121438310.3710@nanos.tec.linutronix.de> (raw)
In-Reply-To: <20190212130439.14501-1-ming.lei@redhat.com>

On Tue, 12 Feb 2019, Ming Lei wrote:

> Hi,
> 
> Currently, pre-calculated set vectors are provided by the driver for
> allocating & spreading vectors. This only works when the driver passes
> the same 'max_vecs' and 'min_vecs' to pci_alloc_irq_vectors_affinity(),
> and it also requires the driver to retry the allocation & spread.
> 
> As Bjorn and Keith mentioned, the current usage & interface for irq sets
> is a bit awkward, because the retrying should have been avoided by providing
> one reasonable 'min_vecs'. However, if 'min_vecs' isn't the same as
> 'max_vecs', the number of allocated vectors is unknown before calling
> pci_alloc_irq_vectors_affinity(), so each set's vector count can't be
> pre-calculated.
> 
> Add a new .calc_sets callback to 'struct irq_affinity' so that the
> driver can calculate the set vectors after the IRQ vectors are allocated
> and before they are spread. Also add 'priv' so that the driver can
> retrieve its private data via the 'struct irq_affinity'.
> 
> 
> V2:
> 	- add .calc_sets instead of .setup_affinity(), which is easily
> 	abused by drivers

This looks really well done. If you can address the minor nitpicks, then
this is good to go, unless someone has objections.

Thanks,

	tglx


Thread overview: 10+ messages
2019-02-12 13:04 [PATCH V2 0/4] genirq/affinity: add .calc_sets for improving IRQ allocation & spread Ming Lei
2019-02-12 13:04 ` [PATCH V2 1/4] genirq/affinity: store irq set vectors in 'struct irq_affinity' Ming Lei
2019-02-12 13:35   ` Thomas Gleixner
2019-02-12 13:04 ` [PATCH V2 2/4] genirq/affinity: add new callback for caculating set vectors Ming Lei
2019-02-12 13:37   ` Thomas Gleixner
2019-02-12 13:04 ` [PATCH V2 3/4] nvme-pci: avoid irq allocation retrying via .calc_sets Ming Lei
2019-02-12 15:49   ` Keith Busch
2019-02-12 13:04 ` [PATCH V2 4/4] genirq/affinity: Document .calc_sets as required in case of multiple sets Ming Lei
2019-02-12 23:42   ` Bjorn Helgaas
2019-02-12 13:42 ` Thomas Gleixner [this message]
