From: Ming Lei <ming.lei@redhat.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Christoph Hellwig <hch@lst.de>,
	Bjorn Helgaas <helgaas@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	Keith Busch <keith.busch@intel.com>,
	Marc Zyngier <marc.zyngier@arm.com>,
	Sumit Saxena <sumit.saxena@broadcom.com>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	Shivasharan Srikanteshwara 
	<shivasharan.srikanteshwara@broadcom.com>
Subject: Re: [patch V5 4/8] nvme-pci: Simplify interrupt allocation
Date: Fri, 15 Feb 2019 06:41:37 +0800	[thread overview]
Message-ID: <20190214224136.GA18764@ming.t460p> (raw)
In-Reply-To: <20190214211759.699390983@linutronix.de>

On Thu, Feb 14, 2019 at 09:47:59PM +0100, Thomas Gleixner wrote:
> From: Ming Lei <ming.lei@redhat.com>
> 
> The NVME PCI driver contains a tedious mechanism for interrupt
> allocation, which is necessary to adjust the number and size of interrupt
> sets to the maximum available number of interrupts which depends on the
> underlying PCI capabilities and the available CPU resources.
> 
> It works around the former shortcomings of the PCI and core interrupt
> allocation mechanisms in combination with interrupt sets.
> 
> The PCI interrupt allocation function allows the caller to provide a
> maximum and a minimum number of interrupts to be allocated and tries to
> allocate as many as possible. This worked without driver interaction as
> long as there was only a single set of interrupts to handle.
> 
> With the addition of support for multiple interrupt sets in the generic
> affinity spreading logic, which is invoked from the PCI interrupt
> allocation, the adaptive loop in the PCI interrupt allocation did not
> work for multiple interrupt sets. The reason is that depending on the
> total number of interrupts which the PCI allocation adaptive loop tries
> to allocate in each step, the number and the size of the interrupt sets
> need to be adapted as well. Due to the way the interrupt sets support was
> implemented there was no way for the PCI interrupt allocation code or the
> core affinity spreading mechanism to invoke a driver specific function
> for adapting the interrupt sets configuration.
> 
> As a consequence the driver had to implement another adaptive loop around
> the PCI interrupt allocation function, calling it with the maximum and
> minimum number of interrupts set to the same value. This ensured that the
> allocation either succeeded or failed immediately, without any attempt to
> adjust the number of interrupts in the PCI code.
> 
> The core code now allows drivers to provide a callback to recalculate the
> number and the size of interrupt sets during PCI interrupt allocation,
> which in turn allows the PCI interrupt allocation function to be called
> in the same way as with a single set of interrupts. The PCI code handles
> the adaptive loop and the interrupt affinity spreading mechanism invokes
> the driver callback to adapt the interrupt set configuration to the
> current loop value. This replaces the adaptive loop in the driver
> completely.
> 
> Implement the NVME specific callback which adjusts the interrupt sets
> configuration and remove the adaptive allocation loop.
> 
> [ tglx: Simplify the callback further and restore the dropped adjustment of
>   	number of sets ]
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  drivers/nvme/host/pci.c |  108 ++++++++++++------------------------------------
>  1 file changed, 28 insertions(+), 80 deletions(-)
> 
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2041,41 +2041,32 @@ static int nvme_setup_host_mem(struct nv
>  	return ret;
>  }
>  
> -/* irq_queues covers admin queue */
> -static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
> +/*
> + * nrirqs is the number of interrupts available for write and read
> + * queues. The core already reserved an interrupt for the admin queue.
> + */
> +static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
>  {
> -	unsigned int this_w_queues = write_queues;
> -
> -	WARN_ON(!irq_queues);
> -
> -	/*
> -	 * Setup read/write queue split, assign admin queue one independent
> -	 * irq vector if irq_queues is > 1.
> -	 */
> -	if (irq_queues <= 2) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -		return;
> -	}
> +	struct nvme_dev *dev = affd->priv;
> +	unsigned int nr_read_queues;
>  
>  	/*
> -	 * If 'write_queues' is set, ensure it leaves room for at least
> -	 * one read queue and one admin queue
> -	 */
> -	if (this_w_queues >= irq_queues)
> -		this_w_queues = irq_queues - 2;
> -
> -	/*
> -	 * If 'write_queues' is set to zero, reads and writes will share
> -	 * a queue set.
> -	 */
> -	if (!this_w_queues) {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;
> -		dev->io_queues[HCTX_TYPE_READ] = 0;
> -	} else {
> -		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;
> -		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;
> -	}
> +	 * If only one interrupt is available, combine write and read
> +	 * queues. If 'write_queues' is set, ensure it leaves room for at
> +	 * least one read queue.
> +	 */
> +	if (nrirqs == 1)
> +		nr_read_queues = 0;
> +	else if (write_queues >= nrirqs)
> +		nr_read_queues = nrirqs - 1;
> +	else
> +		nr_read_queues = nrirqs - write_queues;
> +
> +	dev->io_queues[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> +	affd->set_size[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
> +	dev->io_queues[HCTX_TYPE_READ] = nr_read_queues;
> +	affd->set_size[HCTX_TYPE_READ] = nr_read_queues;
> +	affd->nr_sets = nr_read_queues ? 2 : 1;
>  }
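
For reference, the driver-side hookup of the callback described above looks
roughly like this elsewhere in the patch (sketched and simplified here; the
poll queue handling is omitted):

static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
{
	struct pci_dev *pdev = to_pci_dev(dev->dev);
	struct irq_affinity affd = {
		.pre_vectors	= 1,			/* admin queue */
		.calc_sets	= nvme_calc_irq_sets,	/* invoked by the core */
		.priv		= dev,
	};

	/* The PCI core now runs the adaptive min/max loop itself. */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}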

.calc_sets is called only if more than .pre_vectors vectors are available,
so dev->io_queues[HCTX_TYPE_DEFAULT] may not be set when
(nvecs == affd->pre_vectors + affd->post_vectors).
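
To make that corner case concrete, the core side behaves roughly like this
(paraphrased from irq_create_affinity_masks() as changed by this series,
not the literal code):

	unsigned int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;

	/* Nothing left to spread: bail out before the sets callback runs. */
	if (!affvecs)
		return NULL;

	/* Only here does the driver get to (re)calculate its sets. */
	affd->calc_sets(affd, affvecs);

So with nvecs == affd->pre_vectors + affd->post_vectors the NVMe callback
is never invoked and dev->io_queues[] keeps whatever it contained before
the allocation.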

Thanks,
Ming

Thread overview: 30+ messages
2019-02-14 20:47 [patch V5 0/8] genirq/affinity: Overhaul the multiple interrupt sets support Thomas Gleixner
2019-02-14 20:47 ` [patch V5 1/8] genirq/affinity: Code consolidation Thomas Gleixner
2019-02-14 20:47 ` [patch V5 2/8] genirq/affinity: Store interrupt sets size in struct irq_affinity Thomas Gleixner
2019-02-14 20:47 ` [patch V5 3/8] genirq/affinity: Add new callback for (re)calculating interrupt sets Thomas Gleixner
2019-02-14 20:47 ` [patch V5 4/8] nvme-pci: Simplify interrupt allocation Thomas Gleixner
2019-02-14 22:41   ` Ming Lei [this message]
2019-02-14 23:55     ` Thomas Gleixner
2019-02-15 23:00       ` Thomas Gleixner
2019-02-15  9:24   ` Marc Zyngier
2019-02-15  9:52     ` Thomas Gleixner
2019-02-15  9:54       ` Thomas Gleixner
2019-02-14 20:48 ` [patch V5 5/8] genirq/affinity: Remove the leftovers of the original set support Thomas Gleixner
2019-02-14 20:48 ` [patch V5 6/8] PCI/MSI: Remove obsolete sanity checks for multiple interrupt sets Thomas Gleixner
2019-02-14 20:48 ` [patch V5 7/8] genirq/affinity: Set is_managed in the spreading function Thomas Gleixner
2019-02-14 20:48 ` [patch V5 8/8] genirq/affinity: Add support for non-managed affinity sets Thomas Gleixner
