linux-pci.vger.kernel.org archive mirror
From: Leon Romanovsky <leon@kernel.org>
To: Don Dutile <ddutile@redhat.com>
Cc: Bjorn Helgaas <helgaas@kernel.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Jakub Kicinski <kuba@kernel.org>,
	linux-pci@vger.kernel.org, linux-rdma@vger.kernel.org,
	netdev@vger.kernel.org,
	Alex Williamson <alex.williamson@redhat.com>
Subject: Re: [PATCH mlx5-next 1/4] PCI: Configure number of MSI-X vectors for SR-IOV VFs
Date: Sun, 10 Jan 2021 10:29:52 +0200	[thread overview]
Message-ID: <20210110082952.GF31158@unreal> (raw)
In-Reply-To: <ba1e7c38-2a21-40ba-787f-458b979b938f@redhat.com>

On Thu, Jan 07, 2021 at 10:54:38PM -0500, Don Dutile wrote:
> On 1/7/21 7:57 PM, Bjorn Helgaas wrote:
> > [+cc Alex, Don]
> >
> > This patch does not actually *configure* the number of vectors, so the
> > subject is not quite accurate.  IIUC, this patch adds a sysfs file
> > that can be used to configure the number of vectors.  The subject
> > should mention the sysfs connection.
> >
> > On Sun, Jan 03, 2021 at 10:24:37AM +0200, Leon Romanovsky wrote:
> > > From: Leon Romanovsky <leonro@nvidia.com>
> > >
> > > This function is applicable for SR-IOV VFs because such devices allocate
> > > their MSI-X table before they will run on the targeted hardware and they
> > > can't guess the right amount of vectors.
> > This sentence doesn't quite have enough context to make sense to me.
> > Per PCIe r5.0, sec 9.5.1.2, I think PFs and VFs have independent MSI-X
> > Capabilities.  What is the connection between the PF MSI-X and the VF
> > MSI-X?
> +1... strip this commit log section and rewrite it with correct, technical content.
> PFs & VFs have independent MSI-X capabilities.
>
> Q: is this an issue where (some) mlx5 devices have a large MSI-X capability (per VF) that may overwhelm a system's (PCI (sub)tree) MSI/interrupt capacity,
> and this is a sysfs-based tuning knob to reduce the max number on such 'challenged' systems?
> -- ah; reading further below, it's based on some information gleaned from the VM's capability for interrupt support.
>     -- or maybe IOMMU (interrupt) support on the host system, and the VF can't exceed it or configuration fails in the VM... whatever... it's some VM capability that's being accommodated.

I hope that this answers your question:
https://lore.kernel.org/linux-pci/20210110082206.GD31158@unreal/T/#md5dfc2edaaa686331ab3ce73496df7f58421c550

This feature is for MSI-X vector repartitioning and reduction.
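In other words, when the platform cannot give every VF its full table, an administrator can shrink one VF's allocation and enlarge another's. A minimal sketch of how the knob proposed in this patch might be used, assuming the vf_msix_vec attribute described below; the device addresses and vector counts are purely illustrative:

  # Illustrative only: addresses and counts are made up, and the VFs are
  # assumed to have no driver bound yet.
  echo 2  > /sys/bus/pci/devices/0000:08:00.2/vf_msix_vec   # lightly used VF
  echo 32 > /sys/bus/pci/devices/0000:08:00.3/vf_msix_vec   # busy VF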

> > The MSI-X table sizes should be determined by the Table Size in the
> > Message Control register.  Apparently we write a VF's Table Size
> > before a driver is bound to the VF?  Where does that happen?
> >
> > "Before they run on the targeted hardware" -- do you mean before the
> > VF is passed through to a guest virtual machine?  You mention "target
> > VM" below, which makes more sense to me.  VFs don't "run"; they're not
> > software.  I apologize for not being an expert in the use of VFs.
> >
> > Please mention the sysfs path in the commit log.
> >
> > > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > > ---
> > >   Documentation/ABI/testing/sysfs-bus-pci | 16 +++++++
> > >   drivers/pci/iov.c                       | 57 +++++++++++++++++++++++++
> > >   drivers/pci/msi.c                       | 30 +++++++++++++
> > >   drivers/pci/pci-sysfs.c                 |  1 +
> > >   drivers/pci/pci.h                       |  1 +
> > >   include/linux/pci.h                     |  8 ++++
> > >   6 files changed, 113 insertions(+)
> > >
> > > diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
> > > index 25c9c39770c6..30720a9e1386 100644
> > > --- a/Documentation/ABI/testing/sysfs-bus-pci
> > > +++ b/Documentation/ABI/testing/sysfs-bus-pci
> > > @@ -375,3 +375,19 @@ Description:
> > >   		The value comes from the PCI kernel device state and can be one
> > >   		of: "unknown", "error", "D0", D1", "D2", "D3hot", "D3cold".
> > >   		The file is read only.
> > > +
> > > +What:		/sys/bus/pci/devices/.../vf_msix_vec
> > > +Date:		December 2020
> > > +Contact:	Leon Romanovsky <leonro@nvidia.com>
> > > +Description:
> > > +		This file is associated with the SR-IOV VFs. It allows overwrite
> > > +		the amount of MSI-X vectors for that VF. This is needed to optimize
> > > +		performance of newly bounded devices by allocating the number of
> > > +		vectors based on the internal knowledge of targeted VM.
> > s/allows overwrite/allows configuration of/
> > s/for that/for the/
> > s/amount of/number of/
> > s/bounded/bound/
> >
> > What "internal knowledge" is this?  AFAICT this would have to be some
> > user-space administration knowledge, not anything internal to the
> > kernel.
> Correct; likely a (section of a) libvirt VM description.

Right, libvirt and/or orchestration software.
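The management stack already knows how many vCPUs (and therefore how many useful interrupt vectors) each guest will get, so it can size the VF before passing it through. A hypothetical flow, with a made-up address and vector count; vf_msix_vec is the attribute proposed in this patch, and driver_override/drivers_probe are the usual sysfs mechanism for binding vfio-pci:

  # Hypothetical orchestration step for a 4-vCPU guest (values illustrative).
  echo 4 > /sys/bus/pci/devices/0000:08:00.2/vf_msix_vec
  echo vfio-pci > /sys/bus/pci/devices/0000:08:00.2/driver_override
  echo 0000:08:00.2 > /sys/bus/pci/drivers_probe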

Thanks


Thread overview: 16+ messages
2021-01-03  8:24 [PATCH mlx5-next 0/4] Dynamically assign MSI-X vectors count Leon Romanovsky
2021-01-03  8:24 ` [PATCH mlx5-next 1/4] PCI: Configure number of MSI-X vectors for SR-IOV VFs Leon Romanovsky
2021-01-08  0:57   ` Bjorn Helgaas
2021-01-08  3:54     ` Don Dutile
2021-01-08  7:25       ` Leon Romanovsky
2021-01-08 16:21         ` Alex Williamson
2021-01-10  8:47           ` Leon Romanovsky
2021-01-08 21:09       ` Bjorn Helgaas
2021-01-09  2:54         ` Don Dutile
2021-01-10  8:33           ` Leon Romanovsky
2021-01-10  8:25         ` Leon Romanovsky
2021-01-10  8:29       ` Leon Romanovsky [this message]
2021-01-10  8:22     ` Leon Romanovsky
2021-01-03  8:24 ` [PATCH mlx5-next 3/4] net/mlx5: Dynamically assign MSI-X vectors count Leon Romanovsky
2021-01-03  8:24 ` [PATCH mlx5-next 4/4] net/mlx5: Allow to the users to configure number of MSI-X vectors Leon Romanovsky
2021-01-06  5:50 ` [PATCH mlx5-next 0/4] Dynamically assign MSI-X vectors count Leon Romanovsky
