linux-pci.vger.kernel.org archive mirror
From: Leon Romanovsky <leon@kernel.org>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Jakub Kicinski <kuba@kernel.org>,
	linux-pci <linux-pci@vger.kernel.org>,
	linux-rdma@vger.kernel.org, Netdev <netdev@vger.kernel.org>,
	Don Dutile <ddutile@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	"David S . Miller" <davem@davemloft.net>
Subject: Re: [PATCH mlx5-next v4 1/4] PCI: Add sysfs callback to allow MSI-X table size change of SR-IOV VFs
Date: Mon, 25 Jan 2021 20:47:19 +0200	[thread overview]
Message-ID: <20210125184719.GK579511@unreal> (raw)
In-Reply-To: <20210124190032.GD5038@unreal>

On Sun, Jan 24, 2021 at 09:00:32PM +0200, Leon Romanovsky wrote:
> On Sun, Jan 24, 2021 at 08:47:44AM -0800, Alexander Duyck wrote:
> > On Sun, Jan 24, 2021 at 5:11 AM Leon Romanovsky <leon@kernel.org> wrote:
> > >
> > > From: Leon Romanovsky <leonro@nvidia.com>
> > >
> > > Extend the PCI sysfs interface with a new callback that allows configuring
> > > the number of MSI-X vectors for a specific SR-IOV VF. This is needed
> > > to optimize the performance of newly bound devices by allocating
> > > the number of vectors based on the administrator's knowledge of the targeted VM.
> > >
> > > This is applicable to SR-IOV VFs because such devices allocate their
> > > MSI-X tables before they run in the VMs, and the HW can't guess the
> > > right number of vectors, so it allocates them statically and equally.
> > >
> > > 1) The newly added /sys/bus/pci/devices/.../vfs_overlay/sriov_vf_msix_count
> > > file will be visible for the VFs and is writable as long as no driver is
> > > bound to the VF.
> > >
> > > The values accepted are:
> > >  * > 0 - will be the number reported by the VF's MSI-X capability
> > >  * < 0 - not valid
> > >  * = 0 - will reset to the device default value
> > >
> > > 2) To make management easy, provide a new read-only sysfs file that
> > > returns the total number of MSI-X vectors available to configure.
> > >
> > > cat /sys/bus/pci/devices/.../vfs_overlay/sriov_vf_total_msix
> > >   = 0 - feature is not supported
> > >   > 0 - total number of MSI-X vectors available for the VFs to consume
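
[Editor's note: a usage sketch of the interface described above, following the
semantics stated in the commit message; the PF address 0000:01:00.0 and VF
address 0000:01:00.2 are hypothetical examples, not values from the patch.]

```shell
# Read how many MSI-X vectors the PF can distribute (0 = feature unsupported)
cat /sys/bus/pci/devices/0000:01:00.0/vfs_overlay/sriov_vf_total_msix

# With no driver bound to the VF, assign it 8 MSI-X vectors
echo 8 > /sys/bus/pci/devices/0000:01:00.2/vfs_overlay/sriov_vf_msix_count

# Writing 0 restores the device default vector count
echo 0 > /sys/bus/pci/devices/0000:01:00.2/vfs_overlay/sriov_vf_msix_count
```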
> > >
> > > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > > ---
> > >  Documentation/ABI/testing/sysfs-bus-pci |  32 +++++
> > >  drivers/pci/iov.c                       | 180 ++++++++++++++++++++++++
> > >  drivers/pci/msi.c                       |  47 +++++++
> > >  drivers/pci/pci.h                       |   4 +
> > >  include/linux/pci.h                     |  10 ++
> > >  5 files changed, 273 insertions(+)
> > >
> >
> > <snip>
> >
> > > +
> > > +static umode_t sriov_pf_attrs_are_visible(struct kobject *kobj,
> > > +                                         struct attribute *a, int n)
> > > +{
> > > +       struct device *dev = kobj_to_dev(kobj);
> > > +       struct pci_dev *pdev = to_pci_dev(dev);
> > > +
> > > +       if (!pdev->msix_cap || !dev_is_pf(dev))
> > > +               return 0;
> > > +
> > > +       return a->mode;
> > > +}
> > > +
> > > +static umode_t sriov_vf_attrs_are_visible(struct kobject *kobj,
> > > +                                         struct attribute *a, int n)
> > > +{
> > > +       struct device *dev = kobj_to_dev(kobj);
> > > +       struct pci_dev *pdev = to_pci_dev(dev);
> > > +
> > > +       if (!pdev->msix_cap || dev_is_pf(dev))
> > > +               return 0;
> > > +
> > > +       return a->mode;
> > > +}
> > > +
> >
> > Given the changes I don't see why we need to add the "visible"
> > functions. We are only registering this from the PF if there is a need
> > to make use of the interfaces, correct? If so we can just assume that
> > the interfaces should always be visible if they are requested.
>
> I added them to make extending this vfs_overlay interface easier, so we
> won't forget that the current fields need "msix_cap". I also followed the
> same style as other attribute_groups that have .is_visible.
>
> >
> > Also you may want to look at placing a link to the VF folders in the
> > PF folder, although I suppose there are already links from the PF PCI
> > device to the VF PCI devices so maybe that isn't necessary. It just
> > takes a few extra steps to navigate between the two.
>
> We already have them; I don't think we need to add extra links, they
> would gain nothing.
>
> [leonro@vm ~]$ ls -l /sys/bus/pci/devices/0000\:01\:00.0/
> ....
> drwxr-xr-x 2 root root        0 Jan 24 14:02 vfs_overlay
> lrwxrwxrwx 1 root root        0 Jan 24 14:02 virtfn0 -> ../0000:01:00.1
> lrwxrwxrwx 1 root root        0 Jan 24 14:02 virtfn1 -> ../0000:01:00.2
> ....

Alexander, are we in agreement here? Do you expect a v5 from me without ".is_visible"?

Thanks

>
> Thanks


Thread overview: 16+ messages
2021-01-24 13:11 [PATCH mlx5-next v4 0/4] Dynamically assign MSI-X vectors count Leon Romanovsky
2021-01-24 13:11 ` [PATCH mlx5-next v4 1/4] PCI: Add sysfs callback to allow MSI-X table size change of SR-IOV VFs Leon Romanovsky
2021-01-24 16:47   ` Alexander Duyck
2021-01-24 19:00     ` Leon Romanovsky
2021-01-25 18:47       ` Leon Romanovsky [this message]
2021-01-25 18:50         ` Alexander Duyck
2021-01-25 18:54           ` Leon Romanovsky
2021-01-25 21:52   ` Jakub Kicinski
2021-01-26  6:01     ` Leon Romanovsky
2021-01-26  8:20       ` Joe Perches
2021-01-26  8:48         ` Leon Romanovsky
2021-01-26  8:57           ` Joe Perches
2021-01-26  9:26             ` Leon Romanovsky
2021-01-24 13:11 ` [PATCH mlx5-next v4 2/4] net/mlx5: Add dynamic MSI-X capabilities bits Leon Romanovsky
2021-01-24 13:11 ` [PATCH mlx5-next v4 3/4] net/mlx5: Dynamically assign MSI-X vectors count Leon Romanovsky
2021-01-24 13:11 ` [PATCH mlx5-next v4 4/4] net/mlx5: Allow to the users to configure number of MSI-X vectors Leon Romanovsky
