From: Alexander Duyck <alexander.duyck@gmail.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Jakub Kicinski <kuba@kernel.org>,
	linux-pci <linux-pci@vger.kernel.org>,
	linux-rdma@vger.kernel.org, Netdev <netdev@vger.kernel.org>,
	Don Dutile <ddutile@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>
Subject: Re: [PATCH mlx5-next v1 1/5] PCI: Add sysfs callback to allow MSI-X table size change of SR-IOV VFs
Date: Wed, 13 Jan 2021 12:00:00 -0800	[thread overview]
Message-ID: <CAKgT0UecBX+LTR9GuxFb=P+pcUkjU5RYNNjeynExS-9Pik1Hsg@mail.gmail.com> (raw)
In-Reply-To: <20210113060938.GF4678@unreal>

On Tue, Jan 12, 2021 at 10:09 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Tue, Jan 12, 2021 at 01:59:51PM -0800, Alexander Duyck wrote:
> > On Mon, Jan 11, 2021 at 10:39 PM Leon Romanovsky <leon@kernel.org> wrote:
> > >
> > > On Mon, Jan 11, 2021 at 11:30:33AM -0800, Alexander Duyck wrote:
> > > > On Sun, Jan 10, 2021 at 7:12 AM Leon Romanovsky <leon@kernel.org> wrote:
> > > > >
> > > > > From: Leon Romanovsky <leonro@nvidia.com>
> > > > >
> > > > > Extend the PCI sysfs interface with a new callback that allows configuring
> > > > > the number of MSI-X vectors for a specific SR-IOV VF. This is needed
> > > > > to optimize the performance of newly bound devices by allocating
> > > > > the number of vectors based on the administrator's knowledge of the targeted VM.
> > > > >
> > > > > This function is applicable to SR-IOV VFs because such devices allocate
> > > > > their MSI-X table before they run on the VMs, and the HW can't guess the
> > > > > right number of vectors, so it allocates them statically and equally.
> > > > >
> > > > > The newly added /sys/bus/pci/devices/.../vf_msix_vec file is exposed
> > > > > for the VFs and is writable as long as no driver is bound to the VF.
> > > > >
> > > > > The values accepted are:
> > > > >  * > 0 - this will be the number reported by the VF's MSI-X capability
> > > > >  * < 0 - not valid
> > > > >  * = 0 - will reset to the device default value
> > > > >
> > > > > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > > > > ---
> > > > >  Documentation/ABI/testing/sysfs-bus-pci | 20 ++++++++
> > > > >  drivers/pci/iov.c                       | 62 +++++++++++++++++++++++++
> > > > >  drivers/pci/msi.c                       | 29 ++++++++++++
> > > > >  drivers/pci/pci-sysfs.c                 |  1 +
> > > > >  drivers/pci/pci.h                       |  2 +
> > > > >  include/linux/pci.h                     |  8 +++-
> > > > >  6 files changed, 121 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
> > > > > index 25c9c39770c6..05e26e5da54e 100644
> > > > > --- a/Documentation/ABI/testing/sysfs-bus-pci
> > > > > +++ b/Documentation/ABI/testing/sysfs-bus-pci
> > > > > @@ -375,3 +375,23 @@ Description:
> > > > >                 The value comes from the PCI kernel device state and can be one
> > > > >                 of: "unknown", "error", "D0", "D1", "D2", "D3hot", "D3cold".
> > > > >                 The file is read only.
> > > > > +
> > > > > +What:          /sys/bus/pci/devices/.../vf_msix_vec
> > > >
> > > > So the name for this doesn't seem to match existing SR-IOV naming.  It
> > > > seems like this should probably be something like sriov_vf_msix_count
> > > > in order to be closer to the actual naming of what is being dealt
> > > > with.
> > >
> > > I'm open for suggestions. I didn't use sriov_vf_msix_count because it
> > > seems too long for me.
> > >
> > > >
> > > > > +Date:          December 2020
> > > > > +Contact:       Leon Romanovsky <leonro@nvidia.com>
> > > > > +Description:
> > > > > +               This file is associated with SR-IOV VFs.
> > > > > +               It allows configuration of the number of MSI-X vectors for
> > > > > +               the VF. This is needed to optimize the performance of newly
> > > > > +               bound devices by allocating the number of vectors based on
> > > > > +               the administrator's knowledge of the targeted VM.
> > > > > +
> > > > > +               The values accepted are:
> > > > > +                * > 0 - this will be the number reported by the VF's
> > > > > +                        MSI-X capability
> > > > > +                * < 0 - not valid
> > > > > +                * = 0 - will reset to the device default value
> > > > > +
> > > > > +               The file is writable if the PF is bound to a driver that
> > > > > +               supports the ->sriov_set_msix_vec_count() callback and there
> > > > > +               is no driver bound to the VF.
> > > > > diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
> > > > > index 4afd4ee4f7f0..42c0df4158d1 100644
> > > > > --- a/drivers/pci/iov.c
> > > > > +++ b/drivers/pci/iov.c
> > > > > @@ -31,6 +31,7 @@ int pci_iov_virtfn_devfn(struct pci_dev *dev, int vf_id)
> > > > >         return (dev->devfn + dev->sriov->offset +
> > > > >                 dev->sriov->stride * vf_id) & 0xff;
> > > > >  }
> > > > > +EXPORT_SYMBOL(pci_iov_virtfn_devfn);
> > > > >
> > > > >  /*
> > > > >   * Per SR-IOV spec sec 3.3.10 and 3.3.11, First VF Offset and VF Stride may
> > > > > @@ -426,6 +427,67 @@ const struct attribute_group sriov_dev_attr_group = {
> > > > >         .is_visible = sriov_attrs_are_visible,
> > > > >  };
> > > > >
> > > > > +#ifdef CONFIG_PCI_MSI
> > > > > +static ssize_t vf_msix_vec_show(struct device *dev,
> > > > > +                               struct device_attribute *attr, char *buf)
> > > > > +{
> > > > > +       struct pci_dev *pdev = to_pci_dev(dev);
> > > > > +       int numb = pci_msix_vec_count(pdev);
> > > > > +       struct pci_dev *pfdev;
> > > > > +
> > > > > +       if (numb < 0)
> > > > > +               return numb;
> > > > > +
> > > > > +       pfdev = pci_physfn(pdev);
> > > > > +       if (!pfdev->driver || !pfdev->driver->sriov_set_msix_vec_count)
> > > > > +               return -EOPNOTSUPP;
> > > > > +
> > > >
> > > > This doesn't make sense to me. You are getting the vector count for
> > > > the PCI device and reporting that. Are you expecting to call this on
> > > > the PF or the VFs? It seems like this should be a PF attribute and not
> > > > be called on the individual VFs.
> > >
> > > We had this discussion over v0 variant of this series.
> > > https://lore.kernel.org/linux-pci/20210108072525.GB31158@unreal/
> > >
> > > This is a per-VF property, but the VF is not bound to a driver, so you
> > > need some way to convey the new number to the HW so that it updates the PCI value.
> > >
> > > You must change/update this field after the VF is created, because all SR-IOV VFs
> > > are created at the same time. The operator (administrator/orchestration
> > > software/etc.) will know the right number of MSI-X vectors right before
> > > binding this VF to the requested VM.
> > >
> > > It means that extending the PF sysfs to take both a VF index and a count
> > > would look very unfriendly to users.
> > >
> > > The PF here is an anchor to the relevant driver.
> >
> > Yes, but the problem is you are attempting to do it after a driver may
> > have already bound itself to the VF device. This setup works for the
> > direct-assigned VF case; however, if the VF drivers are running on the
> > host, this gets ugly, as the driver may already be up and running.
>
> Please take a look at the pci_set_msix_vec_count() implementation; it
> checks that the VF is not probed yet. I outlined this requirement in
> almost all of my responses and commit messages.
>
> So no, it is not possible to set MSI-X vectors on an already-bound device.

Unless you are holding the device lock you cannot guarantee that; as
Alex Williamson pointed out, you can end up racing with driver
probe/remove.

Secondly, the fact that the driver might be probed before you even get
to make your call will cause this to return -EOPNOTSUPP, which doesn't
exactly make sense since the operation is supported; you just cannot do
it because the device is busy.

> >
> > Also I am not a big fan of the VF groping around looking for a PF
> > interface as it means the interface will likely be exposed in the
> > guest as well, but it just won't work.
>
> If you are referring to a VF exposed to the VM, then the VF must be
> bound to the vfio driver, or some other driver, which won't allow an
> MSI-X change. If you are referring to a PF exposed to the VM, that is a
> very unlikely scenario in the real world, reserved for the brave among
> us. Even in that case, setting MSI-X won't work, because the PF will be
> bound to the hypervisor driver, which doesn't support set_msix.
>
> So both cases are handled.

I get that they are handled. However I am not a huge fan of the sysfs
attributes for one device being dependent on another device. When you
have to start searching for another device it just makes things messy.

> >
> > > >
> > > > If you are calling this on the VFs then it doesn't really make any
> > > > sense anyway since the VF is not a "VF PCI dev representor" and
> > > > shouldn't be treated as such. In my opinion if we are going to be
> > > > doing per-port resource limiting that is something that might make
> > > > more sense as a part of the devlink configuration for the VF since the
> > > > actual change won't be visible to an assigned device.
> > >
> > > https://lore.kernel.org/linux-pci/20210112061535.GB4678@unreal/
> >
> > So the question I would have is if we are spawning the VFs and
> > expecting them to have different configs or the same configuration?
>
> By default, they have same configuration.
>
> > I'm assuming in your case you are looking for a different
> > configuration per port. Do I have that correct?
>
> No, per-VF, since a VF represents one device in the PCI world. For example,
> mlx5 can have more than one physical port.

Sorry, I meant per virtual function, not per port.

> >
> > Where this gets ugly is that SR-IOV assumes a certain uniformity per
> > VF, so doing per-VF custom limitations gets messy pretty quickly.
>
> I don't find any support for this "uniformity" claim in the PCI spec.

I am referring to the PCI configuration space. Each VF ends up with
some fixed amount of MMIO resources per function. So typically when
you spawn VFs, things are set up so that all you do is say how many
you want.

> > I wonder, if we are going this route, whether it would make more sense to
> > just define a device-tree-like schema that could be fed in to enable VFs,
> > instead of using echo X > sriov_numvfs and then trying to fix
> > things afterwards. Then you could define this and other features that
> > I am sure you will need in the future via a json-schema, like is done in
> > device-tree, and write it once, enabling the set of VFs that you need.
>
> Sorry, but this is overkill, it won't give us much and it doesn't fit
> the VF usage model at all.
>
> Right now, all heavy users of SR-IOV create many VFs, up to the maximum.
> They do it with autoprobe disabled, because it is too time-consuming to
> wait until all the VFs probe themselves and to unbind them later.
>
> After that, they wait for an incoming request to provision a VM on a VF; they
> set the MAC address, change MSI-X according to the VM properties, and bind that VF to the new VM.
>
> So MSI-X change is done after VFs were created.

So if I understand correctly, based on your comments below, you are
dynamically changing the VF's MSI-X configuration space then?

> >
> > Given that SR-IOV isn't meant to be tweaked dynamically, it seems like
> > that would be a much better fit for most users, as then you can just
> > modify the schema and reload it, which would probably be less effort
> > than manually redoing all the commands needed to set up the VFs
> > using this approach if you have to update each VF by hand.
>
> Quite the opposite is true. First, users rely on orchestration software to manage it.
> Second, you will need to take care of already-bound devices.

This is the part that concerns me. It seems like this is all adding a
bunch of orchestration need to all of this. Basically the device
config can shift out from under us on the host if we aren't paying
attention.

> >
> > > >
> > > > > +       return sprintf(buf, "%d\n", numb);
> > > > > +}
> > > > > +
> > > > > +static ssize_t vf_msix_vec_store(struct device *dev,
> > > > > +                                struct device_attribute *attr, const char *buf,
> > > > > +                                size_t count)
> > > > > +{
> > > > > +       struct pci_dev *vf_dev = to_pci_dev(dev);
> > > > > +       int val, ret;
> > > > > +
> > > > > +       ret = kstrtoint(buf, 0, &val);
> > > > > +       if (ret)
> > > > > +               return ret;
> > > > > +
> > > > > +       ret = pci_set_msix_vec_count(vf_dev, val);
> > > > > +       if (ret)
> > > > > +               return ret;
> > > > > +
> > > > > +       return count;
> > > > > +}
> > > > > +static DEVICE_ATTR_RW(vf_msix_vec);
> > > > > +#endif
> > > > > +
> > > > > +static struct attribute *sriov_vf_dev_attrs[] = {
> > > > > +#ifdef CONFIG_PCI_MSI
> > > > > +       &dev_attr_vf_msix_vec.attr,
> > > > > +#endif
> > > > > +       NULL,
> > > > > +};
> > > > > +
> > > > > +static umode_t sriov_vf_attrs_are_visible(struct kobject *kobj,
> > > > > +                                         struct attribute *a, int n)
> > > > > +{
> > > > > +       struct device *dev = kobj_to_dev(kobj);
> > > > > +
> > > > > +       if (dev_is_pf(dev))
> > > > > +               return 0;
> > > > > +
> > > > > +       return a->mode;
> > > > > +}
> > > > > +
> > > > > +const struct attribute_group sriov_vf_dev_attr_group = {
> > > > > +       .attrs = sriov_vf_dev_attrs,
> > > > > +       .is_visible = sriov_vf_attrs_are_visible,
> > > > > +};
> > > > > +
> > > > >  int __weak pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs)
> > > > >  {
> > > > >         return 0;
> > > > > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > > > > index 3162f88fe940..20705ca94666 100644
> > > > > --- a/drivers/pci/msi.c
> > > > > +++ b/drivers/pci/msi.c
> > > > > @@ -991,6 +991,35 @@ int pci_msix_vec_count(struct pci_dev *dev)
> > > > >  }
> > > > >  EXPORT_SYMBOL(pci_msix_vec_count);
> > > > >
> > > > > +/**
> > > > > + * pci_set_msix_vec_count - change the reported number of MSI-X vectors
> > > > > + * @dev: VF device that is going to be changed
> > > > > + * @numb: number of MSI-X vectors to report
> > > > > + * Applicable to SR-IOV VFs because such devices allocate their MSI-X table
> > > > > + * before they run on the VMs, and the HW can't guess the right number of
> > > > > + * vectors, so it allocates them statically and equally.
> > > > > + */
> > > > > +int pci_set_msix_vec_count(struct pci_dev *dev, int numb)
> > > > > +{
> > > > > +       struct pci_dev *pdev = pci_physfn(dev);
> > > > > +
> > > > > +       if (!dev->msix_cap || !pdev->msix_cap)
> > > > > +               return -EINVAL;
> > > > > +
> > > > > +       if (dev->driver || !pdev->driver ||
> > > > > +           !pdev->driver->sriov_set_msix_vec_count)
> > > > > +               return -EOPNOTSUPP;
> > > > > +
> > > > > +       if (numb < 0)
> > > > > +               /*
> > > > > +                * We don't support negative numbers for now,
> > > > > +                * but maybe in the future it will make sense.
> > > > > +                */
> > > > > +               return -EINVAL;
> > > > > +
> > > > > +       return pdev->driver->sriov_set_msix_vec_count(dev, numb);
> > > > > +}
> > > > > +
> > > >
> > > > If you are going to have a set operation for this it would make sense
> > > > to have a get operation. Your show operation seems unbalanced since
> > > > you are expecting to call it on the VF directly which just seems
> > > > wrong.
> > >
> > > There is already a get operator - pci_msix_vec_count().
> > > The same as above: the PF is an anchor for the driver. The VF doesn't have
> > > a driver yet, and we can't write directly to this PCI field - it is read-only.
> >
> > That returns the maximum. I would want to know what value was
> > actually written here.
>
> It returns the current value from the device, not the maximum.

Okay, that was the part I hadn't caught onto. That you were using this
to make read-only configuration space writable.

> >
> > Also being able to change this once the device already exists is kind
> > of an ugly setup as I would imagine there will be cases where the
> > device has already partitioned out the vectors that it wanted.
>
> It is not possible, because all VFs start with some value. Normal FW
> won't allow you to decrease the vector count below a certain limit.
>
> So the VFs are always operable.

I get that. My concern is more the fact that you are modifying
read-only bits within the configuration space that is visible to the
host. I initially thought you were doing the dynamic resizing behind
the scenes by just not enabling some of the MSI-X vectors in the table.
However, as I understand it now, you are resizing the MSI-X table
itself, which doesn't seem correct to me. The read-only portions of the
configuration space shouldn't be changed, at least within the host. I
just see the changing of fields that are expected to be static as
problematic.

If all of this is just to tweak the MSI-X table size in the
guest/userspace, wouldn't it make more sense to add the ability for
vfio to limit this directly, and perhaps intercept reads of the MSI-X
control register? Then you could have vfio also take care of
coordinating things with the driver, which would make much more sense
to me, as then you don't have to worry about a device on the host
changing size unexpectedly.

Anyway that is just my $.02.

- Alex

Thread overview: 49+ messages
2021-01-10 15:07 [PATCH mlx5-next v1 0/5] Dynamically assign MSI-X vectors count Leon Romanovsky
2021-01-10 15:07 ` [PATCH mlx5-next v1 1/5] PCI: Add sysfs callback to allow MSI-X table size change of SR-IOV VFs Leon Romanovsky
2021-01-11 19:30   ` Alexander Duyck
2021-01-12  3:25     ` Don Dutile
2021-01-12  6:15       ` Leon Romanovsky
2021-01-12  6:39     ` Leon Romanovsky
2021-01-12 21:59       ` Alexander Duyck
2021-01-13  6:09         ` Leon Romanovsky
2021-01-13 20:00           ` Alexander Duyck [this message]
2021-01-14  7:16             ` Leon Romanovsky
2021-01-14 16:51               ` Alexander Duyck
2021-01-14 17:12                 ` Leon Romanovsky
2021-01-13 17:50   ` Alex Williamson
2021-01-13 19:49     ` Leon Romanovsky
2021-01-10 15:07 ` [PATCH mlx5-next v1 2/5] PCI: Add SR-IOV sysfs entry to read number of MSI-X vectors Leon Romanovsky
2021-01-11 19:30   ` Alexander Duyck
2021-01-12  6:56     ` Leon Romanovsky
2021-01-12 21:34       ` Alexander Duyck
2021-01-13  6:19         ` Leon Romanovsky
2021-01-13 22:44           ` Alexander Duyck
2021-01-14  6:50             ` Leon Romanovsky
2021-01-14 16:40               ` Alexander Duyck
2021-01-14 16:48                 ` Jason Gunthorpe
2021-01-14 17:55                   ` Alexander Duyck
2021-01-14 18:29                     ` Jason Gunthorpe
2021-01-14 19:24                       ` Alexander Duyck
2021-01-14 19:56                         ` Leon Romanovsky
2021-01-14 20:08                         ` Jason Gunthorpe
2021-01-14 21:43                           ` Alexander Duyck
2021-01-14 23:28                             ` Alex Williamson
2021-01-15  1:56                               ` Alexander Duyck
2021-01-15 14:06                                 ` Jason Gunthorpe
2021-01-15 15:53                                   ` Leon Romanovsky
2021-01-16  1:48                                     ` Alexander Duyck
2021-01-16  8:20                                       ` Leon Romanovsky
2021-01-18  3:16                                         ` Alexander Duyck
2021-01-18  7:20                                           ` Leon Romanovsky
2021-01-18 13:28                                             ` Leon Romanovsky
2021-01-18 18:21                                               ` Alexander Duyck
2021-01-19  5:43                                                 ` Leon Romanovsky
2021-01-16  4:32                                   ` Alexander Duyck
2021-01-18 15:47                                     ` Jason Gunthorpe
2021-01-15 13:51                             ` Jason Gunthorpe
2021-01-14 17:26                 ` Leon Romanovsky
2021-01-18 17:03   ` Greg KH
2021-01-19  5:38     ` Leon Romanovsky
2021-01-10 15:07 ` [PATCH mlx5-next v1 3/5] net/mlx5: Add dynamic MSI-X capabilities bits Leon Romanovsky
2021-01-10 15:07 ` [PATCH mlx5-next v1 4/5] net/mlx5: Dynamically assign MSI-X vectors count Leon Romanovsky
2021-01-10 15:07 ` [PATCH mlx5-next v1 5/5] net/mlx5: Allow to the users to configure number of MSI-X vectors Leon Romanovsky
