From: Max Gurtovoy <firstname.lastname@example.org>
To: Alexey Kardashevskiy <email@example.com>,
Cornelia Huck <firstname.lastname@example.org>,
Alex Williamson <email@example.com>
Cc: Matthew Rosato <firstname.lastname@example.org>, <email@example.com>,
<firstname.lastname@example.org>, <email@example.com>, <firstname.lastname@example.org>,
<email@example.com>, <firstname.lastname@example.org>, <email@example.com>,
<firstname.lastname@example.org>, <email@example.com>, <firstname.lastname@example.org>,
<ACurrid@nvidia.com>, <email@example.com>, <firstname.lastname@example.org>
Subject: Re: [PATCH 8/9] vfio/pci: use x86 naming instead of igd
Date: Wed, 3 Feb 2021 18:07:45 +0200
Message-ID: <email@example.com>
On 2/3/2021 5:24 AM, Alexey Kardashevskiy wrote:
> On 03/02/2021 04:41, Max Gurtovoy wrote:
>> On 2/2/2021 6:06 PM, Cornelia Huck wrote:
>>> On Mon, 1 Feb 2021 11:42:30 -0700
>>> Alex Williamson <firstname.lastname@example.org> wrote:
>>>> On Mon, 1 Feb 2021 12:49:12 -0500
>>>> Matthew Rosato <email@example.com> wrote:
>>>>> On 2/1/21 12:14 PM, Cornelia Huck wrote:
>>>>>> On Mon, 1 Feb 2021 16:28:27 +0000
>>>>>> Max Gurtovoy <firstname.lastname@example.org> wrote:
>>>>>>> This patch doesn't change any logic; it only aligns to the
>>>>>>> concept of vfio_pci_core extensions. Extensions that are related
>>>>>>> to a platform and not to a specific vendor of PCI devices should
>>>>>>> be part of the core driver. Extensions that are specific to a PCI
>>>>>>> device vendor should go to a dedicated vendor vfio-pci driver.
>>>>>> My understanding is that igd means support for Intel graphics,
>>>>>> i.e. a strict subset of x86. If there are other future extensions
>>>>>> that e.g. only make sense for some devices found only on AMD
>>>>>> systems, I don't think they should all be included under the same
>>>>>> x86 umbrella.
>>>>>>
>>>>>> Similar reasoning for nvlink, that only seems to cover support for
>>>>>> some GPUs under Power, and is not a general platform-specific
>>>>>> extension IIUC.
>>>>>>
>>>>>> We can arguably do the zdev -> s390 rename (as zpci appears only
>>>>>> on s390, and all PCI devices will be zpci on that platform),
>>>>>> although I'm not sure about the benefit.
>>>>> As far as I can tell, there isn't any benefit for s390; it's just
>>>>> "re-branding" to match the platform name rather than the zdev name,
>>>>> which admittedly perhaps makes it more clear to someone outside of
>>>>> s390 that any PCI device on s390 is a zdev/zpci type, and thus will
>>>>> use this extension to vfio_pci(_core). This would still be true
>>>>> even if we add something later that builds atop it (e.g. a
>>>>> platform-specific device like ism-vfio-pci). Or for that matter,
>>>>> mlx5 via vfio-pci on s390x uses these zdev extensions today and
>>>>> would need to continue using them in a world where
>>>>> mlx5-vfio-pci.ko exists.
>>>>>
>>>>> I guess all that to say: if such a rename matches the 'grand
>>>>> scheme' of this design where we treat arch-level extensions to
>>>>> vfio_pci(_core) as "vfio_pci_(arch)", then I'm not particularly
>>>>> opposed to the rename. But by itself it's not very exciting :)
>>>> This all seems like the wrong direction to me. The goal here is to
>>>> modularize vfio-pci into a core library and have derived vendor
>>>> modules make use of that core library. If existing device-specific
>>>> extensions within vfio-pci cannot be turned into vendor modules
>>>> through this support and are instead redefined as platform-specific
>>>> features of the new core library, that feels like we're already
>>>> admitting failure of this core library to support known devices, let
>>>> alone future devices.
>>>>
>>>> IGD is a specific set of devices. They happen to rely on some
>>>> specific support, whose availability should be determined via the
>>>> vendor module probe callback. Packing that support into an "x86"
>>>> component as part of the core feels not only short-sighted, but also
>>>> avoids addressing the issues around how userspace determines an
>>>> appropriate module to use for a device.
>>> Hm, it seems that not all current extensions to the vfio-pci code are
>>> created equal.
>>>
>>> IIUC, we have igd and nvlink, which are sets of devices that only show
>>> up on x86 or ppc, respectively, and may rely on some special features
>>> of those architectures/platforms. The important point is that you have
>>> a device identifier that you can match a driver against.
>> Maybe you can supply the ids?
>>
>> Alexey K, I saw you've been working on the NVLINK2 for P9. Can you
>> supply the exact ids that should be bound to this driver?
>>
>> I'll add it to V3.
> Sorry, no, I never really had the exact ids; they keep popping up.
>
> The nvlink2 support can only work if the platform supports it, as
> there is nothing to do to the GPUs themselves; it is about setting up
> those nvlink2 links. If the platform advertises certain features in
> the device tree, then any GPU in the SXM2 form factor (not sure about
> the exact abbrev, not a usual PCIe device) should just work.
>
> If the nvlink2 "subdriver" fails to find nvlinks (the platform does
> not advertise them, or some parameters are new to this subdriver,
> such as link-speed), we still fall back to generic vfio-pci and try
> passing through this GPU as a plain PCI device with no extra links.
> Semi-optimal, but if the user is mining cryptocoins, then high-speed
> links are not really that important :) And the nvidia driver is
> capable of running such GPUs without nvlinks. Thanks,
From the above I can conclude that nvlink2_vfio_pci will need to find
the nvlinks during probe and fail in case it doesn't find them.
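I.e. something like the below (just a sketch from my side - the DT
property name and the nvlink2_vfio_pci_register() helper are guesses,
not the real code):

#include <linux/of.h>
#include <linux/pci.h>

static int nvlink2_vfio_pci_probe(struct pci_dev *pdev,
                                  const struct pci_device_id *id)
{
        struct device_node *np = pci_device_to_OF_node(pdev);

        /*
         * The platform doesn't advertise nvlink2 links for this GPU:
         * fail the probe, so the admin can bind plain vfio-pci instead
         * and pass the device through without the links.
         */
        if (!np || !of_find_property(np, "ibm,nvlink", NULL))
                return -ENODEV;

        /* links found: register with vfio-pci-core, add extra regions */
        return nvlink2_vfio_pci_register(pdev);
}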
Which platform driver is responsible for advertising it? Can we use an
aux_device to represent these nvlinks?
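Something like the below is what I had in mind (a very rough sketch;
the auxiliary bus only just landed in 5.11, every name below is made
up, and it depends on the powernv platform code creating such a device
per GPU):

#include <linux/auxiliary_bus.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>

/* made-up container the platform code would register per GPU */
struct nvlink2_aux_dev {
        struct auxiliary_device adev;
        u32 link_speed;                 /* filled from the device tree */
};

static const struct auxiliary_device_id nvlink2_aux_ids[] = {
        { .name = "powernv.nvlink2" },  /* "<module>.<name>" match */
        { }
};
MODULE_DEVICE_TABLE(auxiliary, nvlink2_aux_ids);

static int nvlink2_aux_probe(struct auxiliary_device *adev,
                             const struct auxiliary_device_id *id)
{
        struct nvlink2_aux_dev *ndev =
                container_of(adev, struct nvlink2_aux_dev, adev);

        /* all link parameters are known here; set up the vfio regions */
        dev_info(&adev->dev, "nvlink2 link speed %u\n", ndev->link_speed);
        return 0;
}

static struct auxiliary_driver nvlink2_aux_driver = {
        .probe    = nvlink2_aux_probe,
        .id_table = nvlink2_aux_ids,
};
module_auxiliary_driver(nvlink2_aux_driver);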
In case of a failure, the user can fall back to vfio-pci.ko and run
without the nvlinks, as you said.

This is deterministic behavior, and the user will know exactly what
he's getting from the vfio subsystem.
>>> On the other side, we have the zdev support, which both requires s390
>>> and applies to any pci device on s390. If we added special handling for
>>> ISM on s390, ISM would be in a category similar to igd/nvlink.
>>> Now, if somebody plugs a mlx5 device into an s390, we would want both
>>> the zdev support and the specialized mlx5 driver. Maybe zdev should be
>>> some kind of library that can be linked into normal vfio-pci, into
>>> vfio-pci-mlx5, and a hypothetical vfio-pci-ism? You always want zdev on
>>> s390 (if configured into the kernel).
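And regarding the zdev-as-a-library idea above: since every variant
driver links against vfio-pci-core anyway, the zdev code can simply
stay in the core and be compiled out for !s390, roughly as the
existing code already does (sketch below; the struct name depends on
the renames in this series):

/* vfio_pci_core.h - sketch */
#ifdef CONFIG_VFIO_PCI_ZDEV
int vfio_pci_info_zdev_add_caps(struct vfio_pci_device *vdev,
                                struct vfio_info_cap *caps);
#else
static inline int vfio_pci_info_zdev_add_caps(struct vfio_pci_device *vdev,
                                              struct vfio_info_cap *caps)
{
        return -ENODEV;
}
#endif

That way vfio-pci, mlx5-vfio-pci, and a hypothetical ism-vfio-pci all
get the zdev caps through the core's VFIO_DEVICE_GET_INFO path without
linking anything extra.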