From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Christoph Hellwig <hch@lst.de>
Cc: Lei Rao <lei.rao@intel.com>,
	kbusch@kernel.org, axboe@fb.com, kch@nvidia.com,
	sagi@grimberg.me, alex.williamson@redhat.com, cohuck@redhat.com,
	yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com,
	kevin.tian@intel.com, mjrosato@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, eddie.dong@intel.com, yadong.li@intel.com,
	yi.l.liu@intel.com, Konrad.wilk@oracle.com,
	stephen@eideticom.com, hang.yuan@intel.com,
	Oren Duer <oren@nvidia.com>
Subject: Re: [RFC PATCH 1/5] nvme-pci: add function nvme_submit_vf_cmd to issue admin commands for VF driver.
Date: Sun, 11 Dec 2022 13:39:37 +0200
Message-ID: <b2ade627-0abb-08a0-c28b-2bf8eb8e4973@nvidia.com>
In-Reply-To: <Y5DyorZJPdtN5WcX@ziepe.ca>


On 12/7/2022 10:08 PM, Jason Gunthorpe wrote:
> On Wed, Dec 07, 2022 at 07:33:33PM +0100, Christoph Hellwig wrote:
>> On Wed, Dec 07, 2022 at 01:31:44PM -0400, Jason Gunthorpe wrote:
>>>> Sorry, I meant VF.  Your continued use of SR-IOV terminology keeps
>>>> confusing me so much that I start mistyping things.
>>> Well, what words do you want to use?
>> The same I've used through this whole thread:  controlling and
>> controlled function.
>>
>>> So I don't think I've learned anything more about your objection.
>>>
>>> "fundamentally broken" doesn't help
>> The objection is that:
>>
>>   - in hardware fundamentally only the controlling function can
>>     control live migration features on the controlled function,
>>     because the controlled function is assigned to a VM which has
>>     control over it.
> Yes
>
> However, HiSilicon managed to do their implementation without this; or
> rather, you could say their "controlling function" is a single MMIO BAR
> page in their PCI VF and their "controlled function" is the rest of
> the PCI VF.
>
>>   - for the same reason there is no portable way to even find
>>     the controlling function from a controlled function, unless
>>     you want to equate PF = controlling and VF = controlled,
>>     and even that breaks down in some corner cases
> As you say, the kernel must know the relationship between
> controlling->controlled. Nothing else is sane.
>
> If the kernel knows this information then we can find a way for the
> vfio_device to have pointers to both controlling and controlled
> objects. I have a suggestion below.
>
>>   - if you want to control live migration from the controlled
>>     VM you need a new vfio subdriver for a function that has
>>     absolutely no new functionality itself related to live
>>     migration (quirks for bugs excepted).
> I see it differently: the VFIO driver *is* the live migration
> driver. Look at all the drivers that have come through and they are
> 99% live migration code. They have, IMHO, properly split the live
> migration concern out of their controlling/PF driver and placed it in
> the "VFIO live migration driver".
>
> We've done a pretty good job of allowing the VFIO live migration
> driver to pretty much just focus on live migration stuff and delegate
> the VFIO part to library code.
>
> Excepting quirks and bugs sounds nice, except we actually can't ignore
> them. Having all the VFIO capabilities is exactly how we are fixing
> the quirks and bugs in the first place, and I don't see, in your
> vision, how we can continue to do that if we split all the live
> migration code out into yet another subsystem.
>
> For instance how do I trap FLR like mlx5 must do if the
> drivers/live_migration code cannot intercept the FLR VFIO ioctl?
>
> How do I mangle and share the BAR like HiSilicon does?
>
> Which is really why this is in VFIO in the first place. It actually is
> coupled in practice, if not in theory.
>
>> So by this architecture you build a convoluted mess where you need
>> tons of vfio subdrivers that mostly just talk to the driver for the
>> controlling function, which they can't even sanely discover.  That's
>> what I keep calling fundamentally broken.
> The VFIO live migration drivers will look basically the same if you
> put them under drivers/live_migration. This cannot be considered a
> "convoluted mess" as splitting things by subsystem is best-practice,
> AFAIK.
>
> If we accept that drivers/vfio can be the "live migration subsystem"
> then let's go all the way and have the controlling driver call
> vfio_device_group_register() to create the VFIO char device for the
> controlled function.
>
> This solves the "sanely discover" problem because of course the
> controlling function driver knows what the controlled function is and
> it can acquire both functions before it calls
> vfio_device_group_register().
>
> This is actually what I want to do anyhow for SIOV-like functions and
> VFIO. Doing it for PCI VFs (or related PFs) is very nice symmetry. I
> really dislike that our current SRIOV model in Linux forces the VF to
> instantly exist without a chance for the controlling driver to
> provision it.
>
> We have some challenges on how to do this in the kernel, but I think
> we can overcome them. VFIO is ready for this thanks to all the
> restructuring work we already did.
>
> I'd really like to get away from VFIO having to do all this crazy
> sysfs crap to activate its driver. I think there is a lot of appeal to
> having, say, an nvme-cli command that just commands the controlling
> driver to provision a function, enable live migration, configure it,
> and then make it visible via VFIO. The same API regardless of whether
> the underlying controlled function technology is PF/VF/SIOV.
>
> At least we have been very interested in doing that for networking
> devices.
>
> Jason

Jason/Christoph,

As I mentioned earlier, we have two orthogonal paths here:
implementation and SPEC. For some reason they are being mixed together
in this discussion.

I've tried to understand the SPEC-related issues that were raised here;
if we fix them, the implementation will be possible and all the VFIO
efforts we did can be re-used.

At a high level, I think the SPEC needs the following:

1. Define the concept of a "virtual subsystem". A primary controller
will be able to create a virtual subsystem; inside it, the primary
controller will be the master ("the controlling" function) of the
migration process. It will also be able to add secondary controllers to
the virtual subsystem and assign a "virtual controller ID" (VCID) to
each.
Something like:
- nvme virtual_subsys_create --dev=/dev/nvme1 --virtual_nqn="my_v_nqn_1" --dev_vcid=1
- nvme virtual_subsys_add --dev=/dev/nvme1 --virtual_nqn="my_v_nqn_1" --secondary_dev=/dev/nvme2 --secondary_dev_vcid=20

2. The primary controller now has a list of controllers inside its
virtual subsystem for migration, and a handle to each that doesn't go
away after binding the controlled function to VFIO.
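To make that concrete, a minimal sketch of what the controlling driver
could keep per virtual subsystem (all type and field names below are
hypothetical, not an existing kernel API):

struct nvme_virt_subsys {
	char			v_subsysnqn[NVMF_NQN_SIZE]; /* e.g. "my_v_nqn_1" */
	struct nvme_ctrl	*primary;     /* the controlling function */
	struct list_head	secondaries;  /* controlled functions + their VCIDs */
	struct kref		ref;          /* keeps the handle alive across VFIO binding */
};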

3. The same virtual subsystem should be created on the destination
hypervisor.
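Reusing the hypothetical commands from point 1, the destination side
could look like (device names here are illustrative):
- nvme virtual_subsys_create --dev=/dev/nvme5 --virtual_nqn="my_v_nqn_1" --dev_vcid=1
- nvme virtual_subsys_add --dev=/dev/nvme5 --virtual_nqn="my_v_nqn_1" --secondary_dev=/dev/nvme6 --secondary_dev_vcid=20
with matching VCIDs, so the migrated controller keeps its virtual
identity across the move.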

4. Now the migration process starts, using the VFIO uAPI. We will get
to a point where the VFIO driver of the controlled function needs to
ask the controlling function to send admin commands to manage the
migration process.
     How to do it? That's an implementation detail. We can set a
pointer in the pci_dev/dev structures, or we can call
nvme_migration_handle_get(controlled_function), or NVMe can expose APIs
dedicated to migration, e.g. nvme_state_save(controlled_function).
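Sketching the last option, one possible shape for such an NVMe-exposed
migration API (all signatures below are hypothetical, just to make the
idea concrete):

/* Resolve the controlling function's handle for a controlled function;
 * takes a reference so the handle survives VFIO binding. */
struct nvme_mig_handle *nvme_migration_handle_get(struct pci_dev *controlled);
void nvme_migration_handle_put(struct nvme_mig_handle *handle);

/* Each of these issues the relevant admin command(s) on the
 * controlling function on behalf of the controlled one: */
int nvme_state_size(struct nvme_mig_handle *handle, size_t *size);
int nvme_state_save(struct nvme_mig_handle *handle, void *buf, size_t len);
int nvme_state_load(struct nvme_mig_handle *handle, const void *buf, size_t len);

The VFIO variant driver of the controlled function would call these
from its migration file ops, without ever touching the NVMe admin queue
itself.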


When creating a virtual subsystem and adding controllers to it, we can
control any leakage or narrow down functionality that would make
migration impossible, for example by using more admin commands. After
the migration process is over, one can remove the secondary controller
from the virtual subsystem and re-use it.
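In the same hypothetical nvme-cli style as point 1, removal might look
like:
- nvme virtual_subsys_del --dev=/dev/nvme1 --virtual_nqn="my_v_nqn_1" --secondary_dev=/dev/nvme2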

WDYT?

