From: Alex Williamson <alex.williamson@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <cohuck@redhat.com>, <kvm@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <peterx@redhat.com>
Subject: Re: [RFC PATCH 05/10] vfio: Create a vfio_device from vma lookup
Date: Thu, 4 Mar 2021 14:37:57 -0700
Message-ID: <20210304143757.1ca42cfc@omen.home.shazbot.org>
In-Reply-To: <20210225234949.GV4247@nvidia.com>

On Thu, 25 Feb 2021 19:49:49 -0400
Jason Gunthorpe <jgg@nvidia.com> wrote:

> On Thu, Feb 25, 2021 at 03:21:13PM -0700, Alex Williamson wrote:
> 
> > This is where it gets tricky.  The vm_pgoff we get from
> > file_operations.mmap is already essentially describing an offset from
> > the base of a specific resource.  We could convert that from an absolute
> > offset to a pfn offset, but it's only the bus driver code (ex.
> > vfio-pci) that knows how to get the base, assuming there is a single
> > base per region (we can't assume enough bits per region to store
> > absolute pfn).  Also note that you're suggesting that all vfio mmaps
> > would need to standardize on the vfio-pci implementation of region
> > layouts.  Not that most drivers haven't copied vfio-pci, but we've
> > specifically avoided exposing it as a fixed uAPI such that we could have
> > the flexibility for a bus driver to implement region offsets however
> > they need.  
> 
> Okay, well the bus driver owns the address space and is in control of
> the vm_pgoff. If it doesn't want to zap, then it doesn't need to do
> anything.
> 
> vfio-pci can consistently use the index encoding and be fine.
>  
> > So I'm not really sure what this looks like.  Within vfio-pci we could
> > keep the index bits in place to allow unmap_mapping_range() to
> > selectively zap matching vm_pgoffs but expanding that to a vfio
> > standard such that the IOMMU backend can also extract a pfn looks very
> > limiting, or ugly.  Thanks,  
> 
> Let's add an op to convert a vma into a PFN range. The map code will
> pass the vma to the op and get back a pfn (or failure).
> 
> vfio-pci will switch the vm_pgoff to an index, find the BAR base, and
> compute the pfn.
> 
> It is starting to look more and more like dma-buf, though.
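
For concreteness, such an op might look something like the sketch
below.  The callback name and signature are illustrative only, but the
index encoding matches the VFIO_PCI_OFFSET_SHIFT scheme vfio-pci
already uses internally:

/*
 * Illustrative sketch only: translate a vma back to the pfn backing
 * it.  vfio-pci encodes the region index in the high bits of the mmap
 * offset, so it can recover the BAR index from vm_pgoff and compute
 * the pfn from the resource start.
 */
static int vfio_pci_get_vma_pfn(void *device_data,
				struct vm_area_struct *vma,
				unsigned long *pfn)
{
	struct vfio_pci_device *vdev = device_data;
	unsigned int index = vma->vm_pgoff >>
			     (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
	unsigned long pgoff = vma->vm_pgoff &
		((1UL << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);

	/* only BARs are mmap'able in this sketch */
	if (index > VFIO_PCI_BAR5_REGION_INDEX)
		return -EINVAL;

	*pfn = (pci_resource_start(vdev->pdev, index) >> PAGE_SHIFT) + pgoff;
	return 0;
}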

How terrible would it be if vfio-core used a shared vm_private_data
structure and inserted itself into the vm_ops call chain to reference
count that structure?  We already wrap the device file descriptor via
vfio_device_fops.mmap, so we could:

	struct vfio_vma_private_data *priv;
	int ret;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	kref_init(&priv->kref);
	priv->device = device;
	vma->vm_private_data = priv;

	ret = device->ops->mmap(device->device_data, vma);
	if (ret) {
		kfree(priv);
		return ret;
	}

	/* bus driver kept our private data, so chain its vm_ops */
	if (vma->vm_private_data == priv) {
		priv->vm_ops = vma->vm_ops;
		vma->vm_ops = &vfio_vm_ops;
	} else {
		/* bus driver opted out by replacing vm_private_data */
		kfree(priv);
	}

	return 0;

Where:

struct vfio_vma_private_data {
	struct vfio_device *device;
	unsigned long pfn_base;
	void *private_data;	/* maybe not needed */
	const struct vm_operations_struct *vm_ops;
	struct kref kref;
	unsigned int release_notification:1;
};

Therefore, unless a bus driver opts out by replacing vm_private_data,
we can identify participating vmas by their vm_ops and use flags to
indicate whether the vma maps device memory, such that
vfio_get_device_from_vma() should produce a device reference.  The
vfio IOMMU backends would also consume this, ie. if they get a valid
vfio_device from the vma, they use the pfn_base field directly.
vfio_vm_ops would wrap the bus driver callbacks and provide reference
counting on open/close to release this object.
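
Concretely, the wrapping and lookup might look something like the
sketch below.  This is only illustrative: vfio_vma_priv_release() is a
made-up name, and the device reference taken on lookup and the
interaction with the release notification are hand-waved:

static void vfio_vma_priv_release(struct kref *kref)
{
	struct vfio_vma_private_data *priv =
		container_of(kref, struct vfio_vma_private_data, kref);

	kfree(priv);
}

static void vfio_vma_open(struct vm_area_struct *vma)
{
	struct vfio_vma_private_data *priv = vma->vm_private_data;

	kref_get(&priv->kref);
	if (priv->vm_ops && priv->vm_ops->open)
		priv->vm_ops->open(vma);
}

static void vfio_vma_close(struct vm_area_struct *vma)
{
	struct vfio_vma_private_data *priv = vma->vm_private_data;

	if (priv->vm_ops && priv->vm_ops->close)
		priv->vm_ops->close(vma);
	kref_put(&priv->kref, vfio_vma_priv_release);
}

static vm_fault_t vfio_vma_fault(struct vm_fault *vmf)
{
	struct vfio_vma_private_data *priv = vmf->vma->vm_private_data;

	if (priv->vm_ops && priv->vm_ops->fault)
		return priv->vm_ops->fault(vmf);
	return VM_FAULT_SIGBUS;
}

static const struct vm_operations_struct vfio_vm_ops = {
	.open = vfio_vma_open,
	.close = vfio_vma_close,
	.fault = vfio_vma_fault,
};

struct vfio_device *vfio_get_device_from_vma(struct vm_area_struct *vma)
{
	struct vfio_vma_private_data *priv;

	if (vma->vm_ops != &vfio_vm_ops)
		return NULL;	/* vma is not one of ours */

	priv = vma->vm_private_data;
	/* a real version would take a vfio_device reference here */
	return priv->device;
}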

I'm not thrilled with a vfio_device_ops callback plumbed through
vfio-core to do vma-to-pfn translation, so I thought this might be a
better alternative.  Thanks,

Alex

