KVM ARM Archive on lore.kernel.org
From: Alex Williamson <alex.williamson@redhat.com>
To: Auger Eric <eric.auger@redhat.com>
Cc: kevin.tian@intel.com, jacob.jun.pan@linux.intel.com,
	ashok.raj@intel.com, kvm@vger.kernel.org, joro@8bytes.org,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	marc.zyngier@arm.com, iommu@lists.linux-foundation.org,
	yi.l.liu@intel.com, vincent.stehle@arm.com, robin.murphy@arm.com,
	kvmarm@lists.cs.columbia.edu, eric.auger.pro@gmail.com
Subject: Re: [PATCH v8 25/29] vfio-pci: Add a new VFIO_REGION_TYPE_NESTED region type
Date: Fri, 7 Jun 2019 10:29:31 -0600
Message-ID: <20190607102931.0bf2dfe0@x1.home>
In-Reply-To: <9c1ea2db-5ba0-3cf5-3b38-2c4a125460e6@redhat.com>

On Fri, 7 Jun 2019 10:28:06 +0200
Auger Eric <eric.auger@redhat.com> wrote:

> Hi Alex,
> 
> On 6/4/19 12:31 AM, Alex Williamson wrote:
> > On Sun, 26 May 2019 18:10:00 +0200
> > Eric Auger <eric.auger@redhat.com> wrote:
> >   
> >> This patch adds two new regions aiming to handle nested mode
> >> translation faults.
> >>
> >> The first region (two host kernel pages) is read-only from the
> >> user-space perspective. The first page contains a header
> >> that provides information about the circular buffer located in the
> >> second page. The circular buffer is placed in a separate page
> >> so that it can later be made mmappable.
> >>
> >> The max user API version supported by the kernel is returned
> >> through a dedicated fault region capability.
> >>
> >> The prod header contains
> >> - the user API version in use (potentially lower than the one
> >>   returned in the capability),
> >> - the offset of the queue within the region,
> >> - the producer index relative to the start of the queue
> >> - the max number of fault records,
> >> - the size of each record.
> >>
> >> The second region is write-only from the user perspective. It
> >> contains the version of the requested fault ABI and the consumer
> >> index, which the userspace updates each time it has consumed
> >> fault records.
> >>
> >> The natural order of operation for the userspace is:
> >> - retrieve the highest supported fault ABI version
> >> - set the requested fault ABI version in the consumer region
> >>
> >> Until the ABI version is set by the userspace, the kernel
> >> cannot return a comprehensive set of information inside the
> >> prod header (entry size and number of entries in the fault queue).  
> > 
> > It's not clear to me why two regions are required for this.  If the
> > first page is not mmap capable, why does it need to be read-only?  If
> > it were not read-only couldn't the fields of the second region also fit
> > within this first page?  If you wanted to deal with an mmap capable
> > writeable region, it could just be yet a 3rd page in the first region.  
> I thought it would be clearer for the userspace to have 2 separate
> regions, one for the producer and one for the consumer. Otherwise I
> would need to specify which fields are read-only or write-only. But
> this may indeed be more self-contained in a single region.

We need to figure out read vs write anyway, but splitting them into
separate regions just for that seems unnecessary.  How many regions do
we expect to require for a single feature?

> >   
> >>
> >> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> >>
> >> ---
> >>
> >> v4 -> v5
> >> - check cons is not null in vfio_pci_check_cons_fault
> >>
> >> v3 -> v4:
> >> - use 2 separate regions, respectively in read and write modes
> >> - add the version capability
> >> ---
> >>  drivers/vfio/pci/vfio_pci.c         | 105 ++++++++++++++++++++++++++++
> >>  drivers/vfio/pci/vfio_pci_private.h |  17 +++++
> >>  drivers/vfio/pci/vfio_pci_rdwr.c    |  73 +++++++++++++++++++
> >>  include/uapi/linux/vfio.h           |  42 +++++++++++
> >>  4 files changed, 237 insertions(+)
> >>
> >> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> >> index cab71da46f4a..f75f61127277 100644
> >> --- a/drivers/vfio/pci/vfio_pci.c
> >> +++ b/drivers/vfio/pci/vfio_pci.c
> >> @@ -261,6 +261,106 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state)
> >>  	return ret;
> >>  }
> >>  
> >> +void vfio_pci_fault_release(struct vfio_pci_device *vdev,
> >> +			    struct vfio_pci_region *region)
> >> +{
> >> +}
> >> +
> >> +static const struct vfio_pci_fault_abi fault_abi_versions[] = {
> >> +	[0] = {
> >> +		.entry_size = sizeof(struct iommu_fault),
> >> +	},
> >> +};
> >> +
> >> +#define NR_FAULT_ABIS ARRAY_SIZE(fault_abi_versions)  
> > 
> > This looks like it's leading to some dangerously complicated code to
> > support multiple user-selected ABIs.  How many ABIs do we plan to
> > support?  The region capability also exposes a type, sub-type, and
> > version.  How much of this could be exposed that way?  i.e. if we need
> > to support multiple versions, expose multiple regions.  
> 
> This is something that was discussed earlier; Jean-Philippe
> suggested that we may need to support several versions of the ABI
> (typically when adding PRI support).
> Exposing multiple regions is an interesting idea and I will explore
> that direction.
> >   
> >> +
> >> +static int vfio_pci_fault_prod_add_capability(struct vfio_pci_device *vdev,
> >> +		struct vfio_pci_region *region, struct vfio_info_cap *caps)
> >> +{
> >> +	struct vfio_region_info_cap_fault cap = {
> >> +		.header.id = VFIO_REGION_INFO_CAP_PRODUCER_FAULT,
> >> +		.header.version = 1,
> >> +		.version = NR_FAULT_ABIS,
> >> +	};
> >> +	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
> >> +}
> >> +
> >> +static const struct vfio_pci_regops vfio_pci_fault_cons_regops = {
> >> +	.rw		= vfio_pci_fault_cons_rw,
> >> +	.release	= vfio_pci_fault_release,
> >> +};
> >> +
> >> +static const struct vfio_pci_regops vfio_pci_fault_prod_regops = {
> >> +	.rw		= vfio_pci_fault_prod_rw,
> >> +	.release	= vfio_pci_fault_release,
> >> +	.add_capability = vfio_pci_fault_prod_add_capability,
> >> +};
> >> +
> >> +static int vfio_pci_init_fault_region(struct vfio_pci_device *vdev)
> >> +{
> >> +	struct vfio_region_fault_prod *header;
> >> +	int ret;
> >> +
> >> +	mutex_init(&vdev->fault_queue_lock);
> >> +
> >> +	vdev->fault_pages = kzalloc(3 * PAGE_SIZE, GFP_KERNEL);
> >> +	if (!vdev->fault_pages)
> >> +		return -ENOMEM;
> >> +
> >> +	ret = vfio_pci_register_dev_region(vdev,
> >> +		VFIO_REGION_TYPE_NESTED,
> >> +		VFIO_REGION_SUBTYPE_NESTED_FAULT_PROD,
> >> +		&vfio_pci_fault_prod_regops, 2 * PAGE_SIZE,
> >> +		VFIO_REGION_INFO_FLAG_READ, vdev->fault_pages);  
> > 
> > If mmap isn't supported yet, why are we pushing the queue out to the
> > 2nd page?  We're just wasting space.  vfio_region_fault_prod.offset
> > allows us to relocate it when/if it is mmap capable.  
> OK. mmap capability is introduced in 27/29 though.
> >   
> >> +	if (ret)
> >> +		goto out;
> >> +
> >> +	ret = vfio_pci_register_dev_region(vdev,
> >> +		VFIO_REGION_TYPE_NESTED,
> >> +		VFIO_REGION_SUBTYPE_NESTED_FAULT_CONS,
> >> +		&vfio_pci_fault_cons_regops,
> >> +		sizeof(struct vfio_region_fault_cons),
> >> +		VFIO_REGION_INFO_FLAG_WRITE,
> >> +		vdev->fault_pages + 2 * PAGE_SIZE);  
> > 
> > What's the remaining (PAGE_SIZE - sizeof(struct vfio_region_fault_cons))
> > bytes used for?  
> They are not used.

So we probably don't want to allocate a full page for it.  Seems little
reason to separate by pages when mmap is not supported anyway.
 
> >> +	if (ret)
> >> +		goto out;
> >> +
> >> +	header = (struct vfio_region_fault_prod *)vdev->fault_pages;
> >> +	header->version = -1;
> >> +	header->offset = PAGE_SIZE;
> >> +	return 0;
> >> +out:
> >> +	kfree(vdev->fault_pages);
> >> +	return ret;
> >> +}
> >> +
> >> +int vfio_pci_check_cons_fault(struct vfio_pci_device *vdev,
> >> +			     struct vfio_region_fault_cons *cons_header)
> >> +{
> >> +	struct vfio_region_fault_prod *prod_header =
> >> +		(struct vfio_region_fault_prod *)vdev->fault_pages;
> >> +
> >> +	if (cons_header->version > NR_FAULT_ABIS)
> >> +		return -EINVAL;
> >> +
> >> +	if (!vdev->fault_abi) {
> >> +		vdev->fault_abi = cons_header->version;
> >> +		prod_header->entry_size =
> >> +			fault_abi_versions[vdev->fault_abi - 1].entry_size;
> >> +		prod_header->nb_entries = PAGE_SIZE / prod_header->entry_size;  
> > 
> > Is this sufficient for 4K hosts?  Clearly a 64K host has 16x the number
> > of entries, so if this is a heuristic the results are vastly different.  
> This series only deals with unrecoverable errors. We don't expect
> many of them, so I did not see the need for a more elaborate
> heuristic. The PRI use case would indeed require reconsidering the
> size of the fault queue. If that feature is introduced later with a
> new region type, could we handle the sizing question then?
> 
> In practice the event queue size is set by the guest SMMUv3 driver
> and trapped at the QEMU SMMUv3 device level. So we could communicate
> this info through IOMMU MR notifiers, but that is a rather
> complicated chain and I would rather avoid the complexity if it is
> not necessary.

I'd hope that this interface accounts for both errors and page mapping
requests, and that things like queue size are specified in the user
interface to handle these sorts of differences.  Thanks,

Alex


Thread overview: 64+ messages
2019-05-26 16:09 [PATCH v8 00/29] SMMUv3 Nested Stage Setup Eric Auger
2019-05-26 16:09 ` [PATCH v8 01/29] driver core: Add per device iommu param Eric Auger
2019-05-26 16:09 ` [PATCH v8 02/29] iommu: Introduce device fault data Eric Auger
2019-05-26 16:09 ` [PATCH v8 03/29] iommu: Introduce device fault report API Eric Auger
2019-05-26 16:09 ` [PATCH v8 04/29] iommu: Add recoverable fault reporting Eric Auger
2019-06-03 22:31   ` Alex Williamson
2019-06-04 15:48     ` Jacob Pan
2019-05-26 16:09 ` [PATCH v8 05/29] iommu: Add a timeout parameter for PRQ response Eric Auger
2019-06-03 22:32   ` Alex Williamson
2019-06-04 10:52     ` Jean-Philippe Brucker
2019-06-04 15:50       ` Jacob Pan
2019-05-26 16:09 ` [PATCH v8 06/29] trace/iommu: Add sva trace events Eric Auger
2019-05-26 16:09 ` [PATCH v8 07/29] iommu: Use device fault trace event Eric Auger
2019-05-26 16:09 ` [PATCH v8 08/29] iommu: Introduce attach/detach_pasid_table API Eric Auger
2019-05-26 16:09 ` [PATCH v8 09/29] iommu: Introduce cache_invalidate API Eric Auger
2019-05-26 16:09 ` [PATCH v8 10/29] iommu: Introduce bind/unbind_guest_msi Eric Auger
2019-05-26 16:09 ` [PATCH v8 11/29] iommu/arm-smmu-v3: Maintain a SID->device structure Eric Auger
2019-05-26 16:09 ` [PATCH v8 12/29] iommu/smmuv3: Dynamically allocate s1_cfg and s2_cfg Eric Auger
2019-05-26 16:09 ` [PATCH v8 13/29] iommu/smmuv3: Get prepared for nested stage support Eric Auger
2019-05-26 16:09 ` [PATCH v8 14/29] iommu/smmuv3: Implement attach/detach_pasid_table Eric Auger
2019-05-26 16:09 ` [PATCH v8 15/29] iommu/smmuv3: Introduce __arm_smmu_tlb_inv_asid/s1_range_nosync Eric Auger
2019-05-26 16:09 ` [PATCH v8 16/29] iommu/smmuv3: Implement cache_invalidate Eric Auger
2019-05-26 16:09 ` [PATCH v8 17/29] dma-iommu: Implement NESTED_MSI cookie Eric Auger
2019-05-26 16:09 ` [PATCH v8 18/29] iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement Eric Auger
2019-05-26 16:09 ` [PATCH v8 19/29] iommu/smmuv3: Enforce incompatibility between nested mode and HW MSI regions Eric Auger
2019-05-26 16:09 ` [PATCH v8 20/29] iommu/smmuv3: Implement bind/unbind_guest_msi Eric Auger
2019-05-26 16:09 ` [PATCH v8 21/29] iommu/smmuv3: Report non recoverable faults Eric Auger
2019-05-26 16:09 ` [PATCH v8 22/29] vfio: VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE Eric Auger
2019-06-03 22:32   ` Alex Williamson
2019-05-26 16:09 ` [PATCH v8 23/29] vfio: VFIO_IOMMU_CACHE_INVALIDATE Eric Auger
2019-06-14 12:38   ` Liu, Yi L
2019-06-14 13:17     ` Auger Eric
2019-05-26 16:09 ` [PATCH v8 24/29] vfio: VFIO_IOMMU_BIND/UNBIND_MSI Eric Auger
2019-06-03 22:32   ` Alex Williamson
2019-06-07  8:30     ` Auger Eric
2019-05-26 16:10 ` [PATCH v8 25/29] vfio-pci: Add a new VFIO_REGION_TYPE_NESTED region type Eric Auger
2019-06-03 22:31   ` Alex Williamson
2019-06-07  8:28     ` Auger Eric
2019-06-07 12:47       ` Jean-Philippe Brucker
2019-06-07 16:29       ` Alex Williamson [this message]
2019-05-26 16:10 ` [PATCH v8 26/29] vfio-pci: Register an iommu fault handler Eric Auger
2019-06-03 22:31   ` Alex Williamson
2019-06-04 16:11     ` Auger Eric
2019-06-05 22:45       ` Jacob Pan
2019-06-06 18:54         ` Jean-Philippe Brucker
2019-06-06 20:29           ` Jacob Pan
2019-06-07  7:02             ` Auger Eric
2019-06-07 10:28             ` Jean-Philippe Brucker
2019-06-07 17:43               ` Jacob Pan
2019-06-10 12:45                 ` Jean-Philippe Brucker
2019-06-10 21:31                   ` Jacob Pan
2019-06-11 13:14                     ` Jean-Philippe Brucker
2019-06-12 18:53                       ` Jacob Pan
2019-06-18 14:04                         ` Jean-Philippe Brucker
2019-06-19  0:19                           ` Jacob Pan
2019-06-19 11:44                             ` Jean-Philippe Brucker
2019-07-11 13:07                           ` Auger Eric
2019-06-07 12:48   ` Jean-Philippe Brucker
2019-06-07 14:18     ` Auger Eric
2019-05-26 16:10 ` [PATCH v8 27/29] vfio_pci: Allow to mmap the fault queue Eric Auger
2019-05-26 16:10 ` [PATCH v8 28/29] vfio-pci: Add VFIO_PCI_DMA_FAULT_IRQ_INDEX Eric Auger
2019-06-03 22:31   ` Alex Williamson
2019-06-04 16:11     ` Auger Eric
2019-05-26 16:10 ` [PATCH v8 29/29] vfio: Document nested stage control Eric Auger
