Linux-PCI Archive on lore.kernel.org
From: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
To: Jon Derrick <jonathan.derrick@intel.com>
Cc: linux-pci@vger.kernel.org, qemu-devel@nongnu.org,
	Bjorn Helgaas <helgaas@kernel.org>,
	virtualization@lists.linux-foundation.org,
	Christoph Hellwig <hch@lst.de>,
	Andrzej Jakowski <andrzej.jakowski@linux.intel.com>,
	Alex Williamson <alex.williamson@redhat.com>
Subject: Re: [PATCH v3 2/2] PCI: vmd: Use Shadow MEMBAR registers for QEMU/KVM guests
Date: Mon, 6 Jul 2020 10:16:25 +0100
Message-ID: <20200706091625.GA26377@e121166-lin.cambridge.arm.com> (raw)
In-Reply-To: <20200528030240.16024-4-jonathan.derrick@intel.com>

On Wed, May 27, 2020 at 11:02:40PM -0400, Jon Derrick wrote:
> VMD device 28C0 natively assists guest passthrough of the VMD endpoint
> through the use of shadow registers that provide Host Physical Addresses
> to correctly assign bridge windows. These shadow registers are only
> available if VMD config space register 0x70, bit 1 is set.
> 
> In order to support this mode in existing VMD devices that don't
> natively support the shadow registers, it was decided that the hypervisor
> could offer the shadow registers in a vendor-specific PCI capability.
> 
> QEMU has been modified to create this vendor-specific capability and
> supply the shadow MEMBAR registers for VMDs that don't natively support
> this feature. This patch adds support for this mode and updates the
> supported device list to allow the feature to be used on these VMDs.
> 
> Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> ---
>  drivers/pci/controller/vmd.c | 44 ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 38 insertions(+), 6 deletions(-)

Applied to pci/vmd, thanks.

Lorenzo

> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index e386d4e..76d8acb 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -40,13 +40,19 @@ enum vmd_features {
>  	 * membars, in order to allow proper address translation during
>  	 * resource assignment to enable guest virtualization
>  	 */
> -	VMD_FEAT_HAS_MEMBAR_SHADOW	= (1 << 0),
> +	VMD_FEAT_HAS_MEMBAR_SHADOW		= (1 << 0),
>  
>  	/*
>  	 * Device may provide root port configuration information which limits
>  	 * bus numbering
>  	 */
> -	VMD_FEAT_HAS_BUS_RESTRICTIONS	= (1 << 1),
> +	VMD_FEAT_HAS_BUS_RESTRICTIONS		= (1 << 1),
> +
> +	/*
> +	 * Device contains physical location shadow registers in
> +	 * vendor-specific capability space
> +	 */
> +	VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP	= (1 << 2),
>  };
>  
>  /*
> @@ -454,6 +460,28 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  		}
>  	}
>  
> +	if (features & VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP) {
> +		int pos = pci_find_capability(vmd->dev, PCI_CAP_ID_VNDR);
> +		u32 reg, regu;
> +
> +		pci_read_config_dword(vmd->dev, pos + 4, &reg);
> +
> +		/* "SHDW" */
> +		if (pos && reg == 0x53484457) {
> +			pci_read_config_dword(vmd->dev, pos + 8, &reg);
> +			pci_read_config_dword(vmd->dev, pos + 12, &regu);
> +			offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
> +					(((u64) regu << 32 | reg) &
> +					 PCI_BASE_ADDRESS_MEM_MASK);
> +
> +			pci_read_config_dword(vmd->dev, pos + 16, &reg);
> +			pci_read_config_dword(vmd->dev, pos + 20, &regu);
> +			offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
> +					(((u64) regu << 32 | reg) &
> +					 PCI_BASE_ADDRESS_MEM_MASK);
> +		}
> +	}
> +
>  	/*
>  	 * Certain VMD devices may have a root port configuration option which
>  	 * limits the bus range to between 0-127, 128-255, or 224-255
> @@ -716,16 +744,20 @@ static int vmd_resume(struct device *dev)
>  static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume);
>  
>  static const struct pci_device_id vmd_ids[] = {
> -	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D),},
> +	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D),
> +		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP,},
>  	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0),
>  		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
>  				VMD_FEAT_HAS_BUS_RESTRICTIONS,},
>  	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
> -		.driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> +		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
> +				VMD_FEAT_HAS_BUS_RESTRICTIONS,},
>  	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d),
> -		.driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> +		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
> +				VMD_FEAT_HAS_BUS_RESTRICTIONS,},
>  	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
> -		.driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> +		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
> +				VMD_FEAT_HAS_BUS_RESTRICTIONS,},
>  	{0,}
>  };
>  MODULE_DEVICE_TABLE(pci, vmd_ids);
> -- 
> 1.8.3.1
> 
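
The vendor-specific capability consumed above carries a "SHDW" signature
dword at capability offset 4, followed by two 64-bit shadow MEMBAR base
addresses (host physical) at offsets 8 and 16; the driver derives each
translation offset by subtracting the shadow base from the guest-assigned
MEMBAR start. The sketch below restates that read sequence as a standalone
helper for illustration only; the function name, signature and error
handling are assumptions, not part of the applied patch.

#include <linux/pci.h>

#define VMD_VSCAP_SHDW_SIG  0x53484457  /* "SHDW" */

/* Hypothetical helper, not part of the applied patch. */
static int vmd_get_vscap_offsets(struct pci_dev *dev,
                                 resource_size_t membar1_start,
                                 resource_size_t membar2_start,
                                 resource_size_t offset[2])
{
        int pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
        u32 lo, hi, sig;
        u64 shdw;

        if (!pos)
                return -ENODEV;

        pci_read_config_dword(dev, pos + 4, &sig);
        if (sig != VMD_VSCAP_SHDW_SIG)
                return -ENODEV;

        /* Shadow MEMBAR1 base, programmed by the hypervisor (host physical) */
        pci_read_config_dword(dev, pos + 8, &lo);
        pci_read_config_dword(dev, pos + 12, &hi);
        shdw = ((u64)hi << 32 | lo) & PCI_BASE_ADDRESS_MEM_MASK;
        offset[0] = membar1_start - shdw;

        /* Shadow MEMBAR2 base */
        pci_read_config_dword(dev, pos + 16, &lo);
        pci_read_config_dword(dev, pos + 20, &hi);
        shdw = ((u64)hi << 32 | lo) & PCI_BASE_ADDRESS_MEM_MASK;
        offset[1] = membar2_start - shdw;

        return 0;
}

In this sketch, pos is validated before the first config space read, so no
read is issued at offset 4 when no vendor-specific capability is present.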


Thread overview: 11+ messages
2020-05-28  3:02 [PATCH v3 0/2] VMD endpoint passthrough support Jon Derrick
2020-05-28  3:02 ` [PATCH v3 FOR QEMU v3] hw/vfio: Add VMD Passthrough Quirk Jon Derrick
2020-05-28  3:02 ` [PATCH v3 1/2] PCI: vmd: Filter resource type bits from shadow register Jon Derrick
2020-05-29 10:33   ` Lorenzo Pieralisi
2020-05-29 15:53     ` Derrick, Jonathan
2020-05-29 16:18       ` Lorenzo Pieralisi
2020-06-11 21:16         ` Derrick, Jonathan
2020-06-12 13:54           ` Lorenzo Pieralisi
2020-06-12 15:11             ` Derrick, Jonathan
2020-05-28  3:02 ` [PATCH v3 2/2] PCI: vmd: Use Shadow MEMBAR registers for QEMU/KVM guests Jon Derrick
2020-07-06  9:16   ` Lorenzo Pieralisi [this message]
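
The companion QEMU change listed above ("hw/vfio: Add VMD Passthrough
Quirk") creates this capability on the host side. As a rough illustration
of what the emitted bytes look like, independent of any QEMU API, a
hypothetical host-side sketch follows; the capability length, helper name
and little-endian byte layout are assumptions inferred from the offsets the
guest driver reads, not taken from the QEMU patch.

#include <stdint.h>
#include <string.h>

#define PCI_CAP_ID_VNDR   0x09        /* vendor-specific capability ID */
#define VMD_SHDW_CAP_LEN  0x18        /* assumed: header + signature + 2 x u64 */
#define VMD_SHDW_SIG      0x53484457  /* "SHDW" */

/* Fill @cap (at least VMD_SHDW_CAP_LEN bytes) with the capability body. */
static void vmd_fill_shadow_cap(uint8_t *cap, uint8_t next_cap,
                                uint64_t host_membar1, uint64_t host_membar2)
{
        uint32_t sig = VMD_SHDW_SIG;

        cap[0] = PCI_CAP_ID_VNDR;     /* capability ID */
        cap[1] = next_cap;            /* next capability pointer */
        cap[2] = VMD_SHDW_CAP_LEN;    /* vendor-specific length byte (assumed) */
        cap[3] = 0;                   /* padding */
        /* PCI config space is little-endian; assumes a little-endian host. */
        memcpy(cap + 4, &sig, sizeof(sig));                      /* "SHDW" */
        memcpy(cap + 8, &host_membar1, sizeof(host_membar1));    /* shadow MEMBAR1 */
        memcpy(cap + 16, &host_membar2, sizeof(host_membar2));   /* shadow MEMBAR2 */
}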
