From: Alex Williamson <alex.williamson@redhat.com>
To: Colin Xu <colin.xu@intel.com>
Cc: kvm@vger.kernel.org, zhenyuw@linux.intel.com,
	hang.yuan@linux.intel.com, swee.yee.fonn@intel.com,
	fred.gao@intel.com
Subject: Re: [PATCH v2] vfio/pci: Add OpRegion 2.0 Extended VBT support.
Date: Mon, 30 Aug 2021 14:27:42 -0600	[thread overview]
Message-ID: <20210830142742.402af95f.alex.williamson@redhat.com> (raw)
In-Reply-To: <20210827023716.105075-1-colin.xu@intel.com>

On Fri, 27 Aug 2021 10:37:16 +0800
Colin Xu <colin.xu@intel.com> wrote:

> Due to historical reasons, some legacy shipped systems don't follow the
> OpRegion 2.1 spec but still stick to OpRegion 2.0, in which the extended
> VBT is not contiguous after the OpRegion in physical address, but at an
> arbitrary location pointed to by RVDA via an absolute address. Thus it's
> impossible to map a contiguous range to hold both OpRegion and extended
> VBT as in 2.1.
> 
> Since the only difference between OpRegion 2.0 and 2.1 is where the
> extended VBT is stored: for 2.0, RVDA is the absolute address of the
> extended VBT, while for 2.1, RVDA is the address of the extended VBT
> relative to the OpRegion base, it's feasible to amend OpRegion support
> for these legacy systems (before upgrading the system firmware) by
> kzalloc'ing a range, shadowing the OpRegion at its beginning, stitching
> the VBT closely after it, patching the shadow OpRegion version from 2.0
> to 2.1, and patching the shadow RVDA to a relative address. That way,
> from the vfio igd OpRegion r/w ops view, only OpRegion 2.1 is exposed
> regardless of whether the underlying host OpRegion is 2.0 or 2.1, if the
> extended VBT exists. The vfio igd OpRegion r/w ops return either the
> shadowed data (OpRegion 2.0) or data directly from the physical address
> (OpRegion 2.1+), based on the host OpRegion version and RVDA/RVDS. The
> shadow mechanism makes it possible to support legacy systems on the
> market.
> 
> V2:
> Validate RVDA for 2.1+ before increasing total size. (Alex)
> 
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> Cc: Hang Yuan <hang.yuan@linux.intel.com>
> Cc: Swee Yee Fonn <swee.yee.fonn@intel.com>
> Cc: Fred Gao <fred.gao@intel.com>
> Signed-off-by: Colin Xu <colin.xu@intel.com>
> ---
>  drivers/vfio/pci/vfio_pci_igd.c | 117 ++++++++++++++++++++------------
>  1 file changed, 75 insertions(+), 42 deletions(-)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_igd.c b/drivers/vfio/pci/vfio_pci_igd.c
> index 228df565e9bc..9cd44498b378 100644
> --- a/drivers/vfio/pci/vfio_pci_igd.c
> +++ b/drivers/vfio/pci/vfio_pci_igd.c
> @@ -48,7 +48,10 @@ static size_t vfio_pci_igd_rw(struct vfio_pci_device *vdev, char __user *buf,
>  static void vfio_pci_igd_release(struct vfio_pci_device *vdev,
>  				 struct vfio_pci_region *region)
>  {
> -	memunmap(region->data);
> +	if (is_ioremap_addr(region->data))
> +		memunmap(region->data);
> +	else
> +		kfree(region->data);


Since we don't have write support to the OpRegion, should we always
allocate a shadow copy to simplify?  Or rather than a shadow copy,
since we don't support mmap of the region, our read handler could
virtualize version and rvda on the fly and shift accesses so that the
VBT appears contiguous.  That might also leave us better positioned for
handling dynamic changes (ex. does the data change when a monitor is
plugged/unplugged) and perhaps eventually write support.
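To make the suggestion concrete, such an on-the-fly scheme might look roughly
like the sketch below. This is a userspace illustration with hypothetical names
(igd_read_byte, struct igd_opregion), not the actual vfio handler; the
OPREGION_VERSION/OPREGION_RVDA offsets are taken from the driver's defines:

```c
#include <stddef.h>
#include <stdint.h>

#define OPREGION_SIZE    (8 * 1024)
#define OPREGION_VERSION 0x16
#define OPREGION_RVDA    0x3ba

/* Hypothetical state: host OpRegion mapping plus a separately mapped VBT. */
struct igd_opregion {
	const uint8_t *opregion;   /* host OpRegion (v2.0) */
	const uint8_t *vbt;        /* extended VBT at its absolute address */
	size_t vbt_size;
};

/* Read one byte at a virtual offset, fixing up version and RVDA on the fly
 * so the guest always sees a v2.1 layout with the VBT contiguous. */
static uint8_t igd_read_byte(const struct igd_opregion *r, size_t off)
{
	/* Virtualize version as 0x0201 (little-endian: minor byte first). */
	if (off == OPREGION_VERSION)
		return 0x01;
	if (off == OPREGION_VERSION + 1)
		return 0x02;

	/* Virtualize RVDA as the relative offset OPREGION_SIZE (__le64). */
	if (off >= OPREGION_RVDA && off < OPREGION_RVDA + 8) {
		uint64_t rvda = OPREGION_SIZE;
		return (uint8_t)(rvda >> (8 * (off - OPREGION_RVDA)));
	}

	/* Shift accesses so the VBT appears right after the OpRegion. */
	if (off >= OPREGION_SIZE && off < OPREGION_SIZE + r->vbt_size)
		return r->vbt[off - OPREGION_SIZE];

	return r->opregion[off];
}
```

Nothing is copied up front, so a re-read always reflects the current host
data, which is what would make dynamic changes and eventual write support
easier to layer on top.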


>  }
>  
>  static const struct vfio_pci_regops vfio_pci_igd_regops = {
> @@ -59,10 +62,11 @@ static const struct vfio_pci_regops vfio_pci_igd_regops = {
>  static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
>  {
>  	__le32 *dwordp = (__le32 *)(vdev->vconfig + OPREGION_PCI_ADDR);
> -	u32 addr, size;
> -	void *base;
> +	u32 addr, size, rvds = 0;
> +	void *base, *opregionvbt;


opregionvbt could be scoped within the branch where it's used.


>  	int ret;
>  	u16 version;
> +	u64 rvda = 0;
>  
>  	ret = pci_read_config_dword(vdev->pdev, OPREGION_PCI_ADDR, &addr);
>  	if (ret)
> @@ -89,66 +93,95 @@ static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
>  	size *= 1024; /* In KB */
>  
>  	/*
> -	 * Support opregion v2.1+
> -	 * When VBT data exceeds 6KB size and cannot be within mailbox #4, then
> -	 * the Extended VBT region next to opregion is used to hold the VBT data.
> -	 * RVDA (Relative Address of VBT Data from Opregion Base) and RVDS
> -	 * (Raw VBT Data Size) from opregion structure member are used to hold the
> -	 * address from region base and size of VBT data. RVDA/RVDS are not
> -	 * defined before opregion 2.0.
> +	 * OpRegion and VBT:
> +	 * When VBT data doesn't exceed 6KB, it's stored in Mailbox #4.
> +	 * When VBT data exceeds 6KB, Mailbox #4 is no longer large enough to
> +	 * hold it, so the Extended VBT region was introduced in OpRegion 2.0
> +	 * to hold the VBT data, along with RVDA/RVDS to define the extended
> +	 * VBT data location and size.
> +	 * OpRegion 2.0: RVDA defines the absolute physical address of the
> +	 *   extended VBT data, RVDS defines the VBT data size.
> +	 * OpRegion 2.1 and above: RVDA defines the relative address of the
> +	 *   extended VBT data to OpRegion base, RVDS defines the VBT data size.
>  	 *
> -	 * opregion 2.1+: RVDA is unsigned, relative offset from
> -	 * opregion base, and should point to the end of opregion.
> -	 * otherwise, exposing to userspace to allow read access to everything between
> -	 * the OpRegion and VBT is not safe.
> -	 * RVDS is defined as size in bytes.
> -	 *
> -	 * opregion 2.0: rvda is the physical VBT address.
> -	 * Since rvda is HPA it cannot be directly used in guest.
> -	 * And it should not be practically available for end user,so it is not supported.
> +	 * Due to the RVDA difference (also the only difference between 2.0
> +	 * and 2.1), for OpRegion 2.1 and above it's possible to map a
> +	 * contiguous range to expose OpRegion and VBT r/w via the vfio
> +	 * region, while for OpRegion 2.0 a shadow-and-amendment mechanism is
> +	 * used instead. So from the r/w ops view, only OpRegion 2.1 is
> +	 * exposed regardless of whether the underlying OpRegion is 2.0 or 2.1.
>  	 */
>  	version = le16_to_cpu(*(__le16 *)(base + OPREGION_VERSION));
> -	if (version >= 0x0200) {
> -		u64 rvda;
> -		u32 rvds;
>  
> +	if (version >= 0x0200) {
>  		rvda = le64_to_cpu(*(__le64 *)(base + OPREGION_RVDA));
>  		rvds = le32_to_cpu(*(__le32 *)(base + OPREGION_RVDS));
> +
> +		/* The extended VBT must follow the OpRegion for OpRegion 2.1+ */


Why?  If we're going to make our own OpRegion to account for v2.0, why
should it not apply to the same scenario for >2.0?


> +		if (rvda != size && version > 0x0200) {
> +			memunmap(base);
> +			pci_err(vdev->pdev,
> +				"Extended VBT does not follow opregion on version 0x%04x\n",
> +				version);
> +			return -EINVAL;
> +		}
> +
> +		/* The extended VBT is valid only when RVDA/RVDS are non-zero. */
>  		if (rvda && rvds) {
> -			/* no support for opregion v2.0 with physical VBT address */
> -			if (version == 0x0200) {
> +			size += rvds;
> +		}
> +	}
> +
> +	if (size != OPREGION_SIZE) {


@size can only != OPREGION_SIZE due to the above branch, so the below
could all be scoped under the version test, or perhaps to a separate
function.
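As a rough illustration of what such a separate function could look like, here
is a userspace sketch (build_shadow is a hypothetical name; calloc stands in
for kzalloc, and the little-endian stores are done bytewise since the kernel's
cpu_to_le* helpers aren't available here):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define OPREGION_SIZE    (8 * 1024)
#define OPREGION_VERSION 0x16
#define OPREGION_RVDA    0x3ba

/* Build a contiguous shadow: OpRegion followed by the VBT, with the version
 * patched to 2.1 and RVDA rewritten as a relative offset. */
static uint8_t *build_shadow(const void *opregion, const void *vbt, size_t rvds)
{
	uint8_t *shadow = calloc(1, OPREGION_SIZE + rvds);
	uint64_t rvda = OPREGION_SIZE;
	int i;

	if (!shadow)
		return NULL;

	memcpy(shadow, opregion, OPREGION_SIZE);
	memcpy(shadow + OPREGION_SIZE, vbt, rvds);

	/* Patch version to 0x0201, stored little-endian. */
	shadow[OPREGION_VERSION] = 0x01;
	shadow[OPREGION_VERSION + 1] = 0x02;

	/* Patch RVDA to the relative offset OPREGION_SIZE, little-endian. */
	for (i = 0; i < 8; i++)
		shadow[OPREGION_RVDA + i] = (uint8_t)(rvda >> (8 * i));

	return shadow;
}
```

Scoping the whole allocation-and-stitch path in one helper would also keep the
size != OPREGION_SIZE test from being reachable outside the version branch.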


> +		/* Allocate memory for OpRegion and extended VBT for 2.0 */
> +		if (rvda && rvds && version == 0x0200) {


We go down this path even if the VBT was contiguous with the OpRegion.


> +			void *vbt_base;
> +
> +			vbt_base = memremap(rvda, rvds, MEMREMAP_WB);
> +			if (!vbt_base) {
>  				memunmap(base);
> -				pci_err(vdev->pdev,
> -					"IGD assignment does not support opregion v2.0 with an extended VBT region\n");
> -				return -EINVAL;
> +				return -ENOMEM;
>  			}
>  
> -			if (rvda != size) {
> +			opregionvbt = kzalloc(size, GFP_KERNEL);
> +			if (!opregionvbt) {
>  				memunmap(base);
> -				pci_err(vdev->pdev,
> -					"Extended VBT does not follow opregion on version 0x%04x\n",
> -					version);
> -				return -EINVAL;
> +				memunmap(vbt_base);
> +				return -ENOMEM;
>  			}
>  
> -			/* region size for opregion v2.0+: opregion and VBT size. */
> -			size += rvds;
> +			/* Stitch the noncontiguous VBT right after the OpRegion */
> +			memcpy(opregionvbt, base, OPREGION_SIZE);
> +			memcpy(opregionvbt + OPREGION_SIZE, vbt_base, rvds);
> +
> +			/* Patch OpRegion 2.0 to 2.1 */
> +			*(__le16 *)(opregionvbt + OPREGION_VERSION) = 0x0201;


= cpu_to_le16(0x0201);


> +			/* Patch RVDA to relative address after OpRegion */
> +			*(__le64 *)(opregionvbt + OPREGION_RVDA) = OPREGION_SIZE;


= cpu_to_le64(OPREGION_SIZE);


I think this is what triggered the sparse errors.  Thanks,

Alex
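For reference, the conversion matters because __le16/__le64 fields must hold
little-endian data regardless of host byte order; a plain integer store writes
host order, which is wrong on big-endian and is exactly what sparse's __bitwise
checks flag. A userspace stand-in for cpu_to_le16 (hypothetical name, bytewise
store since the kernel helper isn't available here) behaves like this:

```c
#include <stdint.h>

/* Produce the bit pattern of v laid out in little-endian byte order,
 * independent of the host's native endianness. */
static uint16_t my_cpu_to_le16(uint16_t v)
{
	union { uint16_t u; uint8_t b[2]; } out;

	out.b[0] = (uint8_t)(v & 0xff);  /* least-significant byte first */
	out.b[1] = (uint8_t)(v >> 8);
	return out.u;
}
```

On a little-endian host this is a no-op, which is why the bug is easy to miss
without sparse.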

> +
> +			memunmap(vbt_base);
> +			memunmap(base);
> +
> +			/* Register shadow instead of map as vfio_region */
> +			base = opregionvbt;
> +		/* Remap OpRegion + extended VBT for 2.1+ */
> +		} else {
> +			memunmap(base);
> +			base = memremap(addr, size, MEMREMAP_WB);
> +			if (!base)
> +				return -ENOMEM;
>  		}
>  	}
>  
> -	if (size != OPREGION_SIZE) {
> -		memunmap(base);
> -		base = memremap(addr, size, MEMREMAP_WB);
> -		if (!base)
> -			return -ENOMEM;
> -	}
> -
>  	ret = vfio_pci_register_dev_region(vdev,
>  		PCI_VENDOR_ID_INTEL | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
>  		VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION,
>  		&vfio_pci_igd_regops, size, VFIO_REGION_INFO_FLAG_READ, base);
>  	if (ret) {
> -		memunmap(base);
> +		if (is_ioremap_addr(base))
> +			memunmap(base);
> +		else
> +			kfree(base);
>  		return ret;
>  	}
>  



Thread overview: 28+ messages
2021-08-13  2:13 [PATCH] vfio/pci: Add OpRegion 2.0 Extended VBT support Colin Xu
2021-08-16 22:39 ` Alex Williamson
2021-08-17  0:40   ` Colin Xu
2021-08-27  1:36     ` Colin Xu
2021-08-27  1:48       ` Alex Williamson
2021-08-27  2:24         ` Colin Xu
2021-08-27  2:37           ` [PATCH v2] " Colin Xu
2021-08-30 20:27             ` Alex Williamson [this message]
2021-09-02  7:11               ` Colin Xu
2021-09-02 21:46                 ` Alex Williamson
2021-09-03  2:23                   ` Colin Xu
2021-09-03 22:36                     ` Alex Williamson
2021-09-07  6:14                       ` Colin Xu
2021-09-09  5:09                         ` [PATCH v3] vfio/pci: Add OpRegion 2.0+ " Colin Xu
2021-09-09 22:00                           ` Alex Williamson
2021-09-13 12:39                             ` Colin Xu
2021-09-13 12:41                               ` [PATCH v4] " Colin Xu
2021-09-13 15:14                                 ` Alex Williamson
2021-09-14  4:18                                   ` Colin Xu
2021-09-14  4:29                                     ` [PATCH v5] " Colin Xu
2021-09-14  9:11                                       ` [PATCH v6] " Colin Xu
2021-09-24 21:24                                         ` Alex Williamson
2021-10-03 15:46                                           ` Colin Xu
2021-10-03 15:53                                             ` [PATCH v7] " Colin Xu
2021-10-11 21:44                                               ` Alex Williamson
2021-10-12 12:48                                                 ` [PATCH v8] " Colin Xu
2021-10-12 17:12                                                   ` Alex Williamson
2021-10-12 23:10                                                     ` Colin Xu
