* [PATCH v13 1/8] tools: Add vga=vmware
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
@ 2015-11-28 21:44 ` Don Slutz
2016-01-27 16:55 ` Konrad Rzeszutek Wilk
2015-11-28 21:44 ` [PATCH v13 2/8] xen: Add support for VMware cpuid leaves Don Slutz
` (7 subsequent siblings)
8 siblings, 1 reply; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:44 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
This allows use of QEMU's emulated VMware video card.
NOTE: vga=vmware is not supported by device_model_version=qemu-xen-traditional
Signed-off-by: Don Slutz <dslutz@verizon.com>
CC: Don Slutz <don.slutz@gmail.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v13:
Added Acked-by: Ian Campbell
v12:
Dropped LIBXL_HAVE_LIBXL_VGA_INTERFACE_TYPE_VMWARE
This means that the later patch that defines LIBXL_HAVE_VMWARE
is now also required.
v11:
Dropped support for Qemu-trad.
Also changed later patches to not need this one.
v10: New at v10.
Was part of "tools: Add vmware_hwver support"
docs/man/xl.cfg.pod.5 | 4 +++-
tools/libxl/libxl_dm.c | 9 +++++++++
tools/libxl/libxl_types.idl | 1 +
tools/libxl/xl_cmdimpl.c | 2 ++
4 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 3b695bd..9a5744d 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1547,7 +1547,7 @@ This option is deprecated, use vga="stdvga" instead.
=item B<vga="STRING">
-Selects the emulated video card (none|stdvga|cirrus|qxl).
+Selects the emulated video card (none|stdvga|cirrus|qxl|vmware).
The default is cirrus.
In general, QXL should work with the Spice remote display protocol
@@ -1555,6 +1555,8 @@ for acceleration, and QXL driver is necessary in guest in this case.
QXL can also work with the VNC protocol, but it will be like a standard
VGA without acceleration.
+NOTE: vmware is not supported on B<device_model_version = "qemu-xen-traditional">
+
=item B<vnc=BOOLEAN>
Allow access to the display via the VNC protocol. This enables the
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index a4934df..c76fd90 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -529,6 +529,10 @@ static int libxl__build_device_model_args_old(libxl__gc *gc,
case LIBXL_VGA_INTERFACE_TYPE_NONE:
flexarray_append_pair(dm_args, "-vga", "none");
break;
+ case LIBXL_VGA_INTERFACE_TYPE_VMWARE:
+ LOG(ERROR, "vga=vmware is not supported by "
+ "qemu-xen-traditional");
+ return ERROR_INVAL;
case LIBXL_VGA_INTERFACE_TYPE_QXL:
break;
}
@@ -969,6 +973,11 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64",ram_size_mb=%"PRIu64,
(b_info->video_memkb/2/1024), (b_info->video_memkb/2/1024) ) );
break;
+ case LIBXL_VGA_INTERFACE_TYPE_VMWARE:
+ flexarray_append_pair(dm_args, "-device",
+ GCSPRINTF("vmware-svga,vgamem_mb=%d",
+ libxl__sizekb_to_mb(b_info->video_memkb)));
+ break;
}
if (b_info->u.hvm.boot) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index cf3730f..4f46594 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -207,6 +207,7 @@ libxl_vga_interface_type = Enumeration("vga_interface_type", [
(2, "STD"),
(3, "NONE"),
(4, "QXL"),
+ (5, "VMWARE"),
], init_val = "LIBXL_VGA_INTERFACE_TYPE_CIRRUS")
libxl_vendor_device = Enumeration("vendor_device", [
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 2b6371d..93d5295 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -2260,6 +2260,8 @@ skip_vfb:
b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
} else if (!strcmp(buf, "qxl")) {
b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_QXL;
+ } else if (!strcmp(buf, "vmware")) {
+ b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_VMWARE;
} else {
fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
exit(1);
--
1.8.3.1
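For readers wanting to see the option in context, a guest config fragment might look like the sketch below. The domain settings are made-up illustration values; the only lines this patch is about are vga and device_model_version (qemu-xen-traditional rejects vga=vmware, per the libxl_dm.c hunk above):

```
# xl.cfg fragment (hypothetical values)
builder              = "hvm"
device_model_version = "qemu-xen"   # qemu-xen-traditional returns ERROR_INVAL
vga                  = "vmware"
videoram             = 16           # ends up as vmware-svga,vgamem_mb=16
```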
* Re: [PATCH v13 1/8] tools: Add vga=vmware
2015-11-28 21:44 ` [PATCH v13 1/8] tools: Add vga=vmware Don Slutz
@ 2016-01-27 16:55 ` Konrad Rzeszutek Wilk
0 siblings, 0 replies; 24+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-01-27 16:55 UTC (permalink / raw)
To: Don Slutz
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Keir Fraser, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Eddie Dong,
Don Slutz, xen-devel, Aravind Gopalakrishnan, Wei Liu,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
On Sat, Nov 28, 2015 at 04:44:58PM -0500, Don Slutz wrote:
> From: Don Slutz <dslutz@verizon.com>
>
> This allows use of QEMU's emulated VMware video card.
>
> NOTE: vga=vmware is not supported by device_model_version=qemu-xen-traditional
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> CC: Don Slutz <don.slutz@gmail.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> [...]
* [PATCH v13 2/8] xen: Add support for VMware cpuid leaves
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
2015-11-28 21:44 ` [PATCH v13 1/8] tools: Add vga=vmware Don Slutz
@ 2015-11-28 21:44 ` Don Slutz
2015-12-16 10:28 ` Jan Beulich
2015-11-28 21:45 ` [PATCH v13 3/8] tools: Add vmware_hwver support Don Slutz
` (6 subsequent siblings)
8 siblings, 1 reply; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:44 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
This is done by adding the xen_arch_domainconfig field vmware_hwver. It is
set to the VMware virtual hardware version.
Currently 0, 3-4, 6-11 are good values. However the
code only checks for == 0 or != 0 or >= 7.
If non-zero then return VMware's cpuid leaves. If >= 7 return
data, else return 0.
The placement of hypervisor cpuid leaves has not been agreed upon.
Microsoft Hyper-V (aka Viridian) currently must be at 0x40000000.
VMware currently must be at 0x40000000.
KVM currently must be at 0x40000000 (from SeaBIOS).
Xen can be found at the first otherwise unused 0x100 aligned
offset between 0x40000000 and 0x40010000.
http://download.microsoft.com/download/F/B/0/FB0D01A3-8E3A-4F5F-AA59-08C8026D3B8A/requirements-for-implementing-microsoft-hypervisor-interface.docx
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458
http://lwn.net/Articles/301888/
Attempted to get this cleaned up.
So based on this, I picked the order:
Xen at 0x40000000 or
Viridian or VMware at 0x40000000 and Xen at 0x40000100
If both Viridian and VMware are selected, report an error.
Since I need to change xen/arch/x86/hvm/Makefile anyway, also add
a newline at the end of that file.
Signed-off-by: Don Slutz <dslutz@verizon.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Don Slutz <don.slutz@gmail.com>
Tools side only: Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v13:
Adjust temporary comment to include which patch.
Adjust copyright year to 2012-2015
v12:
No change
v11:
Adjust /* Disallow if vmware_hwver */
Newline after break;
Added Reviewed-by: Andrew Cooper.
It would be worth to add an explicit vmware_hwver = 0 in the
libxl__arch_domain_prepare_config.
Note: Adds a tool change to this patch.
v10:
Did not add "Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>"
because of changes here to do things the new way.
Reword comment message to reflect new way.
v9:
s/vmware_hw/vmware_hwver/i
Change -EXDEV to EOPNOTSUPP.
Done.
adding another subdirectory: xen/arch/x86/hvm/vmware
Much will depend on the discussion of the subsequent patches.
TBD.
So for versions < 7 there's effectively no CPUID support at all?
Changed to check at entry.
The comment /* Params for VMware */ seems wrong...
Changed to /* emulated VMware Hardware Version */
Also please use d, not _d in #define is_vmware_domain()
Changed. Line is now > 80 characters, so split into 2.
v7:
Prevent setting of HVM_PARAM_VIRIDIAN if HVM_PARAM_VMWARE_HW set.
v5:
Given how is_viridian and is_vmware are defined I think '||' is more
appropriate.
Fixed.
The names of all three functions are bogus.
removed static support routines.
This hunk is unrelated, but is perhaps something better fixed.
Added to commit message.
include <xen/types.h> (IIRC) please.
Done.
At least 1 pair of brackets please, especially as the placement of
brackets affects the result of this particular calculation.
Switch to "1000000ull / APIC_BUS_CYCLE_NS"
tools/libxl/libxl_x86.c | 2 +
xen/arch/x86/domain.c | 1 +
xen/arch/x86/hvm/Makefile | 1 +
xen/arch/x86/hvm/hvm.c | 11 ++++++
xen/arch/x86/hvm/vmware/Makefile | 1 +
xen/arch/x86/hvm/vmware/cpuid.c | 77 +++++++++++++++++++++++++++++++++++++++
xen/arch/x86/traps.c | 8 +++-
xen/include/asm-x86/hvm/domain.h | 3 ++
xen/include/asm-x86/hvm/hvm.h | 6 +++
xen/include/asm-x86/hvm/vmware.h | 33 +++++++++++++++++
xen/include/public/arch-x86/xen.h | 1 +
11 files changed, 142 insertions(+), 2 deletions(-)
create mode 100644 xen/arch/x86/hvm/vmware/Makefile
create mode 100644 xen/arch/x86/hvm/vmware/cpuid.c
create mode 100644 xen/include/asm-x86/hvm/vmware.h
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 183b6c2..51a10b8 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -12,6 +12,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
else
xc_config->emulation_flags = 0;
+ /* Note: will be changed in next patch (tools: Add ...). */
+ xc_config->vmware_hwver = 0;
return 0;
}
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2a8d5c1..a1fbcb5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -548,6 +548,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
{
d->arch.hvm_domain.hap_enabled =
hvm_funcs.hap_supported && (domcr_flags & DOMCRF_hap);
+ d->arch.hvm_domain.vmware_hwver = config->vmware_hwver;
rc = create_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0, NULL, NULL);
}
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 794e793..5d5a184 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,5 +1,6 @@
subdir-y += svm
subdir-y += vmx
+subdir-y += vmware
obj-y += asid.o
obj-y += emulate.o
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index db0aeba..9cdc980 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -61,6 +61,7 @@
#include <asm/hvm/event.h>
#include <asm/hvm/vmx/vmx.h>
#include <asm/altp2m.h>
+#include <asm/hvm/vmware.h>
#include <asm/mtrr.h>
#include <asm/apic.h>
#include <asm/vm_event.h>
@@ -4552,6 +4553,9 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
if ( cpuid_viridian_leaves(input, eax, ebx, ecx, edx) )
return;
+ if ( cpuid_vmware_leaves(input, eax, ebx, ecx, edx) )
+ return;
+
if ( cpuid_hypervisor_leaves(input, count, eax, ebx, ecx, edx) )
return;
@@ -6067,6 +6071,13 @@ static int hvm_allow_set_param(struct domain *d,
{
/* The following parameters should only be changed once. */
case HVM_PARAM_VIRIDIAN:
+ /* Disallow if vmware_hwver is in use */
+ if ( d->arch.hvm_domain.vmware_hwver )
+ {
+ rc = -EOPNOTSUPP;
+ break;
+ }
+ /* Fall through */
case HVM_PARAM_IOREQ_SERVER_PFN:
case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
case HVM_PARAM_ALTP2M:
diff --git a/xen/arch/x86/hvm/vmware/Makefile b/xen/arch/x86/hvm/vmware/Makefile
new file mode 100644
index 0000000..3fb2e0b
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/Makefile
@@ -0,0 +1 @@
+obj-y += cpuid.o
diff --git a/xen/arch/x86/hvm/vmware/cpuid.c b/xen/arch/x86/hvm/vmware/cpuid.c
new file mode 100644
index 0000000..1db8972
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/cpuid.c
@@ -0,0 +1,77 @@
+/*
+ * arch/x86/hvm/vmware/cpuid.c
+ *
+ * Copyright (C) 2012-2015 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/sched.h>
+
+#include <asm/hvm/hvm.h>
+#include <asm/hvm/vmware.h>
+
+/*
+ * VMware hardware version 7 defines some of these cpuid levels,
+ * below is a brief description about those.
+ *
+ * Leaf 0x40000000, Hypervisor CPUID information
+ * # EAX: The maximum input value for hypervisor CPUID info (0x40000010).
+ * # EBX, ECX, EDX: Hypervisor vendor ID signature. E.g. "VMwareVMware"
+ *
+ * Leaf 0x40000010, Timing information.
+ * # EAX: (Virtual) TSC frequency in kHz.
+ * # EBX: (Virtual) Bus (local apic timer) frequency in kHz.
+ * # ECX, EDX: RESERVED
+ */
+
+int cpuid_vmware_leaves(uint32_t idx, uint32_t *eax, uint32_t *ebx,
+ uint32_t *ecx, uint32_t *edx)
+{
+ struct domain *d = current->domain;
+
+ if ( !is_vmware_domain(d) ||
+ d->arch.hvm_domain.vmware_hwver < 7 )
+ return 0;
+
+ switch ( idx - 0x40000000 )
+ {
+ case 0x0:
+ *eax = 0x40000010; /* Largest leaf */
+ *ebx = 0x61774d56; /* "VMwa" */
+ *ecx = 0x4d566572; /* "reVM" */
+ *edx = 0x65726177; /* "ware" */
+ break;
+
+ case 0x10:
+ /* (Virtual) TSC frequency in kHz. */
+ *eax = d->arch.tsc_khz;
+ /* (Virtual) Bus (local apic timer) frequency in kHz. */
+ *ebx = 1000000ull / APIC_BUS_CYCLE_NS;
+ *ecx = 0; /* Reserved */
+ *edx = 0; /* Reserved */
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 075f98e..833a38b 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -754,8 +754,12 @@ int cpuid_hypervisor_leaves( uint32_t idx, uint32_t sub_idx,
uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
{
struct domain *currd = current->domain;
- /* Optionally shift out of the way of Viridian architectural leaves. */
- uint32_t base = is_viridian_domain(currd) ? 0x40000100 : 0x40000000;
+ /*
+ * Optionally shift out of the way of Viridian or VMware
+ * architectural leaves.
+ */
+ uint32_t base = is_viridian_domain(currd) || is_vmware_domain(currd) ?
+ 0x40000100 : 0x40000000;
uint32_t limit, dummy;
idx -= base;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index a8cc2ad..c832823 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -111,6 +111,9 @@ struct hvm_domain {
uint64_t *params;
+ /* emulated VMware Hardware Version */
+ uint64_t vmware_hwver;
+
/* Memory ranges with pinned cache attributes. */
struct list_head pinned_cacheattr_ranges;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index f80e143..d3f34c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -360,6 +360,12 @@ static inline unsigned long hvm_get_shadow_gs_base(struct vcpu *v)
#define has_viridian_time_ref_count(d) \
(is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_time_ref_count))
+#define vmware_feature_mask(d) \
+ ((d)->arch.hvm_domain.vmware_hwver)
+
+#define is_vmware_domain(d) \
+ (is_hvm_domain(d) && vmware_feature_mask(d))
+
void hvm_hypervisor_cpuid_leaf(uint32_t sub_idx,
uint32_t *eax, uint32_t *ebx,
uint32_t *ecx, uint32_t *edx);
diff --git a/xen/include/asm-x86/hvm/vmware.h b/xen/include/asm-x86/hvm/vmware.h
new file mode 100644
index 0000000..e9f7ade
--- /dev/null
+++ b/xen/include/asm-x86/hvm/vmware.h
@@ -0,0 +1,33 @@
+/*
+ * asm-x86/hvm/vmware.h
+ *
+ * Copyright (C) 2012-2015 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef ASM_X86_HVM_VMWARE_H__
+#define ASM_X86_HVM_VMWARE_H__
+
+#include <xen/types.h>
+
+int cpuid_vmware_leaves(uint32_t idx, uint32_t *eax, uint32_t *ebx,
+ uint32_t *ecx, uint32_t *edx);
+
+#endif /* ASM_X86_HVM_VMWARE_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index cdd93c1..dcbf4aa 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -290,6 +290,7 @@ struct xen_arch_domainconfig {
XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU | \
XEN_X86_EMU_PIT)
uint32_t emulation_flags;
+ uint64_t vmware_hwver;
};
#endif
--
1.8.3.1
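The hypervisor vendor ID constants in the new cpuid.c above can be sanity-checked by unpacking the three registers as little-endian byte strings. A small sketch (Python and the helper name are mine, for illustration only):

```python
import struct

def vmware_signature(ebx: int, ecx: int, edx: int) -> str:
    """Decode the EBX/ECX/EDX hypervisor vendor ID into a 12-byte string."""
    # Each register holds four ASCII bytes in little-endian order.
    return struct.pack("<III", ebx, ecx, edx).decode("ascii")

# Constants from cpuid_vmware_leaves() for leaf 0x40000000.
print(vmware_signature(0x61774d56, 0x4d566572, 0x65726177))  # -> VMwareVMware
```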
* Re: [PATCH v13 2/8] xen: Add support for VMware cpuid leaves
2015-11-28 21:44 ` [PATCH v13 2/8] xen: Add support for VMware cpuid leaves Don Slutz
@ 2015-12-16 10:28 ` Jan Beulich
2015-12-20 17:48 ` Don Slutz
0 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2015-12-16 10:28 UTC (permalink / raw)
To: Don Slutz
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
Don Slutz, xen-devel, Eddie Dong, Aravind Gopalakrishnan,
Suravee Suthikulpanit, Keir Fraser, Boris Ostrovsky
>>> On 28.11.15 at 22:44, <don.slutz@gmail.com> wrote:
> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -290,6 +290,7 @@ struct xen_arch_domainconfig {
> XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU | \
> XEN_X86_EMU_PIT)
> uint32_t emulation_flags;
> + uint64_t vmware_hwver;
It's been a while since the last iteration, so could you please remind
me again why this needs to be a 64-bit quantity?
Jan
* Re: [PATCH v13 2/8] xen: Add support for VMware cpuid leaves
2015-12-16 10:28 ` Jan Beulich
@ 2015-12-20 17:48 ` Don Slutz
0 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-12-20 17:48 UTC (permalink / raw)
To: Jan Beulich
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
Don Slutz, xen-devel, Eddie Dong, Aravind Gopalakrishnan,
Suravee Suthikulpanit, Keir Fraser, Boris Ostrovsky
On 12/16/15 05:28, Jan Beulich wrote:
>>>> On 28.11.15 at 22:44, <don.slutz@gmail.com> wrote:
>> --- a/xen/include/public/arch-x86/xen.h
>> +++ b/xen/include/public/arch-x86/xen.h
>> @@ -290,6 +290,7 @@ struct xen_arch_domainconfig {
>> XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU | \
>> XEN_X86_EMU_PIT)
>> uint32_t emulation_flags;
>> + uint64_t vmware_hwver;
>
> It's been a while since the last iteration, so could you please remind
> me again why this needs to be a 64-bit quantity?
Looking at it, it only needs to be a 32-bit quantity. My memory says
that it was 64-bits because that was the size of a HVM_PARAM_... which
was where it was passed before this code existed.
I see no reason not to change to 32-bit, and so will do so.
-Don Slutz
>
> Jan
>
* [PATCH v13 3/8] tools: Add vmware_hwver support
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
2015-11-28 21:44 ` [PATCH v13 1/8] tools: Add vga=vmware Don Slutz
2015-11-28 21:44 ` [PATCH v13 2/8] xen: Add support for VMware cpuid leaves Don Slutz
@ 2015-11-28 21:45 ` Don Slutz
2015-11-28 21:45 ` [PATCH v13 4/8] vmware: Add VMware provided include file Don Slutz
` (5 subsequent siblings)
8 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:45 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
This is used to set the xen_arch_domainconfig field vmware_hwver. It is set to
the emulated VMware virtual hardware version.
Currently 0, 3-4, 6-11 are good values. However the code only
checks for == 0, != 0, or < 7.
Signed-off-by: Don Slutz <dslutz@verizon.com>
CC: Don Slutz <don.slutz@gmail.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v13:
Added: Acked-by: Ian Campbell
v12:
I'm not sure this hunk has anything to do with this patch, nor
what the semantic difference between the old and new text is
supposed to be.
Dropped comment change.
v11:
Dropped "If non-zero then default VGA to VMware's VGA"
v10:
LIBXL_HAVE_LIBXL_VGA_INTERFACE_TYPE_VMWARE &
LIBXL_HAVE_BUILDINFO_HVM_VMWARE_HWVER are arriving together
a single umbrella could be used.
Since I split LIBXL_VGA_INTERFACE_TYPE_VMWARE into
its own patch, this is no longer true.
But I did use 1 for the 2 c_info changes.
Please use GCSPRINTF.
Remove vga=vmware from here.
v9:
I assumed that s/vmware_hw/vmware_hwver/ is not a big enough
change to drop the Reviewed-by. Did a minor edit to the
commit message to add 7 to the list of values checked.
v7:
Default handling of hvm.vga.kind bad.
Fixed.
Default of vmware_port should be based on vmware_hw.
Done.
v5:
Anything looking for Xen according to the Xen cpuid instructions...
Adjusted doc to new wording.
docs/man/xl.cfg.pod.5 | 17 +++++++++++++++++
tools/libxl/libxl_create.c | 4 +++-
tools/libxl/libxl_types.idl | 1 +
tools/libxl/libxl_x86.c | 3 +--
tools/libxl/xl_cmdimpl.c | 2 ++
5 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 9a5744d..088ea00 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1506,6 +1506,23 @@ The viridian option can be specified as a boolean. A value of true (1)
is equivalent to the list [ "defaults" ], and a value of false (0) is
equivalent to an empty list.
+=item B<vmware_hwver=NUMBER>
+
+Turns on or off the exposure of VMware cpuid. The number is
+VMware's hardware version number, where 0 is off. A number >= 7
+is needed to enable exposure of VMware cpuid.
+
+The hardware version number (vmware_hwver) comes from VMware config files.
+
+=over 4
+
+In a .vmx it is virtualHW.version
+
+In a .ovf it is part of the value of vssd:VirtualSystemType.
+For vssd:VirtualSystemType == vmx-07, vmware_hwver = 7.
+
+=back
+
=back
=head3 Emulated VGA Graphics Device
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 8770486..291c569 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -423,7 +423,7 @@ int libxl__domain_build(libxl__gc *gc,
vments[4] = "start_time";
vments[5] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
- localents = libxl__calloc(gc, 9, sizeof(char *));
+ localents = libxl__calloc(gc, 11, sizeof(char *));
i = 0;
localents[i++] = "platform/acpi";
localents[i++] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
@@ -431,6 +431,8 @@ int libxl__domain_build(libxl__gc *gc,
localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
localents[i++] = "platform/acpi_s4";
localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
+ localents[i++] = "platform/vmware_hwver";
+ localents[i++] = GCSPRINTF("%"PRId64, d_config->c_info.vmware_hwver);
if (info->u.hvm.mmio_hole_memkb) {
uint64_t max_ram_below_4g =
(1ULL << 32) - (info->u.hvm.mmio_hole_memkb << 10);
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 4f46594..4c4bb8c 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -372,6 +372,7 @@ libxl_domain_create_info = Struct("domain_create_info",[
("run_hotplug_scripts",libxl_defbool),
("pvh", libxl_defbool),
("driver_domain",libxl_defbool),
+ ("vmware_hwver", uint64),
], dir=DIR_IN)
libxl_domain_restore_params = Struct("domain_restore_params", [
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 51a10b8..03e717d 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -12,8 +12,7 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
else
xc_config->emulation_flags = 0;
- /* Note: will be changed in next patch (tools: Add ...). */
- xc_config->vmware_hwver = 0;
+ xc_config->vmware_hwver = d_config->c_info.vmware_hwver;
return 0;
}
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 93d5295..abd776e 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1485,6 +1485,8 @@ static void parse_config_data(const char *config_source,
b_info->cmdline = parse_cmdline(config);
xlu_cfg_get_defbool(config, "driver_domain", &c_info->driver_domain, 0);
+ if (!xlu_cfg_get_long(config, "vmware_hwver", &l, 1))
+ c_info->vmware_hwver = l;
switch(b_info->type) {
case LIBXL_DOMAIN_TYPE_HVM:
--
1.8.3.1
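The xl.cfg.pod.5 hunk above says the number comes straight out of VMware's own config files, e.g. vssd:VirtualSystemType == "vmx-07" maps to vmware_hwver = 7. A throwaway sketch of that mapping (the function name is hypothetical, not part of the patch):

```python
def hwver_from_virtual_system_type(vst: str) -> int:
    """Map an OVF vssd:VirtualSystemType such as 'vmx-07' to the
    numeric vmware_hwver value expected by the new xl.cfg option."""
    prefix = "vmx-"
    if not vst.startswith(prefix):
        raise ValueError(f"not a VMware virtual system type: {vst!r}")
    # The suffix is a zero-padded decimal hardware version number.
    return int(vst[len(prefix):], 10)

print(hwver_from_virtual_system_type("vmx-07"))  # -> 7
```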
* [PATCH v13 4/8] vmware: Add VMware provided include file.
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
` (2 preceding siblings ...)
2015-11-28 21:45 ` [PATCH v13 3/8] tools: Add vmware_hwver support Don Slutz
@ 2015-11-28 21:45 ` Don Slutz
2015-11-28 21:45 ` [PATCH v13 5/8] xen: Add vmware_port support Don Slutz
` (4 subsequent siblings)
8 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:45 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
This file: backdoor_def.h comes from:
http://packages.vmware.com/tools/esx/3.5latest/rhel4/SRPMS/index.html
open-vm-tools-kmod-7.4.8-396269.423167.src.rpm
open-vm-tools-kmod-7.4.8.tar.gz
vmhgfs/backdoor_def.h
and is unchanged.
Also added the oddly named include file includeCheck.h. It contains
only a comment and is provided so that backdoor_def.h can be used
unchanged.
Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Don Slutz <don.slutz@gmail.com>
---
v11,v12,v13:
No change
v10:
Add Acked-by: Andrew Cooper
v9:
Either the description is wrong, or the patch is stale.
stale commit message -- fixed.
I'd say a file with a single comment line in it would suffice.
Done.
xen/arch/x86/hvm/vmware/backdoor_def.h | 167 +++++++++++++++++++++++++++++++++
xen/arch/x86/hvm/vmware/includeCheck.h | 1 +
2 files changed, 168 insertions(+)
create mode 100644 xen/arch/x86/hvm/vmware/backdoor_def.h
create mode 100644 xen/arch/x86/hvm/vmware/includeCheck.h
diff --git a/xen/arch/x86/hvm/vmware/backdoor_def.h b/xen/arch/x86/hvm/vmware/backdoor_def.h
new file mode 100644
index 0000000..e76795f
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/backdoor_def.h
@@ -0,0 +1,167 @@
+/* **********************************************************
+ * Copyright 1998 VMware, Inc. All rights reserved.
+ * **********************************************************
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/*
+ * backdoor_def.h --
+ *
+ * This contains backdoor defines that can be included from
+ * an assembly language file.
+ */
+
+
+
+#ifndef _BACKDOOR_DEF_H_
+#define _BACKDOOR_DEF_H_
+
+#define INCLUDE_ALLOW_MODULE
+#define INCLUDE_ALLOW_USERLEVEL
+#define INCLUDE_ALLOW_VMMEXT
+#define INCLUDE_ALLOW_VMCORE
+#define INCLUDE_ALLOW_VMKERNEL
+#include "includeCheck.h"
+
+/*
+ * If you want to add a new low-level backdoor call for a guest userland
+ * application, please consider using the GuestRpc mechanism instead. --hpreg
+ */
+
+#define BDOOR_MAGIC 0x564D5868
+
+/* Low-bandwidth backdoor port. --hpreg */
+
+#define BDOOR_PORT 0x5658
+
+#define BDOOR_CMD_GETMHZ 1
+/*
+ * BDOOR_CMD_APMFUNCTION is used by:
+ *
+ * o The FrobOS code, which instead should either program the virtual chipset
+ * (like the new BIOS code does, matthias offered to implement that), or not
+ * use any VM-specific code (which requires that we correctly implement
+ * "power off on CLI HLT" for SMP VMs, boris offered to implement that)
+ *
+ * o The old BIOS code, which will soon be jettisoned
+ *
+ * --hpreg
+ */
+#define BDOOR_CMD_APMFUNCTION 2
+#define BDOOR_CMD_GETDISKGEO 3
+#define BDOOR_CMD_GETPTRLOCATION 4
+#define BDOOR_CMD_SETPTRLOCATION 5
+#define BDOOR_CMD_GETSELLENGTH 6
+#define BDOOR_CMD_GETNEXTPIECE 7
+#define BDOOR_CMD_SETSELLENGTH 8
+#define BDOOR_CMD_SETNEXTPIECE 9
+#define BDOOR_CMD_GETVERSION 10
+#define BDOOR_CMD_GETDEVICELISTELEMENT 11
+#define BDOOR_CMD_TOGGLEDEVICE 12
+#define BDOOR_CMD_GETGUIOPTIONS 13
+#define BDOOR_CMD_SETGUIOPTIONS 14
+#define BDOOR_CMD_GETSCREENSIZE 15
+#define BDOOR_CMD_MONITOR_CONTROL 16
+#define BDOOR_CMD_GETHWVERSION 17
+#define BDOOR_CMD_OSNOTFOUND 18
+#define BDOOR_CMD_GETUUID 19
+#define BDOOR_CMD_GETMEMSIZE 20
+#define BDOOR_CMD_HOSTCOPY 21 /* Devel only */
+/* BDOOR_CMD_GETOS2INTCURSOR, 22, is very old and defunct. Reuse. */
+#define BDOOR_CMD_GETTIME 23 /* Deprecated. Use GETTIMEFULL. */
+#define BDOOR_CMD_STOPCATCHUP 24
+#define BDOOR_CMD_PUTCHR 25 /* Devel only */
+#define BDOOR_CMD_ENABLE_MSG 26 /* Devel only */
+#define BDOOR_CMD_GOTO_TCL 27 /* Devel only */
+#define BDOOR_CMD_INITPCIOPROM 28
+#define BDOOR_CMD_INT13 29
+#define BDOOR_CMD_MESSAGE 30
+#define BDOOR_CMD_RSVD0 31
+#define BDOOR_CMD_RSVD1 32
+#define BDOOR_CMD_RSVD2 33
+#define BDOOR_CMD_ISACPIDISABLED 34
+#define BDOOR_CMD_TOE 35 /* Not in use */
+/* BDOOR_CMD_INITLSIOPROM, 36, was merged with 28. Reuse. */
+#define BDOOR_CMD_PATCH_SMBIOS_STRUCTS 37
+#define BDOOR_CMD_MAPMEM 38 /* Devel only */
+#define BDOOR_CMD_ABSPOINTER_DATA 39
+#define BDOOR_CMD_ABSPOINTER_STATUS 40
+#define BDOOR_CMD_ABSPOINTER_COMMAND 41
+#define BDOOR_CMD_TIMER_SPONGE 42
+#define BDOOR_CMD_PATCH_ACPI_TABLES 43
+/* Catch-all to allow synchronous tests */
+#define BDOOR_CMD_DEVEL_FAKEHARDWARE 44 /* Debug only - needed in beta */
+#define BDOOR_CMD_GETHZ 45
+#define BDOOR_CMD_GETTIMEFULL 46
+#define BDOOR_CMD_STATELOGGER 47
+#define BDOOR_CMD_CHECKFORCEBIOSSETUP 48
+#define BDOOR_CMD_LAZYTIMEREMULATION 49
+#define BDOOR_CMD_BIOSBBS 50
+#define BDOOR_CMD_MAX 51
+
+/*
+ * IMPORTANT NOTE: When modifying the behavior of an existing backdoor command,
+ * you must adhere to the semantics expected by the oldest Tools who use that
+ * command. Specifically, do not alter the way in which the command modifies
+ * the registers. Otherwise backwards compatibility will suffer.
+ */
+
+/* High-bandwidth backdoor port. --hpreg */
+
+#define BDOORHB_PORT 0x5659
+
+#define BDOORHB_CMD_MESSAGE 0
+#define BDOORHB_CMD_MAX 1
+
+/*
+ * There is another backdoor which allows access to certain TSC-related
+ * values using otherwise illegal PMC indices when the pseudo_perfctr
+ * control flag is set.
+ */
+
+#define BDOOR_PMC_HW_TSC 0x10000
+#define BDOOR_PMC_REAL_NS 0x10001
+#define BDOOR_PMC_APPARENT_NS 0x10002
+
+#define IS_BDOOR_PMC(index) (((index) | 3) == 0x10003)
+#define BDOOR_CMD(ecx) ((ecx) & 0xffff)
+
+
+#ifdef VMM
+/*
+ *----------------------------------------------------------------------
+ *
+ * Backdoor_CmdRequiresFullyValidVCPU --
+ *
+ * A few backdoor commands require the full VCPU to be valid
+ * (including GDTR, IDTR, TR and LDTR). The rest get read/write
+ * access to GPRs and read access to Segment registers (selectors).
+ *
+ * Result:
+ * True iff VECX contains a command that require the full VCPU to
+ * be valid.
+ *
+ *----------------------------------------------------------------------
+ */
+static INLINE Bool
+Backdoor_CmdRequiresFullyValidVCPU(unsigned cmd)
+{
+ return cmd == BDOOR_CMD_RSVD0 ||
+ cmd == BDOOR_CMD_RSVD1 ||
+ cmd == BDOOR_CMD_RSVD2;
+}
+#endif
+
+#endif
diff --git a/xen/arch/x86/hvm/vmware/includeCheck.h b/xen/arch/x86/hvm/vmware/includeCheck.h
new file mode 100644
index 0000000..3b63fa4
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/includeCheck.h
@@ -0,0 +1 @@
+/* Nothing here. Just to use backdoor_def.h without change. */
--
1.8.3.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v13 5/8] xen: Add vmware_port support
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
` (3 preceding siblings ...)
2015-11-28 21:45 ` [PATCH v13 4/8] vmware: Add VMware provided include file Don Slutz
@ 2015-11-28 21:45 ` Don Slutz
2015-12-16 15:12 ` Jan Beulich
2015-11-28 21:45 ` [PATCH v13 6/8] tools: " Don Slutz
` (3 subsequent siblings)
8 siblings, 1 reply; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:45 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
This includes adding is_vmware_port_enabled, controlled by a new
xen_arch_domainconfig flag, XEN_DOMCTL_CONFIG_VMWARE_PORT_MASK.
This enables limited support of VMware's hyper-call.
This support is in some ways more complete than what is currently
provided by QEMU and/or KVM, and in some ways less. The missing part
requires QEMU changes and has been left out until the QEMU patches
are accepted upstream.
VMware's hyper-call is also known as VMware Backdoor I/O Port.
Note: this support does not depend on vmware_hwver being non-zero.
In summary, VMware treats "in (%dx),%eax" (or "out %eax,(%dx)")
to port 0x5658 specially. Note: since many operations return data
in EAX, "in (%dx),%eax" is the one to use. Other widths like
"in (%dx),%al" still do things, but only the AL part of EAX will
be changed. For "out %eax,(%dx)" of all widths, EAX remains
unchanged.
An open source example of using this is:
http://open-vm-tools.sourceforge.net/
which only uses "inl (%dx)". See also:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458
Some of the best info is at:
https://sites.google.com/site/chitchatvmback/backdoor
Signed-off-by: Don Slutz <dslutz@verizon.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Don Slutz <don.slutz@gmail.com>
---
v13:
Changed to uint32_t arch_flags, since the emulation_flags is this.
v12:
Surrounding code avoiding the use of "break" makes the result
look rather inconsistent. Please move this up immediately after
the XSM check, or drop the "break".
Moved it up.
v11:
Dropped ASSERT(is_hvm_domain(currd))
Newline after break;
v10:
Probably better as EOPNOTSUPP, as it is a configuration problem.
This function looks as if it should be static.
I would suggest putting vmport_register declaration in hvm.h ...
As indicated before, I don't think this is a good use case for a
domain creation flag.
Switch to the new config way.
struct domain *d => struct domain *currd
Are you sure you don't want to zero the high halves of 64-bit ...
Comment added.
Then just have this handled into the default case.
Reworked new_eax handling.
is_hvm_domain(currd)
And - why here rather than before the switch() or even right at the
start of the function?
Moved to start.
With that, is it really correct that OUT updates the other registers
just like IN? If so, this deserves a comment, so that readers won't
think this is in error.
All done in comment at start.
v9:
Switch to x86_emulator to handle #GP code moved to next patch.
Can you explain why a HVM param isn't suitable here?
Issue with changing QEMU on the fly.
Andrew Cooper: My recommendation is still to use a creation flag
So no change.
Please move SVM's identical definition into ...
Did this as #1. No longer needed, but since the patch was ready
I have included it.
--Lots of questions about code that no long is part of this patch. --
With this, is handling other than 32-bit in/out really
meaningful/correct?
Added comment about this.
Since you can't get here for PV, I can't see what you need this.
Changed to an ASSERT.
Why version 4?
Added comment about this.
-- Several questions about register changes.
Re-coded to use new_eax and set *val to this.
Changed to generally use reg->_e..
These ei1/ei2 checks belong in the callers imo -
Moved.
the "port" function parameter isn't even checked
Add check for exact match.
If dropping the code is safe without also forbidding the
combination of nested and VMware emulation.
Added the forbidding the combination of nested and VMware.
Mostly because the nested virtualization code is the one that
should handle VMware operations if needed, not the root one. Also
I am having issues testing Xen nested in Xen using HVM.
v7:
More on AMD in the commit message.
Switch to only change 32bit part of registers, what VMware
does.
Too much logging and tracing.
Dropped a lot of it. This includes vmport_debug=
v6:
Dropped the attempt to use svm_nextrip_insn_length via
__get_instruction_length (added in v2). Just always look
at up to 15 bytes on AMD.
v5:
we should make sure that svm_vmexit_gp_intercept is not executed for
any other guest.
Added an ASSERT on is_vmware_port_enabled.
magic integers?
Added #define for them.
I am fairly certain that you need some brackets here.
Added brackets.
xen/arch/x86/domain.c | 2 +
xen/arch/x86/hvm/hvm.c | 9 +++
xen/arch/x86/hvm/vmware/Makefile | 1 +
xen/arch/x86/hvm/vmware/vmport.c | 148 ++++++++++++++++++++++++++++++++++++++
xen/include/asm-x86/hvm/domain.h | 3 +
xen/include/asm-x86/hvm/hvm.h | 2 +
xen/include/public/arch-x86/xen.h | 4 ++
7 files changed, 169 insertions(+)
create mode 100644 xen/arch/x86/hvm/vmware/vmport.c
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a1fbcb5..b9739ef 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -549,6 +549,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
d->arch.hvm_domain.hap_enabled =
hvm_funcs.hap_supported && (domcr_flags & DOMCRF_hap);
d->arch.hvm_domain.vmware_hwver = config->vmware_hwver;
+ d->arch.hvm_domain.is_vmware_port_enabled =
+ !!(config->arch_flags & XEN_DOMCTL_CONFIG_VMWARE_PORT_MASK);
rc = create_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0, NULL, NULL);
}
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9cdc980..c347b63 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1601,6 +1601,9 @@ int hvm_domain_initialise(struct domain *d)
register_dpci_portio_handler(d);
+ if ( d->arch.hvm_domain.is_vmware_port_enabled )
+ vmport_register(d);
+
if ( is_pvh_domain(d) )
{
register_portio_handler(d, 0, 0x10003, handle_pvh_io);
@@ -6195,6 +6198,12 @@ static int hvmop_set_param(
rc = xsm_hvm_param_nested(XSM_PRIV, d);
if ( rc )
break;
+ /* Prevent nestedhvm with vmport */
+ if ( d->arch.hvm_domain.is_vmware_port_enabled )
+ {
+ rc = -EOPNOTSUPP;
+ break;
+ }
if ( a.value > 1 )
rc = -EINVAL;
/*
diff --git a/xen/arch/x86/hvm/vmware/Makefile b/xen/arch/x86/hvm/vmware/Makefile
index 3fb2e0b..cd8815b 100644
--- a/xen/arch/x86/hvm/vmware/Makefile
+++ b/xen/arch/x86/hvm/vmware/Makefile
@@ -1 +1,2 @@
obj-y += cpuid.o
+obj-y += vmport.o
diff --git a/xen/arch/x86/hvm/vmware/vmport.c b/xen/arch/x86/hvm/vmware/vmport.c
new file mode 100644
index 0000000..f24d8e3
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/vmport.c
@@ -0,0 +1,148 @@
+/*
+ * HVM VMPORT emulation
+ *
+ * Copyright (C) 2012 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/lib.h>
+#include <asm/hvm/hvm.h>
+#include <asm/hvm/support.h>
+
+#include "backdoor_def.h"
+
+static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+ struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+ /*
+ * While VMware expects only 32-bit "in", it also supports
+ * other sizes and "out". All that is required is the one port
+ * and the correct magic value in eax. Since some of the data
+ * returned in eax is smaller than 32 bits and/or only the
+ * other registers may be needed, dir and bytes need no
+ * checking here. The caller will handle the bytes, and dir is
+ * handled below for eax.
+ */
+ if ( port == BDOOR_PORT && regs->_eax == BDOOR_MAGIC )
+ {
+ uint32_t new_eax = ~0u;
+ uint64_t value;
+ struct vcpu *curr = current;
+ struct domain *currd = curr->domain;
+
+ /*
+ * VMware changes the other (non eax) registers ignoring dir
+ * (IN vs OUT). It also changes only the 32-bit part
+ * leaving the high 32-bits unchanged, unlike what one would
+ * expect to happen.
+ */
+ switch ( regs->_ecx & 0xffff )
+ {
+ case BDOOR_CMD_GETMHZ:
+ new_eax = currd->arch.tsc_khz / 1000;
+ break;
+
+ case BDOOR_CMD_GETVERSION:
+ /* MAGIC */
+ regs->_ebx = BDOOR_MAGIC;
+ /* VERSION_MAGIC */
+ new_eax = 6;
+ /* Claim we are an ESX. VMX_TYPE_SCALABLE_SERVER */
+ regs->_ecx = 2;
+ break;
+
+ case BDOOR_CMD_GETHWVERSION:
+ /* vmware_hw */
+ new_eax = currd->arch.hvm_domain.vmware_hwver;
+ /*
+ * Returning zero is not the best. VMware was not at
+ * all consistent in the handling of this command until
+ * VMware hardware version 4. So it is better to claim
+ * 4 than 0. This should only happen in strange configs.
+ */
+ if ( !new_eax )
+ new_eax = 4;
+ break;
+
+ case BDOOR_CMD_GETHZ:
+ {
+ struct segment_register sreg;
+
+ hvm_get_segment_register(curr, x86_seg_ss, &sreg);
+ if ( sreg.attr.fields.dpl == 0 )
+ {
+ value = currd->arch.tsc_khz * 1000;
+ /* apic-frequency (bus speed) */
+ regs->_ecx = 1000000000ULL / APIC_BUS_CYCLE_NS;
+ /* High part of tsc-frequency */
+ regs->_ebx = value >> 32;
+ /* Low part of tsc-frequency */
+ new_eax = value;
+ }
+ break;
+
+ }
+ case BDOOR_CMD_GETTIME:
+ value = get_localtime_us(currd) -
+ currd->time_offset_seconds * 1000000ULL;
+ /* hostUsecs */
+ regs->_ebx = value % 1000000UL;
+ /* hostSecs */
+ new_eax = value / 1000000ULL;
+ /* maxTimeLag */
+ regs->_ecx = 1000000;
+ /* offset to GMT in minutes */
+ regs->_edx = currd->time_offset_seconds / 60;
+ break;
+
+ case BDOOR_CMD_GETTIMEFULL:
+ /* BDOOR_MAGIC */
+ new_eax = BDOOR_MAGIC;
+ value = get_localtime_us(currd) -
+ currd->time_offset_seconds * 1000000ULL;
+ /* hostUsecs */
+ regs->_ebx = value % 1000000UL;
+ /* hostSecs low 32 bits */
+ regs->_edx = value / 1000000ULL;
+ /* hostSecs high 32 bits */
+ regs->_esi = (value / 1000000ULL) >> 32;
+ /* maxTimeLag */
+ regs->_ecx = 1000000;
+ break;
+
+ default:
+ /* Let backing DM handle */
+ return X86EMUL_UNHANDLEABLE;
+ }
+ if ( dir == IOREQ_READ )
+ *val = new_eax;
+ }
+ else if ( dir == IOREQ_READ )
+ *val = ~0u;
+
+ return X86EMUL_OKAY;
+}
+
+void vmport_register(struct domain *d)
+{
+ register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index c832823..5860d51 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -126,6 +126,9 @@ struct hvm_domain {
spinlock_t uc_lock;
bool_t is_in_uc_mode;
+ /* VMware backdoor port available */
+ bool_t is_vmware_port_enabled;
+
/* Pass-through */
struct hvm_iommu hvm_iommu;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3f34c9..857cbb4 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -566,6 +566,8 @@ void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
/* emulates #VE */
bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
+void vmport_register(struct domain *d);
+
#endif /* __ASM_X86_HVM_HVM_H__ */
/*
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index dcbf4aa..22eb1d5 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -290,6 +290,10 @@ struct xen_arch_domainconfig {
XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU | \
XEN_X86_EMU_PIT)
uint32_t emulation_flags;
+/* Enable use of vmware backdoor port. */
+#define XEN_DOMCTL_CONFIG_VMWARE_PORT_BIT 0
+#define XEN_DOMCTL_CONFIG_VMWARE_PORT_MASK (1U << XEN_DOMCTL_CONFIG_VMWARE_PORT_BIT)
+ uint32_t arch_flags;
uint64_t vmware_hwver;
};
#endif
--
1.8.3.1
* Re: [PATCH v13 5/8] xen: Add vmware_port support
2015-11-28 21:45 ` [PATCH v13 5/8] xen: Add vmware_port support Don Slutz
@ 2015-12-16 15:12 ` Jan Beulich
2015-12-20 18:15 ` Don Slutz
0 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2015-12-16 15:12 UTC (permalink / raw)
To: Don Slutz
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
xen-devel, Aravind Gopalakrishnan, Suravee Suthikulpanit,
Keir Fraser, Boris Ostrovsky
>>> On 28.11.15 at 22:45, <don.slutz@gmail.com> wrote:
> ---
> v13:
> Changed to uint32_t arch_flags, since the emulation_flags is this.
I don't really understand this sentence, and I also don't understand
why you couldn't just use another XEN_X86_EMU_ flag, e.g.
XEN_X86_EMU_VMWARE_PORT (which would be the first one outside
the current all-or-nothing model).
Jan
* Re: [PATCH v13 5/8] xen: Add vmware_port support
2015-12-16 15:12 ` Jan Beulich
@ 2015-12-20 18:15 ` Don Slutz
0 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-12-20 18:15 UTC (permalink / raw)
To: Jan Beulich
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
xen-devel, Aravind Gopalakrishnan, Suravee Suthikulpanit,
Keir Fraser, Boris Ostrovsky
On 12/16/15 10:12, Jan Beulich wrote:
>>>> On 28.11.15 at 22:45, <don.slutz@gmail.com> wrote:
>> ---
>> v13:
>> Changed to uint32_t arch_flags, since the emulation_flags is this.
>
> I don't really understand this sentence, and I also don't understand
> why you couldn't just use another XEN_X86_EMU_ flag, e.g.
> XEN_X86_EMU_VMWARE_PORT (which would be the first one outside
> the current all-or-nothing model).
>
Next attempt at that sentence:
During re-base of the patch, I changed uint64_t arch_flags to uint32_t
arch_flags. This appeared to be ok because the added emulation_flags
was a uint32_t.
I have no reason not to use another XEN_X86_EMU_ flag. It was not clear
when I was rebasing that it would be ok to do so.
Will drop arch_flags and use XEN_X86_EMU_VMWARE_PORT which will not be
added to "all".
-Don Slutz
> Jan
>
* [PATCH v13 6/8] tools: Add vmware_port support
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
` (4 preceding siblings ...)
2015-11-28 21:45 ` [PATCH v13 5/8] xen: Add vmware_port support Don Slutz
@ 2015-11-28 21:45 ` Don Slutz
2015-11-28 21:45 ` [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT Don Slutz
` (2 subsequent siblings)
8 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:45 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <don.slutz@gmail.com>
This new libxl_domain_create_info field is used to set
XEN_DOMCTL_CONFIG_VMWARE_PORT_MASK in the xc_domain_configuration_t
for x86.
In xen it is is_vmware_port_enabled.
If is_vmware_port_enabled is set, then limited support of VMware's
hyper-call is enabled.
VMware's hyper-call is also known as VMware Backdoor I/O Port.
if vmware_port is not specified in the config file, let
"vmware_hwver != 0" be the default value. This means that only
vmware_hwver = 7 needs to be specified to enable both features.
vmware_hwver = 7 is special because that is what controls the
enable of CPUID leaves for VMware (vmware_hwver >= 7).
Note: vmware_port and nestedhvm cannot be specified at the
same time.
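As an illustrative sketch (all guest-specific values are hypothetical),
an xl guest config only needs vmware_hwver to enable both the CPUID
leaves and the backdoor port:

```
# hypothetical guest config fragment
builder = "hvm"
vmware_hwver = 7    # >= 7 exposes the VMware CPUID leaves;
                    # non-zero also defaults vmware_port to on
# vmware_port = 0   # set explicitly to opt out of the backdoor port
# Note: nestedhvm = 1 cannot be combined with vmware_port.
```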
Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v13:
Added Acked-by: Ian Campbell
v12:
s/come/comes/
In v11 this seems to have morphed into only
LIBXL_HAVE_LIBXL_VGA_INTERFACE_TYPE_VMWARE being provided, which
is clearly not an appropriate umbrella #define.
"#define LIBXL_HAVE_CREATEINFO_VMWARE 1"
Lets just have a single one of these indicating support for
vmware, it should be added at the end of the series after all
the baseline vmware functionality is in place. I think that
means hwver, vga=vmware and this port stuff.
Make (tools: Add vga=vmware) no longer independent.
Change the #define to "LIBXL_HAVE_VMWARE"
v11:
Dropped "If non-zero then default VGA to VMware's VGA"
v10:
If..." at the start of the sentence ...
Also, why is 7 special?
docs/man/xl.cfg.pod.5 | 15 +++++++++++++++
tools/libxl/libxl.h | 5 +++++
tools/libxl/libxl_create.c | 9 +++++++++
tools/libxl/libxl_types.idl | 1 +
tools/libxl/libxl_x86.c | 2 ++
tools/libxl/xl_cmdimpl.c | 1 +
6 files changed, 33 insertions(+)
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 088ea00..d25f7f1 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1512,6 +1512,8 @@ Turns on or off the exposure of VMware cpuid. The number is
VMware's hardware version number, where 0 is off. A number >= 7
is needed to enable exposure of VMware cpuid.
+If non-zero, it changes the default for vmware_port to on.
+
The hardware version number (vmware_hwver) comes from VMware config files.
=over 4
@@ -1523,6 +1525,19 @@ For vssd:VirtualSystemType == vmx-07, vmware_hwver = 7.
=back
+=item B<vmware_port=BOOLEAN>
+
+Turns on or off the exposure of the VMware port. This is known as
+vmport in QEMU. It is also called the VMware Backdoor I/O Port. Not
+all defined VMware backdoor commands are implemented, but all of the
+ones that the Linux kernel uses are.
+
+Defaults to enabled if vmware_hwver is non-zero, otherwise
+defaults to disabled.
+
+Note: vmware_port and nestedhvm cannot be specified at the
+same time.
+
=back
=head3 Emulated VGA Graphics Device
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 6b73848..6025414 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -218,6 +218,11 @@
#define LIBXL_HAVE_SOFT_RESET 1
/*
+ * libxl has VMware changes.
+ */
+#define LIBXL_HAVE_VMWARE 1
+
+/*
* libxl ABI compatibility
*
* The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 291c569..239491e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -41,6 +41,7 @@ int libxl__domain_create_info_setdefault(libxl__gc *gc,
libxl_defbool_setdefault(&c_info->hap, libxl_defbool_val(c_info->pvh));
}
+ libxl_defbool_setdefault(&c_info->vmware_port, c_info->vmware_hwver != 0);
libxl_defbool_setdefault(&c_info->run_hotplug_scripts, true);
libxl_defbool_setdefault(&c_info->driver_domain, false);
@@ -891,6 +892,14 @@ static void initiate_domain_create(libxl__egc *egc,
LOG(ERROR, "Unable to set domain build info defaults");
goto error_out;
}
+ if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
+ libxl_defbool_val(d_config->b_info.u.hvm.nested_hvm) &&
+ libxl_defbool_val(d_config->c_info.vmware_port)) {
+ LOG(ERROR,
+ "vmware_port and nestedhvm cannot be enabled simultaneously\n");
+ ret = ERROR_INVAL;
+ goto error_out;
+ }
if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
(libxl_defbool_val(d_config->b_info.u.hvm.nested_hvm) &&
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 4c4bb8c..89910c0 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -373,6 +373,7 @@ libxl_domain_create_info = Struct("domain_create_info",[
("pvh", libxl_defbool),
("driver_domain",libxl_defbool),
("vmware_hwver", uint64),
+ ("vmware_port", libxl_defbool),
], dir=DIR_IN)
libxl_domain_restore_params = Struct("domain_restore_params", [
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 03e717d..3cb2ce3 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -13,6 +13,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
xc_config->emulation_flags = 0;
xc_config->vmware_hwver = d_config->c_info.vmware_hwver;
+ if (libxl_defbool_val(d_config->c_info.vmware_port))
+ xc_config->arch_flags |= XEN_DOMCTL_CONFIG_VMWARE_PORT_MASK;
return 0;
}
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index abd776e..6c77b3d 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1325,6 +1325,7 @@ static void parse_config_data(const char *config_source,
}
xlu_cfg_get_defbool(config, "oos", &c_info->oos, 0);
+ xlu_cfg_get_defbool(config, "vmware_port", &c_info->vmware_port, 0);
if (!xlu_cfg_get_string (config, "pool", &buf, 0))
xlu_cfg_replace_string(config, "pool", &c_info->pool_name, 0);
--
1.8.3.1
* [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
` (5 preceding siblings ...)
2015-11-28 21:45 ` [PATCH v13 6/8] tools: " Don Slutz
@ 2015-11-28 21:45 ` Don Slutz
2015-12-01 11:28 ` Paul Durrant
2015-12-21 14:10 ` Jan Beulich
2015-11-28 21:45 ` [PATCH v13 8/8] Add xentrace to vmware_port Don Slutz
2016-03-04 21:50 ` ping Re: [PATCH v13 0/8] Xen VMware tools support Konrad Rzeszutek Wilk
8 siblings, 2 replies; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:45 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
This adds synchronization of the 6 vcpu registers (only the low 32
bits of each) that vmport.c needs between Xen and QEMU.
This avoids a 2nd and 3rd exchange between QEMU and Xen to
fetch and put these 6 vcpu registers used by the code in vmport.c
and vmmouse.c.
In the tools, enable usage of QEMU's vmport code.
The most useful VMware port support that QEMU currently has is the
VMware mouse support. Xorg includes a VMware mouse driver that
uses absolute mode. This makes using a mouse in X11 much nicer.
Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: Don Slutz <don.slutz@gmail.com>
---
v13:
Rebased on staging (not a simple rebase, needed rework to function
with changes).
I would have made this !vmport_check_port ...
Changed to !is_vmware, and invert vmport_check_port's return value.
Use 1 one for "list_for_each_entry ( sv, ..."
Added full stop in comments.
v12:
Rebase changes.
Pass size to vmport_check_port() -- required if overlap
I.E. inl on port 0x5657, 0x5656, 0x5655, 0x5659, 0x565a,
and 0x565b.
Move define of vmport_check_port() into this patch from ring3
patch.
v11:
No change
v10:
These literals should become an enum.
I don't think the invalidate type is needed.
Code handling "case X86EMUL_UNHANDLEABLE:" in emulate.c
is unclear.
Comment about "special' range of 1" is not clear.
v9:
New code was presented as an RFC before this.
Paul Durrant sugested I add support for other IOREQ types
to HVMOP_map_io_range_to_ioreq_server.
I have done this.
tools/libxc/xc_dom_x86.c | 5 +-
tools/libxl/libxl_dm.c | 2 +
xen/arch/x86/hvm/emulate.c | 68 +++++++++++---
xen/arch/x86/hvm/hvm.c | 194 ++++++++++++++++++++++++++++++++++-----
xen/arch/x86/hvm/vmware/vmport.c | 14 +++
xen/include/asm-x86/hvm/domain.h | 3 +-
xen/include/asm-x86/hvm/hvm.h | 2 +
xen/include/public/hvm/hvm_op.h | 5 +
xen/include/public/hvm/ioreq.h | 17 ++++
xen/include/public/hvm/params.h | 4 +-
10 files changed, 275 insertions(+), 39 deletions(-)
diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index 5ff33ca..4c8a7fe 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -60,7 +60,8 @@
#define SPECIALPAGE_IOREQ 5
#define SPECIALPAGE_IDENT_PT 6
#define SPECIALPAGE_CONSOLE 7
-#define NR_SPECIAL_PAGES 8
+#define SPECIALPAGE_VMPORT_REGS 8
+#define NR_SPECIAL_PAGES 9
#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
#define NR_IOREQ_SERVER_PAGES 8
@@ -613,6 +614,8 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
special_pfn(SPECIALPAGE_BUFIOREQ));
xc_hvm_param_set(xch, domid, HVM_PARAM_IOREQ_PFN,
special_pfn(SPECIALPAGE_IOREQ));
+ xc_hvm_param_set(xch, domid, HVM_PARAM_VMPORT_REGS_PFN,
+ special_pfn(SPECIALPAGE_VMPORT_REGS));
xc_hvm_param_set(xch, domid, HVM_PARAM_CONSOLE_PFN,
special_pfn(SPECIALPAGE_CONSOLE));
xc_hvm_param_set(xch, domid, HVM_PARAM_PAGING_RING_PFN,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index c76fd90..50e6d93 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1150,6 +1150,8 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
}
}
+ if (libxl_defbool_val(c_info->vmware_port))
+ machinearg = GCSPRINTF("%s,vmport=on", machinearg);
flexarray_append(dm_args, machinearg);
for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
flexarray_append(dm_args, b_info->extra_hvm[i]);
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index e1017b5..d407741 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -113,6 +113,7 @@ static int hvmemul_do_io(
};
void *p_data = (void *)data;
int rc;
+ bool_t is_vmware = !is_mmio && !data_is_addr && vmport_check_port(p.addr, p.size);
/*
* Weird-sized accesses have undefined behaviour: we discard writes
@@ -133,7 +134,7 @@ static int hvmemul_do_io(
p = vio->io_req;
/* Verify the emulation request has been correctly re-issued */
- if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
+ if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
(p.addr != addr) ||
(p.size != size) ||
(p.count != reps) ||
@@ -167,26 +168,65 @@ static int hvmemul_do_io(
vio->io_req.state = STATE_IOREQ_NONE;
break;
case X86EMUL_UNHANDLEABLE:
- {
- struct hvm_ioreq_server *s =
- hvm_select_ioreq_server(curr->domain, &p);
-
- /* If there is no suitable backing DM, just ignore accesses */
- if ( !s )
+ if ( !is_vmware )
{
- rc = hvm_process_io_intercept(&null_handler, &p);
- vio->io_req.state = STATE_IOREQ_NONE;
+ struct hvm_ioreq_server *s =
+ hvm_select_ioreq_server(curr->domain, &p);
+
+ /* If there is no suitable backing DM, just ignore accesses */
+ if ( !s )
+ {
+ rc = hvm_process_io_intercept(&null_handler, &p);
+ vio->io_req.state = STATE_IOREQ_NONE;
+ }
+ else
+ {
+ rc = hvm_send_ioreq(s, &p, 0);
+ if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ vio->io_req.state = STATE_IOREQ_NONE;
+ else if ( data_is_addr )
+ rc = X86EMUL_OKAY;
+ }
}
else
{
- rc = hvm_send_ioreq(s, &p, 0);
- if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ struct hvm_ioreq_server *s;
+ vmware_regs_t *vr;
+
+ BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_regs_t));
+
+ p.type = IOREQ_TYPE_VMWARE_PORT;
+ vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
+ s = hvm_select_ioreq_server(curr->domain, &p);
+ vr = get_vmport_regs_any(s, curr);
+
+ /*
+ * If there is no suitable backing DM, just ignore accesses. If
+ * we do not have access to registers to pass to QEMU, just
+ * ignore access.
+ */
+ if ( !s || !vr )
+ {
+ rc = hvm_process_io_intercept(&null_handler, &p);
vio->io_req.state = STATE_IOREQ_NONE;
- else if ( data_is_addr )
- rc = X86EMUL_OKAY;
+ }
+ else
+ {
+ struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+ p.data = regs->rax;
+ vr->ebx = regs->_ebx;
+ vr->ecx = regs->_ecx;
+ vr->edx = regs->_edx;
+ vr->esi = regs->_esi;
+ vr->edi = regs->_edi;
+
+ rc = hvm_send_ioreq(s, &p, 0);
+ if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ vio->io_req.state = STATE_IOREQ_NONE;
+ }
}
break;
- }
default:
BUG();
}
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c347b63..07b4025 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -415,6 +415,45 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
return &p->vcpu_ioreq[v->vcpu_id];
}
+static vmware_regs_t *get_vmport_regs_one(struct hvm_ioreq_server *s,
+ struct vcpu *v)
+{
+ struct hvm_ioreq_vcpu *sv;
+
+ list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
+ {
+ if ( sv->vcpu == v )
+ {
+ shared_vmport_iopage_t *p = s->vmport_ioreq.va;
+ if ( !p )
+ return NULL;
+ return &p->vcpu_vmport_regs[v->vcpu_id];
+ }
+ }
+ return NULL;
+}
+
+vmware_regs_t *get_vmport_regs_any(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+ struct domain *d = v->domain;
+
+ ASSERT((v == current) || !vcpu_runnable(v));
+
+ if ( s )
+ return get_vmport_regs_one(s, v);
+
+ list_for_each_entry ( s,
+ &d->arch.hvm_domain.ioreq_server.list,
+ list_entry )
+ {
+ vmware_regs_t *ret = get_vmport_regs_one(s, v);
+
+ if ( ret )
+ return ret;
+ }
+ return NULL;
+}
+
bool_t hvm_io_pending(struct vcpu *v)
{
struct domain *d = v->domain;
@@ -536,6 +575,21 @@ void hvm_do_resume(struct vcpu *v)
handle_mmio();
break;
case HVMIO_pio_completion:
+ if ( vio->io_req.type == IOREQ_TYPE_VMWARE_PORT ) {
+ vmware_regs_t *vr = get_vmport_regs_any(NULL, v);
+
+ if ( vr )
+ {
+ struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+ /* Only change the 32bit part of the register. */
+ regs->_ebx = vr->ebx;
+ regs->_ecx = vr->ecx;
+ regs->_edx = vr->edx;
+ regs->_esi = vr->esi;
+ regs->_edi = vr->edi;
+ }
+ }
(void)handle_pio(vio->io_req.addr, vio->io_req.size,
vio->io_req.dir);
break;
@@ -618,22 +672,56 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
}
-static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
+typedef enum {
+ IOREQ_PAGE_TYPE_IOREQ,
+ IOREQ_PAGE_TYPE_BUFIOREQ,
+ IOREQ_PAGE_TYPE_VMPORT,
+} ioreq_page_type_t;
+
+static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, ioreq_page_type_t buf)
{
- struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+ struct hvm_ioreq_page *iorp = NULL;
+
+ switch ( buf )
+ {
+ case IOREQ_PAGE_TYPE_IOREQ:
+ iorp = &s->ioreq;
+ break;
+ case IOREQ_PAGE_TYPE_BUFIOREQ:
+ iorp = &s->bufioreq;
+ break;
+ case IOREQ_PAGE_TYPE_VMPORT:
+ iorp = &s->vmport_ioreq;
+ break;
+ }
+ ASSERT(iorp);
destroy_ring_for_helper(&iorp->va, iorp->page);
}
static int hvm_map_ioreq_page(
- struct hvm_ioreq_server *s, bool_t buf, unsigned long gmfn)
+ struct hvm_ioreq_server *s, ioreq_page_type_t buf, unsigned long gmfn)
{
struct domain *d = s->domain;
- struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+ struct hvm_ioreq_page *iorp = NULL;
struct page_info *page;
void *va;
int rc;
+ switch ( buf )
+ {
+ case IOREQ_PAGE_TYPE_IOREQ:
+ iorp = &s->ioreq;
+ break;
+ case IOREQ_PAGE_TYPE_BUFIOREQ:
+ iorp = &s->bufioreq;
+ break;
+ case IOREQ_PAGE_TYPE_VMPORT:
+ iorp = &s->vmport_ioreq;
+ break;
+ }
+ ASSERT(iorp);
+
if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
return rc;
@@ -849,19 +937,32 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
unsigned long ioreq_pfn,
- unsigned long bufioreq_pfn)
+ unsigned long bufioreq_pfn,
+ unsigned long vmport_ioreq_pfn)
{
int rc;
- rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
+ rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ, ioreq_pfn);
if ( rc )
return rc;
if ( bufioreq_pfn != INVALID_GFN )
- rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
+ rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ, bufioreq_pfn);
if ( rc )
- hvm_unmap_ioreq_page(s, 0);
+ {
+ hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
+ return rc;
+ }
+
+ rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
+
+ if ( rc )
+ {
+ if ( bufioreq_pfn != INVALID_GFN )
+ hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ);
+ hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
+ }
return rc;
}
@@ -873,6 +974,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
struct domain *d = s->domain;
unsigned long ioreq_pfn = INVALID_GFN;
unsigned long bufioreq_pfn = INVALID_GFN;
+ unsigned long vmport_ioreq_pfn =
+ d->arch.hvm_domain.params[HVM_PARAM_VMPORT_REGS_PFN];
int rc;
if ( is_default )
@@ -884,7 +987,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
ASSERT(handle_bufioreq);
return hvm_ioreq_server_map_pages(s,
d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
- d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
+ d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN],
+ vmport_ioreq_pfn);
}
rc = hvm_alloc_ioreq_gmfn(d, &ioreq_pfn);
@@ -893,8 +997,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
if ( !rc )
- rc = hvm_ioreq_server_map_pages(s, ioreq_pfn, bufioreq_pfn);
-
+ rc = hvm_ioreq_server_map_pages(s, ioreq_pfn, bufioreq_pfn,
+ vmport_ioreq_pfn);
if ( rc )
{
hvm_free_ioreq_gmfn(d, ioreq_pfn);
@@ -909,11 +1013,15 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
{
struct domain *d = s->domain;
bool_t handle_bufioreq = ( s->bufioreq.va != NULL );
+ bool_t handle_vmport_ioreq = ( s->vmport_ioreq.va != NULL );
+
+ if ( handle_vmport_ioreq )
+ hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT);
if ( handle_bufioreq )
- hvm_unmap_ioreq_page(s, 1);
+ hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ);
- hvm_unmap_ioreq_page(s, 0);
+ hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
if ( !is_default )
{
@@ -948,12 +1056,38 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
{
char *name;
+ char *type_name = NULL;
+ unsigned int limit;
- rc = asprintf(&name, "ioreq_server %d %s", s->id,
- (i == HVMOP_IO_RANGE_PORT) ? "port" :
- (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
- (i == HVMOP_IO_RANGE_PCI) ? "pci" :
- "");
+ switch ( i )
+ {
+ case HVMOP_IO_RANGE_PORT:
+ type_name = "port";
+ limit = MAX_NR_IO_RANGES;
+ break;
+ case HVMOP_IO_RANGE_MEMORY:
+ type_name = "memory";
+ limit = MAX_NR_IO_RANGES;
+ break;
+ case HVMOP_IO_RANGE_PCI:
+ type_name = "pci";
+ limit = MAX_NR_IO_RANGES;
+ break;
+ case HVMOP_IO_RANGE_VMWARE_PORT:
+ type_name = "VMware port";
+ limit = 1;
+ break;
+ case HVMOP_IO_RANGE_TIMEOFFSET:
+ type_name = "timeoffset";
+ limit = 1;
+ break;
+ default:
+ break;
+ }
+ if ( !type_name )
+ continue;
+
+ rc = asprintf(&name, "ioreq_server %d %s", s->id, type_name);
if ( rc )
goto fail;
@@ -966,7 +1100,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
if ( !s->range[i] )
goto fail;
- rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+ rangeset_limit(s->range[i], limit);
+
+ /* VMware port */
+ if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
+ s->domain->arch.hvm_domain.is_vmware_port_enabled )
+ rc = rangeset_add_range(s->range[i], 1, 1);
}
done:
@@ -1271,6 +1410,8 @@ static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
case HVMOP_IO_RANGE_PORT:
case HVMOP_IO_RANGE_MEMORY:
case HVMOP_IO_RANGE_PCI:
+ case HVMOP_IO_RANGE_VMWARE_PORT:
+ case HVMOP_IO_RANGE_TIMEOFFSET:
r = s->range[type];
break;
@@ -1322,6 +1463,8 @@ static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
case HVMOP_IO_RANGE_PORT:
case HVMOP_IO_RANGE_MEMORY:
case HVMOP_IO_RANGE_PCI:
+ case HVMOP_IO_RANGE_VMWARE_PORT:
+ case HVMOP_IO_RANGE_TIMEOFFSET:
r = s->range[type];
break;
@@ -2558,9 +2701,6 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
return NULL;
- if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
- return d->arch.hvm_domain.default_ioreq_server;
-
cf8 = d->arch.hvm_domain.pci_cf8;
if ( p->type == IOREQ_TYPE_PIO &&
@@ -2613,7 +2753,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
BUILD_BUG_ON(IOREQ_TYPE_PIO != HVMOP_IO_RANGE_PORT);
BUILD_BUG_ON(IOREQ_TYPE_COPY != HVMOP_IO_RANGE_MEMORY);
BUILD_BUG_ON(IOREQ_TYPE_PCI_CONFIG != HVMOP_IO_RANGE_PCI);
+ BUILD_BUG_ON(IOREQ_TYPE_VMWARE_PORT != HVMOP_IO_RANGE_VMWARE_PORT);
+ BUILD_BUG_ON(IOREQ_TYPE_TIMEOFFSET != HVMOP_IO_RANGE_TIMEOFFSET);
r = s->range[type];
+ ASSERT(r);
switch ( type )
{
@@ -2640,6 +2783,13 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
}
break;
+ case IOREQ_TYPE_VMWARE_PORT:
+ case IOREQ_TYPE_TIMEOFFSET:
+ /* The 'special' range of [1,1] is checked for being enabled. */
+ if ( rangeset_contains_singleton(r, 1) )
+ return s;
+
+ break;
}
}
diff --git a/xen/arch/x86/hvm/vmware/vmport.c b/xen/arch/x86/hvm/vmware/vmport.c
index f24d8e3..ee993b3 100644
--- a/xen/arch/x86/hvm/vmware/vmport.c
+++ b/xen/arch/x86/hvm/vmware/vmport.c
@@ -137,6 +137,20 @@ void vmport_register(struct domain *d)
register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
}
+int vmport_check_port(unsigned int port, unsigned int bytes)
+{
+ struct vcpu *curr = current;
+ struct domain *currd = curr->domain;
+
+ if ( port + bytes > BDOOR_PORT && port < BDOOR_PORT + 4 &&
+ is_hvm_domain(currd) &&
+ currd->arch.hvm_domain.is_vmware_port_enabled )
+ {
+ return 1;
+ }
+ return 0;
+}
+
/*
* Local variables:
* mode: C
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 5860d51..d87dedf 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -48,7 +48,7 @@ struct hvm_ioreq_vcpu {
bool_t pending;
};
-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
+#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_VMWARE_PORT + 1)
#define MAX_NR_IO_RANGES 256
struct hvm_ioreq_server {
@@ -63,6 +63,7 @@ struct hvm_ioreq_server {
ioservid_t id;
struct hvm_ioreq_page ioreq;
struct list_head ioreq_vcpu_list;
+ struct hvm_ioreq_page vmport_ioreq;
struct hvm_ioreq_page bufioreq;
/* Lock to serialize access to buffered ioreq ring */
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 857cbb4..78547ec 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -567,6 +567,8 @@ void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
void vmport_register(struct domain *d);
+int vmport_check_port(unsigned int port, unsigned int bytes);
+vmware_regs_t *get_vmport_regs_any(struct hvm_ioreq_server *s, struct vcpu *v);
#endif /* __ASM_X86_HVM_HVM_H__ */
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1606185..3dc99af 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -323,6 +323,9 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
*
* NOTE: unless an emulation request falls entirely within a range mapped
* by a secondary emulator, it will not be passed to that emulator.
+ *
+ * NOTE: The 'special' range of [1,1] is what is checked for on
+ * TIMEOFFSET and VMWARE_PORT.
*/
#define HVMOP_map_io_range_to_ioreq_server 19
#define HVMOP_unmap_io_range_from_ioreq_server 20
@@ -333,6 +336,8 @@ struct xen_hvm_io_range {
# define HVMOP_IO_RANGE_PORT 0 /* I/O port range */
# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
# define HVMOP_IO_RANGE_PCI 2 /* PCI segment/bus/dev/func range */
+# define HVMOP_IO_RANGE_TIMEOFFSET 7 /* TIMEOFFSET special range */
+# define HVMOP_IO_RANGE_VMWARE_PORT 9 /* VMware port special range */
uint64_aligned_t start, end; /* IN - inclusive start and end of range */
};
typedef struct xen_hvm_io_range xen_hvm_io_range_t;
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index 2e5809b..2f326cf 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -37,6 +37,7 @@
#define IOREQ_TYPE_PCI_CONFIG 2
#define IOREQ_TYPE_TIMEOFFSET 7
#define IOREQ_TYPE_INVALIDATE 8 /* mapcache */
+#define IOREQ_TYPE_VMWARE_PORT 9 /* pio + vmport registers */
/*
* VMExit dispatcher should cooperate with instruction decoder to
@@ -48,6 +49,8 @@
*
* 63....48|47..40|39..35|34..32|31........0
* SEGMENT |BUS |DEV |FN |OFFSET
+ *
+ * For I/O type IOREQ_TYPE_VMWARE_PORT also use the vmware_regs.
*/
struct ioreq {
uint64_t addr; /* physical address */
@@ -66,11 +69,25 @@ struct ioreq {
};
typedef struct ioreq ioreq_t;
+struct vmware_regs {
+ uint32_t esi;
+ uint32_t edi;
+ uint32_t ebx;
+ uint32_t ecx;
+ uint32_t edx;
+};
+typedef struct vmware_regs vmware_regs_t;
+
struct shared_iopage {
struct ioreq vcpu_ioreq[1];
};
typedef struct shared_iopage shared_iopage_t;
+struct shared_vmport_iopage {
+ struct vmware_regs vcpu_vmport_regs[1];
+};
+typedef struct shared_vmport_iopage shared_vmport_iopage_t;
+
struct buf_ioreq {
uint8_t type; /* I/O type */
uint8_t pad:1;
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index b437444..61a744f 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -52,6 +52,8 @@
#define HVM_PARAM_PAE_ENABLED 4
#define HVM_PARAM_IOREQ_PFN 5
+/* VMWare Port PFN. */
+#define HVM_PARAM_VMPORT_REGS_PFN 36
#define HVM_PARAM_BUFIOREQ_PFN 6
#define HVM_PARAM_BUFIOREQ_EVTCHN 26
@@ -197,6 +199,6 @@
/* Boolean: Enable altp2m */
#define HVM_PARAM_ALTP2M 35
-#define HVM_NR_PARAMS 36
+#define HVM_NR_PARAMS 37
#endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
--
1.8.3.1
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-11-28 21:45 ` [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT Don Slutz
@ 2015-12-01 11:28 ` Paul Durrant
2015-12-04 0:23 ` Don Slutz
2015-12-21 14:10 ` Jan Beulich
1 sibling, 1 reply; 24+ messages in thread
From: Paul Durrant @ 2015-12-01 11:28 UTC (permalink / raw)
To: Don Slutz, xen-devel
Cc: Kevin Tian, Keir (Xen.org),
Ian Campbell, Andrew Cooper, Eddie Dong, George Dunlap,
Don Slutz, Tim (Xen.org),
Jan Beulich, Stefano Stabellini, Aravind Gopalakrishnan,
Jun Nakajima, Ian Jackson, Wei Liu, Boris Ostrovsky,
Suravee Suthikulpanit
> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of Don Slutz
> Sent: 28 November 2015 21:45
> To: xen-devel@lists.xen.org
> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell; George
> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Don Slutz;
> Tim (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee Suthikulpanit;
> Boris Ostrovsky; Ian Jackson
> Subject: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>
> From: Don Slutz <dslutz@verizon.com>
>
> This adds synchronization of the 6 vcpu registers (only 32bits of
> them) that vmport.c needs between Xen and QEMU.
>
> This is to avoid a 2nd and 3rd exchange between QEMU and Xen to
> fetch and put these 6 vcpu registers used by the code in vmport.c
> and vmmouse.c
>
> In the tools, enable usage of QEMU's vmport code.
>
> The currently most useful VMware port support that QEMU has is the
> VMware mouse support. Xorg included a VMware mouse support that
> uses absolute mode. This make using a mouse in X11 much nicer.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> CC: Don Slutz <don.slutz@gmail.com>
> ---
> v13:
> Rebased on staging (not a simple rebase, needed rework to function
> with changes).
> I would have made this !vmport_check_port ...
> Changed to !is_vmware, and invert vmport_check_port's return value.
> Use 1 one for "list_for_each_entry ( sv, ..."
> Added full stop in comments.
>
> v12:
> Rebase changes.
>
> Pass size to vmport_check_port() -- required if overlap
> I.E. inl on port 0x5657, 0x5656, 0x5655, 0x5659, 0x565a,
> and 0x565b.
>
> Move define of vmport_check_port() into this patch from ring3
> patch.
>
> v11:
> No change
>
> v10:
> These literals should become an enum.
> I don't think the invalidate type is needed.
> Code handling "case X86EMUL_UNHANDLEABLE:" in emulate.c
> is unclear.
> Comment about "special' range of 1" is not clear.
>
>
> v9:
> New code was presented as an RFC before this.
>
> Paul Durrant sugested I add support for other IOREQ types
> to HVMOP_map_io_range_to_ioreq_server.
> I have done this.
>
> tools/libxc/xc_dom_x86.c | 5 +-
> tools/libxl/libxl_dm.c | 2 +
> xen/arch/x86/hvm/emulate.c | 68 +++++++++++---
> xen/arch/x86/hvm/hvm.c | 194 ++++++++++++++++++++++++++++++++++-----
> xen/arch/x86/hvm/vmware/vmport.c | 14 +++
> xen/include/asm-x86/hvm/domain.h | 3 +-
> xen/include/asm-x86/hvm/hvm.h | 2 +
> xen/include/public/hvm/hvm_op.h | 5 +
> xen/include/public/hvm/ioreq.h | 17 ++++
> xen/include/public/hvm/params.h | 4 +-
> 10 files changed, 275 insertions(+), 39 deletions(-)
>
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index 5ff33ca..4c8a7fe 100644
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -60,7 +60,8 @@
> #define SPECIALPAGE_IOREQ 5
> #define SPECIALPAGE_IDENT_PT 6
> #define SPECIALPAGE_CONSOLE 7
> -#define NR_SPECIAL_PAGES 8
> +#define SPECIALPAGE_VMPORT_REGS 8
> +#define NR_SPECIAL_PAGES 9
> #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>
> #define NR_IOREQ_SERVER_PAGES 8
> @@ -613,6 +614,8 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
> special_pfn(SPECIALPAGE_BUFIOREQ));
> xc_hvm_param_set(xch, domid, HVM_PARAM_IOREQ_PFN,
> special_pfn(SPECIALPAGE_IOREQ));
> + xc_hvm_param_set(xch, domid, HVM_PARAM_VMPORT_REGS_PFN,
> + special_pfn(SPECIALPAGE_VMPORT_REGS));
> xc_hvm_param_set(xch, domid, HVM_PARAM_CONSOLE_PFN,
> special_pfn(SPECIALPAGE_CONSOLE));
> xc_hvm_param_set(xch, domid, HVM_PARAM_PAGING_RING_PFN,
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index c76fd90..50e6d93 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -1150,6 +1150,8 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
> }
> }
>
> + if (libxl_defbool_val(c_info->vmware_port))
> + machinearg = GCSPRINTF("%s,vmport=on", machinearg);
> flexarray_append(dm_args, machinearg);
> for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
> flexarray_append(dm_args, b_info->extra_hvm[i]);
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index e1017b5..d407741 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -113,6 +113,7 @@ static int hvmemul_do_io(
> };
> void *p_data = (void *)data;
> int rc;
> + bool_t is_vmware = !is_mmio && !data_is_addr && vmport_check_port(p.addr, p.size);
>
> /*
> * Weird-sized accesses have undefined behaviour: we discard writes
> @@ -133,7 +134,7 @@ static int hvmemul_do_io(
> p = vio->io_req;
>
> /* Verify the emulation request has been correctly re-issued */
> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
> + if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
is_vmware already incorporates !is_mmio, so there's a redundant check in that expression. The extra test also makes it look pretty ugly... probably better re-factored into an if statement.
> (p.addr != addr) ||
> (p.size != size) ||
> (p.count != reps) ||
> @@ -167,26 +168,65 @@ static int hvmemul_do_io(
> vio->io_req.state = STATE_IOREQ_NONE;
> break;
> case X86EMUL_UNHANDLEABLE:
> - {
> - struct hvm_ioreq_server *s =
> - hvm_select_ioreq_server(curr->domain, &p);
> -
> - /* If there is no suitable backing DM, just ignore accesses */
> - if ( !s )
> + if ( !is_vmware )
> {
> - rc = hvm_process_io_intercept(&null_handler, &p);
> - vio->io_req.state = STATE_IOREQ_NONE;
> + struct hvm_ioreq_server *s =
> + hvm_select_ioreq_server(curr->domain, &p);
> +
> + /* If there is no suitable backing DM, just ignore accesses */
> + if ( !s )
> + {
> + rc = hvm_process_io_intercept(&null_handler, &p);
> + vio->io_req.state = STATE_IOREQ_NONE;
> + }
> + else
> + {
> + rc = hvm_send_ioreq(s, &p, 0);
> + if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
> + vio->io_req.state = STATE_IOREQ_NONE;
> + else if ( data_is_addr )
> + rc = X86EMUL_OKAY;
> + }
> }
> else
> {
> - rc = hvm_send_ioreq(s, &p, 0);
> - if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
> + struct hvm_ioreq_server *s;
> + vmware_regs_t *vr;
> +
> + BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_regs_t));
> +
> + p.type = IOREQ_TYPE_VMWARE_PORT;
> + vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
This could be done in a single statement.
> + s = hvm_select_ioreq_server(curr->domain, &p);
> + vr = get_vmport_regs_any(s, curr);
> +
> + /*
> + * If there is no suitable backing DM, just ignore accesses. If
> + * we do not have access to registers to pass to QEMU, just
> + * ignore access.
> + */
> + if ( !s || !vr )
> + {
> + rc = hvm_process_io_intercept(&null_handler, &p);
> vio->io_req.state = STATE_IOREQ_NONE;
> - else if ( data_is_addr )
> - rc = X86EMUL_OKAY;
> + }
> + else
> + {
> + struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> + p.data = regs->rax;
> + vr->ebx = regs->_ebx;
> + vr->ecx = regs->_ecx;
> + vr->edx = regs->_edx;
> + vr->esi = regs->_esi;
> + vr->edi = regs->_edi;
> +
> + rc = hvm_send_ioreq(s, &p, 0);
> + if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
> + vio->io_req.state = STATE_IOREQ_NONE;
> + }
> }
> break;
> - }
> default:
> BUG();
> }
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c347b63..07b4025 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -415,6 +415,45 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
> return &p->vcpu_ioreq[v->vcpu_id];
> }
>
> +static vmware_regs_t *get_vmport_regs_one(struct hvm_ioreq_server *s,
> + struct vcpu *v)
> +{
> + struct hvm_ioreq_vcpu *sv;
> +
> + list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
> + {
> + if ( sv->vcpu == v )
> + {
> + shared_vmport_iopage_t *p = s->vmport_ioreq.va;
> + if ( !p )
> + return NULL;
> + return &p->vcpu_vmport_regs[v->vcpu_id];
> + }
> + }
> + return NULL;
> +}
> +
> +vmware_regs_t *get_vmport_regs_any(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> + struct domain *d = v->domain;
> +
> + ASSERT((v == current) || !vcpu_runnable(v));
> +
> + if ( s )
> + return get_vmport_regs_one(s, v);
> +
> + list_for_each_entry ( s,
> + &d->arch.hvm_domain.ioreq_server.list,
> + list_entry )
> + {
> + vmware_regs_t *ret = get_vmport_regs_one(s, v);
> +
> + if ( ret )
> + return ret;
> + }
> + return NULL;
> +}
> +
> bool_t hvm_io_pending(struct vcpu *v)
> {
> struct domain *d = v->domain;
> @@ -536,6 +575,21 @@ void hvm_do_resume(struct vcpu *v)
> handle_mmio();
> break;
> case HVMIO_pio_completion:
> + if ( vio->io_req.type == IOREQ_TYPE_VMWARE_PORT ) {
> + vmware_regs_t *vr = get_vmport_regs_any(NULL, v);
> +
> + if ( vr )
> + {
> + struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> + /* Only change the 32bit part of the register. */
> + regs->_ebx = vr->ebx;
> + regs->_ecx = vr->ecx;
> + regs->_edx = vr->edx;
> + regs->_esi = vr->esi;
> + regs->_edi = vr->edi;
> + }
> + }
> (void)handle_pio(vio->io_req.addr, vio->io_req.size,
> vio->io_req.dir);
> break;
> @@ -618,22 +672,56 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
> set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> }
>
> -static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
> +typedef enum {
> + IOREQ_PAGE_TYPE_IOREQ,
> + IOREQ_PAGE_TYPE_BUFIOREQ,
> + IOREQ_PAGE_TYPE_VMPORT,
> +} ioreq_page_type_t;
> +
> +static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, ioreq_page_type_t buf)
> {
> - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> + struct hvm_ioreq_page *iorp = NULL;
> +
> + switch ( buf )
> + {
> + case IOREQ_PAGE_TYPE_IOREQ:
> + iorp = &s->ioreq;
> + break;
> + case IOREQ_PAGE_TYPE_BUFIOREQ:
> + iorp = &s->bufioreq;
> + break;
> + case IOREQ_PAGE_TYPE_VMPORT:
> + iorp = &s->vmport_ioreq;
> + break;
> + }
> + ASSERT(iorp);
>
> destroy_ring_for_helper(&iorp->va, iorp->page);
> }
>
> static int hvm_map_ioreq_page(
> - struct hvm_ioreq_server *s, bool_t buf, unsigned long gmfn)
> + struct hvm_ioreq_server *s, ioreq_page_type_t buf, unsigned long gmfn)
> {
> struct domain *d = s->domain;
> - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> + struct hvm_ioreq_page *iorp = NULL;
> struct page_info *page;
> void *va;
> int rc;
>
> + switch ( buf )
> + {
> + case IOREQ_PAGE_TYPE_IOREQ:
> + iorp = &s->ioreq;
> + break;
> + case IOREQ_PAGE_TYPE_BUFIOREQ:
> + iorp = &s->bufioreq;
> + break;
> + case IOREQ_PAGE_TYPE_VMPORT:
> + iorp = &s->vmport_ioreq;
> + break;
> + }
> + ASSERT(iorp);
> +
> if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
> return rc;
>
> @@ -849,19 +937,32 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
>
> static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
> unsigned long ioreq_pfn,
> - unsigned long bufioreq_pfn)
> + unsigned long bufioreq_pfn,
> + unsigned long vmport_ioreq_pfn)
> {
> int rc;
>
> - rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ, ioreq_pfn);
> if ( rc )
> return rc;
>
> if ( bufioreq_pfn != INVALID_GFN )
> - rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ, bufioreq_pfn);
>
> if ( rc )
> - hvm_unmap_ioreq_page(s, 0);
> + {
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
> + return rc;
> + }
> +
> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
Is every ioreq server going to have one of these? It doesn't look like it, so should you not have a validity check on the pfn?
Paul
> +
> + if ( rc )
> + {
> + if ( bufioreq_pfn != INVALID_GFN )
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ);
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
> + }
>
> return rc;
> }
> @@ -873,6 +974,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
> struct domain *d = s->domain;
> unsigned long ioreq_pfn = INVALID_GFN;
> unsigned long bufioreq_pfn = INVALID_GFN;
> + unsigned long vmport_ioreq_pfn =
> + d->arch.hvm_domain.params[HVM_PARAM_VMPORT_REGS_PFN];
> int rc;
>
> if ( is_default )
> @@ -884,7 +987,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
> ASSERT(handle_bufioreq);
> return hvm_ioreq_server_map_pages(s,
> d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
> - d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
> + d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN],
> + vmport_ioreq_pfn);
> }
>
> rc = hvm_alloc_ioreq_gmfn(d, &ioreq_pfn);
> @@ -893,8 +997,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
> rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
>
> if ( !rc )
> - rc = hvm_ioreq_server_map_pages(s, ioreq_pfn, bufioreq_pfn);
> -
> + rc = hvm_ioreq_server_map_pages(s, ioreq_pfn, bufioreq_pfn,
> + vmport_ioreq_pfn);
> if ( rc )
> {
> hvm_free_ioreq_gmfn(d, ioreq_pfn);
> @@ -909,11 +1013,15 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
> {
> struct domain *d = s->domain;
> bool_t handle_bufioreq = ( s->bufioreq.va != NULL );
> + bool_t handle_vmport_ioreq = ( s->vmport_ioreq.va != NULL );
> +
> + if ( handle_vmport_ioreq )
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT);
>
> if ( handle_bufioreq )
> - hvm_unmap_ioreq_page(s, 1);
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ);
>
> - hvm_unmap_ioreq_page(s, 0);
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
>
> if ( !is_default )
> {
> @@ -948,12 +1056,38 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
> for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
> {
> char *name;
> + char *type_name = NULL;
> + unsigned int limit;
>
> - rc = asprintf(&name, "ioreq_server %d %s", s->id,
> - (i == HVMOP_IO_RANGE_PORT) ? "port" :
> - (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
> - (i == HVMOP_IO_RANGE_PCI) ? "pci" :
> - "");
> + switch ( i )
> + {
> + case HVMOP_IO_RANGE_PORT:
> + type_name = "port";
> + limit = MAX_NR_IO_RANGES;
> + break;
> + case HVMOP_IO_RANGE_MEMORY:
> + type_name = "memory";
> + limit = MAX_NR_IO_RANGES;
> + break;
> + case HVMOP_IO_RANGE_PCI:
> + type_name = "pci";
> + limit = MAX_NR_IO_RANGES;
> + break;
> + case HVMOP_IO_RANGE_VMWARE_PORT:
> + type_name = "VMware port";
> + limit = 1;
> + break;
> + case HVMOP_IO_RANGE_TIMEOFFSET:
> + type_name = "timeoffset";
> + limit = 1;
> + break;
> + default:
> + break;
> + }
> + if ( !type_name )
> + continue;
> +
> + rc = asprintf(&name, "ioreq_server %d %s", s->id, type_name);
> if ( rc )
> goto fail;
>
> @@ -966,7 +1100,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
> if ( !s->range[i] )
> goto fail;
>
> - rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
> + rangeset_limit(s->range[i], limit);
> +
> + /* VMware port */
> + if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
> + s->domain->arch.hvm_domain.is_vmware_port_enabled )
> + rc = rangeset_add_range(s->range[i], 1, 1);
> }
>
> done:
> @@ -1271,6 +1410,8 @@ static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
> case HVMOP_IO_RANGE_PORT:
> case HVMOP_IO_RANGE_MEMORY:
> case HVMOP_IO_RANGE_PCI:
> + case HVMOP_IO_RANGE_VMWARE_PORT:
> + case HVMOP_IO_RANGE_TIMEOFFSET:
> r = s->range[type];
> break;
>
> @@ -1322,6 +1463,8 @@ static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
> case HVMOP_IO_RANGE_PORT:
> case HVMOP_IO_RANGE_MEMORY:
> case HVMOP_IO_RANGE_PCI:
> + case HVMOP_IO_RANGE_VMWARE_PORT:
> + case HVMOP_IO_RANGE_TIMEOFFSET:
> r = s->range[type];
> break;
>
> @@ -2558,9 +2701,6 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
> return NULL;
>
> - if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> - return d->arch.hvm_domain.default_ioreq_server;
> -
> cf8 = d->arch.hvm_domain.pci_cf8;
>
> if ( p->type == IOREQ_TYPE_PIO &&
> @@ -2613,7 +2753,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> BUILD_BUG_ON(IOREQ_TYPE_PIO != HVMOP_IO_RANGE_PORT);
> BUILD_BUG_ON(IOREQ_TYPE_COPY != HVMOP_IO_RANGE_MEMORY);
> BUILD_BUG_ON(IOREQ_TYPE_PCI_CONFIG != HVMOP_IO_RANGE_PCI);
> + BUILD_BUG_ON(IOREQ_TYPE_VMWARE_PORT != HVMOP_IO_RANGE_VMWARE_PORT);
> + BUILD_BUG_ON(IOREQ_TYPE_TIMEOFFSET != HVMOP_IO_RANGE_TIMEOFFSET);
> r = s->range[type];
> + ASSERT(r);
>
> switch ( type )
> {
> @@ -2640,6 +2783,13 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> }
>
> break;
> + case IOREQ_TYPE_VMWARE_PORT:
> + case IOREQ_TYPE_TIMEOFFSET:
> + /* The 'special' range of [1,1] is checked for being enabled. */
> + if ( rangeset_contains_singleton(r, 1) )
> + return s;
> +
> + break;
> }
> }
>
> diff --git a/xen/arch/x86/hvm/vmware/vmport.c b/xen/arch/x86/hvm/vmware/vmport.c
> index f24d8e3..ee993b3 100644
> --- a/xen/arch/x86/hvm/vmware/vmport.c
> +++ b/xen/arch/x86/hvm/vmware/vmport.c
> @@ -137,6 +137,20 @@ void vmport_register(struct domain *d)
> register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
> }
>
> +int vmport_check_port(unsigned int port, unsigned int bytes)
> +{
> + struct vcpu *curr = current;
> + struct domain *currd = curr->domain;
> +
> + if ( port + bytes > BDOOR_PORT && port < BDOOR_PORT + 4 &&
> + is_hvm_domain(currd) &&
> + currd->arch.hvm_domain.is_vmware_port_enabled )
> + {
> + return 1;
> + }
> + return 0;
> +}
> +
> /*
> * Local variables:
> * mode: C
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 5860d51..d87dedf 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -48,7 +48,7 @@ struct hvm_ioreq_vcpu {
> bool_t pending;
> };
>
> -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
> +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_VMWARE_PORT + 1)
> #define MAX_NR_IO_RANGES 256
>
> struct hvm_ioreq_server {
> @@ -63,6 +63,7 @@ struct hvm_ioreq_server {
> ioservid_t id;
> struct hvm_ioreq_page ioreq;
> struct list_head ioreq_vcpu_list;
> + struct hvm_ioreq_page vmport_ioreq;
> struct hvm_ioreq_page bufioreq;
>
> /* Lock to serialize access to buffered ioreq ring */
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 857cbb4..78547ec 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -567,6 +567,8 @@ void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
> bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
>
> void vmport_register(struct domain *d);
> +int vmport_check_port(unsigned int port, unsigned int bytes);
> +vmware_regs_t *get_vmport_regs_any(struct hvm_ioreq_server *s, struct vcpu *v);
>
> #endif /* __ASM_X86_HVM_HVM_H__ */
>
> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> index 1606185..3dc99af 100644
> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -323,6 +323,9 @@
> DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
> *
> * NOTE: unless an emulation request falls entirely within a range mapped
> * by a secondary emulator, it will not be passed to that emulator.
> + *
> + * NOTE: The 'special' range of [1,1] is what is checked for on
> + * TIMEOFFSET and VMWARE_PORT.
> */
> #define HVMOP_map_io_range_to_ioreq_server 19
> #define HVMOP_unmap_io_range_from_ioreq_server 20
> @@ -333,6 +336,8 @@ struct xen_hvm_io_range {
> # define HVMOP_IO_RANGE_PORT 0 /* I/O port range */
> # define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
> # define HVMOP_IO_RANGE_PCI 2 /* PCI segment/bus/dev/func range */
> +# define HVMOP_IO_RANGE_TIMEOFFSET 7 /* TIMEOFFSET special range */
> +# define HVMOP_IO_RANGE_VMWARE_PORT 9 /* VMware port special range */
> uint64_aligned_t start, end; /* IN - inclusive start and end of range */
> };
> typedef struct xen_hvm_io_range xen_hvm_io_range_t;
> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index 2e5809b..2f326cf 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -37,6 +37,7 @@
> #define IOREQ_TYPE_PCI_CONFIG 2
> #define IOREQ_TYPE_TIMEOFFSET 7
> #define IOREQ_TYPE_INVALIDATE 8 /* mapcache */
> +#define IOREQ_TYPE_VMWARE_PORT 9 /* pio + vmport registers */
>
> /*
> * VMExit dispatcher should cooperate with instruction decoder to
> @@ -48,6 +49,8 @@
> *
> * 63....48|47..40|39..35|34..32|31........0
> * SEGMENT |BUS |DEV |FN |OFFSET
> + *
> + * For I/O type IOREQ_TYPE_VMWARE_PORT also use the vmware_regs.
> */
> struct ioreq {
> uint64_t addr; /* physical address */
> @@ -66,11 +69,25 @@ struct ioreq {
> };
> typedef struct ioreq ioreq_t;
>
> +struct vmware_regs {
> + uint32_t esi;
> + uint32_t edi;
> + uint32_t ebx;
> + uint32_t ecx;
> + uint32_t edx;
> +};
> +typedef struct vmware_regs vmware_regs_t;
> +
> struct shared_iopage {
> struct ioreq vcpu_ioreq[1];
> };
> typedef struct shared_iopage shared_iopage_t;
>
> +struct shared_vmport_iopage {
> + struct vmware_regs vcpu_vmport_regs[1];
> +};
> +typedef struct shared_vmport_iopage shared_vmport_iopage_t;
> +
> struct buf_ioreq {
> uint8_t type; /* I/O type */
> uint8_t pad:1;
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index b437444..61a744f 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -52,6 +52,8 @@
> #define HVM_PARAM_PAE_ENABLED 4
>
> #define HVM_PARAM_IOREQ_PFN 5
> +/* VMWare Port PFN. */
> +#define HVM_PARAM_VMPORT_REGS_PFN 36
>
> #define HVM_PARAM_BUFIOREQ_PFN 6
> #define HVM_PARAM_BUFIOREQ_EVTCHN 26
> @@ -197,6 +199,6 @@
> /* Boolean: Enable altp2m */
> #define HVM_PARAM_ALTP2M 35
>
> -#define HVM_NR_PARAMS 36
> +#define HVM_NR_PARAMS 37
>
> #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
> --
> 1.8.3.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-12-01 11:28 ` Paul Durrant
@ 2015-12-04 0:23 ` Don Slutz
2015-12-04 8:59 ` Paul Durrant
0 siblings, 1 reply; 24+ messages in thread
From: Don Slutz @ 2015-12-04 0:23 UTC (permalink / raw)
To: Paul Durrant, xen-devel
Cc: Kevin Tian, Keir (Xen.org),
Ian Campbell, Andrew Cooper, Eddie Dong, George Dunlap,
Don Slutz, Tim (Xen.org),
Jan Beulich, Stefano Stabellini, Aravind Gopalakrishnan,
Jun Nakajima, Ian Jackson, Wei Liu, Boris Ostrovsky,
Suravee Suthikulpanit
On 12/01/15 06:28, Paul Durrant wrote:
>> -----Original Message-----
>> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
>> bounces@lists.xen.org] On Behalf Of Don Slutz
>> Sent: 28 November 2015 21:45
>> To: xen-devel@lists.xen.org
>> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell; George
>> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Don Slutz;
>> Tim (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee Suthikulpanit;
>> Boris Ostrovsky; Ian Jackson
>> Subject: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>>
>> From: Don Slutz <dslutz@verizon.com>
>>
...
>>
>> /* Verify the emulation request has been correctly re-issued */
>> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>> + if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
>
> is_vmware already incorporated !is_mmio so there's a redundant
> check in that expression. The extra test also makes it look
> pretty ugly... probably better re-factored into an if
> statement.
>
Ok, will add a variable that is set via an if statement. Thinking about:
case STATE_IORESP_READY:
+ {
+ uint8_t calc_type = p.type;
+
+ if ( is_vmware )
+ calc_type = IOREQ_TYPE_VMWARE_PORT;
+
vio->io_req.state = STATE_IOREQ_NONE;
p = vio->io_req;
/* Verify the emulation request has been correctly re-issued */
- if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
+ if ( (p.type != calc_type) ||
>> (p.addr != addr) ||
>> (p.size != size) ||
>> (p.count != reps) ||
...
>> +
>> + p.type = IOREQ_TYPE_VMWARE_PORT;
>> + vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
>
> This could be done in a single statement.
>
Ok.
p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
or
vio->io_req.type = p.type = IOREQ_TYPE_VMWARE_PORT;
Which is clearer to you?
>> + s = hvm_select_ioreq_server(curr->domain, &p);
...
>>
>> if ( rc )
>> - hvm_unmap_ioreq_page(s, 0);
>> + {
>> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
>> + return rc;
>> + }
>> +
>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
>
> Is every ioreq server going to have one of these? It doesn't look
> like it, so should you not have validity check on the pfn?
>
Currently the default is that all ioreq servers get the mapping:
+ /* VMware port */
+ if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
+ s->domain->arch.hvm_domain.is_vmware_port_enabled )
+ rc = rangeset_add_range(s->range[i], 1, 1);
but you are right that a check on is_vmware_port_enabled should be
added. Will do.
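As an aside, the "special" [1,1] range effectively serves as a per-server
enable bit: the rangeset either contains the singleton 1 (feature enabled
for that ioreq server) or is empty. A minimal standalone model of that
idiom, with hypothetical names (this is not the Xen rangeset API):

```c
/* Model of a rangeset used as an enable flag via the singleton [1,1]. */
struct range_flag {
    int present;            /* does the set contain anything? */
    unsigned long s, e;     /* inclusive bounds */
};

/* Analogous to rangeset_add_range(r, 1, 1) in the patch. */
static void range_flag_enable(struct range_flag *r)
{
    r->s = r->e = 1;
    r->present = 1;
}

/* Analogous to rangeset_contains_singleton(r, 1). */
static int contains_singleton(const struct range_flag *r, unsigned long v)
{
    return r->present && r->s == v && r->e == v;
}
```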
-Don Slutz
> Paul
>
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-12-04 0:23 ` Don Slutz
@ 2015-12-04 8:59 ` Paul Durrant
2015-12-04 21:31 ` Don Slutz
0 siblings, 1 reply; 24+ messages in thread
From: Paul Durrant @ 2015-12-04 8:59 UTC (permalink / raw)
To: Don Slutz, xen-devel
Cc: Kevin Tian, Keir (Xen.org),
Ian Campbell, Andrew Cooper, Eddie Dong, George Dunlap,
Don Slutz, Tim (Xen.org),
Jan Beulich, Stefano Stabellini, Aravind Gopalakrishnan,
Jun Nakajima, Ian Jackson, Wei Liu, Boris Ostrovsky,
Suravee Suthikulpanit
> -----Original Message-----
> From: Don Slutz [mailto:don.slutz@gmail.com]
> Sent: 04 December 2015 00:23
> To: Paul Durrant; xen-devel@lists.xen.org
> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell; George
> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Tim
> (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee Suthikulpanit; Boris
> Ostrovsky; Ian Jackson
> Subject: Re: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>
> On 12/01/15 06:28, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> >> bounces@lists.xen.org] On Behalf Of Don Slutz
> >> Sent: 28 November 2015 21:45
> >> To: xen-devel@lists.xen.org
> >> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell;
> George
> >> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Don
> Slutz;
> >> Tim (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee
> Suthikulpanit;
> >> Boris Ostrovsky; Ian Jackson
> >> Subject: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
> >>
> >> From: Don Slutz <dslutz@verizon.com>
> >>
> ...
> >>
> >> /* Verify the emulation request has been correctly re-issued */
> >> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
> >> + if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
> >
> > is_vmware already incorporated !is_mmio so there's a redundant
> > check in that expression. The extra test also makes it look
> > pretty ugly... probably better re-factored into an if
> > statement.
> >
>
> Ok, Will add a variable, that is set via an if statement. Thinking about:
>
> case STATE_IORESP_READY:
> + {
> + uint8_t calc_type = p.type;
> +
> + if ( is_vmware )
> + calc_type = IOREQ_TYPE_VMWARE_PORT;
> +
> vio->io_req.state = STATE_IOREQ_NONE;
> p = vio->io_req;
>
> /* Verify the emulation request has been correctly re-issued */
> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
> + if ( (p.type != calc_type) ||
>
>
>
>
> >> (p.addr != addr) ||
> >> (p.size != size) ||
> >> (p.count != reps) ||
> ...
> >> +
> >> + p.type = IOREQ_TYPE_VMWARE_PORT;
> >> + vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
> >
> > This could be done in a single statement.
> >
>
> Ok.
>
> p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
>
> or
>
> vio->io_req.type = p.type = IOREQ_TYPE_VMWARE_PORT;
>
> is clearer to you?
I think that the former is probably better.
>
> >> + s = hvm_select_ioreq_server(curr->domain, &p);
> ...
> >>
> >> if ( rc )
> >> - hvm_unmap_ioreq_page(s, 0);
> >> + {
> >> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
> >> + return rc;
> >> + }
> >> +
> >> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
> >
> > Is every ioreq server going to have one of these? It doesn't look
> > like it, so should you not have validity check on the pfn?
> >
>
>
> Currently the default is that all ioreq servers get the mapping:
>
That's probably a bit wasteful. It should probably be selectable in the create HVM op. I don't know whether you'd even need it there in the default server since I guess the QEMU end of things post-dates the use of the HVM op (rather than the old param).
> + /* VMware port */
> + if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
> + s->domain->arch.hvm_domain.is_vmware_port_enabled )
> + rc = rangeset_add_range(s->range[i], 1, 1);
>
>
>
> but you are right that a check on is_vmware_port_enabled should be
> added. Will do.
Cool. Thanks,
Paul
>
> -Don Slutz
>
> > Paul
> >
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-12-04 8:59 ` Paul Durrant
@ 2015-12-04 21:31 ` Don Slutz
2015-12-07 13:36 ` Paul Durrant
0 siblings, 1 reply; 24+ messages in thread
From: Don Slutz @ 2015-12-04 21:31 UTC (permalink / raw)
To: Paul Durrant, xen-devel
Cc: Kevin Tian, Keir (Xen.org),
Ian Campbell, Andrew Cooper, Eddie Dong, George Dunlap,
Don Slutz, Tim (Xen.org),
Jan Beulich, Stefano Stabellini, Aravind Gopalakrishnan,
Jun Nakajima, Ian Jackson, Wei Liu, Boris Ostrovsky,
Suravee Suthikulpanit
On 12/04/15 03:59, Paul Durrant wrote:
>> -----Original Message-----
>> From: Don Slutz [mailto:don.slutz@gmail.com]
>> Sent: 04 December 2015 00:23
>> To: Paul Durrant; xen-devel@lists.xen.org
>> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell; George
>> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Tim
>> (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee Suthikulpanit; Boris
>> Ostrovsky; Ian Jackson
>> Subject: Re: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>>
>> On 12/01/15 06:28, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
>>>> bounces@lists.xen.org] On Behalf Of Don Slutz
>>>> Sent: 28 November 2015 21:45
>>>> To: xen-devel@lists.xen.org
>>>> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell;
>> George
>>>> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Don
>> Slutz;
>>>> Tim (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee
>> Suthikulpanit;
>>>> Boris Ostrovsky; Ian Jackson
>>>> Subject: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>>>>
>>>> From: Don Slutz <dslutz@verizon.com>
>>>>
>> ...
>>>>
>>>> /* Verify the emulation request has been correctly re-issued */
>>>> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>>>> + if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
>>>
>>> is_vmware already incorporated !is_mmio so there's a redundant
>>> check in that expression. The extra test also makes it look
>>> pretty ugly... probably better re-factored into an if
>>> statement.
>>>
>>
>> Ok, Will add a variable, that is set via an if statement. Thinking about:
>>
>> case STATE_IORESP_READY:
>> + {
>> + uint8_t calc_type = p.type;
>> +
>> + if ( is_vmware )
>> + calc_type = IOREQ_TYPE_VMWARE_PORT;
>> +
>> vio->io_req.state = STATE_IOREQ_NONE;
>> p = vio->io_req;
>>
>> /* Verify the emulation request has been correctly re-issued */
>> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>> + if ( (p.type != calc_type) ||
>>
>>
>>
>>
>>>> (p.addr != addr) ||
>>>> (p.size != size) ||
>>>> (p.count != reps) ||
>> ...
>>>> +
>>>> + p.type = IOREQ_TYPE_VMWARE_PORT;
>>>> + vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
>>>
>>> This could be done in a single statement.
>>>
>>
>> Ok.
>>
>> p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
>>
>> or
>>
>> vio->io_req.type = p.type = IOREQ_TYPE_VMWARE_PORT;
>>
>> is clearer to you?
>
> I think that the former is probably better.
>
Will use this.
>>
>>>> + s = hvm_select_ioreq_server(curr->domain, &p);
>> ...
>>>>
>>>> if ( rc )
>>>> - hvm_unmap_ioreq_page(s, 0);
>>>> + {
>>>> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
>>>> + return rc;
>>>> + }
>>>> +
>>>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
>>>
>>> Is every ioreq server going to have one of these? It doesn't look
>>> like it, so should you not have validity check on the pfn?
>>>
>>
>>
>> Currently the default is that all ioreq servers get the mapping:
>>
>
> That's probably a bit wasteful. It should probably be
> selectable in the create HVM op.
Since the most common case is QEMU, which can use it since upstream
version 2.2.0, the waste is really small. If a non-QEMU ioreq server does
not want it, it adds 0 overhead. The only case I know of (which is PVH
related) is if QEMU is not running and you add a non-QEMU ioreq server
that does not use it; then you get 1 page + page table entries.
While the following exists:
#define HVM_IOREQSRV_BUFIOREQ_OFF 0
#define HVM_IOREQSRV_BUFIOREQ_LEGACY 1
/*
* Use this when read_pointer gets updated atomically and
* the pointer pair gets read atomically:
*/
#define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
uint8_t handle_bufioreq; /* IN - should server handle buffered ioreqs */
I see no tests that use these. Also, adding vmport enable/disable to
handle_bufioreq is much more of a hack than I expect would pass a code
review.
Does not look simple to add a new additional argument, but that could
just be my lack of understanding in the area.
> I don't know whether you'd
> even need it there in the default server since I guess the QEMU
> end of things post-dates the use of the HVM op (rather than the
> old param).
>
Not sure how to parse "post-dates". Here are some tables about this that
I know of:
xen    Supports create_ioreq_server   handle_bufioreq
4.5    yes                            0 or !0
4.6    yes                            0 or !0
4.7    yes                            0 or !0

Upstream QEMU   vmport support   is_default   atomic
2.2.x           yes              yes          n/a
2.3.x           yes              no           no
2.4.x           yes              no           no
2.5.x           yes              no           yes

Xen QEMU   vmport support   is_default   atomic
4.5.x      no               yes          n/a
4.6.x      yes              no           yes
4.7.x      yes              no           yes
With just a "xen only" build, you will not get a QEMU that is a default
ioreq server. However it looks possible to mix Xen and QEMU and get
this case.
So unless I hear otherwise, I will not be making a change here.
>> + /* VMware port */
>> + if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
>> + s->domain->arch.hvm_domain.is_vmware_port_enabled )
>> + rc = rangeset_add_range(s->range[i], 1, 1);
>>
>>
>>
>> but you are right that a check on is_vmware_port_enabled should be
>> added. Will do.
>
> Cool. Thanks,
>
> Paul
>
>>
>> -Don Slutz
>>
>>> Paul
>>>
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-12-04 21:31 ` Don Slutz
@ 2015-12-07 13:36 ` Paul Durrant
2015-12-14 12:39 ` Don Slutz
0 siblings, 1 reply; 24+ messages in thread
From: Paul Durrant @ 2015-12-07 13:36 UTC (permalink / raw)
To: Don Slutz, xen-devel
Cc: Kevin Tian, Keir (Xen.org),
Ian Campbell, Andrew Cooper, Eddie Dong, George Dunlap,
Don Slutz, Tim (Xen.org),
Jan Beulich, Stefano Stabellini, Aravind Gopalakrishnan,
Jun Nakajima, Ian Jackson, Wei Liu, Boris Ostrovsky,
Suravee Suthikulpanit
> -----Original Message-----
[snip]
> >>>>
> >>>> if ( rc )
> >>>> - hvm_unmap_ioreq_page(s, 0);
> >>>> + {
> >>>> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
> >>>> + return rc;
> >>>> + }
> >>>> +
> >>>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
> >>>
> >>> Is every ioreq server going to have one of these? It doesn't look
> >>> like it, so should you not have validity check on the pfn?
> >>>
> >>
> >>
> >> Currently the default is that all ioreq servers get the mapping:
> >>
> >
> > That's probably a bit wasteful. It should probably be
> > selectable in the create HVM op.
>
> Since the most common case is QEMU and it can use it since upstream
> version 2.2.0, the waste is real small. If a non-QEMU ioreq server does
> not want it, it add 0 overhead.
It's not 0 overhead if an extra magic page is used for each ioreq server, is it?
> The only case I know of (which is PVH
> related to PVH) is if QEMU is not running and you add a non-QEMU ioreq
> server that does not use it, you get 1 page + page table entries.
>
> While the following exists:
>
> #define HVM_IOREQSRV_BUFIOREQ_OFF 0
> #define HVM_IOREQSRV_BUFIOREQ_LEGACY 1
> /*
> * Use this when read_pointer gets updated atomically and
> * the pointer pair gets read atomically:
> */
> #define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
> uint8_t handle_bufioreq; /* IN - should server handle buffered ioreqs */
>
> I see no tests that use these. Also adding vmport enable/disable to
> handle_bufioreq is much more of a hack then I expect to pass a code
> review.
>
> Does not look simple to add a new additional argument, but that could
> just be my lack of understanding in the area.
It doesn't look that bad. The bufioreq flag has now expanded from 1 bit to 2 bits, but you could rename 'handle_bufioreq' to 'flags' or some such and then use bit 3 to indicate whether or not the vmport ioreq page should be allocated.
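One way the suggested encoding could look, as a standalone sketch: the
existing bufioreq values stay in the low bits and one more bit carries the
vmport choice. The bit position, the mask, and the HVM_IOREQSRV_NO_VMPORT_PAGE
name are assumptions for illustration, not the final ABI.

```c
#include <stdint.h>

/* Existing values (from xen/include/public/hvm/hvm_op.h). */
#define HVM_IOREQSRV_BUFIOREQ_OFF     0
#define HVM_IOREQSRV_BUFIOREQ_LEGACY  1
#define HVM_IOREQSRV_BUFIOREQ_ATOMIC  2
#define HVM_IOREQSRV_BUFIOREQ_MASK    0x3  /* assumed: low two bits */

/* Assumed negative flag: old callers pass 0 here and still get the
 * vmport regs page; new callers set this bit to opt out. */
#define HVM_IOREQSRV_NO_VMPORT_PAGE   0x4

/* Extract the buffered-ioreq mode from the combined 'flags' field. */
static uint8_t bufioreq_mode(uint8_t flags)
{
    return flags & HVM_IOREQSRV_BUFIOREQ_MASK;
}

/* Should a vmport regs page be mapped for this ioreq server? */
static int want_vmport_page(uint8_t flags)
{
    return !(flags & HVM_IOREQSRV_NO_VMPORT_PAGE);
}
```

With a layout like this, a 'handle_bufioreq' value from an old QEMU decodes
unchanged, which is what makes the negative flag backward compatible.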
>
> > I don't know whether you'd
> > even need it there in the default server since I guess the QEMU
> > end of things post-dates the use of the HVM op (rather than the
> > old param).
> >
>
> Not sure how to parse "post-dates". Here is some tables about this that
> I know about:
>
>
> xen    Supports create_ioreq_server   handle_bufioreq
> 4.5    yes                            0 or !0
> 4.6    yes                            0 or !0
> 4.7    yes                            0 or !0
>
> Upstream QEMU   vmport support   is_default   atomic
> 2.2.x           yes              yes          n/a
> 2.3.x           yes              no           no
> 2.4.x           yes              no           no
> 2.5.x           yes              no           yes
>
> Xen QEMU   vmport support   is_default   atomic
> 4.5.x      no               yes          n/a
> 4.6.x      yes              no           yes
> 4.7.x      yes              no           yes
>
> With just a "xen only" build, you will not get a QEMU that is a default
> ioreq server. However it looks possible to mix Xen and QEMU and get
> this case.
>
What I meant was that I believe the vmport ioreq page will only be used by a QEMU that also makes use of the create_ioreq_server HVM op, so you don't have to care about making it work with older QEMU. It looks like 2.2.x may prove me wrong, though, in which case it should be present in the default ioreq server but still optional for all others.
Paul
> So unless I hear otherwise, I will not be making a change here.
>
> >> + /* VMware port */
> >> + if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
> >> + s->domain->arch.hvm_domain.is_vmware_port_enabled )
> >> + rc = rangeset_add_range(s->range[i], 1, 1);
> >>
> >>
> >>
> >> but you are right that a check on is_vmware_port_enabled should be
> >> added. Will do.
> >
> > Cool. Thanks,
> >
> > Paul
> >
> >>
> >> -Don Slutz
> >>
> >>> Paul
> >>>
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-12-07 13:36 ` Paul Durrant
@ 2015-12-14 12:39 ` Don Slutz
0 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-12-14 12:39 UTC (permalink / raw)
To: Paul Durrant, xen-devel
Cc: Kevin Tian, Keir (Xen.org),
Ian Campbell, Andrew Cooper, Eddie Dong, George Dunlap,
Don Slutz, Tim (Xen.org),
Jan Beulich, Stefano Stabellini, Aravind Gopalakrishnan,
Jun Nakajima, Ian Jackson, Wei Liu, Boris Ostrovsky,
Suravee Suthikulpanit
On 12/07/15 08:36, Paul Durrant wrote:
>> -----Original Message-----
> [snip]
>>>>>>
>>>>>> if ( rc )
>>>>>> - hvm_unmap_ioreq_page(s, 0);
>>>>>> + {
>>>>>> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
>>>>>> + return rc;
>>>>>> + }
>>>>>> +
>>>>>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
>>>>>
>>>>> Is every ioreq server going to have one of these? It doesn't look
>>>>> like it, so should you not have validity check on the pfn?
>>>>>
>>>>
>>>>
>>>> Currently the default is that all ioreq servers get the mapping:
>>>>
>>>
>>> That's probably a bit wasteful. It should probably be
>>> selectable in the create HVM op.
>>
>> Since the most common case is QEMU and it can use it since upstream
>> version 2.2.0, the waste is real small. If a non-QEMU ioreq server does
>> not want it, it add 0 overhead.
>
> It's not 0 overhead if an extra magic page is used for each ioreq server is it?
>
My understanding is that the Xen overhead is 1 entry in p2m for each
ioreq server.
>> The only case I know of (which is PVH
>> related to PVH) is if QEMU is not running and you add a non-QEMU ioreq
>> server that does not use it, you get 1 page + page table entries.
>>
>> While the following exists:
>>
>> #define HVM_IOREQSRV_BUFIOREQ_OFF 0
>> #define HVM_IOREQSRV_BUFIOREQ_LEGACY 1
>> /*
>> * Use this when read_pointer gets updated atomically and
>> * the pointer pair gets read atomically:
>> */
>> #define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
>> uint8_t handle_bufioreq; /* IN - should server handle buffered ioreqs */
>>
>> I see no tests that use these. Also adding vmport enable/disable to
>> handle_bufioreq is much more of a hack then I expect to pass a code
>> review.
>>
>> Does not look simple to add a new additional argument, but that could
>> just be my lack of understanding in the area.
>
> It doesn't look that bad. The bufioreq flag has now expanded
> from 1 bit to 2 bits, but you could rename 'handle_bufioreq' to
> 'flags' or some such and then use bit 3 to indicate whether or
> not the vmport ioreq page should be allocated.
>
Ok, I will code this up and plan to go with it. Since old QEMUs need to
be supported, bit 3 will be a negative flag: when set, no page will be
mapped.
>>
>>> I don't know whether you'd
>>> even need it there in the default server since I guess the QEMU
>>> end of things post-dates the use of the HVM op (rather than the
>>> old param).
>>>
>>
>> Not sure how to parse "post-dates". Here is some tables about this that
>> I know about:
>>
>>
>> xen    Supports create_ioreq_server   handle_bufioreq
>> 4.5    yes                            0 or !0
>> 4.6    yes                            0 or !0
>> 4.7    yes                            0 or !0
>>
>> Upstream QEMU   vmport support   is_default   atomic
>> 2.2.x           yes              yes          n/a
>> 2.3.x           yes              no           no
>> 2.4.x           yes              no           no
>> 2.5.x           yes              no           yes
>>
>> Xen QEMU   vmport support   is_default   atomic
>> 4.5.x      no               yes          n/a
>> 4.6.x      yes              no           yes
>> 4.7.x      yes              no           yes
>>
>> With just a "xen only" build, you will not get a QEMU that is a default
>> ioreq server. However it looks possible to mix Xen and QEMU and get
>> this case.
>>
>
> What I meant was that I believe that the vmport ioreq page will
> only be used by a qemu that also make use of the
> create_ioreq_server hmvop, so you don't have to care about
> making it work with older QEMU. It looks like 2.2.x may prove
> me wrong though in which case it should be present in the
> default ioreq server but still optional for all others.
>
Yes, the default ioreq server will get the mapping when enabled;
it is optional for all others.
-Don Slutz
> Paul
>
>> So unless I hear otherwise, I will not be making a change here.
>>
>>>> + /* VMware port */
>>>> + if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
>>>> + s->domain->arch.hvm_domain.is_vmware_port_enabled )
>>>> + rc = rangeset_add_range(s->range[i], 1, 1);
>>>>
>>>>
>>>>
>>>> but you are right that a check on is_vmware_port_enabled should be
>>>> added. Will do.
>>>
>>> Cool. Thanks,
>>>
>>> Paul
>>>
>>>>
>>>> -Don Slutz
>>>>
>>>>> Paul
>>>>>
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-11-28 21:45 ` [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT Don Slutz
2015-12-01 11:28 ` Paul Durrant
@ 2015-12-21 14:10 ` Jan Beulich
2016-01-10 19:42 ` Don Slutz
1 sibling, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2015-12-21 14:10 UTC (permalink / raw)
To: Don Slutz
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
Don Slutz, xen-devel, Eddie Dong, Aravind Gopalakrishnan,
Suravee Suthikulpanit, Keir Fraser, Boris Ostrovsky
>>> On 28.11.15 at 22:45, <don.slutz@gmail.com> wrote:
> @@ -133,7 +134,7 @@ static int hvmemul_do_io(
> p = vio->io_req;
>
> /* Verify the emulation request has been correctly re-issued */
> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
> + if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
Long line needs breaking up.
> @@ -167,26 +168,65 @@ static int hvmemul_do_io(
> vio->io_req.state = STATE_IOREQ_NONE;
> break;
> case X86EMUL_UNHANDLEABLE:
> - {
> - struct hvm_ioreq_server *s =
> - hvm_select_ioreq_server(curr->domain, &p);
> -
> - /* If there is no suitable backing DM, just ignore accesses */
> - if ( !s )
> + if ( !is_vmware )
> {
> - rc = hvm_process_io_intercept(&null_handler, &p);
> - vio->io_req.state = STATE_IOREQ_NONE;
> + struct hvm_ioreq_server *s =
> + hvm_select_ioreq_server(curr->domain, &p);
> +
> + /* If there is no suitable backing DM, just ignore accesses */
> + if ( !s )
> + {
> + rc = hvm_process_io_intercept(&null_handler, &p);
> + vio->io_req.state = STATE_IOREQ_NONE;
> + }
> + else
> + {
> + rc = hvm_send_ioreq(s, &p, 0);
> + if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
> + vio->io_req.state = STATE_IOREQ_NONE;
> + else if ( data_is_addr )
> + rc = X86EMUL_OKAY;
> + }
> }
> else
> {
> - rc = hvm_send_ioreq(s, &p, 0);
> - if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
> + struct hvm_ioreq_server *s;
> + vmware_regs_t *vr;
> +
> + BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_regs_t));
> +
> + p.type = IOREQ_TYPE_VMWARE_PORT;
> + vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
> + s = hvm_select_ioreq_server(curr->domain, &p);
> + vr = get_vmport_regs_any(s, curr);
> +
> + /*
> + * If there is no suitable backing DM, just ignore accesses. If
> + * we do not have access to registers to pass to QEMU, just
> + * ignore access.
> + */
> + if ( !s || !vr )
> + {
> + rc = hvm_process_io_intercept(&null_handler, &p);
> vio->io_req.state = STATE_IOREQ_NONE;
> - else if ( data_is_addr )
> - rc = X86EMUL_OKAY;
> + }
> + else
> + {
> + struct cpu_user_regs *regs = guest_cpu_user_regs();
const
> + p.data = regs->rax;
> + vr->ebx = regs->_ebx;
> + vr->ecx = regs->_ecx;
> + vr->edx = regs->_edx;
> + vr->esi = regs->_esi;
> + vr->edi = regs->_edi;
> +
> + rc = hvm_send_ioreq(s, &p, 0);
> + if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
> + vio->io_req.state = STATE_IOREQ_NONE;
> + }
> }
> break;
Especially if there is no common code to be broken out at the
beginning or end of these new if/else branches, then please avoid
re-indenting the entire existing code block (perhaps by making your
new code an if ( unlikely(is_vmware) ) ... else if ( ... ) prefix), to aid
reviewing.
> @@ -536,6 +575,21 @@ void hvm_do_resume(struct vcpu *v)
> handle_mmio();
> break;
> case HVMIO_pio_completion:
> + if ( vio->io_req.type == IOREQ_TYPE_VMWARE_PORT ) {
> + vmware_regs_t *vr = get_vmport_regs_any(NULL, v);
> +
> + if ( vr )
> + {
> + struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> + /* Only change the 32bit part of the register. */
> + regs->_ebx = vr->ebx;
> + regs->_ecx = vr->ecx;
> + regs->_edx = vr->edx;
> + regs->_esi = vr->esi;
> + regs->_edi = vr->edi;
> + }
Just like in one of the other patches, the comment needs extending to
say _why_ you only change the 32-bit parts (so that future readers do
not consider this a bug).
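For the record, the reason only the 32-bit parts are touched can be shown
outside Xen: in the saved-register layout, the 32-bit `_ebx`-style fields
alias the low half of the 64-bit slots, so a 32-bit store leaves bits 63:32
as they were (unlike a hardware 32-bit register write, which zero-extends).
A minimal sketch of that aliasing, assuming a little-endian host:

```c
#include <stdint.h>

/* Little-endian model of a saved 64-bit register whose low half is
 * exposed as a 32-bit view, like regs->rbx vs regs->_ebx. */
union saved_reg {
    uint64_t r64;
    uint32_t r32;   /* low 32 bits on little-endian */
};

/* Store a 32-bit value into the low half; the high half survives. */
static uint64_t write_low32(uint64_t old, uint32_t val)
{
    union saved_reg reg = { .r64 = old };
    reg.r32 = val;          /* like regs->_ebx = vr->ebx in the patch */
    return reg.r64;
}
```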
> @@ -618,22 +672,56 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
> set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> }
>
> -static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
> +typedef enum {
> + IOREQ_PAGE_TYPE_IOREQ,
> + IOREQ_PAGE_TYPE_BUFIOREQ,
> + IOREQ_PAGE_TYPE_VMPORT,
Lower case please (and this being a local enum the prefix probably
could be shortened).
> +} ioreq_page_type_t;
> +
> +static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, ioreq_page_type_t buf)
The parameter should no longer be named "buf".
> @@ -849,19 +937,32 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
>
> static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
> unsigned long ioreq_pfn,
> - unsigned long bufioreq_pfn)
> + unsigned long bufioreq_pfn,
> + unsigned long vmport_ioreq_pfn)
> {
> int rc;
>
> - rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ, ioreq_pfn);
> if ( rc )
> return rc;
>
> if ( bufioreq_pfn != INVALID_GFN )
> - rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ, bufioreq_pfn);
>
> if ( rc )
> - hvm_unmap_ioreq_page(s, 0);
> + {
> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
> + return rc;
> + }
> +
> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
I think Paul has already pointed out that this shouldn't be
unconditional. And not just because not every server needs it, but also
because the caller doesn't seem to guarantee validity of the passed-in
PFN (and its implicit initializer value of zero is certainly not a valid
one to use here).
> @@ -948,12 +1056,38 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
> for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
> {
> char *name;
> + char *type_name = NULL;
> + unsigned int limit;
>
> - rc = asprintf(&name, "ioreq_server %d %s", s->id,
> - (i == HVMOP_IO_RANGE_PORT) ? "port" :
> - (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
> - (i == HVMOP_IO_RANGE_PCI) ? "pci" :
> - "");
> + switch ( i )
> + {
> + case HVMOP_IO_RANGE_PORT:
> + type_name = "port";
> + limit = MAX_NR_IO_RANGES;
> + break;
> + case HVMOP_IO_RANGE_MEMORY:
> + type_name = "memory";
> + limit = MAX_NR_IO_RANGES;
> + break;
> + case HVMOP_IO_RANGE_PCI:
> + type_name = "pci";
> + limit = MAX_NR_IO_RANGES;
> + break;
> + case HVMOP_IO_RANGE_VMWARE_PORT:
> + type_name = "VMware port";
> + limit = 1;
> + break;
Do you really need to set up a (dummy) range set for this?
> + case HVMOP_IO_RANGE_TIMEOFFSET:
> + type_name = "timeoffset";
> + limit = 1;
> + break;
This case didn't exist before, and is unrelated to your change.
> @@ -2558,9 +2701,6 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
> return NULL;
>
> - if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> - return d->arch.hvm_domain.default_ioreq_server;
I'd prefer if this shortcut could stay.
> --- a/xen/arch/x86/hvm/vmware/vmport.c
> +++ b/xen/arch/x86/hvm/vmware/vmport.c
> @@ -137,6 +137,20 @@ void vmport_register(struct domain *d)
> register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
> }
>
> +int vmport_check_port(unsigned int port, unsigned int bytes)
bool_t
> +{
> + struct vcpu *curr = current;
You don't really need this local variable.
> + struct domain *currd = curr->domain;
> +
> + if ( port + bytes > BDOOR_PORT && port < BDOOR_PORT + 4 &&
This will match for e.g. port == BDOOR_PORT + 1 and bytes == 4,
which I don't think is correct.
> + is_hvm_domain(currd) &&
> + currd->arch.hvm_domain.is_vmware_port_enabled )
> + {
> + return 1;
> + }
> + return 0;
> +}
All of this could be a single return statement.
> @@ -66,11 +69,25 @@ struct ioreq {
> };
> typedef struct ioreq ioreq_t;
>
> +struct vmware_regs {
> + uint32_t esi;
> + uint32_t edi;
> + uint32_t ebx;
> + uint32_t ecx;
> + uint32_t edx;
> +};
> +typedef struct vmware_regs vmware_regs_t;
I doubt you need the typedef (considering the use below), and I
don't think having something prefixed vmware_ in the Xen public
headers is a good idea.
Also throughout the series I didn't find any code addition to
guarantee (perhaps at build time) that BDOOR_PORT doesn't
collide with any other in-use ports (statically assigned ones as well
as the range used for dynamic assignment to PCI devices).
Jan
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2015-12-21 14:10 ` Jan Beulich
@ 2016-01-10 19:42 ` Don Slutz
2016-01-11 13:50 ` Jan Beulich
0 siblings, 1 reply; 24+ messages in thread
From: Don Slutz @ 2016-01-10 19:42 UTC (permalink / raw)
To: Jan Beulich
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
Don Slutz, xen-devel, Eddie Dong, Aravind Gopalakrishnan,
Suravee Suthikulpanit, Keir Fraser, Boris Ostrovsky
On 12/21/15 09:10, Jan Beulich wrote:
>>>> On 28.11.15 at 22:45, <don.slutz@gmail.com> wrote:
>> @@ -133,7 +134,7 @@ static int hvmemul_do_io(
>> p = vio->io_req;
>>
>> /* Verify the emulation request has been correctly re-issued */
>> - if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>> + if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ? IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
>
> Long line needs breaking up.
>
Already adjusted to:
- if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
+ if ( (p.type != calc_type) ||
>> @@ -167,26 +168,65 @@ static int hvmemul_do_io(
>> vio->io_req.state = STATE_IOREQ_NONE;
>> break;
>> case X86EMUL_UNHANDLEABLE:
>> - {
>> - struct hvm_ioreq_server *s =
>> - hvm_select_ioreq_server(curr->domain, &p);
...
>> + if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
>> + vio->io_req.state = STATE_IOREQ_NONE;
>> + }
>> }
>> break;
>
> Especially if there is no common code to be broken out at the
> beginning or end of these new if/else branches, then please avoid
> re-indenting the entire existing code block (perhaps by making your
> new code a if ( unlikely(is_vmware) ) ... else if ( ... ) prefix), to aid
> reviewing.
>
The suggested if above gives:
--------------------------------------------------------------------------
@@ -167,26 +175,64 @@ static int hvmemul_do_io(
vio->io_req.state = STATE_IOREQ_NONE;
break;
case X86EMUL_UNHANDLEABLE:
- {
- struct hvm_ioreq_server *s =
- hvm_select_ioreq_server(curr->domain, &p);
-
- /* If there is no suitable backing DM, just ignore accesses */
- if ( !s )
+ if ( unlikely(is_vmware) )
{
- rc = hvm_process_io_intercept(&null_handler, &p);
- vio->io_req.state = STATE_IOREQ_NONE;
+ struct hvm_ioreq_server *s;
+ vmware_regs_t *vr;
+
+ BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_regs_t));
+
+ p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
+ s = hvm_select_ioreq_server(curr->domain, &p);
+ vr = get_vmport_regs_any(s, curr);
+
+ /*
+             * If there is no suitable backing DM, just ignore accesses. If
+ * we do not have access to registers to pass to QEMU, just
+ * ignore access.
+ */
+ if ( !s || !vr )
+ {
+ rc = hvm_process_io_intercept(&null_handler, &p);
+ vio->io_req.state = STATE_IOREQ_NONE;
+ }
+ else
+ {
+ struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+ p.data = regs->rax;
+ vr->ebx = regs->_ebx;
+ vr->ecx = regs->_ecx;
+ vr->edx = regs->_edx;
+ vr->esi = regs->_esi;
+ vr->edi = regs->_edi;
+
+ rc = hvm_send_ioreq(s, &p, 0);
+                if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ vio->io_req.state = STATE_IOREQ_NONE;
+ }
}
else
{
- rc = hvm_send_ioreq(s, &p, 0);
- if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ struct hvm_ioreq_server *s =
+ hvm_select_ioreq_server(curr->domain, &p);
+
+ /* If there is no suitable backing DM, just ignore accesses */
+ if ( !s )
+ {
+ rc = hvm_process_io_intercept(&null_handler, &p);
vio->io_req.state = STATE_IOREQ_NONE;
- else if ( data_is_addr )
- rc = X86EMUL_OKAY;
+ }
+ else
+ {
+ rc = hvm_send_ioreq(s, &p, 0);
+            if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ vio->io_req.state = STATE_IOREQ_NONE;
+ else if ( data_is_addr )
+ rc = X86EMUL_OKAY;
+ }
}
break;
- }
default:
BUG();
}
-------------------------------------------------------------------------
Which I feel is just as hard to review from the diff. The only way I
know of to reduce the diff is:
--------------------------------------------------------------------------
@@ -167,6 +175,44 @@ static int hvmemul_do_io(
vio->io_req.state = STATE_IOREQ_NONE;
break;
case X86EMUL_UNHANDLEABLE:
+ if ( unlikely(is_vmware) )
+ {
+ struct hvm_ioreq_server *s;
+ vmware_regs_t *vr;
+
+ BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_regs_t));
+
+ p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
+ s = hvm_select_ioreq_server(curr->domain, &p);
+ vr = get_vmport_regs_any(s, curr);
+
+ /*
+             * If there is no suitable backing DM, just ignore accesses. If
+ * we do not have access to registers to pass to QEMU, just
+ * ignore access.
+ */
+ if ( !s || !vr )
+ {
+ rc = hvm_process_io_intercept(&null_handler, &p);
+ vio->io_req.state = STATE_IOREQ_NONE;
+ }
+ else
+ {
+ struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+ p.data = regs->rax;
+ vr->ebx = regs->_ebx;
+ vr->ecx = regs->_ecx;
+ vr->edx = regs->_edx;
+ vr->esi = regs->_esi;
+ vr->edi = regs->_edi;
+
+ rc = hvm_send_ioreq(s, &p, 0);
+                if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
+ vio->io_req.state = STATE_IOREQ_NONE;
+ }
+ break;
+ }
{
struct hvm_ioreq_server *s =
hvm_select_ioreq_server(curr->domain, &p);
-------------------------------------------------------------------------
Which is clear in the diff but looks to me like a strange construct.
However, I am happy to go with any of the 3 ways.
>> @@ -536,6 +575,21 @@ void hvm_do_resume(struct vcpu *v)
>> handle_mmio();
>> break;
>> case HVMIO_pio_completion:
>> + if ( vio->io_req.type == IOREQ_TYPE_VMWARE_PORT ) {
>> + vmware_regs_t *vr = get_vmport_regs_any(NULL, v);
>> +
>> + if ( vr )
>> + {
>> + struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +
>> + /* Only change the 32bit part of the register. */
>> + regs->_ebx = vr->ebx;
>> + regs->_ecx = vr->ecx;
>> + regs->_edx = vr->edx;
>> + regs->_esi = vr->esi;
>> + regs->_edi = vr->edi;
>> + }
>
> Just like in one of the other patches the comment need extending to
> say _why_ you only change the 32-bit parts (for future readers to
> not consider this a bug).
>
Will change to this:
+ /* The code in QEMU that uses these registers,
+ * vmport.c and vmmouse.c, only uses only the 32bit
+ * part of the register. This is how VMware defined
+ * the use of these registers.
+ */
+ regs->_ebx = vr->ebx;
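For future readers, the behavior the comment describes can be sketched standalone: on a little-endian host the 32-bit alias overlays the low half of the saved 64-bit register, so assigning through it leaves the upper half untouched. The union below is an illustrative stand-in for Xen's cpu_user_regs layout, not the actual structure:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative stand-in for how a 32-bit register alias overlays the
 * low half of a 64-bit register in a saved-state structure (modelled
 * loosely after Xen's cpu_user_regs on little-endian x86).
 */
union reg64 {
    uint64_t rbx;
    uint32_t _ebx;  /* low 32 bits of rbx on a little-endian host */
};

/* Writing through the 32-bit alias leaves the upper half untouched. */
static uint64_t write_low32(uint64_t old_rbx, uint32_t new_ebx)
{
    union reg64 r = { .rbx = old_rbx };
    r._ebx = new_ebx;
    return r.rbx;
}
```

This is only a model of the aliasing; the quoted code assigns struct fields, not hardware registers.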
>> @@ -618,22 +672,56 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>> set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>> }
>>
>> -static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
>> +typedef enum {
>> + IOREQ_PAGE_TYPE_IOREQ,
>> + IOREQ_PAGE_TYPE_BUFIOREQ,
>> + IOREQ_PAGE_TYPE_VMPORT,
>
> Lower case please (and this being a local enum the prefix probably
> could be shortened).
>
Ok, how about ioreq_pt_?
>> +} ioreq_page_type_t;
>> +
>> +static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, ioreq_page_type_t buf)
>
> The parameter should no longer be named "buf".
pt or page_type?
>
>> @@ -849,19 +937,32 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
>>
>> static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
>> unsigned long ioreq_pfn,
>> - unsigned long bufioreq_pfn)
>> + unsigned long bufioreq_pfn,
>> + unsigned long vmport_ioreq_pfn)
>> {
>> int rc;
>>
>> - rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ, ioreq_pfn);
>> if ( rc )
>> return rc;
>>
>> if ( bufioreq_pfn != INVALID_GFN )
>> - rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_BUFIOREQ, bufioreq_pfn);
>>
>> if ( rc )
>> - hvm_unmap_ioreq_page(s, 0);
>> + {
>> + hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
>> + return rc;
>> + }
>> +
>> + rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
>
> I think Paul has already pointed out that this shouldn't be
> unconditional. And not just because it shouldn't be done for every
> server, but also because the caller doesn't seem to guarantee validity
> of the passed in PFN (and its implicit initializer value of zero is
> certainly not a valid one to use here).
>
Paul did, and the code is currently:
+ if ( s->domain->arch.hvm_domain.is_vmware_port_enabled &&
+ vmport_ioreq_pfn != INVALID_GFN )
+ {
+        rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT, vmport_ioreq_pfn);
The only way that vmport_ioreq_pfn could be zero is with a custom
version of alloc_magic_pages_hvm() or some other way of bypassing libxc.
Since none of the other uses of the HVM_PARAM special pages check for
zero, I did not add a check here.
However, if you want a check for zero, I am happy to add one.
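If a zero check were added, the decision would amount to something like this sketch (the helper name and pure-function framing are illustrative; only INVALID_GFN mirrors the quoted code):

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_GFN (~0UL)  /* sentinel used in the quoted code */

/*
 * Sketch of the mapping decision: map the vmport ioreq page only when
 * the domain opted in and the toolstack supplied a real PFN.  Treating
 * zero as "not allocated" is the extra check under discussion.
 */
static bool should_map_vmport_page(bool vmware_port_enabled,
                                   unsigned long vmport_ioreq_pfn)
{
    return vmware_port_enabled &&
           vmport_ioreq_pfn != INVALID_GFN &&
           vmport_ioreq_pfn != 0;
}
```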
>> @@ -948,12 +1056,38 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>> for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
>> {
>> char *name;
>> + char *type_name = NULL;
>> + unsigned int limit;
>>
>> - rc = asprintf(&name, "ioreq_server %d %s", s->id,
>> - (i == HVMOP_IO_RANGE_PORT) ? "port" :
>> - (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
>> - (i == HVMOP_IO_RANGE_PCI) ? "pci" :
>> - "");
>> + switch ( i )
>> + {
>> + case HVMOP_IO_RANGE_PORT:
>> + type_name = "port";
>> + limit = MAX_NR_IO_RANGES;
>> + break;
>> + case HVMOP_IO_RANGE_MEMORY:
>> + type_name = "memory";
>> + limit = MAX_NR_IO_RANGES;
>> + break;
>> + case HVMOP_IO_RANGE_PCI:
>> + type_name = "pci";
>> + limit = MAX_NR_IO_RANGES;
>> + break;
>> + case HVMOP_IO_RANGE_VMWARE_PORT:
>> + type_name = "VMware port";
>> + limit = 1;
>> + break;
>
> Do you really need to set up a (dummy) range set for this?
Yes, this is to allow an ioreq server to enable or disable, at any time,
the sending of ioreqs of type IOREQ_TYPE_VMWARE_PORT to it.
>
>> + case HVMOP_IO_RANGE_TIMEOFFSET:
>> + type_name = "timeoffset";
>> + limit = 1;
>> + break;
>
> This case didn't exist before, and is unrelated to your change.
>
That is true. However, Paul asked me to add it, and since it was a very
small addition (these lines) I saw no reason to add a separate patch.
It would either have to follow this patch or include the expansion of
s->range and the missing range handling.
>> @@ -2558,9 +2701,6 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>> if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
>> return NULL;
>>
>> - if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>> - return d->arch.hvm_domain.default_ioreq_server;
>
> I'd prefer if this shortcut could stay.
I do not understand why. Currently this is the same as
if ( p->type == IOREQ_TYPE_TIMEOFFSET )
return d->arch.hvm_domain.default_ioreq_server;
So to keep it for VMware I would have to add the case of
IOREQ_TYPE_VMWARE_PORT, and the support for IOREQ_TYPE_TIMEOFFSET that
Paul asked for would also be added, turning this into a long line.
As far as I can see this is the same as a default case later, which is
not needed because you end up with the default_ioreq_server anyway.
The only benefit I can see is saving instructions in the least common
case. It adds code in the most common case, which is backwards from what
I expect and makes the code harder to follow.
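The routing trade-off can be sketched in isolation (the enum tags below are assumptions; only the classification logic matters):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative ioreq type tags; the numeric values are assumptions. */
enum ioreq_type { TYPE_PIO, TYPE_COPY, TYPE_TIMEOFFSET, TYPE_VMWARE_PORT };

/*
 * Before the patch, "neither COPY nor PIO" meant only TIMEOFFSET, so
 * the shortcut could send such requests straight to the default server.
 */
static bool old_shortcut_to_default(enum ioreq_type t)
{
    return t != TYPE_COPY && t != TYPE_PIO;
}

/*
 * Once VMWARE_PORT exists and must be routable to any server, the
 * shortcut condition has to grow -- the "long line" described above.
 */
static bool new_shortcut_to_default(enum ioreq_type t)
{
    return t != TYPE_COPY && t != TYPE_PIO && t != TYPE_VMWARE_PORT;
}
```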
>
>> --- a/xen/arch/x86/hvm/vmware/vmport.c
>> +++ b/xen/arch/x86/hvm/vmware/vmport.c
>> @@ -137,6 +137,20 @@ void vmport_register(struct domain *d)
>> register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
>> }
>>
>> +int vmport_check_port(unsigned int port, unsigned int bytes)
>
> bool_t
>
Sure.
>> +{
>> + struct vcpu *curr = current;
>
> You don't really need this local variable.
ok, and use d instead of currd?
>
>> + struct domain *currd = curr->domain;
>> +
>> + if ( port + bytes > BDOOR_PORT && port < BDOOR_PORT + 4 &&
>
> This will match for e.g. port == BDOOR_PORT + 1 and bytes == 4,
> which I don't think is correct.
>
My testing on VMware says that this is what happens there. I.e. the
VMware "device" responds to 4 ports. Not that you can cleanly use the
other 3, since EAX is not fully available.
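The predicate under debate accepts any access that overlaps the 4-port window, which is why Jan's example matches. A standalone sketch (BDOOR_PORT's value follows VMware's backdoor convention):

```c
#include <assert.h>
#include <stdbool.h>

#define BDOOR_PORT 0x5658  /* VMware backdoor port, per the convention */

/*
 * An access of `bytes` bytes starting at `port` touches the half-open
 * range [port, port + bytes); the quoted check accepts any access that
 * overlaps [BDOOR_PORT, BDOOR_PORT + 4) -- including a 4-byte access
 * at BDOOR_PORT + 1, which also runs past the window.
 */
static bool vmport_range_overlaps(unsigned int port, unsigned int bytes)
{
    return port + bytes > BDOOR_PORT && port < BDOOR_PORT + 4;
}
```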
>> + is_hvm_domain(currd) &&
>> + currd->arch.hvm_domain.is_vmware_port_enabled )
>> + {
>> + return 1;
>> + }
>> + return 0;
>> +}
>
> All of this could be a single return statement.
>
Will convert to a single return.
>> @@ -66,11 +69,25 @@ struct ioreq {
>> };
>> typedef struct ioreq ioreq_t;
>>
>> +struct vmware_regs {
>> + uint32_t esi;
>> + uint32_t edi;
>> + uint32_t ebx;
>> + uint32_t ecx;
>> + uint32_t edx;
>> +};
>> +typedef struct vmware_regs vmware_regs_t;
>
> I doubt you need the typedef (considering the use below), and I
> don't think having something prefixed vmware_ in the Xen public
> headers is a good idea.
>
QEMU has used this typedef since QEMU 2.2.0 released on Dec 9, 2014.
Went in by pull request:
http://lists.xenproject.org/archives/html/xen-devel/2014-10/msg03705.html
The thread started at
http://lists.xenproject.org/archives/html/xen-devel/2014-09/msg04346.html
That was the window in which the name could have been changed easily; I
know of no simple way to do so now. Any change like this needs to start
in QEMU and be accepted upstream and in Xen's version of upstream QEMU.
Only after that can Xen be changed.
The thread
http://lists.xenproject.org/archives/html/xen-devel/2015-06/msg00476.html
looks at making this more general than just VMware.
> Also throughout the series I didn't find any code addition to
> guarantee (perhaps at build time) that BDOOR_PORT doesn't
> collide with any other in-use ports (statically assigned ones as well
> as the range used for dynamic assignment to PCI devices).
Since this is optional code, I am having an issue understanding this
statement. Xen will not do anything with BDOOR_PORT when vmware_port=0.
I do not know how at build time to check for run time optional items.
The set of ports in use includes whatever QEMU has.
My understanding was that configuration issues like overlapping or
multiple uses of an I/O port are the user's issue, not Xen's.
I do not see any code in Xen that checks for this for other ports, so
it is not clear which set of ports in Xen would need to be checked. The
default range used for dynamic assignment to PCI devices is 0xc000 -
0xffff, which does not mean that the guest is prevented from adjusting
PCI bridges so that BDOOR_PORT is an overlap. But that is true of a lot
of the rest of the ports in Xen.
register_portio_handler() could be changed to check at run time for Xen.
At this time I have no plans to add code related to this.
-Don Slutz
>
> Jan
>
* Re: [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
2016-01-10 19:42 ` Don Slutz
@ 2016-01-11 13:50 ` Jan Beulich
0 siblings, 0 replies; 24+ messages in thread
From: Jan Beulich @ 2016-01-11 13:50 UTC (permalink / raw)
To: Don Slutz
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Wei Liu, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson,
Don Slutz, xen-devel, Eddie Dong, Aravind Gopalakrishnan,
Suravee Suthikulpanit, Keir Fraser, Boris Ostrovsky
>>> On 10.01.16 at 20:42, <don.slutz@gmail.com> wrote:
> On 12/21/15 09:10, Jan Beulich wrote:
>>>>> On 28.11.15 at 22:45, <don.slutz@gmail.com> wrote:
>>> @@ -167,26 +168,65 @@ static int hvmemul_do_io(
>>> vio->io_req.state = STATE_IOREQ_NONE;
>>> break;
>>> case X86EMUL_UNHANDLEABLE:
>>> - {
>>> - struct hvm_ioreq_server *s =
>>> - hvm_select_ioreq_server(curr->domain, &p);
> ...
>>> + if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
>>> + vio->io_req.state = STATE_IOREQ_NONE;
>>> + }
>>> }
>>> break;
>>
>> Especially if there is no common code to be broken out at the
>> beginning or end of these new if/else branches, then please avoid
>> re-indenting the entire existing code block (perhaps by making your
>> new code a if ( unlikely(is_vmware) ) ... else if ( ... ) prefix), to aid
>> reviewing.
>>
>
> The suggested if above gives:
> [...]
Bad, and not really what was meant with the comment.
> Which I feel is just as hard to review from the diff. The only way I
> know of to reduce the diff is:
> [...]
This is mostly what was meant, with ...
> {
> struct hvm_ioreq_server *s =
> hvm_select_ioreq_server(curr->domain, &p);
... the declaration (but not the initializer) retained ahead of the
vmware code block (which also wants a variable of that type).
>>> @@ -536,6 +575,21 @@ void hvm_do_resume(struct vcpu *v)
>>> handle_mmio();
>>> break;
>>> case HVMIO_pio_completion:
>>> + if ( vio->io_req.type == IOREQ_TYPE_VMWARE_PORT ) {
>>> + vmware_regs_t *vr = get_vmport_regs_any(NULL, v);
>>> +
>>> + if ( vr )
>>> + {
>>> + struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +
>>> + /* Only change the 32bit part of the register. */
>>> + regs->_ebx = vr->ebx;
>>> + regs->_ecx = vr->ecx;
>>> + regs->_edx = vr->edx;
>>> + regs->_esi = vr->esi;
>>> + regs->_edi = vr->edi;
>>> + }
>>
>> Just like in one of the other patches the comment need extending to
>> say _why_ you only change the 32-bit parts (for future readers to
>> not consider this a bug).
>>
>
> Will change to this:
>
>
> + /* The code in QEMU that uses these registers,
> + * vmport.c and vmmouse.c, only uses only the 32bit
> + * part of the register. This is how VMware defined
> + * the use of these registers.
> + */
> + regs->_ebx = vr->ebx;
Hopefully with proper coding style, and ideally with one of the two
"only" removed.
>>> @@ -618,22 +672,56 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>>> set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>>> }
>>>
>>> -static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
>>> +typedef enum {
>>> + IOREQ_PAGE_TYPE_IOREQ,
>>> + IOREQ_PAGE_TYPE_BUFIOREQ,
>>> + IOREQ_PAGE_TYPE_VMPORT,
>>
>> Lower case please (and this being a local enum the prefix probably
>> could be shortened).
>
> Ok, how about ioreq_pt_?
Fine with me.
>>> +} ioreq_page_type_t;
>>> +
>>> +static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, ioreq_page_type_t buf)
>>
>> The parameter should no longer be named "buf".
>
> pt or page_type?
Whichever you prefer.
>>> @@ -948,12 +1056,38 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>>> for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
>>> {
>>> char *name;
>>> + char *type_name = NULL;
>>> + unsigned int limit;
>>>
>>> - rc = asprintf(&name, "ioreq_server %d %s", s->id,
>>> - (i == HVMOP_IO_RANGE_PORT) ? "port" :
>>> - (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
>>> - (i == HVMOP_IO_RANGE_PCI) ? "pci" :
>>> - "");
>>> + switch ( i )
>>> + {
>>> + case HVMOP_IO_RANGE_PORT:
>>> + type_name = "port";
>>> + limit = MAX_NR_IO_RANGES;
>>> + break;
>>> + case HVMOP_IO_RANGE_MEMORY:
>>> + type_name = "memory";
>>> + limit = MAX_NR_IO_RANGES;
>>> + break;
>>> + case HVMOP_IO_RANGE_PCI:
>>> + type_name = "pci";
>>> + limit = MAX_NR_IO_RANGES;
>>> + break;
>>> + case HVMOP_IO_RANGE_VMWARE_PORT:
>>> + type_name = "VMware port";
>>> + limit = 1;
>>> + break;
>>
>> Do you really need to set up a (dummy) range set for this?
>
> Yes, this is to allow an ioreq server to enable or disable, at any time,
> the sending of ioreqs of type IOREQ_TYPE_VMWARE_PORT to it.
Not sure I understand how this is meant to answer the question.
>>> + case HVMOP_IO_RANGE_TIMEOFFSET:
>>> + type_name = "timeoffset";
>>> + limit = 1;
>>> + break;
>>
>> This case didn't exist before, and is unrelated to your change.
>>
>
> That is true. However Paul asked me to add it and since it was a very
> small addition (these lines) I saw no reason to add a separate patch.
> It would either have to follow this patch or include the expansion of
> s->range and to include missing range handling.
Well, it sort of depends on the resolution of the issue above: This
is a dummy as well, and I'd prefer to get away without dummies if
possible.
>>> @@ -2558,9 +2701,6 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>> if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
>>> return NULL;
>>>
>>> - if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>>> - return d->arch.hvm_domain.default_ioreq_server;
>>
>> I'd prefer if this shortcut could stay.
>
> I do not understand why.
Again best to be viewed in conjunction with the comment on the
dummies you add above.
>>> +{
>>> + struct vcpu *curr = current;
>>
>> You don't really need this local variable.
>
> ok, and use d instead of currd?
No, why would you?
>>> + struct domain *currd = curr->domain;
>>> +
>>> + if ( port + bytes > BDOOR_PORT && port < BDOOR_PORT + 4 &&
>>
>> This will match for e.g. port == BDOOR_PORT + 1 and bytes == 4,
>> which I don't think is correct.
>
> My testing on VMware says that this is what happens there. I.e. the
> VMware "device" responds to 4 ports. Not that you can cleanly use the
> other 3, since EAX is not fully available.
But accessing BDOOR_PORT + 1 with a 4-byte operation ought to
be undefined (or properly split up). After all you don't know what is
on BDOOR_PORT + 4.
>>> @@ -66,11 +69,25 @@ struct ioreq {
>>> };
>>> typedef struct ioreq ioreq_t;
>>>
>>> +struct vmware_regs {
>>> + uint32_t esi;
>>> + uint32_t edi;
>>> + uint32_t ebx;
>>> + uint32_t ecx;
>>> + uint32_t edx;
>>> +};
>>> +typedef struct vmware_regs vmware_regs_t;
>>
>> I doubt you need the typedef (considering the use below), and I
>> don't think having something prefixed vmware_ in the Xen public
>> headers is a good idea.
>
> QEMU has used this typedef since QEMU 2.2.0 released on Dec 9, 2014.
>
> Went in by pull request:
>
> http://lists.xenproject.org/archives/html/xen-devel/2014-10/msg03705.html
>
>
> The thread started at
>
> http://lists.xenproject.org/archives/html/xen-devel/2014-09/msg04346.html
>
> That was the window in which the name could have been changed easily; I
> know of no simple way to do so now. Any change like this needs to start
> in QEMU and be accepted upstream and in Xen's version of upstream QEMU.
> Only after that can Xen be changed.
All of this may be true and fine, but none of this is - to me - a
reason to introduce new unclean names into the Xen public
interface. In no event do I see qemu dictating naming to us.
>> Also throughout the series I didn't find any code addition to
>> guarantee (perhaps at build time) that BDOOR_PORT doesn't
>> collide with any other in-use ports (statically assigned ones as well
>> as the range used for dynamic assignment to PCI devices).
>
> Since this is optional code, I am having an issue understanding this
> statement. Xen will not do anything with BDOOR_PORT when vmware_port=0.
>
> I do not know how at build time to check for run time optional items.
>
> The set of ports in use includes whatever QEMU has.
>
> My understanding was that configuration issues like overlapping or
> multiple uses of an I/O port are the user's issue, not Xen's.
>
> I do not see any code in Xen that checks for this for other ports, so
> it is not clear which set of ports in Xen would need to be checked. The
> default range used for dynamic assignment to PCI devices is 0xc000 -
> 0xffff, which does not mean that the guest is prevented from adjusting
> PCI bridges so that BDOOR_PORT is an overlap. But that is true of a lot
> of the rest of the ports in Xen.
>
> register_portio_handler() could be changed to check at run time for Xen.
That would be a runtime check, and hence too late. My concern is
for someone to change one of the definitions not realizing that such
a change would result in an overlap of ranges, which is known and
hence could be checked for at compile time.
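A compile-time guard of the kind suggested could look like the following sketch; the comparison ranges are illustrative assumptions rather than Xen's actual port map, and in Xen itself BUILD_BUG_ON() would be used in place of C11 _Static_assert:

```c
#define BDOOR_PORT      0x5658  /* VMware backdoor port */
#define BDOOR_NPORTS    4

/* Illustrative placeholders for ranges assigned elsewhere. */
#define EXAMPLE_STATIC_PORT_BASE  0x0070
#define EXAMPLE_STATIC_PORT_SIZE  2
#define PCI_DYN_PORT_LO           0xc000
#define PCI_DYN_PORT_HI           0xffff

/* True when two half-open port ranges intersect. */
#define PORTS_OVERLAP(a, alen, b, blen) \
    ((a) < (b) + (blen) && (b) < (a) + (alen))

/* Fails the build if someone later moves a definition into conflict. */
_Static_assert(!PORTS_OVERLAP(BDOOR_PORT, BDOOR_NPORTS,
                              EXAMPLE_STATIC_PORT_BASE,
                              EXAMPLE_STATIC_PORT_SIZE),
               "VMware backdoor port collides with a static assignment");
_Static_assert(!PORTS_OVERLAP(BDOOR_PORT, BDOOR_NPORTS,
                              PCI_DYN_PORT_LO,
                              PCI_DYN_PORT_HI - PCI_DYN_PORT_LO + 1),
               "VMware backdoor port falls in the dynamic PCI range");
```

As Jan notes, this only catches definition changes at build time; guest-visible reconfiguration (e.g. of PCI bridges) remains a runtime matter.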
Jan
* [PATCH v13 8/8] Add xentrace to vmware_port
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
` (6 preceding siblings ...)
2015-11-28 21:45 ` [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT Don Slutz
@ 2015-11-28 21:45 ` Don Slutz
2016-03-04 21:50 ` ping Re: [PATCH v13 0/8] Xen VMware tools support Konrad Rzeszutek Wilk
8 siblings, 0 replies; 24+ messages in thread
From: Don Slutz @ 2015-11-28 21:45 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Wei Liu, Kevin Tian, Keir Fraser, Ian Campbell,
George Dunlap, Andrew Cooper, Stefano Stabellini, Eddie Dong,
Don Slutz, Don Slutz, Tim Deegan, Aravind Gopalakrishnan,
Jan Beulich, Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
From: Don Slutz <dslutz@verizon.com>
Also added missing TRAP_DEBUG & VLAPIC.
Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: Don Slutz <don.slutz@gmail.com>
---
v13:
Please do this by extending the existing infrastructure rather
than special-casing 7 on the side. (i.e. extend ND to take 7
parameters, and introduce HVMTRACE_7D)
= { d1, d2, d3, d4, d5, d6, d7 } will be far shorter, linewise.
v12:
Switch VMPORT_IGNORED to port, regs->_eax.
v11:
No change
v10:
Added Acked-by: Ian Campbell
Added back in the trace point calls.
Why is cmd in this patch?
Because the trace points use it.
v9:
Dropped unneed VMPORT_UNHANDLED, VMPORT_DECODE.
v7:
Dropped some of the new traces.
Added HVMTRACE_ND7.
v6:
Dropped the attempt to use svm_nextrip_insn_length via
__get_instruction_length (added in v2). Just always look
at up to 15 bytes on AMD.
v5:
exitinfo1 is used twice.
Fixed.
tools/xentrace/formats | 5 ++++
xen/arch/x86/hvm/hvm.c | 3 ++
xen/arch/x86/hvm/svm/svm.c | 6 ++--
xen/arch/x86/hvm/vmware/vmport.c | 14 ++++++++--
xen/arch/x86/hvm/vmx/vmx.c | 6 ++--
xen/include/asm-x86/hvm/trace.h | 59 ++++++++++++++++++++--------------------
xen/include/public/trace.h | 3 ++
7 files changed, 57 insertions(+), 39 deletions(-)
diff --git a/tools/xentrace/formats b/tools/xentrace/formats
index 5d7b72a..ac8800e 100644
--- a/tools/xentrace/formats
+++ b/tools/xentrace/formats
@@ -79,6 +79,11 @@
0x00082020 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) INTR_WINDOW [ value = 0x%(1)08x ]
0x00082021 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) NPF [ gpa = 0x%(2)08x%(1)08x mfn = 0x%(4)08x%(3)08x qual = 0x%(5)04x p2mt = 0x%(6)04x ]
0x00082023 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) TRAP [ vector = 0x%(1)02x ]
+0x00082024 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) TRAP_DEBUG [ exit_qualification = 0x%(1)08x ]
+0x00082025 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) VLAPIC
+0x00082026 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) VMPORT_HANDLED [ cmd = %(1)d eax = 0x%(2)08x ebx = 0x%(3)08x ecx = 0x%(4)08x edx = 0x%(5)08x esi = 0x%(6)08x edi = 0x%(7)08x ]
+0x00082027 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) VMPORT_IGNORED [ port = %(1)d eax = 0x%(2)08x ]
+0x00082028 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) VMPORT_QEMU [ eax = 0x%(1)08x ebx = 0x%(2)08x ecx = 0x%(3)08x edx = 0x%(4)08x esi = 0x%(5)08x edi = 0x%(6)08x ]
0x0010f001 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) page_grant_map [ domid = %(1)d ]
0x0010f002 CPU%(cpu)d %(tsc)d (+%(reltsc)8d) page_grant_unmap [ domid = %(1)d ]
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 07b4025..6003317 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -588,6 +588,9 @@ void hvm_do_resume(struct vcpu *v)
regs->_edx = vr->edx;
regs->_esi = vr->esi;
regs->_edi = vr->edi;
+ HVMTRACE_ND(VMPORT_QEMU, 0, 1/*cycles*/, 6,
+ vio->io_req.data, regs->_ebx, regs->_ecx,
+ regs->_edx, regs->_esi, regs->_edi, 0);
}
}
(void)handle_pio(vio->io_req.addr, vio->io_req.size,
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 200a5df..ce45e8d 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2291,12 +2291,12 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
HVMTRACE_ND(VMEXIT64, vcpu_guestmode ? TRC_HVM_NESTEDFLAG : 0,
1/*cycles*/, 3, exit_reason,
(uint32_t)regs->eip, (uint32_t)((uint64_t)regs->eip >> 32),
- 0, 0, 0);
+ 0, 0, 0, 0);
else
HVMTRACE_ND(VMEXIT, vcpu_guestmode ? TRC_HVM_NESTEDFLAG : 0,
1/*cycles*/, 2, exit_reason,
(uint32_t)regs->eip,
- 0, 0, 0, 0);
+ 0, 0, 0, 0, 0);
if ( vcpu_guestmode ) {
enum nestedhvm_vmexits nsret;
@@ -2685,7 +2685,7 @@ void svm_trace_vmentry(void)
struct vcpu *curr = current;
HVMTRACE_ND(VMENTRY,
nestedhvm_vcpu_in_guestmode(curr) ? TRC_HVM_NESTEDFLAG : 0,
- 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
+ 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0, 0);
}
/*
diff --git a/xen/arch/x86/hvm/vmware/vmport.c b/xen/arch/x86/hvm/vmware/vmport.c
index ee993b3..9f66328 100644
--- a/xen/arch/x86/hvm/vmware/vmport.c
+++ b/xen/arch/x86/hvm/vmware/vmport.c
@@ -16,6 +16,7 @@
#include <xen/lib.h>
#include <asm/hvm/hvm.h>
#include <asm/hvm/support.h>
+#include <asm/hvm/trace.h>
#include "backdoor_def.h"
@@ -35,6 +36,7 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
if ( port == BDOOR_PORT && regs->_eax == BDOOR_MAGIC )
{
uint32_t new_eax = ~0u;
+ uint16_t cmd = regs->_ecx;
uint64_t value;
struct vcpu *curr = current;
struct domain *currd = curr->domain;
@@ -45,7 +47,7 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
* leaving the high 32-bits unchanged, unlike what one would
* expect to happen.
*/
- switch ( regs->_ecx & 0xffff )
+ switch ( cmd )
{
case BDOOR_CMD_GETMHZ:
new_eax = currd->arch.tsc_khz / 1000;
@@ -123,11 +125,17 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
/* Let backing DM handle */
return X86EMUL_UNHANDLEABLE;
}
+ HVMTRACE_7D(VMPORT_HANDLED, cmd, new_eax, regs->_ebx, regs->_ecx,
+ regs->_edx, regs->_esi, regs->_edi);
if ( dir == IOREQ_READ )
*val = new_eax;
}
- else if ( dir == IOREQ_READ )
- *val = ~0u;
+ else
+ {
+ HVMTRACE_2D(VMPORT_IGNORED, port, regs->_eax);
+ if ( dir == IOREQ_READ )
+ *val = ~0u;
+ }
return X86EMUL_OKAY;
}
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2581e97..5325e8d 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2886,11 +2886,11 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
if ( hvm_long_mode_enabled(v) )
HVMTRACE_ND(VMEXIT64, 0, 1/*cycles*/, 3, exit_reason,
(uint32_t)regs->eip, (uint32_t)((uint64_t)regs->eip >> 32),
- 0, 0, 0);
+ 0, 0, 0, 0);
else
HVMTRACE_ND(VMEXIT, 0, 1/*cycles*/, 2, exit_reason,
(uint32_t)regs->eip,
- 0, 0, 0, 0);
+ 0, 0, 0, 0, 0);
perfc_incra(vmexits, exit_reason);
@@ -3506,7 +3506,7 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
vpid_sync_all();
out:
- HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
+ HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0, 0);
__vmwrite(GUEST_RIP, regs->rip);
__vmwrite(GUEST_RSP, regs->rsp);
diff --git a/xen/include/asm-x86/hvm/trace.h b/xen/include/asm-x86/hvm/trace.h
index de802a6..3ec7817 100644
--- a/xen/include/asm-x86/hvm/trace.h
+++ b/xen/include/asm-x86/hvm/trace.h
@@ -54,6 +54,9 @@
#define DO_TRC_HVM_TRAP DEFAULT_HVM_MISC
#define DO_TRC_HVM_TRAP_DEBUG DEFAULT_HVM_MISC
#define DO_TRC_HVM_VLAPIC DEFAULT_HVM_MISC
+#define DO_TRC_HVM_VMPORT_HANDLED DEFAULT_HVM_IO
+#define DO_TRC_HVM_VMPORT_IGNORED DEFAULT_HVM_IO
+#define DO_TRC_HVM_VMPORT_QEMU DEFAULT_HVM_IO
#define TRC_PAR_LONG(par) ((par)&0xFFFFFFFF),((par)>>32)
@@ -65,38 +68,34 @@
#define TRACE_2_LONG_4D(_e, d1, d2, d3, d4, ...) \
TRACE_6D(_e, d1, d2, d3, d4)
-#define HVMTRACE_ND(evt, modifier, cycles, count, d1, d2, d3, d4, d5, d6) \
- do { \
- if ( unlikely(tb_init_done) && DO_TRC_HVM_ ## evt ) \
- { \
- struct { \
- u32 d[6]; \
- } _d; \
- _d.d[0]=(d1); \
- _d.d[1]=(d2); \
- _d.d[2]=(d3); \
- _d.d[3]=(d4); \
- _d.d[4]=(d5); \
- _d.d[5]=(d6); \
- __trace_var(TRC_HVM_ ## evt | (modifier), cycles, \
- sizeof(*_d.d) * count, &_d); \
- } \
+#define HVMTRACE_ND(evt, modifier, cycles, count, d1, d2, d3, d4, d5, d6, d7) \
+ do { \
+ if ( unlikely(tb_init_done) && DO_TRC_HVM_ ## evt ) \
+ { \
+ struct { \
+ u32 d[7]; \
+ } _d = { { d1, d2, d3, d4, d5, d6, d7 } }; \
+ __trace_var(TRC_HVM_ ## evt | (modifier), cycles, \
+ sizeof(*_d.d) * count, &_d); \
+ } \
} while(0)
-#define HVMTRACE_6D(evt, d1, d2, d3, d4, d5, d6) \
- HVMTRACE_ND(evt, 0, 0, 6, d1, d2, d3, d4, d5, d6)
-#define HVMTRACE_5D(evt, d1, d2, d3, d4, d5) \
- HVMTRACE_ND(evt, 0, 0, 5, d1, d2, d3, d4, d5, 0)
-#define HVMTRACE_4D(evt, d1, d2, d3, d4) \
- HVMTRACE_ND(evt, 0, 0, 4, d1, d2, d3, d4, 0, 0)
-#define HVMTRACE_3D(evt, d1, d2, d3) \
- HVMTRACE_ND(evt, 0, 0, 3, d1, d2, d3, 0, 0, 0)
-#define HVMTRACE_2D(evt, d1, d2) \
- HVMTRACE_ND(evt, 0, 0, 2, d1, d2, 0, 0, 0, 0)
-#define HVMTRACE_1D(evt, d1) \
- HVMTRACE_ND(evt, 0, 0, 1, d1, 0, 0, 0, 0, 0)
-#define HVMTRACE_0D(evt) \
- HVMTRACE_ND(evt, 0, 0, 0, 0, 0, 0, 0, 0, 0)
+#define HVMTRACE_7D(evt, d1, d2, d3, d4, d5, d6, d7) \
+ HVMTRACE_ND(evt, 0, 0, 7, d1, d2, d3, d4, d5, d6, d7)
+#define HVMTRACE_6D(evt, d1, d2, d3, d4, d5, d6) \
+ HVMTRACE_ND(evt, 0, 0, 6, d1, d2, d3, d4, d5, d6, 0)
+#define HVMTRACE_5D(evt, d1, d2, d3, d4, d5) \
+ HVMTRACE_ND(evt, 0, 0, 5, d1, d2, d3, d4, d5, 0, 0)
+#define HVMTRACE_4D(evt, d1, d2, d3, d4) \
+ HVMTRACE_ND(evt, 0, 0, 4, d1, d2, d3, d4, 0, 0, 0)
+#define HVMTRACE_3D(evt, d1, d2, d3) \
+ HVMTRACE_ND(evt, 0, 0, 3, d1, d2, d3, 0, 0, 0, 0)
+#define HVMTRACE_2D(evt, d1, d2) \
+ HVMTRACE_ND(evt, 0, 0, 2, d1, d2, 0, 0, 0, 0, 0)
+#define HVMTRACE_1D(evt, d1) \
+ HVMTRACE_ND(evt, 0, 0, 1, d1, 0, 0, 0, 0, 0, 0)
+#define HVMTRACE_0D(evt) \
+ HVMTRACE_ND(evt, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
#define HVMTRACE_LONG_1D(evt, d1) \
HVMTRACE_2D(evt ## 64, (d1) & 0xFFFFFFFF, (d1) >> 32)
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
index 274f8f6..548acfe 100644
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -227,6 +227,9 @@
#define TRC_HVM_TRAP (TRC_HVM_HANDLER + 0x23)
#define TRC_HVM_TRAP_DEBUG (TRC_HVM_HANDLER + 0x24)
#define TRC_HVM_VLAPIC (TRC_HVM_HANDLER + 0x25)
+#define TRC_HVM_VMPORT_HANDLED (TRC_HVM_HANDLER + 0x26)
+#define TRC_HVM_VMPORT_IGNORED (TRC_HVM_HANDLER + 0x27)
+#define TRC_HVM_VMPORT_QEMU (TRC_HVM_HANDLER + 0x28)
#define TRC_HVM_IOPORT_WRITE (TRC_HVM_HANDLER + 0x216)
#define TRC_HVM_IOMEM_WRITE (TRC_HVM_HANDLER + 0x217)
--
1.8.3.1
* ping Re: [PATCH v13 0/8] Xen VMware tools support
2015-11-28 21:44 [PATCH v13 0/8] Xen VMware tools support Don Slutz
` (7 preceding siblings ...)
2015-11-28 21:45 ` [PATCH v13 8/8] Add xentrace to vmware_port Don Slutz
@ 2016-03-04 21:50 ` Konrad Rzeszutek Wilk
8 siblings, 0 replies; 24+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-03-04 21:50 UTC (permalink / raw)
To: Don Slutz
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Keir Fraser, Ian Campbell,
Stefano Stabellini, George Dunlap, Andrew Cooper, Eddie Dong,
xen-devel, Aravind Gopalakrishnan, Wei Liu, Jan Beulich,
Suravee Suthikulpanit, Boris Ostrovsky, Ian Jackson
On Sat, Nov 28, 2015 at 04:44:57PM -0500, Don Slutz wrote:
> Changes v12 to v13:
> Rebased on staging (not a simple rebase, needed rework to function
> with changes).
Hey Don,
I was wondering what your plans are for this. The feature freeze window
is coming mighty close, and this is a pretty neat feature to have in.
Thanks!