* [XEN PATCH 0/3] Configure qemu upstream correctly by default for igd-passthru
@ 2023-01-10  7:32   ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-09 23:08 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

Sorry for the length of this cover letter, but it is helpful to put all
the pros and cons of the two different approaches to solving the problem
of configuring the Intel IGD with qemu upstream and libxl in one place,
which I attempt to do here. The other approach involves a patch to
qemu [1] instead of using this patch series for libxl.

The quick answer:

I think the other patch to qemu is the better option, but I would be OK
if you use this patch series instead.

Details with my reasons for preferring the other patch to qemu over this
patch series to libxl: 

I call attention to the commit message of the first patch which points
out that using the "pc" machine and adding the xen platform device on
the qemu upstream command line is not functionally equivalent to using
the "xenfv" machine which automatically adds the xen platform device
earlier in the guest creation process. As a result, there is a
noticeable reduction in the performance of the guest during startup
with the "pc" machine type even if the xen platform device is added via
the qemu command line options, although eventually both Linux and
Windows guests perform equally well once the guest operating system is
fully loaded.

Specifically, startup time is longer and neither the grub vga drivers
nor the windows vga drivers in early startup perform as well when the
xen platform device is added via the qemu command line instead of being
added immediately after the other emulated i440fx pci devices when the
"xenfv" machine type is used.
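
Schematically, and much simplified from the full libxl-generated
command lines, the difference between the two configurations is
(machine flags taken from the first patch; everything else elided):

```
# "xenfv": the machine init itself adds the xen platform device, early
qemu-system-i386 -machine xenfv ...

# "pc" (this series): the platform device arrives via the command line,
# at slot 3 so that slot 2 stays free for the IGD
qemu-system-i386 -machine pc,accel=xen,suppress-vmdesc=on,igd-passthru=on ... \
                 -device xen-platform,addr=3
```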

For example, when using the "pc" machine, which adds the xen platform
device using a command line option, the Linux guest could not display
the grub boot menu at the native resolution of the monitor, but with
the "xenfv" machine, the grub menu is displayed at the full 1920x1080
native resolution of the monitor used for testing. So improved startup
performance is an advantage of the patch for qemu.

I also call attention to the last point of the commit message and to
the comments for reviewers section of the second patch. This approach,
as opposed to fixing this in qemu upstream, makes maintaining the code
in libxl__build_device_model_args_new more difficult and therefore
increases the chances of problems caused by coding errors and typos for
users of libxl. So that is another advantage of the patch for qemu.

OTOH, fixing this in qemu causes newer qemu versions to behave
differently than previous versions, which the qemu community does not
like. They seem OK with the other patch since it only affects qemu
"xenfv" machine types, but they do not want the patch to affect
toolstacks like libvirt that do not use qemu upstream's
autoconfiguration options as much as libxl does. Of course, libvirt can
manage qemu "xenfv" machines, so existing "xenfv" guests configured
manually by libvirt could be adversely affected by the patch to qemu,
but only if those same guests are also configured for igd-passthrough,
which likely amounts to a very small number of affected libvirt users
of qemu.

A year or two ago I tried to configure guests for pci passthrough on xen
using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
could not convert an xl.cfg file with a configuration item
pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
So it is unlikely there are any users out there using libvirt to
configure xen hvm guests for igd passthrough on xen, and those are the
only users that could be adversely affected by the simpler patch to qemu
to fix this.

The only advantage of this patch series over the qemu patch is that
this patch series needs no changes to qemu to make Intel IGD
configuration easier with libxl, so the risk of affecting other qemu
users is entirely eliminated. The cost of patching libxl instead of
qemu is reduced startup performance compared to what could be achieved
by patching qemu, and an increased risk that the tedious process of
manually managing the slot addresses of all the emulated devices will
make it more difficult to keep the libxl code free of bugs.

I will leave it to the maintainer of the code in both qemu and libxl
(Anthony) to decide which, if any, of the patches to apply. I am OK with
either this patch series to libxl or the proposed patch to qemu to fix
this problem, but I do recommend the other patch to qemu over this patch
series because of the improved performance during startup with that
patch and the relatively low risk that any libvirt users will be
adversely affected by that patch.

Brief statement of the problem this patch series solves:

Currently only the qemu traditional device model reserves slot 2 for the
Intel Integrated Graphics Device (IGD) with default settings. Assigning
the Intel IGD to slot 2 is necessary for the device to operate properly
when passed through as the primary graphics adapter. The qemu
traditional device model takes care of this by reserving slot 2 for the
Intel IGD, but the upstream qemu device model currently does not reserve
slot 2 for the Intel IGD.

This patch series modifies libxl so the upstream qemu device model will
also, with default settings, assign slot 2 for the Intel IGD.
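
For context, a guest configuration that runs into this problem might
look like the following (hypothetical xl.cfg fragment; the host BDF is
an example value):

```
# Hypothetical xl.cfg fragment for IGD passthrough
gfx_passthru = "igd"        # or 1, letting libxl auto-detect the kind
xen_platform_pci = 1        # default: xen platform device enabled
pci = [ "00:02.0" ]         # host IGD, which must land in guest slot 2
```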

There are three reasons why it is difficult to configure the guest so
the Intel IGD is assigned to slot 2 using libxl and the upstream device
model. The patch series is therefore logically organized as three
separate patches, each resolving one of the three reasons. A
description of what each patch does:

1. With the default "xenfv" machine type, qemu upstream is hard-coded
   to assign the xen platform device to slot 2. The first patch fixes
   that by using the "pc" machine instead when the gfx_passthru type is
   igd; if xen_platform_pci is set in the guest config, libxl then
   assigns the xen platform device to slot 3, making it possible to
   assign the IGD to slot 2. The patch only affects guests with the gfx_passthru
   option enabled. The default behavior (xen_platform_pci is enabled
   but gfx_passthru option is disabled) of using the "xenfv" machine
   type is preserved. Another way to describe what the patch does is
   to say that it adds a second exception to the default choice of the
   "xenfv" machine type, with the first exception being that the "pc"
   machine type is also used instead of "xenfv" if the xen platform pci
   device is disabled in the guest xl.cfg file.

2. Currently, with libxl and qemu upstream, most emulated pci devices
   are by default automatically assigned a pci slot, and the emulated
   ones are assigned before the passed through ones, which means that
   even if libxl is patched so the xen platform device will not be
   assigned to slot 2, any other emulated device will be assigned slot 2
   unless libxl explicitly assigns the slot address of each emulated pci
   device in such a way that the IGD will be assigned slot 2. The second
   patch fixes this by hard coding the slot assignment for the emulated
   devices instead of deferring to qemu upstream's auto-assignment which
   does not do what is necessary to configure the Intel IGD correctly.
   With the second patch applied, it is possible to configure the Intel
   IGD correctly by using the @VSLOT parameter in xl.cfg to specify the
   slot address of each passed through pci device in the guest. The
   second patch is also designed to not change the default behavior of
   letting qemu autoconfigure the pci slot addresses when igd
   gfx_passthru is disabled in xl.cfg.

3. For convenience, the third patch automatically assigns slot 2 to the
   Intel IGD when the gfx_passthru type is igd, so with the third patch
   applied it is not necessary to set the @VSLOT parameter to configure
   the Intel IGD correctly.
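
For reference, an explicit slot assignment with @VSLOT looks like this
in xl.cfg (hypothetical BDF values; with the third patch, the @2 for
the IGD becomes unnecessary):

```
# BDF@VSLOT: host device followed by the virtual slot it gets in the guest
pci = [ "00:02.0@2", "00:1f.3@4" ]
```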

Testing:

I tested a system with Intel IGD passthrough and two other pci devices
passed through, with and without the xen platform device. I also did
tests on guests without any pci passthrough configured. In all cases
tested, libxl behaved as expected. For example, the device model
arguments were only changed if gfx_passthru was set for the IGD, libxl
respected administrator settings such as @VSLOT and xen_platform_pci
with the patch series applied, and omitting the xen platform device
from the guest reduced performance because the guest could then not
take advantage of the improvements offered by the Xen PV
drivers in the guest. I tested the following emulated devices on my
setup: xen-platform, e1000, and VGA. I also verified the device that is
added by the "hdtype = 'ahci'" xl.cfg option is configured correctly
with the patch applied. I did not test all 12 devices that could be
affected by patch 2 of the series, which include the intel-hda high
definition audio device, a virtio-serial device, etc. One can look
at the second patch for the full list of qemu emulated devices whose
behavior is affected when the guest is configured for igd
gfx_passthru. These devices are also subject to mistakes in the patch
not discovered by the compiler, as mentioned in the comments for
reviewers section of the second patch.

[1] https://lore.kernel.org/qemu-devel/8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com/

Chuck Zmudzinski (3):
  libxl/dm: Use "pc" machine type for Intel IGD passthrough
  libxl/dm: Manage pci slot assignment for Intel IGD passthrough
  libxl/dm: Assign slot 2 by default for Intel IGD passthrough

 tools/libs/light/libxl_dm.c | 227 +++++++++++++++++++++++++++++-------
 1 file changed, 183 insertions(+), 44 deletions(-)

-- 
2.39.0



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [XEN PATCH 1/3] libxl/dm: Use "pc" machine type for Intel IGD passthrough
@ 2023-01-10  7:32     ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-09 23:08 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

The default qemu upstream "xenfv" machine type that is used when an HVM
guest is configured for Intel IGD passthrough assigns slot 2 to the
xen platform pci device. It is a requirement that slot 2 be assigned to
the Intel IGD when it is passed through as the primary graphics adapter.
Using the "pc" machine type instead of the "xenfv" machine type in that
case makes it possible for qemu upstream to assign slot 2 to the IGD.

Using the qemu "pc" machine and adding the xen platform device on the
qemu command line, instead of using the qemu "xenfv" machine which
automatically adds the xen platform device earlier in the guest
creation process, does come with some degradation of startup
performance: startup is slower, and some vga drivers in use during
early boot are unable to display the screen at the native resolution
of the monitor. Once the guest operating system (Windows or Linux) is
fully loaded, however, there is no noticeable difference in the
performance of the guest between the "pc" and "xenfv" machine types.

With this patch, libxl continues to use the default "xenfv" machine type
with the default settings of xen_platform_pci enabled and igd
gfx_passthru disabled. The patch only affects machines configured with
gfx_passthru enabled.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Reviewers might find this patch easier to review by looking at the
resulting code in the patched file rather than at the diff, because the
patch moves the check for igd gfx_passthru before the check for
disabling the xen platform device, which makes the logical flow of the
resulting code hard to follow in the diff. That change was made because
it results in a simpler logical flow in the resulting code.

 tools/libs/light/libxl_dm.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index fc264a3a13..2048815611 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1809,7 +1809,26 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
-        if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+            enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                            libxl__detect_gfx_passthru_kind(gc, guest_config);
+            switch (gfx_passthru_kind) {
+            case LIBXL_GFX_PASSTHRU_KIND_IGD:
+                /*
+                 * Using the machine "pc" because with the default machine "xenfv"
+                 * the xen-platform device will be assigned to slot 2, but with
+                 * GFX_PASSTHRU_KIND_IGD, slot 2 needs to be reserved for the Intel IGD.
+                 */
+                machinearg = libxl__strdup(gc, "pc,accel=xen,suppress-vmdesc=on,igd-passthru=on");
+                break;
+            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
+                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
+                return ERROR_FAIL;
+            default:
+                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
+                return ERROR_INVAL;
+            }
+        } else if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
@@ -1831,22 +1850,6 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
-            enum libxl_gfx_passthru_kind gfx_passthru_kind =
-                            libxl__detect_gfx_passthru_kind(gc, guest_config);
-            switch (gfx_passthru_kind) {
-            case LIBXL_GFX_PASSTHRU_KIND_IGD:
-                machinearg = GCSPRINTF("%s,igd-passthru=on", machinearg);
-                break;
-            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
-                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
-                return ERROR_FAIL;
-            default:
-                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
-                return ERROR_INVAL;
-            }
-        }
-
         flexarray_append(dm_args, machinearg);
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
-- 
2.39.0




* [XEN PATCH 2/3] libxl/dm: Manage pci slot assignment for Intel IGD passthrough
@ 2023-01-10  7:32     ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-09 23:08 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

By default, except for the ich9-usb-uhci device which libxl assigns to
slot 29 (0x1d), libxl defers to qemu upstream's automatic slot assignment
process, which is simply to assign each emulated device to the next
available slot on the pci bus. With this default behavior, libxl and
qemu will not configure the Intel IGD correctly. Specifically, the Intel
IGD must be assigned to slot 2, but the current default behavior is to
assign one of the emulated devices to slot 2.

With this patch libxl uses qemu command line options to specify the slot
address of each pci device in the guest when gfx_passthru is enabled
for the Intel IGD. It uses the same simple algorithm of assigning the
next available slot, except that it starts with slot 3 instead of slot 2
for the emulated devices. This process of slot assignment aims to
simulate the behavior of existing machines as much as possible.
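
A rough sketch of what the slot-assignment rule amounts to (an
illustrative helper, not the actual libxl code; the real patch
open-codes this at each "-device" call site):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Illustrative sketch of the slot-assignment rule in this patch:
 * when the guest is configured for IGD passthrough, each emulated
 * device gets an explicit addr=<next free slot>, starting at 3
 * because slot 2 is reserved for the IGD; otherwise the device
 * string is left untouched and qemu picks the slot itself. */
static void format_device_arg(char *buf, size_t len, const char *model,
                              int igd_passthru, int *next_slot)
{
    if (igd_passthru) {
        snprintf(buf, len, "%s,addr=%x", model, (unsigned)*next_slot);
        (*next_slot)++;   /* advance so the next device gets a fresh slot */
    } else {
        snprintf(buf, len, "%s", model);
    }
}
```

With next_slot starting at 3, successive calls produce addr=3, addr=4,
and so on in config order, mirroring qemu's own next-available-slot
behavior but shifted past slot 2.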

The default behavior (when igd gfx_passthru is disabled) of letting qemu
manage the slot addresses of emulated pci devices is preserved. The
patch also preserves the special treatment for the ich9 usb2 controller
(ich9-usb-ehci1) that libxl currently assigns to slot.function 29.7 and
the associated ich9-usb-uhciN devices to slot 29 (0x1d).

For future maintenance of this code, it is important that pci devices
managed by the libxl__build_device_model_args_new function follow the
logic of this patch: use the new local counter next_slot to assign the
slot address instead of letting upstream qemu assign the slot when the
guest is configured for Intel IGD passthrough, and preserve the current
behavior of letting qemu assign the slot address otherwise.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
The diff of this patch is easier to review if it is generated using
the -b (aka --ignore-space-change) option of diff/git-diff to filter
out some of the changes that are only due to white space.

This patch is difficult to verify for correctness without testing all
the devices that could be added by the libxl__build_device_model_args_new
function. There are 12 places where the addr=%x option needed to be
added to the arguments of the "-device" qemu option that adds an
emulated pci device which corresponds to at least 12 different devices
that could be affected by this patch if the patch contains mistakes
that the compiler did not notice. One mistake the compiler would not
notice is a missing next_slot++; statement, which would result in qemu
trying to assign a device to a slot that is already assigned, an error
in qemu. I did enough tests to find some mistakes in the patch, which I
of course fixed before submitting it, but I cannot guarantee there are
no other mistakes, because I don't have the resources to test the many
possible configurations that could be affected by this patch.

 tools/libs/light/libxl_dm.c | 168 ++++++++++++++++++++++++++++++------
 1 file changed, 141 insertions(+), 27 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2048815611..2720b5d4d0 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1205,6 +1205,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
     int rc;
+    int next_slot;
+    bool configure_pci_for_igd = false;
+    /*
+     * next_slot is only used when we need to configure the pci
+     * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
+     */
+    next_slot = 3;
+    if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+        enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                        libxl__detect_gfx_passthru_kind(gc, guest_config);
+        if (gfx_passthru_kind == LIBXL_GFX_PASSTHRU_KIND_IGD) {
+            configure_pci_for_igd = true;
+        }
+    }
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1372,6 +1386,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
         if (b_info->cmdline)
             flexarray_vappend(dm_args, "-append", b_info->cmdline, NULL);
+        /*
+         * When the Intel IGD is configured for primary graphics
+         * passthrough, we need to manually add the xen platform
+         * device because the QEMU machine type is "pc". Add it first to
+         * simulate the behavior of the "xenfv" QEMU machine type which
+         * always adds the xen platform device first. But in this case it
+         * will be at slot 3 because we are reserving slot 2 for the IGD.
+         */
+        if (configure_pci_for_igd &&
+            libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+            flexarray_append_pair(dm_args, "-device",
+                        GCSPRINTF("xen-platform,addr=%x", next_slot));
+            next_slot++;
+        }
 
         /* Find out early if one of the disk is on the scsi bus and add a scsi
          * controller. This is done ahead to keep the same behavior as previous
@@ -1381,7 +1409,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                 continue;
             }
             if (strncmp(disks[i].vdev, "sd", 2) == 0) {
-                flexarray_vappend(dm_args, "-device", "lsi53c895a", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("lsi53c895a,addr=%x", next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "lsi53c895a",
+                                      NULL);
+                }
                 break;
             }
         }
@@ -1436,31 +1471,67 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-spice");
             flexarray_append(dm_args, spiceoptions);
             if (libxl_defbool_val(b_info->u.hvm.spice.vdagent)) {
-                flexarray_vappend(dm_args, "-device", "virtio-serial",
-                    "-chardev", "spicevmc,id=vdagent,name=vdagent", "-device",
-                    "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
-                    NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("virtio-serial,addr=%x", next_slot),
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "virtio-serial",
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                }
             }
         }
 
         switch (b_info->u.hvm.vga.kind) {
         case LIBXL_VGA_INTERFACE_TYPE_STD:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("VGA,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("cirrus-vga,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_NONE:
             break;
         case LIBXL_VGA_INTERFACE_TYPE_QXL:
             /* QXL have 2 ram regions, ram and vram */
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64",ram_size_mb=%"PRIu64,
-                (b_info->video_memkb/2/1024), (b_info->video_memkb/2/1024) ) );
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,addr=%x,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64, next_slot,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+            }
             break;
         default:
             LOGD(ERROR, guest_domid, "Invalid emulated video card specified");
@@ -1496,8 +1567,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         } else if (b_info->u.hvm.usbversion) {
             switch (b_info->u.hvm.usbversion) {
             case 1:
-                flexarray_vappend(dm_args,
-                    "-device", "piix3-usb-uhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("piix3-usb-uhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "piix3-usb-uhci,id=usb", NULL);
+                }
                 break;
             case 2:
                 flexarray_append_pair(dm_args, "-device",
@@ -1509,8 +1587,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                         i, 2*(i-1), i-1));
                 break;
             case 3:
-                flexarray_vappend(dm_args,
-                    "-device", "nec-usb-xhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("nec-usb-xhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "nec-usb-xhci,id=usb", NULL);
+                }
                 break;
             default:
                 LOGD(ERROR, guest_domid, "usbversion parameter is invalid, "
@@ -1542,8 +1627,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
             switch (soundhw) {
             case LIBXL__QEMU_SOUNDHW_HDA:
-                flexarray_vappend(dm_args, "-device", "intel-hda",
-                                  "-device", "hda-duplex", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("intel-hda,addr=%x", next_slot),
+                                   "-device", "hda-duplex", NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "intel-hda",
+                                      "-device", "hda-duplex", NULL);
+                }
                 break;
             default:
                 flexarray_append_pair(dm_args, "-device",
@@ -1573,10 +1665,18 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                 guest_domid, nics[i].devid,
                                                 LIBXL_NIC_TYPE_VIF_IOEMU);
                 flexarray_append(dm_args, "-device");
-                flexarray_append(dm_args,
-                   GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
-                             nics[i].model, nics[i].devid,
-                             nics[i].devid, smac));
+                if (configure_pci_for_igd) {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,addr=%x,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, next_slot, nics[i].devid,
+                                  nics[i].devid, smac));
+                    next_slot++;
+                } else {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, nics[i].devid,
+                                  nics[i].devid, smac));
+                }
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args,
                                  GCSPRINTF("type=tap,id=net%d,ifname=%s,"
@@ -1865,8 +1965,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", GCSPRINTF("%d", guest_domid));
 
-        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI)
-            flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI) {
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                            GCSPRINTF("ahci,addr=%x,id=ahci0", next_slot));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+            }
+        }
         for (i = 0; i < num_disks; i++) {
             int disk, part;
             int dev_number =
@@ -2043,7 +2150,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         switch (b_info->u.hvm.vendor_device) {
         case LIBXL_VENDOR_DEVICE_XENSERVER:
             flexarray_append(dm_args, "-device");
-            flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            if (configure_pci_for_igd) {
+                flexarray_append(dm_args,
+                GCSPRINTF("xen-pvdevice,addr=%x,device-id=0xc000",
+                           next_slot));
+                next_slot++;
+            } else {
+                flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            }
             break;
         default:
             break;
-- 
2.39.0




* [XEN PATCH 3/3] libxl/dm: Assign slot 2 by default for Intel IGD passthrough
@ 2023-01-10  7:32     ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-09 23:08 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

It is possible for the administrator to manually specify the virtual
slot addresses of passed through pci devices on the guest's pci bus
using the @VSLOT parameter in xl.cfg. With this patch, libxl will by
default assign the Intel IGD to slot 2 when gfx_passthru is configured
for the Intel IGD, so it will no longer be necessary to use the @VSLOT
setting to configure the IGD correctly. Also, with this patch, libxl
will not override explicit @VSLOT settings by the administrator, so in
that case the patch has no effect on guest behavior.

The default behavior of letting qemu manage the slot addresses of passed
through pci devices when gfx_passthru is disabled and the administrator
does not set @VSLOT for passed through pci devices is also preserved.
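
The vdevfn value libxl stores uses the standard PCI devfn encoding
(slot in the upper five bits, function in the low three). As a quick
reference sketch of that encoding:

```c
#include <assert.h>

/* Standard PCI devfn packing: devfn = slot << 3 | func.
 * Pinning the IGD at virtual slot 2, function 0 therefore
 * means vdevfn = 0x10. */
#define PCI_DEVFN(slot, func)  ((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)        (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)        ((devfn) & 0x07)
```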

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
 tools/libs/light/libxl_dm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2720b5d4d0..b51ebae643 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1207,6 +1207,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     int rc;
     int next_slot;
     bool configure_pci_for_igd = false;
+    const int igd_slot = 2;
     /*
      * next_slot is only used when we need to configure the pci
      * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
@@ -2173,6 +2174,27 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_append(dm_envs, NULL);
     if (envs)
         *envs = (char **) flexarray_contents(dm_envs);
+    if (configure_pci_for_igd) {
+        libxl_device_pci *pci = NULL;
+        for (i = 0; i < guest_config->num_pcidevs; i++) {
+            pci = &guest_config->pcidevs[i];
+            if (!pci->vdevfn) {
+                /*
+                 * Find the Intel IGD and configure it for slot 2.
+                 * Configure any other devices for slot next_slot.
+                 * Since the guest is configured for IGD passthrough,
+                 * assume the device on the host at slot 2 is the IGD.
+                 */
+                if (pci->domain == 0 && pci->bus == 0 &&
+                    pci->dev == igd_slot && pci->func == 0) {
+                    pci->vdevfn = PCI_DEVFN(igd_slot, 0);
+                } else {
+                    pci->vdevfn = PCI_DEVFN(next_slot, 0);
+                    next_slot++;
+                }
+            }
+        }
+    }
     return 0;
 }
 
-- 
2.39.0




* [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
@ 2023-01-10  7:32   ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-10  7:32 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

Sorry for the length of this cover letter but it is helpful to put all
the pros and cons of the two different approaches to solving the problem
of configuring the Intel IGD with qemu upstream and libxl in one place,
which I attempt to do here. Of course the other approach involves a
patch to qemu [1] instead of using this patch series for libxl.

The quick answer:

I think the other patch to qemu is the better option, but I would be OK
if you use this patch series instead.

Details with my reasons for preferring the other patch to qemu over this
patch series to libxl: 

I call attention to the commit message of the first patch, which points
out that using the "pc" machine and adding the xen platform device on
the qemu upstream command line is not functionally equivalent to using
the "xenfv" machine, which automatically adds the xen platform device
earlier in the guest creation process. As a result, there is a noticeable
reduction in guest performance during startup with the "pc" machine
type even when the xen platform device is added via the qemu command
line options, although both Linux and Windows guests eventually perform
equally well once the guest operating system is fully loaded.

Specifically, startup time is longer, and neither the grub vga drivers
nor the Windows vga drivers perform as well in early startup when the
xen platform device is added via the qemu command line instead of being
added immediately after the other emulated i440fx pci devices, as it is
when the "xenfv" machine type is used.

For example, when using the "pc" machine, which adds the xen platform
device using a command line option, the Linux guest could not display
the grub boot menu at the native resolution of the monitor, but with the
"xenfv" machine, the grub menu is displayed at the full 1920x1080
native resolution of the monitor used for testing. So improved startup
performance is an advantage of the patch for qemu.

I also call attention to the last point of the commit message of the
second patch and the comments for reviewers section of the second patch.
This approach, as opposed to fixing this in qemu upstream, makes
maintaining the code in libxl__build_device_model_args_new more
difficult and therefore increases the chances of problems caused by
coding errors and typos for users of libxl. So that is another advantage
of the patch for qemu.

OTOH, fixing this in qemu causes newer qemu versions to behave
differently than previous versions, which the qemu community does not
like, although they seem OK with the other patch since it only affects
qemu "xenfv" machine types. They do not want the patch to affect
toolstacks like libvirt that do not use qemu upstream's
autoconfiguration options as much as libxl does. Of course, libvirt can
manage qemu "xenfv" machines, so existing "xenfv" guests configured
manually by libvirt could be adversely affected by the patch to qemu,
but only if those same guests are also configured for igd-passthrough,
which likely describes a very small number of libvirt users of qemu.

A year or two ago I tried to configure guests for pci passthrough on xen
using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
could not convert an xl.cfg file with a configuration item
pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
So it is unlikely there are any users out there using libvirt to
configure xen hvm guests for igd passthrough on xen, and those are the
only users that could be adversely affected by the simpler patch to qemu
to fix this.

The only advantage of this patch series over the qemu patch is that it
does not require any changes to qemu to make Intel IGD configuration
easier with libxl, so the risk of affecting other qemu users is
entirely eliminated. The cost of patching libxl instead of qemu is
reduced startup performance compared to what could be achieved by
patching qemu, and an increased risk that the tedious process of
manually managing the slot addresses of all the emulated devices will
make it more difficult to keep the libxl code free of bugs.

I will leave it to the maintainer of the code in both qemu and libxl
(Anthony) to decide which, if any, of the patches to apply. I am OK with
either this patch series to libxl or the proposed patch to qemu to fix
this problem, but I do recommend the other patch to qemu over this patch
series because of the improved performance during startup with that
patch and the relatively low risk that any libvirt users will be
adversely affected by that patch.

Brief statement of the problem this patch series solves:

Currently only the qemu traditional device model reserves slot 2 for the
Intel Integrated Graphics Device (IGD) with default settings. Assigning
the Intel IGD to slot 2 is necessary for the device to operate properly
when passed through as the primary graphics adapter. The qemu
traditional device model takes care of this by reserving slot 2 for the
Intel IGD, but the upstream qemu device model currently does not reserve
slot 2 for the Intel IGD.

This patch series modifies libxl so the upstream qemu device model will
also, with default settings, assign slot 2 for the Intel IGD.

There are three reasons why it is difficult to configure the guest so
the Intel IGD is assigned to slot 2 in the guest using libxl and the
upstream device model, so the patch series is logically organized in
three separate patches, each of which resolves one of the three
problems:

What each of the three libxl patches does:

1. With the default "xenfv" machine type, qemu upstream is hard-coded
   to assign the xen platform device to slot 2. The first patch fixes
   that by using the "pc" machine instead when the gfx_passthru type is
   igd; in that case, if xen_platform_pci is set in the guest config,
   libxl now assigns the xen platform device to slot 3, making it
   possible to assign the IGD to slot 2. The patch only affects guests
   with the gfx_passthru option enabled. The default behavior of using
   the "xenfv" machine type (xen_platform_pci enabled but gfx_passthru
   disabled) is preserved. Another way to describe the patch is that it
   adds a second exception to the default choice of the "xenfv" machine
   type, the first exception being that the "pc" machine type is also
   used instead of "xenfv" when the xen platform pci device is disabled
   in the guest xl.cfg file.

2. Currently, with libxl and qemu upstream, most emulated pci devices
   are by default automatically assigned a pci slot, and the emulated
   devices are assigned before the passed through ones. This means that
   even if libxl is patched so the xen platform device is not assigned
   to slot 2, some other emulated device will take slot 2 unless libxl
   explicitly assigns the slot address of each emulated pci device in
   such a way that the IGD can be assigned slot 2. The second patch
   fixes this by hard coding the slot assignment for the emulated
   devices instead of deferring to qemu upstream's auto-assignment,
   which does not do what is necessary to configure the Intel IGD
   correctly. With the second patch applied, it is possible to
   configure the Intel IGD correctly by using the @VSLOT parameter in
   xl.cfg to specify the slot address of each passed through pci device
   in the guest. The second patch is also designed to not change the
   default behavior of letting qemu autoconfigure the pci slot
   addresses when igd gfx_passthru is disabled in xl.cfg.

3. For convenience, the third patch automatically assigns slot 2 to the
   Intel IGD when the gfx_passthru type is igd, so with the third patch
   applied it is not necessary to set the @VSLOT parameter to configure
   the Intel IGD correctly.

Testing:

I tested a system with Intel IGD passthrough and two other pci devices
passed through, with and without the xen platform device. I also did
tests on guests without any pci passthrough configured. In all cases
tested, libxl behaved as expected. For example, the device model
arguments are only changed if gfx_passthru is set for the IGD, libxl
respected administrator settings such as @VSLOT and xen_platform_pci
with the patch series applied, and omitting the xen platform device
from the guest reduced performance because in that case the guest
could not take advantage of the improvements offered by the Xen PV
drivers. I tested the following emulated devices on my setup:
xen-platform, e1000, and VGA. I also verified that the device added
by the "hdtype = 'ahci'" xl.cfg option is configured correctly with
the patch applied. I did not test all 12 devices that could be
affected by patch 2 of the series, which include the intel-hda high
definition audio device, a virtio-serial device, etc. One can look
at the second patch for the full list of qemu emulated devices whose
behavior is affected when the guest is configured for igd
gfx_passthru. These devices are also subject to mistakes in the patch
not discovered by the compiler, as mentioned in the comments for
reviewers section of the second patch.

[1] https://lore.kernel.org/qemu-devel/a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com/

v2: correct the link to the qemu patch - the link in v1 was to an
    incorrect version of the patch

Chuck Zmudzinski (3):
  libxl/dm: Use "pc" machine type for Intel IGD passthrough
  libxl/dm: Manage pci slot assignment for Intel IGD passthrough
  libxl/dm: Assign slot 2 by default for Intel IGD passthrough

 tools/libs/light/libxl_dm.c | 227 +++++++++++++++++++++++++++++-------
 1 file changed, 183 insertions(+), 44 deletions(-)

-- 
2.39.0




* [XEN PATCH v2 1/3] libxl/dm: Use "pc" machine type for Intel IGD passthrough
@ 2023-01-10  7:32     ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-10  7:32 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

The default qemu upstream "xenfv" machine type that is used when an HVM
guest is configured for Intel IGD passthrough assigns slot 2 to the
xen platform pci device. It is a requirement that slot 2 be assigned to
the Intel IGD when it is passed through as the primary graphics adapter.
Using the "pc" machine type instead of the "xenfv" machine type in that
case makes it possible for qemu upstream to assign slot 2 to the IGD.

Using the qemu "pc" machine and adding the xen platform device on the
qemu command line, instead of using the qemu "xenfv" machine which
automatically adds the xen platform device earlier in the guest creation
process, does come with some degradation of startup performance: startup
is slower, and some vga drivers in use during early boot are unable to
display the screen at the native resolution of the monitor. Once the
guest operating system (Windows or Linux) is fully loaded, however,
there is no noticeable difference in the performance of the guest
between the "pc" and "xenfv" machine types.

With this patch, libxl continues to use the default "xenfv" machine type
with the default settings (xen_platform_pci enabled and igd gfx_passthru
disabled). The patch only affects machines configured with gfx_passthru
enabled.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Reviewers might find this patch easier to review by looking at the
resulting code in the patched file rather than at the diff, because the
patch moves the check for igd gfx_passthru before the check for
disabling the xen platform device, which makes the logical flow hard to
follow in the diff. That change was made because it results in a
simpler logical flow in the resulting code.

v2: No changes to this patch since v1

 tools/libs/light/libxl_dm.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index fc264a3a13..2048815611 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1809,7 +1809,26 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
-        if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+            enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                            libxl__detect_gfx_passthru_kind(gc, guest_config);
+            switch (gfx_passthru_kind) {
+            case LIBXL_GFX_PASSTHRU_KIND_IGD:
+                /*
+                 * Using the machine "pc" because with the default machine "xenfv"
+                 * the xen-platform device will be assigned to slot 2, but with
+                 * GFX_PASSTHRU_KIND_IGD, slot 2 needs to be reserved for the Intel IGD.
+                 */
+                machinearg = libxl__strdup(gc, "pc,accel=xen,suppress-vmdesc=on,igd-passthru=on");
+                break;
+            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
+                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
+                return ERROR_FAIL;
+            default:
+                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
+                return ERROR_INVAL;
+            }
+        } else if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
@@ -1831,22 +1850,6 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
-            enum libxl_gfx_passthru_kind gfx_passthru_kind =
-                            libxl__detect_gfx_passthru_kind(gc, guest_config);
-            switch (gfx_passthru_kind) {
-            case LIBXL_GFX_PASSTHRU_KIND_IGD:
-                machinearg = GCSPRINTF("%s,igd-passthru=on", machinearg);
-                break;
-            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
-                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
-                return ERROR_FAIL;
-            default:
-                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
-                return ERROR_INVAL;
-            }
-        }
-
         flexarray_append(dm_args, machinearg);
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
-- 
2.39.0




* [XEN PATCH v2 2/3] libxl/dm: Manage pci slot assignment for Intel IGD passthrough
@ 2023-01-10  7:32     ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-10  7:32 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

By default, except for the ich9-usb-uhci device which libxl assigns to
slot 29 (0x1d), libxl defers to qemu upstream's automatic slot
assignment, which simply assigns each emulated device to the next
available slot on the pci bus. With this default behavior, libxl and
qemu will not configure the Intel IGD correctly: the Intel IGD must be
assigned to slot 2, but by default one of the emulated devices is
assigned to slot 2 instead.

With this patch libxl uses qemu command line options to specify the slot
address of each pci device in the guest when gfx_passthru is enabled
for the Intel IGD. It uses the same simple algorithm of assigning the
next available slot, except that it starts with slot 3 instead of slot 2
for the emulated devices. This process of slot assignment aims to
simulate the behavior of existing machines as much as possible.

The default behavior (when igd gfx_passthru is disabled) of letting qemu
manage the slot addresses of emulated pci devices is preserved. The
patch also preserves the special treatment of the ich9 usb2 controller
(ich9-usb-ehci1), which libxl currently assigns to slot 29, function 7,
and of the associated ich9-usb-uhciN devices, which it assigns to slot
29 (0x1d).

For future maintenance of this code, it is important that pci devices
managed by the libxl__build_device_model_args_new function follow the
logic of this patch: when the guest is configured for Intel IGD
passthrough, use the new local counter next_slot to assign the slot
address instead of letting upstream qemu assign it, and otherwise
preserve the current behavior of letting qemu assign the slot address.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
The diff of this patch is easier to review if it is generated using
the -b (aka --ignore-space-change) option of diff/git-diff to filter
out some of the changes that are only due to white space.

This patch is difficult to verify for correctness without testing all
the devices that could be added by the libxl__build_device_model_args_new
function. There are 12 places where the addr=%x option needed to be
added to the arguments of the "-device" qemu option that adds an
emulated pci device, which corresponds to at least 12 different devices
that could be affected if the patch contains mistakes that the compiler
did not notice. One mistake the compiler would not notice is a missing
next_slot++; statement, which would result in qemu trying to assign a
device to a slot that is already assigned, an error in qemu. I did
enough testing to find and fix some mistakes before submitting the
patch, but I cannot guarantee there are no others, because I do not
have the resources to test the many possible configurations that this
patch could affect.

v2: No changes to this patch since v1

 tools/libs/light/libxl_dm.c | 168 ++++++++++++++++++++++++++++++------
 1 file changed, 141 insertions(+), 27 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2048815611..2720b5d4d0 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1205,6 +1205,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
     int rc;
+    int next_slot;
+    bool configure_pci_for_igd = false;
+    /*
+     * next_slot is only used when we need to configure the pci
+     * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
+     */
+    next_slot = 3;
+    if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+        enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                        libxl__detect_gfx_passthru_kind(gc, guest_config);
+        if (gfx_passthru_kind == LIBXL_GFX_PASSTHRU_KIND_IGD) {
+            configure_pci_for_igd = true;
+        }
+    }
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1372,6 +1386,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
         if (b_info->cmdline)
             flexarray_vappend(dm_args, "-append", b_info->cmdline, NULL);
+        /*
+         * When the Intel IGD is configured for primary graphics
+         * passthrough, we need to manually add the xen platform
+         * device because the QEMU machine type is "pc". Add it first to
+         * simulate the behavior of the "xenfv" QEMU machine type which
+         * always adds the xen platform device first. But in this case it
+         * will be at slot 3 because we are reserving slot 2 for the IGD.
+         */
+        if (configure_pci_for_igd &&
+            libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+            flexarray_append_pair(dm_args, "-device",
+                        GCSPRINTF("xen-platform,addr=%x", next_slot));
+            next_slot++;
+        }
 
         /* Find out early if one of the disk is on the scsi bus and add a scsi
          * controller. This is done ahead to keep the same behavior as previous
@@ -1381,7 +1409,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                 continue;
             }
             if (strncmp(disks[i].vdev, "sd", 2) == 0) {
-                flexarray_vappend(dm_args, "-device", "lsi53c895a", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("lsi53c895a,addr=%x", next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "lsi53c895a",
+                                      NULL);
+                }
                 break;
             }
         }
@@ -1436,31 +1471,67 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-spice");
             flexarray_append(dm_args, spiceoptions);
             if (libxl_defbool_val(b_info->u.hvm.spice.vdagent)) {
-                flexarray_vappend(dm_args, "-device", "virtio-serial",
-                    "-chardev", "spicevmc,id=vdagent,name=vdagent", "-device",
-                    "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
-                    NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("virtio-serial,addr=%x", next_slot),
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "virtio-serial",
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                }
             }
         }
 
         switch (b_info->u.hvm.vga.kind) {
         case LIBXL_VGA_INTERFACE_TYPE_STD:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("VGA,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("cirrus-vga,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_NONE:
             break;
         case LIBXL_VGA_INTERFACE_TYPE_QXL:
             /* QXL have 2 ram regions, ram and vram */
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64",ram_size_mb=%"PRIu64,
-                (b_info->video_memkb/2/1024), (b_info->video_memkb/2/1024) ) );
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,addr=%x,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64, next_slot,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+            }
             break;
         default:
             LOGD(ERROR, guest_domid, "Invalid emulated video card specified");
@@ -1496,8 +1567,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         } else if (b_info->u.hvm.usbversion) {
             switch (b_info->u.hvm.usbversion) {
             case 1:
-                flexarray_vappend(dm_args,
-                    "-device", "piix3-usb-uhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("piix3-usb-uhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "piix3-usb-uhci,id=usb", NULL);
+                }
                 break;
             case 2:
                 flexarray_append_pair(dm_args, "-device",
@@ -1509,8 +1587,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                         i, 2*(i-1), i-1));
                 break;
             case 3:
-                flexarray_vappend(dm_args,
-                    "-device", "nec-usb-xhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("nec-usb-xhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "nec-usb-xhci,id=usb", NULL);
+                }
                 break;
             default:
                 LOGD(ERROR, guest_domid, "usbversion parameter is invalid, "
@@ -1542,8 +1627,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
             switch (soundhw) {
             case LIBXL__QEMU_SOUNDHW_HDA:
-                flexarray_vappend(dm_args, "-device", "intel-hda",
-                                  "-device", "hda-duplex", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("intel-hda,addr=%x", next_slot),
+                                   "-device", "hda-duplex", NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "intel-hda",
+                                      "-device", "hda-duplex", NULL);
+                }
                 break;
             default:
                 flexarray_append_pair(dm_args, "-device",
@@ -1573,10 +1665,18 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                 guest_domid, nics[i].devid,
                                                 LIBXL_NIC_TYPE_VIF_IOEMU);
                 flexarray_append(dm_args, "-device");
-                flexarray_append(dm_args,
-                   GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
-                             nics[i].model, nics[i].devid,
-                             nics[i].devid, smac));
+                if (configure_pci_for_igd) {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,addr=%x,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, next_slot, nics[i].devid,
+                                  nics[i].devid, smac));
+                    next_slot++;
+                } else {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, nics[i].devid,
+                                  nics[i].devid, smac));
+                }
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args,
                                  GCSPRINTF("type=tap,id=net%d,ifname=%s,"
@@ -1865,8 +1965,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", GCSPRINTF("%d", guest_domid));
 
-        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI)
-            flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI) {
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                            GCSPRINTF("ahci,addr=%x,id=ahci0", next_slot));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+            }
+        }
         for (i = 0; i < num_disks; i++) {
             int disk, part;
             int dev_number =
@@ -2043,7 +2150,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         switch (b_info->u.hvm.vendor_device) {
         case LIBXL_VENDOR_DEVICE_XENSERVER:
             flexarray_append(dm_args, "-device");
-            flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            if (configure_pci_for_igd) {
+                flexarray_append(dm_args,
+                GCSPRINTF("xen-pvdevice,addr=%x,device-id=0xc000",
+                           next_slot));
+                next_slot++;
+            } else {
+                flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            }
             break;
         default:
             break;
-- 
2.39.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [XEN PATCH v2 3/3] libxl/dm: Assign slot 2 by default for Intel IGD passthrough
@ 2023-01-10  7:32     ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-10  7:32 UTC (permalink / raw)
  To: anthony.perard; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

The administrator can manually specify the virtual slot addresses of
passed-through pci devices on the guest's pci bus using the @VSLOT
parameter in xl.cfg. With this patch, libxl will by default assign the
Intel IGD to slot 2 when gfx_passthru is configured for the Intel IGD,
so it will no longer be necessary to use the @VSLOT setting to
configure the IGD correctly. Also, with this patch, libxl will not
override explicit @VSLOT settings by the administrator, so in that
case the patch has no effect on guest behavior.
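
For illustration, the manual workaround this patch makes unnecessary
looks like the following xl.cfg fragment (assuming the usual
PCI_SPEC_STRING syntax from xl-pci-configuration(5); the host BDF
0000:00:02.0 is the IGD's customary address and is an example here):

```
# Pass through the host IGD at 0000:00:02.0 and pin it to
# virtual slot 2 in the guest with the @VSLOT suffix.
gfx_passthru = 1
pci = [ '0000:00:02.0@2' ]
```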

The default behavior of letting qemu manage the slot addresses of passed
through pci devices when gfx_passthru is disabled and the administrator
does not set @VSLOT for passed through pci devices is also preserved.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
v2: No changes to this patch since v1

 tools/libs/light/libxl_dm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2720b5d4d0..b51ebae643 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1207,6 +1207,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     int rc;
     int next_slot;
     bool configure_pci_for_igd = false;
+    const int igd_slot = 2;
     /*
      * next_slot is only used when we need to configure the pci
      * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
@@ -2173,6 +2174,27 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_append(dm_envs, NULL);
     if (envs)
         *envs = (char **) flexarray_contents(dm_envs);
+    if (configure_pci_for_igd) {
+        libxl_device_pci *pci = NULL;
+        for (i = 0; i < guest_config->num_pcidevs; i++) {
+            pci = &guest_config->pcidevs[i];
+            if (!pci->vdevfn) {
+                /*
+                 * Find the Intel IGD and configure it for slot 2.
+                 * Configure any other devices for slot next_slot.
+                 * Since the guest is configured for IGD passthrough,
+                 * assume the device on the host at slot 2 is the IGD.
+                 */
+                if (pci->domain == 0 && pci->bus == 0 &&
+                    pci->dev == igd_slot && pci->func == 0) {
+                    pci->vdevfn = PCI_DEVFN(igd_slot, 0);
+                } else {
+                    pci->vdevfn = PCI_DEVFN(next_slot, 0);
+                    next_slot++;
+                }
+            }
+        }
+    }
     return 0;
 }
 
-- 
2.39.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
  2023-01-10  7:32   ` [XEN PATCH v2 " Chuck Zmudzinski
@ 2023-01-25 11:37     ` Anthony PERARD
  -1 siblings, 0 replies; 14+ messages in thread
From: Anthony PERARD via @ 2023-01-25 11:37 UTC (permalink / raw)
  To: Chuck Zmudzinski; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
> I call attention to the commit message of the first patch which points
> out that using the "pc" machine and adding the xen platform device on
> the qemu upstream command line is not functionally equivalent to using
> the "xenfv" machine which automatically adds the xen platform device
> earlier in the guest creation process. As a result, there is a noticeable
> reduction in the performance of the guest during startup with the "pc"
> machine type even if the xen platform device is added via the qemu
> command line options, although eventually both Linux and Windows guests
> perform equally well once the guest operating system is fully loaded.

There shouldn't be a difference between "xenfv" machine or using the
"pc" machine while adding the "xen-platform" device, at least with
regards to access to disk or network.

The first patch of the series is using the "pc" machine without any
"xen-platform" device, so we can't compare startup performance based on
that.

> Specifically, startup time is longer and neither the grub vga drivers
> nor the windows vga drivers in early startup perform as well when the
> xen platform device is added via the qemu command line instead of being
> added immediately after the other emulated i440fx pci devices when the
> "xenfv" machine type is used.

The "xen-platform" device is mostly a hint to a guest that it can use
pv-disk and pv-network devices. I don't think it would change anything
with regards to graphics.

> For example, when using the "pc" machine, which adds the xen platform
> device using a command line option, the Linux guest could not display
> the grub boot menu at the native resolution of the monitor, but with the
> "xenfv" machine, the grub menu is displayed at the full 1920x1080
> native resolution of the monitor for testing. So improved startup
> performance is an advantage for the patch for qemu.

I've just found out that when doing IGD passthrough, both machines
"xenfv" and "pc" are much more different than I thought ... :-(
pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
turn copies some information from the real host bridge.
I guess this new host bridge helps when the firmware sets up the
graphics for grub.

> I also call attention to the last point of the commit message of the
> second patch and the comments for reviewers section of the second patch.
> This approach, as opposed to fixing this in qemu upstream, makes
> maintaining the code in libxl__build_device_model_args_new more
> difficult and therefore increases the chances of problems caused by
> coding errors and typos for users of libxl. So that is another advantage
> of the patch for qemu.

We would just need to use a different approach in libxl when generating
the command line. We could probably avoid duplications. I was hoping to
have a patch series for libxl that would change the machine used to start
guests to "pc" instead of "xenfv" for all configurations, but based on the
point above (the IGD-specific change to "xenfv"), I guess we can't
really do anything from libxl to fix IGD passthrough.

> OTOH, fixing this in qemu causes newer qemu versions to behave
> differently than previous versions of qemu, which the qemu community
> does not like, although they seem OK with the other patch since it only
> affects qemu "xenfv" machine types, but they do not want the patch to
> affect toolstacks like libvirt that do not use qemu upstream's
> autoconfiguration options as much as libxl does, and, of course, libvirt
> can manage qemu "xenfv" machines so existing "xenfv" guests configured
> manually by libvirt could be adversely affected by the patch to qemu,
> but only if those same guests are also configured for igd-passthrough,
> which is likely a very small number of possibly affected libvirt users
> of qemu.
> 
> A year or two ago I tried to configure guests for pci passthrough on xen
> using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
> could not convert an xl.cfg file with a configuration item
> pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
> So it is unlikely there are any users out there using libvirt to
> configure xen hvm guests for igd passthrough on xen, and those are the
> only users that could be adversely affected by the simpler patch to qemu
> to fix this.

FYI, libvirt should be using libxl to create guests, I don't think there
is another way for libvirt to create xen guests.



So overall, unfortunately the "pc" machine in QEMU isn't suitable for
IGD passthrough, as the "xenfv" machine already has some workarounds to
make IGD work and just needs some more.

I've seen that the patch for QEMU is now reviewed, so I'll look at
having it merged soonish.

Thanks,

-- 
Anthony PERARD


^ permalink raw reply	[flat|nested] 14+ messages in thread


* Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
  2023-01-25 11:37     ` Anthony PERARD
  (?)
@ 2023-01-25 20:20     ` Chuck Zmudzinski
  -1 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-25 20:20 UTC (permalink / raw)
  To: Anthony PERARD; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

On 1/25/2023 6:37 AM, Anthony PERARD wrote:
> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
> > I call attention to the commit message of the first patch which points
> > out that using the "pc" machine and adding the xen platform device on
> > the qemu upstream command line is not functionally equivalent to using
> > the "xenfv" machine which automatically adds the xen platform device
> > earlier in the guest creation process. As a result, there is a noticeable
> > reduction in the performance of the guest during startup with the "pc"
> > machine type even if the xen platform device is added via the qemu
> > command line options, although eventually both Linux and Windows guests
> > perform equally well once the guest operating system is fully loaded.
>
> There shouldn't be a difference between "xenfv" machine or using the
> "pc" machine while adding the "xen-platform" device, at least with
> regards to access to disk or network.
>
> The first patch of the series is using the "pc" machine without any
> "xen-platform" device, so we can't compare startup performance based on
> that.
>
> > Specifically, startup time is longer and neither the grub vga drivers
> > nor the windows vga drivers in early startup perform as well when the
> > xen platform device is added via the qemu command line instead of being
> > added immediately after the other emulated i440fx pci devices when the
> > "xenfv" machine type is used.
>
> The "xen-platform" device is mostly a hint to a guest that it can use
> pv-disk and pv-network devices. I don't think it would change anything
> with regards to graphics.
>
> > For example, when using the "pc" machine, which adds the xen platform
> > device using a command line option, the Linux guest could not display
> > the grub boot menu at the native resolution of the monitor, but with the
> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
> > native resolution of the monitor for testing. So improved startup
> > performance is an advantage for the patch for qemu.
>
> I've just found out that when doing IGD passthrough, both machines
> "xenfv" and "pc" are much more different than I thought ... :-(
> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
> turn copies some information from the real host bridge.
> I guess this new host bridge helps when the firmware sets up the
> graphics for grub.
>
> > I also call attention to the last point of the commit message of the
> > second patch and the comments for reviewers section of the second patch.
> > This approach, as opposed to fixing this in qemu upstream, makes
> > maintaining the code in libxl__build_device_model_args_new more
> > difficult and therefore increases the chances of problems caused by
> > coding errors and typos for users of libxl. So that is another advantage
> > of the patch for qemu.
>
> We would just need to use a different approach in libxl when generating
> the command line. We could probably avoid duplications. I was hoping to
> have a patch series for libxl that would change the machine used to start
> guests to "pc" instead of "xenfv" for all configurations, but based on the
> point above (the IGD-specific change to "xenfv"), I guess we can't
> really do anything from libxl to fix IGD passthrough.
>
> > OTOH, fixing this in qemu causes newer qemu versions to behave
> > differently than previous versions of qemu, which the qemu community
> > does not like, although they seem OK with the other patch since it only
> > affects qemu "xenfv" machine types, but they do not want the patch to
> > affect toolstacks like libvirt that do not use qemu upstream's
> > autoconfiguration options as much as libxl does, and, of course, libvirt
> > can manage qemu "xenfv" machines so existing "xenfv" guests configured
> > manually by libvirt could be adversely affected by the patch to qemu,
> > but only if those same guests are also configured for igd-passthrough,
> > which is likely a very small number of possibly affected libvirt users
> > of qemu.
> > 
> > A year or two ago I tried to configure guests for pci passthrough on xen
> > using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
> > could not convert an xl.cfg file with a configuration item
> > pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
> > So it is unlikely there are any users out there using libvirt to
> > configure xen hvm guests for igd passthrough on xen, and those are the
> > only users that could be adversely affected by the simpler patch to qemu
> > to fix this.
>
> FYI, libvirt should be using libxl to create guests, I don't think there
> is another way for libvirt to create xen guests.

I have had success using libvirt as a frontend to libxl for most of my xen
guests, except for HVM guests that have pci devices passed through, because
the tool to convert an xl.cfg file to libvirt xml was not able to convert the
pci = ... line in xl.cfg. Perhaps newer versions of libvirt can do it (I haven't
tried since at least a couple of years ago, with an older version of libvirt).

>
>
>
> So overall, unfortunately the "pc" machine in QEMU isn't suitable to do
> IGD passthrough as the "xenfv" machine has already some workaround to
> make IGD work and just need some more.
>
> I've seen that the patch for QEMU is now reviewed, so I look at having
> it merged soonish.

Hi Anthony,

Thanks for looking at this and for also looking at the Qemu patch
to fix this. As I said earlier, I think to fix this problem for the IGD,
the qemu patch is probably better than this patch to libxl.

Regarding the rest of your comments, I think the Xen developers
need to decide what the roadmap for the future development of
Xen HVM machines on x86 is before deciding on any further
changes. I have not noticed much development in this feature
in the past few years, except for Bernhard Beschow who has been
doing some work to make the piix3 stuff more maintainable in
Qemu upstream. When that is done, it might be an opportunity to do
some work improving the "xenfv" machine in Qemu upstream.
The "pc" machine type is of course a very old machine type
to still be using as the device model for modern systems.

I noticed about four or five years ago there was a patch set
proposed to use "q35" instead of "pc" for Xen HVM guests with
Qemu upstream, but there did not seem to be any agreement
about the best way to implement that change, with some saying
more of it should be implemented outside of Qemu and by libxl
or maybe hvmloader instead. If anyone can describe if there is a
roadmap for the future of Xen HVM on x86, that would be helpful.
Thanks,

Chuck


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
  2023-01-25 11:37     ` Anthony PERARD
  (?)
  (?)
@ 2023-01-25 23:19     ` Chuck Zmudzinski
  2023-01-30  0:38       ` Chuck Zmudzinski
  -1 siblings, 1 reply; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-25 23:19 UTC (permalink / raw)
  To: Anthony PERARD; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

On 1/25/2023 6:37 AM, Anthony PERARD wrote:
> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
> > I call attention to the commit message of the first patch which points
> > out that using the "pc" machine and adding the xen platform device on
> > the qemu upstream command line is not functionally equivalent to using
> > the "xenfv" machine which automatically adds the xen platform device
> > earlier in the guest creation process. As a result, there is a noticeable
> > reduction in the performance of the guest during startup with the "pc"
> > machine type even if the xen platform device is added via the qemu
> > command line options, although eventually both Linux and Windows guests
> > perform equally well once the guest operating system is fully loaded.
>
> There shouldn't be a difference between "xenfv" machine or using the
> "pc" machine while adding the "xen-platform" device, at least with
> regards to access to disk or network.
>
> The first patch of the series is using the "pc" machine without any
> "xen-platform" device, so we can't compare startup performance based on
> that.
>
> > Specifically, startup time is longer and neither the grub vga drivers
> > nor the windows vga drivers in early startup perform as well when the
> > xen platform device is added via the qemu command line instead of being
> > added immediately after the other emulated i440fx pci devices when the
> > "xenfv" machine type is used.
>
> The "xen-platform" device is mostly a hint to a guest that it can use
> pv-disk and pv-network devices. I don't think it would change anything
> with regards to graphics.
>
> > For example, when using the "pc" machine, which adds the xen platform
> > device using a command line option, the Linux guest could not display
> > the grub boot menu at the native resolution of the monitor, but with the
> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
> > native resolution of the monitor for testing. So improved startup
> > performance is an advantage for the patch for qemu.
>
> I've just found out that when doing IGD passthrough, both machines
> "xenfv" and "pc" are much more different than I thought ... :-(
> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
> turn copies some information from the real host bridge.
> I guess this new host bridge helps when the firmware sets up the
> graphics for grub.

I am surprised it works at all with the "pc" machine, that is, without the
TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE that is used in the "xenfv"
machine. This only seems to affect the legacy grub vga driver and the legacy
Windows vga driver during early boot. Still, I much prefer keeping the "xenfv"
machine for the Intel IGD to this workaround of patching libxl to use the "pc"
machine.

>
> > I also call attention to the last point of the commit message of the
> > second patch and the comments for reviewers section of the second patch.
> > This approach, as opposed to fixing this in qemu upstream, makes
> > maintaining the code in libxl__build_device_model_args_new more
> > difficult and therefore increases the chances of problems caused by
> > coding errors and typos for users of libxl. So that is another advantage
> > of the patch for qemu.
>
> We would just need to use a different approach in libxl when generating
> the command line. We could probably avoid duplications. I was hoping to
> have a patch series for libxl that would change the machine used to start
> guests to "pc" instead of "xenfv" for all configurations, but based on the
> point above (the IGD-specific change to "xenfv"), I guess we can't
> really do anything from libxl to fix IGD passthrough.

We could switch to the "pc" machine, but we would need to patch
qemu also so the "pc" machine uses the special device the "xenfv"
machine uses (TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE).
So it is simpler to just use the other patch to qemu and not patch
libxl at all to fix this.

>
> > OTOH, fixing this in qemu causes newer qemu versions to behave
> > differently than previous versions of qemu, which the qemu community
> > does not like, although they seem OK with the other patch since it only
> > affects qemu "xenfv" machine types, but they do not want the patch to
> > affect toolstacks like libvirt that do not use qemu upstream's
> > autoconfiguration options as much as libxl does, and, of course, libvirt
> > can manage qemu "xenfv" machines so existing "xenfv" guests configured
> > manually by libvirt could be adversely affected by the patch to qemu,
> > but only if those same guests are also configured for igd-passthrough,
> > which is likely a very small number of possibly affected libvirt users
> > of qemu.
> > 
> > A year or two ago I tried to configure guests for pci passthrough on xen
> > using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
> > could not convert an xl.cfg file with a configuration item
> > pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
> > So it is unlikely there are any users out there using libvirt to
> > configure xen hvm guests for igd passthrough on xen, and those are the
> > only users that could be adversely affected by the simpler patch to qemu
> > to fix this.
>
> FYI, libvirt should be using libxl to create guests, I don't think there
> is another way for libvirt to create xen guests.
>
>
>
> So overall, unfortunately the "pc" machine in QEMU isn't suitable for
> IGD passthrough, as the "xenfv" machine already has some workarounds to
> make IGD work and just needs some more.
>
> I've seen that the patch for QEMU is now reviewed, so I look at having
> it merged soonish.
>
> Thanks,
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
  2023-01-25 23:19     ` Chuck Zmudzinski
@ 2023-01-30  0:38       ` Chuck Zmudzinski
  2023-01-31 19:35         ` Chuck Zmudzinski
  0 siblings, 1 reply; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-30  0:38 UTC (permalink / raw)
  To: Anthony PERARD; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

On 1/25/23 6:19 PM, Chuck Zmudzinski wrote:
> On 1/25/2023 6:37 AM, Anthony PERARD wrote:
>> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
>> > I call attention to the commit message of the first patch which points
>> > out that using the "pc" machine and adding the xen platform device on
>> > the qemu upstream command line is not functionally equivalent to using
>> > the "xenfv" machine which automatically adds the xen platform device
>> > earlier in the guest creation process. As a result, there is a noticeable
>> > reduction in the performance of the guest during startup with the "pc"
>> > machine type even if the xen platform device is added via the qemu
>> > command line options, although eventually both Linux and Windows guests
>> > perform equally well once the guest operating system is fully loaded.
>>
>> There shouldn't be a difference between "xenfv" machine or using the
>> "pc" machine while adding the "xen-platform" device, at least with
>> regards to access to disk or network.
>>
>> The first patch of the series is using the "pc" machine without any
>> "xen-platform" device, so we can't compare startup performance based on
>> that.
>>
>> > Specifically, startup time is longer and neither the grub vga drivers
>> > nor the windows vga drivers in early startup perform as well when the
>> > xen platform device is added via the qemu command line instead of being
>> > added immediately after the other emulated i440fx pci devices when the
>> > "xenfv" machine type is used.
>>
>> The "xen-platform" device is mostly a hint to a guest that it can use
>> pv-disk and pv-network devices. I don't think it would change anything
>> with regards to graphics.
>>
>> > For example, when using the "pc" machine, which adds the xen platform
>> > device using a command line option, the Linux guest could not display
>> > the grub boot menu at the native resolution of the monitor, but with the
>> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
>> > native resolution of the monitor for testing. So improved startup
>> > performance is an advantage for the patch for qemu.
>>
>> I've just found out that when doing IGD passthrough, both machines
>> "xenfv" and "pc" are much more different than I thought ... :-(
>> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
>> turn copies some information from the real host bridge.
>> I guess this new host bridge helps when the firmware sets up the
>> graphics for grub.

Yes, it is needed - see below for the very simple patch to Qemu
upstream that fixes it for the "pc" machine!

> 
> I am surprised it works at all with the "pc" machine, that is, without the
> TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE that is used in the "xenfv"
> machine. This only seems to affect the legacy grub vga driver and the legacy
> Windows vga driver during early boot. Still, I much prefer keeping the "xenfv"
> machine for Intel IGD than this workaround of patching libxl to use the "pc"
> machine.
> 
>>
>> > I also call attention to the last point of the commit message of the
>> > second patch and the comments for reviewers section of the second patch.
>> > This approach, as opposed to fixing this in qemu upstream, makes
>> > maintaining the code in libxl__build_device_model_args_new more
>> > difficult and therefore increases the chances of problems caused by
>> > coding errors and typos for users of libxl. So that is another advantage
>> > of the patch for qemu.
>>
>> We would just need to use a different approach in libxl when generating
>> the command line. We could probably avoid duplications.

I was thinking we could also either write a test that verifies the second
patch generates the correct Qemu command line, or take the time to verify
the second patch's accuracy by hand before committing it.

>> I was hoping to
>> have a patch series for libxl that would change the machine used to start
>> guests to "pc" instead of "xenfv" for all configurations, but based on the
>> point above (the IGD-specific change to "xenfv"), I guess we can't
>> really do anything from libxl to fix IGD passthrough.
> 
> We could switch to the "pc" machine, but we would need to patch
> qemu also so the "pc" machine uses the special device the "xenfv"
> machine uses (TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE).
> ...

I just tested a very simple patch to Qemu upstream to fix the
difference between the upstream Qemu "pc" machine and the upstream
Qemu "xenfv" machine:

--- a/hw/i386/pc_piix.c	2023-01-28 13:22:15.714595514 -0500
+++ b/hw/i386/pc_piix.c	2023-01-29 18:08:34.668491593 -0500
@@ -434,6 +434,8 @@
             compat(machine); \
         } \
         pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, \
+                 xen_igd_gfx_pt_enabled() ? \
+                 TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : \
                  TYPE_I440FX_PCI_DEVICE); \
     } \
     DEFINE_PC_MACHINE(suffix, name, pc_init_##suffix, optionfn)
----- snip -------

With this simple two-line patch to upstream Qemu, we can use the "pc"
machine with igd passthru without any regressions, such as the startup
performance degradation I observed without it!

The "pc" machine maintainers for upstream Qemu would need to accept
this small patch to Qemu upstream. They might prefer this to the
other Qemu patch that reserves slot 2 for the Qemu upstream "xenfv"
machine when the guest is configured for igd passthru.

>>
>> ...
>>
>> So overall, unfortunately the "pc" machine in QEMU isn't suitable for
>> IGD passthrough, as the "xenfv" machine already has some workarounds to
>> make IGD work and just needs some more.

Well, the little patch to upstream Qemu shown above fixes the "pc" machine
with IGD, so this approach of patching libxl to use the "pc" machine
may be a viable fix for the IGD after all.

>>
>> I've seen that the patch for QEMU is now reviewed, so I'll look into
>> having it merged soonish.
>>
>> Thanks,
>>
> 

I just added the bit of information above to help you decide which
approach to use to improve the support for the igd passthru feature
with Xen and Qemu upstream. I think the test of the small patch to
Qemu to fix the "pc" machine with igd passthru makes this patch to
libxl a viable alternative to the other patch to Qemu upstream that
reserves slot 2 when using the "xenfv" machine and igd passthru.

Thanks,

Chuck


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
  2023-01-30  0:38       ` Chuck Zmudzinski
@ 2023-01-31 19:35         ` Chuck Zmudzinski
  0 siblings, 0 replies; 14+ messages in thread
From: Chuck Zmudzinski @ 2023-01-31 19:35 UTC (permalink / raw)
  To: Anthony PERARD; +Cc: xen-devel, Wei Liu, Juergen Gross, qemu-devel

On 1/29/23 7:38 PM, Chuck Zmudzinski wrote:
> On 1/25/23 6:19 PM, Chuck Zmudzinski wrote:
>> On 1/25/2023 6:37 AM, Anthony PERARD wrote:
>>> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
>>> > I call attention to the commit message of the first patch which points
>>> > out that using the "pc" machine and adding the xen platform device on
>>> > the qemu upstream command line is not functionally equivalent to using
>>> > the "xenfv" machine which automatically adds the xen platform device
>>> > earlier in the guest creation process. As a result, there is a noticeable
>>> > reduction in the performance of the guest during startup with the "pc"
>>> > machine type even if the xen platform device is added via the qemu
>>> > command line options, although eventually both Linux and Windows guests
>>> > perform equally well once the guest operating system is fully loaded.
>>>
>>> There shouldn't be a difference between "xenfv" machine or using the
>>> "pc" machine while adding the "xen-platform" device, at least with
>>> regards to access to disk or network.
>>>
>>> The first patch of the series is using the "pc" machine without any
>>> "xen-platform" device, so we can't compare startup performance based on
>>> that.
>>>
>>> > Specifically, startup time is longer and neither the grub vga drivers
>>> > nor the windows vga drivers in early startup perform as well when the
>>> > xen platform device is added via the qemu command line instead of being
>>> > added immediately after the other emulated i440fx pci devices when the
>>> > "xenfv" machine type is used.
>>>
>>> The "xen-platform" device is mostly a hint to a guest that it can use
>>> pv-disk and pv-network devices. I don't think it would change anything
>>> with regard to graphics.
>>>
>>> > For example, when using the "pc" machine, which adds the xen platform
>>> > device using a command line option, the Linux guest could not display
>>> > the grub boot menu at the native resolution of the monitor, but with the
>>> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
>>> > native resolution of the monitor for testing. So improved startup
>>> > performance is an advantage for the patch for qemu.
>>>
>>> I've just found out that when doing IGD passthrough, the "xenfv" and
>>> "pc" machines are much more different than I thought ... :-(
>>> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
>>> turn copies some information from the real host bridge.
>>> I guess this new host bridge helps when the firmware sets up the
>>> graphics for grub.
> 
> Yes, it is needed - see below for the very simple patch to Qemu
> upstream that fixes it for the "pc" machine!
> 
>> 
>> I am surprised it works at all with the "pc" machine, that is, without the
>> TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE that is used in the "xenfv"
>> machine. This only seems to affect the legacy grub vga driver and the legacy
>> Windows vga driver during early boot. Still, I much prefer keeping the "xenfv"
>> machine for Intel IGD over this workaround of patching libxl to use the "pc"
>> machine.
>> 
>>>
>>> > I also call attention to the last point of the commit message of the
>>> > second patch and the comments for reviewers section of the second patch.
>>> > This approach, as opposed to fixing this in qemu upstream, makes
>>> > maintaining the code in libxl__build_device_model_args_new more
>>> > difficult and therefore increases the chances of problems caused by
>>> > coding errors and typos for users of libxl. So that is another advantage
>>> > of the patch for qemu.
>>>
>>> We would just need to use a different approach in libxl when generating
>>> the command line. We could probably avoid duplication.
> 
> I was thinking we could also either write a test that checks the second
> patch generates the correct Qemu command line, or take the time to verify
> the generated command line by hand before committing it.
> 
>>> I was hoping to
>>> have a patch series for libxl that would change the machine type used at
>>> startup from "xenfv" to "pc" for all configurations, but based on the
>>> point above (the IGD-specific change to "xenfv"), I guess we can't
>>> really do anything from libxl to fix IGD passthrough.
>> 
>> We could switch to the "pc" machine, but we would need to patch
>> qemu also so the "pc" machine uses the special device the "xenfv"
>> machine uses (TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE).
>> ...
> 
> I just tested a very simple patch to Qemu upstream to fix the
> difference between the upstream Qemu "pc" machine and the upstream
> Qemu "xenfv" machine:
> 
> --- a/hw/i386/pc_piix.c	2023-01-28 13:22:15.714595514 -0500
> +++ b/hw/i386/pc_piix.c	2023-01-29 18:08:34.668491593 -0500
> @@ -434,6 +434,8 @@
>              compat(machine); \
>          } \
>          pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, \
> +                 xen_igd_gfx_pt_enabled() ? \
> +                 TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : \
>                   TYPE_I440FX_PCI_DEVICE); \
>      } \
>      DEFINE_PC_MACHINE(suffix, name, pc_init_##suffix, optionfn)
> ----- snip -------
> 
> With this simple two-line patch to upstream Qemu, we can use the "pc"
> machine with igd passthru without any regressions, such as the startup
> performance degradation I observed without it!

Hi Anthony,

Actually, to implement the fix for the "pc" machine and IGD in Qemu
upstream without breaking builds for configurations such as --disable-xen,
the patch to Qemu needs to add four lines instead of two (still trivial!):


--- a/hw/i386/pc_piix.c	2023-01-29 18:05:15.714595514 -0500
+++ b/hw/i386/pc_piix.c	2023-01-29 18:08:34.668491593 -0500
@@ -434,6 +434,8 @@
             compat(machine); \
         } \
         pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, \
+                 pc_xen_igd_gfx_pt_enabled() ? \
+                 TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : \
                  TYPE_I440FX_PCI_DEVICE); \
     } \
     DEFINE_PC_MACHINE(suffix, name, pc_init_##suffix, optionfn)
--- a/include/sysemu/xen.h	2023-01-20 08:17:55.000000000 -0500
+++ b/include/sysemu/xen.h	2023-01-30 00:18:29.276886734 -0500
@@ -23,6 +23,7 @@
 extern bool xen_allowed;
 
 #define xen_enabled()           (xen_allowed)
+#define pc_xen_igd_gfx_pt_enabled()    xen_igd_gfx_pt_enabled()
 
 #ifndef CONFIG_USER_ONLY
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
@@ -33,6 +34,7 @@
 #else /* !CONFIG_XEN_IS_POSSIBLE */
 
 #define xen_enabled() 0
+#define pc_xen_igd_gfx_pt_enabled() 0
 #ifndef CONFIG_USER_ONLY
 static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
------- snip -------

Would you support this patch to Qemu if I formally submitted it to
Qemu as a replacement for the current more complicated patch to Qemu
that I proposed to reserve slot 2 for the IGD?

Thanks,

Chuck

> 
> The "pc" machine maintainers for upstream Qemu would need to accept
> this small patch to Qemu upstream. They might prefer this to the
> other Qemu patch that reserves slot 2 for the Qemu upstream "xenfv"
> machine when the guest is configured for igd passthru.
> 
>>>
>>> ...
>>>
>>> So overall, unfortunately the "pc" machine in QEMU isn't suitable for
>>> IGD passthrough, as the "xenfv" machine already has some workarounds to
>>> make IGD work and just needs some more.
> 
> Well, the little patch to upstream Qemu shown above fixes the "pc" machine
> with IGD, so this approach of patching libxl to use the "pc" machine
> may be a viable fix for the IGD after all.
> 
>>>
>>> I've seen that the patch for QEMU is now reviewed, so I'll look into
>>> having it merged soonish.
>>>
>>> Thanks,
>>>
>> 
> 
> I just added the bit of information above to help you decide which
> approach to use to improve the support for the igd passthru feature
> with Xen and Qemu upstream. I think the test of the small patch to
> Qemu to fix the "pc" machine with igd passthru makes this patch to
> libxl a viable alternative to the other patch to Qemu upstream that
> reserves slot 2 when using the "xenfv" machine and igd passthru.
> 
> Thanks,
> 
> Chuck





Thread overview: 14+ messages
     [not found] <cover.1673300848.git.brchuckz.ref@aol.com>
2023-01-09 23:08 ` [XEN PATCH 0/3] Configure qemu upstream correctly by default for igd-passthru Chuck Zmudzinski
2023-01-10  7:32   ` [XEN PATCH v2 " Chuck Zmudzinski
2023-01-09 23:08   ` [XEN PATCH 1/3] libxl/dm: Use "pc" machine type for Intel IGD passthrough Chuck Zmudzinski
2023-01-10  7:32     ` [XEN PATCH v2 " Chuck Zmudzinski
2023-01-09 23:08   ` [XEN PATCH 2/3] libxl/dm: Manage pci slot assignment " Chuck Zmudzinski
2023-01-10  7:32     ` [XEN PATCH v2 " Chuck Zmudzinski
2023-01-09 23:08   ` [XEN PATCH 3/3] libxl/dm: Assign slot 2 by default " Chuck Zmudzinski
2023-01-10  7:32     ` [XEN PATCH v2 " Chuck Zmudzinski
2023-01-25 11:37   ` [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru Anthony PERARD via
2023-01-25 11:37     ` Anthony PERARD
2023-01-25 20:20     ` Chuck Zmudzinski
2023-01-25 23:19     ` Chuck Zmudzinski
2023-01-30  0:38       ` Chuck Zmudzinski
2023-01-31 19:35         ` Chuck Zmudzinski
