* disable qemu PCI devices in HVM domains
@ 2008-12-11  3:10 James Harper
  2008-12-11  9:06 ` Keir Fraser
  0 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-11  3:10 UTC (permalink / raw)
  To: xen-devel

I would like to implement a mechanism to disable PCI devices by writing
to a certain IO port, to prevent the problem of device duplication when
PV-on-HVM drivers are used.

Disabling drivers in the GPLPV drivers is quite troublesome, and while
I think 3.3 has eliminated the corruption problem (with caching disabled,
Windows appears to understand that it is the same underlying device), I'd
still rather that the qemu PCI devices weren't there at all.

So if the PV drivers loaded early in the boot process, they would write
to a designated IO port and qemu would then tell the PCI devices to not
work anymore. Because I would do it at the driver level rather than at
the device level, it should happen even before windows has loaded the
pci.sys driver and started probing the PCI space.

Does that sound acceptable?

Thanks

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-11  3:10 disable qemu PCI devices in HVM domains James Harper
@ 2008-12-11  9:06 ` Keir Fraser
  2008-12-11  9:08   ` James Harper
  2008-12-11  9:28   ` James Harper
  0 siblings, 2 replies; 44+ messages in thread
From: Keir Fraser @ 2008-12-11  9:06 UTC (permalink / raw)
  To: James Harper, xen-devel; +Cc: Ian Jackson

On 11/12/2008 03:10, "James Harper" <james.harper@bendigoit.com.au> wrote:

> So if the PV drivers loaded early in the boot process, they would write
> to a designated IO port and qemu would then tell the PCI devices to not
> work anymore. Because I would do it at the driver level rather than at
> the device level, it should happen even before windows has loaded the
> pci.sys driver and started probing the PCI space.
> 
> Does that sound acceptable?

It sounds good in principle. Do you know how you would hack it into qemu?
Ian Jackson may have views on that.

 -- Keir


* RE: disable qemu PCI devices in HVM domains
  2008-12-11  9:06 ` Keir Fraser
@ 2008-12-11  9:08   ` James Harper
  2008-12-11  9:28   ` James Harper
  1 sibling, 0 replies; 44+ messages in thread
From: James Harper @ 2008-12-11  9:08 UTC (permalink / raw)
  To: Keir Fraser, xen-devel; +Cc: Ian Jackson

> On 11/12/2008 03:10, "James Harper" <james.harper@bendigoit.com.au> wrote:
> 
> > So if the PV drivers loaded early in the boot process, they would write
> > to a designated IO port and qemu would then tell the PCI devices to not
> > work anymore. Because I would do it at the driver level rather than at
> > the device level, it should happen even before windows has loaded the
> > pci.sys driver and started probing the PCI space.
> >
> > Does that sound acceptable?
> 
> It sounds good in principle. Do you know how you would hack it into qemu?
> Ian Jackson may have views on that.
> 

Since that email I have already implemented it and it seems to work
well. I'll email details shortly for discussion.

James


* RE: disable qemu PCI devices in HVM domains
  2008-12-11  9:06 ` Keir Fraser
  2008-12-11  9:08   ` James Harper
@ 2008-12-11  9:28   ` James Harper
  2008-12-11 17:57     ` Ian Jackson
  1 sibling, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-11  9:28 UTC (permalink / raw)
  To: Keir Fraser, xen-devel; +Cc: Ian Jackson

[-- Attachment #1: Type: text/plain, Size: 2659 bytes --]

> On 11/12/2008 03:10, "James Harper" <james.harper@bendigoit.com.au> wrote:
> 
> > So if the PV drivers loaded early in the boot process, they would write
> > to a designated IO port and qemu would then tell the PCI devices to not
> > work anymore. Because I would do it at the driver level rather than at
> > the device level, it should happen even before windows has loaded the
> > pci.sys driver and started probing the PCI space.
> >
> > Does that sound acceptable?
> 
> It sounds good in principle. Do you know how you would hack it into qemu?
> Ian Jackson may have views on that.
> 

The attached patch is what I have developed so far. There is a logging
facility in there too; XenSource has something similar, but mine is
better :) (with the exception of rate limiting, which mine doesn't do
yet).

Basically, I use ioports 0x10-0x1F for communication. Reads from 0x10-0x12
return 'X', 'E', and 'N' respectively, and 0x13 returns a version number.
This allows detection of the mechanism (otherwise I have to fall back
to the previous method of disabling the ide and network drivers in
Windows, which is unreliable and messy, and the WHQL guys will laugh
at me :)

Writes to ports 0x10-0x17 record data in logging buffers (one per port),
printed on a newline. I wanted a method of logging with the barest
minimum of overhead. Synchronising access to a single ioport across
multiple Windows drivers would require setup, and I needed to be able to
log before that setup had happened. So what I do is raise the IRQL to
HIGH_LEVEL (disabling interrupts) and then write to port 0x10+cpu.
Without that I was getting log messages mixed up with each other during
high rates of logging, exactly where I was looking for races etc. This
way it's clean and I even get to know which CPU is doing the logging.
It's only really useful for debugging, so I think the performance hit is
acceptable. If you don't want to include that part of it then that's
fine.

A write to port 0x18 with a non-zero value will set the global variable
xen_pci_disable_devices. A write of 0 will unset it, but I'm not sure
where that would be useful.

I have added a disable_flag variable to the pci dev structure. This is
set to 1 for devices that should be disabled when PV drivers are in use
(ide and rtl8139, possibly others?).

In pci_data_read, if xen_pci_disable_devices and pci_dev->disable_flag
are both set, all 1's are returned, just as when no device is present at
all. If this is done before Windows loads pci.sys and scans for devices,
it's as if the device isn't there.

Let me know what you think.

James


[-- Attachment #2: qemu-pci-disable.patch --]
[-- Type: application/octet-stream, Size: 5187 bytes --]

diff --git a/hw/ide.c b/hw/ide.c
index dae6e7f..b7b6871 100644
--- a/hw/ide.c
+++ b/hw/ide.c
@@ -3419,6 +3419,8 @@ void pci_piix3_ide_init(PCIBus *bus, BlockDriverState **hd_table, int devfn,
                                            sizeof(PCIIDEState),
                                            devfn,
                                            NULL, NULL);
+    d->dev.disable_flag = 1;
+
     d->type = IDE_TYPE_PIIX3;
 
     pci_conf = d->dev.config;
diff --git a/hw/pci.c b/hw/pci.c
index 1de68fd..c55c371 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -26,6 +26,8 @@
 #include "console.h"
 #include "net.h"
 
+uint32_t xen_pci_disable_devices;
+
 //#define DEBUG_PCI
 
 struct PCIBus {
@@ -160,6 +162,7 @@ PCIDevice *pci_register_device(PCIBus *bus, const char *name,
     pci_dev->config_write = config_write;
     bus->devices[devfn] = pci_dev;
     pci_dev->irq = qemu_allocate_irqs(pci_set_irq, pci_dev, 4);
+    pci_dev->disable_flag = 0;
     return pci_dev;
 }
 
@@ -465,6 +468,12 @@ uint32_t pci_data_read(void *opaque, uint32_t addr, int len)
         }
         goto the_end;
     }
+    if (pci_dev->disable_flag && xen_pci_disable_devices)
+    {
+        printf("read disabled device\n");
+	goto fail;
+    }
+
     config_addr = addr & 0xff;
     val = pci_dev->config_read(pci_dev, config_addr, len);
 #if defined(DEBUG_PCI)
diff --git a/hw/pci.h b/hw/pci.h
index 4adc4d7..c57841a 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -61,6 +61,9 @@ struct PCIDevice {
 
     /* Current IRQ levels.  Used internally by the generic PCI code.  */
     int irq_state[4];
+
+    /* when set, this device will be disabled when xen_pci_disable_devices is set */
+    int disable_flag;
 };
 
 extern char direct_pci_str[];
diff --git a/hw/rtl8139.c b/hw/rtl8139.c
index 9ae76e6..d5f0774 100644
--- a/hw/rtl8139.c
+++ b/hw/rtl8139.c
@@ -536,7 +536,7 @@ static void prom9346_decode_command(EEprom9346 *eeprom, uint8_t command)
                     DEBUG_PRINT(("RTL8139: eeprom begin write all\n"));
                     break;
                 case Chip9346_op_write_disable:
-                    DEBUG_PRINT(("RTL8139: eeprom write disabled\n"));
+                    DEBUG_PRINT(("RTL8139: eeprom write disable\n"));
                     break;
             }
             break;
@@ -3429,6 +3429,7 @@ void pci_rtl8139_init(PCIBus *bus, NICInfo *nd, int devfn)
                                               "RTL8139", sizeof(PCIRTL8139State),
                                               devfn,
                                               NULL, NULL);
+    d->dev.disable_flag = 1; /* should be disabled by PV on HVM drivers */
     pci_conf = d->dev.config;
     pci_conf[0x00] = 0xec; /* Realtek 8139 */
     pci_conf[0x01] = 0x10;
diff --git a/hw/xen_platform.c b/hw/xen_platform.c
index 430e603..11acac3 100644
--- a/hw/xen_platform.c
+++ b/hw/xen_platform.c
@@ -30,6 +30,7 @@
 #include <xenguest.h>
 
 extern FILE *logfile;
+extern uint32_t xen_pci_disable_devices;
 
 #define PFFLAG_ROM_LOCK 1 /* Sets whether ROM memory area is RW or RO */
 
@@ -39,6 +40,59 @@ typedef struct PCIXenPlatformState
   uint8_t    platform_flags;
 } PCIXenPlatformState;
 
+static uint32_t platform_global_ioport_readb(void *opaque, uint32_t addr)
+{
+    switch (addr & 0x0f)
+    {
+    case 0:
+	return 'X';
+    case 1:
+	return 'E';
+    case 2:
+	return 'N';
+    case 3:
+	return 1; /* version */
+    default:
+	return 0;
+    }
+}
+
+static void platform_global_ioport_writeb(void *opaque, uint32_t addr, uint32_t val)
+{
+    static char buffer[8][512];
+    static int bufpos[8] = {0};
+    int cpu;
+
+    switch (addr & 0x0f)
+    {
+    case 0:
+    case 1:
+    case 2:
+    case 3:
+    case 4:
+    case 5:
+    case 6:
+    case 7:
+	cpu = addr & 0x07;
+    	if (bufpos[cpu] == 511 || val == '\n')
+    	{
+            buffer[cpu][bufpos[cpu]] = 0;
+            printf("%02d: %s\n", cpu, buffer[cpu]);
+            bufpos[cpu] = 0;
+        }
+        if (val == '\n')
+	    return;
+        buffer[cpu][bufpos[cpu]++] = val;
+	break;
+    case 8:
+	xen_pci_disable_devices = val;
+	printf("set xen_pci_disable_devices = %d\n", val);
+	break;
+    default:
+	break;
+    }
+}
+
 static uint32_t xen_platform_ioport_readb(void *opaque, uint32_t addr)
 {
     PCIXenPlatformState *s = opaque;
@@ -208,6 +262,9 @@ void pci_xen_platform_init(PCIBus *bus)
     pci_register_io_region(&d->pci_dev, 0, 0x100,
                            PCI_ADDRESS_SPACE_IO, platform_ioport_map);
 
+    register_ioport_write(0x10, 0x10, 1, platform_global_ioport_writeb, NULL);
+    register_ioport_read(0x10, 0x10, 1, platform_global_ioport_readb, NULL);
+
     /* reserve 16MB mmio address for share memory*/
     pci_register_io_region(&d->pci_dev, 1, 0x1000000,
                            PCI_ADDRESS_SPACE_MEM_PREFETCH, platform_mmio_map);
diff --git a/vl.c b/vl.c
index e50a02d..1366a37 100644
--- a/vl.c
+++ b/vl.c
@@ -5401,6 +5401,8 @@ static int drive_init(struct drive_opt *arg, int snapshot,
            approximation.  */
     case IF_FLOPPY:
         bdrv_set_type_hint(bdrv, BDRV_TYPE_FLOPPY);
+	if (!drv)
+	    drv = &bdrv_raw;
         break;
     case IF_PFLASH:
     case IF_MTD:

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* RE: disable qemu PCI devices in HVM domains
  2008-12-11  9:28   ` James Harper
@ 2008-12-11 17:57     ` Ian Jackson
  2008-12-11 22:06       ` James Harper
  2008-12-13  9:33       ` James Harper
  0 siblings, 2 replies; 44+ messages in thread
From: Ian Jackson @ 2008-12-11 17:57 UTC (permalink / raw)
  To: James Harper; +Cc: xen-devel, Keir Fraser

James Harper writes ("RE: [Xen-devel] disable qemu PCI devices in HVM domains"):
> Basically, I use ioports 0x10-0x1F for communication.

You seem to mean these absolute addresses in IO space?  Rather than
offsets in the Xen platform device, for example?  And your driver is
going to write, blind, into these locations?  Surely that risks
causing problems if your driver is run in a situation where those
ports are used for something else?

I like the principle of disabling the drivers via an instruction to
qemu rather than by attempting to wrestle with the Windows driver
machinery to try to hide the devices.  But couldn't we simulate a PCI
unplug or a medium change or something instead ?  Then you could do it
later in the boot after your own drivers have properly bound.

And presumably the instruction to do so should be given via the Xen
PCI platform device?

Also, what if the user wants (for some reason) to use PV drivers for
disk but emulated-real-hardware drivers for network, or something ?

Ian.


* RE: disable qemu PCI devices in HVM domains
  2008-12-11 17:57     ` Ian Jackson
@ 2008-12-11 22:06       ` James Harper
  2008-12-13  9:33       ` James Harper
  1 sibling, 0 replies; 44+ messages in thread
From: James Harper @ 2008-12-11 22:06 UTC (permalink / raw)
  To: Ian Jackson; +Cc: xen-devel, Keir Fraser

> 
> James Harper writes ("RE: [Xen-devel] disable qemu PCI devices in HVM domains"):
> > Basically, I use ioports 0x10-0x1F for communication.
> 
> You seem to mean these absolute addresses in IO space?  Rather than
> offsets in the Xen platform device, for example?

Yes. Under the current scheme I need to disable the other PCI devices
before the platform pci device is enumerated.

> And your driver is
> going to write, blind, into these locations ?

No. A read is done first to check for some magic bytes.

> Surely that risks
> causing problems if your driver is run in a situation where those
> ports are used for something else ?

Perhaps, but under Xen+qemu we have a tightly controlled situation where
that is unlikely, I would have thought. Perhaps you have a situation in
mind where this would be a problem? Would a passed-through pci device
ever try and use these ports? If I had allocated them in qemu then
wouldn't pci passthrough avoid them?

> I like the principle of disabling the drivers via an instruction to
> qemu rather than by attempting to wrestle with the Windows driver
> machinery to try to hide the devices.  But couldn't we simulate a PCI
> unplug or a medium change or something instead ? Then you could do it
> later in the boot after your own drivers have properly bound.
> 
> And presumably the instruction to do so should be given via the Xen
> PCI platform device?
> 

Possibly. A hot unplug could work too, but it would need to happen after
Windows had enumerated all the PCI devices (if done from platform pci)
but before Windows had enumerated and started using the ide devices.
Remember that Windows is closed source, and is therefore a huge pita
when you're trying to do something they hadn't thought of, like making
something happen at a very specific point during boot.

> Also, what if the user wants (for some reason) to use PV drivers for
> disk but emulated-real-hardware drivers for network, or something ?

I could allow for that - just set up the disable flag as masks instead.
The previous 'xenhide' windows driver scheme allowed that, but it was
seldom used. With the exception of a user trying to avoid a bug in the
PV drivers (in which case time would be better spent fixing the bug than
implementing a mechanism to avoid it), the only time it would be useful
is when a CDROM is used - blkback doesn't have a mechanism for ejecting
physical CDROM devices. Unfortunately the latter would be even harder,
as the CDROM is not a PCI device itself, and at the PCI device level we
don't know what is going to be attached...

James


* RE: disable qemu PCI devices in HVM domains
  2008-12-11 17:57     ` Ian Jackson
  2008-12-11 22:06       ` James Harper
@ 2008-12-13  9:33       ` James Harper
  2008-12-13  9:55         ` Keir Fraser
  1 sibling, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-13  9:33 UTC (permalink / raw)
  To: Ian Jackson; +Cc: xen-devel, Keir Fraser

> 
> I like the principle of disabling the drivers via an instruction to
> qemu rather than by attempting to wrestle with the Windows driver
> machinery to try to hide the devices.  But couldn't we simulate a PCI
> unplug or a medium change or something instead ?  Then you could do it
> later in the boot after your own drivers have properly bound.
> 

Is the 'pci unplug' as simple as making a call somewhere like
pci_unplug(id of ide adapter)? I'm concerned that Windows may not like
this.

James


* Re: disable qemu PCI devices in HVM domains
  2008-12-13  9:33       ` James Harper
@ 2008-12-13  9:55         ` Keir Fraser
  2008-12-13 10:05           ` James Harper
  2008-12-15 17:10           ` Steven Smith
  0 siblings, 2 replies; 44+ messages in thread
From: Keir Fraser @ 2008-12-13  9:55 UTC (permalink / raw)
  To: James Harper, Ian Jackson; +Cc: Steven Smith, xen-devel

On 13/12/2008 09:33, "James Harper" <james.harper@bendigoit.com.au> wrote:

>> I like the principle of disabling the drivers via an instruction to
>> qemu rather than by attempting to wrestle with the Windows driver
>> machinery to try to hide the devices.  But couldn't we simulate a PCI
>> unplug or a medium change or something instead ?  Then you could do it
>> later in the boot after your own drivers have properly bound.
>> 
> 
> Is the 'pci unplug' as simple as making a call somewhere like
> pci_unplug(id of ide adapter)? I'm concerned that Windows may not like
> this.

My own opinion is that the ioports are fine, but they should be offsets from
the xen-platform-pci device's ioport bar. Also we ought to document the
ports in xen-platform-pci's source file, as it's going to start getting
messy in there.

I'm not sure if the approach taken by the Citrix drivers could be at all
useful. Cc'ing Steven Smith in case he has any comments to make.

 -- Keir


* RE: disable qemu PCI devices in HVM domains
  2008-12-13  9:55         ` Keir Fraser
@ 2008-12-13 10:05           ` James Harper
  2008-12-13 11:13             ` Keir Fraser
  2008-12-15 17:10           ` Steven Smith
  1 sibling, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-13 10:05 UTC (permalink / raw)
  To: Keir Fraser, Ian Jackson; +Cc: Steven Smith, xen-devel

> > Is the 'pci unplug' as simple as making a call somewhere like
> > pci_unplug(id of ide adapter)? I'm concerned that Windows may not like
> > this.
> 
> My own opinion is that the ioports are fine, but they should be offsets
> from the xen-platform-pci device's ioport bar. Also we ought to document
> the ports in xen-platform-pci's source file, as it's going to start
> getting messy in there.
>
> I'm not sure if the approach taken by the Citrix drivers could be at all
> useful. Cc'ing Steven Smith in case he has any comments to make.

Citrix use fixed ports for logging. The problem with using
xen-platform-pci's ioport range is that I can only get to them once the
Windows PCI bus driver has enumerated all the devices, and it's too late
by then as Windows has already enumerated PDOs for all the devices. The
only way around this would be to send an 'unplug' event to xen to remove
the PCI devices, but is the standard PCI bus emulated by qemu
hotplug-capable? And would Windows even cope with a hotplug event that
early in the boot process?

A completely different way of solving this (as per a previous email) is
for qemu to (optionally) emulate things at the int13 level rather than
at the PCI device level. That way there would be no PCI device at all
and Windows would only ever see my drivers. We can already do this for
net, but obviously that isn't required for booting (normally). So
/dev/hd[a-d] would be PCI devices and anything else would be hooked into
int13.

If only Windows were more like Linux...

James


* Re: disable qemu PCI devices in HVM domains
  2008-12-13 10:05           ` James Harper
@ 2008-12-13 11:13             ` Keir Fraser
  2008-12-13 11:31               ` James Harper
  0 siblings, 1 reply; 44+ messages in thread
From: Keir Fraser @ 2008-12-13 11:13 UTC (permalink / raw)
  To: James Harper, Ian Jackson; +Cc: Steven Smith, xen-devel

On 13/12/2008 10:05, "James Harper" <james.harper@bendigoit.com.au> wrote:

> Citrix use fixed ports for logging. The problem with using
> xen-platform-pci's ioport range is that I can only get to them once the
> windows pci bus driver has enumerated all the devices, and it's too late
> by then as Windows has already enumerated PDO's for all the devices. The
> only way around this would be to send an 'unplug' event to xen to remove
> the PCI devices, but is the standard PCI bus emulated by qemu hotplug
> capable? And would windows even cope with a hotplug event that early in
> the boot process?

Does Windows remap the BAR? If not, you could go probe the PCI space
yourself. It's really easy to do.

 -- Keir


* RE: disable qemu PCI devices in HVM domains
  2008-12-13 11:13             ` Keir Fraser
@ 2008-12-13 11:31               ` James Harper
  0 siblings, 0 replies; 44+ messages in thread
From: James Harper @ 2008-12-13 11:31 UTC (permalink / raw)
  To: Keir Fraser, Ian Jackson; +Cc: Steven Smith, xen-devel

> On 13/12/2008 10:05, "James Harper" <james.harper@bendigoit.com.au> wrote:
> 
> > Citrix use fixed ports for logging. The problem with using
> > xen-platform-pci's ioport range is that I can only get to them once the
> > windows pci bus driver has enumerated all the devices, and it's too late
> > by then as Windows has already enumerated PDO's for all the devices. The
> > only way around this would be to send an 'unplug' event to xen to remove
> > the PCI devices, but is the standard PCI bus emulated by qemu hotplug
> > capable? And would windows even cope with a hotplug event that early in
> > the boot process?
> 
> Does Windows remap the BAR?

I don't know.

> If not, you could go probe the PCI space yourself. It's really easy
> to do.

Is it just a matter of probing a few ioports? Doing so would be 'against
the rules' as far as Windows is concerned, although it would work.

James


* Re: disable qemu PCI devices in HVM domains
  2008-12-13  9:55         ` Keir Fraser
  2008-12-13 10:05           ` James Harper
@ 2008-12-15 17:10           ` Steven Smith
  2008-12-15 23:58             ` James Harper
                               ` (5 more replies)
  1 sibling, 6 replies; 44+ messages in thread
From: Steven Smith @ 2008-12-15 17:10 UTC (permalink / raw)
  To: Keir Fraser; +Cc: James Harper, Ian Jackson, xen-devel


[-- Attachment #1.1.1: Type: text/plain, Size: 4405 bytes --]

> >> I like the principle of disabling the drivers via an instruction to
> >> qemu rather than by attempting to wrestle with the Windows driver
> >> machinery to try to hide the devices.  But couldn't we simulate a PCI
> >> unplug or a medium change or something instead ?  Then you could do it
> >> later in the boot after your own drivers have properly bound.
> >> 
> > 
> > Is the 'pci unplug' as simple as making a call somewhere like
> > pci_unplug(id of ide adapter)? I'm concerned that Windows may not like
> > this.
> My own opinion is that the ioports are fine, but they should be offsets from
> the xen-platform-pci device's ioport bar. Also we ought to document the
> ports in xen-platform-pci's source file, as it's going to start getting
> messy in there.
> 
> I'm not sure if the approach taken by the Citrix drivers could be at all
> useful. Cc'ing Steven Smith in case he has any comments to make.
I can't see any reason why the approach we take in our closed-source
drivers wouldn't work here as well.  I've attached the appropriate
patches from our product qemu patchqueue, tidied up and stripped of
the most obviously XenServer-specific bits, and made to apply to
current ioemu-remote.


The protocol covers three basic things:

-- Disconnecting emulated devices.
-- Getting log messages out of the drivers and into dom0.
-- Allowing dom0 to block the loading of specific drivers.  This is
   intended as a backwards-compatibility thing: if we discover a bug
   in some old version of the drivers, then rather than working around
   it in Xen, we have the option of just making those drivers fall
   back to emulated mode.

The current protocol works like this (from the point of view of
drivers):

1) When the drivers first come up, they check whether the unplug logic
   is available by reading a two-byte magic number from IO port 0x10.
   This should be 0x49d2.  If the magic number doesn't match, the
   drivers don't do anything.

2) The drivers read a one-byte protocol version from IO port 0x12.  If
   this is 0, skip to 6.

3) The drivers write a two-byte product number to IO port 0x12.  At
   the moment, the only drivers using this protocol are our
   closed-source ones, which use product number 1.

4) The drivers write a four-byte build number to IO port 0x10.

5) The drivers check the magic number by reading two bytes from 0x10
   again.  If it's changed from 0x49d2, the drivers are blacklisted
   and should not load.

6) The drivers write a two-byte bitmask of devices to unplug to IO
   port 0x10.  The defined fields are:

   1 -- All IDE disks (not including CD drives)
   2 -- All emulated NICs
   4 -- All IDE disks except for the primary master (not including CD
	drives)

   The relevant emulated devices then disappear from the relevant
   buses.  For most guest operating systems, you want to do this
   before device enumeration happens.

...) Once the drivers have checked the magic number (and the
     blacklist, if appropriate), they can send log messages to qemu
     which will be logged to wherever qemu's logs go
     (/var/log/xen/qemu-dm.log on normal Xen, dom0 syslog on
     XenServer).  These messages are written to IO port 0x12 a byte at
     a time, and are terminated by newlines.  There's a fairly
     aggressive rate limiter on these messages, so they shouldn't be
     used for anything even vaguely high-volume, but they're rather
     useful for debugging and support.

This isn't exactly a pretty protocol, but it does solve the problem.


The blacklist is, from qemu's point of view, handled mostly through
xenstore.  A driver version is considered to be blacklisted if
/mh/driver-blacklist/{product_name}/{build_number} exists and is
readable, where {build_number} is the build number from step 4 as a
decimal number.  {product_name} is a string corresponding to the
product number in step 3; at present, the only product number is 1,
which has a product_name of xensource-windows.


A previous version of the protocol put the IO ports on the PCI
platform device.  Unfortunately, that makes it difficult to get at
them before PCI bus enumeration happens, which complicates removal of
the emulated NICs.  It is possible to work around this, but (at least
on Windows) it's complicated and messy, and generally best avoided.

Steven.

[-- Attachment #1.1.2: series --]
[-- Type: text/plain, Size: 95 bytes --]

support-hvm-pv-drivers-ioemu-support
hvm-log-to-dom0
rate_limit_guest_syslog
pv-driver-version

[-- Attachment #1.1.3: support-hvm-pv-drivers-ioemu-support --]
[-- Type: text/plain, Size: 12357 bytes --]

Index: ioemu-remote/hw/ide.c
===================================================================
--- ioemu-remote.orig/hw/ide.c	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/hw/ide.c	2008-12-15 16:02:35.000000000 +0000
@@ -484,6 +484,7 @@
     int type; /* see IDE_TYPE_xxx */
 } PCIIDEState;
 
+static PCIIDEState *principal_ide_controller;
 
 #if defined(__ia64__)
 #include <xen/hvm/ioreq.h>
@@ -2778,6 +2779,47 @@
     s->media_changed = 0;
 }
 
+/* Unplug all of the IDE hard disks, starting at index @start in the
+   table. */
+static void _ide_unplug_harddisks(int start)
+{
+    IDEState *s;
+    int i, j;
+
+    if (!principal_ide_controller) {
+        fprintf(stderr, "No principal controller?\n");
+        return;
+    }
+    for (i = start; i < 4; i++) {
+        s = principal_ide_controller->ide_if + i;
+        if (!s->bs)
+            continue; /* drive not present */
+        if (s->is_cdrom)
+            continue; /* cdrom */
+        /* Is a hard disk, unplug it. */
+        for (j = 0; j < nb_drives; j++)
+            if (drives_table[j].bdrv == s->bs)
+                drives_table[j].bdrv = NULL;
+        bdrv_close(s->bs);
+        s->bs = NULL;
+        ide_reset(s);
+    }
+}
+
+/* Unplug all hard disks except for the primary master (which will
+   almost always be the boot device). */
+void ide_unplug_aux_harddisks(void)
+{
+    _ide_unplug_harddisks(1);
+}
+
+/* Unplug all hard disks, including the boot device. */
+void ide_unplug_harddisks(void)
+{
+    _ide_unplug_harddisks(0);
+}
+
+
 struct partition {
 	uint8_t boot_ind;		/* 0x80 - active */
 	uint8_t head;		/* starting head */
@@ -3290,6 +3332,9 @@
                                            sizeof(PCIIDEState),
                                            -1,
                                            NULL, NULL);
+    if (principal_ide_controller)
+	abort();
+    principal_ide_controller = d;
     d->type = IDE_TYPE_CMD646;
     pci_conf = d->dev.config;
     pci_conf[0x00] = 0x95; // CMD646
@@ -3419,6 +3464,9 @@
                                            sizeof(PCIIDEState),
                                            devfn,
                                            NULL, NULL);
+    if (principal_ide_controller)
+	abort();
+    principal_ide_controller = d;
     d->type = IDE_TYPE_PIIX3;
 
     pci_conf = d->dev.config;
Index: ioemu-remote/hw/pci.c
===================================================================
--- ioemu-remote.orig/hw/pci.c	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/hw/pci.c	2008-12-15 16:02:22.000000000 +0000
@@ -26,6 +26,9 @@
 #include "console.h"
 #include "net.h"
 
+#include "exec-all.h"
+#include "qemu-xen.h"
+
 //#define DEBUG_PCI
 
 struct PCIBus {
@@ -648,6 +651,46 @@
     }
 }
 
+void pci_unplug_netifs(void)
+{
+    PCIBus *bus;
+    PCIDevice *dev;
+    PCIIORegion *region;
+    int x;
+    int i;
+
+    /* We only support one PCI bus */
+    for (bus = first_bus; bus; bus = NULL) {
+       for (x = 0; x < 256; x++) {
+           dev = bus->devices[x];
+           if (dev &&
+               dev->config[0xa] == 0 &&
+               dev->config[0xb] == 2) {
+               /* Found a netif.  Remove it from the bus.  Note that
+                  we don't free it here, since there could still be
+                  references to it floating around.  There are only
+                  ever one or two structures leaked, and it's not
+                  worth finding them all. */
+               bus->devices[x] = NULL;
+               for (i = 0; i < PCI_NUM_REGIONS; i++) {
+                   region = &dev->io_regions[i];
+                   if (region->addr == (uint32_t)-1 ||
+                       region->size == 0)
+                       continue;
+                   fprintf(logfile, "region type %d at [%x,%x).\n",
+                           region->type, region->addr,
+                           region->addr+region->size);
+                   if (region->type == PCI_ADDRESS_SPACE_IO) {
+                       isa_unassign_ioport(region->addr, region->size);
+                   } else if (region->type == PCI_ADDRESS_SPACE_MEM) {
+                       unregister_iomem(region->addr);
+                   }
+               }
+           }
+       }
+    }
+}
+
 typedef struct {
     PCIDevice dev;
     PCIBus *bus;
Index: ioemu-remote/qemu-xen.h
===================================================================
--- ioemu-remote.orig/qemu-xen.h	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/qemu-xen.h	2008-12-15 16:02:22.000000000 +0000
@@ -26,8 +26,11 @@
 void xen_vga_vram_map(uint64_t vram_addr, int copy);
 #endif
 
-
+void ide_unplug_harddisks(void);
+void net_tap_shutdown_all(void);
+void pci_unplug_netifs(void);
 void destroy_hvm_domain(void);
+void unregister_iomem(target_phys_addr_t start);
 
 #ifdef __ia64__
 static inline void xc_domain_shutdown_hook(int xc_handle, uint32_t domid)
Index: ioemu-remote/vl.c
===================================================================
--- ioemu-remote.orig/vl.c	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/vl.c	2008-12-15 16:02:22.000000000 +0000
@@ -262,6 +262,20 @@
 
 #include "xen-vl-extra.c"
 
+typedef struct IOHandlerRecord {
+    int fd;
+    IOCanRWHandler *fd_read_poll;
+    IOHandler *fd_read;
+    IOHandler *fd_write;
+    int deleted;
+    void *opaque;
+    /* temporary data */
+    struct pollfd *ufd;
+    struct IOHandlerRecord *next;
+} IOHandlerRecord;
+
+static IOHandlerRecord *first_io_handler;
+
 /***********************************************************/
 /* x86 ISA bus support */
 
@@ -4055,6 +4069,7 @@
 typedef struct TAPState {
     VLANClientState *vc;
     int fd;
+    struct TAPState *next;
     char down_script[1024];
     char script_arg[1024];
 } TAPState;
@@ -4092,6 +4107,34 @@
     }
 }
 
+static TAPState *head_net_tap;
+
+void net_tap_shutdown_all(void)
+{
+    struct IOHandlerRecord **pioh, *ioh;
+
+    while (head_net_tap) {
+       pioh = &first_io_handler;
+       for (;;) {
+           ioh = *pioh;
+           if (ioh == NULL)
+               break;
+           if (ioh->fd == head_net_tap->fd) {
+               *pioh = ioh->next;
+               qemu_free(ioh);
+               break;
+           }
+           pioh = &ioh->next;
+       }
+       if (!ioh)
+           fprintf(stderr,
+                   "warning: can't find iohandler for %d to close it properly.\n",
+                   head_net_tap->fd);
+       close(head_net_tap->fd);
+       head_net_tap = head_net_tap->next;
+    }
+}
+
 /* fd support */
 
 static TAPState *net_tap_fd_init(VLANState *vlan, int fd)
@@ -4103,6 +4146,8 @@
         return NULL;
     s->fd = fd;
     s->vc = qemu_new_vlan_client(vlan, tap_receive, NULL, s);
+    s->next = head_net_tap;
+    head_net_tap = s;
     qemu_set_fd_handler(s->fd, tap_send, NULL, s);
     snprintf(s->vc->info_str, sizeof(s->vc->info_str), "tap: fd=%d", fd);
     return s;
@@ -5666,20 +5711,6 @@
 
 #define MAX_IO_HANDLERS 64
 
-typedef struct IOHandlerRecord {
-    int fd;
-    IOCanRWHandler *fd_read_poll;
-    IOHandler *fd_read;
-    IOHandler *fd_write;
-    int deleted;
-    void *opaque;
-    /* temporary data */
-    struct pollfd *ufd;
-    struct IOHandlerRecord *next;
-} IOHandlerRecord;
-
-static IOHandlerRecord *first_io_handler;
-
 /* XXX: fd_read_poll should be suppressed, but an API change is
    necessary in the character devices to suppress fd_can_read(). */
 int qemu_set_fd_handler2(int fd,
Index: ioemu-remote/block-raw-posix.c
===================================================================
--- ioemu-remote.orig/block-raw-posix.c	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/block-raw-posix.c	2008-12-15 16:02:22.000000000 +0000
@@ -55,6 +55,7 @@
 #include <sys/ioctl.h>
 #include <linux/cdrom.h>
 #include <linux/fd.h>
+#include <sys/mount.h>
 #endif
 #ifdef __FreeBSD__
 #include <sys/disk.h>
@@ -125,6 +126,10 @@
         return ret;
     }
     s->fd = fd;
+#ifndef CONFIG_STUBDOM
+    /* Invalidate buffer cache for this device. */
+    ioctl(s->fd, BLKFLSBUF, 0);
+#endif
     return 0;
 }
 
@@ -505,6 +510,10 @@
 {
     BDRVRawState *s = bs->opaque;
     if (s->fd >= 0) {
+#ifndef CONFIG_STUBDOM
+        /* Invalidate buffer cache for this device. */
+        ioctl(s->fd, BLKFLSBUF, 0);
+#endif
         close(s->fd);
         s->fd = -1;
     }
Index: ioemu-remote/i386-dm/exec-dm.c
===================================================================
--- ioemu-remote.orig/i386-dm/exec-dm.c	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/i386-dm/exec-dm.c	2008-12-15 16:02:22.000000000 +0000
@@ -267,7 +267,7 @@
 
 /* XXX: Simple implementation. Fix later */
 #define MAX_MMIO 32
-struct mmio_space {
+static struct mmio_space {
         target_phys_addr_t start;
         unsigned long size;
         unsigned long io_index;
@@ -413,6 +413,17 @@
         return 0;
 }
 
+void unregister_iomem(target_phys_addr_t start)
+{
+    int index = iomem_index(start);
+    if (index) {
+        fprintf(logfile, "squash iomem [%lx, %lx).\n", mmio[index].start,
+                mmio[index].start + mmio[index].size);
+        mmio[index].start = mmio[index].size = 0;
+    }
+}
+
+
 #if defined(__i386__) || defined(__x86_64__)
 #define phys_ram_addr(x) (qemu_map_cache(x))
 #elif defined(__ia64__)
Index: ioemu-remote/hw/xen_platform.c
===================================================================
--- ioemu-remote.orig/hw/xen_platform.c	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/hw/xen_platform.c	2008-12-15 16:02:35.000000000 +0000
@@ -24,6 +24,7 @@
  */
 
 #include "hw.h"
+#include "pc.h"
 #include "pci.h"
 #include "irq.h"
 #include "qemu-xen.h"
@@ -163,6 +164,52 @@
     cpu_register_physical_memory(addr, 0x1000000, mmio_io_addr);
 }
 
+#define UNPLUG_ALL_IDE_DISKS 1
+#define UNPLUG_ALL_NICS 2
+#define UNPLUG_AUX_IDE_DISKS 4
+
+static void platform_fixed_ioport_write2(void *opaque, uint32_t addr, uint32_t val)
+{
+    switch (addr - 0x10) {
+    case 0:
+        /* Unplug devices.  Value is a bitmask of which devices to
+           unplug, with bit 0 the IDE devices, bit 1 the network
+           devices, and bit 2 the non-primary-master IDE devices. */
+        if (val & UNPLUG_ALL_IDE_DISKS)
+            ide_unplug_harddisks();
+        if (val & UNPLUG_ALL_NICS) {
+            pci_unplug_netifs();
+            net_tap_shutdown_all();
+        }
+        if (val & UNPLUG_AUX_IDE_DISKS) {
+            ide_unplug_aux_harddisks();
+        }
+        break;
+    }
+}
+
+static uint32_t platform_fixed_ioport_read2(void *opaque, uint32_t addr)
+{
+    switch (addr - 0x10) {
+    case 0:
+        return 0x49d2; /* Magic value so that you can identify the
+                          interface. */
+    default:
+        return 0xffff;
+    }
+}
+
+static uint32_t platform_fixed_ioport_read1(void *opaque, uint32_t addr)
+{
+    switch (addr - 0x10) {
+    case 2:
+        /* Version number */
+        return 0;
+    default:
+        return 0xff;
+    }
+}
+
 struct pci_config_header {
     uint16_t vendor_id;
     uint16_t device_id;
@@ -255,4 +302,7 @@
 
     register_savevm("platform", 0, 2, xen_pci_save, xen_pci_load, d);
     printf("Done register platform.\n");
+    register_ioport_write(0x10, 16, 2, platform_fixed_ioport_write2, NULL);
+    register_ioport_read(0x10, 16, 2, platform_fixed_ioport_read2, NULL);
+    register_ioport_read(0x10, 16, 1, platform_fixed_ioport_read1, NULL);
 }
Index: ioemu-remote/hw/pc.h
===================================================================
--- ioemu-remote.orig/hw/pc.h	2008-12-15 16:02:19.000000000 +0000
+++ ioemu-remote/hw/pc.h	2008-12-15 16:02:35.000000000 +0000
@@ -146,6 +146,8 @@
                         qemu_irq *pic);
 void pci_piix4_ide_init(PCIBus *bus, BlockDriverState **hd_table, int devfn,
                         qemu_irq *pic);
+void ide_unplug_harddisks(void);
+void ide_unplug_aux_harddisks(void);
 
 /* ne2000.c */
 

[-- Attachment #1.1.4: hvm-log-to-dom0 --]
[-- Type: text/plain, Size: 2130 bytes --]

Index: ioemu-remote/hw/xen_platform.c
===================================================================
--- ioemu-remote.orig/hw/xen_platform.c	2008-12-15 15:57:04.000000000 +0000
+++ ioemu-remote/hw/xen_platform.c	2008-12-15 16:00:19.000000000 +0000
@@ -31,6 +31,8 @@
 #include <xenguest.h>
 
 extern FILE *logfile;
+static char log_buffer[4096];
+static int log_buffer_off;
 
 #define PFFLAG_ROM_LOCK 1 /* Sets whether ROM memory area is RW or RO */
 
@@ -68,6 +70,18 @@
             d->platform_flags = val & PFFLAG_ROM_LOCK;
         break;
     }
+    case 8:
+        {
+            if (val == '\n' || log_buffer_off == sizeof(log_buffer) - 1) {
+                /* Flush buffer */
+                log_buffer[log_buffer_off] = 0;
+                fprintf(logfile, "%s\n", log_buffer);
+                log_buffer_off = 0;
+                break;
+            }
+            log_buffer[log_buffer_off++] = val;
+        }
+        break;
     default:
         break;
     }
@@ -180,6 +194,24 @@
     }
 }
 
+
+static void platform_fixed_ioport_write1(void *opaque, uint32_t addr, uint32_t val)
+{
+    switch (addr - 0x10) {
+    case 2:
+        /* Send bytes to syslog */
+        if (val == '\n' || log_buffer_off == sizeof(log_buffer) - 1) {
+            /* Flush buffer */
+            log_buffer[log_buffer_off] = 0;
+            fprintf(logfile, "%s\n", log_buffer);
+            log_buffer_off = 0;
+            break;
+        }
+        log_buffer[log_buffer_off++] = val;
+        break;
+    }
+}
+
 static uint32_t platform_fixed_ioport_read2(void *opaque, uint32_t addr)
 {
     switch (addr - 0x10) {
@@ -295,6 +327,7 @@
     register_savevm("platform", 0, 2, xen_pci_save, xen_pci_load, d);
     printf("Done register platform.\n");
     register_ioport_write(0x10, 16, 2, platform_fixed_ioport_write2, NULL);
+    register_ioport_write(0x10, 16, 1, platform_fixed_ioport_write1, NULL);
     register_ioport_read(0x10, 16, 2, platform_fixed_ioport_read2, NULL);
     register_ioport_read(0x10, 16, 1, platform_fixed_ioport_read1, NULL);
 }

[-- Attachment #1.1.5: rate_limit_guest_syslog --]
[-- Type: text/plain, Size: 4625 bytes --]

Index: ioemu-remote/hw/xen_platform.c
===================================================================
--- ioemu-remote.orig/hw/xen_platform.c	2008-12-15 15:02:53.000000000 +0000
+++ ioemu-remote/hw/xen_platform.c	2008-12-15 15:03:34.000000000 +0000
@@ -29,8 +29,10 @@
 #include "irq.h"
 #include "qemu-xen.h"
 
+#include <assert.h>
 #include <xenguest.h>
 
+static int throttling_disabled;
 extern FILE *logfile;
 static char log_buffer[4096];
 static int log_buffer_off;
@@ -44,6 +46,88 @@
   uint64_t   vga_stolen_ram;
 } PCIXenPlatformState;
 
+/* We throttle access to dom0 syslog, to avoid DoS attacks.  This is
+   modelled as a token bucket, with one token for every byte of log.
+   The bucket size is 128KB (->1024 lines of 128 bytes each) and
+   refills at 256B/s.  It starts full.  The guest is blocked if no
+   tokens are available when it tries to generate a log message. */
+#define BUCKET_MAX_SIZE (128*1024)
+#define BUCKET_FILL_RATE 256
+
+static void throttle(unsigned count)
+{
+    static unsigned available;
+    static struct timespec last_refil;
+    static int started;
+    static int warned;
+
+    struct timespec waiting_for, now;
+    double delay;
+    struct timespec ts;
+
+    if (throttling_disabled)
+        return;
+
+    if (!started) {
+        clock_gettime(CLOCK_MONOTONIC, &last_refil);
+        available = BUCKET_MAX_SIZE;
+        started = 1;
+    }
+
+    if (count > BUCKET_MAX_SIZE) {
+        fprintf(logfile, "tried to get %d tokens, but bucket size is %d\n",
+                count, BUCKET_MAX_SIZE);
+        exit(1);
+    }
+
+    if (available < count) {
+        /* The bucket is empty.  Refill it */
+
+        /* When will it be full enough to handle this request? */
+        delay = (double)(count - available) / BUCKET_FILL_RATE;
+        waiting_for = last_refil;
+        waiting_for.tv_sec += delay;
+        waiting_for.tv_nsec += (delay - (int)delay) * 1e9;
+        if (waiting_for.tv_nsec >= 1000000000) {
+            waiting_for.tv_nsec -= 1000000000;
+            waiting_for.tv_sec++;
+        }
+
+        /* How long do we have to wait? (might be negative) */
+        clock_gettime(CLOCK_MONOTONIC, &now);
+        ts.tv_sec = waiting_for.tv_sec - now.tv_sec;
+        ts.tv_nsec = waiting_for.tv_nsec - now.tv_nsec;
+        if (ts.tv_nsec < 0) {
+            ts.tv_sec--;
+            ts.tv_nsec += 1000000000;
+        }
+
+        /* Wait for it. */
+        if (ts.tv_sec > 0 ||
+            (ts.tv_sec == 0 && ts.tv_nsec > 0)) {
+            if (!warned) {
+                fprintf(logfile, "throttling guest access to syslog\n");
+                warned = 1;
+            }
+            while (nanosleep(&ts, &ts) < 0 && errno == EINTR)
+                ;
+        }
+
+        /* Refill */
+        clock_gettime(CLOCK_MONOTONIC, &now);
+        delay = (now.tv_sec - last_refil.tv_sec) +
+            (now.tv_nsec - last_refil.tv_nsec) * 1.0e-9;
+        available += BUCKET_FILL_RATE * delay;
+        if (available > BUCKET_MAX_SIZE)
+            available = BUCKET_MAX_SIZE;
+        last_refil = now;
+    }
+
+    assert(available >= count);
+
+    available -= count;
+}
+
 static uint32_t xen_platform_ioport_readb(void *opaque, uint32_t addr)
 {
     PCIXenPlatformState *s = opaque;
@@ -76,6 +160,7 @@
             if (val == '\n' || log_buffer_off == sizeof(log_buffer) - 1) {
                 /* Flush buffer */
                 log_buffer[log_buffer_off] = 0;
+                throttle(log_buffer_off);
                 fprintf(logfile, "%s\n", log_buffer);
                 log_buffer_off = 0;
                 break;
@@ -278,6 +363,7 @@
         if (val == '\n' || log_buffer_off == sizeof(log_buffer) - 1) {
             /* Flush buffer */
             log_buffer[log_buffer_off] = 0;
+            throttle(log_buffer_off);
             fprintf(logfile, "%s\n", log_buffer);
             log_buffer_off = 0;
             break;
@@ -302,6 +388,7 @@
 {
     PCIXenPlatformState *d;
     struct pci_config_header *pch;
+    struct stat stbuf;
 
     printf("Register xen platform.\n");
     d = (PCIXenPlatformState *)pci_register_device(
@@ -337,4 +424,8 @@
     register_ioport_write(0x10, 16, 1, platform_fixed_ioport_write1, NULL);
     register_ioport_read(0x10, 16, 2, platform_fixed_ioport_read2, NULL);
     register_ioport_read(0x10, 16, 1, platform_fixed_ioport_read1, NULL);
+
+    if (stat("/etc/disable-guest-log-throttle", &stbuf) == 0)
+        throttling_disabled = 1;
+
 }

[-- Attachment #1.1.6: pv-driver-version --]
[-- Type: text/plain, Size: 4720 bytes --]

Index: ioemu-remote/hw/xen_platform.c
===================================================================
--- ioemu-remote.orig/hw/xen_platform.c	2008-12-15 15:03:34.000000000 +0000
+++ ioemu-remote/hw/xen_platform.c	2008-12-15 15:06:08.000000000 +0000
@@ -32,6 +32,8 @@
 #include <assert.h>
 #include <xenguest.h>
 
+static int drivers_blacklisted;
+static uint16_t driver_product_version;
 static int throttling_disabled;
 extern FILE *logfile;
 static char log_buffer[4096];
@@ -341,6 +343,42 @@
             ide_unplug_aux_harddisks();
         }
         break;
+    case 2:
+        switch (val) {
+        case 1:
+            fprintf(logfile, "Citrix Windows PV drivers loaded in guest\n");
+            break;
+        case 0:
+            fprintf(logfile, "Guest claimed to be running PV product 0?\n");
+            break;
+        default:
+            fprintf(logfile, "Unknown PV product %d loaded in guest\n", val);
+            break;
+        }
+        driver_product_version = val;
+        break;
+    }
+}
+
+static void platform_fixed_ioport_write4(void *opaque, uint32_t addr,
+                                         uint32_t val)
+{
+    switch (addr - 0x10) {
+    case 0:
+        /* PV driver version */
+        if (driver_product_version == 0) {
+            fprintf(logfile,
+                    "Drivers tried to set their version number (%d) before setting the product number?\n",
+                    val);
+            return;
+        }
+        fprintf(logfile, "PV driver build %d\n", val);
+        if (xenstore_pv_driver_build_blacklisted(driver_product_version,
+                                                 val)) {
+            fprintf(logfile, "Drivers are blacklisted!\n");
+            drivers_blacklisted = 1;
+        }
+        break;
     }
 }
 
@@ -348,8 +386,14 @@
 {
     switch (addr - 0x10) {
     case 0:
-        return 0x49d2; /* Magic value so that you can identify the
-                          interface. */
+        if (drivers_blacklisted) {
+            /* The drivers will recognise this magic number and refuse
+             * to do anything. */
+            return 0xd249;
+        } else {
+            /* Magic value so that you can identify the interface. */
+            return 0x49d2;
+        }
     default:
         return 0xffff;
     }
@@ -378,7 +422,7 @@
     switch (addr - 0x10) {
     case 2:
         /* Version number */
-        return 0;
+        return 1;
     default:
         return 0xff;
     }
@@ -420,6 +464,7 @@
 
     register_savevm("platform", 0, 2, xen_pci_save, xen_pci_load, d);
     printf("Done register platform.\n");
+    register_ioport_write(0x10, 16, 4, platform_fixed_ioport_write4, NULL);
     register_ioport_write(0x10, 16, 2, platform_fixed_ioport_write2, NULL);
     register_ioport_write(0x10, 16, 1, platform_fixed_ioport_write1, NULL);
     register_ioport_read(0x10, 16, 2, platform_fixed_ioport_read2, NULL);
Index: ioemu-remote/qemu-xen.h
===================================================================
--- ioemu-remote.orig/qemu-xen.h	2008-12-15 15:02:53.000000000 +0000
+++ ioemu-remote/qemu-xen.h	2008-12-15 15:06:39.000000000 +0000
@@ -91,6 +91,8 @@
 char *xenstore_device_model_read(int domid, char *key, unsigned int *len);
 char *xenstore_read_battery_data(int battery_status);
 int xenstore_refresh_battery_status(void);
+int xenstore_pv_driver_build_blacklisted(uint16_t product_number,
+                                         uint32_t build_nr);
 
 /* xenfbfront.c */
 int xenfb_pv_display_init(DisplayState *ds);
Index: ioemu-remote/xenstore.c
===================================================================
--- ioemu-remote.orig/xenstore.c	2008-12-15 14:30:47.000000000 +0000
+++ ioemu-remote/xenstore.c	2008-12-15 15:06:08.000000000 +0000
@@ -782,6 +782,34 @@
     free(path);
 }
 
+int
+xenstore_pv_driver_build_blacklisted(uint16_t product_nr,
+                                     uint32_t build_nr)
+{
+    char *buf = NULL;
+    char *tmp;
+    const char *product;
+
+    switch (product_nr) {
+    case 1:
+        product = "xensource-windows";
+        break;
+    default:
+        /* Don't know what product this is -> we can't blacklist
+         * it. */
+        return 0;
+    }
+    if (asprintf(&buf, "/mh/driver-blacklist/%s/%d", product, build_nr) < 0)
+        return 0;
+    tmp = xs_read(xsh, XBT_NULL, buf, NULL);
+    free(tmp);
+    free(buf);
+    if (tmp == NULL)
+        return 0;
+    else
+        return 1;
+}
+
 void xenstore_record_dm_state(char *state)
 {
     xenstore_record_dm("state", state);

[-- Attachment #1.2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-15 17:10           ` Steven Smith
@ 2008-12-15 23:58             ` James Harper
  2008-12-16 10:20               ` Steven Smith
  2008-12-16  3:01             ` James Harper
                               ` (4 subsequent siblings)
  5 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-15 23:58 UTC (permalink / raw)
  To: Steven Smith, Keir Fraser; +Cc: xen-devel, Ian Jackson

> > I'm not sure if the approach taken by the Citrix drivers could be at all
> > useful. Cc'ing Steven Smith in case he has any comments to make.
> I can't see any reason why the approach we take in our closed-source
> drivers wouldn't work here as well.  I've attached the appropriate
> patches from our product qemu patchqueue, tidied up and stripped of
> the most obviously XenServer-specific bits, and made to apply to
> current ioemu-remote.
> 
> 
> The protocol covers three basic things:
> 
> -- Disconnecting emulated devices.
> -- Getting log messages out of the drivers and into dom0.
> -- Allowing dom0 to block the loading of specific drivers.  This is
>    intended as a backwards-compatibility thing: if we discover a bug
>    in some old version of the drivers, then rather than working around
>    it in Xen, we have the option of just making those drivers fall
>    back to emulated mode.
> 
> The current protocol works like this (from the point of view of
> drivers):
> 
> 1) When the drivers first come up, they check whether the unplug logic
>    is available by reading a two-byte magic number from IO port 0x10.
>    These should be 0x49d2.  If the magic number doesn't match, the
>    drivers don't do anything.
> 
> 2) The drivers read a one-byte protocol version from IO port 0x12.  If
>    this is 0, skip to 6.
> 
> 3) The drivers write a two-byte product number to IO port 0x12.  At
>    the moment, the only drivers using this protocol are our
>    closed-source ones, which use product number 1.
> 
> 4) The drivers write a four-byte build number to IO port 0x10.
> 
> 5) The drivers check the magic number by reading two bytes from 0x10
>    again.  If it's changed from 0x49d2, the drivers are blacklisted
>    and should not load.
> 
> 6) The drivers write a two-byte bitmask of devices to unplug to IO
>    port 0x10.  The defined fields are:
> 
>    1 -- All IDE disks (not including CD drives)
>    2 -- All emulated NICs
> >    4 -- All IDE disks except for the primary master (not including CD drives)

Interesting. This seems more flexible than my initial approach, which was
to block just the PCI devices. I guess it makes sense to leave the qemu
CDROMs as they support the eject etc. functionality, and performance is
seldom such a problem.

>    The relevant emulated devices then disappear from the relevant
>    buses.  For most guest operating systems, you want to do this
>    before device enumeration happens.
> 
> ...) Once the drivers have checked the magic number (and the
>      blacklist, if appropriate), they can send log messages to qemu
>      which will be logged to wherever qemu's logs go
>      (/var/log/xen/qemu-dm.log on normal Xen, dom0 syslog on
>      XenServer).  These messages are written to IO port 0x12 a byte at
>      a time, and are terminated by newlines.  There's a fairly
>      aggressive rate limiter on these messages, so they shouldn't be
>      used for anything even vaguely high-volume, but they're rather
>      useful for debugging and support.
> 
> This isn't exactly a pretty protocol, but it does solve the problem.
> 
> 
> The blacklist is, from qemu's point of view, handled mostly through
> xenstore.  A driver version is considered to be blacklisted if
> /mh/driver-blacklist/{product_name}/{build_number} exists and is
> readable, where {build_number} is the build number from step 4 as a
> decimal number.  {product_name} is a string corresponding to the
> product number in step 3; at present, the only product number is 1,
> which has a product_name of xensource-windows.
> 
> 
> A previous version of the protocol put the IO ports on the PCI
> platform device.  Unfortunately, that makes it difficult to get at
> them before PCI bus enumeration happens, which complicates removal of
> the emulated NICs.  It is possible to work around these but (at least
> on Windows) it's complicated and messy, and generally best avoided.

I found that too :)

It's a bit more complicated than I would have preferred, but I think
there is value in keeping the trees in sync rather than re-implementing
things.

As a result of this patch, does that mean that the Citrix Windows PV
drivers might work on the GPL tree? Is that a problem?

Thanks

James


* RE: disable qemu PCI devices in HVM domains
  2008-12-15 17:10           ` Steven Smith
  2008-12-15 23:58             ` James Harper
@ 2008-12-16  3:01             ` James Harper
  2008-12-16 10:27               ` Steven Smith
  2008-12-17  3:47             ` James Harper
                               ` (3 subsequent siblings)
  5 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-16  3:01 UTC (permalink / raw)
  To: Steven Smith, Keir Fraser; +Cc: xen-devel, Ian Jackson

Steven,

Can you please comment on how the disabling should behave across
migration or save/restores? Are the flags retained as part of the saved
state or does the disabling need to happen again once the domain is
migrated/restored?

Thanks

James


* Re: disable qemu PCI devices in HVM domains
  2008-12-15 23:58             ` James Harper
@ 2008-12-16 10:20               ` Steven Smith
  0 siblings, 0 replies; 44+ messages in thread
From: Steven Smith @ 2008-12-16 10:20 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel, Ian Jackson, Keir Fraser


[-- Attachment #1.1: Type: text/plain, Size: 1897 bytes --]

> > 6) The drivers write a two-byte bitmask of devices to unplug to IO
> >    port 0x10.  The defined fields are:
> > 
> >    1 -- All IDE disks (not including CD drives)
> >    2 -- All emulated NICs
> >    4 -- All IDE disks except for the primary master (not including CD drives)
> Interesting. This seems more flexible than my initial approach which was
> to block just the PCI devices. I guess it makes sense to leave the qemu
> CDROMs as they support the eject etc. functionality, and performance is
> seldom such a problem.
Yeah, that was pretty much our thinking.  If you want a bit to turn
off the IDE controller completely then that would be pretty easy to
add.

> > A previous version of the protocol put the IO ports on the PCI
> > platform device.  Unfortunately, that makes it difficult to get at
> > them before PCI bus enumeration happens, which complicates removal of
> > the emulated NICs.  It is possible to work around these but (at least
> > on Windows) it's complicated and messy, and generally best avoided.
> 
> I found that too :)
> 
> It's a bit more complicated than I would have preferred, but I think
> there is value in keeping the trees in sync rather than re-implementing
> things.
> 
> As a result of this patch, does that mean that the Citrix Windows PV
> drivers might work on the GPL tree?
Not quite.  You still need a few other little bits of our toolstack,
but nothing which couldn't be added to xend with twenty minutes'
hacking (which is how I tested these patches).

Of course, the driver EULA still prohibits using them with anything
other than XenServer.

> Is that a problem?
Not really.  All of the patches I've posted were open source already,
so someone who really wanted to use the drivers could have done so,
and there's enough still missing to make it inconvenient for casual
infringement.

Steven.



* Re: disable qemu PCI devices in HVM domains
  2008-12-16  3:01             ` James Harper
@ 2008-12-16 10:27               ` Steven Smith
  2008-12-16 11:06                 ` James Harper
  0 siblings, 1 reply; 44+ messages in thread
From: Steven Smith @ 2008-12-16 10:27 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel, Ian Jackson, Keir Fraser


[-- Attachment #1.1: Type: text/plain, Size: 587 bytes --]

> Can you please comment on how the disabling should behave across
> migration or save/restores? Are the flags retained as part of the saved
> state or does the disabling need to happen again once the domain is
> migrated/restored?
In the patches I sent, the flags are discarded across migration (so
all of the emulated devices come back).  Our drivers get in very early
after the domain is resumed after migration and re-issue the magic
writes which unplug the devices.  Saving them wouldn't cause us any
problems, though, so if that's easier for you then feel free to add
it.

Steven.



* RE: disable qemu PCI devices in HVM domains
  2008-12-16 10:27               ` Steven Smith
@ 2008-12-16 11:06                 ` James Harper
  2008-12-16 11:28                   ` Keir Fraser
  0 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-16 11:06 UTC (permalink / raw)
  To: Steven Smith; +Cc: xen-devel, Ian Jackson, Keir Fraser

> > Can you please comment on how the disabling should behave across
> > migration or save/restores? Are the flags retained as part of the saved
> > state or does the disabling need to happen again once the domain is
> > migrated/restored?
> In the patches I sent, the flags are discarded across migration (so
> all of the emulated devices come back).  Our drivers get in very early
> after the domain is resumed after migration and re-issue the magic
> writes which unplug the devices.  Saving them wouldn't cause us any
> problems, though, so if that's easier for you then feel free to add
> it.

The only time this would not work for me is if the DomU was migrated
from a machine that supported device disabling to one that didn't (e.g.
didn't have these patches).

Is that even a supported scenario, e.g. migration between machines
running (even slightly) different versions of code?

James


* Re: disable qemu PCI devices in HVM domains
  2008-12-16 11:06                 ` James Harper
@ 2008-12-16 11:28                   ` Keir Fraser
  0 siblings, 0 replies; 44+ messages in thread
From: Keir Fraser @ 2008-12-16 11:28 UTC (permalink / raw)
  To: James Harper, Steven Smith; +Cc: xen-devel, Ian Jackson

On 16/12/2008 11:06, "James Harper" <james.harper@bendigoit.com.au> wrote:

> The only time this would not work for me is if the DomU was migrated
> from a machine that supported device disabling to one that didn't (eg
> didn't have these patches).
> 
> Is that even a supported scenario - eg migration between machines
> running (even slightly) different versions of code?

Migration to older hypervisor and tools is not supported.

 -- Keir


* RE: disable qemu PCI devices in HVM domains
  2008-12-15 17:10           ` Steven Smith
  2008-12-15 23:58             ` James Harper
  2008-12-16  3:01             ` James Harper
@ 2008-12-17  3:47             ` James Harper
  2008-12-17  8:27               ` Keir Fraser
  2008-12-17 10:36               ` Ian Jackson
  2008-12-19  0:15             ` James Harper
                               ` (2 subsequent siblings)
  5 siblings, 2 replies; 44+ messages in thread
From: James Harper @ 2008-12-17  3:47 UTC (permalink / raw)
  To: Steven Smith, Keir Fraser; +Cc: xen-devel, Ian Jackson

> > I'm not sure if the approach taken by the Citrix drivers could be at
> > all useful. Cc'ing Steven Smith in case he has any comments to make.
> I can't see any reason why the approach we take in our closed-source
> drivers wouldn't work here as well.  I've attached the appropriate
> patches from our product qemu patchqueue, tidied up and stripped of
> the most obviously XenServer-specific bits, and made to apply to
> current ioemu-remote.
> 

3.3.1 is feature frozen at this point isn't it, which means 3.3.2 for
these patches right?

Thanks

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-17  3:47             ` James Harper
@ 2008-12-17  8:27               ` Keir Fraser
  2008-12-17  8:32                 ` Keir Fraser
  2008-12-17 10:36               ` Ian Jackson
  1 sibling, 1 reply; 44+ messages in thread
From: Keir Fraser @ 2008-12-17  8:27 UTC (permalink / raw)
  To: James Harper, Steven Smith; +Cc: xen-devel, Ian Jackson

On 17/12/2008 03:47, "James Harper" <james.harper@bendigoit.com.au> wrote:

>>> I'm not sure if the approach taken by the Citrix drivers could be at
>>> all useful. Cc'ing Steven Smith in case he has any comments to make.
>> I can't see any reason why the approach we take in our closed-source
>> drivers wouldn't work here as well.  I've attached the appropriate
>> patches from our product qemu patchqueue, tidied up and stripped of
>> the most obviously XenServer-specific bits, and made to apply to
>> current ioemu-remote.
> 
> 3.3.1 is feature frozen at this point isn't it, which means 3.3.2 for
> these patches right?

I'm reluctant to introduce anything into a stable branch which would prevent
us migrating domains up or down the branch. If we are going to shove it in,
it should happen now, for 3.3.1. My .1 releases tend to be generously
proportioned, but beyond a .1 release I definitely don't take anything other
than pure bug fixes.

 -- Keir

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-17  8:27               ` Keir Fraser
@ 2008-12-17  8:32                 ` Keir Fraser
  2008-12-17  9:21                   ` James Harper
  0 siblings, 1 reply; 44+ messages in thread
From: Keir Fraser @ 2008-12-17  8:32 UTC (permalink / raw)
  To: James Harper, Steven Smith; +Cc: xen-devel, Ian Jackson

On 17/12/2008 08:27, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

>> 3.3.1 is feature frozen at this point isn't it, which means 3.3.2 for
>> these patches right?
> 
> I'm reluctant to introduce anything into a stable branch which would prevent
> us migrating domains up or down the branch. If we are going to shove it in, it
> should happen now, for 3.3.1. My .1 releases tend to be generously
> proportioned, but beyond a .1 release I definitely don't take anything other
> than pure bug fixes.

The upshot really is: if you can agree on a qemu-dm patch this week then
fine (and the interface it implements is then locked and unchangeable).
Otherwise it's a 3.4.0 feature.

 -- Keir

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-17  8:32                 ` Keir Fraser
@ 2008-12-17  9:21                   ` James Harper
  2009-01-22 16:13                     ` Pasi Kärkkäinen
  0 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-17  9:21 UTC (permalink / raw)
  To: Keir Fraser, Steven Smith; +Cc: xen-devel, Ian Jackson

> On 17/12/2008 08:27, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> 
> >> 3.3.1 is feature frozen at this point isn't it, which means 3.3.2
> >> for these patches right?
> >
> > I'm reluctant to introduce anything into a stable branch which would
> > prevent us migrating domains up or down the branch. If we are going
> > to shove it in, it should happen now, for 3.3.1. My .1 releases tend
> > to be generously proportioned, but beyond a .1 release I definitely
> > don't take anything other than pure bug fixes.
> 
> The upshot really is: if you can agree on a qemu-dm patch this week
> then fine (and the interface it implements is then locked and
> unchangeable). Otherwise it's a 3.4.0 feature.
> 

Unfortunately I don't think that gives me enough time to do testing...
what's the timeframe for a 3.4.0 release?

Thanks

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-17  3:47             ` James Harper
  2008-12-17  8:27               ` Keir Fraser
@ 2008-12-17 10:36               ` Ian Jackson
  2008-12-17 11:00                 ` Keir Fraser
  1 sibling, 1 reply; 44+ messages in thread
From: Ian Jackson @ 2008-12-17 10:36 UTC (permalink / raw)
  To: Keir Fraser; +Cc: Steven Smith, James Harper, xen-devel

Keir Fraser writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> I'm reluctant to introduce anything into a stable branch which would prevent
> us migrating domains up or down the branch. If we are going to shove it in,
> it should happen now, for 3.3.1. My .1 releases tend to be generously
> proportioned, but beyond a .1 release I definitely don't take anything other
> than pure bug fixes.

Can we make it so that the patch to qemu-dm does not affect guests
which do not use James's PV drivers ?  That would allow us to support
migration of any production domains.  It seems that James's drivers
are not production-stable right now anyway (feel free to disagree,
James!)

Are there likely to be people using xen-unstable with the Citrix PV
drivers ?  Steven suggested that would be a violation of the licence
of the Citrix drivers but is that true even for a Citrix/Xenserver
customer ?  If we don't need to worry about the migration-
compatibility of such guests then the answer to my first question may
be easier ...

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-17 10:36               ` Ian Jackson
@ 2008-12-17 11:00                 ` Keir Fraser
  2008-12-17 11:01                   ` James Harper
  0 siblings, 1 reply; 44+ messages in thread
From: Keir Fraser @ 2008-12-17 11:00 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, James Harper, xen-devel

On 17/12/2008 10:36, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:

> Keir Fraser writes ("Re: [Xen-devel] disable qemu PCI devices in HVM
> domains"):
>> I'm reluctant to introduce anything into a stable branch which would prevent
>> us migrating domains up or down the branch. If we are going to shove it in,
>> it should happen now, for 3.3.1. My .1 releases tend to be generously
>> proportioned, but beyond a .1 release I definitely don't take anything other
>> than pure bug fixes.
> 
> Can we make it so that the patch to qemu-dm does not affect guests
> which do not use James's PV drivers ?  That would allow us to support
> migration of any production domains.  It seems that James's drivers
> are not production-stable right now anyway (feel free to disagree,
> James!)

Well, maybe. Frankly 3.4.0 will probably be out not long after 3.3.2 anyway.

 -- Keir

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-17 11:00                 ` Keir Fraser
@ 2008-12-17 11:01                   ` James Harper
  0 siblings, 0 replies; 44+ messages in thread
From: James Harper @ 2008-12-17 11:01 UTC (permalink / raw)
  To: Keir Fraser, Ian Jackson; +Cc: Steven Smith, xen-devel

> > Keir Fraser writes ("Re: [Xen-devel] disable qemu PCI devices in HVM
> > domains"):
> >> I'm reluctant to introduce anything into a stable branch which
> >> would prevent us migrating domains up or down the branch. If we are
> >> going to shove it in, it should happen now, for 3.3.1. My .1
> >> releases tend to be generously proportioned, but beyond a .1
> >> release I definitely don't take anything other than pure bug fixes.
> >
> > Can we make it so that the patch to qemu-dm does not affect guests
> > which do not use James's PV drivers ?  That would allow us to
> > support
> > migration of any production domains.  It seems that James's drivers
> > are not production-stable right now anyway (feel free to disagree,
> > James!)
> 
> Well, maybe. Frankly 3.4.0 will probably be out not long after 3.3.2
> anyway.
> 

I'd rather wait and get the interface just how we want it than put
something in now in a rush.

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-15 17:10           ` Steven Smith
                               ` (2 preceding siblings ...)
  2008-12-17  3:47             ` James Harper
@ 2008-12-19  0:15             ` James Harper
  2008-12-19  9:51               ` Ian Jackson
  2008-12-19  5:47             ` James Harper
  2008-12-30 17:49             ` Ian Jackson
  5 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-19  0:15 UTC (permalink / raw)
  To: Steven Smith, Keir Fraser; +Cc: xen-devel, Ian Jackson

Steven,

I've just tried applying the patches and the last one doesn't quite
apply - this is the resulting qemu-xen.h.rej file:

***************
*** 91,96 ****
  char *xenstore_device_model_read(int domid, char *key, unsigned int
*len);
  char *xenstore_read_battery_data(int battery_status);
  int xenstore_refresh_battery_status(void);

  /* xenfbfront.c */
  int xenfb_pv_display_init(DisplayState *ds);
--- 91,98 ----
  char *xenstore_device_model_read(int domid, char *key, unsigned int
*len);
  char *xenstore_read_battery_data(int battery_status);
  int xenstore_refresh_battery_status(void);
+ int xenstore_pv_driver_build_blacklisted(uint16_t product_number,
+                                          uint32_t build_nr);

  /* xenfbfront.c */
  int xenfb_pv_display_init(DisplayState *ds);

Am I doing something wrong? Maybe I have the wrong version of the git
tree?

After adding the lines manually it compiles, and I'll test it a bit
later.

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-15 17:10           ` Steven Smith
                               ` (3 preceding siblings ...)
  2008-12-19  0:15             ` James Harper
@ 2008-12-19  5:47             ` James Harper
  2008-12-21 19:56               ` Steven Smith
  2008-12-30 17:49             ` Ian Jackson
  5 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-19  5:47 UTC (permalink / raw)
  To: Steven Smith, Keir Fraser; +Cc: xen-devel, Ian Jackson

> > >> I like the principle of disabling the drivers via an instruction
> > >> to qemu rather than by attempting to wrestle with the Windows
> > >> driver machinery to try to hide the devices.  But couldn't we
> > >> simulate a PCI unplug or a medium change or something instead ?
> > >> Then you could do it later in the boot after your own drivers
> > >> have properly bound.
> > >>
> > >
> > > Is the 'pci unplug' as simple as making a call somewhere like
> > > pci_unplug(id of ide adapter)? I'm concerned that Windows may not
> > > like this.
> > My own opinion is that the ioports are fine, but they should be
> > offsets from the xen-platform-pci device's ioport bar. Also we ought
> > to document the ports in xen-platform-pci's source file, as it's
> > going to start getting messy in there.
> >
> > I'm not sure if the approach taken by the Citrix drivers could be at
> > all useful. Cc'ing Steven Smith in case he has any comments to make.
> I can't see any reason why the approach we take in our closed-source
> drivers wouldn't work here as well.  I've attached the appropriate
> patches from our product qemu patchqueue, tidied up and stripped of
> the most obviously XenServer-specific bits, and made to apply to
> current ioemu-remote.
> 
> 
> The protocol covers three basic things:
> 
> -- Disconnecting emulated devices.
> -- Getting log messages out of the drivers and into dom0.
> -- Allowing dom0 to block the loading of specific drivers.  This is
>    intended as a backwards-compatibility thing: if we discover a bug
>    in some old version of the drivers, then rather than working around
>    it in Xen, we have the option of just making those drivers fall
>    back to emulated mode.
> 
> The current protocol works like this (from the point of view of
> drivers):
> 
> 1) When the drivers first come up, they check whether the unplug logic
>    is available by reading a two-byte magic number from IO port 0x10.
>    These should be 0x49d2.  If the magic number doesn't match, the
>    drivers don't do anything.
> 
> 
> 5) The drivers check the magic number by reading two bytes from 0x10
>    again.  If it's changed from 0x49d2, the drivers are blacklisted
>    and should not load.

It appears that you set it to '0xd249' when the driver is blacklisted.
Can I rely on that?

My logging code runs independently in a number of unrelated places, and
may not be called for some of those places until after the blacklist
process has occurred. So testing for == 0x49d2 || == 0xd249 would tell
me if the backend supported logging.

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-19  0:15             ` James Harper
@ 2008-12-19  9:51               ` Ian Jackson
  2008-12-19  9:54                 ` James Harper
  0 siblings, 1 reply; 44+ messages in thread
From: Ian Jackson @ 2008-12-19  9:51 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel, Keir Fraser

James Harper writes ("RE: [Xen-devel] disable qemu PCI devices in HVM domains"):
> I've just tried applying the patches and the last one doesn't quite
> apply - this is the resulting qemu-xen.h.rej file:

The public (non-staging) tree will have seen a biggish push recently,
due to my most recent merge with upstream making it through the
automated tests; perhaps you just need to pull ?

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-19  9:51               ` Ian Jackson
@ 2008-12-19  9:54                 ` James Harper
  2008-12-19  9:57                   ` Ian Jackson
  0 siblings, 1 reply; 44+ messages in thread
From: James Harper @ 2008-12-19  9:54 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, xen-devel, Keir Fraser

> From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> Sent: Friday, 19 December 2008 20:52
> To: James Harper
> Cc: Steven Smith; Keir Fraser; xen-devel@lists.xensource.com
> Subject: RE: [Xen-devel] disable qemu PCI devices in HVM domains
> 
> James Harper writes ("RE: [Xen-devel] disable qemu PCI devices in HVM
> domains"):
> > I've just tried applying the patches and the last one doesn't quite
> > apply - this is the resulting qemu-xen.h.rej file:
> 
> The public (non-staging) tree will have seen a biggish push recently,
> due to my most recent merge with upstream making it through the
> automated tests; perhaps you just need to pull ?
> 

I did pull before applying the patches. Maybe that was the problem?
Steven's patches were posted on the 16th, when was your push?

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-19  9:54                 ` James Harper
@ 2008-12-19  9:57                   ` Ian Jackson
  0 siblings, 0 replies; 44+ messages in thread
From: Ian Jackson @ 2008-12-19  9:57 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel, Keir Fraser

James Harper writes ("RE: [Xen-devel] disable qemu PCI devices in HVM domains"):
> I did pull before applying the patches. Maybe that was the problem?

Probably, yes.

> Steven's patches were posted on the 16th, when was your push?

Wed, 17 Dec 2008 06:22:01 GMT

(It's done automatically rather than by hand.)

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-19  5:47             ` James Harper
@ 2008-12-21 19:56               ` Steven Smith
  2008-12-31 16:22                 ` Ian Jackson
  0 siblings, 1 reply; 44+ messages in thread
From: Steven Smith @ 2008-12-21 19:56 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel, Ian Jackson, Keir Fraser


[-- Attachment #1.1: Type: text/plain, Size: 378 bytes --]

> > 5) The drivers check the magic number by reading two bytes from 0x10
> >    again.  If it's changed from 0x49d2, the drivers are blacklisted
> >    and should not load.
> It appears that you set it to '0xd249' when the driver is blacklisted.
> Can I rely on that?
I can't see any reason for us to ever change it, so if it makes things
easier for you then go ahead.

Steven.

[-- Attachment #1.2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-15 17:10           ` Steven Smith
                               ` (4 preceding siblings ...)
  2008-12-19  5:47             ` James Harper
@ 2008-12-30 17:49             ` Ian Jackson
  2009-01-07 11:07               ` Steven Smith
  5 siblings, 1 reply; 44+ messages in thread
From: Ian Jackson @ 2008-12-30 17:49 UTC (permalink / raw)
  To: Steven Smith; +Cc: xen-devel, James Harper, Keir Fraser

Steven Smith writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> I can't see any reason why the approach we take in our closed-source
> drivers wouldn't work here as well.  I've attached the appropriate
> patches from our product qemu patchqueue, tidied up and stripped of
> the most obviously XenServer-specific bits, and made to apply to
> current ioemu-remote.

I'm just in the process of applying this and I came across this:

@@ -792,6 +793,10 @@ static void raw_close(BlockDriverState *bs)
...
+#ifndef CONFIG_STUBDOM
+        /* Invalidate buffer cache for this device. */
+        ioctl(s->fd, BLKFLSBUF, 0);
+#endif

Does this mean that there is currently, in the Open Source qemu-dm
tree, a cache coherency problem between emulated and PV disk paths ?

What about Linux platforms with existing PV drivers which do not
engage in the blacklisting/disabling protocol ?

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-21 19:56               ` Steven Smith
@ 2008-12-31 16:22                 ` Ian Jackson
  2009-01-01 12:35                   ` James Harper
  2009-01-07 11:10                   ` Steven Smith
  0 siblings, 2 replies; 44+ messages in thread
From: Ian Jackson @ 2008-12-31 16:22 UTC (permalink / raw)
  To: Steven Smith, James Harper; +Cc: xen-devel

Steven Smith writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> [stuff]

Thanks for that.  I have applied your patches to the code, and checked
in your commentary as a new document file after some editing.

I've also:

* Documented this assurance you gave James:

 > > 5) The drivers check the magic number by reading two bytes from 0x10
 > > >    again.  If it's changed from 0x49d2, the drivers are blacklisted
 > > >    and should not load.
 > > It appears that you set it to '0xd249' when the driver is blacklisted.
 > > Can I rely on that?
 > I can't see any reason for us to ever change it, so if it makes things
 > easier for you then go ahead.

* Documented the registry location and allocation protocol
  (qemu-xen-unstable's xenstore.c, and ask on xen-devel, respectively)

* Assigned 0xffff for `experimental' drivers in the hope that people
  won't steal one of the existing assignments.

* Assigned 2 ("gplpv-windows") for James's drivers.  James, you are
  in the best position to decide what the `build id' should look like
  and there is no need to document it other than in your release
  notes, so I'll leave that to you.

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2008-12-31 16:22                 ` Ian Jackson
@ 2009-01-01 12:35                   ` James Harper
  2009-01-02 12:30                     ` Ian Jackson
  2009-01-07 11:10                   ` Steven Smith
  1 sibling, 1 reply; 44+ messages in thread
From: James Harper @ 2009-01-01 12:35 UTC (permalink / raw)
  To: Ian Jackson, Steven Smith; +Cc: xen-devel

> 
> Steven Smith writes ("Re: [Xen-devel] disable qemu PCI devices in HVM
> domains"):
> > [stuff]
> 
> Thanks for that.  I have applied your patches to the code, and checked
> in your commentary as a new document file after some editing.
> 
> I've also:
> 
> * Documented this assurance you gave James:
> 
>  > > 5) The drivers check the magic number by reading two bytes from
>  > > >    0x10 again.  If it's changed from 0x49d2, the drivers are
>  > > >    blacklisted and should not load.
>  > > It appears that you set it to '0xd249' when the driver is
>  > > blacklisted.
>  > > Can I rely on that?
>  > I can't see any reason for us to ever change it, so if it makes
>  > things easier for you then go ahead.
> 
> * Documented the registry location and allocation protocol
>   (qemu-xen-unstable's xenstore.c, and ask on xen-devel, respectively)
> 
> * Assigned 0xffff for `experimental' drivers in the hope that people
>   won't steal one of the existing assignments.
> 
> * Assigned 2 ("gplpv-windows") for James's drivers.  James, you are
>   in the best position to decide what the `build id' should look like
>   and there is no need to document it other than in your release
>   notes, so I'll leave that to you.
> 

Thanks Ian. I assume that that patch is in qemu-xen-unstable as opposed
to qemu-xen-3.3-testing... is there an easy way I can pull that patch
into my copy of 3.3-testing?

Thanks

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2009-01-01 12:35                   ` James Harper
@ 2009-01-02 12:30                     ` Ian Jackson
  0 siblings, 0 replies; 44+ messages in thread
From: Ian Jackson @ 2009-01-02 12:30 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel

James Harper writes ("RE: [Xen-devel] disable qemu PCI devices in HVM domains"):
> Thanks Ian. I assume that that patch is in qemu-xen-unstable as opposed
> to qemu-xen-3.3-testing... is there an easy way I can pull that patch
> into my copy of 3.3-testing?

Yes.  Well, I did it as a series of commits to make better sense of it
in the history so it's more than one patch.  And there was a minor
conflict in the first one.  So to save you the bother I've done the
cherry pick and made such a branch myself.

You can fetch it with

  git-fetch \
    http://xenbits.xensource.com/git-http/staging/qemu-xen-3.3-testing.git \
    iwj.magic-ioport-3.3:iwj.magic-ioport-3.3

which will make a local branch called `iwj.magic-ioport-3.3'.  Or you
can pull it or whatever you like.  That's for git 1.4.4.4 as in Debian
etch.  They keep changing the UI, so if you have another version of
git feel free to ask me to double-check the relevant runes.

In case we need to update this protocol I'm happy to maintain this
topic branch for that purpose.

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-30 17:49             ` Ian Jackson
@ 2009-01-07 11:07               ` Steven Smith
  2009-01-07 11:52                 ` Ian Jackson
  0 siblings, 1 reply; 44+ messages in thread
From: Steven Smith @ 2009-01-07 11:07 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, xen-devel, James Harper, Keir Fraser


[-- Attachment #1.1: Type: text/plain, Size: 2222 bytes --]

> > I can't see any reason why the approach we take in our closed-source
> > drivers wouldn't work here as well.  I've attached the appropriate
> > patches from our product qemu patchqueue, tidied up and stripped of
> > the most obviously XenServer-specific bits, and made to apply to
> > current ioemu-remote.
> 
> I'm just in the process of applying this and I came across this:
> 
> @@ -792,6 +793,10 @@ static void raw_close(BlockDriverState *bs)
> ...
> +#ifndef CONFIG_STUBDOM
> +        /* Invalidate buffer cache for this device. */
> +        ioctl(s->fd, BLKFLSBUF, 0);
> +#endif
> 
> Does this mean that there is currently, in the Open Source qemu-dm
> tree, a cache coherency problem between emulated and PV disk paths ?
I think for correctness it's probably sufficient to issue a flush
whenever we switch between emulated and PV mode when the previous mode
had issued some writes.  As far as I'm aware, all of the existing
Windows drivers will boot off of emulated and then switch to PV mode
before any writes are issued, so we should be okay.  The switch from
PV to emulated which happens when you reboot a guest should be covered
by the BLKFLSBUF at the end of raw_open(), so I think we're okay there
as well.

So this hunk is probably, strictly speaking, redundant for all current
driver implementations.

Having said that, it's clearly more robust to not rely on the various
drivers being able to get in before any writes are issued, so it's
probably a good thing to have anyway.

> What about Linux platforms with existing PV drivers which do not
> engage in the blacklisting/disabling protocol ?
Yeah, things might go a bit funny if you write using emulated drivers
and then switch to PV ones without rebooting in between.  I think
that's probably a fairly unusual thing to do, but it's not really
invalid.

I'm not sure what the best way of fixing this would be.  You could
conceivably have blkback tell qemu to do a flush when the frontend
connects and before blkback starts doing IO, but that's kind of ugly.
Alternatively, we could modify blkfront so that it tells qemu to flush
devices when appropriate, but that won't help existing drivers.

Steven.

[-- Attachment #1.2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-31 16:22                 ` Ian Jackson
  2009-01-01 12:35                   ` James Harper
@ 2009-01-07 11:10                   ` Steven Smith
  1 sibling, 0 replies; 44+ messages in thread
From: Steven Smith @ 2009-01-07 11:10 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, James Harper, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 448 bytes --]

> > [stuff]
> Thanks for that.  I have applied your patches to the code, and checked
> in your commentary as a new document file after some editing.
Thanks.

> * Documented the registry location and allocation protocol
>   (qemu-xen-unstable's xenstore.c, and ask on xen-devel, respectively)
> 
> * Assigned 0xffff for `experimental' drivers in the hope that people
>   won't steal one of the existing assignments.
Good idea.

Steven.

[-- Attachment #1.2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2009-01-07 11:07               ` Steven Smith
@ 2009-01-07 11:52                 ` Ian Jackson
  2009-01-07 12:05                   ` James Harper
  2009-01-07 15:32                   ` Steven Smith
  0 siblings, 2 replies; 44+ messages in thread
From: Ian Jackson @ 2009-01-07 11:52 UTC (permalink / raw)
  To: Steven Smith; +Cc: xen-devel, Keir Fraser, James Harper

Steven Smith writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> > +#ifndef CONFIG_STUBDOM
> > +        /* Invalidate buffer cache for this device. */
> > +        ioctl(s->fd, BLKFLSBUF, 0);
> > +#endif
...
> So this hunk is probably, strictly speaking, redundant for all current
> driver implementations.

Right, good.

> Having said that, it's clearly more robust to not rely on the various
> drivers being able to get in before any writes are issued, so it's
> probably a good thing to have anyway.

Well, except that I would prefer not to carry a change in this part of
the qemu code unless it was actually necessary.

> > What about Linux platforms with existing PV drivers which do not
> > engage in the blacklisting/disabling protocol ?
>
> Yeah, things might go a bit funny if you write using emulated drivers
> and then switch to PV ones without rebooting in between.  I think
> that's probably a fairly unusual thing to do, but it's not really
> invalid.

Provided that whatever is managing this change (be it user or some
tool in the guest) knows that this is a multipath situation and to
take the appropriate steps.

> I'm not sure what the best way of fixing this would be.  You could
> conceivably have blkback tell qemu to do a flush when the frontend
> connects and before blkback starts doing IO, but that's kind of ugly.
> Alternatively, we could modify blkfront so that it tells qemu to flush
> devices when appropriate, but that won't help existing drivers.

The guest can instruct qemu to flush writes through the host buffer
cache by issuing an IDE FLUSH CACHE command, which translates to
fsync().

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: disable qemu PCI devices in HVM domains
  2009-01-07 11:52                 ` Ian Jackson
@ 2009-01-07 12:05                   ` James Harper
  2009-01-07 15:32                   ` Steven Smith
  1 sibling, 0 replies; 44+ messages in thread
From: James Harper @ 2009-01-07 12:05 UTC (permalink / raw)
  To: Ian Jackson, Steven Smith; +Cc: xen-devel, Keir Fraser

> > > What about Linux platforms with existing PV drivers which do not
> > > engage in the blacklisting/disabling protocol ?
> >
> > Yeah, things might go a bit funny if you write using emulated
> > drivers and then switch to PV ones without rebooting in between.  I think
> > that's probably a fairly unusual thing to do, but it's not really
> > invalid.
> 
> Provided that whatever is managing this change (be it user or some
> tool in the guest) knows that this is a multipath situation and to
> take the appropriate steps.
> 

Under Xen 3.3 I have seen windows decide that the qemu device and the pv
device (same underlying device) were the same and that one was a
multipath to another. I assume that this is because a write to one shows
up in the other.

Under Xen 3.2 this doesn't happen and things break, horribly.

James

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2009-01-07 11:52                 ` Ian Jackson
  2009-01-07 12:05                   ` James Harper
@ 2009-01-07 15:32                   ` Steven Smith
  1 sibling, 0 replies; 44+ messages in thread
From: Steven Smith @ 2009-01-07 15:32 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, xen-devel, Keir Fraser, James Harper


[-- Attachment #1.1: Type: text/plain, Size: 1254 bytes --]

> > > +#ifndef CONFIG_STUBDOM
> > > +        /* Invalidate buffer cache for this device. */
> > > +        ioctl(s->fd, BLKFLSBUF, 0);
> > > +#endif
> ...
> > So this hunk is probably, strictly speaking, redundant for all current
> > driver implementations.
> 
> Right, good.
> 
> > Having said that, it's clearly more robust to not rely on the various
> > drivers being able to get in before any writes are issued, so it's
> > probably a good thing to have anyway.
> 
> Well, except that I would prefer not to carry a change in this part of
> the qemu code unless it was actually necessary.
It's obviously always good to reduce skew with upstream, but in this
particular case it's a relatively small patch that eliminates a class
of potential bugs which are:

-- Nasty, in that they could lead to stuff on disk becoming corrupt.
-- Likely to be hard to reliably reproduce.
-- Difficult to demonstrate to be absent in all cases, because it's
   hard to be absolutely confident that the Windows bootloader doesn't
   write *something* under some obscure situation which we haven't
   thought of.

Personally, I'd feel a lot more confident with the flush present, but
if you really hate it then it can probably go.

Steven.

[-- Attachment #1.2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2008-12-17  9:21                   ` James Harper
@ 2009-01-22 16:13                     ` Pasi Kärkkäinen
  2009-01-22 17:40                       ` Ian Jackson
  0 siblings, 1 reply; 44+ messages in thread
From: Pasi Kärkkäinen @ 2009-01-22 16:13 UTC (permalink / raw)
  To: James Harper; +Cc: Steven Smith, xen-devel, Ian Jackson, Keir Fraser

On Wed, Dec 17, 2008 at 08:21:00PM +1100, James Harper wrote:
> > On 17/12/2008 08:27, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> > 
> > >> 3.3.1 is feature frozen at this point isn't it, which means 3.3.2
> > >> for these patches right?
> > >
> > > I'm reluctant to introduce anything into a stable branch which would
> > > prevent us migrating domains up or down the branch. If we are going
> > > to shove it in, it should happen now, for 3.3.1. My .1 releases tend
> > > to be generously proportioned, but beyond a .1 release I definitely
> > > don't take anything other than pure bug fixes.
> > 
> > The upshot really is: if you can agree on a qemu-dm patch this week
> > then fine (and the interface it implements is then locked and
> > unchangeable). Otherwise it's a 3.4.0 feature.
> > 
> 
> Unfortunately I don't think that gives me enough time to do testing...
> what's the timeframe for a 3.4.0 release?
> 

Was this patch merged already? 

This would be a good thing to have for Xen 3.3.2 too!

-- Pasi

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2009-01-22 16:13                     ` Pasi Kärkkäinen
@ 2009-01-22 17:40                       ` Ian Jackson
  2009-01-22 19:25                         ` Pasi Kärkkäinen
  0 siblings, 1 reply; 44+ messages in thread
From: Ian Jackson @ 2009-01-22 17:40 UTC (permalink / raw)
  To: Pasi Kärkkäinen
  Cc: Steven Smith, xen-devel, James Harper, Keir Fraser

Pasi Kärkkäinen writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> [ioport disable protocol patch]
>
> Was this patch merged already? 
> This would be a good thing to have for Xen 3.3.2 too!

It's in xen-unstable, yes, but not yet in the 3.3 series.

It does seem to have been very stable since it's gone in - no
complaints or requests for amendments.  So perhaps we should consider
it for a backport.

Ian.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2009-01-22 17:40                       ` Ian Jackson
@ 2009-01-22 19:25                         ` Pasi Kärkkäinen
  2009-03-02 12:07                           ` Pasi Kärkkäinen
  0 siblings, 1 reply; 44+ messages in thread
From: Pasi Kärkkäinen @ 2009-01-22 19:25 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, xen-devel, James Harper, Keir Fraser

On Thu, Jan 22, 2009 at 05:40:10PM +0000, Ian Jackson wrote:
> Pasi Kärkkäinen writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> > [ioport disable protocol patch]
> >
> > Was this patch merged already? 
> > This would be a good thing to have for Xen 3.3.2 too!
> 
> It's in xen-unstable, yes, but not yet in the 3.3 series.
> 
> It does seem to have been very stable since it's gone in - no
> complaints or requests for amendments.  So perhaps we should consider
> it for a backport.
> 

OK.

James Harper has a patch against Xen 3.3.1:

http://www.meadowcourt.org/downloads/qemu_disable_patches.zip

http://lists.xensource.com/archives/html/xen-users/2009-01/msg00139.html
http://lists.xensource.com/archives/html/xen-users/2009-01/msg00142.html
http://lists.xensource.com/archives/html/xen-users/2009-01/msg00147.html

Comments about merging that into xen-3.3-testing? 

-- Pasi

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: disable qemu PCI devices in HVM domains
  2009-01-22 19:25                         ` Pasi Kärkkäinen
@ 2009-03-02 12:07                           ` Pasi Kärkkäinen
  0 siblings, 0 replies; 44+ messages in thread
From: Pasi Kärkkäinen @ 2009-03-02 12:07 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Steven Smith, xen-devel, Keir Fraser, James Harper

On Thu, Jan 22, 2009 at 09:25:57PM +0200, Pasi Kärkkäinen wrote:
> On Thu, Jan 22, 2009 at 05:40:10PM +0000, Ian Jackson wrote:
> > Pasi Kärkkäinen writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> > > [ioport disable protocol patch]
> > >
> > > Was this patch merged already? 
> > > This would be a good thing to have for Xen 3.3.2 too!
> > 
> > It's in xen-unstable, yes, but not yet in the 3.3 series.
> > 
> > It does seem to have been very stable since it's gone in - no
> > complaints or requests for amendments.  So perhaps we should consider
> > it for a backport.
> > 
> 
> OK.
> 
> James Harper has a patch against Xen 3.3.1:
> 
> http://www.meadowcourt.org/downloads/qemu_disable_patches.zip
> 
> http://lists.xensource.com/archives/html/xen-users/2009-01/msg00139.html
> http://lists.xensource.com/archives/html/xen-users/2009-01/msg00142.html
> http://lists.xensource.com/archives/html/xen-users/2009-01/msg00147.html
> 
> Comments about merging that into xen-3.3-testing? 
> 

Btw, this backported patch is included in the latest Fedora Xen (3.3.1-6)
packages.

It would be nice to get it committed to xen-3.3-testing.hg as well.

-- Pasi

^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2009-03-02 12:07 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-12-11  3:10 disable qemu PCI devices in HVM domains James Harper
2008-12-11  9:06 ` Keir Fraser
2008-12-11  9:08   ` James Harper
2008-12-11  9:28   ` James Harper
2008-12-11 17:57     ` Ian Jackson
2008-12-11 22:06       ` James Harper
2008-12-13  9:33       ` James Harper
2008-12-13  9:55         ` Keir Fraser
2008-12-13 10:05           ` James Harper
2008-12-13 11:13             ` Keir Fraser
2008-12-13 11:31               ` James Harper
2008-12-15 17:10           ` Steven Smith
2008-12-15 23:58             ` James Harper
2008-12-16 10:20               ` Steven Smith
2008-12-16  3:01             ` James Harper
2008-12-16 10:27               ` Steven Smith
2008-12-16 11:06                 ` James Harper
2008-12-16 11:28                   ` Keir Fraser
2008-12-17  3:47             ` James Harper
2008-12-17  8:27               ` Keir Fraser
2008-12-17  8:32                 ` Keir Fraser
2008-12-17  9:21                   ` James Harper
2009-01-22 16:13                     ` Pasi Kärkkäinen
2009-01-22 17:40                       ` Ian Jackson
2009-01-22 19:25                         ` Pasi Kärkkäinen
2009-03-02 12:07                           ` Pasi Kärkkäinen
2008-12-17 10:36               ` Ian Jackson
2008-12-17 11:00                 ` Keir Fraser
2008-12-17 11:01                   ` James Harper
2008-12-19  0:15             ` James Harper
2008-12-19  9:51               ` Ian Jackson
2008-12-19  9:54                 ` James Harper
2008-12-19  9:57                   ` Ian Jackson
2008-12-19  5:47             ` James Harper
2008-12-21 19:56               ` Steven Smith
2008-12-31 16:22                 ` Ian Jackson
2009-01-01 12:35                   ` James Harper
2009-01-02 12:30                     ` Ian Jackson
2009-01-07 11:10                   ` Steven Smith
2008-12-30 17:49             ` Ian Jackson
2009-01-07 11:07               ` Steven Smith
2009-01-07 11:52                 ` Ian Jackson
2009-01-07 12:05                   ` James Harper
2009-01-07 15:32                   ` Steven Smith
