* PV driver domains and S3 sleep
@ 2010-09-16 11:44 Rafal Wojtczuk
  2010-09-16 11:52 ` Keir Fraser
  2010-09-20 20:45 ` PV driver domains and S3 sleep Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 11+ messages in thread
From: Rafal Wojtczuk @ 2010-09-16 11:44 UTC (permalink / raw)
  To: xen-devel

Hello,

The topic is self-explanatory: how to ensure that a PV driver domain correctly 
prepares its PCI devices for S3 sleep?
If I do "pm-suspend" in dom0 and the driver domain has active network interfaces,
suspend hangs the system. Yes, on this particular machine suspend works
fine when there is no driver domain.
It is possible to manually invoke scripts from /usr/lib64/pm-utils/sleep.d/ in the driver
domain. In the test case, "ifconfig wlan0 down" in the driver domain allows
the suspend to go smoothly. But generally, is that enough? The kernel device driver should
prepare the PCI device properly for S3, shouldn't it?
Would it be more proper to [somehow] notify the driver domain _kernel_ that we are
going to S3 (just like the dom0 kernel is notified), and let it execute all necessary actions
(including, but not only, launching the usermode pm-utils scripts), just like the dom0 kernel
does? Would that work at all, considering that the driver domain kernel has no access to
ACPI tables?
Currently, how are these issues taken care of in mainstream Xen?

Thanks in advance,
Rafal Wojtczuk


* Re: PV driver domains and S3 sleep
  2010-09-16 11:44 PV driver domains and S3 sleep Rafal Wojtczuk
@ 2010-09-16 11:52 ` Keir Fraser
  2010-09-16 19:04   ` Joanna Rutkowska
  2010-09-24 14:24   ` PCI hotplug problem [was: PV driver domains and S3 sleep] Rafal Wojtczuk
  2010-09-20 20:45 ` PV driver domains and S3 sleep Konrad Rzeszutek Wilk
  1 sibling, 2 replies; 11+ messages in thread
From: Keir Fraser @ 2010-09-16 11:52 UTC (permalink / raw)
  To: Rafal Wojtczuk, xen-devel

On 16/09/2010 12:44, "Rafal Wojtczuk" <rafal@invisiblethingslab.com> wrote:

> The topic is self-explanatory: how to ensure that a PV driver domain correctly
> prepares its PCI devices for S3 sleep?
> If I do "pm-suspend" in dom0, and the driver domain has active network
> interfaces, 
> suspend hangs the system. Yes, in case of this particular machine, suspend
> works
> fine when there is no driver domain.
> It is possible to manually invoke scripts from /usr/lib64/pm-utils/sleep.d/ in
> driver 
> domain. In the test case, "ifconfig down wlan0" in the driver domain allows
> the suspend to go smoothly. But generally, is it enough ? The kernel device
> driver should 
> prepare the PCI device properly for S3, shouldn't it ?
> Would it be more proper to [somehow] notify a driver domain _kernel_ that we
> are 
> going to S3 (just like dom0 kernel is notified), and let it execute all
> necessary actions
> (including, but not only, launching of usermode pm-utils scripts), just like
> dom0 kernel 
> does ? Would it work at all, considering that driver domain kernel has no
> access to 
> ACPI tables ? 
> Currently, how are these issues taken care of in the mainstream Xen?

I don't think it currently is handled. HVM driver domains (using VT-d or
equivalent) can be put into virtual S3. We would need an equivalent concept
for PV driver domains. Or for devices to be hot-unplugged from the driver
domain, and re-plugged on resume?

 -- Keir


* Re: PV driver domains and S3 sleep
  2010-09-16 11:52 ` Keir Fraser
@ 2010-09-16 19:04   ` Joanna Rutkowska
  2010-09-17  0:22     ` Jeremy Fitzhardinge
  2010-09-24 14:24   ` PCI hotplug problem [was: PV driver domains and S3 sleep] Rafal Wojtczuk
  1 sibling, 1 reply; 11+ messages in thread
From: Joanna Rutkowska @ 2010-09-16 19:04 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel, Rafal Wojtczuk



On 09/16/10 13:52, Keir Fraser wrote:
> On 16/09/2010 12:44, "Rafal Wojtczuk" <rafal@invisiblethingslab.com> wrote:
> 
>> The topic is self-explanatory: how to ensure that a PV driver domain correctly
>> prepares its PCI devices for S3 sleep?
>> If I do "pm-suspend" in dom0, and the driver domain has active network
>> interfaces, 
>> suspend hangs the system. Yes, in case of this particular machine, suspend
>> works
>> fine when there is no driver domain.
>> It is possible to manually invoke scripts from /usr/lib64/pm-utils/sleep.d/ in
>> driver 
>> domain. In the test case, "ifconfig down wlan0" in the driver domain allows
>> the suspend to go smoothly. But generally, is it enough ? The kernel device
>> driver should 
>> prepare the PCI device properly for S3, shouldn't it ?
>> Would it be more proper to [somehow] notify a driver domain _kernel_ that we
>> are 
>> going to S3 (just like dom0 kernel is notified), and let it execute all
>> necessary actions
>> (including, but not only, launching of usermode pm-utils scripts), just like
>> dom0 kernel 
>> does ? Would it work at all, considering that driver domain kernel has no
>> access to 
>> ACPI tables ? 
>> Currently, how are these issues taken care of in the mainstream Xen?
> 
> I don't think it currently is handled. HVM driver domains (using VT-d or
> equivalent) can be put into virtual S3. We would need an equivalent concept
> for PV driver domains. Or for devices to be hot-unplugged from the driver
> domain, and re-plugged on resume?
> 

But can you explain how Xen notifies dom0 when the system enters S3,
and whether the same mechanism could (easily) be used to do the same for a
PV driver domain?

Thanks,
joanna.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* Re: PV driver domains and S3 sleep
  2010-09-16 19:04   ` Joanna Rutkowska
@ 2010-09-17  0:22     ` Jeremy Fitzhardinge
  2010-09-24 14:30       ` Rafal Wojtczuk
  0 siblings, 1 reply; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-17  0:22 UTC (permalink / raw)
  To: Joanna Rutkowska; +Cc: xen-devel, Keir Fraser, Rafal Wojtczuk

 On 09/16/2010 12:04 PM, Joanna Rutkowska wrote:
> On 09/16/10 13:52, Keir Fraser wrote:
>> On 16/09/2010 12:44, "Rafal Wojtczuk" <rafal@invisiblethingslab.com> wrote:
>>
>>> The topic is self-explanatory: how to ensure that a PV driver domain correctly
>>> prepares its PCI devices for S3 sleep?
>>> If I do "pm-suspend" in dom0, and the driver domain has active network
>>> interfaces, 
>>> suspend hangs the system. Yes, in case of this particular machine, suspend
>>> works
>>> fine when there is no driver domain.
>>> It is possible to manually invoke scripts from /usr/lib64/pm-utils/sleep.d/ in
>>> driver 
>>> domain. In the test case, "ifconfig down wlan0" in the driver domain allows
>>> the suspend to go smoothly. But generally, is it enough ? The kernel device
>>> driver should 
>>> prepare the PCI device properly for S3, shouldn't it ?
>>> Would it be more proper to [somehow] notify a driver domain _kernel_ that we
>>> are 
>>> going to S3 (just like dom0 kernel is notified), and let it execute all
>>> necessary actions
>>> (including, but not only, launching of usermode pm-utils scripts), just like
>>> dom0 kernel 
>>> does ? Would it work at all, considering that driver domain kernel has no
>>> access to 
>>> ACPI tables ? 
>>> Currently, how are these issues taken care of in the mainstream Xen?
>> I don't think it currently is handled. HVM driver domains (using VT-d or
>> equivalent) can be put into virtual S3. We would need an equivalent concept
>> for PV driver domains. Or for devices to be hot-unplugged from the driver
>> domain, and re-plugged on resume?
>>
> But, can you explain how Xen notifies Dom0 when the system enters S3,
> and if the same mechanism could be (easily) used to do the same for a
> driver PV domain?

The dom0 kernel initiates S3 itself (possibly in response to a
lid-switch or the like), so it knows it is going into S3.  As part of
that it can do something to notify all the other domains that they in
turn need to do something.

I think the simplest thing to do is just do a regular PV save/restore on
the domains, but without needing to save their pages to disk.  That way
their regular device model suspend/resume will do the right thing to the
hardware devices.  The only change that may be needed is to make sure
that the normal PCI suspend/resume bus calls are done on the Xen
save/restore path.

Or perhaps the domains will need to know whether it's a normal
save/restore, vs S3, vs S5.  In that case we could add a xenstore key
which they could watch to know they need to do something.  But it would
need a bit of thought to handshake the whole process.

    J


* Re: PV driver domains and S3 sleep
  2010-09-16 11:44 PV driver domains and S3 sleep Rafal Wojtczuk
  2010-09-16 11:52 ` Keir Fraser
@ 2010-09-20 20:45 ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-09-20 20:45 UTC (permalink / raw)
  To: Rafal Wojtczuk; +Cc: xen-devel

On Thu, Sep 16, 2010 at 01:44:24PM +0200, Rafal Wojtczuk wrote:
> Hello,
> 
> The topic is self-explanatory: how to ensure that a PV driver domain correctly 
> prepares its PCI devices for S3 sleep?
> If I do "pm-suspend" in dom0, and the driver domain has active network interfaces, 
> suspend hangs the system. Yes, in case of this particular machine, suspend works
> fine when there is no driver domain. 
> It is possible to manually invoke scripts from /usr/lib64/pm-utils/sleep.d/ in driver 
> domain. In the test case, "ifconfig down wlan0" in the driver domain allows
> the suspend to go smoothly. But generally, is it enough ? The kernel device driver should 

The pci_disable calls that are made do put the devices in the D3 (or is it D0?) state.
However, those calls are not made when you do 'ifconfig X down' (I think). You need
to do 'rmmod ipw2100' to trigger those calls, or trigger the driver's suspend call
invocation.

The driver's suspend call invocation is a twisty maze of dependencies (i.e., you must
first suspend the driver, and only after that can you suspend the PCI bus).

S3 suspend on Linux also freezes user space, cgroups, and a whole bunch
of other stuff. But you don't care about that.

What I think you care about is putting the device into the appropriate D state.

> prepare the PCI device properly for S3, shouldn't it ?  
> Would it be more proper to [somehow] notify a driver domain _kernel_ that we are 
> going to S3 (just like dom0 kernel is notified), and let it execute all necessary actions 
> (including, but not only, launching of usermode pm-utils scripts), just like dom0 kernel 
> does ? Would it work at all, considering that driver domain kernel has no access to 
> ACPI tables ? 

I think that depends on the PCI device. In the laptop world, the wireless card can
do some weird stuff when you press Ctrl-F5, for example - it would invoke some
ACPI code (well, the Linux kernel AML interpreter would invoke it), which then
disables/unloads the driver as appropriate. With the DomU having no ACPI support, this means
that Dom0 would yank the PCI device away from the DomU - which, considering
that we are using pciback as the owner, would mean you could pass a request to the
DomU saying: "Hey, reconfigure now. Device going away." And I think that might
actually work today.

But back to putting the device into the appropriate D state. You could
pass the DomU a call akin to doing 'echo mem > /sys/power/state',
which should do the appropriate PCI move.

> Currently, how are these issues taken care of in the mainstream Xen? 

Never explored, I fear.
> 
> Thanks in advance,
> Rafal Wojtczuk


* PCI hotplug problem [was: PV driver domains and S3 sleep]
  2010-09-16 11:52 ` Keir Fraser
  2010-09-16 19:04   ` Joanna Rutkowska
@ 2010-09-24 14:24   ` Rafal Wojtczuk
  2010-09-27 17:07     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 11+ messages in thread
From: Rafal Wojtczuk @ 2010-09-24 14:24 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel

On Thu, Sep 16, 2010 at 12:52:02PM +0100, Keir Fraser wrote:
> > The topic is self-explanatory: how to ensure that a PV driver domain correctly
> > prepares its PCI devices for S3 sleep?
[cut]
> > Currently, how are these issues taken care of in the mainstream Xen?

> I don't think it currently is handled. HVM driver domains (using VT-d or
> equivalent) can be put into virtual S3. We would need an equivalent concept
> for PV driver domains. Or for devices to be hot-unplugged from the driver
> domain, and re-plugged on resume?

The idea of using PCI hotplug is nice; however, PCI hotplug does not seem to
work with this setup (xen-3.4.3, all 64-bit). Hot-unplug works, but the
subsequent hotplug makes the driver domain kernel spit out the following:

Sep 24 09:46:01 localhost kernel: [  113.045927] pcifront pci-0: Rescanning
PCI Frontend Bus 0000:00
Sep 24 09:46:15 localhost kernel: [  126.843990] pcifront pci-0: Rescanning
PCI Frontend Bus 0000:00
Sep 24 09:46:15 localhost kernel: [  126.846217] pcifront pci-0: New device
on 0000:00:01.00 found.
Sep 24 09:46:15 localhost kernel: [  126.846523] iwlagn 0000:00:01.0: device
not available (can't reserve [mem 0xf8000000-0xf8001fff 64bit])

^C
[root@localhost ~]# cat /proc/iomem 
f6000000-f600ffff : 0000:00:00.0
  f6000000-f600ffff : tg3
[root@localhost ~]# lspci
00:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit
Ethernet PCI Express (rev 02)
00:01.0 Network controller: Intel Corporation PRO/Wireless 4965 AG or AGN
[Kedron] Network Connection (rev 61)

Nothing suspicious in xend, Xen and dom0 logs.

The domU and dom0 kernels are the same, 2.6.34.1-10.xenlinux (SUSE patches
for 2.6.34.1).

With old pvops (2.6.31.9-1.pvops0) in domU, the message on the hot-plug is similar:
Sep 24 09:50:40 localhost kernel: pcifront pci-0: Rescanning PCI Frontend
Bus 0000:00
Sep 24 09:50:51 localhost kernel: pcifront pci-0: Rescanning PCI Frontend
Bus 0000:00
Sep 24 09:50:51 localhost kernel: pcifront pci-0: New device on
0000:00:01.00 found.
Sep 24 09:50:51 localhost kernel: iwlagn 0000:00:01.0: device not available
because of BAR 0 [0xf8000000-0xf8001fff] collisions

Others seem to experience similar problems (e.g.
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/80766). Does
anyone know the solution?

Regards,
Rafal Wojtczuk


* Re: PV driver domains and S3 sleep
  2010-09-17  0:22     ` Jeremy Fitzhardinge
@ 2010-09-24 14:30       ` Rafal Wojtczuk
  2010-09-24 18:06         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 11+ messages in thread
From: Rafal Wojtczuk @ 2010-09-24 14:30 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: xen-devel, Keir Fraser, Joanna Rutkowska

On Thu, Sep 16, 2010 at 05:22:50PM -0700, Jeremy Fitzhardinge wrote:
> >>> The topic is self-explanatory: how to ensure that a PV driver domain correctly
> >>> prepares its PCI devices for S3 sleep?
[cut]
> I think the simplest thing to do is just do a regular PV save/restore on
> the domains, but without needing to save their pages to disk.  That way

I suspect suspend/resume of the driver domain will kill established net
backend/frontend connections? So we would also have to network-detach all the VMs'
interfaces and reattach them afterwards. It does not look pretty.

Regards,
Rafal Wojtczuk


* Re: PV driver domains and S3 sleep
  2010-09-24 14:30       ` Rafal Wojtczuk
@ 2010-09-24 18:06         ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-24 18:06 UTC (permalink / raw)
  To: Rafal Wojtczuk; +Cc: xen-devel, Keir Fraser, Joanna Rutkowska

 On 09/24/2010 07:30 AM, Rafal Wojtczuk wrote:
> On Thu, Sep 16, 2010 at 05:22:50PM -0700, Jeremy Fitzhardinge wrote:
>>>>> The topic is self-explanatory: how to ensure that a PV driver domain correctly
>>>>> prepares its PCI devices for S3 sleep?
> [cut]
>> I think the simplest thing to do is just do a regular PV save/restore on
>> the domains, but without needing to save their pages to disk.  That way
> I suspect suspend/resume of the driver domain will kill established net backend/frontend 
> connections ? So we also would have to network-detach all VMs interfaces, and
> reattach. It does not look pretty. 
>

Not generally.  The blkfront and netfront drivers don't really do
anything on a save; they certainly don't change the xenbus connection
state.  The normal mode of operation for save/restore or migration is
that, after resuming, the frontends suddenly find their backends are no
longer connected and will quietly attempt to reconnect before going on,
resulting in just a little I/O hiccup.

In this case, the backends will still be there and will remain in a
connected state, so the frontends won't even notice after resuming.

However, the pcifront driver would need to implement the suspend method
and make sure the PCI bus does its normal suspend operation.

My main concern is that I'm not sure how the handshake with dom0 would
work so that it knows the suspend is finished - oh, I guess the normal
way: waiting for the domain to suspend itself (or time out mysteriously).

But it might just be better to add a pciback xenstore key to tell
pcifront to do whatever's required for an ACPI suspend (S3, S5 or
whatever), and have a corresponding pcifront xenstore key to handshake
that the suspend has completed (or failed).  At least that way you could
get some useful diagnostics on failure.

    J


* Re: PCI hotplug problem [was: PV driver domains and S3 sleep]
  2010-09-24 14:24   ` PCI hotplug problem [was: PV driver domains and S3 sleep] Rafal Wojtczuk
@ 2010-09-27 17:07     ` Konrad Rzeszutek Wilk
  2010-10-01 14:24       ` PCI hotplug problem Rafal Wojtczuk
  0 siblings, 1 reply; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-09-27 17:07 UTC (permalink / raw)
  To: Rafal Wojtczuk; +Cc: xen-devel, Keir Fraser

On Fri, Sep 24, 2010 at 04:24:58PM +0200, Rafal Wojtczuk wrote:
> On Thu, Sep 16, 2010 at 12:52:02PM +0100, Keir Fraser wrote:
> > > The topic is self-explanatory: how to ensure that a PV driver domain correctly
> > > prepares its PCI devices for S3 sleep?
> [cut]
> > > Currently, how are these issues taken care of in the mainstream Xen?
> 
> > I don't think it currently is handled. HVM driver domains (using VT-d or
> > equivalent) can be put into virtual S3. We would need an equivalent concept
> > for PV driver domains. Or for devices to be hot-unplugged from the driver
> > domain, and re-plugged on resume?
> 
> The idea of using PCI hotplug is nice, however, PCI hotplug does not seem to
> work with the used setup (xen-3.4.3, all 64bit). Hot-unplug works, however the 
> following hotplug makes the driver domain kernel spit out the following:
> 
> Sep 24 09:46:01 localhost kernel: [  113.045927] pcifront pci-0: Rescanning
> PCI Frontend Bus 0000:00
> Sep 24 09:46:15 localhost kernel: [  126.843990] pcifront pci-0: Rescanning
> PCI Frontend Bus 0000:00
> Sep 24 09:46:15 localhost kernel: [  126.846217] pcifront pci-0: New device
> on 0000:00:01.00 found.
> Sep 24 09:46:15 localhost kernel: [  126.846523] iwlagn 0000:00:01.0: device
> not available (can't reserve [mem 0xf8000000-0xf8001fff 64bit])
> 
> ^C
> [root@localhost ~]# cat /proc/iomem 
> f6000000-f600ffff : 0000:00:00.0
>   f6000000-f600ffff : tg3
> [root@localhost ~]# lspci
> 00:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit
> Ethernet PCI Express (rev 02)
> 00:01.0 Network controller: Intel Corporation PRO/Wireless 4965 AG or AGN
> [Kedron] Network Connection (rev 61)
> 
> Nothing suspicious in xend, Xen and dom0 logs.
> 
> The domU and dom0 kernels are the same, 2.6.34.1-10.xenlinux (SUSE patches
> for 2.6.34.1).
> 
> With old pvops (2.6.31.9-1.pvops0) in domU, the message on the hot-plug is similar:
> Sep 24 09:50:40 localhost kernel: pcifront pci-0: Rescanning PCI Frontend
> Bus 0000:00
> Sep 24 09:50:51 localhost kernel: pcifront pci-0: Rescanning PCI Frontend
> Bus 0000:00
> Sep 24 09:50:51 localhost kernel: pcifront pci-0: New device on
> 0000:00:01.00 found.
> Sep 24 09:50:51 localhost kernel: iwlagn 0000:00:01.0: device not available
> because of BAR 0 [0xf8000000-0xf8001fff] collisions
> 
> Others seem to experience similar problems (e.g.
> http://permalink.gmane.org/gmane.comp.emulators.xen.devel/80766). Does
> anyone know the solution ?

I had an off-list conversation with that fellow and spun out
a bunch of patches to fix his issue.

You need these patches:
Konrad Rzeszutek Wilk (3):
      xen-pcifront: Enforce scanning of device functions on initial execution.
      xen-pcifront: Claim PCI resources before going live.
      xen-pcifront: Don't race with udev when discovering new devices.

I think they are in Jeremy's upstream tree... ah, right, you guys aren't using
Jeremy's tree.

Get them from: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git

pv/pcifront-2.6.34

You might also want to update your pciback driver (pv/pciback-2.6.32).
> 
> Regards,
> Rafal Wojtczuk


* Re: PCI hotplug problem
  2010-09-27 17:07     ` Konrad Rzeszutek Wilk
@ 2010-10-01 14:24       ` Rafal Wojtczuk
  2010-10-01 15:23         ` Jan Beulich
  0 siblings, 1 reply; 11+ messages in thread
From: Rafal Wojtczuk @ 2010-10-01 14:24 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Keir Fraser

On Mon, Sep 27, 2010 at 01:07:05PM -0400, Konrad Rzeszutek Wilk wrote:

> > The idea of using PCI hotplug is nice, however, PCI hotplug does not seem to
> > work with the used setup (xen-3.4.3, all 64bit). Hot-unplug works, however the 
> > following hotplug makes the driver domain kernel spit out the following:
[cut]
> > Sep 24 09:46:15 localhost kernel: [  126.846523] iwlagn 0000:00:01.0: device
> > not available (can't reserve [mem 0xf8000000-0xf8001fff 64bit])
[cut]
> > Others seem to experience similar problems (e.g.
> > http://permalink.gmane.org/gmane.comp.emulators.xen.devel/80766). Does
> > anyone know the solution ?
> 
> I had an off-mailing list conversation with that fellow and I spun out
> a bunch of patches to fix his issue.
> 
> You need these patches:
> Konrad Rzeszutek Wilk (3):
>       xen-pcifront: Enforce scanning of device functions on initial execution.
>       xen-pcifront: Claim PCI resources before going live.
>       xen-pcifront: Don't race with udev when discovering new devices.
> 
> I think they are in Jeremy's upstream tree.. ah, right you guys aren't using
> Jeremy's tree.
> 
> Get them from: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> 
> pv/pcifront-2.6.34

Indeed these patches help, thank you.
There is one more problem with the linux-2.6.18-xen.hg pcifront (which affects
derived code, e.g. the openSUSE kernel, too): unbind_from_irqhandler() is
mistakenly passed the evtchn instead of the irq. Compare line 68 of
http://xenbits.xensource.com/linux-2.6.18-xen.hg?file/a66a7c64b1d0/drivers/xen/pcifront/xenbus.c
with the pvops equivalent:
http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=blob;f=drivers/pci/xen-pcifront.c;h=10868aeae818d69980b8519f8a77b38d6ab58a4c;hb=HEAD#l758

The following patch helps.
Regards,
Rafal Wojtczuk


unbind_from_irqhandler takes irq, not evtchn, as its first argument.

Signed-off-by: Rafal Wojtczuk <rafal@invisiblethingslab.com>
--- linux-2.6.34.1/drivers/xen/pcifront/xenbus.c.orig   2010-09-29 16:47:39.961674359 +0200
+++ linux-2.6.34.1/drivers/xen/pcifront/xenbus.c        2010-09-29 16:47:49.458675391 +0200
@@ -61,7 +61,7 @@ static void free_pdev(struct pcifront_de

        /*For PCIE_AER error handling job*/
        flush_scheduled_work();
-       unbind_from_irqhandler(pdev->evtchn, pdev);
+       unbind_from_irqhandler(irq_from_evtchn(pdev->evtchn), pdev);

        if (pdev->evtchn != INVALID_EVTCHN)
                xenbus_free_evtchn(pdev->xdev, pdev->evtchn);


* Re: PCI hotplug problem
  2010-10-01 14:24       ` PCI hotplug problem Rafal Wojtczuk
@ 2010-10-01 15:23         ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2010-10-01 15:23 UTC (permalink / raw)
  To: Rafal Wojtczuk; +Cc: xen-devel, Keir Fraser, Konrad Rzeszutek Wilk

>>> On 01.10.10 at 16:24, Rafal Wojtczuk <rafal@invisiblethingslab.com> wrote:
> There is one more problem with the linux-2.6.18-xen.hg pcifront (that affect
> derived code, e.g. OpenSUSE kernel, too). unbind_from_irqhandler() is
> mistakenly passed evtchn, instead of irq. Compare line 68 of
> http://xenbits.xensource.com/linux-2.6.18-xen.hg?file/a66a7c64b1d0/drivers/xen 
> /pcifront/xenbus.c
> with pvops equivalent
> http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=blob;f=drivers/pc 
> i/xen-pcifront.c;h=10868aeae818d69980b8519f8a77b38d6ab58a4c;hb=HEAD#l758
> 
> The following patch helps.

Except there is no irq_from_evtchn() in the original tree. I'll post a
better, more complete patch later.

Jan

> Regards,
> Rafal Wojtczuk
> 
> 
> unbind_from_irqhandler takes irq, not evtchn, as its first argument.
> 
> Signed-off-by: Rafal Wojtczuk <rafal@invisiblethingslab.com>
> --- linux-2.6.34.1/drivers/xen/pcifront/xenbus.c.orig   2010-09-29 
> 16:47:39.961674359 +0200
> +++ linux-2.6.34.1/drivers/xen/pcifront/xenbus.c        2010-09-29 
> 16:47:49.458675391 +0200
> @@ -61,7 +61,7 @@ static void free_pdev(struct pcifront_de
> 
>         /*For PCIE_AER error handling job*/
>         flush_scheduled_work();
> -       unbind_from_irqhandler(pdev->evtchn, pdev);
> +       unbind_from_irqhandler(irq_from_evtchn(pdev->evtchn), pdev);
> 
>         if (pdev->evtchn != INVALID_EVTCHN)
>                 xenbus_free_evtchn(pdev->xdev, pdev->evtchn);


Thread overview: 11+ messages
2010-09-16 11:44 PV driver domains and S3 sleep Rafal Wojtczuk
2010-09-16 11:52 ` Keir Fraser
2010-09-16 19:04   ` Joanna Rutkowska
2010-09-17  0:22     ` Jeremy Fitzhardinge
2010-09-24 14:30       ` Rafal Wojtczuk
2010-09-24 18:06         ` Jeremy Fitzhardinge
2010-09-24 14:24   ` PCI hotplug problem [was: PV driver domains and S3 sleep] Rafal Wojtczuk
2010-09-27 17:07     ` Konrad Rzeszutek Wilk
2010-10-01 14:24       ` PCI hotplug problem Rafal Wojtczuk
2010-10-01 15:23         ` Jan Beulich
2010-09-20 20:45 ` PV driver domains and S3 sleep Konrad Rzeszutek Wilk
