* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
       [not found] <FF3E5629F9FF6745ADE4EE690CEC81AD2ACE05F0@SJEXCHMB09.corp.ad.broadcom.com>
@ 2013-06-26  9:10 ` Ian Campbell
  2013-06-26 15:09   ` Eric Trudeau
  0 siblings, 1 reply; 25+ messages in thread
From: Ian Campbell @ 2013-06-26  9:10 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: xen-devel

(moving to xen-devel, xen-arm is for the older PV ARM port)

On Tue, 2013-06-25 at 23:59 +0000, Eric Trudeau wrote:
> Hi, I am trying to build the XEN tools for our port of XEN to our
> Cortex A15-based platform.
> 
> I am using the repo at git://xenbits.xenproject.org/xen.git to
> cross-compile the tools into our rootfs.

Which branch/changeset are you using?

I've heard that the recent XSA-55 fixes have broken guest loading on
ARM, but your problems do not seem to be related to that (I mention it
because it might be something you hit after we work through your current
issues)

> This repo is what we are using for our Hypervisor development also.
> 
> When building the tools, I found that libaio does not build for ARM
> since it was missing tools/libaio/src/syscall-arm.h.
> 
> I found a version on the web so that I could continue.

Are you picking up the libaio in tools/libaio? That is an ancient
snapshot used to support older distros which don't ship libaio
themselves. It really shouldn't even be considered for ARM (this is a
bug in our build system IMHO). If you install the libaio (and headers
etc) from your distro then configure will detect this and not use the
embedded copy. (The embedded copy is on my hitlist for cruft removal for
4.4, but we are frozen for 4.3 right now so I can't touch it)

FWIW I wrote up some details of how I cross compile with Debian/Ubuntu
at
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/CrossCompiling which includes lists of packages to install etc.

> I also found that xc_dom_elfloader.c’s function, xc_dom_guest_type(),
> did not know about the EM_ARM machine type.

You shouldn't be hitting the elf loader paths, you should instead use
the zImage version of the kernel which will hit the
xc_dom_armzimageloader.c code paths.

We did at one point in the very early days of the port support loading
ELF files via a quick hack of a tool ("xcbuild") but that has been
superseded by the proper toolstack support and the use of zImage.
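
For reference, a minimal xl guest configuration of the sort the wiki describes, using a zImage so that the xc_dom_armzimageloader.c path (rather than the ELF loader) is taken. This is a sketch: the paths and names are placeholders, not taken from this thread.

```
# Minimal ARM guest config sketch; the kernel must be a zImage so the
# zImage loader, not the ELF loader, handles it. Paths are examples.
kernel = "/root/guest-zImage"
name   = "guest0"
memory = 128
vcpus  = 1
disk   = [ 'phy:/dev/loop0,xvda,w' ]
```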

> Am I using the wrong repo for development of the tools for ARM?


> 
> If not, the wiki seems to indicate that xl is working on ARM to
> create guest machines.

Modulo any bugs introduced by XSA-55 we believe it is.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-26  9:10 ` [XenARM] XEN tools for ARM with Virtualization Extensions Ian Campbell
@ 2013-06-26 15:09   ` Eric Trudeau
  2013-06-26 16:45     ` Ian Campbell
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-06-26 15:09 UTC (permalink / raw)
  To: Ian Campbell, xen-devel

(moving to xen-devel, xen-arm is for the older PV ARM port)

On Tue, 2013-06-25 at 23:59 +0000, Eric Trudeau wrote:
> Hi, I am trying to build the XEN tools for our port of XEN to our
> Cortex A15-based platform.
> 
> I am using the repo at git://xenbits.xenproject.org/xen.git to
> cross-compile the tools into our rootfs.

Which branch/changeset are you using?

[EPT] We are currently based on commit "8ee5e39 QEMU_TAG update".

I've heard that the recent XSA-55 fixes have broken guest loading on
ARM, but your problems do not seem to be related to that (I mention it
because it might be something you hit after we work through your current
issues)

> This repo is what we are using for our Hypervisor development also.
> 
> When building the tools, I found that libaio does not build for ARM
> since it was missing tools/libaio/src/syscall-arm.h.
> 
> I found a version on the web so that I could continue.

Are you picking up the libaio in tools/libaio? That is an ancient
snapshot used to support older distros which don't ship libaio
themselves. It really shouldn't even be considered for ARM (this is a
bug in our build system IMHO). If you install the libaio (and headers
etc) from your distro then configure will detect this and not use the
embedded copy. (The embedded copy is on my hitlist for cruft removal for
4.4, but we are frozen for 4.3 right now so I can't touch it)

FWIW I wrote up some details of how I cross compile with Debian/Ubuntu
at
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/CrossCompiling which includes lists of packages to install etc.

[EPT] We have our own rootfs since this is an embedded platform with many custom devices.  I will try to debug
the issues with the in-tree tools/libaio unless I find that the issue lies with that module itself.

> I also found that xc_dom_elfloader.c’s function, xc_dom_guest_type(),
> did not know about the EM_ARM machine type.

You shouldn't be hitting the elf loader paths, you should instead use
the zImage version of the kernel which will hit the
xc_dom_armzimageloader.c code paths.

We did at one point in the very early days of the port support loading
ELF files via a quick hack of a tool ("xcbuild") but that has been
superseded by the proper toolstack support and the use of zImage.

[EPT] I noticed when I used my zImage that it seemed to check for a gzip magic number so I thought maybe I should not use it.
I will go back to using the zImage since I believe the issue is the same.  It appears that virt_base and virt_alloc_end are invalid.
I will try to find out how these are supposed to be determined.  Any tips would be welcome. :)

> Am I using the wrong repo for development of the tools for ARM?


> 
> If not, the wiki seems to indicate that xl is working on ARM to
> create guest machines.

Modulo any bugs introduced by XSA-55 we believe it is.

Ian.





* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-26 15:09   ` Eric Trudeau
@ 2013-06-26 16:45     ` Ian Campbell
  2013-06-27 16:21       ` Eric Trudeau
  0 siblings, 1 reply; 25+ messages in thread
From: Ian Campbell @ 2013-06-26 16:45 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: xen-devel

Is there any chance you could configure your MUA for ">" style quoting?

On Wed, 2013-06-26 at 15:09 +0000, Eric Trudeau wrote:
> (moving to xen-devel, xen-arm is for the older PV ARM port)
> 
> On Tue, 2013-06-25 at 23:59 +0000, Eric Trudeau wrote:
> > Hi, I am trying to build the XEN tools for our port of XEN to our
> > Cortex A15-based platform.
> > 
> > I am using the repo at git://xenbits.xenproject.org/xen.git to
> > cross-compile the tools into our rootfs.
> 
> Which branch/changeset are you using?
> 
> [EPT] We are currently based on commit "8ee5e39 QEMU_TAG update".

You may want to pull up to 47a1d0b48ce3010c1b119541353cb1a7b5ed5de9 to
get the fix for the XSA-55 regression I mentioned.

> I've heard that the recent XSA-55 fixes have broken guest loading on
> ARM, but your problems do not seem to be related to that (I mention it
> because it might be something you hit after we work through your current
> issues)
> 
> > This repo is what we are using for our Hypervisor development also.
> > 
> > When building the tools, I found that libaio does not build for ARM
> > since it was missing tools/libaio/src/syscall-arm.h.
> > 
> > I found a version on the web so that I could continue.
> 
> Are you picking up the libaio in tools/libaio? That is an ancient
> snapshot used to support older distros which don't ship libaio
> themselves. It really shouldn't even be considered for ARM (this is a
> bug in our build system IMHO). If you install the libaio (and headers
> etc) from your distro then configure will detect this and not use the
> embedded copy. (The embedded copy is on my hitlist for cruft removal for
> 4.4, but we are frozen for 4.3 right now so I can't touch it)
> 
> FWIW I wrote up some details of how I cross compile with Debian/Ubuntu
> at
> http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/CrossCompiling which includes lists of packages to install etc.
> 
> [EPT] We have our own rootfs since this is an embedded platform with many custom devices.  I will try to debug the issues with
> the tools/libaio unless I find that the issue seems to be with that module.

I would highly (*very* highly) recommend that you integrate a more
recent upstream libaio into your rootfs and cross environment rather
than using the tools/libaio source which comes from Xen.

The homepage on kernel.org seems dead, but you could use the Debian upstream
tarball:
http://ftp.de.debian.org/debian/pool/main/liba/libaio/libaio_0.3.109.orig.tar.gz
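
Cross-building that tarball for ARM and dropping it into a cross rootfs might look roughly like the following. This is a sketch: the cross-compiler triplet, rootfs path, and make variables are assumptions about a typical setup, not commands from this thread.

```sh
# Sketch: cross-compile the Debian libaio tarball for ARM and install
# it into a cross rootfs. Triplet and paths are examples.
wget http://ftp.de.debian.org/debian/pool/main/liba/libaio/libaio_0.3.109.orig.tar.gz
tar xzf libaio_0.3.109.orig.tar.gz
cd libaio-0.3.109
make CC=arm-linux-gnueabihf-gcc
make install DESTDIR=/path/to/rootfs prefix=/usr
```

With the library and headers present in the rootfs, Xen's configure should detect them and skip the embedded tools/libaio copy.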

> > I also found that xc_dom_elfloader.c’s function, xc_dom_guest_type(),
> > did not know about the EM_ARM machine type.
> 
> You shouldn't be hitting the elf loader paths, you should instead use
> the zImage version of the kernel which will hit the
> xc_dom_armzimageloader.c code paths.
> 
> We did at one point in the very early days of the port support loading
> ELF files via a quick hack of a tool ("xcbuild") but that has been
> superseded by the proper toolstack support and the use of zImage.
> 
> [EPT] I noticed when I used my zImage that it seemed to check for a gzip magic number so I thought maybe I should not use it.
> I will go back to using the zImage since I believe the issue is the same.  It appears that virt_base and virt_alloc_end are invalid.
> I will try to find out how these are supposed to be determined.  Any tips would be welcome. :)

This sounds like the XSA-55 regression I referred to. If you pull up to
the commit I mention above (which is currently in staging and will
propagate to master after automated test, hopefully overnight) then this
problem should go away.

Ian.




* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-26 16:45     ` Ian Campbell
@ 2013-06-27 16:21       ` Eric Trudeau
  2013-06-27 16:34         ` Ian Campbell
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-06-27 16:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, June 26, 2013 12:45 PM
> To: Eric Trudeau
> Cc: xen-devel
> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> 
> Is there any chance you could configure your MUA for ">" style quoting?

Done.

> 
> On Wed, 2013-06-26 at 15:09 +0000, Eric Trudeau wrote:
> > (moving to xen-devel, xen-arm is for the older PV ARM port)
> >
> > On Tue, 2013-06-25 at 23:59 +0000, Eric Trudeau wrote:
> > > Hi, I am trying to build the XEN tools for our port of XEN to our
> > > Cortex A15-based platform.
> > >
> > > I am using the repo at git://xenbits.xenproject.org/xen.git to
> > > cross-compile the tools into our rootfs.
> >
> > Which branch/changeset are you using?
> >
> > [EPT] We are currently based on commit "8ee5e39 QEMU_TAG update".
> 
> You may want to pull up to 47a1d0b48ce3010c1b119541353cb1a7b5ed5de9 to
> get the fix for the XSA-55 regression I mentioned.
> 

See below.

> > I've heard that the recent XSA-55 fixes have broken guest loading on
> > ARM, but your problems do not seem to be related to that (I mention it
> > because it might be something you hit after we work through your current
> > issues)
> >
> > > This repo is what we are using for our Hypervisor development also.
> > >
> > > When building the tools, I found that libaio does not build for ARM
> > > since it was missing tools/libaio/src/syscall-arm.h.
> > >
> > > I found a version on the web so that I could continue.
> >
> > Are you picking up the libaio in tools/libaio? That is an ancient
> > snapshot used to support older distros which don't ship libaio
> > themselves. It really shouldn't even be considered for ARM (this is a
> > bug in our build system IMHO). If you install the libaio (and headers
> > etc) from your distro then configure will detect this and not use the
> > embedded copy. (The embedded copy is on my hitlist for cruft removal for
> > 4.4, but we are frozen for 4.3 right now so I can't touch it)
> >
> > FWIW I wrote up some details of how I cross compile with Debian/Ubuntu
> > at
> >
> http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/CrossCompil
> ing which includes lists of packages to install etc.
> >
> > [EPT] We have our own rootfs since this is an embedded platform with many
> custom devices.  I will try to debug the issues with
> > the tools/libaio unless I find that the issue seems to be with that module.
> 
> I would highly (*very* highly) recommend that you integrate a more
> recent upstream libaio into your rootfs and cross environment rather
> than using the tools/libaio source which comes from Xen.
> 
> The homepage on kernel.org seems dead but you could use Debian upstream
> tarball:
> http://ftp.de.debian.org/debian/pool/main/liba/libaio/libaio_0.3.109.orig.tar.gz
> 

Done.

> > > I also found that xc_dom_elfloader.c’s function, xc_dom_guest_type(),
> > > did not know about the EM_ARM machine type.
> >
> > You shouldn't be hitting the elf loader paths, you should instead use
> > the zImage version of the kernel which will hit the
> > xc_dom_armzimageloader.c code paths.
> >
> > We did at one point in the very early days of the port support loading
> > ELF files via a quick hack of a tool ("xcbuild") but that has been
> > superseded by the proper toolstack support and the use of zImage.
> >
> > [EPT] I noticed when I used my zImage that it seemed to check for a gzip
> magic number so I thought maybe I should not use it.
> > I will go back to using the zImage since I believe the issue is the same.  It
> appears that virt_base and virt_alloc_end are invalid.
> > I will try to find out how these are supposed to be determined.  Any tips would
> be welcome. :)
> 
> This sounds like the XSA-55 regression I referred to. If you pull up to
> the commit I mention above (which is currently in staging and will
> propagate to master after automated test, hopefully overnight) then this
> problem should go away.
> 

I rebased to the XSA-55 commit and now I can create the guest.  I am able to debug a kernel init panic.

Thanks so much.

My next question is why does the wiki for ARM give an example dom.cfg that does not have builder=hvm?
I tried it and it seems that xc_hvm_build_arm.c does not support building HVM guests yet.

Thanks,
Eric


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 16:21       ` Eric Trudeau
@ 2013-06-27 16:34         ` Ian Campbell
  2013-06-27 19:01           ` Eric Trudeau
  0 siblings, 1 reply; 25+ messages in thread
From: Ian Campbell @ 2013-06-27 16:34 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: xen-devel

On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Wednesday, June 26, 2013 12:45 PM
> > To: Eric Trudeau
> > Cc: xen-devel
> > Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> > 
> > Is there any chance you could configure your MUA for ">" style quoting?
> 
> Done.

Thank you!

> > > > I also found that xc_dom_elfloader.c’s function, xc_dom_guest_type(),
> > > > did not know about the EM_ARM machine type.
> > >
> > > You shouldn't be hitting the elf loader paths, you should instead use
> > > the zImage version of the kernel which will hit the
> > > xc_dom_armzimageloader.c code paths.
> > >
> > > We did at one point in the very early days of the port support loading
> > > ELF files via a quick hack of a tool ("xcbuild") but that has been
> > > superseded by the proper toolstack support and the use of zImage.
> > >
> > > [EPT] I noticed when I used my zImage that it seemed to check for a gzip
> > magic number so I thought maybe I should not use it.
> > > I will go back to using the zImage since I believe the issue is the same.  It
> > appears that virt_base and virt_alloc_end are invalid.
> > > I will try to find out how these are supposed to be determined.  Any tips would
> > be welcome. :)
> > 
> > This sounds like the XSA-55 regression I referred to. If you pull up to
> > the commit I mention above (which is currently in staging and will
> > propagate to master after automated test, hopefully overnight) then this
> > problem should go away.
> > 
> 
> I rebased to the XSA-55 commit and now I can create the guest.  I am
> able to debug a kernel init panic.

Excellent!

> Thanks so much.
> 
> My next question is why does the wiki for ARM give an example dom.cfg that does not have builder=hvm?
> I tried it and it seems that xc_hvm_build_arm.c does not support building HVM guests yet.

Xen on ARM does not support HVM guests. On ARM guests are a hybrid PV
using hardware extensions where possible, but at the toolstack level we
treat them as PV guests.

You might find one or the other of these slides useful for describing
the sorts of guests which Xen on ARM supports:
http://fr.slideshare.net/xen_com_mgr/xensummit-na-2012-xen-on-arm-cortex-a15
http://www.xenproject.org/help/presentations-and-videos/video/latest/linaro-connect-introduction-to-xen-on-arm.html

HVM, in the sense of emulating a complete real world platform as a
guest, is something which we might eventually do but it is not something
which is currently on our roadmap.

If you are looking to use a non-Linux guest then porting an OS to Xen on
ARM is far simpler than the PV x86 port was. The Linux port to Xen on
ARM was a pretty small amount of code.

Ian.





* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 16:34         ` Ian Campbell
@ 2013-06-27 19:01           ` Eric Trudeau
  2013-06-27 19:33             ` Julien Grall
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-06-27 19:01 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Thursday, June 27, 2013 12:34 PM
> To: Eric Trudeau
> Cc: xen-devel
> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> 
> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
> > > -----Original Message-----
> > > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > > Sent: Wednesday, June 26, 2013 12:45 PM
> > > To: Eric Trudeau
> > > Cc: xen-devel
> > > Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> > >
>

> > I rebased to the XSA-55 commit and now I can create the guest.  I am
> > able to debug a kernel init panic.
> 

My panic seems related to memory regions in the device tree.  I have appended
my DTB to the kernel zImage.
How does the memory assigned by XEN for the guest domain get inserted
into the device tree?
Does the hypervisor or the toolstack manipulate the appended DTB and modify
the hypervisor node's reg and irq properties? What about the memory node?

Thanks,
Eric


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 19:01           ` Eric Trudeau
@ 2013-06-27 19:33             ` Julien Grall
  2013-06-27 20:19               ` Eric Trudeau
  2013-06-27 22:40               ` Julien Grall
  0 siblings, 2 replies; 25+ messages in thread
From: Julien Grall @ 2013-06-27 19:33 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Ian Campbell, xen-devel

On 06/27/2013 08:01 PM, Eric Trudeau wrote:

>> -----Original Message-----
>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
>> Sent: Thursday, June 27, 2013 12:34 PM
>> To: Eric Trudeau
>> Cc: xen-devel
>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
>>
>> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
>>>> -----Original Message-----
>>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
>>>> Sent: Wednesday, June 26, 2013 12:45 PM
>>>> To: Eric Trudeau
>>>> Cc: xen-devel
>>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
>>>>
>>
> 
>>> I rebased to the XSA-55 commit and now I can create the guest.  I am
>>> able to debug a kernel init panic.
>>
> 
> My panic seems related to memory regions in the device tree.  I have appended
> my DTB to the kernel zImage.
> How does the memory assigned by XEN for the guest domain get inserted
> into the device tree?
> Does Hypervisor or the toolchain manipulate the appended DTB and modify
> the hypervisor node's reg and irq properties? What about the memory node?


For the moment, the toolstack isn't able to parse/modify the guest
DTB. Memory and IRQ properties are hardcoded in the hypervisor and the
toolstack. The different values need to match the following constraints:
  - The memory region starts at 0x80000000. The size needs to be the
same in the configuration file and the DTS, otherwise the domain will
crash. I believe the default size is 128MB.
  - IRQ properties are:
      * event channel: 31, except if you have modified the IRQ number in
Xen for dom0;
      * timer: same IRQ numbers as in the dom0 DTS;
  - GIC range: same range as the dom0 DTS.
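
Pulling those constraints together, the relevant guest DTS fragments might look like the following. This is a sketch: the grant-table reg window, the Xen compatible string, and the interrupt cells shown are assumptions and must match your dom0 DTS and the values hardcoded in Xen and the toolstack.

```
/* Sketch of guest DTS fragments implied by the constraints listed in
 * this mail. All addresses and IRQ numbers are examples. */
memory {
        device_type = "memory";
        reg = <0x80000000 0x08000000>;  /* base 0x80000000, 128MB */
};

hypervisor {
        compatible = "xen,xen", "xen,xen-4.3";
        reg = <0xb0000000 0x20000>;     /* example grant-table window */
        interrupts = <1 15 0xf08>;      /* PPI 15, i.e. IRQ 31, event channel */
};
```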

Cheers,

-- 
Julien


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 19:33             ` Julien Grall
@ 2013-06-27 20:19               ` Eric Trudeau
  2013-06-28  7:47                 ` Ian Campbell
  2013-06-27 22:40               ` Julien Grall
  1 sibling, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-06-27 20:19 UTC (permalink / raw)
  To: xen-devel

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: Thursday, June 27, 2013 3:34 PM
> To: Eric Trudeau
> Cc: Ian Campbell; xen-devel
> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> Extensions
> 
> On 06/27/2013 08:01 PM, Eric Trudeau wrote:
> 
> >> -----Original Message-----
> >> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> >> Sent: Thursday, June 27, 2013 12:34 PM
> >> To: Eric Trudeau
> >> Cc: xen-devel
> >> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> >>
> >> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
> >>>> -----Original Message-----
> >>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> >>>> Sent: Wednesday, June 26, 2013 12:45 PM
> >>>> To: Eric Trudeau
> >>>> Cc: xen-devel
> >>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> >>>>
> >>
> >
> >>> I rebased to the XSA-55 commit and now I can create the guest.  I am
> >>> able to debug a kernel init panic.
> >>
> >
> > My panic seems related to memory regions in the device tree.  I have appended
> > my DTB to the kernel zImage.
> > How does the memory assigned by XEN for the guest domain get inserted
> > into the device tree?
> > Does Hypervisor or the toolchain manipulate the appended DTB and modify
> > the hypervisor node's reg and irq properties? What about the memory node?
> 
> 
> For the moment, the toolstack isn't able to parse/modify the guest
> DTB. Memory and IRQ properties are hardcoded in the hypervisor and the
> toolstack. The different values need to match the following constraints:
>   - The memory region starts at 0x80000000. The size needs to be the
> same in the configuration file and the DTS, otherwise the domain will
> crash. I believe the default size is 128MB.
>   - IRQ properties are:
>       * event channel: 31, except if you have modified the IRQ number in
> Xen for dom0;
>       * timer: same IRQ numbers as in the dom0 DTS;
>   - GIC range: same range as the dom0 DTS.
> 

I changed my DTS to hard-code memory at 0x80000000 with size 0x800000.
Now I hit my first I/O access fault.  I tried using the iomem attribute in my dom1.cfg.
Is the iomem attribute supported yet on ARM?
I see the rangeset iomem_caps being set correctly, but I don't know if it is being
added to the VTTBR for my guest dom.
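
For reference, the xl iomem option takes "start,count" page-frame numbers in hex, so a request covering the f0000 range seen in this thread's rangeset dump would be written roughly as follows (a sketch, assuming the guest is meant to map the range 1:1):

```
# Sketch of the xl config syntax: hex page frame numbers, start,count.
iomem = [ "f0000,fff" ]
```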

(XEN) General information for domain 2:
(XEN)     refcnt=3 dying=0 pause_count=1
(XEN)     nr_pages=32770 xenheap_pages=2 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=33024
(XEN)     handle=7432e1e4-fa79-418d-8333-870db97f4338 vm_assist=00000000
(XEN) GICH_LRs (vcpu 0) mask=0
(XEN)    VCPU_LR[0]=0
(XEN)    VCPU_LR[1]=0
(XEN)    VCPU_LR[2]=0
(XEN)    VCPU_LR[3]=0
(XEN) Rangesets belonging to domain 2:
(XEN)     Interrupts { }
(XEN)     I/O Memory { f0000-f0ffe }

Hypervisor fault shows the offending IPA.

(XEN) Guest data abort: Translation fault at level 2
(XEN)     gva=fa406780
(XEN)     gpa=00000000f0406780
(XEN)     size=2 sign=0 write=1 reg=2
(XEN)     eat=0 cm=0 s1ptw=0 dfsc=6
(XEN) dom2 IPA 0x00000000f0406780

Thanks,
Eric


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 19:33             ` Julien Grall
  2013-06-27 20:19               ` Eric Trudeau
@ 2013-06-27 22:40               ` Julien Grall
  2013-07-09 17:10                 ` Eric Trudeau
  1 sibling, 1 reply; 25+ messages in thread
From: Julien Grall @ 2013-06-27 22:40 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Ian Campbell, xen-devel

On 06/27/2013 09:19 PM, Eric Trudeau wrote:

>> -----Original Message-----
>> From: Julien Grall [mailto:julien.grall@linaro.org]
>> Sent: Thursday, June 27, 2013 3:34 PM
>> To: Eric Trudeau
>> Cc: Ian Campbell; xen-devel
>> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
>> Extensions
>>
>> On 06/27/2013 08:01 PM, Eric Trudeau wrote:
>>
>>>> -----Original Message-----
>>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
>>>> Sent: Thursday, June 27, 2013 12:34 PM
>>>> To: Eric Trudeau
>>>> Cc: xen-devel
>>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
>>>>
>>>> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
>>>>>> -----Original Message-----
>>>>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
>>>>>> Sent: Wednesday, June 26, 2013 12:45 PM
>>>>>> To: Eric Trudeau
>>>>>> Cc: xen-devel
>>>>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
>>>>>>
>>>>
>>>
>>>>> I rebased to the XSA-55 commit and now I can create the guest.  I am
>>>>> able to debug a kernel init panic.
>>>>
>>>
>>> My panic seems related to memory regions in the device tree.  I have appended
>>> my DTB to the kernel zImage.
>>> How does the memory assigned by XEN for the guest domain get inserted
>>> into the device tree?
>>> Does Hypervisor or the toolchain manipulate the appended DTB and modify
>>> the hypervisor node's reg and irq properties? What about the memory node?
>>
>>
>> For the moment, the toolstack isn't able to parse/modify the guest
>> DTB. Memory and IRQ properties are hardcoded in the hypervisor and the
>> toolstack. The different values need to match the following constraints:
>>   - The memory region starts at 0x80000000. The size needs to be the
>> same in the configuration file and the DTS, otherwise the domain will
>> crash. I believe the default size is 128MB.
>>   - IRQ properties are:
>>       * event channel: 31, except if you have modified the IRQ number in
>> Xen for dom0;
>>       * timer: same IRQ numbers as in the dom0 DTS;
>>   - GIC range: same range as the dom0 DTS.
>>
>
> I changed my DTS to hard code memory at 0x80000000 with size 0x800000.
> Now, I hit my first I/O access fault.  I tried using iomem attribute in my dom1.cfg.
> Is iomem attribute supported yet on ARM?
> I see the rangeset iomem_caps being set correctly, but I don't know if it is being
> added to the VTTBR for my guest dom.

Are you trying to assign a device to your guest? If so, Xen doesn't
yet support it.

To map a physical memory range into a guest you need to use/implement
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping,
which is missing on ARM. To help you, you can read this thread:
http://lists.xen.org/archives/html/xen-devel/2013-06/msg00870.html
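
For context, a caller of that libxc wrapper might look roughly like the sketch below. This is hypothetical: the surrounding function and the page numbers (taken from the f0000 range discussed in this thread) are illustration only, and at the time of this thread the call is not wired up on ARM, so this cannot work there as-is.

```c
/* Hypothetical caller sketch: mapping an MMIO page range into a guest
 * via the libxc wrapper around XEN_DOMCTL_memory_mapping. */
#include <stdint.h>
#include <xenctrl.h>

int map_guest_mmio(xc_interface *xch, uint32_t domid)
{
    unsigned long first_gfn = 0xf0000;  /* guest page frame number */
    unsigned long first_mfn = 0xf0000;  /* machine page frame number */
    unsigned long nr_mfns   = 0xfff;    /* number of 4K pages */

    /* Last argument: 1 = add the mapping, 0 = remove it. */
    return xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn,
                                    nr_mfns, 1);
}
```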

Cheers,

--
Julien


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 20:19               ` Eric Trudeau
@ 2013-06-28  7:47                 ` Ian Campbell
  0 siblings, 0 replies; 25+ messages in thread
From: Ian Campbell @ 2013-06-28  7:47 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: xen-devel

On Thu, 2013-06-27 at 20:19 +0000, Eric Trudeau wrote:
> > -----Original Message-----
> > From: Julien Grall [mailto:julien.grall@linaro.org]
> > Sent: Thursday, June 27, 2013 3:34 PM
> > To: Eric Trudeau
> > Cc: Ian Campbell; xen-devel
> > Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> > Extensions
> > 
> > On 06/27/2013 08:01 PM, Eric Trudeau wrote:
> > 
> > >> -----Original Message-----
> > >> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > >> Sent: Thursday, June 27, 2013 12:34 PM
> > >> To: Eric Trudeau
> > >> Cc: xen-devel
> > >> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> > >>
> > >> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
> > >>>> -----Original Message-----
> > >>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > >>>> Sent: Wednesday, June 26, 2013 12:45 PM
> > >>>> To: Eric Trudeau
> > >>>> Cc: xen-devel
> > >>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> > >>>>
> > >>
> > >
> > >>> I rebased to the XSA-55 commit and now I can create the guest.  I am
> > >>> able to debug a kernel init panic.
> > >>
> > >
> > > My panic seems related to memory regions in the device tree.  I have appended
> > > my DTB to the kernel zImage.
> > > How does the memory assigned by XEN for the guest domain get inserted
> > > into the device tree?
> > > Does Hypervisor or the toolchain manipulate the appended DTB and modify
> > > the hypervisor node's reg and irq properties? What about the memory node?
> > 
> > 
> > For the moment, the toolstack isn't able to parse/modify the guest
> > DTB. Memory and IRQ properties are hardcoded in the hypervisor and the
> > toolstack. The different values need to match the following constraints:
> >   - The memory region starts at 0x80000000. The size needs to be the
> > same in the configuration file and the DTS, otherwise the domain will
> > crash. I believe the default size is 128MB.
> >   - IRQ properties are:
> >       * event channel: 31, except if you have modified the IRQ number in
> > Xen for dom0;
> >       * timer: same IRQ numbers as in the dom0 DTS;
> >   - GIC range: same range as the dom0 DTS.
> > 
> 
> I changed my DTS to hard code memory at 0x80000000 with size 0x800000.
> Now, I hit my first I/O access fault.  I tried using iomem attribute in my dom1.cfg.
> Is iomem attribute supported yet on ARM?

Not yet, no. It's on our TODO, as part of the more general device
assignment stuff:
http://wiki.xen.org/wiki/Xen_ARM_TODO#Device_assignment_to_DomUs
This (along with save/restore/migration) is one of the higher priority
items on the schedule once we've got the basics going.

Have you confirmed that if you create a plain Linux domain per the wiki
without any "fancy" features such as io passthrough that it works OK on
your platform?

Making iomem= work is most likely just a case of wiring up a few
hypercalls in an obvious way. Julien has identified
XEN_DOMCTL_memory_mapping which seems like a good start.

Does the device you are trying to pass through do DMA? If so does your
platform have an IOMMU? That will add some additional complexity.

> I see the rangeset iomem_caps being set correctly, but I don't know if it is being
> added to the VTTBR for my guest dom.
> 
> (XEN) General information for domain 2:
> (XEN)     refcnt=3 dying=0 pause_count=1
> (XEN)     nr_pages=32770 xenheap_pages=2 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=33024
> (XEN)     handle=7432e1e4-fa79-418d-8333-870db97f4338 vm_assist=00000000
> (XEN) GICH_LRs (vcpu 0) mask=0
> (XEN)    VCPU_LR[0]=0
> (XEN)    VCPU_LR[1]=0
> (XEN)    VCPU_LR[2]=0
> (XEN)    VCPU_LR[3]=0
> (XEN) Rangesets belonging to domain 2:
> (XEN)     Interrupts { }
> (XEN)     I/O Memory { f0000-f0ffe }
> 
> Hypervisor fault shows the offending IPA.
> 
> (XEN) Guest data abort: Translation fault at level 2
> (XEN)     gva=fa406780
> (XEN)     gpa=00000000f0406780
> (XEN)     size=2 sign=0 write=1 reg=2
> (XEN)     eat=0 cm=0 s1ptw=0 dfsc=6
> (XEN) dom2 IPA 0x00000000f0406780
> 
> Thanks,
> Eric
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-06-27 22:40               ` Julien Grall
@ 2013-07-09 17:10                 ` Eric Trudeau
  2013-07-10 12:50                   ` Ian Campbell
  2013-07-10 13:37                   ` Stefano Stabellini
  0 siblings, 2 replies; 25+ messages in thread
From: Eric Trudeau @ 2013-07-09 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Ian Campbell

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: Thursday, June 27, 2013 6:41 PM
> To: Eric Trudeau
> Cc: xen-devel; Ian Campbell
> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> Extensions
> 
> On 06/27/2013 09:19 PM, Eric Trudeau wrote:
> 
> >> -----Original Message-----
> >> From: Julien Grall [mailto:julien.grall@linaro.org]
> >> Sent: Thursday, June 27, 2013 3:34 PM
> >> To: Eric Trudeau
> >> Cc: Ian Campbell; xen-devel
> >> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> >> Extensions
> >>
> >> On 06/27/2013 08:01 PM, Eric Trudeau wrote:
> >>
> >>>> -----Original Message-----
> >>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> >>>> Sent: Thursday, June 27, 2013 12:34 PM
> >>>> To: Eric Trudeau
> >>>> Cc: xen-devel
> >>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> >>>>
> >>>> On Thu, 2013-06-27 at 16:21 +0000, Eric Trudeau wrote:
> >>>>>> -----Original Message-----
> >>>>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> >>>>>> Sent: Wednesday, June 26, 2013 12:45 PM
> >>>>>> To: Eric Trudeau
> >>>>>> Cc: xen-devel
> >>>>>> Subject: Re: [XenARM] XEN tools for ARM with Virtualization Extensions
> >>>>>>
> >>>>
> >>>
> >>>>> I rebased to the XSA-55 commit and now I can create the guest.  I am
> >>>>> able to debug a kernel init panic.
> >>>>
> >>>
> >>> My panic seems related to memory regions in the device tree.  I have appended
> >>> my DTB to the kernel zImage.
> >>> How does the memory assigned by XEN for the guest domain get inserted
> >>> into the device tree?
> >>> Does Hypervisor or the toolchain manipulate the appended DTB and modify
> >>> the hypervisor node's reg and irq properties? What about the memory
> node?
> >>
> >>
> >> For the moment, the toolstack isn't able to parse/modify the guest
> >> DTB. Memory and IRQ properties are hardcoded in the hypervisor and the
> >> toolstack. The different values need to match the following constraints:
> >>   - The memory region starts at 0x80000000. The size needs to be the
> >> same in the configuration file and the DTS, otherwise the domain will
> >> crash. I believe the default size is 128MB.
> >>   - IRQ properties are:
> >>       * event channel: 31, except if you have modified the IRQ number in
> >> Xen for dom0;
> >>       * timer: same IRQ numbers as the dom0 DTS;
> >>   - GIC range: same range as the dom0 DTS.
> >>
> >
> > I changed my DTS to hard-code memory at 0x80000000 with size 0x800000.
> > Now, I hit my first I/O access fault.  I tried using the iomem attribute in my
> > dom1.cfg.
> > Is the iomem attribute supported yet on ARM?
> > I see the rangeset iomem_caps being set correctly, but I don't know if it is
> being
> > added to the VTTBR for my guest dom.
> 
> Are you trying to assign a device to your guest? If so, Xen doesn't
> yet support it.
> 
> To map physical memory range in a guest you need to use/implement
> xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping
> which is missing on ARM. To help you, you can read this thread:
> http://lists.xen.org/archives/html/xen-devel/2013-06/msg00870.html
> 

I added support for XEN_DOMCTL_memory_mapping in xen/arch/arm/domctl.c
using xen/arch/x86/domctl.c as a model.  I only implemented add functionality.
I then modified domcreate_launch_dm() to call xc_domain_memory_mapping()
instead of xc_domain_iomem_permission for the iomem regions in the domain
cfg file.

This allowed my kernel, which unfortunately has some hard-coded accesses to
device memory, to boot up in a DomU guest without crashing.
Now, I am looking into how to enable IRQs in my guest domains.
Would I implement xc_domain_bind_pt_irq/XEN_DOMCTL_bind_pt_irq in a similar
way to xc_domain_memory_mapping?  Or will the existing xc_domain_irq_permission
calls work?

What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on ARM?

Thanks,
Eric

------------------------------------------------------------------

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 0c32d0b..4196c0c 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -970,8 +970,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);

-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>

 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* removing i/o memory is not implemented yet */
+        if (!add) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       (gfn + nr_mfns - 1) << PAGE_SHIFT,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }

 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-09 17:10                 ` Eric Trudeau
@ 2013-07-10 12:50                   ` Ian Campbell
  2013-07-10 19:38                     ` Eric Trudeau
  2013-07-10 13:37                   ` Stefano Stabellini
  1 sibling, 1 reply; 25+ messages in thread
From: Ian Campbell @ 2013-07-10 12:50 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Julien Grall, Stefano.Stabellini, xen-devel

On Tue, 2013-07-09 at 17:10 +0000, Eric Trudeau wrote:
> Now, I am looking into how to enable IRQs in my guest domains.
> Would I implement xc_domain_bind_pt_irq/XEN_DOMCTL_bind_pt_irq in a similar
> way as xc_domain_memory_mapping?  Or will the existing xc_domain_irq_permission
> calls work?

I think in principle either should work. I'm not 100% familiar with this
stuff but I think xc_domain_irq_permission just lets you expose an IRQ
to the guest, with a 1:1 mapping from host to guest IRQs etc. The
bind_pt_irq stuff is more flexible: it lets you set up a non-1:1 mapping
and handles some of the more complex variants.

I think for your usecase you can probably get away with the simple
version. When we come to do proper device pass through we will likely
need to build upon the bind_pt functionality.

> What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on ARM?

There's a function like route_irq_to_guest which we use to route IRQs to
dom0 during boot. In principle that could also be used to reroute an IRQ
to a guest, but I'm not sure how it will interact with the reassignment,
since in your case the IRQ starts off bound to dom0. Hopefully it's just
a small change to make it work for this case.

> 
> Thanks,
> Eric
> 
> ------------------------------------------------------------------
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 0c32d0b..4196c0c 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -970,8 +970,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>          LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
>              domid, io->start, io->start + io->number - 1);
> 
> -        ret = xc_domain_iomem_permission(CTX->xch, domid,
> -                                          io->start, io->number, 1);
> +        ret = xc_domain_memory_mapping(CTX->xch, domid,
> +                                       io->start, io->start,
> +                                       io->number, 1);
>          if (ret < 0) {
>              LOGE(ERROR,
>                   "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 851ee40..222aac9 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -10,11 +10,83 @@
>  #include <xen/errno.h>
>  #include <xen/sched.h>
>  #include <public/domctl.h>
> +#include <xen/iocap.h>
> +#include <xsm/xsm.h>
> +#include <xen/paging.h>
> +#include <xen/guest_access.h>
> 
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> -    return -ENOSYS;
> +    long ret = 0;
> +    bool_t copyback = 0;
> +
> +    switch ( domctl->cmd )
> +    {
> +    case XEN_DOMCTL_memory_mapping:
> +    {
> +        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
> +        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
> +        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
> +        int add = domctl->u.memory_mapping.add_mapping;
> +
> +        /* removing i/o memory is not implemented yet */
> +        if (!add) {
> +            ret = -ENOSYS;
> +            break;
> +        }
> +        ret = -EINVAL;
> +        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
> +             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
> +             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
> +             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
> +            break;
> +
> +        ret = -EPERM;
> +        if ( current->domain->domain_id != 0 )
> +            break;
> +
> +        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
> +        if ( ret )
> +            break;
> +
> +        if ( add )
> +        {
> +            printk(XENLOG_G_INFO
> +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> +                   d->domain_id, gfn, mfn, nr_mfns);
> +
> +            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
> +            if ( !ret && paging_mode_translate(d) )
> +            {
> +                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
> +                                       (gfn + nr_mfns - 1) << PAGE_SHIFT,
> +                                       mfn << PAGE_SHIFT);
> +                if ( ret )
> +                {
> +                    printk(XENLOG_G_WARNING
> +                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> +                           d->domain_id, gfn, mfn, nr_mfns);
> +                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
> +                         is_hardware_domain(current->domain) )
> +                        printk(XENLOG_ERR
> +                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
> +                               d->domain_id, mfn, mfn + nr_mfns - 1);
> +                }
> +            }
> +        }
> +    }
> +    break;
> +
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
> +    return ret;
>  }
> 
>  void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-09 17:10                 ` Eric Trudeau
  2013-07-10 12:50                   ` Ian Campbell
@ 2013-07-10 13:37                   ` Stefano Stabellini
  2013-07-10 15:33                     ` Eric Trudeau
  1 sibling, 1 reply; 25+ messages in thread
From: Stefano Stabellini @ 2013-07-10 13:37 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Julien Grall, Ian Campbell, xen-devel

On Tue, 9 Jul 2013, Eric Trudeau wrote:
> I added support for XEN_DOMCTL_memory_mapping in xen/arch/arm/domctl.c
> using xen/arch/x86/domctl.c as a model.  I only implemented add functionality.
> I then modified domcreate_launch_dm() to call xc_domain_memory_mapping()
> instead of xc_domain_iomem_permission for the iomem regions in the domain
> cfg file.
> 
> This allowed my kernel, which unfortunately has some hard-coded accesses to
> device memory, to boot up in a DomU guest without crashing.
> Now, I am looking into how to enable IRQs in my guest domains.
> Would I implement xc_domain_bind_pt_irq/XEN_DOMCTL_bind_pt_irq in a similar
> way as xc_domain_memory_mapping?  Or will the existing xc_domain_irq_permission
> calls work?
> 
> What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on ARM?
 
I think we would probably need to introduce a new hypercall because
neither PHYSDEVOP_map_pirq nor XEN_DOMCTL_bind_pt_irq fits our model
very well.
The first one creates the mapping (as in allows the guest to receive the
interrupt but only as an event channel notification), the second one
sets up the emulation.
For HVM guests on x86 we need to call both. For PV guests, just the
first one.

On ARM we would need to introduce something similar to
PHYSDEVOP_map_pirq that instead of returning a pirq returns the guest
irq.
We could actually use PHYSDEVOP_map_pirq, if we mandate that on ARM pirq
means guest irq, but I wouldn't recommend it because it could lead to
confusion.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-10 13:37                   ` Stefano Stabellini
@ 2013-07-10 15:33                     ` Eric Trudeau
  2013-07-10 15:37                       ` Stefano Stabellini
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-07-10 15:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Ian Campbell, Stefano Stabellini

> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
> ARM?
> 
> I think we would probably need to introduce a new hypercall because
> neither PHYSDEVOP_map_pirq nor XEN_DOMCTL_bind_pt_irq fit our model
> very well.
> The first one creates the mapping (as in allows the guest to receive the
> interrupt but only as an event channel notification), the second one
> sets up the emulation.
> For HVM guests on x86 we need to call both. For PV guests, just the
> first one.
> 
> On ARM we would need to introduce something similar to
> PHYSDEVOP_map_pirq that instead of returning a pirq returns the guest
> irq.
> We could actually use PHYSDEVOP_map_pirq, if we mandate that on ARM pirq
> means guest irq, but I wouldn't recommend it because it could lead to
> confusion.

Will I need to implement other PHYSDEVOP_* operations, like x86, which has
PHYSDEVOP_eoi and PHYSDEVOP_irq_status_query?

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-10 15:33                     ` Eric Trudeau
@ 2013-07-10 15:37                       ` Stefano Stabellini
  0 siblings, 0 replies; 25+ messages in thread
From: Stefano Stabellini @ 2013-07-10 15:37 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Julien Grall, Ian Campbell, Stefano Stabellini, xen-devel

On Wed, 10 Jul 2013, Eric Trudeau wrote:
> > > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
> > ARM?
> > 
> > I think we would probably need to introduce a new hypercall because
> > neither PHYSDEVOP_map_pirq nor XEN_DOMCTL_bind_pt_irq fit our model
> > very well.
> > The first one creates the mapping (as in allows the guest to receive the
> > interrupt but only as an event channel notification), the second one
> > sets up the emulation.
> > For HVM guests on x86 we need to call both. For PV guests, just the
> > first one.
> > 
> > On ARM we would need to introduce something similar to
> > PHYSDEVOP_map_pirq that instead of returning a pirq returns the guest
> > irq.
> > We could actually use PHYSDEVOP_map_pirq, if we mandate that on ARM pirq
> > means guest irq, but I wouldn't recommend it because it could lead to
> > confusion.
> 
> Will I need to implement other PHYSDEVOP_* operations, like x86 which has
> PHYSDEVOP_eoi and PHYSDEVOP_irq_status_query?
> 

Good question. The answer is no, because we don't use pirqs. The two
hypercalls that you mentioned are both part of the same mechanism that
allows a PV guest to receive hardware interrupts as event channels on
x86. We don't need it on ARM because we can inject hardware interrupts
directly thanks to the gic/vgic.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-10 12:50                   ` Ian Campbell
@ 2013-07-10 19:38                     ` Eric Trudeau
  2013-07-10 22:04                       ` Julien Grall
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-07-10 19:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano.Stabellini, Ian Campbell

> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
> ARM?
> 
> There's a function like route_irq_to_guest which we use to route IRQs to
> dom0 during boot. In principle that could also be used to reroute an IRQ
> to a guest, but I'm not sure how it will interact with the reassignment,
> since in your case the IRQ starts off bound to dom0. Hopefully it's just
> a small change to make it work for this case.
> 

In gic_route_irq_to_guest(), there is this call:
    gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);

I believe this means all IRQs will be routed to the current processor.
I believe this is PCPU0 when the dom0 device tree is being parsed.
When the PHYSDEVOP_map_pirq call is handled by the hypervisor, the PCPU
may be any of the processors.
Is there a reason why we route the IRQs for Dom0 to only one PCPU?
If only one is allowed, should it be PCPU0?

Thanks,
Eric

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-10 19:38                     ` Eric Trudeau
@ 2013-07-10 22:04                       ` Julien Grall
  2013-07-10 22:13                         ` Ian Campbell
  0 siblings, 1 reply; 25+ messages in thread
From: Julien Grall @ 2013-07-10 22:04 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Stefano.Stabellini, Ian Campbell, xen-devel

On 10 July 2013 20:38, Eric Trudeau <etrudeau@broadcom.com> wrote:
>> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
>> ARM?
>>
>> There's a function like route_irq_to_guest which we use to route IRQs to
>> dom0 during boot. In principle that could also be used to reroute an IRQ
>> to a guest, but I'm not sure how it will interact with the reassignment,
>> since in your case the IRQ starts off bound to dom0. Hopefully it's just
>> a small change to make it work for this case.
>>
>
> In gic_route_irq_to_guest(), there is this call:
>     gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);
>
> > I believe this means all IRQs will be routed to the current processor.
> > I believe this is PCPU0 when the dom0 device tree is being parsed.
> > When the PHYSDEVOP_map_pirq call is handled by the hypervisor, the PCPU
> > may be any of the processors.
> > Is there a reason why we route the IRQs for Dom0 to only one PCPU?
> > If only one is allowed, should it be PCPU0?

I don't see any particular reason to route the PIRQs to only PCPU0.
I think it's because Xen only routes SPIs to VCPU0, which is mostly
running on PCPU0.

The best solution is to route the PIRQ to the PCPU which is running the VCPU.

--
Julien Grall

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-10 22:04                       ` Julien Grall
@ 2013-07-10 22:13                         ` Ian Campbell
  2013-07-11 16:46                           ` Eric Trudeau
  0 siblings, 1 reply; 25+ messages in thread
From: Ian Campbell @ 2013-07-10 22:13 UTC (permalink / raw)
  To: Julien Grall; +Cc: Eric Trudeau, Stefano.Stabellini, xen-devel

On Wed, 2013-07-10 at 23:04 +0100, Julien Grall wrote:
> On 10 July 2013 20:38, Eric Trudeau <etrudeau@broadcom.com> wrote:
> >> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
> >> ARM?
> >>
> >> There's a function like route_irq_to_guest which we use to route IRQs to
> >> dom0 during boot. In principle that could also be used to reroute an IRQ
> >> to a guest, but I'm not sure how it will interact with the reassignment,
> >> since in your case the IRQ starts off bound to dom0. Hopefully it's just
> >> a small change to make it work for this case.
> >>
> >
> > In gic_route_irq_to_guest(), there is this call:
> >     gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);
> >
> > > I believe this means all IRQs will be routed to the current processor.
> > > I believe this is PCPU0 when the dom0 device tree is being parsed.
> > > When the PHYSDEVOP_map_pirq call is handled by the hypervisor, the PCPU
> > > may be any of the processors.
> > > Is there a reason why we route the IRQs for Dom0 to only one PCPU?
> > > If only one is allowed, should it be PCPU0?
> 
> I don't see any particular reason to route the PIRQs to only PCPU0.
> I think it's because Xen only routes SPIs to VCPU0, which is mostly
> running on PCPU0.
> 
> The best solution is to route the PIRQ to the PCPU which is running the VCPU.

For interrupts which are routed to the guest we should track the guest's
writes to the virtual target registers and then configure the physical
target registers correctly when migrating vcpus around so that the
interrupt follows the vcpu around.

Ian.

> 
> --
> Julien Grall

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-10 22:13                         ` Ian Campbell
@ 2013-07-11 16:46                           ` Eric Trudeau
  2013-07-11 17:18                             ` Ian Campbell
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-07-11 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano.Stabellini, Ian Campbell

> > >> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
> > >> ARM?
> > >>
> > >> There's a function like route_irq_to_guest which we use to route IRQs to
> > >> dom0 during boot. In principle that could also be used to reroute an IRQ
> > >> to a guest, but I'm not sure how it will interact with the reassignment,
> > >> since in your case the IRQ starts off bound to dom0. Hopefully it's just
> > >> a small change to make it work for this case.
> > >>

I have IRQs for devices being passed to ARM DomU guests now.  Thanks for your
help.

The IRQ mapping needs to be unmapped when the domU is destroyed and it is not
happening presently.  I don't see a call to PHYSDEVOP_unmap_pirq or anything.
Should the mapped IRQs for a guest be unmapped when Xen destroys the domain
based on the IRQ rangeset?  Where do you suggest doing this?
Or should the tools call an xc_physdev_unmap_pirq() when destroying the domain?
> I would think the hypervisor should implicitly relinquish all resources of a
domain upon destruction...

Thanks,
Eric

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-11 16:46                           ` Eric Trudeau
@ 2013-07-11 17:18                             ` Ian Campbell
  2013-07-12  0:25                               ` Eric Trudeau
  0 siblings, 1 reply; 25+ messages in thread
From: Ian Campbell @ 2013-07-11 17:18 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Julien Grall, Stefano.Stabellini, xen-devel

On Thu, 2013-07-11 at 16:46 +0000, Eric Trudeau wrote:
> > > >> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq on
> > > >> ARM?
> > > >>
> > > >> There's a function like route_irq_to_guest which we use to route IRQs to
> > > >> dom0 during boot. In principle that could also be used to reroute an IRQ
> > > >> to a guest, but I'm not sure how it will interact with the reassignment,
> > > >> since in your case the IRQ starts off bound to dom0. Hopefully it's just
> > > >> a small change to make it work for this case.
> > > >>
> 
> I have IRQs for devices being passed to ARM DomU guests now.  Thanks for your
> help.

Excellent! Do you have any useful patches?

> The IRQ mapping needs to be unmapped when the domU is destroyed and it is not
> happening presently.  I don't see a call to PHYSDEVOP_unmap_pirq or anything.
> Should the mapped IRQs for a guest be unmapped when Xen destroys the domain
> based on the IRQ rangeset?  Where do you suggest doing this?
> Or should the tools call an xc_physdev_unmap_pirq() when destroying the domain?
> > I would think the hypervisor should implicitly relinquish all resources of a
> domain upon destruction...

Having the hypervisor clean up on destroy sounds sensible to me; x86
seems to call free_domain_pirqs from arch_domain_destroy.

I think the unmap_pirq path is for hot-unplug.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-11 17:18                             ` Ian Campbell
@ 2013-07-12  0:25                               ` Eric Trudeau
  2013-07-12 10:41                                 ` Julien Grall
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-07-12  0:25 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano.Stabellini, Ian Campbell

> -----Original Message-----
> From: Ian Campbell [mailto:ian.campbell@citrix.com]
> Sent: Thursday, July 11, 2013 1:18 PM
> To: Eric Trudeau
> Cc: xen-devel; Julien Grall; Stefano.Stabellini@citrix.com
> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
> Extensions
> 
> On Thu, 2013-07-11 at 16:46 +0000, Eric Trudeau wrote:
> > > > >> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq
> on
> > > > >> ARM?
> > > > >>
> > > > >> There's a function like route_irq_to_guest which we use to route IRQs
> to
> > > > >> dom0 during boot. In principle that could also be used to reroute an IRQ
> > > > >> to a guest, but I'm not sure how it will interact with the reassignment,
> > > > >> since in your case the IRQ starts off bound to dom0. Hopefully it's just
> > > > >> a small change to make it work for this case.
> > > > >>
> >
> > I have IRQs for devices being passed to ARM DomU guests now.  Thanks for
> your
> > help.
> 
> Excellent! Do you have any useful patches?
> 

I added code in the ARM domain.c to release the IRQs when a domain is destroyed.
I am providing my changes, but I believe there may be more work needed for a
clean solution.  Specifically, the following items may need to be addressed.

1. The dom.cfg "irqs" section has 32 added to the device-tree IRQ number
   because I wasn't sure where to add the translation.  This can be cleaned up
   when guests are able to be given DTB files and Xen can parse them.

2. I assume level-triggered IRQs since the "irqs" list is only IRQ numbers.
   This also could be left until guest DTB files are supported.

3. I added clearing of the IRQ_GUEST bit in desc->status in release_irq
   because I didn't see where it would be unset.  This is probably not a big
   deal since not many situations arise where an IRQ is sometimes host and
   sometimes guest.

4. I changed vgic.c to use d->nr_irqs instead of assuming guests have no SPIs.
   This allowed me to use the extra_domu_irqs boot param to allow guests to
   have SPIs.

5. I added a check for whether an IRQ is already in use earlier in the flow of
   gic_route_irq_to_guest(), so that desc->handler is not messed with before
   __setup_irq does the check and returns an error.  Also, gic_set_irq_properties
   will be avoided in the error case.

I rebased to commit 9eabb07 and verified my changes.  I needed the fix in gic_irq_shutdown,
otherwise my release_irq changes caused other IRQs to be disabled when a domain was destroyed.

Thanks,
Eric

From 551f174c9653def01890fadd2dd769ab8a9acde7 Mon Sep 17 00:00:00 2001
From: Eric Trudeau <etrudeau@broadcom.com>
Date: Thu, 11 Jul 2013 20:03:51 -0400
Subject: [PATCH] Add support for Guest physdev irqs

---
 xen/arch/arm/domain.c  | 16 ++++++++++++++++
 xen/arch/arm/gic.c     | 15 ++++++++++-----
 xen/arch/arm/physdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
 xen/arch/arm/vgic.c    |  5 +----
 4 files changed, 73 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 4c434a1..52d3429 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -31,6 +31,8 @@
 #include <asm/gic.h>
 #include "vtimer.h"
 #include "vpl011.h"
+#include <xen/iocap.h>
+#include <xen/irq.h>

 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);

@@ -513,8 +515,22 @@ fail:
     return rc;
 }

+static int release_domain_irqs(struct domain *d)
+{
+    int i;
+    for (i = 0; i <= d->nr_pirqs; i++) {
+        if (irq_access_permitted(d, i)) {
+            release_irq(i);
+        }
+    }
+    return 0;
+}
+
+
 void arch_domain_destroy(struct domain *d)
 {
+    if (d->irq_caps != NULL)
+        release_domain_irqs(d);
     p2m_teardown(d);
     domain_vgic_free(d);
     domain_uart0_free(d);
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index cafb681..1f576d1 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -510,7 +510,7 @@ void gic_route_spis(void)
     }
 }

-void __init release_irq(unsigned int irq)
+void release_irq(unsigned int irq)
 {
     struct irq_desc *desc;
     unsigned long flags;
@@ -522,6 +522,7 @@ void __init release_irq(unsigned int irq)
     action = desc->action;
     desc->action  = NULL;
     desc->status |= IRQ_DISABLED;
+    desc->status &= ~IRQ_GUEST;

     spin_lock(&gic.lock);
     desc->handler->shutdown(desc);
@@ -707,6 +708,12 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     spin_lock_irqsave(&desc->lock, flags);
     spin_lock(&gic.lock);

+    if ( desc->action != NULL )
+    {
+        retval = -EBUSY;
+        goto out;
+    }
+
     desc->handler = &gic_guest_irq_type;
     desc->status |= IRQ_GUEST;

@@ -715,12 +722,10 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);

     retval = __setup_irq(desc, irq->irq, action);
-    if (retval) {
-        xfree(action);
-        goto out;
-    }

 out:
+    if (retval)
+        xfree(action);
     spin_unlock(&gic.lock);
     spin_unlock_irqrestore(&desc->lock, flags);
     return retval;
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index 61b4a18..8a5f770 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -9,12 +9,56 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <asm/hypercall.h>
+#include <public/physdev.h>
+#include <xen/guest_access.h>
+#include <xen/irq.h>
+#include <xen/sched.h>
+#include <asm/gic.h>


 int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
-    return -ENOSYS;
+    int ret;
+
+    switch ( cmd )
+    {
+    case PHYSDEVOP_map_pirq: {
+        physdev_map_pirq_t map;
+        struct dt_irq irq;
+        struct domain *d;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&map, arg, 1) != 0 )
+            break;
+
+        d = rcu_lock_domain_by_any_id(map.domid);
+        if ( d == NULL ) {
+            ret = -ESRCH;
+            break;
+        }
+
+        irq.irq = map.pirq;
+        irq.type = DT_IRQ_TYPE_LEVEL_MASK;
+
+        ret = gic_route_irq_to_guest(d, &irq, "GuestIRQ");
+        if (!ret)
+            printk("%s gic_route_irq_to_guest added IRQ %d to dom%d\n",
+                   __FUNCTION__, irq.irq, d->domain_id);
+
+        rcu_unlock_domain(d);
+
+        if (!ret && __copy_to_guest(arg, &map, 1) )
+            ret = -EFAULT;
+        break;
+    }
+
+    default:
+        printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
 }

 /*
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 2e4b11f..0ebcdac 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
     /* Currently nr_lines in vgic and gic doesn't have the same meanings
      * Here nr_lines = number of SPIs
      */
-    if ( d->domain_id == 0 )
-        d->arch.vgic.nr_lines = gic_number_lines() - 32;
-    else
-        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
+    d->arch.vgic.nr_lines = d->nr_pirqs - 32;

     d->arch.vgic.shared_irqs =
         xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
--
1.8.1.2

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-12  0:25                               ` Eric Trudeau
@ 2013-07-12 10:41                                 ` Julien Grall
  2013-07-12 17:45                                   ` Eric Trudeau
  0 siblings, 1 reply; 25+ messages in thread
From: Julien Grall @ 2013-07-12 10:41 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Stefano.Stabellini, Ian Campbell, xen-devel

On 12 July 2013 01:25, Eric Trudeau <etrudeau@broadcom.com> wrote:
>> -----Original Message-----
>> From: Ian Campbell [mailto:ian.campbell@citrix.com]
>> Sent: Thursday, July 11, 2013 1:18 PM
>> To: Eric Trudeau
>> Cc: xen-devel; Julien Grall; Stefano.Stabellini@citrix.com
>> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
>> Extensions
>>
>> On Thu, 2013-07-11 at 16:46 +0000, Eric Trudeau wrote:
>> > > > >> > What functions should I call to implement  XEN_DOMCTL_bind_pt_irq
>> on
>> > > > >> ARM?
>> > > > >>
>> > > > >> There's a function like route_irq_to_guest which we use to route IRQs
>> to
>> > > > >> dom0 during boot. In principle that could also be used to reroute an IRQ
>> > > > >> to a guest, but I'm not sure how it will interact with the reassignment,
>> > > > >> since in your case the IRQ starts off bound to dom0. Hopefully it's just
>> > > > >> a small change to make it work for this case.
>> > > > >>
>> >
>> > I have IRQs for devices being passed to ARM DomU guests now.  Thanks for
>> your
>> > help.
>>
>> Excellent! Do you have any useful patches?
>>
>
> I added code in the arm domain.c to release the IRQs when a domain is destroyed.
> I am providing my changes, but I believe there may be more work to have a clean
> solution.  Specifically, the following items may need to be addressed.

Thanks for that patch!

> 1.      The dom.cfg "irqs" section has the 32 added to the device-tree IRQ number because
>            I wasn't sure where to add the translation.  This can be cleaned up when
>            guests are able to be given DTB files and Xen can parse them.
>
> 2.      I assume level-triggered IRQs since the "irqs" list is only irq numbers.  This also
>            could be left until guest DTB files are supported.
>
> 3.      I added clearing of the IRQ_GUEST bit in the desc->status in release_irq because
>             I didn't see where it would be unset.  This is probably not a big deal since not
>             many situations arise where an IRQ is sometimes host and sometimes guest.

Could you send a separate patch for this fix?

> 4.      I changed vgic.c to use d->nr_pirqs instead of assuming guests have no SPIs.
>              This allowed me to use the extra_domu_irqs boot param to allow guests to
>              have SPIs.

How do you define the number of SPIs for dom0?  Also with extra_dom0_irqs?

By default nr_pirqs = static_nr_irqs + extra_dom{0,U}_irqs.

On ARM, static_nr_irqs is 1024, so you are allocating many more IRQs
than the GIC actually supports.

In any case, the number of IRQs per guest must not be "hardcoded" via the
command line. This value is different on each board.

For dom0 we can use the same number as the host, and for a guest we can
pass a parameter during domain creation to say how many SPIs the guest
needs.

> 5.      I added a check for whether an IRQ was already in use, earlier in the flow of
>              gic_route_irq_to_guest() so that the desc->handler is not messed with before
>              __setup_irq does the check and returns an error.  Also, gic_set_irq_properties
>              will be avoided in the error case as well.
>
> I rebased to commit 9eabb07 and verified my changes.  I needed the fix in gic_irq_shutdown,
> otherwise my release_irq changes caused other IRQs to be disabled when a domain was destroyed.

I'm surprised; this issue should have been fixed by commit
751554b. I don't see a fix in this patch, do you have one?

> From 551f174c9653def01890fadd2dd769ab8a9acde7 Mon Sep 17 00:00:00 2001
> From: Eric Trudeau <etrudeau@broadcom.com>
> Date: Thu, 11 Jul 2013 20:03:51 -0400
> Subject: [PATCH] Add support for Guest physdev irqs
>
> ---
>  xen/arch/arm/domain.c  | 16 ++++++++++++++++
>  xen/arch/arm/gic.c     | 15 ++++++++++-----
>  xen/arch/arm/physdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
>  xen/arch/arm/vgic.c    |  5 +----
>  4 files changed, 73 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 4c434a1..52d3429 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -31,6 +31,8 @@
>  #include <asm/gic.h>
>  #include "vtimer.h"
>  #include "vpl011.h"
> +#include <xen/iocap.h>
> +#include <xen/irq.h>
>
>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>
> @@ -513,8 +515,22 @@ fail:
>      return rc;
>  }
>
> +static int release_domain_irqs(struct domain *d)
> +{
> +    int i;
> +    for (i = 0; i <= d->nr_pirqs; i++) {
> +        if (irq_access_permitted(d, i)) {
> +            release_irq(i);
> +        }
> +    }
> +    return 0;
> +}

As you may know, release_irq will spin until the IRQ_INPROGRESS flag
is unset. This is done by the maintenance handler.

Now, let's say the domain has crashed and the IRQ has not yet been EOIed:
you will spin forever.

> +
>  void arch_domain_destroy(struct domain *d)
>  {
> +    if (d->irq_caps != NULL)
You don't need this check.
During domain creation, Xen ensures that irq_caps is not NULL.

> +        release_domain_irqs(d);
>      p2m_teardown(d);
>      domain_vgic_free(d);
>      domain_uart0_free(d);
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index cafb681..1f576d1 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -510,7 +510,7 @@ void gic_route_spis(void)
>      }
>  }
>
> -void __init release_irq(unsigned int irq)
> +void release_irq(unsigned int irq)
>  {
>      struct irq_desc *desc;
>      unsigned long flags;
> @@ -522,6 +522,7 @@ void __init release_irq(unsigned int irq)
>      action = desc->action;
>      desc->action  = NULL;
>      desc->status |= IRQ_DISABLED;
> +    desc->status &= ~IRQ_GUEST;
>
>      spin_lock(&gic.lock);
>      desc->handler->shutdown(desc);
> @@ -707,6 +708,12 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>      spin_lock_irqsave(&desc->lock, flags);
>      spin_lock(&gic.lock);
>
> +    if ( desc->action != NULL )
> +    {
> +        retval = -EBUSY;
> +        goto out;
> +    }
> +
>      desc->handler = &gic_guest_irq_type;
>      desc->status |= IRQ_GUEST;
>
> @@ -715,12 +722,10 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>      gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);
>
>      retval = __setup_irq(desc, irq->irq, action);
> -    if (retval) {
> -        xfree(action);
> -        goto out;
> -    }
>
>  out:
> +    if (retval)
> +        xfree(action);
>      spin_unlock(&gic.lock);
>      spin_unlock_irqrestore(&desc->lock, flags);
>      return retval;
> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> index 61b4a18..8a5f770 100644
> --- a/xen/arch/arm/physdev.c
> +++ b/xen/arch/arm/physdev.c
> @@ -9,12 +9,56 @@
>  #include <xen/lib.h>
>  #include <xen/errno.h>
>  #include <asm/hypercall.h>
> +#include <public/physdev.h>
> +#include <xen/guest_access.h>
> +#include <xen/irq.h>
> +#include <xen/sched.h>
> +#include <asm/gic.h>
>
>
>  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
> -    return -ENOSYS;
> +    int ret;
> +
> +    switch ( cmd )
> +    {
> +    case PHYSDEVOP_map_pirq: {
> +        physdev_map_pirq_t map;
> +        struct dt_irq irq;
> +        struct domain *d;
> +
> +        ret = -EFAULT;
> +        if ( copy_from_guest(&map, arg, 1) != 0 )
> +            break;
> +
> +        d = rcu_lock_domain_by_any_id(map.domid);
> +        if ( d == NULL ) {
> +            ret = -ESRCH;
> +            break;
> +        }

Missing sanity check on the map.pirq value.

> +        irq.irq = map.pirq;
> +        irq.type = DT_IRQ_TYPE_LEVEL_MASK;
> +
> +        ret = gic_route_irq_to_guest(d, &irq, "GuestIRQ");

Do you plan to handle non 1:1 IRQ mapping?
> How does your IRQ mapping work if the IRQ is already mapped to dom0?

> +        if (!ret)
> +            printk("%s gic_route_irq_to_guest added IRQ %d to dom%d\n",
> +                   __FUNCTION__, irq.irq, d->domain_id);
> +
> +        rcu_unlock_domain(d);
> +
> +        if (!ret && __copy_to_guest(arg, &map, 1) )
> +            ret = -EFAULT;
> +        break;
> +    }
> +
> +    default:
> +        printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    return ret;
>  }
>
>  /*
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..0ebcdac 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
>      /* Currently nr_lines in vgic and gic doesn't have the same meanings
>       * Here nr_lines = number of SPIs
>       */
> -    if ( d->domain_id == 0 )
> -        d->arch.vgic.nr_lines = gic_number_lines() - 32;
> -    else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +    d->arch.vgic.nr_lines = d->nr_pirqs - 32;

If you want to stick with the 1:1 mapping, the best solution
is to set nr_lines to "gic_number_lines() - 32" for all domains.

--
Julien Grall


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-12 10:41                                 ` Julien Grall
@ 2013-07-12 17:45                                   ` Eric Trudeau
  2013-07-15 17:45                                     ` Stefano Stabellini
  0 siblings, 1 reply; 25+ messages in thread
From: Eric Trudeau @ 2013-07-12 17:45 UTC (permalink / raw)
  To: Julien Grall; +Cc: Stefano.Stabellini, Ian Campbell, xen-devel

> >
> > I added code in the arm domain.c to release the IRQs when a domain is
> destroyed.
> > I am providing my changes, but I believe there may be more work to have a
> clean
> > solution.  Specifically, the following items may need to be addressed.
> 
> Thanks for that patch!
> 
> > 1.      The dom.cfg "irqs" section has the 32 added to the device-tree IRQ
> number because
> >            I wasn't sure where to add the translation.  This can be cleaned up when
> >            guests are able to be given DTB files and Xen can parse them.
> >
> > 2.      I assume level-triggered IRQs since the "irqs" list is only irq numbers.  This
> also
> >            could be left until guest DTB files are supported.
> >
> > 3.      I added clearing of the IRQ_GUEST bit in the desc->status in release_irq
> because
> >             I didn't see where it would be unset.  This is probably not a big deal since
> not
> >             many situations arise where an IRQ is sometimes host and sometimes
> guest.
> 
> Could you send a separate patch for this fix?
> 

I have included a separate patch for just this fix.

> > 4.      I changed vgic.c to use d->nr_pirqs instead of assuming guests have
> >              no SPIs.  This allowed me to use the extra_domu_irqs boot param
> >              to allow guests to have SPIs.
> 
> How do you define the number of SPIs for dom0?  Also with extra_dom0_irqs?
> 
> By default nr_pirqs = static_nr_irqs + extra_dom{0,U}_irqs.
> 
> On ARM, static_nr_irqs is 1024, so you are allocating many more IRQs
> than the GIC actually supports.
> 
> In any case, the number of IRQs per guest must not be "hardcoded" via the
> command line. This value is different on each board.
> 
> For dom0 we can use the same number as the host, and for a guest we can
> pass a parameter during domain creation to say how many SPIs the guest
> needs.
> 

I will use gic_number_lines() - 32, as you suggest in your comment on the vgic.c changes
below, and only support 1:1 IRQ mapping for now.  This and other changes based
on your comments will be sent in a new patch shortly.

> > 5.      I added a check for whether an IRQ was already in use, earlier in the flow
> of
> >              gic_route_irq_to_guest() so that the desc->handler is not messed with
> before
> >              __setup_irq does the check and returns an error.  Also,
> gic_set_irq_properties
> >              will be avoided in the error case as well.
> >
> > I rebased to commit 9eabb07 and verified my changes.  I needed the fix in
> > gic_irq_shutdown, otherwise my release_irq changes caused other IRQs to be
> > disabled when a domain was destroyed.
> 
> I'm surprised; this issue should have been fixed by commit
> 751554b. I don't see a fix in this patch, do you have one?
> 

My apologies for the confusion.  I was originally testing on master from before your
751554b commit and spent an hour or so debugging the issue until I remembered your
comments about the fix relating to ICENABLER.  After rebasing to master yesterday,
with your fix in place, I was able to create and destroy guests repeatedly.

> > From 551f174c9653def01890fadd2dd769ab8a9acde7 Mon Sep 17 00:00:00
> 2001
> > From: Eric Trudeau <etrudeau@broadcom.com>
> > Date: Thu, 11 Jul 2013 20:03:51 -0400
> > Subject: [PATCH] Add support for Guest physdev irqs
> >
> > ---
> >  xen/arch/arm/domain.c  | 16 ++++++++++++++++
> >  xen/arch/arm/gic.c     | 15 ++++++++++-----
> >  xen/arch/arm/physdev.c | 48
> ++++++++++++++++++++++++++++++++++++++++++++++--
> >  xen/arch/arm/vgic.c    |  5 +----
> >  4 files changed, 73 insertions(+), 11 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 4c434a1..52d3429 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -31,6 +31,8 @@
> >  #include <asm/gic.h>
> >  #include "vtimer.h"
> >  #include "vpl011.h"
> > +#include <xen/iocap.h>
> > +#include <xen/irq.h>
> >
> >  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
> >
> > @@ -513,8 +515,22 @@ fail:
> >      return rc;
> >  }
> >
> > +static int release_domain_irqs(struct domain *d)
> > +{
> > +    int i;
> > +    for (i = 0; i <= d->nr_pirqs; i++) {
> > +        if (irq_access_permitted(d, i)) {
> > +            release_irq(i);
> > +        }
> > +    }
> > +    return 0;
> > +}
> 
> As you may know, release_irq will spin until the IRQ_INPROGRESS flag
> is unset. This is done by the maintenance handler.
> 
> Now, let's say the domain has crashed and the IRQ has not yet been EOIed:
> you will spin forever.
> 
> > +
> >  void arch_domain_destroy(struct domain *d)
> >  {
> > +    if (d->irq_caps != NULL)
> You don't need this check.
> During domain creation, Xen ensures that irq_caps is not NULL.
> 
> > +        release_domain_irqs(d);
> >      p2m_teardown(d);
> >      domain_vgic_free(d);
> >      domain_uart0_free(d);
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index cafb681..1f576d1 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -510,7 +510,7 @@ void gic_route_spis(void)
> >      }
> >  }
> >
> > -void __init release_irq(unsigned int irq)
> > +void release_irq(unsigned int irq)
> >  {
> >      struct irq_desc *desc;
> >      unsigned long flags;
> > @@ -522,6 +522,7 @@ void __init release_irq(unsigned int irq)
> >      action = desc->action;
> >      desc->action  = NULL;
> >      desc->status |= IRQ_DISABLED;
> > +    desc->status &= ~IRQ_GUEST;
> >
> >      spin_lock(&gic.lock);
> >      desc->handler->shutdown(desc);
> > @@ -707,6 +708,12 @@ int gic_route_irq_to_guest(struct domain *d, const
> struct dt_irq *irq,
> >      spin_lock_irqsave(&desc->lock, flags);
> >      spin_lock(&gic.lock);
> >
> > +    if ( desc->action != NULL )
> > +    {
> > +        retval = -EBUSY;
> > +        goto out;
> > +    }
> > +
> >      desc->handler = &gic_guest_irq_type;
> >      desc->status |= IRQ_GUEST;
> >
> > @@ -715,12 +722,10 @@ int gic_route_irq_to_guest(struct domain *d, const
> struct dt_irq *irq,
> >      gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);
> >
> >      retval = __setup_irq(desc, irq->irq, action);
> > -    if (retval) {
> > -        xfree(action);
> > -        goto out;
> > -    }
> >
> >  out:
> > +    if (retval)
> > +        xfree(action);
> >      spin_unlock(&gic.lock);
> >      spin_unlock_irqrestore(&desc->lock, flags);
> >      return retval;
> > diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> > index 61b4a18..8a5f770 100644
> > --- a/xen/arch/arm/physdev.c
> > +++ b/xen/arch/arm/physdev.c
> > @@ -9,12 +9,56 @@
> >  #include <xen/lib.h>
> >  #include <xen/errno.h>
> >  #include <asm/hypercall.h>
> > +#include <public/physdev.h>
> > +#include <xen/guest_access.h>
> > +#include <xen/irq.h>
> > +#include <xen/sched.h>
> > +#include <asm/gic.h>
> >
> >
> >  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >  {
> > -    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__,
> cmd);
> > -    return -ENOSYS;
> > +    int ret;
> > +
> > +    switch ( cmd )
> > +    {
> > +    case PHYSDEVOP_map_pirq: {
> > +        physdev_map_pirq_t map;
> > +        struct dt_irq irq;
> > +        struct domain *d;
> > +
> > +        ret = -EFAULT;
> > +        if ( copy_from_guest(&map, arg, 1) != 0 )
> > +            break;
> > +
> > +        d = rcu_lock_domain_by_any_id(map.domid);
> > +        if ( d == NULL ) {
> > +            ret = -ESRCH;
> > +            break;
> > +        }
> 
> Missing sanity check on the map.pirq value.
> 
> > +        irq.irq = map.pirq;
> > +        irq.type = DT_IRQ_TYPE_LEVEL_MASK;
> > +
> > +        ret = gic_route_irq_to_guest(d, &irq, "GuestIRQ");
> 
> Do you plan to handle non 1:1 IRQ mapping?
> How does your IRQ mapping work if the IRQ is already mapped to dom0?
> 
> > +        if (!ret)
> > +            printk("%s gic_route_irq_to_guest added IRQ %d to dom%d\n",
> > +                   __FUNCTION__, irq.irq, d->domain_id);
> > +
> > +        rcu_unlock_domain(d);
> > +
> > +        if (!ret && __copy_to_guest(arg, &map, 1) )
> > +            ret = -EFAULT;
> > +        break;
> > +    }
> > +
> > +    default:
> > +        printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__,
> cmd);
> > +        ret = -ENOSYS;
> > +        break;
> > +    }
> > +
> > +    return ret;
> >  }
> >
> >  /*
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 2e4b11f..0ebcdac 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
> >      /* Currently nr_lines in vgic and gic doesn't have the same meanings
> >       * Here nr_lines = number of SPIs
> >       */
> > -    if ( d->domain_id == 0 )
> > -        d->arch.vgic.nr_lines = gic_number_lines() - 32;
> > -    else
> > -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > +    d->arch.vgic.nr_lines = d->nr_pirqs - 32;
> 
> If you want to stick with the 1:1 mapping, the best solution
> is to set nr_lines to "gic_number_lines() - 32" for all domains.
> 
> --
> Julien Grall


Here is the separate patch for the clearing of IRQ_GUEST in desc->status during release_irq
execution.

From 84cf2265c5eabffa9de538e9b35bca20fe8f55ef Mon Sep 17 00:00:00 2001
From: Eric Trudeau <etrudeau@broadcom.com>
Date: Fri, 12 Jul 2013 13:30:48 -0400
Subject: [PATCH] xen/arm:  Clear the IRQ_GUEST bit in desc->status when
 releasing an IRQ

While adding support for guest domU IRQs, I noticed that release_irq did
not clear the IRQ_GUEST bit in the IRQ's desc->status field.
This is probably not a big deal since not many situations are likely to arise
where an IRQ is sometimes host and sometimes guest.

Signed-off-by: Eric Trudeau <etrudeau@broadcom.com>
---
 xen/arch/arm/gic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 1487f27..ed15ec3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -524,6 +524,7 @@ void release_irq(unsigned int irq)
     action = desc->action;
     desc->action  = NULL;
     desc->status |= IRQ_DISABLED;
+    desc->status &= ~IRQ_GUEST;

     spin_lock(&gic.lock);
     desc->handler->shutdown(desc);
--
1.8.1.2


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-12 17:45                                   ` Eric Trudeau
@ 2013-07-15 17:45                                     ` Stefano Stabellini
  2013-07-19 14:20                                       ` Ian Campbell
  0 siblings, 1 reply; 25+ messages in thread
From: Stefano Stabellini @ 2013-07-15 17:45 UTC (permalink / raw)
  To: Eric Trudeau; +Cc: Julien Grall, Stefano.Stabellini, Ian Campbell, xen-devel

On Fri, 12 Jul 2013, Eric Trudeau wrote:
> Here is the separate patch for the clearing of IRQ_GUEST in desc->status during release_irq
> execution.
> 
> From 84cf2265c5eabffa9de538e9b35bca20fe8f55ef Mon Sep 17 00:00:00 2001
> From: Eric Trudeau <etrudeau@broadcom.com>
> Date: Fri, 12 Jul 2013 13:30:48 -0400
> Subject: [PATCH] xen/arm:  Clear the IRQ_GUEST bit in desc->status when
>  releasing an IRQ
> 
> While adding support for guest domU IRQs, I noticed that release_irq did
> not clear the IRQ_GUEST bit in the IRQ's desc->status field.
> This is probably not a big deal since not many situations are likely to arise
> where an IRQ is sometimes host and sometimes guest.
> 
> Signed-off-by: Eric Trudeau <etrudeau@broadcom.com>

We didn't notice this before because nobody calls release_irq at the
moment. In any case:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/gic.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 1487f27..ed15ec3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -524,6 +524,7 @@ void release_irq(unsigned int irq)
>      action = desc->action;
>      desc->action  = NULL;
>      desc->status |= IRQ_DISABLED;
> +    desc->status &= ~IRQ_GUEST;
> 
>      spin_lock(&gic.lock);
>      desc->handler->shutdown(desc);
> --
> 1.8.1.2
> 
> 


* Re: [XenARM] XEN tools for ARM with Virtualization Extensions
  2013-07-15 17:45                                     ` Stefano Stabellini
@ 2013-07-19 14:20                                       ` Ian Campbell
  0 siblings, 0 replies; 25+ messages in thread
From: Ian Campbell @ 2013-07-19 14:20 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Eric Trudeau, Julien Grall, Stefano.Stabellini, xen-devel

On Mon, 2013-07-15 at 18:45 +0100, Stefano Stabellini wrote:
> On Fri, 12 Jul 2013, Eric Trudeau wrote:
> > Here is the separate patch for the clearing of IRQ_GUEST in desc->status during release_irq
> > execution.
> > 
> > From 84cf2265c5eabffa9de538e9b35bca20fe8f55ef Mon Sep 17 00:00:00 2001
> > From: Eric Trudeau <etrudeau@broadcom.com>
> > Date: Fri, 12 Jul 2013 13:30:48 -0400
> > Subject: [PATCH] xen/arm:  Clear the IRQ_GUEST bit in desc->status when
> >  releasing an IRQ
> > 
> > While adding support for guest domU IRQs, I noticed that release_irq did
> > not clear the IRQ_GUEST bit in the IRQ's desc->status field.
> > This is probably not a big deal since not many situations are likely to arise
> > where an IRQ is sometimes host and sometimes guest.
> > 
> > Signed-off-by: Eric Trudeau <etrudeau@broadcom.com>
> 
> We didn't notice this before because nobody calls release_irq at the
> moment. In any case:
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Applied, thanks.


end of thread, other threads:[~2013-07-19 14:20 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <FF3E5629F9FF6745ADE4EE690CEC81AD2ACE05F0@SJEXCHMB09.corp.ad.broadcom.com>
2013-06-26  9:10 ` [XenARM] XEN tools for ARM with Virtualization Extensions Ian Campbell
2013-06-26 15:09   ` Eric Trudeau
2013-06-26 16:45     ` Ian Campbell
2013-06-27 16:21       ` Eric Trudeau
2013-06-27 16:34         ` Ian Campbell
2013-06-27 19:01           ` Eric Trudeau
2013-06-27 19:33             ` Julien Grall
2013-06-27 20:19               ` Eric Trudeau
2013-06-28  7:47                 ` Ian Campbell
2013-06-27 22:40               ` Julien Grall
2013-07-09 17:10                 ` Eric Trudeau
2013-07-10 12:50                   ` Ian Campbell
2013-07-10 19:38                     ` Eric Trudeau
2013-07-10 22:04                       ` Julien Grall
2013-07-10 22:13                         ` Ian Campbell
2013-07-11 16:46                           ` Eric Trudeau
2013-07-11 17:18                             ` Ian Campbell
2013-07-12  0:25                               ` Eric Trudeau
2013-07-12 10:41                                 ` Julien Grall
2013-07-12 17:45                                   ` Eric Trudeau
2013-07-15 17:45                                     ` Stefano Stabellini
2013-07-19 14:20                                       ` Ian Campbell
2013-07-10 13:37                   ` Stefano Stabellini
2013-07-10 15:33                     ` Eric Trudeau
2013-07-10 15:37                       ` Stefano Stabellini
