* Creating a magic page for PV mem_access
@ 2013-06-01  1:24 Aravindh Puthiyaparambil (aravindp)
  2013-06-03  9:23 ` Tim Deegan
  0 siblings, 1 reply; 8+ messages in thread
From: Aravindh Puthiyaparambil (aravindp) @ 2013-06-01  1:24 UTC (permalink / raw)
  To: Tim Deegan (tim@xen.org); +Cc: xen-devel



Hi Tim,

I am trying to create a magic / special page for the PV mem_access. I am mimicking what is being done for the console page (alloc_magic_pages()) on the tools side. On the hypervisor side, I am planning on stashing the address of this page in the pv_domain structure akin to how the special pages are stored in params[] of the hvm_domain structure. With HVM domains, xc_set_hvm_param() is used by the tools to populate this on the hypervisor side. What is the method I should follow to do this for PV domains? I see how things work for the console, xenconsole pages as they get passed through the start_info structure. Do I need to implement an equivalent xc_set_pv_param() or do I use the start_info page to store the mem_access magic page address?

Thanks,
Aravindh
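
A simplified sketch of the tools-side piece being mimicked here (alloc_magic_pages() in tools/libxc/xc_dom_x86.c, with its p2m/start_info handling elided); the access_ring_pfn field is hypothetical and only illustrates the proposed extension:

    /* Simplified sketch of tools/libxc/xc_dom_x86.c:alloc_magic_pages()
     * (the p2m and start_info allocations it also does are elided).  The
     * access-ring line is hypothetical and only shows where a mem_access
     * page would slot in under the plan above. */
    static int alloc_magic_pages(struct xc_dom_image *dom)
    {
        /* Existing PV magic pages. */
        dom->xenstore_pfn = xc_dom_alloc_page(dom, "xenstore");
        dom->console_pfn  = xc_dom_alloc_page(dom, "console");

        /* Hypothetical: allocate a mem_access ring page the same way. */
        dom->access_ring_pfn = xc_dom_alloc_page(dom, "mem_access ring");

        return 0;
    }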



* Re: Creating a magic page for PV mem_access
  2013-06-01  1:24 Creating a magic page for PV mem_access Aravindh Puthiyaparambil (aravindp)
@ 2013-06-03  9:23 ` Tim Deegan
  2013-06-03 19:11   ` Aravindh Puthiyaparambil (aravindp)
  0 siblings, 1 reply; 8+ messages in thread
From: Tim Deegan @ 2013-06-03  9:23 UTC (permalink / raw)
  To: Aravindh Puthiyaparambil (aravindp); +Cc: xen-devel

Hi,

At 01:24 +0000 on 01 Jun (1370049844), Aravindh Puthiyaparambil (aravindp) wrote:
> I am trying to create a magic / special page for the PV mem_access. I
> am mimicking what is being done for the console page
> (alloc_magic_pages()) on the tools side. On the hypervisor side, I am
> planning on stashing the address of this page in the pv_domain
> structure akin to how the special pages are stored in params[] of the
> hvm_domain structure.

OK, can you back up a bit and describe what you're going to use this
page for?  A PV domain's 'magic' pages may not be quite what you want.
First, they're owned by the guest, so the guest can write to them (and
so they can't be trusted for doing hypervisor->dom0 communications).
And second, I'm not sure that mem_access pages really need to
saved/restored with the rest of the VM -- I'd have thought that you
could just set up a new, empty ring on the far side.
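
Setting up a new, empty ring on the far side amounts to the usual shared-ring initialisation the listener does anyway.  A minimal sketch, assuming the mem_event ring types from the public headers of that era:

    #include <xenctrl.h>
    #include <xen/mem_event.h>   /* mem_event_sring_t, mem_event_back_ring_t */

    /* Minimal sketch: (re)initialise an empty mem_event ring over a
     * freshly mapped ring page -- no ring state needs to be migrated. */
    static void init_access_ring(void *ring_page,
                                 mem_event_back_ring_t *back_ring)
    {
        SHARED_RING_INIT((mem_event_sring_t *)ring_page);
        BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page,
                       XC_PAGE_SIZE);
    }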

> With HVM domains, xc_set_hvm_param() is used by
> the tools to populate this on the hypervisor side. What is the method
> I should follow to do this for PV domains? I see how things work for
> the console, xenconsole pages as they get passed through the
> start_info structure. Do I need to implement an equivalent
> xc_set_pv_param() or do I use the start_info page to store the
> mem_access magic page address?

Again, that depends on what the page is used for.  If the guest needs to
access it, then it needs to find out about it somehow, but the usual way
to pass that kind of config info to the guest is using Xenstore.  If the
guest needs to be involved right from the start of day (i.e. before
Xenbus gets going) it might have to go in the start_info, but that's a
less attractive option.

Cheers,

Tim.


* Re: Creating a magic page for PV mem_access
  2013-06-03  9:23 ` Tim Deegan
@ 2013-06-03 19:11   ` Aravindh Puthiyaparambil (aravindp)
  2013-06-05 10:32     ` Tim Deegan
  0 siblings, 1 reply; 8+ messages in thread
From: Aravindh Puthiyaparambil (aravindp) @ 2013-06-03 19:11 UTC (permalink / raw)
  To: Tim Deegan; +Cc: xen-devel

> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Monday, June 03, 2013 2:24 AM
> To: Aravindh Puthiyaparambil (aravindp)
> Cc: xen-devel@lists.xensource.com
> Subject: Re: Creating a magic page for PV mem_access
> 
> Hi,
> 
> At 01:24 +0000 on 01 Jun (1370049844), Aravindh Puthiyaparambil (aravindp)
> wrote:
> > I am trying to create a magic / special page for the PV mem_access. I
> > am mimicking what is being done for the console page
> > (alloc_magic_pages()) on the tools side. On the hypervisor side, I am
> > planning on stashing the address of this page in the pv_domain
> > structure akin to how the special pages are stored in params[] of the
> > hvm_domain structure.
> 
> OK, can you back up a bit and describe what you're going to use this page
> for?  A PV domain's 'magic' pages may not be quite what you want.
> First, they're owned by the guest, so the guest can write to them (and so
> they can't be trusted for doing hypervisor->dom0 communications).

You are right. I didn't realize magic pages are writable by the guest. So this is not a good option.

> And second, I'm not sure that mem_access pages really need to
> saved/restored with the rest of the VM -- I'd have thought that you could
> just set up a new, empty ring on the far side.

I am trying to mimic what is being done in the HVM side for mem_event pages. In setup_guest() (xc_hvm_build_x86.c), I see "special pages" being created for console, paging, access and sharing ring pages. Then xc_set_hvm_param() is used to inform the hypervisor. When a mem_event / mem_access client comes up, it uses xc_get_hvm_param() to get the pfn and maps it in. I want to do something similar for PV.
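
The HVM builder side described above looks roughly like this (condensed from xc_hvm_build_x86.c of that era; the exact set and numbering of special pages is simplified):

    #include <xenctrl.h>

    /* Condensed sketch of the HVM "special pages" setup: the builder
     * reserves a block of pfns just below 0xff000 and records each one
     * as an HVM param for later consumers to look up. */
    #define SPECIALPAGE_PAGING   0
    #define SPECIALPAGE_ACCESS   1
    #define SPECIALPAGE_SHARING  2
    #define SPECIALPAGE_CONSOLE  3      /* ordering simplified here */
    #define NR_SPECIAL_PAGES     4
    #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))

    static void set_special_params(xc_interface *xch, domid_t dom)
    {
        xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
                         special_pfn(SPECIALPAGE_CONSOLE));
        xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
                         special_pfn(SPECIALPAGE_PAGING));
        xc_set_hvm_param(xch, dom, HVM_PARAM_ACCESS_RING_PFN,
                         special_pfn(SPECIALPAGE_ACCESS));
        xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
                         special_pfn(SPECIALPAGE_SHARING));
    }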

Seeing the console mentioned here is what sent me down the path of trying to do the same for the PV access ring page, akin to what is done for the PV console page.

> > With HVM domains, xc_set_hvm_param() is used by the tools to populate
> > this on the hypervisor side. What is the method I should follow to do
> > this for PV domains? I see how things work for the console, xenconsole
> > pages as they get passed through the start_info structure. Do I need
> > to implement an equivalent
> > xc_set_pv_param() or do I use the start_info page to store the
> > mem_access magic page address?
> 
> Again, that depends on what the page is used for.  If the guest needs to
> access it, then it needs to find out about it somehow, but the usual way to
> pass that kind of config info to the guest is using Xenstore.  If the guest
> needs to be involved right from the start of day (i.e. before Xenbus gets
> going) it might have to go in the start_info, but that's a less attractive option.

The guest does not need to know about this page so we definitely do not need to use start_info.

Thanks,
Aravindh


* Re: Creating a magic page for PV mem_access
  2013-06-03 19:11   ` Aravindh Puthiyaparambil (aravindp)
@ 2013-06-05 10:32     ` Tim Deegan
  2013-06-06  0:14       ` Aravindh Puthiyaparambil (aravindp)
  0 siblings, 1 reply; 8+ messages in thread
From: Tim Deegan @ 2013-06-05 10:32 UTC (permalink / raw)
  To: Aravindh Puthiyaparambil (aravindp); +Cc: xen-devel

Hi,

At 19:11 +0000 on 03 Jun (1370286719), Aravindh Puthiyaparambil (aravindp) wrote:
> > > I am trying to create a magic / special page for the PV mem_access. I
> > > am mimicking what is being done for the console page
> > > (alloc_magic_pages()) on the tools side. On the hypervisor side, I am
> > > planning on stashing the address of this page in the pv_domain
> > > structure akin to how the special pages are stored in params[] of the
> > > hvm_domain structure.
> > 
> > OK, can you back up a bit and describe what you're going to use this page
> > for?  A PV domain's 'magic' pages may not be quite what you want.
> > First, they're owned by the guest, so the guest can write to them (and so
> > they can't be trusted for doing hypervisor->dom0 communications).
> 
> You are right. I didn't realize magic pages are writable by the
> guest. So this is not a good option.
> 
> > And second, I'm not sure that mem_access pages really need to
> > saved/restored with the rest of the VM -- I'd have thought that you could
> > just set up a new, empty ring on the far side.
> 
> I am trying to mimic what is being done in the HVM side for mem_event
> pages. In setup_guest() (xc_hvm_build_x86.c), I see "special pages"
> being created for console, paging, access and sharing ring pages. Then
> xc_set_hvm_param() is used to inform the hypervisor. When a mem_event
> / mem_access client comes up, it uses xc_get_hvm_param() to get the
> pfn and maps it in. I want to do something similar for PV.

Yep.  I think it might be better to invent a new interface for those
pages, rather than using domain memory for them.  We can then deprecate
the old HVM-specific params interface.

Tim.
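
Nothing like that interface existed at the time, so purely as an illustration of the idea (every name below is hypothetical), a VM-type-agnostic replacement for the per-ring HVM params might look something like:

    /* Hypothetical sketch only -- neither the constants nor the function
     * below existed at the time of this thread.  The idea: the ring page
     * is Xen-owned rather than domain memory, and one call works for PV
     * and HVM guests alike. */
    #define XEN_MEM_EVENT_RING_PAGING   0
    #define XEN_MEM_EVENT_RING_ACCESS   1
    #define XEN_MEM_EVENT_RING_SHARING  2

    /* Ask Xen for the page backing the given ring and map it into the
     * calling (dom0) process, replacing the
     * xc_get_hvm_param(..., HVM_PARAM_*_RING_PFN, ...) + map dance. */
    void *xc_mem_event_ring_map(xc_interface *xch, domid_t domid, int ring);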


* Re: Creating a magic page for PV mem_access
  2013-06-05 10:32     ` Tim Deegan
@ 2013-06-06  0:14       ` Aravindh Puthiyaparambil (aravindp)
  0 siblings, 0 replies; 8+ messages in thread
From: Aravindh Puthiyaparambil (aravindp) @ 2013-06-06  0:14 UTC (permalink / raw)
  To: Tim Deegan; +Cc: xen-devel

> At 19:11 +0000 on 03 Jun (1370286719), Aravindh Puthiyaparambil (aravindp)
> wrote:
> > > > I am trying to create a magic / special page for the PV
> > > > mem_access. I am mimicking what is being done for the console page
> > > > (alloc_magic_pages()) on the tools side. On the hypervisor side, I
> > > > am planning on stashing the address of this page in the pv_domain
> > > > structure akin to how the special pages are stored in params[] of
> > > > the hvm_domain structure.
> > >
> > > OK, can you back up a bit and describe what you're going to use this
> > > page for?  A PV domain's 'magic' pages may not be quite what you want.
> > > First, they're owned by the guest, so the guest can write to them
> > > (and so they can't be trusted for doing hypervisor->dom0
> communications).
> >
> > You are right. I didn't realize magic pages are writable by the guest.
> > So this is not a good option.

BTW, are the HVM special pages (access, paging, sharing) accessible by the guest kernel?

> > > And second, I'm not sure that mem_access pages really need to
> > > saved/restored with the rest of the VM -- I'd have thought that you
> > > could just set up a new, empty ring on the far side.
> >
> > I am trying to mimic what is being done in the HVM side for mem_event
> > pages. In setup_guest() (xc_hvm_build_x86.c), I see "special pages"
> > being created for console, paging, access and sharing ring pages. Then
> > xc_set_hvm_param() is used to inform the hypervisor. When a
> mem_event
> > / mem_access client comes up, it uses xc_get_hvm_param() to get the
> > pfn and maps it in. I want to do something similar for PV.
> 
> Yep.  I think it might be better to invent up a new interface for those pages,
> rather than using domain memory for them.  We can then deprecate the old
> HVM-specific params interface.

Are you saying that these pages (access, sharing, paging) should live in Dom0 instead of the domain's memory and only be created when a mem_event listener is active? Or am I completely off track?

Thanks,
Aravindh


* Re: Creating a magic page for PV mem_access
  2013-06-06  5:33   ` Aravindh Puthiyaparambil (aravindp)
@ 2013-06-06  5:38     ` Andres Lagar-Cavilla
  0 siblings, 0 replies; 8+ messages in thread
From: Andres Lagar-Cavilla @ 2013-06-06  5:38 UTC (permalink / raw)
  To: Aravindh Puthiyaparambil (aravindp)
  Cc: Andres Lagar-Cavilla, Stefano Stabellini, Tim (Xen.org), xen-devel

On Jun 6, 2013, at 1:33 AM, "Aravindh Puthiyaparambil (aravindp)" <aravindp@cisco.com> wrote:

>>>> At 19:11 +0000 on 03 Jun (1370286719), Aravindh Puthiyaparambil (aravindp) wrote:
>>>>>>> I am trying to create a magic / special page for the PV
>>>>>>> mem_access. I am mimicking what is being done for the console page
>>>>>>> (alloc_magic_pages()) on the tools side. On the hypervisor side, I
>>>>>>> am planning on stashing the address of this page in the pv_domain
>>>>>>> structure akin to how the special pages are stored in params[] of
>>>>>>> the hvm_domain structure.
>>>>>> 
>>>>>> OK, can you back up a bit and describe what you're going to use
>>>>>> this page for?  A PV domain's 'magic' pages may not be quite what you
>> want.
>>>>>> First, they're owned by the guest, so the guest can write to them
>>>>>> (and so they can't be trusted for doing hypervisor->dom0
>>>> communications).
>>>>> 
>>>>> You are right. I didn't realize magic pages are writable by the guest.
>>>>> So this is not a good option.
>>> 
>>> BTW, are the HVM special pages (access, paging, sharing) accessible by the
>> guest kernel?
>> 
>> Aravindh,
>> I am responsible for this mess, so I'll add some information here.
>> 
>> Yes the guest kernel can see these pages. Because of the way the e820 is laid
>> out, the guest kernel should never venture in there, but nothing prevents a
>> cunning/mischievous guest from doing so.
>> 
>> Importantly, the pages are not populated by the builder or BIOS. If you look
>> at the in-tree tools (xenpaging, xen-access), the pages are populated,
>> mapped, and removed from the physmap in a compact yet non-atomic
>> sequence. In this manner, the pages are no longer available to the guest by
>> the end of that sequence, and will be automatically garbage collected once
>> the ring-consuming dom0 tool dies. There is a window of opportunity for the
>> guest to screw things, and the recommendation is to carry out the above
>> sequence with the domain paused, to make it atomic guest-wise.
> 
> This is what I am doing too. I was worried about the case when the special page is pre-populated and a malicious guest has mapped in this page before a mem_event listener has been attached to the domain. But I guess removing from the physmap should cause the earlier mapping to become invalid.

Yeah the malicious guest will blow its brains out. The worst that could happen is DoS for the evil guy.

> 
>> I discussed this with Stefano at a Hackathon about a year ago, briefly. The
>> consensus was that the "true" solution is to create a new (set of)
>> XENMAPSPACE_* variants for the xen add to physmap calls.
>> 
>>> 
>>>>>> And second, I'm not sure that mem_access pages really need to
>>>>>> saved/restored with the rest of the VM -- I'd have thought that you
>>>>>> could just set up a new, empty ring on the far side.
>>>>> 
>>>>> I am trying to mimic what is being done in the HVM side for
>>>>> mem_event pages. In setup_guest() (xc_hvm_build_x86.c), I see
>> "special pages"
>>>>> being created for console, paging, access and sharing ring pages.
>>>>> Then
>>>>> xc_set_hvm_param() is used to inform the hypervisor. When a
>>>> mem_event
>>>>> / mem_access client comes up, it uses xc_get_hvm_param() to get the
>>>>> pfn and maps it in. I want to do something similar for PV.
>>>> 
>>>> Yep.  I think it might be better to invent up a new interface for
>>>> those pages, rather than using domain memory for them.  We can then
>>>> deprecate the old HVM-specific params interface.
>>> 
>>> Are you saying that these pages (access, sharing, paging) should live in
>> Dom0 instead of the domain's memory and only be created when a
>> mem_event listener is active? Or am I completely off track?
>> 
>> And here is why the mess exists in the first place. Originally, these pages
>> were allocated by the dom0 tool itself, and passed down to Xen, which
>> would map them by resolving the user-space vaddr of the dom0 tool to the
>> mfn. This had the horrible property of letting the hypervisor corrupt random
>> dom0 memory if the tool crashed and the page got reused, or even if the
>> page got migrated by the Linux kernel.
> 
> Yes, that does indeed look to be a risky approach to take. For now, I will stick with the HVM approach for PV guests too.
> 
>> A kernel driver in dom0 would have also solved the problem, but now you
>> are involving the burden of the entire set of dom0 versions out there.
>> 
>> Hope this helps
> 
> That was immensely helpful. Thank you.
No problem
Andres
> 
> Aravindh
> 


* Re: Creating a magic page for PV mem_access
  2013-06-06  4:07 ` Andres Lagar-Cavilla
@ 2013-06-06  5:33   ` Aravindh Puthiyaparambil (aravindp)
  2013-06-06  5:38     ` Andres Lagar-Cavilla
  0 siblings, 1 reply; 8+ messages in thread
From: Aravindh Puthiyaparambil (aravindp) @ 2013-06-06  5:33 UTC (permalink / raw)
  To: Andres Lagar-Cavilla; +Cc: Stefano Stabellini, Tim (Xen.org), xen-devel

> >> At 19:11 +0000 on 03 Jun (1370286719), Aravindh Puthiyaparambil (aravindp) wrote:
> >>>>> I am trying to create a magic / special page for the PV
> >>>>> mem_access. I am mimicking what is being done for the console page
> >>>>> (alloc_magic_pages()) on the tools side. On the hypervisor side, I
> >>>>> am planning on stashing the address of this page in the pv_domain
> >>>>> structure akin to how the special pages are stored in params[] of
> >>>>> the hvm_domain structure.
> >>>>
> >>>> OK, can you back up a bit and describe what you're going to use
> >>>> this page for?  A PV domain's 'magic' pages may not be quite what you
> want.
> >>>> First, they're owned by the guest, so the guest can write to them
> >>>> (and so they can't be trusted for doing hypervisor->dom0
> >> communications).
> >>>
> >>> You are right. I didn't realize magic pages are writable by the guest.
> >>> So this is not a good option.
> >
> > BTW, are the HVM special pages (access, paging, sharing) accessible by the
> guest kernel?
> 
> Aravindh,
> I am responsible for this mess, so I'll add some information here.
> 
> Yes the guest kernel can see these pages. Because of the way the e820 is laid
> out, the guest kernel should never venture in there, but nothing prevents a
> cunning/mischievous guest from doing so.
> 
> Importantly, the pages are not populated by the builder or BIOS. If you look
> at the in-tree tools (xenpaging, xen-access), the pages are populated,
> mapped, and removed from the physmap in a compact yet non-atomic
> sequence. In this manner, the pages are no longer available to the guest by
> the end of that sequence, and will be automatically garbage collected once
> the ring-consuming dom0 tool dies. There is a window of opportunity for the
> guest to screw things, and the recommendation is to carry out the above
> sequence with the domain paused, to make it atomic guest-wise.

This is what I am doing too. I was worried about the case when the special page is pre-populated and a malicious guest has mapped in this page before a mem_event listener has been attached to the domain. But I guess removing from the physmap should cause the earlier mapping to become invalid.
 
> I discussed this with Stefano at a Hackathon about a year ago, briefly. The
> consensus was that the "true" solution is to create a new (set of)
> XENMAPSPACE_* variants for the xen add to physmap calls.
> 
> >
> >>>> And second, I'm not sure that mem_access pages really need to
> >>>> saved/restored with the rest of the VM -- I'd have thought that you
> >>>> could just set up a new, empty ring on the far side.
> >>>
> >>> I am trying to mimic what is being done in the HVM side for
> >>> mem_event pages. In setup_guest() (xc_hvm_build_x86.c), I see
> "special pages"
> >>> being created for console, paging, access and sharing ring pages.
> >>> Then
> >>> xc_set_hvm_param() is used to inform the hypervisor. When a
> >> mem_event
> >>> / mem_access client comes up, it uses xc_get_hvm_param() to get the
> >>> pfn and maps it in. I want to do something similar for PV.
> >>
> >> Yep.  I think it might be better to invent up a new interface for
> >> those pages, rather than using domain memory for them.  We can then
> >> deprecate the old HVM-specific params interface.
> >
> > Are you saying that these pages (access, sharing, paging) should live in
> Dom0 instead of the domain's memory and only be created when a
> mem_event listener is active? Or am I completely off track?
> 
> And here is why the mess exists in the first place. Originally, these pages
> were allocated by the dom0 tool itself, and passed down to Xen, which
> would map them by resolving the user-space vaddr of the dom0 tool to the
> mfn. This had the horrible property of letting the hypervisor corrupt random
> dom0 memory if the tool crashed and the page got reused, or even if the
> page got migrated by the Linux kernel.

Yes, that does indeed look to be a risky approach to take. For now, I will stick with the HVM approach for PV guests too.

> A kernel driver in dom0 would have also solved the problem, but now you
> are involving the burden of the entire set of dom0 versions out there.
> 
> Hope this helps

That was immensely helpful. Thank you.

Aravindh


* Re: Creating a magic page for PV mem_access
       [not found] <mailman.416.1370486679.32487.xen-devel@lists.xen.org>
@ 2013-06-06  4:07 ` Andres Lagar-Cavilla
  2013-06-06  5:33   ` Aravindh Puthiyaparambil (aravindp)
  0 siblings, 1 reply; 8+ messages in thread
From: Andres Lagar-Cavilla @ 2013-06-06  4:07 UTC (permalink / raw)
  To: Aravindh Puthiyaparambil (aravindp)
  Cc: Stefano Stabellini, Tim (Xen.org), xen-devel

>> At 19:11 +0000 on 03 Jun (1370286719), Aravindh Puthiyaparambil (aravindp) wrote:
>>>>> I am trying to create a magic / special page for the PV
>>>>> mem_access. I am mimicking what is being done for the console page
>>>>> (alloc_magic_pages()) on the tools side. On the hypervisor side, I
>>>>> am planning on stashing the address of this page in the pv_domain
>>>>> structure akin to how the special pages are stored in params[] of
>>>>> the hvm_domain structure.
>>>> 
>>>> OK, can you back up a bit and describe what you're going to use this
>>>> page for?  A PV domain's 'magic' pages may not be quite what you want.
>>>> First, they're owned by the guest, so the guest can write to them
>>>> (and so they can't be trusted for doing hypervisor->dom0
>> communications).
>>> 
>>> You are right. I didn't realize magic pages are writable by the guest.
>>> So this is not a good option.
> 
> BTW, are the HVM special pages (access, paging, sharing) accessible by the guest kernel?

Aravindh,
I am responsible for this mess, so I'll add some information here.

Yes the guest kernel can see these pages. Because of the way the e820 is laid out, the guest kernel should never venture in there, but nothing prevents a cunning/mischievous guest from doing so.

Importantly, the pages are not populated by the builder or BIOS. If you look at the in-tree tools (xenpaging, xen-access), the pages are populated, mapped, and removed from the physmap in a compact yet non-atomic sequence. In this manner, the pages are no longer available to the guest by the end of that sequence, and will be automatically garbage collected once the ring-consuming dom0 tool dies. There is a window of opportunity for the guest to screw things, and the recommendation is to carry out the above sequence with the domain paused, to make it atomic guest-wise.
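
That populate/map/remove sequence is roughly the following (condensed from the in-tree xen-access/xenpaging code of that era; error handling omitted, and the domain is paused around it per the recommendation above):

    #include <xenctrl.h>
    #include <sys/mman.h>

    /* Condensed sketch of the ring-page setup sequence described above.
     * Error handling omitted. */
    static void *setup_access_ring(xc_interface *xch, domid_t domid)
    {
        unsigned long param;
        xen_pfn_t ring_pfn, mmap_pfn;
        void *ring_page;

        /* Pause the domain so the sequence is atomic guest-wise. */
        xc_domain_pause(xch, domid);

        /* The builder only reserved this pfn; the page is unpopulated. */
        xc_get_hvm_param(xch, domid, HVM_PARAM_ACCESS_RING_PFN, &param);
        ring_pfn = param;

        /* Populate the page, then map it into this (dom0) process. */
        xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0, &ring_pfn);
        mmap_pfn = ring_pfn;
        ring_page = xc_map_foreign_batch(xch, domid, PROT_READ | PROT_WRITE,
                                         &mmap_pfn, 1);

        /* Drop it from the guest physmap: the guest can no longer reach
         * it, and Xen frees it once the last dom0 mapping goes away. */
        xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &ring_pfn);

        xc_domain_unpause(xch, domid);
        return ring_page;
    }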

I discussed this with Stefano at a Hackathon about a year ago, briefly. The consensus was that the "true" solution is to create a new (set of) XENMAPSPACE_* variants for the xen add to physmap calls.
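
As a purely hypothetical illustration of that suggestion, such a variant would sit alongside the existing XENMAPSPACE_shared_info / XENMAPSPACE_grant_table spaces in the add-to-physmap call:

    #include <xenctrl.h>

    /* XENMAPSPACE_mem_event_ring is hypothetical -- it did not exist at
     * the time of this thread; it only illustrates the proposed "new
     * XENMAPSPACE_* variant" for placing a Xen-owned ring page. */
    static int place_ring_in_physmap(xc_interface *xch, domid_t domid,
                                     xen_pfn_t gpfn)
    {
        return xc_domain_add_to_physmap(xch, domid,
                                        XENMAPSPACE_mem_event_ring, /* hypothetical */
                                        0 /* idx: which ring */, gpfn);
    }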

> 
>>>> And second, I'm not sure that mem_access pages really need to
>>>> saved/restored with the rest of the VM -- I'd have thought that you
>>>> could just set up a new, empty ring on the far side.
>>> 
>>> I am trying to mimic what is being done in the HVM side for mem_event
>>> pages. In setup_guest() (xc_hvm_build_x86.c), I see "special pages"
>>> being created for console, paging, access and sharing ring pages. Then
>>> xc_set_hvm_param() is used to inform the hypervisor. When a
>> mem_event
>>> / mem_access client comes up, it uses xc_get_hvm_param() to get the
>>> pfn and maps it in. I want to do something similar for PV.
>> 
>> Yep.  I think it might be better to invent up a new interface for those pages,
>> rather than using domain memory for them.  We can then deprecate the old
>> HVM-specific params interface.
> 
> Are you saying that these pages (access, sharing, paging) should live in Dom0 instead of the domain's memory and only be created when a mem_event listener is active? Or am I completely off track?

And here is why the mess exists in the first place. Originally, these pages were allocated by the dom0 tool itself, and passed down to Xen, which would map them by resolving the user-space vaddr of the dom0 tool to the mfn. This had the horrible property of letting the hypervisor corrupt random dom0 memory if the tool crashed and the page got reused, or even if the page got migrated by the Linux kernel.

A kernel driver in dom0 would also have solved the problem, but then you take on the burden of supporting the entire set of dom0 kernel versions out there.

Hope this helps
Andres

> 
> Thanks,
> Aravindh


Thread overview: 8+ messages
2013-06-01  1:24 Creating a magic page for PV mem_access Aravindh Puthiyaparambil (aravindp)
2013-06-03  9:23 ` Tim Deegan
2013-06-03 19:11   ` Aravindh Puthiyaparambil (aravindp)
2013-06-05 10:32     ` Tim Deegan
2013-06-06  0:14       ` Aravindh Puthiyaparambil (aravindp)
     [not found] <mailman.416.1370486679.32487.xen-devel@lists.xen.org>
2013-06-06  4:07 ` Andres Lagar-Cavilla
2013-06-06  5:33   ` Aravindh Puthiyaparambil (aravindp)
2013-06-06  5:38     ` Andres Lagar-Cavilla
