* [PATCH] xen/arm: Implement domain_get_maximum_gpfn
@ 2014-07-01 14:57 Julien Grall
  2014-07-01 16:57 ` Stefano Stabellini
  2014-07-02  9:12 ` Ian Campbell
  0 siblings, 2 replies; 30+ messages in thread
From: Julien Grall @ 2014-07-01 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: stefano.stabellini, Julien Grall, tim, ian.campbell

The function domain_get_maximum_gpfn is returning the maximum gpfn ever
mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/mm.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 03a0533..2d40f07 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,
-- 
1.7.10.4


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-01 14:57 [PATCH] xen/arm: Implement domain_get_maximum_gpfn Julien Grall
@ 2014-07-01 16:57 ` Stefano Stabellini
  2014-07-01 18:36   ` Julien Grall
  2014-07-09 11:38   ` Julien Grall
  2014-07-02  9:12 ` Ian Campbell
  1 sibling, 2 replies; 30+ messages in thread
From: Stefano Stabellini @ 2014-07-01 16:57 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, tim, ian.campbell, stefano.stabellini

On Tue, 1 Jul 2014, Julien Grall wrote:
> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/mm.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 03a0533..2d40f07 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    return d->arch.p2m.max_mapped_gfn;
>  }
>  
>  void share_xen_page_with_guest(struct page_info *page,
> -- 
> 1.7.10.4
> 


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-01 16:57 ` Stefano Stabellini
@ 2014-07-01 18:36   ` Julien Grall
  2014-07-01 18:53     ` Andrew Cooper
  2014-07-09 11:38   ` Julien Grall
  1 sibling, 1 reply; 30+ messages in thread
From: Julien Grall @ 2014-07-01 18:36 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, tim, ian.campbell, stefano.stabellini



On 01/07/14 17:57, Stefano Stabellini wrote:
> On Tue, 1 Jul 2014, Julien Grall wrote:
>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Thanks. Just a quick follow-up on your question on IRC.

LPAE supports up to 48 bits of PA on ARMv8 (40 bits on v7), so with
40-bit PA the MFN still fits in 32 bits.

I'm a bit worried about what happens if there is an error. The current
hypercall doesn't look safe in that regard: the return value is used to
store the highest gpfn, so if a valid gpfn collides with the range used
for error codes, we are screwed.

This is mostly an issue when the toolstack is running in a 32-bit guest
on a 64-bit hypervisor. How does x86 support this case?
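
To illustrate with a hypothetical toolstack-side sketch (not code from
the tree; note that libxc currently narrows the result to an int):

    int max_gpfn = xc_domain_maximum_gpfn(xch, domid);
    if ( max_gpfn < 0 )
        /* A genuine error, or a large gpfn whose top bit ends up set
         * after the narrowing; the two cases cannot be told apart. */
        return max_gpfn;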

Stefano was suggesting introducing a new hypercall,
XENMEM_maximum_gpfn_v2, which would take a pointer to a gpfn as a
parameter.

Regards,

-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-01 18:36   ` Julien Grall
@ 2014-07-01 18:53     ` Andrew Cooper
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2014-07-01 18:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, tim, ian.campbell, stefano.stabellini

On 01/07/14 19:36, Julien Grall wrote:
>
>
> On 01/07/14 17:57, Stefano Stabellini wrote:
>> On Tue, 1 Jul 2014, Julien Grall wrote:
>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this
>>> purpose.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> Thanks. Just a quick follow-up on your question on IRC.
>
> LPAE supports up to 48 bits of PA on ARMv8 (40 bits on v7), so with
> 40-bit PA the MFN still fits in 32 bits.
>
> I'm a bit worried about what happens if there is an error. The current
> hypercall doesn't look safe in that regard: the return value is used to
> store the highest gpfn, so if a valid gpfn collides with the range used
> for error codes, we are screwed.

All hypercalls should return a long (the domain's natural width) in
their first parameter, allowing -ERANGE to be returned in truncation
cases.  Not all hypercalls actually adhere to this currently, but
retroactively enforcing it in the ABI should be fine, as it only
changes the cases where we couldn't represent the result correctly
anyway and effectively returned junk.
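
Roughly, the enforcement would look like this (a sketch of the idea,
not the actual compat code):

    unsigned long gpfn = domain_get_maximum_gpfn(d);
    /* Refuse to return a value the caller cannot represent in its
     * natural word width. */
    if ( is_pv_32bit_domain(d) && gpfn != (uint32_t)gpfn )
        return -ERANGE;
    return gpfn;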

>
> This is mostly an issue when the toolstack is running in a 32-bit guest
> on a 64-bit hypervisor. How does x86 support this case?

The 32bit compat layer in Xen does quite a lot of truncation detection,
and will fail with -ERANGE.

There are other issues, such as libxc truncating the return value of
this specific hypercall from a long to an int (although that is rather
more simple to fix).

>
> Stefano was suggesting introducing a new hypercall,
> XENMEM_maximum_gpfn_v2, which would take a pointer to a gpfn as a
> parameter.
>
> Regards,
>

I am not sure this is actually needed.  A toolstack of narrower width
than Xen almost certainly won't be capable of constructing such a
domain; there are many other similar issues in the Xen/toolstack
ABI/API at the point at which hypercalls like this would start
truncating.

~Andrew


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-01 14:57 [PATCH] xen/arm: Implement domain_get_maximum_gpfn Julien Grall
  2014-07-01 16:57 ` Stefano Stabellini
@ 2014-07-02  9:12 ` Ian Campbell
  2014-07-02  9:19   ` Julien Grall
  1 sibling, 1 reply; 30+ messages in thread
From: Ian Campbell @ 2014-07-02  9:12 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, tim, stefano.stabellini

On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

What is using the result of this hypercall?

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/mm.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 03a0533..2d40f07 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    return d->arch.p2m.max_mapped_gfn;
>  }
>  
>  void share_xen_page_with_guest(struct page_info *page,


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:12 ` Ian Campbell
@ 2014-07-02  9:19   ` Julien Grall
  2014-07-02  9:22     ` Ian Campbell
  0 siblings, 1 reply; 30+ messages in thread
From: Julien Grall @ 2014-07-02  9:19 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel, tim, stefano.stabellini

Hi Ian,

On 02/07/14 10:12, Ian Campbell wrote:
> On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>
> What is using the result of this hypercall?

The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
GFN to initialize the grant table.

IMHO this is buggy on ARM (and x86?), because the guest could have
mapped everything up to the end of the address space (currently 40 bits).

I plan to rework this code a bit.

Without this patch, xc_dom_gnttab_hvm_seed will mistakenly use pfn 0
(0xffffffff + 1, wrapped in 32 bits).
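
Roughly, the failure mode is (hypothetical sketch):

    /* The bogus return ends up as 0xffffffff in a 32-bit pfn
     * variable, so the usual max_gfn + 1 computation wraps. */
    uint32_t max_gfn = 0xffffffff;         /* what the toolstack sees */
    xen_pfn_t scratch_gpfn = max_gfn + 1;  /* wraps to pfn 0 */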

Regards,


-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:19   ` Julien Grall
@ 2014-07-02  9:22     ` Ian Campbell
  2014-07-02  9:37       ` Julien Grall
  0 siblings, 1 reply; 30+ messages in thread
From: Ian Campbell @ 2014-07-02  9:22 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, tim, stefano.stabellini

On Wed, 2014-07-02 at 10:19 +0100, Julien Grall wrote:
> Hi Ian,
> 
> On 02/07/14 10:12, Ian Campbell wrote:
> > On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
> >> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> >> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> >
> > What is using the result of this hypercall?
> 
> The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
> GFN to initialize the grant table.
> 
> IMHO this is buggy on ARM (and x86?), because the guest could have
> mapped everything up to the end of the address space (currently 40 bits).

I wonder if we could find a way to not need this hypercall at all.

Any reason why both arm and x86 can't just use a fixed scratch pfn for
this temporary mapping? Both of them surely have spaces which they can
guarantee won't overlap with anything.

> I plan to rework a bit this code.
> 
> Without this patch, xc_dom_gnttab_hvm_seed will mistakenly use pfn 0
> (0xffffffff + 1, wrapped in 32 bits).
> 
> Regards,
> 
> 


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:22     ` Ian Campbell
@ 2014-07-02  9:37       ` Julien Grall
  2014-07-02  9:41         ` Ian Campbell
  0 siblings, 1 reply; 30+ messages in thread
From: Julien Grall @ 2014-07-02  9:37 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel, tim, Stefano Stabellini, Roger Pau Monné

(Adding Roger)

On 02/07/14 10:22, Ian Campbell wrote:
> On Wed, 2014-07-02 at 10:19 +0100, Julien Grall wrote:
>> Hi Ian,
>>
>> On 02/07/14 10:12, Ian Campbell wrote:
>>> On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
>>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>>
>>> What is using the result of this hypercall?
>>
>> The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
>> GFN to initialize the grant table.
>>
>> IMHO this is buggy on ARM (and x86?), because the guest could have
>> mapped everything up to the end of the address space (currently 40 bits).
> 
> I wonder if we could find a way to not need this hypercall at all.
> 
> Any reason why both arm and x86 can't just use a fixed scratch pfn for
> this temporary mapping? Both of them surely have spaces which they can
> guarantee won't overlap with anything.

This was the previous behavior until last November.

commit db062c28f30eb68d1b5d7a910445a0ba1136179a
Date:   Wed Nov 13 09:26:13 2013 +0100

    libxc: move temporary grant table mapping to end of memory
    
    In order to set up the grant table for HVM guests, libxc needs to map
    the grant table temporarily.  At the moment, it does this by adding the
    grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
    then mapping that gfn, setting up the table, then unmapping the gfn and
    removing it from the p2m table.
    
    This breaks with PVH guests with 4G or more of ram, because there is
    no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
    leaving a "hole" when it removes the grant map from the p2m table.
    Since the guest thinks this is normal ram, when it maps it and tries
    to access the page, it crashes.
    
    This patch maps the page at max_gfn+1 instead.

I'm not sure what to do for x86, so I was planning to introduce a
per-arch hook to retrieve a scratch gpfn.  x86 would keep the current
behavior, and ARM would use the GNTTAB space in the guest layout.
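
Something like this (a hypothetical sketch, names invented):

    /* Per-arch choice of a safe scratch gfn for the temporary
     * grant table mapping. */
    xen_pfn_t xc_dom_arch_scratch_gpfn(xc_interface *xch, domid_t domid);
    /* x86: keep today's max_gfn + 1 behavior; ARM: return a gfn in
     * the GNTTAB region of the guest layout, which is never RAM. */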

Regards,

-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:37       ` Julien Grall
@ 2014-07-02  9:41         ` Ian Campbell
  2014-07-02  9:50           ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Ian Campbell @ 2014-07-02  9:41 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Tim Deegan, Jan Beulich, Stefano Stabellini,
	Roger Pau Monné

On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
> (Adding Roger)
> 
> On 02/07/14 10:22, Ian Campbell wrote:
> > On Wed, 2014-07-02 at 10:19 +0100, Julien Grall wrote:
> >> Hi Ian,
> >>
> >> On 02/07/14 10:12, Ian Campbell wrote:
> >>> On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
> >>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> >>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> >>>
> >>> What is using the result of this hypercall?
> >>
> >> The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
> >> GFN to initialize the grant table.
> >>
> >> IMHO this is buggy on ARM (and x86?), because the guest could have
> >> mapped everything up to the end of the address space (currently 40 bits).
> > 
> > I wonder if we could find a way to not need this hypercall at all.
> > 
> > Any reason why both arm and x86 can't just use a fixed scratch pfn for
> > this temporary mapping? Both of them surely have spaces which they can
> > guarantee won't overlap with anything.
> 
> This was the previous behavior until last November.
> 
> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
> Date:   Wed Nov 13 09:26:13 2013 +0100
> 
>     libxc: move temporary grant table mapping to end of memory
>     
>     In order to set up the grant table for HVM guests, libxc needs to map
>     the grant table temporarily.  At the moment, it does this by adding the
>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
>     then mapping that gfn, setting up the table, then unmapping the gfn and
>     removing it from the p2m table.
>     
>     This breaks with PVH guests with 4G or more of ram, because there is
>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
>     leaving a "hole" when it removes the grant map from the p2m table.
>     Since the guest thinks this is normal ram, when it maps it and tries
>     to access the page, it crashes.
>     
>     This patch maps the page at max_gfn+1 instead.
> 
> I'm not sure what to do for x86, so I was planning to introduce a per-arch hook to retrieve a scratch gpfn.
> x86 would keep the current behavior, and ARM will use the GNTTAB space in the layout.

Perhaps x86 could use some well known MMIO space, like the APIC at
0xfff????

(adding some more x86 folks)

Ian.


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:41         ` Ian Campbell
@ 2014-07-02  9:50           ` Jan Beulich
  2014-07-02  9:52             ` Ian Campbell
  2014-07-02 10:19             ` Roger Pau Monné
  0 siblings, 2 replies; 30+ messages in thread
From: Jan Beulich @ 2014-07-02  9:50 UTC (permalink / raw)
  To: Ian Campbell, Julien Grall
  Cc: xen-devel, Tim Deegan, Stefano Stabellini, roger.pau

>>> On 02.07.14 at 11:41, <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
>> On 02/07/14 10:22, Ian Campbell wrote:
>> > Any reason why both arm and x86 can't just use a fixed scratch pfn for
>> > this temporary mapping? Both of them surely have spaces which they can
>> > guarantee won't overlap with anything.
>> 
>> This was the previous behavior until last November.
>> 
>> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
>> Date:   Wed Nov 13 09:26:13 2013 +0100
>> 
>>     libxc: move temporary grant table mapping to end of memory
>>     
>>     In order to set up the grant table for HVM guests, libxc needs to map
>>     the grant table temporarily.  At the moment, it does this by adding the
>>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
>>     then mapping that gfn, setting up the table, then unmapping the gfn and
>>     removing it from the p2m table.
>>     
>>     This breaks with PVH guests with 4G or more of ram, because there is
>>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
>>     leaving a "hole" when it removes the grant map from the p2m table.
>>     Since the guest thinks this is normal ram, when it maps it and tries
>>     to access the page, it crashes.
>>     
>>     This patch maps the page at max_gfn+1 instead.
>> 
>> I'm not sure what to do for x86, so I was planning to introduce a
>> per-arch hook to retrieve a scratch gpfn.  x86 would keep the current
>> behavior, and ARM would use the GNTTAB space in the guest layout.
> 
> Perhaps x86 could use some well known MMIO space, like the APIC at
> 0xfff????

Except that PVH has no LAPIC right now. Yet with the recent hole
punching patches I wonder whether "there is no MMIO hole" is actually
correct. Roger?

Jan


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:50           ` Jan Beulich
@ 2014-07-02  9:52             ` Ian Campbell
  2014-07-02 10:19             ` Roger Pau Monné
  1 sibling, 0 replies; 30+ messages in thread
From: Ian Campbell @ 2014-07-02  9:52 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Tim Deegan, Julien Grall, Stefano Stabellini, roger.pau

On Wed, 2014-07-02 at 10:50 +0100, Jan Beulich wrote:
> >>> On 02.07.14 at 11:41, <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
> >> On 02/07/14 10:22, Ian Campbell wrote:
> >> > Any reason why both arm and x86 can't just use a fixed scratch pfn for
> >> > this temporary mapping? Both of them surely have spaces which they can
> >> > guarantee won't overlap with anything.
> >> 
> >> This was the previous behavior until last November.
> >> 
> >> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
> >> Date:   Wed Nov 13 09:26:13 2013 +0100
> >> 
> >>     libxc: move temporary grant table mapping to end of memory
> >>     
> >>     In order to set up the grant table for HVM guests, libxc needs to map
> >>     the grant table temporarily.  At the moment, it does this by adding the
> >>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
> >>     then mapping that gfn, setting up the table, then unmapping the gfn and
> >>     removing it from the p2m table.
> >>     
> >>     This breaks with PVH guests with 4G or more of ram, because there is
> >>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
> >>     leaving a "hole" when it removes the grant map from the p2m table.
> >>     Since the guest thinks this is normal ram, when it maps it and tries
> >>     to access the page, it crashes.
> >>     
> >>     This patch maps the page at max_gfn+1 instead.
> >> 
> >> I'm not sure what to do for x86, so I was planning to introduce a
> >> per-arch hook to retrieve a scratch gpfn.  x86 would keep the current
> >> behavior, and ARM would use the GNTTAB space in the guest layout.
> > 
> > Perhaps x86 could use some well known MMIO space, like the APIC at
> > 0xfff????
> 
> Except that PVH has no LAPIC right now. Yet with the recent hole
> punching patches I wonder whether "there is no MMIO hole" is actually
> correct. Roger?

Another option might be to just burn a special pfn on a scratch mapping.

Ian.


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02  9:50           ` Jan Beulich
  2014-07-02  9:52             ` Ian Campbell
@ 2014-07-02 10:19             ` Roger Pau Monné
  2014-07-02 10:31               ` Jan Beulich
  1 sibling, 1 reply; 30+ messages in thread
From: Roger Pau Monné @ 2014-07-02 10:19 UTC (permalink / raw)
  To: Jan Beulich, Ian Campbell, Julien Grall
  Cc: xen-devel, Stefano Stabellini, Tim Deegan

On 02/07/14 11:50, Jan Beulich wrote:
>>>> On 02.07.14 at 11:41, <Ian.Campbell@citrix.com> wrote:
>> On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
>>> On 02/07/14 10:22, Ian Campbell wrote:
>>>> Any reason why both arm and x86 can't just use a fixed scratch pfn for
>>>> this temporary mapping? Both of them surely have spaces which they can
>>>> guarantee won't overlap with anything.
>>>
>>> This was the previous behavior until last November.
>>>
>>> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
>>> Date:   Wed Nov 13 09:26:13 2013 +0100
>>>
>>>     libxc: move temporary grant table mapping to end of memory
>>>     
>>>     In order to set up the grant table for HVM guests, libxc needs to map
>>>     the grant table temporarily.  At the moment, it does this by adding the
>>>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
>>>     then mapping that gfn, setting up the table, then unmapping the gfn and
>>>     removing it from the p2m table.
>>>     
>>>     This breaks with PVH guests with 4G or more of ram, because there is
>>>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
>>>     leaving a "hole" when it removes the grant map from the p2m table.
>>>     Since the guest thinks this is normal ram, when it maps it and tries
>>>     to access the page, it crashes.
>>>     
>>>     This patch maps the page at max_gfn+1 instead.
>>>
>>> I'm not sure what to do for x86, so I was planning to introduce a
>>> per-arch hook to retrieve a scratch gpfn.  x86 would keep the current
>>> behavior, and ARM would use the GNTTAB space in the guest layout.
>>
>> Perhaps x86 could use some well known MMIO space, like the APIC at
>> 0xfff????
> 
> Except that PVH has no LAPIC right now. Yet with the recent hole
> punching patches I wonder whether "there is no MMIO hole" is actually
> correct. Roger?

For PVH guests there's still no MMIO hole (or any other kind of hole) at
all; the hole(s) are only there for Dom0.

Roger.


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02 10:19             ` Roger Pau Monné
@ 2014-07-02 10:31               ` Jan Beulich
  2014-07-02 10:51                 ` Roger Pau Monné
  2014-07-02 13:44                 ` Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 30+ messages in thread
From: Jan Beulich @ 2014-07-02 10:31 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Ian Campbell, Tim Deegan

>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> For PVH guests there's still no MMIO hole (or any other kind of hole) at
> all, the hole(s) is only there for Dom0.

So where would passed through devices get their MMIO BARs located?
(I realize pass-through isn't supported yet for PVH, but I didn't expect
such fundamental things to be missing.)

Jan


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02 10:31               ` Jan Beulich
@ 2014-07-02 10:51                 ` Roger Pau Monné
  2014-07-02 10:52                   ` Ian Campbell
  2014-07-02 13:44                 ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 30+ messages in thread
From: Roger Pau Monné @ 2014-07-02 10:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Ian Campbell, Tim Deegan

On 02/07/14 12:31, Jan Beulich wrote:
>>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
>> For PVH guests there's still no MMIO hole (or any other kind of hole) at
>> all, the hole(s) is only there for Dom0.
> 
> So where would passed through devices get their MMIO BARs located?
> (I realize pass-through isn't supported yet for PVH, but I didn't expect
> such fundamental things to be missing.)

We could always add an MMIO region to a PVH guest in a
backwards-compatible way; the only requirement is to make sure the e820
provided to the guest has this hole set up. But I see no reason to add
it before having this functionality, or to add it unconditionally to
guests even if no devices are passed through.

Also, shouldn't PVH guests use pcifront/pciback, which means they won't
have any BARs mapped directly?

Roger.


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02 10:51                 ` Roger Pau Monné
@ 2014-07-02 10:52                   ` Ian Campbell
  2014-07-02 10:58                     ` Andrew Cooper
  0 siblings, 1 reply; 30+ messages in thread
From: Ian Campbell @ 2014-07-02 10:52 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Jan Beulich, Tim Deegan

On Wed, 2014-07-02 at 12:51 +0200, Roger Pau Monné wrote:
> On 02/07/14 12:31, Jan Beulich wrote:
> >>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> >> For PVH guests there's still no MMIO hole (or any other kind of hole) at
> >> all, the hole(s) is only there for Dom0.
> > 
> > So where would passed through devices get their MMIO BARs located?
> > (I realize pass-through isn't supported yet for PVH, but I didn't expect
> > such fundamental things to be missing.)
> 
> We could always add a MMIO region to a PVH guest in backwards compatible
> way, the only requirement is to make sure the e820 provided to the guest
> has this hole set up, but I see no reason to add it before having this
> functionality, or to add it unconditionally to guests even if no devices
> are passed through.
> 
> Also, shouldn't PVH guests use pcifront/pciback, which means it won't
> have any BARs mapped directly?

They need to map them somewhere in their physical address space to be
able to use them... (Unlike a PV guest, which I think maps them into the
virtual address space "by magic", avoiding the need for a p2m entry.)

Ian.




* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02 10:52                   ` Ian Campbell
@ 2014-07-02 10:58                     ` Andrew Cooper
  2014-07-02 11:21                       ` Ian Campbell
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2014-07-02 10:58 UTC (permalink / raw)
  To: Ian Campbell, Roger Pau Monné
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Jan Beulich, Tim Deegan

On 02/07/14 11:52, Ian Campbell wrote:
> On Wed, 2014-07-02 at 12:51 +0200, Roger Pau Monné wrote:
>> On 02/07/14 12:31, Jan Beulich wrote:
>>>>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
>>>> For PVH guests there's still no MMIO hole (or any other kind of hole) at
>>>> all, the hole(s) is only there for Dom0.
>>> So where would passed through devices get their MMIO BARs located?
>>> (I realize pass-through isn't supported yet for PVH, but I didn't expect
>>> such fundamental things to be missing.)
>> We could always add a MMIO region to a PVH guest in backwards compatible
>> way, the only requirement is to make sure the e820 provided to the guest
>> has this hole set up, but I see no reason to add it before having this
>> functionality, or to add it unconditionally to guests even if no devices
>> are passed through.
>>
>> Also, shouldn't PVH guests use pcifront/pciback, which means it won't
>> have any BARs mapped directly?
> They need to map them somewhere in their physical address to be able to
> use them... (Unlike a PV guest which I think maps them in the virtual
> address space "by magic" avoiding the need for a p2m entry).
>
> Ian.

With respect to the original problem of accidentally punching a hole in
the guest:

Why can't libxc clean up after itself?  From my understanding, it is a
simple increase-reservation to fill the hole it 'borrowed' during setup.

This avoids needing MMIO ranges in pure PVH guests (ARM included).

~Andrew



* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02 10:58                     ` Andrew Cooper
@ 2014-07-02 11:21                       ` Ian Campbell
  0 siblings, 0 replies; 30+ messages in thread
From: Ian Campbell @ 2014-07-02 11:21 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Tim Deegan, Stefano Stabellini, Jan Beulich,
	xen-devel, Roger Pau Monné

On Wed, 2014-07-02 at 11:58 +0100, Andrew Cooper wrote:
> On 02/07/14 11:52, Ian Campbell wrote:
> > On Wed, 2014-07-02 at 12:51 +0200, Roger Pau Monné wrote:
> >> On 02/07/14 12:31, Jan Beulich wrote:
> >>>>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> >>>> For PVH guests there's still no MMIO hole (or any other kind of hole) at
> >>>> all, the hole(s) is only there for Dom0.
> >>> So where would passed through devices get their MMIO BARs located?
> >>> (I realize pass-through isn't supported yet for PVH, but I didn't expect
> >>> such fundamental things to be missing.)
> >> We could always add a MMIO region to a PVH guest in backwards compatible
> >> way, the only requirement is to make sure the e820 provided to the guest
> >> has this hole set up, but I see no reason to add it before having this
> >> functionality, or to add it unconditionally to guests even if no devices
> >> are passed through.
> >>
> >> Also, shouldn't PVH guests use pcifront/pciback, which means it won't
> >> have any BARs mapped directly?
> > They need to map them somewhere in their physical address to be able to
> > use them... (Unlike a PV guest which I think maps them in the virtual
> > address space "by magic" avoiding the need for a p2m entry).
> >
> > Ian.
> 
> With respect to the original problem of accidentally punching a hole in
> the guest
> 
> Why cant libxc clean up after itself?  From my understanding, it is a
> simple increase reservation to fill the hole it 'borrowed' during setup.

I think the issue is that, in the functions in question, libxc doesn't
know if there was RAM there to start with (since that depends on guest
RAM allocation and layout etc), so it doesn't know whether it needs to
refill the hole or not.

Not insurmountable though, I suspect.
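
Something along these lines (hypothetical sketch), if libxc had a way
to know was_ram:

    /* ... map the grant frame at scratch_gpfn, seed it, unmap it ... */
    if ( was_ram )
        /* Refill the borrowed gfn, but only if it held RAM before;
         * knowing that is exactly the missing piece. */
        xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0,
                                         &scratch_gpfn);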

> 
> This avoids MMIO ranges in pure PVH guests (arm included).
> 
> ~Andrew





* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-02 10:31               ` Jan Beulich
  2014-07-02 10:51                 ` Roger Pau Monné
@ 2014-07-02 13:44                 ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 30+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-07-02 13:44 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, Julien Grall, Tim Deegan, Stefano Stabellini,
	xen-devel, Roger Pau Monné

On Wed, Jul 02, 2014 at 11:31:05AM +0100, Jan Beulich wrote:
> >>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> > For PVH guests there's still no MMIO hole (or any other kind of hole) at
> > all, the hole(s) is only there for Dom0.
> 
> So where would passed through devices get their MMIO BARs located?

If it ends up using 'e820_host', the E820 that the guest has
ends up looking like the host's. At that point the memory
map looks quite similar to the dom0 one - and the MMIO
BARs should fit in nicely.
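
For reference, that is the guest config knob:

    # xl guest configuration fragment
    e820_host = 1   # mirror the host E820, holes included, in the guest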

> (I realize pass-through isn't supported yet for PVH, but I didn't expect
> such fundamental things to be missing.)
> 
> Jan
> 


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-01 16:57 ` Stefano Stabellini
  2014-07-01 18:36   ` Julien Grall
@ 2014-07-09 11:38   ` Julien Grall
  2014-07-16 16:02     ` Ian Campbell
  1 sibling, 1 reply; 30+ messages in thread
From: Julien Grall @ 2014-07-09 11:38 UTC (permalink / raw)
  To: ian.campbell; +Cc: xen-devel, tim, stefano.stabellini, Stefano Stabellini

Hi Ian,

On 07/01/2014 05:57 PM, Stefano Stabellini wrote:
> On Tue, 1 Jul 2014, Julien Grall wrote:
>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Following the discussion on this thread, I will send a patch to fix
xc_dom_gnttab_hvm_seed, at least for ARM. I'm not sure what to do on
x86; I may keep the same behavior.

I think this patch is still relevant and won't break things, as we
don't have platforms with 48-bit PA support yet.

Ian, do I have your ack on this patch?

>>  xen/arch/arm/mm.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 03a0533..2d40f07 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>>  
>>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>>  {
>> -    return -ENOSYS;
>> +    return d->arch.p2m.max_mapped_gfn;
>>  }
>>  
>>  void share_xen_page_with_guest(struct page_info *page,
>> -- 
>> 1.7.10.4
>>

Regards,

-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-09 11:38   ` Julien Grall
@ 2014-07-16 16:02     ` Ian Campbell
  2014-07-16 18:17       ` Julien Grall
  2014-09-01 21:32       ` Julien Grall
  0 siblings, 2 replies; 30+ messages in thread
From: Ian Campbell @ 2014-07-16 16:02 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, tim, stefano.stabellini, Stefano Stabellini

On Wed, 2014-07-09 at 12:38 +0100, Julien Grall wrote:
> Hi Ian,
> 
> On 07/01/2014 05:57 PM, Stefano Stabellini wrote:
> > On Tue, 1 Jul 2014, Julien Grall wrote:
> >> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> >> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Following the discussion on this thread, I will send a patch to fix
> xc_dom_gnttab_hvm_seed at least for ARM. Not sure what to do on x86, I
> may keep the same behavior.
> 
> I think this patch is still relevant and won't break things as we don't
> have platform with 48bits PA support.
> 
> Ian, do I have your ack on this patch?

Oops, when I first saw this I read the first para and stopped, sorry.

I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
and continue to punt on this interface until it is actually needed by
something unavoidable on the guest side (and simultaneously hope that
day never comes...).

Ian.


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-16 16:02     ` Ian Campbell
@ 2014-07-16 18:17       ` Julien Grall
  2014-09-01 21:32       ` Julien Grall
  1 sibling, 0 replies; 30+ messages in thread
From: Julien Grall @ 2014-07-16 18:17 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel, tim, stefano.stabellini, Stefano Stabellini

Hi Ian,

On 16/07/14 17:02, Ian Campbell wrote:
> On Wed, 2014-07-09 at 12:38 +0100, Julien Grall wrote:
>> Hi Ian,
>>
>> On 07/01/2014 05:57 PM, Stefano Stabellini wrote:
>>> On Tue, 1 Jul 2014, Julien Grall wrote:
>>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>
>>> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>
>> Following the discussion on this thread, I will send a patch to fix
>> xc_dom_gnttab_hvm_seed at least for ARM. Not sure what to do on x86, I
>> may keep the same behavior.
>>
>> I think this patch is still relevant and won't break things as we don't
>> have platform with 48bits PA support.
>>
>> Ian, do I have your ack on this patch?
>
> Oops, when I first saw this I read the first para and stopped, sorry.
>
> I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> and continue to punt on this interface until it is actually needed by
> something unavoidable on the guest side (and simultaneously hope that
> day never comes...).

This hypercall is used in two different places: domain save and core
dump.

I expect to have both working on Xen ARM soon. I will also send a fix
for xc_dom_gnttab_hvm_seed when I find the time.

Regards,

-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-07-16 16:02     ` Ian Campbell
  2014-07-16 18:17       ` Julien Grall
@ 2014-09-01 21:32       ` Julien Grall
  2014-09-03  8:44         ` Ian Campbell
  1 sibling, 1 reply; 30+ messages in thread
From: Julien Grall @ 2014-09-01 21:32 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel, tim, stefano.stabellini, Stefano Stabellini

Hi Ian,

On 16/07/14 12:02, Ian Campbell wrote:
> I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> and continue to punt on this interface until it is actually needed by
> something unavoidable on the guest side (and simultaneously hope that
> day never comes...).

This patch is a requirement to make Xen Memory access work on ARM.
Could you reconsider applying this patch to Xen?

For the main concern of this thread (i.e. the buggy scratch pfn with
xc_dom_gnttab_hvm_seed), I wrote a patch to let the architecture decide
which scratch pfn should be used. I will send the patch next week.

Regards,

-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-01 21:32       ` Julien Grall
@ 2014-09-03  8:44         ` Ian Campbell
  2014-09-03  9:00           ` Tamas K Lengyel
  0 siblings, 1 reply; 30+ messages in thread
From: Ian Campbell @ 2014-09-03  8:44 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, tim, stefano.stabellini, Stefano Stabellini

On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
> Hi Ian,
> 
> On 16/07/14 12:02, Ian Campbell wrote:
> > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> > and continue to punt on this interface until it is actually needed by
> > something unavoidable on the guest side (and simultaneously hope that
> > day never comes...).
> 
> This patch is a requirement to make Xen Memory access work on ARM.
> Could you reconsider applying this patch to Xen?

Needs more rationale as to why it is required for Xen Memory (do you
mean xenaccess?). I assume I'll find that in the relevant thread once I
get to it?

> For the main concern of this thread (i.e. the buggy scratch pfn with
> xc_dom_gnttab_hvm_seed), I wrote a patch to let the architecture decide
> which scratch pfn should be used. I will send the patch next week.

OK.


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-03  8:44         ` Ian Campbell
@ 2014-09-03  9:00           ` Tamas K Lengyel
  2014-09-08 20:43             ` Julien Grall
  0 siblings, 1 reply; 30+ messages in thread
From: Tamas K Lengyel @ 2014-09-03  9:00 UTC (permalink / raw)
  To: Ian Campbell
  Cc: xen-devel, Julien Grall, Tim Deegan, Stefano Stabellini,
	Stefano Stabellini



On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
> > Hi Ian,
> >
> > On 16/07/14 12:02, Ian Campbell wrote:
> > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> > > and continue to punt on this interface until it is actually needed by
> > > something unavoidable on the guest side (and simultaneously hope that
> > > day never comes...).
> >
> > This patch is a requirement to make Xen Memory access working on ARM.
> > Could you reconsider the possibility to apply this patch on Xen?
>
> Needs more rationale as to why it is required for Xen Memory (do you
> mean xenaccess?). I assume I'll find that in the relevant thread once I
> get to it?
>
>
It's used in a non-critical sanity check for performance reasons, as seen
here:
https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75.
Without the sanity check we might attempt to set mem_access permissions on
gpfns that don't exist for the guest. It wouldn't break anything to do
that, but if we know beforehand that the gpfn is outside the scope of what
the guest has, we can skip the entire thing.
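
The check itself is tiny (a sketch of the idea):

    /* Skip the work for gpfns the guest has never had mapped. */
    if ( pfn > domain_get_maximum_gpfn(d) )
        return -EINVAL;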


> > For the main concern of this thread (i.e the buggy scratch pfn with
> > xc_dom_gnttab_hvm_seed), I wrote a patch to let the architecture decided
> > which scrach pfn should be used. I will send the patch next week.
>
> OK.
>



* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-03  9:00           ` Tamas K Lengyel
@ 2014-09-08 20:43             ` Julien Grall
  2014-09-08 20:47               ` Tamas K Lengyel
  0 siblings, 1 reply; 30+ messages in thread
From: Julien Grall @ 2014-09-08 20:43 UTC (permalink / raw)
  To: Tamas K Lengyel, Ian Campbell
  Cc: xen-devel, Tim Deegan, Stefano Stabellini, Stefano Stabellini

Hello Tamas,

On 03/09/14 02:00, Tamas K Lengyel wrote:
>
>
>
> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>
>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>      > Hi Ian,
>      >
>      > On 16/07/14 12:02, Ian Campbell wrote:
>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>     for ARM
>      > > and continue to punt on this interface until it is actually
>     needed by
>      > > something unavoidable on the guest side (and simultaneously
>     hope that
>      > > day never comes...).
>      >
>      > This patch is a requirement to make Xen Memory access working on ARM.
>      > Could you reconsider the possibility to apply this patch on Xen?
>
>     Needs more rationale as to why it is required for Xen Memory (do you
>     mean xenaccess?). I assume I'll find that in the relevant thread once I
>     get to it?
>
>
> It's used in a non-critical sanity check for performance reasons, as
> seen here:
> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75.
> Without the sanity check we might attempt to set mem_access permissions
> on gpfn's that don't exist for the guest. It wouldn't break anything to
> do that but if we know beforehand that the gpfn is outside the scope of
> what the guest has we can skip the entire thing.

It might be better if you carry this patch in your series.

Regards,

-- 
Julien Grall


* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-08 20:43             ` Julien Grall
@ 2014-09-08 20:47               ` Tamas K Lengyel
  2014-09-09 12:50                 ` Tamas K Lengyel
  2014-09-10 11:21                 ` Tamas K Lengyel
  0 siblings, 2 replies; 30+ messages in thread
From: Tamas K Lengyel @ 2014-09-08 20:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Tim Deegan, Ian Campbell, Stefano Stabellini,
	Stefano Stabellini



On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
> On 03/09/14 02:00, Tamas K Lengyel wrote:
>
>>
>>
>>
>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>
>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>      > Hi Ian,
>>      >
>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>     for ARM
>>      > > and continue to punt on this interface until it is actually
>>     needed by
>>      > > something unavoidable on the guest side (and simultaneously
>>     hope that
>>      > > day never comes...).
>>      >
>>      > This patch is a requirement to make Xen Memory access working on
>> ARM.
>>      > Could you reconsider the possibility to apply this patch on Xen?
>>
>>     Needs more rationale as to why it is required for Xen Memory (do you
>>     mean xenaccess?). I assume I'll find that in the relevant thread once
>> I
>>     get to it?
>>
>>
>> It's used in a non-critical sanity check for performance reasons, as
>> seen here:
>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/
>> common/mem_access.c#L75.
>> Without the sanity check we might attempt to set mem_access permissions
>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>> do that but if we know beforehand that the gpfn is outside the scope of
>> what the guest has we can skip the entire thing.
>>
>
> It might be better if you carry this patch on your series.
>
> Regards,
>
> --
> Julien Grall
>

Alright.

Thanks,
Tamas



* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-08 20:47               ` Tamas K Lengyel
@ 2014-09-09 12:50                 ` Tamas K Lengyel
  2014-09-09 13:09                   ` Andrew Cooper
  2014-09-10 11:21                 ` Tamas K Lengyel
  1 sibling, 1 reply; 30+ messages in thread
From: Tamas K Lengyel @ 2014-09-09 12:50 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Tim Deegan, Ian Campbell, Stefano Stabellini,
	Stefano Stabellini



On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <tamas.lengyel@zentific.com
> wrote:

>
> On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
> wrote:
>
>> Hello Tamas,
>>
>> On 03/09/14 02:00, Tamas K Lengyel wrote:
>>
>>>
>>>
>>>
>>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>
>>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>>      > Hi Ian,
>>>      >
>>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>>     for ARM
>>>      > > and continue to punt on this interface until it is actually
>>>     needed by
>>>      > > something unavoidable on the guest side (and simultaneously
>>>     hope that
>>>      > > day never comes...).
>>>      >
>>>      > This patch is a requirement to make Xen Memory access working on
>>> ARM.
>>>      > Could you reconsider the possibility to apply this patch on Xen?
>>>
>>>     Needs more rationale as to why it is required for Xen Memory (do you
>>>     mean xenaccess?). I assume I'll find that in the relevant thread
>>> once I
>>>     get to it?
>>>
>>>
>>> It's used in a non-critical sanity check for performance reasons, as
>>> seen here:
>>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/
>>> common/mem_access.c#L75.
>>> Without the sanity check we might attempt to set mem_access permissions
>>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>>> do that but if we know beforehand that the gpfn is outside the scope of
>>> what the guest has we can skip the entire thing.
>>>
>>
>> It might be better if you carry this patch on your series.
>>
>> Regards,
>>
>> --
>> Julien Grall
>>
>
> Alright.
>
> Thanks,
> Tamas
>

As a side note, if this patch is problematic to merge for some reason, the
current implementation still needs to change to return 0 instead of -ENOSYS,
so as to conform to the x86 implementation. On the x86 side, 0 is used to
indicate failure. See 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe ("x86: make
certain memory sub-ops return valid values") for more info.

Tamas



* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-09 12:50                 ` Tamas K Lengyel
@ 2014-09-09 13:09                   ` Andrew Cooper
  2014-09-09 14:01                     ` Tamas K Lengyel
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2014-09-09 13:09 UTC (permalink / raw)
  To: Tamas K Lengyel, Julien Grall
  Cc: xen-devel, Tim Deegan, Ian Campbell, Stefano Stabellini,
	Stefano Stabellini



On 09/09/14 13:50, Tamas K Lengyel wrote:
>
>
> On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel
> <tamas.lengyel@zentific.com <mailto:tamas.lengyel@zentific.com>> wrote:
>
>
>     <julien.grall@linaro.org> wrote:
>     <julien.grall@linaro.org <mailto:julien.grall@linaro.org>> wrote:
>
>         Hello Tamas,
>
>         On 03/09/14 02:00, Tamas K Lengyel wrote:
>
>
>
>
>             On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell
>             <Ian.Campbell@citrix.com> wrote:
>
>                 On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>                  > Hi Ian,
>                  >
>                  > On 16/07/14 12:02, Ian Campbell wrote:
>                  > > I'd much prefer to just have the fix to
>             xc_dom_gnttab_hvm_seed
>                 for ARM
>                  > > and continue to punt on this interface until it
>             is actually
>                 needed by
>                  > > something unavoidable on the guest side (and
>             simultaneously
>                 hope that
>                  > > day never comes...).
>                  >
>                  > This patch is a requirement to make Xen Memory
>             access working on ARM.
>                  > Could you reconsider the possibility to apply this
>             patch on Xen?
>
>                 Needs more rationale as to why it is required for Xen
>             Memory (do you
>                 mean xenaccess?). I assume I'll find that in the
>             relevant thread once I
>                 get to it?
>
>
>             It's used in a non-critical sanity check for performance
>             reasons, as
>             seen here:
>             https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75.
>             Without the sanity check we might attempt to set
>             mem_access permissions
>             on gpfn's that don't exist for the guest. It wouldn't
>             break anything to
>             do that but if we know beforehand that the gpfn is outside
>             the scope of
>             what the guest has we can skip the entire thing.
>
>
>         It might be better if you carry this patch on your series.
>
>         Regards,
>
>         -- 
>         Julien Grall
>
>
>     Alright.
>
>     Thanks,
>     Tamas
>
>
> As a sidenote, if this patch is problematic to merge for some reason,
> the current implementation still needs to change to return 0 instead
> of -ENOSYS as to conform to the x86 implementation. On the x86 side 0
> is used to indicate failure. See
> 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe "x86: make certain memory
> sub-ops return valid values" for more info.

0 does not indicate failure.  It is the value held in the shared info
field which, if 0, indicates that the domain is still being built by
the toolstack.

-ENOSYS is still a valid failure case.

~Andrew



* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-09 13:09                   ` Andrew Cooper
@ 2014-09-09 14:01                     ` Tamas K Lengyel
  0 siblings, 0 replies; 30+ messages in thread
From: Tamas K Lengyel @ 2014-09-09 14:01 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Ian Campbell, Stefano Stabellini, Julien Grall, Tim Deegan,
	Stefano Stabellini, xen-devel



On Tue, Sep 9, 2014 at 3:09 PM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

>  On 09/09/14 13:50, Tamas K Lengyel wrote:
>
>
>
> On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <
> tamas.lengyel@zentific.com> wrote:
>
>>
>>   On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
>> wrote:
>>
>>> Hello Tamas,
>>>
>>> On 03/09/14 02:00, Tamas K Lengyel wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>
>>>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>>>      > Hi Ian,
>>>>      >
>>>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>>>     for ARM
>>>>      > > and continue to punt on this interface until it is actually
>>>>     needed by
>>>>      > > something unavoidable on the guest side (and simultaneously
>>>>     hope that
>>>>      > > day never comes...).
>>>>      >
>>>>      > This patch is a requirement to make Xen Memory access working on
>>>> ARM.
>>>>      > Could you reconsider the possibility to apply this patch on Xen?
>>>>
>>>>     Needs more rationale as to why it is required for Xen Memory (do you
>>>>     mean xenaccess?). I assume I'll find that in the relevant thread
>>>> once I
>>>>     get to it?
>>>>
>>>>
>>>> It's used in a non-critical sanity check for performance reasons, as
>>>> seen here:
>>>>
>>>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75
>>>> .
>>>> Without the sanity check we might attempt to set mem_access permissions
>>>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>>>> do that but if we know beforehand that the gpfn is outside the scope of
>>>> what the guest has we can skip the entire thing.
>>>>
>>>
>>> It might be better if you carry this patch on your series.
>>>
>>> Regards,
>>>
>>> --
>>> Julien Grall
>>>
>>
>>   Alright.
>>
>>  Thanks,
>>  Tamas
>>
>
>  As a sidenote, if this patch is problematic to merge for some reason,
> the current implementation still needs to change to return 0 instead of
> -ENOSYS as to conform to the x86 implementation. On the x86 side 0 is used
> to indicate failure. See 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe "x86:
> make certain memory sub-ops return valid values" for more info.
>
>
> 0 does not indicate failure.  It indicates the value held in the shared
> info field, which if 0, indicates that the domain is still being built by
> the toolstack.
>
> -ENOSYS is still a valid failure case.
>
> ~Andrew
>

Hm, in that case ignore my comment.

Thanks,
Tamas



* Re: [PATCH] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-08 20:47               ` Tamas K Lengyel
  2014-09-09 12:50                 ` Tamas K Lengyel
@ 2014-09-10 11:21                 ` Tamas K Lengyel
  1 sibling, 0 replies; 30+ messages in thread
From: Tamas K Lengyel @ 2014-09-10 11:21 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Tim Deegan, Ian Campbell, Stefano Stabellini,
	Stefano Stabellini



On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <tamas.lengyel@zentific.com
> wrote:

>
> On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
> wrote:
>
>> Hello Tamas,
>>
>> On 03/09/14 02:00, Tamas K Lengyel wrote:
>>
>>>
>>>
>>>
>>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>
>>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>>      > Hi Ian,
>>>      >
>>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>>     for ARM
>>>      > > and continue to punt on this interface until it is actually
>>>     needed by
>>>      > > something unavoidable on the guest side (and simultaneously
>>>     hope that
>>>      > > day never comes...).
>>>      >
>>>      > This patch is a requirement to make Xen Memory access working on
>>> ARM.
>>>      > Could you reconsider the possibility to apply this patch on Xen?
>>>
>>>     Needs more rationale as to why it is required for Xen Memory (do you
>>>     mean xenaccess?). I assume I'll find that in the relevant thread
>>> once I
>>>     get to it?
>>>
>>>
>>> It's used in a non-critical sanity check for performance reasons, as
>>> seen here:
>>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/
>>> common/mem_access.c#L75.
>>> Without the sanity check we might attempt to set mem_access permissions
>>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>>> do that but if we know beforehand that the gpfn is outside the scope of
>>> what the guest has we can skip the entire thing.
>>>
>>
>> It might be better if you carry this patch on your series.
>>
>> Regards,
>>
>> --
>> Julien Grall
>>
>
As I have been looking at this more and more, I'm coming to the
conclusion that replacing the check in mem_access would actually be more
sensible. The reason is that the user calling this code normally does so
based on information gathered via getdomaininfo, and the max_pages
reported back to the user is higher than what this implementation of
domain_get_maximum_gpfn reports (on ARM at least). I switched xen-access
to use tot_pages instead of max_pages, but tot_pages has been less than
what domain_get_maximum_gpfn reports. I'm not entirely sure why there is
such a discrepancy (page counts and the maximum gfn need not coincide
when the gfn space has holes), but enforcing a value that the user will
ultimately 'guess' does not make much sense.

Tamas


