* [PATCH] xen: arm: invalidate caches after map_domain_page done
@ 2014-08-01  7:25 Andrii Tseglytskyi
  2014-08-01  9:23 ` Julien Grall
  0 siblings, 1 reply; 12+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-01  7:25 UTC (permalink / raw)
  To: xen-devel, Ian Campbell, Stefano Stabellini, Tim Deegan

In some cases, the memory page returned by map_domain_page() contains
invalid data. The issue is observed when map_domain_page() is used
immediately after p2m_lookup(), when an arbitrary page of guest
domain memory needs to be mapped into Xen. Data on the freshly mapped
page may be stale. The issue is fixed by invalidating the caches
after the mapping is done.

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/mm.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..085780a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
      * We may not have flushed this specific subpage at map time,
      * since we only flush the 4k page not the superpage
      */
-    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
+    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);
 
     return (void *)va;
 }
-- 
1.7.9.5


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01  7:25 [PATCH] xen: arm: invalidate caches after map_domain_page done Andrii Tseglytskyi
@ 2014-08-01  9:23 ` Julien Grall
  2014-08-01 10:01   ` Andrii Tseglytskyi
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2014-08-01  9:23 UTC (permalink / raw)
  To: Andrii Tseglytskyi, xen-devel, Ian Campbell, Stefano Stabellini,
	Tim Deegan

Hi Andrii,

On 01/08/14 08:25, Andrii Tseglytskyi wrote:
> In some cases, memory page returned by map_domain_page() contains
> invalid data. Issue is observed when map_domain_page() is used
> immediately after p2m_lookup() function, when random page of
> guest domain memory is need to be mapped to xen. Data on this
> already memory page may be not valid. Issue is fixed when
> caches are invalidated after mapping is done.
>
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>   xen/arch/arm/mm.c |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0a243b0..085780a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
>        * We may not have flushed this specific subpage at map time,
>        * since we only flush the 4k page not the superpage
>        */
> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);

Why did you remove the TLB flush? It's a requirement to make sure the 
VA will point to the right PA.

> +    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);

This is not the right approach; map_domain_page is heavily used to map 
hypercall data pages. Those pages must reside in normal inner-cacheable
memory, so cleaning the cache is useless and time-consuming.

If you want to clean and invalidate the cache, even though I don't think 
this is right judging by the commit message, you have to introduce a new 
helper.

>       return (void *)va;
>   }
>

Regards,

-- 
Julien Grall


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01  9:23 ` Julien Grall
@ 2014-08-01 10:01   ` Andrii Tseglytskyi
  2014-08-01 10:28     ` Julien Grall
  0 siblings, 1 reply; 12+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-01 10:01 UTC (permalink / raw)
  To: Julien Grall; +Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel

Hi Julien,


On Fri, Aug 1, 2014 at 12:23 PM, Julien Grall <julien.grall@linaro.org> wrote:
>
> Hi Andrii,
>
>
> On 01/08/14 08:25, Andrii Tseglytskyi wrote:
>>
>> In some cases, memory page returned by map_domain_page() contains
>> invalid data. Issue is observed when map_domain_page() is used
>> immediately after p2m_lookup() function, when random page of
>> guest domain memory is need to be mapped to xen. Data on this
>> already memory page may be not valid. Issue is fixed when
>> caches are invalidated after mapping is done.
>>
>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>> ---
>>   xen/arch/arm/mm.c |    2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 0a243b0..085780a 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
>>        * We may not have flushed this specific subpage at map time,
>>        * since we only flush the 4k page not the superpage
>>        */
>> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
>
>
> Why did you remove the flush TLB? It's requirement to make sure the VA will pointed to the right PA.
>
>> +    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);
>
>
> This is not the right approach, map_domain_page is heavily used to map hypercall data page. Those pages must reside in normal inner-cacheable

What if map_domain_page() is used outside a hypercall?

> memory. So cleaning the cache is useless and time consuming.

I see that without cache invalidation the page contains invalid data.
Let me explain in more detail:
As you know, I resumed my work on the remoteproc MMU driver and
started fixing review comments. One of your comments was that the
ioremap_nocache() API should not be used for mapping domain pages,
therefore I tried using map_domain_page(), and this works for me
only if I invalidate the caches of the already-mapped VA.

I compared page dumps: the first page was mapped with ioremap_nocache(),
the second page was mapped with map_domain_page(), and I got the
following output:

(XEN) SGX_L2_MMU: pte_table[0] 0x9d428019 tmp[0] 0x9d428019
(XEN) SGX_L2_MMU: pte_table[1] 0x9d429019 tmp[1] 0x9d429019
(XEN) SGX_L2_MMU: pte_table[2] 0x9d42e019 tmp[2] 0x9d42e019
(XEN) SGX_L2_MMU: pte_table[3] 0x9d42f019 tmp[3] 0x00000000  <-- data
is not valid here

pte_table pointer is mapped using ioremap_nocache(), tmp is mapped
using map_domain_page()

>
>
> If you want to clean and invalidate the cache, even though I don't think this right by reading the commit message, you have to introduce a new helper.
>

Taking into account your previous comment, that map_domain_page() is
used for mapping hypercall data pages, it looks like I can't use this
API as is. In my code I solved this by calling
clean_and_invalidate_xen_dcache_va_range() immediately after the page
is mapped. This works fine for me: no invalid data is observed.

>>       return (void *)va;
>>   }
>>
>
> Regards,
>
> --
> Julien Grall




-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 10:01   ` Andrii Tseglytskyi
@ 2014-08-01 10:28     ` Julien Grall
  2014-08-01 10:50       ` Andrii Tseglytskyi
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2014-08-01 10:28 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel



On 01/08/14 11:01, Andrii Tseglytskyi wrote:
> On Fri, Aug 1, 2014 at 12:23 PM, Julien Grall <julien.grall@linaro.org> wrote:
>>
>> Hi Andrii,
>>
>>
>> On 01/08/14 08:25, Andrii Tseglytskyi wrote:
>>>
>>> In some cases, memory page returned by map_domain_page() contains
>>> invalid data. Issue is observed when map_domain_page() is used
>>> immediately after p2m_lookup() function, when random page of
>>> guest domain memory is need to be mapped to xen. Data on this
>>> already memory page may be not valid. Issue is fixed when
>>> caches are invalidated after mapping is done.
>>>
>>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>>> ---
>>>    xen/arch/arm/mm.c |    2 +-
>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>> index 0a243b0..085780a 100644
>>> --- a/xen/arch/arm/mm.c
>>> +++ b/xen/arch/arm/mm.c
>>> @@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
>>>         * We may not have flushed this specific subpage at map time,
>>>         * since we only flush the 4k page not the superpage
>>>         */
>>> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
>>
>>
>> Why did you remove the flush TLB? It's requirement to make sure the VA will pointed to the right PA.
>>
>>> +    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);
>>
>>
>> This is not the right approach, map_domain_page is heavily used to map hypercall data page. Those pages must reside in normal inner-cacheable
>
> What if use map_domain_page() outside hypercall ?

In general we require caches to be enabled for any communication with 
Xen. Until now, we didn't have any case where map_domain_page is used 
outside this scope.

>> memory. So cleaning the cache is useless and time consuming.
>
> I see that without cache invalidating page contains invalid data. Let
> me explain more deeply:
> As far as you know - I resumed my work with remoteproc MMU driver and
> I started fixing review comments. One of your comments is that
> ioremap_nocache() API should not be used for mapping domain pages,
> therefore I tried using map_domain_page, and this works fine for me
> only in case if I invalidate caches of already mapped va.
>
> I compared page dumps - 1 st page was mapped with ioremap_nocache()
> function, second page was mapped with map_domain_page() function, and
> I got the following output:
>
> (XEN) SGX_L2_MMU: pte_table[0] 0x9d428019 tmp[0] 0x9d428019
> (XEN) SGX_L2_MMU: pte_table[1] 0x9d429019 tmp[1] 0x9d429019
> (XEN) SGX_L2_MMU: pte_table[2] 0x9d42e019 tmp[2] 0x9d42e019
> (XEN) SGX_L2_MMU: pte_table[3] 0x9d42f019 tmp[3] 0x00000000  <-- data
> is not valid here
>
> pte_table pointer is mapped using ioremap_nocache(), tmp is mapped
> using map_domain_page()
>>
>>
>> If you want to clean and invalidate the cache, even though I don't think this right by reading the commit message, you have to introduce a new helper.
>>
>
> Taking in account your previous comment - that map_domain_page() is
> used for mapping of hypercall data page, looks like I can't use this
> API as is. In my code I solved this by calling of
> clean_and_invalidate_xen_dcache_va_range() immediately after page is
> mapped. This works fine for me - no invalid data is observed.

What is the attribute of this page in the guest? Non-cacheable?

I understand why you would need the invalidate, even though it's 
specific to your case. But not the clean...
If the page is non-cacheable you may write stale data into memory.
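The clean-versus-invalidate distinction being argued here can be shown
with a toy one-line write-back cache model (portable C, purely
illustrative; all names are made up): invalidate discards the cached
copy so the next read refetches from RAM, while clean writes the
cached copy back to RAM, which can overwrite data that another agent
wrote to RAM behind the cache's back.

```c
#include <assert.h>

/* Toy model of one write-back cache line in front of one RAM word.
 * Hypothetical names; this only illustrates clean vs. invalidate. */
static unsigned ram;                                  /* backing memory */
static struct { int valid, dirty; unsigned data; } line;

static unsigned cached_read(void)
{
    if ( !line.valid )                                /* miss: line fill */
    {
        line.valid = 1;
        line.dirty = 0;
        line.data  = ram;
    }
    return line.data;                                 /* hit: RAM not read */
}

static void cached_write(unsigned v)
{
    line.valid = 1;
    line.dirty = 1;
    line.data  = v;                                   /* RAM not updated yet */
}

/* DCIMVAC-like: discard the cached copy; next read refetches from RAM */
static void invalidate(void)
{
    line.valid = line.dirty = 0;
}

/* DCCMVAC-like: write a dirty cached copy back to RAM */
static void clean(void)
{
    if ( line.valid && line.dirty )
    {
        ram = line.data;
        line.dirty = 0;
    }
}
```

In this model, if an agent updates ram directly (as a guest writing
through a non-cacheable mapping would), a cached reader keeps seeing
the old value until invalidate() runs; calling clean() on a dirty line
at that point pushes the reader's stale copy over the agent's update.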

Regards,

-- 
Julien Grall


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 10:28     ` Julien Grall
@ 2014-08-01 10:50       ` Andrii Tseglytskyi
  2014-08-01 10:58         ` Julien Grall
  0 siblings, 1 reply; 12+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-01 10:50 UTC (permalink / raw)
  To: Julien Grall; +Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel

On Fri, Aug 1, 2014 at 1:28 PM, Julien Grall <julien.grall@linaro.org> wrote:
>
>
> On 01/08/14 11:01, Andrii Tseglytskyi wrote:
>>
>> On Fri, Aug 1, 2014 at 12:23 PM, Julien Grall <julien.grall@linaro.org>
>> wrote:
>>>
>>>
>>> Hi Andrii,
>>>
>>>
>>> On 01/08/14 08:25, Andrii Tseglytskyi wrote:
>>>>
>>>>
>>>> In some cases, memory page returned by map_domain_page() contains
>>>> invalid data. Issue is observed when map_domain_page() is used
>>>> immediately after p2m_lookup() function, when random page of
>>>> guest domain memory is need to be mapped to xen. Data on this
>>>> already memory page may be not valid. Issue is fixed when
>>>> caches are invalidated after mapping is done.
>>>>
>>>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>>>> ---
>>>>    xen/arch/arm/mm.c |    2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>>> index 0a243b0..085780a 100644
>>>> --- a/xen/arch/arm/mm.c
>>>> +++ b/xen/arch/arm/mm.c
>>>> @@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
>>>>         * We may not have flushed this specific subpage at map time,
>>>>         * since we only flush the 4k page not the superpage
>>>>         */
>>>> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
>>>
>>>
>>>
>>> Why did you remove the flush TLB? It's requirement to make sure the VA
>>> will pointed to the right PA.
>>>
>>>> +    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);
>>>
>>>
>>>
>>> This is not the right approach, map_domain_page is heavily used to map
>>> hypercall data page. Those pages must reside in normal inner-cacheable
>>
>>
>> What if use map_domain_page() outside hypercall ?
>
>
> In general we require cache enabled for any communication with Xen. Until
> now, we didn't have any case where map_domain_page is used outside this
> scope.
>
>
>>> memory. So cleaning the cache is useless and time consuming.
>>
>>
>> I see that without cache invalidating page contains invalid data. Let
>> me explain more deeply:
>> As far as you know - I resumed my work with remoteproc MMU driver and
>> I started fixing review comments. One of your comments is that
>> ioremap_nocache() API should not be used for mapping domain pages,
>> therefore I tried using map_domain_page, and this works fine for me
>> only in case if I invalidate caches of already mapped va.
>>
>> I compared page dumps - 1 st page was mapped with ioremap_nocache()
>> function, second page was mapped with map_domain_page() function, and
>> I got the following output:
>>
>> (XEN) SGX_L2_MMU: pte_table[0] 0x9d428019 tmp[0] 0x9d428019
>> (XEN) SGX_L2_MMU: pte_table[1] 0x9d429019 tmp[1] 0x9d429019
>> (XEN) SGX_L2_MMU: pte_table[2] 0x9d42e019 tmp[2] 0x9d42e019
>> (XEN) SGX_L2_MMU: pte_table[3] 0x9d42f019 tmp[3] 0x00000000  <-- data
>> is not valid here
>>
>> pte_table pointer is mapped using ioremap_nocache(), tmp is mapped
>> using map_domain_page()
>>>
>>>
>>>
>>> If you want to clean and invalidate the cache, even though I don't think
>>> this right by reading the commit message, you have to introduce a new
>>> helper.
>>>
>>
>> Taking in account your previous comment - that map_domain_page() is
>> used for mapping of hypercall data page, looks like I can't use this
>> API as is. In my code I solved this by calling of
>> clean_and_invalidate_xen_dcache_va_range() immediately after page is
>> mapped. This works fine for me - no invalid data is observed.
>
>
> What is the attribute of this page in guest? non-cacheable?
>

It is domain heap memory. I expect it should be cacheable. How can I
make sure of this?


> I understand why you would need the invalidate, even though it's specific to
> your case. But not the clean...
> If the page is non-cacheable you may write stall data in the memory.
>
> Regards,
>
> --
> Julien Grall



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 10:50       ` Andrii Tseglytskyi
@ 2014-08-01 10:58         ` Julien Grall
  2014-08-01 11:37           ` Andrii Tseglytskyi
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2014-08-01 10:58 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel



On 01/08/14 11:50, Andrii Tseglytskyi wrote:
> On Fri, Aug 1, 2014 at 1:28 PM, Julien Grall <julien.grall@linaro.org> wrote:
>>
>>
>> On 01/08/14 11:01, Andrii Tseglytskyi wrote:
>>>
>>> On Fri, Aug 1, 2014 at 12:23 PM, Julien Grall <julien.grall@linaro.org>
>>> wrote:
>>>>
>>>>
>>>> Hi Andrii,
>>>>
>>>>
>>>> On 01/08/14 08:25, Andrii Tseglytskyi wrote:
>>>>>
>>>>>
>>>>> In some cases, memory page returned by map_domain_page() contains
>>>>> invalid data. Issue is observed when map_domain_page() is used
>>>>> immediately after p2m_lookup() function, when random page of
>>>>> guest domain memory is need to be mapped to xen. Data on this
>>>>> already memory page may be not valid. Issue is fixed when
>>>>> caches are invalidated after mapping is done.
>>>>>
>>>>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>>>>> ---
>>>>>     xen/arch/arm/mm.c |    2 +-
>>>>>     1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>>>> index 0a243b0..085780a 100644
>>>>> --- a/xen/arch/arm/mm.c
>>>>> +++ b/xen/arch/arm/mm.c
>>>>> @@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
>>>>>          * We may not have flushed this specific subpage at map time,
>>>>>          * since we only flush the 4k page not the superpage
>>>>>          */
>>>>> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
>>>>
>>>>
>>>>
>>>> Why did you remove the flush TLB? It's requirement to make sure the VA
>>>> will pointed to the right PA.
>>>>
>>>>> +    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);
>>>>
>>>>
>>>>
>>>> This is not the right approach, map_domain_page is heavily used to map
>>>> hypercall data page. Those pages must reside in normal inner-cacheable
>>>
>>>
>>> What if use map_domain_page() outside hypercall ?
>>
>>
>> In general we require cache enabled for any communication with Xen. Until
>> now, we didn't have any case where map_domain_page is used outside this
>> scope.
>>
>>
>>>> memory. So cleaning the cache is useless and time consuming.
>>>
>>>
>>> I see that without cache invalidating page contains invalid data. Let
>>> me explain more deeply:
>>> As far as you know - I resumed my work with remoteproc MMU driver and
>>> I started fixing review comments. One of your comments is that
>>> ioremap_nocache() API should not be used for mapping domain pages,
>>> therefore I tried using map_domain_page, and this works fine for me
>>> only in case if I invalidate caches of already mapped va.
>>>
>>> I compared page dumps - 1 st page was mapped with ioremap_nocache()
>>> function, second page was mapped with map_domain_page() function, and
>>> I got the following output:
>>>
>>> (XEN) SGX_L2_MMU: pte_table[0] 0x9d428019 tmp[0] 0x9d428019
>>> (XEN) SGX_L2_MMU: pte_table[1] 0x9d429019 tmp[1] 0x9d429019
>>> (XEN) SGX_L2_MMU: pte_table[2] 0x9d42e019 tmp[2] 0x9d42e019
>>> (XEN) SGX_L2_MMU: pte_table[3] 0x9d42f019 tmp[3] 0x00000000  <-- data
>>> is not valid here
>>>
>>> pte_table pointer is mapped using ioremap_nocache(), tmp is mapped
>>> using map_domain_page()
>>>>
>>>>
>>>>
>>>> If you want to clean and invalidate the cache, even though I don't think
>>>> this right by reading the commit message, you have to introduce a new
>>>> helper.
>>>>
>>>
>>> Taking in account your previous comment - that map_domain_page() is
>>> used for mapping of hypercall data page, looks like I can't use this
>>> API as is. In my code I solved this by calling of
>>> clean_and_invalidate_xen_dcache_va_range() immediately after page is
>>> mapped. This works fine for me - no invalid data is observed.
>>
>>
>> What is the attribute of this page in guest? non-cacheable?
>>
>
> It is domain heap memory. I expect it should be cacheable. How can I
> make sure with this?

I'm lost...

The page you are trying to map belongs to a guest, right? When the guest 
writes data to this page, does it map it with a cacheable attribute or 
not? I suspect not.

I think your current problem is that the cache contains stale data. In 
this case you only have to invalidate the cache.

-- 
Julien Grall


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 10:58         ` Julien Grall
@ 2014-08-01 11:37           ` Andrii Tseglytskyi
  2014-08-01 14:01             ` Julien Grall
  0 siblings, 1 reply; 12+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-01 11:37 UTC (permalink / raw)
  To: Julien Grall; +Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel

Oh, now I got your point.


>>> On 01/08/14 11:01, Andrii Tseglytskyi wrote:
>>>>
>>>>
>>>> On Fri, Aug 1, 2014 at 12:23 PM, Julien Grall <julien.grall@linaro.org>
>>>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> Hi Andrii,
>>>>>
>>>>>
>>>>> On 01/08/14 08:25, Andrii Tseglytskyi wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> In some cases, memory page returned by map_domain_page() contains
>>>>>> invalid data. Issue is observed when map_domain_page() is used
>>>>>> immediately after p2m_lookup() function, when random page of
>>>>>> guest domain memory is need to be mapped to xen. Data on this
>>>>>> already memory page may be not valid. Issue is fixed when
>>>>>> caches are invalidated after mapping is done.
>>>>>>
>>>>>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>>>>>> ---
>>>>>>     xen/arch/arm/mm.c |    2 +-
>>>>>>     1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>>>>> index 0a243b0..085780a 100644
>>>>>> --- a/xen/arch/arm/mm.c
>>>>>> +++ b/xen/arch/arm/mm.c
>>>>>> @@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
>>>>>>          * We may not have flushed this specific subpage at map time,
>>>>>>          * since we only flush the 4k page not the superpage
>>>>>>          */
>>>>>> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Why did you remove the flush TLB? It's requirement to make sure the VA
>>>>> will pointed to the right PA.
>>>>>
>>>>>> +    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> This is not the right approach, map_domain_page is heavily used to map
>>>>> hypercall data page. Those pages must reside in normal inner-cacheable
>>>>
>>>>
>>>>
>>>> What if use map_domain_page() outside hypercall ?
>>>
>>>
>>>
>>> In general we require cache enabled for any communication with Xen. Until
>>> now, we didn't have any case where map_domain_page is used outside this
>>> scope.
>>>
>>>
>>>>> memory. So cleaning the cache is useless and time consuming.
>>>>
>>>>
>>>>
>>>> I see that without cache invalidating page contains invalid data. Let
>>>> me explain more deeply:
>>>> As far as you know - I resumed my work with remoteproc MMU driver and
>>>> I started fixing review comments. One of your comments is that
>>>> ioremap_nocache() API should not be used for mapping domain pages,
>>>> therefore I tried using map_domain_page, and this works fine for me
>>>> only in case if I invalidate caches of already mapped va.
>>>>
>>>> I compared page dumps - 1 st page was mapped with ioremap_nocache()
>>>> function, second page was mapped with map_domain_page() function, and
>>>> I got the following output:
>>>>
>>>> (XEN) SGX_L2_MMU: pte_table[0] 0x9d428019 tmp[0] 0x9d428019
>>>> (XEN) SGX_L2_MMU: pte_table[1] 0x9d429019 tmp[1] 0x9d429019
>>>> (XEN) SGX_L2_MMU: pte_table[2] 0x9d42e019 tmp[2] 0x9d42e019
>>>> (XEN) SGX_L2_MMU: pte_table[3] 0x9d42f019 tmp[3] 0x00000000  <-- data
>>>> is not valid here
>>>>
>>>> pte_table pointer is mapped using ioremap_nocache(), tmp is mapped
>>>> using map_domain_page()
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> If you want to clean and invalidate the cache, even though I don't
>>>>> think
>>>>> this right by reading the commit message, you have to introduce a new
>>>>> helper.
>>>>>
>>>>
>>>> Taking in account your previous comment - that map_domain_page() is
>>>> used for mapping of hypercall data page, looks like I can't use this
>>>> API as is. In my code I solved this by calling of
>>>> clean_and_invalidate_xen_dcache_va_range() immediately after page is
>>>> mapped. This works fine for me - no invalid data is observed.
>>>
>>>
>>>
>>> What is the attribute of this page in guest? non-cacheable?
>>>
>>
>> It is domain heap memory. I expect it should be cacheable. How can I
>> make sure with this?
>
>
> I'm lost...
>
> The page you are trying to map belongs to a guest, right? When the guest
> writes data in this page. Does it map with cacheable attribute or not? I
> suspect no.

Hard to say. This page is allocated using the usual kernel API such as
kmalloc(). Then its pfn is stored in the MMU first-level translation
pagetable.
Before storing it, the kernel driver flushes the corresponding cache
ranges. After this, the remoteproc_iommu framework translates pfns to
mfns.
So I would expect that after the map_domain_page() call all data will
be valid.

>
> I think your current problem is the cache contains stall data. In this case
> you have to only invalidate the cache.
>

That may work. But it looks like a new helper should be introduced in
this case. I see that only the clean_and_invalidate_xen_dcache_va_range
API is present, with no standalone API for invalidating only.
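For reference, such an invalidate-only helper would presumably mirror
the per-cache-line loop of the existing clean-and-invalidate helper,
issuing DCIMVAC instead of DCCIMVAC for each line. Below is a sketch of
just the address arithmetic; the helper name, the line size, and the
stubbed line operation are all hypothetical so the loop can be checked
on any host:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_BYTES 64   /* assumption: real code reads this at boot */

/* Stub standing in for the DCIMVAC coprocessor op; records which lines
 * a real helper would invalidate so the loop can be verified. */
static size_t    nlines;
static uintptr_t first_line, last_line;

static void dcache_invalidate_line(uintptr_t va)
{
    if ( nlines++ == 0 )
        first_line = va;
    last_line = va;
}

/* Hypothetical invalidate_xen_dcache_va_range: walk the range one
 * cache line at a time, rounding the start down so a misaligned
 * buffer still has its first line covered. */
static void invalidate_dcache_va_range(const void *p, size_t size)
{
    uintptr_t va  = (uintptr_t)p;
    uintptr_t end = va + size;

    for ( va &= ~(uintptr_t)(CACHELINE_BYTES - 1); va < end;
          va += CACHELINE_BYTES )
        dcache_invalidate_line(va);
    /* a real helper would finish with a DSB barrier here */
}
```

One subtlety a real helper must handle: if the start or end of the
range shares a cache line with unrelated data, a plain invalidate
discards that neighbour's dirty data too, so the edge lines typically
need to be cleaned-and-invalidated rather than just invalidated.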



> --
> Julien Grall



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 11:37           ` Andrii Tseglytskyi
@ 2014-08-01 14:01             ` Julien Grall
  2014-08-01 15:06               ` Andrii Tseglytskyi
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2014-08-01 14:01 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel



On 01/08/14 12:37, Andrii Tseglytskyi wrote:
>> The page you are trying to map belongs to a guest, right? When the guest
>> writes data in this page. Does it map with cacheable attribute or not? I
>> suspect no.
>
> Hard to say. This page is allocated using usual kernel API such as
> kmalloc(). Then its pfn is stored in MMU 1-st level translation
> pagetable.

If it's allocated from kmalloc then the page is allocated with a 
cacheable attribute.

> Before storing - kernel driver flush corresponding cache ranges.

Clean and invalidate the cache, right?

 > After
> this - remoteproc_iommu framework translates pfns to mfns.
> So - I would expect that after map_domain_page() function call all
> data will be valid.

Actually, with your explanation, me too. :)

I've run out of ideas. Maybe Ian or Stefano has a clue.

>>
>> I think your current problem is the cache contains stall data. In this case
>> you have to only invalidate the cache.
>>
>
> May work. But it looks like new helper should be introduced in this
> case. I see that only clean_and_invalidate_xen_dcache_va_range API is
> present, no standalone API for invalidating only.

You can add a new helper. I suspect it will be necessary sooner or later 
in other places.

Regards,

-- 
Julien Grall


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 14:01             ` Julien Grall
@ 2014-08-01 15:06               ` Andrii Tseglytskyi
  2014-08-01 17:49                 ` Julien Grall
  0 siblings, 1 reply; 12+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-01 15:06 UTC (permalink / raw)
  To: Julien Grall; +Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel

Hi Julien,

On Fri, Aug 1, 2014 at 5:01 PM, Julien Grall <julien.grall@linaro.org> wrote:
>
>
> On 01/08/14 12:37, Andrii Tseglytskyi wrote:
>>>
>>> The page you are trying to map belongs to a guest, right? When the guest
>>> writes data in this page. Does it map with cacheable attribute or not? I
>>> suspect no.
>>
>>
>> Hard to say. This page is allocated using usual kernel API such as
>> kmalloc(). Then its pfn is stored in MMU 1-st level translation
>> pagetable.
>
>
> If it's allocated from kmalloc then the page is allocated with cacheable
> attribute.
>
>
>> Before storing - kernel driver flush corresponding cache ranges.
>
>
> Clean and invalidate the cache, right?
>

Looks like invalidate only. See my next comment.

>
>> After
>>
>> this - remoteproc_iommu framework translates pfns to mfns.
>> So - I would expect that after map_domain_page() function call all
>> data will be valid.
>
>
> Actually with your explanation me too. :)
>
> I'm run out of idea. Maybe Ian or Stefano have any clue.
>

Looks like I see where the issue is:
After mapping is done, the kernel driver calls flush_tlb_all(), which
just invalidates the cache; it issues a command similar to the
following Xen macro:

#define DTLBIALL        p15,0,c8,c6,0   /* Invalidate data TLB */

Then, after mapping is done, remoteproc_iommu starts translation and
calls map_domain_page() -> flush_xen_data_tlb_range_va_local(),
which is described by the following macro:

#define TLBIMVAH        p15,4,c8,c7,1   /* Invalidate Unified Hyp. TLB by MVA */

So, I got 2 invalidates and no cleans. And when I started using
clean_and_invalidate_xen_dcache_va_range() I got both:

#define DCCIMVAC        p15,0,c7,c14,1  /* Data cache clean and
invalidate by MVA */

I need both clean and invalidate. If I don't have the clean, data may
still be present in the cache and not flushed to RAM, and I will see
invalid data after the map_domain_page() call.

>
>>>
>>> I think your current problem is the cache contains stall data. In this
>>> case
>>> you have to only invalidate the cache.
>>>
>>
>> May work. But it looks like new helper should be introduced in this
>> case. I see that only clean_and_invalidate_xen_dcache_va_range API is
>> present, no standalone API for invalidating only.
>
>
> You can add a new helper. I suspect it will be necessary sooner or later in
> other places.

I think wrapper which just cleans cache after map_domain_page() will
be good enough. Will check.

>
> Regards,
>
> --
> Julien Grall



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 15:06               ` Andrii Tseglytskyi
@ 2014-08-01 17:49                 ` Julien Grall
  2014-08-01 18:54                   ` Andrii Tseglytskyi
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2014-08-01 17:49 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel



On 01/08/14 16:06, Andrii Tseglytskyi wrote:
> Looks like I see where is the issue:
> After mapping done kernel driver calls flush_tlb_all() function, which
> just invalidates cache, it does the similar command, as the following

flush_tlb_all doesn't invalidate the cache but the TLB.

> Xen macros:
>
> #define DTLBIALL        p15,0,c8,c6,0   /* Invalidate data TLB */
>
> Then after mapping done, remoteproc_iommu starts translation, calls
> map_domain_page() -> flush_xen_data_tlb_range_va_local(),
> which is described with following macros:
>
> #define TLBIMVAH        p15,4,c8,c7,1   /* Invalidate Unified Hyp. TLB by MVA */
>
> So, I got 2 invalidates and no cleans. And when I started using
> clean_and_invalidate_xen_dcache_va_range() I got both:
>
> #define DCCIMVAC        p15,0,c7,c14,1  /* Data cache clean and
> invalidate by MVA */
>
> I need both - clean and invalidate. If I don't have clean - data may
> still present in cache and not flushed to RAM - I will see invalid
> data after map_domain_page() call

You seem to be mixing up the TLB and the cache in your mail. If the page 
has been mapped with a cacheable attribute (which kmalloc should do), 
then there should not be any issue in Xen.

Your patch is removing the TLB flush and you are very lucky that Xen is 
still working correctly...
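The two structures being conflated in this exchange can be separated
with a minimal model (portable C, illustrative only; the layout and
names are invented): the TLB caches VA-to-PA translations, the data
cache caches memory contents by address, and invalidating one leaves
the other untouched.

```c
#include <assert.h>
#include <string.h>

enum { NSLOTS = 4 };

/* TLB: caches address translations (what TLBIALL/TLBIMVAH affect). */
static struct { int valid; unsigned va, pa; } tlb[NSLOTS];

/* Data cache: caches memory contents (what DCIMVAC/DCCIMVAC affect). */
static struct { int valid; unsigned pa, data; } dcache[NSLOTS];

static void tlb_fill(int i, unsigned va, unsigned pa)
{
    tlb[i].valid = 1; tlb[i].va = va; tlb[i].pa = pa;
}

static void dcache_fill(int i, unsigned pa, unsigned data)
{
    dcache[i].valid = 1; dcache[i].pa = pa; dcache[i].data = data;
}

/* Drops every translation, but none of the cached data. */
static void flush_tlb_all(void) { memset(tlb, 0, sizeof(tlb)); }

/* Drops every cached datum, but none of the translations. */
static void invalidate_dcache_all(void) { memset(dcache, 0, sizeof(dcache)); }
```

So a TLB flush after remapping makes the VA resolve to the new PA, but
any stale data already sitting in the data cache for that PA survives
until a separate cache invalidate is issued.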

Regards,

-- 
Julien Grall


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 17:49                 ` Julien Grall
@ 2014-08-01 18:54                   ` Andrii Tseglytskyi
  2014-08-05 14:33                     ` Ian Campbell
  0 siblings, 1 reply; 12+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-01 18:54 UTC (permalink / raw)
  To: Julien Grall; +Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel

Hi Julien,

On Fri, Aug 1, 2014 at 8:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
>
>
> On 01/08/14 16:06, Andrii Tseglytskyi wrote:
>>
>> Looks like I see where the issue is:
>> After the mapping is done, the kernel driver calls flush_tlb_all(),
>> which just invalidates the cache; it issues a command similar to the following
>
>
> flush_tlb_all doesn't invalidate the cache but the TLB.
>
>
>> Xen macro:
>>
>> #define DTLBIALL        p15,0,c8,c6,0   /* Invalidate data TLB */
>>
>> Then, after the mapping is done, remoteproc_iommu starts translation and calls
>> map_domain_page() -> flush_xen_data_tlb_range_va_local(),
>> which is described by the following macro:
>>
>> #define TLBIMVAH        p15,4,c8,c7,1   /* Invalidate Unified Hyp. TLB by
>> MVA */
>>
>> So, I got 2 invalidates and no cleans. And when I started using
>> clean_and_invalidate_xen_dcache_va_range() I got both:
>>
>> #define DCCIMVAC        p15,0,c7,c14,1  /* Data cache clean and
>> invalidate by MVA */
>>
>> I need both - clean and invalidate. If I don't have the clean, data may
>> still be present in the cache and not flushed to RAM, and I will see invalid
>> data after the map_domain_page() call.
>
>
> You seem to mix up the TLB and the cache in your mail. If the page has been
> mapped with a cacheable attribute (which should be done by kmalloc), then it
> should not cause any issue in Xen.
>
> Your patch is removing the TLB flush, and you are very lucky that Xen is
> still working correctly...

I will not remove the TLB flush or modify common code. In any case,
map_domain_page() does not work as-is for me.
And I think that depending on how the page is mapped in the
kernel - cacheable or not - is not the best solution for me.
I think I'll introduce a new wrapper around map_domain_page() with a
proper cache clean+invalidate API, which will work for me.

Thanks a lot for the detailed review and the clear explanations ))

Regards,
Andrii

>
> Regards,
>
> --
> Julien Grall



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com


* Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
  2014-08-01 18:54                   ` Andrii Tseglytskyi
@ 2014-08-05 14:33                     ` Ian Campbell
  0 siblings, 0 replies; 12+ messages in thread
From: Ian Campbell @ 2014-08-05 14:33 UTC (permalink / raw)
  To: Andrii Tseglytskyi
  Cc: Tim Deegan, Julien Grall, Stefano Stabellini, xen-devel

On Fri, 2014-08-01 at 21:54 +0300, Andrii Tseglytskyi wrote:
>  Hi Julien,
> 
> On Fri, Aug 1, 2014 at 8:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
> >
> >
> > On 01/08/14 16:06, Andrii Tseglytskyi wrote:
> >>
> >> Looks like I see where the issue is:
> >> After the mapping is done, the kernel driver calls flush_tlb_all(),
> >> which just invalidates the cache; it issues a command similar to the following
> >
> >
> > flush_tlb_all doesn't invalidate the cache but the TLB.
> >
> >
> >> Xen macro:
> >>
> >> #define DTLBIALL        p15,0,c8,c6,0   /* Invalidate data TLB */
> >>
> >> Then, after the mapping is done, remoteproc_iommu starts translation and calls
> >> map_domain_page() -> flush_xen_data_tlb_range_va_local(),
> >> which is described by the following macro:
> >>
> >> #define TLBIMVAH        p15,4,c8,c7,1   /* Invalidate Unified Hyp. TLB by
> >> MVA */
> >>
> >> So, I got 2 invalidates and no cleans. And when I started using
> >> clean_and_invalidate_xen_dcache_va_range() I got both:
> >>
> >> #define DCCIMVAC        p15,0,c7,c14,1  /* Data cache clean and
> >> invalidate by MVA */
> >>
> >> I need both - clean and invalidate. If I don't have the clean, data may
> >> still be present in the cache and not flushed to RAM, and I will see invalid
> >> data after the map_domain_page() call.
> >
> >
> > You seem to mix up the TLB and the cache in your mail. If the page has been
> > mapped with a cacheable attribute (which should be done by kmalloc), then it
> > should not cause any issue in Xen.
> >
> > Your patch is removing the TLB flush, and you are very lucky that Xen is
> > still working correctly...
> 
> I will not remove the TLB flush or modify common code. In any case,
> map_domain_page() does not work as-is for me.
> And I think that depending on how the page is mapped in the
> kernel - cacheable or not - is not the best solution for me.

I'd much prefer that it was understood why the kernel's supposedly
cacheable mappings + maintenance done by the driver are not working for
you before adding a new API on the Xen side.

Your message at 01/08/14 16:06 seems to be a bit confused WRT caches vs.
TLBs, and you also said that the kernel was only invalidating the caches,
not cleaning them (invalidating == throwing away data in the cache, exposing
whatever was in the underlying RAM).

I think you need to sort all of that out on the kernel side before
considering hypervisor patches.

Ian.

> I think I'll introduce a new wrapper around map_domain_page() with a
> proper cache clean+invalidate API, which will work for me.
> 
> Thanks a lot for the detailed review and the clear explanations ))
> 
> Regards,
> Andrii
> 
> >
> > Regards,
> >
> > --
> > Julien Grall
> 
> 
> 


end of thread, other threads:[~2014-08-05 14:33 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-01  7:25 [PATCH] xen: arm: invalidate caches after map_domain_page done Andrii Tseglytskyi
2014-08-01  9:23 ` Julien Grall
2014-08-01 10:01   ` Andrii Tseglytskyi
2014-08-01 10:28     ` Julien Grall
2014-08-01 10:50       ` Andrii Tseglytskyi
2014-08-01 10:58         ` Julien Grall
2014-08-01 11:37           ` Andrii Tseglytskyi
2014-08-01 14:01             ` Julien Grall
2014-08-01 15:06               ` Andrii Tseglytskyi
2014-08-01 17:49                 ` Julien Grall
2014-08-01 18:54                   ` Andrii Tseglytskyi
2014-08-05 14:33                     ` Ian Campbell
