From: Julien Grall <julien.grall@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	nd <nd@arm.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Stefano Stabellini <stefanos@xilinx.com>
Subject: Re: [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability
Date: Wed, 24 Apr 2019 11:42:04 +0100	[thread overview]
Message-ID: <29efc3a4-7012-30e0-0688-4741eca49ed4@arm.com> (raw)
In-Reply-To: <alpine.DEB.2.10.1904221431580.1370@sstabellini-ThinkPad-X260>

Hi,

On 22/04/2019 22:59, Stefano Stabellini wrote:
> On Sun, 21 Apr 2019, Julien Grall wrote:
>>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>>> index 30cfb01..5b8fcc5 100644
>>>>> --- a/xen/arch/arm/p2m.c
>>>>> +++ b/xen/arch/arm/p2m.c
>>>>> @@ -1068,9 +1068,24 @@ int unmap_regions_p2mt(struct domain *d,
>>>>>     int map_mmio_regions(struct domain *d,
>>>>>                          gfn_t start_gfn,
>>>>>                          unsigned long nr,
>>>>> -                     mfn_t mfn)
>>>>> +                     mfn_t mfn,
>>>>> +                     uint32_t cache_policy)
>>>>>     {
>>>>> -    return p2m_insert_mapping(d, start_gfn, nr, mfn,
>>>>> p2m_mmio_direct_dev);
>>>>> +    p2m_type_t t;
>>>>> +
>>>>> +    switch ( cache_policy )
>>>>> +    {
>>>>> +    case CACHEABILITY_MEMORY:
>>>>> +        t = p2m_ram_rw;
>>>>
>>>> Potentially, you want to clean the cache here.
>>>
>>> We have been talking about this and I have been looking through the
>>> code. I am still not exactly sure how to proceed.
>>>
>>> Is there a reason why cacheable reserved_memory pages should be treated
>>> differently from normal memory, in regards to cleaning the cache? It
>>> seems to me that they should be the same in terms of cache issues?
>>
>> Your wording is a bit confusing. I guess what you call "normal memory" is
>> guest memory, am I right?
> 
> Yes, right. I wonder if we need to come up with clearer terms. Given the
> many types of memory we have to deal with, it might become even more
> confusing going forward. Guest normal memory maybe? Or guest RAM?

The term "normal memory" is really confusing because this is a memory type on 
Arm. reserved-regions are also not *MMIO* as they are part of the RAM that was 
reserved for special usage. So the term "guest RAM" is also not appropriate.

I understand that 'iomem' is a quick way to get reserved-memory regions mapped 
in the guest. However, this feels like an abuse of the interface because 
reserved-memory regions are technically not MMIO. They can also be used by the 
OS for storing data when not otherwise in use (provided the DT node contains 
the property 'reusable').

Overall, we want to rethink how 'reserved-memory' regions are going to be 
treated. The solution suggested in this series is not going to be viable for 
very long.

> 
> 
>> Any memory assigned to the guest is cleaned & invalidated (technically clean
>> is enough) before getting assigned to the guest (see flush_page_to_ram). So
>> this patch is introducing a different behavior than what we currently have
>> for other normal memory.
> 
> This is what I was trying to understand, thanks for the pointer. I am
> unsure whether we want to do this for reserved-memory regions too: on
> one hand, it would make things more consistent, on the other hand I am
> not sure it is the right behavior for reserved-memory. Let's think it
> through.
> 
> The use case is communication with other heterogeneous CPUs. In that
> case, it would matter if a domU crashes with the ring mapped and an
> unflushed write (partial?) to the ring. The domU gets restarted with the
> same ring mapping. In this case, it looks like we would want to clean
> the cache. It wouldn't matter if it is done at VM shutdown or at VM
> creation time.
> 
> So maybe it makes sense to do something like flush_page_to_ram for
> reserved-memory pages. It seems simple to do it at VM creation time,
> because we could invalidate the cache when map_mmio_regions is called,
> either there or from the domctl handler. On the other hand, I don't know
> where to do it at domain destruction time because no domctl is called to
> unmap the reserved-memory region. Also, cleaning the cache at domain
> destruction time would introduce a difference compared to guest normal
> memory.
> 
> I know I said the opposite in our meeting, but maybe cleaning the cache
> for reserved-memory regions at domain creation time is the right way
> forward?

I don't have a strong opinion on it.
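
That said, for concreteness, here is a minimal sketch (not the posted patch) of 
what cleaning the cache at creation time could look like in map_mmio_regions(), 
mirroring what flush_page_to_ram() already does for regular guest memory. The 
CACHEABILITY_DEVMEM name, the error handling and the second argument of 
flush_page_to_ram() are assumptions on my side:

int map_mmio_regions(struct domain *d,
                     gfn_t start_gfn,
                     unsigned long nr,
                     mfn_t mfn,
                     uint32_t cache_policy)
{
    p2m_type_t t;
    unsigned long i;

    switch ( cache_policy )
    {
    case CACHEABILITY_MEMORY:
        t = p2m_ram_rw;
        /* Clean the cache so stale lines cannot leak into the new domain. */
        for ( i = 0; i < nr; i++ )
            flush_page_to_ram(mfn_x(mfn_add(mfn, i)), false /* no icache sync */);
        break;
    case CACHEABILITY_DEVMEM: /* assumed name for the non-cacheable policy */
        t = p2m_mmio_direct_dev;
        break;
    default:
        return -EINVAL;
    }

    return p2m_insert_mapping(d, start_gfn, nr, mfn, t);
}

Note this only covers the creation side; as you say, there is no obvious hook 
on the destruction side because no domctl is issued to unmap the region.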

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
