From: Julien Grall <julien.grall@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	nd <nd@arm.com>, "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Stefano Stabellini <stefanos@xilinx.com>
Subject: Re: [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability
Date: Sun, 21 Apr 2019 18:32:34 +0100	[thread overview]
Message-ID: <c748bb73-03d8-bb00-6b72-3f5b04d89c48@arm.com> (raw)
In-Reply-To: <alpine.DEB.2.10.1904191632380.1370@sstabellini-ThinkPad-X260>

Hi Stefano,

On 4/20/19 1:02 AM, Stefano Stabellini wrote:
> On Tue, 26 Feb 2019, Julien Grall wrote:
>> Hi,
>>
>> On 26/02/2019 23:07, Stefano Stabellini wrote:
>>> Reuse the existing padding field to pass cacheability information about
>>> the memory mapping, specifically, whether the memory should be mapped as
>>> normal memory or as device memory (this is what we have today).
>>>
>>> Add a cacheability parameter to map_mmio_regions. 0 means device
>>> memory, which is what we have today.
>>>
>>> On ARM, map device memory as p2m_mmio_direct_dev (as it is already done
>>> today) and normal memory as p2m_ram_rw.
>>>
>>> On x86, return error if the cacheability requested is not device memory.
>>>
>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>> CC: JBeulich@suse.com
>>> CC: andrew.cooper3@citrix.com
>>> ---
>>>  xen/arch/arm/gic-v2.c            |  3 ++-
>>>  xen/arch/arm/p2m.c               | 19 +++++++++++++++++--
>>>  xen/arch/arm/platforms/exynos5.c |  4 ++--
>>>  xen/arch/arm/platforms/omap5.c   |  8 ++++----
>>>  xen/arch/arm/vgic-v2.c           |  2 +-
>>>  xen/arch/arm/vgic/vgic-v2.c      |  2 +-
>>>  xen/arch/x86/hvm/dom0_build.c    |  7 +++++--
>>>  xen/arch/x86/mm/p2m.c            |  6 +++++-
>>>  xen/common/domctl.c              |  8 +++++---
>>>  xen/drivers/vpci/header.c        |  3 ++-
>>>  xen/include/public/domctl.h      |  4 +++-
>>>  xen/include/xen/p2m-common.h     |  3 ++-
>>>  12 files changed, 49 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
>>> index e7eb01f..1ea3da2 100644
>>> --- a/xen/arch/arm/gic-v2.c
>>> +++ b/xen/arch/arm/gic-v2.c
>>> @@ -690,7 +690,8 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
>>>
>>>          ret = map_mmio_regions(d, gaddr_to_gfn(v2m_data->addr),
>>>                                 PFN_UP(v2m_data->size),
>>> -                               maddr_to_mfn(v2m_data->addr));
>>> +                               maddr_to_mfn(v2m_data->addr),
>>> +                               CACHEABILITY_DEVMEM);
>>>          if ( ret )
>>>          {
>>>              printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 30cfb01..5b8fcc5 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -1068,9 +1068,24 @@ int unmap_regions_p2mt(struct domain *d,
>>>  int map_mmio_regions(struct domain *d,
>>>                       gfn_t start_gfn,
>>>                       unsigned long nr,
>>> -                     mfn_t mfn)
>>> +                     mfn_t mfn,
>>> +                     uint32_t cache_policy)
>>>  {
>>> -    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
>>> +    p2m_type_t t;
>>> +
>>> +    switch ( cache_policy )
>>> +    {
>>> +    case CACHEABILITY_MEMORY:
>>> +        t = p2m_ram_rw;
>>
>> Potentially, you want to clean the cache here.
>
> We have been talking about this and I have been looking through the
> code. I am still not exactly sure how to proceed.
>
> Is there a reason why cacheable reserved_memory pages should be treated
> differently from normal memory, in regards to cleaning the cache? It
> seems to me that they should be the same in terms of cache issues?

Your wording is a bit confusing. I guess what you call "normal memory"
is guest memory, am I right?

Any memory assigned to the guest is cleaned & invalidated (technically
clean is enough) before getting assigned to the guest (see
flush_page_to_ram). So this patch is introducing a different behavior
than what we currently have for other normal memory.

But my concern is that you may use the memory attributes inconsistently,
breaking coherency. For instance, you map the memory in guest A with
cacheable attributes, then after the guest dies, you remap it to guest B
with non-cacheable attributes. Guest B may then have an inconsistent
view of the mapped memory. This is one case where cleaning the cache
would be necessary.

One could consider this part of the "device reset" (the name is a bit of
an abuse), so Xen should not take care of it. The most important bit is
to have documentation that reflects the issues with such parameters, so
the user is aware of what could go wrong when using "iomem".

>
> Is there a place where we clean the dcache for normal pages, one that is
> not tied to p2m->clean_pte, which is different (it is there for iommu
> reasons)?

p2m->clean_pte is only there to deal with a non-coherent IOMMU
page-table walker. See above for flushing normal pages.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel