[RFC] design: design doc for 1:1 direct-map
From: Penny Zheng @ 2020-12-08  5:21 UTC
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Kaly.Xin, Wei.Chen, nd

This is a draft design of the infrastructure for now, not ready for upstream
yet (hence the RFC tag), but we thought it would be useful to start a
discussion with the community first.

Create a design doc for 1:1 direct-map.
It aims to describe why and how we allocate 1:1 direct-map (guest physical
== physical) domains.

This document is partly based on Stefano Stabellini's patch series v1:
[direct-map DomUs](
https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
The part about allocating 1:1 direct-map domains with user-defined memory
regions will be covered in a follow-up design on static memory allocation.
---
 docs/designs/1_1_direct-map.md | 87 ++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 docs/designs/1_1_direct-map.md

diff --git a/docs/designs/1_1_direct-map.md b/docs/designs/1_1_direct-map.md
new file mode 100644
index 0000000000..ce3e2c77fd
--- /dev/null
+++ b/docs/designs/1_1_direct-map.md
@@ -0,0 +1,87 @@
+# Preface
+
+The document is an early draft for the direct-map memory map
+(`guest physical == physical`) of domUs. For now, it is restricted to the ARM
+architecture.
+
+It aims to describe why and how the guest would be created as a direct-map domain.
+
+This document is partly based on Stefano Stabellini's patch series v1:
+[direct-map DomUs](
+https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).
+
+This is a first draft and some questions are still unanswered. When this is the
+case, the text shall contain XXX.
+
+# Introduction
+
+## Background
+
+Cases where a domU needs a direct-map memory map:
+
+  * IOMMU not present in the system.
+  * IOMMU disabled, since it doesn't cover a specific device.
+  * IOMMU disabled, since it doesn't have enough bandwidth.
+  * IOMMU disabled, since it adds too much latency.
+
+*WARNING:
+Users should be careful that it is not always secure to assign a device without
+IOMMU/SMMU protection.
+Users must be aware of this risk: guests having access to hardware with DMA
+capability must be trusted, or they could use the DMA engine to access any
+other memory area.
+Guests could use additional security hardware components like NOC or System
+MPU to protect the memory.
+
+## Design
+
+The implementation may cover the following aspects:
+
+### Native Address and IRQ numbers for GIC and UART(vPL011)
+
+Today, fixed addresses and IRQ numbers are used to map GIC and UART(vPL011)
+in DomUs, which may cause potential clashes on direct-map domains.
+So, using native addresses and IRQ numbers for the GIC and UART(vPL011) in
+direct-map domains is necessary.
+e.g.
+For the virtual interrupt of vPL011: instead of always using `GUEST_VPL011_SPI`,
+try to reuse the physical SPI number if possible.
+
+### Device tree option: `direct-map`
+
+Introduce a new device tree option `direct-map` for direct-map domains.
+Then, when users want to create a direct-map domain (except DOM0), the
+`direct-map` property needs to be added under the appropriate `/chosen/domUx`.
+
+
+            chosen {
+                ...
+                domU1 {
+                    compatible = "xen,domain";
+                    #address-cells = <0x2>;
+                    #size-cells = <0x1>;
+                    direct-map;
+                    ...
+                };
+                ...
+            };
+
+If users are using imagebuilder, they can add to boot.source something like the
+following:
+
+    fdt set /chosen/domU1 direct-map
+
+Users could also use `xl` to create direct-map domains; just use the following
+config option: `direct-map=true`.
+
+### direct-map guest memory allocation
+
+The function `allocate_memory_direct_map` is based on `allocate_memory_11` and
+shall be refined to allocate memory for all direct-map domains, including DOM0.
+Roughly speaking, it first tries to allocate an arbitrary memory chunk of the
+requested size from the domain sub-allocator (`alloc_domheap_pages`). If that
+fails, it halves the chunk size and retries, until it either succeeds or bails
+out at the smallest chunk size.
+Then, `insert_11_bank` inserts the allocated pages into a memory bank, ordered
+by address, and also sets up the guest P2M mapping (`guest_physmap_add_page`)
+to ensure `gfn == mfn`.
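+
+As a rough, illustrative sketch only (the helpers named here follow the
+existing Dom0 1:1 allocation path in `domain_build.c`; `min_order` and the
+exact signatures are placeholders), the allocation loop could look like:
+
+    /* Keep allocating banks until the requested amount is satisfied.
+     * insert_11_bank() is expected to reduce kinfo->unassigned_mem. */
+    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
+
+    while ( kinfo->unassigned_mem > 0 )
+    {
+        struct page_info *pg = alloc_domheap_pages(d, order, 0);
+
+        if ( pg == NULL )
+        {
+            if ( order == min_order )
+                break;          /* even the smallest chunk failed: bail out */
+            order--;            /* halve the chunk size and retry */
+            continue;
+        }
+
+        /* Insert the chunk as a bank (ordered by address); this also sets
+         * up the gfn == mfn mappings via guest_physmap_add_page(). */
+        if ( !insert_11_bank(d, kinfo, pg, order) )
+        {
+            free_domheap_pages(pg, order);
+            break;              /* no room for another bank */
+        }
+    }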
-- 
2.25.1



RE: [RFC] design: design doc for 1:1 direct-map
From: Penny Zheng @ 2021-01-06 11:51 UTC
  To: Julien Grall, xen-devel, sstabellini
  Cc: Bertrand Marquis, Kaly Xin, Wei Chen, nd, Paul Durrant, famzheng


> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, January 5, 2021 8:41 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>; Wei Chen <Wei.Chen@arm.com>; nd <nd@arm.com>;
> Paul Durrant <paul@xen.org>; famzheng@amazon.com
> Subject: Re: [RFC] design: design doc for 1:1 direct-map
>
>
>
> On 10/12/2020 07:02, Penny Zheng wrote:
> > Hi Julien
>
> Hi Penny,
>
> Apologies for the late answer.
>

Hi Julien

NP. Thanks for the detailed comments again ;).

> >
> > Thanks for the nice and detailed comments. (*^▽^*) Here are the
> > replies:
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, December 8, 2020 5:07 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Kaly Xin
> >> <Kaly.Xin@arm.com>; Wei Chen <Wei.Chen@arm.com>; nd
> <nd@arm.com>;
> >> Paul Durrant <paul@xen.org>; famzheng@amazon.com
> >> Subject: Re: [RFC] design: design doc for 1:1 direct-map
> >>
> >> Hi Penny,
> >>
> >> I am adding Paul and Zheng in the thread as there are similar
> >> interest for the
> >> x86 side.
> >>
> >> On 08/12/2020 05:21, Penny Zheng wrote:
> >>> This is one draft design about the infrastructure for now, not ready
> >>> for upstream yet (hence the RFC tag), thought it'd be useful to
> >>> firstly start a discussion with the community.
> >>>
> >>> Create one design doc for 1:1 direct-map.
> >>> It aims to describe why and how we allocate 1:1 direct-map(guest
> >>> physical == physical) domains.
> >>>
> >>> This document is partly based on Stefano Stabellini's patch serie v1:
> >>> [direct-map DomUs](
> >>> https://lists.xenproject.org/archives/html/xen-devel/2020-
> >> 04/msg00707.html).
> >>
> >> May I ask why a different approach?
> >
> > In Stefano original design, he'd like to allocate 1:1 direct-map with
> > user-defined memory regions and he prefers allocating it from sub-domain
> allocator.
>
> I am not entirely sure what you are referring to with "sub-domain allocator".
>

Sorry. I mean domain-heap sub-allocator. ;)
I found this reference here
( https://github.com/xen-project/xen/blob/master/xen/common/page_alloc.c#L2233 ).

> >
> > And it brings quite a discussion there and in the last, everyone kinds
> > of all agrees that it is not workable. Since if requested memory ever
> > goes into any allocators, no matter boot, or sub-domain allocator, we
> > could not ensure that before actually allocating it for one 1:1
> > direct-map domain, it will not be into any other use.
>
> Yes, you cannot give the memory to the heap allocator and expect the region
> to always be free. However, you can mark them as reserve so the allocator
> doesn't touch it.
>
> We (AWS) also needs to reserve memory for later use in the case of
> LiveUpdate. In our case, the memory already contain guest data, so it is not
> possible to give them to any allocator.
>
> We solved it by excluding the page from any allocator and then marking then
> page as allocated/used when giving to the domain.
>
> There are some corner cases unsolved when using NUMA. Aside that this
> work because the heap allocator don't keep a list of in-use pages.
>

Yes. I agree.

Here is my new rough idea on how to allocate memory for direct-map domains:

# New Boot Module: `BOOTMOD_STATIC_MEM`

So, in order to limit the fragmentation of the memory, users must select an
appropriate chunk of memory as the static memory allocation pool at
compile time.

Later, all direct-map guests' memory must be allocated from this static
memory allocation pool.

Also, we call this kind of domain, whose RAM is allocated from the static
memory allocation pool, a `static domain`.

Static domains are not limited to direct-map domains, but that part is not
relevant to this design.

Memory in the static memory allocation pool shall be reserved from the
beginning, to ensure that it never goes to any memory allocator for other use,
neither the boot allocator nor the domain-heap allocator.

Here we introduce a new `BOOTMOD_STATIC_MEM` node to define the static memory
allocation pool under the `/chosen` node in the device tree.

It contains the following properties:

- compatible

    Must always include the following compatibility string:

        `"multiboot,static-mem"`

- reg

    reg specifies the physical address of the module in RAM and the length of
    the module.

Here is one example:

        module@0xc0000000 {
                compatible = "multiboot,static-mem";
                reg = <0xc0000000 0x40000000>;
        };

RAM at 0xc0000000, 1GB in size, will be reserved as the static memory
allocation pool.
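
A minimal sketch of how boot code could recognize this module and keep the
range away from the allocators (the single-pool variables and the
`read_reg_property()` helper are illustrative assumptions, not existing Xen
interfaces; `fdt_node_check_compatible()` is the standard libfdt call):

    /* Illustrative only: remember the "multiboot,static-mem" range at boot
     * so that it is never handed to the boot or domain-heap allocators. */
    static paddr_t static_pool_start, static_pool_size;

    static void __init check_static_mem_node(const void *fdt, int node)
    {
        if ( fdt_node_check_compatible(fdt, node, "multiboot,static-mem") )
            return;                                    /* not our node */

        /* Hypothetical helper: parse this node's "reg" property. */
        read_reg_property(fdt, node, &static_pool_start, &static_pool_size);

        /* The range [start, start + size) is deliberately not passed to
         * init_boot_pages(), so neither the boot allocator nor the
         * domain-heap allocator will ever hand it out. */
    }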

# New Page Flag: `PGC_state_reserved`

In order to differentiate pages allocated from the static memory allocation
pool from those allocated from the heap allocator for normal domains, we shall
introduce a new page flag, `PGC_state_reserved`.

When preparing memory for Xen at boot time (`setup_mm`), after setting up the
frame tables for all RAM (`setup_frametable_mappings`), we shall do extra
initialization on the part of the frame table covering the static memory
allocation pool, that is, setting the `PGC_state_reserved` flag on those
`struct page_info`s.
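
As a minimal sketch (assuming the single pool recorded at boot and the usual
`PGC_state` mask from `page_alloc.c`; exact macro names may differ):

    /* Sketch: after setup_frametable_mappings(), walk the frames backing
     * the static memory allocation pool and mark them reserved, so that
     * no allocator will ever hand them out. */
    static void __init mark_static_pool_reserved(paddr_t start, paddr_t size)
    {
        mfn_t mfn = maddr_to_mfn(start);
        unsigned long i, nr = size >> PAGE_SHIFT;

        for ( i = 0; i < nr; i++ )
        {
            struct page_info *pg = mfn_to_page(mfn_add(mfn, i));

            pg->count_info &= ~PGC_state;          /* clear current state  */
            pg->count_info |= PGC_state_reserved;  /* new state from above */
        }
    }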

# `allocate_static_mem()`

Usually, when allocating memory for normal domains, we allocate the requested
guest memory from the heap allocator. But for direct-map domains (which are
also static domains), it shall come from the static memory allocation pool.

Here, we introduce a new function `allocate_static_mem()` to implement static
memory allocation.

For each page, it performs the following steps:
1. Check that it is a valid page in the static memory allocation pool.
2. Check that the page is reserved (`PGC_state_reserved`).
3. Do the necessary preparation on `struct page_info`, e.g. following the same
cache-coherency policy as in `alloc_heap_pages`, turning the page state from
`PGC_state_reserved` to `PGC_state_used`, etc.
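
A simplified per-page sketch of the above (ownership/refcount handling and the
full cache-coherency work done by `alloc_heap_pages` are left out;
`assign_static_page()` is an illustrative name, not an existing function):

    /* Sketch: hand one reserved page from the static pool over to a static,
     * direct-map domain, keeping gfn == mfn. */
    static int assign_static_page(struct domain *d, mfn_t mfn)
    {
        struct page_info *pg = mfn_to_page(mfn);

        /* Steps 1 and 2: valid frame, and still reserved in the pool. */
        if ( !mfn_valid(mfn) ||
             (pg->count_info & PGC_state) != PGC_state_reserved )
            return -EINVAL;

        /* Step 3: flip the page state and establish the 1:1 mapping. */
        pg->count_info &= ~PGC_state;
        pg->count_info |= PGC_state_used;      /* proposed "in use" state */

        return guest_physmap_add_page(d, _gfn(mfn_x(mfn)), mfn, 0);
    }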

> >
> > So I'd prefer to split original design into two parts: one is here,
> > that user only wants to allocate one 1:1 direct-map domain, not caring
> > about where the ram will be located into.
>
> While I understand that a user may not care where the direct-map memory
> is allocated. However, I question the usefulness because:
>
> 1) This doesn't work with MPU
> 2) You may end up with provide the guest with many small regions if the
> guest is not created right after boot or rebooting.
>
> Can you outline what would be your use case here?
>

Yes, it does not work with an MPU.
But it is workable when the IOMMU is disabled/missing and the MMU is on. In
some cases, when users are doing DMA in trusted domains, GPA == PA may be just
enough.

And for point 2, I share the same concerns.
So how about this: all direct-map domains, whether they come with user-defined
memory regions or do not care where the memory is located, are allocated from
the static memory allocation pool. You can find more details above.

But this may leave DOM0 as a very `special` direct-map domain, the only one
whose memory is allocated from the heap allocator.

And of course we could limit the number and size of the memory slots: we fail
when the maximum number of memory slots is exceeded, or when an allocated
memory slot is too small.

Which one do you prefer? ;)

>
> >
> >>> +architecture.
> >>> +
> >>> +It aims to describe why and how the guest would be created as
> >>> +direct-map
> >> domain.
> >>> +
> >>> +This document is partly based on Stefano Stabellini's patch serie v1:
> >>> +[direct-map DomUs](
> >>> +https://lists.xenproject.org/archives/html/xen-devel/2020-
> >> 04/msg00707.html).
> >>> +
> >>> +This is a first draft and some questions are still unanswered. When
> >>> +this is the case, the text shall contain XXX.
> >>> +
> >>> +# Introduction
> >>> +
> >>> +## Background
> >>> +
> >>> +Cases where domU needs direct-map memory map:
> >>> +
> >>> +  * IOMMU not present in the system.
> >>> +  * IOMMU disabled, since it doesn't cover a specific device.
> >>
> >> If the device is not covered by the IOMMU, then why would you want to
> >> disable the IOMMUs for the rest of the system?
> >>
> >
> > This is a mixed scenario. We pass some devices to VM with SMMU, and we
> > pass other devices to VM without SMMU. We could not guarantee guest
> > DMA security.
>
> Not really, you can guarantee DMA security if devices not protected by an
> IOMMU are assigned to *trusted* domains.
>
> >
> > So users may want to disable the SMMU, at least, they can gain some
> > performance improvement from SMMU disabled.
>
> That's an understandable argument. Yet, I think this only works if you trust
> *all* your domains. So a user may still want to keep IOMMU on when
> assigning devices (as long as they are protected by an IOMMU) to a non-
> trusted domain.
>
> So I would suggest to rephrase your second bullet point with:
>
> "IOMMU disabled if all the guests are trusted"
>

Thanks
I'll do that. ;)

> >>> +  * IOMMU disabled, since it doesn't have enough bandwidth.
> >>
> >> I am not sure to understand this one.
> >>
> >
> > In some SoC, there would be multiple devices connected to one SMMU.
> >
> > In some extreme situation, multiple devices do DMA concurrency, The
> > translation requests can exceed SMMU's translation capacity. This will
> > cause DMA latency.
>
> Ok. So either the SoC doesn't fit your use-case or the SoC was not correctly
> designed. Therefore, I would call that a workaround :). I would suggest to
> update the design doc with more information.
>
> OOI, is it really necessary to turn off the IOMMU? Would it be possible to
> instead have a few devices by-passing the IOMMU when they are assigned to
> a trusted domain?
>

Yes, of course. It's totally up to users to decide whether to turn the IOMMU
off entirely or to bypass it for selected devices. Sometimes, bypassing is more
desirable. ;)

And if users want to turn it off, direct-map is their way of assigning a
device to a trusted domain.

I should elaborate more on this in my design.
> >
> >>> +  * IOMMU disabled, since it adds too much latency.
> >>
> >> The list above sounds like direct-map memory would be necessary even
> >> without device-passthrough. Can you clarify it?
> >>
> >
> > Okay.
> >
> > SMMU on different SoCs can be implemented differently. For example,
> > some SoC vendor may remove the TLB inside SMMU.
> >
> > In this case, the SMMU will add latency in DMA progress. Users may
> > want to disable the SMMU for some Realtime scenarios.
>
> Thanks for the explanation, however this wasn't my question. I was pointed
> out that your example gave the impression that domaion with not devices
> assigned would also need to be direct-mapped.
>
> Could you confirm whether this is the intended purpose?
>

Sorry, I don't know which part above gave you that impression; maybe I could
refine it to eliminate the ambiguity. ;)

My intended purpose here is still to cover all user scenarios where users may
want to disable the IOMMU but still want to do DMA.

> >
> >>> +
> >>> +*WARNING:
> >>> +Users should be careful that it is not always secure to assign a
> >>> +device without
> >>
> >> s/careful/aware/ I think. Also, it is never secure to assign a device
> >> without IOMMU/SMMU unless you have a replacement.
> >>
> >> I would suggest to reword it something like:
> >>
> >> "When the device is not protected by the IOMMU, the administrator
> >> should make sure that:
> >>      - The device is assigned to a trusted guest
> >>      - You have an additional security mechanism on the platform (e.g
> >> MPU) to protect the memory."
> >>
> >
> > Thanks for the rephrase. (*^▽^*)
> >
> >>> +IOMMU/SMMU protection.
> >>> +Users must be aware of this risk, that guests having access to
> >>> +hardware with DMA capacity must be trusted, or it could use the DMA
> >>> +engine to access any other memory area.
> >>> +Guests could use additional security hardware component like NOC,
> >>> +System MPU to protect the memory.
> >>
> >> What's the NOC?
> >>
> >
> > Network on Chip.
> >
> > Some kind of SoC level firewall that limits the devices' DMA access
> > range or CPU memory access range.
>
> I would suggest to use the longer term or introduce an accronym section.
>

Thx. I will use longer term to rephrase. ;)
> >
> >>> +
> >>> +## Design
> >>> +
> >>> +The implementation may cover following aspects:
> >>> +
> >>> +### Native Address and IRQ numbers for GIC and UART(vPL011)
> >>> +
> >>> +Today, fixed addresses and IRQ numbers are used to map GIC and
> >>> +UART(vPL011) in DomUs. And it may cause potential clash on
> >>> +direct-map
> >> domains.
> >>> +So, Using native addresses and irq numbers for GIC, UART(vPL011),
> >>> +in direct-map domains is necessary.
> >>> +e.g.
> >>
> >> To me e.g. means example. But below this is not an example, this is a
> >> requirement in order to use the vpl011 on system without pl011 UART.
> >>
> >
> > Yes, right.
> > I'll delete e.g. here
> >
> >>> +For the virtual interrupt of vPL011: instead of always using
> >>> +`GUEST_VPL011_SPI`, try to reuse the physical SPI number if possible.
> >>
> >> How would you find the following region for guest using PV drivers;
> >>      - Event channel interrupt
> >>      - Grant table area
> >>
> > Good catch! thousand thx. 😉
> >
> > We've done some investigation on this part. Correct me if I am wrong.
> >
> > Pages like shared_info, grant table, etc, shared between guests and
> > xen, are mapped by ARM guests using the hypercall
> HYPERVISOR_memory_op
> > and always would not be directly mapped, even in dom0.
>
> Any memory shared with Xen (e.g grant table, shared info) should never be
> used for DMA. So I don't think you need to directly mapped them.
>
> In the case of shared memory between guest, I would suggest to look at what
> we do in dom0 for dealing with DMA on "foreign" pages.
>

Thanks for pointing that out. I will dig into it later. ;)

> >
> > So, here, we suggest that maybe we could do some modification in the
> > hypercall to let it not only pass gfn to xen, but also receive already
> > allocated mfns(e.g. grant
> > tables) from xen in direct map situation.
>
> Regardless the modification required in Linux, all the memory hypercalls are
> part of the stable ABI. So any change should be carefully though to avoid
> breaking backward compatibility.
>
> However, I don't think you any to modify any of the hypercalls today (see
> above).
>
> > But If so, it involves modification in linux, o(╥﹏╥)o.
> >
> > And also we incline to keep all guest related pages(including ram,
> > grant tables,
> > etc) in one whole piece.
>
> Do you mean physically contiguous in the host memory? If so, I am not sure
> this can be achieved when letting the Xen chosing the placement and having
> a good success rate.
>
> >
> > Right now, pages like grant tables are allocated separately in Xen
> > heap, so don't stand much chance to be consistent with the guest ram.
>
> I don't quite understand why you need that consistency. In fact, Dom0 is
> direct mapped and we are able to have multiple memory ranges and all the
> shared memory not direct mapped.
>

Yes, right.

For now, Dom0 is direct mapped while all its memory shared with Xen is not
direct mapped, and it is still working well.
And since DMA shall never use this shared memory, it's better for me not to
consider it here.

As for why I brought up this physical contiguity here, it derives from some
MPU cases.

If trying to set up direct-map domains based on an MPU, users may be
constrained to using a very limited number of memory slots for all accessible
memory (RAM, memory shared with Xen, etc.).

Following the current mechanism, it may end up with one slot for the grant
table, one slot for shared_info, one slot for ioreq, etc. And some slots may
hold very little RAM; for example, the ioreq shared pages only take two pages,
which is quite a waste.

But like I said before, it should not be considered here. ;)

> [...]
>
> >
> >>> +following:
> >>> +
> >>> +    fdt set /chosen/domU1 direct-map
> >>> +
> >>> +Users could also use `xl` to create direct-map domains, just use
> >>> +the following config option: `direct-map=true`
> >>> +
> >>> +### direct-map guest memory allocation
> >>> +
> >>> +Func `allocate_memory_direct_map` is based on `allocate_memory_11`,
> >>> +and shall be refined to allocate memory for all direct-map domains,
> >> including DOM0.
> >>> +Roughly speaking, firstly, it tries to allocate arbitrary memory
> >>> +chunk of requested size from domain
> >>> +sub-allocator(`alloc_domheap_pages`). If fail, split the chunk into
> >>> +halves, and re-try, until it succeed or bail out with the smallest chunk
> size.
> >>
> >> If you have a mix of domain with direct-mapped and normal domain, you
> >> may end up to have the memory so small that your direct-mapped
> domain
> >> will have many small banks. This is going to be a major problem if
> >> you are creating the domain at runtime (you suggest xl can be used).
> >>
> >> In addition, some users may want to be able to control the location
> >> of the memory as this reduced the amount of work in the guest (e.g
> >> you don't have to dynamically discover the memory).
> >>
> >> I think it would be best to always require the admin to select the
> >> RAM bank used by a direct mapped domain. Alternatively, we could have
> >> a pool of memory that can only be used for direct mapped domain. This
> >> should limit the fragmentation of the memory.
> >>
> >
> > Yep, in some cases, if we have mix of domains with direct-mapped with
> > user- defined memory regions (scattering loosely)and normal domains at
> > the beginning, it may fail when we later creating the domain at
> > runtime (use xl), no matter direct-map domain or not.
>
> It is not only about creating a new domain. It is also rebooting a running
> domain.
>
> In the reboot case, you may be able to re-allocate the memory but this will
> more by luck than anything else.
>

Yes, this further confirms to me that using a static memory allocation pool is
better, since it could also eliminate the above risk.
> >
> > But, users should be free to allocate where they want, we may not
> > limit a pool of memory to use.
>
> Right, the memory pool is to try to limit the risk when the user decides to let
> Xen chosing where the memory is allocated.
>

I agree.

> Cheers,
>
> --
> Julien Grall

Cheers

--
Penny Zheng
