xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 0/6] direct-map memory map
@ 2021-10-15  3:09 Penny Zheng
  2021-10-15  3:09 ` [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap Penny Zheng
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

Cases where a domU needs a direct-map memory map:
  * The IOMMU is not present in the system.
  * The IOMMU is disabled because it doesn't cover a specific device and
all the guests are trusted. In a mixed scenario, where a few devices are
behind the IOMMU and a few are not, guest DMA security still cannot be
fully guaranteed, so users may want to disable the IOMMU and at least gain
some performance from having it off.
  * The IOMMU is disabled as a workaround when it doesn't have enough
bandwidth. In a few extreme situations, when multiple devices perform DMA
concurrently, the requests may exceed the IOMMU's transmission capacity.
  * The IOMMU is disabled because it adds too much latency to DMA. For
example, some IOMMU hardware may miss in the TLB, which adds latency to
DMA, so users may want to disable it in real-time scenarios.

*WARNING:
Users should be aware that it is not always secure to assign a DMA-capable
device without IOMMU protection.
When the device is not protected by the IOMMU, the administrator should make
sure that:
 1. The device is assigned to a trusted guest.
 2. Users have additional security mechanisms on the platform.

A direct-map memory map is requested for the domain when the IOMMU is
absent from the system or is disabled (status = "disabled" in the device
tree). In that case, the "direct-map" property is added under the
appropriate domain node.

Right now, direct-map is only supported for domains on Static Allocation,
that is, "xen,static-mem" is also required in the domain configuration.

See the related design discussion for more details:
https://lists.xenproject.org/archives/html/xen-devel/2021-05/msg00882.html

The whole design covers Static Allocation and direct-map; this patch
series only covers part of it, namely the direct-map memory map. Other
features will be delivered through separate patch series.

See https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg00855.html
for Domain on Static Allocation.

This patch series is based on
https://lists.xenproject.org/archives/html/xen-devel/2021-10/msg00822.html

---
v2 changes:
- remove the introduction of the internal flag
- Refine is_domain_direct_mapped to check whether the flag
XEN_DOMCTL_CDF_directmap is set
- reword "1:1 direct-map" to just "direct-map"
- split the common code into two helpers, parse_static_mem_prop and
acquire_static_memory_bank, to reduce complexity.
- introduce a new helper allocate_static_memory_11 for allocating static
memory for direct-map guests
- remove the panic since it is fine to assign a non-DMA-capable device when
IOMMU and direct-map are both off
- remove redistributor accessor
- introduce new helper "is_domain_use_host_layout()"
- explain why vpl011 initialization must happen before creating its device
tree node
- error out if the domain is direct-mapped and the IRQ is not found
- harden the code and add a check/comment when the hardware UART region
is smaller than GUEST_PL011_SIZE.

Stefano Stabellini (6):
  xen: introduce XEN_DOMCTL_CDF_directmap
  xen/arm: introduce direct-map for domUs
  xen/arm: if direct-map domain use native addresses for GICv2
  xen/arm: if direct-map domain use native addresses for GICv3
  xen/arm: if direct-map domain use native UART address and IRQ number
    for vPL011
  xen/docs: add a document to explain how to do passthrough without
    IOMMU

 docs/misc/arm/device-tree/booting.txt |  10 +
 docs/misc/arm/passthrough-noiommu.txt |  54 +++++
 xen/arch/arm/domain.c                 |   3 +-
 xen/arch/arm/domain_build.c           | 316 ++++++++++++++++++++------
 xen/arch/arm/vgic-v2.c                |  26 ++-
 xen/arch/arm/vgic-v3.c                |  20 +-
 xen/arch/arm/vgic/vgic-v2.c           |  27 ++-
 xen/arch/arm/vpl011.c                 |  54 ++++-
 xen/common/domain.c                   |   3 +-
 xen/include/asm-arm/domain.h          |   7 +-
 xen/include/asm-arm/new_vgic.h        |  10 +
 xen/include/asm-arm/vgic.h            |  12 +-
 xen/include/asm-arm/vpl011.h          |   2 +
 xen/include/public/domctl.h           |   4 +-
 14 files changed, 449 insertions(+), 99 deletions(-)
 create mode 100644 docs/misc/arm/passthrough-noiommu.txt

-- 
2.25.1




* [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
@ 2021-10-15  3:09 ` Penny Zheng
  2021-10-15  8:46   ` Jan Beulich
  2021-10-15  8:56   ` Julien Grall
  2021-10-15  3:09 ` [PATCH v2 2/6] xen/arm: introduce direct-map for domUs Penny Zheng
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

This commit introduces a new arm-specific flag XEN_DOMCTL_CDF_directmap to
specify that this domain should have its memory directly mapped
(guest physical address == physical address) at domain creation.

Refine is_domain_direct_mapped to check whether the flag
XEN_DOMCTL_CDF_directmap is set.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
CC: andrew.cooper3@citrix.com
CC: jbeulich@suse.com
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Wei Liu <wl@xen.org>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/arm/domain.c        | 3 ++-
 xen/arch/arm/domain_build.c  | 4 +++-
 xen/common/domain.c          | 3 ++-
 xen/include/asm-arm/domain.h | 4 ++--
 xen/include/public/domctl.h  | 4 +++-
 5 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index eef0661beb..8cee1c6349 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -628,7 +628,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
 {
     unsigned int max_vcpus;
     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
-    unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
+    unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu |
+                                   XEN_DOMCTL_CDF_directmap);
 
     if ( (config->flags & ~flags_optional) != flags_required )
     {
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 0167731ab0..37e2d62d47 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3069,8 +3069,10 @@ static int __init construct_dom0(struct domain *d)
 void __init create_dom0(void)
 {
     struct domain *dom0;
+    /* DOM0 has always its memory directly mapped. */
     struct xen_domctl_createdomain dom0_cfg = {
-        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
+        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
+                 XEN_DOMCTL_CDF_directmap,
         .max_evtchn_port = -1,
         .max_grant_frames = gnttab_dom0_frames(),
         .max_maptrack_frames = -1,
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8b53c49d1e..7a6131db74 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -486,7 +486,8 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
          ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
            XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
            XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
-           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu) )
+           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu |
+           XEN_DOMCTL_CDF_directmap) )
     {
         dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
         return -EINVAL;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 14e575288f..fc42c6a310 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -29,8 +29,8 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
-/* The hardware domain has always its memory direct mapped. */
-#define is_domain_direct_mapped(d) is_hardware_domain(d)
+/* Check if domain is direct-map memory map. */
+#define is_domain_direct_mapped(d) (d->options & XEN_DOMCTL_CDF_directmap)
 
 struct vtimer {
     struct vcpu *v;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 238384b5ae..b505a0db51 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
 #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
 /* Should we expose the vPMU to the guest? */
 #define XEN_DOMCTL_CDF_vpmu           (1U << 7)
+/* If this domain has its memory directly mapped? (ARM only) */
+#define XEN_DOMCTL_CDF_directmap      (1U << 8)
 
 /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
-#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_vpmu
+#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_directmap
 
     uint32_t flags;
 
-- 
2.25.1




* [PATCH v2 2/6] xen/arm: introduce direct-map for domUs
  2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
  2021-10-15  3:09 ` [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap Penny Zheng
@ 2021-10-15  3:09 ` Penny Zheng
  2021-10-19 17:19   ` Julien Grall
  2021-10-15  3:09 ` [PATCH v2 3/6] xen/arm: if direct-map domain use native addresses for GICv2 Penny Zheng
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Cases where a domU needs a direct-map memory map:
  * The IOMMU is not present in the system.
  * The IOMMU is disabled because it doesn't cover a specific device and
all the guests are trusted. In a mixed scenario, where a few devices are
behind the IOMMU and a few are not, guest DMA security still cannot be
fully guaranteed, so users may want to disable the IOMMU and at least gain
some performance from having it off.
  * The IOMMU is disabled as a workaround when it doesn't have enough
bandwidth. In a few extreme situations, when multiple devices perform DMA
concurrently, the requests may exceed the IOMMU's transmission capacity.
  * The IOMMU is disabled because it adds too much latency to DMA. For
example, some IOMMU hardware may miss in the TLB, which adds latency to
DMA, so users may want to disable it in real-time scenarios.
  * The guest OS relies on the host memory layout.

*WARNING:
Users should be aware that it is not always secure to assign a DMA-capable
device without IOMMU protection.
The administrator should make sure that:
 1. The device is assigned to a trusted guest.
 2. Users have additional security mechanisms on the platform.

This commit also avoids setting XEN_DOMCTL_CDF_iommu when the IOMMU is
absent/disabled.

For now, direct-map is only supported for domains on Static Allocation, that
is, "xen,static-mem" must also be defined in the domain configuration.

This commit also introduces a new helper allocate_static_memory_11 to
allocate static memory as guest RAM for direct-map domains.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 docs/misc/arm/device-tree/booting.txt |  10 ++
 xen/arch/arm/domain_build.c           | 215 ++++++++++++++++++++------
 2 files changed, 179 insertions(+), 46 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 71895663a4..297f8fa0c8 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -182,6 +182,16 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- direct-map
+
+    Optional for Domain on Static Allocation.
+    An empty property to request the memory of the domain to be
+    direct-map (guest physical address == physical address).
+    WARNING:
+    Users must be aware of this risk, when doing DMA-capable device assignment,
+    direct-map guest must be trusted or have additional security mechanism,
+    otherwise it could use the DMA engine to access any other memory area.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 37e2d62d47..d9118e5bc1 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -492,8 +492,14 @@ static bool __init append_static_memory_to_bank(struct domain *d,
 {
     int res;
     unsigned int nr_pages = PFN_DOWN(size);
-    /* Infer next GFN. */
-    gfn_t sgfn = gaddr_to_gfn(bank->start + bank->size);
+    gfn_t sgfn;
+
+    if ( !is_domain_direct_mapped(d) )
+        /* Infer next GFN when GFN != MFN. */
+        sgfn = gaddr_to_gfn(bank->start + bank->size);
+    else
+        sgfn = gaddr_to_gfn(mfn_to_maddr(smfn));
+
 
     res = guest_physmap_add_pages(d, sgfn, smfn, nr_pages);
     if ( res )
@@ -507,12 +513,69 @@ static bool __init append_static_memory_to_bank(struct domain *d,
     return true;
 }
 
-/* Allocate memory from static memory as RAM for one specific domain d. */
+static int __init acquire_static_memory_bank(struct domain *d,
+                                             const __be32 **cell,
+                                             u32 addr_cells, u32 size_cells,
+                                             paddr_t *pbase, paddr_t *psize)
+{
+    int res = 0;
+
+    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
+    ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize, PAGE_SIZE));
+    if ( PFN_DOWN(*psize) > UINT_MAX )
+    {
+        printk(XENLOG_ERR "%pd: static memory size too large: %#"PRIpaddr,
+               d, *psize);
+        return -EINVAL;
+    }
+
+    res = acquire_domstatic_pages(d, maddr_to_mfn(*pbase), PFN_DOWN(*psize), 0);
+    if ( res )
+        printk(XENLOG_ERR
+               "%pd: failed to acquire static memory: %d.\n", d, res);
+
+    return res;
+}
+
+static int __init parse_static_mem_prop(const struct dt_device_node *node,
+                                        u32 *addr_cells, u32 *size_cells,
+                                        int *length, const __be32 **cell)
+{
+    const struct dt_property *prop;
+
+    prop = dt_find_property(node, "xen,static-mem", NULL);
+    if ( !dt_property_read_u32(node, "#xen,static-mem-address-cells",
+                               addr_cells) )
+    {
+        printk(XENLOG_ERR
+               "failed to read \"#xen,static-mem-address-cells\".\n");
+        return -EINVAL;
+    }
+
+    if ( !dt_property_read_u32(node, "#xen,static-mem-size-cells",
+                               size_cells) )
+    {
+        printk(XENLOG_ERR
+               "failed to read \"#xen,static-mem-size-cells\".\n");
+        return -EINVAL;
+    }
+
+    *cell = (const __be32 *)prop->value;
+    *length = prop->length;
+
+    return 0;
+}
+
+/*
+ * Allocate static memory as RAM for one specific domain d.
+ * The static memory will be mapped in the guest at the usual guest
+ * memory addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
+ * xen/include/public/arch-arm.h.
+ */
 static void __init allocate_static_memory(struct domain *d,
                                           struct kernel_info *kinfo,
                                           const struct dt_device_node *node)
 {
-    const struct dt_property *prop;
     u32 addr_cells, size_cells, reg_cells;
     unsigned int nr_banks, gbank, bank = 0;
     const uint64_t rambase[] = GUEST_RAM_BANK_BASES;
@@ -521,61 +584,31 @@ static void __init allocate_static_memory(struct domain *d,
     u64 tot_size = 0;
     paddr_t pbase, psize, gsize;
     mfn_t smfn;
-    int res;
+    int length;
 
-    prop = dt_find_property(node, "xen,static-mem", NULL);
-    if ( !dt_property_read_u32(node, "#xen,static-mem-address-cells",
-                               &addr_cells) )
-    {
-        printk(XENLOG_ERR
-               "%pd: failed to read \"#xen,static-mem-address-cells\".\n", d);
-        goto fail;
-    }
+    gbank = 0;
+    gsize = ramsize[gbank];
+    kinfo->mem.bank[gbank].start = rambase[gbank];
 
-    if ( !dt_property_read_u32(node, "#xen,static-mem-size-cells",
-                               &size_cells) )
+    if ( parse_static_mem_prop(node, &addr_cells, &size_cells, &length, &cell) )
     {
         printk(XENLOG_ERR
-               "%pd: failed to read \"#xen,static-mem-size-cells\".\n", d);
+               "%pd: failed to parse \"xen,static-mem\" property.\n", d);
         goto fail;
     }
     reg_cells = addr_cells + size_cells;
-
-    /*
-     * The static memory will be mapped in the guest at the usual guest memory
-     * addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
-     * xen/include/public/arch-arm.h.
-     */
-    gbank = 0;
-    gsize = ramsize[gbank];
-    kinfo->mem.bank[gbank].start = rambase[gbank];
-
-    cell = (const __be32 *)prop->value;
-    nr_banks = (prop->length) / (reg_cells * sizeof (u32));
+    nr_banks = length / (reg_cells * sizeof (u32));
 
     for ( ; bank < nr_banks; bank++ )
     {
-        device_tree_get_reg(&cell, addr_cells, size_cells, &pbase, &psize);
-        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
-
-        if ( PFN_DOWN(psize) > UINT_MAX )
-        {
-            printk(XENLOG_ERR "%pd: static memory size too large: %#"PRIpaddr,
-                   d, psize);
+        if ( acquire_static_memory_bank(d, &cell, addr_cells, size_cells,
+                                        &pbase, &psize) )
             goto fail;
-        }
-        smfn = maddr_to_mfn(pbase);
-        res = acquire_domstatic_pages(d, smfn, PFN_DOWN(psize), 0);
-        if ( res )
-        {
-            printk(XENLOG_ERR
-                   "%pd: failed to acquire static memory: %d.\n", d, res);
-            goto fail;
-        }
 
         printk(XENLOG_INFO "%pd: STATIC BANK[%u] %#"PRIpaddr"-%#"PRIpaddr"\n",
                d, bank, pbase, pbase + psize);
 
+        smfn = maddr_to_mfn(pbase);
         while ( 1 )
         {
             /* Map as much as possible the static range to the guest bank */
@@ -638,12 +671,91 @@ static void __init allocate_static_memory(struct domain *d,
  fail:
     panic("Failed to allocate requested static memory for domain %pd.", d);
 }
+
+/*
+ * Allocate static memory as RAM for one specific domain d.
+ * The static memory will be directly mapped in the guest(Guest Physical
+ * Address == Physical Address).
+ */
+static void __init allocate_static_memory_11(struct domain *d,
+                                             struct kernel_info *kinfo,
+                                             const struct dt_device_node *node)
+{
+    u32 addr_cells, size_cells, reg_cells;
+    unsigned int nr_banks, gbank = 0, bank = 0;
+    const __be32 *cell;
+    u64 tot_size = 0;
+    paddr_t pbase, psize;
+    mfn_t smfn;
+    int length;
+
+    if ( parse_static_mem_prop(node, &addr_cells, &size_cells, &length, &cell) )
+    {
+        printk(XENLOG_ERR
+               "%pd: failed to parse \"xen,static-mem\" property.\n", d);
+        goto fail;
+    }
+    reg_cells = addr_cells + size_cells;
+    nr_banks = length / (reg_cells * sizeof (u32));
+
+    for ( ; bank < nr_banks; bank++ )
+    {
+        if ( acquire_static_memory_bank(d, &cell, addr_cells, size_cells,
+                                        &pbase, &psize) )
+            goto fail;
+
+        printk(XENLOG_INFO "%pd: STATIC BANK[%u] %#"PRIpaddr"-%#"PRIpaddr"\n",
+               d, bank, pbase, pbase + psize);
+
+        /*
+         * One guest memory bank is matched with one physical
+         * memory bank.
+         */
+        smfn = maddr_to_mfn(pbase);
+        gbank = bank;
+        kinfo->mem.bank[gbank].start = pbase;
+
+        if ( !append_static_memory_to_bank(d, &kinfo->mem.bank[gbank],
+                                           smfn, psize) )
+            goto fail;
+
+        tot_size += psize;
+    }
+
+    kinfo->mem.nr_banks = ++gbank;
+
+    kinfo->unassigned_mem -= tot_size;
+    /*
+     * The property 'memory' should match the amount of memory given to the
+     * guest.
+     * Currently, it is only possible to either acquire static memory or let
+     * Xen allocate. *Mixing* is not supported.
+     */
+    if ( kinfo->unassigned_mem )
+    {
+        printk(XENLOG_ERR
+               "Size of \"memory\" property doesn't match up with the sum-up of \"xen,static-mem\". Unsupported configuration.\n");
+        goto fail;
+    }
+
+    return;
+
+ fail:
+    panic("Failed to allocate requested static memory for direct-map domain %pd.",
+          d);
+}
 #else
 static void __init allocate_static_memory(struct domain *d,
                                           struct kernel_info *kinfo,
                                           const struct dt_device_node *node)
 {
 }
+
+static void __init allocate_static_memory_11(struct domain *d,
+                                             struct kernel_info *kinfo,
+                                             const struct dt_device_node *node)
+{
+}
 #endif
 
 static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
@@ -2936,7 +3048,12 @@ static int __init construct_domU(struct domain *d,
     if ( !dt_find_property(node, "xen,static-mem", NULL) )
         allocate_memory(d, &kinfo);
     else
-        allocate_static_memory(d, &kinfo, node);
+    {
+        if ( is_domain_direct_mapped(d) )
+            allocate_static_memory_11(d, &kinfo, node);
+        else
+            allocate_static_memory(d, &kinfo, node);
+    }
 
     rc = prepare_dtb_domU(d, &kinfo);
     if ( rc < 0 )
@@ -2976,8 +3093,14 @@ void __init create_domUs(void)
             panic("Missing property 'cpus' for domain %s\n",
                   dt_node_name(node));
 
+        if ( dt_property_read_bool(node, "direct-map") )
+            d_cfg.flags |= XEN_DOMCTL_CDF_directmap;
+
         if ( dt_find_compatible_node(node, NULL, "multiboot,device-tree") )
-            d_cfg.flags |= XEN_DOMCTL_CDF_iommu;
+        {
+            if ( iommu_enabled )
+                d_cfg.flags |= XEN_DOMCTL_CDF_iommu;
+        }
 
         if ( !dt_property_read_u32(node, "nr_spis", &d_cfg.arch.nr_spis) )
         {
-- 
2.25.1




* [PATCH v2 3/6] xen/arm: if direct-map domain use native addresses for GICv2
  2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
  2021-10-15  3:09 ` [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap Penny Zheng
  2021-10-15  3:09 ` [PATCH v2 2/6] xen/arm: introduce direct-map for domUs Penny Zheng
@ 2021-10-15  3:09 ` Penny Zheng
  2021-10-19 17:39   ` Julien Grall
  2021-10-15  3:09 ` [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3 Penny Zheng
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Today we use native addresses to map the GICv2 for Dom0 and fixed
addresses for DomUs.

This patch changes the behavior so that native addresses are used for
all domains that are direct-mapped.

The NEW VGIC uses a different naming scheme, referring to the distributor
base address as vgic_dist_base rather than dbase. So this patch also
introduces the vgic_dist_base/vgic_cpu_base accessors to retrieve the
correct distributor base address/CPU interface base address in either case.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c    | 10 +++++++---
 xen/arch/arm/vgic-v2.c         | 26 +++++++++++++++++++++-----
 xen/arch/arm/vgic/vgic-v2.c    | 27 ++++++++++++++++++++++-----
 xen/include/asm-arm/new_vgic.h | 10 ++++++++++
 xen/include/asm-arm/vgic.h     | 12 +++++++++++-
 5 files changed, 71 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d9118e5bc1..6cd03e4d0f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2207,8 +2207,12 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
     int res = 0;
     __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
     __be32 *cells;
+    struct domain *d = kinfo->d;
+    char buf[38];
 
-    res = fdt_begin_node(fdt, "interrupt-controller@"__stringify(GUEST_GICD_BASE));
+    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
+             vgic_dist_base(&d->arch.vgic));
+    res = fdt_begin_node(fdt, buf);
     if ( res )
         return res;
 
@@ -2230,9 +2234,9 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
 
     cells = &reg[0];
     dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
-                       GUEST_GICD_BASE, GUEST_GICD_SIZE);
+                       vgic_dist_base(&d->arch.vgic), GUEST_GICD_SIZE);
     dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
-                       GUEST_GICC_BASE, GUEST_GICC_SIZE);
+                       vgic_cpu_base(&d->arch.vgic), GUEST_GICC_SIZE);
 
     res = fdt_property(fdt, "reg", reg, sizeof(reg));
     if (res)
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index b2da886adc..a8cf8173d0 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -652,7 +652,7 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
 static int vgic_v2_domain_init(struct domain *d)
 {
     int ret;
-    paddr_t cbase, csize;
+    paddr_t csize;
     paddr_t vbase;
 
     /*
@@ -669,10 +669,26 @@ static int vgic_v2_domain_init(struct domain *d)
          * Note that we assume the size of the CPU interface is always
          * aligned to PAGE_SIZE.
          */
-        cbase = vgic_v2_hw.cbase;
+        d->arch.vgic.cbase = vgic_v2_hw.cbase;
         csize = vgic_v2_hw.csize;
         vbase = vgic_v2_hw.vbase;
     }
+    else if ( is_domain_direct_mapped(d) )
+    {
+        /*
+         * For non-dom0 direct_mapped guests we only map a 8kB CPU
+         * interface but we make sure it is at a location occupied by
+         * the physical GIC in the host device tree.
+         *
+         * We need to add an offset to the virtual CPU interface base
+         * address when the GIC is aliased to get a 8kB contiguous
+         * region.
+         */
+        d->arch.vgic.dbase = vgic_v2_hw.dbase;
+        d->arch.vgic.cbase = vgic_v2_hw.cbase + vgic_v2_hw.aliased_offset;
+        csize = GUEST_GICC_SIZE;
+        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+    }
     else
     {
         d->arch.vgic.dbase = GUEST_GICD_BASE;
@@ -683,7 +699,7 @@ static int vgic_v2_domain_init(struct domain *d)
          * region.
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
-        cbase = GUEST_GICC_BASE;
+        d->arch.vgic.cbase = GUEST_GICC_BASE;
         csize = GUEST_GICC_SIZE;
         vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
@@ -692,8 +708,8 @@ static int vgic_v2_domain_init(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
-                           maddr_to_mfn(vbase));
+    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
+                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
     if ( ret )
         return ret;
 
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index b5ba4ace87..ce1f6e4373 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -258,7 +258,7 @@ void vgic_v2_enable(struct vcpu *vcpu)
 int vgic_v2_map_resources(struct domain *d)
 {
     struct vgic_dist *dist = &d->arch.vgic;
-    paddr_t cbase, csize;
+    paddr_t csize;
     paddr_t vbase;
     int ret;
 
@@ -276,10 +276,27 @@ int vgic_v2_map_resources(struct domain *d)
          * Note that we assume the size of the CPU interface is always
          * aligned to PAGE_SIZE.
          */
-        cbase = gic_v2_hw_data.cbase;
+        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase;
         csize = gic_v2_hw_data.csize;
         vbase = gic_v2_hw_data.vbase;
     }
+    else if ( is_domain_direct_mapped(d) )
+    {
+        d->arch.vgic.vgic_dist_base = gic_v2_hw_data.dbase;
+        /*
+         * For non-dom0 direct_mapped guests we only map a 8kB CPU
+         * interface but we make sure it is at a location occupied by
+         * the physical GIC in the host device tree.
+         *
+         * We need to add an offset to the virtual CPU interface base
+         * address when the GIC is aliased to get a 8kB contiguous
+         * region.
+         */
+        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase +
+                                     gic_v2_hw_data.aliased_offset;
+        csize = GUEST_GICC_SIZE;
+        vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
+    }
     else
     {
         d->arch.vgic.vgic_dist_base = GUEST_GICD_BASE;
@@ -290,7 +307,7 @@ int vgic_v2_map_resources(struct domain *d)
          * region.
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
-        cbase = GUEST_GICC_BASE;
+        d->arch.vgic.vgic_cpu_base = GUEST_GICC_BASE;
         csize = GUEST_GICC_SIZE;
         vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
@@ -308,8 +325,8 @@ int vgic_v2_map_resources(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
-                           maddr_to_mfn(vbase));
+    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.vgic_cpu_base),
+                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
     if ( ret )
     {
         gdprintk(XENLOG_ERR, "Unable to remap VGIC CPU to VCPU\n");
diff --git a/xen/include/asm-arm/new_vgic.h b/xen/include/asm-arm/new_vgic.h
index 97d622bff6..28b0882798 100644
--- a/xen/include/asm-arm/new_vgic.h
+++ b/xen/include/asm-arm/new_vgic.h
@@ -186,6 +186,16 @@ struct vgic_cpu {
     uint32_t num_id_bits;
 };
 
+static inline paddr_t vgic_cpu_base(struct vgic_dist *vgic)
+{
+    return vgic->vgic_cpu_base;
+}
+
+static inline paddr_t vgic_dist_base(struct vgic_dist *vgic)
+{
+    return vgic->vgic_dist_base;
+}
+
 #endif /* __ASM_ARM_NEW_VGIC_H */
 
 /*
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 62c2ae538d..3167bbb68b 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -152,6 +152,7 @@ struct vgic_dist {
     struct pending_irq *pending_irqs;
     /* Base address for guest GIC */
     paddr_t dbase; /* Distributor base address */
+    paddr_t cbase; /* CPU interface base address */
 #ifdef CONFIG_GICV3
     /* GIC V3 addressing */
     /* List of contiguous occupied by the redistributors */
@@ -271,13 +272,22 @@ static inline int REG_RANK_NR(int b, uint32_t n)
 
 enum gic_sgi_mode;
 
+static inline paddr_t vgic_cpu_base(struct vgic_dist *vgic)
+{
+    return vgic->cbase;
+}
+
+static inline paddr_t vgic_dist_base(struct vgic_dist *vgic)
+{
+    return vgic->dbase;
+}
+
 /*
  * Offset of GICD_<FOO><n> with its rank, for GICD_<FOO> size <s> with
  * <b>-bits-per-interrupt.
  */
 #define REG_RANK_INDEX(b, n, s) ((((n) >> s) & ((b)-1)) % 32)
 
-
 extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
 extern void vgic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p);
 extern void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p);
-- 
2.25.1




* [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3
  2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
                   ` (2 preceding siblings ...)
  2021-10-15  3:09 ` [PATCH v2 3/6] xen/arm: if direct-map domain use native addresses for GICv2 Penny Zheng
@ 2021-10-15  3:09 ` Penny Zheng
  2021-10-20 11:11   ` Julien Grall
  2021-10-15  3:09 ` [PATCH v2 5/6] xen/arm: if direct-map domain use native UART address and IRQ number for vPL011 Penny Zheng
  2021-10-15  3:09 ` [PATCH v2 6/6] xen/docs: add a document to explain how to do passthrough without IOMMU Penny Zheng
  5 siblings, 1 reply; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Today we use native addresses to map the GICv3 for Dom0 and fixed
addresses for DomUs.

This patch changes the behavior so that native addresses are used for
all direct-map domains (including Dom0).

Considering that Dom0 may not always be directly mapped in the future,
this patch introduces a new helper "is_domain_use_host_layout()" that
wraps the two checks "is_domain_direct_mapped(d) || is_hardware_domain(d)"
for more flexible usage.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c  | 46 +++++++++++++++++++++++++++---------
 xen/arch/arm/vgic-v3.c       | 20 +++++++++-------
 xen/include/asm-arm/domain.h |  3 +++
 3 files changed, 50 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 6cd03e4d0f..7e0ee07e06 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2255,16 +2255,20 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
     return res;
 }
 
+#ifdef CONFIG_ARM_64
 static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
 {
     void *fdt = kinfo->fdt;
     int res = 0;
-    __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
+    __be32 *reg;
     __be32 *cells;
+    struct domain *d = kinfo->d;
+    char buf[38];
+    unsigned int i, len = 0;
 
-    res = fdt_begin_node(fdt, "interrupt-controller@"__stringify(GUEST_GICV3_GICD_BASE));
-    if ( res )
-        return res;
+    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
+             vgic_dist_base(&d->arch.vgic));
+    res = fdt_begin_node(fdt, buf);
 
     res = fdt_property_cell(fdt, "#address-cells", 0);
     if ( res )
@@ -2282,35 +2286,55 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
     if ( res )
         return res;
 
+    /* reg specifies all re-distributors and Distributor. */
+    len = (GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) *
+          (d->arch.vgic.nr_regions + 1) * sizeof(__be32);
+    reg = xmalloc_bytes(len);
+    if ( reg == NULL )
+        return -ENOMEM;
+
     cells = &reg[0];
     dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
-                       GUEST_GICV3_GICD_BASE, GUEST_GICV3_GICD_SIZE);
-    dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
-                       GUEST_GICV3_GICR0_BASE, GUEST_GICV3_GICR0_SIZE);
+                       vgic_dist_base(&d->arch.vgic), GUEST_GICV3_GICD_SIZE);
 
-    res = fdt_property(fdt, "reg", reg, sizeof(reg));
+    for ( i = 0;
+          i < d->arch.vgic.nr_regions;
+          i++, cells += (GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) )
+    {
+        dt_child_set_range(&cells,
+                           GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                           d->arch.vgic.rdist_regions[i].base,
+                           d->arch.vgic.rdist_regions[i].size);
+    }
+
+    res = fdt_property(fdt, "reg", reg, len);
     if (res)
-        return res;
+        goto out;
 
     res = fdt_property_cell(fdt, "linux,phandle", kinfo->phandle_gic);
     if (res)
-        return res;
+        goto out;
 
     res = fdt_property_cell(fdt, "phandle", kinfo->phandle_gic);
     if (res)
-        return res;
+        goto out;
 
     res = fdt_end_node(fdt);
 
+ out:
+    xfree(reg);
     return res;
 }
+#endif
 
 static int __init make_gic_domU_node(struct kernel_info *kinfo)
 {
     switch ( kinfo->d->arch.vgic.version )
     {
+#ifdef CONFIG_ARM_64
     case GIC_V3:
         return make_gicv3_domU_node(kinfo);
+#endif
     case GIC_V2:
         return make_gicv2_domU_node(kinfo);
     default:
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index cb5a70c42e..70168ca1ac 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1641,14 +1641,15 @@ static inline unsigned int vgic_v3_max_rdist_count(struct domain *d)
      * Normally there is only one GICv3 redistributor region.
      * The GICv3 DT binding provisions for multiple regions, since there are
      * platforms out there which need those (multi-socket systems).
-     * For Dom0 we have to live with the MMIO layout the hardware provides,
-     * so we have to copy the multiple regions - as the first region may not
-     * provide enough space to hold all redistributors we need.
+     * For direct-map domain(including dom0), we have to live with the MMIO
+     * layout the hardware provides, so we have to copy the multiple regions
+     * - as the first region may not provide enough space to hold all
+     * redistributors we need.
      * However DomU get a constructed memory map, so we can go with
      * the architected single redistributor region.
      */
-    return is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions :
-               GUEST_GICV3_RDIST_REGIONS;
+    return is_domain_use_host_layout(d) ? vgic_v3_hw.nr_rdist_regions :
+                                          GUEST_GICV3_RDIST_REGIONS;
 }
 
 static int vgic_v3_domain_init(struct domain *d)
@@ -1670,10 +1671,13 @@ static int vgic_v3_domain_init(struct domain *d)
     radix_tree_init(&d->arch.vgic.pend_lpi_tree);
 
     /*
-     * Domain 0 gets the hardware address.
-     * Guests get the virtual platform layout.
+     * Since we map the whole GICv3 register memory map(64KB) for
+     * all guests(including DOM0), DOM0 and direct-map guests could be
+     * treated the same way here.
+     * direct-map domain (including Dom0) gets the hardware address.
+     * Other guests get the virtual platform layout.
      */
-    if ( is_hardware_domain(d) )
+    if ( is_domain_use_host_layout(d) )
     {
         unsigned int first_cpu = 0;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index fc42c6a310..e8ce3ac8d2 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -32,6 +32,9 @@ enum domain_type {
 /* Check if domain is direct-map memory map. */
 #define is_domain_direct_mapped(d) (d->options & XEN_DOMCTL_CDF_directmap)
 
+#define is_domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
+                                      is_hardware_domain(d))
+
 struct vtimer {
     struct vcpu *v;
     int irq;
-- 
2.25.1




* [PATCH v2 5/6] xen/arm: if direct-map domain use native UART address and IRQ number for vPL011
  2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
                   ` (3 preceding siblings ...)
  2021-10-15  3:09 ` [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3 Penny Zheng
@ 2021-10-15  3:09 ` Penny Zheng
  2021-10-20 11:38   ` Julien Grall
  2021-10-15  3:09 ` [PATCH v2 6/6] xen/docs: add a document to explain how to do passthrough without IOMMU Penny Zheng
  5 siblings, 1 reply; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

We always use a fixed address to map the vPL011 into domains. That address
could be a problem for direct-map domains.

So, for domains that are directly mapped, reuse the address of the
physical UART on the platform to avoid potential clashes.

Do the same for the virtual IRQ number: instead of always using
GUEST_VPL011_SPI, try to reuse the physical SPI number if possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c  | 41 ++++++++++++++++++++++-----
 xen/arch/arm/vpl011.c        | 54 +++++++++++++++++++++++++++++++-----
 xen/include/asm-arm/vpl011.h |  2 ++
 3 files changed, 83 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7e0ee07e06..f3e87709f6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -30,6 +30,7 @@
 
 #include <xen/irq.h>
 #include <xen/grant_table.h>
+#include <xen/serial.h>
 
 static unsigned int __initdata opt_dom0_max_vcpus;
 integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
@@ -2350,8 +2351,11 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
     gic_interrupt_t intr;
     __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
     __be32 *cells;
+    struct domain *d = kinfo->d;
+    char buf[27];
 
-    res = fdt_begin_node(fdt, "sbsa-uart@"__stringify(GUEST_PL011_BASE));
+    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
+    res = fdt_begin_node(fdt, buf);
     if ( res )
         return res;
 
@@ -2361,14 +2365,14 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
 
     cells = &reg[0];
     dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
-                       GUEST_ROOT_SIZE_CELLS, GUEST_PL011_BASE,
+                       GUEST_ROOT_SIZE_CELLS, d->arch.vpl011.base_addr,
                        GUEST_PL011_SIZE);
 
     res = fdt_property(fdt, "reg", reg, sizeof(reg));
     if ( res )
         return res;
 
-    set_interrupt(intr, GUEST_VPL011_SPI, 0xf, DT_IRQ_TYPE_LEVEL_HIGH);
+    set_interrupt(intr, d->arch.vpl011.virq, 0xf, DT_IRQ_TYPE_LEVEL_HIGH);
 
     res = fdt_property(fdt, "interrupts", intr, sizeof (intr));
     if ( res )
@@ -3083,6 +3087,14 @@ static int __init construct_domU(struct domain *d,
             allocate_static_memory(d, &kinfo, node);
     }
 
+    /*
+     * Base address and irq number are needed when creating vpl011 device
+     * tree node in prepare_dtb_domU, so initialization on related variables
+     * shall be dealt firstly.
+     */
+    if ( kinfo.vpl011 )
+        rc = domain_vpl011_init(d, NULL);
+
     rc = prepare_dtb_domU(d, &kinfo);
     if ( rc < 0 )
         return rc;
@@ -3091,9 +3103,6 @@ static int __init construct_domU(struct domain *d,
     if ( rc < 0 )
         return rc;
 
-    if ( kinfo.vpl011 )
-        rc = domain_vpl011_init(d, NULL);
-
     return rc;
 }
 
@@ -3132,15 +3141,33 @@ void __init create_domUs(void)
 
         if ( !dt_property_read_u32(node, "nr_spis", &d_cfg.arch.nr_spis) )
         {
+            unsigned int vpl011_virq = GUEST_VPL011_SPI;
+
             d_cfg.arch.nr_spis = gic_number_lines() - 32;
 
+            /*
+             * The VPL011 virq is GUEST_VPL011_SPI, unless direct-map in
+             * set, in which case we'll try to match the hardware.
+             *
+             * Typically, d->arch.vpl011.virq has the vpl011 irq number
+             * but at this point of the boot sequence it is not
+             * initialized yet.
+             */
+            if ( d_cfg.flags & XEN_DOMCTL_CDF_directmap )
+            {
+                vpl011_virq = serial_irq(SERHND_DTUART);
+                if ( vpl011_virq < 0 )
+                    panic("Error getting IRQ number for this serial port %d\n",
+                          SERHND_DTUART);
+            }
+
             /*
              * vpl011 uses one emulated SPI. If vpl011 is requested, make
              * sure that we allocate enough SPIs for it.
              */
             if ( dt_property_read_bool(node, "vpl011") )
                 d_cfg.arch.nr_spis = MAX(d_cfg.arch.nr_spis,
-                                         GUEST_VPL011_SPI - 32 + 1);
+                                         vpl011_virq - 32 + 1);
         }
 
         /*
diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 895f436cc4..2de59e584d 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -29,6 +29,7 @@
 #include <xen/mm.h>
 #include <xen/sched.h>
 #include <xen/console.h>
+#include <xen/serial.h>
 #include <public/domctl.h>
 #include <public/io/console.h>
 #include <asm/pl011-uart.h>
@@ -71,11 +72,11 @@ static void vpl011_update_interrupt_status(struct domain *d)
      * status bit has been set since the last time.
      */
     if ( uartmis & ~vpl011->shadow_uartmis )
-        vgic_inject_irq(d, NULL, GUEST_VPL011_SPI, true);
+        vgic_inject_irq(d, NULL, vpl011->virq, true);
 
     vpl011->shadow_uartmis = uartmis;
 #else
-    vgic_inject_irq(d, NULL, GUEST_VPL011_SPI, uartmis);
+    vgic_inject_irq(d, NULL, vpl011->virq, uartmis);
 #endif
 }
 
@@ -347,7 +348,8 @@ static int vpl011_mmio_read(struct vcpu *v,
                             void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    uint32_t vpl011_reg = (uint32_t)(info->gpa - GUEST_PL011_BASE);
+    uint32_t vpl011_reg = (uint32_t)(info->gpa -
+                                     v->domain->arch.vpl011.base_addr);
     struct vpl011 *vpl011 = &v->domain->arch.vpl011;
     struct domain *d = v->domain;
     unsigned long flags;
@@ -430,7 +432,8 @@ static int vpl011_mmio_write(struct vcpu *v,
                              void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    uint32_t vpl011_reg = (uint32_t)(info->gpa - GUEST_PL011_BASE);
+    uint32_t vpl011_reg = (uint32_t)(info->gpa -
+                                     v->domain->arch.vpl011.base_addr);
     struct vpl011 *vpl011 = &v->domain->arch.vpl011;
     struct domain *d = v->domain;
     unsigned long flags;
@@ -626,6 +629,43 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
     if ( vpl011->backend.dom.ring_buf )
         return -EINVAL;
 
+    if ( is_domain_direct_mapped(d) )
+    {
+        const struct vuart_info *uart = serial_vuart_info(SERHND_DTUART);
+        int vpl011_irq = serial_irq(SERHND_DTUART);
+
+        /*
+         * Since the PL011 we emulate for the guest requires a 4KB region,
+         * and on some Hardware (IIRC pine64), the UART MMIO region is
+         * less than 4KB, in which case, there may exist multiple devices
+         * within the same 4KB region, here adds the following check to
+         * prevent potential known pitfalls
+         */
+        if ( uart->size < GUEST_PL011_SIZE )
+        {
+            printk(XENLOG_ERR
+                   "The hardware UART region is smaller than GUEST_PL011_SIZE, impossible to emulate on direct-map guests.\n");
+            return -EINVAL;
+        }
+
+        if ( uart != NULL && vpl011_irq > 0 )
+        {
+            vpl011->base_addr = uart->base_addr;
+            vpl011->virq = vpl011_irq;
+        }
+        else
+        {
+            printk(XENLOG_ERR
+                   "Unable to reuse physical UART address and irq for vPL011 on direct-mapped domain.\n");
+            return -EINVAL;
+        }
+    }
+    else
+    {
+        vpl011->base_addr = GUEST_PL011_BASE;
+        vpl011->virq = GUEST_VPL011_SPI;
+    }
+
     /*
      * info is NULL when the backend is in Xen.
      * info is != NULL when the backend is in a domain.
@@ -661,7 +701,7 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
         }
     }
 
-    rc = vgic_reserve_virq(d, GUEST_VPL011_SPI);
+    rc = vgic_reserve_virq(d, vpl011->virq);
     if ( !rc )
     {
         rc = -EINVAL;
@@ -673,12 +713,12 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
     spin_lock_init(&vpl011->lock);
 
     register_mmio_handler(d, &vpl011_mmio_handler,
-                          GUEST_PL011_BASE, GUEST_PL011_SIZE, NULL);
+                          vpl011->base_addr, GUEST_PL011_SIZE, NULL);
 
     return 0;
 
 out2:
-    vgic_free_virq(d, GUEST_VPL011_SPI);
+    vgic_free_virq(d, vpl011->virq);
 
 out1:
     if ( vpl011->backend_in_domain )
diff --git a/xen/include/asm-arm/vpl011.h b/xen/include/asm-arm/vpl011.h
index e6c7ab7381..c09abcd7a9 100644
--- a/xen/include/asm-arm/vpl011.h
+++ b/xen/include/asm-arm/vpl011.h
@@ -53,6 +53,8 @@ struct vpl011 {
     uint32_t    uarticr;        /* Interrupt clear register */
     uint32_t    uartris;        /* Raw interrupt status register */
     uint32_t    shadow_uartmis; /* shadow masked interrupt register */
+    paddr_t     base_addr;
+    unsigned int virq;
     spinlock_t  lock;
     evtchn_port_t evtchn;
 };
-- 
2.25.1




* [PATCH v2 6/6] xen/docs: add a document to explain how to do passthrough without IOMMU
  2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
                   ` (4 preceding siblings ...)
  2021-10-15  3:09 ` [PATCH v2 5/6] xen/arm: if direct-map domain use native UART address and IRQ number for vPL011 Penny Zheng
@ 2021-10-15  3:09 ` Penny Zheng
  5 siblings, 0 replies; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  3:09 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Wei.Chen, Bertrand.Marquis

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Make sure to start with a WARNING about security.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 docs/misc/arm/passthrough-noiommu.txt | 54 +++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 docs/misc/arm/passthrough-noiommu.txt

diff --git a/docs/misc/arm/passthrough-noiommu.txt b/docs/misc/arm/passthrough-noiommu.txt
new file mode 100644
index 0000000000..61aeb8a5cd
--- /dev/null
+++ b/docs/misc/arm/passthrough-noiommu.txt
@@ -0,0 +1,54 @@
+Request Device Assignment without IOMMU support
+===============================================
+
+*WARNING:
+Users should be aware that it is not always secure to assign a device without
+IOMMU protection.
+When the device is not protected by the IOMMU, the administrator should make
+sure that:
+ 1. The device is assigned to a trusted guest.
+ 2. Users have additional security mechanism on the platform.
+
+
+This document assumes that the IOMMU is absent from the system or it is
+disabled (status = "disabled" in device tree).
+
+
+Add xen,force-assign-without-iommu; to the device tree snippet:
+
+ethernet: ethernet@ff0e0000 {
+	compatible = "cdns,zynqmp-gem";
+	xen,path = "/amba/ethernet@ff0e0000";
+	xen,reg = <0x0 0xff0e0000 0x1000 0x0 0xff0e0000>;
+	xen,force-assign-without-iommu;
+};
+
+Request 1:1 memory mapping for the domain on static allocation
+==============================================================
+
+Add a direct-map property under the appropriate /chosen/domU node which
+is also statically allocated with physical memory ranges through
+xen,static-mem property as its guest RAM.
+
+Below is an example on how to specify the 1:1 memory mapping for the domain
+on static allocation in the device-tree:
+
+/ {
+	chosen {
+		domU1 {
+			compatible = "xen,domain";
+			#address-cells = <0x2>;
+			#size-cells = <0x2>;
+			cpus = <2>;
+			memory = <0x0 0x80000>;
+			#xen,static-mem-address-cells = <0x1>;
+			#xen,static-mem-size-cells = <0x1>;
+			xen,static-mem = <0x30000000 0x20000000>;
+			direct-map;
+			...
+		};
+	};
+};
+
+Besides reserving a 512MB region starting at the host physical address
+0x30000000 to DomU1, it also requests 1:1 memory mapping.
-- 
2.25.1




* Re: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  3:09 ` [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap Penny Zheng
@ 2021-10-15  8:46   ` Jan Beulich
  2021-10-15  9:59     ` Penny Zheng
  2021-10-15  8:56   ` Julien Grall
  1 sibling, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2021-10-15  8:46 UTC (permalink / raw)
  To: Penny Zheng; +Cc: Wei.Chen, Bertrand.Marquis, xen-devel, sstabellini, julien

On 15.10.2021 05:09, Penny Zheng wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> This commit introduces a new arm-specific flag XEN_DOMCTL_CDF_directmap to
> specify that this domain should have its memory directly mapped
> (guest physical address == physical address) at domain creation.
> 
> Refine is_domain_direct_mapped to check whether the flag
> XEN_DOMCTL_CDF_directmap is set.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> CC: andrew.cooper3@citrix.com
> CC: jbeulich@suse.com
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@eu.citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: "Roger Pau Monné" <roger.pau@citrix.com>
> ---

Please have here a brief log of changes in the new version, to aid
reviewers.

>  xen/arch/arm/domain.c        | 3 ++-
>  xen/arch/arm/domain_build.c  | 4 +++-
>  xen/common/domain.c          | 3 ++-
>  xen/include/asm-arm/domain.h | 4 ++--
>  xen/include/public/domctl.h  | 4 +++-
>  5 files changed, 12 insertions(+), 6 deletions(-)

You clearly had to re-base over the XEN_DOMCTL_CDF_vpmu addition. I think
just like that change (which I'd expect you to have looked at while doing
the re-base) you also need to at least fiddle with OCaml's
domain_create_flag, to keep the ABI check there happy.

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
>  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
>  /* Should we expose the vPMU to the guest? */
>  #define XEN_DOMCTL_CDF_vpmu           (1U << 7)
> +/* If this domain has its memory directly mapped? (ARM only) */
> +#define XEN_DOMCTL_CDF_directmap      (1U << 8)

The comment doesn't read well; how about "Should domain memory be directly
mapped?" That's if a comment here is really needed in the first place. I
also don't think "Arm only" should be here - this may go stale. What I'm
missing in this regard is rejecting of the flag in x86'es
arch_sanitise_domain_config() (or by whichever other means).
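
Something along these lines (untested, merely to illustrate the idea, and
mirroring how the Arm side sanitises its flags) would do:

    /* Direct-mapped guests aren't supported on x86: refuse the flag. */
    if ( config->flags & XEN_DOMCTL_CDF_directmap )
    {
        dprintk(XENLOG_INFO, "Unsupported CDF flag directmap on x86\n");
        return -EINVAL;
    }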

Jan




* Re: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  3:09 ` [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap Penny Zheng
  2021-10-15  8:46   ` Jan Beulich
@ 2021-10-15  8:56   ` Julien Grall
  2021-10-15 10:03     ` Penny Zheng
  2021-11-09 10:05     ` Penny Zheng
  1 sibling, 2 replies; 18+ messages in thread
From: Julien Grall @ 2021-10-15  8:56 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Wei.Chen, Bertrand.Marquis

Hi Penny,

On 15/10/2021 04:09, Penny Zheng wrote:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 0167731ab0..37e2d62d47 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3069,8 +3069,10 @@ static int __init construct_dom0(struct domain *d)
>   void __init create_dom0(void)
>   {
>       struct domain *dom0;
> +    /* DOM0 has always its memory directly mapped. */
>       struct xen_domctl_createdomain dom0_cfg = {
> -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> +                 XEN_DOMCTL_CDF_directmap,
>           .max_evtchn_port = -1,
>           .max_grant_frames = gnttab_dom0_frames(),
>           .max_maptrack_frames = -1,
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 8b53c49d1e..7a6131db74 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -486,7 +486,8 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>            ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>              XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
>              XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> -           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu) )
> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu |
> +           XEN_DOMCTL_CDF_directmap) )
>       {
>           dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
>           return -EINVAL;
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 14e575288f..fc42c6a310 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -29,8 +29,8 @@ enum domain_type {
>   #define is_64bit_domain(d) (0)
>   #endif
>   
> -/* The hardware domain has always its memory direct mapped. */
> -#define is_domain_direct_mapped(d) is_hardware_domain(d)
> +/* Check if domain is direct-map memory map. */
> +#define is_domain_direct_mapped(d) (d->options & XEN_DOMCTL_CDF_directmap)
>   
>   struct vtimer {
>       struct vcpu *v;
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 238384b5ae..b505a0db51 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
>   #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
>   /* Should we expose the vPMU to the guest? */
>   #define XEN_DOMCTL_CDF_vpmu           (1U << 7)
> +/* If this domain has its memory directly mapped? (ARM only) */
> +#define XEN_DOMCTL_CDF_directmap      (1U << 8)
>   
>   /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
> -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_vpmu
> +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_directmap

In the previous version, this flag was only settable for domains created 
by Xen. Now it is also settable by the toolstack. I don't think the 
toolstack can sensibly use this flag (at least in its current state).

So can you explain why this flag is exposed to the toolstack?

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  8:46   ` Jan Beulich
@ 2021-10-15  9:59     ` Penny Zheng
  2021-10-15 10:08       ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Penny Zheng @ 2021-10-15  9:59 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Wei Chen, Bertrand Marquis, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Friday, October 15, 2021 4:47 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
> 
> On 15.10.2021 05:09, Penny Zheng wrote:
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> >
> > This commit introduces a new arm-specific flag
> > XEN_DOMCTL_CDF_directmap to specify that this domain should have its
> > memory directly mapped (guest physical address == physical address) at
> domain creation.
> >
> > Refine is_domain_direct_mapped to check whether the flag
> > XEN_DOMCTL_CDF_directmap is set.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> > CC: andrew.cooper3@citrix.com
> > CC: jbeulich@suse.com
> > CC: George Dunlap <George.Dunlap@eu.citrix.com>
> > CC: Ian Jackson <ian.jackson@eu.citrix.com>
> > CC: Wei Liu <wl@xen.org>
> > CC: "Roger Pau Monné" <roger.pau@citrix.com>
> > ---
> 
> Please have here a brief log of changes in the new version, to aid reviewers.
> 

Sure.

> >  xen/arch/arm/domain.c        | 3 ++-
> >  xen/arch/arm/domain_build.c  | 4 +++-
> >  xen/common/domain.c          | 3 ++-
> >  xen/include/asm-arm/domain.h | 4 ++--  xen/include/public/domctl.h  |
> > 4 +++-
> >  5 files changed, 12 insertions(+), 6 deletions(-)
> 
> You clearly had to re-base over the XEN_DOMCTL_CDF_vpmu addition. I think
> just like that change (which I'd expect you to have looked at while doing the
> re-base) you also need to at least fiddle with OCaml's domain_create_flag, to
> keep the ABI check there happy.
> 

The patch series is based on the staging branch plus an extra commit,
"Revert "xen/domctl: Introduce XEN_DOMCTL_CDF_vpci flag"", which has
already been posted to the community for review:
https://lists.xenproject.org/archives/html/xen-devel/2021-10/msg00822.html

> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
> >  #define XEN_DOMCTL_CDF_nested_virt    (1U <<
> _XEN_DOMCTL_CDF_nested_virt)
> >  /* Should we expose the vPMU to the guest? */
> >  #define XEN_DOMCTL_CDF_vpmu           (1U << 7)
> > +/* If this domain has its memory directly mapped? (ARM only) */
> > +#define XEN_DOMCTL_CDF_directmap      (1U << 8)
> 
> The comment doesn't read well; how about "Should domain memory be
> directly mapped?" That's if a comment here is really needed in the first place. I
> also don't think "Arm only" should be here - this may go stale. What I'm
> missing in this regard is rejecting of the flag in x86'es
> arch_sanitise_domain_config() (or by whichever other means).
> 
> Jan


^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  8:56   ` Julien Grall
@ 2021-10-15 10:03     ` Penny Zheng
  2021-11-09 10:05     ` Penny Zheng
  1 sibling, 0 replies; 18+ messages in thread
From: Penny Zheng @ 2021-10-15 10:03 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Wei Chen, Bertrand Marquis

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Friday, October 15, 2021 4:57 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>
> Subject: Re: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
> 
> Hi Penny,
> 
> On 15/10/2021 04:09, Penny Zheng wrote:
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 0167731ab0..37e2d62d47 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -3069,8 +3069,10 @@ static int __init construct_dom0(struct domain *d)
> >   void __init create_dom0(void)
> >   {
> >       struct domain *dom0;
> > +    /* DOM0 has always its memory directly mapped. */
> >       struct xen_domctl_createdomain dom0_cfg = {
> > -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> > +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> > +                 XEN_DOMCTL_CDF_directmap,
> >           .max_evtchn_port = -1,
> >           .max_grant_frames = gnttab_dom0_frames(),
> >           .max_maptrack_frames = -1,
> > diff --git a/xen/common/domain.c b/xen/common/domain.c index
> > 8b53c49d1e..7a6131db74 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -486,7 +486,8 @@ static int sanitise_domain_config(struct
> xen_domctl_createdomain *config)
> >            ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >              XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
> >              XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> > -           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu) )
> > +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu |
> > +           XEN_DOMCTL_CDF_directmap) )
> >       {
> >           dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
> >           return -EINVAL;
> > diff --git a/xen/include/asm-arm/domain.h
> > b/xen/include/asm-arm/domain.h index 14e575288f..fc42c6a310 100644
> > --- a/xen/include/asm-arm/domain.h
> > +++ b/xen/include/asm-arm/domain.h
> > @@ -29,8 +29,8 @@ enum domain_type {
> >   #define is_64bit_domain(d) (0)
> >   #endif
> >
> > -/* The hardware domain has always its memory direct mapped. */
> > -#define is_domain_direct_mapped(d) is_hardware_domain(d)
> > +/* Check if domain is direct-map memory map. */ #define
> > +is_domain_direct_mapped(d) (d->options & XEN_DOMCTL_CDF_directmap)
> >
> >   struct vtimer {
> >       struct vcpu *v;
> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 238384b5ae..b505a0db51 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
> >   #define XEN_DOMCTL_CDF_nested_virt    (1U <<
> _XEN_DOMCTL_CDF_nested_virt)
> >   /* Should we expose the vPMU to the guest? */
> >   #define XEN_DOMCTL_CDF_vpmu           (1U << 7)
> > +/* If this domain has its memory directly mapped? (ARM only) */
> > +#define XEN_DOMCTL_CDF_directmap      (1U << 8)
> >
> >   /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
> > -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_vpmu
> > +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_directmap
> 
> In the previous version, this flag was only settable for domains created by Xen.
> Now it is also settable by the toolstack. I don't think the toolstack can
> sensibly use this flag (at least in its current state).
> 
> So can you explain why this flag is exposed to the toolstack?

Oh, I misunderstood the previous discussion on internal usage a bit, sorry.

And I will move it back to xen/include/xen/domain.h.

> 
> Cheers,
> 
> --
> Julien Grall

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  9:59     ` Penny Zheng
@ 2021-10-15 10:08       ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2021-10-15 10:08 UTC (permalink / raw)
  To: Penny Zheng; +Cc: Wei Chen, Bertrand Marquis, xen-devel, sstabellini, julien

On 15.10.2021 11:59, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Friday, October 15, 2021 4:47 PM
>>
>> On 15.10.2021 05:09, Penny Zheng wrote:
>>>  xen/arch/arm/domain.c        | 3 ++-
>>>  xen/arch/arm/domain_build.c  | 4 +++-
>>>  xen/common/domain.c          | 3 ++-
>>>  xen/include/asm-arm/domain.h | 4 ++--  xen/include/public/domctl.h  |
>>> 4 +++-
>>>  5 files changed, 12 insertions(+), 6 deletions(-)
>>
>> You clearly had to re-base over the XEN_DOMCTL_CDF_vpmu addition. I think
>> just like that change (which I'd expect you to have looked at while doing the
>> re-base) you also need to at least fiddle with OCaml's domain_create_flag, to
>> keep the ABI check there happy.
>>
> 
> The patch serie is based on the staging branch with an extra commit "
> Revert "xen/domctl: Introduce XEN_DOMCTL_CDF_vpci flag", which
> Is already been pushed to community for review.(
> https://lists.xenproject.org/archives/html/xen-devel/2021-10/msg00822.html)

I was assuming so, and hence I didn't also refer you to the vPCI
patch, which you could also have used for reference. I guess my
primary problem here is that I can't see what you're trying to tell
me.

Jan



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/6] xen/arm: introduce direct-map for domUs
  2021-10-15  3:09 ` [PATCH v2 2/6] xen/arm: introduce direct-map for domUs Penny Zheng
@ 2021-10-19 17:19   ` Julien Grall
  0 siblings, 0 replies; 18+ messages in thread
From: Julien Grall @ 2021-10-19 17:19 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Wei.Chen, Bertrand.Marquis

Hi Penny,

On 15/10/2021 04:09, Penny Zheng wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Cases where domU needs direct-map memory map:
>    * IOMMU not present in the system.
>    * IOMMU disabled if it doesn't cover a specific device and all the guests
> are trusted. Thinking a mixed scenario, where a few devices with IOMMU and
> a few without, then guest DMA security still could not be totally guaranteed.
> So users may want to disable the IOMMU, to at least gain some performance
> improvement from IOMMU disabled.
>    * IOMMU disabled as a workaround when it doesn't have enough bandwidth.
> To be specific, in a few extreme situation, when multiple devices do DMA
> concurrently, these requests may exceed IOMMU's transmission capacity.
>    * IOMMU disabled when it adds too much latency on DMA. For example,
> TLB may be missing in some IOMMU hardware, which may bring latency in DMA
> progress, so users may want to disable it in some realtime scenario.
>    * Guest OS relies on the host memory layout.
> 
> *WARNING:
> Users should be aware that it is not always secure to assign a DMA-capable
> device without IOMMU protection.
> The administrator should make sure that:
>   1. The device is assigned to a trusted guest.
>   2. Users have additional security mechanism on the platform.
> 
> This commit also avoids setting XEN_DOMCTL_CDF_iommu when the IOMMU is
> absent/disabled.

This change looks unrelated to the rest of this patch. Can you split it 
into a separate patch?

> 
> For now, direct-map is only supported for domains on Static Allocation, that is,
> "xen,static-mem" must also be defined in the domain configuration.
> 
> This commit also introduces a new helper allocate_static_memory_11 to allocate
> static memory as guest RAM for direct-map domain.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   docs/misc/arm/device-tree/booting.txt |  10 ++
>   xen/arch/arm/domain_build.c           | 215 ++++++++++++++++++++------
>   2 files changed, 179 insertions(+), 46 deletions(-)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 71895663a4..297f8fa0c8 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -182,6 +182,16 @@ with the following properties:
>       Both #address-cells and #size-cells need to be specified because
>       both sub-nodes (described shortly) have reg properties.
>   
> +- direct-map
> +
> +    Optional for Domain on Static Allocation.

I read this as meaning it is mandatory when the domain doesn't use static 
allocation. So how about:

"Only available when statically allocated memory is used for the domain".

> +    An empty property to request the memory of the domain to be
> +    direct-map (guest physical address == physical address).
> +    WARNING:
> +    Users must be aware of this risk, when doing DMA-capable device assignment,
> +    direct-map guest must be trusted or have additional security mechanism,
> +    otherwise it could use the DMA engine to access any other memory area.

I find this warning a bit odd to read because the property will not turn 
off the IOMMU for the domain. That only happens if the admin decides to 
do it.

From my understanding, it is not going to be possible to assign a 
device without IOMMU unless the property 
"xen,force-assign-without-iommu" is set. This is not a mechanism 
specific to direct-map, so I would drop this WARNING here.

> +
>   Under the "xen,domain" compatible node, one or more sub-nodes are present
>   for the DomU kernel and ramdisk.
>   
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 37e2d62d47..d9118e5bc1 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -492,8 +492,14 @@ static bool __init append_static_memory_to_bank(struct domain *d,
>   {
>       int res;
>       unsigned int nr_pages = PFN_DOWN(size);
> -    /* Infer next GFN. */
> -    gfn_t sgfn = gaddr_to_gfn(bank->start + bank->size);
> +    gfn_t sgfn;
> +
> +    if ( !is_domain_direct_mapped(d) )
> +        /* Infer next GFN when GFN != MFN. */
> +        sgfn = gaddr_to_gfn(bank->start + bank->size);
> +    else
> +        sgfn = gaddr_to_gfn(mfn_to_maddr(smfn));
> +
>   
>       res = guest_physmap_add_pages(d, sgfn, smfn, nr_pages);
>       if ( res )
> @@ -507,12 +513,69 @@ static bool __init append_static_memory_to_bank(struct domain *d,
>       return true;
>   }
>   
> -/* Allocate memory from static memory as RAM for one specific domain d. */
> +static int __init acquire_static_memory_bank(struct domain *d,
> +                                             const __be32 **cell,
> +                                             u32 addr_cells, u32 size_cells,
> +                                             paddr_t *pbase, paddr_t *psize)
> +{
> +    int res = 0;
> +
> +    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
> +    ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize, PAGE_SIZE));
> +    if ( PFN_DOWN(*psize) > UINT_MAX )
> +    {
> +        printk(XENLOG_ERR "%pd: static memory size too large: %#"PRIpaddr,
> +               d, *psize);
> +        return -EINVAL;
> +    }
> +
> +    res = acquire_domstatic_pages(d, maddr_to_mfn(*pbase), PFN_DOWN(*psize), 0);
> +    if ( res )
> +        printk(XENLOG_ERR
> +               "%pd: failed to acquire static memory: %d.\n", d, res);
> +
> +    return res;
> +}
> +
> +static int __init parse_static_mem_prop(const struct dt_device_node *node,
> +                                        u32 *addr_cells, u32 *size_cells,
> +                                        int *length, const __be32 **cell)
> +{
> +    const struct dt_property *prop;
> +
> +    prop = dt_find_property(node, "xen,static-mem", NULL);
> +    if ( !dt_property_read_u32(node, "#xen,static-mem-address-cells",
> +                               addr_cells) )
> +    {
> +        printk(XENLOG_ERR
> +               "failed to read \"#xen,static-mem-address-cells\".\n");
> +        return -EINVAL;
> +    }
> +
> +    if ( !dt_property_read_u32(node, "#xen,static-mem-size-cells",
> +                               size_cells) )
> +    {
> +        printk(XENLOG_ERR
> +               "failed to read \"#xen,static-mem-size-cells\".\n");
> +        return -EINVAL;
> +    }
> +
> +    *cell = (const __be32 *)prop->value;
> +    *length = prop->length;
> +
> +    return 0;
> +}
> +
> +/*
> + * Allocate static memory as RAM for one specific domain d.
> + * The static memory will be mapped in the guest at the usual guest
> + * memory addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
> + * xen/include/public/arch-arm.h.
> + */
>   static void __init allocate_static_memory(struct domain *d,
>                                             struct kernel_info *kinfo,
>                                             const struct dt_device_node *node)
>   {
> -    const struct dt_property *prop;
>       u32 addr_cells, size_cells, reg_cells;
>       unsigned int nr_banks, gbank, bank = 0;
>       const uint64_t rambase[] = GUEST_RAM_BANK_BASES;
> @@ -521,61 +584,31 @@ static void __init allocate_static_memory(struct domain *d,
>       u64 tot_size = 0;
>       paddr_t pbase, psize, gsize;
>       mfn_t smfn;
> -    int res;
> +    int length;
>   
> -    prop = dt_find_property(node, "xen,static-mem", NULL);
> -    if ( !dt_property_read_u32(node, "#xen,static-mem-address-cells",
> -                               &addr_cells) )
> -    {
> -        printk(XENLOG_ERR
> -               "%pd: failed to read \"#xen,static-mem-address-cells\".\n", d);
> -        goto fail;
> -    }
> +    gbank = 0;
> +    gsize = ramsize[gbank];
> +    kinfo->mem.bank[gbank].start = rambase[gbank];

I am not sure I understand why these 3 assignments were moved earlier. I 
am not against the change, but this looks somewhat unrelated to this 
patch itself. The same...

>   
> -    if ( !dt_property_read_u32(node, "#xen,static-mem-size-cells",
> -                               &size_cells) )
> +    if ( parse_static_mem_prop(node, &addr_cells, &size_cells, &length, &cell) )
>       {
>           printk(XENLOG_ERR
> -               "%pd: failed to read \"#xen,static-mem-size-cells\".\n", d);
> +               "%pd: failed to parse \"xen,static-mem\" property.\n", d);
>           goto fail;
>       }
>       reg_cells = addr_cells + size_cells;
> -
> -    /*
> -     * The static memory will be mapped in the guest at the usual guest memory
> -     * addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
> -     * xen/include/public/arch-arm.h.
> -     */

... applies to this comment.


But I would suggest moving the split-off of the code into a separate 
patch. This will make it easier to review new changes vs refactoring.

> -    gbank = 0;
> -    gsize = ramsize[gbank];
> -    kinfo->mem.bank[gbank].start = rambase[gbank];
> -
> -    cell = (const __be32 *)prop->value;
> -    nr_banks = (prop->length) / (reg_cells * sizeof (u32));
> +    nr_banks = length / (reg_cells * sizeof (u32));
>   
>       for ( ; bank < nr_banks; bank++ )
>       {
> -        device_tree_get_reg(&cell, addr_cells, size_cells, &pbase, &psize);
> -        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
> -
> -        if ( PFN_DOWN(psize) > UINT_MAX )
> -        {
> -            printk(XENLOG_ERR "%pd: static memory size too large: %#"PRIpaddr,
> -                   d, psize);
> +        if ( acquire_static_memory_bank(d, &cell, addr_cells, size_cells,
> +                                        &pbase, &psize) )
>               goto fail;
> -        }
> -        smfn = maddr_to_mfn(pbase);
> -        res = acquire_domstatic_pages(d, smfn, PFN_DOWN(psize), 0);
> -        if ( res )
> -        {
> -            printk(XENLOG_ERR
> -                   "%pd: failed to acquire static memory: %d.\n", d, res);
> -            goto fail;
> -        }
>   
>           printk(XENLOG_INFO "%pd: STATIC BANK[%u] %#"PRIpaddr"-%#"PRIpaddr"\n",
>                  d, bank, pbase, pbase + psize);
>   
> +        smfn = maddr_to_mfn(pbase);
>           while ( 1 )
>           {
>               /* Map as much as possible the static range to the guest bank */
> @@ -638,12 +671,91 @@ static void __init allocate_static_memory(struct domain *d,
>    fail:
>       panic("Failed to allocate requested static memory for domain %pd.", d);
>   }
> +
> +/*
> + * Allocate static memory as RAM for one specific domain d.
> + * The static memory will be directly mapped in the guest(Guest Physical
> + * Address == Physical Address).
> + */
> +static void __init allocate_static_memory_11(struct domain *d,
> +                                             struct kernel_info *kinfo,
> +                                             const struct dt_device_node *node)
> +{
> +    u32 addr_cells, size_cells, reg_cells;
> +    unsigned int nr_banks, gbank = 0, bank = 0;
> +    const __be32 *cell;
> +    u64 tot_size = 0;
> +    paddr_t pbase, psize;
> +    mfn_t smfn;
> +    int length;
> +
> +    if ( parse_static_mem_prop(node, &addr_cells, &size_cells, &length, &cell) )
> +    {
> +        printk(XENLOG_ERR
> +               "%pd: failed to parse \"xen,static-mem\" property.\n", d);
> +        goto fail;
> +    }
> +    reg_cells = addr_cells + size_cells;
> +    nr_banks = length / (reg_cells * sizeof (u32));
> +
> +    for ( ; bank < nr_banks; bank++ )
> +    {
> +        if ( acquire_static_memory_bank(d, &cell, addr_cells, size_cells,
> +                                        &pbase, &psize) )
> +            goto fail;
> +
> +        printk(XENLOG_INFO "%pd: STATIC BANK[%u] %#"PRIpaddr"-%#"PRIpaddr"\n",
> +               d, bank, pbase, pbase + psize);
> +
> +        /*
> +         * One guest memory bank is matched with one physical
> +         * memory bank.
> +         */
> +        smfn = maddr_to_mfn(pbase);
> +        gbank = bank;

gbank looks pointless as this is equivalent to bank. Can you drop it?

> +        kinfo->mem.bank[gbank].start = pbase;
AFAICT, nr_banks is computed from the DT property. However, our internal 
array has a fixed size. So you want to check we have enough space in the 
array to store all the banks.
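
Something along these lines (untested sketch, using the existing
NR_MEM_BANKS bound of the internal array):

    if ( nr_banks > NR_MEM_BANKS )
    {
        printk(XENLOG_ERR "%pd: too many \"xen,static-mem\" banks\n", d);
        goto fail;
    }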

> +
> +        if ( !append_static_memory_to_bank(d, &kinfo->mem.bank[gbank],
> +                                           smfn, psize) )
> +            goto fail;
> +
> +        tot_size += psize;
> +    }
> +
> +    kinfo->mem.nr_banks = ++gbank;

At the end of the loop, ++gbank == nr_banks. So how about using nr_banks 
directly?
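
I.e. (assuming the bank accounting above stays as it is):

    kinfo->mem.nr_banks = nr_banks;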

> +
> +    kinfo->unassigned_mem -= tot_size;
> +    /*
> +     * The property 'memory' should match the amount of memory given to the
> +     * guest.
> +     * Currently, it is only possible to either acquire static memory or let
> +     * Xen allocate. *Mixing* is not supported.
> +     */
> +    if ( kinfo->unassigned_mem )
> +    {
> +        printk(XENLOG_ERR
> +               "Size of \"memory\" property doesn't match up with the sum-up of \"xen,static-mem\". Unsupported configuration.\n");
> +        goto fail;
> +    }
> +
> +    return;
> +
> + fail:
> +    panic("Failed to allocate requested static memory for direct-map domain %pd.",
> +          d);
> +}
>   #else
>   static void __init allocate_static_memory(struct domain *d,
>                                             struct kernel_info *kinfo,
>                                             const struct dt_device_node *node)
>   {
>   }
> +
> +static void __init allocate_static_memory_11(struct domain *d,
> +                                             struct kernel_info *kinfo,
> +                                             const struct dt_device_node *node)
> +{

This helper is not meant to be reachable when built with 
!CONFIG_STATIC_MEMORY. So I would add ASSERT_UNREACHABLE to catch any 
misuse.
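
I.e. the !CONFIG_STATIC_MEMORY stub would become something like:

    static void __init allocate_static_memory_11(struct domain *d,
                                                 struct kernel_info *kinfo,
                                                 const struct dt_device_node *node)
    {
        ASSERT_UNREACHABLE();
    }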

> +}
>   #endif
>   
>   static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
> @@ -2936,7 +3048,12 @@ static int __init construct_domU(struct domain *d,
>       if ( !dt_find_property(node, "xen,static-mem", NULL) )
>           allocate_memory(d, &kinfo);
>       else
> -        allocate_static_memory(d, &kinfo, node);
> +    {
> +        if ( is_domain_direct_mapped(d) )
> +            allocate_static_memory_11(d, &kinfo, node);
> +        else
> +            allocate_static_memory(d, &kinfo, node);
> +    }
>   
>       rc = prepare_dtb_domU(d, &kinfo);
>       if ( rc < 0 )
> @@ -2976,8 +3093,14 @@ void __init create_domUs(void)
>               panic("Missing property 'cpus' for domain %s\n",
>                     dt_node_name(node));
>   
> +        if ( dt_property_read_bool(node, "direct-map") )
> +            d_cfg.flags |= XEN_DOMCTL_CDF_directmap;
> +
>           if ( dt_find_compatible_node(node, NULL, "multiboot,device-tree") )
> -            d_cfg.flags |= XEN_DOMCTL_CDF_iommu;
> +        {
> +            if ( iommu_enabled )
> +                d_cfg.flags |= XEN_DOMCTL_CDF_iommu;
> +        }

As I wrote above, this change looks unrelated to this patch. Regardless 
of that, the nested if (and indentation) can be avoided if you write:

if ( dt_find_compatible(....) && iommu_enabled )
  ...

>   
>           if ( !dt_property_read_u32(node, "nr_spis", &d_cfg.arch.nr_spis) )
>           {
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 3/6] xen/arm: if direct-map domain use native addresses for GICv2
  2021-10-15  3:09 ` [PATCH v2 3/6] xen/arm: if direct-map domain use native addresses for GICv2 Penny Zheng
@ 2021-10-19 17:39   ` Julien Grall
  0 siblings, 0 replies; 18+ messages in thread
From: Julien Grall @ 2021-10-19 17:39 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Wei.Chen, Bertrand.Marquis

Hi Penny,

On 15/10/2021 04:09, Penny Zheng wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Today we use native addresses to map the GICv2 for Dom0 and fixed
> addresses for DomUs.
> 
> This patch changes the behavior so that native addresses are used for
> all domains that are direct-map memory map.

I would simply say: "all domains that are direct-mapped" or "all domains 
that have the memory direct-mapped".

> 
> The new vGIC uses a different naming scheme, referring to the distributor base
> address as vgic_dist_base rather than dbase. So this patch also introduces
> vgic_dist_base/vgic_cpu_base accessors to access the correct distributor base
> address/CPU interface base address in the various scenarios.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c    | 10 +++++++---
>   xen/arch/arm/vgic-v2.c         | 26 +++++++++++++++++++++-----
>   xen/arch/arm/vgic/vgic-v2.c    | 27 ++++++++++++++++++++++-----
>   xen/include/asm-arm/new_vgic.h | 10 ++++++++++
>   xen/include/asm-arm/vgic.h     | 12 +++++++++++-
>   5 files changed, 71 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index d9118e5bc1..6cd03e4d0f 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2207,8 +2207,12 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
>       int res = 0;
>       __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
>       __be32 *cells;
> +    struct domain *d = kinfo->d;
> +    char buf[38];

Please add a comment explaining how the 38 was found. For an example, 
please look at the comment on top of buf in make_memory_node().
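
For instance (sketch, assuming the node name is printed with a 64-bit
hexadecimal address as above):

    /*
     * Sized to hold "interrupt-controller@" (21 characters), a 64-bit
     * address printed in hexadecimal (up to 16 characters) and the
     * terminating NUL: 21 + 16 + 1 = 38.
     */
    char buf[38];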

>   
> -    res = fdt_begin_node(fdt, "interrupt-controller@"__stringify(GUEST_GICD_BASE));
> +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> +             vgic_dist_base(&d->arch.vgic));
> +    res = fdt_begin_node(fdt, buf);
>       if ( res )
>           return res;
>   
> @@ -2230,9 +2234,9 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
>   
>       cells = &reg[0];
>       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> -                       GUEST_GICD_BASE, GUEST_GICD_SIZE);
> +                       vgic_dist_base(&d->arch.vgic), GUEST_GICD_SIZE);
>       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> -                       GUEST_GICC_BASE, GUEST_GICC_SIZE);
> +                       vgic_cpu_base(&d->arch.vgic), GUEST_GICC_SIZE);
>   
>       res = fdt_property(fdt, "reg", reg, sizeof(reg));
>       if (res)
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index b2da886adc..a8cf8173d0 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -652,7 +652,7 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
>   static int vgic_v2_domain_init(struct domain *d)
>   {
>       int ret;
> -    paddr_t cbase, csize;
> +    paddr_t csize;
>       paddr_t vbase;
>   
>       /*
> @@ -669,10 +669,26 @@ static int vgic_v2_domain_init(struct domain *d)

The comment on top of the "if ( is_hardware_domain(d) )" needs to be 
updated because, after this patch, guests will not always use the 
virtual platform layout.

>            * Note that we assume the size of the CPU interface is always
>            * aligned to PAGE_SIZE.
>            */
> -        cbase = vgic_v2_hw.cbase;
> +        d->arch.vgic.cbase = vgic_v2_hw.cbase;
>           csize = vgic_v2_hw.csize;
>           vbase = vgic_v2_hw.vbase;
>       }
> +    else if ( is_domain_direct_mapped(d) )
> +    {
> +        /*
> +         * For non-dom0 direct_mapped guests we only map a 8kB CPU
Technically, is_hardware_domain() is not equivalent to dom0 (although, 
it is so far the case on Arm). So please avoid using "dom0" and instead 
use "hardware domain".

Also, the wording used is confusing because, from my understanding, dom0 
is not a guest (although it is a domain).

So how about:

"For all the direct-mapped domain other than the hardware domain, ...".

> +         * interface but we make sure it is at a location occupied by
> +         * the physical GIC in the host device tree.
> +         *
> +         * We need to add an offset to the virtual CPU interface base
> +         * address when the GIC is aliased to get a 8kB contiguous
> +         * region.
> +         */
> +        d->arch.vgic.dbase = vgic_v2_hw.dbase;
> +        d->arch.vgic.cbase = vgic_v2_hw.cbase + vgic_v2_hw.aliased_offset;
Couldn't we simply map the CPU interface at the GPA vgic_v2_hw.cbase?

> +        csize = GUEST_GICC_SIZE;
> +        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> +    }
>       else
>       {
>           d->arch.vgic.dbase = GUEST_GICD_BASE;
> @@ -683,7 +699,7 @@ static int vgic_v2_domain_init(struct domain *d)
>            * region.
>            */
>           BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
> -        cbase = GUEST_GICC_BASE;
> +        d->arch.vgic.cbase = GUEST_GICC_BASE;
>           csize = GUEST_GICC_SIZE;
>           vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
>       }
> @@ -692,8 +708,8 @@ static int vgic_v2_domain_init(struct domain *d)
>        * Map the gic virtual cpu interface in the gic cpu interface
>        * region of the guest.
>        */
> -    ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
> -                           maddr_to_mfn(vbase));
> +    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> +                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
>       if ( ret )
>           return ret;
>   
> diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
> index b5ba4ace87..ce1f6e4373 100644
> --- a/xen/arch/arm/vgic/vgic-v2.c
> +++ b/xen/arch/arm/vgic/vgic-v2.c
> @@ -258,7 +258,7 @@ void vgic_v2_enable(struct vcpu *vcpu)
>   int vgic_v2_map_resources(struct domain *d)
>   {
>       struct vgic_dist *dist = &d->arch.vgic;
> -    paddr_t cbase, csize;
> +    paddr_t csize;
>       paddr_t vbase;
>       int ret;
>   
> @@ -276,10 +276,27 @@ int vgic_v2_map_resources(struct domain *d)

Same remark about the comment on top of the 'if ( is_hardware_domain(d) )'.

>            * Note that we assume the size of the CPU interface is always
>            * aligned to PAGE_SIZE.
>            */
> -        cbase = gic_v2_hw_data.cbase;
> +        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase;
>           csize = gic_v2_hw_data.csize;
>           vbase = gic_v2_hw_data.vbase;
>       }
> +    else if ( is_domain_direct_mapped(d) )
> +    {
> +        d->arch.vgic.vgic_dist_base = gic_v2_hw_data.dbase;
> +        /*
> +         * For non-dom0 direct_mapped guests we only map a 8kB CPU

Same remark here for the non-dom0 part.

> +         * interface but we make sure it is at a location occupied by
> +         * the physical GIC in the host device tree.
> +         *
> +         * We need to add an offset to the virtual CPU interface base
> +         * address when the GIC is aliased to get a 8kB contiguous
> +         * region.
> +         */
> +        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase +
> +                                     gic_v2_hw_data.aliased_offset;

Same question here.

> +        csize = GUEST_GICC_SIZE;
> +        vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
> +    }
>       else
>       {
>           d->arch.vgic.vgic_dist_base = GUEST_GICD_BASE;
> @@ -290,7 +307,7 @@ int vgic_v2_map_resources(struct domain *d)
>            * region.
>            */
>           BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
> -        cbase = GUEST_GICC_BASE;
> +        d->arch.vgic.vgic_cpu_base = GUEST_GICC_BASE;
>           csize = GUEST_GICC_SIZE;
>           vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
>       }
> @@ -308,8 +325,8 @@ int vgic_v2_map_resources(struct domain *d)
>        * Map the gic virtual cpu interface in the gic cpu interface
>        * region of the guest.
>        */
> -    ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
> -                           maddr_to_mfn(vbase));
> +    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.vgic_cpu_base),
> +                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
>       if ( ret )
>       {
>           gdprintk(XENLOG_ERR, "Unable to remap VGIC CPU to VCPU\n");
> diff --git a/xen/include/asm-arm/new_vgic.h b/xen/include/asm-arm/new_vgic.h
> index 97d622bff6..28b0882798 100644
> --- a/xen/include/asm-arm/new_vgic.h
> +++ b/xen/include/asm-arm/new_vgic.h
> @@ -186,6 +186,16 @@ struct vgic_cpu {
>       uint32_t num_id_bits;
>   };
>   
> +static inline paddr_t vgic_cpu_base(struct vgic_dist *vgic)
> +{
> +    return vgic->vgic_cpu_base;
> +}
> +
> +static inline paddr_t vgic_dist_base(struct vgic_dist *vgic)
> +{
> +    return vgic->vgic_dist_base;
> +}
> +
>   #endif /* __ASM_ARM_NEW_VGIC_H */
>   
>   /*
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index 62c2ae538d..3167bbb68b 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -152,6 +152,7 @@ struct vgic_dist {
>       struct pending_irq *pending_irqs;
>       /* Base address for guest GIC */
>       paddr_t dbase; /* Distributor base address */
> +    paddr_t cbase; /* CPU interface base address */
>   #ifdef CONFIG_GICV3
>       /* GIC V3 addressing */
>       /* List of contiguous occupied by the redistributors */
> @@ -271,13 +272,22 @@ static inline int REG_RANK_NR(int b, uint32_t n)
>   
>   enum gic_sgi_mode;
>   
> +static inline paddr_t vgic_cpu_base(struct vgic_dist *vgic)
> +{
> +    return vgic->cbase;
> +}
> +
> +static inline paddr_t vgic_dist_base(struct vgic_dist *vgic)
> +{
> +    return vgic->dbase;
> +}
> +
>   /*
>    * Offset of GICD_<FOO><n> with its rank, for GICD_<FOO> size <s> with
>    * <b>-bits-per-interrupt.
>    */
>   #define REG_RANK_INDEX(b, n, s) ((((n) >> s) & ((b)-1)) % 32)
>   
> -

This looks like a spurious change.

>   extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
>   extern void vgic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p);
>   extern void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p);
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3
  2021-10-15  3:09 ` [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3 Penny Zheng
@ 2021-10-20 11:11   ` Julien Grall
  2021-10-21  5:45     ` Penny Zheng
  0 siblings, 1 reply; 18+ messages in thread
From: Julien Grall @ 2021-10-20 11:11 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Wei.Chen, Bertrand.Marquis

Hi Penny,

On 15/10/2021 04:09, Penny Zheng wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Today we use native addresses to map the GICv3 for Dom0 and fixed
> addresses for DomUs.
> 
> This patch changes the behavior so that native addresses are used for
> all direct-map domains(including Dom0).
> 
> Considering that dom0 may not always be directly mapped in the future,
> this patch introduces a new helper "is_domain_use_host_layout()" that
> wraps both two check "is_domain_direct_mapped(d) || is_hardware_domain(d)"
> for more flexible useage.

Typo: s/useage/usage/

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c  | 46 +++++++++++++++++++++++++++---------
>   xen/arch/arm/vgic-v3.c       | 20 +++++++++-------
>   xen/include/asm-arm/domain.h |  3 +++
>   3 files changed, 50 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 6cd03e4d0f..7e0ee07e06 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2255,16 +2255,20 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
>       return res;
>   }
>   
> +#ifdef CONFIG_ARM_64

The code below is specific to GICv3 (and not to 64-bit). So this should 
be gated with CONFIG_GICV3.

Personally, I would have gated the code in a separate patch. But I am 
fine if this is added in this patch, so long as this is mentioned in the 
commit message.

>   static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
>   {
>       void *fdt = kinfo->fdt;
>       int res = 0;
> -    __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
> +    __be32 *reg;
>       __be32 *cells;
> +    struct domain *d = kinfo->d;

AFAICT, 'd' is not going to be modified. So please add const.

> +    char buf[38];

Please explain how 38 was found. For an example, see the comment on top 
of 'buf' in make_memory_node().

> +    unsigned int i, len = 0;
>   
> -    res = fdt_begin_node(fdt, "interrupt-controller@"__stringify(GUEST_GICV3_GICD_BASE));
> -    if ( res )
> -        return res;
> +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> +             vgic_dist_base(&d->arch.vgic));
> +    res = fdt_begin_node(fdt, buf);

You set res but it gets overwritten just below. Did you mean to check 
the return value?
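
I.e. presumably:

    res = fdt_begin_node(fdt, buf);
    if ( res )
        return res;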

>   
>       res = fdt_property_cell(fdt, "#address-cells", 0);
>       if ( res )
> @@ -2282,35 +2286,55 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
>       if ( res )
>           return res;
>   
> +    /* reg specifies all re-distributors and Distributor. */
> +    len = (GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) *
> +          (d->arch.vgic.nr_regions + 1) * sizeof(__be32);
> +    reg = xmalloc_bytes(len);
> +    if ( reg == NULL )
> +        return -ENOMEM;
> +
>       cells = &reg[0];
>       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> -                       GUEST_GICV3_GICD_BASE, GUEST_GICV3_GICD_SIZE);
> -    dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> -                       GUEST_GICV3_GICR0_BASE, GUEST_GICV3_GICR0_SIZE);
> +                       vgic_dist_base(&d->arch.vgic), GUEST_GICV3_GICD_SIZE);
>   
> -    res = fdt_property(fdt, "reg", reg, sizeof(reg));
> +    for ( i = 0;
> +          i < d->arch.vgic.nr_regions;
> +          i++, cells += (GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) )

dt_child_set_range() already updates cells to point to the next one. So 
this increment needs to be dropped.

I was expecting this to be caught during testing. So did you test this 
series with GICv3? How about the new vGIC?
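
For clarity, the loop would then simply be (untested):

    for ( i = 0; i < d->arch.vgic.nr_regions; i++ )
        dt_child_set_range(&cells,
                           GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
                           d->arch.vgic.rdist_regions[i].base,
                           d->arch.vgic.rdist_regions[i].size);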

> +    {
> +        dt_child_set_range(&cells,
> +                           GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> +                           d->arch.vgic.rdist_regions[i].base,
> +                           d->arch.vgic.rdist_regions[i].size);
> +    }
> +
> +    res = fdt_property(fdt, "reg", reg, len);

I would suggest freeing 'reg' right here as you don't need it 
afterwards. This will also remove the need for the 'out' label and the 
'goto's.
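
I.e. something like (untested):

    res = fdt_property(fdt, "reg", reg, len);
    xfree(reg);
    if ( res )
        return res;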

>       if (res)
> -        return res;
> +        goto out;
>   
>       res = fdt_property_cell(fdt, "linux,phandle", kinfo->phandle_gic);
>       if (res)
> -        return res;
> +        goto out;
>   
>       res = fdt_property_cell(fdt, "phandle", kinfo->phandle_gic);
>       if (res)
> -        return res;
> +        goto out;
>   
>       res = fdt_end_node(fdt);
>   
> + out:
> +    xfree(reg);
>       return res;
>   }
> +#endif
>   
>   static int __init make_gic_domU_node(struct kernel_info *kinfo)
>   {
>       switch ( kinfo->d->arch.vgic.version )
>       {
> +#ifdef CONFIG_ARM_64
>       case GIC_V3:
>           return make_gicv3_domU_node(kinfo);
> +#endif
>       case GIC_V2:
>           return make_gicv2_domU_node(kinfo);
>       default:
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index cb5a70c42e..70168ca1ac 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1641,14 +1641,15 @@ static inline unsigned int vgic_v3_max_rdist_count(struct domain *d)
>        * Normally there is only one GICv3 redistributor region.
>        * The GICv3 DT binding provisions for multiple regions, since there are
>        * platforms out there which need those (multi-socket systems).
> -     * For Dom0 we have to live with the MMIO layout the hardware provides,
> -     * so we have to copy the multiple regions - as the first region may not
> -     * provide enough space to hold all redistributors we need.
> +     * For direct-map domain(including dom0), we have to live with the MMIO

I find the sentence somewhat misleading because it implies that dom0 is 
a direct-map domain (this is true today, but it is merely an 
implementation detail).

However, with the change below, I think it would be better to write "For 
domains using the host memory layout..." and not go into details about 
what sort of domain we refer to here. Instead...

> +     * layout the hardware provides, so we have to copy the multiple regions
> +     * - as the first region may not provide enough space to hold all
> +     * redistributors we need.
>        * However DomU get a constructed memory map, so we can go with
>        * the architected single redistributor region.
>        */
> -    return is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions :
> -               GUEST_GICV3_RDIST_REGIONS;
> +    return is_domain_use_host_layout(d) ? vgic_v3_hw.nr_rdist_regions :
> +                                          GUEST_GICV3_RDIST_REGIONS;
>   }
>   
>   static int vgic_v3_domain_init(struct domain *d)
> @@ -1670,10 +1671,13 @@ static int vgic_v3_domain_init(struct domain *d)
>       radix_tree_init(&d->arch.vgic.pend_lpi_tree);
>   
>       /*
> -     * Domain 0 gets the hardware address.
> -     * Guests get the virtual platform layout.
> +     * Since we map the whole GICv3 register memory map(64KB) for
> +     * all guests(including DOM0), DOM0 and direct-map guests could be
> +     * treated the same way here.
> +     * direct-map domain (including Dom0) gets the hardware address.
> +     * Other guests get the virtual platform layout.
>        */
> -    if ( is_hardware_domain(d) )
> +    if ( is_domain_use_host_layout(d) )
>       {
>           unsigned int first_cpu = 0;
>   
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index fc42c6a310..e8ce3ac8d2 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -32,6 +32,9 @@ enum domain_type {
>   /* Check if domain is direct-map memory map. */
>   #define is_domain_direct_mapped(d) (d->options & XEN_DOMCTL_CDF_directmap)
>   
> +#define is_domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> +                                      is_hardware_domain(d))

... the details should go in a comment on top of this helper. The 
advantage is there is less chance for the comment to rot.

Regarding the name, I would either drop the 'is_' or s/use/using/. My 
preference goes for the former as it makes the name slightly shorter.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 5/6] xen/arm: if direct-map domain use native UART address and IRQ number for vPL011
  2021-10-15  3:09 ` [PATCH v2 5/6] xen/arm: if direct-map domain use native UART address and IRQ number for vPL011 Penny Zheng
@ 2021-10-20 11:38   ` Julien Grall
  0 siblings, 0 replies; 18+ messages in thread
From: Julien Grall @ 2021-10-20 11:38 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Wei.Chen, Bertrand.Marquis

Hi Penny,

On 15/10/2021 04:09, Penny Zheng wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> We always use a fixed address to map the vPL011 to domains. The address
> could be a problem for direct-map domains.
> 
> So, for domains that are directly mapped, reuse the address of the
> physical UART on the platform to avoid potential clashes.
> 
> Do the same for the virtual IRQ number: instead of always using
> GUEST_VPL011_SPI, try to reuse the physical SPI number if possible.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c  | 41 ++++++++++++++++++++++-----
>   xen/arch/arm/vpl011.c        | 54 +++++++++++++++++++++++++++++++-----
>   xen/include/asm-arm/vpl011.h |  2 ++
>   3 files changed, 83 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7e0ee07e06..f3e87709f6 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -30,6 +30,7 @@
>   
>   #include <xen/irq.h>
>   #include <xen/grant_table.h>
> +#include <xen/serial.h>
>   
>   static unsigned int __initdata opt_dom0_max_vcpus;
>   integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
> @@ -2350,8 +2351,11 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
>       gic_interrupt_t intr;
>       __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
>       __be32 *cells;
> +    struct domain *d = kinfo->d;
> +    char buf[27];

Please explain how the '27' was found.
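
For instance (sketch, assuming a 64-bit hexadecimal address as above):

    /*
     * Sized to hold "sbsa-uart@" (10 characters), a 64-bit address
     * printed in hexadecimal (up to 16 characters) and the terminating
     * NUL: 10 + 16 + 1 = 27.
     */
    char buf[27];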

>   
> -    res = fdt_begin_node(fdt, "sbsa-uart@"__stringify(GUEST_PL011_BASE));
> +    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
> +    res = fdt_begin_node(fdt, buf);
>       if ( res )
>           return res;
>   
> @@ -2361,14 +2365,14 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
>   
>       cells = &reg[0];
>       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> -                       GUEST_ROOT_SIZE_CELLS, GUEST_PL011_BASE,
> +                       GUEST_ROOT_SIZE_CELLS, d->arch.vpl011.base_addr,
>                          GUEST_PL011_SIZE);
>   
>       res = fdt_property(fdt, "reg", reg, sizeof(reg));
>       if ( res )
>           return res;
>   
> -    set_interrupt(intr, GUEST_VPL011_SPI, 0xf, DT_IRQ_TYPE_LEVEL_HIGH);
> +    set_interrupt(intr, d->arch.vpl011.virq, 0xf, DT_IRQ_TYPE_LEVEL_HIGH);
>   
>       res = fdt_property(fdt, "interrupts", intr, sizeof (intr));
>       if ( res )
> @@ -3083,6 +3087,14 @@ static int __init construct_domU(struct domain *d,
>               allocate_static_memory(d, &kinfo, node);
>       }
>   
> +    /*
> +     * Base address and irq number are needed when creating vpl011 device
> +     * tree node in prepare_dtb_domU, so initialization on related variables
> +     * shall be dealt firstly.
> +     */
> +    if ( kinfo.vpl011 )
> +        rc = domain_vpl011_init(d, NULL);
> +
>       rc = prepare_dtb_domU(d, &kinfo);
>       if ( rc < 0 )
>           return rc;
> @@ -3091,9 +3103,6 @@ static int __init construct_domU(struct domain *d,
>       if ( rc < 0 )
>           return rc;
>   
> -    if ( kinfo.vpl011 )
> -        rc = domain_vpl011_init(d, NULL);
> -
>       return rc;
>   }
>   
> @@ -3132,15 +3141,33 @@ void __init create_domUs(void)
>   
>           if ( !dt_property_read_u32(node, "nr_spis", &d_cfg.arch.nr_spis) )
>           {
> +            unsigned int vpl011_virq = GUEST_VPL011_SPI;
> +
>               d_cfg.arch.nr_spis = gic_number_lines() - 32;
>   
> +            /*
> +             * The VPL011 virq is GUEST_VPL011_SPI, unless direct-map in
> +             * set, in which case we'll try to match the hardware.

I think you want to drop "try" to avoid implying there is a fallback if 
we can't match it.

> +             *
> +             * Typically, d->arch.vpl011.virq has the vpl011 irq number
> +             * but at this point of the boot sequence it is not
> +             * initialized yet.
The last paragraph is difficult to read. We are building the domain (not 
booting!) and at this point we are still trying to figure out its 
initial configuration. IOW, the domain is not even created.

I think it would be better to say the domain is not built. So we need to 
open-code the logic to find the vIRQ. We should also probably mention 
that the logic should match the one in domain_vpl011_init().

> +             */
> +            if ( d_cfg.flags & XEN_DOMCTL_CDF_directmap )
> +            {
> +                vpl011_virq = serial_irq(SERHND_DTUART);
> +                if ( vpl011_virq < 0 )
> +                    panic("Error getting IRQ number for this serial port %d\n",
> +                          SERHND_DTUART);
> +            }
> +
>               /*
>                * vpl011 uses one emulated SPI. If vpl011 is requested, make
>                * sure that we allocate enough SPIs for it.
>                */
>               if ( dt_property_read_bool(node, "vpl011") )
>                   d_cfg.arch.nr_spis = MAX(d_cfg.arch.nr_spis,
> -                                         GUEST_VPL011_SPI - 32 + 1);
> +                                         vpl011_virq - 32 + 1);
>           }
>   
>           /*
> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> index 895f436cc4..2de59e584d 100644
> --- a/xen/arch/arm/vpl011.c
> +++ b/xen/arch/arm/vpl011.c
> @@ -29,6 +29,7 @@
>   #include <xen/mm.h>
>   #include <xen/sched.h>
>   #include <xen/console.h>
> +#include <xen/serial.h>
>   #include <public/domctl.h>
>   #include <public/io/console.h>
>   #include <asm/pl011-uart.h>
> @@ -71,11 +72,11 @@ static void vpl011_update_interrupt_status(struct domain *d)
>        * status bit has been set since the last time.
>        */
>       if ( uartmis & ~vpl011->shadow_uartmis )
> -        vgic_inject_irq(d, NULL, GUEST_VPL011_SPI, true);
> +        vgic_inject_irq(d, NULL, vpl011->virq, true);
>   
>       vpl011->shadow_uartmis = uartmis;
>   #else
> -    vgic_inject_irq(d, NULL, GUEST_VPL011_SPI, uartmis);
> +    vgic_inject_irq(d, NULL, vpl011->virq, uartmis);
>   #endif
>   }
>   
> @@ -347,7 +348,8 @@ static int vpl011_mmio_read(struct vcpu *v,
>                               void *priv)
>   {
>       struct hsr_dabt dabt = info->dabt;
> -    uint32_t vpl011_reg = (uint32_t)(info->gpa - GUEST_PL011_BASE);
> +    uint32_t vpl011_reg = (uint32_t)(info->gpa -
> +                                     v->domain->arch.vpl011.base_addr);
>       struct vpl011 *vpl011 = &v->domain->arch.vpl011;
>       struct domain *d = v->domain;
>       unsigned long flags;
> @@ -430,7 +432,8 @@ static int vpl011_mmio_write(struct vcpu *v,
>                                void *priv)
>   {
>       struct hsr_dabt dabt = info->dabt;
> -    uint32_t vpl011_reg = (uint32_t)(info->gpa - GUEST_PL011_BASE);
> +    uint32_t vpl011_reg = (uint32_t)(info->gpa -
> +                                     v->domain->arch.vpl011.base_addr);
>       struct vpl011 *vpl011 = &v->domain->arch.vpl011;
>       struct domain *d = v->domain;
>       unsigned long flags;
> @@ -626,6 +629,43 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
>       if ( vpl011->backend.dom.ring_buf )
>           return -EINVAL;
>   

I would suggest to add a comment similar to the first paragraph you 
added in create_domUs(). In addition to that, it would be good to 
mention the code should stay in sync with the one in create_domUs().
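
Something along these lines (exact wording up to you):

    /*
     * The base address and vIRQ selected below for the direct-mapped
     * case must stay in sync with the open-coded logic in
     * create_domUs(), which needs them to size nr_spis before the
     * domain is created.
     */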

> +    if ( is_domain_direct_mapped(d) )
> +    {
> +        const struct vuart_info *uart = serial_vuart_info(SERHND_DTUART);
> +        int vpl011_irq = serial_irq(SERHND_DTUART);
> +
> +        /*
> +         * Since the PL011 we emulate for the guest requires a 4KB region,
> +         * and on some Hardware (IIRC pine64), the UART MMIO region is
In fact, this is an issue that seems to be common on (old?) sunxi SoCs. I 
would suggest replacing the (...) with (e.g. on some sunxi SoCs).

> +         * less than 4KB, in which case, there may exist multiple devices
> +         * within the same 4KB region, here adds the following check to
> +         * prevent potential known pitfalls
> +         */
> +        if ( uart->size < GUEST_PL011_SIZE )
> +        {
> +            printk(XENLOG_ERR
> +                   "The hardware UART region is smaller than GUEST_PL011_SIZE, impossible to emulate on direct-map guests.\n");

s/on/it for/ I believe.

But I am not sure it is worth mentioning that we can't emulate it. This 
is sort of implied by the fact this returns an error. So how about:

vpl011: Can't re-use the Xen UART MMIO region as it is too small.

Note that I mention "Xen" rather than "HW" because there might be 
multiple UARTs on a platform. So I feel "Xen" makes it clearer which 
one we are referring to.

> +            return -EINVAL;
> +        }
> +
> +        if ( uart != NULL && vpl011_irq > 0 )

You are checking that uart is non-NULL here, but you dereference it just 
before. So shouldn't you move the check earlier?
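
E.g. (untested):

    if ( uart == NULL || vpl011_irq <= 0 )
    {
        printk(XENLOG_ERR
               "vpl011: Unable to re-use the Xen UART information\n");
        return -EINVAL;
    }

    if ( uart->size < GUEST_PL011_SIZE )
        ...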

> +        {
> +            vpl011->base_addr = uart->base_addr;
> +            vpl011->virq = vpl011_irq;
> +        }
> +        else
> +        {
> +            printk(XENLOG_ERR
> +                   "Unable to reuse physical UART address and irq for vPL011 on direct-mapped domain.\n");

In this case how about:

vpl011: Unable to re-use the Xen UART information

> +            return -EINVAL;
> +        }
> +    }
> +    else
> +    {
> +        vpl011->base_addr = GUEST_PL011_BASE;
> +        vpl011->virq = GUEST_VPL011_SPI;
> +    }
> +
>       /*
>        * info is NULL when the backend is in Xen.
>        * info is != NULL when the backend is in a domain.
> @@ -661,7 +701,7 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
>           }
>       }
>   
> -    rc = vgic_reserve_virq(d, GUEST_VPL011_SPI);
> +    rc = vgic_reserve_virq(d, vpl011->virq);
>       if ( !rc )
>       {
>           rc = -EINVAL;
> @@ -673,12 +713,12 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
>       spin_lock_init(&vpl011->lock);
>   
>       register_mmio_handler(d, &vpl011_mmio_handler,
> -                          GUEST_PL011_BASE, GUEST_PL011_SIZE, NULL);
> +                          vpl011->base_addr, GUEST_PL011_SIZE, NULL);
>   
>       return 0;
>   
>   out2:
> -    vgic_free_virq(d, GUEST_VPL011_SPI);
> +    vgic_free_virq(d, vpl011->virq);
>   
>   out1:
>       if ( vpl011->backend_in_domain )
> diff --git a/xen/include/asm-arm/vpl011.h b/xen/include/asm-arm/vpl011.h
> index e6c7ab7381..c09abcd7a9 100644
> --- a/xen/include/asm-arm/vpl011.h
> +++ b/xen/include/asm-arm/vpl011.h
> @@ -53,6 +53,8 @@ struct vpl011 {
>       uint32_t    uarticr;        /* Interrupt clear register */
>       uint32_t    uartris;        /* Raw interrupt status register */
>       uint32_t    shadow_uartmis; /* shadow masked interrupt register */
> +    paddr_t     base_addr;
> +    unsigned int virq;
>       spinlock_t  lock;
>       evtchn_port_t evtchn;
>   };
> 

Cheers,

-- 
Julien Grall



* RE: [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3
  2021-10-20 11:11   ` Julien Grall
@ 2021-10-21  5:45     ` Penny Zheng
  0 siblings, 0 replies; 18+ messages in thread
From: Penny Zheng @ 2021-10-21  5:45 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Wei Chen, Bertrand Marquis

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Wednesday, October 20, 2021 7:11 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>
> Subject: Re: [PATCH v2 4/6] xen/arm: if direct-map domain use native
> addresses for GICv3
> 
> Hi Penny,
> 
> On 15/10/2021 04:09, Penny Zheng wrote:
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> >
> > Today we use native addresses to map the GICv3 for Dom0 and fixed
> > addresses for DomUs.
> >
> > This patch changes the behavior so that native addresses are used for
> > all direct-map domains(including Dom0).
> >
> > Considering that dom0 may not always be directly mapped in the future,
> > this patch introduces a new helper "is_domain_use_host_layout()" that
> > wraps both two check "is_domain_direct_mapped(d) ||
> is_hardware_domain(d)"
> > for more flexible useage.
> 
> Typo: s/useage/usage/
> 
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/domain_build.c  | 46 +++++++++++++++++++++++++++---------
> >   xen/arch/arm/vgic-v3.c       | 20 +++++++++-------
> >   xen/include/asm-arm/domain.h |  3 +++
> >   3 files changed, 50 insertions(+), 19 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 6cd03e4d0f..7e0ee07e06 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -2255,16 +2255,20 @@ static int __init make_gicv2_domU_node(struct
> kernel_info *kinfo)
> >       return res;
> >   }
> >
> > +#ifdef CONFIG_ARM_64
> 
> The code below is specific to the GICv3 (and not 64-bit). So this should be
> gated with CONFIG_GICV3.
> 
> Personally, I would have gated the code in a separate patch. But I am fine if
> this is added in this patch so long as this is mentioned in the commit message.
> 
> >   static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
> >   {
> >       void *fdt = kinfo->fdt;
> >       int res = 0;
> > -    __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) *
> 2];
> > +    __be32 *reg;
> >       __be32 *cells;
> > +    struct domain *d = kinfo->d;
> 
> AFAICT, 'd' is not going to be modified. So please add const.
> 
> > +    char buf[38];
> 
> Please explain how 38 was found. For an example, see the comment on top of
> 'buf' in make_memory_node().
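> 
> For instance (just a sketch of how the 38 could be justified, following
> the make_memory_node() style):
> 
>     /*
>      * buf is sized to hold "interrupt-controller@" (21 characters),
>      * a 64-bit address printed as up to 16 hexadecimal digits, and
>      * the NUL terminator: 21 + 16 + 1 = 38.
>      */
>     char buf[38];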
> 
> > +    unsigned int i, len = 0;
> >
> > -    res = fdt_begin_node(fdt, "interrupt-
> controller@"__stringify(GUEST_GICV3_GICD_BASE));
> > -    if ( res )
> > -        return res;
> > +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> > +             vgic_dist_base(&d->arch.vgic));
> > +    res = fdt_begin_node(fdt, buf);
> 
> You set res but it gets overwritten just below. Were you meant to check the
> return value?
> 
> >
> >       res = fdt_property_cell(fdt, "#address-cells", 0);
> >       if ( res )
> > @@ -2282,35 +2286,55 @@ static int __init make_gicv3_domU_node(struct
> kernel_info *kinfo)
> >       if ( res )
> >           return res;
> >
> > +    /* reg specifies all re-distributors and Distributor. */
> > +    len = (GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) *
> > +          (d->arch.vgic.nr_regions + 1) * sizeof(__be32);
> > +    reg = xmalloc_bytes(len);
> > +    if ( reg == NULL )
> > +        return -ENOMEM;
> > +
> >       cells = &reg[0];
> >       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> GUEST_ROOT_SIZE_CELLS,
> > -                       GUEST_GICV3_GICD_BASE, GUEST_GICV3_GICD_SIZE);
> > -    dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> GUEST_ROOT_SIZE_CELLS,
> > -                       GUEST_GICV3_GICR0_BASE, GUEST_GICV3_GICR0_SIZE);
> > +                       vgic_dist_base(&d->arch.vgic),
> > + GUEST_GICV3_GICD_SIZE);
> >
> > -    res = fdt_property(fdt, "reg", reg, sizeof(reg));
> > +    for ( i = 0;
> > +          i < d->arch.vgic.nr_regions;
> > +          i++, cells += (GUEST_ROOT_ADDRESS_CELLS +
> > + GUEST_ROOT_SIZE_CELLS) )
> 
> dt_child_set_range() will already update cells to the next ones. So this needs
> to be dropped.
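> 
> i.e. the loop could simply become (sketch):
> 
>     for ( i = 0; i < d->arch.vgic.nr_regions; i++ )
>         dt_child_set_range(&cells,
>                            GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
>                            d->arch.vgic.rdist_regions[i].base,
>                            d->arch.vgic.rdist_regions[i].size);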
> 

You're so right!!! I checked the code and it does already point to the next ones.

> I was expecting this to be caught during testing. So did you test this series with
> GICv3? How about the new vGIC?
> 

Yes, I tested it with both, on core A and R. It's working, and I think that's because the number of
redistributor regions is always 1 in my test, so ...

> > +    {
> > +        dt_child_set_range(&cells,
> > +                           GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> > +                           d->arch.vgic.rdist_regions[i].base,
> > +                           d->arch.vgic.rdist_regions[i].size);
> > +    }
> > +
> > +    res = fdt_property(fdt, "reg", reg, len);
> 
> I would suggest to free 'reg' right here as you don't need it afterwards. This will
> also remove the requirement to add a 'out' label and the addition of 'goto'.
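> 
> Something like (sketch):
> 
>     res = fdt_property(fdt, "reg", reg, len);
>     xfree(reg);
>     if ( res )
>         return res;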
> 
> >       if (res)
> > -        return res;
> > +        goto out;
> >   >       res = fdt_property_cell(fdt, "linux,phandle", kinfo->phandle_gic);
> >       if (res)
> > -        return res;
> > +        goto out;
> >
> >       res = fdt_property_cell(fdt, "phandle", kinfo->phandle_gic);
> >       if (res)
> > -        return res;
> > +        goto out;
> >
> >       res = fdt_end_node(fdt);
> >
> > + out:
> > +    xfree(reg);
> >       return res;
> >   }
> > +#endif
> >
> >   static int __init make_gic_domU_node(struct kernel_info *kinfo)
> >   {
> >       switch ( kinfo->d->arch.vgic.version )
> >       {
> > +#ifdef CONFIG_ARM_64
> >       case GIC_V3:
> >           return make_gicv3_domU_node(kinfo);
> > +#endif
> >       case GIC_V2:
> >           return make_gicv2_domU_node(kinfo);
> >       default:
> > diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c index
> > cb5a70c42e..70168ca1ac 100644
> > --- a/xen/arch/arm/vgic-v3.c
> > +++ b/xen/arch/arm/vgic-v3.c
> > @@ -1641,14 +1641,15 @@ static inline unsigned int
> vgic_v3_max_rdist_count(struct domain *d)
> >        * Normally there is only one GICv3 redistributor region.
> >        * The GICv3 DT binding provisions for multiple regions, since there are
> >        * platforms out there which need those (multi-socket systems).
> > -     * For Dom0 we have to live with the MMIO layout the hardware provides,
> > -     * so we have to copy the multiple regions - as the first region may not
> > -     * provide enough space to hold all redistributors we need.
> > +     * For direct-map domain(including dom0), we have to live with
> > + the MMIO
> 
> I find the sentence somewhat misleading because it implies that dom0 is a
> direct-map domain (this is true today, but this is merely an implementation
> detail).
> 
> However, with the change below, I think it would be better to write "For
> domain using the host memory layout..." and not go into the details of what
> sort of domain we refer to here. Instead...
> 
> > +     * layout the hardware provides, so we have to copy the multiple regions
> > +     * - as the first region may not provide enough space to hold all
> > +     * redistributors we need.
> >        * However DomU get a constructed memory map, so we can go with
> >        * the architected single redistributor region.
> >        */
> > -    return is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions :
> > -               GUEST_GICV3_RDIST_REGIONS;
> > +    return is_domain_use_host_layout(d) ? vgic_v3_hw.nr_rdist_regions :
> > +                                          GUEST_GICV3_RDIST_REGIONS;
> >   }
> >
> >   static int vgic_v3_domain_init(struct domain *d) @@ -1670,10
> > +1671,13 @@ static int vgic_v3_domain_init(struct domain *d)
> >       radix_tree_init(&d->arch.vgic.pend_lpi_tree);
> >
> >       /*
> > -     * Domain 0 gets the hardware address.
> > -     * Guests get the virtual platform layout.
> > +     * Since we map the whole GICv3 register memory map(64KB) for
> > +     * all guests(including DOM0), DOM0 and direct-map guests could be
> > +     * treated the same way here.
> > +     * direct-map domain (including Dom0) gets the hardware address.
> > +     * Other guests get the virtual platform layout.
> >        */
> > -    if ( is_hardware_domain(d) )
> > +    if ( is_domain_use_host_layout(d) )
> >       {
> >           unsigned int first_cpu = 0;
> >
> > diff --git a/xen/include/asm-arm/domain.h
> > b/xen/include/asm-arm/domain.h index fc42c6a310..e8ce3ac8d2 100644
> > --- a/xen/include/asm-arm/domain.h
> > +++ b/xen/include/asm-arm/domain.h
> > @@ -32,6 +32,9 @@ enum domain_type {
> >   /* Check if domain is direct-map memory map. */
> >   #define is_domain_direct_mapped(d) (d->options &
> > XEN_DOMCTL_CDF_directmap)
> >
> > +#define is_domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> > +                                      is_hardware_domain(d))
> 
> ... the details should be on top of this comment. The advantage is there is less
> chance for a comment to rot.
> 
> Regarding the name, I would either drop the 'is_' or s/use/using/. My
> preference goes for the former as it makes the name slightly shorter.
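> 
> i.e. (sketch, with the 'is_' prefix dropped):
> 
>     #define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>                                        is_hardware_domain(d))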
> 
> Cheers,
> 
> --
> Julien Grall


* RE: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
  2021-10-15  8:56   ` Julien Grall
  2021-10-15 10:03     ` Penny Zheng
@ 2021-11-09 10:05     ` Penny Zheng
  1 sibling, 0 replies; 18+ messages in thread
From: Penny Zheng @ 2021-11-09 10:05 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini
  Cc: Wei Chen, Bertrand Marquis, Michal Orzel

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Friday, October 15, 2021 4:57 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>
> Subject: Re: [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap
> 
> Hi Penny,
> 
> On 15/10/2021 04:09, Penny Zheng wrote:
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 0167731ab0..37e2d62d47 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -3069,8 +3069,10 @@ static int __init construct_dom0(struct domain *d)
> >   void __init create_dom0(void)
> >   {
> >       struct domain *dom0;
> > +    /* DOM0 has always its memory directly mapped. */
> >       struct xen_domctl_createdomain dom0_cfg = {
> > -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> > +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> > +                 XEN_DOMCTL_CDF_directmap,
> >           .max_evtchn_port = -1,
> >           .max_grant_frames = gnttab_dom0_frames(),
> >           .max_maptrack_frames = -1,
> > diff --git a/xen/common/domain.c b/xen/common/domain.c index
> > 8b53c49d1e..7a6131db74 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -486,7 +486,8 @@ static int sanitise_domain_config(struct
> xen_domctl_createdomain *config)
> >            ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >              XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
> >              XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> > -           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu) )
> > +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_vpmu |
> > +           XEN_DOMCTL_CDF_directmap) )
> >       {
> >           dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
> >           return -EINVAL;
> > diff --git a/xen/include/asm-arm/domain.h
> > b/xen/include/asm-arm/domain.h index 14e575288f..fc42c6a310 100644
> > --- a/xen/include/asm-arm/domain.h
> > +++ b/xen/include/asm-arm/domain.h
> > @@ -29,8 +29,8 @@ enum domain_type {
> >   #define is_64bit_domain(d) (0)
> >   #endif
> >
> > -/* The hardware domain has always its memory direct mapped. */
> > -#define is_domain_direct_mapped(d) is_hardware_domain(d)
> > +/* Check if domain is direct-map memory map. */ #define
> > +is_domain_direct_mapped(d) (d->options & XEN_DOMCTL_CDF_directmap)
> >
> >   struct vtimer {
> >       struct vcpu *v;
> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 238384b5ae..b505a0db51 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
> >   #define XEN_DOMCTL_CDF_nested_virt    (1U <<
> _XEN_DOMCTL_CDF_nested_virt)
> >   /* Should we expose the vPMU to the guest? */
> >   #define XEN_DOMCTL_CDF_vpmu           (1U << 7)
> > +/* If this domain has its memory directly mapped? (ARM only) */
> > +#define XEN_DOMCTL_CDF_directmap      (1U << 8)
> >
> >   /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
> > -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_vpmu
> > +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_directmap
> 
> In the previous version, this flag was only settable for domains created by Xen.
> Now this is also settable by the toolstack. I don't think the toolstack can
> sensibly use this flag (at least in the current state).
> 
> So can you explain why this flag is exposed to the toolstack?
> 

If I moved XEN_DOMCTL_CDF_directmap to domain.h and let it keep bit 8, another
developer could later introduce a new flag that accidentally uses bit 8 too.
To avoid that, I would like to add some explanatory comments here and maybe
rename XEN_DOMCTL_CDF_directmap to XEN_DOMCTL_CDF_Internal_directmap, wdyt?
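
Something like this (just a sketch of the idea, names and wording to be
agreed on):

    /*
     * Flag to mark the domain as directly mapped. This is an internal
     * flag only set by Xen (e.g. for dom0); it reserves bit 8 so that
     * no future public XEN_DOMCTL_CDF_* flag can clash with it.
     */
    #define XEN_DOMCTL_CDF_Internal_directmap (1U << 8)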

> Cheers,
> 
> --

Cheers

Penny Zheng
> Julien Grall


Thread overview: 18+ messages
2021-10-15  3:09 [PATCH v2 0/6] direct-map memory map Penny Zheng
2021-10-15  3:09 ` [PATCH v2 1/6] xen: introduce XEN_DOMCTL_CDF_directmap Penny Zheng
2021-10-15  8:46   ` Jan Beulich
2021-10-15  9:59     ` Penny Zheng
2021-10-15 10:08       ` Jan Beulich
2021-10-15  8:56   ` Julien Grall
2021-10-15 10:03     ` Penny Zheng
2021-11-09 10:05     ` Penny Zheng
2021-10-15  3:09 ` [PATCH v2 2/6] xen/arm: introduce direct-map for domUs Penny Zheng
2021-10-19 17:19   ` Julien Grall
2021-10-15  3:09 ` [PATCH v2 3/6] xen/arm: if direct-map domain use native addresses for GICv2 Penny Zheng
2021-10-19 17:39   ` Julien Grall
2021-10-15  3:09 ` [PATCH v2 4/6] xen/arm: if direct-map domain use native addresses for GICv3 Penny Zheng
2021-10-20 11:11   ` Julien Grall
2021-10-21  5:45     ` Penny Zheng
2021-10-15  3:09 ` [PATCH v2 5/6] xen/arm: if direct-map domain use native UART address and IRQ number for vPL011 Penny Zheng
2021-10-20 11:38   ` Julien Grall
2021-10-15  3:09 ` [PATCH v2 6/6] xen/docs: add a document to explain how to do passthrough without IOMMU Penny Zheng
