xen-devel.lists.xenproject.org archive mirror
From: Julien Grall <julien@xen.org>
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/4] xen/arm: Defer GICv2 CPU interface mapping until the first access
Date: Mon, 20 Mar 2023 19:26:22 +0000	[thread overview]
Message-ID: <e2421c2d-6ae1-bf70-530d-0683d943f519@xen.org> (raw)
In-Reply-To: <20230130040614.188296-4-Henry.Wang@arm.com>

Hi Henry,

On 30/01/2023 04:06, Henry Wang wrote:
> Currently, the mapping of the GICv2 CPU interface is created in
> arch_domain_create(). This causes some troubles in populating and
> freeing of the domain P2M pages pool. For example, a default 16
> P2M pages are required in p2m_init() to cope with the P2M mapping
> of 8KB GICv2 CPU interface area, and these 16 P2M pages would cause
> the complexity of P2M destroy in the failure path of
> arch_domain_create().
> 
> As per discussion in [1], similarly as the MMIO access for ACPI, this
> patch defers the GICv2 CPU interface mapping until the first MMIO
> access. This is achieved by moving the GICv2 CPU interface mapping
> code from vgic_v2_domain_init()/vgic_v2_map_resources() to the
> stage-2 data abort trap handling code. The original CPU interface
> size and virtual CPU interface base address is now saved in
> `struct vgic_dist` instead of the local variable of
> vgic_v2_domain_init()/vgic_v2_map_resources().
> 
> Take the opportunity to unify the way of data access using the
> existing pointer to struct vgic_dist in vgic_v2_map_resources() for
> new GICv2.
> 
> Since the hardware domain (Dom0) has an unlimited size P2M pool, the
> gicv2_map_hwdom_extra_mappings() is also not touched by this patch.

I didn't notice this in the previous version. The fact that dom0 has an 
unlimited size P2M pool doesn't matter here (this could also change in 
the future). Even if the P2M pool were limited, it would still be fine 
because the extra mappings happen after domain_create(). So there is no 
need to map them on-demand, as the code can be ordered the way we want.

So this paragraph needs to be reworded.

>   #ifdef CONFIG_GICV3
>       /* GIC V3 addressing */
>       /* List of contiguous occupied by the redistributors */
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 061c92acbd..9dd703f661 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
>   }
>   
>   /*
> - * When using ACPI, most of the MMIO regions will be mapped on-demand
> - * in stage-2 page tables for the hardware domain because Xen is not
> - * able to know from the EFI memory map the MMIO regions.
> + * Try to map the MMIO regions for some special cases:
> + * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
> + *    in stage-2 page tables for the hardware domain because Xen is not
> + *    able to know from the EFI memory map the MMIO regions.
> + * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
> + *    on the first access of the MMIO region.
>    */
>   static bool try_map_mmio(gfn_t gfn)
>   {
> @@ -1798,6 +1801,15 @@ static bool try_map_mmio(gfn_t gfn)
>       /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
>       mfn_t mfn = _mfn(gfn_x(gfn));
>   
> +    /*
> +     * Map the GICv2 virtual CPU interface in the GIC CPU interface
> +     * region of the guest on the first access of the MMIO region.
> +     */
> +    if ( d->arch.vgic.version == GIC_V2 &&
> +         gfn_eq(gfn, gaddr_to_gfn(d->arch.vgic.cbase)) )

The CPU interface is 8KB (bigger in some cases), but here you only 
check that the access falls within the first 4KB.

Cheers,

-- 
Julien Grall


Thread overview: 16+ messages
2023-01-30  4:06 [PATCH v2 0/4] P2M improvements for Arm Henry Wang
2023-01-30  4:06 ` [PATCH v2 1/4] xen/arm: Reduce redundant clear root pages when teardown p2m Henry Wang
2023-01-30  4:06 ` [PATCH v2 2/4] xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC Henry Wang
2023-03-20 18:47   ` Julien Grall
2023-01-30  4:06 ` [PATCH v2 3/4] xen/arm: Defer GICv2 CPU interface mapping until the first access Henry Wang
2023-03-20 19:26   ` Julien Grall [this message]
2023-03-21  3:57     ` Henry Wang
2023-03-21  4:00       ` Henry Wang
2023-03-21 15:35       ` Julien Grall
2023-03-22  2:31         ` Henry Wang
2023-01-30  4:06 ` [PATCH v2 4/4] xen/arm: Clean-up in p2m_init() and p2m_final_teardown() Henry Wang
2023-03-21 15:50   ` Julien Grall
2023-03-22  2:21     ` Henry Wang
2023-03-22  8:02       ` Michal Orzel
2023-03-22  8:16         ` Henry Wang
2023-03-17  1:42 ` [PATCH v2 0/4] P2M improvements for Arm Henry Wang
