From: Fuad Tabba <tabba@google.com>
To: Yanan Wang <wangyanan55@huawei.com>
Cc: Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
	Quentin Perret <qperret@google.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: Re: [PATCH v7 3/4] KVM: arm64: Tweak parameters of guest cache maintenance functions
Date: Fri, 18 Jun 2021 10:29:40 +0100	[thread overview]
Message-ID: <CA+EHjTxS9Kae3dXLsC7XDi4neb21JGwOxZzsBN8OevczRPXn8Q@mail.gmail.com> (raw)
In-Reply-To: <20210617105824.31752-4-wangyanan55@huawei.com>

Hi Yanan,

On Thu, Jun 17, 2021 at 11:58 AM Yanan Wang <wangyanan55@huawei.com> wrote:
>
> Adjust the parameter "kvm_pfn_t pfn" of __clean_dcache_guest_page
> and __invalidate_icache_guest_page to "void *va", which paves the
> way for converting these two guest CMO functions into callbacks in
> struct kvm_pgtable_mm_ops. No functional change.
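
Side note for anyone skimming the series: the reason for switching to a
VA here is that the helpers end up with the same shape as the function
pointers that patch 1/4 adds to struct kvm_pgtable_mm_ops, so the thin
wrappers below can be hooked up directly. Roughly (a sketch only -- the
member names here are illustrative, see patch 1/4 for the real ones):

    struct kvm_pgtable_mm_ops {
            /* ... existing members (zalloc_page, phys_to_virt, ...) ... */
            void (*clean_invalidate_dcache)(void *addr, size_t size);
            void (*invalidate_icache)(void *addr, size_t size);
    };

    static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
            /* ... */
            .clean_invalidate_dcache = clean_dcache_guest_page,
            .invalidate_icache       = invalidate_icache_guest_page,
    };

With the old kvm_pfn_t parameter the wrappers couldn't match that
prototype; after this patch they can.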
>
> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
> ---
>  arch/arm64/include/asm/kvm_mmu.h |  9 ++-------
>  arch/arm64/kvm/mmu.c             | 28 +++++++++++++++-------------
>  2 files changed, 17 insertions(+), 20 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 25ed956f9af1..6844a7550392 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -187,10 +187,8 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
>         return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
>  }
>
> -static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
> +static inline void __clean_dcache_guest_page(void *va, size_t size)
>  {
> -       void *va = page_address(pfn_to_page(pfn));
> -
>         /*
>          * With FWB, we ensure that the guest always accesses memory using
>          * cacheable attributes, and we don't have to clean to PoC when
> @@ -203,16 +201,13 @@ static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
>         kvm_flush_dcache_to_poc(va, size);
>  }
>
> -static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
> -                                                 unsigned long size)
> +static inline void __invalidate_icache_guest_page(void *va, size_t size)
>  {
>         if (icache_is_aliasing()) {
>                 /* any kind of VIPT cache */
>                 __flush_icache_all();
>         } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
>                 /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> -               void *va = page_address(pfn_to_page(pfn));
> -
>                 invalidate_icache_range((unsigned long)va,
>                                         (unsigned long)va + size);
>         }
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 5742ba765ff9..b980f8a47cbb 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -126,6 +126,16 @@ static void *kvm_host_va(phys_addr_t phys)
>         return __va(phys);
>  }
>
> +static void clean_dcache_guest_page(void *va, size_t size)
> +{
> +       __clean_dcache_guest_page(va, size);
> +}
> +
> +static void invalidate_icache_guest_page(void *va, size_t size)
> +{
> +       __invalidate_icache_guest_page(va, size);
> +}
> +
>  /*
>   * Unmapping vs dcache management:
>   *
> @@ -693,16 +703,6 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>         kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
>  }
>
> -static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
> -{
> -       __clean_dcache_guest_page(pfn, size);
> -}
> -
> -static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
> -{
> -       __invalidate_icache_guest_page(pfn, size);
> -}
> -
>  static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
>  {
>         send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
> @@ -1013,11 +1013,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>                 prot |= KVM_PGTABLE_PROT_W;
>
>         if (fault_status != FSC_PERM && !device)
> -               clean_dcache_guest_page(pfn, vma_pagesize);
> +               clean_dcache_guest_page(page_address(pfn_to_page(pfn)),
> +                                       vma_pagesize);
>
>         if (exec_fault) {
>                 prot |= KVM_PGTABLE_PROT_X;
> -               invalidate_icache_guest_page(pfn, vma_pagesize);
> +               invalidate_icache_guest_page(page_address(pfn_to_page(pfn)),
> +                                            vma_pagesize);
>         }
>
>         if (device)
> @@ -1219,7 +1221,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>          * We've moved a page around, probably through CoW, so let's treat it
>          * just like a translation fault and clean the cache to the PoC.
>          */
> -       clean_dcache_guest_page(pfn, PAGE_SIZE);
> +       clean_dcache_guest_page(page_address(pfn_to_page(pfn)), PAGE_SIZE);
>
>         /*
>          * The MMU notifiers will have unmapped a huge PMD before calling
> --
> 2.23.0
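
For context on where this lands: once patch 4/4 moves the CMOs into the
page-table code, the stage-2 map walker can reach these helpers through
mm_ops, along the lines of (sketch only, reusing the same illustrative
member names as above rather than the literal code from patch 4/4):

    /* in the stage-2 map walker, once the new PTE is known */
    if (stage2_pte_cacheable(pgt, new) && mm_ops->clean_invalidate_dcache)
            mm_ops->clean_invalidate_dcache(kvm_pte_follow(new, mm_ops),
                                            granule);

    if (stage2_pte_executable(new) && mm_ops->invalidate_icache)
            mm_ops->invalidate_icache(kvm_pte_follow(new, mm_ops),
                                      granule);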


Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks,
/fuad


Thread overview: 19+ messages
2021-06-17 10:58 [PATCH v7 0/4] KVM: arm64: Improve efficiency of stage2 page table Yanan Wang
2021-06-17 10:58 ` [PATCH v7 1/4] KVM: arm64: Introduce two cache maintenance callbacks Yanan Wang
2021-06-17 12:38   ` Will Deacon
2021-06-17 14:20     ` Marc Zyngier
2021-06-18  1:52       ` wangyanan (Y)
2021-06-18  8:59         ` Fuad Tabba
2021-06-18 11:10           ` Marc Zyngier
2021-06-17 10:58 ` [PATCH v7 2/4] KVM: arm64: Introduce mm_ops member for structure stage2_attr_data Yanan Wang
2021-06-18  9:29   ` Fuad Tabba
2021-06-17 10:58 ` [PATCH v7 3/4] KVM: arm64: Tweak parameters of guest cache maintenance functions Yanan Wang
2021-06-18  9:29   ` Fuad Tabba [this message]
     [not found]   ` <87czsjcsv8.wl-maz@kernel.org>
2021-06-18 13:14     ` wangyanan (Y)
2021-06-17 10:58 ` [PATCH v7 4/4] KVM: arm64: Move guest CMOs to the fault handlers Yanan Wang
2021-06-17 12:45   ` Will Deacon
2021-06-17 12:59     ` Marc Zyngier
2021-06-17 13:21       ` Will Deacon
2021-06-17 13:37         ` Marc Zyngier
2021-06-18  9:30   ` Fuad Tabba
2021-06-18 11:38 ` [PATCH v7 0/4] KVM: arm64: Improve efficiency of stage2 page table Marc Zyngier
