From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton <oliver.upton@linux.dev>
Cc: Oliver Upton <oupton@google.com>, Marc Zyngier <maz@kernel.org>,
	Ricardo Koller <ricarkol@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Will Deacon <will@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Jing Zhang <jingzhangos@google.com>,
	Colton Lewis <coltonlewis@google.com>,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 2/7] KVM: arm64: Add FEAT_TLBIRANGE support
Date: Tue, 4 Apr 2023 14:39:18 -0700	[thread overview]
Message-ID: <CAJHc60x2kkkHi=tO4UymOeFeKA315bibhHgSeXZpdEReFUh4-g@mail.gmail.com> (raw)
In-Reply-To: <ZCxvXq0dftq/Szra@linux.dev>

On Tue, Apr 4, 2023 at 11:41 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Apr 03, 2023 at 10:26:01AM -0700, Raghavendra Rao Ananta wrote:
> > Hi Oliver,
> >
> > On Wed, Mar 29, 2023 at 6:19 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > On Mon, Feb 06, 2023 at 05:23:35PM +0000, Raghavendra Rao Ananta wrote:
> > > > Define a generic function __kvm_tlb_flush_range() to
> > > > invalidate the TLBs over a range of addresses. The
> > > > implementation accepts 'op' as a generic TLBI operation.
> > > > Upcoming patches will use this to implement IPA based
> > > > TLB invalidations (ipas2e1is).
> > > >
> > > > If the system doesn't support FEAT_TLBIRANGE, the
> > > > implementation falls back to flushing the pages one by one
> > > > for the range supplied.
> > > >
> > > > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > > > ---
> > > >  arch/arm64/include/asm/kvm_asm.h | 18 ++++++++++++++++++
> > > >  1 file changed, 18 insertions(+)
> > > >
> > > > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > > > index 43c3bc0f9544d..995ff048e8851 100644
> > > > --- a/arch/arm64/include/asm/kvm_asm.h
> > > > +++ b/arch/arm64/include/asm/kvm_asm.h
> > > > @@ -221,6 +221,24 @@ DECLARE_KVM_NVHE_SYM(__per_cpu_end);
> > > >  DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
> > > >  #define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
> > > >
> > > > +#define __kvm_tlb_flush_range(op, mmu, start, end, level, tlb_level) do {    \
> > > > +     unsigned long pages, stride;                                            \
> > > > +                                                                             \
> > > > +     stride = kvm_granule_size(level);                                       \
> > >
> > > Hmm... There's a rather subtle and annoying complication here that I
> > > don't believe is handled.
> > >
> > > Similar to what I said in the last spin of the series, there is no
> > > guarantee that a range of IPAs is mapped at the exact same level
> > > throughout. Dirty logging and memslots that aren't hugepage aligned
> > > could lead to a mix of mapping levels being used within a range of the
> > > IPA space.
> > >
> > Unlike the case in the v1 comment, the level/stride here is only used
> > to step through the addresses when the system doesn't support TLBIRANGE.
> > The TTL hint is 0.
>
> Right. So we agree that the level is not uniform throughout the provided
> range. The invalidation by IPA is also used if 'pages' is odd, even on
> systems with TLBIRANGE. We must assume the worst case here, namely that a
> TLBI by IPA invalidates only a single PTE-level entry. You could wind up
> over-invalidating in that case, but you'd still be correct.
>
Sure, let's always assume a stride of 4K. But with
over-invalidation, do you think the penalty is acceptable, especially
when invalidating, say, >2M blocks on systems without TLBIRANGE?
In __kvm_tlb_flush_vmid_range(), what if we rely on the iterative
approach only when invalidating an odd number of pages on systems with
TLBIRANGE, and for !TLBIRANGE systems simply invalidate the entire TLB
(like we do today)? Thoughts? A rough sketch of what I have in mind is
below.
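
(Only a sketch of the idea, not a tested patch; the signature and the
exact IPA encoding expected by the TLBI op are placeholders.)

/*
 * Sketch of the proposed __kvm_tlb_flush_vmid_range() flow, assuming a
 * fixed 4K stride throughout the range.
 */
void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
				phys_addr_t start, unsigned long pages)
{
	if (!system_supports_tlb_range()) {
		/* !TLBIRANGE: invalidate the whole VMID, as we do today. */
		__kvm_tlb_flush_vmid(mmu);
		return;
	}

	/*
	 * With TLBIRANGE, __flush_tlb_range_op() already falls back to
	 * invalidation by IPA for any odd leftover pages, so there is
	 * no need to punt to a full VMID flush. The TTL hint is 0
	 * because the range may span multiple mapping levels.
	 */
	__flush_tlb_range_op(ipas2e1is, start, pages, PAGE_SIZE, 0, 0, false);
}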

Thank you.
Raghavendra


> > That being said, do you think we can always assume the smallest possible
> > stride (say, 4K) and hardcode it?
> > With respect to alignment, since the function is only called while
> > breaking up a table PTE, do you think it'll still be a problem even if
> > we go with the smallest-granularity stride?
>
> I believe so. If we want to apply the range-based invalidations generally
> in KVM then we will not always be dealing with a block-aligned chunk of
> the address space.
>
> > > > +     start = round_down(start, stride);                                      \
> > > > +     end = round_up(end, stride);                                            \
> > > > +     pages = (end - start) >> PAGE_SHIFT;                                    \
> > > > +                                                                             \
> > > > +     if ((!system_supports_tlb_range() &&                                    \
> > > > +          (end - start) >= (MAX_TLBI_OPS * stride)) ||                       \
> > >
> > > Doesn't checking for TLBIRANGE above eliminate the need to test against
> > > MAX_TLBI_OPS?
> > >
> > This was derived from __flush_tlb_range(). I think the condition is there
> > to just flush everything when the range is too large to iterate over and
> > flush on a system that doesn't support TLBIRANGE. Probably to prevent
> > soft lockups?
>
> Right, but you test above for system_supports_tlb_range(), meaning that
> you'd unconditionally call __kvm_tlb_flush_vmid() below.
>
> > > > +         pages >= MAX_TLBI_RANGE_PAGES) {                                    \
> > > > +             __kvm_tlb_flush_vmid(mmu);                                      \
> > > > +             break;                                                          \
> > > > +     }                                                                       \
> > > > +                                                                             \
> > > > +     __flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false);    \
> > > > +} while (0)
>
> --
> Thanks,
> Oliver
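
For reference, here is the guard being debated above, expanded into
explicit per-case branches. This is only a restatement of the quoted
condition for readability (the 'full_flush' label is a placeholder),
not a behavioural change:

/*
 * Equivalent to:
 *
 *	if ((!system_supports_tlb_range() &&
 *	     (end - start) >= (MAX_TLBI_OPS * stride)) ||
 *	    pages >= MAX_TLBI_RANGE_PAGES)
 */
if (system_supports_tlb_range()) {
	/* Range ops cannot encode more than MAX_TLBI_RANGE_PAGES. */
	if (pages >= MAX_TLBI_RANGE_PAGES)
		goto full_flush;
} else {
	/*
	 * Without TLBIRANGE the range is flushed one stride at a
	 * time, so cap the number of individual TLBIs to avoid
	 * soft lockups, per the discussion above.
	 */
	if ((end - start) >= (MAX_TLBI_OPS * stride) ||
	    pages >= MAX_TLBI_RANGE_PAGES)
		goto full_flush;
}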

