From: Marc Zyngier <maz@kernel.org>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Will Deacon <will@kernel.org>,
	Suleiman Souhlal <suleiman@google.com>,
	Joel Fernandes <joelaf@google.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCHv2 2/4] arm64: add guest pvstate support
Date: Wed, 21 Jul 2021 11:16:02 +0100	[thread overview]
Message-ID: <87bl6w2crh.wl-maz@kernel.org> (raw)
In-Reply-To: <YPffONafHdke/O3l@google.com>

On Wed, 21 Jul 2021 09:47:52 +0100,
Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
> 
> On (21/07/21 09:22), Marc Zyngier wrote:
> > On Wed, 21 Jul 2021 03:05:25 +0100,
> > Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
> > > 
> > > On (21/07/12 16:42), Marc Zyngier wrote:
> > > > > 
> > > > > PV-vcpu-state is a per-CPU struct which, for the time being,
> > > > > holds the boolean `preempted' vCPU state. During startup,
> > > > > provided that the host supports PV-state, each guest vCPU sends
> > > > > a pointer to its per-CPU variable to the host as a payload
> > > > 
> > > > What is the expected memory type for this memory region? What is its
> > > > life cycle? Where is it allocated from?
> > > 
> > > It is the guest's per-CPU area, whose physical address is shared
> > > with the host.
> > 
> > Again: what are the memory types you expect this to be used with?
> 
> I heard your questions, I'm trying to figure out the answers now.
> 
> As for the memory type - I presume you are talking about coherent vs
> non-coherent memory.

No. I'm talking about cacheable vs non-cacheable. The ARM architecture
is always coherent for memory that is inner-shareable, which applies
to any system running Linux. On the other hand, there is no
architected cache snooping when using non-cacheable accesses.
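
To put that in context, my understanding of the scheme is roughly the
following on the guest side. This is only a sketch with made-up names
and a made-up function ID, not the actual patch:

#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/types.h>

/* Hypothetical layout: one flag per vCPU, written by the host. */
struct pv_vcpu_state {
	u8	preempted;
	u8	reserved[63];	/* room for future per-vCPU state */
};

static DEFINE_PER_CPU(struct pv_vcpu_state, pv_state);

/* Hypothetical function ID, for illustration only. */
#define SMCCC_HV_PV_STATE_INIT						\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64,	\
			   ARM_SMCCC_OWNER_VENDOR_HYP, 0x21)

/* Each vCPU hands the host the physical address of its per-CPU area. */
static int pv_state_register(void)
{
	struct arm_smccc_res res;
	phys_addr_t pa = per_cpu_ptr_to_phys(this_cpu_ptr(&pv_state));

	arm_smccc_1_1_invoke(SMCCC_HV_PV_STATE_INIT, pa, &res);
	return res.a0 == SMCCC_RET_SUCCESS ? 0 : -EINVAL;
}

Whether that per-CPU area is only ever reached through a cacheable
mapping is precisely what I'm asking about.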

> Can guest per-CPU memory be non-coherent? The guest never writes
> anything to the region of memory it shares with the host; it only
> reads what the host writes to it. All reads and writes are done from
> the CPU (no device DMA access, etc.).
> 
> Do we need any cache flushes/syncs in this case?

If you expect the guest to have non-cacheable mappings (or to run with
its MMU off at any point, which amounts to the same thing) *and* still
be able to access the shared page, then *someone* will have to perform
cache maintenance operations (CMOs) to make these writes visible to
the Point of Coherency (PoC), unless you have FWB (stage-2 forced
write-back).

Needless to say, this would kill any sort of performance gain this
feature could hypothetically bring. Defining the scope for the access
would help mitigate this, even if that's just a sentence saying "the
shared page *must* be accessed from a cacheable mapping".
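
To give an idea of what the alternative looks like, this is roughly
the dance a writer has to do after each update so that a non-cacheable
(or MMU-off) reader sees it. A sketch only, hardcoding a 64-byte cache
line instead of reading CTR_EL0:

/*
 * Sketch only: clean+invalidate the shared area to the PoC so that a
 * non-cacheable (or MMU-off) reader observes the latest writes.
 * Real code would derive the line size from CTR_EL0.
 */
static inline void pv_state_sync_to_poc(void *addr, unsigned long size)
{
	unsigned long line = 64;
	unsigned long cur = (unsigned long)addr & ~(line - 1);
	unsigned long end = (unsigned long)addr + size;

	for (; cur < end; cur += line)
		asm volatile("dc civac, %0" : : "r" (cur) : "memory");
	asm volatile("dsb sy" : : : "memory");
}

Wrapping every update of the preempted flag in something like that is
exactly the overhead I'd like the spec to rule out.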

> 
> > When will the hypervisor ever stop accessing this?
> 
> KVM always accesses it for the vCPUs that are being scheduled out or
> scheduled in on the host side.

I was more hinting at whether there was a way to disable this at
runtime. Think of a guest using kexec, for example, where you really
don't want the hypervisor to start messing with memory that has since
been reallocated by the guest.
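
An explicit way for the guest to tear the registration down would be
enough. Purely as an illustration (the function ID and the notifier
wiring below are made up, not something from this series):

#include <linux/arm-smccc.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/reboot.h>

/* Hypothetical function ID, for illustration only. */
#define SMCCC_HV_PV_STATE_RELEASE					\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64,	\
			   ARM_SMCCC_OWNER_VENDOR_HYP, 0x22)

/* Ask the host to stop writing to this vCPU's shared area. */
static int pv_state_reboot(struct notifier_block *nb,
			   unsigned long action, void *data)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(SMCCC_HV_PV_STATE_RELEASE, &res);
	return NOTIFY_DONE;
}

static struct notifier_block pv_state_reboot_nb = {
	.notifier_call	= pv_state_reboot,
};

static int __init pv_state_teardown_init(void)
{
	return register_reboot_notifier(&pv_state_reboot_nb);
}
late_initcall(pv_state_teardown_init);

(A real implementation would have to cover every vCPU as well as the
CPU hotplug and reset paths, but it shows the kind of hook I have in
mind.)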

> > How does it work across reset?
> 
> I need to figure out what happens during reset/migration in the first
> place.

Yup.

	M.

-- 
Without deviation from the norm, progress is not possible.

Thread overview: 75+ messages
2021-07-09  4:37 [PATCHv2 0/4] arm64:kvm: teach guest sched that VCPUs can be preempted Sergey Senozhatsky
2021-07-09  4:37 ` [PATCHv2 1/4] arm64: smccc: Add SMCCC pv-vcpu-state function call IDs Sergey Senozhatsky
2021-07-12 14:22   ` Marc Zyngier
2021-07-09  4:37 ` [PATCHv2 2/4] arm64: add guest pvstate support Sergey Senozhatsky
2021-07-09  7:39   ` David Edmondson
2021-07-09  7:52     ` Sergey Senozhatsky
2021-07-09 18:58   ` Joel Fernandes
2021-07-09 21:53     ` Sergey Senozhatsky
2021-07-11 16:58       ` Joel Fernandes
2021-07-12 15:42   ` Marc Zyngier
2021-07-21  2:05     ` Sergey Senozhatsky
2021-07-21  8:22       ` Marc Zyngier
2021-07-21  8:47         ` Sergey Senozhatsky
2021-07-21 10:16           ` Marc Zyngier [this message]
2021-07-09  4:37 ` [PATCHv2 3/4] arm64: do not use dummy vcpu_is_preempted() Sergey Senozhatsky
2021-07-12 15:47   ` Marc Zyngier
2021-07-21  2:06     ` Sergey Senozhatsky
2021-07-09  4:37 ` [PATCHv2 4/4] arm64: add host pv-vcpu-state support Sergey Senozhatsky
2021-07-12 16:24   ` Marc Zyngier
2021-07-20 18:44     ` Joel Fernandes
2021-07-21  8:40       ` Marc Zyngier
2021-07-21 10:38         ` Sergey Senozhatsky
2021-07-21 11:08           ` Marc Zyngier
2021-07-21  1:15     ` Sergey Senozhatsky
2021-07-21  9:10       ` Marc Zyngier
