From: Sean Christopherson <seanjc@google.com>
To: Oliver Upton <oupton@google.com>
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>, Peter Shier <pshier@google.com>,
	Jim Mattson <jmattson@google.com>,
	David Matlack <dmatlack@google.com>,
	Ricardo Koller <ricarkol@google.com>,
	Jing Zhang <jingzhangos@google.com>,
	Raghavendra Rao Anata <rananta@google.com>,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	Andrew Jones <drjones@redhat.com>
Subject: Re: [PATCH v5 02/13] KVM: x86: Refactor tsc synchronization code
Date: Fri, 30 Jul 2021 18:08:25 +0000	[thread overview]
Message-ID: <YQRAGSJ1PxwXA2m/@google.com> (raw)
In-Reply-To: <20210729173300.181775-3-oupton@google.com>

On Thu, Jul 29, 2021, Oliver Upton wrote:
> Refactor kvm_synchronize_tsc to make a new function that allows callers
> to specify TSC parameters (offset, value, nanoseconds, etc.) explicitly
> for the sake of participating in TSC synchronization.
> 
> This changes the locking semantics around TSC writes.

"refactor" and "changes the locking semantics" are somewhat contradictory.  The
correct way to do this is to first change the locking semantics, then extract the
helper you want.  That makes review and archaeology easier, and isolates the
locking change in case it isn't so safe after all.

> Writes to the TSC will now take the pvclock gtod lock while holding the tsc
> write lock, whereas before these locks were disjoint.
> 
> Reviewed-by: David Matlack <dmatlack@google.com>
> Signed-off-by: Oliver Upton <oupton@google.com>
> ---
> +/*
> + * Infers attempts to synchronize the guest's tsc from host writes. Sets the
> + * offset for the vcpu and tracks the TSC matching generation that the vcpu
> + * participates in.
> + *
> + * Must hold kvm->arch.tsc_write_lock to call this function.

Drop this blurb, lockdep assertions exist for a reason :-)  The
lockdep_assert_held() below already documents and enforces the requirement.

> + */
> +static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
> +				  u64 ns, bool matched)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	bool already_matched;

Eww, not your code, but "matched" and "already_matched" are not helpful names,
e.g. they don't provide a clue as to _what_ matched, and thus don't explain why
there are two separate variables.  And I would expect an "already" variant to
come in from the caller, not the other way 'round.

  matched         => freq_matched
  already_matched => gen_matched
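
E.g. a quick sketch with those names (untested, purely to illustrate the
intent; everything else stays as-is):

	static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset,
					  u64 tsc, u64 ns, bool freq_matched)
	{
		struct kvm *kvm = vcpu->kvm;
		/* Did this vCPU already sync to the current generation? */
		bool gen_matched = vcpu->arch.this_tsc_generation ==
				   kvm->arch.cur_tsc_generation;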

> +	unsigned long flags;
> +
> +	lockdep_assert_held(&kvm->arch.tsc_write_lock);
> +
> +	already_matched =
> +	       (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
> +
> +	/*
> +	 * We track the most recent recorded KHZ, write and time to
> +	 * allow the matching interval to be extended at each write.
> +	 */
> +	kvm->arch.last_tsc_nsec = ns;
> +	kvm->arch.last_tsc_write = tsc;
> +	kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
> +
> +	vcpu->arch.last_guest_tsc = tsc;
> +
> +	/* Keep track of which generation this VCPU has synchronized to */
> +	vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
> +	vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
> +	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
> +
> +	kvm_vcpu_write_tsc_offset(vcpu, offset);
> +
> +	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);

I believe this can be spin_lock(), since AFAICT the caller _must_ disable IRQs
when taking tsc_write_lock, i.e. we know IRQs are disabled at this point.
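
I.e. (a sketch, assuming the above holds for every caller; "flags" and the
irqsave/irqrestore variants go away entirely):

	spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
	...
	kvm_track_tsc_matching(vcpu);
	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);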

> +	if (!matched) {
> +		/*
> +		 * We split periods of matched TSC writes into generations.
> +		 * For each generation, we track the original measured
> +		 * nanosecond time, offset, and write, so if TSCs are in
> +		 * sync, we can match exact offset, and if not, we can match
> +		 * exact software computation in compute_guest_tsc()
> +		 *
> +		 * These values are tracked in kvm->arch.cur_xxx variables.
> +		 */
> +		kvm->arch.nr_vcpus_matched_tsc = 0;
> +		kvm->arch.cur_tsc_generation++;
> +		kvm->arch.cur_tsc_nsec = ns;
> +		kvm->arch.cur_tsc_write = tsc;
> +		kvm->arch.cur_tsc_offset = offset;

IMO, adjusting kvm->arch.cur_tsc_* belongs outside of pvclock_gtod_sync_lock.
Based on the existing code, it is protected by tsc_write_lock.  I don't care
about the extra work while holding pvclock_gtod_sync_lock, but it's very confusing
to see code that reads variables outside of a lock, then takes a lock and
writes those same variables without first rechecking.
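
Something like this, maybe (an untested sketch: cur_tsc_* stays protected by
tsc_write_lock, and pvclock_gtod_sync_lock is taken only for the fields that
actually need it):

	if (!matched) {
		/* Start a new generation; guarded by tsc_write_lock. */
		kvm->arch.cur_tsc_generation++;
		kvm->arch.cur_tsc_nsec = ns;
		kvm->arch.cur_tsc_write = tsc;
		kvm->arch.cur_tsc_offset = offset;
	}

	spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
	if (!matched)
		kvm->arch.nr_vcpus_matched_tsc = 0;
	else if (!already_matched)
		kvm->arch.nr_vcpus_matched_tsc++;

	kvm_track_tsc_matching(vcpu);
	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);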

> +		matched = false;

What's the point of clearing "matched"?  It's already false...

> +	} else if (!already_matched) {
> +		kvm->arch.nr_vcpus_matched_tsc++;
> +	}
> +
> +	kvm_track_tsc_matching(vcpu);
> +	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
> +}
> +

Thread overview:
2021-07-29 17:32 [PATCH v5 00/13] KVM: Add idempotent controls for migrating system counter state Oliver Upton
2021-07-29 17:32 ` [PATCH v5 01/13] KVM: x86: Report host tsc and realtime values in KVM_GET_CLOCK Oliver Upton
2021-07-30 17:48   ` Sean Christopherson
2021-07-30 18:24     ` Oliver Upton
2021-07-29 17:32 ` [PATCH v5 02/13] KVM: x86: Refactor tsc synchronization code Oliver Upton
2021-07-30 18:08   ` Sean Christopherson [this message]
2021-08-03 21:18     ` Oliver Upton
2021-07-29 17:32 ` [PATCH v5 03/13] KVM: x86: Expose TSC offset controls to userspace Oliver Upton
2021-07-30 18:34   ` Sean Christopherson
2021-07-29 17:32 ` [PATCH v5 04/13] tools: arch: x86: pull in pvclock headers Oliver Upton
2021-07-29 17:32 ` [PATCH v5 05/13] selftests: KVM: Add test for KVM_{GET,SET}_CLOCK Oliver Upton
2021-07-29 17:32 ` [PATCH v5 06/13] selftests: KVM: Fix kvm device helper ioctl assertions Oliver Upton
2021-07-29 17:32 ` [PATCH v5 07/13] selftests: KVM: Add helpers for vCPU device attributes Oliver Upton
2021-07-29 17:32 ` [PATCH v5 08/13] selftests: KVM: Introduce system counter offset test Oliver Upton
2021-07-29 17:32 ` [PATCH v5 09/13] KVM: arm64: Allow userspace to configure a vCPU's virtual offset Oliver Upton
2021-07-30 10:12   ` Marc Zyngier
2021-08-02 23:27     ` Oliver Upton
2021-07-29 17:32 ` [PATCH v5 10/13] selftests: KVM: Add support for aarch64 to system_counter_offset_test Oliver Upton
2021-07-29 17:32 ` [PATCH v5 11/13] KVM: arm64: Provide userspace access to the physical counter offset Oliver Upton
2021-07-30 11:08   ` Marc Zyngier
2021-07-30 15:22     ` Oliver Upton
2021-07-30 16:17       ` Marc Zyngier
2021-07-30 16:48         ` Oliver Upton
2021-08-04  6:59           ` Oliver Upton
2021-07-29 17:32 ` [PATCH v5 12/13] selftests: KVM: Test physical counter offsetting Oliver Upton
2021-07-29 17:33 ` [PATCH v5 13/13] selftests: KVM: Add counter emulation benchmark Oliver Upton
2021-07-29 17:45   ` Andrew Jones