From: Oliver Upton <oupton@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Sean Christopherson <seanjc@google.com>,
Marc Zyngier <maz@kernel.org>, Peter Shier <pshier@google.com>,
Jim Mattson <jmattson@google.com>,
David Matlack <dmatlack@google.com>,
Ricardo Koller <ricarkol@google.com>,
Jing Zhang <jingzhangos@google.com>,
Raghavendra Rao Anata <rananta@google.com>,
James Morse <james.morse@arm.com>,
Alexandru Elisei <alexandru.elisei@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
linux-arm-kernel@lists.infradead.org,
Andrew Jones <drjones@redhat.com>, Will Deacon <will@kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Oliver Upton <oupton@google.com>
Subject: [PATCH v7 4/6] KVM: x86: Take the pvclock sync lock behind the tsc_write_lock
Date: Mon, 16 Aug 2021 00:11:28 +0000
Message-ID: <20210816001130.3059564-5-oupton@google.com>
In-Reply-To: <20210816001130.3059564-1-oupton@google.com>

A later change requires that the pvclock sync lock be taken while
holding the tsc_write_lock. Change the locking in kvm_synchronize_tsc()
now, so that the locking change is isolated in its own commit.
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Oliver Upton <oupton@google.com>
---
 Documentation/virt/kvm/locking.rst | 11 +++++++++++
 arch/x86/kvm/x86.c                 |  2 +-
 2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 8138201efb09..0bf346adac2a 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -36,6 +36,9 @@ On x86:
   holding kvm->arch.mmu_lock (typically with ``read_lock``, otherwise
   there's no need to take kvm->arch.tdp_mmu_pages_lock at all).
 
+- kvm->arch.tsc_write_lock is taken outside
+  kvm->arch.pvclock_gtod_sync_lock
+
 Everything else is a leaf: no other lock is taken inside the critical
 sections.
@@ -222,6 +225,14 @@ time it will be set using the Dirty tracking mechanism described above.
 :Comment:	'raw' because hardware enabling/disabling must be atomic /wrt
 		migration.
 
+:Name:		kvm_arch::pvclock_gtod_sync_lock
+:Type:		raw_spinlock_t
+:Arch:		x86
+:Protects:	kvm_arch::{cur_tsc_generation,cur_tsc_nsec,cur_tsc_write,
+			cur_tsc_offset,nr_vcpus_matched_tsc}
+:Comment:	'raw' because updating the kvm master clock must not be
+		preempted.
+
 :Name:		kvm_arch::tsc_write_lock
 :Type:		raw_spinlock
 :Arch:		x86
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b1e9a4885be6..f1434cd388b9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2533,7 +2533,6 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
 
 	kvm_vcpu_write_tsc_offset(vcpu, offset);
-	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 
 	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
 	if (!matched) {
@@ -2544,6 +2543,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 
 	kvm_track_tsc_matching(vcpu);
 	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
+	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 }
 
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
--
2.33.0.rc1.237.g0d66db33f3-goog