* [PATCH v4 00/11] KVM: x86/xen: Add in-kernel Xen event channel delivery
@ 2021-11-20 10:27 ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:27 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson@google.com,
	wanpengli@tencent.com, seanjc@google.com,
	vkuznets@redhat.com, mtosatti@redhat.com,
	joro@8bytes.org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

Event channels, yeah. That really is where I started.

It was all so simple in Joao and Ankur's original version at
https://www.spinics.net/lists/kvm/msg182556.html — just a handful of
test_and_set_bit() calls on the mapped page.

When I posted v1 I didn't quite understand how steal time and nesting
were safely using the kvm_map_gfn() function, and I posted the Xen
part declaring that I had "reduced it to a previously solved problem".

Then I frowned at kvm_map_gfn() for a bit longer and concluded it was
basically impossible to use safely on its own, because the page it maps
might belong to another guest by the time the function even returns to
its caller. So I posted a v2 in which I did something safer by hooking
into the MMU notifiers.

I then fixed the steal time reporting, and killed gfn_to_pfn_cache,
under separate cover.

In v3 of this series I re-introduced a saner gfn_to_pfn_cache with MMU
notifier support to give it proper invalidation semantics. This can now
be used for the Xen event channel support, and should also be usable for
fixing the various use-after-free races in the nesting code — the last
patch in this series is an untested proof-of-concept attempt at fixing
one such race.
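
The consumer side ends up looking roughly like this. It is a minimal
sketch only: the check/refresh-under-rwlock pattern is what this series
adds, but treat the exact names and signatures as illustrative rather
than a settled API:

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) {
		read_unlock_irqrestore(&gpc->lock, flags);

		/* Revalidate/remap the page; may sleep, so done unlocked. */
		if (kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE))
			goto out_err;	/* e.g. punt to a workqueue */

		read_lock_irqsave(&gpc->lock, flags);
	}

	/*
	 * Safe to touch the mapping now: the MMU notifier invalidation
	 * takes gpc->lock as a writer, so the underlying page cannot be
	 * freed or given to another guest while we hold the read lock.
	 */
	test_and_set_bit(port, (unsigned long *)gpc->khva);

	read_unlock_irqrestore(&gpc->lock, flags);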

Since adding a C file in virt/kvm/ was somewhat more painful than it
really should have been, there is a small detour into all the
arch-specific Makefiles to make them include a common one.

v4: Rework the dirty marking given the revelation that it can only be
done from the context of an active vCPU, so just defer it to happen in
the unmap. Also introduce a lightweight unmap call instead of only the
full destroy. Document the Xen shared info page as NOT participating in
dirty tracking. Fix a typo in the CONFIG_HAVE_KVM_DIRTY_RING patch which
prevented the capability from being advertised.
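
In sketch form (the function and field names here are assumptions based
on this series, not a settled API), the deferral amounts to:

	void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm,
					struct gfn_to_pfn_cache *gpc)
	{
		/*
		 * mark_page_dirty_in_slot() may push to the dirty ring,
		 * which belongs to the *running* vCPU, so it can only be
		 * called from an active vCPU context; hence deferring the
		 * dirty marking from invalidation time to unmap time.
		 */
		if (gpc->dirty)
			mark_page_dirty_in_slot(kvm, gpc->memslot,
						gpa_to_gfn(gpc->gpa));
		/* ... actually tear down the mapping ... */
	}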

Intended for merging up to patch 10. Patch 11 is for illustration.

* [PATCH v4 01/11] KVM: Introduce CONFIG_HAVE_KVM_DIRTY_RING
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 10:28   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson@google.com,
	wanpengli@tencent.com, seanjc@google.com,
	vkuznets@redhat.com, mtosatti@redhat.com,
	joro@8bytes.org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

I'd like to make the build include dirty_ring.c based on whether the
arch wants it or not. That's a whole lot simpler if there's a config
symbol instead of doing it implicitly on KVM_DIRTY_LOG_PAGE_OFFSET
being set to something non-zero.
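
For a hypothetical architecture "foo", opting in would then look
something like this sketch (the page offset constant is arch-specific
and lives in the arch's uapi header; x86 uses 64):

	# arch/foo/kvm/Kconfig
	config KVM
		...
		select HAVE_KVM_DIRTY_RING

	/* arch/foo/include/uapi/asm/kvm.h */
	#define KVM_DIRTY_LOG_PAGE_OFFSET 64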

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kvm/Kconfig           | 1 +
 include/linux/kvm_dirty_ring.h | 8 ++++----
 virt/kvm/Kconfig               | 3 +++
 virt/kvm/kvm_main.c            | 4 ++--
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 619186138176..d7fa0a42ac25 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -27,6 +27,7 @@ config KVM
 	select MMU_NOTIFIER
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQFD
+	select HAVE_KVM_DIRTY_RING
 	select IRQ_BYPASS_MANAGER
 	select HAVE_KVM_IRQ_BYPASS
 	select HAVE_KVM_IRQ_ROUTING
diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
index fb0fa18878e2..906f899813dc 100644
--- a/include/linux/kvm_dirty_ring.h
+++ b/include/linux/kvm_dirty_ring.h
@@ -27,9 +27,9 @@ struct kvm_dirty_ring {
 	int index;
 };
 
-#if (KVM_DIRTY_LOG_PAGE_OFFSET == 0)
+#ifndef CONFIG_HAVE_KVM_DIRTY_RING
 /*
- * If KVM_DIRTY_LOG_PAGE_OFFSET not defined, kvm_dirty_ring.o should
+ * If CONFIG_HAVE_KVM_DIRTY_RING not defined, kvm_dirty_ring.o should
  * not be included as well, so define these nop functions for the arch.
  */
 static inline u32 kvm_dirty_ring_get_rsvd_entries(void)
@@ -69,7 +69,7 @@ static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
 	return true;
 }
 
-#else /* KVM_DIRTY_LOG_PAGE_OFFSET == 0 */
+#else /* CONFIG_HAVE_KVM_DIRTY_RING */
 
 u32 kvm_dirty_ring_get_rsvd_entries(void);
 int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size);
@@ -92,6 +92,6 @@ struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset);
 void kvm_dirty_ring_free(struct kvm_dirty_ring *ring);
 bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring);
 
-#endif /* KVM_DIRTY_LOG_PAGE_OFFSET == 0 */
+#endif /* CONFIG_HAVE_KVM_DIRTY_RING */
 
 #endif	/* KVM_DIRTY_RING_H */
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 62b39149b8c8..97cf5413ac25 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -13,6 +13,9 @@ config HAVE_KVM_IRQFD
 config HAVE_KVM_IRQ_ROUTING
        bool
 
+config HAVE_KVM_DIRTY_RING
+       bool
+
 config HAVE_KVM_EVENTFD
        bool
        select EVENTFD
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 72c6453bcef4..8eb8c962838d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3432,7 +3432,7 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
 
 static bool kvm_page_in_dirty_ring(struct kvm *kvm, unsigned long pgoff)
 {
-#if KVM_DIRTY_LOG_PAGE_OFFSET > 0
+#ifdef CONFIG_HAVE_KVM_DIRTY_RING
 	return (pgoff >= KVM_DIRTY_LOG_PAGE_OFFSET) &&
 	    (pgoff < KVM_DIRTY_LOG_PAGE_OFFSET +
 	     kvm->dirty_ring_size / PAGE_SIZE);
@@ -4135,7 +4135,7 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_NR_MEMSLOTS:
 		return KVM_USER_MEM_SLOTS;
 	case KVM_CAP_DIRTY_LOG_RING:
-#if KVM_DIRTY_LOG_PAGE_OFFSET > 0
+#ifdef CONFIG_HAVE_KVM_DIRTY_RING
 		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
 #else
 		return 0;
-- 
2.31.1

* [PATCH v4 02/11] KVM: Add Makefile.kvm for common files, use it for x86
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 10:28   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson@google.com,
	wanpengli@tencent.com, seanjc@google.com,
	vkuznets@redhat.com, mtosatti@redhat.com,
	joro@8bytes.org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Splitting kvm_main.c out into smaller and better-organized files is
slightly non-trivial when it involves editing a bunch of per-arch
KVM makefiles. Provide virt/kvm/Makefile.kvm for them to include.
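
With this in place a converted arch Makefile needs only the include
line, with the per-object decisions driven by the arch's Kconfig
selections, as the x86 hunk below shows and patches 3-6 repeat for
the other architectures:

	include $(srctree)/virt/kvm/Makefile.kvm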

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Marc Zyngier <maz@kernel.org>
---
 arch/x86/kvm/Makefile |  7 +------
 virt/kvm/Makefile.kvm | 13 +++++++++++++
 2 files changed, 14 insertions(+), 6 deletions(-)
 create mode 100644 virt/kvm/Makefile.kvm

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 75dfd27b6e8a..30f244b64523 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -7,12 +7,7 @@ ifeq ($(CONFIG_FRAME_POINTER),y)
 OBJECT_FILES_NON_STANDARD_vmenter.o := y
 endif
 
-KVM := ../../../virt/kvm
-
-kvm-y			+= $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o \
-				$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o \
-				$(KVM)/dirty_ring.o $(KVM)/binary_stats.o
-kvm-$(CONFIG_KVM_ASYNC_PF)	+= $(KVM)/async_pf.o
+include $(srctree)/virt/kvm/Makefile.kvm
 
 kvm-y			+= x86.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
new file mode 100644
index 000000000000..ffdcad3cc97a
--- /dev/null
+++ b/virt/kvm/Makefile.kvm
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for Kernel-based Virtual Machine module
+#
+
+KVM ?= ../../../virt/kvm
+
+kvm-y := $(KVM)/kvm_main.o $(KVM)/eventfd.o $(KVM)/binary_stats.o
+kvm-$(CONFIG_KVM_VFIO) += $(KVM)/vfio.o
+kvm-$(CONFIG_KVM_MMIO) += $(KVM)/coalesced_mmio.o
+kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
+kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
+kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
-- 
2.31.1

* [PATCH v4 03/11] KVM: s390: Use Makefile.kvm for common files
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 10:28   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson@google.com,
	wanpengli@tencent.com, seanjc@google.com,
	vkuznets@redhat.com, mtosatti@redhat.com,
	joro@8bytes.org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/Makefile | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/Makefile b/arch/s390/kvm/Makefile
index b3aaadc60ead..e4f50453cf7f 100644
--- a/arch/s390/kvm/Makefile
+++ b/arch/s390/kvm/Makefile
@@ -3,13 +3,11 @@
 #
 # Copyright IBM Corp. 2008
 
-KVM := ../../../virt/kvm
-common-objs = $(KVM)/kvm_main.o $(KVM)/eventfd.o  $(KVM)/async_pf.o \
-	      $(KVM)/irqchip.o $(KVM)/vfio.o $(KVM)/binary_stats.o
+include $(srctree)/virt/kvm/Makefile.kvm
 
 ccflags-y := -Ivirt/kvm -Iarch/s390/kvm
 
-kvm-objs := $(common-objs) kvm-s390.o intercept.o interrupt.o priv.o sigp.o
+kvm-objs := kvm-s390.o intercept.o interrupt.o priv.o sigp.o
 kvm-objs += diag.o gaccess.o guestdbg.o vsie.o pv.o
 
 obj-$(CONFIG_KVM) += kvm.o
-- 
2.31.1

* [PATCH v4 04/11] KVM: mips: Use Makefile.kvm for common files
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 10:28   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson@google.com,
	wanpengli@tencent.com, seanjc@google.com,
	vkuznets@redhat.com, mtosatti@redhat.com,
	joro@8bytes.org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/mips/kvm/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/mips/kvm/Makefile b/arch/mips/kvm/Makefile
index d3710959da55..21ff75bcdbc4 100644
--- a/arch/mips/kvm/Makefile
+++ b/arch/mips/kvm/Makefile
@@ -2,9 +2,10 @@
 # Makefile for KVM support for MIPS
 #
 
+include $(srctree)/virt/kvm/Makefile.kvm
+
 ccflags-y += -Ivirt/kvm -Iarch/mips/kvm
 
-kvm-y := $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o eventfd.o binary_stats.o)
 kvm-$(CONFIG_CPU_HAS_MSA) += msa.o
 
 kvm-y +=    mips.o emulate.o entry.o \
-- 
2.31.1

* [PATCH v4 05/11] KVM: RISC-V: Use Makefile.kvm for common files
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 10:28   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson@google.com,
	wanpengli@tencent.com, seanjc@google.com,
	vkuznets@redhat.com, mtosatti@redhat.com,
	joro@8bytes.org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/riscv/kvm/Makefile | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 30cdd1df0098..300590225348 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -5,14 +5,10 @@
 
 ccflags-y += -I $(srctree)/$(src)
 
-KVM := ../../../virt/kvm
+include $(srctree)/virt/kvm/Makefile.kvm
 
 obj-$(CONFIG_KVM) += kvm.o
 
-kvm-y += $(KVM)/kvm_main.o
-kvm-y += $(KVM)/coalesced_mmio.o
-kvm-y += $(KVM)/binary_stats.o
-kvm-y += $(KVM)/eventfd.o
 kvm-y += main.o
 kvm-y += vm.o
 kvm-y += vmid.o
-- 
2.31.1

* [PATCH v4 06/11] KVM: powerpc: Use Makefile.kvm for common files
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

It's all fairly baroque, but in the end I don't think there's any reason
for $(KVM)/irqchip.o to have been handled differently from the other
common objects; they all end up in $(kvm-y) anyway, regardless of
whether they get there via $(common-objs-y) or the CPU-specific object
lists.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
---
 arch/powerpc/kvm/Makefile | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 583c14ef596e..245f59118413 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -4,11 +4,8 @@
 #
 
 ccflags-y := -Ivirt/kvm -Iarch/powerpc/kvm
-KVM := ../../../virt/kvm
 
-common-objs-y = $(KVM)/kvm_main.o $(KVM)/eventfd.o $(KVM)/binary_stats.o
-common-objs-$(CONFIG_KVM_VFIO) += $(KVM)/vfio.o
-common-objs-$(CONFIG_KVM_MMIO) += $(KVM)/coalesced_mmio.o
+include $(srctree)/virt/kvm/Makefile.kvm
 
 common-objs-y += powerpc.o emulate_loadstore.o
 obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
@@ -125,7 +122,6 @@ kvm-book3s_32-objs := \
 kvm-objs-$(CONFIG_KVM_BOOK3S_32) := $(kvm-book3s_32-objs)
 
 kvm-objs-$(CONFIG_KVM_MPIC) += mpic.o
-kvm-objs-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 
 kvm-objs := $(kvm-objs-m) $(kvm-objs-y)
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 07/11] KVM: arm64: Use Makefile.kvm for common files
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/Makefile | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 989bb5dad2c8..04a53f71a6b6 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -5,14 +5,12 @@
 
 ccflags-y += -I $(srctree)/$(src)
 
-KVM=../../../virt/kvm
+include $(srctree)/virt/kvm/Makefile.kvm
 
 obj-$(CONFIG_KVM) += kvm.o
 obj-$(CONFIG_KVM) += hyp/
 
-kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
-	 $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
-	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
+kvm-y += arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
 	 inject_fault.o va_layout.o handle_exit.o \
 	 guest.o debug.o reset.o sys_regs.o \
 	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 08/11] KVM: Reinstate gfn_to_pfn_cache with invalidation support
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

The gfn_to_pfn_cache can be used in two modes. There is an atomic mode
where the cached mapping is accessed while holding the rwlock, and a
mode where the physical address is used by a vCPU in guest mode.

For the latter case, an invalidation will wake the vCPU with the new
KVM_REQ_GPC_INVALIDATE, and the architecture will need to refresh any
caches it still needs to access before entering guest mode again.
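
As a rough illustration (nothing below is added by this series, and
kvm_arch_revalidate_gpcs() is a made-up name for whatever hook an
architecture would actually use), the request handling on the way back
into the guest might look like:

	if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu)) {
		/*
		 * A cache used while IN_GUEST_MODE was invalidated;
		 * revalidate it before the next VM entry.
		 */
		r = kvm_arch_revalidate_gpcs(vcpu);
		if (r)
			goto out;
	}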

Only one vCPU can be targeted by the wake requests; it's simple enough
to make it wake all vCPUs or even a mask but I don't see a use case for
that additional complexity right now.

Invalidation happens from the invalidate_range_start MMU notifier, which
needs to be able to sleep in order to wake the vCPU and wait for it.

This means that revalidation potentially needs to "wait" for the MMU
operation to complete and the invalidate_range_end notifier to be
invoked. Like the vCPU when it takes a page fault in that period, we
just spin — fixing that in a future patch by implementing an actual
*wait* may be another part of shaving this particularly hirsute yak.

As noted in the comments in the function itself, the only case where
the invalidate_range_start notifier is expected to be called *without*
being able to sleep is when the OOM reaper is killing the process. In
that case, we expect the vCPU threads already to have exited, and thus
there will be nothing to wake, and no reason to wait. So we clear the
KVM_REQUEST_WAIT bit and send the request anyway, then complain loudly
if there actually *was* anything to wake up.
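
To make the intended calling convention concrete, here is an untested
sketch of the atomic-mode access pattern (the u32 store and the local
variables are invented for illustration; the API calls are the ones
this patch adds):

	/* Once, at setup: no vCPU, kernel mapping, marked dirty on write */
	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, false, true,
					gpa, sizeof(u32), true);
	if (ret)
		return ret;

	/* On each access: check under the read lock, refresh on failure */
	read_lock(&gpc->lock);
	while (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpa, sizeof(u32))) {
		read_unlock(&gpc->lock);

		ret = kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa,
						   sizeof(u32), true);
		if (ret)
			return ret;

		read_lock(&gpc->lock);
	}
	*(u32 *)gpc->khva = val;
	read_unlock(&gpc->lock);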

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kvm/Kconfig              |   1 +
 include/linux/kvm_host.h          | 103 ++++++++++
 include/linux/kvm_types.h         |  18 ++
 virt/kvm/Kconfig                  |   3 +
 virt/kvm/Makefile.kvm             |   1 +
 virt/kvm/dirty_ring.c             |   2 +-
 virt/kvm/kvm_main.c               |  12 +-
 virt/kvm/{mmu_lock.h => kvm_mm.h} |  23 ++-
 virt/kvm/pfncache.c               | 318 ++++++++++++++++++++++++++++++
 9 files changed, 474 insertions(+), 7 deletions(-)
 rename virt/kvm/{mmu_lock.h => kvm_mm.h} (55%)
 create mode 100644 virt/kvm/pfncache.c

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index d7fa0a42ac25..af351107d47f 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -26,6 +26,7 @@ config KVM
 	select PREEMPT_NOTIFIERS
 	select MMU_NOTIFIER
 	select HAVE_KVM_IRQCHIP
+	select HAVE_KVM_PFNCACHE
 	select HAVE_KVM_IRQFD
 	select HAVE_KVM_DIRTY_RING
 	select IRQ_BYPASS_MANAGER
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c310648cc8f1..457c38d75913 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -151,6 +151,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_UNBLOCK           2
 #define KVM_REQ_UNHALT            3
 #define KVM_REQ_VM_DEAD           (4 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_GPC_INVALIDATE    (5 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQUEST_ARCH_BASE     8
 
 #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \
@@ -559,6 +560,10 @@ struct kvm {
 	unsigned long mn_active_invalidate_count;
 	struct rcuwait mn_memslots_update_rcuwait;
 
+	/* For management / invalidation of gfn_to_pfn_caches */
+	spinlock_t gpc_lock;
+	struct list_head gpc_list;
+
 	/*
 	 * created_vcpus is protected by kvm->lock, and is incremented
 	 * at the beginning of KVM_CREATE_VCPU.  online_vcpus is only
@@ -966,6 +971,104 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 			 unsigned long len);
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 
+/**
+ * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a
+ *                             given guest physical address.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @vcpu:	   vCPU to be used for marking pages dirty and to be woken on
+ *		   invalidation.
+ * @guest_uses_pa: indicates that the resulting host physical PFN is used while
+ *		   @vcpu is IN_GUEST_MODE so invalidations should wake it.
+ * @kernel_map:    requests a kernel virtual mapping (kmap / memremap).
+ * @gpa:	   guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit in a single page.
+ * @dirty:         mark the cache dirty immediately.
+ *
+ * @return:	   0 for success.
+ *		   -EINVAL for a mapping which would cross a page boundary.
+ *                 -EFAULT for an untranslatable guest physical address.
+ *
+ * This primes a gfn_to_pfn_cache and links it into @kvm's list for
+ * invalidations to be processed. Invalidation callbacks to @vcpu using
+ * %KVM_REQ_GPC_INVALIDATE will occur only for MMU notifiers, not for KVM
+ * memslot changes. Callers are required to use kvm_gfn_to_pfn_cache_check()
+ * to ensure that the cache is valid before accessing the target page.
+ */
+int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
+			      bool kernel_map, gpa_t gpa, unsigned long len,
+			      bool dirty);
+
+/**
+ * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @gpa:	   current guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit in a
+ *		   single page.
+ *
+ * @return:	   %true if the cache is still valid and the address matches.
+ *		   %false if the cache is not valid.
+ *
+ * Callers outside IN_GUEST_MODE context should hold a read lock on @gpc->lock
+ * while calling this function, and then continue to hold the lock until the
+ * access is complete.
+ *
+ * Callers in IN_GUEST_MODE may do so without locking, although they should
+ * still hold a read lock on kvm->srcu for the memslot checks.
+ */
+bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				gpa_t gpa, unsigned long len);
+
+/**
+ * kvm_gfn_to_pfn_cache_refresh - update a previously initialized cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @gpa:	   updated guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit in a single page.
+ * @dirty:         mark the cache dirty immediately.
+ *
+ * @return:	   0 for success.
+ *		   -EINVAL for a mapping which would cross a page boundary.
+ *                 -EFAULT for an untranslatable guest physical address.
+ *
+ * This will attempt to refresh a gfn_to_pfn_cache. Note that a successful
+ * returm from this function does not mean the page can be immediately
+ * accessed because it may have raced with an invalidation. Callers must
+ * still lock and check the cache status, as this function does not return
+ * with the lock still held to permit access.
+ */
+int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				 gpa_t gpa, unsigned long len, bool dirty);
+
+/**
+ * kvm_gfn_to_pfn_cache_unmap - temporarily unmap a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This unmaps the referenced page and marks it dirty, if appropriate. The
+ * cache is left in the invalid state but at least the mapping from GPA to
+ * userspace HVA will remain cached and can be reused on a subsequent
+ * refresh.
+ */
+void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+
+/**
+ * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This removes a cache from @kvm's list so that it is no longer
+ * processed on MMU notifier invocation.
+ */
+void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
 
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 234eab059839..22a52ef12b4d 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -19,6 +19,7 @@ struct kvm_memslots;
 enum kvm_mr_change;
 
 #include <linux/types.h>
+#include <linux/spinlock_types.h>
 
 #include <asm/kvm_types.h>
 
@@ -53,6 +54,23 @@ struct gfn_to_hva_cache {
 	struct kvm_memory_slot *memslot;
 };
 
+struct gfn_to_pfn_cache {
+	u64 generation;
+	gpa_t gpa;
+	unsigned long uhva;
+	struct kvm_memory_slot *memslot;
+	struct kvm_vcpu *vcpu;
+	struct list_head list;
+	rwlock_t lock;
+	void *khva;
+	kvm_pfn_t pfn;
+	bool active;
+	bool valid;
+	bool dirty;
+	bool kernel_map;
+	bool guest_uses_pa;
+};
+
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 /*
  * Memory caches are used to preallocate memory ahead of various MMU flows,
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 97cf5413ac25..f4834c20e4a6 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -4,6 +4,9 @@
 config HAVE_KVM
        bool
 
+config HAVE_KVM_PFNCACHE
+       bool
+
 config HAVE_KVM_IRQCHIP
        bool
 
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index ffdcad3cc97a..2c27d5d0c367 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -11,3 +11,4 @@ kvm-$(CONFIG_KVM_MMIO) += $(KVM)/coalesced_mmio.o
 kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
+kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 8e9874760fb3..222ecc81d7df 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -9,7 +9,7 @@
 #include <linux/vmalloc.h>
 #include <linux/kvm_dirty_ring.h>
 #include <trace/events/kvm.h>
-#include "mmu_lock.h"
+#include "kvm_mm.h"
 
 int __weak kvm_cpu_dirty_log_size(void)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8eb8c962838d..e639481456b8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -59,7 +59,7 @@
 
 #include "coalesced_mmio.h"
 #include "async_pf.h"
-#include "mmu_lock.h"
+#include "kvm_mm.h"
 #include "vfio.h"
 
 #define CREATE_TRACE_POINTS
@@ -684,6 +684,9 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mn_active_invalidate_count++;
 	spin_unlock(&kvm->mn_invalidate_lock);
 
+	gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
+					  hva_range.may_block);
+
 	__kvm_handle_hva_range(kvm, &hva_range);
 
 	return 0;
@@ -1051,6 +1054,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	spin_lock_init(&kvm->mn_invalidate_lock);
 	rcuwait_init(&kvm->mn_memslots_update_rcuwait);
 
+	INIT_LIST_HEAD(&kvm->gpc_list);
+	spin_lock_init(&kvm->gpc_lock);
+
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
@@ -2406,8 +2412,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-			bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
+		     bool write_fault, bool *writable)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn = 0;
diff --git a/virt/kvm/mmu_lock.h b/virt/kvm/kvm_mm.h
similarity index 55%
rename from virt/kvm/mmu_lock.h
rename to virt/kvm/kvm_mm.h
index 9e1308f9734c..b976e4b07e88 100644
--- a/virt/kvm/mmu_lock.h
+++ b/virt/kvm/kvm_mm.h
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
-#ifndef KVM_MMU_LOCK_H
-#define KVM_MMU_LOCK_H 1
+#ifndef __KVM_MM_H__
+#define __KVM_MM_H__ 1
 
 /*
  * Architectures can choose whether to use an rwlock or spinlock
@@ -20,4 +20,21 @@
 #define KVM_MMU_UNLOCK(kvm)    spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-#endif
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
+		     bool write_fault, bool *writable);
+
+#ifdef CONFIG_HAVE_KVM_PFNCACHE
+void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
+				       unsigned long start,
+				       unsigned long end,
+				       bool may_block);
+#else
+static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
+						     unsigned long start,
+						     unsigned long end,
+						     bool may_block)
+{
+}
+#endif /* HAVE_KVM_PFNCACHE */
+
+#endif /* __KVM_MM_H__ */
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
new file mode 100644
index 000000000000..5fcbce2a5385
--- /dev/null
+++ b/virt/kvm/pfncache.c
@@ -0,0 +1,318 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Kernel-based Virtual Machine driver for Linux
+ *
+ * This module enables kernel and guest-mode vCPU access to guest physical
+ * memory with suitable invalidation mechanisms.
+ *
+ * Copyright © 2021 Amazon.com, Inc. or its affiliates.
+ *
+ * Authors:
+ *   David Woodhouse <dwmw2@infradead.org>
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/kvm.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+
+#include "kvm_mm.h"
+
+/*
+ * MMU notifier 'invalidate_range_start' hook.
+ */
+void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
+				       unsigned long end, bool may_block)
+{
+	DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);
+	struct gfn_to_pfn_cache *gpc;
+	bool wake_vcpus = false;
+
+	spin_lock(&kvm->gpc_lock);
+	list_for_each_entry(gpc, &kvm->gpc_list, list) {
+		write_lock_irq(&gpc->lock);
+
+		/* Only a single page so no need to care about length */
+		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
+		    gpc->uhva >= start && gpc->uhva < end) {
+			gpc->valid = false;
+
+			/*
+			 * If a guest vCPU could be using the physical address,
+			 * it needs to be woken.
+			 */
+			if (gpc->guest_uses_pa) {
+				if (!wake_vcpus) {
+					wake_vcpus = true;
+					bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
+				}
+				__set_bit(gpc->vcpu->vcpu_idx, vcpu_bitmap);
+			}
+
+			/*
+			 * We cannot call mark_page_dirty() from here because
+			 * this physical CPU might not have an active vCPU
+			 * with which to do the KVM dirty tracking.
+			 *
+			 * Neither is there any point in telling the kernel MM
+			 * that the underlying page is dirty. A vCPU in guest
+			 * mode might still be writing to it up to the point
+			 * where we wake them a few lines further down anyway.
+			 *
+			 * So all the dirty marking happens on the unmap.
+			 */
+		}
+		write_unlock_irq(&gpc->lock);
+	}
+	spin_unlock(&kvm->gpc_lock);
+
+	if (wake_vcpus) {
+		unsigned int req = KVM_REQ_GPC_INVALIDATE;
+		bool called;
+
+		/*
+		 * If the OOM reaper is active, then all vCPUs should have
+		 * been stopped already, so perform the request without
+		 * KVM_REQUEST_WAIT and be sad if any needed to be woken.
+		 */
+		if (!may_block)
+			req &= ~KVM_REQUEST_WAIT;
+
+		called = kvm_make_vcpus_request_mask(kvm, req, vcpu_bitmap);
+
+		WARN_ON_ONCE(called && !may_block);
+	}
+}
+
+bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				gpa_t gpa, unsigned long len)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+
+	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
+		return false;
+
+	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	    kvm_is_error_hva(gpc->uhva))
+		return false;
+
+	if (!gpc->valid)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
+
+static void __release_gpc(struct kvm *kvm, kvm_pfn_t pfn, void *khva,
+			  gpa_t gpa, bool dirty)
+{
+	/* Unmap the old page if it was mapped before */
+	if (!is_error_noslot_pfn(pfn)) {
+		if (pfn_valid(pfn)) {
+			kunmap(pfn_to_page(pfn));
+#ifdef CONFIG_HAS_IOMEM
+		} else {
+			memunmap(khva);
+#endif
+		}
+
+		kvm_release_pfn(pfn, dirty);
+		if (dirty)
+			mark_page_dirty(kvm, gpa);
+	}
+}
+
+int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				 gpa_t gpa, unsigned long len, bool dirty)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	unsigned long page_offset = gpa & ~PAGE_MASK;
+	kvm_pfn_t old_pfn, new_pfn;
+	unsigned long old_uhva;
+	gpa_t old_gpa;
+	void *old_khva;
+	bool old_valid, old_dirty;
+	int ret = 0;
+
+	/*
+	 * It must fit within a single page. The 'len' argument is
+	 * only to enforce that.
+	 */
+	if (page_offset + len > PAGE_SIZE)
+		return -EINVAL;
+
+	write_lock_irq(&gpc->lock);
+
+	old_gpa = gpc->gpa;
+	old_pfn = gpc->pfn;
+	old_khva = gpc->khva;
+	old_uhva = gpc->uhva;
+	old_valid = gpc->valid;
+	old_dirty = gpc->dirty;
+
+	/* If the userspace HVA is invalid, refresh that first */
+	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	    kvm_is_error_hva(gpc->uhva)) {
+		gfn_t gfn = gpa_to_gfn(gpa);
+
+		gpc->dirty = false;
+		gpc->gpa = gpa;
+		gpc->generation = slots->generation;
+		gpc->memslot = __gfn_to_memslot(slots, gfn);
+		gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn);
+
+		if (kvm_is_error_hva(gpc->uhva)) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		gpc->uhva += page_offset;
+	}
+
+	/*
+	 * If the userspace HVA changed or the PFN was already invalid,
+	 * drop the lock and do the HVA to PFN lookup again.
+	 */
+	if (!old_valid || old_uhva != gpc->uhva) {
+		unsigned long uhva = gpc->uhva;
+		void *new_khva = NULL;
+		unsigned long mmu_seq;
+		int retry;
+
+		/* Placeholders for "hva is valid but not yet mapped" */
+		gpc->pfn = KVM_PFN_ERR_FAULT;
+		gpc->khva = NULL;
+		gpc->valid = true;
+
+		write_unlock_irq(&gpc->lock);
+
+	retry_map:
+		mmu_seq = kvm->mmu_notifier_seq;
+		smp_rmb();
+
+		/* We always request a writeable mapping */
+		new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
+		if (is_error_noslot_pfn(new_pfn)) {
+			ret = -EFAULT;
+			goto map_done;
+		}
+
+		read_lock(&kvm->mmu_lock);
+		retry = mmu_notifier_retry_hva(kvm, mmu_seq, uhva);
+		read_unlock(&kvm->mmu_lock);
+		if (retry) {
+			cond_resched();
+			goto retry_map;
+		}
+
+		if (gpc->kernel_map) {
+			if (new_pfn == old_pfn) {
+				new_khva = (void *)((unsigned long)old_khva - page_offset);
+				old_pfn = KVM_PFN_ERR_FAULT;
+				old_khva = NULL;
+			} else if (pfn_valid(new_pfn)) {
+				new_khva = kmap(pfn_to_page(new_pfn));
+#ifdef CONFIG_HAS_IOMEM
+			} else {
+				new_khva = memremap(pfn_to_hpa(new_pfn), PAGE_SIZE, MEMREMAP_WB);
+#endif
+			}
+			if (!new_khva)
+				ret = -EFAULT;
+		}
+
+	map_done:
+		write_lock_irq(&gpc->lock);
+		if (ret) {
+			gpc->valid = false;
+			gpc->pfn = KVM_PFN_ERR_FAULT;
+			gpc->khva = NULL;
+		} else {
+			/* At this point, gpc->valid may already have been cleared */
+			gpc->pfn = new_pfn;
+			gpc->khva = new_khva + page_offset;
+		}
+	}
+
+ out:
+	if (ret)
+		gpc->dirty = false;
+	else
+		gpc->dirty = dirty;
+
+	write_unlock_irq(&gpc->lock);
+
+	__release_gpc(kvm, old_pfn, old_khva, old_gpa, old_dirty);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_refresh);
+
+void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+{
+	void *old_khva;
+	kvm_pfn_t old_pfn;
+	bool old_dirty;
+	gpa_t old_gpa;
+
+	write_lock_irq(&gpc->lock);
+
+	gpc->valid = false;
+
+	old_khva = gpc->khva;
+	old_dirty = gpc->dirty;
+	old_gpa = gpc->gpa;
+	old_pfn = gpc->pfn;
+
+	/*
+	 * We can leave the GPA → uHVA map cache intact but the PFN
+	 * lookup will need to be redone even for the same page.
+	 */
+	gpc->khva = NULL;
+	gpc->pfn = KVM_PFN_ERR_FAULT;
+
+	write_unlock_irq(&gpc->lock);
+
+	__release_gpc(kvm, old_pfn, old_khva, old_gpa, old_dirty);
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+
+
+int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
+			      bool kernel_map, gpa_t gpa, unsigned long len,
+			      bool dirty)
+{
+	if (!gpc->active) {
+		rwlock_init(&gpc->lock);
+
+		gpc->khva = NULL;
+		gpc->pfn = KVM_PFN_ERR_FAULT;
+		gpc->uhva = KVM_HVA_ERR_BAD;
+		gpc->vcpu = vcpu;
+		gpc->kernel_map = kernel_map;
+		gpc->guest_uses_pa = guest_uses_pa;
+		gpc->valid = false;
+		gpc->active = true;
+
+		spin_lock(&kvm->gpc_lock);
+		list_add(&gpc->list, &kvm->gpc_list);
+		spin_unlock(&kvm->gpc_lock);
+	}
+	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len, dirty);
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init);
+
+void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+{
+	if (gpc->active) {
+		spin_lock(&kvm->gpc_lock);
+		list_del(&gpc->list);
+		spin_unlock(&kvm->gpc_lock);
+
+		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
+		gpc->active = false;
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 08/11] KVM: Reinstate gfn_to_pfn_cache with invalidation support
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

This can be used in two modes. There is an atomic mode where the cached
mapping is accessed while holding the rwlock, and a mode where the
physical address is used by a vCPU in guest mode.

For the latter case, an invalidation will wake the vCPU with the new
KVM_REQ_GPC_INVALIDATE, and the architecture will need to refresh any
caches it still needs to access before entering guest mode again.
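
As an illustrative sketch (not code from this patch; the function name
and error value are mine), an atomic-mode user follows roughly this
pattern, refreshing and retrying until the cache is valid under the
read lock:

	static int access_cached_page(struct kvm *kvm,
				      struct gfn_to_pfn_cache *gpc)
	{
		unsigned long flags;

		read_lock_irqsave(&gpc->lock, flags);
		while (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa,
						   PAGE_SIZE)) {
			read_unlock_irqrestore(&gpc->lock, flags);
			if (kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa,
							 PAGE_SIZE, false))
				return -EFAULT;
			read_lock_irqsave(&gpc->lock, flags);
		}
		/* gpc->khva is safe to dereference until the lock drops */
		read_unlock_irqrestore(&gpc->lock, flags);
		return 0;
	}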

Only one vCPU can be targeted by the wake requests; it's simple enough
to make it wake all vCPUs or even a mask but I don't see a use case for
that additional complexity right now.

Invalidation happens from the invalidate_range_start MMU notifier, which
needs to be able to sleep in order to wake the vCPU and wait for it.

This means that revalidation potentially needs to "wait" for the MMU
operation to complete and the invalidate_range_end notifier to be
invoked. Like the vCPU when it takes a page fault in that period, we
just spin — fixing that in a future patch by implementing an actual
*wait* may be another part of shaving this particularly hirsute yak.

As noted in the comments in the function itself, the only case where
the invalidate_range_start notifier is expected to be called *without*
being able to sleep is when the OOM reaper is killing the process. In
that case, we expect the vCPU threads already to have exited, and thus
there will be nothing to wake, and no reason to wait. So we clear the
KVM_REQUEST_WAIT bit and send the request anyway, then complain loudly
if there actually *was* anything to wake up.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kvm/Kconfig              |   1 +
 include/linux/kvm_host.h          | 103 ++++++++++
 include/linux/kvm_types.h         |  18 ++
 virt/kvm/Kconfig                  |   3 +
 virt/kvm/Makefile.kvm             |   1 +
 virt/kvm/dirty_ring.c             |   2 +-
 virt/kvm/kvm_main.c               |  12 +-
 virt/kvm/{mmu_lock.h => kvm_mm.h} |  23 ++-
 virt/kvm/pfncache.c               | 318 ++++++++++++++++++++++++++++++
 9 files changed, 474 insertions(+), 7 deletions(-)
 rename virt/kvm/{mmu_lock.h => kvm_mm.h} (55%)
 create mode 100644 virt/kvm/pfncache.c

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index d7fa0a42ac25..af351107d47f 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -26,6 +26,7 @@ config KVM
 	select PREEMPT_NOTIFIERS
 	select MMU_NOTIFIER
 	select HAVE_KVM_IRQCHIP
+	select HAVE_KVM_PFNCACHE
 	select HAVE_KVM_IRQFD
 	select HAVE_KVM_DIRTY_RING
 	select IRQ_BYPASS_MANAGER
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c310648cc8f1..457c38d75913 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -151,6 +151,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_UNBLOCK           2
 #define KVM_REQ_UNHALT            3
 #define KVM_REQ_VM_DEAD           (4 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_GPC_INVALIDATE    (5 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQUEST_ARCH_BASE     8
 
 #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \
@@ -559,6 +560,10 @@ struct kvm {
 	unsigned long mn_active_invalidate_count;
 	struct rcuwait mn_memslots_update_rcuwait;
 
+	/* For management / invalidation of gfn_to_pfn_caches */
+	spinlock_t gpc_lock;
+	struct list_head gpc_list;
+
 	/*
 	 * created_vcpus is protected by kvm->lock, and is incremented
 	 * at the beginning of KVM_CREATE_VCPU.  online_vcpus is only
@@ -966,6 +971,104 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 			 unsigned long len);
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 
+/**
+ * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a
+ *                             given guest physical address.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @vcpu:	   vCPU to be used for marking pages dirty and to be woken on
+ *		   invalidation.
+ * @guest_uses_pa: indicates that the resulting host physical PFN is used while
+ *		   @vcpu is IN_GUEST_MODE so invalidations should wake it.
+ * @kernel_map:    requests a kernel virtual mapping (kmap / memremap).
+ * @gpa:	   guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit a single page.
+ * @dirty:         mark the cache dirty immediately.
+ *
+ * @return:	   0 for success.
+ *		   -EINVAL for a mapping which would cross a page boundary.
+ *                 -EFAULT for an untranslatable guest physical address.
+ *
+ * This primes a gfn_to_pfn_cache and links it into the @kvm's list for
+ * invalidations to be processed. Invalidation callbacks to @vcpu using
+ * %KVM_REQ_GPC_INVALIDATE will occur only for MMU notifiers, not for KVM
+ * memslot changes. Callers are required to use kvm_gfn_to_pfn_cache_check()
+ * to ensure that the cache is valid before accessing the target page.
+ */
+int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
+			      bool kernel_map, gpa_t gpa, unsigned long len,
+			      bool dirty);
+
+/**
+ * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @gpa:	   current guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit a single page.
+ *
+ * @return:	   %true if the cache is still valid and the address matches.
+ *		   %false if the cache is not valid.
+ *
+ * Callers outside IN_GUEST_MODE context should hold a read lock on @gpc->lock
+ * while calling this function, and then continue to hold the lock until the
+ * access is complete.
+ *
+ * Callers in IN_GUEST_MODE may call this function without locking, although
+ * they should still hold a read lock on kvm->srcu for the memslot checks.
+ */
+bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				gpa_t gpa, unsigned long len);
+
+/**
+ * kvm_gfn_to_pfn_cache_refresh - update a previously initialized cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @gpa:	   updated guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit a single page.
+ * @dirty:         mark the cache dirty immediately.
+ *
+ * @return:	   0 for success.
+ *		   -EINVAL for a mapping which would cross a page boundary.
+ *                 -EFAULT for an untranslatable guest physical address.
+ *
+ * This will attempt to refresh a gfn_to_pfn_cache. Note that a successful
+ * return from this function does not mean the page can be immediately
+ * accessed because it may have raced with an invalidation. Callers must
+ * still lock and check the cache status, as this function does not return
+ * with the lock still held to permit access.
+ */
+int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				 gpa_t gpa, unsigned long len, bool dirty);
+
+/**
+ * kvm_gfn_to_pfn_cache_unmap - temporarily unmap a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This unmaps the referenced page and marks it dirty, if appropriate. The
+ * cache is left in the invalid state but at least the mapping from GPA to
+ * userspace HVA will remain cached and can be reused on a subsequent
+ * refresh.
+ */
+void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+
+/**
+ * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This removes a cache from the @kvm's list to be processed on MMU notifier
+ * invocation.
+ */
+void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
 
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 234eab059839..22a52ef12b4d 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -19,6 +19,7 @@ struct kvm_memslots;
 enum kvm_mr_change;
 
 #include <linux/types.h>
+#include <linux/spinlock_types.h>
 
 #include <asm/kvm_types.h>
 
@@ -53,6 +54,23 @@ struct gfn_to_hva_cache {
 	struct kvm_memory_slot *memslot;
 };
 
+struct gfn_to_pfn_cache {
+	u64 generation;
+	gpa_t gpa;
+	unsigned long uhva;
+	struct kvm_memory_slot *memslot;
+	struct kvm_vcpu *vcpu;
+	struct list_head list;
+	rwlock_t lock;
+	void *khva;
+	kvm_pfn_t pfn;
+	bool active;
+	bool valid;
+	bool dirty;
+	bool kernel_map;
+	bool guest_uses_pa;
+};
+
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 /*
  * Memory caches are used to preallocate memory ahead of various MMU flows,
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 97cf5413ac25..f4834c20e4a6 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -4,6 +4,9 @@
 config HAVE_KVM
        bool
 
+config HAVE_KVM_PFNCACHE
+       bool
+
 config HAVE_KVM_IRQCHIP
        bool
 
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index ffdcad3cc97a..2c27d5d0c367 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -11,3 +11,4 @@ kvm-$(CONFIG_KVM_MMIO) += $(KVM)/coalesced_mmio.o
 kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
+kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 8e9874760fb3..222ecc81d7df 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -9,7 +9,7 @@
 #include <linux/vmalloc.h>
 #include <linux/kvm_dirty_ring.h>
 #include <trace/events/kvm.h>
-#include "mmu_lock.h"
+#include "kvm_mm.h"
 
 int __weak kvm_cpu_dirty_log_size(void)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8eb8c962838d..e639481456b8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -59,7 +59,7 @@
 
 #include "coalesced_mmio.h"
 #include "async_pf.h"
-#include "mmu_lock.h"
+#include "kvm_mm.h"
 #include "vfio.h"
 
 #define CREATE_TRACE_POINTS
@@ -684,6 +684,9 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mn_active_invalidate_count++;
 	spin_unlock(&kvm->mn_invalidate_lock);
 
+	gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
+					  hva_range.may_block);
+
 	__kvm_handle_hva_range(kvm, &hva_range);
 
 	return 0;
@@ -1051,6 +1054,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	spin_lock_init(&kvm->mn_invalidate_lock);
 	rcuwait_init(&kvm->mn_memslots_update_rcuwait);
 
+	INIT_LIST_HEAD(&kvm->gpc_list);
+	spin_lock_init(&kvm->gpc_lock);
+
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
@@ -2406,8 +2412,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-			bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
+		     bool write_fault, bool *writable)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn = 0;
diff --git a/virt/kvm/mmu_lock.h b/virt/kvm/kvm_mm.h
similarity index 55%
rename from virt/kvm/mmu_lock.h
rename to virt/kvm/kvm_mm.h
index 9e1308f9734c..b976e4b07e88 100644
--- a/virt/kvm/mmu_lock.h
+++ b/virt/kvm/kvm_mm.h
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
-#ifndef KVM_MMU_LOCK_H
-#define KVM_MMU_LOCK_H 1
+#ifndef __KVM_MM_H__
+#define __KVM_MM_H__ 1
 
 /*
  * Architectures can choose whether to use an rwlock or spinlock
@@ -20,4 +20,21 @@
 #define KVM_MMU_UNLOCK(kvm)    spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-#endif
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
+		     bool write_fault, bool *writable);
+
+#ifdef CONFIG_HAVE_KVM_PFNCACHE
+void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
+				       unsigned long start,
+				       unsigned long end,
+				       bool may_block);
+#else
+static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
+						     unsigned long start,
+						     unsigned long end,
+						     bool may_block)
+{
+}
+#endif /* HAVE_KVM_PFNCACHE */
+
+#endif /* __KVM_MM_H__ */
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
new file mode 100644
index 000000000000..5fcbce2a5385
--- /dev/null
+++ b/virt/kvm/pfncache.c
@@ -0,0 +1,318 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Kernel-based Virtual Machine driver for Linux
+ *
+ * This module enables kernel and guest-mode vCPU access to guest physical
+ * memory with suitable invalidation mechanisms.
+ *
+ * Copyright © 2021 Amazon.com, Inc. or its affiliates.
+ *
+ * Authors:
+ *   David Woodhouse <dwmw2@infradead.org>
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/kvm.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+
+#include "kvm_mm.h"
+
+/*
+ * MMU notifier 'invalidate_range_start' hook.
+ */
+void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
+				       unsigned long end, bool may_block)
+{
+	DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);
+	struct gfn_to_pfn_cache *gpc;
+	bool wake_vcpus = false;
+
+	spin_lock(&kvm->gpc_lock);
+	list_for_each_entry(gpc, &kvm->gpc_list, list) {
+		write_lock_irq(&gpc->lock);
+
+		/* Only a single page so no need to care about length */
+		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
+		    gpc->uhva >= start && gpc->uhva < end) {
+			gpc->valid = false;
+
+			/*
+			 * If a guest vCPU could be using the physical address,
+			 * it needs to be woken.
+			 */
+			if (gpc->guest_uses_pa) {
+				if (!wake_vcpus) {
+					wake_vcpus = true;
+					bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
+				}
+				__set_bit(gpc->vcpu->vcpu_idx, vcpu_bitmap);
+			}
+
+			/*
+			 * We cannot call mark_page_dirty() from here because
+			 * this physical CPU might not have an active vCPU
+			 * with which to do the KVM dirty tracking.
+			 *
+			 * Neither is there any point in telling the kernel MM
+			 * that the underlying page is dirty. A vCPU in guest
+			 * mode might still be writing to it up to the point
+			 * where we wake them a few lines further down anyway.
+			 *
+			 * So all the dirty marking happens on the unmap.
+			 */
+		}
+		write_unlock_irq(&gpc->lock);
+	}
+	spin_unlock(&kvm->gpc_lock);
+
+	if (wake_vcpus) {
+		unsigned int req = KVM_REQ_GPC_INVALIDATE;
+		bool called;
+
+		/*
+		 * If the OOM reaper is active, then all vCPUs should have
+		 * been stopped already, so perform the request without
+		 * KVM_REQUEST_WAIT and be sad if any needed to be woken.
+		 */
+		if (!may_block)
+			req &= ~KVM_REQUEST_WAIT;
+
+		called = kvm_make_vcpus_request_mask(kvm, req, vcpu_bitmap);
+
+		WARN_ON_ONCE(called && !may_block);
+	}
+}
+
+bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				gpa_t gpa, unsigned long len)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+
+	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
+		return false;
+
+	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	    kvm_is_error_hva(gpc->uhva))
+		return false;
+
+	if (!gpc->valid)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
+
+static void __release_gpc(struct kvm *kvm, kvm_pfn_t pfn, void *khva,
+			  gpa_t gpa, bool dirty)
+{
+	/* Unmap the old page if it was mapped before */
+	if (!is_error_noslot_pfn(pfn)) {
+		if (pfn_valid(pfn)) {
+			kunmap(pfn_to_page(pfn));
+#ifdef CONFIG_HAS_IOMEM
+		} else {
+			memunmap(khva);
+#endif
+		}
+
+		kvm_release_pfn(pfn, dirty);
+		if (dirty)
+			mark_page_dirty(kvm, gpa);
+	}
+}
+
+int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				 gpa_t gpa, unsigned long len, bool dirty)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	unsigned long page_offset = gpa & ~PAGE_MASK;
+	kvm_pfn_t old_pfn, new_pfn;
+	unsigned long old_uhva;
+	gpa_t old_gpa;
+	void *old_khva;
+	bool old_valid, old_dirty;
+	int ret = 0;
+
+	/*
+	 * It must fit within a single page. The 'len' argument is
+	 * only to enforce that.
+	 */
+	if (page_offset + len > PAGE_SIZE)
+		return -EINVAL;
+
+	write_lock_irq(&gpc->lock);
+
+	old_gpa = gpc->gpa;
+	old_pfn = gpc->pfn;
+	old_khva = gpc->khva;
+	old_uhva = gpc->uhva;
+	old_valid = gpc->valid;
+	old_dirty = gpc->dirty;
+
+	/* If the userspace HVA is invalid, refresh that first */
+	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	    kvm_is_error_hva(gpc->uhva)) {
+		gfn_t gfn = gpa_to_gfn(gpa);
+
+		gpc->dirty = false;
+		gpc->gpa = gpa;
+		gpc->generation = slots->generation;
+		gpc->memslot = __gfn_to_memslot(slots, gfn);
+		gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn);
+
+		if (kvm_is_error_hva(gpc->uhva)) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		gpc->uhva += page_offset;
+	}
+
+	/*
+	 * If the userspace HVA changed or the PFN was already invalid,
+	 * drop the lock and do the HVA to PFN lookup again.
+	 */
+	if (!old_valid || old_uhva != gpc->uhva) {
+		unsigned long uhva = gpc->uhva;
+		void *new_khva = NULL;
+		unsigned long mmu_seq;
+		int retry;
+
+		/* Placeholders for "hva is valid but not yet mapped" */
+		gpc->pfn = KVM_PFN_ERR_FAULT;
+		gpc->khva = NULL;
+		gpc->valid = true;
+
+		write_unlock_irq(&gpc->lock);
+
+	retry_map:
+		mmu_seq = kvm->mmu_notifier_seq;
+		smp_rmb();
+
+		/* We always request a writeable mapping */
+		new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
+		if (is_error_noslot_pfn(new_pfn)) {
+			ret = -EFAULT;
+			goto map_done;
+		}
+
+		read_lock(&kvm->mmu_lock);
+		retry = mmu_notifier_retry_hva(kvm, mmu_seq, uhva);
+		read_unlock(&kvm->mmu_lock);
+		if (retry) {
+			cond_resched();
+			goto retry_map;
+		}
+
+		if (gpc->kernel_map) {
+			if (new_pfn == old_pfn) {
+				new_khva = (void *)((unsigned long)old_khva - page_offset);
+				old_pfn = KVM_PFN_ERR_FAULT;
+				old_khva = NULL;
+			} else if (pfn_valid(new_pfn)) {
+				new_khva = kmap(pfn_to_page(new_pfn));
+#ifdef CONFIG_HAS_IOMEM
+			} else {
+				new_khva = memremap(pfn_to_hpa(new_pfn), PAGE_SIZE, MEMREMAP_WB);
+#endif
+			}
+			if (!new_khva)
+				ret = -EFAULT;
+		}
+
+	map_done:
+		write_lock_irq(&gpc->lock);
+		if (ret) {
+			gpc->valid = false;
+			gpc->pfn = KVM_PFN_ERR_FAULT;
+			gpc->khva = NULL;
+		} else {
+			/* At this point, gpc->valid may already have been cleared */
+			gpc->pfn = new_pfn;
+			gpc->khva = new_khva + page_offset;
+		}
+	}
+
+ out:
+	if (ret)
+		gpc->dirty = false;
+	else
+		gpc->dirty = dirty;
+
+	write_unlock_irq(&gpc->lock);
+
+	__release_gpc(kvm, old_pfn, old_khva, old_gpa, old_dirty);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_refresh);
+
+void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+{
+	void *old_khva;
+	kvm_pfn_t old_pfn;
+	bool old_dirty;
+	gpa_t old_gpa;
+
+	write_lock_irq(&gpc->lock);
+
+	gpc->valid = false;
+
+	old_khva = gpc->khva;
+	old_dirty = gpc->dirty;
+	old_gpa = gpc->gpa;
+	old_pfn = gpc->pfn;
+
+	/*
+	 * We can leave the GPA → uHVA map cache intact but the PFN
+	 * lookup will need to be redone even for the same page.
+	 */
+	gpc->khva = NULL;
+	gpc->pfn = KVM_PFN_ERR_FAULT;
+
+	write_unlock_irq(&gpc->lock);
+
+	__release_gpc(kvm, old_pfn, old_khva, old_gpa, old_dirty);
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+
+
+int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
+			      bool kernel_map, gpa_t gpa, unsigned long len,
+			      bool dirty)
+{
+	if (!gpc->active) {
+		rwlock_init(&gpc->lock);
+
+		gpc->khva = NULL;
+		gpc->pfn = KVM_PFN_ERR_FAULT;
+		gpc->uhva = KVM_HVA_ERR_BAD;
+		gpc->vcpu = vcpu;
+		gpc->kernel_map = kernel_map;
+		gpc->guest_uses_pa = guest_uses_pa;
+		gpc->valid = false;
+		gpc->active = true;
+
+		spin_lock(&kvm->gpc_lock);
+		list_add(&gpc->list, &kvm->gpc_list);
+		spin_unlock(&kvm->gpc_lock);
+	}
+	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len, dirty);
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init);
+
+void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+{
+	if (gpc->active) {
+		spin_lock(&kvm->gpc_lock);
+		list_del(&gpc->list);
+		spin_unlock(&kvm->gpc_lock);
+
+		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
+		gpc->active = false;
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy);
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 08/11] KVM: Reinstate gfn_to_pfn_cache with invalidation support
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Anup Patel, wanpengli @ tencent . com, Catalin Marinas,
	Joao Martins, Will Deacon, kvmarm, linux-s390, Michael Ellerman,
	joro @ 8bytes . org, Huacai Chen, Christian Borntraeger,
	Aleksandar Markovic, karahmed, Benjamin Herrenschmidt,
	Boris Ostrovsky, linux-arm-kernel, jmattson @ google . com,
	mtosatti @ redhat . com, linux-mips, kvm-riscv, Marc Zyngier,
	vkuznets @ redhat . com, linuxppc-dev

From: David Woodhouse <dwmw@amazon.co.uk>

This can be used in two modes. There is an atomic mode where the cached
mapping is accessed while holding the rwlock, and a mode where the
physical address is used by a vCPU in guest mode.

For the latter case, an invalidation will wake the vCPU with the new
KVM_REQ_GPC_INVALIDATE, and the architecture will need to refresh any
caches it still needs to access before entering guest mode again.
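
As a hedged sketch of that guest-mode side (the request handling shown
here is hypothetical; each architecture wires this up in its own run
loop, and 'gpc' stands for whatever cache it keeps):

	/* On the next VM entry attempt, before re-entering the guest: */
	if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu)) {
		/* Re-validate any cached PAs the guest consumes directly */
		if (kvm_gfn_to_pfn_cache_refresh(vcpu->kvm, gpc, gpc->gpa,
						 PAGE_SIZE, false))
			return -EFAULT;	/* or punt to userspace */
	}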

Only one vCPU can be targeted by the wake requests; it's simple enough
to make it wake all vCPUs or even a mask but I don't see a use case for
that additional complexity right now.

Invalidation happens from the invalidate_range_start MMU notifier, which
needs to be able to sleep in order to wake the vCPU and wait for it.

This means that revalidation potentially needs to "wait" for the MMU
operation to complete and the invalidate_range_end notifier to be
invoked. Like the vCPU when it takes a page fault in that period, we
just spin — fixing that in a future patch by implementing an actual
*wait* may be another part of shaving this particularly hirsute yak.

As noted in the comments in the function itself, the only case where
the invalidate_range_start notifier is expected to be called *without*
being able to sleep is when the OOM reaper is killing the process. In
that case, we expect the vCPU threads already to have exited, and thus
there will be nothing to wake, and no reason to wait. So we clear the
KVM_REQUEST_WAIT bit and send the request anyway, then complain loudly
if there actually *was* anything to wake up.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kvm/Kconfig              |   1 +
 include/linux/kvm_host.h          | 103 ++++++++++
 include/linux/kvm_types.h         |  18 ++
 virt/kvm/Kconfig                  |   3 +
 virt/kvm/Makefile.kvm             |   1 +
 virt/kvm/dirty_ring.c             |   2 +-
 virt/kvm/kvm_main.c               |  12 +-
 virt/kvm/{mmu_lock.h => kvm_mm.h} |  23 ++-
 virt/kvm/pfncache.c               | 318 ++++++++++++++++++++++++++++++
 9 files changed, 474 insertions(+), 7 deletions(-)
 rename virt/kvm/{mmu_lock.h => kvm_mm.h} (55%)
 create mode 100644 virt/kvm/pfncache.c

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index d7fa0a42ac25..af351107d47f 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -26,6 +26,7 @@ config KVM
 	select PREEMPT_NOTIFIERS
 	select MMU_NOTIFIER
 	select HAVE_KVM_IRQCHIP
+	select HAVE_KVM_PFNCACHE
 	select HAVE_KVM_IRQFD
 	select HAVE_KVM_DIRTY_RING
 	select IRQ_BYPASS_MANAGER
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c310648cc8f1..457c38d75913 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -151,6 +151,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_UNBLOCK           2
 #define KVM_REQ_UNHALT            3
 #define KVM_REQ_VM_DEAD           (4 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_GPC_INVALIDATE    (5 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQUEST_ARCH_BASE     8
 
 #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \
@@ -559,6 +560,10 @@ struct kvm {
 	unsigned long mn_active_invalidate_count;
 	struct rcuwait mn_memslots_update_rcuwait;
 
+	/* For management / invalidation of gfn_to_pfn_caches */
+	spinlock_t gpc_lock;
+	struct list_head gpc_list;
+
 	/*
 	 * created_vcpus is protected by kvm->lock, and is incremented
 	 * at the beginning of KVM_CREATE_VCPU.  online_vcpus is only
@@ -966,6 +971,104 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 			 unsigned long len);
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 
+/**
+ * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a
+ *                             given guest physical address.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @vcpu:	   vCPU to be used for marking pages dirty and to be woken on
+ *		   invalidation.
+ * @guest_uses_pa: indicates that the resulting host physical PFN is used while
+ *		   @vcpu is IN_GUEST_MODE so invalidations should wake it.
+ * @kernel_map:    requests a kernel virtual mapping (kmap / memremap).
+ * @gpa:	   guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit a single page.
+ * @dirty:         mark the cache dirty immediately.
+ *
+ * @return:	   0 for success.
+ *		   -EINVAL for a mapping which would cross a page boundary.
+ *                 -EFAULT for an untranslatable guest physical address.
+ *
+ * This primes a gfn_to_pfn_cache and links it into the @kvm's list for
+ * invalidations to be processed. Invalidation callbacks to @vcpu using
+ * %KVM_REQ_GPC_INVALIDATE will occur only for MMU notifiers, not for KVM
+ * memslot changes. Callers are required to use kvm_gfn_to_pfn_cache_check()
+ * to ensure that the cache is valid before accessing the target page.
+ */
+int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
+			      bool kernel_map, gpa_t gpa, unsigned long len,
+			      bool dirty);
+
+/**
+ * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @gpa:	   current guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit a single page.
+ *
+ * @return:	   %true if the cache is still valid and the address matches.
+ *		   %false if the cache is not valid.
+ *
+ * Callers outside IN_GUEST_MODE context should hold a read lock on @gpc->lock
+ * while calling this function, and then continue to hold the lock until the
+ * access is complete.
+ *
+ * Callers in IN_GUEST_MODE may call this function without locking, although
+ * they should still hold a read lock on kvm->srcu for the memslot checks.
+ */
+bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				gpa_t gpa, unsigned long len);
+
+/**
+ * kvm_gfn_to_pfn_cache_refresh - update a previously initialized cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ * @gpa:	   updated guest physical address to map.
+ * @len:	   sanity check; the range being accessed must fit a single page.
+ * @dirty:         mark the cache dirty immediately.
+ *
+ * @return:	   0 for success.
+ *		   -EINVAL for a mapping which would cross a page boundary.
+ *                 -EFAULT for an untranslatable guest physical address.
+ *
+ * This will attempt to refresh a gfn_to_pfn_cache. Note that a successful
+ * return from this function does not mean the page can be immediately
+ * accessed because it may have raced with an invalidation. Callers must
+ * still lock and check the cache status, as this function does not return
+ * with the lock still held to permit access.
+ */
+int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				 gpa_t gpa, unsigned long len, bool dirty);
+
+/**
+ * kvm_gfn_to_pfn_cache_unmap - temporarily unmap a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This unmaps the referenced page and marks it dirty, if appropriate. The
+ * cache is left in the invalid state but at least the mapping from GPA to
+ * userspace HVA will remain cached and can be reused on a subsequent
+ * refresh.
+ */
+void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+
+/**
+ * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache.
+ *
+ * @kvm:	   pointer to kvm instance.
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This removes a cache from the @kvm's list to be processed on MMU notifier
+ * invocation.
+ */
+void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
 
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 234eab059839..22a52ef12b4d 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -19,6 +19,7 @@ struct kvm_memslots;
 enum kvm_mr_change;
 
 #include <linux/types.h>
+#include <linux/spinlock_types.h>
 
 #include <asm/kvm_types.h>
 
@@ -53,6 +54,23 @@ struct gfn_to_hva_cache {
 	struct kvm_memory_slot *memslot;
 };
 
+struct gfn_to_pfn_cache {
+	u64 generation;
+	gpa_t gpa;
+	unsigned long uhva;
+	struct kvm_memory_slot *memslot;
+	struct kvm_vcpu *vcpu;
+	struct list_head list;
+	rwlock_t lock;
+	void *khva;
+	kvm_pfn_t pfn;
+	bool active;
+	bool valid;
+	bool dirty;
+	bool kernel_map;
+	bool guest_uses_pa;
+};
+
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 /*
  * Memory caches are used to preallocate memory ahead of various MMU flows,
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 97cf5413ac25..f4834c20e4a6 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -4,6 +4,9 @@
 config HAVE_KVM
        bool
 
+config HAVE_KVM_PFNCACHE
+       bool
+
 config HAVE_KVM_IRQCHIP
        bool
 
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index ffdcad3cc97a..2c27d5d0c367 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -11,3 +11,4 @@ kvm-$(CONFIG_KVM_MMIO) += $(KVM)/coalesced_mmio.o
 kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
+kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 8e9874760fb3..222ecc81d7df 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -9,7 +9,7 @@
 #include <linux/vmalloc.h>
 #include <linux/kvm_dirty_ring.h>
 #include <trace/events/kvm.h>
-#include "mmu_lock.h"
+#include "kvm_mm.h"
 
 int __weak kvm_cpu_dirty_log_size(void)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8eb8c962838d..e639481456b8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -59,7 +59,7 @@
 
 #include "coalesced_mmio.h"
 #include "async_pf.h"
-#include "mmu_lock.h"
+#include "kvm_mm.h"
 #include "vfio.h"
 
 #define CREATE_TRACE_POINTS
@@ -684,6 +684,9 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mn_active_invalidate_count++;
 	spin_unlock(&kvm->mn_invalidate_lock);
 
+	gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
+					  hva_range.may_block);
+
 	__kvm_handle_hva_range(kvm, &hva_range);
 
 	return 0;
@@ -1051,6 +1054,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	spin_lock_init(&kvm->mn_invalidate_lock);
 	rcuwait_init(&kvm->mn_memslots_update_rcuwait);
 
+	INIT_LIST_HEAD(&kvm->gpc_list);
+	spin_lock_init(&kvm->gpc_lock);
+
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
@@ -2406,8 +2412,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-			bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
+		     bool write_fault, bool *writable)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn = 0;
diff --git a/virt/kvm/mmu_lock.h b/virt/kvm/kvm_mm.h
similarity index 55%
rename from virt/kvm/mmu_lock.h
rename to virt/kvm/kvm_mm.h
index 9e1308f9734c..b976e4b07e88 100644
--- a/virt/kvm/mmu_lock.h
+++ b/virt/kvm/kvm_mm.h
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
-#ifndef KVM_MMU_LOCK_H
-#define KVM_MMU_LOCK_H 1
+#ifndef __KVM_MM_H__
+#define __KVM_MM_H__ 1
 
 /*
  * Architectures can choose whether to use an rwlock or spinlock
@@ -20,4 +20,21 @@
 #define KVM_MMU_UNLOCK(kvm)    spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-#endif
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
+		     bool write_fault, bool *writable);
+
+#ifdef CONFIG_HAVE_KVM_PFNCACHE
+void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
+				       unsigned long start,
+				       unsigned long end,
+				       bool may_block);
+#else
+static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
+						     unsigned long start,
+						     unsigned long end,
+						     bool may_block)
+{
+}
+#endif /* HAVE_KVM_PFNCACHE */
+
+#endif /* __KVM_MM_H__ */
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
new file mode 100644
index 000000000000..5fcbce2a5385
--- /dev/null
+++ b/virt/kvm/pfncache.c
@@ -0,0 +1,318 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Kernel-based Virtual Machine driver for Linux
+ *
+ * This module enables kernel and guest-mode vCPU access to guest physical
+ * memory with suitable invalidation mechanisms.
+ *
+ * Copyright © 2021 Amazon.com, Inc. or its affiliates.
+ *
+ * Authors:
+ *   David Woodhouse <dwmw2@infradead.org>
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/kvm.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+
+#include "kvm_mm.h"
+
+/*
+ * MMU notifier 'invalidate_range_start' hook.
+ */
+void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
+				       unsigned long end, bool may_block)
+{
+	DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);
+	struct gfn_to_pfn_cache *gpc;
+	bool wake_vcpus = false;
+
+	spin_lock(&kvm->gpc_lock);
+	list_for_each_entry(gpc, &kvm->gpc_list, list) {
+		write_lock_irq(&gpc->lock);
+
+		/* Only a single page so no need to care about length */
+		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
+		    gpc->uhva >= start && gpc->uhva < end) {
+			gpc->valid = false;
+
+			/*
+			 * If a guest vCPU could be using the physical address,
+			 * it needs to be woken.
+			 */
+			if (gpc->guest_uses_pa) {
+				if (!wake_vcpus) {
+					wake_vcpus = true;
+					bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
+				}
+				__set_bit(gpc->vcpu->vcpu_idx, vcpu_bitmap);
+			}
+
+			/*
+			 * We cannot call mark_page_dirty() from here because
+			 * this physical CPU might not have an active vCPU
+			 * with which to do the KVM dirty tracking.
+			 *
+			 * Neither is there any point in telling the kernel MM
+			 * that the underlying page is dirty. A vCPU in guest
+			 * mode might still be writing to it up to the point
+			 * where we wake them a few lines further down anyway.
+			 *
+			 * So all the dirty marking happens on the unmap.
+			 */
+		}
+		write_unlock_irq(&gpc->lock);
+	}
+	spin_unlock(&kvm->gpc_lock);
+
+	if (wake_vcpus) {
+		unsigned int req = KVM_REQ_GPC_INVALIDATE;
+		bool called;
+
+		/*
+		 * If the OOM reaper is active, then all vCPUs should have
+		 * been stopped already, so perform the request without
+		 * KVM_REQUEST_WAIT and be sad if any needed to be woken.
+		 */
+		if (!may_block)
+			req &= ~KVM_REQUEST_WAIT;
+
+		called = kvm_make_vcpus_request_mask(kvm, req, vcpu_bitmap);
+
+		WARN_ON_ONCE(called && !may_block);
+	}
+}
+
+bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				gpa_t gpa, unsigned long len)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+
+	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
+		return false;
+
+	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	    kvm_is_error_hva(gpc->uhva))
+		return false;
+
+	if (!gpc->valid)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
+
+static void __release_gpc(struct kvm *kvm, kvm_pfn_t pfn, void *khva,
+			  gpa_t gpa, bool dirty)
+{
+	/* Unmap the old page if it was mapped before */
+	if (!is_error_noslot_pfn(pfn)) {
+		if (pfn_valid(pfn)) {
+			kunmap(pfn_to_page(pfn));
+#ifdef CONFIG_HAS_IOMEM
+		} else {
+			memunmap(khva);
+#endif
+		}
+
+		kvm_release_pfn(pfn, dirty);
+		if (dirty)
+			mark_page_dirty(kvm, gpa);
+	}
+}
+
+int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+				 gpa_t gpa, unsigned long len, bool dirty)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	unsigned long page_offset = gpa & ~PAGE_MASK;
+	kvm_pfn_t old_pfn, new_pfn;
+	unsigned long old_uhva;
+	gpa_t old_gpa;
+	void *old_khva;
+	bool old_valid, old_dirty;
+	int ret = 0;
+
+	/*
+	 * It must fit within a single page. The 'len' argument is
+	 * only to enforce that.
+	 */
+	if (page_offset + len > PAGE_SIZE)
+		return -EINVAL;
+
+	write_lock_irq(&gpc->lock);
+
+	old_gpa = gpc->gpa;
+	old_pfn = gpc->pfn;
+	old_khva = gpc->khva;
+	old_uhva = gpc->uhva;
+	old_valid = gpc->valid;
+	old_dirty = gpc->dirty;
+
+	/* If the userspace HVA is invalid, refresh that first */
+	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
+	    kvm_is_error_hva(gpc->uhva)) {
+		gfn_t gfn = gpa_to_gfn(gpa);
+
+		gpc->dirty = false;
+		gpc->gpa = gpa;
+		gpc->generation = slots->generation;
+		gpc->memslot = __gfn_to_memslot(slots, gfn);
+		gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn);
+
+		if (kvm_is_error_hva(gpc->uhva)) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		gpc->uhva += page_offset;
+	}
+
+	/*
+	 * If the userspace HVA changed or the PFN was already invalid,
+	 * drop the lock and do the HVA to PFN lookup again.
+	 */
+	if (!old_valid || old_uhva != gpc->uhva) {
+		unsigned long uhva = gpc->uhva;
+		void *new_khva = NULL;
+		unsigned long mmu_seq;
+		int retry;
+
+		/* Placeholders for "hva is valid but not yet mapped" */
+		gpc->pfn = KVM_PFN_ERR_FAULT;
+		gpc->khva = NULL;
+		gpc->valid = true;
+
+		write_unlock_irq(&gpc->lock);
+
+	retry_map:
+		mmu_seq = kvm->mmu_notifier_seq;
+		smp_rmb();
+
+		/* We always request a writeable mapping */
+		new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
+		if (is_error_noslot_pfn(new_pfn)) {
+			ret = -EFAULT;
+			goto map_done;
+		}
+
+		read_lock(&kvm->mmu_lock);
+		retry = mmu_notifier_retry_hva(kvm, mmu_seq, uhva);
+		read_unlock(&kvm->mmu_lock);
+		if (retry) {
+			cond_resched();
+			goto retry_map;
+		}
+
+		if (gpc->kernel_map) {
+			if (new_pfn == old_pfn) {
+				new_khva = (void *)((unsigned long)old_khva - page_offset);
+				old_pfn = KVM_PFN_ERR_FAULT;
+				old_khva = NULL;
+			} else if (pfn_valid(new_pfn)) {
+				new_khva = kmap(pfn_to_page(new_pfn));
+#ifdef CONFIG_HAS_IOMEM
+			} else {
+				new_khva = memremap(pfn_to_hpa(new_pfn), PAGE_SIZE, MEMREMAP_WB);
+#endif
+			}
+			if (!new_khva)
+				ret = -EFAULT;
+		}
+
+	map_done:
+		write_lock_irq(&gpc->lock);
+		if (ret) {
+			gpc->valid = false;
+			gpc->pfn = KVM_PFN_ERR_FAULT;
+			gpc->khva = NULL;
+		} else {
+			/* At this point, gpc->valid may already have been cleared */
+			gpc->pfn = new_pfn;
+			gpc->khva = new_khva + page_offset;
+		}
+	}
+
+ out:
+	if (ret)
+		gpc->dirty = false;
+	else
+		gpc->dirty = dirty;
+
+	write_unlock_irq(&gpc->lock);
+
+	__release_gpc(kvm, old_pfn, old_khva, old_gpa, old_dirty);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_refresh);
+
+void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+{
+	void *old_khva;
+	kvm_pfn_t old_pfn;
+	bool old_dirty;
+	gpa_t old_gpa;
+
+	write_lock_irq(&gpc->lock);
+
+	gpc->valid = false;
+
+	old_khva = gpc->khva;
+	old_dirty = gpc->dirty;
+	old_gpa = gpc->gpa;
+	old_pfn = gpc->pfn;
+
+	/*
+	 * We can leave the GPA → uHVA map cache intact but the PFN
+	 * lookup will need to be redone even for the same page.
+	 */
+	gpc->khva = NULL;
+	gpc->pfn = KVM_PFN_ERR_FAULT;
+
+	write_unlock_irq(&gpc->lock);
+
+	__release_gpc(kvm, old_pfn, old_khva, old_gpa, old_dirty);
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+
+
+int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
+			      bool kernel_map, gpa_t gpa, unsigned long len,
+			      bool dirty)
+{
+	if (!gpc->active) {
+		rwlock_init(&gpc->lock);
+
+		gpc->khva = NULL;
+		gpc->pfn = KVM_PFN_ERR_FAULT;
+		gpc->uhva = KVM_HVA_ERR_BAD;
+		gpc->vcpu = vcpu;
+		gpc->kernel_map = kernel_map;
+		gpc->guest_uses_pa = guest_uses_pa;
+		gpc->valid = false;
+		gpc->active = true;
+
+		spin_lock(&kvm->gpc_lock);
+		list_add(&gpc->list, &kvm->gpc_list);
+		spin_unlock(&kvm->gpc_lock);
+	}
+	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len, dirty);
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init);
+
+void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+{
+	if (gpc->active) {
+		spin_lock(&kvm->gpc_lock);
+		list_del(&gpc->list);
+		spin_unlock(&kvm->gpc_lock);
+
+		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
+		gpc->active = false;
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 09/11] KVM: x86/xen: Maintain valid mapping of Xen shared_info page
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Use the newly reinstated gfn_to_pfn_cache to maintain a kernel mapping
of the Xen shared_info page so that it can be accessed in atomic context.

Note that we do not participate in dirty tracking for the shared info
page and we do not explicitly mark it dirty every single time we deliver
an event channel interrupt. We wouldn't want to do that even if we *did*
have a valid vCPU context with which to do so.
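
For illustration only (actual event channel delivery arrives later in
this series; 'port' and the locking context here are assumptions), a
write to the mapped page in atomic context would look roughly like:

	read_lock_irqsave(&gpc->lock, flags);
	if (kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) {
		struct shared_info *shinfo = gpc->khva;

		/* Set the port's pending bit directly in the guest page */
		set_bit(port, (unsigned long *)shinfo->evtchn_pending);
	}
	read_unlock_irqrestore(&gpc->lock, flags);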

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 Documentation/virt/kvm/api.rst  | 12 ++++++++++++
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/xen.c              | 25 ++++++++++++++-----------
 3 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index aeeb071c7688..455664c39d42 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -371,6 +371,9 @@ The bits in the dirty bitmap are cleared before the ioctl returns, unless
 KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is enabled.  For more information,
 see the description of the capability.
 
+Note that the Xen shared info page, if configured, shall always be assumed
+to be dirty. KVM will not explicitly mark it as such.
+
 4.9 KVM_SET_MEMORY_ALIAS
 ------------------------
 
@@ -5134,6 +5137,15 @@ KVM_XEN_ATTR_TYPE_SHARED_INFO
   not aware of the Xen CPU id which is used as the index into the
   vcpu_info[] array, so cannot know the correct default location.
 
+  Note that the shared info page may be constantly written to by KVM;
+  it contains the event channel bitmap used to deliver interrupts to
+  a Xen guest, amongst other things. It is exempt from dirty tracking
+  mechanisms — KVM will not explicitly mark the page as dirty each
+  time an event channel interrupt is delivered to the guest! Thus,
+  userspace should always assume that the designated GFN is dirty if
+  any vCPU has been running or any event channel interrupts can be
+  routed to the guest.
+
 KVM_XEN_ATTR_TYPE_UPCALL_VECTOR
   Sets the exception vector used to deliver Xen event channel upcalls.
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6ac61f85e07b..4b2b4ecf3b46 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1014,7 +1014,7 @@ struct msr_bitmap_range {
 struct kvm_xen {
 	bool long_mode;
 	u8 upcall_vector;
-	gfn_t shinfo_gfn;
+	struct gfn_to_pfn_cache shinfo_cache;
 };
 
 enum kvm_irqchip_mode {
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index dff2bdf9507a..da4bf2c6407f 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -23,16 +23,21 @@ DEFINE_STATIC_KEY_DEFERRED_FALSE(kvm_xen_enabled, HZ);
 
 static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 {
+	struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
 	gpa_t gpa = gfn_to_gpa(gfn);
 	int wc_ofs, sec_hi_ofs;
 	int ret = 0;
 	int idx = srcu_read_lock(&kvm->srcu);
 
-	if (kvm_is_error_hva(gfn_to_hva(kvm, gfn))) {
-		ret = -EFAULT;
+	if (gfn == GPA_INVALID) {
+		kvm_gfn_to_pfn_cache_destroy(kvm, gpc);
 		goto out;
 	}
-	kvm->arch.xen.shinfo_gfn = gfn;
+
+	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, false, true, gpa,
+					PAGE_SIZE, false);
+	if (ret)
+		goto out;
 
 	/* Paranoia checks on the 32-bit struct layout */
 	BUILD_BUG_ON(offsetof(struct compat_shared_info, wc) != 0x900);
@@ -260,15 +265,9 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		break;
 
 	case KVM_XEN_ATTR_TYPE_SHARED_INFO:
-		if (data->u.shared_info.gfn == GPA_INVALID) {
-			kvm->arch.xen.shinfo_gfn = GPA_INVALID;
-			r = 0;
-			break;
-		}
 		r = kvm_xen_shared_info_init(kvm, data->u.shared_info.gfn);
 		break;
 
-
 	case KVM_XEN_ATTR_TYPE_UPCALL_VECTOR:
 		if (data->u.vector && data->u.vector < 0x10)
 			r = -EINVAL;
@@ -299,7 +298,10 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		break;
 
 	case KVM_XEN_ATTR_TYPE_SHARED_INFO:
-		data->u.shared_info.gfn = kvm->arch.xen.shinfo_gfn;
+		if (kvm->arch.xen.shinfo_cache.active)
+			data->u.shared_info.gfn = gpa_to_gfn(kvm->arch.xen.shinfo_cache.gpa);
+		else
+			data->u.shared_info.gfn = GPA_INVALID;
 		r = 0;
 		break;
 
@@ -661,11 +663,12 @@ int kvm_xen_hvm_config(struct kvm *kvm, struct kvm_xen_hvm_config *xhc)
 
 void kvm_xen_init_vm(struct kvm *kvm)
 {
-	kvm->arch.xen.shinfo_gfn = GPA_INVALID;
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
 {
+	kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache);
+
 	if (kvm->arch.xen_hvm_config.msr)
 		static_branch_slow_dec_deferred(&kvm_xen_enabled);
 }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 09/11] KVM: x86/xen: Maintain valid mapping of Xen shared_info page
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

Use the newly reinstated gfn_to_pfn_cache to maintain a kernel mapping
of the Xen shared_info page so that it can be accessed in atomic context.

Note that we do not participate in dirty tracking for the shared info
page and we do not explicitly mark it dirty every single time we deliver
an event channel interrupt. We wouldn't want to do that even if we *did*
have a valid vCPU context with which to do so.
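
For reference, the parameters to the cache initialisation performed in
this patch (annotated here purely as a reading aid) are:

	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc,
					NULL,	/* no vCPU to wake on invalidation */
					false,	/* guest does not use the PA directly */
					true,	/* wants a kernel mapping (kmap/memremap) */
					gpa, PAGE_SIZE,
					false);	/* not marked dirty immediately */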

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 Documentation/virt/kvm/api.rst  | 12 ++++++++++++
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/xen.c              | 25 ++++++++++++++-----------
 3 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index aeeb071c7688..455664c39d42 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -371,6 +371,9 @@ The bits in the dirty bitmap are cleared before the ioctl returns, unless
 KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is enabled.  For more information,
 see the description of the capability.
 
+Note that the Xen shared info page, if configured, shall always be assumed
+to be dirty. KVM will not explicitly mark it as such.
+
 4.9 KVM_SET_MEMORY_ALIAS
 ------------------------
 
@@ -5134,6 +5137,15 @@ KVM_XEN_ATTR_TYPE_SHARED_INFO
   not aware of the Xen CPU id which is used as the index into the
   vcpu_info[] array, so cannot know the correct default location.
 
+  Note that the shared info page may be constantly written to by KVM;
+  it contains the event channel bitmap used to deliver interrupts to
+  a Xen guest, amongst other things. It is exempt from dirty tracking
+  mechanisms — KVM will not explicitly mark the page as dirty each
+  time an event channel interrupt is delivered to the guest! Thus,
+  userspace should always assume that the designated GFN is dirty if
+  any vCPU has been running or any event channel interrupts can be
+  routed to the guest.
+
 KVM_XEN_ATTR_TYPE_UPCALL_VECTOR
   Sets the exception vector used to deliver Xen event channel upcalls.
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6ac61f85e07b..4b2b4ecf3b46 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1014,7 +1014,7 @@ struct msr_bitmap_range {
 struct kvm_xen {
 	bool long_mode;
 	u8 upcall_vector;
-	gfn_t shinfo_gfn;
+	struct gfn_to_pfn_cache shinfo_cache;
 };
 
 enum kvm_irqchip_mode {
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index dff2bdf9507a..da4bf2c6407f 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -23,16 +23,21 @@ DEFINE_STATIC_KEY_DEFERRED_FALSE(kvm_xen_enabled, HZ);
 
 static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 {
+	struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
 	gpa_t gpa = gfn_to_gpa(gfn);
 	int wc_ofs, sec_hi_ofs;
 	int ret = 0;
 	int idx = srcu_read_lock(&kvm->srcu);
 
-	if (kvm_is_error_hva(gfn_to_hva(kvm, gfn))) {
-		ret = -EFAULT;
+	if (gfn == GPA_INVALID) {
+		kvm_gfn_to_pfn_cache_destroy(kvm, gpc);
 		goto out;
 	}
-	kvm->arch.xen.shinfo_gfn = gfn;
+
+	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, false, true, gpa,
+					PAGE_SIZE, false);
+	if (ret)
+		goto out;
 
 	/* Paranoia checks on the 32-bit struct layout */
 	BUILD_BUG_ON(offsetof(struct compat_shared_info, wc) != 0x900);
@@ -260,15 +265,9 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		break;
 
 	case KVM_XEN_ATTR_TYPE_SHARED_INFO:
-		if (data->u.shared_info.gfn == GPA_INVALID) {
-			kvm->arch.xen.shinfo_gfn = GPA_INVALID;
-			r = 0;
-			break;
-		}
 		r = kvm_xen_shared_info_init(kvm, data->u.shared_info.gfn);
 		break;
 
-
 	case KVM_XEN_ATTR_TYPE_UPCALL_VECTOR:
 		if (data->u.vector && data->u.vector < 0x10)
 			r = -EINVAL;
@@ -299,7 +298,10 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		break;
 
 	case KVM_XEN_ATTR_TYPE_SHARED_INFO:
-		data->u.shared_info.gfn = kvm->arch.xen.shinfo_gfn;
+		if (kvm->arch.xen.shinfo_cache.active)
+			data->u.shared_info.gfn = gpa_to_gfn(kvm->arch.xen.shinfo_cache.gpa);
+		else
+			data->u.shared_info.gfn = GPA_INVALID;
 		r = 0;
 		break;
 
@@ -661,11 +663,12 @@ int kvm_xen_hvm_config(struct kvm *kvm, struct kvm_xen_hvm_config *xhc)
 
 void kvm_xen_init_vm(struct kvm *kvm)
 {
-	kvm->arch.xen.shinfo_gfn = GPA_INVALID;
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
 {
+	kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache);
+
 	if (kvm->arch.xen_hvm_config.msr)
 		static_branch_slow_dec_deferred(&kvm_xen_enabled);
 }
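
The dirty-tracking note added to Documentation/virt/kvm/api.rst above has a
practical consequence for live migration: the shared info GFN never shows up
in the dirty bitmap, so userspace must resend that page on every pass. A
minimal sketch of how a VMM might honour this, assuming hypothetical helpers
(gfn_test_dirty() and migrate_page() are illustrative, not kernel API):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical VMM helpers, assumed to exist elsewhere in the VMM. */
extern bool gfn_test_dirty(const unsigned long *bitmap, uint64_t idx);
extern void migrate_page(uint64_t gfn);

static void send_dirty_pages(const unsigned long *bitmap, uint64_t base_gfn,
			     uint64_t npages, uint64_t shinfo_gfn)
{
	for (uint64_t i = 0; i < npages; i++) {
		uint64_t gfn = base_gfn + i;

		/*
		 * KVM never marks the shared info page dirty, so treat it
		 * as dirty on every pass regardless of what the bitmap
		 * returned by KVM_GET_DIRTY_LOG says.
		 */
		if (gfn == shinfo_gfn || gfn_test_dirty(bitmap, i))
			migrate_page(gfn);
	}
}
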
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 10/11] KVM: x86/xen: Add KVM_IRQ_ROUTING_XEN_EVTCHN and event channel delivery
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

This adds basic support for delivering 2 level event channels to a guest.

Initially, it only supports delivery via the IRQ routing table, triggered
by an eventfd. To do so, it adds a kvm_xen_set_evtchn_fast() function
which uses the pre-mapped shared_info page if it already exists and is
still valid, while the slow path through the irqfd_inject workqueue
remaps the shared_info page if necessary.

It sets the bits in the shared_info page but not the vcpu_info; that is
deferred to __kvm_xen_has_interrupt() which raises the vector to the
appropriate vCPU.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 Documentation/virt/kvm/api.rst                |  21 ++
 arch/x86/include/asm/kvm_host.h               |   1 +
 arch/x86/kvm/irq_comm.c                       |  12 +
 arch/x86/kvm/x86.c                            |   3 +-
 arch/x86/kvm/xen.c                            | 262 +++++++++++++++++-
 arch/x86/kvm/xen.h                            |   9 +
 include/linux/kvm_host.h                      |   7 +
 include/uapi/linux/kvm.h                      |  11 +
 .../selftests/kvm/x86_64/xen_shinfo_test.c    | 112 +++++++-
 9 files changed, 431 insertions(+), 7 deletions(-)
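
For reference, the userspace sequence this enables might look like the
following minimal sketch. It mirrors the selftest changes below; vm_fd is
an already-created VM file descriptor, the caller is assumed to have
checked for KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL in the KVM_CAP_XEN_HVM return
value, and error handling is abbreviated:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/eventfd.h>
#include <linux/kvm.h>

/* Returns an eventfd whose writes raise the given event channel port. */
static int route_evtchn(int vm_fd, uint32_t port, uint32_t vcpu_id,
			uint32_t gsi)
{
	struct {
		struct kvm_irq_routing info;
		struct kvm_irq_routing_entry entries[1];
	} routes = { .info = { .nr = 1 } };
	struct kvm_irqfd ifd = { .gsi = gsi };

	routes.entries[0].gsi = gsi;
	routes.entries[0].type = KVM_IRQ_ROUTING_XEN_EVTCHN;
	routes.entries[0].u.xen_evtchn.port = port;
	routes.entries[0].u.xen_evtchn.vcpu = vcpu_id;
	routes.entries[0].u.xen_evtchn.priority =
		KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;

	if (ioctl(vm_fd, KVM_SET_GSI_ROUTING, &routes) < 0)
		return -1;

	ifd.fd = eventfd(0, 0);
	if (ifd.fd < 0)
		return -1;

	if (ioctl(vm_fd, KVM_IRQFD, &ifd) < 0)
		return -1;

	/* eventfd_write(ifd.fd, 1) now injects the given event channel. */
	return ifd.fd;
}

Note that a masked port still gets its pending bit set but is not
delivered, which the selftest exercises around GUEST_SYNC(7).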

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 455664c39d42..ec4d693851a2 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1799,6 +1799,7 @@ No flags are specified so far, the corresponding field must be set to zero.
 		struct kvm_irq_routing_msi msi;
 		struct kvm_irq_routing_s390_adapter adapter;
 		struct kvm_irq_routing_hv_sint hv_sint;
+		struct kvm_irq_routing_xen_evtchn xen_evtchn;
 		__u32 pad[8];
 	} u;
   };
@@ -1808,6 +1809,7 @@ No flags are specified so far, the corresponding field must be set to zero.
   #define KVM_IRQ_ROUTING_MSI 2
   #define KVM_IRQ_ROUTING_S390_ADAPTER 3
   #define KVM_IRQ_ROUTING_HV_SINT 4
+  #define KVM_IRQ_ROUTING_XEN_EVTCHN 5
 
 flags:
 
@@ -1859,6 +1861,20 @@ address_hi must be zero.
 	__u32 sint;
   };
 
+  struct kvm_irq_routing_xen_evtchn {
+	__u32 port;
+	__u32 vcpu;
+	__u32 priority;
+  };
+
+
+When KVM_CAP_XEN_HVM includes the KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL bit
+in its indication of supported features, routing to Xen event channels
+is supported. Although the priority field is present, only the value
+KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL is supported, which means delivery by
+2 level event channels. FIFO event channel support may be added in
+the future.
+
 
 4.55 KVM_SET_TSC_KHZ
 --------------------
@@ -7413,6 +7429,7 @@ PVHVM guests. Valid flags are::
   #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
   #define KVM_XEN_HVM_CONFIG_SHARED_INFO	(1 << 2)
  #define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
+  #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
 
 The KVM_XEN_HVM_CONFIG_HYPERCALL_MSR flag indicates that the KVM_XEN_HVM_CONFIG
 ioctl is available, for the guest to set its hypercall page.
@@ -7432,6 +7449,10 @@ The KVM_XEN_HVM_CONFIG_RUNSTATE flag indicates that the runstate-related
 features KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR/_CURRENT/_DATA/_ADJUST are
 supported by the KVM_XEN_VCPU_SET_ATTR/KVM_XEN_VCPU_GET_ATTR ioctls.
 
+The KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL flag indicates that IRQ routing entries
+of the type KVM_IRQ_ROUTING_XEN_EVTCHN are supported, with the priority
+field set to indicate 2 level event channel delivery.
+
 8.31 KVM_CAP_PPC_MULTITCE
 -------------------------
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4b2b4ecf3b46..6ea2446ab851 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -604,6 +604,7 @@ struct kvm_vcpu_xen {
 	u64 last_steal;
 	u64 runstate_entry_time;
 	u64 runstate_times[4];
+	unsigned long evtchn_pending_sel;
 };
 
 struct kvm_vcpu_arch {
diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
index d5b72a08e566..afd2de84be60 100644
--- a/arch/x86/kvm/irq_comm.c
+++ b/arch/x86/kvm/irq_comm.c
@@ -24,6 +24,7 @@
 
 #include "hyperv.h"
 #include "x86.h"
+#include "xen.h"
 
 static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e,
 			   struct kvm *kvm, int irq_source_id, int level,
@@ -175,6 +176,13 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
 			return r;
 		break;
 
+#ifdef CONFIG_KVM_XEN
+	case KVM_IRQ_ROUTING_XEN_EVTCHN:
+		if (!level)
+			return -1;
+
+		return kvm_xen_set_evtchn_fast(e, kvm);
+#endif
 	default:
 		break;
 	}
@@ -310,6 +318,10 @@ int kvm_set_routing_entry(struct kvm *kvm,
 		e->hv_sint.vcpu = ue->u.hv_sint.vcpu;
 		e->hv_sint.sint = ue->u.hv_sint.sint;
 		break;
+#ifdef CONFIG_KVM_XEN
+	case KVM_IRQ_ROUTING_XEN_EVTCHN:
+		return kvm_xen_setup_evtchn(kvm, e, ue);
+#endif
 	default:
 		return -EINVAL;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5a403d92833f..fa56c590d8db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4147,7 +4147,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_XEN_HVM:
 		r = KVM_XEN_HVM_CONFIG_HYPERCALL_MSR |
 		    KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL |
-		    KVM_XEN_HVM_CONFIG_SHARED_INFO;
+		    KVM_XEN_HVM_CONFIG_SHARED_INFO |
+		    KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL;
 		if (sched_info_on())
 			r |= KVM_XEN_HVM_CONFIG_RUNSTATE;
 		break;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index da4bf2c6407f..4b380d2157a9 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -16,6 +16,7 @@
 #include <trace/events/kvm.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/vcpu.h>
+#include <xen/interface/event_channel.h>
 
 #include "trace.h"
 
@@ -195,6 +196,8 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 
 int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 {
+	unsigned long evtchn_pending_sel = READ_ONCE(v->arch.xen.evtchn_pending_sel);
+	bool atomic = in_atomic() || !task_is_running(current);
 	int err;
 	u8 rc = 0;
 
@@ -204,6 +207,9 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 	 */
 	struct gfn_to_hva_cache *ghc = &v->arch.xen.vcpu_info_cache;
 	struct kvm_memslots *slots = kvm_memslots(v->kvm);
+	bool ghc_valid = slots->generation == ghc->generation &&
+		!kvm_is_error_hva(ghc->hva) && ghc->memslot;
+
 	unsigned int offset = offsetof(struct vcpu_info, evtchn_upcall_pending);
 
 	/* No need for compat handling here */
@@ -219,8 +225,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 	 * cache in kvm_read_guest_offset_cached(), but just uses
 	 * __get_user() instead. And falls back to the slow path.
 	 */
-	if (likely(slots->generation == ghc->generation &&
-		   !kvm_is_error_hva(ghc->hva) && ghc->memslot)) {
+	if (!evtchn_pending_sel && ghc_valid) {
 		/* Fast path */
 		pagefault_disable();
 		err = __get_user(rc, (u8 __user *)ghc->hva + offset);
@@ -239,11 +244,82 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 	 * and we'll end up getting called again from a context where we *can*
 	 * fault in the page and wait for it.
 	 */
-	if (in_atomic() || !task_is_running(current))
+	if (atomic)
 		return 1;
 
-	kvm_read_guest_offset_cached(v->kvm, ghc, &rc, offset,
-				     sizeof(rc));
+	if (!ghc_valid) {
+		err = kvm_gfn_to_hva_cache_init(v->kvm, ghc, ghc->gpa, ghc->len);
+		if (err || !ghc->memslot) {
+			/*
+			 * If this failed, userspace has screwed up the
+			 * vcpu_info mapping. No interrupts for you.
+			 */
+			return 0;
+		}
+	}
+
+	/*
+	 * Now we have a valid (protected by srcu) userspace HVA in
+	 * ghc->hva which points to the struct vcpu_info. If there
+	 * are any bits in the in-kernel evtchn_pending_sel then
+	 * we need to write those to the guest vcpu_info and set
+	 * its evtchn_upcall_pending flag. If there aren't any bits
+	 * to add, we only want to *check* evtchn_upcall_pending.
+	 */
+	if (evtchn_pending_sel) {
+		bool long_mode = v->kvm->arch.xen.long_mode;
+
+		if (!user_access_begin((void *)ghc->hva, sizeof(struct vcpu_info)))
+			return 0;
+
+		if (IS_ENABLED(CONFIG_64BIT) && long_mode) {
+			struct vcpu_info __user *vi = (void *)ghc->hva;
+
+			/* Attempt to set the evtchn_pending_sel bits in the
+			 * guest, and if that succeeds then clear the same
+			 * bits in the in-kernel version. */
+			asm volatile("1:\t" LOCK_PREFIX "orq %0, %1\n"
+				     "\tnotq %0\n"
+				     "\t" LOCK_PREFIX "andq %0, %2\n"
+				     "2:\n"
+				     "\t.section .fixup,\"ax\"\n"
+				     "3:\tjmp\t2b\n"
+				     "\t.previous\n"
+				     _ASM_EXTABLE_UA(1b, 3b)
+				     : "=r" (evtchn_pending_sel),
+				       "+m" (vi->evtchn_pending_sel),
+				       "+m" (v->arch.xen.evtchn_pending_sel)
+				     : "0" (evtchn_pending_sel));
+		} else {
+			struct compat_vcpu_info __user *vi = (void *)ghc->hva;
+			u32 evtchn_pending_sel32 = evtchn_pending_sel;
+
+			/* Attempt to set the evtchn_pending_sel bits in the
+			 * guest, and if that succeeds then clear the same
+			 * bits in the in-kernel version. */
+			asm volatile("1:\t" LOCK_PREFIX "orl %0, %1\n"
+				     "\tnotl %0\n"
+				     "\t" LOCK_PREFIX "andl %0, %2\n"
+				     "2:\n"
+				     "\t.section .fixup,\"ax\"\n"
+				     "3:\tjmp\t2b\n"
+				     "\t.previous\n"
+				     _ASM_EXTABLE_UA(1b, 3b)
+				     : "=r" (evtchn_pending_sel32),
+				       "+m" (vi->evtchn_pending_sel),
+				       "+m" (v->arch.xen.evtchn_pending_sel)
+				     : "0" (evtchn_pending_sel32));
+		}
+		rc = 1;
+		unsafe_put_user(rc, (u8 __user *)ghc->hva + offset, err);
+
+	err:
+		user_access_end();
+
+		mark_page_dirty_in_slot(v->kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
+	} else {
+		__get_user(rc, (u8 __user *)ghc->hva + offset);
+	}
 
 	return rc;
 }
@@ -740,3 +816,179 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
 
 	return 0;
 }
+
+static inline int max_evtchn_port(struct kvm *kvm)
+{
+	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode)
+		return EVTCHN_2L_NR_CHANNELS;
+	else
+		return COMPAT_EVTCHN_2L_NR_CHANNELS;
+}
+
+/*
+ * This follows the kvm_set_irq() API, so it returns:
+ *  < 0   Interrupt was ignored (masked or not delivered for other reasons)
+ *  = 0   Interrupt was coalesced (previous irq is still pending)
+ *  > 0   Number of CPUs interrupt was delivered to
+ */
+int kvm_xen_set_evtchn_fast(struct kvm_kernel_irq_routing_entry *e,
+			    struct kvm *kvm)
+{
+	struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
+	struct kvm_vcpu *vcpu;
+	unsigned long *pending_bits, *mask_bits;
+	unsigned long flags;
+	int port_word_bit;
+	bool kick_vcpu = false;
+	int idx;
+	int rc;
+
+	vcpu = kvm_get_vcpu_by_id(kvm, e->xen_evtchn.vcpu);
+	if (!vcpu)
+		return -1;
+
+	if (!vcpu->arch.xen.vcpu_info_set)
+		return -1;
+
+	if (e->xen_evtchn.port >= max_evtchn_port(kvm))
+		return -1;
+
+	rc = -EWOULDBLOCK;
+	read_lock_irqsave(&gpc->lock, flags);
+
+	idx = srcu_read_lock(&kvm->srcu);
+	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+		goto out_rcu;
+
+	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
+		struct shared_info *shinfo = gpc->khva;
+		pending_bits = (unsigned long *)&shinfo->evtchn_pending;
+		mask_bits = (unsigned long *)&shinfo->evtchn_mask;
+		port_word_bit = e->xen_evtchn.port / 64;
+	} else {
+		struct compat_shared_info *shinfo = gpc->khva;
+		pending_bits = (unsigned long *)&shinfo->evtchn_pending;
+		mask_bits = (unsigned long *)&shinfo->evtchn_mask;
+		port_word_bit = e->xen_evtchn.port / 32;
+	}
+
+	/*
+	 * If this port wasn't already set, and if it isn't masked, then
+	 * we try to set the corresponding bit in the in-kernel shadow of
+	 * evtchn_pending_sel for the target vCPU. And if *that* wasn't
+	 * already set, then we kick the vCPU in question to write to the
+	 * *real* evtchn_pending_sel in its own guest vcpu_info struct.
+	 */
+	if (test_and_set_bit(e->xen_evtchn.port, pending_bits)) {
+		rc = 0; /* It was already raised */
+	} else if (test_bit(e->xen_evtchn.port, mask_bits)) {
+		rc = -1; /* Masked */
+	} else {
+		rc = 1; /* Delivered. But was the vCPU waking already? */
+		if (!test_and_set_bit(port_word_bit, &vcpu->arch.xen.evtchn_pending_sel))
+			kick_vcpu = true;
+	}
+
+ out_rcu:
+	srcu_read_unlock(&kvm->srcu, idx);
+	read_unlock_irqrestore(&gpc->lock, flags);
+
+	if (kick_vcpu) {
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		kvm_vcpu_kick(vcpu);
+	}
+
+	return rc;
+}
+
+/* This is the version called from kvm_set_irq() as the .set function */
+static int evtchn_set_fn(struct kvm_kernel_irq_routing_entry *e, struct kvm *kvm,
+			 int irq_source_id, int level, bool line_status)
+{
+	bool mm_borrowed = false;
+	int rc;
+
+	if (!level)
+		return -1;
+
+	rc = kvm_xen_set_evtchn_fast(e, kvm);
+	if (rc != -EWOULDBLOCK)
+		return rc;
+
+	if (current->mm != kvm->mm) {
+		/*
+		 * If not on a thread which already belongs to this KVM,
+		 * we'd better be in the irqfd workqueue.
+		 */
+		if (WARN_ON_ONCE(current->mm))
+			return -EINVAL;
+
+		kthread_use_mm(kvm->mm);
+		mm_borrowed = true;
+	}
+
+	/*
+	 * For the irqfd workqueue, using the main kvm->lock mutex is
+	 * fine since this function is invoked from kvm_set_irq() with
+	 * no other lock held, no srcu. In future if it will be called
+	 * directly from a vCPU thread (e.g. on hypercall for an IPI)
+	 * then it may need to switch to using a leaf-node mutex for
+	 * serializing the shared_info mapping.
+	 */
+	mutex_lock(&kvm->lock);
+
+	/*
+	 * It is theoretically possible for the page to be unmapped
+	 * and the MMU notifier to invalidate the shared_info before
+	 * we even get to use it. In that case, this looks like an
+	 * infinite loop. It was tempting to do it via the userspace
+	 * HVA instead... but that just *hides* the fact that it's
+	 * an infinite loop, because if a fault occurs and it waits
+	 * for the page to come back, it can *still* immediately
+	 * fault and have to wait again, repeatedly.
+	 *
+	 * Conversely, the page could also have been reinstated by
+	 * another thread before we even obtain the mutex above, so
+	 * check again *first* before remapping it.
+	 */
+	do {
+		struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
+		int idx;
+
+		rc = kvm_xen_set_evtchn_fast(e, kvm);
+		if (rc != -EWOULDBLOCK)
+			break;
+
+		idx = srcu_read_lock(&kvm->srcu);
+		rc = kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa,
+						  PAGE_SIZE, false);
+		srcu_read_unlock(&kvm->srcu, idx);
+	} while (!rc);
+
+	mutex_unlock(&kvm->lock);
+
+	if (mm_borrowed)
+		kthread_unuse_mm(kvm->mm);
+
+	return rc;
+}
+
+int kvm_xen_setup_evtchn(struct kvm *kvm,
+			 struct kvm_kernel_irq_routing_entry *e,
+			 const struct kvm_irq_routing_entry *ue)
+
+{
+	if (ue->u.xen_evtchn.port >= max_evtchn_port(kvm))
+		return -EINVAL;
+
+	/* We only support 2 level event channels for now */
+	if (ue->u.xen_evtchn.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
+		return -EINVAL;
+
+	e->xen_evtchn.port = ue->u.xen_evtchn.port;
+	e->xen_evtchn.vcpu = ue->u.xen_evtchn.vcpu;
+	e->xen_evtchn.priority = ue->u.xen_evtchn.priority;
+	e->set = evtchn_set_fn;
+
+	return 0;
+}
diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h
index cc0cf5f37450..adbcc9ed59db 100644
--- a/arch/x86/kvm/xen.h
+++ b/arch/x86/kvm/xen.h
@@ -24,6 +24,12 @@ int kvm_xen_hvm_config(struct kvm *kvm, struct kvm_xen_hvm_config *xhc);
 void kvm_xen_init_vm(struct kvm *kvm);
 void kvm_xen_destroy_vm(struct kvm *kvm);
 
+int kvm_xen_set_evtchn_fast(struct kvm_kernel_irq_routing_entry *e,
+			    struct kvm *kvm);
+int kvm_xen_setup_evtchn(struct kvm *kvm,
+			 struct kvm_kernel_irq_routing_entry *e,
+			 const struct kvm_irq_routing_entry *ue);
+
 static inline bool kvm_xen_msr_enabled(struct kvm *kvm)
 {
 	return static_branch_unlikely(&kvm_xen_enabled.key) &&
@@ -134,6 +140,9 @@ struct compat_shared_info {
 	struct compat_arch_shared_info arch;
 };
 
+#define COMPAT_EVTCHN_2L_NR_CHANNELS (8 *				\
+				      sizeof_field(struct compat_shared_info, \
+						   evtchn_pending))
 struct compat_vcpu_runstate_info {
     int state;
     uint64_t state_entry_time;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 457c38d75913..47fbc253d72b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -470,6 +470,12 @@ struct kvm_hv_sint {
 	u32 sint;
 };
 
+struct kvm_xen_evtchn {
+	u32 port;
+	u32 vcpu;
+	u32 priority;
+};
+
 struct kvm_kernel_irq_routing_entry {
 	u32 gsi;
 	u32 type;
@@ -490,6 +496,7 @@ struct kvm_kernel_irq_routing_entry {
 		} msi;
 		struct kvm_s390_adapter_int adapter;
 		struct kvm_hv_sint hv_sint;
+		struct kvm_xen_evtchn xen_evtchn;
 	};
 	struct hlist_node link;
 };
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 1daa45268de2..12421e76adcb 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1162,11 +1162,20 @@ struct kvm_irq_routing_hv_sint {
 	__u32 sint;
 };
 
+struct kvm_irq_routing_xen_evtchn {
+	__u32 port;
+	__u32 vcpu;
+	__u32 priority;
+};
+
+#define KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL ((__u32)(-1))
+
 /* gsi routing entry types */
 #define KVM_IRQ_ROUTING_IRQCHIP 1
 #define KVM_IRQ_ROUTING_MSI 2
 #define KVM_IRQ_ROUTING_S390_ADAPTER 3
 #define KVM_IRQ_ROUTING_HV_SINT 4
+#define KVM_IRQ_ROUTING_XEN_EVTCHN 5
 
 struct kvm_irq_routing_entry {
 	__u32 gsi;
@@ -1178,6 +1187,7 @@ struct kvm_irq_routing_entry {
 		struct kvm_irq_routing_msi msi;
 		struct kvm_irq_routing_s390_adapter adapter;
 		struct kvm_irq_routing_hv_sint hv_sint;
+		struct kvm_irq_routing_xen_evtchn xen_evtchn;
 		__u32 pad[8];
 	} u;
 };
@@ -1208,6 +1218,7 @@ struct kvm_x86_mce {
 #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
 #define KVM_XEN_HVM_CONFIG_SHARED_INFO		(1 << 2)
 #define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
 
 struct kvm_xen_hvm_config {
 	__u32 flags;
diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index a0699f00b3d6..a865e60a042c 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -14,6 +14,9 @@
 #include <stdint.h>
 #include <time.h>
 #include <sched.h>
+#include <signal.h>
+
+#include <sys/eventfd.h>
 
 #define VCPU_ID		5
 
@@ -22,10 +25,12 @@
 #define SHINFO_REGION_SLOT	10
 #define PAGE_SIZE		4096
 
+#define SHINFO_ADDR	(SHINFO_REGION_GPA)
 #define PVTIME_ADDR	(SHINFO_REGION_GPA + PAGE_SIZE)
 #define RUNSTATE_ADDR	(SHINFO_REGION_GPA + PAGE_SIZE + 0x20)
 #define VCPU_INFO_ADDR	(SHINFO_REGION_GPA + 0x40)
 
+#define SHINFO_VADDR	(SHINFO_REGION_GVA)
 #define RUNSTATE_VADDR	(SHINFO_REGION_GVA + PAGE_SIZE + 0x20)
 #define VCPU_INFO_VADDR	(SHINFO_REGION_GVA + 0x40)
 
@@ -73,15 +78,30 @@ struct vcpu_info {
         struct pvclock_vcpu_time_info time;
 }; /* 64 bytes (x86) */
 
+struct shared_info {
+	struct vcpu_info vcpu_info[32];
+	unsigned long evtchn_pending[64];
+	unsigned long evtchn_mask[64];
+	struct pvclock_wall_clock wc;
+	uint32_t wc_sec_hi;
+	/* arch_shared_info here */
+};
+
 #define RUNSTATE_running  0
 #define RUNSTATE_runnable 1
 #define RUNSTATE_blocked  2
 #define RUNSTATE_offline  3
 
+struct {
+	struct kvm_irq_routing info;
+	struct kvm_irq_routing_entry entries[2];
+} irq_routes;
+
 static void evtchn_handler(struct ex_regs *regs)
 {
 	struct vcpu_info *vi = (void *)VCPU_INFO_VADDR;
 	vi->evtchn_upcall_pending = 0;
+	vi->evtchn_pending_sel = 0;
 
 	GUEST_SYNC(0x20);
 }
@@ -127,7 +147,19 @@ static void guest_code(void)
 	GUEST_SYNC(6);
 	GUEST_ASSERT(rs->time[RUNSTATE_runnable] >= MIN_STEAL_TIME);
 
-	GUEST_DONE();
+	/* Attempt to deliver a *masked* interrupt */
+	GUEST_SYNC(7);
+
+	/* Wait until we see the bit set */
+	struct shared_info *si = (void *)SHINFO_VADDR;
+	while (!si->evtchn_pending[0])
+		__asm__ __volatile__ ("rep nop" : : : "memory");
+
+	/* Now deliver an *unmasked* interrupt */
+	GUEST_SYNC(8);
+
+	for (;;)
+		__asm__ __volatile__ ("rep nop" : : : "memory");
 }
 
 static int cmp_timespec(struct timespec *a, struct timespec *b)
@@ -144,6 +176,11 @@ static int cmp_timespec(struct timespec *a, struct timespec *b)
 		return 0;
 }
 
+static void handle_alrm(int sig)
+{
+	TEST_FAIL("IRQ delivery timed out");
+}
+
 int main(int argc, char *argv[])
 {
 	struct timespec min_ts, max_ts, vm_ts;
@@ -155,6 +192,7 @@ int main(int argc, char *argv[])
 	}
 
 	bool do_runstate_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_RUNSTATE);
+	bool do_eventfd_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL);
 
 	clock_gettime(CLOCK_REALTIME, &min_ts);
 
@@ -166,6 +204,11 @@ int main(int argc, char *argv[])
 				    SHINFO_REGION_GPA, SHINFO_REGION_SLOT, 2, 0);
 	virt_map(vm, SHINFO_REGION_GVA, SHINFO_REGION_GPA, 2);
 
+	struct shared_info *shinfo = addr_gpa2hva(vm, SHINFO_VADDR);
+
+	int zero_fd = open("/dev/zero", O_RDONLY);
+	TEST_ASSERT(zero_fd != -1, "Failed to open /dev/zero");
+
 	struct kvm_xen_hvm_config hvmc = {
 		.flags = KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL,
 		.msr = XEN_HYPERCALL_MSR,
@@ -184,6 +227,16 @@ int main(int argc, char *argv[])
 	};
 	vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &ha);
 
+	/*
+	 * Test what happens when the HVA of the shinfo page is remapped after
+	 * the kernel has a reference to it. But make sure we copy the clock
+	 * info over since that's only set at setup time, and we test it later.
+	 */
+	struct pvclock_wall_clock wc_copy = shinfo->wc;
+	void *m = mmap(shinfo, PAGE_SIZE, PROT_READ|PROT_WRITE, MAP_FIXED|MAP_PRIVATE, zero_fd, 0);
+	TEST_ASSERT(m == shinfo, "Failed to map /dev/zero over shared info");
+	shinfo->wc = wc_copy;
+
 	struct kvm_xen_vcpu_attr vi = {
 		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
 		.u.gpa = VCPU_INFO_ADDR,
@@ -214,6 +267,49 @@ int main(int argc, char *argv[])
 		vcpu_ioctl(vm, VCPU_ID, KVM_XEN_VCPU_SET_ATTR, &st);
 	}
 
+	int irq_fd[2] = { -1, -1 };
+
+	if (do_eventfd_tests) {
+		irq_fd[0] = eventfd(0, 0);
+		irq_fd[1] = eventfd(0, 0);
+
+		/* Unexpected, but not a KVM failure */
+		if (irq_fd[0] == -1 || irq_fd[1] == -1)
+			do_eventfd_tests = false;
+	}
+
+	if (do_eventfd_tests) {
+		irq_routes.info.nr = 2;
+
+		irq_routes.entries[0].gsi = 32;
+		irq_routes.entries[0].type = KVM_IRQ_ROUTING_XEN_EVTCHN;
+		irq_routes.entries[0].u.xen_evtchn.port = 15;
+		irq_routes.entries[0].u.xen_evtchn.vcpu = VCPU_ID;
+		irq_routes.entries[0].u.xen_evtchn.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
+
+		irq_routes.entries[1].gsi = 33;
+		irq_routes.entries[1].type = KVM_IRQ_ROUTING_XEN_EVTCHN;
+		irq_routes.entries[1].u.xen_evtchn.port = 66;
+		irq_routes.entries[1].u.xen_evtchn.vcpu = VCPU_ID;
+		irq_routes.entries[1].u.xen_evtchn.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
+
+		vm_ioctl(vm, KVM_SET_GSI_ROUTING, &irq_routes);
+
+		struct kvm_irqfd ifd = { };
+
+		ifd.fd = irq_fd[0];
+		ifd.gsi = 32;
+		vm_ioctl(vm, KVM_IRQFD, &ifd);
+
+		ifd.fd = irq_fd[1];
+		ifd.gsi = 33;
+		vm_ioctl(vm, KVM_IRQFD, &ifd);
+
+		struct sigaction sa = { };
+		sa.sa_handler = handle_alrm;
+		sigaction(SIGALRM, &sa, NULL);
+	}
+
 	struct vcpu_info *vinfo = addr_gpa2hva(vm, VCPU_INFO_VADDR);
 	vinfo->evtchn_upcall_pending = 0;
 
@@ -289,9 +385,23 @@ int main(int argc, char *argv[])
 					sched_yield();
 				} while (get_run_delay() < rundelay);
 				break;
+			case 7:
+				if (!do_eventfd_tests)
+					goto done;
+				shinfo->evtchn_mask[0] = 0x8000;
+				eventfd_write(irq_fd[0], 1UL);
+				alarm(1);
+				break;
+			case 8:
+				eventfd_write(irq_fd[1], 1UL);
+				evtchn_irq_expected = true;
+				break;
+
 			case 0x20:
 				TEST_ASSERT(evtchn_irq_expected, "Unexpected event channel IRQ");
 				evtchn_irq_expected = false;
+				if (shinfo->evtchn_pending[1])
+					goto done;
 				break;
 			}
 			break;
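
To summarise the delivery logic above, here is a simplified, non-atomic
sketch of the 2 level state machine for the 64-bit layout. The real
kvm_xen_set_evtchn_fast() does this with locked test_and_set_bit() on the
kernel mapping of shared_info, under the gfn_to_pfn_cache read lock:

#include <stdbool.h>

struct two_level {
	unsigned long pending[64];	/* shared_info->evtchn_pending */
	unsigned long mask[64];		/* shared_info->evtchn_mask */
	unsigned long pending_sel;	/* per-vCPU selector shadow */
};

/* kvm_set_irq() convention: 0 = coalesced, 1 = delivered, -1 = masked. */
static int deliver_port(struct two_level *s, unsigned int port,
			bool *kick_vcpu)
{
	unsigned int word = port / 64;
	unsigned long bit = 1UL << (port % 64);

	if (s->pending[word] & bit)
		return 0;		/* already pending: coalesced */

	s->pending[word] |= bit;
	if (s->mask[word] & bit)
		return -1;		/* masked: pending set, no upcall */

	/*
	 * Second level: flag this word in the vCPU's selector shadow.
	 * If it was not already flagged, the vCPU must be kicked so that
	 * __kvm_xen_has_interrupt() copies the shadow into the guest's
	 * vcpu_info and raises the upcall vector.
	 */
	if (!(s->pending_sel & (1UL << word))) {
		s->pending_sel |= 1UL << word;
		*kick_vcpu = true;
	}
	return 1;
}

When the shared_info mapping is invalid, the fast path instead returns
-EWOULDBLOCK, which is what sends delivery through the slow path in
evtchn_set_fn() above.
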
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [PATCH v4 10/11] KVM: x86/xen: Add KVM_IRQ_ROUTING_XEN_EVTCHN and event channel delivery
@ 2021-11-20 10:28   ` David Woodhouse
  0 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Anup Patel, wanpengli @ tencent . com, Catalin Marinas,
	Joao Martins, Will Deacon, kvmarm, linux-s390,
	joro @ 8bytes . org, Huacai Chen, Christian Borntraeger,
	Aleksandar Markovic, karahmed, Suzuki K Poulose, Boris Ostrovsky,
	Alexandru Elisei, linux-arm-kernel, jmattson @ google . com,
	seanjc @ google . com, mtosatti @ redhat . com, linux-mips,
	James Morse, kvm-riscv, Marc Zyngier, vkuznets @ redhat . com,
	linuxppc-dev

From: David Woodhouse <dwmw@amazon.co.uk>

This adds basic support for delivering 2 level event channels to a guest.

Initially, it only supports delivery via the IRQ routing table, triggered
by an eventfd. In order to do so, it has a kvm_xen_set_evtchn_fast()
function which will use the pre-mapped shared_info page if it already
exists and is still valid, while the slow path through the irqfd_inject
workqueue will remap the shared_info page if necessary.

It sets the bits in the shared_info page but not the vcpu_info; that is
deferred to __kvm_xen_has_interrupt() which raises the vector to the
appropriate vCPU.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 Documentation/virt/kvm/api.rst                |  21 ++
 arch/x86/include/asm/kvm_host.h               |   1 +
 arch/x86/kvm/irq_comm.c                       |  12 +
 arch/x86/kvm/x86.c                            |   3 +-
 arch/x86/kvm/xen.c                            | 262 +++++++++++++++++-
 arch/x86/kvm/xen.h                            |   9 +
 include/linux/kvm_host.h                      |   7 +
 include/uapi/linux/kvm.h                      |  11 +
 .../selftests/kvm/x86_64/xen_shinfo_test.c    | 112 +++++++-
 9 files changed, 431 insertions(+), 7 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 455664c39d42..ec4d693851a2 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1799,6 +1799,7 @@ No flags are specified so far, the corresponding field must be set to zero.
 		struct kvm_irq_routing_msi msi;
 		struct kvm_irq_routing_s390_adapter adapter;
 		struct kvm_irq_routing_hv_sint hv_sint;
+		struct kvm_irq_routing_xen_evtchn xen_evtchn;
 		__u32 pad[8];
 	} u;
   };
@@ -1808,6 +1809,7 @@ No flags are specified so far, the corresponding field must be set to zero.
   #define KVM_IRQ_ROUTING_MSI 2
   #define KVM_IRQ_ROUTING_S390_ADAPTER 3
   #define KVM_IRQ_ROUTING_HV_SINT 4
+  #define KVM_IRQ_ROUTING_XEN_EVTCHN 5
 
 flags:
 
@@ -1859,6 +1861,20 @@ address_hi must be zero.
 	__u32 sint;
   };
 
+  struct kvm_irq_routing_xen_evtchn {
+	__u32 port;
+	__u32 vcpu;
+	__u32 priority;
+  };
+
+
+When KVM_CAP_XEN_HVM includes the KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL bit
+in its indication of supported features, routing to Xen event channels
+is supported. Although the priority field is present, only the value
+KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL is supported, which means delivery by
+2 level event channels. FIFO event channel support may be added in
+the future.
+
 
 4.55 KVM_SET_TSC_KHZ
 --------------------
@@ -7413,6 +7429,7 @@ PVHVM guests. Valid flags are::
   #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
   #define KVM_XEN_HVM_CONFIG_SHARED_INFO	(1 << 2)
   #define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 2)
+  #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 3)
 
 The KVM_XEN_HVM_CONFIG_HYPERCALL_MSR flag indicates that the KVM_XEN_HVM_CONFIG
 ioctl is available, for the guest to set its hypercall page.
@@ -7432,6 +7449,10 @@ The KVM_XEN_HVM_CONFIG_RUNSTATE flag indicates that the runstate-related
 features KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR/_CURRENT/_DATA/_ADJUST are
 supported by the KVM_XEN_VCPU_SET_ATTR/KVM_XEN_VCPU_GET_ATTR ioctls.
 
+The KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL flag indicates that IRQ routing entries
+of the type KVM_IRQ_ROUTING_XEN_EVTCHN are supported, with the priority
+field set to indicate 2 level event channel delivery.
+
 8.31 KVM_CAP_PPC_MULTITCE
 -------------------------
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4b2b4ecf3b46..6ea2446ab851 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -604,6 +604,7 @@ struct kvm_vcpu_xen {
 	u64 last_steal;
 	u64 runstate_entry_time;
 	u64 runstate_times[4];
+	unsigned long evtchn_pending_sel;
 };
 
 struct kvm_vcpu_arch {
diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
index d5b72a08e566..afd2de84be60 100644
--- a/arch/x86/kvm/irq_comm.c
+++ b/arch/x86/kvm/irq_comm.c
@@ -24,6 +24,7 @@
 
 #include "hyperv.h"
 #include "x86.h"
+#include "xen.h"
 
 static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e,
 			   struct kvm *kvm, int irq_source_id, int level,
@@ -175,6 +176,13 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
 			return r;
 		break;
 
+#ifdef CONFIG_KVM_XEN
+	case KVM_IRQ_ROUTING_XEN_EVTCHN:
+		if (!level)
+			return -1;
+
+		return kvm_xen_set_evtchn_fast(e, kvm);
+#endif
 	default:
 		break;
 	}
@@ -310,6 +318,10 @@ int kvm_set_routing_entry(struct kvm *kvm,
 		e->hv_sint.vcpu = ue->u.hv_sint.vcpu;
 		e->hv_sint.sint = ue->u.hv_sint.sint;
 		break;
+#ifdef CONFIG_KVM_XEN
+	case KVM_IRQ_ROUTING_XEN_EVTCHN:
+		return kvm_xen_setup_evtchn(kvm, e, ue);
+#endif
 	default:
 		return -EINVAL;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5a403d92833f..fa56c590d8db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4147,7 +4147,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_XEN_HVM:
 		r = KVM_XEN_HVM_CONFIG_HYPERCALL_MSR |
 		    KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL |
-		    KVM_XEN_HVM_CONFIG_SHARED_INFO;
+		    KVM_XEN_HVM_CONFIG_SHARED_INFO |
+		    KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL;
 		if (sched_info_on())
 			r |= KVM_XEN_HVM_CONFIG_RUNSTATE;
 		break;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index da4bf2c6407f..4b380d2157a9 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -16,6 +16,7 @@
 #include <trace/events/kvm.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/vcpu.h>
+#include <xen/interface/event_channel.h>
 
 #include "trace.h"
 
@@ -195,6 +196,8 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 
 int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 {
+	unsigned long evtchn_pending_sel = READ_ONCE(v->arch.xen.evtchn_pending_sel);
+	bool atomic = in_atomic() || !task_is_running(current);
 	int err;
 	u8 rc = 0;
 
@@ -204,6 +207,9 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 	 */
 	struct gfn_to_hva_cache *ghc = &v->arch.xen.vcpu_info_cache;
 	struct kvm_memslots *slots = kvm_memslots(v->kvm);
+	bool ghc_valid = slots->generation == ghc->generation &&
+		!kvm_is_error_hva(ghc->hva) && ghc->memslot;
+
 	unsigned int offset = offsetof(struct vcpu_info, evtchn_upcall_pending);
 
 	/* No need for compat handling here */
@@ -219,8 +225,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 	 * cache in kvm_read_guest_offset_cached(), but just uses
 	 * __get_user() instead. And falls back to the slow path.
 	 */
-	if (likely(slots->generation == ghc->generation &&
-		   !kvm_is_error_hva(ghc->hva) && ghc->memslot)) {
+	if (!evtchn_pending_sel && ghc_valid) {
 		/* Fast path */
 		pagefault_disable();
 		err = __get_user(rc, (u8 __user *)ghc->hva + offset);
@@ -239,11 +244,82 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 	 * and we'll end up getting called again from a context where we *can*
 	 * fault in the page and wait for it.
 	 */
-	if (in_atomic() || !task_is_running(current))
+	if (atomic)
 		return 1;
 
-	kvm_read_guest_offset_cached(v->kvm, ghc, &rc, offset,
-				     sizeof(rc));
+	if (!ghc_valid) {
+		err = kvm_gfn_to_hva_cache_init(v->kvm, ghc, ghc->gpa, ghc->len);
+		if (err || !ghc->memslot) {
+			/*
+			 * If this failed, userspace has screwed up the
+			 * vcpu_info mapping. No interrupts for you.
+			 */
+			return 0;
+		}
+	}
+
+	/*
+	 * Now we have a valid (protected by srcu) userspace HVA in
+	 * ghc->hva which points to the struct vcpu_info. If there
+	 * are any bits in the in-kernel evtchn_pending_sel then
+	 * we need to write those to the guest vcpu_info and set
+	 * its evtchn_upcall_pending flag. If there aren't any bits
+	 * to add, we only want to *check* evtchn_upcall_pending.
+	 */
+	if (evtchn_pending_sel) {
+		bool long_mode = v->kvm->arch.xen.long_mode;
+
+		if (!user_access_begin((void *)ghc->hva, sizeof(struct vcpu_info)))
+			return 0;
+
+		if (IS_ENABLED(CONFIG_64BIT) && long_mode) {
+			struct vcpu_info __user *vi = (void *)ghc->hva;
+
+			/* Attempt to set the evtchn_pending_sel bits in the
+			 * guest, and if that succeeds then clear the same
+			 * bits in the in-kernel version. */
+			asm volatile("1:\t" LOCK_PREFIX "orq %0, %1\n"
+				     "\tnotq %0\n"
+				     "\t" LOCK_PREFIX "andq %0, %2\n"
+				     "2:\n"
+				     "\t.section .fixup,\"ax\"\n"
+				     "3:\tjmp\t2b\n"
+				     "\t.previous\n"
+				     _ASM_EXTABLE_UA(1b, 3b)
+				     : "=r" (evtchn_pending_sel),
+				       "+m" (vi->evtchn_pending_sel),
+				       "+m" (v->arch.xen.evtchn_pending_sel)
+				     : "0" (evtchn_pending_sel));
+		} else {
+			struct compat_vcpu_info __user *vi = (void *)ghc->hva;
+			u32 evtchn_pending_sel32 = evtchn_pending_sel;
+
+			/* Attempt to set the evtchn_pending_sel bits in the
+			 * guest, and if that succeeds then clear the same
+			 * bits in the in-kernel version. */
+			asm volatile("1:\t" LOCK_PREFIX "orl %0, %1\n"
+				     "\tnotl %0\n"
+				     "\t" LOCK_PREFIX "andl %0, %2\n"
+				     "2:\n"
+				     "\t.section .fixup,\"ax\"\n"
+				     "3:\tjmp\t2b\n"
+				     "\t.previous\n"
+				     _ASM_EXTABLE_UA(1b, 3b)
+				     : "=r" (evtchn_pending_sel32),
+				       "+m" (vi->evtchn_pending_sel),
+				       "+m" (v->arch.xen.evtchn_pending_sel)
+				     : "0" (evtchn_pending_sel32));
+		}
+		rc = 1;
+		unsafe_put_user(rc, (u8 __user *)ghc->hva + offset, err);
+
+	err:
+		user_access_end();
+
+		mark_page_dirty_in_slot(v->kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
+	} else {
+		__get_user(rc, (u8 __user *)ghc->hva + offset);
+	}
 
 	return rc;
 }
@@ -740,3 +816,179 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
 
 	return 0;
 }
+
+static inline int max_evtchn_port(struct kvm *kvm)
+{
+	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode)
+		return EVTCHN_2L_NR_CHANNELS;
+	else
+		return COMPAT_EVTCHN_2L_NR_CHANNELS;
+}
+
+/*
+ * This follows the kvm_set_irq() API, so it returns:
+ *  < 0   Interrupt was ignored (masked or not delivered for other reasons)
+ *  = 0   Interrupt was coalesced (previous irq is still pending)
+ *  > 0   Number of CPUs interrupt was delivered to
+ */
+int kvm_xen_set_evtchn_fast(struct kvm_kernel_irq_routing_entry *e,
+			    struct kvm *kvm)
+{
+	struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
+	struct kvm_vcpu *vcpu;
+	unsigned long *pending_bits, *mask_bits;
+	unsigned long flags;
+	int port_word_bit;
+	bool kick_vcpu = false;
+	int idx;
+	int rc;
+
+	vcpu = kvm_get_vcpu_by_id(kvm, e->xen_evtchn.vcpu);
+	if (!vcpu)
+		return -1;
+
+	if (!vcpu->arch.xen.vcpu_info_set)
+		return -1;
+
+	if (e->xen_evtchn.port >= max_evtchn_port(kvm))
+		return -1;
+
+	rc = -EWOULDBLOCK;
+	read_lock_irqsave(&gpc->lock, flags);
+
+	idx = srcu_read_lock(&kvm->srcu);
+	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+		goto out_rcu;
+
+	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
+		struct shared_info *shinfo = gpc->khva;
+		pending_bits = (unsigned long *)&shinfo->evtchn_pending;
+		mask_bits = (unsigned long *)&shinfo->evtchn_mask;
+		port_word_bit = e->xen_evtchn.port / 64;
+	} else {
+		struct compat_shared_info *shinfo = gpc->khva;
+		pending_bits = (unsigned long *)&shinfo->evtchn_pending;
+		mask_bits = (unsigned long *)&shinfo->evtchn_mask;
+		port_word_bit = e->xen_evtchn.port / 32;
+	}
+
+	/*
+	 * If this port wasn't already set, and if it isn't masked, then
+	 * we try to set the corresponding bit in the in-kernel shadow of
+	 * evtchn_pending_sel for the target vCPU. And if *that* wasn't
+	 * already set, then we kick the vCPU in question to write to the
+	 * *real* evtchn_pending_sel in its own guest vcpu_info struct.
+	 */
+	if (test_and_set_bit(e->xen_evtchn.port, pending_bits)) {
+		rc = 0; /* It was already raised */
+	} else if (test_bit(e->xen_evtchn.port, mask_bits)) {
+		rc = -1; /* Masked */
+	} else {
+		rc = 1; /* Delivered. But was the vCPU waking already? */
+		if (!test_and_set_bit(port_word_bit, &vcpu->arch.xen.evtchn_pending_sel))
+			kick_vcpu = true;
+	}
+
+ out_rcu:
+	srcu_read_unlock(&kvm->srcu, idx);
+	read_unlock_irqrestore(&gpc->lock, flags);
+
+	if (kick_vcpu) {
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		kvm_vcpu_kick(vcpu);
+	}
+
+	return rc;
+}
+
+/* This is the version called from kvm_set_irq() as the .set function */
+static int evtchn_set_fn(struct kvm_kernel_irq_routing_entry *e, struct kvm *kvm,
+			 int irq_source_id, int level, bool line_status)
+{
+	bool mm_borrowed = false;
+	int rc;
+
+	if (!level)
+		return -1;
+
+	rc = kvm_xen_set_evtchn_fast(e, kvm);
+	if (rc != -EWOULDBLOCK)
+		return rc;
+
+	if (current->mm != kvm->mm) {
+		/*
+		 * If not on a thread which already belongs to this KVM,
+		 * we'd better be in the irqfd workqueue.
+		 */
+		if (WARN_ON_ONCE(current->mm))
+			return -EINVAL;
+
+		kthread_use_mm(kvm->mm);
+		mm_borrowed = true;
+	}
+
+	/*
+	 * For the irqfd workqueue, using the main kvm->lock mutex is
+	 * fine since this function is invoked from kvm_set_irq() with
+	 * no other lock held, no srcu. In future if it will be called
+	 * directly from a vCPU thread (e.g. on hypercall for an IPI)
+	 * then it may need to switch to using a leaf-node mutex for
+	 * serializing the shared_info mapping.
+	 */
+	mutex_lock(&kvm->lock);
+
+	/*
+	 * It is theoretically possible for the page to be unmapped
+	 * and the MMU notifier to invalidate the shared_info before
+	 * we even get to use it. In that case, this looks like an
+	 * infinite loop. It was tempting to do it via the userspace
+	 * HVA instead... but that just *hides* the fact that it's
+	 * an infinite loop, because if a fault occurs and it waits
+	 * for the page to come back, it can *still* immediately
+	 * fault and have to wait again, repeatedly.
+	 *
+	 * Conversely, the page could also have been reinstated by
+	 * another thread before we even obtain the mutex above, so
+	 * check again *first* before remapping it.
+	 */
+	do {
+		struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
+		int idx;
+
+		rc = kvm_xen_set_evtchn_fast(e, kvm);
+		if (rc != -EWOULDBLOCK)
+			break;
+
+		idx = srcu_read_lock(&kvm->srcu);
+		rc = kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa,
+						  PAGE_SIZE, false);
+		srcu_read_unlock(&kvm->srcu, idx);
+	} while(!rc);
+
+	mutex_unlock(&kvm->lock);
+
+	if (mm_borrowed)
+		kthread_unuse_mm(kvm->mm);
+
+	return rc;
+}
+
+int kvm_xen_setup_evtchn(struct kvm *kvm,
+			 struct kvm_kernel_irq_routing_entry *e,
+			 const struct kvm_irq_routing_entry *ue)
+
+{
+	if (ue->u.xen_evtchn.port >= max_evtchn_port(kvm))
+		return -EINVAL;
+
+	/* We only support 2 level event channels for now */
+	if (ue->u.xen_evtchn.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
+		return -EINVAL;
+
+	e->xen_evtchn.port = ue->u.xen_evtchn.port;
+	e->xen_evtchn.vcpu = ue->u.xen_evtchn.vcpu;
+	e->xen_evtchn.priority = ue->u.xen_evtchn.priority;
+	e->set = evtchn_set_fn;
+
+	return 0;
+}
diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h
index cc0cf5f37450..adbcc9ed59db 100644
--- a/arch/x86/kvm/xen.h
+++ b/arch/x86/kvm/xen.h
@@ -24,6 +24,12 @@ int kvm_xen_hvm_config(struct kvm *kvm, struct kvm_xen_hvm_config *xhc);
 void kvm_xen_init_vm(struct kvm *kvm);
 void kvm_xen_destroy_vm(struct kvm *kvm);
 
+int kvm_xen_set_evtchn_fast(struct kvm_kernel_irq_routing_entry *e,
+			    struct kvm *kvm);
+int kvm_xen_setup_evtchn(struct kvm *kvm,
+			 struct kvm_kernel_irq_routing_entry *e,
+			 const struct kvm_irq_routing_entry *ue);
+
 static inline bool kvm_xen_msr_enabled(struct kvm *kvm)
 {
 	return static_branch_unlikely(&kvm_xen_enabled.key) &&
@@ -134,6 +140,9 @@ struct compat_shared_info {
 	struct compat_arch_shared_info arch;
 };
 
+#define COMPAT_EVTCHN_2L_NR_CHANNELS (8 *				\
+				      sizeof_field(struct compat_shared_info, \
+						   evtchn_pending))
 struct compat_vcpu_runstate_info {
     int state;
     uint64_t state_entry_time;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 457c38d75913..47fbc253d72b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -470,6 +470,12 @@ struct kvm_hv_sint {
 	u32 sint;
 };
 
+struct kvm_xen_evtchn {
+	u32 port;
+	u32 vcpu;
+	u32 priority;
+};
+
 struct kvm_kernel_irq_routing_entry {
 	u32 gsi;
 	u32 type;
@@ -490,6 +496,7 @@ struct kvm_kernel_irq_routing_entry {
 		} msi;
 		struct kvm_s390_adapter_int adapter;
 		struct kvm_hv_sint hv_sint;
+		struct kvm_xen_evtchn xen_evtchn;
 	};
 	struct hlist_node link;
 };
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 1daa45268de2..12421e76adcb 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1162,11 +1162,20 @@ struct kvm_irq_routing_hv_sint {
 	__u32 sint;
 };
 
+struct kvm_irq_routing_xen_evtchn {
+	__u32 port;
+	__u32 vcpu;
+	__u32 priority;
+};
+
+#define KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL ((__u32)(-1))
+
 /* gsi routing entry types */
 #define KVM_IRQ_ROUTING_IRQCHIP 1
 #define KVM_IRQ_ROUTING_MSI 2
 #define KVM_IRQ_ROUTING_S390_ADAPTER 3
 #define KVM_IRQ_ROUTING_HV_SINT 4
+#define KVM_IRQ_ROUTING_XEN_EVTCHN 5
 
 struct kvm_irq_routing_entry {
 	__u32 gsi;
@@ -1178,6 +1187,7 @@ struct kvm_irq_routing_entry {
 		struct kvm_irq_routing_msi msi;
 		struct kvm_irq_routing_s390_adapter adapter;
 		struct kvm_irq_routing_hv_sint hv_sint;
+		struct kvm_irq_routing_xen_evtchn xen_evtchn;
 		__u32 pad[8];
 	} u;
 };
@@ -1208,6 +1218,7 @@ struct kvm_x86_mce {
 #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
 #define KVM_XEN_HVM_CONFIG_SHARED_INFO		(1 << 2)
 #define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
 
 struct kvm_xen_hvm_config {
 	__u32 flags;
diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index a0699f00b3d6..a865e60a042c 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -14,6 +14,9 @@
 #include <stdint.h>
 #include <time.h>
 #include <sched.h>
+#include <signal.h>
+
+#include <sys/eventfd.h>
 
 #define VCPU_ID		5
 
@@ -22,10 +25,12 @@
 #define SHINFO_REGION_SLOT	10
 #define PAGE_SIZE		4096
 
+#define SHINFO_ADDR	(SHINFO_REGION_GPA)
 #define PVTIME_ADDR	(SHINFO_REGION_GPA + PAGE_SIZE)
 #define RUNSTATE_ADDR	(SHINFO_REGION_GPA + PAGE_SIZE + 0x20)
 #define VCPU_INFO_ADDR	(SHINFO_REGION_GPA + 0x40)
 
+#define SHINFO_VADDR	(SHINFO_REGION_GVA)
 #define RUNSTATE_VADDR	(SHINFO_REGION_GVA + PAGE_SIZE + 0x20)
 #define VCPU_INFO_VADDR	(SHINFO_REGION_GVA + 0x40)
 
@@ -73,15 +78,30 @@ struct vcpu_info {
         struct pvclock_vcpu_time_info time;
 }; /* 64 bytes (x86) */
 
+struct shared_info {
+	struct vcpu_info vcpu_info[32];
+	unsigned long evtchn_pending[64];
+	unsigned long evtchn_mask[64];
+	struct pvclock_wall_clock wc;
+	uint32_t wc_sec_hi;
+	/* arch_shared_info here */
+};
+
 #define RUNSTATE_running  0
 #define RUNSTATE_runnable 1
 #define RUNSTATE_blocked  2
 #define RUNSTATE_offline  3
 
+struct {
+	struct kvm_irq_routing info;
+	struct kvm_irq_routing_entry entries[2];
+} irq_routes;
+
 static void evtchn_handler(struct ex_regs *regs)
 {
 	struct vcpu_info *vi = (void *)VCPU_INFO_VADDR;
 	vi->evtchn_upcall_pending = 0;
+	vi->evtchn_pending_sel = 0;
 
 	GUEST_SYNC(0x20);
 }
@@ -127,7 +147,19 @@ static void guest_code(void)
 	GUEST_SYNC(6);
 	GUEST_ASSERT(rs->time[RUNSTATE_runnable] >= MIN_STEAL_TIME);
 
-	GUEST_DONE();
+	/* Attempt to deliver a *masked* interrupt */
+	GUEST_SYNC(7);
+
+	/* Wait until we see the bit set */
+	struct shared_info *si = (void *)SHINFO_VADDR;
+	while (!si->evtchn_pending[0])
+		__asm__ __volatile__ ("rep nop" : : : "memory");
+
+	/* Now deliver an *unmasked* interrupt */
+	GUEST_SYNC(8);
+
+	for (;;)
+		__asm__ __volatile__ ("rep nop" : : : "memory");
 }
 
 static int cmp_timespec(struct timespec *a, struct timespec *b)
@@ -144,6 +176,11 @@ static int cmp_timespec(struct timespec *a, struct timespec *b)
 		return 0;
 }
 
+static void handle_alrm(int sig)
+{
+	TEST_FAIL("IRQ delivery timed out");
+}
+
 int main(int argc, char *argv[])
 {
 	struct timespec min_ts, max_ts, vm_ts;
@@ -155,6 +192,7 @@ int main(int argc, char *argv[])
 	}
 
 	bool do_runstate_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_RUNSTATE);
+	bool do_eventfd_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL);
 
 	clock_gettime(CLOCK_REALTIME, &min_ts);
 
@@ -166,6 +204,11 @@ int main(int argc, char *argv[])
 				    SHINFO_REGION_GPA, SHINFO_REGION_SLOT, 2, 0);
 	virt_map(vm, SHINFO_REGION_GVA, SHINFO_REGION_GPA, 2);
 
+	struct shared_info *shinfo = addr_gpa2hva(vm, SHINFO_VADDR);
+
+	int zero_fd = open("/dev/zero", O_RDONLY);
+	TEST_ASSERT(zero_fd != -1, "Failed to open /dev/zero");
+
 	struct kvm_xen_hvm_config hvmc = {
 		.flags = KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL,
 		.msr = XEN_HYPERCALL_MSR,
@@ -184,6 +227,16 @@ int main(int argc, char *argv[])
 	};
 	vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &ha);
 
+	/*
+	 * Test what happens when the HVA of the shinfo page is remapped after
+	 * the kernel has a reference to it. But make sure we copy the clock
+	 * info over since that's only set at setup time, and we test it later.
+	 */
+	struct pvclock_wall_clock wc_copy = shinfo->wc;
+	void *m = mmap(shinfo, PAGE_SIZE, PROT_READ|PROT_WRITE, MAP_FIXED|MAP_PRIVATE, zero_fd, 0);
+	TEST_ASSERT(m == shinfo, "Failed to map /dev/zero over shared info");
+	shinfo->wc = wc_copy;
+
 	struct kvm_xen_vcpu_attr vi = {
 		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
 		.u.gpa = VCPU_INFO_ADDR,
@@ -214,6 +267,49 @@ int main(int argc, char *argv[])
 		vcpu_ioctl(vm, VCPU_ID, KVM_XEN_VCPU_SET_ATTR, &st);
 	}
 
+	int irq_fd[2] = { -1, -1 };
+
+	if (do_eventfd_tests) {
+		irq_fd[0] = eventfd(0, 0);
+		irq_fd[1] = eventfd(0, 0);
+
+		/* Unexpected, but not a KVM failure */
+		if (irq_fd[0] == -1 || irq_fd[1] == -1)
+			do_eventfd_tests = false;
+	}
+
+	if (do_eventfd_tests) {
+		irq_routes.info.nr = 2;
+
+		irq_routes.entries[0].gsi = 32;
+		irq_routes.entries[0].type = KVM_IRQ_ROUTING_XEN_EVTCHN;
+		irq_routes.entries[0].u.xen_evtchn.port = 15;
+		irq_routes.entries[0].u.xen_evtchn.vcpu = VCPU_ID;
+		irq_routes.entries[0].u.xen_evtchn.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
+
+		irq_routes.entries[1].gsi = 33;
+		irq_routes.entries[1].type = KVM_IRQ_ROUTING_XEN_EVTCHN;
+		irq_routes.entries[1].u.xen_evtchn.port = 66;
+		irq_routes.entries[1].u.xen_evtchn.vcpu = VCPU_ID;
+		irq_routes.entries[1].u.xen_evtchn.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
+
+		vm_ioctl(vm, KVM_SET_GSI_ROUTING, &irq_routes);
+
+		struct kvm_irqfd ifd = { };
+
+		ifd.fd = irq_fd[0];
+		ifd.gsi = 32;
+		vm_ioctl(vm, KVM_IRQFD, &ifd);
+
+		ifd.fd = irq_fd[1];
+		ifd.gsi = 33;
+		vm_ioctl(vm, KVM_IRQFD, &ifd);
+
+		struct sigaction sa = { };
+		sa.sa_handler = handle_alrm;
+		sigaction(SIGALRM, &sa, NULL);
+	}
+
 	struct vcpu_info *vinfo = addr_gpa2hva(vm, VCPU_INFO_VADDR);
 	vinfo->evtchn_upcall_pending = 0;
 
@@ -289,9 +385,23 @@ int main(int argc, char *argv[])
 					sched_yield();
 				} while (get_run_delay() < rundelay);
 				break;
+			case 7:
+				if (!do_eventfd_tests)
+					goto done;
+				shinfo->evtchn_mask[0] = 0x8000;
+				eventfd_write(irq_fd[0], 1UL);
+				alarm(1);
+				break;
+			case 8:
+				eventfd_write(irq_fd[1], 1UL);
+				evtchn_irq_expected = true;
+				break;
+
 			case 0x20:
 				TEST_ASSERT(evtchn_irq_expected, "Unexpected event channel IRQ");
 				evtchn_irq_expected = false;
+				if (shinfo->evtchn_pending[1])
+					goto done;
 				break;
 			}
 			break;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread
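
For reference, the userspace flow that the selftest above exercises
boils down to roughly the following sketch (illustration only, not part
of the series: vm_fd and evt_fd stand for an existing VM fd and
eventfd, headers and error handling are omitted):

	struct {
		struct kvm_irq_routing info;
		struct kvm_irq_routing_entry entries[1];
	} routes = { .info.nr = 1 };

	/* Route GSI 32 to 2-level Xen event channel port 15 on vCPU 0 */
	routes.entries[0].gsi = 32;
	routes.entries[0].type = KVM_IRQ_ROUTING_XEN_EVTCHN;
	routes.entries[0].u.xen_evtchn.port = 15;
	routes.entries[0].u.xen_evtchn.vcpu = 0;	/* KVM vCPU id */
	routes.entries[0].u.xen_evtchn.priority =
		KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
	ioctl(vm_fd, KVM_SET_GSI_ROUTING, &routes);

	/* Bind an eventfd to that GSI */
	struct kvm_irqfd ifd = { .fd = evt_fd, .gsi = 32 };
	ioctl(vm_fd, KVM_IRQFD, &ifd);

	/*
	 * Each write now sets the port's pending bit in the shared_info
	 * page and, unless the port is masked there, raises the upcall
	 * vector on the target vCPU.
	 */
	eventfd_write(evt_fd, 1);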

* [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 10:28   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 10:28 UTC (permalink / raw)
  To: Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

From: David Woodhouse <dwmw@amazon.co.uk>

This is what evolved during the discussion at
https://lore.kernel.org/kvm/960E233F-EC0B-4FB5-BA2E-C8D2CCB38B12@infradead.org/T/#m11d75fcfe2da357ec1dabba0d0e3abb91fd13665

As discussed, an alternative approach might be to augment
kvm_arch_memslots_updated() to raise KVM_REQ_GET_NESTED_STATE_PAGES on
each vCPU (and make that request a no-op on any vCPU which isn't
actually in L2 guest mode).

That would mean the reload gets actively triggered even on memslot
changes rather than only on MMU notifiers as is the case now. It could
*potentially* mean we can drop the new 'check_guest_maps' function.
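
A rough sketch of that alternative (not part of this patch; it reuses
the existing kvm_make_all_cpus_request() primitive):

	void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
	{
		/* ... existing body ... */

		/*
		 * Ask every vCPU to revalidate its nested-state pages.
		 * The KVM_REQ_GET_NESTED_STATE_PAGES handler would need
		 * to become a no-op on vCPUs which aren't in L2.
		 */
		kvm_make_all_cpus_request(kvm, KVM_REQ_GET_NESTED_STATE_PAGES);
	}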

The 'check_guest_maps' function could be a lot simpler than it is,
though. It only really needs to get kvm->memslots->generation, then
check each gpc->generation against that, and each gpc->valid.
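
Something along these lines, perhaps (sketch only; gpc->valid and
gpc->generation are the fields from the gfn_to_pfn_cache rework earlier
in this series):

	static void nested_vmx_check_guest_maps(struct kvm_vcpu *vcpu)
	{
		struct gfn_to_pfn_cache *gpc = &to_vmx(vcpu)->nested.virtual_apic_cache;

		if (!gpc->valid ||
		    gpc->generation != kvm_memslots(vcpu->kvm)->generation)
			kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
	}

with the same check repeated for each cache (e.g. the posted interrupt
descriptor) as they get converted.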

Also I suspect we *shouldn't* destroy the virtual_apic_cache in
nested_vmx_vmexit(). We can just leave it there for next time the
vCPU enters guest mode. If it happens to get invalidated in the
meantime, that's fine and we'll refresh it on the way back in.
We probably *would* want to actively do something on memslot changes
in that case though, to ensure that even if the vCPU isn't in guest
mode any more, we *release* the cached page.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/nested.c       | 50 ++++++++++++++++++++++++++++-----
 arch/x86/kvm/vmx/vmx.c          | 12 +++++---
 arch/x86/kvm/vmx/vmx.h          |  2 +-
 arch/x86/kvm/x86.c              | 10 +++++++
 5 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6ea2446ab851..24f6f3e2de47 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1511,6 +1511,7 @@ struct kvm_x86_nested_ops {
 	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
 			    uint16_t *vmcs_version);
 	uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
+	void (*check_guest_maps)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_init_ops {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1e2f66951566..01bfabcfbbce 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -309,7 +309,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
 		kvm_release_page_clean(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
+	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vmx->nested.virtual_apic_cache);
 	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
 	vmx->nested.pi_desc = NULL;
 
@@ -3179,10 +3179,12 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	}
 
 	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
-		map = &vmx->nested.virtual_apic_map;
+		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
 
-		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) {
-			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn));
+		if (!kvm_gfn_to_pfn_cache_init(vcpu->kvm, gpc, vcpu, true, true,
+					       vmcs12->virtual_apic_page_addr,
+					       PAGE_SIZE, true)) {
+			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(gpc->pfn));
 		} else if (nested_cpu_has(vmcs12, CPU_BASED_CR8_LOAD_EXITING) &&
 		           nested_cpu_has(vmcs12, CPU_BASED_CR8_STORE_EXITING) &&
 			   !nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
@@ -3207,6 +3209,9 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	if (nested_cpu_has_posted_intr(vmcs12)) {
 		map = &vmx->nested.pi_desc_map;
 
+		if (kvm_vcpu_mapped(map))
+			kvm_vcpu_unmap(vcpu, map, true);
+
 		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
 			vmx->nested.pi_desc =
 				(struct pi_desc *)(((void *)map->hva) +
@@ -3251,6 +3256,29 @@ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static void nested_vmx_check_guest_maps(struct kvm_vcpu *vcpu)
+{
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct gfn_to_pfn_cache *gpc;
+
+	int valid;
+
+	if (nested_cpu_has_posted_intr(vmcs12)) {
+		gpc = &vmx->nested.virtual_apic_cache;
+
+		read_lock(&gpc->lock);
+		valid = kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc,
+						   vmcs12->virtual_apic_page_addr,
+						   PAGE_SIZE);
+		read_unlock(&gpc->lock);
+		if (!valid) {
+			kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+			return;
+		}
+	}
+}
+
 static int nested_vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
 	struct vmcs12 *vmcs12;
@@ -3749,9 +3777,15 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
 
 	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
 	if (max_irr != 256) {
-		vapic_page = vmx->nested.virtual_apic_map.hva;
-		if (!vapic_page)
+		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
+
+		read_lock(&gpc->lock);
+		if (!kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE)) {
+			read_unlock(&gpc->lock);
 			goto mmio_needed;
+		}
+
+		vapic_page = gpc->khva;
 
 		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
 			vapic_page, &max_irr);
@@ -3761,6 +3795,7 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
 			status |= (u8)max_irr;
 			vmcs_write16(GUEST_INTR_STATUS, status);
 		}
+		read_unlock(&gpc->lock);
 	}
 
 	nested_mark_vmcs12_pages_dirty(vcpu);
@@ -4581,7 +4616,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 		kvm_release_page_clean(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
+	kvm_gfn_to_pfn_cache_unmap(vcpu->kvm, &vmx->nested.virtual_apic_cache);
 	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
 	vmx->nested.pi_desc = NULL;
 
@@ -6756,4 +6791,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.write_log_dirty = nested_vmx_write_pml_buffer,
 	.enable_evmcs = nested_enable_evmcs,
 	.get_evmcs_version = nested_get_evmcs_version,
+	.check_guest_maps = nested_vmx_check_guest_maps,
 };
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ba66c171d951..6c61faef86d3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3839,19 +3839,23 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
 static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	void *vapic_page;
+	struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
 	u32 vppr;
 	int rvi;
 
 	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
 		!nested_cpu_has_vid(get_vmcs12(vcpu)) ||
-		WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn))
+		WARN_ON_ONCE(gpc->gpa == GPA_INVALID))
 		return false;
 
 	rvi = vmx_get_rvi();
 
-	vapic_page = vmx->nested.virtual_apic_map.hva;
-	vppr = *((u32 *)(vapic_page + APIC_PROCPRI));
+	read_lock(&gpc->lock);
+	if (kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE))
+		vppr = *((u32 *)(gpc->khva + APIC_PROCPRI));
+	else
+		vppr = 0xff;
+	read_unlock(&gpc->lock);
 
 	return ((rvi & 0xf0) > (vppr & 0xf0));
 }
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 4df2ac24ffc1..8364e7fc92a0 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -195,7 +195,7 @@ struct nested_vmx {
 	 * pointers, so we must keep them pinned while L2 runs.
 	 */
 	struct page *apic_access_page;
-	struct kvm_host_map virtual_apic_map;
+	struct gfn_to_pfn_cache virtual_apic_cache;
 	struct kvm_host_map pi_desc_map;
 
 	struct kvm_host_map msr_bitmap_map;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fa56c590d8db..01d20db5b1f4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9739,6 +9739,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
 			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
+		if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
+			; /* Nothing to do. It just wanted to wake us */
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
@@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	local_irq_disable();
 	vcpu->mode = IN_GUEST_MODE;
 
+	/*
+	 * If the guest requires direct access to mapped L1 pages, check
+	 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
+	 * to go and revalidate them, if necessary.
+	 */
+	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
+		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
+
 	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
 
 	/*
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 10:28   ` David Woodhouse
@ 2021-11-20 15:48     ` Mika Penttilä
  0 siblings, 0 replies; 74+ messages in thread
From: Mika Penttilä @ 2021-11-20 15:48 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

Hi,

On 20.11.2021 12.28, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> This is what evolved during the discussion at
> https://lore.kernel.org/kvm/960E233F-EC0B-4FB5-BA2E-C8D2CCB38B12@infradead.org/T/#m11d75fcfe2da357ec1dabba0d0e3abb91fd13665
>
> As discussed, an alternative approach might be to augment
> kvm_arch_memslots_updated() to raise KVM_REQ_GET_NESTED_STATE_PAGES to
> each vCPU (and make that req only do anything on a given vCPU if that
> vCPU is actually in L2 guest mode).
>
> That would mean the reload gets actively triggered even on memslot
> changes rather than only on MMU notifiers as is the case now. It could
> *potentially* mean we can drop the new 'check_guest_maps' function.
>
> The 'check_guest_maps' function could be a lot simpler than it is,
> though. It only really needs to get kvm->memslots->generation, then
> check each gpc->generation against that, and each gpc->valid.
>
> Also I suspect we *shouldn't* destroy the virtual_apic_cache in
> nested_vmx_vmexit(). We can just leave it there for next time the
> vCPU enters guest mode. If it happens to get invalidated in the
> meantime, that's fine and we'll refresh it on the way back in.
> We probably *would* want to actively do something on memslot changes
> in that case though, to ensure that even if the vCPU isn't in guest
> mode any more, we *release* the cached page.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   arch/x86/include/asm/kvm_host.h |  1 +
>   arch/x86/kvm/vmx/nested.c       | 50 ++++++++++++++++++++++++++++-----
>   arch/x86/kvm/vmx/vmx.c          | 12 +++++---
>   arch/x86/kvm/vmx/vmx.h          |  2 +-
>   arch/x86/kvm/x86.c              | 10 +++++++
>   5 files changed, 63 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6ea2446ab851..24f6f3e2de47 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1511,6 +1511,7 @@ struct kvm_x86_nested_ops {
>   	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
>   			    uint16_t *vmcs_version);
>   	uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
> +	void (*check_guest_maps)(struct kvm_vcpu *vcpu);
>   };
>   
>   struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 1e2f66951566..01bfabcfbbce 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -309,7 +309,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
>   		kvm_release_page_clean(vmx->nested.apic_access_page);
>   		vmx->nested.apic_access_page = NULL;
>   	}
> -	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> +	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vmx->nested.virtual_apic_cache);
>   	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
>   	vmx->nested.pi_desc = NULL;
>   
> @@ -3179,10 +3179,12 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>   	}
>   
>   	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
> -		map = &vmx->nested.virtual_apic_map;
> +		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
>   
> -		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) {
> -			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn));
> +		if (!kvm_gfn_to_pfn_cache_init(vcpu->kvm, gpc, vcpu, true, true,
> +					       vmcs12->virtual_apic_page_addr,
> +					       PAGE_SIZE, true)) {
> +			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(gpc->pfn));
>   		} else if (nested_cpu_has(vmcs12, CPU_BASED_CR8_LOAD_EXITING) &&
>   		           nested_cpu_has(vmcs12, CPU_BASED_CR8_STORE_EXITING) &&
>   			   !nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
> @@ -3207,6 +3209,9 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>   	if (nested_cpu_has_posted_intr(vmcs12)) {
>   		map = &vmx->nested.pi_desc_map;
>   
> +		if (kvm_vcpu_mapped(map))
> +			kvm_vcpu_unmap(vcpu, map, true);
> +
>   		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
>   			vmx->nested.pi_desc =
>   				(struct pi_desc *)(((void *)map->hva) +
> @@ -3251,6 +3256,29 @@ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
>   	return true;
>   }
>   
> +static void nested_vmx_check_guest_maps(struct kvm_vcpu *vcpu)
> +{
> +	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	struct gfn_to_pfn_cache *gpc;
> +
> +	int valid;
> +
> +	if (nested_cpu_has_posted_intr(vmcs12)) {
> +		gpc = &vmx->nested.virtual_apic_cache;
> +
> +		read_lock(&gpc->lock);
> +		valid = kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc,
> +						   vmcs12->virtual_apic_page_addr,
> +						   PAGE_SIZE);
> +		read_unlock(&gpc->lock);
> +		if (!valid) {
> +			kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
> +			return;
> +		}
> +	}
> +}
> +
>   static int nested_vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
>   {
>   	struct vmcs12 *vmcs12;
> @@ -3749,9 +3777,15 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>   
>   	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
>   	if (max_irr != 256) {
> -		vapic_page = vmx->nested.virtual_apic_map.hva;
> -		if (!vapic_page)
> +		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
> +
> +		read_lock(&gpc->lock);
> +		if (!kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE)) {
> +			read_unlock(&gpc->lock);
>   			goto mmio_needed;
> +		}
> +
> +		vapic_page = gpc->khva;
>   
>   		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
>   			vapic_page, &max_irr);
> @@ -3761,6 +3795,7 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>   			status |= (u8)max_irr;
>   			vmcs_write16(GUEST_INTR_STATUS, status);
>   		}
> +		read_unlock(&gpc->lock);
>   	}
>   
>   	nested_mark_vmcs12_pages_dirty(vcpu);
> @@ -4581,7 +4616,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
>   		kvm_release_page_clean(vmx->nested.apic_access_page);
>   		vmx->nested.apic_access_page = NULL;
>   	}
> -	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> +	kvm_gfn_to_pfn_cache_unmap(vcpu->kvm, &vmx->nested.virtual_apic_cache);
>   	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
>   	vmx->nested.pi_desc = NULL;
>   
> @@ -6756,4 +6791,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
>   	.write_log_dirty = nested_vmx_write_pml_buffer,
>   	.enable_evmcs = nested_enable_evmcs,
>   	.get_evmcs_version = nested_get_evmcs_version,
> +	.check_guest_maps = nested_vmx_check_guest_maps,
>   };
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ba66c171d951..6c61faef86d3 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3839,19 +3839,23 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
>   static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
>   {
>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
> -	void *vapic_page;
> +	struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
>   	u32 vppr;
>   	int rvi;
>   
>   	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
>   		!nested_cpu_has_vid(get_vmcs12(vcpu)) ||
> -		WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn))
> +		WARN_ON_ONCE(gpc->gpa == GPA_INVALID))
>   		return false;
>   
>   	rvi = vmx_get_rvi();
>   
> -	vapic_page = vmx->nested.virtual_apic_map.hva;
> -	vppr = *((u32 *)(vapic_page + APIC_PROCPRI));
> +	read_lock(&gpc->lock);
> +	if (kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE))
> +		vppr = *((u32 *)(gpc->khva + APIC_PROCPRI));
> +	else
> +		vppr = 0xff;
> +	read_unlock(&gpc->lock);
>   
>   	return ((rvi & 0xf0) > (vppr & 0xf0));
>   }
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 4df2ac24ffc1..8364e7fc92a0 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -195,7 +195,7 @@ struct nested_vmx {
>   	 * pointers, so we must keep them pinned while L2 runs.
>   	 */
>   	struct page *apic_access_page;
> -	struct kvm_host_map virtual_apic_map;
> +	struct gfn_to_pfn_cache virtual_apic_cache;
>   	struct kvm_host_map pi_desc_map;
>   
>   	struct kvm_host_map msr_bitmap_map;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fa56c590d8db..01d20db5b1f4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9739,6 +9739,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   
>   		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
>   			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
> +		if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
> +			; /* Nothing to do. It just wanted to wake us */
>   	}
>   
>   	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
> @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   	local_irq_disable();
>   	vcpu->mode = IN_GUEST_MODE;
>   
> +	/*
> +	 * If the guest requires direct access to mapped L1 pages, check
> +	 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> +	 * to go and revalidate them, if necessary.
> +	 */
> +	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> +		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);

But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
vcpu_enter_guest() entry?


> +
>   	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
>   
>   	/*


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
@ 2021-11-20 15:48     ` Mika Penttilä
  0 siblings, 0 replies; 74+ messages in thread
From: Mika Penttilä @ 2021-11-20 15:48 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: Anup Patel, wanpengli @ tencent . com, Catalin Marinas,
	Joao Martins, Will Deacon, kvmarm, linux-s390,
	joro @ 8bytes . org, Huacai Chen, Christian Borntraeger,
	Aleksandar Markovic, karahmed, Suzuki K Poulose, Boris Ostrovsky,
	Alexandru Elisei, linux-arm-kernel, jmattson @ google . com,
	seanjc @ google . com, mtosatti @ redhat . com, linux-mips,
	James Morse, kvm-riscv, Marc Zyngier, vkuznets @ redhat . com,
	linuxppc-dev

Hi,

On 20.11.2021 12.28, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> This is what evolved during the discussion at
> https://lore.kernel.org/kvm/960E233F-EC0B-4FB5-BA2E-C8D2CCB38B12@infradead.org/T/#m11d75fcfe2da357ec1dabba0d0e3abb91fd13665
>
> As discussed, an alternative approach might be to augment
> kvm_arch_memslots_updated() to raise KVM_REQ_GET_NESTED_STATE_PAGES to
> each vCPU (and make that req only do anything on a given vCPU if that
> vCPU is actually in L2 guest mode).
>
> That would mean the reload gets actively triggered even on memslot
> changes rather than only on MMU notifiers as is the case now. It could
> *potentially* mean we can drop the new 'check_guest_maps' function.
>
> The 'check_guest_maps' function could be a lot simpler than it is,
> though. It only really needs to get kvm->memslots->generation, then
> check each gpc->generation against that, and each gpc->valid.
>
> Also I suspect we *shouldn't* destroy the virtual_apic_cache in
> nested_vmx_vmexit(). We can just leave it there for next time the
> vCPU enters guest mode. If it happens to get invalidated in the
> meantime, that's fine and we'll refresh it on the way back in.
> We probably *would* want to actively do something on memslot changes
> in that case though, to ensure that even if the vCPU isn't in guest
> mode any more, we *release* the cached page.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   arch/x86/include/asm/kvm_host.h |  1 +
>   arch/x86/kvm/vmx/nested.c       | 50 ++++++++++++++++++++++++++++-----
>   arch/x86/kvm/vmx/vmx.c          | 12 +++++---
>   arch/x86/kvm/vmx/vmx.h          |  2 +-
>   arch/x86/kvm/x86.c              | 10 +++++++
>   5 files changed, 63 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6ea2446ab851..24f6f3e2de47 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1511,6 +1511,7 @@ struct kvm_x86_nested_ops {
>   	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
>   			    uint16_t *vmcs_version);
>   	uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
> +	void (*check_guest_maps)(struct kvm_vcpu *vcpu);
>   };
>   
>   struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 1e2f66951566..01bfabcfbbce 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -309,7 +309,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
>   		kvm_release_page_clean(vmx->nested.apic_access_page);
>   		vmx->nested.apic_access_page = NULL;
>   	}
> -	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> +	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vmx->nested.virtual_apic_cache);
>   	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
>   	vmx->nested.pi_desc = NULL;
>   
> @@ -3179,10 +3179,12 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>   	}
>   
>   	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
> -		map = &vmx->nested.virtual_apic_map;
> +		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
>   
> -		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) {
> -			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn));
> + 		if (!kvm_gfn_to_pfn_cache_init(vcpu->kvm, gpc, vcpu, true, true,
> +					       vmcs12->virtual_apic_page_addr,
> +					       PAGE_SIZE, true)) {
> +			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(gpc->pfn));
>   		} else if (nested_cpu_has(vmcs12, CPU_BASED_CR8_LOAD_EXITING) &&
>   		           nested_cpu_has(vmcs12, CPU_BASED_CR8_STORE_EXITING) &&
>   			   !nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
> @@ -3207,6 +3209,9 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>   	if (nested_cpu_has_posted_intr(vmcs12)) {
>   		map = &vmx->nested.pi_desc_map;
>   
> +		if (kvm_vcpu_mapped(map))
> +			kvm_vcpu_unmap(vcpu, map, true);
> +
>   		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
>   			vmx->nested.pi_desc =
>   				(struct pi_desc *)(((void *)map->hva) +
> @@ -3251,6 +3256,29 @@ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
>   	return true;
>   }
>   
> +static void nested_vmx_check_guest_maps(struct kvm_vcpu *vcpu)
> +{
> +	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	struct gfn_to_pfn_cache *gpc;
> +
> +	int valid;
> +
> +	if (nested_cpu_has_posted_intr(vmcs12)) {
> +		gpc = &vmx->nested.virtual_apic_cache;
> +
> +		read_lock(&gpc->lock);
> +		valid = kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc,
> +						   vmcs12->virtual_apic_page_addr,
> +						   PAGE_SIZE);
> +		read_unlock(&gpc->lock);
> +		if (!valid) {
> +			kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
> +			return;
> +		}
> +	}
> +}
> +
>   static int nested_vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
>   {
>   	struct vmcs12 *vmcs12;
> @@ -3749,9 +3777,15 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>   
>   	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
>   	if (max_irr != 256) {
> -		vapic_page = vmx->nested.virtual_apic_map.hva;
> -		if (!vapic_page)
> +		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
> +
> +		read_lock(&gpc->lock);
> +		if (!kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE)) {
> +			read_unlock(&gpc->lock);
>   			goto mmio_needed;
> +		}
> +
> +		vapic_page = gpc->khva;
>   
>   		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
>   			vapic_page, &max_irr);
> @@ -3761,6 +3795,7 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>   			status |= (u8)max_irr;
>   			vmcs_write16(GUEST_INTR_STATUS, status);
>   		}
> +		read_unlock(&gpc->lock);
>   	}
>   
>   	nested_mark_vmcs12_pages_dirty(vcpu);
> @@ -4581,7 +4616,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
>   		kvm_release_page_clean(vmx->nested.apic_access_page);
>   		vmx->nested.apic_access_page = NULL;
>   	}
> -	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> +	kvm_gfn_to_pfn_cache_unmap(vcpu->kvm, &vmx->nested.virtual_apic_cache);
>   	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
>   	vmx->nested.pi_desc = NULL;
>   
> @@ -6756,4 +6791,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
>   	.write_log_dirty = nested_vmx_write_pml_buffer,
>   	.enable_evmcs = nested_enable_evmcs,
>   	.get_evmcs_version = nested_get_evmcs_version,
> +	.check_guest_maps = nested_vmx_check_guest_maps,
>   };
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ba66c171d951..6c61faef86d3 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3839,19 +3839,23 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
>   static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
>   {
>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
> -	void *vapic_page;
> +	struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
>   	u32 vppr;
>   	int rvi;
>   
>   	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
>   		!nested_cpu_has_vid(get_vmcs12(vcpu)) ||
> -		WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn))
> +		WARN_ON_ONCE(gpc->gpa == GPA_INVALID))
>   		return false;
>   
>   	rvi = vmx_get_rvi();
>   
> -	vapic_page = vmx->nested.virtual_apic_map.hva;
> -	vppr = *((u32 *)(vapic_page + APIC_PROCPRI));
> +	read_lock(&gpc->lock);
> +	if (!kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE))
> +		vppr = *((u32 *)(gpc->khva + APIC_PROCPRI));
> +	else
> +		vppr = 0xff;
> +	read_unlock(&gpc->lock);
>   
>   	return ((rvi & 0xf0) > (vppr & 0xf0));
>   }
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 4df2ac24ffc1..8364e7fc92a0 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -195,7 +195,7 @@ struct nested_vmx {
>   	 * pointers, so we must keep them pinned while L2 runs.
>   	 */
>   	struct page *apic_access_page;
> -	struct kvm_host_map virtual_apic_map;
> +	struct gfn_to_pfn_cache virtual_apic_cache;
>   	struct kvm_host_map pi_desc_map;
>   
>   	struct kvm_host_map msr_bitmap_map;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fa56c590d8db..01d20db5b1f4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9739,6 +9739,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   
>   		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
>   			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
> +		if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
> +			; /* Nothing to do. It just wanted to wake us */
>   	}
>   
>   	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
> @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   	local_irq_disable();
>   	vcpu->mode = IN_GUEST_MODE;
>   
> +	/*
> +	 * If the guest requires direct access to mapped L1 pages, check
> +	 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> +	 * to go and revalidate them, if necessary.
> +	 */
> +	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> +		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);

But KVM_REQ_GET_NESTED_STATE_PAGES is not check until next 
vcpu_enter_guest() entry ?


> +
>   	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
>   
>   	/*


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
@ 2021-11-20 15:48     ` Mika Penttilä
  0 siblings, 0 replies; 74+ messages in thread
From: Mika Penttilä @ 2021-11-20 15:48 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

Hi,

On 20.11.2021 12.28, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> This is what evolved during the discussion at
> https://lore.kernel.org/kvm/960E233F-EC0B-4FB5-BA2E-C8D2CCB38B12@infradead.org/T/#m11d75fcfe2da357ec1dabba0d0e3abb91fd13665
>
> As discussed, an alternative approach might be to augment
> kvm_arch_memslots_updated() to raise KVM_REQ_GET_NESTED_STATE_PAGES to
> each vCPU (and make that req only do anything on a given vCPU if that
> vCPU is actually in L2 guest mode).
>
> That would mean the reload gets actively triggered even on memslot
> changes rather than only on MMU notifiers as is the case now. It could
> *potentially* mean we can drop the new 'check_guest_maps' function.
>
> The 'check_guest_maps' function could be a lot simpler than it is,
> though. It only really needs to get kvm->memslots->generation, then
> check each gpc->generation against that, and each gpc->valid.
>
> Also I suspect we *shouldn't* destroy the virtual_apic_cache in
> nested_vmx_vmexit(). We can just leave it there for next time the
> vCPU enters guest mode. If it happens to get invalidated in the
> meantime, that's fine and we'll refresh it on the way back in.
> We probably *would* want to actively do something on memslot changes
> in that case though, to ensure that even if the vCPU isn't in guest
> mode any more, we *release* the cached page.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   arch/x86/include/asm/kvm_host.h |  1 +
>   arch/x86/kvm/vmx/nested.c       | 50 ++++++++++++++++++++++++++++-----
>   arch/x86/kvm/vmx/vmx.c          | 12 +++++---
>   arch/x86/kvm/vmx/vmx.h          |  2 +-
>   arch/x86/kvm/x86.c              | 10 +++++++
>   5 files changed, 63 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6ea2446ab851..24f6f3e2de47 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1511,6 +1511,7 @@ struct kvm_x86_nested_ops {
>   	int (*enable_evmcs)(struct kvm_vcpu *vcpu,
>   			    uint16_t *vmcs_version);
>   	uint16_t (*get_evmcs_version)(struct kvm_vcpu *vcpu);
> +	void (*check_guest_maps)(struct kvm_vcpu *vcpu);
>   };
>   
>   struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 1e2f66951566..01bfabcfbbce 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -309,7 +309,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
>   		kvm_release_page_clean(vmx->nested.apic_access_page);
>   		vmx->nested.apic_access_page = NULL;
>   	}
> -	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> +	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vmx->nested.virtual_apic_cache);
>   	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
>   	vmx->nested.pi_desc = NULL;
>   
> @@ -3179,10 +3179,12 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>   	}
>   
>   	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
> -		map = &vmx->nested.virtual_apic_map;
> +		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
>   
> -		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) {
> -			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn));
> +		if (!kvm_gfn_to_pfn_cache_init(vcpu->kvm, gpc, vcpu, true, true,
> +					       vmcs12->virtual_apic_page_addr,
> +					       PAGE_SIZE, true)) {
> +			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(gpc->pfn));
>   		} else if (nested_cpu_has(vmcs12, CPU_BASED_CR8_LOAD_EXITING) &&
>   		           nested_cpu_has(vmcs12, CPU_BASED_CR8_STORE_EXITING) &&
>   			   !nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
> @@ -3207,6 +3209,9 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>   	if (nested_cpu_has_posted_intr(vmcs12)) {
>   		map = &vmx->nested.pi_desc_map;
>   
> +		if (kvm_vcpu_mapped(map))
> +			kvm_vcpu_unmap(vcpu, map, true);
> +
>   		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
>   			vmx->nested.pi_desc =
>   				(struct pi_desc *)(((void *)map->hva) +
> @@ -3251,6 +3256,29 @@ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
>   	return true;
>   }
>   
> +static void nested_vmx_check_guest_maps(struct kvm_vcpu *vcpu)
> +{
> +	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	struct gfn_to_pfn_cache *gpc;
> +	bool valid;
> +
> +	if (nested_cpu_has_posted_intr(vmcs12)) {
> +		gpc = &vmx->nested.virtual_apic_cache;
> +
> +		read_lock(&gpc->lock);
> +		valid = kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc,
> +						   vmcs12->virtual_apic_page_addr,
> +						   PAGE_SIZE);
> +		read_unlock(&gpc->lock);
> +		if (!valid) {
> +			kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
> +			return;
> +		}
> +	}
> +}
> +
>   static int nested_vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
>   {
>   	struct vmcs12 *vmcs12;
> @@ -3749,9 +3777,15 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>   
>   	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
>   	if (max_irr != 256) {
> -		vapic_page = vmx->nested.virtual_apic_map.hva;
> -		if (!vapic_page)
> +		struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
> +
> +		read_lock(&gpc->lock);
> +		if (!kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE)) {
> +			read_unlock(&gpc->lock);
>   			goto mmio_needed;
> +		}
> +
> +		vapic_page = gpc->khva;
>   
>   		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
>   			vapic_page, &max_irr);
> @@ -3761,6 +3795,7 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>   			status |= (u8)max_irr;
>   			vmcs_write16(GUEST_INTR_STATUS, status);
>   		}
> +		read_unlock(&gpc->lock);
>   	}
>   
>   	nested_mark_vmcs12_pages_dirty(vcpu);
> @@ -4581,7 +4616,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
>   		kvm_release_page_clean(vmx->nested.apic_access_page);
>   		vmx->nested.apic_access_page = NULL;
>   	}
> -	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> +	kvm_gfn_to_pfn_cache_unmap(vcpu->kvm, &vmx->nested.virtual_apic_cache);
>   	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
>   	vmx->nested.pi_desc = NULL;
>   
> @@ -6756,4 +6791,5 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
>   	.write_log_dirty = nested_vmx_write_pml_buffer,
>   	.enable_evmcs = nested_enable_evmcs,
>   	.get_evmcs_version = nested_get_evmcs_version,
> +	.check_guest_maps = nested_vmx_check_guest_maps,
>   };
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ba66c171d951..6c61faef86d3 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3839,19 +3839,23 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
>   static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
>   {
>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
> -	void *vapic_page;
> +	struct gfn_to_pfn_cache *gpc = &vmx->nested.virtual_apic_cache;
>   	u32 vppr;
>   	int rvi;
>   
>   	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
>   		!nested_cpu_has_vid(get_vmcs12(vcpu)) ||
> -		WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn))
> +		WARN_ON_ONCE(gpc->gpa == GPA_INVALID))
>   		return false;
>   
>   	rvi = vmx_get_rvi();
>   
> -	vapic_page = vmx->nested.virtual_apic_map.hva;
> -	vppr = *((u32 *)(vapic_page + APIC_PROCPRI));
> +	read_lock(&gpc->lock);
> +	if (kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE))
> +		vppr = *((u32 *)(gpc->khva + APIC_PROCPRI));
> +	else
> +		vppr = 0xff;
> +	read_unlock(&gpc->lock);
>   
>   	return ((rvi & 0xf0) > (vppr & 0xf0));
>   }
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 4df2ac24ffc1..8364e7fc92a0 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -195,7 +195,7 @@ struct nested_vmx {
>   	 * pointers, so we must keep them pinned while L2 runs.
>   	 */
>   	struct page *apic_access_page;
> -	struct kvm_host_map virtual_apic_map;
> +	struct gfn_to_pfn_cache virtual_apic_cache;
>   	struct kvm_host_map pi_desc_map;
>   
>   	struct kvm_host_map msr_bitmap_map;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fa56c590d8db..01d20db5b1f4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9739,6 +9739,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   
>   		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
>   			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
> +		if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
> +			; /* Nothing to do. It just wanted to wake us */
>   	}
>   
>   	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
> @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   	local_irq_disable();
>   	vcpu->mode = IN_GUEST_MODE;
>   
> +	/*
> +	 * If the guest requires direct access to mapped L1 pages, check
> +	 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> +	 * to go and revalidate them, if necessary.
> +	 */
> +	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> +		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);

But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
vcpu_enter_guest() entry?


> +
>   	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
>   
>   	/*


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 15:48     ` Mika Penttilä
@ 2021-11-20 16:21       ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 16:21 UTC (permalink / raw)
  To: Mika Penttilä, Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

[-- Attachment #1: Type: text/plain, Size: 924 bytes --]

On Sat, 2021-11-20 at 17:48 +0200, Mika Penttilä wrote:
> > @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >        local_irq_disable();
> >        vcpu->mode = IN_GUEST_MODE;
> >    
> > +     /*
> > +      * If the guest requires direct access to mapped L1 pages, check
> > +      * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> > +      * to go and revalidate them, if necessary.
> > +      */
> > +     if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> > +             kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
> 
> But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
> vcpu_enter_guest() entry?

Sure, but that's why this call to ->check_guest_maps() comes just a few
lines *before* the 'if (kvm_cpu_exit_request(vcpu))' that will bounce
us back out so that we go through vcpu_enter_guest() from the start
again?
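
Condensed, the ordering in vcpu_enter_guest() being relied on here is
roughly (a sketch, not the literal code):

	vcpu->mode = IN_GUEST_MODE;

	/* May raise KVM_REQ_GET_NESTED_STATE_PAGES on this vCPU. */
	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);

	/* ... srcu unlock, etc. ... */

	if (kvm_cpu_exit_request(vcpu)) {
		vcpu->mode = OUTSIDE_GUEST_MODE;
		/*
		 * Bail out to the top of vcpu_enter_guest(), where the
		 * pending request is serviced before the next VM entry.
		 */
	}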

[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 5174 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 16:21       ` David Woodhouse
@ 2021-11-20 16:30         ` Mika Penttilä
  -1 siblings, 0 replies; 74+ messages in thread
From: Mika Penttilä @ 2021-11-20 16:30 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390



On 20.11.2021 18.21, David Woodhouse wrote:
> On Sat, 2021-11-20 at 17:48 +0200, Mika Penttilä wrote:
>>> @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>>         local_irq_disable();
>>>         vcpu->mode = IN_GUEST_MODE;
>>>     
>>> +     /*
>>> +      * If the guest requires direct access to mapped L1 pages, check
>>> +      * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
>>> +      * to go and revalidate them, if necessary.
>>> +      */
>>> +     if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
>>> +             kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
>> But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
>> vcpu_enter_guest() entry?
> Sure, but that's why this call to ->check_guest_maps() comes just a few
> lines *before* the 'if (kvm_cpu_exit_request(vcpu))' that will bounce
> us back out so that we go through vcpu_enter_guest() from the start
> again?
Yes, I had forgotten that...

Thanks,
Mika


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 16:30         ` Mika Penttilä
@ 2021-11-20 17:02           ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 17:02 UTC (permalink / raw)
  To: Mika Penttilä, Paolo Bonzini, kvm
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

[-- Attachment #1: Type: text/plain, Size: 1738 bytes --]

On Sat, 2021-11-20 at 18:30 +0200, Mika Penttilä wrote:
> 
> On 20.11.2021 18.21, David Woodhouse wrote:
> > On Sat, 2021-11-20 at 17:48 +0200, Mika Penttilä wrote:
> > > > @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> > > >         local_irq_disable();
> > > >         vcpu->mode = IN_GUEST_MODE;
> > > >     
> > > > +     /*
> > > > +      * If the guest requires direct access to mapped L1 pages, check
> > > > +      * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> > > > +      * to go and revalidate them, if necessary.
> > > > +      */
> > > > +     if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> > > > +             kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
> > >
> > > But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
> > > vcpu_enter_guest() entry?
> >
> > Sure, but that's why this call to ->check_guest_maps() comes just a few
> > lines *before* the 'if (kvm_cpu_exit_request(vcpu))' that will bounce
> > us back out so that we go through vcpu_enter_guest() from the start
> > again?
>
> Yes, I had forgotten that...

Having said that, I think Paolo was rightly trying to persuade me that
we don't need the check there at all anyway. Because the *invalidation*
will raise KVM_REQ_GPC_INVALIDATE, and that's sufficient to bounce us
out of IN_GUEST_MODE without us having to check the GPC explicitly.

We do need to ensure that we are up to date with memslots changes, but
that's a bit simpler, and we have kvm_arch_memslots_changed() waking us
for that too.

So yeah, with sufficient thought I think we can be entirely event-
driven instead of having to have an explicit *check* on entry.
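
Roughly, the fully event-driven flow would look like this (simplified
from the gfn_to_pfn_cache invalidation path in this series; the real
code kicks only the vCPUs actually using the cache, not all of them):

	/* MMU notifier invalidation (simplified) */
	write_lock(&gpc->lock);
	gpc->valid = false;
	write_unlock(&gpc->lock);

	/*
	 * Kick vCPUs out of IN_GUEST_MODE; each revalidates its caches
	 * on the way back through vcpu_enter_guest().
	 */
	kvm_make_all_cpus_request(kvm, KVM_REQ_GPC_INVALIDATE);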




[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 5174 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 10/11] KVM: x86/xen: Add KVM_IRQ_ROUTING_XEN_EVTCHN and event channel delivery
  2021-11-20 10:28   ` David Woodhouse
@ 2021-11-20 17:50     ` kernel test robot
  -1 siblings, 0 replies; 74+ messages in thread
From: kernel test robot @ 2021-11-20 17:50 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: kbuild-all, Boris Ostrovsky, Joao Martins,
	jmattson @ google . com, wanpengli @ tencent . com,
	seanjc @ google . com, vkuznets @ redhat . com,
	mtosatti @ redhat . com, joro @ 8bytes . org

[-- Attachment #1: Type: text/plain, Size: 7743 bytes --]

Hi David,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on next-20211118]
[cannot apply to kvm/queue kvms390/next powerpc/topic/ppc-kvm kvmarm/next mst-vhost/linux-next v5.16-rc1]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git a90af8f15bdc9449ee2d24e1d73fa3f7e8633f81
config: i386-randconfig-s001-20211118 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.4-dirty
        # https://github.com/0day-ci/linux/commit/a9a90c7ab5f10064f2153f60e2410222c1b00700
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
        git checkout a9a90c7ab5f10064f2153f60e2410222c1b00700
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)
>> arch/x86/kvm/xen.c:272:22: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected void const [noderef] __user *ptr @@     got void * @@
   arch/x86/kvm/xen.c:272:22: sparse:     expected void const [noderef] __user *ptr
   arch/x86/kvm/xen.c:272:22: sparse:     got void *
>> arch/x86/kvm/xen.c:276:56: sparse: sparse: incorrect type in initializer (different address spaces) @@     expected struct vcpu_info [noderef] __user *vi @@     got void * @@
   arch/x86/kvm/xen.c:276:56: sparse:     expected struct vcpu_info [noderef] __user *vi
   arch/x86/kvm/xen.c:276:56: sparse:     got void *
>> arch/x86/kvm/xen.c:294:63: sparse: sparse: incorrect type in initializer (different address spaces) @@     expected struct compat_vcpu_info [noderef] __user *vi @@     got void * @@
   arch/x86/kvm/xen.c:294:63: sparse:     expected struct compat_vcpu_info [noderef] __user *vi
   arch/x86/kvm/xen.c:294:63: sparse:     got void *
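
All three warnings come from casting ghc->hva (an unsigned long) to a
plain void *; sparse wants the __user address-space annotation on those
casts. An untested sketch of the cast fix alone:

	if (!user_access_begin((void __user *)ghc->hva, sizeof(struct vcpu_info)))
		return 0;

	if (IS_ENABLED(CONFIG_64BIT) && long_mode) {
		struct vcpu_info __user *vi = (void __user *)ghc->hva;
		/* ... */
	} else {
		struct compat_vcpu_info __user *vi = (void __user *)ghc->hva;
		/* ... */
	}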

vim +272 arch/x86/kvm/xen.c

   196	
   197	int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
   198	{
   199		unsigned long evtchn_pending_sel = READ_ONCE(v->arch.xen.evtchn_pending_sel);
   200		bool atomic = in_atomic() || !task_is_running(current);
   201		int err;
   202		u8 rc = 0;
   203	
   204		/*
   205		 * If the global upcall vector (HVMIRQ_callback_vector) is set and
   206		 * the vCPU's evtchn_upcall_pending flag is set, the IRQ is pending.
   207		 */
   208		struct gfn_to_hva_cache *ghc = &v->arch.xen.vcpu_info_cache;
   209		struct kvm_memslots *slots = kvm_memslots(v->kvm);
   210		bool ghc_valid = slots->generation == ghc->generation &&
   211			!kvm_is_error_hva(ghc->hva) && ghc->memslot;
   212	
   213		unsigned int offset = offsetof(struct vcpu_info, evtchn_upcall_pending);
   214	
   215		/* No need for compat handling here */
   216		BUILD_BUG_ON(offsetof(struct vcpu_info, evtchn_upcall_pending) !=
   217			     offsetof(struct compat_vcpu_info, evtchn_upcall_pending));
   218		BUILD_BUG_ON(sizeof(rc) !=
   219			     sizeof_field(struct vcpu_info, evtchn_upcall_pending));
   220		BUILD_BUG_ON(sizeof(rc) !=
   221			     sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));
   222	
   223		/*
   224		 * For efficiency, this mirrors the checks for using the valid
   225		 * cache in kvm_read_guest_offset_cached(), but just uses
   226		 * __get_user() instead. And falls back to the slow path.
   227		 */
   228		if (!evtchn_pending_sel && ghc_valid) {
   229			/* Fast path */
   230			pagefault_disable();
   231			err = __get_user(rc, (u8 __user *)ghc->hva + offset);
   232			pagefault_enable();
   233			if (!err)
   234				return rc;
   235		}
   236	
   237		/* Slow path */
   238	
   239		/*
   240		 * This function gets called from kvm_vcpu_block() after setting the
   241		 * task to TASK_INTERRUPTIBLE, to see if it needs to wake immediately
   242		 * from a HLT. So we really mustn't sleep. If the page ended up absent
   243		 * at that point, just return 1 in order to trigger an immediate wake,
   244		 * and we'll end up getting called again from a context where we *can*
   245		 * fault in the page and wait for it.
   246		 */
   247		if (atomic)
   248			return 1;
   249	
   250		if (!ghc_valid) {
   251			err = kvm_gfn_to_hva_cache_init(v->kvm, ghc, ghc->gpa, ghc->len);
   252			if (err || !ghc->memslot) {
   253				/*
   254				 * If this failed, userspace has screwed up the
   255				 * vcpu_info mapping. No interrupts for you.
   256				 */
   257				return 0;
   258			}
   259		}
   260	
   261		/*
   262		 * Now we have a valid (protected by srcu) userspace HVA in
   263		 * ghc->hva which points to the struct vcpu_info. If there
   264		 * are any bits in the in-kernel evtchn_pending_sel then
   265		 * we need to write those to the guest vcpu_info and set
   266		 * its evtchn_upcall_pending flag. If there aren't any bits
   267		 * to add, we only want to *check* evtchn_upcall_pending.
   268		 */
   269		if (evtchn_pending_sel) {
   270			bool long_mode = v->kvm->arch.xen.long_mode;
   271	
 > 272			if (!user_access_begin((void *)ghc->hva, sizeof(struct vcpu_info)))
   273				return 0;
   274	
   275			if (IS_ENABLED(CONFIG_64BIT) && long_mode) {
 > 276				struct vcpu_info __user *vi = (void *)ghc->hva;
   277	
   278				/* Attempt to set the evtchn_pending_sel bits in the
   279				 * guest, and if that succeeds then clear the same
   280				 * bits in the in-kernel version. */
   281				asm volatile("1:\t" LOCK_PREFIX "orq %0, %1\n"
   282					     "\tnotq %0\n"
   283					     "\t" LOCK_PREFIX "andq %0, %2\n"
   284					     "2:\n"
   285					     "\t.section .fixup,\"ax\"\n"
   286					     "3:\tjmp\t2b\n"
   287					     "\t.previous\n"
   288					     _ASM_EXTABLE_UA(1b, 3b)
   289					     : "=r" (evtchn_pending_sel),
   290					       "+m" (vi->evtchn_pending_sel),
   291					       "+m" (v->arch.xen.evtchn_pending_sel)
   292					     : "0" (evtchn_pending_sel));
   293			} else {
 > 294				struct compat_vcpu_info __user *vi = (void *)ghc->hva;
   295				u32 evtchn_pending_sel32 = evtchn_pending_sel;
   296	
   297				/* Attempt to set the evtchn_pending_sel bits in the
   298				 * guest, and if that succeeds then clear the same
   299				 * bits in the in-kernel version. */
   300				asm volatile("1:\t" LOCK_PREFIX "orl %0, %1\n"
   301					     "\tnotl %0\n"
   302					     "\t" LOCK_PREFIX "andl %0, %2\n"
   303					     "2:\n"
   304					     "\t.section .fixup,\"ax\"\n"
   305					     "3:\tjmp\t2b\n"
   306					     "\t.previous\n"
   307					     _ASM_EXTABLE_UA(1b, 3b)
   308					     : "=r" (evtchn_pending_sel32),
   309					       "+m" (vi->evtchn_pending_sel),
   310					       "+m" (v->arch.xen.evtchn_pending_sel)
   311					     : "0" (evtchn_pending_sel32));
   312			}
   313			rc = 1;
   314			unsafe_put_user(rc, (u8 __user *)ghc->hva + offset, err);
   315	
   316		err:
   317			user_access_end();
   318	
   319			mark_page_dirty_in_slot(v->kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
   320		} else {
   321			__get_user(rc, (u8 __user *)ghc->hva + offset);
   322		}
   323	
   324		return rc;
   325	}
   326	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 37665 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

   320		} else {
   321			__get_user(rc, (u8 __user *)ghc->hva + offset);
   322		}
   323	
   324		return rc;
   325	}
   326	
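
For what it's worth, the three warnings above are address-space
annotation issues rather than functional bugs: ghc->hva is an
unsigned long, so the intermediate (void *) casts drop the __user
annotation that user_access_begin() and the 'vi' pointers expect.
A minimal sketch of one way to keep sparse happy (illustrative only,
not a patch from this thread) is to cast through void __user *
instead:

	/*
	 * Sketch only: carry the __user annotation through the casts
	 * so sparse no longer sees a plain (void *).
	 */
	if (!user_access_begin((void __user *)ghc->hva, sizeof(struct vcpu_info)))
		return 0;

	if (IS_ENABLED(CONFIG_64BIT) && long_mode) {
		struct vcpu_info __user *vi = (void __user *)ghc->hva;
		/* ... 64-bit asm block as above, unchanged ... */
	} else {
		struct compat_vcpu_info __user *vi = (void __user *)ghc->hva;
		/* ... 32-bit asm block as above, unchanged ... */
	}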

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 37665 bytes --]

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [PATCH v4 12/11] KVM: x86: Fix wall clock writes in Xen shared_info not to mark page dirty
  2021-11-20 10:27 ` David Woodhouse
@ 2021-11-20 18:20   ` David Woodhouse
  -1 siblings, 0 replies; 74+ messages in thread
From: David Woodhouse @ 2021-11-20 18:20 UTC (permalink / raw)
  To: Paolo Bonzini, kvm, butt3rflyh4ck
  Cc: Boris Ostrovsky, Joao Martins, jmattson @ google . com,
	wanpengli @ tencent . com, seanjc @ google . com,
	vkuznets @ redhat . com, mtosatti @ redhat . com,
	joro @ 8bytes . org, karahmed, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Huacai Chen, Aleksandar Markovic, Michael Ellerman,
	Benjamin Herrenschmidt, Anup Patel, Christian Borntraeger,
	kvmarm, linux-arm-kernel, linux-mips, linuxppc-dev, kvm-riscv,
	linux-s390

[-- Attachment #1: Type: text/plain, Size: 5118 bytes --]

From: David Woodhouse <dwmw@amazon.co.uk>

When dirty ring logging is enabled, any dirty logging without an active
vCPU context will cause a kernel oops. But we've already declared that
the shared_info page doesn't get dirty tracking anyway, since it would
be kind of insane to mark it dirty every time we deliver an event channel
interrupt. Userspace is supposed to just assume it's always dirty any
time a vCPU can run or event channels are routed.

So stop using the generic kvm_write_wall_clock() and just write directly
through the gfn_to_pfn_cache that we already have set up.

We can make kvm_write_wall_clock() static in x86.c again now, but let's
not remove the 'sec_hi_ofs' argument even though it's not used yet. At
some point we *will* want to use that for KVM guests too.

Fixes: 629b5348841a ("KVM: x86/xen: update wallclock region")
Reported-by: butt3rflyh4ck <butterflyhuangxx@gmail.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
Putting this after the Xen evtchn series because now I have a kernel
mapping I can use to avoid the dirty tracking.
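
As background for the wc->version handling in the xen.c hunk below:
the write side follows the usual pvclock seqlock convention, where an
odd version means an update is in flight and readers must retry. A
sketch of the matching guest-side read (illustrative only, not code
from this series; it assumes 'wc' points at the shared wall clock
fields):

	u32 version, sec, nsec;

	do {
		version = READ_ONCE(wc->version);
		smp_rmb();		/* read version before payload */
		sec  = wc->sec;
		nsec = wc->nsec;
		smp_rmb();		/* read payload before re-check */
	} while ((version & 1) || READ_ONCE(wc->version) != version);

That retry loop is why the two smp_wmb() calls and the odd/even
increments on the write side below cannot be collapsed.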

 arch/x86/kvm/x86.c |  2 +-
 arch/x86/kvm/x86.h |  1 -
 arch/x86/kvm/xen.c | 62 +++++++++++++++++++++++++++++++++++-----------
 3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 01d20db5b1f4..d8f1d2169b45 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2118,7 +2118,7 @@ static s64 get_kvmclock_base_ns(void)
 }
 #endif
 
-void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock, int sec_hi_ofs)
+static void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock, int sec_hi_ofs)
 {
 	int version;
 	int r;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 997669ae9caa..0260836b42ff 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -306,7 +306,6 @@ static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
 	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
 }
 
-void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock, int sec_hi_ofs);
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
 
 u64 get_kvmclock_ns(struct kvm *kvm);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index ceddabd1f5c6..0e3f7d6e9fd7 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -25,8 +25,11 @@ DEFINE_STATIC_KEY_DEFERRED_FALSE(kvm_xen_enabled, HZ);
 static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 {
 	struct gfn_to_pfn_cache *gpc = &kvm->arch.xen.shinfo_cache;
+	struct pvclock_wall_clock *wc;
 	gpa_t gpa = gfn_to_gpa(gfn);
-	int wc_ofs, sec_hi_ofs;
+	u32 *wc_sec_hi;
+	u32 wc_version;
+	u64 wall_nsec;
 	int ret = 0;
 	int idx = srcu_read_lock(&kvm->srcu);
 
@@ -35,32 +38,63 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 		goto out;
 	}
 
-	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, false, true, gpa,
-					PAGE_SIZE, false);
-	if (ret)
-		goto out;
+	do {
+		ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, false, true,
+						gpa, PAGE_SIZE, false);
+		if (ret)
+			goto out;
+
+		/*
+		 * This code mirrors kvm_write_wall_clock() except that it writes
+		 * directly through the pfn cache and doesn't mark the page dirty.
+		 */
+		wall_nsec = ktime_get_real_ns() - get_kvmclock_ns(kvm);
+
+		/* It could be invalid again already, so we need to check */
+		read_lock_irq(&gpc->lock);
+
+		if (gpc->valid)
+			break;
+
+		read_unlock_irq(&gpc->lock);
+	} while (1);
 
 	/* Paranoia checks on the 32-bit struct layout */
 	BUILD_BUG_ON(offsetof(struct compat_shared_info, wc) != 0x900);
 	BUILD_BUG_ON(offsetof(struct compat_shared_info, arch.wc_sec_hi) != 0x924);
 	BUILD_BUG_ON(offsetof(struct pvclock_vcpu_time_info, version) != 0);
 
-	/* 32-bit location by default */
-	wc_ofs = offsetof(struct compat_shared_info, wc);
-	sec_hi_ofs = offsetof(struct compat_shared_info, arch.wc_sec_hi);
-
 #ifdef CONFIG_X86_64
 	/* Paranoia checks on the 64-bit struct layout */
 	BUILD_BUG_ON(offsetof(struct shared_info, wc) != 0xc00);
 	BUILD_BUG_ON(offsetof(struct shared_info, wc_sec_hi) != 0xc0c);
 
-	if (kvm->arch.xen.long_mode) {
-		wc_ofs = offsetof(struct shared_info, wc);
-		sec_hi_ofs = offsetof(struct shared_info, wc_sec_hi);
-	}
+	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
+		struct shared_info *shinfo = gpc->khva;
+
+		wc_sec_hi = &shinfo->wc_sec_hi;
+		wc = &shinfo->wc;
+	} else
 #endif
+	{
+		struct compat_shared_info *shinfo = gpc->khva;
+
+		wc_sec_hi = &shinfo->arch.wc_sec_hi;
+		wc = &shinfo->wc;
+	}
+
+	/* Increment and ensure an odd value */
+	wc_version = wc->version = (wc->version + 1) | 1;
+	smp_wmb();
+
+	wc->nsec = do_div(wall_nsec,  1000000000);
+	wc->sec = (u32)wall_nsec;
+	*wc_sec_hi = wall_nsec >> 32;
+	smp_wmb();
+
+	wc->version = wc_version + 1;
+	read_unlock_irq(&gpc->lock);
 
-	kvm_write_wall_clock(kvm, gpa + wc_ofs, sec_hi_ofs - wc_ofs);
 	kvm_make_all_cpus_request(kvm, KVM_REQ_MASTERCLOCK_UPDATE);
 
 out:
-- 
2.31.1


[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 5174 bytes --]

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 10:28   ` David Woodhouse
@ 2021-11-24 17:55     ` kernel test robot
  -1 siblings, 0 replies; 74+ messages in thread
From: kernel test robot @ 2021-11-24 17:55 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: kbuild-all, Boris Ostrovsky, Joao Martins,
	jmattson @ google . com, wanpengli @ tencent . com,
	seanjc @ google . com, vkuznets @ redhat . com,
	mtosatti @ redhat . com, joro @ 8bytes . org

Hi David,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.16-rc2]
[cannot apply to kvm/queue kvms390/next powerpc/topic/ppc-kvm kvmarm/next mst-vhost/linux-next next-20211124]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git a90af8f15bdc9449ee2d24e1d73fa3f7e8633f81
config: i386-randconfig-a012-20211120 (https://download.01.org/0day-ci/archive/20211125/202111250122.hrwuy18G-lkp@intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/cc99b12d3d722f77f611a364b7a359bb3105ad44
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
        git checkout cc99b12d3d722f77f611a364b7a359bb3105ad44
        # save the config file to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/x86/kvm/x86.c: In function 'vcpu_enter_guest':
>> arch/x86/kvm/x86.c:9743:4: error: suggest braces around empty body in an 'if' statement [-Werror=empty-body]
    9743 |    ; /* Nothing to do. It just wanted to wake us */
         |    ^
   cc1: all warnings being treated as errors


vim +/if +9743 arch/x86/kvm/x86.c

  9593	
  9594	/*
  9595	 * Returns 1 to let vcpu_run() continue the guest execution loop without
  9596	 * exiting to the userspace.  Otherwise, the value will be returned to the
  9597	 * userspace.
  9598	 */
  9599	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
  9600	{
  9601		int r;
  9602		bool req_int_win =
  9603			dm_request_for_irq_injection(vcpu) &&
  9604			kvm_cpu_accept_dm_intr(vcpu);
  9605		fastpath_t exit_fastpath;
  9606	
  9607		bool req_immediate_exit = false;
  9608	
  9609		/* Forbid vmenter if vcpu dirty ring is soft-full */
  9610		if (unlikely(vcpu->kvm->dirty_ring_size &&
  9611			     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
  9612			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
  9613			trace_kvm_dirty_ring_exit(vcpu);
  9614			r = 0;
  9615			goto out;
  9616		}
  9617	
  9618		if (kvm_request_pending(vcpu)) {
  9619			if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
  9620				r = -EIO;
  9621				goto out;
  9622			}
  9623			if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
  9624				if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) {
  9625					r = 0;
  9626					goto out;
  9627				}
  9628			}
  9629			if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
  9630				kvm_mmu_unload(vcpu);
  9631			if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
  9632				__kvm_migrate_timers(vcpu);
  9633			if (kvm_check_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu))
  9634				kvm_update_masterclock(vcpu->kvm);
  9635			if (kvm_check_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu))
  9636				kvm_gen_kvmclock_update(vcpu);
  9637			if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
  9638				r = kvm_guest_time_update(vcpu);
  9639				if (unlikely(r))
  9640					goto out;
  9641			}
  9642			if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
  9643				kvm_mmu_sync_roots(vcpu);
  9644			if (kvm_check_request(KVM_REQ_LOAD_MMU_PGD, vcpu))
  9645				kvm_mmu_load_pgd(vcpu);
  9646			if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
  9647				kvm_vcpu_flush_tlb_all(vcpu);
  9648	
  9649				/* Flushing all ASIDs flushes the current ASID... */
  9650				kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
  9651			}
  9652			if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
  9653				kvm_vcpu_flush_tlb_current(vcpu);
  9654			if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
  9655				kvm_vcpu_flush_tlb_guest(vcpu);
  9656	
  9657			if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
  9658				vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
  9659				r = 0;
  9660				goto out;
  9661			}
  9662			if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
  9663				if (is_guest_mode(vcpu)) {
  9664					kvm_x86_ops.nested_ops->triple_fault(vcpu);
  9665				} else {
  9666					vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
  9667					vcpu->mmio_needed = 0;
  9668					r = 0;
  9669					goto out;
  9670				}
  9671			}
  9672			if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
  9673				/* Page is swapped out. Do synthetic halt */
  9674				vcpu->arch.apf.halted = true;
  9675				r = 1;
  9676				goto out;
  9677			}
  9678			if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
  9679				record_steal_time(vcpu);
  9680			if (kvm_check_request(KVM_REQ_SMI, vcpu))
  9681				process_smi(vcpu);
  9682			if (kvm_check_request(KVM_REQ_NMI, vcpu))
  9683				process_nmi(vcpu);
  9684			if (kvm_check_request(KVM_REQ_PMU, vcpu))
  9685				kvm_pmu_handle_event(vcpu);
  9686			if (kvm_check_request(KVM_REQ_PMI, vcpu))
  9687				kvm_pmu_deliver_pmi(vcpu);
  9688			if (kvm_check_request(KVM_REQ_IOAPIC_EOI_EXIT, vcpu)) {
  9689				BUG_ON(vcpu->arch.pending_ioapic_eoi > 255);
  9690				if (test_bit(vcpu->arch.pending_ioapic_eoi,
  9691					     vcpu->arch.ioapic_handled_vectors)) {
  9692					vcpu->run->exit_reason = KVM_EXIT_IOAPIC_EOI;
  9693					vcpu->run->eoi.vector =
  9694							vcpu->arch.pending_ioapic_eoi;
  9695					r = 0;
  9696					goto out;
  9697				}
  9698			}
  9699			if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
  9700				vcpu_scan_ioapic(vcpu);
  9701			if (kvm_check_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu))
  9702				vcpu_load_eoi_exitmap(vcpu);
  9703			if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
  9704				kvm_vcpu_reload_apic_access_page(vcpu);
  9705			if (kvm_check_request(KVM_REQ_HV_CRASH, vcpu)) {
  9706				vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
  9707				vcpu->run->system_event.type = KVM_SYSTEM_EVENT_CRASH;
  9708				r = 0;
  9709				goto out;
  9710			}
  9711			if (kvm_check_request(KVM_REQ_HV_RESET, vcpu)) {
  9712				vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
  9713				vcpu->run->system_event.type = KVM_SYSTEM_EVENT_RESET;
  9714				r = 0;
  9715				goto out;
  9716			}
  9717			if (kvm_check_request(KVM_REQ_HV_EXIT, vcpu)) {
  9718				struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
  9719	
  9720				vcpu->run->exit_reason = KVM_EXIT_HYPERV;
  9721				vcpu->run->hyperv = hv_vcpu->exit;
  9722				r = 0;
  9723				goto out;
  9724			}
  9725	
  9726			/*
  9727			 * KVM_REQ_HV_STIMER has to be processed after
  9728			 * KVM_REQ_CLOCK_UPDATE, because Hyper-V SynIC timers
  9729			 * depend on the guest clock being up-to-date
  9730			 */
  9731			if (kvm_check_request(KVM_REQ_HV_STIMER, vcpu))
  9732				kvm_hv_process_stimers(vcpu);
  9733			if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
  9734				kvm_vcpu_update_apicv(vcpu);
  9735			if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
  9736				kvm_check_async_pf_completion(vcpu);
  9737			if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
  9738				static_call(kvm_x86_msr_filter_changed)(vcpu);
  9739	
  9740			if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
  9741				static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
  9742			if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
> 9743				; /* Nothing to do. It just wanted to wake us */
  9744		}
  9745	
  9746		if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
  9747		    kvm_xen_has_interrupt(vcpu)) {
  9748			++vcpu->stat.req_event;
  9749			r = kvm_apic_accept_events(vcpu);
  9750			if (r < 0) {
  9751				r = 0;
  9752				goto out;
  9753			}
  9754			if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
  9755				r = 1;
  9756				goto out;
  9757			}
  9758	
  9759			r = inject_pending_event(vcpu, &req_immediate_exit);
  9760			if (r < 0) {
  9761				r = 0;
  9762				goto out;
  9763			}
  9764			if (req_int_win)
  9765				static_call(kvm_x86_enable_irq_window)(vcpu);
  9766	
  9767			if (kvm_lapic_enabled(vcpu)) {
  9768				update_cr8_intercept(vcpu);
  9769				kvm_lapic_sync_to_vapic(vcpu);
  9770			}
  9771		}
  9772	
  9773		r = kvm_mmu_reload(vcpu);
  9774		if (unlikely(r)) {
  9775			goto cancel_injection;
  9776		}
  9777	
  9778		preempt_disable();
  9779	
  9780		static_call(kvm_x86_prepare_guest_switch)(vcpu);
  9781	
  9782		/*
  9783		 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
  9784		 * IPI are then delayed after guest entry, which ensures that they
  9785		 * result in virtual interrupt delivery.
  9786		 */
  9787		local_irq_disable();
  9788		vcpu->mode = IN_GUEST_MODE;
  9789	
  9790		/*
  9791		 * If the guest requires direct access to mapped L1 pages, check
  9792		 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
  9793		 * to go and revalidate them, if necessary.
  9794		 */
  9795		if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
  9796			kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
  9797	
  9798		srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
  9799	
  9800		/*
  9801		 * 1) We should set ->mode before checking ->requests.  Please see
  9802		 * the comment in kvm_vcpu_exiting_guest_mode().
  9803		 *
  9804		 * 2) For APICv, we should set ->mode before checking PID.ON. This
  9805		 * pairs with the memory barrier implicit in pi_test_and_set_on
  9806		 * (see vmx_deliver_posted_interrupt).
  9807		 *
  9808		 * 3) This also orders the write to mode from any reads to the page
  9809		 * tables done while the VCPU is running.  Please see the comment
  9810		 * in kvm_flush_remote_tlbs.
  9811		 */
  9812		smp_mb__after_srcu_read_unlock();
  9813	
  9814		/*
  9815		 * This handles the case where a posted interrupt was
  9816		 * notified with kvm_vcpu_kick.
  9817		 */
  9818		if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
  9819			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
  9820	
  9821		if (kvm_vcpu_exit_request(vcpu)) {
  9822			vcpu->mode = OUTSIDE_GUEST_MODE;
  9823			smp_wmb();
  9824			local_irq_enable();
  9825			preempt_enable();
  9826			vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  9827			r = 1;
  9828			goto cancel_injection;
  9829		}
  9830	
  9831		if (req_immediate_exit) {
  9832			kvm_make_request(KVM_REQ_EVENT, vcpu);
  9833			static_call(kvm_x86_request_immediate_exit)(vcpu);
  9834		}
  9835	
  9836		fpregs_assert_state_consistent();
  9837		if (test_thread_flag(TIF_NEED_FPU_LOAD))
  9838			switch_fpu_return();
  9839	
  9840		if (unlikely(vcpu->arch.switch_db_regs)) {
  9841			set_debugreg(0, 7);
  9842			set_debugreg(vcpu->arch.eff_db[0], 0);
  9843			set_debugreg(vcpu->arch.eff_db[1], 1);
  9844			set_debugreg(vcpu->arch.eff_db[2], 2);
  9845			set_debugreg(vcpu->arch.eff_db[3], 3);
  9846		} else if (unlikely(hw_breakpoint_active())) {
  9847			set_debugreg(0, 7);
  9848		}
  9849	
  9850		for (;;) {
  9851			/*
  9852			 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
  9853			 * update must kick and wait for all vCPUs before toggling the
  9854		 * per-VM state, and responding vCPUs must wait for the update
  9855			 * to complete before servicing KVM_REQ_APICV_UPDATE.
  9856			 */
  9857			WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
  9858	
  9859			exit_fastpath = static_call(kvm_x86_run)(vcpu);
  9860			if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
  9861				break;
  9862	
  9863			if (vcpu->arch.apicv_active)
  9864				static_call(kvm_x86_sync_pir_to_irr)(vcpu);
  9865	
  9866			if (unlikely(kvm_vcpu_exit_request(vcpu))) {
  9867				exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
  9868				break;
  9869			}
  9870		}
  9871	
  9872		/*
  9873		 * Do this here before restoring debug registers on the host.  And
  9874		 * since we do this before handling the vmexit, a DR access vmexit
  9875		 * can (a) read the correct value of the debug registers, (b) set
  9876		 * KVM_DEBUGREG_WONT_EXIT again.
  9877		 */
  9878		if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
  9879			WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
  9880			static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
  9881			kvm_update_dr0123(vcpu);
  9882			kvm_update_dr7(vcpu);
  9883		}
  9884	
  9885		/*
  9886		 * If the guest has used debug registers, at least dr7
  9887		 * will be disabled while returning to the host.
  9888		 * If we don't have active breakpoints in the host, we don't
  9889		 * care about the messed up debug address registers. But if
  9890		 * we have some of them active, restore the old state.
  9891		 */
  9892		if (hw_breakpoint_active())
  9893			hw_breakpoint_restore();
  9894	
  9895		vcpu->arch.last_vmentry_cpu = vcpu->cpu;
  9896		vcpu->arch.last_guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
  9897	
  9898		vcpu->mode = OUTSIDE_GUEST_MODE;
  9899		smp_wmb();
  9900	
  9901		static_call(kvm_x86_handle_exit_irqoff)(vcpu);
  9902	
  9903		/*
  9904		 * Consume any pending interrupts, including the possible source of
  9905		 * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
  9906		 * An instruction is required after local_irq_enable() to fully unblock
  9907		 * interrupts on processors that implement an interrupt shadow, the
  9908		 * stat.exits increment will do nicely.
  9909		 */
  9910		kvm_before_interrupt(vcpu);
  9911		local_irq_enable();
  9912		++vcpu->stat.exits;
  9913		local_irq_disable();
  9914		kvm_after_interrupt(vcpu);
  9915	
  9916		/*
  9917		 * Wait until after servicing IRQs to account guest time so that any
  9918		 * ticks that occurred while running the guest are properly accounted
  9919		 * to the guest.  Waiting until IRQs are enabled degrades the accuracy
  9920		 * of accounting via context tracking, but the loss of accuracy is
  9921		 * acceptable for all known use cases.
  9922		 */
  9923		vtime_account_guest_exit();
  9924	
  9925		if (lapic_in_kernel(vcpu)) {
  9926			s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
  9927			if (delta != S64_MIN) {
  9928				trace_kvm_wait_lapic_expire(vcpu->vcpu_id, delta);
  9929				vcpu->arch.apic->lapic_timer.advance_expire_delta = S64_MIN;
  9930			}
  9931		}
  9932	
  9933		local_irq_enable();
  9934		preempt_enable();
  9935	
  9936		vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  9937	
  9938		/*
  9939		 * Profile KVM exit RIPs:
  9940		 */
  9941		if (unlikely(prof_on == KVM_PROFILING)) {
  9942			unsigned long rip = kvm_rip_read(vcpu);
  9943			profile_hit(KVM_PROFILING, (void *)rip);
  9944		}
  9945	
  9946		if (unlikely(vcpu->arch.tsc_always_catchup))
  9947			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
  9948	
  9949		if (vcpu->arch.apic_attention)
  9950			kvm_lapic_sync_from_vapic(vcpu);
  9951	
  9952		r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
  9953		return r;
  9954	
  9955	cancel_injection:
  9956		if (req_immediate_exit)
  9957			kvm_make_request(KVM_REQ_EVENT, vcpu);
  9958		static_call(kvm_x86_cancel_injection)(vcpu);
  9959		if (unlikely(vcpu->arch.apic_attention))
  9960			kvm_lapic_sync_from_vapic(vcpu);
  9961	out:
  9962		return r;
  9963	}
  9964	
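
The warning itself is mechanical: with -Werror=empty-body, gcc
rejects the bare ';' controlled by the if () at line 9743. A sketch
of one possible fix (illustrative only, not a patch from this thread)
is to give the statement the braces the compiler asks for:

		if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu)) {
			/* Nothing to do. It just wanted to wake us */
		}

Alternatively the request could be consumed unconditionally, e.g.
(void)kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu);, which avoids
the conditional entirely while still clearing the request bit.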

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
@ 2021-11-24 17:55     ` kernel test robot
  0 siblings, 0 replies; 74+ messages in thread
From: kernel test robot @ 2021-11-24 17:55 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 15988 bytes --]

Hi David,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.16-rc2]
[cannot apply to kvm/queue kvms390/next powerpc/topic/ppc-kvm kvmarm/next mst-vhost/linux-next next-20211124]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git a90af8f15bdc9449ee2d24e1d73fa3f7e8633f81
config: i386-randconfig-a012-20211120 (https://download.01.org/0day-ci/archive/20211125/202111250122.hrwuy18G-lkp(a)intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/cc99b12d3d722f77f611a364b7a359bb3105ad44
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
        git checkout cc99b12d3d722f77f611a364b7a359bb3105ad44
        # save the config file to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/x86/kvm/x86.c: In function 'vcpu_enter_guest':
>> arch/x86/kvm/x86.c:9743:4: error: suggest braces around empty body in an 'if' statement [-Werror=empty-body]
    9743 |    ; /* Nothing to do. It just wanted to wake us */
         |    ^
   cc1: all warnings being treated as errors


vim +/if +9743 arch/x86/kvm/x86.c

  9593	
  9594	/*
  9595	 * Returns 1 to let vcpu_run() continue the guest execution loop without
  9596	 * exiting to the userspace.  Otherwise, the value will be returned to the
  9597	 * userspace.
  9598	 */
  9599	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
  9600	{
  9601		int r;
  9602		bool req_int_win =
  9603			dm_request_for_irq_injection(vcpu) &&
  9604			kvm_cpu_accept_dm_intr(vcpu);
  9605		fastpath_t exit_fastpath;
  9606	
  9607		bool req_immediate_exit = false;
  9608	
  9609		/* Forbid vmenter if vcpu dirty ring is soft-full */
  9610		if (unlikely(vcpu->kvm->dirty_ring_size &&
  9611			     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
  9612			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
  9613			trace_kvm_dirty_ring_exit(vcpu);
  9614			r = 0;
  9615			goto out;
  9616		}
  9617	
  9618		if (kvm_request_pending(vcpu)) {
  9619			if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
  9620				r = -EIO;
  9621				goto out;
  9622			}
  9623			if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
  9624				if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) {
  9625					r = 0;
  9626					goto out;
  9627				}
  9628			}
  9629			if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
  9630				kvm_mmu_unload(vcpu);
  9631			if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
  9632				__kvm_migrate_timers(vcpu);
  9633			if (kvm_check_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu))
  9634				kvm_update_masterclock(vcpu->kvm);
  9635			if (kvm_check_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu))
  9636				kvm_gen_kvmclock_update(vcpu);
  9637			if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
  9638				r = kvm_guest_time_update(vcpu);
  9639				if (unlikely(r))
  9640					goto out;
  9641			}
  9642			if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
  9643				kvm_mmu_sync_roots(vcpu);
  9644			if (kvm_check_request(KVM_REQ_LOAD_MMU_PGD, vcpu))
  9645				kvm_mmu_load_pgd(vcpu);
  9646			if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
  9647				kvm_vcpu_flush_tlb_all(vcpu);
  9648	
  9649				/* Flushing all ASIDs flushes the current ASID... */
  9650				kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
  9651			}
  9652			if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
  9653				kvm_vcpu_flush_tlb_current(vcpu);
  9654			if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
  9655				kvm_vcpu_flush_tlb_guest(vcpu);
  9656	
  9657			if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
  9658				vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
  9659				r = 0;
  9660				goto out;
  9661			}
  9662			if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
  9663				if (is_guest_mode(vcpu)) {
  9664					kvm_x86_ops.nested_ops->triple_fault(vcpu);
  9665				} else {
  9666					vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
  9667					vcpu->mmio_needed = 0;
  9668					r = 0;
  9669					goto out;
  9670				}
  9671			}
  9672			if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
  9673				/* Page is swapped out. Do synthetic halt */
  9674				vcpu->arch.apf.halted = true;
  9675				r = 1;
  9676				goto out;
  9677			}
  9678			if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
  9679				record_steal_time(vcpu);
  9680			if (kvm_check_request(KVM_REQ_SMI, vcpu))
  9681				process_smi(vcpu);
  9682			if (kvm_check_request(KVM_REQ_NMI, vcpu))
  9683				process_nmi(vcpu);
  9684			if (kvm_check_request(KVM_REQ_PMU, vcpu))
  9685				kvm_pmu_handle_event(vcpu);
  9686			if (kvm_check_request(KVM_REQ_PMI, vcpu))
  9687				kvm_pmu_deliver_pmi(vcpu);
  9688			if (kvm_check_request(KVM_REQ_IOAPIC_EOI_EXIT, vcpu)) {
  9689				BUG_ON(vcpu->arch.pending_ioapic_eoi > 255);
  9690				if (test_bit(vcpu->arch.pending_ioapic_eoi,
  9691					     vcpu->arch.ioapic_handled_vectors)) {
  9692					vcpu->run->exit_reason = KVM_EXIT_IOAPIC_EOI;
  9693					vcpu->run->eoi.vector =
  9694							vcpu->arch.pending_ioapic_eoi;
  9695					r = 0;
  9696					goto out;
  9697				}
  9698			}
  9699			if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
  9700				vcpu_scan_ioapic(vcpu);
  9701			if (kvm_check_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu))
  9702				vcpu_load_eoi_exitmap(vcpu);
  9703			if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
  9704				kvm_vcpu_reload_apic_access_page(vcpu);
  9705			if (kvm_check_request(KVM_REQ_HV_CRASH, vcpu)) {
  9706				vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
  9707				vcpu->run->system_event.type = KVM_SYSTEM_EVENT_CRASH;
  9708				r = 0;
  9709				goto out;
  9710			}
  9711			if (kvm_check_request(KVM_REQ_HV_RESET, vcpu)) {
  9712				vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
  9713				vcpu->run->system_event.type = KVM_SYSTEM_EVENT_RESET;
  9714				r = 0;
  9715				goto out;
  9716			}
  9717			if (kvm_check_request(KVM_REQ_HV_EXIT, vcpu)) {
  9718				struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
  9719	
  9720				vcpu->run->exit_reason = KVM_EXIT_HYPERV;
  9721				vcpu->run->hyperv = hv_vcpu->exit;
  9722				r = 0;
  9723				goto out;
  9724			}
  9725	
  9726			/*
  9727			 * KVM_REQ_HV_STIMER has to be processed after
  9728			 * KVM_REQ_CLOCK_UPDATE, because Hyper-V SynIC timers
  9729			 * depend on the guest clock being up-to-date
  9730			 */
  9731			if (kvm_check_request(KVM_REQ_HV_STIMER, vcpu))
  9732				kvm_hv_process_stimers(vcpu);
  9733			if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
  9734				kvm_vcpu_update_apicv(vcpu);
  9735			if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
  9736				kvm_check_async_pf_completion(vcpu);
  9737			if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
  9738				static_call(kvm_x86_msr_filter_changed)(vcpu);
  9739	
  9740			if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
  9741				static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
  9742			if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
> 9743				; /* Nothing to do. It just wanted to wake us */
  9744		}
  9745	
  9746		if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
  9747		    kvm_xen_has_interrupt(vcpu)) {
  9748			++vcpu->stat.req_event;
  9749			r = kvm_apic_accept_events(vcpu);
  9750			if (r < 0) {
  9751				r = 0;
  9752				goto out;
  9753			}
  9754			if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
  9755				r = 1;
  9756				goto out;
  9757			}
  9758	
  9759			r = inject_pending_event(vcpu, &req_immediate_exit);
  9760			if (r < 0) {
  9761				r = 0;
  9762				goto out;
  9763			}
  9764			if (req_int_win)
  9765				static_call(kvm_x86_enable_irq_window)(vcpu);
  9766	
  9767			if (kvm_lapic_enabled(vcpu)) {
  9768				update_cr8_intercept(vcpu);
  9769				kvm_lapic_sync_to_vapic(vcpu);
  9770			}
  9771		}
  9772	
  9773		r = kvm_mmu_reload(vcpu);
  9774		if (unlikely(r)) {
  9775			goto cancel_injection;
  9776		}
  9777	
  9778		preempt_disable();
  9779	
  9780		static_call(kvm_x86_prepare_guest_switch)(vcpu);
  9781	
  9782		/*
  9783		 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
  9784		 * IPI are then delayed after guest entry, which ensures that they
  9785		 * result in virtual interrupt delivery.
  9786		 */
  9787		local_irq_disable();
  9788		vcpu->mode = IN_GUEST_MODE;
  9789	
  9790		/*
  9791		 * If the guest requires direct access to mapped L1 pages, check
  9792		 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
  9793		 * to go and revalidate them, if necessary.
  9794		 */
  9795		if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
  9796			kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
  9797	
  9798		srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
  9799	
  9800		/*
  9801		 * 1) We should set ->mode before checking ->requests.  Please see
  9802		 * the comment in kvm_vcpu_exiting_guest_mode().
  9803		 *
  9804		 * 2) For APICv, we should set ->mode before checking PID.ON. This
  9805		 * pairs with the memory barrier implicit in pi_test_and_set_on
  9806		 * (see vmx_deliver_posted_interrupt).
  9807		 *
  9808		 * 3) This also orders the write to mode from any reads to the page
  9809		 * tables done while the VCPU is running.  Please see the comment
  9810		 * in kvm_flush_remote_tlbs.
  9811		 */
  9812		smp_mb__after_srcu_read_unlock();
  9813	
  9814		/*
  9815		 * This handles the case where a posted interrupt was
  9816		 * notified with kvm_vcpu_kick.
  9817		 */
  9818		if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
  9819			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
  9820	
  9821		if (kvm_vcpu_exit_request(vcpu)) {
  9822			vcpu->mode = OUTSIDE_GUEST_MODE;
  9823			smp_wmb();
  9824			local_irq_enable();
  9825			preempt_enable();
  9826			vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  9827			r = 1;
  9828			goto cancel_injection;
  9829		}
  9830	
  9831		if (req_immediate_exit) {
  9832			kvm_make_request(KVM_REQ_EVENT, vcpu);
  9833			static_call(kvm_x86_request_immediate_exit)(vcpu);
  9834		}
  9835	
  9836		fpregs_assert_state_consistent();
  9837		if (test_thread_flag(TIF_NEED_FPU_LOAD))
  9838			switch_fpu_return();
  9839	
  9840		if (unlikely(vcpu->arch.switch_db_regs)) {
  9841			set_debugreg(0, 7);
  9842			set_debugreg(vcpu->arch.eff_db[0], 0);
  9843			set_debugreg(vcpu->arch.eff_db[1], 1);
  9844			set_debugreg(vcpu->arch.eff_db[2], 2);
  9845			set_debugreg(vcpu->arch.eff_db[3], 3);
  9846		} else if (unlikely(hw_breakpoint_active())) {
  9847			set_debugreg(0, 7);
  9848		}
  9849	
  9850		for (;;) {
  9851			/*
  9852			 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
  9853			 * update must kick and wait for all vCPUs before toggling the
  9854			 * per-VM state, and responsing vCPUs must wait for the update
  9855			 * to complete before servicing KVM_REQ_APICV_UPDATE.
  9856			 */
  9857			WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
  9858	
  9859			exit_fastpath = static_call(kvm_x86_run)(vcpu);
  9860			if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
  9861				break;
  9862	
  9863			if (vcpu->arch.apicv_active)
  9864				static_call(kvm_x86_sync_pir_to_irr)(vcpu);
  9865	
  9866			if (unlikely(kvm_vcpu_exit_request(vcpu))) {
  9867				exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
  9868				break;
  9869			}
  9870		}
  9871	
  9872		/*
  9873		 * Do this here before restoring debug registers on the host.  And
  9874		 * since we do this before handling the vmexit, a DR access vmexit
  9875		 * can (a) read the correct value of the debug registers, (b) set
  9876		 * KVM_DEBUGREG_WONT_EXIT again.
  9877		 */
  9878		if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
  9879			WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
  9880			static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
  9881			kvm_update_dr0123(vcpu);
  9882			kvm_update_dr7(vcpu);
  9883		}
  9884	
  9885		/*
  9886		 * If the guest has used debug registers, at least dr7
  9887		 * will be disabled while returning to the host.
  9888		 * If we don't have active breakpoints in the host, we don't
  9889		 * care about the messed up debug address registers. But if
  9890		 * we have some of them active, restore the old state.
  9891		 */
  9892		if (hw_breakpoint_active())
  9893			hw_breakpoint_restore();
  9894	
  9895		vcpu->arch.last_vmentry_cpu = vcpu->cpu;
  9896		vcpu->arch.last_guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
  9897	
  9898		vcpu->mode = OUTSIDE_GUEST_MODE;
  9899		smp_wmb();
  9900	
  9901		static_call(kvm_x86_handle_exit_irqoff)(vcpu);
  9902	
  9903		/*
  9904		 * Consume any pending interrupts, including the possible source of
  9905		 * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
  9906		 * An instruction is required after local_irq_enable() to fully unblock
  9907		 * interrupts on processors that implement an interrupt shadow; the
  9908		 * stat.exits increment will do nicely.
  9909		 */
  9910		kvm_before_interrupt(vcpu);
  9911		local_irq_enable();
  9912		++vcpu->stat.exits;
  9913		local_irq_disable();
  9914		kvm_after_interrupt(vcpu);
  9915	
  9916		/*
  9917		 * Wait until after servicing IRQs to account guest time so that any
  9918		 * ticks that occurred while running the guest are properly accounted
  9919		 * to the guest.  Waiting until IRQs are enabled degrades the accuracy
  9920		 * of accounting via context tracking, but the loss of accuracy is
  9921		 * acceptable for all known use cases.
  9922		 */
  9923		vtime_account_guest_exit();
  9924	
  9925		if (lapic_in_kernel(vcpu)) {
  9926			s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
  9927			if (delta != S64_MIN) {
  9928				trace_kvm_wait_lapic_expire(vcpu->vcpu_id, delta);
  9929				vcpu->arch.apic->lapic_timer.advance_expire_delta = S64_MIN;
  9930			}
  9931		}
  9932	
  9933		local_irq_enable();
  9934		preempt_enable();
  9935	
  9936		vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  9937	
  9938		/*
  9939		 * Profile KVM exit RIPs:
  9940		 */
  9941		if (unlikely(prof_on == KVM_PROFILING)) {
  9942			unsigned long rip = kvm_rip_read(vcpu);
  9943			profile_hit(KVM_PROFILING, (void *)rip);
  9944		}
  9945	
  9946		if (unlikely(vcpu->arch.tsc_always_catchup))
  9947			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
  9948	
  9949		if (vcpu->arch.apic_attention)
  9950			kvm_lapic_sync_from_vapic(vcpu);
  9951	
  9952		r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
  9953		return r;
  9954	
  9955	cancel_injection:
  9956		if (req_immediate_exit)
  9957			kvm_make_request(KVM_REQ_EVENT, vcpu);
  9958		static_call(kvm_x86_cancel_injection)(vcpu);
  9959		if (unlikely(vcpu->arch.apic_attention))
  9960			kvm_lapic_sync_from_vapic(vcpu);
  9961	out:
  9962		return r;
  9963	}
  9964	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc
  2021-11-20 10:28   ` David Woodhouse
@ 2021-11-26  2:38     ` kernel test robot
  -1 siblings, 0 replies; 74+ messages in thread
From: kernel test robot @ 2021-11-26  2:38 UTC (permalink / raw)
  To: David Woodhouse, Paolo Bonzini, kvm
  Cc: kbuild-all, Boris Ostrovsky, Joao Martins,
	jmattson @ google . com, wanpengli @ tencent . com,
	seanjc @ google . com, vkuznets @ redhat . com,
	mtosatti @ redhat . com, joro @ 8bytes . org

Hi David,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v5.16-rc2]
[cannot apply to kvm/queue kvms390/next powerpc/topic/ppc-kvm kvmarm/next mst-vhost/linux-next next-20211125]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
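
(Illustration, not part of the robot's report: '--base' makes
git-format-patch record a 'base-commit:' trailer so CI can apply the
series to the right tree. A typical invocation, with a hypothetical
revision range:

        git format-patch --base=auto --cover-letter -v4 master..my-topic

'--base=auto' requires the topic branch to have upstream tracking
information; otherwise pass an explicit commit via '--base=<commit>'.)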

url:    https://github.com/0day-ci/linux/commits/David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git a90af8f15bdc9449ee2d24e1d73fa3f7e8633f81
config: i386-randconfig-c021-20211118 (https://download.01.org/0day-ci/archive/20211126/202111261049.UGt3BQLW-lkp@intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/cc99b12d3d722f77f611a364b7a359bb3105ad44
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review David-Woodhouse/KVM-Introduce-CONFIG_HAVE_KVM_DIRTY_RING/20211120-192837
        git checkout cc99b12d3d722f77f611a364b7a359bb3105ad44
        # save the config file to linux build tree
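        #   (implied step, sketched here as an assumption: fetch the
        #    config from the URL quoted above, e.g.
        #    wget -O .config <config-url> && make ARCH=i386 olddefconfig)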
        make W=1 ARCH=i386 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   arch/x86/kvm/x86.c: In function 'vcpu_enter_guest':
>> arch/x86/kvm/x86.c:9743:4: warning: suggest braces around empty body in an 'if' statement [-Wempty-body]
    9743 |    ; /* Nothing to do. It just wanted to wake us */
         |    ^
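
(A minimal sketch of the conventional fix, not taken from any posted
follow-up: give the 'if' an explicit empty block, which satisfies
-Wempty-body while keeping the check-and-clear side effect of
kvm_check_request() and the comment saying no action is needed:

	if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu)) {
		/* Nothing to do. It just wanted to wake us */
	}

The call's side effect of clearing the request is the whole point; the
body intentionally does nothing.)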


vim +/if +9743 arch/x86/kvm/x86.c

  9593	
  9594	/*
  9595	 * Returns 1 to let vcpu_run() continue the guest execution loop without
  9596	 * exiting to userspace.  Otherwise, the value will be returned to
  9597	 * userspace.
  9598	 */
  9599	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
  9600	{
  9601		int r;
  9602		bool req_int_win =
  9603			dm_request_for_irq_injection(vcpu) &&
  9604			kvm_cpu_accept_dm_intr(vcpu);
  9605		fastpath_t exit_fastpath;
  9606	
  9607		bool req_immediate_exit = false;
  9608	
  9609		/* Forbid vmenter if vcpu dirty ring is soft-full */
  9610		if (unlikely(vcpu->kvm->dirty_ring_size &&
  9611			     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
  9612			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
  9613			trace_kvm_dirty_ring_exit(vcpu);
  9614			r = 0;
  9615			goto out;
  9616		}
  9617	
  9618		if (kvm_request_pending(vcpu)) {
  9619			if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
  9620				r = -EIO;
  9621				goto out;
  9622			}
  9623			if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
  9624				if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) {
  9625					r = 0;
  9626					goto out;
  9627				}
  9628			}
  9629			if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
  9630				kvm_mmu_unload(vcpu);
  9631			if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
  9632				__kvm_migrate_timers(vcpu);
  9633			if (kvm_check_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu))
  9634				kvm_update_masterclock(vcpu->kvm);
  9635			if (kvm_check_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu))
  9636				kvm_gen_kvmclock_update(vcpu);
  9637			if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
  9638				r = kvm_guest_time_update(vcpu);
  9639				if (unlikely(r))
  9640					goto out;
  9641			}
  9642			if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
  9643				kvm_mmu_sync_roots(vcpu);
  9644			if (kvm_check_request(KVM_REQ_LOAD_MMU_PGD, vcpu))
  9645				kvm_mmu_load_pgd(vcpu);
  9646			if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
  9647				kvm_vcpu_flush_tlb_all(vcpu);
  9648	
  9649				/* Flushing all ASIDs flushes the current ASID... */
  9650				kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
  9651			}
  9652			if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
  9653				kvm_vcpu_flush_tlb_current(vcpu);
  9654			if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
  9655				kvm_vcpu_flush_tlb_guest(vcpu);
  9656	
  9657			if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
  9658				vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
  9659				r = 0;
  9660				goto out;
  9661			}
  9662			if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
  9663				if (is_guest_mode(vcpu)) {
  9664					kvm_x86_ops.nested_ops->triple_fault(vcpu);
  9665				} else {
  9666					vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
  9667					vcpu->mmio_needed = 0;
  9668					r = 0;
  9669					goto out;
  9670				}
  9671			}
  9672			if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
  9673				/* Page is swapped out. Do synthetic halt */
  9674				vcpu->arch.apf.halted = true;
  9675				r = 1;
  9676				goto out;
  9677			}
  9678			if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
  9679				record_steal_time(vcpu);
  9680			if (kvm_check_request(KVM_REQ_SMI, vcpu))
  9681				process_smi(vcpu);
  9682			if (kvm_check_request(KVM_REQ_NMI, vcpu))
  9683				process_nmi(vcpu);
  9684			if (kvm_check_request(KVM_REQ_PMU, vcpu))
  9685				kvm_pmu_handle_event(vcpu);
  9686			if (kvm_check_request(KVM_REQ_PMI, vcpu))
  9687				kvm_pmu_deliver_pmi(vcpu);
  9688			if (kvm_check_request(KVM_REQ_IOAPIC_EOI_EXIT, vcpu)) {
  9689				BUG_ON(vcpu->arch.pending_ioapic_eoi > 255);
  9690				if (test_bit(vcpu->arch.pending_ioapic_eoi,
  9691					     vcpu->arch.ioapic_handled_vectors)) {
  9692					vcpu->run->exit_reason = KVM_EXIT_IOAPIC_EOI;
  9693					vcpu->run->eoi.vector =
  9694							vcpu->arch.pending_ioapic_eoi;
  9695					r = 0;
  9696					goto out;
  9697				}
  9698			}
  9699			if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
  9700				vcpu_scan_ioapic(vcpu);
  9701			if (kvm_check_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu))
  9702				vcpu_load_eoi_exitmap(vcpu);
  9703			if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
  9704				kvm_vcpu_reload_apic_access_page(vcpu);
  9705			if (kvm_check_request(KVM_REQ_HV_CRASH, vcpu)) {
  9706				vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
  9707				vcpu->run->system_event.type = KVM_SYSTEM_EVENT_CRASH;
  9708				r = 0;
  9709				goto out;
  9710			}
  9711			if (kvm_check_request(KVM_REQ_HV_RESET, vcpu)) {
  9712				vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
  9713				vcpu->run->system_event.type = KVM_SYSTEM_EVENT_RESET;
  9714				r = 0;
  9715				goto out;
  9716			}
  9717			if (kvm_check_request(KVM_REQ_HV_EXIT, vcpu)) {
  9718				struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
  9719	
  9720				vcpu->run->exit_reason = KVM_EXIT_HYPERV;
  9721				vcpu->run->hyperv = hv_vcpu->exit;
  9722				r = 0;
  9723				goto out;
  9724			}
  9725	
  9726			/*
  9727			 * KVM_REQ_HV_STIMER has to be processed after
  9728			 * KVM_REQ_CLOCK_UPDATE, because Hyper-V SynIC timers
  9729			 * depend on the guest clock being up-to-date
  9730			 */
  9731			if (kvm_check_request(KVM_REQ_HV_STIMER, vcpu))
  9732				kvm_hv_process_stimers(vcpu);
  9733			if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
  9734				kvm_vcpu_update_apicv(vcpu);
  9735			if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
  9736				kvm_check_async_pf_completion(vcpu);
  9737			if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
  9738				static_call(kvm_x86_msr_filter_changed)(vcpu);
  9739	
  9740			if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
  9741				static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
  9742			if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu))
> 9743				; /* Nothing to do. It just wanted to wake us */
  9744		}
  9745	
  9746		if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
  9747		    kvm_xen_has_interrupt(vcpu)) {
  9748			++vcpu->stat.req_event;
  9749			r = kvm_apic_accept_events(vcpu);
  9750			if (r < 0) {
  9751				r = 0;
  9752				goto out;
  9753			}
  9754			if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
  9755				r = 1;
  9756				goto out;
  9757			}
  9758	
  9759			r = inject_pending_event(vcpu, &req_immediate_exit);
  9760			if (r < 0) {
  9761				r = 0;
  9762				goto out;
  9763			}
  9764			if (req_int_win)
  9765				static_call(kvm_x86_enable_irq_window)(vcpu);
  9766	
  9767			if (kvm_lapic_enabled(vcpu)) {
  9768				update_cr8_intercept(vcpu);
  9769				kvm_lapic_sync_to_vapic(vcpu);
  9770			}
  9771		}
  9772	
  9773		r = kvm_mmu_reload(vcpu);
  9774		if (unlikely(r)) {
  9775			goto cancel_injection;
  9776		}
  9777	
  9778		preempt_disable();
  9779	
  9780		static_call(kvm_x86_prepare_guest_switch)(vcpu);
  9781	
  9782		/*
  9783		 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
  9784		 * IPIs are then delayed until after guest entry, which ensures that they
  9785		 * result in virtual interrupt delivery.
  9786		 */
  9787		local_irq_disable();
  9788		vcpu->mode = IN_GUEST_MODE;
  9789	
  9790		/*
  9791		 * If the guest requires direct access to mapped L1 pages, check
  9792		 * that the caches are valid. This will raise
  9793		 * KVM_REQ_GET_NESTED_STATE_PAGES to go and revalidate them if necessary.
  9794		 */
  9795		if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
  9796			kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
  9797	
  9798		srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
  9799	
  9800		/*
  9801		 * 1) We should set ->mode before checking ->requests.  Please see
  9802		 * the comment in kvm_vcpu_exiting_guest_mode().
  9803		 *
  9804		 * 2) For APICv, we should set ->mode before checking PID.ON. This
  9805		 * pairs with the memory barrier implicit in pi_test_and_set_on
  9806		 * (see vmx_deliver_posted_interrupt).
  9807		 *
  9808		 * 3) This also orders the write to mode from any reads to the page
  9809		 * tables done while the VCPU is running.  Please see the comment
  9810		 * in kvm_flush_remote_tlbs.
  9811		 */
  9812		smp_mb__after_srcu_read_unlock();
  9813	
  9814		/*
  9815		 * This handles the case where a posted interrupt was
  9816		 * notified with kvm_vcpu_kick.
  9817		 */
  9818		if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
  9819			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
  9820	
  9821		if (kvm_vcpu_exit_request(vcpu)) {
  9822			vcpu->mode = OUTSIDE_GUEST_MODE;
  9823			smp_wmb();
  9824			local_irq_enable();
  9825			preempt_enable();
  9826			vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  9827			r = 1;
  9828			goto cancel_injection;
  9829		}
  9830	
  9831		if (req_immediate_exit) {
  9832			kvm_make_request(KVM_REQ_EVENT, vcpu);
  9833			static_call(kvm_x86_request_immediate_exit)(vcpu);
  9834		}
  9835	
  9836		fpregs_assert_state_consistent();
  9837		if (test_thread_flag(TIF_NEED_FPU_LOAD))
  9838			switch_fpu_return();
  9839	
  9840		if (unlikely(vcpu->arch.switch_db_regs)) {
  9841			set_debugreg(0, 7);
  9842			set_debugreg(vcpu->arch.eff_db[0], 0);
  9843			set_debugreg(vcpu->arch.eff_db[1], 1);
  9844			set_debugreg(vcpu->arch.eff_db[2], 2);
  9845			set_debugreg(vcpu->arch.eff_db[3], 3);
  9846		} else if (unlikely(hw_breakpoint_active())) {
  9847			set_debugreg(0, 7);
  9848		}
  9849	
  9850		for (;;) {
  9851			/*
  9852			 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
  9853			 * update must kick and wait for all vCPUs before toggling the
  9854		 * per-VM state, and responding vCPUs must wait for the update
  9855			 * to complete before servicing KVM_REQ_APICV_UPDATE.
  9856			 */
  9857			WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
  9858	
  9859			exit_fastpath = static_call(kvm_x86_run)(vcpu);
  9860			if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
  9861				break;
  9862	
  9863			if (vcpu->arch.apicv_active)
  9864				static_call(kvm_x86_sync_pir_to_irr)(vcpu);
  9865	
  9866			if (unlikely(kvm_vcpu_exit_request(vcpu))) {
  9867				exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
  9868				break;
  9869			}
  9870		}
  9871	
  9872		/*
  9873		 * Do this here before restoring debug registers on the host.  And
  9874		 * since we do this before handling the vmexit, a DR access vmexit
  9875		 * can (a) read the correct value of the debug registers, (b) set
  9876		 * KVM_DEBUGREG_WONT_EXIT again.
  9877		 */
  9878		if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
  9879			WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
  9880			static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
  9881			kvm_update_dr0123(vcpu);
  9882			kvm_update_dr7(vcpu);
  9883		}
  9884	
  9885		/*
  9886		 * If the guest has used debug registers, at least dr7
  9887		 * will be disabled while returning to the host.
  9888		 * If we don't have active breakpoints in the host, we don't
  9889		 * care about the messed up debug address registers. But if
  9890		 * we have some of them active, restore the old state.
  9891		 */
  9892		if (hw_breakpoint_active())
  9893			hw_breakpoint_restore();
  9894	
  9895		vcpu->arch.last_vmentry_cpu = vcpu->cpu;
  9896		vcpu->arch.last_guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
  9897	
  9898		vcpu->mode = OUTSIDE_GUEST_MODE;
  9899		smp_wmb();
  9900	
  9901		static_call(kvm_x86_handle_exit_irqoff)(vcpu);
  9902	
  9903		/*
  9904		 * Consume any pending interrupts, including the possible source of
  9905		 * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
  9906		 * An instruction is required after local_irq_enable() to fully unblock
  9907		 * interrupts on processors that implement an interrupt shadow; the
  9908		 * stat.exits increment will do nicely.
  9909		 */
  9910		kvm_before_interrupt(vcpu);
  9911		local_irq_enable();
  9912		++vcpu->stat.exits;
  9913		local_irq_disable();
  9914		kvm_after_interrupt(vcpu);
  9915	
  9916		/*
  9917		 * Wait until after servicing IRQs to account guest time so that any
  9918		 * ticks that occurred while running the guest are properly accounted
  9919		 * to the guest.  Waiting until IRQs are enabled degrades the accuracy
  9920		 * of accounting via context tracking, but the loss of accuracy is
  9921		 * acceptable for all known use cases.
  9922		 */
  9923		vtime_account_guest_exit();
  9924	
  9925		if (lapic_in_kernel(vcpu)) {
  9926			s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
  9927			if (delta != S64_MIN) {
  9928				trace_kvm_wait_lapic_expire(vcpu->vcpu_id, delta);
  9929				vcpu->arch.apic->lapic_timer.advance_expire_delta = S64_MIN;
  9930			}
  9931		}
  9932	
  9933		local_irq_enable();
  9934		preempt_enable();
  9935	
  9936		vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  9937	
  9938		/*
  9939		 * Profile KVM exit RIPs:
  9940		 */
  9941		if (unlikely(prof_on == KVM_PROFILING)) {
  9942			unsigned long rip = kvm_rip_read(vcpu);
  9943			profile_hit(KVM_PROFILING, (void *)rip);
  9944		}
  9945	
  9946		if (unlikely(vcpu->arch.tsc_always_catchup))
  9947			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
  9948	
  9949		if (vcpu->arch.apic_attention)
  9950			kvm_lapic_sync_from_vapic(vcpu);
  9951	
  9952		r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
  9953		return r;
  9954	
  9955	cancel_injection:
  9956		if (req_immediate_exit)
  9957			kvm_make_request(KVM_REQ_EVENT, vcpu);
  9958		static_call(kvm_x86_cancel_injection)(vcpu);
  9959		if (unlikely(vcpu->arch.apic_attention))
  9960			kvm_lapic_sync_from_vapic(vcpu);
  9961	out:
  9962		return r;
  9963	}
  9964	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 74+ messages in thread

end of thread, other threads:[~2021-11-26  2:53 UTC | newest]

Thread overview: 74+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-20 10:27 [PATCH v4 00/11] KVM: x86/xen: Add in-kernel Xen event channel delivery David Woodhouse
2021-11-20 10:28 ` [PATCH v4 01/11] KVM: Introduce CONFIG_HAVE_KVM_DIRTY_RING David Woodhouse
2021-11-20 10:28 ` [PATCH v4 02/11] KVM: Add Makefile.kvm for common files, use it for x86 David Woodhouse
2021-11-20 10:28 ` [PATCH v4 03/11] KVM: s390: Use Makefile.kvm for common files David Woodhouse
2021-11-20 10:28 ` [PATCH v4 04/11] KVM: mips: " David Woodhouse
2021-11-20 10:28 ` [PATCH v4 05/11] KVM: RISC-V: " David Woodhouse
2021-11-20 10:28 ` [PATCH v4 06/11] KVM: powerpc: " David Woodhouse
2021-11-20 10:28 ` [PATCH v4 07/11] KVM: arm64: " David Woodhouse
2021-11-20 10:28 ` [PATCH v4 08/11] KVM: Reinstate gfn_to_pfn_cache with invalidation support David Woodhouse
2021-11-20 10:28 ` [PATCH v4 09/11] KVM: x86/xen: Maintain valid mapping of Xen shared_info page David Woodhouse
2021-11-20 10:28 ` [PATCH v4 10/11] KVM: x86/xen: Add KVM_IRQ_ROUTING_XEN_EVTCHN and event channel delivery David Woodhouse
2021-11-20 17:50   ` kernel test robot
2021-11-20 10:28 ` [PATCH v4 11/11] KVM: x86: First attempt at converting nested virtual APIC page to gpc David Woodhouse
2021-11-20 15:48   ` Mika Penttilä
2021-11-20 16:21     ` David Woodhouse
2021-11-20 16:30       ` Mika Penttilä
2021-11-20 17:02         ` David Woodhouse
2021-11-24 17:55   ` kernel test robot
2021-11-26  2:38   ` kernel test robot
2021-11-20 18:20 ` [PATCH v4 12/11] KVM: x86: Fix wall clock writes in Xen shared_info not to mark page dirty David Woodhouse
